Dataset schema:
  content:            string (85 to 101k chars)
  title:              string (0 to 150 chars)
  question:           string (15 to 48k chars)
  answers:            list
  answers_scores:     list
  non_answers:        list
  non_answers_scores: list
  tags:               list
  name:               string (35 to 137 chars)
Q: Parameter validation failed: Invalid type for parameter Key., value: , type: , valid types: I have 2 Lambda, 1 is doing a batch_write and put_item to ddb. The other lambda does the get_item from first lambda (It has permissions to get_item). ERROR: [ERROR] ParamValidationError: Parameter validation failed: Invalid type for parameter Key.active_employee, value: jen, type: <class 'str'>, valid types: <class 'dict'> Traceback (most recent call last): File "/var/task/my_lambda/checks.py", line 100, in lambda_handler response = ddb.get_item(TableName="testtable", Key={"active_employee": user}) Lambda 1: with gzip.open(response["Body"], "rt") as file: try: with table.batch_writer(overwrite_by_pkeys=["active_employee"]) as batch: for active_users in file: user_dict = json.loads(active_users) manager = user_dict["manager"] user = user_dict["user"] if not manager: continue if not user: continue else: batch.put_item( Item={ "active_employee": user, "mgr_email": mgr_email }, ) logger.info("Loaded data into table %s.", table.name) except ClientError: logger.exception("Couldn't load data into table %s.", table.name) raise Lambda 2 user = "jen" ddb = boto3.client("dynamodb") response = ddb.get_item(TableName="testtable", Key={"active_employee": user}) employee_data = json.loads(response["Item"]) if employee_data and employee_data["active_employee"] == user: manager = employee_data["mgr_email"] print(f"{user} is active") print(f"{manager}") else: print("user not in ddb") I am expecting to get in Lambda jen is active then the manager email. I do not know the manager value. Say the DDB has a million in it and I cannot use scan or query. I've read that get_item is a lot faster when getting a single item. How can I fix the error? How do I get_item as dictionary? should the user = "jen" be made into dictionary? What is the syntax? When doing get_item can I only use the pk and expect to also get the other key (mgr_email). 
I can only get_item the user and I need it to look for the manager email for me of that alias too if it exists and return both. A: In Lambda 2 you are using the low level client, which expects DynamoDB JSON such as: {'active_employee':{'S':'jen'}} Now, for you to make it work in your current context, you would be better using the Resource client, as you do in Lambda 1. dynamodb = boto3.resource("dynamodb", region_name='us-west-2') table = dynamodb.Table('testtable') try: response = table.get_item( Key={ 'active_employee': "jen" } ) except ClientError as e: print(e.response['Error']['Message']) Be careful not to mix your clients up, and always refer to the documentation for the specific client you are using. Resource Client: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Table.get_item Low Level Client: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Client.get_item
Parameter validation failed: Invalid type for parameter Key., value: , type: , valid types:
I have 2 Lambda, 1 is doing a batch_write and put_item to ddb. The other lambda does the get_item from first lambda (It has permissions to get_item). ERROR: [ERROR] ParamValidationError: Parameter validation failed: Invalid type for parameter Key.active_employee, value: jen, type: <class 'str'>, valid types: <class 'dict'> Traceback (most recent call last): File "/var/task/my_lambda/checks.py", line 100, in lambda_handler response = ddb.get_item(TableName="testtable", Key={"active_employee": user}) Lambda 1: with gzip.open(response["Body"], "rt") as file: try: with table.batch_writer(overwrite_by_pkeys=["active_employee"]) as batch: for active_users in file: user_dict = json.loads(active_users) manager = user_dict["manager"] user = user_dict["user"] if not manager: continue if not user: continue else: batch.put_item( Item={ "active_employee": user, "mgr_email": mgr_email }, ) logger.info("Loaded data into table %s.", table.name) except ClientError: logger.exception("Couldn't load data into table %s.", table.name) raise Lambda 2 user = "jen" ddb = boto3.client("dynamodb") response = ddb.get_item(TableName="testtable", Key={"active_employee": user}) employee_data = json.loads(response["Item"]) if employee_data and employee_data["active_employee"] == user: manager = employee_data["mgr_email"] print(f"{user} is active") print(f"{manager}") else: print("user not in ddb") I am expecting to get in Lambda jen is active then the manager email. I do not know the manager value. Say the DDB has a million in it and I cannot use scan or query. I've read that get_item is a lot faster when getting a single item. How can I fix the error? How do I get_item as dictionary? should the user = "jen" be made into dictionary? What is the syntax? When doing get_item can I only use the pk and expect to also get the other key (mgr_email). I can only get_item the user and I need it to look for the manager email for me of that alias too if it exists and return both.
[ "In Lambda 2 you are using the low level client, which expects DynamoDB JSON such as:\n{'active_employee':{'S':'jen'}}\nNow, for you to make it work in your current context, you would be better using the Resource client, as you do in Lambda 1.\ndynamodb = boto3.resource(\"dynamodb\", region_name='us-west-2')\n\ntable = dynamodb.Table('testtable')\n\ntry:\n response = table.get_item(\n Key={\n 'active_employee': \"jen\"\n }\n )\nexcept ClientError as e:\n print(e.response['Error']['Message'])\n\n\nBe careful not to mix your clients up, and always refer to the documentation for the specific client you are using.\nResource Client: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Table.get_item\nLow Level Client: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/dynamodb.html#DynamoDB.Client.get_item\n" ]
[ 0 ]
[]
[]
[ "amazon_dynamodb", "python" ]
stackoverflow_0074502228_amazon_dynamodb_python.txt
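The answer above hinges on the difference between the resource client's plain-Python keys and the low-level client's typed DynamoDB JSON. A minimal sketch of that typed format follows; the `to_dynamodb_json` helper is hypothetical and hand-rolled here so it runs without AWS credentials (in real code, boto3's `TypeSerializer` performs this conversion):

```python
def to_dynamodb_json(item):
    """Convert plain Python scalars to the low-level client's typed format."""
    def encode(value):
        if isinstance(value, bool):
            return {"BOOL": value}        # bool must be checked before int
        if isinstance(value, str):
            return {"S": value}           # string attribute
        if isinstance(value, (int, float)):
            return {"N": str(value)}      # numbers travel as strings
        raise TypeError(f"unsupported type: {type(value)!r}")
    return {name: encode(value) for name, value in item.items()}

# The resource client (Table.get_item) accepts this...
plain_key = {"active_employee": "jen"}
# ...while the low-level client (client.get_item) needs this:
typed_key = to_dynamodb_json(plain_key)
print(typed_key)  # {'active_employee': {'S': 'jen'}}
```

Passing `typed_key` as the `Key` argument of the low-level `get_item` avoids the `ParamValidationError` from the question, since every attribute value is now a dict.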
Q: Passing a variable to a function that is many calls deep An abstract example: def a(): d_results = [] for i in range(10): b(i, d_results) # do something that needs d_results def b(i, d_results): # do clever b-stuff c(d_results) # more b-stuff def c(d_results): # do clever c-stuff d(d_results) # more c-stuff def d(d_results): result = ... d_results.append(result) I have a function a() that performs an iteration that uses function b(). Function b() uses c() and c() uses d(). Now function d() also produces some results and in function a() I need to have a list of these results. The above solution collects these results by passing a variable list d_results from one function to the other, to which d() is adding data and that is read by a(). This is very inconvenient, because functions b() and c() don't even know what d_results is about! They're just blindly passing a variable. Can this be solved more elegantly? In a way that b() and c() don't have to care about d_results? Global variables are not OK, because this is part of a http request handler and I need the data to be 'local' to a request. I know that in XSLT (quite a different type of language, but hey...) there's a concept of 'tunneling' parameters through a chain of templates, but this is not known in Python, I suppose? Or is this just a sign that my code isn't structured properly? In case you're curious, the d() function stores some data on the GAE datastore and the d_results are Futures for these operations, for which I want to collect the results in function a(). It's not necessarily a()'s business, but for performance reasons the operations must be done async in d() and so I need some place to handle the Futures, which has to be a(), because that's where the iteration is taking place. A: You can make your entire application (or at least this portion of it) a class and have d_results as an attribute. 
class MyApplication: def __init__(self): self.d_results = [] def a(self): for i in range(10): self.b(i) # do something that needs d_results by using self.d_results def b(self, i): # do clever b-stuff self.c() # more b-stuff def c(self): # do clever c-stuff self.d() # more c-stuff def d(self): result = ... self.d_results.append(result)
Passing a variable to a function that is many calls deep
An abstract example: def a(): d_results = [] for i in range(10): b(i, d_results) # do something that needs d_results def b(i, d_results): # do clever b-stuff c(d_results) # more b-stuff def c(d_results): # do clever c-stuff d(d_results) # more c-stuff def d(d_results): result = ... d_results.append(result) I have a function a() that performs an iteration that uses function b(). Function b() uses c() and c() uses d(). Now function d() also produces some results and in function a() I need to have a list of these results. The above solution collects these results by passing a variable list d_results from one function to the other, to which d() is adding data and that is read by a(). This is very inconvenient, because functions b() and c() don't even know what d_results is about! They're just blindly passing a variable. Can this be solved more elegantly? In a way that b() and c() don't have to care about d_results? Global variables are not OK, because this is part of a http request handler and I need the data to be 'local' to a request. I know that in XSLT (quite a different type of language, but hey...) there's a concept of 'tunneling' parameters through a chain of templates, but this is not known in Python, I suppose? Or is this just a sign that my code isn't structured properly? In case you're curious, the d() function stores some data on the GAE datastore and the d_results are Futures for these operations, for which I want to collect the results in function a(). It's not necessarily a()'s business, but for performance reasons the operations must be done async in d() and so I need some place to handle the Futures, which has to be a(), because that's where the iteration is taking place.
[ "You can make your entire application (or at least this portion of it) a class and have d_results as an attribute.\nclass MyApplication:\n    def __init__(self):\n        self.d_results = []\n\n    def a(self):\n        for i in range(10):\n            self.b(i)\n            # do something that needs d_results by using self.d_results\n\n    def b(self, i):\n        # do clever b-stuff\n        self.c()\n        # more b-stuff\n\n    def c(self):\n        # do clever c-stuff\n        self.d()\n        # more c-stuff\n\n    def d(self):\n        result = ...\n        self.d_results.append(result)\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074504378_python.txt
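Besides the class-attribute approach in the answer, the "tunneling" the asker remembers from XSLT can be sketched with the standard library's contextvars module, which keeps the accumulator local to each request context without threading it through b() and c(). This is an alternative technique, not part of the original answer:

```python
import contextvars

# Request-local accumulator: each context (e.g. each handled request)
# sees its own value, so no global-state leakage between requests.
d_results = contextvars.ContextVar("d_results")

def a():
    d_results.set([])          # fresh list for this request/context
    for i in range(3):
        b(i)
    return d_results.get()     # collect what d() produced

def b(i):
    c(i)                       # b() never mentions the accumulator

def c(i):
    d(i)                       # neither does c()

def d(i):
    d_results.get().append(i * i)   # only d() knows about d_results

print(a())  # [0, 1, 4]
```

Under asyncio (as in many request handlers), each task gets its own copy of the context, so concurrent requests do not see each other's lists.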
Q: Divide a LINESTRING with a list of LINESTRING I'm searching a solution to divide Main Line with more than one overlapped lines. In this example I've four lines (I've applied an offset on Line 1, Line 2 and Line 3 in this chart to facilitate reading): Below the lines: from shapely import wkt main_line = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529, 461580.7367309019 4507618.493483591)') line_1 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592)') line_2 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609)') line_3 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 
4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529)') I don't know how many lines I've, in staging I will have a list. Looking the image above Main Line will be divided into 4 parts, but I'm little bit confused on how I can do this. I've used the code below hoping into a brilliant idea but without fortune. line_list = [line_1, line_2, line_3] diff_list = [] first_line_length = line_list[0].length for line in line_list: line_length = line.length if line_length != first_line_length: diff = main_line.symmetric_difference(line) diff_list.append(diff) fig, ax = plt.subplots(figsize=(10, 10)) ax.set_xlabel('X coordinate', fontsize=15) ax.set_ylabel('Y coordinate', fontsize=15) plt.plot(*main_line.xy, label='Main Line', color='blue') plt.plot(*line_1.parallel_offset(distance=5).xy, label='Line 1', color='green') plt.plot(*line_2.parallel_offset(distance=10).xy, label='Line 2', color='red') plt.plot(*line_3.parallel_offset(distance=15).xy, label='Line 3', color='violet') plt.plot(*diff_list[0].parallel_offset(distance=-5).xy, label='Diff Line 1', color='green') plt.plot(*diff_list[1].parallel_offset(distance=-10).xy, label='Diff Line 2', color='red') plt.legend() plt.show() In brief, if I have n lines shortest than a main line, I would like to divide that main line in n+1 parts. A: Assuming the exercise is as the one presented, (all lines have the same origin), modify your code to do the following: Order lines in descending length order, from the longest to the shortest (being the main line the longest) Then iterate the line list and do the symmetric difference only with the line following (main - line1, line1 - line2 and line2- line3). The segments will be the differences calculated in Step 2 plus line3. 
Being Line1 longer than line2 and Line2 longer than line3 from shapely import wkt from matplotlib import pyplot as plt main_line = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529, 461580.7367309019 4507618.493483591)') line_1 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592)') line_2 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609)') line_3 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 
4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529)') line_list = [main_line,line_1, line_2, line_3] line_list.sort(key=lambda x: x.length, reverse=True) diff_list = [] for ix in range(len(line_list)-1): #for line in line_list: lineA=line_list[ix] lineB=line_list[ix+1] #line_length = line.length #if line_length != first_line_length: diff = lineA.symmetric_difference(lineB) diff_list.append(diff) fig, ax = plt.subplots(figsize=(10, 10)) ax.set_xlabel('X coordinate', fontsize=15) ax.set_ylabel('Y coordinate', fontsize=15) plt.plot(*main_line.xy, label='Main Line', color='blue') plt.plot(*line_list[1].parallel_offset(distance=5).xy, label='Line 1', color='green') plt.plot(*line_list[2].parallel_offset(distance=10).xy, label='Line 2', color='red') plt.plot(*line_list[3].parallel_offset(distance=15).xy, label='Line 3', color='violet') plt.plot(*diff_list[0].parallel_offset(distance=-5).xy, label='Diff main - Line 1', color='blue') plt.plot(*diff_list[1].parallel_offset(distance=-10).xy, label='Diff Line 1 - Line 2', color='green') plt.plot(*diff_list[2].parallel_offset(distance=-15).xy, label='Diff Line 2 - Line 3', color='red') plt.legend() plt.show()
Divide a LINESTRING with a list of LINESTRING
I'm searching a solution to divide Main Line with more than one overlapped lines. In this example I've four lines (I've applied an offset on Line 1, Line 2 and Line 3 in this chart to facilitate reading): Below the lines: from shapely import wkt main_line = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529, 461580.7367309019 4507618.493483591)') line_1 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592)') line_2 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609)') line_3 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 
461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529)') I don't know how many lines I've, in staging I will have a list. Looking the image above Main Line will be divided into 4 parts, but I'm little bit confused on how I can do this. I've used the code below hoping into a brilliant idea but without fortune. line_list = [line_1, line_2, line_3] diff_list = [] first_line_length = line_list[0].length for line in line_list: line_length = line.length if line_length != first_line_length: diff = main_line.symmetric_difference(line) diff_list.append(diff) fig, ax = plt.subplots(figsize=(10, 10)) ax.set_xlabel('X coordinate', fontsize=15) ax.set_ylabel('Y coordinate', fontsize=15) plt.plot(*main_line.xy, label='Main Line', color='blue') plt.plot(*line_1.parallel_offset(distance=5).xy, label='Line 1', color='green') plt.plot(*line_2.parallel_offset(distance=10).xy, label='Line 2', color='red') plt.plot(*line_3.parallel_offset(distance=15).xy, label='Line 3', color='violet') plt.plot(*diff_list[0].parallel_offset(distance=-5).xy, label='Diff Line 1', color='green') plt.plot(*diff_list[1].parallel_offset(distance=-10).xy, label='Diff Line 2', color='red') plt.legend() plt.show() In brief, if I have n lines shortest than a main line, I would like to divide that main line in n+1 parts.
[ "Assuming the exercise is as the one presented, (all lines have the same origin), modify your code to do the following:\n\nOrder lines in descending length order, from the longest to the shortest (being the main line the longest)\nThen iterate the line list and do the symmetric difference only with the line following (main - line1, line1 - line2 and line2- line3).\nThe segments will be the differences calculated in Step 2 plus line3.\n\nBeing Line1 longer than line2 and Line2 longer than line3\nfrom shapely import wkt\nfrom matplotlib import pyplot as plt\n\nmain_line = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529, 461580.7367309019 4507618.493483591)')\n\nline_1 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592)')\nline_2 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 
461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609)')\nline_3 = wkt.loads('LINESTRING (461179.6655721677 4507148.788223281, 461217.56786209624 4507181.537033379, 461236.3280996226 4507194.537878151, 461241.7247760045 4507197.640095252, 461258.8379542616 4507210.660701941, 461261.9432857035 4507219.791508417, 461270.90091201715 4507254.590010401, 461271.56385885156 4507303.918307676, 461273.67536588735 4507318.460376316, 461286.2322009634 4507358.346460313, 461302.55653224624 4507403.197152592, 461365.2492823085 4507485.060388609, 461480.4983426857 4507548.512415529)')\nline_list = [main_line,line_1, line_2, line_3]\nline_list.sort(key=lambda x: x.length, reverse=True)\ndiff_list = []\n\nfor ix in range(len(line_list)-1):\n#for line in line_list:\n lineA=line_list[ix]\n lineB=line_list[ix+1]\n #line_length = line.length\n\n #if line_length != first_line_length:\n diff = lineA.symmetric_difference(lineB)\n diff_list.append(diff)\n\nfig, ax = plt.subplots(figsize=(10, 10))\nax.set_xlabel('X coordinate', fontsize=15)\nax.set_ylabel('Y coordinate', fontsize=15)\n\nplt.plot(*main_line.xy, label='Main Line', color='blue')\n\nplt.plot(*line_list[1].parallel_offset(distance=5).xy, label='Line 1', color='green')\nplt.plot(*line_list[2].parallel_offset(distance=10).xy, label='Line 2', color='red')\nplt.plot(*line_list[3].parallel_offset(distance=15).xy, label='Line 3', color='violet')\n\nplt.plot(*diff_list[0].parallel_offset(distance=-5).xy, label='Diff main - Line 1', color='blue')\nplt.plot(*diff_list[1].parallel_offset(distance=-10).xy, label='Diff Line 1 - Line 2', color='green')\nplt.plot(*diff_list[2].parallel_offset(distance=-15).xy, label='Diff Line 2 - Line 3', color='red')\n\nplt.legend()\n\nplt.show()\n\n\n" ]
[ 1 ]
[]
[]
[ "python", "shapely" ]
stackoverflow_0074503433_python_shapely.txt
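The answer's pattern — sort the coincident lines longest-first, difference each with the next, and keep the shortest as the final piece — can be illustrated without shapely by using prefix strings as stand-ins for linestrings that share an origin. The `split_into_segments` helper is hypothetical, purely to show the pairwise-difference structure:

```python
def split_into_segments(lines):
    """Pairwise-difference pattern from the answer, on prefix strings.

    Each 'line' is a string; shorter ones are prefixes of longer ones,
    mirroring overlapping linestrings that share an origin. n shorter
    lines split the main line into n + 1 segments.
    """
    ordered = sorted(lines, key=len, reverse=True)   # main line first
    segments = []
    for longer, shorter in zip(ordered, ordered[1:]):
        segments.append(longer[len(shorter):])       # part not covered by the next line
    segments.append(ordered[-1])                     # shortest line is the last segment
    return segments

lines = ["ABCDEFG", "ABCDE", "ABCD", "AB"]   # main + 3 shorter overlaps
print(split_into_segments(lines))            # ['FG', 'E', 'CD', 'AB']
```

With shapely, `longer[len(shorter):]` corresponds to `lineA.symmetric_difference(lineB)` exactly as in the answer; the geometry changes, the iteration pattern does not.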
Q: Python Selenium - how to get all urls on a page that only load the link after clicking on the div? I'm trying to scrap the results from this page https://www.zapimoveis.com.br/aluguel/apartamentos/sp+sao-paulo+zona-sul+itaim-bibi/ using Selenium, but I got stuck on obtaining the url of each result. It seems safe to say that each card's url is not stored on a <a> element and apparently not stored at all at any point of the inner html of each div. The only way to obtain the address is by clicking on the div, which opens a new tab. Currently, I'm using selenium to click on each one, copying the address and then closing the tab, but not only this is a much more complex and time consuming process but also could trigger some captcha by doing that many requests to the website. Is there a way to obtain the urls of all offers on this page without this clicking process? I tried using the inspect tool on chrome but couldn't figure out what is the js or wtv resposible for this behavior. Thanks! A: I checked out the site and it looks like each card-container has a data-id that can be used to access the listing. The link for this card: <div data-id="2593637292" class="card-container js-listing-card">{THE HTML FOR THAT CARD}</div> would be https://www.zapimoveis.com.br/imovel/2593637292.
Python Selenium - how to get all urls on a page that only load the link after clicking on the div?
I'm trying to scrap the results from this page https://www.zapimoveis.com.br/aluguel/apartamentos/sp+sao-paulo+zona-sul+itaim-bibi/ using Selenium, but I got stuck on obtaining the url of each result. It seems safe to say that each card's url is not stored on a <a> element and apparently not stored at all at any point of the inner html of each div. The only way to obtain the address is by clicking on the div, which opens a new tab. Currently, I'm using selenium to click on each one, copying the address and then closing the tab, but not only this is a much more complex and time consuming process but also could trigger some captcha by doing that many requests to the website. Is there a way to obtain the urls of all offers on this page without this clicking process? I tried using the inspect tool on chrome but couldn't figure out what is the js or wtv resposible for this behavior. Thanks!
[ "I checked out the site and it looks like each card-container has a data-id that can be used to access the listing.\nThe link for this card:\n<div data-id=\"2593637292\" class=\"card-container js-listing-card\">{THE HTML FOR THAT CARD}</div>\n\nwould be https://www.zapimoveis.com.br/imovel/2593637292.\n" ]
[ 2 ]
[]
[]
[ "javascript", "python", "selenium" ]
stackoverflow_0074504730_javascript_python_selenium.txt
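The answer's trick — build each listing URL from the card's data-id instead of clicking — can be shown on static HTML with the standard library's html.parser, no Selenium or network needed. (With Selenium one would instead call `find_elements` and `get_attribute("data-id")` on each card; the sample HTML below is a fabricated stand-in for the real page.)

```python
from html.parser import HTMLParser

class CardIdCollector(HTMLParser):
    """Collect data-id values from card-container divs."""
    def __init__(self):
        super().__init__()
        self.ids = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        # Match the same div class the answer points at.
        if tag == "div" and "card-container" in attrs.get("class", ""):
            if attrs.get("data-id"):
                self.ids.append(attrs["data-id"])

html = '''
<div data-id="2593637292" class="card-container js-listing-card">card 1</div>
<div data-id="2593637300" class="card-container js-listing-card">card 2</div>
'''
parser = CardIdCollector()
parser.feed(html)

# Construct listing URLs exactly as the answer describes.
urls = [f"https://www.zapimoveis.com.br/imovel/{i}" for i in parser.ids]
print(urls)
```

This turns one page load into a list of direct URLs, avoiding the click-per-card loop (and its captcha risk) entirely.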
Q: How to distinguish negative numbers from input that is not a number I am trying to build a simple game and I would like Python to return a message when a player enters a negative number. My issue is that negative numbers are interpreted as strings when the player tries to enter them. Here is my script: while True: user_guess = input("Guess a number: ") if user_guess.isdigit(): user_guess = int(user_guess) if user_guess < 0: print("Too low, guess a number between 0 and 10.") if user_guess > 10: print("Too high, guess a number between 0 and 10.") else: print("It is not a number.") break A: The code you have written is not wrong but it's not very idiomatic in Python and because of that you'll have to fight the language to add the "parse negative" functionality. Consider you could write something like: user_guess = input("Guess a number: ") if is_positive_or_negative_number(user_guess): user_guess = int(user_guess) # continue as before def is_positive_or_negative_number(s: str) -> bool: """Checks if a given string represents a positive or negative number""" if s.startswith('-'): s = s[1:] # strip off the optional leading unary negation return s.isdigit() # do not allow decimals, so no need to worry # about allowing a "." However it's easier if you just write idiomatic Python! Your code is written in a style affectionately termed LBYL (Look Before You Leap) code checks to make sure a thing can be done before doing it. Python prefers EAFP (Easier to Ask Forgiveness than Permission), which has you try to do a thing and catch the error if it's thrown. The idiomatic code then just tries to cast the input to int and pays attention if it fails. 
while True: user_guess = input("Guess a number: ") try: user_guess = int(user_guess) except ValueError: print("It is not a number.") break # if we get here, user_guess is guaranteed to be an int # and int(user_guess) knows how to parse positive and # negative numbers if user_guess < 0: print("Too low, guess a number between 0 and 10.") elif user_guess > 10: print("Too high, guess a number between 0 and 10.") A: The reason it's returning "It is not a number" for negative numbers is because user_guess.isdigit() treats negative numbers as strings (or non-digits). Here's a code that could work as you expect: while True: user_guess = input("Guess a number: ") try: user_guess = int(user_guess) if user_guess < 0: print("Too low, guess a number between 0 and 10.") if user_guess > 10: print("Too high, guess a number between 0 and 10.") except ValueError: print("It is not a number.") break Since the int() function can recognize negative numbers, using try-except can help you catch the ValueError exception that is raised whenever you try to use the int() function on non-integers. A: The problem is with isdigit(). isdigit() will return False if minus sign. One solution is to ask int() to validate the user_guess. while True: try: user_guess = int( input( "Guess a number: ")) except ValueError: print( "It is not a number.") break # exit loop # validate user entry if user_guess < 0: print("Too low...") continue elif user_guess > 10: print("Too high...") continue # do processing ...
How to distinguish negative numbers from input that is not a number
I am trying to build a simple game and I would like Python to return a message when a player enters a negative number. My issue is that negative numbers are interpreted as strings when the player tries to enter them. Here is my script: while True: user_guess = input("Guess a number: ") if user_guess.isdigit(): user_guess = int(user_guess) if user_guess < 0: print("Too low, guess a number between 0 and 10.") if user_guess > 10: print("Too high, guess a number between 0 and 10.") else: print("It is not a number.") break
[ "The code you have written is not wrong but it's not very idiomatic in Python and because of that you'll have to fight the language to add the \"parse negative\" functionality. Consider you could write something like:\nuser_guess = input(\"Guess a number: \")\nif is_positive_or_negative_number(user_guess):\n user_guess = int(user_guess)\n# continue as before\n\ndef is_positive_or_negative_number(s: str) -> bool:\n \"\"\"Checks if a given string represents a positive or negative number\"\"\"\n if s.startswith('-'):\n s = s[1:] # strip off the optional leading unary negation\n return s.isdigit() # do not allow decimals, so no need to worry\n # about allowing a \".\"\n\nHowever it's easier if you just write idiomatic Python! Your code is written in a style affectionately termed LBYL (Look Before You Leap) code checks to make sure a thing can be done before doing it. Python prefers EAFP (Easier to Ask Forgiveness than Permission), which has you try to do a thing and catch the error if it's thrown.\nThe idiomatic code then just tries to cast the input to int and pays attention if it fails.\nwhile True:\n user_guess = input(\"Guess a number: \")\n try:\n user_guess = int(user_guess)\n except ValueError:\n print(\"It is not a number.\")\n break\n # if we get here, user_guess is guaranteed to be an int\n # and int(user_guess) knows how to parse positive and\n # negative numbers\n if user_guess < 0:\n print(\"Too low, guess a number between 0 and 10.\")\n elif user_guess > 10:\n print(\"Too high, guess a number between 0 and 10.\")\n\n", "The reason it's returning \"It is not a number\" for negative numbers is because user_guess.isdigit() treats negative numbers as strings (or non-digits).\nHere's a code that could work as you expect:\nwhile True:\n user_guess = input(\"Guess a number: \")\n try:\n user_guess = int(user_guess)\n if user_guess < 0:\n print(\"Too low, guess a number between 0 and 10.\")\n if user_guess > 10:\n print(\"Too high, guess a number between 0 and 
10.\")\n except ValueError:\n print(\"It is not a number.\")\n break\n\nSince the int() function can recognize negative numbers, using try-except can help you catch the ValueError exception that is raised whenever you try to use the int() function on non-integers.\n", "The problem is with isdigit(). isdigit() will return False if minus sign.\nOne solution is to ask int() to validate the user_guess.\nwhile True:\n try:\n user_guess = int( input( \"Guess a number: \"))\n except ValueError:\n print( \"It is not a number.\")\n break # exit loop\n # validate user entry\n if user_guess < 0:\n print(\"Too low...\")\n continue\n elif user_guess > 10:\n print(\"Too high...\")\n continue\n # do processing\n ...\n\n\n" ]
[ 0, 0, 0 ]
[ "def input_number(message):\n while True:\n user_guess = input(message)\n try:\n n = int(user_guess)\n if n < 0:\n print(\"Too low, guess a number between 0 and 10.\")\n elif n > 10:\n print(\"Too high, guess a number between 0 and 10.\")\n else:\n return n\n except ValueError:\n print(\"It is not a number. Try again\")\n continue\n\n\nif __name__ == '__main__':\n number = input_number(\"Guess a number.\")\n print(\"Your number\", number)\n\n", "Edit: I will explain my code and why it solves your problem. The isdigit method you use will only check if the characters of a string consist of digits. The minus sign is not a digit, and so it returns False.\nInstead, I try to convert the string to a number, and if python fails, I just loop again (continue) and ask for a new number. If the input is indeed a number, the lower part of the code checks for a valid interval. Only if the number is within the interval, the variable controlling the loop gets set, and the loop exits.\nMy code does not depend on isdigit, and therefore avoids your problem. Hope this helps and provides insight.\nuser_guess = None\nwhile user_guess is None:\n inp = input(\"Guess a number: \")\n\n try:\n nr_inp = int(inp)\n except ValueError:\n print(\"It is not a number.\")\n continue\n\n if nr_inp < 0:\n print(\"Too low, guess a number between 0 and 10.\")\n elif nr_inp > 10:\n print(\"Too high, guess a number between 0 and 10.\")\n else:\n user_guess = nr_inp\n\nprint(\"Done:\", user_guess)\n\n" ]
[ -1, -1 ]
[ "negative_integer", "python" ]
stackoverflow_0074504679_negative_integer_python.txt
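The answers in the record above all converge on the same fix: drop `isdigit()` (which rejects a leading minus sign) and let `int()` do the parsing inside `try`/`except`. A minimal standalone sketch of that EAFP pattern — the `parse_guess` and `classify` helper names are mine, not from any answer:

```python
def parse_guess(s: str):
    """Return the parsed integer, or None when s is not a number.

    int() already understands an optional leading '-', so no manual
    sign handling or isdigit() check is needed (EAFP style).
    """
    try:
        return int(s)
    except ValueError:
        return None


def classify(n: int) -> str:
    """The 0..10 range check used in the answers, as a pure function."""
    if n < 0:
        return "Too low, guess a number between 0 and 10."
    if n > 10:
        return "Too high, guess a number between 0 and 10."
    return "in range"


for raw in ["7", "-3", "12", "abc"]:
    n = parse_guess(raw)
    print(raw, "->", "It is not a number." if n is None else classify(n))
```

In the original loop, `parse_guess` would replace the bare `int(user_guess)` call, and the `None` return would trigger the "It is not a number." branch.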
Q: self need in call to parent class by name when using multi inheritance Edit: Thanks for the replies. This is a practice exercise from a website that I'm using to learn, I haven't designed it. I want to confirm that the Wolf.action(self) is a static call and ask why you would make Wolf inherit from Animal if you can only use Dog class's methods with super() due to MRO (in a diamond scheme). Is there any point in making a subclass inherit from several independent classes since you can only use super() with the first one listed in the definition? Does it have anything to do with imports? So, in this code: class Animal: def __init__(self, name): self.name = name class Dog(Animal): def action(self): print("{} wags tail. Awwww".format(self.name)) class Wolf(Animal): def action(self): print("{} bites. OUCH!".format(self.name)) class Hybrid(Dog, Wolf): def action(self): super().action() Wolf.action(self) my_pet = Hybrid("Fluffy") my_pet.action() # Fluffy wags tail. Awwww # Fluffy bites. OUCH! Why do I have to provide self in Wolf.action(self) but not in super().action()? Why can't I just do Wolf.action()? I'm guessing this is just a static call, and that's why I need to pass an explicit parameter. But then, what is the point of multiple inheritance in this context? Wouldn't it be the same if Hybrid didn't inherit from Wolf? I've read some other threads but the majority of them talk about MRO and that is not the answer I'm looking for. Thanks in advance. A: Wolf.action is the actual function, not a bound method that implicitly includes self when you try to call it. However, if you use super properly, you don't need an explicit call to Wolf.action. class Animal: def __init__(self, name): self.name = name def action(self): pass class Dog(Animal): def action(self): super().action() print("{} wags tail. Awwww".format(self.name)) class Wolf(Animal): def action(self): super().action() print("{} bites. OUCH!".format(self.name)) class Hybrid(Wolf, Dog): pass my_pet = Hybrid("Fluffy") my_pet.action() # Fluffy wags tail. Awwww # Fluffy bites. OUCH!
self need in call to parent class by name when using multi inheritance
Edit: Thanks for the replies. This is a practice exercise from a website that I'm using to learn, I haven't designed it. I want to confirm that the Wolf.action(self) is a static call and ask why you would make Wolf inherit from Animal if you can only use Dog class's methods with super() due to MRO (in a diamond scheme). Is there any point in making a subclass inherit from several independent classes since you can only use super() with the first one listed in the definition? Does it have anything to do with imports? So, in this code: class Animal: def __init__(self, name): self.name = name class Dog(Animal): def action(self): print("{} wags tail. Awwww".format(self.name)) class Wolf(Animal): def action(self): print("{} bites. OUCH!".format(self.name)) class Hybrid(Dog, Wolf): def action(self): super().action() Wolf.action(self) my_pet = Hybrid("Fluffy") my_pet.action() # Fluffy wags tail. Awwww # Fluffy bites. OUCH! Why do I have to provide self in Wolf.action(self) but not in super().action()? Why can't I just do Wolf.action()? I'm guessing this is just a static call, and that's why I need to pass an explicit parameter. But then, what is the point of multiple inheritance in this context? Wouldn't it be the same if Hybrid didn't inherit from Wolf? I've read some other threads but the majority of them talk about MRO and that is not the answer I'm looking for. Thanks in advance.
[ "Wolf.action is the actual function, not a bound method that implicitly includes self when you try to call it.\nHowever, if you use super properly, you don't need an explicit call to Wolf.action.\nclass Animal:\n def __init__(self, name):\n self.name = name\n\n def action(self):\n pass\n \nclass Dog(Animal):\n def action(self):\n super().action()\n print(\"{} wags tail. Awwww\".format(self.name))\n \nclass Wolf(Animal):\n def action(self):\n super().action()\n print(\"{} bites. OUCH!\".format(self.name))\n \nclass Hybrid(Wolf, Dog):\n pass\n\nmy_pet = Hybrid(\"Fluffy\")\nmy_pet.action() # Fluffy wags tail. Awwww\n # Fluffy bites. OUCH!\n\n" ]
[ 1 ]
[]
[]
[ "class", "python", "self" ]
stackoverflow_0074504782_class_python_self.txt
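Two points from the record above can be checked directly: calling through the class object (`Wolf.action(h)`) needs an explicit `self` because a class-attribute lookup yields a plain function, while an instance lookup produces a bound method; and with cooperative `super()` calls, the MRO visits each parent exactly once. A sketch with the class bodies simplified to return lists instead of printing — the simplification (and dropping the `name`/`__init__` plumbing) is mine, so the call order is visible:

```python
class Animal:
    def action(self):
        return []  # anchor for the cooperative super() chain


class Dog(Animal):
    def action(self):
        return super().action() + ["wag"]


class Wolf(Animal):
    def action(self):
        return super().action() + ["bite"]


class Hybrid(Wolf, Dog):
    pass  # no override needed: super() walks the whole MRO


h = Hybrid()
# The MRO linearizes the diamond: each class appears exactly once.
print([c.__name__ for c in Hybrid.__mro__])
# -> ['Hybrid', 'Wolf', 'Dog', 'Animal', 'object']

print(h.action())      # ['wag', 'bite'] - both parents run via one chain
print(Wolf.action(h))  # same list: explicit self, and super() still
                       # follows type(h)'s MRO, so Dog is not skipped
```

This is why inheriting from both parents is not pointless: `super()` dispatches along the instance's MRO, not along the static base-class chain, so a single cooperative chain reaches every ancestor.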
Q: How to set the `xpath` of pandas's read_xml? I want to parse data from a xml file of its Component part: <Component> <UnderlyingSecurityID>300001</UnderlyingSecurityID> <UnderlyingSecurityIDSource>102</UnderlyingSecurityIDSource> <UnderlyingSymbol>特锐德</UnderlyingSymbol> <ComponentShare>300.00</ComponentShare> <SubstituteFlag>1</SubstituteFlag> <PremiumRatio>0.25000</PremiumRatio> <CreationCashSubstitute>0.0000</CreationCashSubstitute> <RedemptionCashSubstitute>0.0000</RedemptionCashSubstitute> </Component> <Component> <UnderlyingSecurityID>300003</UnderlyingSecurityID> <UnderlyingSecurityIDSource>102</UnderlyingSecurityIDSource> <UnderlyingSymbol>乐普医疗</UnderlyingSymbol> <ComponentShare>600.00</ComponentShare> <SubstituteFlag>1</SubstituteFlag> <PremiumRatio>0.25000</PremiumRatio> <CreationCashSubstitute>0.0000</CreationCashSubstitute> <RedemptionCashSubstitute>0.0000</RedemptionCashSubstitute> </Component> I have installed the latest version of lxml and pandas, tried following codes without luck. Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.25.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd In [2]: pd.__version__ Out[2]: '1.3.0' In [3]: xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-3-67d228028cc9> in <module> ----> 1 xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component') ... 501 if elems == []: --> 502 raise ValueError(msg) 503 504 if elems != [] and attrs == [] and children == []: ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. 
If document uses namespaces denoted with xmlns, be sure to define namespaces and use them in xpath. In [4]: xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component', namespaces={'com': 'http://ts.szse.cn/Fund'}) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-4-52fbe542dadb> in <module> ----> 1 xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component', namespaces={'com': 'http://ts.szse.cn/Fund'}) ... 501 if elems == []: --> 502 raise ValueError(msg) 503 504 if elems != [] and attrs == [] and children == []: ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. If document uses namespaces denoted with xmlns, be sure to define namespaces and use them in xpath. I also tried lxml directly, which seems work: In [5]: from lxml import etree In [6]: import requests In [7]: content = requests.get('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml').content In [8]: html = etree.HTML(content) In [9]: html.xpath('//component') Out[9]: [<Element component at 0x1d493cb23c0>, <Element component at 0x1d493cb2340>, <Element component at 0x1d493cb2240>, <Element component at 0x1d493cb22c0>, <Element component at 0x1d493cb2140>, <Element component at 0x1d493cb2040>, <Element component at 0x1d493cb2c40>, <Element component at 0x1d493cb61c0>, <Element component at 0x1d493cb63c0>, <Element component at 0x1d493cb2200>, ... I have no idea why the read_xml does not work. Any help would be appreciated! A: So in short the solution here is to figure out which node you want, in this case the Component (case-sensitive), and set the xpath as follows adding //. 
pd.read_xml(your_xml_file, xpath='//Component') A: You can use xml.etree.ElementTree, instead of pd.xml_read(): import xml.etree.ElementTree as ET import pandas as pd import requests url = 'https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml' response = requests.get(url) res = ET.fromstring(response.content) tree = ET.ElementTree(res) root = tree.getroot() namespace = "{http://ts.szse.cn/Fund}" columns =['UnderlyingSecurityID', 'UnderlyingSecurityIDSource', 'UnderlyingSymbol', 'ComponentShare', 'SubstituteFlag', 'PremiumRatio','CreationCashSubstitute', 'RedemptionCashSubstitute'] data = [] for elem in root: if elem.tag == f"{namespace}Components": com_l = [] for com in elem.findall(f"{namespace}Component"): for val in com: com_l.append(val.text) data.append(com_l) com_l =[] df = pd.DataFrame(data, columns=columns) print(df.to_string())
How to set the `xpath` of pandas's read_xml?
I want to parse data from a xml file of its Component part: <Component> <UnderlyingSecurityID>300001</UnderlyingSecurityID> <UnderlyingSecurityIDSource>102</UnderlyingSecurityIDSource> <UnderlyingSymbol>特锐德</UnderlyingSymbol> <ComponentShare>300.00</ComponentShare> <SubstituteFlag>1</SubstituteFlag> <PremiumRatio>0.25000</PremiumRatio> <CreationCashSubstitute>0.0000</CreationCashSubstitute> <RedemptionCashSubstitute>0.0000</RedemptionCashSubstitute> </Component> <Component> <UnderlyingSecurityID>300003</UnderlyingSecurityID> <UnderlyingSecurityIDSource>102</UnderlyingSecurityIDSource> <UnderlyingSymbol>乐普医疗</UnderlyingSymbol> <ComponentShare>600.00</ComponentShare> <SubstituteFlag>1</SubstituteFlag> <PremiumRatio>0.25000</PremiumRatio> <CreationCashSubstitute>0.0000</CreationCashSubstitute> <RedemptionCashSubstitute>0.0000</RedemptionCashSubstitute> </Component> I have installed the latest version of lxml and pandas, tried following codes without luck. Python 3.9.4 (tags/v3.9.4:1f2e308, Apr 6 2021, 13:40:21) [MSC v.1928 64 bit (AMD64)] Type 'copyright', 'credits' or 'license' for more information IPython 7.25.0 -- An enhanced Interactive Python. Type '?' for help. In [1]: import pandas as pd In [2]: pd.__version__ Out[2]: '1.3.0' In [3]: xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component') --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-3-67d228028cc9> in <module> ----> 1 xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component') ... 501 if elems == []: --> 502 raise ValueError(msg) 503 504 if elems != [] and attrs == [] and children == []: ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. If document uses namespaces denoted with xmlns, be sure to define namespaces and use them in xpath. 
In [4]: xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component', namespaces={'com': 'http://ts.szse.cn/Fund'}) --------------------------------------------------------------------------- ValueError Traceback (most recent call last) <ipython-input-4-52fbe542dadb> in <module> ----> 1 xml = pd.read_xml('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml', xpath='//component', namespaces={'com': 'http://ts.szse.cn/Fund'}) ... 501 if elems == []: --> 502 raise ValueError(msg) 503 504 if elems != [] and attrs == [] and children == []: ValueError: xpath does not return any nodes. Be sure row level nodes are in xpath. If document uses namespaces denoted with xmlns, be sure to define namespaces and use them in xpath. I also tried lxml directly, which seems work: In [5]: from lxml import etree In [6]: import requests In [7]: content = requests.get('https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml').content In [8]: html = etree.HTML(content) In [9]: html.xpath('//component') Out[9]: [<Element component at 0x1d493cb23c0>, <Element component at 0x1d493cb2340>, <Element component at 0x1d493cb2240>, <Element component at 0x1d493cb22c0>, <Element component at 0x1d493cb2140>, <Element component at 0x1d493cb2040>, <Element component at 0x1d493cb2c40>, <Element component at 0x1d493cb61c0>, <Element component at 0x1d493cb63c0>, <Element component at 0x1d493cb2200>, ... I have no idea why the read_xml does not work. Any help would be appreciated!
[ "So in short the solution here is to figure out which node you want, in this case the Component (case-sensitive), and set the xpath as follows adding //.\npd.read_xml(your_xml_file, xpath='//Component')\n\n", "You can use xml.etree.ElementTree, instead of pd.xml_read():\nimport xml.etree.ElementTree as ET\nimport pandas as pd\nimport requests\n\nurl = 'https://www.huaan.com.cn/etf/159949/etffiledownload.jsp?etffilename=pcf_159949_20210707.xml'\nresponse = requests.get(url)\nres = ET.fromstring(response.content)\n\ntree = ET.ElementTree(res)\nroot = tree.getroot()\n\nnamespace = \"{http://ts.szse.cn/Fund}\"\n\ncolumns =['UnderlyingSecurityID', 'UnderlyingSecurityIDSource', 'UnderlyingSymbol', 'ComponentShare', 'SubstituteFlag', 'PremiumRatio','CreationCashSubstitute', 'RedemptionCashSubstitute']\n\ndata = []\nfor elem in root: \n if elem.tag == f\"{namespace}Components\":\n com_l = []\n for com in elem.findall(f\"{namespace}Component\"):\n for val in com:\n com_l.append(val.text)\n data.append(com_l)\n com_l =[]\n\ndf = pd.DataFrame(data, columns=columns)\nprint(df.to_string())\n\n" ]
[ 0, 0 ]
[]
[]
[ "pandas", "python", "xml" ]
stackoverflow_0068281666_pandas_python_xml.txt
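The root cause in the record above is worth isolating: XPath element names are case-sensitive, so `//component` matches nothing in a document whose row nodes are `<Component>` — exactly the `ValueError` that `pd.read_xml` raised. (The `lxml` session in the question only appeared to work because `etree.HTML` runs an HTML parser, which lower-cases tag names; note the real fund file may additionally carry the `http://ts.szse.cn/Fund` namespace, as the second answer assumes.) A stdlib-only sketch against a trimmed, made-up stand-in for the fund file:

```python
import xml.etree.ElementTree as ET

# Hypothetical, trimmed stand-in for the real PCF XML document.
doc = """
<Fund>
  <Component><UnderlyingSecurityID>300001</UnderlyingSecurityID></Component>
  <Component><UnderlyingSecurityID>300003</UnderlyingSecurityID></Component>
</Fund>
"""

root = ET.fromstring(doc)

# Wrong case: no matches - the same situation read_xml reports as
# "xpath does not return any nodes".
print(len(root.findall(".//component")))  # 0

# Exact case: both row nodes are found, mirroring xpath='//Component'.
print(len(root.findall(".//Component")))  # 2
```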
Q: I had a problem with python library pikepdf When trying to install the python moduel pikepdf using pip, this error pops up: Building wheels for collected packages: pikepdf Building wheel for pikepdf (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for pikepdf (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [54 lines of output] ... creating build\temp.win-amd64-cpython-310\Release\src\qpdf "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DPOINTERHOLDER_TRANSITION=4 -IC:\Users\ME\AppData\Local\Temp\pip-build-env-dpc9ltd5\overlay\Lib\site-packages\pybind11\include "-IC:\Program Files\Python310\include" "-IC:\Program Files\Python310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc/qpdf\annotation.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src/qpdf\annotation.obj /EHsc /bigobj /std:c++17 annotation.cpp src/qpdf\annotation.cpp(4): fatal error C1083: Cannot open include file: 'qpdf/Constants.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Professional\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for pikepdf Failed to build pikepdf ERROR: Could not build wheels for pikepdf, which is required to install pyproject.toml-based projects Creating the wheel fails due to a missing header file: src/qpdf\annotation.cpp(4): fatal error C1083: Cannot open include file: 'qpdf/Constants.h': No such file or directory This is for pikepdf v6.0.0. My previous version was v4.0.1.post1, which worked fine. Is this something that can be remedied from my side? A: Just list all versions available for pidepdf: pip index versions pikepdf Pick one and install it: pip install pikepdf==5.6.1 Check back in a later version whether this is resolved. Issues like these can be reported in their tracker: https://github.com/pikepdf/pikepdf/issues The problem listed is known. From https://github.com/pikepdf/pikepdf/issues/390: pikepdf 6 requires qpdf 11 and drops compatibility for all earlier versions. [...] Binary wheel status: Windows support is currently blocked by [...] A: Solution for Macbook, M1 brew install qpdf After it use pip install pikepdf Solution from https://github.com/pikepdf/pikepdf/issues/274
I had a problem with python library pikepdf
When trying to install the python moduel pikepdf using pip, this error pops up: Building wheels for collected packages: pikepdf Building wheel for pikepdf (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for pikepdf (pyproject.toml) did not run successfully. │ exit code: 1 ╰─> [54 lines of output] ... creating build\temp.win-amd64-cpython-310\Release\src\qpdf "C:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\bin\HostX86\x64\cl.exe" /c /nologo /O2 /W3 /GL /DNDEBUG /MD -DPOINTERHOLDER_TRANSITION=4 -IC:\Users\ME\AppData\Local\Temp\pip-build-env-dpc9ltd5\overlay\Lib\site-packages\pybind11\include "-IC:\Program Files\Python310\include" "-IC:\Program Files\Python310\Include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\ATLMFC\include" "-IC:\Program Files (x86)\Microsoft Visual Studio\2019\Professional\VC\Tools\MSVC\14.29.30133\include" "-IC:\Program Files (x86)\Windows Kits\NETFXSDK\4.8\include\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\ucrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\shared" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\um" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\winrt" "-IC:\Program Files (x86)\Windows Kits\10\include\10.0.19041.0\cppwinrt" /EHsc /Tpsrc/qpdf\annotation.cpp /Fobuild\temp.win-amd64-cpython-310\Release\src/qpdf\annotation.obj /EHsc /bigobj /std:c++17 annotation.cpp src/qpdf\annotation.cpp(4): fatal error C1083: Cannot open include file: 'qpdf/Constants.h': No such file or directory error: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Professional\\VC\\Tools\\MSVC\\14.29.30133\\bin\\HostX86\\x64\\cl.exe' failed with exit code 2 [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. 
ERROR: Failed building wheel for pikepdf Failed to build pikepdf ERROR: Could not build wheels for pikepdf, which is required to install pyproject.toml-based projects Creating the wheel fails due to a missing header file: src/qpdf\annotation.cpp(4): fatal error C1083: Cannot open include file: 'qpdf/Constants.h': No such file or directory This is for pikepdf v6.0.0. My previous version was v4.0.1.post1, which worked fine. Is this something that can be remedied from my side?
[ "Just list all versions available for pidepdf:\npip index versions pikepdf\n\nPick one and install it:\npip install pikepdf==5.6.1\n\nCheck back in a later version whether this is resolved.\nIssues like these can be reported in their tracker: https://github.com/pikepdf/pikepdf/issues\nThe problem listed is known. From https://github.com/pikepdf/pikepdf/issues/390:\n\npikepdf 6 requires qpdf 11 and drops compatibility for all earlier\nversions. [...] Binary wheel status: Windows support is currently\nblocked by [...]\n\n", "Solution for Macbook, M1\nbrew install qpdf\n\nAfter it use\npip install pikepdf\n\nSolution from https://github.com/pikepdf/pikepdf/issues/274\n" ]
[ 1, 0 ]
[]
[]
[ "pikepdf", "python" ]
stackoverflow_0069686925_pikepdf_python.txt
Q: Split and convert str to int I'm making a shopping cart list, where the products are added and identified by their codes. The system has to add, remove, show and checkout. Show and checkout commands are working fine. Add is working fine too, but it has a particularity: it's mandatory to add with "Add 15", "Add 70" (whatever other number). I can't input str and int separately (did before and was perfect, but not what they want). After I add, the remove command does not identify the number inserted previously, because it is being added as a str. cart = [] while True: command = str(input("Command: ")).split() if "add" in command: cart.append(int(command[1])) elif "remove" in command: if command[1] in cart: cart.remove(int(command[1])) else: print(f'code {command[1]} not found') elif "show" in command: cart.sort() print(cart, end="\n") elif "checkout" in command: break cart.sort() print(cart, end="") A: You forgot to change the type of "command[1]" in the if. The following code works: cart = [] while True: command = str(input("Command: ")).split() if "add" in command: cart.append(int(command[1])) elif "remove" in command: if int(command[1]) in cart: # Here you forgot the int() cast on command[1] cart.remove(int(command[1])) else: print(f'code {command[1]} not found') elif "show" in command: cart.sort() print(cart, end="\n") elif "checkout" in command: break cart.sort() print(cart, end="") You made this mistake because you repeated yourself too much, using the int() method multiple times.
There is a cleaner version of the code cart = [] while True: raw = str(input("Command: ")).split() command = raw[0] amount = None if (len(raw) > 1): amount = int(raw[1]) if command == "add": cart.append(amount) elif command == "remove": if amount in cart: cart.remove(amount) else: print(f'code {amount} not found') elif command == "show": cart.sort() print(cart, end="\n") elif command == "checkout": break cart.sort() print(cart, end="") A: cart = [] while True: command = str(input("Command: ")).lower().split() print(command) # I assume that add and remove will be 2-word commands if len(command) == 2: try: number = int(command[1]) if "add" == command[0]: cart.append(number) elif "remove" == command[0]: if number in cart: cart.remove(number) else: print(f"code {number} not found") except ValueError: print(f"code {command[1]} is not a number") # I assume show and checkout will be 1-word commands elif len(command) == 1: if "show" == command[0]: cart.sort() print(cart, end="\n") elif "checkout" == command[0]: break else: print("invalid command") else: print("invalid command") cart.sort() print(cart, end="")
Split and convert str to int
I'm making a shopping cart list, where the products are added and identified by their codes. The system has to add, remove, show and checkout. Show and checkout commands are working fine. Add is working fine too, but it has a particularity: it's mandatory to add with "Add 15", "Add 70" (whatever other number). I can't input str and int separately (did before and was perfect, but not what they want). After I add, the remove command does not identify the number inserted previously, because it is being added as a str. cart = [] while True: command = str(input("Command: ")).split() if "add" in command: cart.append(int(command[1])) elif "remove" in command: if command[1] in cart: cart.remove(int(command[1])) else: print(f'code {command[1]} not found') elif "show" in command: cart.sort() print(cart, end="\n") elif "checkout" in command: break cart.sort() print(cart, end="")
[ "You forgot to change the type of \"command[1]\" in the if. The following code works:\ncart = []\nwhile True:\n command = str(input(\"Command: \")).split()\n if \"add\" in command:\n cart.append(int(command[1]))\n elif \"remove\" in command:\n if int(command[1]) in cart: # There you forgot to check command[1] with the casting of type\n cart.remove(int(command[1]))\n else:\n print(f'code {command[1]} not found')\n elif \"show\" in command:\n cart.sort()\n print(cart, end=\"\\n\")\n elif \"checkout\" in command:\n break\ncart.sort()\nprint(cart, end=\"\")\n\nYou made this mistake because you repeated yourself too much, using multiple times the int() method. There is a cleaner version of the code\ncart = []\nwhile True:\n raw = str(input(\"Command: \")).split()\n command = raw[0]\n amount = None\n if (len(raw) > 1):\n amount = int(raw[1])\n if command == \"add\":\n cart.append(amount)\n elif command == \"remove\":\n if amount in cart:\n cart.remove(amount)\n else:\n print(f'code {amount} not found')\n elif command == \"show\":\n cart.sort()\n print(cart, end=\"\\n\")\n elif command == \"checkout\":\n break\ncart.sort()\nprint(cart, end=\"\")\n\n", "cart = []\nwhile True:\n command = str(input(\"Command: \")).lower().split()\n print(command)\n # I assume that add and remove will be 2-word commands\n if len(command) == 2:\n try:\n number = int(command[1])\n if \"add\" == command[0]:\n cart.append(number)\n elif \"remove\" == command[0]:\n if number in cart:\n cart.remove(number)\n else:\n print(f\"code {number} not found\")\n except ValueError:\n print(f\"code {command[1]} is not a number\")\n # I assume show and checkout will be 1-word commands\n elif len(command) == 1:\n if \"show\" == command[0]:\n cart.sort()\n print(cart, end=\"\\n\")\n elif \"checkout\" == command[0]:\n break\n else:\n print(\"invalid command\")\n else:\n print(\"invalid command\")\n\ncart.sort()\nprint(cart, end=\"\")\n\n" ]
[ 3, 0 ]
[]
[]
[ "integer", "list", "python", "string" ]
stackoverflow_0074504776_integer_list_python_string.txt
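The bug in the record above reduces to one comparison: the cart holds `int`s, so the membership test `command[1] in cart` checks a `str` against `int`s and is always `False`. A standalone sketch of the corrected logic, wrapped in a function (the `process` name and its return strings are mine) so it can be exercised without `input()`:

```python
def process(cart: list, line: str) -> str:
    """Apply one 'add N' / 'remove N' command to the cart."""
    parts = line.lower().split()
    if len(parts) != 2 or not parts[1].isdigit():
        return "invalid command"
    code = int(parts[1])              # convert once...
    if parts[0] == "add":
        cart.append(code)
        return "added"
    if parts[0] == "remove":
        if code in cart:              # ...so int is compared against ints
            cart.remove(code)
            return "removed"
        return f"code {code} not found"
    return "invalid command"


cart = []
print(process(cart, "Add 15"))       # added
print(process(cart, "remove 15"))    # removed
print(process(cart, "remove 70"))    # code 70 not found
```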
Q: Not sure of the Print Structure with YouTube v3 API So I was creating a script to list information from Google's V3 YouTube API and I used the structure that was shown on their Site describing it, so I'm pretty sure I'm misunderstanding something. I tried using the structure that was shown to print JUST the Video's Title as a test and was expecting that to print, however it just throws an error. Error is below Here's what I wrote below import sys, json, requests vidCode = input('\nVideo Code Here: ') url = requests.get(f'https://youtube.googleapis.com/youtube/v3/videos?part=snippet%2CcontentDetails%2Cstatistics&id={vidCode}&key=(not sharing the api key, lol)') text = url.text data = json.loads(text) if "kind" in data: print(f'Video URL: youtube.com/watch?v={vidCode}') print('Title: ', data['snippet.title']) else: print("The video could not be found.\n") This did not work, however if I change snippet.title to just something like etag the print is successful. I take it this is because the Title is further down in the JSON List. I've also tried doing data['items'] which did work, but I also don't want to output a massive chunk of unformatted information, it's not pretty lol. Another test I did was data['items.snippet.title'] to see if that was what I was missing, also no, that didn't work. Any idea what I'm doing wrong? A: you need to access the keys in the dictionary separately. 
import sys, json, requests vidCode = input('\nVideo Code Here: ') url = requests.get(f'https://youtube.googleapis.com/youtube/v3/videos?part=snippet%2CcontentDetails%2Cstatistics&id={vidCode}&key=(not sharing the api key, lol)') text = url.text data = json.loads(text) if "kind" in data: print(f'Video URL: youtube.com/watch?v={vidCode}') print('Title: ', data['items'][0]['snippet']['title']) else: print("The video could not be found.\n") To be clear, you need to access the 'items' value in the dictionary which is a list, get the first item from that list, then get the 'snippet' sub object, then finally the title.
Not sure of the Print Structure with YouTube v3 API
So I was creating a script to list information from Google's V3 YouTube API and I used the structure that was shown on their Site describing it, so I'm pretty sure I'm misunderstanding something. I tried using the structure that was shown to print JUST the Video's Title as a test and was expecting that to print, however it just throws an error. Error is below Here's what I wrote below import sys, json, requests vidCode = input('\nVideo Code Here: ') url = requests.get(f'https://youtube.googleapis.com/youtube/v3/videos?part=snippet%2CcontentDetails%2Cstatistics&id={vidCode}&key=(not sharing the api key, lol)') text = url.text data = json.loads(text) if "kind" in data: print(f'Video URL: youtube.com/watch?v={vidCode}') print('Title: ', data['snippet.title']) else: print("The video could not be found.\n") This did not work, however if I change snippet.title to just something like etag the print is successful. I take it this is because the Title is further down in the JSON List. I've also tried doing data['items'] which did work, but I also don't want to output a massive chunk of unformatted information, it's not pretty lol. Another test I did was data['items.snippet.title'] to see if that was what I was missing, also no, that didn't work. Any idea what I'm doing wrong?
[ "you need to access the keys in the dictionary separately.\nimport sys, json, requests\n\nvidCode = input('\\nVideo Code Here: ')\n\nurl = requests.get(f'https://youtube.googleapis.com/youtube/v3/videos?part=snippet%2CcontentDetails%2Cstatistics&id={vidCode}&key=(not sharing the api key, lol)')\ntext = url.text\n\ndata = json.loads(text)\n\nif \"kind\" in data:\n print(f'Video URL: youtube.com/watch?v={vidCode}')\n \n print('Title: ', data['items'][0]['snippet']['title'])\nelse:\n print(\"The video could not be found.\\n\")\n\nTo be clear, you need to access the 'items' value in the dictionary which is a list, get the first item from that list, then get the 'snippet' sub object, then finally the title.\n" ]
[ 0 ]
[]
[]
[ "google_api", "json", "python", "python_3.x", "youtube_api" ]
stackoverflow_0074504824_google_api_json_python_python_3.x_youtube_api.txt
Q: Discord.py music bot Wavelink error: `TypeError: Type must meet VoiceProtocol abstract base class.`

The print:

0|Runa | <class 'wavelink.player.Player'>

The error:

0|Runa | vc:wavelink.Player=await ctx.author.voice.channel.connect(cls= wavelink.Player)
0|Runa | File "/usr/local/lib/python3.8/dist-packages/nextcord/abc.py", line 1683, in connect
0|Runa | raise TypeError("Type must meet VoiceProtocol abstract base class.")
0|Runa | TypeError: Type must meet VoiceProtocol abstract base class.

My "play music" command:

@commands.command()
async def play(self, ctx: commands.Context, *, search: wavelink.YouTubeTrack):
    if not ctx.voice_client:
        print(wavelink.Player)
        vc: wavelink.Player = await ctx.author.voice.channel.connect(cls=wavelink.Player)
    elif not ctx.author.voice:
        await ctx.send('Join a voice channel first lol.')
        return
    elif not ctx.author.voice.channel == ctx.voice_client.channel:
        await ctx.send('We need to be in the same voice channel.')
        return
    else:
        vc: wavelink.Player = ctx.voice_client

    if vc.queue.is_empty and not vc.is_playing():
        await vc.play(search)
        await ctx.send(f'Now playing: {search.title}! {search.uri}')
    else:
        await vc.queue.put_wait(search)
        await ctx.send(f'Added {search.title} to the queue! {search.uri}')

    vc.ctx = ctx
    setattr(vc, 'loop', False)

The bot sends this error when I use the command ~play link and does not join the vc or play music. Could someone please help me?

A: I had the same issue; you need to run the below for voice support:

Linux/macOS
python3 -m pip install -U "discord.py[voice]"

Windows
py -3 -m pip install -U discord.py[voice]

This resolved the issue for me. I'm using nextcord, so I used the below:

Linux/macOS
python3 -m pip install -U "nextcord[voice]"

Windows
py -3 -m pip install -U nextcord[voice]
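The traceback shows the check happening inside nextcord's connect(): the class passed via `cls=` must subclass that library's VoiceProtocol ABC. A likely cause is that the installed wavelink is built against discord.py, whose VoiceProtocol is a different class from nextcord's, so the issubclass test fails. A minimal, library-free sketch of that kind of check (the names below are stand-ins, not the real nextcord API):

```python
import abc

class VoiceProtocol(abc.ABC):
    """Stand-in for the library's VoiceProtocol abstract base class."""
    @abc.abstractmethod
    async def on_voice_state_update(self, data): ...

class GoodPlayer(VoiceProtocol):
    """Inherits the ABC, so it passes the check."""
    async def on_voice_state_update(self, data):
        return data

class ForeignPlayer:
    """Implements the same methods but does not inherit this ABC,
    like a wavelink.Player built against a different library."""
    async def on_voice_state_update(self, data):
        return data

def connect(cls):
    # Mirrors the check in nextcord/abc.py that raised the question's error.
    if not issubclass(cls, VoiceProtocol):
        raise TypeError("Type must meet VoiceProtocol abstract base class.")
    return cls()

player = connect(cls=GoodPlayer)  # accepted
try:
    connect(cls=ForeignPlayer)    # rejected, even though its methods match
except TypeError as e:
    err = str(e)
```

Duck typing is not enough here: the library requires actual inheritance from its own ABC, which is why mixing discord.py-based wavelink with a nextcord bot trips this error.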
Discord.py music bot Wavelink error: `TypeError: Type must meet VoiceProtocol abstract base class.`
The print: 0|Runa | <class 'wavelink.player.Player'> The error: 0|Runa | vc:wavelink.Player=await ctx.author.voice.channel.connect(cls= wavelink.Player) 0|Runa | File "/usr/local/lib/python3.8/dist-packages/nextcord/abc.py", line 1683, in connect 0|Runa | raise TypeError("Type must meet VoiceProtocol abstract base class.") 0|Runa | TypeError: Type must meet VoiceProtocol abstract base class. My "play music" command: @commands.command() async def play(self,ctx:commands.Context,*, search: wavelink.YouTubeTrack): if not ctx.voice_client: print(wavelink.Player) vc:wavelink.Player=await ctx.author.voice.channel.connect(cls= wavelink.Player) elif not ctx.author.voice: await ctx.send('Join a voice channel first lol.') return elif not ctx.author.voice.channel==ctx.voice_client.channel: await ctx.send('We need to be in the same voice channel.') return else: vc:wavelink.Player=ctx.voice_client if vc.queue.is_empty and not vc.is_playing(): await vc.play(search) await ctx.send(f'Now playing: {search.title}! {search.uri}') else: await vc.queue.put_wait(search) await ctx.send(f'Added {search.title} to the queue! {search.uri}') vc.ctx=ctx setattr(vc,'loop',False) The bot sends this error when i use the command ~play link and does not join vc or play music. Could someone please help me?
[ "I had the same issue, you need to run below for voice support:\nLinux/macOS\npython3 -m pip install -U \"discord.py[voice]\"\nWindows\npy -3 -m pip install -U discord.py[voice]\nthis resolved the issue for me. I'm using nextcord, so used below:\nLinux/macOS\npython3 -m pip install -U \"nextcord[voice]\"\nWindows\npy -3 -m pip install -U nextcord[voice]\n" ]
[ 0 ]
[]
[]
[ "audio_player", "discord", "discord.py", "python", "voice" ]
stackoverflow_0074451569_audio_player_discord_discord.py_python_voice.txt
Q: On Matplotlib on python, how do I put a red circle on a specific point?

My code I currently have is below. I want to put a filled-in red circle where I have the plt.text below. How would I do that?

plt.plot('Month', 'Total Profit', data=fruit_sales_df, color='g', ls='--')
plt.ylim(35000, 74999)
plt.text(11, 70476, '70476')
plt.title("Total Profit Trend by Month")
plt.xlabel("Month")
plt.ylabel("Total Profit")
ax.spines['top'].set_visible(False)
ax.spines['right'].set_visible(False)
plt.show()

A: Meaning just a point? You can add data consisting of one point only.

import matplotlib.pyplot as plt

plt.plot([1, 2], [3, 4], color='g', ls='--')
plt.text(1.5, 3.7, '70476')
plt.plot(1.5, 3.5, color='red', marker='o')
plt.title("Total Profit Trend by Month")
plt.xlabel("Month")
plt.ylabel("Total Profit")
plt.show()

A: You can call plt.plot(x, y, 'style') again to create a point, like:

import matplotlib.pyplot as plt

plt.plot([1,2,3,4], [1,2,3,4])
plt.plot(5, 5, 'ro')  # Additional point in red
plt.plot(6, 6, 'go')  # Additional point in green
plt.text(5, 5, "Text")
plt.axis([0, 8, 0, 8])
plt.title("Total Profit Trend by Month")
plt.xlabel("Month")
plt.ylabel("Total Profit")
plt.show()
On Matplotlib on python, how do I put a red circle on a specific point?
My code I currently have is below, I want to put a filled in red circle where I have the plt.text below. How would I do that? plt.plot('Month', 'Total Profit', data=fruit_sales_df, color='g', ls='--') plt.ylim(35000, 74999) plt.text(11, 70476, '70476') plt.title("Total Profit Trend by Month") plt.xlabel("Month") plt.ylabel("Total Profit") ax.spines['top'].set_visible(False) ax.spines['right'].set_visible(False) plt.show()
[ "Meaning just a point? You can add data consisting of one point only.\nimport matplotlib.pyplot as plt\n\nplt.plot([1, 2], [3, 4], color='g', ls='--')\nplt.text(1.5, 3.7, '70476')\nplt.plot(1.5, 3.5, color='red', marker='o')\nplt.title(\"Total Profit Trend by Month\")\nplt.xlabel(\"Month\")\nplt.ylabel(\"Total Profit\")\nplt.show()\n\n\n", "You can call plt.plot(x, y, 'style') again to create a point, like:\nimport matplotlib.pyplot as plt\n\nplt.plot([1,2,3,4], [1,2,3,4]) \nplt.plot(5, 5, 'ro') # Additional point in red\nplt.plot(6, 6, 'go') # Additional point in green\nplt.text(5, 5, \"Text\")\nplt.axis([0, 8, 0, 8]) \nplt.title(\"Total Profit Trend by Month\")\nplt.xlabel(\"Month\")\nplt.ylabel(\"Total Profit\")\nplt.show()\n\n\n\n\n" ]
[ 2, 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074504770_matplotlib_python.txt
Q: Python prime number calculator

prime = [2]
while len(prime) <= 1000:
    i = 3
    a = 0
    for number in prime:
        testlist = []
        testlist.append(i % number)
        if 0 in testlist:
            i = i + 1
        else:
            prime.append(i)
            i = i + 1
print(prime[999])

Trying to make a program that computes primes for an online course. This program never ends, but I can't see an infinite loop in my code. A prime number is a number that can only be divided by one and itself. My logic is that if a number can be divided by the prime numbers preceding it, then it is not prime.

A: As the comments to your question pointed out, there are several errors in your code. Here is a version of your code working fine.

prime = [2]
i = 3
while len(prime) <= 1000:
    testlist = []
    for number in prime:
        testlist.append(i % number)
    if 0 not in testlist:
        prime.append(i)
    i = i + 1
print(prime)

A: I haven't tested, but you can create a method like the below:

def get_prime_no_upto(number):
    start = 2
    primes = list(range(start, number))
    for no in range(start, number):
        for num in range(start, no):
            if (no % num == 0) and (num != no):
                primes.remove(no)
                break
    return primes

and can use it like

print(get_prime_no_upto(100))

cheers!

A:

def prime_checker(number):
    stop = False
    prime = True
    n = 2
    while stop == False and n < number:
        if (number) % n == 0:
            prime = False
            stop = True
        n += 1
    if prime == True:
        print("It's a prime number.")
    elif prime == False:
        print("It's not a prime number.")

prime_checker(11)
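For comparison, here is a compact trial-division sketch, written for this note rather than taken from the answers, that collects the first n primes using the same idea the question describes (divide each candidate only by the primes found so far):

```python
def first_n_primes(n):
    """Collect the first n primes by trial division against earlier primes."""
    primes = [2]
    candidate = 3
    while len(primes) < n:
        # candidate is prime iff no earlier prime divides it evenly
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 2  # skip even numbers; all of them are divisible by 2
    return primes

print(first_n_primes(10))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

Note the key structural difference from the question's code: the candidate `i` lives outside the loop over known primes, and a candidate is appended only after it has been tested against every prime collected so far.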
Python prime number calculator
prime = [2] while len(prime) <= 1000: i=3 a = 0 for number in prime: testlist= [] testlist.append(i%number) if 0 in testlist: i=i+1 else: prime.append(i) i=i+1 print(prime[999]) Trying to make a program that computes primes for online course. This program never ends, but I can't see an infinite loop in my code. A prime number is a number that can only be divided by exclusively one and itself. My logic is that if a number can be divided by prime numbers preceding it then it is not prime.
[ "As the comments to your question pointed out, there is several errors in your code.\nHere is a version of your code working fine.\nprime = [2]\ni = 3\nwhile len(prime) <= 1000:\n testlist = []\n for number in prime:\n testlist.append(i % number)\n if 0 not in testlist:\n prime.append(i)\n i = i + 1\nprint prime\n\n", "I haven't tested but you can create method like below:\ndef get_prime_no_upto(number):\n start = 2\n primes = list(range(start,number)).to_a\n for no in range(start,number):\n for num in range(start,no):\n if ( no % num == 0) and (num != no):\n primes.delete(no)\n break\n primes\n\nand can use it like\nprint primeno(100)\n\ncheers!\n", "def prime_checker(number):\n stop = False\n prime = True\n n = 2\n while stop == False and n < number:\n if (number) % n == 0:\n prime = False\n stop = True\n n += 1\n if prime == True:\n print(\"It's a prime number.\")\n elif prime == False:\n print(\"It's not a prime number.\")\n\nprime_checker(11)\n\n" ]
[ 2, 0, 0 ]
[]
[]
[ "conditional", "list", "python" ]
stackoverflow_0024252934_conditional_list_python.txt
Q: Problem with making .exe from python file by PyInstaller

My .py script works perfectly, but the .exe sadly doesn't work. I'm running the newest PyInstaller. Here is my script. I already tried everything that I can think of; here are the options I used:

Options used
-w : doesn't have a .exe file
--onefile -w and -F -w : The specified module could not be found.
--F, --onefile and no option used : Only shows this option for like half a second

A: Not all python code can be compiled into a .exe.

A: I was able to work around this issue by importing pywintypes into my script before the win32print module.
Problem with making .exe from python file by PyInstaller
My .py script works perfectly, but the .exe sadly doesn't work. I'm running the newest PyInstaller. Here is my script. I already tried everything that I can think of; here are the options I used: Options used -w : doesn't have a .exe file --onefile -w and -F -w : The specified module could not be found. --F, --onefile and no option used : Only shows this option for like half a second
[ "Not all python code can be compiled into a .exe.\n", "I was able to work around this issue by importing pywintypes into my script before win32print module.\n" ]
[ 0, 0 ]
[]
[]
[ "exe", "pyinstaller", "python", "python_3.x", "pywin32" ]
stackoverflow_0074504169_exe_pyinstaller_python_python_3.x_pywin32.txt
Q: How to ignore duplicate keys using the psycopg2 copy_from command copying .csv file into postgresql database

I'm using Python. I have a daily csv file that I need to copy daily into a postgresql table. Some of those .csv records may be the same day over day, so I want to ignore those, based on a primary key field. Using cursor.copy_from, Day 1 all is fine, new table created. Day 2, copy_from throws a duplicate key error (as it should), but copy_from stops on the 1st error. Is there a copy_from parameter that would ignore the duplicates and continue? If not, any other recommendations other than copy_from?

f = open(csv_file_name, 'r')
c.copy_from(f, 'mytable', sep=',')

A: This is how I'm doing it with psycopg3. Assumes the file is in the same folder as the script and that it has a header row.

from pathlib import Path
from psycopg import sql

file = Path(__file__).parent / "the_data.csv"
target_table = "mytable"
conn = <your connection>

with conn.cursor() as cur:

    # Create an empty table with the same columns as target_table.
    cur.execute(f"CREATE TEMP TABLE tmp_table (LIKE {target_table})")

    # The csv file imports as text.
    # This approach tells postgres how to convert text to the proper column types.
    column_types = sql.Identifier(target_table)
    query = sql.SQL("COPY tmp_table FROM STDIN WITH(FORMAT csv, HEADER true)")
    typed_query = query.format(column_types)
    with cur.copy(typed_query) as copy:
        with file.open() as csv_data:
            copy.write(csv_data.read())

    cur.execute(
        f"INSERT INTO {target_table} SELECT * FROM tmp_table ON CONFLICT DO NOTHING"
    )
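The staging-table pattern above needs a running Postgres server, but the ignore-duplicates idea itself can be demonstrated with the standard library's sqlite3, whose INSERT OR IGNORE plays the same role as Postgres's ON CONFLICT DO NOTHING. Table and column names here are made up for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (id INTEGER PRIMARY KEY, payload TEXT)")

# Day 1 load, then a day 2 load that repeats primary key 1.
day1 = [(1, "alpha"), (2, "beta")]
day2 = [(1, "alpha"), (3, "gamma")]   # (1, "alpha") is a duplicate key

conn.executemany("INSERT INTO mytable VALUES (?, ?)", day1)
# OR IGNORE skips rows that violate the primary key instead of aborting
# the whole batch, analogous to ON CONFLICT DO NOTHING in Postgres.
conn.executemany("INSERT OR IGNORE INTO mytable VALUES (?, ?)", day2)

rows = conn.execute("SELECT id, payload FROM mytable ORDER BY id").fetchall()
print(rows)  # [(1, 'alpha'), (2, 'beta'), (3, 'gamma')]
```

The point in both databases is the same: let the engine resolve the key conflict per row, rather than having the bulk load abort on the first duplicate.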
How to ignore duplicate keys using the psycopg2 copy_from command copying .csv file into postgresql database
I'm using Python. I have a daily csv file that I need to copy daily into a postgresql table. Some of those .csv records may be the same day over day, so I want to ignore those, based on a primary key field. Using cursor.copy_from, Day 1 all is fine, new table created. Day 2, copy_from throws a duplicate key error (as it should), but copy_from stops on the 1st error. Is there a copy_from parameter that would ignore the duplicates and continue? If not, any other recommendations other than copy_from? f = open(csv_file_name, 'r') c.copy_from(f, 'mytable', sep=',')
[ "This is how I'm doing it with psycopg3.\nAssumes the file is in the same folder as the script and that it has a header row.\nfrom pathlib import Path\nfrom psycopg import sql\n\nfile = Path(__file__).parent / \"the_data.csv\"\ntarget_table = \"mytable\"\nconn = <your connection>\n\nwith conn.cursor() as cur:\n\n # Create an empty table with the same columns as target_table.\n cur.execute(f\"CREATE TEMP TABLE tmp_table (LIKE {target_table})\")\n\n # The csv file imports as text.\n # This approach tells postgres how to convert text to the proper column types.\n column_types = sql.Identifier(target_table)\n query = sql.SQL(\"COPY tmp_table FROM STDIN WITH(FORMAT csv, HEADER true)\")\n typed_query = query.format(column_types)\n with cur.copy(typed_query) as copy:\n with file.open() as csv_data:\n copy.write(csv_data.read())\n\n cur.execute(\n f\"INSERT INTO {target_table} SELECT * FROM tmp_table ON CONFLICT DO NOTHING\"\n )\n\n" ]
[ 0 ]
[]
[]
[ "postgresql", "psycopg2", "python" ]
stackoverflow_0073200153_postgresql_psycopg2_python.txt
Q: Python Openpyxl Copy Data From Rows Based on Cell Value & Paste In Specific Rows of ExcelSheet

I am trying to copy data by rows, based on the Column ['A'] cell value, from one sheet and paste it in row 2 of another sheet. The paste-in sheet is an existing worksheet; row 1 of the worksheet is my header row, so I want to paste the copied data starting from row 2. I do not want to append, as I have existing formula columns in the paste-in sheet that will be overwritten; also, with append I lose formatting. So say Column A of my copy-from sheet is States: I want to copy all rows where the Column ['A'] cell.value is 'Georgia' and paste them in row 2 of sheet2, copy rows where the Column ['A'] cell.value = Texas and paste them in row 2 of sheet 3, etc. (pasting every state in a different sheet). I am able to copy the data and paste, but I am not able to get it to paste in row 2; it is pasting in whatever row the data is in my copy-from sheet. So if Texas starts from row 3000, my code is copying from row 3000 of the copy-from sheet and pasting in row 3000 of sheet 2, meaning rows 1-2999 of my sheet 2 are all empty rows.

Copy from file looks like this:

Paste in file looks like this:

see my code below

import openpyxl
from openpyxl import load_workbook
from openpyxl import Workbook
from openpyxl.utils import range_boundaries
from sys import argv

script, inpath, outpath = argv

# load copy from file
wb_cpy = load_workbook(r'C:\Users\me\documents\sourcefolder\copyfromfile.xlsx')
#ws = wb_src["sheet1"]  #previous inconsistency referred to in the comment
ws = wb_cpy["sheet1"]  #edited fixed

# load paste in file
wb_pst = load_workbook(r'C:\Users\me\documents\sourcefolder\pasteinfile.xlsx')
#ws2 = wb_dst["sheet2"]  #previous inconsistency referred to in the comment
ws2 = wb_pst["sheet2"]  #edited fixed

for row in ws.iter_rows(min_col=1, max_col=1, min_row=9):
    for row2 in ws2.iter_rows(min_col=1, max_col=1, min_row=2):
        for cell in row:
            for cell2 in row2:
                if cell.value == "GEORGIA":
                    ws2.cell(row=cell.row, column=1).value = ws.cell(row=cell.row, column=1).value
                    ws2.cell(row=cell.row, column=2).value = ws.cell(row=cell.row, column=2).value
                    ws2.cell(row=cell.row, column=6).value = ws.cell(row=cell.row, column=6).value

wb_pst.save(r'C:\Users\me\documents\sourcefolder\pasteinfile.xlsx')

#ps: i will repeat the script for each state

I may be approaching it all wrong, but I have tried multiple other approaches with no success; I cannot get the copied data to paste in row 2 of the paste-in sheet.

A: There seems to be some inconsistencies in your code, e.g.

wb_cpy = load_workbook(r'C:\Users\me\documents\sourcefolder\copyfromfile.xlsx')
ws = wb_src["sheet1"]

ws is referencing a workbook object different to that just created, or indeed one that does not appear to exist anywhere in your code. Similar with the next workbook and worksheet objects.

When you are writing code you should try to avoid duplication, so reuse code where you can.

Below is some example code based on the assumption in my comment and that the states are in order as shown in your example data, i.e. not jumbled together, and the States list is in that same order.

The code uses a python list of the States to search, then copies the consecutive rows to the current 'pasteinfile.xlsx' sheet until the next State data. It then copies that State data to the next 'pasteinfile.xlsx' Sheet and so on for each State.

Summary
The States list is manually added here; however, it could be obtained from the values in Column A prior if these change each time. A search on Column A is made for each State in the list starting at A2, then subsequently from the last row of the last copied State data, i.e. after GEORGIA rows are copied and ALABAMA is the next search, it will start from row 7, which is the end of the GEORGIA rows.

As a 'State' matches, it sets the first row to paste data in the 'pasteinfile.xlsx' Sheet to row 2, then iterates through the cells in the first matched row and copies each cell value to 'pasteinfile.xlsx' (starting at row 2). It then checks the next row in Column A for a State match again, and if true copies the next row to row 3 of 'pasteinfile.xlsx', and so on until the State no longer matches. At this point it loops to the next State, resets the start row back to 2 and sets the next numeric Sheet name. The same process is then repeated until all States in the list are searched.

For each State the 'pasteinfile.xlsx' Sheet name is incremented by 1, i.e. 'Sheet1', 'Sheet2', etc. The code starts naming at 'Sheet1'; however, that can be changed to start at another number if desired.

...
from copy import copy  # Import copy if used
# load copy from file
wb_cpy = load_workbook('copyfromfile.xlsx')
# ws = wb_src["sheet1"]
ws = wb_cpy["Sheet1"]

# load paste in file
wb_pst = load_workbook('pasteinfile.xlsx')
# ws2 = wb_dst["sheet2"]

copyfrom_max_columns = ws.max_column

paste_start_min_row = 1
states_list = ['GEORGIA', 'ALABAMA', 'TEXAS']  # States list to search for rows
for sheet_number, state in enumerate(states_list, 1):
    ws2 = wb_pst["Sheet" + str(sheet_number)]  # Set Sheet name for current pasted data
    search_min_row = paste_start_min_row  # Start search for States at top row then from the end of the last copy/paste
    paste_start_min_row = 1  # Reset the row number for each new sheet so the copy starts at row 2
    for row in ws.iter_rows(max_col=1, min_row=search_min_row):  # min_col defaults to 1
        for cell in row:
            if cell.value == state:  # Search ColA for the State, when match is found proceed to copy/paste
                paste_start_min_row += 1  # Set first row for 'copy to' to 2
                for i in range(copyfrom_max_columns):  # Iterate the cells in the row to max column
                    # Set the copy and paste Cells
                    copy_cell = cell.offset(column=i)
                    paste_cell = ws2.cell(row=paste_start_min_row, column=i + 1)
                    # Paste the copied value to the 'pasteinfile.xlsx' Sheet
                    paste_cell.value = copy_cell.value
                    # Set the number format of the cell to same as original
                    paste_cell.number_format = copy_cell.number_format

                    ### Copy other Cell formatting if desired
                    ### Requires 'from copy import copy'
                    paste_cell.font = copy(copy_cell.font)
                    paste_cell.alignment = copy(copy_cell.alignment)
                    paste_cell.border = copy(copy_cell.border)
                    paste_cell.fill = copy(copy_cell.fill)

wb_pst.save('pasteinfile.xlsx')

This image is an example of the Sheet for ALABAMA in 'pasteinfile.xlsx' (Sheet2 in this case), before and after running the code. Note I set each row in the Type column to a numeric value as a unique identifier for each row of the data.

#-------------Additional Information---------#
I have updated the code to include some style and formatting copying. The specific format noted is 'number_format', which can be copied across the same way as the value per the code. If you need/want other formatting like font, orientation, fill etc., these need the 'copy' function and you'll need to import copy as shown in the code, **from copy import copy**. If you just want the number format, omit those lines and there is no need to import copy.
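The row bookkeeping in the answer (reset the paste row for each state, advance it once per copied row) can be sketched without Excel at all. Here rows are plain tuples and "sheets" are lists, purely to illustrate the grouping logic; the data and column names are made up:

```python
# Hypothetical source rows: (state, county, value), header assumed stripped.
ROWS = [
    ("GEORGIA", "Fulton", 10),
    ("GEORGIA", "Cobb", 20),
    ("ALABAMA", "Shelby", 30),
    ("TEXAS", "Travis", 40),
    ("TEXAS", "Harris", 50),
]

def split_by_state(rows, states):
    """Build one 'sheet' per state. Index 0 is the header row (Excel row 1),
    so each state's data always starts at index 1 (Excel row 2), regardless
    of where the state appeared in the source."""
    header = ("State", "County", "Value")
    sheets = {}
    for state in states:
        sheet = [header]            # row 1 is the header
        for row in rows:
            if row[0] == state:
                sheet.append(row)   # paste consecutively from row 2, no gaps
        sheets[state] = sheet
    return sheets

sheets = split_by_state(ROWS, ["GEORGIA", "ALABAMA", "TEXAS"])
```

The question's bug maps onto this sketch directly: writing to `row=cell.row` reuses the source row index, whereas the fix keeps a separate destination counter that starts at 2 for every sheet.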
Python Openpyxl Copy Data From Rows Based on Cell Value& Paste In Specific Rows of ExcelSheet
I am trying to copy data by rows based on Column ['A'] cell value from one sheet and paste in row2 of another sheet. The paste in sheet is an existing worksheet, row 1 of the worksheet is my header row so i want to paste the copied data starting from row2. I do not want to append as I have existing formula columns in the paste in sheet that will be overwritten, also with append I lose formatting. So say Column A of my copy from sheet is States, i want to copy all rows where Column ['A'] cell.value is 'Georgia' and paste in row2 of sheet2, copy rows where Column ['A'] cell.value = Texas and paste in row2 of sheet 3 etc(pasting every state in different sheets). I am able to copy the data and paste but I am not able to get it to paste in row 2 it is pasting in whatever row the data is in my copy from sheet. So if Texas starts from row 3000, my code is copying from row 3000 of the copy from sheet and pasting in row 3000 of sheet 2 meaning rows 1-2999 of my sheet 2 is all empty rows, Copy from file looks like this: Paste in file looks like this: see my code below import openpyxl from openpyxl import load_workbook from openpyxl import Workbook from openpyxl.utils import range_boundaries from sys import argv script, inpath, outpath = argv # load copy from file wb_cpy = load_workbook(r'C:\Users\me\documents\sourcefolder\copyfromfile.xlsx') #ws = wb_src["sheet1"] #previous inconsistency referred to in the comment ws = wb_cpy["sheet1"] #edited fixed # load paste in file wb_pst = load_workbook(r'C:\Users\me\documents\sourcefolder\pasteinfile.xlsx') #ws2 = wb_dst["sheet2"] #previous inconsistency referred to in the comment ws2 = wb_pst["sheet2"] #edited fixed for row in ws.iter_rows(min_col=1, max_col=1, min_row=9): for row2 in ws2.iter_rows(min_col=1, max_col=1, min_row=2): for cell in row: for cell2 in row2: if cell.value == "GEORGIA": ws2.cell(row=cell.row, column=1).value = ws.cell(row=cell.row, column=1).value ws2.cell(row=cell.row, column=2).value = ws.cell(row=cell.row, column=2).value ws2.cell(row=cell.row, column=6).value = ws.cell(row=cell.row, column=6).value wb_pst.save(r'C:\Users\me\documents\sourcefolder\pasteinfile.xlsx') #ps: i will repeat the script for each state I maybe approaching it all wrong but I have tried multiple other approaches with no success, I cannot get the copied data to paste in row 2 of the paste in sheet
[ "There seems to be some inconsistencies in your code e.g.\nwb_cpy = load_workbook(r'C:\\Users\\me\\documents\\sourcefolder\\copyfromfile.xlsx')\nws = wb_src[\"sheet1\"]\n\nws is referencing a workbook object different to that just created or indeed does not appear to exist anywhere in your code. Similar with the next workbook and worksheet objects\nWhen you are writing code should try to avoid duplication, so reuse code where you can.\nBelow is some example code is based on the assumption in my comment and that the states are in order as shown in your example data i.e. not jumbled together and the States list is in that same order.\n\nThe code uses a python list of the States to search then copy the consecutive rows to the current 'pasteinfile.xlsx' sheet until the next State data. It then copies that State data to the next 'pasteinfile.xlsx' Sheet and so on for each State.\nSummary\nThe States list is manually added here however it could be obtained from the values in Column A prior if these change each time. A search on Column A is made for each State in the list starting at A2, then subsequently from the last row of the last copied State data, i.e. after GEORGIA rows are copied and ALABAMA is the next search its will start from row 7 which is the end of the GEORGIA rows.\nAs a 'State' matches it sets the first row to paste data in the 'pasteinfile.xlsx' Sheet to row 2 then iterates through the cells in the first matched row and copies each cell value to 'pasteinfile.xlsx' (starting at row 2). Then checks next row in Column A for a State match again and if true copies the next row to row 3 of 'pasteinfile.xlsx' and so on until the State no longer matches. At this point it loops to the next State and resets the start row back to 2 and sets the next numeric Sheet name. Then the same process is repeated until all States in the list are searched.\nFor each State the 'pasteinfile.xlsx' Sheet name is incremented by 1, i.e. 'Sheet1', 'Sheet2', etc. 
The code starts naming at 'Sheet1' however that can be changed to start at another number if desired.\n...\nfrom copy import copy # Import copy if used\n# load copy from file\nwb_cpy = load_workbook('copyfromfile.xlsx')\n# ws = wb_src[\"sheet1\"]\nws = wb_cpy[\"Sheet1\"]\n\n# load paste in file\nwb_pst = load_workbook('pasteinfile.xlsx')\n# ws2 = wb_dst[\"sheet2\"]\n\ncopyfrom_max_columns = ws.max_column\n\npaste_start_min_row = 1\nstates_list = ['GEORGIA', 'ALABAMA', 'TEXAS'] # States list to search for rows\nfor sheet_number, state in enumerate(states_list, 1):\n ws2 = wb_pst[\"Sheet\" + str(sheet_number)] # Set Sheet name for current pasted data\n search_min_row = paste_start_min_row # Start search for States at top row then from the end of the last copy/paste\n paste_start_min_row = 1 # Reset the row number for each new sheet so the copy starts at row 2\n for row in ws.iter_rows(max_col=1, min_row=search_min_row): # min_col defaults to 1\n for cell in row:\n if cell.value == state: # Search ColA for the State, when match is found proceed to copy/paste\n paste_start_min_row += 1 # Set first row for 'copy to' to 2\n for i in range(copyfrom_max_columns): # Iterate the cells in the row to max column\n # Set the copy and paste Cells\n copy_cell = cell.offset(column=i)\n paste_cell = ws2.cell(row=paste_start_min_row, column=i + 1)\n # Paste the copied value to the 'pasteinfile.xlsx' Sheet\n paste_cell.value = copy_cell.value\n # Set the number format of the cell to same as original\n paste_cell.number_format = copy_cell.number_format\n\n ### Copy other Cell formatting if desired\n ### Requires 'from copy import copy'\n paste_cell.font = copy(copy_cell.font)\n paste_cell.alignment = copy(copy_cell.alignment)\n paste_cell.border = copy(copy_cell.border)\n paste_cell.fill = copy(copy_cell.fill)\n\nwb_pst.save('pasteinfile.xlsx')\n\nThis image is an example of the Sheet for ALABAMA in 'pasteinfile.xlsx' (Sheet2 in this case), before and after running the code. 
Note I set each row in the Type column to a numeric value as a unique identifier for each row of the data.\n\n\n#-------------Additional Information---------#\nI have updated the code to include some style and formatting copying. The specific format noted is 'number_format' which can be copied across the same way as the value per the code. If you need/want other formatting like font, orientation, fill etc these need the 'copy' function and you'll need to import copy as shown in the code, **from copy import copy**. If you just want the number format omit those lines and there is no need to import copy.\n" ]
[ 1 ]
[]
[]
[ "openpyxl", "python" ]
stackoverflow_0074448799_openpyxl_python.txt
Q: Django migration not applied to the DB I had an Django2.2.3 app, it was working fine. But I had to chane the name of a field in a table, and add another field. Then I ran ./manage.py makemigrations && ./manage.py migrate. Besides the terminal prompt: Running migrations: No migrations to apply. No error is throwed. But then when I go to the MySQLWorkbench to check the database, it is exactly as I didn't make any change. I tried deleting the migrations and making again, the process ends with no errors but the database don't change. I create another empty database, change the name on settings.py and make migrations and migrate again, and it worked, but when I put the old database name on the settings, it just did not work. Can someone explain this behavior for me? There is any kind of cache for these information migrations or something? I realy want to know why this is not winrkig as I espect. A: Make sure the app with the migrations is in the INSTALLED_APPS. Django won't look at the app for changes otherwise. A: Adding new few fields to an existing model (table) is one reason for this problem. A way to go about this is simply as follows: a) un-apply the migrations for that app: python3 manage.py migrate --fake <app-name> zero b) migrate the required migrations (you've already deleted previous migrations and you've done 'makemigrations' for the newly added column. So, you just migrate: python3 manage.py migrate <app-name> If the steps above didn't solve the problems, then drop the table first; i) python3 manage.py dbshell ii) DROP TABLE appname_tablename close the shell and repeat a and b again.
Django migration not applied to the DB
I had a Django 2.2.3 app, and it was working fine. But I had to change the name of a field in a table, and add another field. Then I ran ./manage.py makemigrations && ./manage.py migrate. Besides the terminal prompt: Running migrations: No migrations to apply. No error is thrown. But then when I go to MySQLWorkbench to check the database, it is exactly as if I didn't make any change. I tried deleting the migrations and making them again; the process ends with no errors but the database doesn't change. I created another empty database, changed the name in settings.py, and ran makemigrations and migrate again, and it worked; but when I put the old database name in the settings, it just did not work. Can someone explain this behavior to me? Is there any kind of cache for this migration information or something? I really want to know why this is not working as I expect.
[ "Make sure the app with the migrations is in the INSTALLED_APPS. Django won't look at the app for changes otherwise.\n", "Adding new few fields to an existing model (table) is one reason for this problem. A way to go about this is simply as follows:\na) un-apply the migrations for that app:\npython3 manage.py migrate --fake <app-name> zero\n\nb) migrate the required migrations (you've already deleted previous migrations and you've done 'makemigrations' for the newly added column. So, you just migrate:\npython3 manage.py migrate <app-name>\n\nIf the steps above didn't solve the problems, then drop the table first;\ni) python3 manage.py dbshell\n\nii) DROP TABLE appname_tablename\n\nclose the shell and repeat a and b again.\n" ]
[ 0, 0 ]
[]
[]
[ "django", "migration", "mysql", "python" ]
stackoverflow_0065929264_django_migration_mysql_python.txt
Q: Problem with python logging.handlers.SMTPHandler, 'credentials' not recognized as attribute of SMTPHandler I'm trying to set up email logging of critical errors in my python application. I keep running into an error trying to initialize the SMTPHandler: AttributeError: 'SMTPHandler' object has no attribute 'credentials' I'm using Python 3.10. I carved out a component of the program where I'm getting the error. import logging from logging.handlers import SMTPHandler mail_handler = SMTPHandler( mailhost='my.hosting.com', fromaddr='admin@myapp.com', toaddrs=['admin@myapp.com'], subject='Application Error', credentials=('admin@myapp.com', 'mypassword'), secure=() ) print(mail_handler.mailhost) print(mail_handler.fromaddr) print(mail_handler.toaddrs) print(mail_handler.subject) print(mail_handler.secure) print(mail_handler.timeout) print(mail_handler.credentials) mail_handler.setLevel(logging.ERROR) mail_handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)s in %(module)s: %(message)s')) The print statements and traceback I'm getting is: my.hosting.com admin@myapp.com ['admin@myapp.com'] Application Error () 5.0 Traceback (most recent call last): File "C:\Users\user\Documents\myapp\test.py", line 31, in <module> print(mail_handler.credentials) AttributeError: 'SMTPHandler' object has no attribute 'credentials' When I check the init statement for SMTPHandler using the following snippet to make sure I'm not accessing a very old version (I think credentials was added in 2.6): import inspect signature = inspect.signature(SMTPHandler.__init__).parameters for name, parameter in signature.items(): print(name, parameter.default, parameter.annotation, parameter.kind)` I get: self <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD mailhost <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD fromaddr <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD toaddrs <class 'inspect._empty'> <class 
'inspect._empty'> POSITIONAL_OR_KEYWORD subject <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD credentials None <class 'inspect._empty'> POSITIONAL_OR_KEYWORD secure None <class 'inspect._empty'> POSITIONAL_OR_KEYWORD timeout 5.0 <class 'inspect._empty'> POSITIONAL_OR_KEYWORD So 'credentials' is in the initialization statement. Anyone see something stupid in my code or run into this problem? Thanks so much! A: You have the full source code for all of the standard modules on your computer. I just took a quick look, and although the SMTPHandler accepts a credentials argument, it stores that argument in self.username and self.password.
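As the answer notes, the handler stores the credentials tuple in username and password rather than in a credentials attribute. A minimal sketch (host and addresses are the question's own placeholders) that can be verified without sending any mail, since SMTPHandler opens no connection at construction time:

```python
import logging.handlers

# SMTPHandler unpacks the credentials tuple into .username and .password;
# it never sets a .credentials attribute on the instance.
mail_handler = logging.handlers.SMTPHandler(
    mailhost='my.hosting.com',
    fromaddr='admin@myapp.com',
    toaddrs=['admin@myapp.com'],
    subject='Application Error',
    credentials=('admin@myapp.com', 'mypassword'),
    secure=(),
)
print(mail_handler.username)                  # 'admin@myapp.com'
print(hasattr(mail_handler, 'credentials'))   # False
```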
Problem with python logging.handlers.SMTPHandler, 'credentials' not recognized as attribute of SMTPHandler
I'm trying to set up email logging of critical errors in my python application. I keep running into an error trying to initialize the SMTPHandler: AttributeError: 'SMTPHandler' object has no attribute 'credentials' I'm using Python 3.10. I carved out a component of the program where I'm getting the error. import logging from logging.handlers import SMTPHandler mail_handler = SMTPHandler( mailhost='my.hosting.com', fromaddr='admin@myapp.com', toaddrs=['admin@myapp.com'], subject='Application Error', credentials=('admin@myapp.com', 'mypassword'), secure=() ) print(mail_handler.mailhost) print(mail_handler.fromaddr) print(mail_handler.toaddrs) print(mail_handler.subject) print(mail_handler.secure) print(mail_handler.timeout) print(mail_handler.credentials) mail_handler.setLevel(logging.ERROR) mail_handler.setFormatter(logging.Formatter('[%(asctime)s] %(levelname)s in %(module)s: %(message)s')) The print statements and traceback I'm getting is: my.hosting.com admin@myapp.com ['admin@myapp.com'] Application Error () 5.0 Traceback (most recent call last): File "C:\Users\user\Documents\myapp\test.py", line 31, in <module> print(mail_handler.credentials) AttributeError: 'SMTPHandler' object has no attribute 'credentials' When I check the init statement for SMTPHandler using the following snippet to make sure I'm not accessing a very old version (I think credentials was added in 2.6): import inspect signature = inspect.signature(SMTPHandler.__init__).parameters for name, parameter in signature.items(): print(name, parameter.default, parameter.annotation, parameter.kind)` I get: self <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD mailhost <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD fromaddr <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD toaddrs <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD subject <class 'inspect._empty'> <class 'inspect._empty'> POSITIONAL_OR_KEYWORD 
credentials None <class 'inspect._empty'> POSITIONAL_OR_KEYWORD secure None <class 'inspect._empty'> POSITIONAL_OR_KEYWORD timeout 5.0 <class 'inspect._empty'> POSITIONAL_OR_KEYWORD So 'credentials' is in the initialization statement. Anyone see something stupid in my code or run into this problem? Thanks so much!
[ "You have the full source code for all of the standard modules on your computer. I just took a quick look, and although the SMTPHandler accepts a credentials argument, it stores that argument in self.username and self.password.\n" ]
[ 0 ]
[]
[]
[ "credentials", "python" ]
stackoverflow_0074504966_credentials_python.txt
Q: How to remove characters from string? How to remove user defined letters from a user defined sentence in Python? Hi, if anyone is willing to take the time to try and help me out with some python code. I am currently doing a software engineering bootcamp which the current requirement is that I create a program where a user inputs a sentence and then a user will input the letters he/she wishes to remove from the sentence. I have searched online and there are tons of articles and threads about removing letters from strings but I cannot find one article or thread about how to remove user defined letters from a user defined string. import re sentence = input("Please enter a sentence: ") letters = input("Please enter the letters you wish to remove: ") sentence1 = re.sub(letters, '', sentence) print(sentence1) The expected result should remove multiple letters from a user defined string, yet this will remove a letter if you only input 1 letter. If you input multiple letters it will just print the original sentence. Any help or guidance would be much appreciated. 
A: If I understood correctly we can use str.maketrans and str.translate methods here like from itertools import repeat sentence1 = sentence.translate(str.maketrans(dict(zip(letters, repeat(None))))) What this does line by line: create mapping of letters to None which will be interpreted as "remove this character" translation_mapping = dict(zip(letters, repeat(None))) create translation table from it translation_table = str.maketrans(translation_mapping) use translation table for given str sentence1 = sentence.translate(translation_table) Test >>> sentence = 'Some Text' >>> letters = 'te' >>> sentence.translate(str.maketrans(dict(zip(letters, repeat(None))))) 'Som Tx' Comparison from timeit import timeit print('this solution:', timeit('sentence.translate(str.maketrans(dict(zip(letters, repeat(None)))))', 'from itertools import repeat\n' 'sentence = "Hello World" * 100\n' 'letters = "el"')) print('@FailSafe solution using `re` module:', timeit('re.sub(str([letters]), "", sentence)', 'import re\n' 'sentence = "Hello World" * 100\n' 'letters = "el"')) print('@raratiru solution using `str.join` method:', timeit('"".join([x for x in sentence if x not in letters])', 'sentence = "Hello World" * 100\n' 'letters = "el"')) gives on my PC this solution: 3.620041800000024 @FailSafe solution using `re` module: 66.5485033 @raratiru solution using `str.join` method: 70.18480099999988 so we probably should think twice before using regular expressions everywhere and str.join'ing one-character strings. A: You can use a list comprehension: result = ''.join([x for x in sentence if x not in letters]) A: >>> sentence1 = re.sub(str([letters]), '', sentence) Preferably with letters entered in the form letters = 'abcd'. No spaces or punctuation marks if necessary.
Edit: These are actually better: >>> re.sub('['+letters+']', '', sentence) >>> re.sub('['+str(letters)+']', '', sentence) The first also removes \' if it appears in the string, although it is the prettier solution A: Your code doesn't work as expected because the regex you provide only matches the exact combination of letters you give it. What you want is to match either one of the letters, which can be achieved by putting them in brackets, for example: import re sentence = input("Please enter a sentence: ") letters = input("Please enter the letters you wish to remove: ") regex_str = '[' + letters + ']' sentence1 = re.sub(regex_str, '', sentence) print(sentence1) For more regex help I would suggest visiting https://regex101.com/
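Pulling the answers above together, one detail worth hedging: if the user-supplied letters can contain regex metacharacters (e.g. ] or ^), wrapping them in re.escape keeps the character class safe; str.maketrans('', '', letters) is the translate-based shortcut for the same deletion:

```python
import re

sentence = "Hello World"
letters = "el"

# Regex route: escape the user input before building the character class,
# so letters like ']' cannot break the pattern.
cleaned_re = re.sub('[' + re.escape(letters) + ']', '', sentence)

# translate route: the third argument of str.maketrans marks characters
# for deletion, no regex involved.
cleaned_tr = sentence.translate(str.maketrans('', '', letters))

print(cleaned_re)  # Ho Word
print(cleaned_tr)  # Ho Word
```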
How to remove characters from string?
How to remove user defined letters from a user defined sentence in Python? Hi, if anyone is willing to take the time to try and help me out with some python code. I am currently doing a software engineering bootcamp which the current requirement is that I create a program where a user inputs a sentence and then a user will input the letters he/she wishes to remove from the sentence. I have searched online and there are tons of articles and threads about removing letters from strings but I cannot find one article or thread about how to remove user defined letters from a user defined string. import re sentence = input("Please enter a sentence: ") letters = input("Please enter the letters you wish to remove: ") sentence1 = re.sub(letters, '', sentence) print(sentence1) The expected result should remove multiple letters from a user defined string, yet this will remove a letter if you only input 1 letter. If you input multiple letters it will just print the original sentence. Any help or guidance would be much appreciated.
[ "If I understood correctly we can use str.maketrans and str.translate methods here like\nfrom itertools import repeat\n\nsentence1 = sentence.translate(str.maketrans(dict(zip(letters, repeat(None)))))\n\nWhat this does line by line:\n\ncreate mapping of letters to None which will be interpreted as \"remove this character\"\ntranslation_mapping = dict(zip(letters, repeat(None))\n\ncreate translation table from it\ntranslation_table = str.maketrans(translation_mapping)\n\nuse translation table for given str\nsentence1 = sentence.translate(translation_table)\n\n\nTest\n>>> sentence = 'Some Text'\n>>> letters = 'te'\n>>> sentence.translate(str.maketrans(dict(zip(letters, repeat(None)))))\n'Som Tx'\n\nComparison\nfrom timeit import timeit\nprint('this solution:',\n timeit('sentence.translate(str.maketrans(dict(zip(letters, repeat(None)))))',\n 'from itertools import repeat\\n'\n 'sentence = \"Hello World\" * 100\\n'\n 'letters = \"el\"'))\nprint('@FailSafe solution using `re` module:',\n timeit('re.sub(str([letters]), \"\", sentence)',\n 'import re\\n'\n 'sentence = \"Hello World\" * 100\\n'\n 'letters = \"el\"'))\nprint('@raratiru solution using `str.join` method:',\n timeit('\"\".join([x for x in sentence if x not in letters])',\n 'sentence = \"Hello World\" * 100\\n'\n 'letters = \"el\"'))\n\ngives on my PC\nthis solution: 3.620041800000024\n@FailSafe solution using `re` module: 66.5485033\n@raratiru solution using `str.join` method: 70.18480099999988\n\nso we probably should think twice before using regular expressions everywhere and str.join'ing one-character strings.\n", "You can use a list comprehension:\nresult = ''.join([x for x in sentence if x not in letters])\n\n", ">>> sentence1 = re.sub(str([letters]), '', sentence)\n\nPreferably with letters entered in the form letters = 'abcd'. 
No spaces or punctuation marks if necessary.\n.\nEdit:\nThese are actually better:\n>>> re.sub('['+letters+']', '', sentence)\n>>> re.sub('['+str(letters)+']', '', sentence)\n\nThe first also removes \\' if it appears in the string, although it is the prettier solution\n", "Your code doesn't work as expected because the regex you provide only matches the exact combination of letters you give it. What you want is to match either one of the letters, which can be achieved by putting them in brackets, for example:\nimport re\nsentence = input(\"Please enter a sentence: \")\nletters = input(\"Please enter the letters you wish to remove: \")\nregex_str = '[' + letters + ']'\nsentence1 = re.sub(regex_str, '', sentence)\nprint(sentence1)\n\nFor more regex help I would suggest visiting https://regex101.com/\n" ]
[ 3, 2, 2, 2 ]
[ "user_word = input(\"What is your prefered sentence? \") \n\nuser_letter_to_remove = input(\"which letters would you like to delete? \")\n\n#list of letter to remove\n\nletters =str(user_letter_to_remove)\n\nfor i in letters:\n user_word = user_word.replace(i,\"\")\n\nprint(user_word)\n\n" ]
[ -1 ]
[ "python", "regex", "replace", "string", "strip" ]
stackoverflow_0055747901_python_regex_replace_string_strip.txt
Q: Data Science Data Analysis - How to derive an equation for this Y variable? I am using gradient boosting algorithm to predict some 'Y' parameter. How to derive an equation for this Y independent variable? Interestingly, I have looked through many GB-tutorials in the Internet but none of them showed how to derive an equation for this Y independent variable also I didn't find how to print summary for fitted model... A: First things first, in the standard terminology of ML (where {X, y} refer to your training data and y is what your model is trying to predict), X are called the independent variables and y is called the dependent variable. With that out of the way, here is my 2 cents on the "equation of the dependent variable via gradient boosting" I think you are misunderstanding how Gradient Boosting algorithms work, and probably assuming you can trivially pull a y=mx+c style equation from the model as you would in a linear model. But these are 2 separate classes of models. If you have just learned about linear models (linear regression in this case), then you are jumping too quickly into a much more advanced topic without covering the basics for tree-based models and ensemble modelling first. Gradient boosting is a "tree-based ensemble model" that uses a method called boosting to ensemble a large number of "weak" decision trees. My first advice would be to start with an understanding of how Decision Trees work. This is how a single decision tree for a classification task might look like - And this is how their decision boundaries would look like - Technically speaking, you can of course get an "equation" for any curve, but as you can see in the image above, it's not a trivial problem to solve. Instead of looking at decision trees this way (and any derived ensemble algorithms), you should consider understanding the tree structure rather than expecting a y=mx+c style equation to summarise the model. Reference for visualizing decision trees can be found here. 
What is gradient boosting? Gradient boosting builds a large number of such decision trees (order of 100s or 1000s in general practice, but you can choose it as a hyperparameter), but it uses a method called boosting to ensemble them together for a model that is "better than the sum of its parts". At a very high level, gradient boosting builds an ensemble of trees one by one, then the predictions of the individual trees are summed. The next decision tree tries to cover the discrepancy between the target function f(x) and the current ensemble prediction by reconstructing the residual and this step is repeated multiple times before aggregation. A good way to understand intuitively what is happening is to look at the 3D decision boundaries of both a decision tree and a gradient-boosting model side by side. Here is a great read on how this model works and is responsible for these amazing visualizations. Single Decision Tree: Gradient boosting model: Since you are super new to this area, I will share these amazing (super funny and a bit childish but insanely useful) videos by Josh Starmer aka StatQuest from YouTube. He has a full playlist that covers this. I have used this to teach XGBoost to people who have literally 0 background in math and ML, so I hope you will find these useful as well. Link to the playlist is here. If your goal with the y=mx+c equation is to analyze the estimates for each of the independent variables (X) and understand what contributes more to predicting the dependent variable (y), you can leverage something known as feature_importances which is a core feature of most implementations of tree-based models. More details here.
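The residual-fitting loop described in the answer can be sketched without any ML library: each decision stump below is fit to the residual of the ensemble built so far, and the shrunken stump predictions are summed. This is a toy 1-D illustration of the idea, not a production implementation; all function names are made up for the sketch.

```python
# Toy 1-D gradient boosting for squared loss: every stump fits the
# residual left over by the current ensemble prediction.
def fit_stump(xs, residuals):
    best = None
    for split in xs:
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def gradient_boost(xs, ys, n_stumps=300, lr=0.1):
    base = sum(ys) / len(ys)              # start from the mean prediction
    preds = [base] * len(xs)
    stumps = []
    for _ in range(n_stumps):
        residuals = [y - p for y, p in zip(ys, preds)]
        stump = fit_stump(xs, residuals)  # next tree covers the residual
        stumps.append(stump)
        preds = [p + lr * stump(x) for p, x in zip(preds, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

xs = list(range(10))
ys = [x * x for x in xs]                  # target curve y = x^2
model = gradient_boost(xs, ys)
print([round(model(x), 1) for x in xs])
```

Note that, as in the answer, no y=mx+c style equation falls out of this: the fitted model is a sum of hundreds of piecewise-constant functions.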
Data Science Data Analysis - How to derive an equation for this Y variable?
I am using gradient boosting algorithm to predict some 'Y' parameter. How to derive an equation for this Y independent variable? Interestingly, I have looked through many GB-tutorials in the Internet but none of them showed how to derive an equation for this Y independent variable also I didn't find how to print summary for fitted model...
[ "First things first, in the standard terminology of ML (where {X, y} refer to your training data and y is what your model is trying to predict), X are called the independent variables and y is called the dependent variable. With that out of the way, here is my 2 cents on the \"equation of the dependent variable via gradient boosting\"\n\nI think you are misunderstanding how Gradient Boosting algorithms work, and probably assuming you can trivially pull a y=mx+c style equation from the model as you would in a linear model. But these are 2 separate classes of models. If you have just learned about linear models (linear regression in this case), then you are jumping too quickly into a much more advanced topic without covering the basics for tree-based models and ensemble modelling first.\nGradient boosting is a \"tree-based ensemble model\" that uses a method called boosting to ensemble a large number of \"weak\" decision trees. My first advice would be to start with an understanding of how Decision Trees work. This is how a single decision tree for a classification task might look like -\n\nAnd this is how their decision boundaries would look like -\n\nTechnically speaking, you can of course get an \"equation\" for any curve, but as you can see in the image above, it's not a trivial problem to solve. Instead of looking at decision trees this way (and any derived ensemble algorithms), you should consider understanding the tree structure rather than expecting a y=mx+c style equation to summarise the model. 
Reference for visualizing decision trees can be found here.\nWhat is gradient boosting?\nGradient boosting builds a large number of such decision trees (order of 100s or 1000s in general practice, but you can choose it as a hyperparameter), but it uses a method called boosting to ensemble them together for a model that is \"better than the sum of its parts\".\nAt a very high level, gradient boosting builds an ensemble of trees one by one, then the predictions of the individual trees are summed. The next decision tree tries to cover the discrepancy between the target function f(x) and the current ensemble prediction by reconstructing the residual and this step is repeated multiple times before aggregation.\nA good way to understand intuitively what is happening is to look at the 3D decision boundaries of both a decision tree and a gradient-boosting model side by side. Here is a great read on how this model works and is responsible for these amazing visualizations.\nSingle Decision Tree:\n\nGradient boosting model:\n\nSince you are super new to this area, I will share these amazing (super funny and a bit childish but insanely useful) videos by Josh Stammer aka StatsQuest from youtube. He has a full playlist that covers this. I have used this to teach XGboost to people who have literally 0 background in math and ML, so I hope you will find these useful as well.\nLink to the playlist is here.\n\nIf your goal with the y=mx+c equation is to analyze the estimates for each of the independent variables (X) and understand what contributes more to predicting the dependent variable (y), you can leverage something known as feature_importances which is a core feature of most implementations of tree based models. More details here.\n" ]
[ 0 ]
[]
[]
[ "data_science", "ensemble_learning", "machine_learning", "python", "regression" ]
stackoverflow_0074504886_data_science_ensemble_learning_machine_learning_python_regression.txt
Q: Error when installing Ctypes package into python I get an error when trying to install ctypes package in python 3.10.8. I tried every solution I could find but nothing worked. I tried using pip install ctypes I also tried using another name in case they changed the name pip install ctype A: The ctypes module available on PyPI was last released in May, 2007. It is ancient. ctypes has been bundled with Python since version 2.5. You don't need to install it separately. Just use it.
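Since ctypes ships with the interpreter, no pip step is needed at all; a quick sanity check that the bundled module works:

```python
import ctypes  # part of the standard library since Python 2.5

# Wrap a Python int in a C int and read it back
boxed = ctypes.c_int(42)
print(boxed.value)                  # 42
print(ctypes.sizeof(ctypes.c_int))  # typically 4, platform-dependent
```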
Error when installing Ctypes package into python
I get an error when trying to install ctypes package in python 3.10.8. I tried every solution I could find but nothing worked. I tried using pip install ctypes I also tried using another name in case they changed the name pip install ctype
[ "The ctypes module available on PyPI was last released in May, 2007. It is ancient.\nctypes has been bundled with Python since version 2.5. You don't need to install it separately. Just use it.\n" ]
[ 2 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074504993_python_python_3.x.txt
Q: Include files in Sphinx output on any path I have a project that I'm documenting where I've ended up with a structure like docs/ conf.py development/ architecture.rst uimockups/ index.html static/ <supporting css and js files> mockup1/ index.html ui1.html ui2.html mockup2/ index.html ui1.html ui2.html Where everything under uimockups is just a static site. For organizational reasons I really want to keep the folder structure as is here, and would like to just copy uimockups to build/development/uimockups directly, that way I could link to it from my architecture.rst file. I've searched around online, but most of what I can find is pertaining to the _static folder for customizing CSS and that sort of thing. All I want is to copy this entire folder to its corresponding location in the HTML build output. Is this possible without writing a custom extension? Can sphinx perform this simple task through configuration alone? A: Well, I figured out a solution, but it isn't what I'd consider the best solution. Since I wanted to be able to also do python -m http.server in the docs/development/uimockups folder and have it work, I ended up: Renaming docs/development/uimockups/static to docs/development/uimockups/_static. Changing all .html files to refer to files in ./_static or ../_static as appropriate instead of using an absolute /static path. Adding 'development/uimockups' to the html_static_path variable in conf.py This last step is the equivalent of adding cp development/uimockups/* $BUILD/_static/, so while not really ideal I end up with $BUILD/ _static/ _static/ # From uimockups/ <supporting files> index.html # From uimockups/ mockup1/ ui1.html ui2.html mockup2/ ui1.html ui2.html Then I can link to this with `link text </_static/index.html>`_ in my rst files. I don't really like that I just have to shove this into the $BUILD/_static folder, and I can't just have it appear in $BUILD/development/uimockups instead, but this doesn't require me to write any code at least. 
It's definitely not scaleable though, if I had multiple "static sub-sites" then they would potentially step on each other's resources. One way to work around this would be to have docs/ development/ uimockups-site/ uimockups/ index.html mockup1/ mockup2/ _static/ And then add development/uimockups-site to my html_static_path list so that the output is $BUILD/ _static/ uimockups/ index.html mockup1/ mockup2/ _static/ A: You could add uimockups to html_extra_path in conf.py, and link to files in it as explained here.
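For the second answer, the relevant conf.py setting is a one-liner like the sketch below. One caveat worth knowing: Sphinx copies the contents of each listed directory into the root of the build output, so a top-level index.html inside uimockups may collide with the generated index.html.

```python
# docs/conf.py -- hypothetical sketch for this project layout:
# copy everything under development/uimockups into the build output root.
html_extra_path = ['development/uimockups']
```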
Include files in Sphinx output on any path
I have a project that I'm documenting where I've ended up with a structure like docs/ conf.py development/ architecture.rst uimockups/ index.html static/ <supporting css and js files> mockup1/ index.html ui1.html ui2.html mockup2/ index.html ui1.html ui2.html Where everything under uimockups is just a static site. For organizational reasons I really want to keep the folder structure as is here, and would like to just copy uimockups to build/development/uimockups directly, that way I could link to it from my architecture.rst file. I've searched around online, but most of what I can find is pertaining to the _static folder for customizing CSS and that sort of thing. All I want is to copy this entire folder to its corresponding location in the HTML build output. Is this possible without writing a custom extension? Can sphinx perform this simple task through configuration alone?
[ "Well, I figured out a solution, but it isn't what I'd consider the best solution.\nSince I wanted to be able to also do python -m http.server in the docs/development/uimockups folder and have it work, I ended up:\n\nRenaming docs/development/uimockups/static to docs/development/uimockups/_static.\nChanging all .html files to refer to files in ./_static or ../_static as appropriate instead of using an absolute /static path.\nAdding 'development/uimockups' to the html_static_path variable in conf.py\n\nThis last step is the equivalent of adding cp development/uimockups/* $BUILD/_static/, so while not really ideal I end up with\n$BUILD/\n _static/\n _static/ # From uimockups/\n <supporting files>\n index.html # From uimockups/\n mockup1/\n ui1.html\n ui2.html\n mockup2/\n ui1.html\n ui2.html\n\nThen I can link to this with `link text </_static/index.html>`_ in my rst files.\nI don't really like that I just have to shove this into the $BUILD/_static folder, and I can't just have it appear in $BUILD/development/uimockups instead, but this doesn't require me to write any code at least. It's definitely not scaleable though, if I had multiple \"static sub-sites\" then they would potentially step on each other's resources. One way to work around this would be to have\ndocs/\n development/\n uimockups-site/\n uimockups/\n index.html\n mockup1/\n mockup2/\n _static/\n\nAnd then add development/uimockups-site to my html_static_path list so that the output is\n$BUILD/\n _static/\n uimockups/\n index.html\n mockup1/\n mockup2/\n _static/\n\n", "You could add uimockups to html_extra_path in conf.py, and link to files in it as explained here.\n" ]
[ 1, 0 ]
[]
[]
[ "python", "python_sphinx" ]
stackoverflow_0048544965_python_python_sphinx.txt
Q: Unable to install AWS Elastic Beanstalk CLI (Win10, Python 3.6, Pip 9.0.1) I am trying to install awsebcli on my machine and I am unable to run the command eb --version It shows this error: 'eb' is not recognized as an internal or external command, operable program or batch file. This is my Python version: C:\>python --version Python 3.6.0 This is my pip version: C:\>pip --version pip 9.0.1 from c:\users\amirs\appdata\local\programs\python\python36\lib\site-packages (python 3.6) When I ran this command pip install --upgrade --user awsebcli to install awsebcli it successfully installed it. Here are my environment variables for PATH: A: After a great deal of running around I managed to figure out that I was missing an additional PATH entry, both of these were required to get eb to run on windows: %USERPROFILE%\AppData\Local\Programs\Python\Python36\Scripts %USERPROFILE%\AppData\Roaming\Python\Python36\Scripts NOTE: If you have Python 3.7 installed, change "Python36" to "Python37" in both of the path entries. A: This worked for me: sudo -H pip3 install awsebcli --upgrade --ignore-installed six A: This PATH worked for me... %USERPROFILE%\AppData\Roaming\Python\Scripts; %USERPROFILE%\AppData\Local\Programs\Python\Python36\Scripts; %USERPROFILE%\AppData\Roaming\Python\Python36\Scripts; C:\Program Files\Amazon\AWSCLI A: I figured out the issue. It looks like I needed to add this to my environment variables: %USERPROFILE%\AppData\Local\Programs\Python\Python36\Scripts Even though it had the other C:\Users\amirs\... path as well. A: I had the same problem these last few days. Though the Amazon documentation does not even mention it (i.e. 
only the following AWS Command-Line Interface home page mentions it, but does not explain that it is required), in addition to the 'awsebcli' package (that also requires the 'boto3' package), you also need to download and install the 'aws-shell' package in order to get the command 'aws configure' to work: https://aws.amazon.com/cli/ Click through the link for 'aws-shell' to the following GITHUB page and follow the install instructions: https://github.com/awslabs/aws-shell Then after installation type 'aws configure' in your COMMAND WINDOW as per instructions at the following link, and it will work fine, prompting you to enter the necessary AWS ACCESS KEY and SECRET ACCESS KEY: http://boto3.readthedocs.io/en/latest/guide/quickstart.html FYI - I tried changing the environment variable path as per your solution as well as in another link, but neither worked for me: https://forums.aws.amazon.com/thread.jspa?threadID=228638 Thus I had to solve the issue with the true solution as detailed here. A: If you happened to be using Conda for your Python installation, then you might have to add the following path for Elastic Beanstalk to work: C:\Users\%USERPROFILE%\Anaconda3\Scripts A: If the above did not work, create a virtual environment and install it there: Install virtualenv: pip install virtualenv (in whichever folder you like) Create the venv: python -m venv env Activate the venv (Windows): env\Scripts\activate Now install: pip install awsebcli --upgrade Close the cmd, open another, and check that this works: eb --version If it works, remember that each time you want to use the eb command you need to activate this venv: go to the path where you created the env folder and run env\Scripts\activate A: The paths worked for me when I set up Python to work for all users. C:\Users\dell\AppData\Roaming\Python\Python310\ C:\Users\dell\AppData\Roaming\Python\Python310\Scripts A: I was facing the same problem.
The given answers kind of helped me, but if you have a newer version of Python this may help you. Solution = CHANGE THE PATH VARIABLES. Just search "change variables" in the Windows search bar and an option will appear. Edit Path and add these two variables: %USERPROFILE%\AppData\Local\Programs\Python\Python[YourPythonVersion]\Scripts %USERPROFILE%\AppData\Roaming\Python\Python[YourPythonVersion]\Scripts HINT: To be sure which version you are using, follow this path in Windows Explorer (that's what I did). For more information, what really helped me was the official documentation, section 2 - Windows: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html Hope it helps someone!
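Rather than guessing at PATH entries, you can ask the interpreter where pip puts console scripts like eb. A hedged sketch (the exact directory depends on whether --user was passed to pip, and on the install scheme):

```shell
# Print the scripts directory for the current interpreter; this is the
# directory that must be on PATH for 'eb' to resolve.
python -c "import sysconfig; print(sysconfig.get_path('scripts'))"

# For a 'pip install --user' install, check the per-user scheme instead
# ('nt_user' on Windows, 'posix_user' elsewhere):
python -c "import sysconfig, os; print(sysconfig.get_path('scripts', os.name + '_user'))"
```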
Unable to install AWS Elastic Beanstalk CLI (Win10, Python 3.6, Pip 9.0.1)
I am trying to install awsebcli on my machine and I am unable to run the command eb --version It shows this error: 'eb' is not recognized as an internal or external command, operable program or batch file. This is my Python version: C:\>python --version Python 3.6.0 This is my pip version: C:\>pip --version pip 9.0.1 from c:\users\amirs\appdata\local\programs\python\python36\lib\site-packages (python 3.6) When I ran this command pip install --upgrade --user awsebcli to install awsebcli it successfully installed it. Here are my environment variables for PATH:
[ "After a great deal of running around I managed to figure out that I was missing an additional PATH entry, both of these were required to get eb to run on windows:\n%USERPROFILE%\\AppData\\Local\\Programs\\Python\\Python36\\Scripts\n%USERPROFILE%\\AppData\\Roaming\\Python\\Python36\\Scripts\n\nNOTE: If you have Python 3.7 installed, change \"Python36\" to \"Python37\" in both of the path entries.\n", "This worked for me:\nsudo -H pip3 install awsebcli --upgrade --ignore-installed six\n\n", "This PATH worked for me...\n%USERPROFILE%\\AppData\\Roaming\\Python\\Scripts;\n%USERPROFILE%\\AppData\\Local\\Programs\\Python\\Python36\\Scripts;\n%USERPROFILE%\\AppData\\Roaming\\Python\\Python36\\Scripts;\nC:\\Program Files\\Amazon\\AWSCLI\n\n", "I figured out the issue. It looks like I needed to add this to my environment variables:\n%USERPROFILE%\\AppData\\Local\\Programs\\Python\\Python36\\Scripts\n\nEven though it had the other C:\\Users\\amirs\\... path as well.\n", "I had the same problem these last few days.\nThough the Amazon documentation does not even mention it (i.e. 
only the following AWS Command-Line Interface home page mentions it, but does not explain that it is required), in addition to the 'awsebcli' package (that also requires the 'boto3' package), you also need to download and install the 'aws-shell' package in order to get the command 'aws configure' to work:\nhttps://aws.amazon.com/cli/\nClick through the link for 'aws-shell' to the following GITHUB page and follow the install instructions:\nhttps://github.com/awslabs/aws-shell\nThen after installation type 'aws configure' in your COMMAND WINDOW as per instructions at the following link, and it will work fine prompting you to enter the necessary AWS ACCESS KEY and SECRET ACCESS KEY:\nhttp://boto3.readthedocs.io/en/latest/guide/quickstart.html\nFYI - I tried changing the environment variable path as per your solution as well as in another link, but neither worked for me:\nhttps://forums.aws.amazon.com/thread.jspa?threadID=228638\nThus I had to solve the issue with the true solution to the issue as detailed here.\n", "If you happened to be using Conda for your Python installation, then you might have to add the following path for Elastic Beanstalk to work:\nC:\\Users\\%USERPROFILE%\\Anaconda3\\Scripts\n\n", "If the above did not work, create a virtual environment and install it there:\nInstall venv: pip install virtualenvironment\n(wherever folder you like):\nCreate venv: python -m venv env\nActivate venv: windows: evn\\Scripts\\activate\nNow yes, install: pip install awsebcli --upgrade\nClose the cmd, open another:\nTry if this work: eb --version\nIf this work, remember each time you want to use the command eb, you need to activate this venv, going to this path where you created the folder env, and run env\\Scripts\\activate\n", "The paths worked for me when I set up Python to work for all users.\nC:\\Users\\dell\\AppData\\Roaming\\Python\\Python310\\\n\nC:\\Users\\dell\\AppData\\Roaming\\Python\\Python310\\Scripts\n\n", "I Was facing the Same problem. 
The given answers kind of helped me, but if you have a newer version of python this may gonna help you.\nSolution = CHANGE THE PATH VARIABLES. Just search at windows bar \"change variables\" and a option will apear.\nEDIT Path, add These two Variables:\n%USERPROFILE%\\AppData\\Local\\Programs\\Python\\Python[YourPythonVersion]\\Scripts\n%USERPROFILE%\\AppData\\Roaming\\Python\\Python[YourPythonVersion]\\Scripts\n\nHINT: To be sure witch version you are using, follow this path in your windows explorer (that's what I did)\nFor More Information, what really helped me was the oficial documentation on section 2 -Windows:\nhttps://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install-advanced.html\nHope that it would help someone!!\n" ]
[ 33, 17, 8, 4, 3, 1, 0, 0, 0 ]
[]
[]
[ "amazon_elastic_beanstalk", "amazon_web_services", "python" ]
stackoverflow_0041729006_amazon_elastic_beanstalk_amazon_web_services_python.txt
Q: Difficulty instantiating a subclass [object has no attribute] I get two types of errors when I try to start or initiate the member function temp_controll from the subclass Temperature_Controll. The issue is that the while loops are started in a new thread. I am having trouble passing the modbus client connection to the member function. AttributeError: 'ModbusTcpClient' object has no attribute 'modbus' I don't understand the problem in its entirety, because I assumed I would inherit modbus.client from the main class? The second problem was, when I comment out rp and want to access a member function from the main class "database_reading", I get the following error: AttributeError: 'str' object has no attribute 'database_reading' How can I execute the subclass method via a second thread? class Echo(WebSocket): def __init__(self, client, server, sock, address): super().__init__(server, sock, address) self.modbus = client def database_reading(self) do_something() return data class Temperature_Controll2(Echo): def __init__(self, client): super(Temperature_Controll, self).__init__(client) self.modbus = client def temp_controll(self, value): #super().temp_controll(client) while True: print("temp_controll") rp = self.modbus.read_coils(524, 0x1) print(rp.bits[0]) self.database_reading() def main(): logging.basicConfig() with ModbusClient(host=HOST, port=PORT) as client: client.connect() time.sleep(0.01) print("Websocket server on port %s" % PORTNUM) server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client)) control = Temperature_Controll2.temp_controll t2 = threading.Thread(target=control, args=(client, 'get')) t2.start() try: t1 = threading.Thread(target=server.serveforever()) t1.start() finally: server.close() if __name__ == "__main__": main() This is a minimal example of my code, the thread t1 is executed without any problems. I have little experience with OOP programming, maybe someone here can help, thanks! 
A: You get this error: AttributeError: 'ModbusTcpClient' object has no attribute 'modbus' because when the Thread that you create: t2 = threading.Thread(target=control, args=(client, 'get')) calls Temperature_Controll2.temp_controll(client, 'get'), on this line: rp = self.modbus.read_coils(524, 0x1) the self is actually the client variable you created here: with ModbusClient(host=HOST, port=PORT) as client: and is not an instance of Temperature_Controll2 that I assume you were expecting. A: Ok, thank you again, the solution is: class Temperature_Controll2(Echo): def __init__(self, client): #super(Temperature_Controll2, self).__init__() #client , server, sock, address, database_reading) #super().__init__() self.modbus = client def temp_controll(self, value): #super().temp_controll(client) while True: print("temp_controll") rp = self.modbus.read_coils(524, 0x1) time.sleep(4) def main(): with ModbusClient(host=HOST, port=PORT) as client: client.connect() time.sleep(0.01) print("Websocket server on port %s" % PORTNUM) server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client)) control = Temperature_Controll2(client) t2 = threading.Thread(target=control.temp_controll('get')) try: t1 = threading.Thread(target=server.serveforever()) t1.start() finally: server.close() But with client I can only make one connection to the modbus server, so either the websocket or the while loop works. I think I would have to approach the problem differently. A: A short addendum: I think I now know why the second variant does not work. This variant runs until temp_control in class Echo reaches the point at which the modbus module is called by a function. The modbus module is not part of the parent class Echo, which is why I think it cannot be inherited. Modbus is passed to the class Echo as a variable via partial and is thus instantiated (I hope I am expressing myself correctly).
# This is a non-functional version of my programme and is for information only class Echo(WebSocket): def __init__(self, client, server, sock, address): super().__init__(server, sock, address) self.modbus = client def temp_control(self) do_something() return True class Temperature_Control3(Echo): def __init__(self, value=None): #, client, server, sock, address): #super(Temperature_Control3, self).__init__(server, sock, address) if value is None: value = {} self.value = value def control(self, value): while True: self.temp_control(524, 'get') #self.database_reading()[0][1] time.sleep(2) def main(): with ModbusClient(host=HOST, port=PORT) as client: client.connect() time.sleep(0.01) print("Websocket server on port %s" % PORTNUM) server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client)) control = Temperature_Control3() t3 = threading.Thread(target=lambda:control.control('get')) t3.start() try: t1 = threading.Thread(target=server.serveforever()) t1.start() for thread in threading.enumerate(): print(thread.name) finally: server.close()
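A side note on the threading calls in the snippets above: `threading.Thread(target=control.temp_controll('get'))` calls the method immediately, in the main thread, and hands its return value to `Thread` as `target`. A minimal, self-contained sketch of the difference (the `Controller` class and the fake client are illustrative, not from the original program):

```python
import threading

class Controller:
    def __init__(self, client):
        self.modbus = client  # stored dependency, mirroring the Echo example
        self.calls = []

    def control(self, value):
        # record which thread actually executed the call
        self.calls.append((value, threading.current_thread().name))

c = Controller(client="fake-client")

# Buggy: c.control("get") runs right now, in the main thread,
# and Thread receives its return value (None) as target.
t_wrong = threading.Thread(target=c.control("get"))
t_wrong.start()
t_wrong.join()

# Correct: pass the bound method and its arguments separately.
t_right = threading.Thread(target=c.control, args=("get",), name="worker")
t_right.start()
t_right.join()

print(c.calls)  # [('get', 'MainThread'), ('get', 'worker')]
```

The same fix applies to `t1 = threading.Thread(target=server.serveforever())`, which blocks the main thread instead of starting a new one.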
Difficulty instantiating a subclass [object has no attribute]
I get two types of errors when I try to start or initiate the member function temp_controll from the subclass Temperature_Controll. The issue is that the while loops are started in a new thread. I am having trouble passing the modbus client connection to the member function. AttributeError: 'ModbusTcpClient' object has no attribute 'modbus' I don't understand the problem in its entirety, because I assumed I would inherit modbus.client from the main class? The second problem was, when I comment out rp and want to access a member function from the main class "database_reading", I get the following error: AttributeError: 'str' object has no attribute 'database_reading' How can I execute the subclass method via a second thread? class Echo(WebSocket): def __init__(self, client, server, sock, address): super().__init__(server, sock, address) self.modbus = client def database_reading(self) do_something() return data class Temperature_Controll2(Echo): def __init__(self, client): super(Temperature_Controll, self).__init__(client) self.modbus = client def temp_controll(self, value): #super().temp_controll(client) while True: print("temp_controll") rp = self.modbus.read_coils(524, 0x1) print(rp.bits[0]) self.database_reading() def main(): logging.basicConfig() with ModbusClient(host=HOST, port=PORT) as client: client.connect() time.sleep(0.01) print("Websocket server on port %s" % PORTNUM) server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client)) control = Temperature_Controll2.temp_controll t2 = threading.Thread(target=control, args=(client, 'get')) t2.start() try: t1 = threading.Thread(target=server.serveforever()) t1.start() finally: server.close() if __name__ == "__main__": main() This is a minimal example of my code, the thread t1 is executed without any problems. I have little experience with OOP programming, maybe someone here can help, thanks!
[ "You get this error:\n AttributeError: 'ModbusTcpClient' object has no attribute 'modbus'\n\nbecause when the Thread that you create:\nt2 = threading.Thread(target=control, args=(client, 'get'))\ncalls Temperature_Controll2.temp_controll(client, 'get'),\non this line: rp = self.modbus.read_coils(524, 0x1) the self is actually the client variable you created here:\nwith ModbusClient(host=HOST, port=PORT) as client:\nand is not an instance of Temperature_Controll2 that I assume you were expecting.\n", "Ok, thank you again, the solution is:\nclass Temperature_Controll2(Echo):\n\n def __init__(self, client):\n #super(Temperature_Controll2, self).__init__() #client , server, sock, address, database_reading)\n #super().__init__()\n self.modbus = client\n\n def temp_controll(self, value):\n #super().temp_controll(client)\n while True:\n print(\"temp_controll\")\n rp = self.modbus.read_coils(524, 0x1)\n time.sleep(4)\n\ndef main():\n with ModbusClient(host=HOST, port=PORT) as client:\n client.connect()\n time.sleep(0.01)\n\n print(\"Websocket server on port %s\" % PORTNUM)\n server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client))\n\n control = Temperature_Controll2(client)\n t2 = threading.Thread(target=control.temp_controll('get'))\n\n try:\n t1 = threading.Thread(target=server.serveforever())\n t1.start()\n finally:\n server.close()\n\nBut with client I can only make one connection to the modbus server, so either the websocket or the while loop works. I think I would have to approach the problem differently.\n", "A short addendum, I think, I now know why the second variant does not work.\nThis variant is running until the temp_control in class Echo comes to the point, in which modbus modul was called by a funcion. Modbus module is not part of the mother class Echo, which is why I think this cannot be inherited.\nModbus is passed to the class Echo as a variable via partial and will thus instantiate (I hope I am expressing myself correctly). 
Therefore, only the variant in which the variable client is passed to the instance will work.\n# This is a non-functional version of my programme and is for information only\n\nclass Echo(WebSocket):\n\n def __init__(self, client, server, sock, address):\n super().__init__(server, sock, address)\n self.modbus = client\n\n def temp_control(self)\n do_something()\n return True\n\nclass Temperature_Control3(Echo):\n\n def __init__(self, value=None): #, client, server, sock, address):\n #super(Temperature_Control3, self).__init__(server, sock, address) \n if value is None:\n value = {}\n self.value = value\n\n def control(self, value):\n while True:\n self.temp_control(524, 'get')\n #self.database_reading()[0][1]\n time.sleep(2)\n\n\ndef main():\n with ModbusClient(host=HOST, port=PORT) as client:\n client.connect()\n time.sleep(0.01)\n\n print(\"Websocket server on port %s\" % PORTNUM)\n server = SimpleWebSocketServer('', PORTNUM, partial(Echo, client))\n\n control = Temperature_Control3()\n\n t3 = threading.Thread(target=lambda:control.control('get')) \n t3.start()\n\n try:\n t1 = threading.Thread(target=server.serveforever())\n t1.start()\n for thread in threading.enumerate():\n print(thread.name)\n finally:\n server.close()\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "class", "inheritance", "member_functions", "python", "python_multithreading" ]
stackoverflow_0074501121_class_inheritance_member_functions_python_python_multithreading.txt
Q: Pandas groupby - divide by the sum of all groups I have a DataFrame df and I create gb = df.groupby("column1"). Now I would like to do the following: x = gb.apply(lambda x: x["column2"].sum() / df["column2"].sum()) It works, but I would like to base everything on x, not on both x and df. Ideally I expected there to be a function x.get_source_df, and then my solution would be: x = gb.apply(lambda x: x["column2"].sum() / x.get_source_df()["column2"].sum()) and in that case I could save this lambda function in a dictionary and use it for any df. Is it possible? A: You should not use apply here; the optimal method would be df.groupby('column1')['column2'].sum().div(df['column2'].sum()) It works for more than one column too. A: I am not sure from your explanation whether you want to divide by the sum of each group or by the sum of the entire DataFrame. I assume you want to divide by the sum of each group. Data: df = pd.DataFrame({'name':['a']*5+['b']*5, 'year':[2001,2002,2003,2004,2005]*2, 'val1':[1,2,3,4,5,None,7,8,9,10], 'val2':[21,22,23,24,25,26,27,28,29,30]}) Using transform, then simply divide column by column: df['sum'] = df.groupby('name')['val1'].transform(lambda g: g.sum()) df['weight'] = df['val1']/df['sum']
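A runnable sketch of the no-apply approach on a toy frame (column names follow the question; the data is made up):

```python
import pandas as pd

df = pd.DataFrame({"column1": ["a", "a", "b"], "column2": [1.0, 3.0, 6.0]})

# Each group's share of the grand total, without apply:
# per-group sums (a -> 4, b -> 6) divided by the overall sum (10).
shares = df.groupby("column1")["column2"].sum().div(df["column2"].sum())
print(shares)  # a -> 0.4, b -> 0.6
```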
Pandas groupby - divide by the sum of all groups
I have a DataFrame df and I create gb = df.groupby("column1"). Now I would like to do the following: x = gb.apply(lambda x: x["column2"].sum() / df["column2"].sum()) It works, but I would like to base everything on x, not on both x and df. Ideally I expected there to be a function x.get_source_df, and then my solution would be: x = gb.apply(lambda x: x["column2"].sum() / x.get_source_df()["column2"].sum()) and in that case I could save this lambda function in a dictionary and use it for any df. Is it possible?
[ "you should not use apply here, may be you find it interesting, optimal method would be\ndf.groupby('column1')['column2'].sum().div(df['column2'].sum())\n\nIt works for more than one column too.\n", "I am not sure in your explanation that you want to divide for the sum of each group or divide for the sum of the entire database. I assume what you want is to divide the sum of each group.\nData:\ndf = pd.DataFrame({'name':['a']*5+['b']*5,\n 'year':[2001,2002,2003,2004,2005]*2,\n 'val1':[1,2,3,4,5,None,7,8,9,10],\n 'val2':[21,22,23,24,25,26,27,28,29,30]})\n\nUsing transform then simply divide col by col:\ndf['sum'] = df.groupby('name')['val1'].transform(lambda g: g.sum())\ndf['weight'] = df['val1']/df['sum']\n\n" ]
[ 0, 0 ]
[]
[]
[ "group_by", "pandas", "python" ]
stackoverflow_0074500059_group_by_pandas_python.txt
Q: How to add a 1d array to a 2d array element-wise to get a 3d array in numpy I have a 2d array of values, and I want to add a 1d array to this 2d array element wise such that I would get a 3d array where each element is the original 2d array plus a respective element of the 1d array. For example: A = np.array([ [10, 9, 8, 7, 6], [5, 4, 3, 2, 1] ]) B = np.array([1, 2, 3]) #What A + B should return: np.array([ [[11, 10, 9, 8, 7], [6, 5, 4, 3, 2]], [[12, 11, 10, 9, 8], [7, 6, 5, 4, 3]], [[13, 12, 11, 10, 9], [8, 7, 6, 5, 4]] ]) I was able to do this pretty easily with a normal for loop but is this possible in pure numpy? A: I believe this gives you the output you're after? import numpy as np A = np.array([ [10, 9, 8, 7, 6], [5, 4, 3, 2, 1] ]) B = np.array([1, 2, 3]) A = A.reshape(1, 2, 5) B = B.reshape(3, 1, 1) for each in A + B: print (each) # Result: # [[11 10 9 8 7] # [ 6 5 4 3 2]] # [[12 11 10 9 8] # [ 7 6 5 4 3]] # [[13 12 11 10 9] # [ 8 7 6 5 4]] A: import numpy as np A = np.array([ [10, 9, 8, 7, 6], [5, 4, 3, 2, 1] ]) B = np.array([1, 2, 3]) # What A + B should return: # np.array([ # [[11, 10, 9, 8, 7], [6, 5, 4, 3, 2]], # [[12, 11, 10, 9, 8], [7, 6, 5, 4, 3]], # [[13, 12, 11, 10, 9], [8, 7, 6, 5, 4]] # ]) temp = np.array([A]*len(B)).flatten() add = np.repeat(B, len(A.flatten())) temp += add result = temp.reshape((B.shape[0],)+A.shape) print(result) # np.array([ # [[11, 10, 9, 8, 7], [6, 5, 4, 3, 2]], # [[12, 11, 10, 9, 8], [7, 6, 5, 4, 3]], # [[13, 12, 11, 10, 9], [8, 7, 6, 5, 4]] # ]) A: you can have fun with list comprehension here and do it with a one-liner import numpy as np A = np.array([ [10, 9, 8, 7, 6], [5, 4, 3, 2, 1] ]) B = np.array([1, 2, 3]) r = np.array([A+b for b in B]) print(r) # [[[11 10 9 8 7] # [ 6 5 4 3 2]] # # [[12 11 10 9 8] # [ 7 6 5 4 3]] # # [[13 12 11 10 9] # [ 8 7 6 5 4]]]
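The reshape trick in the first answer can also be written with `None` (i.e. `np.newaxis`) indexing, which inserts the broadcast axes without hard-coding shapes. A sketch using the arrays from the question:

```python
import numpy as np

A = np.array([[10, 9, 8, 7, 6],
              [5, 4, 3, 2, 1]])
B = np.array([1, 2, 3])

# B[:, None, None] has shape (3, 1, 1); broadcasting it against A's
# (2, 5) yields a (3, 2, 5) result: one copy of A per element of B.
C = B[:, None, None] + A
print(C.shape)  # (3, 2, 5)
```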
How to add a 1d array to a 2d array element-wise to get a 3d array in numpy
I have a 2d array of values, and I want to add a 1d array to this 2d array element wise such that I would get a 3d array where each element is the original 2d array plus a respective element of the 1d array. For example: A = np.array([ [10, 9, 8, 7, 6], [5, 4, 3, 2, 1] ]) B = np.array([1, 2, 3]) #What A + B should return: np.array([ [[11, 10, 9, 8, 7], [6, 5, 4, 3, 2]], [[12, 11, 10, 9, 8], [7, 6, 5, 4, 3]], [[13, 12, 11, 10, 9], [8, 7, 6, 5, 4]] ]) I was able to do this pretty easily with a normal for loop but is this possible in pure numpy?
[ "I believe this gives you the output you're after?\nimport numpy as np\n\nA = np.array([\n [10, 9, 8, 7, 6],\n [5, 4, 3, 2, 1]\n])\nB = np.array([1, 2, 3])\n\nA = A.reshape(1, 2, 5)\nB = B.reshape(3, 1, 1)\n\nfor each in A + B:\n print (each)\n \n# Result:\n # [[11 10 9 8 7]\n # [ 6 5 4 3 2]]\n # [[12 11 10 9 8]\n # [ 7 6 5 4 3]]\n # [[13 12 11 10 9]\n # [ 8 7 6 5 4]]\n\n", "import numpy as np\n\n\nA = np.array([\n [10, 9, 8, 7, 6], [5, 4, 3, 2, 1]\n])\nB = np.array([1, 2, 3])\n\n# What A + B should return:\n# np.array([\n# [[11, 10, 9, 8, 7], [6, 5, 4, 3, 2]],\n# [[12, 11, 10, 9, 8], [7, 6, 5, 4, 3]],\n# [[13, 12, 11, 10, 9], [8, 7, 6, 5, 4]]\n# ])\n\n\ntemp = np.array([A]*len(B)).flatten()\nadd = np.repeat(B, len(A.flatten()))\n\ntemp += add\n\nresult = temp.reshape((B.shape[0],)+A.shape)\nprint(result)\n\n# np.array([\n# [[11, 10, 9, 8, 7], [6, 5, 4, 3, 2]],\n# [[12, 11, 10, 9, 8], [7, 6, 5, 4, 3]],\n# [[13, 12, 11, 10, 9], [8, 7, 6, 5, 4]]\n# ])\n\n", "you can have fun with list comprehension here and do it with a one-liner\nimport numpy as np\n\nA = np.array([\n [10, 9, 8, 7, 6],\n [5, 4, 3, 2, 1]\n])\nB = np.array([1, 2, 3])\n\nr = np.array([A+b for b in B])\nprint(r)\n\n# [[[11 10 9 8 7]\n# [ 6 5 4 3 2]]\n# \n# [[12 11 10 9 8]\n# [ 7 6 5 4 3]]\n# \n# [[13 12 11 10 9]\n# [ 8 7 6 5 4]]]\n\n" ]
[ 0, 0, 0 ]
[]
[]
[ "numpy", "python" ]
stackoverflow_0074504800_numpy_python.txt
Q: error using np.argmax when applying keepdims I am running my Python code and receiving this error on keepdims: [error screenshot] This is the code: [code screenshot] It worked fine to run this command on my computer a few days ago, but I have run other code since then that might have changed something. keepdims works with amax, just not with argmax. My friend ran the same code on her computer just now, and this error did not show up even though the code was identical. I tried uninstalling and reinstalling Anaconda, but it did not change anything. I am not sure if there is something else I have to download or what has happened. A: For an array x, a simple way to replicate the behavior of np.argmax(x, axis=0, keepdims=True) is np.argmax(x, axis=0)[np.newaxis, ...]. Note that this is specifically for the case axis=0. Other alternatives include np.expand_dims(np.argmax(x, axis=0), 0) and np.argmax(x, axis=0).reshape((1,) + x.shape[1:]). For an arbitrary axis k, np.expand_dims(np.argmax(x, axis=k), k) will work.
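For context: the keepdims argument of np.argmax was only added in NumPy 1.22, so a NumPy version difference between the two machines would explain why identical code fails on one and not the other. On an older NumPy, the workaround from the answer can be checked like this:

```python
import numpy as np

x = np.arange(12).reshape(3, 4)  # max along axis 0 is always the last row

# Equivalent to np.argmax(x, axis=0, keepdims=True) on NumPy >= 1.22:
idx = np.expand_dims(np.argmax(x, axis=0), 0)
idx2 = np.argmax(x, axis=0)[np.newaxis, ...]  # same thing, via indexing

print(idx.shape)  # (1, 4)
```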
error using np.argmax when applying keepdims
I am running my Python code and receiving this error on keepdims: [error screenshot] This is the code: [code screenshot] It worked fine to run this command on my computer a few days ago, but I have run other code since then that might have changed something. keepdims works with amax, just not with argmax. My friend ran the same code on her computer just now, and this error did not show up even though the code was identical. I tried uninstalling and reinstalling Anaconda, but it did not change anything. I am not sure if there is something else I have to download or what has happened.
[ "For an array x, a simple way to replicate the behavior of np.argmax(x, axis=0, keepdims=True) is np.argmax(x, axis=0)[np.newaxis, ...]. Note that this is specifically for the case axis=0.\nOther alternatives include np.expand_dims(np.argmax(x, axis=0), 0) and np.argmax(x, axis=0).reshape((1,) + x.shape[1:]).\nFor an arbitrary axis k, np.expand_dims(np.argmax(x, axis=k), k) will work.\n" ]
[ 0 ]
[]
[]
[ "argmax", "numpy", "python" ]
stackoverflow_0074501160_argmax_numpy_python.txt
Q: Bs4 fail when try to get next url Here is my code: def parser(): flag = True url = 'https://quotes.toscrape.com' while flag: responce = requests.get(url) soup = BeautifulSoup(responce.text, 'html.parser') quote_l = soup.find_all('span', {'class': 'text'}) q_count = 0 for i in range(len(quote_l)): if q_count >= 5: flag = False break quote = soup.find_all('span', {'class': 'text'})[i] if not Quote.objects.filter(quote=quote.string).exists(): author = soup.find_all('small', {'class': 'author'})[i] if not Author.objects.filter(name=author.string).exists(): a = Author.objects.create(name=author.string) Quote.objects.create(quote=quote.string, author_id=a.id) q_count += 1 else: a = Author.objects.get(name=author.string) Quote.objects.create(quote=quote.string, author_id=a.id) q_count += 1 url += soup.find('li', {'class': 'next'}).a['href'] I need to get the next page, but I get this exception: 'NoneType' object has no attribute 'a'. How can I fix that, and how might I optimize my code? Thanks. A: Upon reaching the last page there will be no Next button, so you need an exit-condition check prior to attempting to access the href for the next page. One possibility would be to add the following lines before your current last line: next_page = soup.find('li', {'class': 'next'}) if not next_page: flag = False # or return Or simply return at that point. You'd also update the last line to use the variable, of course, and ensure you are not continuously extending url with suffixes of the next page. For example, one could add the suffix during the requests call: def parser(): flag = True url = 'https://quotes.toscrape.com' suffix = '' while flag: responce = requests.get(url + suffix) soup = BeautifulSoup(responce.text, 'html.parser') # other code next_page = soup.find('li', {'class': 'next'}) if not next_page: return suffix = next_page.a['href']
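The None check from the answer can be exercised without touching the network; here the markup is a hard-coded string, purely for illustration:

```python
from bs4 import BeautifulSoup

# A page that still has a Next button, and the final page, which does not.
mid_page = "<ul><li class='next'><a href='/page/2/'>Next</a></li></ul>"
last_page = "<ul><li class='previous'><a href='/page/1/'>Prev</a></li></ul>"

def next_suffix(html):
    soup = BeautifulSoup(html, "html.parser")
    next_page = soup.find("li", {"class": "next"})
    if next_page is None:  # no Next button: stop paginating
        return None
    return next_page.a["href"]

print(next_suffix(mid_page))   # /page/2/
print(next_suffix(last_page))  # None
```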
Bs4 fail when try to get next url
Here is my code: def parser(): flag = True url = 'https://quotes.toscrape.com' while flag: responce = requests.get(url) soup = BeautifulSoup(responce.text, 'html.parser') quote_l = soup.find_all('span', {'class': 'text'}) q_count = 0 for i in range(len(quote_l)): if q_count >= 5: flag = False break quote = soup.find_all('span', {'class': 'text'})[i] if not Quote.objects.filter(quote=quote.string).exists(): author = soup.find_all('small', {'class': 'author'})[i] if not Author.objects.filter(name=author.string).exists(): a = Author.objects.create(name=author.string) Quote.objects.create(quote=quote.string, author_id=a.id) q_count += 1 else: a = Author.objects.get(name=author.string) Quote.objects.create(quote=quote.string, author_id=a.id) q_count += 1 url += soup.find('li', {'class': 'next'}).a['href'] I need to get the next page, but I get this exception: 'NoneType' object has no attribute 'a'. How can I fix that, and how might I optimize my code? Thanks.
[ "Upon reaching the last page there will be no Next button so you need an exit condition check prior to attempting to access the href for next page. One possibility would be to add the following lines before your current last line:\nnext_page = soup.find('li', {'class': 'next'})\nif not next_page: flag = False # or return\n\nOr simply return at that point.\nYou'd also update the last line to use the variable, of course, and ensure you are not continuously extending url with suffixes of next page. For example, one could add the suffix during the requests call:\ndef parser():\n flag = True\n url = 'https://quotes.toscrape.com'\n suffix = ''\n\n while flag:\n responce = requests.get(url + suffix)\n soup = BeautifulSoup(responce.text, 'html.parser')\n # other code\n \n \n next_page = soup.find('li', {'class': 'next'})\n\n if not next_page: \n return\n suffix = next_page.a['href']\n\n" ]
[ 0 ]
[]
[]
[ "beautifulsoup", "html_parsing", "parsing", "python" ]
stackoverflow_0074503332_beautifulsoup_html_parsing_parsing_python.txt
Q: Random array generation using Numba wrapper Suppose I want to generate an array using njit which is a library of Numba. The following approach is throwing an error and I have no idea why. I followed this from speed up function that takes a function as argument with numba. import numpy as np from numba import prange, njit def numpy_random(n): return np.random.normal(size=n) s=np.zeros(n) def call_func(func): # only take func func = njit(func) # compile func in nopython mode! @njit def inner(x): return func(x) return inner cf = call_func(numpy_random) for i in range(k): s += cf(n*3) print(np.mean(s)) Traceback (most recent call last): File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py", line 51, in <module> s += cf(n*3) File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/lib/python3.9/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args error_rewrite(e, 'typing') File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/lib/python3.9/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite raise e.with_traceback(None) numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in method normal of numpy.random.mtrand.RandomState object at 0x7fa4cd469340>) found for signature: >>> normal(size=int64) There are 4 candidate implementations: - Of which 4 did not match due to: Overload in function '_OverloadWrapper._build.<locals>.ol_generated': File: numba/core/overload_glue.py: Line 129. 
With argument(s): '(size=int64)': Rejected as the implementation raised a specific error: TypingError: unsupported call signature raised from /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/lib/python3.9/site-packages/numba/core/typing/templates.py:439 During: resolving callee type: Function(<built-in method normal of numpy.random.mtrand.RandomState object at 0x7fa4cd469340>) During: typing of call at /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py (11) File "timer.py", line 11: def numpy_random(n): return np.random.normal(size=n) ^ During: resolving callee type: type(CPUDispatcher(<function numpy_random at 0x7fa4c6f40160>)) During: typing of call at /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py (46) During: resolving callee type: type(CPUDispatcher(<function numpy_random at 0x7fa4c6f40160>)) During: typing of call at /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py (46) File "timer.py", line 46: def inner(x): return func(x) ^ A: To clarify the error, Numba basically reports No implementation of function [...] found for signature normal(size=int64) and then unsupported call signature. Thus, Numba does not support calling normal with a size attribute. This is actually documented. A simple way to reproduce the error is to execute this code: @njit('(int64,)') def numpy_random(n): return np.random.normal(size=n) A simple solution is to create an array, fill it and then return it: # @njit should not be used if in the context of the initial code @njit('(int64,)') def numpy_random(n): out = np.empty(n) for i in range(n): out[i] = np.random.normal() return out Note that there is no reason for Numba to be particularly faster than Numpy here. It might even be slower on some platform since Numpy can use a more optimized implementation than Numba on them.
Random array generation using Numba wrapper
Suppose I want to generate an array using njit which is a library of Numba. The following approach is throwing an error and I have no idea why. I followed this from speed up function that takes a function as argument with numba. import numpy as np from numba import prange, njit def numpy_random(n): return np.random.normal(size=n) s=np.zeros(n) def call_func(func): # only take func func = njit(func) # compile func in nopython mode! @njit def inner(x): return func(x) return inner cf = call_func(numpy_random) for i in range(k): s += cf(n*3) print(np.mean(s)) Traceback (most recent call last): File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py", line 51, in <module> s += cf(n*3) File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/lib/python3.9/site-packages/numba/core/dispatcher.py", line 468, in _compile_for_args error_rewrite(e, 'typing') File "/home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/lib/python3.9/site-packages/numba/core/dispatcher.py", line 409, in error_rewrite raise e.with_traceback(None) numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend) Failed in nopython mode pipeline (step: nopython frontend) No implementation of function Function(<built-in method normal of numpy.random.mtrand.RandomState object at 0x7fa4cd469340>) found for signature: >>> normal(size=int64) There are 4 candidate implementations: - Of which 4 did not match due to: Overload in function '_OverloadWrapper._build.<locals>.ol_generated': File: numba/core/overload_glue.py: Line 129. 
With argument(s): '(size=int64)': Rejected as the implementation raised a specific error: TypingError: unsupported call signature raised from /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/lib/python3.9/site-packages/numba/core/typing/templates.py:439 During: resolving callee type: Function(<built-in method normal of numpy.random.mtrand.RandomState object at 0x7fa4cd469340>) During: typing of call at /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py (11) File "timer.py", line 11: def numpy_random(n): return np.random.normal(size=n) ^ During: resolving callee type: type(CPUDispatcher(<function numpy_random at 0x7fa4c6f40160>)) During: typing of call at /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py (46) During: resolving callee type: type(CPUDispatcher(<function numpy_random at 0x7fa4c6f40160>)) During: typing of call at /home/abhigyan/.pyenv/versions/3.9.5/envs/ltesim/timer.py (46) File "timer.py", line 46: def inner(x): return func(x) ^
[ "To clarify the error, Numba basically reports No implementation of function [...] found for signature normal(size=int64) and then unsupported call signature. Thus, Numba does not support calling normal with a size attribute. This is actually documented.\nA simple way to reproduce the error is to execute this code:\n@njit('(int64,)')\ndef numpy_random(n):\n return np.random.normal(size=n)\n\nA simple solution is to create an array, fill it and then return it:\n# @njit should not be used if in the context of the initial code\n@njit('(int64,)')\ndef numpy_random(n):\n out = np.empty(n)\n for i in range(n):\n out[i] = np.random.normal()\n return out\n\nNote that there is no reason for Numba to be particularly faster than Numpy here. It might even be slower on some platform since Numpy can use a more optimized implementation than Numba on them.\n" ]
[ 1 ]
[]
[]
[ "numba", "numpy", "python" ]
stackoverflow_0074505047_numba_numpy_python.txt
Q: passing input between multiple functions? I'm currently trying to pass input between multiple functions. As of now I'm having an extremely hard time figuring out how to do it in my program. My program consists of 2 functions: main() gets the user input and removes all punctuation, and capital() takes that output and turns it into all caps. However, when I call the function it only prints the fully capitalized string, rather than printing it first without the punctuation and then fully capitalized. Here is what I've tried. I set the space variable equal to my main function so I can pass on the string that's produced from main. However, I'm getting the error above and feel my solution is extremely inefficient. If anyone has a way to do this without using a global constant or global variable, please let me know. The way I'm trying to do this is with parameters, but I am very confused as to why this is happening. Thanks. punctuation = "!@#$%^&*():<>?{}[]`\/~" def capital(): space = main() string2 = '' for i in range(len(space)): if(space[i] >= 'a' and space[i] <= 'z'): string2 = string2 + chr((ord(space[i]) - 32)) else: string2 = string2 + space[i] return string2 def main(): user_string=input("Please enter a string: ") space = "" for character in user_string: if character not in punctuation: space = space+character return space print(capital()) print(main()) A: The reason this might be happening is that you are calling main() prior to assignment, which does not work on some versions of Python, if I remember correctly. You could update to a newer version, but a better way is to use parameters like you explained. To make a parameter, you could have your capital() function take in a variable.
To do this, simply write the name of the local variable (only usable inside the function) inside the parentheses: def capital(space): Then, all you need to do is run capital() and pass in main() as space: def capital(space): string2 = '' for i in range(len(space)): if(space[i] >= 'a' and space[i] <= 'z'): string2 = string2 + chr((ord(space[i]) - 32)) else: string2 = string2 + space[i] return string2 def main(): user_string=input("Please enter a string: ") space = "" for character in user_string: if character not in punctuation: space = space+character return space print(capital(main()))
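As an aside (not part of the accepted fix above), the same two steps — stripping punctuation, then upper-casing — can be written with Python's built-in string methods, avoiding the manual chr/ord arithmetic. A sketch, reusing the question's punctuation string:

```python
PUNCTUATION = "!@#$%^&*():<>?{}[]`\\/~"  # same set as in the question

def strip_punctuation(text: str) -> str:
    # str.translate drops every character that maps to None
    return text.translate({ord(ch): None for ch in PUNCTUATION})

def capitalize_all(text: str) -> str:
    # str.upper performs the same a-z -> A-Z shift as the chr/ord loop
    return text.upper()

cleaned = strip_punctuation("what?! (really)")
print(cleaned)                  # what really
print(capitalize_all(cleaned))  # WHAT REALLY
```

Passing `strip_punctuation`'s result into `capitalize_all` mirrors the `capital(main())` chaining from the answer, just with the input-reading step factored out.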
passing input between multiple functions?
im currently trying to pass input between multiple functions. As of now im having an extremely hard time figuring out how to do it with my program. My program consists of 2 functions. main() will get the user input, remove all punctuation and capital() will take that output and turn it into all caps. However, when i call the function it only prints it fully capitalized rather than printing it first without the punctuation and then fully capitalized. here is what ive tried. I set the space variable = to my main function so i can pass on the string thats produced from main. However im getting the error from above and feel my solution is extremely inefficient. if anyone has a way to do this without using a global constant or global variable please let me know. the was im trying to do this is with parameters but i am very confused as to why this is happening. thanks punctuation = "!@#$%^&*():<>?{}[]`\/~" def capital(): space = main() string2 = '' for i in range(len(space)): if(space[i] >= 'a' and space[i] <= 'z'): string2 = string2 + chr((ord(space[i]) - 32)) else: string2 = string2 + space[i] return string2 def main(): user_string=input("Please enter a string: ") space = "" for character in user_string: if character not in punctuation: space = space+character return space print(capital()) print(main()) ``` `
[ "The reason this might be happening is because you are calling main() prior to assignment, which does not work on some versions of python if I remember correctly. You could update to a newer version, but a better way is to use parameters like you explained.\nTo make a parameter, you could have your capital() function take in a variable. To do this, simply write the name of the local variable (only usable inside the function) inside the parentheses:\ndef capital(space):\nThen, all you need to do is run capital() and pass in main() as space:\ndef capital(space):\n string2 = ''\n for i in range(len(space)):\n if(space[i] >= 'a' and space[i] <= 'z'):\n string2 = string2 + chr((ord(space[i]) - 32))\n else:\n string2 = string2 + space[i]\n return string2\n\n\n\ndef main():\n user_string=input(\"Please enter a string: \")\n space = \"\"\n for character in user_string:\n if character not in punctuation:\n space = space+character\n return space\n\nprint(capital(main()))\n\n" ]
[ 0 ]
[]
[]
[ "python" ]
stackoverflow_0074505140_python.txt
Q: I can not understand why my test and predict y plot for my regression model is like that? I am working on a regression model (Decision Tree) on a multidimensional data, with 16 features. The model r2_score is 0.97. The y test and y predict plot looks so wrong! the range of x is not the same. would you please tell me what is the problem? I have also tried to fit the model in one dimension to check the x range in the diagram, but it just decrease the score obviously, and the diagram is still odd! A: Matplotlib's plot function draws a single line by connecting the points in the order that they are drawn. The reason you are seeing a mess is because the points are not ordered along the x-axis. In a regression model, you have a function f(x) -> R where f here is your decision tree and x is in the 16 dimensional space. However, you cannot order your x , which has 16 dimensions, along the x-axis. Instead, what you can do is just plot the the ground truth and predicted values for each index as a scatter plot: import numpy as np # Here, I'm assuming y_DT_5 is either a 1D array or a column vector. # If necessary, change the argument of np.arange accordingly to get the number of values idxs = np.arange(len(y_DT_5)) plt.figure(figsize=(16,4)) plt.scatter(x=idxs, y=y_DT_5, marker='x') # Plot each ground truth value as separate pts plt.scatter(x=idxs, y=y_test, marker='.') # Plot each predicted value as separate points If your model works, the 2 points plotted at each index should be close along the y-axis.
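If a connected line plot is still wanted rather than a scatter, another option (an assumption about intent, not something the answer above proposes) is to sort both arrays by the ground-truth values first, so the line is drawn left-to-right instead of zig-zagging. A small sketch with invented numbers:

```python
import numpy as np

# Hypothetical ground truth and predictions (values made up for illustration)
y_test = np.array([3.0, 1.0, 2.0, 5.0, 4.0])
y_pred = np.array([2.9, 1.2, 2.1, 4.8, 4.1])

order = np.argsort(y_test)      # indices that sort the ground truth ascending
y_test_sorted = y_test[order]
y_pred_sorted = y_pred[order]

# Plotting the sorted arrays now yields two tidy overlapping curves, e.g.:
# plt.plot(y_test_sorted); plt.plot(y_pred_sorted)
print(y_test_sorted)  # [1. 2. 3. 4. 5.]
```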
I can not understand why my test and predict y plot for my regression model is like that?
I am working on a regression model (Decision Tree) on a multidimensional data, with 16 features. The model r2_score is 0.97. The y test and y predict plot looks so wrong! the range of x is not the same. would you please tell me what is the problem? I have also tried to fit the model in one dimension to check the x range in the diagram, but it just decrease the score obviously, and the diagram is still odd!
[ "Matplotlib's plot function draws a single line by connecting the points in the order that they are drawn. The reason you are seeing a mess is because the points are not ordered along the x-axis.\nIn a regression model, you have a function f(x) -> R where f here is your decision tree and x is in the 16 dimensional space. However, you cannot order your x , which has 16 dimensions, along the x-axis.\nInstead, what you can do is just plot the the ground truth and predicted values for each index as a scatter plot:\nimport numpy as np\n\n# Here, I'm assuming y_DT_5 is either a 1D array or a column vector.\n# If necessary, change the argument of np.arange accordingly to get the number of values\nidxs = np.arange(len(y_DT_5))\n\nplt.figure(figsize=(16,4))\nplt.scatter(x=idxs, y=y_DT_5, marker='x') # Plot each ground truth value as separate pts\nplt.scatter(x=idxs, y=y_test, marker='.') # Plot each predicted value as separate points\n\n\nIf your model works, the 2 points plotted at each index should be close along the y-axis.\n" ]
[ 1 ]
[]
[]
[ "decision_tree", "machine_learning", "matplotlib", "python", "regression" ]
stackoverflow_0074505098_decision_tree_machine_learning_matplotlib_python_regression.txt
Q: Runge Kutta constants diverging for Lorenz system? I'm trying to solve the Lorenz system using the 4th order Runge Kutta method, where dx/dt=a*(y-x) dy/dt=x(b-z)-y dx/dt=x*y-c*z Since this system doesn't depend explicity on time, it's possibly to ignore that part in the iteration, so I just have dX=F(x,y,z) def func(x0): a=10 b=38.63 c=8/3 fx=a*(x0[1]-x0[0]) fy=x0[0]*(b-x0[2])-x0[1] fz=x0[0]*x0[1]-c*x0[2] return np.array([fx,fy,fz]) def kcontants(f,h,x0): k0=h*f(x0) k1=h*f(f(x0)+k0/2) k2=h*f(f(x0)+k1/2) k3=h*f(f(x0)+k2) #note returned K is a matrix return np.array([k0,k1,k2,k3]) x0=np.array([-8,8,27]) h=0.001 t=np.arange(0,50,h) result=np.zeros([len(t),3]) for time in range(len(t)): if time==0: k=kcontants(func,h,x0) result[time]=func(x0)+(1/6)*(k[0]+2*k[1]+2*k[2]+k[3]) else: k=kcontants(func,h,result[time-1]) result[time]=result[time-1]+(1/6)*(k[0]+2*k[1]+2*k[2]+k[3]) The result should be the Lorenz atractors, however my code diverges around the fifth iteration, and it's because the contants I create in kconstants do, however I checked and I'm pretty sure the runge kutta impletmentation is not to fault... (at least i think) edit: Found a similar post ,yet can't figure what I'm doing wrong A: You have an extra call of f(x0) in the calculation of k1, k2 and k3. Change the function kcontants to def kcontants(f,h,x0): k0=h*f(x0) k1=h*f(x0 + k0/2) k2=h*f(x0 + k1/2) k3=h*f(x0 + k2) #note returned K is a matrix return np.array([k0,k1,k2,k3]) A: Have you looked at different initial values for your calculation? Do the ones you've chosen make sense? I.e. are they physical? From past experience with rk you can sometimes get very confusing results if you pick silly starting parameters. A: Goodnight. This and version I made using the scipy edo integrator, scipy.integrate.odeint. 
# Author : Carlos Eduardo da Silva Lima # Theme : Movement of a Plant around a fixed star # Language : Python # date : 11/19/2022 # Environment : Google Colab import numpy as np import matplotlib.pyplot as plt from scipy.integrate import odeint from scipy.optimize import root from scipy.linalg import eig from mpl_toolkits.mplot3d import Axes3D ################################## # Condições inicial e parãmetros # ################################## t_inicial = 0 t_final = 100 N = 10000 h = 1e-3 x_0 = 1.0 y_0 = 1.0 z_0 = 1.0 ##################### # Equação de Lorenz # ##################### def Lorenz(r,t,sigma,rho,beta): x = r[0]; y = r[1]; z = r[2] edo1 = sigma*(y-x) edo2 = x*(rho-z)-y edo3 = x*y-beta*z return np.array([edo1,edo2,edo3]) t = np.linspace(t_inicial,t_final,N) r_0 = np.array([x_0,y_0,z_0]) #sol = odeint(Lorenz,r_0,t,rtol=1e-6,args = (10,28,8/3)) sol = odeint(Lorenz, r_0, t, args=(10,28,8/3), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=1e-9, atol=1e-9, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0, tfirst=False) '''x = sol[:,0] y = sol[:,1] z = sol[:,2]''' x, y, z = sol.T # Plot plt.style.use('dark_background') ax = plt.figure(figsize = (10,10)).add_subplot(projection='3d') ax.plot(x,y,z,'m-',lw=0.5, linewidth = 1.5) ax.set_xlabel("X") ax.set_ylabel("Y") ax.set_zlabel("Z") ax.set_title("Atrator de Lorenz") plt.show() In this second part, I simulate two Lorenz systems to verify the sensitive dependencies of the systems to the initial conditions. In the second system, I add a certain amount of eps = 1e-3 to the initial conditions of x(t0), y(t0) and z(t0). 
# Depedência com as condições iniciais eps = 1e-3 r_0_eps = np.array([x_0+eps,y_0+eps,z_0+eps]) sol_eps = odeint(Lorenz, r_0_eps, t, args=(10,28,8/3), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=1e-9, atol=1e-9, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0, tfirst=False) '''x_eps = sol_eps[:,0] y_eps = sol_eps[:,1] z_eps = sol_eps[:,2]''' x_eps, y_eps, z_eps = sol_eps.T # Plot plt.style.use('dark_background') ax = plt.figure(figsize = (10,10)).add_subplot(projection='3d') ax.plot(x,y,z,'r-',lw=1.5) ax.plot(x_eps,y_eps,z_eps,'b-.',lw=1.1) ax.set_xlabel("X") ax.set_ylabel("Y") ax.set_zlabel("Z") ax.set_title("Lorenz Attractor") plt.show() Hope I helped, see you :).
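Putting the first answer's fix together, here is a self-contained RK4 sketch that stays bounded on the attractor instead of diverging (the classic parameters a=10, b=28, c=8/3 are assumed here; the question used b=38.63):

```python
import numpy as np

def lorenz(r, a=10.0, b=28.0, c=8.0 / 3.0):
    x, y, z = r
    return np.array([a * (y - x), x * (b - z) - y, x * y - c * z])

def rk4_step(f, h, r):
    # Each stage evaluates f at a *state*, never at f(state) -- the bug in the question
    k0 = h * f(r)
    k1 = h * f(r + k0 / 2)
    k2 = h * f(r + k1 / 2)
    k3 = h * f(r + k2)
    return r + (k0 + 2 * k1 + 2 * k2 + k3) / 6

r = np.array([-8.0, 8.0, 27.0])
h = 0.001
for _ in range(5000):  # integrate 5 time units
    r = rk4_step(lorenz, h, r)
print(r)  # finite values on the attractor, no blow-up
```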
Runge Kutta constants diverging for Lorenz system?
I'm trying to solve the Lorenz system using the 4th order Runge Kutta method, where dx/dt=a*(y-x) dy/dt=x(b-z)-y dx/dt=x*y-c*z Since this system doesn't depend explicity on time, it's possibly to ignore that part in the iteration, so I just have dX=F(x,y,z) def func(x0): a=10 b=38.63 c=8/3 fx=a*(x0[1]-x0[0]) fy=x0[0]*(b-x0[2])-x0[1] fz=x0[0]*x0[1]-c*x0[2] return np.array([fx,fy,fz]) def kcontants(f,h,x0): k0=h*f(x0) k1=h*f(f(x0)+k0/2) k2=h*f(f(x0)+k1/2) k3=h*f(f(x0)+k2) #note returned K is a matrix return np.array([k0,k1,k2,k3]) x0=np.array([-8,8,27]) h=0.001 t=np.arange(0,50,h) result=np.zeros([len(t),3]) for time in range(len(t)): if time==0: k=kcontants(func,h,x0) result[time]=func(x0)+(1/6)*(k[0]+2*k[1]+2*k[2]+k[3]) else: k=kcontants(func,h,result[time-1]) result[time]=result[time-1]+(1/6)*(k[0]+2*k[1]+2*k[2]+k[3]) The result should be the Lorenz atractors, however my code diverges around the fifth iteration, and it's because the contants I create in kconstants do, however I checked and I'm pretty sure the runge kutta impletmentation is not to fault... (at least i think) edit: Found a similar post ,yet can't figure what I'm doing wrong
[ "You have an extra call of f(x0) in the calculation of k1, k2 and k3. Change the function kcontants to\ndef kcontants(f,h,x0):\n k0=h*f(x0)\n k1=h*f(x0 + k0/2)\n k2=h*f(x0 + k1/2)\n k3=h*f(x0 + k2)\n #note returned K is a matrix\n return np.array([k0,k1,k2,k3])\n\n", "Have you looked at different initial values for your calculation? Do the ones you've chosen make sense? I.e. are they physical? From past experience with rk you can sometimes get very confusing results if you pick silly starting parameters.\n", "Goodnight. This and version I made using the scipy edo integrator, scipy.integrate.odeint.\n# Author : Carlos Eduardo da Silva Lima\n# Theme : Movement of a Plant around a fixed star\n# Language : Python\n# date : 11/19/2022\n# Environment : Google Colab\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom scipy.integrate import odeint\nfrom scipy.optimize import root\nfrom scipy.linalg import eig\nfrom mpl_toolkits.mplot3d import Axes3D\n\n##################################\n# Condições inicial e parãmetros #\n##################################\nt_inicial = 0\nt_final = 100\nN = 10000\nh = 1e-3\n\nx_0 = 1.0\ny_0 = 1.0\nz_0 = 1.0\n\n#####################\n# Equação de Lorenz #\n#####################\ndef Lorenz(r,t,sigma,rho,beta):\n\n x = r[0]; y = r[1]; z = r[2]\n\n edo1 = sigma*(y-x)\n edo2 = x*(rho-z)-y\n edo3 = x*y-beta*z\n return np.array([edo1,edo2,edo3])\n\nt = np.linspace(t_inicial,t_final,N)\nr_0 = np.array([x_0,y_0,z_0])\n#sol = odeint(Lorenz,r_0,t,rtol=1e-6,args = (10,28,8/3))\nsol = odeint(Lorenz, r_0, t, args=(10,28,8/3), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=1e-9, atol=1e-9, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0, tfirst=False)\n\n'''x = sol[:,0]\ny = sol[:,1]\nz = sol[:,2]'''\nx, y, z = sol.T\n\n# Plot\nplt.style.use('dark_background')\nax = plt.figure(figsize = (10,10)).add_subplot(projection='3d')\nax.plot(x,y,z,'m-',lw=0.5, linewidth = 
1.5)\nax.set_xlabel(\"X\")\nax.set_ylabel(\"Y\")\nax.set_zlabel(\"Z\")\nax.set_title(\"Atrator de Lorenz\")\nplt.show()\n\nIn this second part, I simulate two Lorenz systems to verify the sensitive dependencies of the systems to the initial conditions. In the second system, I add a certain amount of eps = 1e-3 to the initial conditions of x(t0), y(t0) and z(t0).\n# Depedência com as condições iniciais\neps = 1e-3\nr_0_eps = np.array([x_0+eps,y_0+eps,z_0+eps])\nsol_eps = odeint(Lorenz, r_0_eps, t, args=(10,28,8/3), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None,\n rtol=1e-9, atol=1e-9, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0, tfirst=False)\n\n'''x_eps = sol_eps[:,0]\ny_eps = sol_eps[:,1]\nz_eps = sol_eps[:,2]'''\nx_eps, y_eps, z_eps = sol_eps.T\n\n# Plot\nplt.style.use('dark_background')\nax = plt.figure(figsize = (10,10)).add_subplot(projection='3d')\nax.plot(x,y,z,'r-',lw=1.5)\nax.plot(x_eps,y_eps,z_eps,'b-.',lw=1.1)\nax.set_xlabel(\"X\")\nax.set_ylabel(\"Y\")\nax.set_zlabel(\"Z\")\nax.set_title(\"Lorenz Attractor\")\nplt.show()\n\nHope I helped, see you :).\n" ]
[ 1, 0, 0 ]
[]
[]
[ "lorenz_system", "numerical_methods", "python", "runge_kutta" ]
stackoverflow_0055884705_lorenz_system_numerical_methods_python_runge_kutta.txt
Q: sklearn Cross validation scoring , scores are all nan I'm trying to make a multiclass classification here and the score from the cross validaiton are all nan Below the code which works perfectly for binary classifcation when i only keep accuracy and balanced_accuracy it shows the actual score when i add f1 or precison or recall all scores turns into nan the problem that my code worked perfectly for binary classifcation and i'm using the same dataset just changed the target data scoring = {'accuracy': 'accuracy', "balanced_accuracy": "balanced_accuracy", "precision": "precision", "recall": "recall", "f1" :"f1", "roc_auc":"roc_auc" } # load the dataset def load_dataset(df): # load the dataset as a numpy array data = df # retrieve numpy array data = data.values # split into input and output elements X, y = data[:, :-1], data[:, -1] y = LabelEncoder().fit_transform(y) return X, y # evaluate a model def evaluate_model(X, y, model): # define evaluation procedure cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) # evaluate model scores = cross_validate(model, X, y, scoring=scoring, cv=cv, n_jobs=-1) return scores model=DecisionTreeClassifier() # define the location of the dataset # load the dataset X, y = load_dataset(df2) # evaluate the model and store results results_without_nlp = evaluate_model(X, y, model) i have tried to use those from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score but that does not seem to help A: For precision_score, recall_score and f1_score, I think you can try using parameter average = micro (or macro and weighted) for multiple targets. Because its default value is binary.
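A sketch of what that looks like in the scoring dict — the scorer-name strings such as "f1_macro" and "roc_auc_ovr" are scikit-learn's built-in aliases for the averaged variants, and the tiny synthetic dataset is only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.tree import DecisionTreeClassifier

# Multiclass-safe scorers: spell out the averaging strategy in the scorer name
scoring = {
    "accuracy": "accuracy",
    "balanced_accuracy": "balanced_accuracy",
    "precision_macro": "precision_macro",
    "recall_macro": "recall_macro",
    "f1_macro": "f1_macro",
    "roc_auc_ovr": "roc_auc_ovr",  # one-vs-rest ROC AUC for >2 classes
}

X, y = make_classification(n_samples=150, n_classes=3, n_informative=4, random_state=1)
cv = StratifiedKFold(n_splits=3, shuffle=True, random_state=1)
scores = cross_validate(DecisionTreeClassifier(random_state=1), X, y, scoring=scoring, cv=cv)
print(sorted(scores.keys()))
```

With the plain "precision"/"recall"/"f1" names, sklearn falls back to `average='binary'`, which is undefined for three classes and is what produced the NaNs.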
sklearn Cross validation scoring , scores are all nan
I'm trying to make a multiclass classification here and the score from the cross validaiton are all nan Below the code which works perfectly for binary classifcation when i only keep accuracy and balanced_accuracy it shows the actual score when i add f1 or precison or recall all scores turns into nan the problem that my code worked perfectly for binary classifcation and i'm using the same dataset just changed the target data scoring = {'accuracy': 'accuracy', "balanced_accuracy": "balanced_accuracy", "precision": "precision", "recall": "recall", "f1" :"f1", "roc_auc":"roc_auc" } # load the dataset def load_dataset(df): # load the dataset as a numpy array data = df # retrieve numpy array data = data.values # split into input and output elements X, y = data[:, :-1], data[:, -1] y = LabelEncoder().fit_transform(y) return X, y # evaluate a model def evaluate_model(X, y, model): # define evaluation procedure cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1) # evaluate model scores = cross_validate(model, X, y, scoring=scoring, cv=cv, n_jobs=-1) return scores model=DecisionTreeClassifier() # define the location of the dataset # load the dataset X, y = load_dataset(df2) # evaluate the model and store results results_without_nlp = evaluate_model(X, y, model) i have tried to use those from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score but that does not seem to help
[ "For precision_score, recall_score and f1_score, I think you can try using parameter average = micro (or macro and weighted) for multiple targets. Because its default value is binary.\n" ]
[ 1 ]
[]
[]
[ "classification", "machine_learning", "multilabel_classification", "python", "scikit_learn" ]
stackoverflow_0074505106_classification_machine_learning_multilabel_classification_python_scikit_learn.txt
Q: Best way to flatten and remap ORM to Pydantic Model I am using Pydantic with FastApi to output ORM data into JSON. I would like to flatten and remap the ORM model to eliminate an unnecessary level in the JSON. Here's a simplified example to illustrate the problem. original output: {"id": 1, "billing": [ {"id": 1, "order_id": 1, "first_name": "foo"}, {"id": 2, "order_id": 1, "first_name": "bar"} ] } desired output: {"id": 1, "name": ["foo", "bar"]} How to map values from nested dict to Pydantic Model? provides a solution that works for dictionaries by using the init function in the Pydantic model class. This example shows how that works with dictionaries: from pydantic import BaseModel # The following approach works with a dictionary as the input order_dict = {"id": 1, "billing": {"first_name": "foo"}} # desired output: {"id": 1, "name": "foo"} class Order_Model_For_Dict(BaseModel): id: int name: str = None class Config: orm_mode = True def __init__(self, **kwargs): print( "kwargs for dictionary:", kwargs ) # kwargs for dictionary: {'id': 1, 'billing': {'first_name': 'foo'}} kwargs["name"] = kwargs["billing"]["first_name"] super().__init__(**kwargs) print(Order_Model_For_Dict.parse_obj(order_dict)) # id=1 name='foo' (This script is complete, it should run "as is") However, when working with ORM objects, this approach does not work. It appears that the init function is not called. Here's an example which will not provide the desired output. 
from pydantic import BaseModel, root_validator from typing import List from sqlalchemy.orm import relationship from sqlalchemy import Column, Integer, String, ForeignKey from sqlalchemy.dialects.postgresql import ARRAY from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() from pydantic.utils import GetterDict class BillingOrm(Base): __tablename__ = "billing" id = Column(Integer, primary_key=True, nullable=False) order_id = Column(ForeignKey("orders.id", ondelete="CASCADE"), nullable=False) first_name = Column(String(20)) class OrderOrm(Base): __tablename__ = "orders" id = Column(Integer, primary_key=True, nullable=False) billing = relationship("BillingOrm") class Billing(BaseModel): id: int order_id: int first_name: str class Config: orm_mode = True class Order(BaseModel): id: int name: List[str] = None # billing: List[Billing] # uncomment to verify the relationship is working class Config: orm_mode = True def __init__(self, **kwargs): # This __init__ function does not run when using from_orm to parse ORM object print("kwargs for orm:", kwargs) kwargs["name"] = kwargs["billing"]["first_name"] super().__init__(**kwargs) billing_orm_1 = BillingOrm(id=1, order_id=1, first_name="foo") billing_orm_2 = BillingOrm(id=2, order_id=1, first_name="bar") order_orm = OrderOrm(id=1) order_orm.billing.append(billing_orm_1) order_orm.billing.append(billing_orm_2) order_model = Order.from_orm(order_orm) # Output returns 'None' for name instead of ['foo','bar'] print(order_model) # id=1 name=None (This script is complete, it should run "as is") The output returns name=None instead of the desired list of names. In the above example, I am using Order.from_orm to create the Pydantic model. This approach seems to be the same that is used by FastApi when specifying a response model. 
The desired solution should support use in the FastApi response model as shown in this example: @router.get("/orders", response_model=List[schemas.Order]) async def list_orders(db: Session = Depends(get_db)): return get_orders(db) Update: Regarding MatsLindh comment to try validators, I replaced the init function with a root validator, however, I'm unable to mutate the return values to include a new attribute. I suspect this issue is because it is a ORM object and not a true dictionary. The following code will extract the names and print them in the desired list. However, I can't see how to include this updated result in the model response: @root_validator(pre=True) def flatten(cls, values): if isinstance(values, GetterDict): names = [ billing_entry.first_name for billing_entry in values.get("billing") ] print(names) # values["name"] = names # error: 'GetterDict' object does not support item assignment return values I also found a couple other discussions on this problem that led me to try this approach: https://github.com/samuelcolvin/pydantic/issues/717 https://gitmemory.com/issue/samuelcolvin/pydantic/821/744047672 A: What if you override the from_orm class method? class Order(BaseModel): id: int name: List[str] = None billing: List[Billing] class Config: orm_mode = True @classmethod def from_orm(cls, obj: Any) -> 'Order': # `obj` is the orm model instance if hasattr(obj, 'billing'): obj.name = obj.billing.first_name return super().from_orm(obj) A: I really missed the handy Django REST Framework serializers while working with the FastAPI + Pydantic stack... 
So I wrangled with GetterDict to allow defining field getter function in the Pydantic model like this: class User(FromORM): fullname: str class Config(FromORM.Config): getter_dict = FieldGetter.bind(lambda: User) @staticmethod def get_fullname(obj: User) -> str: return f'{obj.firstname} {obj.lastname}' where the magic part FieldGetter is implemented as from typing import Any, Callable, Optional, Type from types import new_class from pydantic import BaseModel from pydantic.utils import GetterDict class FieldGetter(GetterDict): model_class_forward_ref: Optional[Callable] = None model_class: Optional[Type[BaseModel]] = None def __new__(cls, *args, **kwargs): inst = super().__new__(cls) if cls.model_class_forward_ref: inst.model_class = cls.model_class_forward_ref() return inst @classmethod def bind(cls, model_class_forward_ref: Callable): sub_class = new_class(f'{cls.__name__}FieldGetter', (cls,)) sub_class.model_class_forward_ref = model_class_forward_ref return sub_class def get(self, key: str, default): if hasattr(self._obj, key): return super().get(key, default) getter_fun_name = f'get_{key}' if not (getter := getattr(self.model_class, getter_fun_name, None)): raise AttributeError(f'no field getter function found for {key}') return getter(self._obj) class FromORM(BaseModel): class Config: orm_mode = True getter_dict = FieldGetter
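Regarding the "'GetterDict' object does not support item assignment" error in the question's update: the flattening itself is easy once you build a plain dict instead of mutating the GetterDict, and that dict can simply be returned from a `root_validator(pre=True)`. The extraction step alone, sketched with stand-in objects (SimpleNamespace plays the role of the ORM instances):

```python
from types import SimpleNamespace

def flatten_order(order) -> dict:
    # Works on any ORM-like object with .id and .billing items having .first_name;
    # return a dict like this from root_validator(pre=True) rather than
    # assigning into the GetterDict that pydantic hands you.
    return {"id": order.id, "name": [b.first_name for b in order.billing]}

order = SimpleNamespace(
    id=1,
    billing=[SimpleNamespace(first_name="foo"), SimpleNamespace(first_name="bar")],
)
print(flatten_order(order))  # {'id': 1, 'name': ['foo', 'bar']}
```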
Best way to flatten and remap ORM to Pydantic Model
I am using Pydantic with FastApi to output ORM data into JSON. I would like to flatten and remap the ORM model to eliminate an unnecessary level in the JSON. Here's a simplified example to illustrate the problem. original output: {"id": 1, "billing": [ {"id": 1, "order_id": 1, "first_name": "foo"}, {"id": 2, "order_id": 1, "first_name": "bar"} ] } desired output: {"id": 1, "name": ["foo", "bar"]} How to map values from nested dict to Pydantic Model? provides a solution that works for dictionaries by using the init function in the Pydantic model class. This example shows how that works with dictionaries: from pydantic import BaseModel # The following approach works with a dictionary as the input order_dict = {"id": 1, "billing": {"first_name": "foo"}} # desired output: {"id": 1, "name": "foo"} class Order_Model_For_Dict(BaseModel): id: int name: str = None class Config: orm_mode = True def __init__(self, **kwargs): print( "kwargs for dictionary:", kwargs ) # kwargs for dictionary: {'id': 1, 'billing': {'first_name': 'foo'}} kwargs["name"] = kwargs["billing"]["first_name"] super().__init__(**kwargs) print(Order_Model_For_Dict.parse_obj(order_dict)) # id=1 name='foo' (This script is complete, it should run "as is") However, when working with ORM objects, this approach does not work. It appears that the init function is not called. Here's an example which will not provide the desired output. 
from pydantic import BaseModel, root_validator from typing import List from sqlalchemy.orm import relationship from sqlalchemy import Column, Integer, String, ForeignKey from sqlalchemy.dialects.postgresql import ARRAY from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() from pydantic.utils import GetterDict class BillingOrm(Base): __tablename__ = "billing" id = Column(Integer, primary_key=True, nullable=False) order_id = Column(ForeignKey("orders.id", ondelete="CASCADE"), nullable=False) first_name = Column(String(20)) class OrderOrm(Base): __tablename__ = "orders" id = Column(Integer, primary_key=True, nullable=False) billing = relationship("BillingOrm") class Billing(BaseModel): id: int order_id: int first_name: str class Config: orm_mode = True class Order(BaseModel): id: int name: List[str] = None # billing: List[Billing] # uncomment to verify the relationship is working class Config: orm_mode = True def __init__(self, **kwargs): # This __init__ function does not run when using from_orm to parse ORM object print("kwargs for orm:", kwargs) kwargs["name"] = kwargs["billing"]["first_name"] super().__init__(**kwargs) billing_orm_1 = BillingOrm(id=1, order_id=1, first_name="foo") billing_orm_2 = BillingOrm(id=2, order_id=1, first_name="bar") order_orm = OrderOrm(id=1) order_orm.billing.append(billing_orm_1) order_orm.billing.append(billing_orm_2) order_model = Order.from_orm(order_orm) # Output returns 'None' for name instead of ['foo','bar'] print(order_model) # id=1 name=None (This script is complete, it should run "as is") The output returns name=None instead of the desired list of names. In the above example, I am using Order.from_orm to create the Pydantic model. This approach seems to be the same that is used by FastApi when specifying a response model. 
The desired solution should support use in the FastApi response model as shown in this example: @router.get("/orders", response_model=List[schemas.Order]) async def list_orders(db: Session = Depends(get_db)): return get_orders(db) Update: Regarding MatsLindh comment to try validators, I replaced the init function with a root validator, however, I'm unable to mutate the return values to include a new attribute. I suspect this issue is because it is a ORM object and not a true dictionary. The following code will extract the names and print them in the desired list. However, I can't see how to include this updated result in the model response: @root_validator(pre=True) def flatten(cls, values): if isinstance(values, GetterDict): names = [ billing_entry.first_name for billing_entry in values.get("billing") ] print(names) # values["name"] = names # error: 'GetterDict' object does not support item assignment return values I also found a couple other discussions on this problem that led me to try this approach: https://github.com/samuelcolvin/pydantic/issues/717 https://gitmemory.com/issue/samuelcolvin/pydantic/821/744047672
[ "What if you override the from_orm class method?\nclass Order(BaseModel):\n id: int\n name: List[str] = None\n billing: List[Billing]\n\n class Config:\n orm_mode = True\n\n @classmethod\n def from_orm(cls, obj: Any) -> 'Order':\n # `obj` is the orm model instance\n if hasattr(obj, 'billing'):\n obj.name = obj.billing.first_name\n return super().from_orm(obj)\n\n", "I really missed the handy Django REST Framework serializers while working with the FastAPI + Pydantic stack... So I wrangled with GetterDict to allow defining field getter function in the Pydantic model like this:\nclass User(FromORM):\n\n fullname: str\n\n class Config(FromORM.Config):\n getter_dict = FieldGetter.bind(lambda: User)\n\n @staticmethod\n def get_fullname(obj: User) -> str:\n return f'{obj.firstname} {obj.lastname}'\n\nwhere the magic part FieldGetter is implemented as\nfrom typing import Any, Callable, Optional, Type\nfrom types import new_class\nfrom pydantic import BaseModel\nfrom pydantic.utils import GetterDict\n\n\nclass FieldGetter(GetterDict):\n\n model_class_forward_ref: Optional[Callable] = None\n model_class: Optional[Type[BaseModel]] = None\n\n def __new__(cls, *args, **kwargs):\n inst = super().__new__(cls)\n if cls.model_class_forward_ref:\n inst.model_class = cls.model_class_forward_ref()\n\n return inst\n\n @classmethod\n def bind(cls, model_class_forward_ref: Callable):\n sub_class = new_class(f'{cls.__name__}FieldGetter', (cls,))\n sub_class.model_class_forward_ref = model_class_forward_ref\n return sub_class\n\n def get(self, key: str, default):\n if hasattr(self._obj, key):\n return super().get(key, default)\n\n getter_fun_name = f'get_{key}'\n if not (getter := getattr(self.model_class, getter_fun_name, None)):\n raise AttributeError(f'no field getter function found for {key}')\n\n return getter(self._obj)\n\n\nclass FromORM(BaseModel):\n\n class Config:\n orm_mode = True\n getter_dict = FieldGetter\n\n" ]
[ 10, 1 ]
[]
[]
[ "fastapi", "nested", "pydantic", "python", "sqlalchemy" ]
stackoverflow_0068850403_fastapi_nested_pydantic_python_sqlalchemy.txt
Q: MovingSum of list of integers I want to calculate the moving sum of a list of integers with a window of size 3. I have a class as such: class MovingSum: def __init__(self, window=3): self.window = window def push(self, nums: List[int]): pass def belongs(self, total) -> bool: pass I need to calculate the moving sum of 3 numbers and keep track of total. Example: Movingsum.push([1, 2, 3, 4]) will calculate the sum of (1, 2, 3) and 4, hence it keeps two totals, which is 6 and 4. Then next calls to Movingsum.push([10]) will update the total and hence we have the folloing totals: 6 and 14. Then Movingsum.push([20]) will update the total and hence we have 6 and 34. Now, Next call to Movingsum.push([10, 20, 30]) will hence have 3 totals calculated: 6, 34 and 60 etc. Hence i need to keep track of the running totals. I'm having trouble updating the total i have already calculated. My attempt: def __init__(self, window=3): self.window = window self.totals = set() self.count = 0 def push(self, nums: List[int]): total = 0 for num in nums: total += num self.count += 1 if self.count % 3 == 0: self.totals.add(total) self.count = 0 def belongs(self, total) -> bool: return total in self.totals where the belongs function needs to check if the total has already been calculated. I'm having trouble figuring out how to update the new totals. Thanks Moving sum needs to be calculated for 3 numbers before moving to next 3 etc test case: Start: nums = [1, 2] MovingSum.push(nums) Now total is 3 MovingSum.push([10, 20]) Now total is 13 and 20 (since on first push, we calculated total of two numbers which had value 3, but we need total of 3 numbers. 
Hence update total to 3 + 10 (which has 3 numbers), and since only one number remaining, we have two totals: 13 and 20 MovingSum.push([40]) update the total: 13, 60 (since total of 20 has only one number) MovingSum.push([100, 20, 30]) Update the total: 13, 160, 50 (3rd total is 20 + 30 which is two numbers) MovingSum.push([40, 100]) Update the total: 13, 160, 90, 100 (since 90 is sum of 20 + 30 + 40) and we have one number remaining which is 100. A: If I understand you correctly, you have a constant stream of numbers coming in, and you want the total of each n-item window (which I'll call a group) within that. So, you'll need a list of totals of all the groups so far, and you'll need to keep track of the running total of the items within the current (partial) group, as well as the number in the partial group. The code below achieves this -- I'm sure it could be cleaner using features from itertools and the like, but I've tried to keep it simple so you can see what's going on. class MovingSum: window_size: int totals: list[int] count: int def __init__(self, window_size: int = 3): self.window_size = window_size self.totals = [] self.count = 0 def push(self, nums: list[int]) -> None: offset = 0 # Handle partial set as a special case, since the final total # needs incrementing rather than a new total being added. if self.count > 0: items = nums[:self.window_size - self.count] offset = len(items) self.totals[-1] += sum(items) self.count = (self.count + offset) % self.window_size # Iterate over the remaining items, and stop once we run out. # Whilst we have full groups, self.count will remain 0, but # the final group may be partial so we recalculate it each loop. 
while offset < len(nums): items = nums[offset:offset + self.window_size] self.totals.append(sum(items)) self.count = len(items) % self.window_size offset += self.window_size def belongs(self, total: int) -> bool: if self.count > 0: return total in self.totals[:-1] else: return total in self.totals The belongs() function above also excludes partial totals from the membership check, but your simpler version will work fine if you don't need this extra complication. As an aside, your belongs() function will get quite slow as totals gets large. This won't be an issue for a few hundred items, but if you're in the tens of thousands or more, then a set() is a much more efficient way of checking for membership -- it will also handle de-duplication rather conveniently.
MovingSum of list of integers
I want to calculate the moving sum of a list of integers with a window of size 3. I have a class as such: class MovingSum: def __init__(self, window=3): self.window = window def push(self, nums: List[int]): pass def belongs(self, total) -> bool: pass I need to calculate the moving sum of 3 numbers and keep track of total. Example: Movingsum.push([1, 2, 3, 4]) will calculate the sum of (1, 2, 3) and 4, hence it keeps two totals, which is 6 and 4. Then next calls to Movingsum.push([10]) will update the total and hence we have the following totals: 6 and 14. Then Movingsum.push([20]) will update the total and hence we have 6 and 34. Now, the next call to Movingsum.push([10, 20, 30]) will hence have 3 totals calculated: 6, 34 and 60 etc. Hence I need to keep track of the running totals. I'm having trouble updating the total I have already calculated. My attempt: def __init__(self, window=3): self.window = window self.totals = set() self.count = 0 def push(self, nums: List[int]): total = 0 for num in nums: total += num self.count += 1 if self.count % 3 == 0: self.totals.add(total) self.count = 0 def belongs(self, total) -> bool: return total in self.totals where the belongs function needs to check if the total has already been calculated. I'm having trouble figuring out how to update the new totals. Thanks Moving sum needs to be calculated for 3 numbers before moving to next 3 etc test case: Start: nums = [1, 2] MovingSum.push(nums) Now total is 3 MovingSum.push([10, 20]) Now total is 13 and 20 (since on first push, we calculated total of two numbers which had value 3, but we need total of 3 numbers. 
Hence update total to 3 + 10 (which has 3 numbers), and since only one number remaining, we have two totals: 13 and 20 MovingSum.push([40]) update the total: 13, 60 (since total of 20 has only one number) MovingSum.push([100, 20, 30]) Update the total: 13, 160, 50 (3rd total is 20 + 30 which is two numbers) MovingSum.push([40, 100]) Update the total: 13, 160, 90, 100 (since 90 is sum of 20 + 30 + 40) and we have one number remaining which is 100.
[ "If I understand you correctly, you have a constant stream of numbers coming in, and you want the total of each n-item window (which I'll call a group) within that. So, you'll need a list of totals of all the groups so far, and you'll need to keep track of the running total of the items within the current (partial) group, as well as the number in the partial group.\nThe code below achieves this -- I'm sure it could be cleaner using features from itertools and the like, but I've tried to keep it simple so you can see what's going on.\nclass MovingSum:\n\n window_size: int\n totals: list[int]\n count: int\n\n def __init__(self, window_size: int = 3):\n self.window_size = window_size\n self.totals = []\n self.count = 0\n\n def push(self, nums: list[int]) -> None:\n offset = 0\n\n # Handle partial set as a special case, since the final total\n # needs incrementing rather than a new total being added.\n if self.count > 0:\n items = nums[:self.window_size - self.count]\n offset = len(items)\n self.totals[-1] += sum(items)\n self.count = (self.count + offset) % self.window_size\n\n # Iterate over the remaining items, and stop once we run out.\n # Whilst we have full groups, self.count will remain 0, but\n # the final group may be partial so we recalculate it each loop.\n while offset < len(nums):\n items = nums[offset:offset + self.window_size]\n self.totals.append(sum(items))\n self.count = len(items) % self.window_size\n offset += self.window_size\n\n def belongs(self, total: int) -> bool:\n if self.count > 0:\n return total in self.totals[:-1]\n else:\n return total in self.totals\n\nThe belongs() function above also excludes partial totals from the membership check, but your simpler version will work fine if you don't need this extra complication.\nAs an aside, your belongs() function will get quite slow as totals gets large. 
This won't be an issue for a few hundred items, but if you're in the tens of thousands or more, then a set() is a much more efficient way of checking for membership -- it will also handle de-duplication rather conveniently.\n" ]
[ 1 ]
[]
[]
[ "array_algorithms", "python" ]
stackoverflow_0074505125_array_algorithms_python.txt
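The accepted answer's bookkeeping can also be checked against the exact push sequence spelled out in the question. The sketch below is an independent minimal re-implementation (not the answer's code), and unlike the answer its `belongs` also matches the trailing partial total:

```python
class MovingSum:
    def __init__(self, window=3):
        self.window = window
        self.totals = []   # completed totals plus one trailing partial total
        self.count = 0     # how many numbers the trailing total holds so far

    def push(self, nums):
        for n in nums:
            if self.count == 0:        # previous group is full -> start a new one
                self.totals.append(0)
            self.totals[-1] += n
            self.count = (self.count + 1) % self.window

    def belongs(self, total):
        return total in self.totals

ms = MovingSum()
ms.push([1, 2]); ms.push([10, 20]); ms.push([40])
print(ms.totals)  # [13, 60]
ms.push([100, 20, 30]); ms.push([40, 100])
print(ms.totals)  # [13, 160, 90, 100]
```

The printed states match the question's worked test case step by step, including the partial final group of one number (100).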
Q: Why merging 2 data frames gives me one with triple the rows I have df1: x y no. 0 -17.7 -0.785430 y1 1 -15.0 -3820.085000 y4 2 -12.5 2.138833 y3 .. .... ........ .. 40 15.6 5.486901 y2 41 19.2 1.980686 y3 42 19.6 9.364718 y2 and df2: delta y x 0 0.053884 -17.7 1 0.085000 -15.0 2 0.143237 -12.5 .. ........ .... 40 0.113099 15.6 41 0.102245 19.2 42 0.235282 19.6 They both have 43 rows, and x column is exactly the same on both. Somehow when I merge them on x I get a df with 123 rows: x y no. delta y 0 -17.7 -0.785430 y1 0.053884 1 -15.0 -3820.085000 y4 0.085000 2 -12.5 2.138833 y3 0.143237 3 -12.4 1.721205 y3 0.251180 4 -12.1 2.227343 y2 0.127343 .. ... ... .. ... 118 12.1 1.642526 y3 0.143886 119 14.4 2576.435000 y4 0.171000 120 15.6 5.486901 y2 0.113099 121 19.2 1.980686 y3 0.102245 122 19.6 9.364718 y2 0.235282 My input: final = df1.merge(df2, on="x") x float64 y float64 no. object dtype: object delta y float64 x float64 dtype: object x float64 y float64 no. object dtype: object delta y float64 x float64 dtype: object x float64 y float64 no. 
object dtype: object delta y float64 x float64 dtype: object df1 = pd.DataFrame({'x': {0: -17.7, 1: -15.0, 2: -12.5, 3: -12.4, 4: -12.1, 5: -11.2, 6: -8.9, 7: -7.5, 8: -7.5, 9: -6.0, 10: -6.0, 11: -4.7, 12: -4.1, 13: -3.8, 14: -3.4, 15: -3.4, 16: -1.9, 17: -1.5, 18: -1.1, 19: -0.4, 20: -0.1, 21: 3.5, 22: 3.8, 23: 5.3, 24: 5.3, 25: 5.3, 26: 5.3, 27: 5.3, 28: 5.3, 29: 5.3, 30: 5.3, 31: 5.3, 32: 6.4, 33: 6.8, 34: 6.8, 35: 10.2, 36: 10.3, 37: 11.9, 38: 12.1, 39: 14.4, 40: 15.6, 41: 19.2, 42: 19.6}, 'y': {0: -0.7854295, 1: -3820.085, 2: 2.1388333, 3: 1.7212046, 4: 2.227343, 5: 0.04315967, 6: -0.9616607, 7: -1.9878536, 8: -0.52237016, 9: -283.27216, 10: -282.5332, 11: -0.4335017, 12: -1.1585577, 13: -0.008831219, 14: 848.92303, 15: -57.407845, 16: -9.010686, 17: -3.2473037, 18: 0.5536767, 19: 1.8351307, 20: 4.8347697, 21: -6.45842, 22: -1.5683812, 23: 0.9338831, 24: 0.9338831, 25: 97.65833, 26: 1.6500127, 27: 1.6500127, 28: 97.65833, 29: 97.65833, 30: 1.6500127, 31: 97.65833, 32: -3.655422, 33: 1.9058462, 34: 227.5592, 35: 857.7455, 36: -0.68584794, 37: 1.6785516, 38: 1.6425261, 39: 2576.435, 40: 5.4869013, 41: 1.9806856, 42: 9.364718}, 'no.': {0: 'y1', 1: 'y4', 2: 'y3', 3: 'y3', 4: 'y2', 5: 'y3', 6: 'y2', 7: 'y2', 8: 'y2', 9: 'y4', 10: 'y4', 11: 'y1', 12: 'y3', 13: 'y1', 14: 'y4', 15: 'y4', 16: 'y4', 17: 'y4', 18: 'y1', 19: 'y3', 20: 'y4', 21: 'y2', 22: 'y3', 23: 'y3', 24: 'y3', 25: 'y4', 26: 'y3', 27: 'y3', 28: 'y4', 29: 'y3', 30: 'y4', 31: 'y4', 32: 'y2', 33: 'y3', 34: 'y3', 35: 'y4', 36: 'y3', 37: 'y3', 38: 'y3', 39: 'y4', 40: 'y2', 41: 'y3', 42: 'y2'}}) df2 = pd.DataFrame({'delta y': {0: 0.05388353000000001, 1: 0.08500000000003638, 2: 0.14323679999999994, 3: 0.25117999999999996, 4: 0.12734299999999976, 5: 0.36285006000000003, 6: 0.13833930000000005, 7: 0.5121464, 8: 1.97762984, 9: 0.2721599999999853, 10: 0.4667999999999779, 11: 0.2692114, 12: 0.00890970000000002, 13: 0.314458351, 14: 906.34703, 15: 0.0161549999999977, 16: 0.06831400000000087, 17: 0.3723036999999998, 
18: 0.2988478, 19: 0.006991300000000145, 20: 0.14423030000000026, 21: 0.04157999999999973, 22: 0.013554200000000183, 23: 0.17486560000000007, 24: 0.17486560000000007, 25: 0.03866999999999621, 26: 0.541264, 27: 0.541264, 28: 0.03866999999999621, 29: 96.5495813, 30: 96.0469873, 31: 0.03866999999999621, 32: 0.05542200000000008, 33: 0.1670513, 34: 225.82040510000002, 35: 0.38250000000005, 36: 0.59580486, 37: 0.10641100000000003, 38: 0.14388610000000002, 39: 0.17099999999982174, 40: 0.11309869999999922, 41: 0.10224489999999986, 42: 0.23528199999999977}, 'x': {0: -17.7, 1: -15.0, 2: -12.5, 3: -12.4, 4: -12.1, 5: -11.2, 6: -8.9, 7: -7.5, 8: -7.5, 9: -6.0, 10: -6.0, 11: -4.7, 12: -4.1, 13: -3.8, 14: -3.4, 15: -3.4, 16: -1.9, 17: -1.5, 18: -1.1, 19: -0.4, 20: -0.1, 21: 3.5, 22: 3.8, 23: 5.3, 24: 5.3, 25: 5.3, 26: 5.3, 27: 5.3, 28: 5.3, 29: 5.3, 30: 5.3, 31: 5.3, 32: 6.4, 33: 6.8, 34: 6.8, 35: 10.2, 36: 10.3, 37: 11.9, 38: 12.1, 39: 14.4, 40: 15.6, 41: 19.2, 42: 19.6}}) final = df1.merge(df2, on="x") A: try the following: df1.join(df2) join is a column-wise left join pd.merge is a column-wise inner join pd.concat is a row-wise outer join pd.concat: takes Iterable arguments. Thus, it cannot take DataFrames directly (use [df,df2]) Dimensions of DataFrame should match along axis Join and pd.merge: can take DataFrame arguments ref: Merge two dataframes by index A: Try the following syntax and I encourage you to thoroughly read the official documentation of python, the link is given at the bottom. I think you might have different x values in df1 and df2 and they are not 100% identical. This could be perhaps because of the decimals. 
import pandas as pd left = pd.DataFrame( { "key": ["K0", "K1", "K2", "K3"], "A": ["A0", "A1", "A2", "A3"], "B": ["B0", "B1", "B2", "B3"], } ) right = pd.DataFrame( { "key": ["K0", "K1", "K2", "K3"], "C": ["C0", "C1", "C2", "C3"], "D": ["D0", "D1", "D2", "D3"], } ) result = pd.merge(left, right, on="key") Result Image Python Merge,Join, Concatenate Official Guide A: The problem is that x values are not unique, so the merge duplicates rows to get all of the combinations. In a simple example >>> import pandas as pd >>> df1=pd.DataFrame({"a":[1,2,3,2], "b":['a', 'b', 'c', 'd']}) >>> df2=pd.DataFrame({"a":[1,2,3,2], "c":['aa', 'bb', 'cc', 'dd']}) >>> df1.merge(df2, on='a') a b c 0 1 a aa 1 2 b bb 2 2 b dd 3 2 d bb 4 2 d dd 5 3 c cc 2 is not unique in the column and gets all of the combinations (notice b --> dd and d --> dd). In your case, the x column is identical in the two dataframes. This would also mean that indexes haven't changed and you could assign the columns you want to df1. df1["delta y"] = df2["delta y"]
Why merging 2 data frames gives me one with triple the rows
I have df1: x y no. 0 -17.7 -0.785430 y1 1 -15.0 -3820.085000 y4 2 -12.5 2.138833 y3 .. .... ........ .. 40 15.6 5.486901 y2 41 19.2 1.980686 y3 42 19.6 9.364718 y2 and df2: delta y x 0 0.053884 -17.7 1 0.085000 -15.0 2 0.143237 -12.5 .. ........ .... 40 0.113099 15.6 41 0.102245 19.2 42 0.235282 19.6 They both have 43 rows, and x column is exactly the same on both. Somehow when I merge them on x I get a df with 123 rows: x y no. delta y 0 -17.7 -0.785430 y1 0.053884 1 -15.0 -3820.085000 y4 0.085000 2 -12.5 2.138833 y3 0.143237 3 -12.4 1.721205 y3 0.251180 4 -12.1 2.227343 y2 0.127343 .. ... ... .. ... 118 12.1 1.642526 y3 0.143886 119 14.4 2576.435000 y4 0.171000 120 15.6 5.486901 y2 0.113099 121 19.2 1.980686 y3 0.102245 122 19.6 9.364718 y2 0.235282 My input: final = df1.merge(df2, on="x") x float64 y float64 no. object dtype: object delta y float64 x float64 dtype: object x float64 y float64 no. object dtype: object delta y float64 x float64 dtype: object x float64 y float64 no. object dtype: object delta y float64 x float64 dtype: object df1 = pd.DataFrame({'x': {0: -17.7, 1: -15.0, 2: -12.5, 3: -12.4, 4: -12.1, 5: -11.2, 6: -8.9, 7: -7.5, 8: -7.5, 9: -6.0, 10: -6.0, 11: -4.7, 12: -4.1, 13: -3.8, 14: -3.4, 15: -3.4, 16: -1.9, 17: -1.5, 18: -1.1, 19: -0.4, 20: -0.1, 21: 3.5, 22: 3.8, 23: 5.3, 24: 5.3, 25: 5.3, 26: 5.3, 27: 5.3, 28: 5.3, 29: 5.3, 30: 5.3, 31: 5.3, 32: 6.4, 33: 6.8, 34: 6.8, 35: 10.2, 36: 10.3, 37: 11.9, 38: 12.1, 39: 14.4, 40: 15.6, 41: 19.2, 42: 19.6}, 'y': {0: -0.7854295, 1: -3820.085, 2: 2.1388333, 3: 1.7212046, 4: 2.227343, 5: 0.04315967, 6: -0.9616607, 7: -1.9878536, 8: -0.52237016, 9: -283.27216, 10: -282.5332, 11: -0.4335017, 12: -1.1585577, 13: -0.008831219, 14: 848.92303, 15: -57.407845, 16: -9.010686, 17: -3.2473037, 18: 0.5536767, 19: 1.8351307, 20: 4.8347697, 21: -6.45842, 22: -1.5683812, 23: 0.9338831, 24: 0.9338831, 25: 97.65833, 26: 1.6500127, 27: 1.6500127, 28: 97.65833, 29: 97.65833, 30: 1.6500127, 31: 97.65833, 32: -3.655422, 
33: 1.9058462, 34: 227.5592, 35: 857.7455, 36: -0.68584794, 37: 1.6785516, 38: 1.6425261, 39: 2576.435, 40: 5.4869013, 41: 1.9806856, 42: 9.364718}, 'no.': {0: 'y1', 1: 'y4', 2: 'y3', 3: 'y3', 4: 'y2', 5: 'y3', 6: 'y2', 7: 'y2', 8: 'y2', 9: 'y4', 10: 'y4', 11: 'y1', 12: 'y3', 13: 'y1', 14: 'y4', 15: 'y4', 16: 'y4', 17: 'y4', 18: 'y1', 19: 'y3', 20: 'y4', 21: 'y2', 22: 'y3', 23: 'y3', 24: 'y3', 25: 'y4', 26: 'y3', 27: 'y3', 28: 'y4', 29: 'y3', 30: 'y4', 31: 'y4', 32: 'y2', 33: 'y3', 34: 'y3', 35: 'y4', 36: 'y3', 37: 'y3', 38: 'y3', 39: 'y4', 40: 'y2', 41: 'y3', 42: 'y2'}}) df2 = pd.DataFrame({'delta y': {0: 0.05388353000000001, 1: 0.08500000000003638, 2: 0.14323679999999994, 3: 0.25117999999999996, 4: 0.12734299999999976, 5: 0.36285006000000003, 6: 0.13833930000000005, 7: 0.5121464, 8: 1.97762984, 9: 0.2721599999999853, 10: 0.4667999999999779, 11: 0.2692114, 12: 0.00890970000000002, 13: 0.314458351, 14: 906.34703, 15: 0.0161549999999977, 16: 0.06831400000000087, 17: 0.3723036999999998, 18: 0.2988478, 19: 0.006991300000000145, 20: 0.14423030000000026, 21: 0.04157999999999973, 22: 0.013554200000000183, 23: 0.17486560000000007, 24: 0.17486560000000007, 25: 0.03866999999999621, 26: 0.541264, 27: 0.541264, 28: 0.03866999999999621, 29: 96.5495813, 30: 96.0469873, 31: 0.03866999999999621, 32: 0.05542200000000008, 33: 0.1670513, 34: 225.82040510000002, 35: 0.38250000000005, 36: 0.59580486, 37: 0.10641100000000003, 38: 0.14388610000000002, 39: 0.17099999999982174, 40: 0.11309869999999922, 41: 0.10224489999999986, 42: 0.23528199999999977}, 'x': {0: -17.7, 1: -15.0, 2: -12.5, 3: -12.4, 4: -12.1, 5: -11.2, 6: -8.9, 7: -7.5, 8: -7.5, 9: -6.0, 10: -6.0, 11: -4.7, 12: -4.1, 13: -3.8, 14: -3.4, 15: -3.4, 16: -1.9, 17: -1.5, 18: -1.1, 19: -0.4, 20: -0.1, 21: 3.5, 22: 3.8, 23: 5.3, 24: 5.3, 25: 5.3, 26: 5.3, 27: 5.3, 28: 5.3, 29: 5.3, 30: 5.3, 31: 5.3, 32: 6.4, 33: 6.8, 34: 6.8, 35: 10.2, 36: 10.3, 37: 11.9, 38: 12.1, 39: 14.4, 40: 15.6, 41: 19.2, 42: 19.6}}) final = df1.merge(df2, 
on="x")
[ "try the following: df1.join(df2)\njoin is a column-wise left join\npd.merge is a column-wise inner join\npd.concat is a row-wise outer join\npd.concat:\ntakes Iterable arguments. Thus, it cannot take DataFrames directly (use [df,df2])\nDimensions of DataFrame should match along axis\nJoin and pd.merge:\ncan take DataFrame arguments\nref: Merge two dataframes by index\n", "Try the following syntax and I encourage you to thoroughly read the official documentation of python, the link is given at the bottom.\nI think you might have different x values in df1 and df2 and they are not 100% identical. This could be perhaps because of the decimals.\nimport pandas as pd\n\nleft = pd.DataFrame(\n {\n \"key\": [\"K0\", \"K1\", \"K2\", \"K3\"],\n \"A\": [\"A0\", \"A1\", \"A2\", \"A3\"],\n \"B\": [\"B0\", \"B1\", \"B2\", \"B3\"],\n }\n )\n\n\nright = pd.DataFrame(\n {\n \"key\": [\"K0\", \"K1\", \"K2\", \"K3\"],\n \"C\": [\"C0\", \"C1\", \"C2\", \"C3\"],\n \"D\": [\"D0\", \"D1\", \"D2\", \"D3\"],\n }\n )\n \n\nresult = pd.merge(left, right, on=\"key\")\n\nResult Image\nPython Merge,Join, Concatenate Official Guide\n", "The problem is that x values are not unique, so the merge duplicates rows to get all of the combinations. In a simple example\n>>> import pandas as pd\n>>> df1=pd.DataFrame({\"a\":[1,2,3,2], \"b\":['a', 'b', 'c', 'd']})\n>>> df2=pd.DataFrame({\"a\":[1,2,3,2], \"c\":['aa', 'bb', 'cc', 'dd']})\n>>> df1.merge(df2, on='a')\n a b c\n0 1 a aa\n1 2 b bb\n2 2 b dd\n3 2 d bb\n4 2 d dd\n5 3 c cc\n\n2 is not unique in the column and gets all of the combinations (notice b --> dd and d --> dd).\nIn your case, the x column is identical in the two dataframes. This would also mean that indexes haven't changed and you could assign the columns you want to df1.\ndf1[\"delta y\"] = df2[\"delta y\"]\n\n" ]
[ 1, 1, 1 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074504496_dataframe_pandas_python.txt
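The duplicate-key blow-up described in the last answer is easy to reproduce on a toy frame (the values below are made up for illustration):

```python
import pandas as pd

df1 = pd.DataFrame({"x": [5.3, 5.3, 6.4], "y": [1.0, 2.0, 3.0]})
df2 = pd.DataFrame({"x": [5.3, 5.3, 6.4], "delta y": [0.1, 0.2, 0.3]})

# x == 5.3 appears twice in each frame, so the merge emits
# 2 * 2 = 4 combinations for that key plus 1 row for x == 6.4.
merged = df1.merge(df2, on="x")
print(len(merged))  # 5

# pandas can also fail fast on unexpected duplicates:
# df1.merge(df2, on="x", validate="one_to_one")  # would raise MergeError here

# When both frames share the same index and row order, assigning the
# column directly sidesteps key matching (and the row blow-up) entirely.
df1["delta y"] = df2["delta y"]
print(len(df1))  # 3
```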
Q: Python: Sort dictionary keys case-insensitive and return the dictionary in the same format unchanged I'm new to Python3 and I'm not fully aware of all its useful functions yet. I have the following dictionary: my_dict = {'david': ('18', 'Paris', '253-345-5434'), 'Joe': ('19', 'Dubai', '675-353-2345'), 'Luc': ('31', 'Istanbul', '766-673-3451')} the dictionary keys are strings and each key has a tuple value that contains (age, address, phone number) I tried the sorted method and it returned the following: ['Joe', 'Luc', 'david'] I want it to return ['david', 'Joe', 'Luc'] but I cannot find a way to do so. I need suggestions please! A: from collections import OrderedDict ... print( OrderedDict(sorted(my_dict.items())) )
Python: Sort dictionary keys case-insensitive and return the dictionary in the same format unchanged
I'm new to Python3 and I'm not fully aware of all its useful functions yet. I have the following dictionary: my_dict = {'david': ('18', 'Paris', '253-345-5434'), 'Joe': ('19', 'Dubai', '675-353-2345'), 'Luc': ('31', 'Istanbul', '766-673-3451')} the dictionary keys are strings and each key has a tuple value that contains (age, address, phone number) I tried the sorted method and it returned the following: ['Joe', 'Luc', 'david'] I want it to return ['david', 'Joe', 'Luc'] but I cannot find a way to do so. I need suggestions please!
[ "from collections import OrderedDict\n...\nprint(\n OrderedDict(sorted(my_dict.items()))\n)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074505148_python_python_3.x.txt
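Note that the answer above sorts case-sensitively, which still yields ['Joe', 'Luc', 'david']. To get the case-insensitive order the title asks for, pass a key function that lowercases the keys; since Python 3.7 a plain dict preserves insertion order, so OrderedDict is optional:

```python
my_dict = {'david': ('18', 'Paris', '253-345-5434'),
           'Joe': ('19', 'Dubai', '675-353-2345'),
           'Luc': ('31', 'Istanbul', '766-673-3451')}

# 'J' sorts before 'd' in plain string comparison, so compare lowercased keys.
sorted_dict = dict(sorted(my_dict.items(), key=lambda item: item[0].lower()))
print(list(sorted_dict))  # ['david', 'Joe', 'Luc']
```

The values (the tuples) ride along unchanged, so the dictionary keeps the same format.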
Q: getting error : ModuleNotFoundError: No module named 'trialrisk.urls' in python I'm new here in django python, right now I'm working with rest api, So I have created new app trialrisk, first i have added my app in settings.py file, After then when I am trying to add url in urls.py file I'm getting an error : ModuleNotFoundError: No module named 'trialrisk.urls' in python, Here I have added the whole code and my folder structure, Can anyone please look my code and help me to resolve this issue ? Folder Structure settings.py INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'trialrisk', ] urls.py from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), path('/', include('trialrisk.urls')) ] A: There is no urls.py file in trialrisk folder. Create the same in trialrisk folder and import from it. A: In addition to adding urls.py to trialrisk, you'll also have to add a urlpatterns object to it e.g. urlpatterns = [], or you'll get an error like this raise ImproperlyConfigured(msg.format(name=self.urlconf_name)) from e django.core.exceptions.ImproperlyConfigured: The included URLconf '<module 'trialrisk.urls' from '/pathtoyourproject/trialrisk/urls.py'>' does not appear to have any patterns in it. If you see the 'urlpatterns' variable with valid patterns in the file then the issue is probably caused by a circular import.
getting error : ModuleNotFoundError: No module named 'trialrisk.urls' in python
I'm new here in django python, right now I'm working with rest api, So I have created new app trialrisk, first i have added my app in settings.py file, After then when I am trying to add url in urls.py file I'm getting an error : ModuleNotFoundError: No module named 'trialrisk.urls' in python, Here I have added the whole code and my folder structure, Can anyone please look my code and help me to resolve this issue ? Folder Structure settings.py INSTALLED_APPS = [ 'django.contrib.admin', 'django.contrib.auth', 'django.contrib.contenttypes', 'django.contrib.sessions', 'django.contrib.messages', 'django.contrib.staticfiles', 'trialrisk', ] urls.py from django.contrib import admin from django.urls import path, include urlpatterns = [ path('admin/', admin.site.urls), path('/', include('trialrisk.urls')) ]
[ "There is no urls.py file in trialrisk folder. Create the same in trialrisk folder and import from it.\n", "In addition to adding urls.py to trialrisk, you'll also have to add a urlpatterns object to it e.g. urlpatterns = [], or you'll get an error like this\nraise ImproperlyConfigured(msg.format(name=self.urlconf_name)) from e django.core.exceptions.ImproperlyConfigured: The included URLconf '<module 'trialrisk.urls' from '/pathtoyourproject/trialrisk/urls.py'>' does not appear to have any patterns in it. If you see the 'urlpatterns' variable with valid patterns in the file then the issue is probably caused by a circular import.\n" ]
[ 1, 0 ]
[]
[]
[ "django", "python" ]
stackoverflow_0059099801_django_python.txt
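For completeness, a minimal trialrisk/urls.py that satisfies both answers (the view is a placeholder invented here so that urlpatterns is non-empty). Note also that Django route strings should not start with '/', so path('/', include(...)) in the project urls.py is better written as path('', include(...)):

```python
# trialrisk/urls.py -- sketch; the placeholder view keeps urlpatterns non-empty
from django.http import HttpResponse
from django.urls import path

def index(request):
    return HttpResponse("trialrisk")

urlpatterns = [
    path('', index, name='trialrisk-index'),
]
```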
Q: How can I have a query set in the DetailView? Field 'id' expected a number but got <django.db.models.fields.related_descriptors.ForwardManyToOneDescriptor object at 0x1024f3c70>. This is the error message and class ProductDetail(DetailView): model = Product def get_context_data(self, **kwargs): context = super(ProductDetail, self).get_context_data() context['related_products'] = Product.objects.filter(category=Product.category) context['categories'] = Category.objects.all() context['no_category_post_count'] = Product.objects.filter(category=None).count return context this is my views.py. A page that shows a product and related items is what I want to present. My questions are 1. Am I not allowed to bring a query set in the DetailView? 2. Then should I use ListView to do so? A: You access the object with self.object, so: class ProductDetail(DetailView): model = Product def get_context_data(self, *args, **kwargs): context = super().get_context_data(*args, **kwargs) context['related_products'] = Product.objects.filter( category_id=self.object.category_id ) context['categories'] = Category.objects.all() context['no_category_post_count'] = Product.objects.filter( category=None ).count() return context or perhaps shorter: class ProductDetail(DetailView): model = Product def get_context_data(self, *args, **kwargs): return super().get_context_data( *args, **kwargs, related_products=Product.objects.filter( category_id=self.object.category_id ), categories=Category.objects.all(), no_category_post_count=Product.objects.filter(category=None).count() )
How can I have a query set in the DetailView?
Field 'id' expected a number but got <django.db.models.fields.related_descriptors.ForwardManyToOneDescriptor object at 0x1024f3c70>. This is the error message and class ProductDetail(DetailView): model = Product def get_context_data(self, **kwargs): context = super(ProductDetail, self).get_context_data() context['related_products'] = Product.objects.filter(category=Product.category) context['categories'] = Category.objects.all() context['no_category_post_count'] = Product.objects.filter(category=None).count return context this is my views.py. A page that shows a product and related items is what I want to present. My questions are 1. Am I not allowed to bring a query set in the DetailView? 2. Then should I use ListView to do so?
[ "You access the object with self.object, so:\nclass ProductDetail(DetailView):\n model = Product\n\n def get_context_data(self, *args, **kwargs):\n context = super().get_context_data(*args, **kwargs)\n context['related_products'] = Product.objects.filter(\n category_id=self.object.category_id\n )\n context['categories'] = Category.objects.all()\n context['no_category_post_count'] = Product.objects.filter(\n category=None\n ).count()\n return context\nor perhaps shorter:\nclass ProductDetail(DetailView):\n model = Product\n\n def get_context_data(self, *args, **kwargs):\n return super().get_context_data(\n *args,\n **kwargs,\n related_products=Product.objects.filter(\n category_id=self.object.category_id\n ),\n categories=Category.objects.all(),\n no_category_post_count=Product.objects.filter(category=None).count()\n )\n" ]
[ 1 ]
[]
[]
[ "django", "django_views", "python" ]
stackoverflow_0074505326_django_django_views_python.txt
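One refinement worth noting: the related_products queryset in the answer will also contain the current product itself. Excluding it is a one-line change (a sketch against the same models):

```python
context['related_products'] = Product.objects.filter(
    category_id=self.object.category_id
).exclude(pk=self.object.pk)
```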
Q: group or unpivot df not considering empty values I have a df like this : PRODUCTNUMBER Jerarquía principal Jerarquía secundaria marcas COT Ecommerce dabra-catalog Dexter-ecommerce Stockcenter-ecommerce AD802309 Medias-Hombre ADIDAS 950699 NaN NaN NaN NaN AD481076 NaN Adidas 950699 NaN NaN NaN NaN AD481137 Medias-Hombre Adidas 950699 Medias-Hombre Medias-Hombre Medias-Hombre Medias-Hombre and I need to get this output: PRODUCTNUMBER PRODUCTCATEGORYNAME PRODUCTCATEGORYHIERARCHYNAME AD802309 Medias-Hombre Jerarquía principal AD802309 ADIDAS Jerarquía secundaria marcas AD802309 950699 COT AD481076 Adidas Jerarquía secundaria marcas AD481076 950699 COT AD481137 Medias-Hombre Jerarquía principal AD481137 Adidas Jerarquía secundaria marcas AD481137 950699 COT AD481137 Medias-Hombre Ecommerce AD481137 Medias-Hombre dabra-catalog AD481137 Medias-Hombre Dexter-ecommerce AD481137 Medias-Hombre Stockcenter-ecommerce is it possible? "NaN" values must not be transposed A: Try with melt out = df.melt('PRODUCTNUMBER', value_name='PRODUCTCATEGORYHIERARCHYNAME', var_name='PRODUCTCATEGORYNAME').dropna() Out[201]: PRODUCTNUMBER PRODUCTCATEGORYNAME PRODUCTCATEGORYHIERARCHYNAME 0 AD802309 Jerarquía principal Medias-Hombre 2 AD481137 Jerarquía principal Medias-Hombre 3 AD802309 Jerarquía secundaria marcas ADIDAS 4 AD481076 Jerarquía secundaria marcas Adidas 5 AD481137 Jerarquía secundaria marcas Adidas 6 AD802309 COT 950699 7 AD481076 COT 950699 8 AD481137 COT 950699 11 AD481137 Ecommerce Medias-Hombre 14 AD481137 dabra-catalog Medias-Hombre 17 AD481137 Dexter-ecommerce Medias-Hombre 20 AD481137 Stockcenter-ecommerce Medias-Hombre A: Try: df = ( df.set_index("PRODUCTNUMBER") .stack() .reset_index() .rename( columns={ 0: "PRODUCTCATEGORYNAME", "level_1": "PRODUCTCATEGORYHIERARCHYNAME", } ) ) df = df[["PRODUCTNUMBER", "PRODUCTCATEGORYNAME", "PRODUCTCATEGORYHIERARCHYNAME"]] print(df) Prints: PRODUCTNUMBER PRODUCTCATEGORYNAME PRODUCTCATEGORYHIERARCHYNAME 0 AD802309 Medias-Hombre Jerarquía 
principal 1 AD802309 ADIDAS Jerarquía secundaria marcas 2 AD802309 950699 COT 3 AD481076 Adidas Jerarquía secundaria marcas 4 AD481076 950699 COT 5 AD481137 Medias-Hombre Jerarquía principal 6 AD481137 Adidas Jerarquía secundaria marcas 7 AD481137 950699 COT 8 AD481137 Medias-Hombre Ecommerce 9 AD481137 Medias-Hombre dabra-catalog 10 AD481137 Medias-Hombre Dexter-ecommerce 11 AD481137 Medias-Hombre Stockcenter-ecommerce A: need simple example for answer Example data = {'A': {'a': 'val1', 'b': 'val3'}, 'B': {'a': None, 'b': 'val4'}, 'C': {'a': 'val2', 'b': None}} df = pd.DataFrame(data) output(df): A B C a val1 None val2 b val3 val4 None Code when unpivot by stack, we can drop null automatic df.stack().reset_index().set_axis(['col1', 'col2', 'col3'], axis=1) result: col1 col2 col3 0 a A val1 1 a C val2 2 b A val3 3 b B val4
group or unpivot df not considering empty values
I have a df like this : PRODUCTNUMBER Jerarquía principal Jerarquía secundaria marcas COT Ecommerce dabra-catalog Dexter-ecommerce Stockcenter-ecommerce AD802309 Medias-Hombre ADIDAS 950699 NaN NaN NaN NaN AD481076 NaN Adidas 950699 NaN NaN NaN NaN AD481137 Medias-Hombre Adidas 950699 Medias-Hombre Medias-Hombre Medias-Hombre Medias-Hombre and I need to get this output: PRODUCTNUMBER PRODUCTCATEGORYNAME PRODUCTCATEGORYHIERARCHYNAME AD802309 Medias-Hombre Jerarquía principal AD802309 ADIDAS Jerarquía secundaria marcas AD802309 950699 COT AD481076 Adidas Jerarquía secundaria marcas AD481076 950699 COT AD481137 Medias-Hombre Jerarquía principal AD481137 Adidas Jerarquía secundaria marcas AD481137 950699 COT AD481137 Medias-Hombre Ecommerce AD481137 Medias-Hombre dabra-catalog AD481137 Medias-Hombre Dexter-ecommerce AD481137 Medias-Hombre Stockcenter-ecommerce is it possible? "NaN" values must not be transposed
[ "Try with melt\nout = df.melt('PRODUCTNUMBER',\n value_name='PRODUCTCATEGORYHIERARCHYNAME',\n var_name='PRODUCTCATEGORYNAME').dropna()\nOut[201]: \n PRODUCTNUMBER PRODUCTCATEGORYNAME PRODUCTCATEGORYHIERARCHYNAME\n0 AD802309 Jerarquía principal Medias-Hombre\n2 AD481137 Jerarquía principal Medias-Hombre\n3 AD802309 Jerarquía secundaria marcas ADIDAS\n4 AD481076 Jerarquía secundaria marcas Adidas\n5 AD481137 Jerarquía secundaria marcas Adidas\n6 AD802309 COT 950699\n7 AD481076 COT 950699\n8 AD481137 COT 950699\n11 AD481137 Ecommerce Medias-Hombre\n14 AD481137 dabra-catalog Medias-Hombre\n17 AD481137 Dexter-ecommerce Medias-Hombre\n20 AD481137 Stockcenter-ecommerce Medias-Hombre\n\n", "Try:\ndf = (\n df.set_index(\"PRODUCTNUMBER\")\n .stack()\n .reset_index()\n .rename(\n columns={\n 0: \"PRODUCTCATEGORYNAME\",\n \"level_1\": \"PRODUCTCATEGORYHIERARCHYNAME\",\n }\n )\n)\n\ndf = df[[\"PRODUCTNUMBER\", \"PRODUCTCATEGORYNAME\", \"PRODUCTCATEGORYHIERARCHYNAME\"]]\nprint(df)\n\nPrints:\n PRODUCTNUMBER PRODUCTCATEGORYNAME PRODUCTCATEGORYHIERARCHYNAME\n0 AD802309 Medias-Hombre Jerarquía principal\n1 AD802309 ADIDAS Jerarquía secundaria marcas\n2 AD802309 950699 COT\n3 AD481076 Adidas Jerarquía secundaria marcas\n4 AD481076 950699 COT\n5 AD481137 Medias-Hombre Jerarquía principal\n6 AD481137 Adidas Jerarquía secundaria marcas\n7 AD481137 950699 COT\n8 AD481137 Medias-Hombre Ecommerce\n9 AD481137 Medias-Hombre dabra-catalog\n10 AD481137 Medias-Hombre Dexter-ecommerce\n11 AD481137 Medias-Hombre Stockcenter-ecommerce\n\n", "need simple example for answer\nExample\ndata = {'A': {'a': 'val1', 'b': 'val3'},\n 'B': {'a': None, 'b': 'val4'},\n 'C': {'a': 'val2', 'b': None}}\ndf = pd.DataFrame(data)\n\noutput(df):\n A B C\na val1 None val2\nb val3 val4 None\n\n\nCode\nwhen unpivot by stack, we can drop null automatic\ndf.stack().reset_index().set_axis(['col1', 'col2', 'col3'], axis=1)\n\nresult:\n col1 col2 col3\n0 a A val1\n1 a C val2\n2 b A val3\n3 b B val4\n\n" ]
[ 2, 1, 0 ]
[]
[]
[ "dataframe", "pandas", "pivot", "python", "unpivot" ]
stackoverflow_0074505019_dataframe_pandas_pivot_python_unpivot.txt
Q: Downloading or working with such a large dataset The size of this ML Competition dataset is very large. Here are some issues I am facing: My PC is not powerful enough to process and work with such a large dataset. My internet connection is not fast enough to download it. My drive has only 10 GB left, so I can't fetch this dataset with Colab either. I can't upload the dataset to Kaggle because of the 404 issues. So, basically, my question is how I should work with this kind of dataset, and of course more efficiently. I tried to create a dataset on Kaggle by giving it the URLs from the dataset link, but it showed: Unfortunately we could not create your dataset. Reason: An internal error occurred. A: Use a distributed system like the Apache Spark framework. PySpark and Dask are very efficient at handling big data.
Downloading or working with such a large dataset
The size of this ML Competition dataset is very large. Here are some issues I am facing: My PC is not powerful enough to process and work with such a large dataset. My internet connection is not fast enough to download it. My drive has only 10 GB left, so I can't fetch this dataset with Colab either. I can't upload the dataset to Kaggle because of the 404 issues. So, basically, my question is how I should work with this kind of dataset, and of course more efficiently. I tried to create a dataset on Kaggle by giving it the URLs from the dataset link, but it showed: Unfortunately we could not create your dataset. Reason: An internal error occurred.
[ "Use distributed system like Apache Spark framework. PySpark and Dask are very efficient to handle big data.\n" ]
[ 1 ]
[]
[]
[ "dataset", "kaggle", "machine_learning", "python" ]
stackoverflow_0074502068_dataset_kaggle_machine_learning_python.txt
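The answer above names the tools but stops there. The core idea behind Spark and Dask — process the data as a stream instead of loading it all into memory — can be illustrated with just the standard library. This is a minimal sketch; the `price` column and the tiny in-memory CSV are made up for demonstration and stand in for a multi-GB file opened with `open(path)`:

```python
import csv
import io

def stream_mean(rows, column):
    # Keep only a running total and count: memory use stays
    # constant no matter how many rows the file has.
    total = 0.0
    count = 0
    for row in rows:
        total += float(row[column])
        count += 1
    return total / count

# io.StringIO stands in for a huge file on disk.
data = io.StringIO("price\n10\n20\n30\n")
print(stream_mean(csv.DictReader(data), "price"))  # 20.0
```

The same pattern (one pass, constant memory) is what pandas' `chunksize` option and Dask's lazy dataframes automate for you.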
Q: Modify requirements.txt file to install from private repos on Heroku I have an app deployed on Heroku. I now have a lot of private repos that need to be included in the requirements.txt file. I set up my GitHub access token and need to put it in Heroku environment variables so it can be used in the requirements.txt file. I have already tried a lot to pass it, but it's not read by the file unless I hard-code it inside. What should be done to make this step as secure as possible? A: Choose a private repository For an organization and private libraries, you have only one option, no matter the language: an artifact repository. You need to deploy it and configure it. Push your private libraries. Create a user/password and configure them in the machine where you build your apps. You could also create another user for your developers. There are roles like write, read only, etc. I advise you: How to upload the python packages to Nexus sonartype private repo https://www.zepl.com/use-your-private-python-libraries-from-artifactory-in-zepl/ Download the private packages No matter the cloud (aws, gcp, heroku, etc), you only need to configure the credentials and url of your private repository using the shell or a config file. Here is an example: .pypirc: [distutils] index-servers = pypi [pypi] repository: https://nexus.your.domain/repository/pypi-hosted/ username: nexususername password: nexuspassword If you are worried about the credentials, you could do a simple automation to read them from an env variable or perform a replacement: password: ${PRIVATE_REPOSITORY_PASSWORD} password: <PRIVATE_REPOSITORY_PASSWORD> This generic approach (any private repository & any cloud) should work with github: https://truveris.github.io/articles/configuring-pypirc/ https://gist.github.com/NearHuscarl/90aa951dc970e8a6bd0ceba3d6846c14
Modify requirements.txt file to install from private repos on Heroku
I have an app deployed on Heroku. I now have a lot of private repos that need to be included in the requirements.txt file. I set up my GitHub access token and need to put it in Heroku environment variables so it can be used in the requirements.txt file. I have already tried a lot to pass it, but it's not read by the file unless I hard-code it inside. What should be done to make this step as secure as possible?
[ "Choose a private repository\nFor an organization and private libraries, you have only one option, no matter the language:\nAn artifact repository.\n\nYou need to deploy it and configure it\nPush your private libraries.\nCreate a user/password and configure them in the machine where yo build your apps. Also you could create another user for your developers. There are roles like write, read only, etc\n\nI advice you:\n\n\nHow to upload the python packages to Nexus sonartype private repo\nhttps://www.zepl.com/use-your-private-python-libraries-from-artifactory-in-zepl/\n\nDownload the private packages\nNo matter the cloud (aws, gcp, heroku, etc), you only need to configure the credentials and url of your private repository using the shell or a config file.\nHere an example:\n.pypirc:\n\n[distutils]\nindex-servers =\npypi\n[pypi]\nrepository: https://nexus.your.domain/repository/pypi-hosted/\nusername: nexususername\npassword: nexuspassword \n\nIf you are worried about the credentials, you could do a simple automation to read them from env variable or perform a replacement\npassword: ${PRIVATE_REPOSITORY_PASSWORD} \n\npassword: <PRIVATE_REPOSITORY_PASSWORD> \n\nThis generic approach (any private repository & any cloud) should work with github:\n\nhttps://truveris.github.io/articles/configuring-pypirc/\nhttps://gist.github.com/NearHuscarl/90aa951dc970e8a6bd0ceba3d6846c14\n\n" ]
[ 0 ]
[]
[]
[ "git", "github", "heroku", "python" ]
stackoverflow_0074505276_git_github_heroku_python.txt
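For the specific GitHub-token case in the question, it is worth noting that pip (version 10 and later) expands `${VAR}` placeholders in requirements files from the environment, so a Heroku config var can keep the token out of the committed file. A sketch — the repo path and the `GITHUB_TOKEN` variable name are placeholders, not values from the question:

```
# requirements.txt -- pip >= 10 expands ${GITHUB_TOKEN} at install time
git+https://${GITHUB_TOKEN}@github.com/your-org/your-private-repo.git
```

On Heroku the variable would be set with `heroku config:set GITHUB_TOKEN=...`; config vars should then be visible to pip during the build, so the token never appears in the repository itself.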
Q: after making a .exe file using auto-py-to-exe it didnt run I make a calculator. Now my desire to make a .exe file to use my python file. so I use auto-py-to-exe and convert my script to an EXE file. but when I run this file using mouse double click it didn't work. My calculator code: from tkinter import * root = Tk() root.title("Calculator") root.iconbitmap('miracle_logo_icon.ico') e = Entry(root, width=35, borderwidth=5) e.grid(row=0, columnspan=3, padx=10, pady=10) # e.insert(0, "Enter Your Name") def button_click(number): current = e.get() e.delete(0, END) e.insert(0, str(current) + str(number)) def button_clear(): e.delete(0, END) def button_add(): first_number = e.get() global f_num global math math="addition" f_num = int(first_number) e.delete(0, END) def button_equal(): second_number = e.get() e.delete(0, END) if math == "addition": e.insert(0, f_num + int(second_number)) if math == "subtraction": e.insert(0, f_num - int(second_number)) if math == "multiplication": e.insert(0, f_num * int(second_number)) if math == "division": e.insert(0, f_num / int(second_number)) def button_subtract(): first_number = e.get() global f_num global math math = "subtraction" f_num = int(first_number) e.delete(0, END) def button_multiply(): first_number = e.get() global f_num global math math = "multiplication" f_num = int(first_number) e.delete(0, END) def button_divide(): first_number = e.get() global f_num global math math = "division" f_num = int(first_number) e.delete(0, END) button_1 = Button(root, text="1", padx=40, pady=20, command=lambda: button_click(1)) button_2 = Button(root, text="2", padx=40, pady=20, command=lambda: button_click(2)) button_3 = Button(root, text="3", padx=40, pady=20, command=lambda: button_click(3)) button_4 = Button(root, text="4", padx=40, pady=20, command=lambda: button_click(4)) button_5 = Button(root, text="5", padx=40, pady=20, command=lambda: button_click(5)) button_6 = Button(root, text="6", padx=40, pady=20, command=lambda: 
button_click(6)) button_7 = Button(root, text="7", padx=40, pady=20, command=lambda: button_click(7)) button_8 = Button(root, text="8", padx=40, pady=20, command=lambda: button_click(8)) button_9 = Button(root, text="9", padx=40, pady=20, command=lambda: button_click(9)) button_0 = Button(root, text="0", padx=40, pady=20, command=lambda: button_click(0)) button_add = Button(root, text="+", padx=39, pady=20, command=button_add) button_equal = Button(root, text="=", padx=91, pady=20, command=button_equal) button_clear = Button(root, text="Clear", padx=79, pady=20, command=button_clear) button_subtract = Button(root, text="-", padx=41, pady=20, command=button_subtract) button_multiply = Button(root, text="*", padx=40, pady=20, command=button_multiply) button_divide = Button(root, text="/", padx=41, pady=20, command=button_divide) button_1.grid(row=3, column=0) button_2.grid(row=3, column=1) button_3.grid(row=3, column=2) button_4.grid(row=2, column=0) button_5.grid(row=2, column=1) button_6.grid(row=2, column=2) button_7.grid(row=1, column=0) button_8.grid(row=1, column=1) button_9.grid(row=1, column=2) button_0.grid(row=4, column=0) button_clear.grid(row=4, column=1, columnspan=2) button_add.grid(row=5, column=0) button_equal.grid(row=5, column=1, columnspan=2) button_subtract.grid(row=6, column=0) button_multiply.grid(row=6, column=1) button_divide.grid(row=6, column=2) root.mainloop() My code work when I run the script. Folder After Converting. When I am using one file and run it. I am getting this virus error. I am a totally new user of auto-py-to-exe. A: Open Setting/Update & Security/Windows Security Then Go to "Virus & threat protection" then click on "Protection history".You will see here the list of threats removed by Windows Defender. Search your file name and then "Allow" the threat from here. This will add your exe to the "Allowed Threats" section and then open your exe. It will work. 
If it doesn't work, turn off the "Real-Time Protection" setting from the "Virus and Threat Protection setting". If it still doesn't work then Open Command Prompt as an Administrator. Then type these two below codes and hit enter after each code. sfc /SCANFILE=c:\windows\explorer.exe sfc /SCANFILE=C:\Windows\SysWow64\explorer.exe
after making a .exe file using auto-py-to-exe it didn't run
I make a calculator. Now my desire to make a .exe file to use my python file. so I use auto-py-to-exe and convert my script to an EXE file. but when I run this file using mouse double click it didn't work. My calculator code: from tkinter import * root = Tk() root.title("Calculator") root.iconbitmap('miracle_logo_icon.ico') e = Entry(root, width=35, borderwidth=5) e.grid(row=0, columnspan=3, padx=10, pady=10) # e.insert(0, "Enter Your Name") def button_click(number): current = e.get() e.delete(0, END) e.insert(0, str(current) + str(number)) def button_clear(): e.delete(0, END) def button_add(): first_number = e.get() global f_num global math math="addition" f_num = int(first_number) e.delete(0, END) def button_equal(): second_number = e.get() e.delete(0, END) if math == "addition": e.insert(0, f_num + int(second_number)) if math == "subtraction": e.insert(0, f_num - int(second_number)) if math == "multiplication": e.insert(0, f_num * int(second_number)) if math == "division": e.insert(0, f_num / int(second_number)) def button_subtract(): first_number = e.get() global f_num global math math = "subtraction" f_num = int(first_number) e.delete(0, END) def button_multiply(): first_number = e.get() global f_num global math math = "multiplication" f_num = int(first_number) e.delete(0, END) def button_divide(): first_number = e.get() global f_num global math math = "division" f_num = int(first_number) e.delete(0, END) button_1 = Button(root, text="1", padx=40, pady=20, command=lambda: button_click(1)) button_2 = Button(root, text="2", padx=40, pady=20, command=lambda: button_click(2)) button_3 = Button(root, text="3", padx=40, pady=20, command=lambda: button_click(3)) button_4 = Button(root, text="4", padx=40, pady=20, command=lambda: button_click(4)) button_5 = Button(root, text="5", padx=40, pady=20, command=lambda: button_click(5)) button_6 = Button(root, text="6", padx=40, pady=20, command=lambda: button_click(6)) button_7 = Button(root, text="7", padx=40, pady=20, 
command=lambda: button_click(7)) button_8 = Button(root, text="8", padx=40, pady=20, command=lambda: button_click(8)) button_9 = Button(root, text="9", padx=40, pady=20, command=lambda: button_click(9)) button_0 = Button(root, text="0", padx=40, pady=20, command=lambda: button_click(0)) button_add = Button(root, text="+", padx=39, pady=20, command=button_add) button_equal = Button(root, text="=", padx=91, pady=20, command=button_equal) button_clear = Button(root, text="Clear", padx=79, pady=20, command=button_clear) button_subtract = Button(root, text="-", padx=41, pady=20, command=button_subtract) button_multiply = Button(root, text="*", padx=40, pady=20, command=button_multiply) button_divide = Button(root, text="/", padx=41, pady=20, command=button_divide) button_1.grid(row=3, column=0) button_2.grid(row=3, column=1) button_3.grid(row=3, column=2) button_4.grid(row=2, column=0) button_5.grid(row=2, column=1) button_6.grid(row=2, column=2) button_7.grid(row=1, column=0) button_8.grid(row=1, column=1) button_9.grid(row=1, column=2) button_0.grid(row=4, column=0) button_clear.grid(row=4, column=1, columnspan=2) button_add.grid(row=5, column=0) button_equal.grid(row=5, column=1, columnspan=2) button_subtract.grid(row=6, column=0) button_multiply.grid(row=6, column=1) button_divide.grid(row=6, column=2) root.mainloop() My code work when I run the script. Folder After Converting. When I am using one file and run it. I am getting this virus error. I am a totally new user of auto-py-to-exe.
[ "Open Setting/Update & Security/Windows Security\nThen Go to \"Virus & threat protection\" then click on \"Protection history\".You will see here the list of threats removed by Windows Defender. Search your file name and then \"Allow\" the threat from here. This will add your exe to the \"Allowed Threats\" section and then open your exe. It will work.\nIf it doesn't work, turn off the \"Real-Time Protection\" setting from the \"Virus and Threat Protection setting\".\nIf it still doesn't work then Open Command Prompt as an Administrator. Then type these two below codes and hit enter after each code.\nsfc /SCANFILE=c:\\windows\\explorer.exe\n\nsfc /SCANFILE=C:\\Windows\\SysWow64\\explorer.exe\n\n" ]
[ 1 ]
[ "I had the same result. Auto-py-to-exe is clearly a hack. It was written Trojans on windows defender. JUST DON'T USE IT\n" ]
[ -1 ]
[ "auto_py_to_exe", "python" ]
stackoverflow_0067930573_auto_py_to_exe_python.txt
Q: Python Anaconda interpreter is in a Conda environment, but the environment has not been activated I have been using a working Anaconda install (Python 3.7) for about a year, but suddenly I'm getting this warning when I run the interpreter: > python Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32 Warning: This Python interpreter is in a conda environment, but the environment has not been activated. Libraries may fail to load. To activate this environment please see https://conda.io/activation Type "help", "copyright", "credits" or "license" for more information. >>> I quite often use virtual environments, but never with Conda. Note that I've been able to run Python from the command line with just python for a long time now, and have never had to use conda activate base. I don't even have Conda on my path. I've found these answers, but neither gives any clarity into why this may have started happening: CMD warning: "Python interpreter is in a conda environment, but the environment has not been activated" Python is in a Conda environment, but it has not been activated in a Windows virtual environment A: If you receive this warning, you need to activate your environment. To do so on Windows, use the Anaconda Prompt shortcut in your Windows start menu. If you have an existing cmd.exe session that you'd like to activate conda in, run: call <your anaconda/miniconda install location>\Scripts\activate base. A: I had the same problem; by following this post conda-is-not-recognized-as-internal-or-external-command, I was able to solve it. The reason may be that your default Python interpreter has been switched to the Conda Python (e.g. on my Windows 10, the path is C:\Users\Xiang\anaconda3\python.exe). Therefore, we need to add the Conda-related path to the environment Path, with details explained in the link.
Python Anaconda interpreter is in a Conda environment, but the environment has not been activated
I have been using a working Anaconda install (Python 3.7) for about a year, but suddenly I'm getting this warning when I run the interpreter: > python Python 3.7.3 (default, Mar 27 2019, 17:13:21) [MSC v.1915 64 bit (AMD64)] :: Anaconda, Inc. on win32 Warning: This Python interpreter is in a conda environment, but the environment has not been activated. Libraries may fail to load. To activate this environment please see https://conda.io/activation Type "help", "copyright", "credits" or "license" for more information. >>> I quite often use virtual environments, but never with Conda. Note that I've been able to run Python from the command line with just python for a long time now, and have never had to use conda activate base. I don't even have Conda on my path. I've found these answers, but neither gives any clarity into why this may have started happening: CMD warning: "Python interpreter is in a conda environment, but the environment has not been activated" Python is in a Conda environment, but it has not been activated in a Windows virtual environment
[ "If you receive this warning, you need to activate your environment. To do so on Windows, use the Anaconda Prompt shortcut in your Windows start menu. If you have an existing cmd.exe session that you’d like to activate conda in run:\ncall <your anaconda/miniconda install location>\\Scripts\\activate base.\n", "I have the same problem, by following this post conda-is-not-recognized-as-internal-or-external-command, I am able to solve the problem.\nThe reason may be that your default Python interpreter has been switch to the the Conda python (e.g. on my Wondows 10, the path is C:\\Users\\Xiang\\anaconda3\\python.exe). Therefore, we need to add the Conda related path to the Environments Path, with details explained in the link.\n" ]
[ 1, 0 ]
[]
[]
[ "anaconda", "python", "python_3.x" ]
stackoverflow_0062333071_anaconda_python_python_3.x.txt
Q: How can I shuffle the values of the cards and print 2 hands? Im trying to shuffle the cards, and from the shuffled deck print out 2 hands like in poker (so 10 cards total). but rather than connecting it to the original code itself i made a seperate block that'll shuffle and get the 2 hands and dont know how to connect it to the original code. need to shuffle whats below and get two hands of the cards dCardNames = ['2','3','4','5','6','7','8','9','10','J','Q','K','A'] dCardValues = ['2','3','4','5','6','7','8','9','10','11','12','13','14'] dSuits = ["Clubs","Spades","Diamonds","Hearts"] # Build a two dimensional deck with Cards suits and values. aCards = [['' for i in range(52)] for j in range(3)] i = 0 n = 0 while i < 13: aCards[0][i] = dCardNames[i] aCards[0][i + 13] = dCardNames[i] aCards[0][i + 26] = dCardNames[i] aCards[0][i + 39] = dCardNames[i] aCards[1][i] = dSuits[0] aCards[1][i + 13] = dSuits[1] aCards[1][i + 26] = dSuits[2] aCards[1][i + 39] = dSuits[3] aCards[2][i] = dCardValues[i] aCards[2][i + 13] = dCardValues[i] aCards[2][i + 26] = dCardValues[i] aCards[2][i + 39] = dCardValues[i] i = i + 1 i = 0 while i < 52: print (aCards[0][i], " ", aCards[1][i], " ", aCards[2][i]) i = i + 1 ^thats the original code import random hands = {} card_values = {1:"1", 2:"2", 3: "3", 4: "4", 5: "5", 6: "6", 7: "7", 8: "8", 9: "9", 10: "10", 11: "J", 12: "Q", 13: "K", 14: "A"} card_types = {1: "Spades", 2: "Hearts", 3: "Diamonds", 4: "Clubs"} deck = [] for i_type in range(1,5): for i_value in range(1, 15): deck.append(card_types[i_type] + " " + card_values[i_value]) # Could be handled as inputs #hands_amt = int(input("How many players?: ")) #cards_per_hand = int(input("How many cards per player?: ")) #or set value hands_amt = 2 cards_per_hand = 5 for i_hands in range(1, hands_amt+1): my_cards = [] for i_cardamt in range(1, cards_per_hand + 1): my_card = random.choice(deck) my_cards.append(my_card) deck.remove(my_card) hands[i_hands] = my_cards print(hands) this is the 
code i made to shuffle the cards. I know i made a comepletely new set that's nothing to do witht the one on top. how would I connect the last block to the one on top ? my expected output is: 3 Hearts 3 4 Hearts 4 5 Hearts 5 6 Hearts 6 7 Hearts 7 8 Hearts 8 9 Hearts 9 10 Hearts 10 J Hearts 11 Q Hearts 12 K Hearts 13 A Hearts 14 #piece below should be randomized Hand 1: Hearts 1 Clubs J Diamonds 3 Diamonds J Clubs 1 Hand 2 Diamonds 5 Clubs K Spades 4 Clubs 3 Clubs 6 this is what i normally get 2 Clubs 2 3 Clubs 3 4 Clubs 4 5 Clubs 5 6 Clubs 6 7 Clubs 7 8 Clubs 8 9 Clubs 9 10 Clubs 10 J Clubs 11 Q Clubs 12 K Clubs 13 A Clubs 14 2 Spades 2 3 Spades 3 4 Spades 4 5 Spades 5 6 Spades 6 7 Spades 7 8 Spades 8 9 Spades 9 10 Spades 10 J Spades 11 Q Spades 12 K Spades 13 A Spades 14 2 Diamonds 2 3 Diamonds 3 4 Diamonds 4 5 Diamonds 5 6 Diamonds 6 7 Diamonds 7 8 Diamonds 8 9 Diamonds 9 10 Diamonds 10 J Diamonds 11 Q Diamonds 12 K Diamonds 13 A Diamonds 14 2 Hearts 2 3 Hearts 3 4 Hearts 4 5 Hearts 5 6 Hearts 6 7 Hearts 7 8 Hearts 8 9 Hearts 9 10 Hearts 10 J Hearts 11 Q Hearts 12 K Hearts 13 A Hearts 14 {1: ['Hearts K', 'Diamonds 9', 'Hearts 6', 'Hearts 5', 'Clubs 9'], 2: ['Hearts 1', 'Diamonds 1', 'Hearts Q', 'Diamonds A', 'Diamonds 4']} A: I am not completely clear on your question, but as you told you are learning python, I decided to help you with some implementation that, as I hope, could inspire you and motivate to learn new coding concepts and idioms. Some of the features I use here are: enums, dataclasses, itertools, overriding __repr__, fstrings and slice syntax. Check the comments for additional hints. 
from dataclasses import dataclass from enum import Enum from itertools import product, starmap from random import shuffle # an enum is a special class that you can use when you want to # limit the instances to specific values Suit = Enum('Suit', {'Clubs':'♣','Spades':'♠','Diamonds':'♦','Hearts':'♥'}) # by starting at 2, values are incremental up to 14 Pip = Enum('Pip', ['2','3','4','5','6','7','8','9','10','J','Q','K','A'], start=2) # the nice thing of dataclasses, is that it automatically implements # most of the class behavior, including constructors @dataclass class Card: suit: Suit pip: Pip # overriding __repr__ to make it print like 2♣ def __repr__(self): return f'{self.pip.name}{self.suit.value}' # calculate the value of the card based on the pip def value(self): return self.pip.value # product makes every possible combination of suits and pips deck = list(starmap(Card, product(Suit, Pip))) print(*(f'{card}: {card.value()}' for card in deck), sep='\n') ncards, nhands = 5, 2 shuffle(deck) # this line is all you need to deals the hands hands = [deck[i:i+ncards] for i in range(0, ncards*nhands, ncards)] print(*hands, sep='\n')
How can I shuffle the values of the cards and print 2 hands?
Im trying to shuffle the cards, and from the shuffled deck print out 2 hands like in poker (so 10 cards total). but rather than connecting it to the original code itself i made a seperate block that'll shuffle and get the 2 hands and dont know how to connect it to the original code. need to shuffle whats below and get two hands of the cards dCardNames = ['2','3','4','5','6','7','8','9','10','J','Q','K','A'] dCardValues = ['2','3','4','5','6','7','8','9','10','11','12','13','14'] dSuits = ["Clubs","Spades","Diamonds","Hearts"] # Build a two dimensional deck with Cards suits and values. aCards = [['' for i in range(52)] for j in range(3)] i = 0 n = 0 while i < 13: aCards[0][i] = dCardNames[i] aCards[0][i + 13] = dCardNames[i] aCards[0][i + 26] = dCardNames[i] aCards[0][i + 39] = dCardNames[i] aCards[1][i] = dSuits[0] aCards[1][i + 13] = dSuits[1] aCards[1][i + 26] = dSuits[2] aCards[1][i + 39] = dSuits[3] aCards[2][i] = dCardValues[i] aCards[2][i + 13] = dCardValues[i] aCards[2][i + 26] = dCardValues[i] aCards[2][i + 39] = dCardValues[i] i = i + 1 i = 0 while i < 52: print (aCards[0][i], " ", aCards[1][i], " ", aCards[2][i]) i = i + 1 ^thats the original code import random hands = {} card_values = {1:"1", 2:"2", 3: "3", 4: "4", 5: "5", 6: "6", 7: "7", 8: "8", 9: "9", 10: "10", 11: "J", 12: "Q", 13: "K", 14: "A"} card_types = {1: "Spades", 2: "Hearts", 3: "Diamonds", 4: "Clubs"} deck = [] for i_type in range(1,5): for i_value in range(1, 15): deck.append(card_types[i_type] + " " + card_values[i_value]) # Could be handled as inputs #hands_amt = int(input("How many players?: ")) #cards_per_hand = int(input("How many cards per player?: ")) #or set value hands_amt = 2 cards_per_hand = 5 for i_hands in range(1, hands_amt+1): my_cards = [] for i_cardamt in range(1, cards_per_hand + 1): my_card = random.choice(deck) my_cards.append(my_card) deck.remove(my_card) hands[i_hands] = my_cards print(hands) this is the code i made to shuffle the cards. 
I know i made a comepletely new set that's nothing to do witht the one on top. how would I connect the last block to the one on top ? my expected output is: 3 Hearts 3 4 Hearts 4 5 Hearts 5 6 Hearts 6 7 Hearts 7 8 Hearts 8 9 Hearts 9 10 Hearts 10 J Hearts 11 Q Hearts 12 K Hearts 13 A Hearts 14 #piece below should be randomized Hand 1: Hearts 1 Clubs J Diamonds 3 Diamonds J Clubs 1 Hand 2 Diamonds 5 Clubs K Spades 4 Clubs 3 Clubs 6 this is what i normally get 2 Clubs 2 3 Clubs 3 4 Clubs 4 5 Clubs 5 6 Clubs 6 7 Clubs 7 8 Clubs 8 9 Clubs 9 10 Clubs 10 J Clubs 11 Q Clubs 12 K Clubs 13 A Clubs 14 2 Spades 2 3 Spades 3 4 Spades 4 5 Spades 5 6 Spades 6 7 Spades 7 8 Spades 8 9 Spades 9 10 Spades 10 J Spades 11 Q Spades 12 K Spades 13 A Spades 14 2 Diamonds 2 3 Diamonds 3 4 Diamonds 4 5 Diamonds 5 6 Diamonds 6 7 Diamonds 7 8 Diamonds 8 9 Diamonds 9 10 Diamonds 10 J Diamonds 11 Q Diamonds 12 K Diamonds 13 A Diamonds 14 2 Hearts 2 3 Hearts 3 4 Hearts 4 5 Hearts 5 6 Hearts 6 7 Hearts 7 8 Hearts 8 9 Hearts 9 10 Hearts 10 J Hearts 11 Q Hearts 12 K Hearts 13 A Hearts 14 {1: ['Hearts K', 'Diamonds 9', 'Hearts 6', 'Hearts 5', 'Clubs 9'], 2: ['Hearts 1', 'Diamonds 1', 'Hearts Q', 'Diamonds A', 'Diamonds 4']}
[ "I am not completely clear on your question, but as you told you are learning python, I decided to help you with some implementation that, as I hope, could inspire you and motivate to learn new coding concepts and idioms.\nSome of the features I use here are: enums, dataclasses, itertools, overriding __repr__, fstrings and slice syntax.\nCheck the comments for additional hints.\nfrom dataclasses import dataclass\nfrom enum import Enum\nfrom itertools import product, starmap\nfrom random import shuffle\n\n# an enum is a special class that you can use when you want to \n# limit the instances to specific values\nSuit = Enum('Suit', {'Clubs':'♣','Spades':'♠','Diamonds':'♦','Hearts':'♥'})\n# by starting at 2, values are incremental up to 14\nPip = Enum('Pip', ['2','3','4','5','6','7','8','9','10','J','Q','K','A'], start=2)\n\n# the nice thing of dataclasses, is that it automatically implements \n# most of the class behavior, including constructors\n@dataclass\nclass Card:\n suit: Suit\n pip: Pip\n # overriding __repr__ to make it print like 2♣\n def __repr__(self):\n return f'{self.pip.name}{self.suit.value}'\n # calculate the value of the card based on the pip\n def value(self):\n return self.pip.value\n\n# product makes every possible combination of suits and pips\ndeck = list(starmap(Card, product(Suit, Pip)))\nprint(*(f'{card}: {card.value()}' for card in deck), sep='\\n')\n\nncards, nhands = 5, 2\nshuffle(deck)\n# this line is all you need to deals the hands\nhands = [deck[i:i+ncards] for i in range(0, ncards*nhands, ncards)]\nprint(*hands, sep='\\n')\n\n" ]
[ 0 ]
[]
[]
[ "loops", "python", "shuffle" ]
stackoverflow_0074505074_loops_python_shuffle.txt
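To connect the two blocks in the question above, the simplest route is to rebuild the deck as one flat list of (name, suit, value) tuples, shuffle that single list, and deal by slicing. This is a sketch of one possible structure, not the only one:

```python
import random

suits = ["Clubs", "Spades", "Diamonds", "Hearts"]
names = ['2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K', 'A']

# One flat deck of (name, suit, value) tuples instead of three
# parallel rows -- shuffling one list keeps all three fields together.
deck = [(name, suit, value)
        for suit in suits
        for value, name in enumerate(names, start=2)]
random.shuffle(deck)

cards_per_hand, num_hands = 5, 2
# Deal by slicing consecutive blocks off the top of the shuffled deck.
hands = [deck[i * cards_per_hand:(i + 1) * cards_per_hand]
         for i in range(num_hands)]
for n, hand in enumerate(hands, start=1):
    print(f"Hand {n}:")
    for name, suit, value in hand:
        print(name, suit, value)
```

Because the two hands are non-overlapping slices of one shuffled list, no card can appear twice, which `random.choice` plus `remove` only achieves with extra bookkeeping.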
Q: How to create a function that converts month values into quarter using if statement in python I need to create a function called as convert_to_qtr() that converts monthly values in the month value of data frame into quarters. Given below is the month data frame below:- In the convert_to_qtr() function, we should use the following if conditions:- • If the month input is Jan-Mar, then the function returns “Q1” • If the month input is Apr-Jun, then the function returns “Q2” • If the month input is Jul-Sep, then the function returns “Q3” • If the month input is Oct-Dec, then the function returns “Q4” Then this function should be applied to Month Dataframe provided above and a new column called as Quarter should be created that contains the quarter of each observations of months(January, Feb) etc it is aligned to . quarter = 0 excl_merged['quarter'] = excl_merged[quarter] excl_merged def convert_to_quarterly(excl_merged): if excl_merged['Month'] == 'January' & excl_merged['Month'] == 'February' & excl_merged['Month'] == 'March': print(excl_merged[quarter] == 'Q1') elif excl_merged['Month'] == 'April' & excl_merged['Month'] == 'May' & excl_merged['Month'] == 'June': print(excl_merged[quarter] == 'Q2') elif excl_merged['Month'] == 'July' & excl_merged['Month'] == 'August' & excl_merged['Month'] == 'September': print(excl_merged[quarter] == 'Q3') else: print(excl_merged[quarter] == 'Q4') convert_to_quarterly(excl_merged) I was not able to run the function properly and hence was getting errors A: def convert_to_quarter(month): months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December'] return 'Q' + str(months.index(month) // 3 + 1) A: Try the following: def convert_to_quarterly(excl_merged): if excl_merged['Month'] in ['January', 'February', "March"]: excl_merged[quarter] == 'Q1' elif excl_merged['Month'] in ["April", "May", "June"]: excl_merged[quarter] == 'Q2' elif excl_merged['Month'] in ['July', 
'August', 'September']: excl_merged[quarter] == 'Q3' elif excl_merged["Month"] in ["October", "November", "December"]: excl_merged[quarter] == 'Q4' else: print("Unknown month name!") The main problem is that you are using an and statement. A month can't be "January" and "February" at the same time. I would also recommend using brackets around the single bool operations when using the & or | operator. Lastly, I would recommend using the in operator to test against all three values at once. It should be faster and the code is much easier to read. A: Wouldn't it be easier to do something like: df.Transaction_Timestamp.apply(lambda x: "Q" + str(x.quarter)) Example import pandas as pd import numpy as np rng = np.random.default_rng() df = pd.DataFrame({ "Transaction_Timestamp":pd.date_range("2022-01-01", periods=365), "Value":rng.integers(0, 100, size=365) }) df["Qrt"] = df.Transaction_Timestamp.apply(lambda x: "Q" + str(x.quarter)) df.head() Transaction_Timestamp Value Qrt 0 2022-01-01 84 Q1 1 2022-01-02 43 Q1 2 2022-01-03 91 Q1 3 2022-01-04 29 Q1 4 2022-01-05 88 Q1 A: You need to do two things: Create the function convert_to_qtr() that takes in a month (January, February, etc.) and returns the associated quarter. So if the month is January, return 1, if December, return 4, etc. For a month to be in the first quarter, for example, the month could be January or February or March. A month cannot be January and February and March at the same time, which is what your code is currently checking for. This function should also be taking in a month instead of a dataframe. Apply this function to the Month column in your dataframe, and store the result in a new column called Quarter. You can do something like: df['Quarter'] = df.Month.apply(lambda month: convert_to_qtr(month)). This is saying: look at the month column, df.Month. Then, call the convert_to_qtr function on each value in the month column. The result is then stored as a new column in your dataframe, Quarter. 
A: you can use a map function in pandas. You need a dictionary that maps the months into quarters. import pandas as pd # a sorted list of months list_of_months = ['Jan','Feb','Mar','Apr','May','Jun','Jul','Aug','Sep','Oct','Nov','Dec'] # creating a dictionary with the months and quarters d = {} for i, month in enumerate(list_of_months): d[month] = 'Q' + str(i//3+1) # example dataframe df = pd.DataFrame(['Jan','Dec','Mar'],columns=['Month']) # applying map to the series df['Quarter'] = df['Month'].map(d) The result looks like: Month Quarter 0 Jan Q1 1 Dec Q4 2 Mar Q1
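Pulling the answers above together, a minimal working version of the requested convert_to_qtr() can be sketched in pure Python; the pandas apply step is shown as a comment since it needs the question's actual dataframe:

```python
months = ['January', 'February', 'March', 'April', 'May', 'June',
          'July', 'August', 'September', 'October', 'November', 'December']

def convert_to_qtr(month):
    # list index 0-2 -> Q1, 3-5 -> Q2, 6-8 -> Q3, 9-11 -> Q4
    return 'Q' + str(months.index(month) // 3 + 1)

print(convert_to_qtr('January'))   # Q1
print(convert_to_qtr('December'))  # Q4

# Applied to the dataframe from the question:
# excl_merged['Quarter'] = excl_merged['Month'].apply(convert_to_qtr)
```

The integer division replaces the whole if/elif chain, and because the function takes a single month string, it drops straight into `Series.apply` or `Series.map`.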
How to create a function that converts month values into quarter using if statement in python
I need to create a function called as convert_to_qtr() that converts monthly values in the month value of data frame into quarters. Given below is the month data frame below:- In the convert_to_qtr() function, we should use the following if conditions:- • If the month input is Jan-Mar, then the function returns “Q1” • If the month input is Apr-Jun, then the function returns “Q2” • If the month input is Jul-Sep, then the function returns “Q3” • If the month input is Oct-Dec, then the function returns “Q4” Then this function should be applied to Month Dataframe provided above and a new column called as Quarter should be created that contains the quarter of each observations of months(January, Feb) etc it is aligned to . quarter = 0 excl_merged['quarter'] = excl_merged[quarter] excl_merged def convert_to_quarterly(excl_merged): if excl_merged['Month'] == 'January' & excl_merged['Month'] == 'February' & excl_merged['Month'] == 'March': print(excl_merged[quarter] == 'Q1') elif excl_merged['Month'] == 'April' & excl_merged['Month'] == 'May' & excl_merged['Month'] == 'June': print(excl_merged[quarter] == 'Q2') elif excl_merged['Month'] == 'July' & excl_merged['Month'] == 'August' & excl_merged['Month'] == 'September': print(excl_merged[quarter] == 'Q3') else: print(excl_merged[quarter] == 'Q4') convert_to_quarterly(excl_merged) I was not able to run the function properly and hence was getting errors
[ "def convert_to_quarter( month):\n months = [ 'January', 'February', 'March', 'April ', 'May', 'June', \\\n 'July', 'August', 'September', 'October', 'November', 'December']\n return months.index[ 'month'] // 3\n\n", "Try the following:\ndef convert_to_quarterly(excl_merged):\n if excl_merged['Month'] in ['January', 'February', \"March\"]:\n excl_merged[quarter] == 'Q1'\n elif excl_merged['Month'] in [\"April\", \"May\", \"June\"]:\n excl_merged[quarter] == 'Q2'\n elif excl_merged['Month'] in ['July', 'August', 'September']:\n excl_merged[quarter] == 'Q3'\n elif excl_merged[\"Month\"] in [\"November\", \"December\", \"December\"]:\n excl_merged[quarter] == 'Q4'\n else:\n print(\"Unkown month name!\")\n\n\nThe main problem is that you are using an and statement.\nA month can't be \"Januar\" and \"Fabruary\".\nI would also recoment to use brackets when useing the & or | operator around the single bool operations.\nAt last i would recoment to use the in operator to test against all three values at one. It should be faster and the code is much easier to read.\n", "Wouldn't it be easier to do something like:\ndf.Transaction_Timestamp.apply(lambda x: \"Q\" + str(x.quarter))\n\n\nExample\nimport pandas as pd\nimport numpy as np\n\nrng = np.random.default_rng()\ndf = pd.DataFrame({\n \"Transaction_Timestamp\":pd.date_range(\"2022-01-01\", periods=365),\n \"Value\":rng.integers(0, 100, size=365)\n})\n\ndf[\"Qrt\"] = df.Transaction_Timestamp.apply(lambda x: \"Q\" + str(x.quarter))\n\ndf.head()\n\n Transaction_Timestamp Value Qrt\n0 2022-01-01 84 Q1\n1 2022-01-02 43 Q1\n2 2022-01-03 91 Q1\n3 2022-01-04 29 Q1\n4 2022-01-05 88 Q1\n\n", "You need to do two things:\n\nCreate the function convert_to_qtr() that takes in a month (January, February, etc.) and returns the associated quarter. So if the month is January, return 1, if December, return 4, etc. For a month to be in the first quarter, for example, the month could be January or February or March. 
A month cannot be January and February and March at the same time, which is what your code is currently checking for. This function should also be taking in a month instead of a dataframe.\nApply this function to the Month column in your dataframe, and store the result in a new column called Quarter. You can do something like: df['Quarter'] = df.Month.apply(lambda month: convert_to_qtr(month)). This is saying: look at the month column, df.Month. Then, call the convert_to_qtr function on each value in the month column. The result is then stored as a new column in your dataframe, Quarter.\n\n", "you can use a map function in pandas. You need a dictionary that maps the months into quarters.\nimport pandas as pd\n\n# an sorted list of months\nlist_of_months =['Jan','Feb','Mar','Apr','Jun','Jul','Aug','Sep','Oct','Nov','Dec']\n\n# creating a dictionary with the months and quarters\nd = {}\nfor i, month in enumerate(list_of_months):\n d[month] = 'Q' + str(i//3+1)\n\n# example dataframe\ndf = pd.DataFrame(['Jan','Dec','Mar'],columns=['Month'])\n\n# applying map to series\ndf['Month'].map(d)\n\nThe result looks like:\n Month Quarter\n0 Jan Q1\n1 Dec Q4\n2 Mar Q1\n\n" ]
[ 1, 0, 0, 0, 0 ]
[]
[]
[ "function", "pandas", "python" ]
stackoverflow_0074503136_function_pandas_python.txt
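Tying the answers above together, here is a minimal, self-contained sketch of the `convert_to_qtr()` function they describe: look the month name up in an ordered list and integer-divide its index by 3 (the sample months and error message are illustrative choices, not from the original post).

```python
# Ordered month names; index // 3 gives the zero-based quarter.
MONTHS = ['January', 'February', 'March', 'April', 'May', 'June',
          'July', 'August', 'September', 'October', 'November', 'December']

def convert_to_qtr(month):
    # Reject anything that is not a full month name.
    if month not in MONTHS:
        raise ValueError(f"Unknown month name: {month!r}")
    return 'Q' + str(MONTHS.index(month) // 3 + 1)

print(convert_to_qtr('January'))   # Q1
print(convert_to_qtr('December'))  # Q4
```

Applied to a dataframe as the answers suggest: `df['Quarter'] = df['Month'].apply(convert_to_qtr)`.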
Q: TypeError: descriptor 'append' for 'list' objects doesn't apply to a 'str' object. Iterating through a folder and return a list in python this is my first time trying to write a script on my own and I'm trying to make something that looks through my folders and return a list, and I keep getting this TypeError: descriptor 'append' for 'list' objects doesn't apply to a 'str' object anyone have any ideas? Thank you so much! import os path = input("Where you want to look?") myFolder = list() print("Here's your list of folders:") for dirname in os.listdir(path): f = os.path.join(path,dirname) if os.path.isdir(f): for item in f: myFolder = list.append(f) print(myFolder) I've tried to change myFolder = list() to myFolder = list[] which resulted "none" A: You're misusing append() a bit - how should that method know to which list to append the values? You either have to specify the list myFolder as the first argument (list.append(myFolder, f)) or, a bit cleaner, call append() on the instance: myFolder.append(f) You can read up a bit more on the details in the docs here. Take a look at the examples there. So, altogether, your code should read like import os path = input("Where you want to look?") myFolder = list() print("Here's your list of folders:") for dirname in os.listdir(path): f = os.path.join(path,dirname) if os.path.isdir(f): for item in f: myFolder.append(f) print(myFolder)
TypeError: descriptor 'append' for 'list' objects doesn't apply to a 'str' object. Iterating through a folder and return a list in python
this is my first time trying to write a script on my own and I'm trying to make something that looks through my folders and return a list, and I'm keep getting this TypeError: descriptor 'append' for 'list' objects doesn't apply to a 'str' object anyone have any ideas? Thank you so much! import os path = input("Where you want to look?") myFolder = list() print("Here's your list of folders:") for dirname in os.listdir(path): f = os.path.join(path,dirname) if os.path.isdir(f): for item in f: myFolder = list.append(f) print(myFolder) I've tried to change myFolder = list() to myFolder = list[] which resulted "none"
[ "You're misusing append() a bit - how should that method now to which list to append the values? You either have to specify the list myFolder as the first argument (list.append(myFolder, f)) or, a bit cleaner, call append() on the instance: myFolder.append(f)\nYou can read up a bit more on the details in the docs here. Take a look at the examples there.\nSo, altogether, your code should read like\nimport os\n\npath = input(\"Where you want to look?\")\n\nmyFolder = list()\nprint(\"Here's your list of folders:\")\nfor dirname in os.listdir(path):\n f = os.path.join(path,dirname)\n if os.path.isdir(f):\n for item in f:\n myFolder.append(f)\n\nprint(myFolder)\n\n" ]
[ 0 ]
[]
[]
[ "list", "python", "python_3.x" ]
stackoverflow_0074505317_list_python_python_3.x.txt
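The two call styles the answer contrasts can be shown in a few lines (the folder names here are made up for illustration). The failing line in the question, `myFolder = list.append(f)`, goes wrong twice: `list.append` is the unbound method on the list *type*, so it needs the list instance as its first argument, and `append` always returns `None`, so assigning its result back would lose the list anyway.

```python
folders = []
folders.append("dir_a")          # the usual instance call
list.append(folders, "dir_b")    # equivalent, but unidiomatic
print(folders)                   # ['dir_a', 'dir_b']

# append() mutates in place and returns None -- never assign its result.
result = folders.append("dir_c")
print(result)                    # None
```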
Q: Having trouble with python3 print syntax I've started to learn python about 4 days ago. To practice, I've decided to make a program that calculates combinations. Here is the code: print('Insert values for your combination (Cp,n)') def combin(exemplo): print('insert p value') p = int(input()) print('insert n value') n = int(input()) exemplo = [p,n] #"fator" is a function defined earlier in the program. It basically calculates the factorial of a number res = int(exemplo[0]/(fator(exemplo[0]-exemplo[1])*fator(exemplo[1])) print(res) teste = [] combin(teste) After running this, the following error has ocurred: print(res) ^ SyntaxError: invalid syntax >>> However, I can't see what I'm doing wrong here. I figured that I probably would have problems with the math and the functions, but I can't figure out what's up with the syntax in this case. A: Hey nothing to worry about, its just a typo with missing parenthesis hope you find the solution :) res = int(exemplo[0]/(fator(exemplo[0]-exemplo[1])*fator(exemplo[1])) A: Hey in the following line: res = int(exemplo[0]/(fator(exemplo[0]-exemplo[1])*fator(exemplo[1])) you are missing a closing bracket. A: You didn't close all of your parenthesis in the res line. Try this: res = int(exemplo[0]/(fator(exemplo[0]-exemplo[1]))*fator(exemplo[1]))
Having trouble with python3 print syntax
I've started to learn python about 4 days ago. To practice, I've decided to make a program that calculates combinations. Here is the code: print('Insert values for your combination (Cp,n)') def combin(exemplo): print('insert p value') p = int(input()) print('insert n value') n = int(input()) exemplo = [p,n] #"fator" is a function defined earlier in the program. It basically calculates the factorial of a number res = int(exemplo[0]/(fator(exemplo[0]-exemplo[1])*fator(exemplo[1])) print(res) teste = [] combin(teste) After running this, the following error has ocurred: print(res) ^ SyntaxError: invalid syntax >>> However, I can't see what I'm doing wrong here. I figured that I probably would have problems with the math and the functions, but I can't figure out what's up with the syntax in this case.
[ "Hey nothing to worry about, its just a typo with missing parenthesis\nhope you find the solution :)\nres = int(exemplo[0]/(fator(exemplo[0]-exemplo[1])*fator(exemplo[1]))\n\n", "Hey in the following line:\nres = int(exemplo[0]/(fator(exemplo[0]-exemplo[1])*fator(exemplo[1]))\n\nyou are missing a closing bracket.\n", "You didn't close all of your parenthesis in the res line. Try this:\nres = int(exemplo[0]/(fator(exemplo[0]-exemplo[1]))*fator(exemplo[1]))\n\n" ]
[ 1, 0, 0 ]
[]
[]
[ "printing", "python", "syntax" ]
stackoverflow_0074505077_printing_python_syntax.txt
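One detail worth noting beyond the unbalanced parenthesis the answers point out: the numerator of the combination formula should be the *factorial* of `exemplo[0]`, not `exemplo[0]` itself. A balanced, self-contained sketch (using `math.factorial` as a stand-in for the question's own `fator()` helper, which we can't see) might look like this, cross-checked against `math.comb`:

```python
import math

def combin(n, k):
    # C(n, k) = n! / (k! * (n - k)!)
    return math.factorial(n) // (math.factorial(k) * math.factorial(n - k))

print(combin(5, 2))                     # 10
print(combin(5, 2) == math.comb(5, 2))  # True
```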
Q: create pairs of vectors based on if the value of the first element of 1 vector is equal to the same element in the other vector I have an array that has 17k+ vectors with 3 elements in each vector. Each vector has a value for MovieTitle, AverageRating and CountRating, see example vector below: vector = [MovieTitle AverageRating CountRating] Array1 = MergedDF[["Title", "AveRating", "CountRating"]].to_numpy() print(Array1) Array1 I need to create all pairs of vectors where the MovieTitle is different. So for example, the output would be: Array2 = [([MovieTitle1 AverageRating CountRating],[MovieTitle2 AverageRating CountRating]),([MovieTitle1 AverageRating CountRating],[MovieTitle3 AverageRating CountRating]),([MovieTitle1 AverageRating CountRating],[MovieTitle3000 AverageRating CountRating]),] So the pairs would be all possible combinations of vectors based on the element MovieTitles. Please help. I tried looking into the documentation for itertools to see if there was something in that module that I could use to do this but I can't figure it out A: To create pairs you only need a nested loop. r,c=Array1.shape Array2=[] for ix1 in range(r-1): for ix2 in range(ix1+1,r): Array2.append((Array1[ix1],Array1[ix2]))
create pairs of vectors based on if the value of the first element of 1 vector is equal to the same element in the other vector
I have an array that has 17k+ vectors with 3 elements in each vector. Each vector has a value for MovieTitle, AverageRating and CountRating, see example vector below: vector = [MovieTitle AverageRating CountRating] Array1 = MergedDF[["Title", "AveRating", "CountRating"]].to_numpy() print(Array1) Array1 I need to create all pairs of vectors where the MovieTitle is different. So for example, the output would be: Array2 = [([MovieTitle1 AverageRating CountRating],[MovieTitle2 AverageRating CountRating]),([MovieTitle1 AverageRating CountRating],[MovieTitle3 AverageRating CountRating]),([MovieTitle1 AverageRating CountRating],[MovieTitle3000 AverageRating CountRating]),] So the pairs would be all possible combinations of vectors based on the element MovieTitles. Please help. I tried looking into the documentation for itertools to see if there was something in that module that I could use to do this but I can't figure it out
[ "To create pairs you only need a nested loop.\nr,c=Array1.shape\nArray2=[]\nfor ix1 in range(r-1):\n for ix2 in range(ix1+1,r):\n Array2.append((Array1[ix1],Array1[ix2]))\n \n\n\n\n" ]
[ 0 ]
[]
[]
[ "arrays", "combinations", "python", "vector" ]
stackoverflow_0074505420_arrays_combinations_python_vector.txt
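The nested index loop in the answer enumerates every unordered pair of rows exactly once, which is exactly what `itertools.combinations` does in a single call (the rows below are made-up sample data, not the poster's 17k-row array). Since each movie title appears in only one row, every combination already satisfies the "different title" requirement.

```python
from itertools import combinations

rows = [["Movie1", 4.5, 120], ["Movie2", 3.9, 80], ["Movie3", 4.1, 200]]

# Every unordered pair of distinct rows: n*(n-1)/2 pairs.
pairs = list(combinations(rows, 2))
print(len(pairs))  # 3
```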
Q: Calculating CRC16 in Python for modbus Firstly, sorry! I am a beginner... I got the following byte sequence on a modbus: "01 04 08 00 00 00 09 00 00 00 00 f8 0c". The CRC on bold on this byte sequence is correct. However, to check/create the CRC I have to follow the device especs that states: The error checking must be done using a 16 bit CRC implemented as two 8 bit bytes. The CRC is appended to the frame as the last field. The low order byte of the CRC is appended first, followed by the high order byte. Thus, the CRC high order byte is the last byte to be sent in the frame. The polynomial value used to generate the CRC must be 0xA001. Now, how can I check the CRC using crcmod? My code is: import crcmod crc16 = crcmod.mkCrcFun(0x1A001, rev=True, initCrc=0xFFFF, xorOut=0x0000) print crc16("0104080000000900000000".decode("hex")) I tried everything but I can't get the "f8 0C" that is correct on the byte sequence... A: Use 0x18005 instead of 0x1A001. A: Modbus shortcut, if not diving into the CRC detail from pymodbus.utilities import computeCRC
Calculating CRC16 in Python for modbus
Firstly, sorry! I am a beginner... I got the following byte sequence on a modbus: "01 04 08 00 00 00 09 00 00 00 00 f8 0c". The CRC in bold on this byte sequence is correct. However, to check/create the CRC I have to follow the device specs, which state: The error checking must be done using a 16 bit CRC implemented as two 8 bit bytes. The CRC is appended to the frame as the last field. The low order byte of the CRC is appended first, followed by the high order byte. Thus, the CRC high order byte is the last byte to be sent in the frame. The polynomial value used to generate the CRC must be 0xA001. Now, how can I check the CRC using crcmod? My code is: import crcmod crc16 = crcmod.mkCrcFun(0x1A001, rev=True, initCrc=0xFFFF, xorOut=0x0000) print crc16("0104080000000900000000".decode("hex")) I tried everything but I can't get the "f8 0C" that is correct on the byte sequence...
[ "Use 0x18005 instead of 0x1A001.\n", "Modbus shortcut, if not diving into the CRC detail\nfrom pymodbus.utilities import computeCRC\n\n" ]
[ 1, 0 ]
[]
[]
[ "crc", "modbus", "python", "python_2.x" ]
stackoverflow_0069369408_crc_modbus_python_python_2.x.txt
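To see why the accepted answer's `0x18005` works, it helps to write the Modbus RTU CRC out by hand: `0x18005` is crcmod's normal-form notation for the CRC-16 polynomial whose bit-reflected form is the `0xA001` the device spec quotes. A from-scratch sketch of that register (init `0xFFFF`, shift right, XOR with `0xA001` on a set low bit), checked against the frame in the question:

```python
def modbus_crc16(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001  # reflected CRC-16 polynomial
            else:
                crc >>= 1
    return crc

frame = bytes.fromhex("0104080000000900000000")
crc = modbus_crc16(frame)
# Modbus appends the low-order byte first, then the high-order byte,
# which should reproduce the "f8 0c" at the end of the quoted frame.
print(f"{crc & 0xFF:02x} {crc >> 8:02x}")
```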
Q: Convert from dictionary to dataframe when arrays aren't equal length? I have a dictionary like this: {1: ["a", "b", "c"], 2: ["d", "e", "f", "g"]} that I want to turn into a dataframe like this: id item 1 a 1 b 1 c 2 d 2 e 2 f 2 g but when I try use pandas.DataFrame.from_dict() I get an error because my arrays aren't the same length. How can I accomplish what I'm trying to do here? A: Example data = {1: ["a", "b", "c"], 2: ["d", "e", "f", "g"]} Code pd.Series(data).explode() output(series): 1 a 1 b 1 c 2 d 2 e 2 f 2 g dtype: object if you want result to dataframe, use following code: pd.Series(data).explode().reset_index().set_axis(['id', 'item'], axis=1) output(dataframe): id item 0 1 a 1 1 b 2 1 c 3 2 d 4 2 e 5 2 f 6 2 g A: pd.concat([pd.DataFrame(v,index=[i]*len(v),columns=['items']) for i,v in map1.items()])\ .rename_axis('id').reset_index() id items 0 1 a 1 1 b 2 1 c 3 2 d 4 2 e 5 2 f 6 2 g
Convert from dictionary to dataframe when arrays aren't equal length?
I have a dictionary like this: {1: ["a", "b", "c"], 2: ["d", "e", "f", "g"]} that I want to turn into a dataframe like this: id item 1 a 1 b 1 c 2 d 2 e 2 f 2 g but when I try to use pandas.DataFrame.from_dict() I get an error because my arrays aren't the same length. How can I accomplish what I'm trying to do here?
[ "Example\ndata = {1: [\"a\", \"b\", \"c\"],\n 2: [\"d\", \"e\", \"f\", \"g\"]}\n\nCode\npd.Series(data).explode()\n\noutput(series):\n1 a\n1 b\n1 c\n2 d\n2 e\n2 f\n2 g\ndtype: object\n\n\nif you want result to dataframe, use following code:\npd.Series(data).explode().reset_index().set_axis(['id', 'item'], axis=1)\n\noutput(dataframe):\n id item\n0 1 a\n1 1 b\n2 1 c\n3 2 d\n4 2 e\n5 2 f\n6 2 g\n\n", " pd.concat([pd.DataFrame(v,index=[i]*len(v),columns=['items']) for i,v in map1.items()])\\\n .rename_axis('id').reset_index()\n \n id items\n 0 1 a\n 1 1 b\n 2 1 c\n 3 2 d\n 4 2 e\n 5 2 f\n 6 2 g\n\n" ]
[ 2, 0 ]
[]
[]
[ "dictionary", "numpy", "pandas", "python" ]
stackoverflow_0074505455_dictionary_numpy_pandas_python.txt
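Before reaching for pandas at all, the underlying reshape is just flattening the dict into `(key, value)` rows, with each key repeated once per list element. A plain-Python sketch:

```python
data = {1: ["a", "b", "c"], 2: ["d", "e", "f", "g"]}

# One (id, item) tuple per list element.
rows = [(k, item) for k, items in data.items() for item in items]
print(rows)
# [(1, 'a'), (1, 'b'), (1, 'c'), (2, 'd'), (2, 'e'), (2, 'f'), (2, 'g')]
```

That list of tuples can be handed straight to `pd.DataFrame(rows, columns=['id', 'item'])`, sidestepping the equal-length requirement of `from_dict`; the `pd.Series(data).explode()` approach in the first answer performs the same reshape.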
Q: Not enough parameters for sql statement. Want to update table (python-mysql connect) import sys import mysql.connector mydb = mysql.connector.connect(host='localhost', user='root', passwd='anohacker', database='csproj') cursor = mydb.cursor(buffered=True) nameb=input("enter your name: ") bookbor=int(input("Enter book code to borrow: ")) def borrow(nameb,bookbor): bquery="update inventory set name_of_borrower=%s where book_code=%s" stock1="update inventory set in_stock=in_stock-1 where book_code=%s" stock2="update inventory set borrowed=borrowed+1 where book_code=%s" cursor.execute(stock1,bookbor) cursor.execute(stock2,bookbor) cursor.execute(bquery,nameb,bookbor) mydb.commit() borrow([nameb],[bookbor]) I want to take name and book code from user and update my mysql table columns with them. But it's giving me an error. Most answers are for insert into but I want to update table. mysql.connector.errors.ProgrammingError: Not enough parameters for the SQL statement A: you need to provide the data for the query as a tuple so: bquery="update inventory set name_of_borrower=%s where book_code=%s" cursor.execute(bquery,(nameb,bookbor)) see https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html
Not enough parameters for sql statement. Want to update table (python-mysql connect)
import sys import mysql.connector mydb = mysql.connector.connect(host='localhost', user='root', passwd='anohacker', database='csproj') cursor = mydb.cursor(buffered=True) nameb=input("enter your name: ") bookbor=int(input("Enter book code to borrow: ")) def borrow(nameb,bookbor): bquery="update inventory set name_of_borrower=%s where book_code=%s" stock1="update inventory set in_stock=in_stock-1 where book_code=%s" stock2="update inventory set borrowed=borrowed+1 where book_code=%s" cursor.execute(stock1,bookbor) cursor.execute(stock2,bookbor) cursor.execute(bquery,nameb,bookbor) mydb.commit() borrow([nameb],[bookbor]) I want to take name and book code from user and update my mysql table columns with them. But it's giving me an error. Most answers are for insert into but I want to update table. mysql.connector.errors.ProgrammingError: Not enough parameters for the SQL statement
[ "you need to provide the data for the query as a tuple so:\nbquery=\"update inventory set name_of_borrower=%s where book_code=%s\"\ncursor.execute(bquery,(nameb,bookbor))\n\nsee https://dev.mysql.com/doc/connector-python/en/connector-python-api-mysqlcursor-execute.html\n" ]
[ 0 ]
[]
[]
[ "mysql", "mysql_connector", "mysql_python", "python" ]
stackoverflow_0074505488_mysql_mysql_connector_mysql_python_python.txt
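The tuple-parameter rule in the answer applies to all three queries in the question, including the single-parameter ones: `cursor.execute(stock1, bookbor)` needs a one-element tuple `(bookbor,)`. Since the same DB-API convention is shared by the stdlib `sqlite3` driver, we can demonstrate it without a MySQL server (note sqlite uses `?` placeholders where MySQL Connector uses `%s`; the table below is a tiny stand-in, not the poster's schema):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE inventory (book_code INTEGER, name_of_borrower TEXT, in_stock INTEGER)")
con.execute("INSERT INTO inventory VALUES (1, NULL, 3)")

name, code = "jen", 1
# Two parameters: pass them together as one tuple.
con.execute("UPDATE inventory SET name_of_borrower=? WHERE book_code=?", (name, code))
# One parameter: still a tuple -- note the trailing comma.
con.execute("UPDATE inventory SET in_stock=in_stock-1 WHERE book_code=?", (code,))
con.commit()

print(con.execute("SELECT name_of_borrower, in_stock FROM inventory").fetchone())
# ('jen', 2)
```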
Q: find all permutations from a list of lists where outputs contain all, none, or any of the items in each list I have any arbitrary number of lists with an arbitrary number of elements. I need to find the permutations such that each permutation contains all, any, or none of the elements from each list l1 = ['red', 'blue', 'green'] l2 = ['big','small','medium'] l3 = ['fast','slow','stopped'] res = function([l1,l2,l3]) res = [(['red', 'blue', 'green'], ['big','small','medium'], ['fast','slow','stopped']), (['red', 'blue'], ['big','small','medium'], ['fast','slow','stopped']), ([],[],['fast'])] I looked any cartesian products and itertools but my problem appears to be distinctly different because I need all, any, or none rather than just every combination of a fixed set of elements. A: I guess here's my hacky solution. products = [] for r in range(len(l1)+1): perm = list(itertools.combinations(l1, r)) for r2 in range(len(l2)+1): perm2 = list(itertools.combinations(l2, r2)) for r3 in range(len(l3)+1): perm3 = list(itertools.combinations(l3, r3)) for p in perm: for p2 in perm2: for p3 in perm3: products.append((p,p2,p3))
find all permutations from a list of lists where outputs contain all, none, or any of the items in each list
I have an arbitrary number of lists with an arbitrary number of elements. I need to find the permutations such that each permutation contains all, any, or none of the elements from each list l1 = ['red', 'blue', 'green'] l2 = ['big','small','medium'] l3 = ['fast','slow','stopped'] res = function([l1,l2,l3]) res = [(['red', 'blue', 'green'], ['big','small','medium'], ['fast','slow','stopped']), (['red', 'blue'], ['big','small','medium'], ['fast','slow','stopped']), ([],[],['fast'])] I looked into cartesian products and itertools but my problem appears to be distinctly different because I need all, any, or none rather than just every combination of a fixed set of elements.
[ "I guess here's my hacky solution.\nproducts = []\nfor r in range(len(l1)+1):\n perm = list(itertools.combinations(l1, r))\n for r2 in range(len(l2)+1):\n perm2 = list(itertools.combinations(l2, r2)) \n for r3 in range(len(l3)+1):\n perm3 = list(itertools.combinations(l3, r3))\n for p in perm:\n for p2 in perm2:\n for p3 in perm3:\n products.append((p,p2,p3))\n\n" ]
[ 0 ]
[]
[]
[ "combinatorics", "permutation", "python", "python_itertools" ]
stackoverflow_0074505467_combinatorics_permutation_python_python_itertools.txt
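The triple of nested `range`/`combinations` loops in the self-answer is computing the *powerset* of each list and then the cartesian product of those powersets, so itertools can express it directly; this sketch also generalizes to any number of lists via `product(*...)`:

```python
from itertools import chain, combinations, product

def powerset(items):
    # All subsets, from the empty tuple up to the full list.
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

l1 = ['red', 'blue', 'green']
l2 = ['big', 'small', 'medium']
l3 = ['fast', 'slow', 'stopped']

result = list(product(*(powerset(l) for l in [l1, l2, l3])))
print(len(result))  # (2**3) ** 3 = 512
```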
Q: How to apply maps to dataframes based on a field value? I have a script where I loop through a dataframe based on one of its field values. Something like import pandas as pd import numpy as np data = { "thevalue": [0,0,1,2,2,3,5,5,5], "firstname": ["Sally", "Mary", "John","Peter","Julius","Cornelius","Athos","Porthos","Aramis"], "age": [50, 40, 30,20,10,20,11,12,23] } df = pd.DataFrame(data) print(df) print(max(df['thevalue'])) limi=max(df['thevalue']) print("=============") def get_result(df,f): n_df=df.query('thevalue==@f') print(n_df) suma=sum(n_df['age']) if n_df.empty: return np.nan ave=suma/len(n_df['age']) return ave lista=[] for f in range(limi+1): #<---replace from here print(f) #print(df.query('thevalue ==@f')) res=get_result(df,f) lista.append(res) print(lista) I want to replace the last for with a map If I were to apply a map to all rows of the dataframe one by one it would not be a problem but how do I apply it in chunks based on thevalue? EDIT: The result of the first script (with loops) is thevalue firstname age 0 0 Sally 50 1 0 Mary 40 2 1 John 30 3 2 Peter 20 4 2 Julius 10 5 3 Cornelius 20 6 5 Athos 11 7 5 Porthos 12 8 5 Aramis 23 5 ============= 0 thevalue firstname age 0 0 Sally 50 1 0 Mary 40 1 thevalue firstname age 2 1 John 30 2 thevalue firstname age 3 2 Peter 20 4 2 Julius 10 3 thevalue firstname age 5 3 Cornelius 20 4 Empty DataFrame Columns: [thevalue, firstname, age] Index: [] 5 thevalue firstname age 6 5 Athos 11 7 5 Porthos 12 8 5 Aramis 23 [45.0, 30.0, 15.0, 20.0, nan, 15.333333333333334] I would like to have the same output but with maps. 
Ergo, the final list [45.0, 30.0, 15.0, 20.0, nan, 15.333333333333334] (and if possible the printing like: 0 thevalue firstname age 0 0 Sally 50 1 0 Mary 40 A: you can divide dataframe by group with following code: g = df.groupby('thevalue') [g.get_group(x) for x in g.groups] let's use code above to get desired output : g = df.groupby('thevalue') range_v = range(df['thevalue'].min(), df['thevalue'].max() + 1) [(x, g.get_group(x)) if x in g.groups else (x, pd.DataFrame(columns=df.columns)) for x in range_v] result: [(0, thevalue firstname age 0 0 Sally 50 1 0 Mary 40), (1, thevalue firstname age 2 1 John 30), (2, thevalue firstname age 3 2 Peter 20 4 2 Julius 10), (3, thevalue firstname age 5 3 Cornelius 20), (4, Empty DataFrame Columns: [thevalue, firstname, age] Index: []), (5, thevalue firstname age 6 5 Athos 11 7 5 Porthos 12 8 5 Aramis 23)] I made it as tuple, but if you want different type(list or dict), modify it appropriately.
How to apply maps to dataframes based on a field value?
I have a script where I loop through a dataframe based on one of its field values. Something like import pandas as pd import numpy as np data = { "thevalue": [0,0,1,2,2,3,5,5,5], "firstname": ["Sally", "Mary", "John","Peter","Julius","Cornelius","Athos","Porthos","Aramis"], "age": [50, 40, 30,20,10,20,11,12,23] } df = pd.DataFrame(data) print(df) print(max(df['thevalue'])) limi=max(df['thevalue']) print("=============") def get_result(df,f): n_df=df.query('thevalue==@f') print(n_df) suma=sum(n_df['age']) if n_df.empty: return np.nan ave=suma/len(n_df['age']) return ave lista=[] for f in range(limi+1): #<---replace from here print(f) #print(df.query('thevalue ==@f')) res=get_result(df,f) lista.append(res) print(lista) I want to replace the last for with a map If I were to apply a map to all rows of the dataframe one by one it would not be a problem but how do I apply it in chunks based on thevalue? EDIT: The result of the first script (with loops) is thevalue firstname age 0 0 Sally 50 1 0 Mary 40 2 1 John 30 3 2 Peter 20 4 2 Julius 10 5 3 Cornelius 20 6 5 Athos 11 7 5 Porthos 12 8 5 Aramis 23 5 ============= 0 thevalue firstname age 0 0 Sally 50 1 0 Mary 40 1 thevalue firstname age 2 1 John 30 2 thevalue firstname age 3 2 Peter 20 4 2 Julius 10 3 thevalue firstname age 5 3 Cornelius 20 4 Empty DataFrame Columns: [thevalue, firstname, age] Index: [] 5 thevalue firstname age 6 5 Athos 11 7 5 Porthos 12 8 5 Aramis 23 [45.0, 30.0, 15.0, 20.0, nan, 15.333333333333334] I would like to have the same output but with maps. Ergo, the final list [45.0, 30.0, 15.0, 20.0, nan, 15.333333333333334] (and if possible the printing like: 0 thevalue firstname age 0 0 Sally 50 1 0 Mary 40
[ "you can divide dataframe by group with following code:\ng = df.groupby('thevalue')\n[g.get_group(x) for x in g.groups]\n\nlet's use code above to get desired output :\ng = df.groupby('thevalue')\nrange_v = range(df['thevalue'].min(), df['thevalue'].max() + 1)\n[(x, g.get_group(x)) if x in g.groups else (x, pd.DataFrame(columns=df.columns)) for x in range_v]\n\nresult:\n[(0,\n thevalue firstname age\n 0 0 Sally 50\n 1 0 Mary 40),\n (1,\n thevalue firstname age\n 2 1 John 30),\n (2,\n thevalue firstname age\n 3 2 Peter 20\n 4 2 Julius 10),\n (3,\n thevalue firstname age\n 5 3 Cornelius 20),\n (4,\n Empty DataFrame\n Columns: [thevalue, firstname, age]\n Index: []),\n (5,\n thevalue firstname age\n 6 5 Athos 11\n 7 5 Porthos 12\n 8 5 Aramis 23)]\n\nI made it as tuple, but if you want different type(list or dict), modify it appropriately.\n" ]
[ 0 ]
[]
[]
[ "dataframe", "pandas", "python" ]
stackoverflow_0074505510_dataframe_pandas_python.txt
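As a plain-Python sanity check of the loop the question wants to replace, the per-value averages (with `nan` for the value 4 that never occurs) can be produced with a single comprehension over pre-grouped rows, reproducing the question's expected list:

```python
import math
from statistics import mean

thevalue = [0, 0, 1, 2, 2, 3, 5, 5, 5]
age      = [50, 40, 30, 20, 10, 20, 11, 12, 23]

# Bucket ages by their thevalue key.
groups = {}
for v, a in zip(thevalue, age):
    groups.setdefault(v, []).append(a)

lista = [mean(map(float, groups[v])) if v in groups else math.nan
         for v in range(max(thevalue) + 1)]
print(lista)  # [45.0, 30.0, 15.0, 20.0, nan, 15.333...]
```

In pandas itself, the equivalent (with NaN inserted for missing groups) should be `df.groupby('thevalue')['age'].mean().reindex(range(df['thevalue'].max() + 1))`.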
Q: Insert input into list by the name of the menu, and then calculating its price after selecting the items I wanted a create a function in Python where the user inputs the name of the menu and then it returns it in their order. After they are finished with ordering, the function would then calculate the price. My problem is I typed "Apple", but it came back empty. Is there anyway I could get around this? Any assistance is appreciated. Here is the function: menu = [{"Menu":"Apple","Price":9.00},{"Menu":"Banana","Price":5.00}], my_order = [], userInput = 0 try: userInput = input("Enter item menu name that you want to select >> ") except ValueError: print("Item does not exist.") if userInput in menu: print("The item is in the list") else: print("The item is not in the list. Please choose a different item.") while userInput != "Stop" or userInput != "stop": print(f"Available menu: {menu}") userInput = input("Do you want to add the item from the menu? If so please type appropriate item menu name. If no please type Stop. >> ") if userInput == "Stop" or userInput == "stop": print("The program has ended no more items will be added.") print(f"Your order: {my_order}") break elif userInput not in menu: print("Item does not exist in the list, try another item.") print(f"Your order: {my_order}") continue else: menu["Menu"] = userInput my_order.append(userInput) print(f"Your order: {my_order}") continue A: The way you use a dictionary is to have a key and a value related to it, in this case your menu dictionary should be {"item":price}. That way if you want to know the price of an Apple yo do print(menu["Apple"]) Did not understand why the Try/except in this case. menu = {"Apple" : 9.00, "Banana" : 5.00} my_order = [] userInput = "" while userInput != "Stop" or userInput != "stop": print(f"Available menu: {menu}") userInput = input("Do you want to add the item from the menu? If so please type appropriate item menu name. If no please type Stop. 
>> ") if userInput == "Stop" or userInput == "stop": print("The program has ended no more items will be added.") print(f"Your order: {my_order}") break elif userInput not in menu: print("Item does not exist in the list, try another item.") print(f"Your order: {my_order}") continue else: my_order.append(userInput) print(f"Your order: {my_order}") continue I let you continue calculating the total for the bill.
Insert input into list by the name of the menu, and then calculating its price after selecting the items
I wanted a create a function in Python where the user inputs the name of the menu and then it returns it in their order. After they are finished with ordering, the function would then calculate the price. My problem is I typed "Apple", but it came back empty. Is there anyway I could get around this? Any assistance is appreciated. Here is the function: menu = [{"Menu":"Apple","Price":9.00},{"Menu":"Banana","Price":5.00}], my_order = [], userInput = 0 try: userInput = input("Enter item menu name that you want to select >> ") except ValueError: print("Item does not exist.") if userInput in menu: print("The item is in the list") else: print("The item is not in the list. Please choose a different item.") while userInput != "Stop" or userInput != "stop": print(f"Available menu: {menu}") userInput = input("Do you want to add the item from the menu? If so please type appropriate item menu name. If no please type Stop. >> ") if userInput == "Stop" or userInput == "stop": print("The program has ended no more items will be added.") print(f"Your order: {my_order}") break elif userInput not in menu: print("Item does not exist in the list, try another item.") print(f"Your order: {my_order}") continue else: menu["Menu"] = userInput my_order.append(userInput) print(f"Your order: {my_order}") continue
[ "The way you use a dictionary is to have a key and a value related to it, in this case your menu dictionary should be {\"item\":price}.\nThat way if you want to know the price of an Apple yo do\nprint(menu[\"Apple\"])\n\nDid not understand why the Try/except in this case.\nmenu = {\"Apple\" : 9.00, \"Banana\" : 5.00}\nmy_order = []\nuserInput = \"\"\n\nwhile userInput != \"Stop\" or userInput != \"stop\":\n print(f\"Available menu: {menu}\")\n userInput = input(\"Do you want to add the item from the menu? If so please type appropriate item menu name. If no please type Stop. >> \")\n if userInput == \"Stop\" or userInput == \"stop\":\n print(\"The program has ended no more items will be added.\")\n print(f\"Your order: {my_order}\")\n break\n elif userInput not in menu:\n print(\"Item does not exist in the list, try another item.\")\n print(f\"Your order: {my_order}\")\n continue\n else:\n my_order.append(userInput)\n print(f\"Your order: {my_order}\")\n continue\n\nI let you continue calculating the total for the bill.\n" ]
[ 0 ]
[]
[]
[ "dictionary", "input", "list", "python" ]
stackoverflow_0074505486_dictionary_input_list_python.txt
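Building on the answer above, here is a minimal non-interactive sketch of the dict-based menu with the missing bill calculation filled in. The function names are illustrative (not from the original post), and input() is replaced by a plain list of item names so the logic can be checked directly:

```python
menu = {"Apple": 9.00, "Banana": 5.00}

def take_order(requested, menu):
    """Keep only the requested items that actually exist in the menu."""
    order = []
    for name in requested:
        if name in menu:  # membership test works on dict keys
            order.append(name)
        else:
            print(f"{name} is not on the menu.")
    return order

def total_price(order, menu):
    """Sum the price of every item in the order."""
    return sum(menu[name] for name in order)

my_order = take_order(["Apple", "Mango", "Banana", "Apple"], menu)
print(my_order)                     # ['Apple', 'Banana', 'Apple']
print(total_price(my_order, menu))  # 23.0
```

Because the menu is a dict keyed by item name, `"Apple" in menu` finds the item directly, which is exactly what failed with the original list of dicts.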
Q: Selenium WebDriver to extract only paragraphs I am totally new to all of this. I am trying to extract articles from a lot of pages, but I put only 4 URLs in the code below and need to extract only the important paragraphs from <p>text</p> == $0. Here is my code for this sample: currency = 'BTC' btc_today = pd.DataFrame({'Currency':[], 'Date':[], 'Title': [], 'Content': [], 'URL':[]}) links = ["https://www.investing.com/news/cryptocurrency-news/3-reasons-why-bitcoins-drop-to-21k-and-the-marketwide-selloff-could-be-worse-than-you-think-2876810", "https://www.investing.com/news/cryptocurrency-news/crypto-flipsider-news--btc-below-22k-no-support-for-pow-eth-ripple-brazil-odl-cardano-testnet-problems-mercado-launches-crypto-2876644", "https://www.investing.com/news/cryptocurrency-news/can-exchanges-create-imaginary-bitcoin-to-dump-price-crypto-platform-exec-answers-2876559", "https://www.investing.com/news/cryptocurrency-news/bitcoin-drops-7-to-hit-3week-lows-432SI-2876376"] for link in links: driver.get(link) driver.maximize_window() time.sleep(2) data = [] date = driver.find_element(By.XPATH, f'/html/body/div[5]/section/div[1]/span').text.strip() title = driver.find_element(By.XPATH,f'/html/body/div[5]/section/h1').text.strip() url = link content = driver.find_elements(By.TAG_NAME, 'p') for item in content: body = item.text print(body) articles = {'Currency': currency,'Date': date,'Title': title,'Content': body,'URL': url} btc_today = btc_today.append(pd.DataFrame(articles, index=[0])) btc_today.reset_index(drop=True, inplace=True) btc_today #I got this as a result output I have also tried to do it with this loop, but it returns results in many rows and not article by article for p_number in range(1,10): try: content = driver.find_element(By.XPATH, f'/html/body/div[5]/section/div[3]/p[{p_number}]').text.strip() #print(content) except NoSuchElementException: pass Can somebody help, please? I would really really appreciate it. 
I seriously did my best for days to find a solution, but no progress. A: I am assuming you need to get the main content; for that, change the locator for the 'content': content = driver.find_elements(By.CSS_SELECTOR, '.WYSIWYG.articlePage p') Also, there are unnecessary '<p>' tags with the content - "Position added successfully to: " and "Continue reading on DailyCoin"; you can ignore those using an if statement inside the below for loop: for item in content: body = item.text print(body)
Selenium WebDriver to extract only paragraphs
I am totally new to all of this. I am trying to extract articles from a lot of pages, but I put only 4 URLs in the code below and need to extract only the important paragraphs from <p>text</p> == $0. Here is my code for this sample: currency = 'BTC' btc_today = pd.DataFrame({'Currency':[], 'Date':[], 'Title': [], 'Content': [], 'URL':[]}) links = ["https://www.investing.com/news/cryptocurrency-news/3-reasons-why-bitcoins-drop-to-21k-and-the-marketwide-selloff-could-be-worse-than-you-think-2876810", "https://www.investing.com/news/cryptocurrency-news/crypto-flipsider-news--btc-below-22k-no-support-for-pow-eth-ripple-brazil-odl-cardano-testnet-problems-mercado-launches-crypto-2876644", "https://www.investing.com/news/cryptocurrency-news/can-exchanges-create-imaginary-bitcoin-to-dump-price-crypto-platform-exec-answers-2876559", "https://www.investing.com/news/cryptocurrency-news/bitcoin-drops-7-to-hit-3week-lows-432SI-2876376"] for link in links: driver.get(link) driver.maximize_window() time.sleep(2) data = [] date = driver.find_element(By.XPATH, f'/html/body/div[5]/section/div[1]/span').text.strip() title = driver.find_element(By.XPATH,f'/html/body/div[5]/section/h1').text.strip() url = link content = driver.find_elements(By.TAG_NAME, 'p') for item in content: body = item.text print(body) articles = {'Currency': currency,'Date': date,'Title': title,'Content': body,'URL': url} btc_today = btc_today.append(pd.DataFrame(articles, index=[0])) btc_today.reset_index(drop=True, inplace=True) btc_today #I got this as a result output I have also tried to do it with this loop, but it returns results in many rows and not article by article for p_number in range(1,10): try: content = driver.find_element(By.XPATH, f'/html/body/div[5]/section/div[3]/p[{p_number}]').text.strip() #print(content) except NoSuchElementException: pass Can somebody help, please? I would really really appreciate it. I seriously did my best for days to find a solution, but no progress.
[ "I am assuming you need to get the main content, for that, change the locator for the 'content':\ncontent = driver.find_elements(By.CSS_SELECTOR, '.WYSIWYG.articlePage p')\n\nAlso, there are unnecessary '<p>' tags with the content - \"Position added successfully to: \" and \"Continue reading on DailyCoin\", you can ignore that using if statement inside the below for loop:\n for item in content:\n body = item.text\n print(body)\n\n" ]
[ 0 ]
[]
[]
[ "python", "selenium", "selenium_webdriver", "web_scraping" ]
stackoverflow_0074504175_python_selenium_selenium_webdriver_web_scraping.txt
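The scoped-selector idea from the answer ('.WYSIWYG.articlePage p') can be illustrated without a live browser. This sketch uses only the standard library's html.parser (instead of Selenium or BeautifulSoup) against made-up HTML, to show why limiting the search to the article container drops the boilerplate paragraphs:

```python
from html.parser import HTMLParser

class ArticleParagraphs(HTMLParser):
    """Collect text of <p> tags inside a div carrying both the
    'WYSIWYG' and 'articlePage' classes - the same scoping as the
    CSS selector '.WYSIWYG.articlePage p' used in the answer."""
    def __init__(self):
        super().__init__()
        self.div_depth = 0      # >0 while inside the article div
        self.in_p = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "div":
            if self.div_depth:
                self.div_depth += 1  # nested div inside the article
            else:
                classes = (dict(attrs).get("class") or "").split()
                if {"WYSIWYG", "articlePage"} <= set(classes):
                    self.div_depth = 1
        elif self.div_depth and tag == "p":
            self.in_p = True
            self.paragraphs.append("")

    def handle_endtag(self, tag):
        if tag == "div" and self.div_depth:
            self.div_depth -= 1
        elif tag == "p":
            self.in_p = False

    def handle_data(self, data):
        if self.div_depth and self.in_p:
            self.paragraphs[-1] += data

html = """
<div class="footer"><p>Position added successfully to:</p></div>
<div class="WYSIWYG articlePage">
  <p>Bitcoin fell below $21k.</p>
  <p>Analysts expect more volatility.</p>
</div>
"""
parser = ArticleParagraphs()
parser.feed(html)
print(parser.paragraphs)
```

The footer paragraph is skipped because it never sits inside the article div, mirroring how the narrower Selenium locator avoids the "Position added successfully to:" noise.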
Q: python app with selenium fails with MaxRetryError I am trying to host a Python app in Docker. I am running Selenium standalone Chrome in Docker and I can connect to it when running my Python app locally. My application looks like this: def web_scrape(): url = "https://who.maps.arcgis.com/apps/opsdashboard/index.html#/ead3c6475654481ca51c248d52ab9c61" #setup webdriver options = webdriver.ChromeOptions() options.add_argument('--no-sandbox') options.add_argument('--headless') options.add_argument('--disable-dev-shm-usage') options.add_argument("--remote-debugging-port=9222") driver = webdriver.Remote(command_executor="http://localhost:4444/wd/hub", desired_capabilities=options.to_capabilities()) driver.get(url) time.sleep(20) html = driver.execute_script("return document.documentElement.outerHTML") #Use BeautifulSoup for working with html soup = BeautifulSoup(html, "html.parser") covid_soup = soup.find("div", id="ember44").div.nav.find_all("span", class_="flex-horizontal") covid_dict = {} for i in covid_soup: country = i.find("strong").get_text(strip=True) country = clean_country_name(country) imgURL = i.p.find_next("p").find_next("p").find("img").get('src') color = get_covid_color(imgURL) covid_dict[country] = color save_to_json(covid_dict) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=4444): Max retries exceeded with url: /wd/hub/session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f44c2ac6c40>: Failed to establish a new connection: [Errno 111] Connection refused')) Does anyone have any suggestions on what could be wrong? A: How have you set up your testing environment in Docker? Also, you could replace localhost with selenium-hub in the command_executor argument in the meantime. A: Had the same issue. Tests running from a Docker container were failing to drive Chrome using Selenium running in a Docker container. 
The issue was that Selenium server was not available (while browser container was already up) when tests started to run. Try calling below function before running actual tests. It will make sure the server is available. def test_selenium_server_available(): import requests from requests.adapters import HTTPAdapter from requests.packages.urllib3.util.retry import Retry session = requests.Session() retry = Retry(connect=5, backoff_factor=0.5) adapter = HTTPAdapter(max_retries=retry) session.mount('http://', adapter) session.mount('https://', adapter) session.get("http://localhost:4444/wd/hub") A: Check logs from standalone browser. I found that my correct url was http://172.17.0.4:4444/wd/hub instead localhost or 127.0.0.1
python app with selenium fails with MaxRetryError
I am trying to host a Python app in Docker. I am running Selenium standalone Chrome in Docker and I can connect to it when running my Python app locally. My application looks like this: def web_scrape(): url = "https://who.maps.arcgis.com/apps/opsdashboard/index.html#/ead3c6475654481ca51c248d52ab9c61" #setup webdriver options = webdriver.ChromeOptions() options.add_argument('--no-sandbox') options.add_argument('--headless') options.add_argument('--disable-dev-shm-usage') options.add_argument("--remote-debugging-port=9222") driver = webdriver.Remote(command_executor="http://localhost:4444/wd/hub", desired_capabilities=options.to_capabilities()) driver.get(url) time.sleep(20) html = driver.execute_script("return document.documentElement.outerHTML") #Use BeautifulSoup for working with html soup = BeautifulSoup(html, "html.parser") covid_soup = soup.find("div", id="ember44").div.nav.find_all("span", class_="flex-horizontal") covid_dict = {} for i in covid_soup: country = i.find("strong").get_text(strip=True) country = clean_country_name(country) imgURL = i.p.find_next("p").find_next("p").find("img").get('src') color = get_covid_color(imgURL) covid_dict[country] = color save_to_json(covid_dict) urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=4444): Max retries exceeded with url: /wd/hub/session (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f44c2ac6c40>: Failed to establish a new connection: [Errno 111] Connection refused')) Does anyone have any suggestions on what could be wrong?
[ "how have you setup your testing environment in docker, plus you could replace localhost with selenium-hub at the command_executor argument in the meantime\n", "Had the same issue. Tests running from Docker container were failing to drive Chrome using Selenium running in Docker container.\nThe issue was that Selenium server was not available (while browser container was already up) when tests started to run.\nTry calling below function before running actual tests. It will make sure the server is available.\ndef test_selenium_server_available():\n import requests\n from requests.adapters import HTTPAdapter\n from requests.packages.urllib3.util.retry import Retry\n\n session = requests.Session()\n retry = Retry(connect=5, backoff_factor=0.5)\n adapter = HTTPAdapter(max_retries=retry)\n session.mount('http://', adapter)\n session.mount('https://', adapter)\n\n session.get(\"http://localhost:4444/wd/hub\")\n\n", "Check logs from standalone browser. I found that my correct url was http://172.17.0.4:4444/wd/hub instead localhost or 127.0.0.1\n" ]
[ 0, 0, 0 ]
[]
[]
[ "docker", "python", "selenium_chromedriver", "selenium_webdriver" ]
stackoverflow_0065338801_docker_python_selenium_chromedriver_selenium_webdriver.txt
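The "server not yet available" race described in the second answer can be handled with a small generic retry helper. This is a sketch: the wait_for function and probe_hub are illustrative (not part of Selenium's API), and the backoff schedule is similar in spirit to urllib3's Retry(connect=5, backoff_factor=0.5) used in that answer:

```python
import time

def wait_for(probe, attempts=5, backoff=0.5):
    """Call probe() until it stops raising ConnectionError or attempts
    run out, sleeping backoff, 2*backoff, 4*backoff, ... between tries."""
    last_exc = None
    for i in range(attempts):
        try:
            return probe()
        except ConnectionError as exc:
            last_exc = exc
            if i < attempts - 1:
                time.sleep(backoff * (2 ** i))
    raise last_exc

# Simulate a Selenium hub container that only answers on the third probe.
calls = {"n": 0}
def probe_hub():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("connection refused")
    return "hub ready"

print(wait_for(probe_hub, backoff=0.001))  # hub ready
```

In a real setup the probe would be an HTTP GET against http://selenium-hub:4444/wd/hub (or whatever hostname the Docker network gives the hub, per the first and third answers) rather than a simulated function.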
Q: How do I use QT6 Dark Theme with PySide6? Simple demo application I am trying to set the theme to dark. I would prefer a code version (non QtQuick preferred), but only way I see for Python is with a QtQuick config file, and even that does not work. from PySide6 import QtWidgets from PySide6 import QtQuick if __name__ == '__main__': app = QtWidgets.QApplication() app.setApplicationDisplayName("Should be Dark Theme") app.setStyle("Universal") view = QtQuick.QQuickView() view.show() app.exec() And I have a qtquickcontrols2.conf configuration file in the same directory. (Also tried setting QT_QUICK_CONTROLS_CONF to absolute path.) [Controls] Style=Material [Universal] Theme=Dark [Material] Theme=Dark And yet, it's still bright white: I do not care if it is Material or Universal style, just want some built in dark mode for the title bar. In the end, need a way to make the titlebar dark without creating a custom one. Thank you for any guidance! A: import sys sys.argv += ['-platform', 'windows:darkmode=2'] app = QApplication(sys.argv) above 3 lines can change your window to dark mode if you are using windows and Fusion style makes the app more beautiful, tested in windows 10, 11 example:- from PySide6.QtWidgets import ( QApplication, QCheckBox, QComboBox, QDateEdit, QDateTimeEdit, QDial, QDoubleSpinBox, QFontComboBox, QLabel, QLCDNumber, QLineEdit, QMainWindow, QProgressBar, QPushButton, QRadioButton, QSlider, QSpinBox, QTimeEdit, QVBoxLayout, QWidget, ) import sys sys.argv += ['-platform', 'windows:darkmode=2'] class MainWindow(QMainWindow): def __init__(self): super().__init__() self.setWindowTitle("Widgets App") layout = QVBoxLayout() widgets = [ QCheckBox, QComboBox, QDateEdit, QDateTimeEdit, QDial, QDoubleSpinBox, QFontComboBox, QLCDNumber, QLabel, QLineEdit, QProgressBar, QPushButton, QRadioButton, QSlider, QSpinBox, QTimeEdit, ] for w in widgets: layout.addWidget(w()) widget = QWidget() widget.setLayout(layout) self.setCentralWidget(widget) app = 
QApplication(sys.argv) app.setStyle('Fusion') window = MainWindow() window.show() app.exec()
How do I use QT6 Dark Theme with PySide6?
Simple demo application I am trying to set the theme to dark. I would prefer a code version (non QtQuick preferred), but only way I see for Python is with a QtQuick config file, and even that does not work. from PySide6 import QtWidgets from PySide6 import QtQuick if __name__ == '__main__': app = QtWidgets.QApplication() app.setApplicationDisplayName("Should be Dark Theme") app.setStyle("Universal") view = QtQuick.QQuickView() view.show() app.exec() And I have a qtquickcontrols2.conf configuration file in the same directory. (Also tried setting QT_QUICK_CONTROLS_CONF to absolute path.) [Controls] Style=Material [Universal] Theme=Dark [Material] Theme=Dark And yet, it's still bright white: I do not care if it is Material or Universal style, just want some built in dark mode for the title bar. In the end, need a way to make the titlebar dark without creating a custom one. Thank you for any guidance!
[ "import sys\nsys.argv += ['-platform', 'windows:darkmode=2']\napp = QApplication(sys.argv)\n\nabove 3 lines can change your window to dark mode if you are using windows and Fusion style makes the app more beautiful, tested in windows 10, 11\nexample:-\nfrom PySide6.QtWidgets import (\n QApplication,\n QCheckBox,\n QComboBox,\n QDateEdit,\n QDateTimeEdit,\n QDial,\n QDoubleSpinBox,\n QFontComboBox,\n QLabel,\n QLCDNumber,\n QLineEdit,\n QMainWindow,\n QProgressBar,\n QPushButton,\n QRadioButton,\n QSlider,\n QSpinBox,\n QTimeEdit,\n QVBoxLayout,\n QWidget,\n)\nimport sys\nsys.argv += ['-platform', 'windows:darkmode=2']\n\n\nclass MainWindow(QMainWindow):\n def __init__(self):\n super().__init__()\n\n self.setWindowTitle(\"Widgets App\")\n\n layout = QVBoxLayout()\n widgets = [\n QCheckBox,\n QComboBox,\n QDateEdit,\n QDateTimeEdit,\n QDial,\n QDoubleSpinBox,\n QFontComboBox,\n QLCDNumber,\n QLabel,\n QLineEdit,\n QProgressBar,\n QPushButton,\n QRadioButton,\n QSlider,\n QSpinBox,\n QTimeEdit,\n ]\n\n for w in widgets:\n layout.addWidget(w())\n\n widget = QWidget()\n widget.setLayout(layout)\n\n self.setCentralWidget(widget)\n\n\napp = QApplication(sys.argv)\napp.setStyle('Fusion')\nwindow = MainWindow()\nwindow.show()\napp.exec()\n\n" ]
[ 2 ]
[]
[]
[ "pyside6", "python", "python_3.x", "qt6" ]
stackoverflow_0073060080_pyside6_python_python_3.x_qt6.txt
Q: Adding Nodes to a graph displays object instead of string (adjacency list) I'm learning how to create a graph using an adjacency list on Python. My current problem is when trying to add a node to the list, it displays the Node object at 0x0000.... instead of a string. When I try to print out the list, I get TypeError: list indices must be integers or slices, not Node". I can't seem to figure out a way to fix this. Any help would be appreciated! class Node: def __init__(self, name): self.name = name self.visited = False self.adjacency = [] def addNeighbor(self, v): if v not in self.adjacency: self.adjacency.append(v) class DGraph: def __init__(self, size=20): self.size = size self.numNodes = 0 self.nodeList = [0] * size def addNode(self, name): """adds new node to graph""" if self.numNodes >= self.size: raise OverflowError("Graph Size Exceeded") newNode = Node(name) newNode.name = name newNode.addNeighbor(name) self.nodeList[self.numNodes] = newNode self.numNodes += 1 def listNodes(self): theList = "Nodes: " for i in self.nodeList: theList += self.nodeList[i] theList += "" return theList tree = DGraph() tree.addNode("A") tree.addNode('C') tree.addNode('T') What the list looks like in the debugger A: You can specify string representation of your object by implementing __repr__ See details in the docs and this question Here is a working example (nodeList is fixed too) class Node: def __init__(self, name): self.name = name self.visited = False self.adjacency = [] def addNeighbor(self, v): if v not in self.adjacency: self.adjacency.append(v) def __repr__(self): return self.name def __str__(self): return self.name class DGraph: def __init__(self, size=20): self.size = size self.numNodes = 0 self.nodeList = [0] * size def addNode(self, name): """adds new node to graph""" if self.numNodes >= self.size: raise OverflowError("Graph Size Exceeded") newNode = Node(name) newNode.name = name newNode.addNeighbor(name) self.nodeList[self.numNodes] = newNode self.numNodes += 1 
def listNodes(self): theList = "Nodes: " for i in self.nodeList: theList += str(i) theList += " " return theList tree = DGraph() tree.addNode("A") tree.addNode('C') tree.addNode('T') print(tree.listNodes()) Now debugger will show node names (using__repr__) Such implementation of __repr__ is probably not a good idea unless you enforce uniqueness of node names. As a side note, it's nice to follow naming convention in Python
Adding Nodes to a graph displays object instead of string (adjacency list)
I'm learning how to create a graph using an adjacency list on Python. My current problem is when trying to add a node to the list, it displays the Node object at 0x0000.... instead of a string. When I try to print out the list, I get TypeError: list indices must be integers or slices, not Node". I can't seem to figure out a way to fix this. Any help would be appreciated! class Node: def __init__(self, name): self.name = name self.visited = False self.adjacency = [] def addNeighbor(self, v): if v not in self.adjacency: self.adjacency.append(v) class DGraph: def __init__(self, size=20): self.size = size self.numNodes = 0 self.nodeList = [0] * size def addNode(self, name): """adds new node to graph""" if self.numNodes >= self.size: raise OverflowError("Graph Size Exceeded") newNode = Node(name) newNode.name = name newNode.addNeighbor(name) self.nodeList[self.numNodes] = newNode self.numNodes += 1 def listNodes(self): theList = "Nodes: " for i in self.nodeList: theList += self.nodeList[i] theList += "" return theList tree = DGraph() tree.addNode("A") tree.addNode('C') tree.addNode('T') What the list looks like in the debugger
[ "You can specify string representation of your object by implementing __repr__\nSee details in the docs and this question\nHere is a working example (nodeList is fixed too)\nclass Node:\n def __init__(self, name):\n self.name = name\n self.visited = False\n self.adjacency = []\n\n def addNeighbor(self, v):\n if v not in self.adjacency:\n self.adjacency.append(v)\n\n def __repr__(self):\n return self.name\n\n def __str__(self):\n return self.name\n\n\n\nclass DGraph:\n def __init__(self, size=20):\n self.size = size\n self.numNodes = 0\n self.nodeList = [0] * size\n\n def addNode(self, name):\n \"\"\"adds new node to graph\"\"\"\n if self.numNodes >= self.size:\n raise OverflowError(\"Graph Size Exceeded\")\n newNode = Node(name)\n newNode.name = name\n newNode.addNeighbor(name)\n self.nodeList[self.numNodes] = newNode\n self.numNodes += 1\n\n def listNodes(self):\n theList = \"Nodes: \"\n for i in self.nodeList:\n theList += str(i)\n theList += \" \"\n return theList\n\n\ntree = DGraph()\ntree.addNode(\"A\")\ntree.addNode('C')\ntree.addNode('T')\nprint(tree.listNodes())\n\nNow debugger will show node names (using__repr__)\n\nSuch implementation of __repr__ is probably not a good idea unless you enforce uniqueness of node names.\nAs a side note, it's nice to follow naming convention in Python\n" ]
[ 1 ]
[]
[]
[ "adjacency_list", "data_structures", "graph", "python", "python_3.x" ]
stackoverflow_0074505246_adjacency_list_data_structures_graph_python_python_3.x.txt
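As a quick standalone check of the point made in the answer - that __repr__ controls how objects render inside containers and in the debugger - here is a minimal sketch independent of the DGraph class above:

```python
class Node:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        # repr is what list display and debuggers use; without it you
        # get '<__main__.Node object at 0x...>'
        return self.name

nodes = [Node("A"), Node("C"), Node("T")]
print(nodes)  # [A, C, T]
# str() falls back to __repr__ when __str__ is not defined:
print("Nodes: " + " ".join(str(n) for n in nodes))  # Nodes: A C T
```

Note the caveat from the answer still applies: returning the bare name from __repr__ is only a good idea if node names are unique.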
Q: Python error in VSCode: Sorry, something went wrong activating IntelliCode support for Python My code is not working in VSCode. When I click to run the code I see this error: Sorry, something went wrong activating IntelliCode support for Python. Please check the "Python" and "VS IntelliCode" output windows for details. And when I try to run the code again I see this message: Code is already running The code doesn't stop when I press Ctrl+C, so I have to close the editor and open it again. I don't understand why this happens; please help me. Thanks in advance. A: I would just like to add a few helpful links: Intellicode Issue 57 Intellicode Issue 266 Gitmemory issue 486082039 For a lot of people, it just began working after a few tries randomly. See this text (quoted from issue 57): There's a race condition in the activation of both the IntelliCode and Python language server extensions. Even if the Python extension is loaded, the language server that the extension spins up might not be fully initialized yet. So if the Python extension loads, then the IntelliCode extension, then the Python language server initializes, we will have this problem. For some people, it started working after reloading the VS IntelliCode pack following a reinstall of the Python extension pack. Thank you. A: Go to Extensions, then search for the Python extension, then switch to release. A: Make sure you have Pylance installed (IntelliSense support for Python) Make sure you are in the tab of a Python file in VS Code, and locate the {} Python icon on the bottom row. Click the {} icon, and then click Select Interpreter. Just after that, enter the desired Python path, wait for a few seconds in the current Python tab, and Pylance will do its job A: First of all, find your Python installation path Copy it Then in VSCode Open settings Extensions Python Default Interpreter Path And paste the full path to your Python installation folder. 
For example: X:/Program Files/Python310 If it didn't work immediately, try reloading VSCode. (P.S. Should work without any reloads)
Python error in VSCode: Sorry, something went wrong activating IntelliCode support for Python
My code is not working in VSCode. When I click to run the code I see this error: Sorry, something went wrong activating IntelliCode support for Python. Please check the "Python" and "VS IntelliCode" output windows for details. And when I try to run the code again I see this message: Code is already running The code doesn't stop when I press Ctrl+C, so I have to close the editor and open it again. I don't understand why this happens; please help me. Thanks in advance.
[ "I would just like to add a few helpful links:\nIntellicode Issue 57\nIntellicode Issue 266\nGitmemory issue 486082039\nFor a lot of people, it just began working after a few tries randomly. See this text (quoted from issue 57):\n\nThere's a race condition in the activation of both the IntelliCode and Python language server extensions. Even if the Python extension is loaded, the language server that the extension spins up might not be fully initialized yet. So if the Python extension loads, then the IntelliCode extension, then the Python language server initializes, we will have this problem.\n\nFor some people, it was working to reload VS Intellicode pack following the reinstall the Python extension pack.\nThank you.\n", "Go to Extensions, then search for the Python extension, then switch to release.\n", "\nMake sure to have Pylance installed (intellisense support for Python)\nMake sure to be into the tab for any python file For VS Code, and locate the {} Python icon on the bottom row. Click over the {} icon, and then click over Select Interpreter. Just after that, make sure to input the desired python path, wait for a few seconds in the current python tab, and finally Pylance will be doing its job\n\n", "\nFirst of all find your Python installation path\nCopy it\n\nThen in VSCode\n\nOpen settings\nExtensions\nPython\nDefault Interpreter Path\n\nAnd paste the full path to your Python installation folder.\nFor example:\nX:/Program Files/Python310\n\nIf it didn't work immediately, try reloading VSCode.\n(P.S. Should work without any reloads)\n" ]
[ 2, 1, 0, 0 ]
[]
[]
[ "python", "runtime_error", "visual_studio_code" ]
stackoverflow_0068637153_python_runtime_error_visual_studio_code.txt
Q: Remove Redundant Parentheses in an Arithmetic Expression I'm trying to remove redundant parentheses from an arithmetic expression. For example, if I have the expression (5+((2*3))), I want the redundant parentheses around (2*3) removed. The output that I want is (5+(2*3)). I'm getting this arithmetic expression from performing an inorder traversal on an expression tree. The final string that I get after performing the traversal is ((5)+(((2)*(3)))). I used re.sub('\((\d+)\)', r'\1', <traversal output string>) to remove the parentheses around single numbers, like (5) to 5. But I'm still confused about how to remove the parentheses around sub-expressions (((2*3)) in this case). Here is what my inorder traversal function looks like def inorder(tree): # return string of inorder traversal of tree with parenthesis s = '' if tree != None: s += '(' s += ExpTree.inorder(tree.getLeftChild()) s += str(tree.getRootVal()) s += ExpTree.inorder(tree.getRightChild()) s += ')' return re.sub('\((\d+)\)', r'\1', s) Any guidance on how I should approach this problem would be appreciated! A: Instead of adding the parentheses around the parent/root expression, one option is to add the parentheses before and after recursing down on each of the left and right children. If those children are leaves, meaning they do not have children, do not add parentheses. 
In practice, this might look something like this: def inorder(tree): # return string of inorder traversal of tree with parenthesis s = '' # Add left subtree, with parentheses if it is not a leaf left = tree.getLeftChild() if left is not None: if left.isLeaf(): s += ExpTree.inorder(left) else: s += '(' + ExpTree.inorder(left) + ')' # Add root value s += str(tree.getRootVal()) # Add right subtree, with parentheses if it is not a leaf right = tree.getRightChild() if right is not None: if right.isLeaf(): s += ExpTree.inorder(right) else: s += '(' + ExpTree.inorder(right) + ')' return s This means that a tree with a single node, 5, will become just "5", and the following tree + / \ 5 * / \ 2 3 will become "5+(2*3)". To instead get the output "(5+((2*3)))", add an additional set of parenthesis around the returned string if the tree is not a leaf. def inorder(tree): # return string of inorder traversal of tree with parenthesis s = '' # Add left subtree, with parentheses if it is not a leaf left = tree.getLeftChild() if left is not None: if left.isLeaf(): s += ExpTree.inorder(left) else: s += '(' + ExpTree.inorder(left) + ')' # Add root value s += str(tree.getRootVal()) # Add right subtree, with parentheses if it is not a leaf right = tree.getRightChild() if right is not None: if right.isLeaf(): s += ExpTree.inorder(right) else: s += '(' + ExpTree.inorder(right) + ')' # Add an additional set of parenthesis around non-leaf expressions. if tree.isLeaf(): return s else: return '(' + s + ')'
Remove Redundant Parentheses in an Arithmetic Expression
I'm trying to remove redundant parentheses from an arithmetic expression. For example, if I have the expression (5+((2*3))), I want the redundant parentheses around (2*3) removed. The output that I want is (5+(2*3)). I'm getting this arithmetic expression from performing an inorder traversal on an expression tree. The final string that I get after performing the traversal is ((5)+(((2)*(3)))). I used re.sub('\((\d+)\)', r'\1', <traversal output string>) to remove the parentheses around single numbers, like (5) to 5. But I'm still confused about how to remove the parentheses around sub-expressions (((2*3)) in this case). Here is what my inorder traversal function looks like def inorder(tree): # return string of inorder traversal of tree with parenthesis s = '' if tree != None: s += '(' s += ExpTree.inorder(tree.getLeftChild()) s += str(tree.getRootVal()) s += ExpTree.inorder(tree.getRightChild()) s += ')' return re.sub('\((\d+)\)', r'\1', s) Any guidance on how I should approach this problem would be appreciated!
[ "Instead of adding the parenthesis around the parent/root expression, one option is to add the parentheses before and after recursing down on each of the left and right children. If those children are leaves, meaning they do not have children, do not add parentheses.\nIn practice, this might look something like this:\ndef inorder(tree):\n # return string of inorder traversal of tree with parenthesis\n s = ''\n # Add left subtree, with parentheses if it is not a leaf\n left = tree.getLeftChild()\n if left is not None:\n if left.isLeaf():\n s += ExpTree.inorder(left)\n else:\n s += '(' + ExpTree.inorder(left) + ')'\n\n # Add root value\n s += str(tree.getRootVal())\n\n # Add right subtree, with parentheses if it is not a leaf\n right = tree.getRightChild()\n if right is not None:\n if right.isLeaf():\n s += ExpTree.inorder(right)\n else:\n s += '(' + ExpTree.inorder(right) + ')'\n return s\n\nThis means that a tree with a single node, 5, will become just \"5\", and the following tree\n +\n / \\\n5 *\n / \\\n 2 3\n\nwill become \"5+(2*3)\".\nTo instead get the output \"(5+((2*3)))\", add an additional set of parenthesis around the returned string if the tree is not a leaf.\ndef inorder(tree):\n # return string of inorder traversal of tree with parenthesis\n s = ''\n # Add left subtree, with parentheses if it is not a leaf\n left = tree.getLeftChild()\n if left is not None:\n if left.isLeaf():\n s += ExpTree.inorder(left)\n else:\n s += '(' + ExpTree.inorder(left) + ')'\n\n # Add root value\n s += str(tree.getRootVal())\n\n # Add right subtree, with parentheses if it is not a leaf\n right = tree.getRightChild()\n if right is not None:\n if right.isLeaf():\n s += ExpTree.inorder(right)\n else:\n s += '(' + ExpTree.inorder(right) + ')'\n\n # Add an additional set of parenthesis around non-leaf expressions.\n if tree.isLeaf():\n return s\n else:\n return '(' + s + ')'\n\n\n" ]
[ 1 ]
[]
[]
[ "python", "python_3.x", "string", "traversal", "tree" ]
stackoverflow_0074505569_python_python_3.x_string_traversal_tree.txt
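The answer's approach can be exercised end-to-end with a small self-contained tree. The Node class and is_leaf method here are illustrative stand-ins for the asker's ExpTree API (getLeftChild/getRightChild/getRootVal), not the original code:

```python
class Node:
    def __init__(self, val, left=None, right=None):
        self.val, self.left, self.right = val, left, right

    def is_leaf(self):
        return self.left is None and self.right is None

def inorder(tree):
    """Inorder traversal that parenthesizes each non-leaf subtree
    exactly once, so leaves like 5 never get their own parentheses."""
    if tree.is_leaf():
        return str(tree.val)
    s = inorder(tree.left) + str(tree.val) + inorder(tree.right)
    return '(' + s + ')'

#     +
#    / \
#   5   *
#      / \
#     2   3
tree = Node('+', Node(5), Node('*', Node(2), Node(3)))
print(inorder(tree))  # (5+(2*3))
```

Because parentheses are only added around non-leaf subtrees, no re.sub cleanup pass is needed afterwards.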
Q: How to resolve No module named 'hmmlearn' error in Jupyter Notebook I'm new to hmmlearn and am trying to use the Jupyter Notebook to work through this Gaussian HMM of stock data example. However, when I run the following code, I get an error. from __future__ import print_function import datetime import numpy as np from matplotlib import cm, pyplot as plt from matplotlib.dates import YearLocator, MonthLocator try: from matplotlib.finance import quotes_historical_yahoo_ochl except ImportError: # For Matplotlib prior to 1.5. from matplotlib.finance import ( quotes_historical_yahoo as quotes_historical_yahoo_ochl ) from hmmlearn.hmm import GaussianHMM print(__doc__) The error is as follows: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-02bbde14d4d4> in <module>() 14 ) 15 ---> 16 from hmmlearn.hmm import GaussianHMM 17 18 ModuleNotFoundError: No module named 'hmmlearn' I have spent a while searching the Internet and trying to find out why this is happening. I've ensured that I've downloaded the dependencies (scikit-learn, numpy and scipy), and I've run pip install -U --user hmmlearn, both via the Windows cmd and as mentioned here. However, I keep getting the same error. I'm not sure if it may be something to do with the location of the different packages on my computer (I'm using Windows). Does anyone have suggestions on what I could try to solve this? (My main aim is just to be able to get set up with hmmlearn so that I can start using it to explore HMMs.) A: This page provides 32- and 64-bit Windows binaries of many scientific open-source extension packages for the official CPython distribution of the Python programming language. Select the appropriate file according to your system requirements. 
(For me, it's python 3.7 and windows 64 bit) After you downloaded this, open command prompt in the same folder with .whl file and type: pip install hmmlearn-0.2.1-cp37-cp37m-win_amd64.whl Then you can use hmmlearn in the Jupyter Notebook like that: import hmmlearn # Or from hmmlearn import hmm A: I have tried to run 'pip install hmmlearn' directly in the notebook cell. After that I restarted the kernel and it worked for me. Try to see if it works for you.
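A common cause of this error, worth checking before anything else, is that the Jupyter kernel runs a different Python interpreter than the one `pip` installed into. A quick diagnostic sketch (the notebook-cell install command is shown as a comment, since `!` syntax only works inside a notebook):

```python
import sys

# Print the path of the interpreter the kernel is actually running.
# If this differs from where `pip` installed hmmlearn, imports will fail.
print(sys.executable)

# In a notebook cell, installing with the kernel's own interpreter
# avoids the mismatch:
#   !{sys.executable} -m pip install hmmlearn
```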
How to resolve No module named 'hmmlearn' error in Jupyter Notebook
I'm new to hmmlearn and am trying to use the Jupyter Notebook to work through this Gaussian HMM of stock data example. However, when I run the following code, I get an error. from __future__ import print_function import datetime import numpy as np from matplotlib import cm, pyplot as plt from matplotlib.dates import YearLocator, MonthLocator try: from matplotlib.finance import quotes_historical_yahoo_ochl except ImportError: # For Matplotlib prior to 1.5. from matplotlib.finance import ( quotes_historical_yahoo as quotes_historical_yahoo_ochl ) from hmmlearn.hmm import GaussianHMM print(__doc__) The error is as follows: --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) <ipython-input-2-02bbde14d4d4> in <module>() 14 ) 15 ---> 16 from hmmlearn.hmm import GaussianHMM 17 18 ModuleNotFoundError: No module named 'hmmlearn' I have spent a while searching the Internet and trying to find out why this is happening. I've ensured that I've downloaded the dependencies (scikit-learn, numpy and scipy), and I've run pip install -U --user hmmlearn, both via the Windows cmd and as mentioned here. However, I keep getting the same error. I'm not sure if it may be something to do with the location of the different packages on my computer (I'm using Windows). Does anyone have suggestions on what I could try to solve this? (My main aim is just to be able to get set up with hmmlearn so that I can start using it to explore HMMs.)
[ "This page provides 32- and 64-bit Windows binaries of many scientific open-source extension packages for the official CPython distribution of the Python programming language. \nSelect the appropriate file according to your system requirements. (For me, it's python 3.7 and windows 64 bit)\nAfter you downloaded this, open command prompt in the same folder with .whl file and type:\npip install hmmlearn-0.2.1-cp37-cp37m-win_amd64.whl\n\nThen you can use hmmlearn in the Jupyter Notebook like that:\nimport hmmlearn\n# Or \nfrom hmmlearn import hmm\n\n", "I have tried to run 'pip install hmmlearn' directly in the notebook cell. After that I restarted the kernel and it worked for me. Try to see if it works for you.\n" ]
[ 2, 0 ]
[]
[]
[ "hmmlearn", "jupyter_notebook", "numpy", "python" ]
stackoverflow_0048355747_hmmlearn_jupyter_notebook_numpy_python.txt
Q: On Matplotlib, how do I move my legend where I want it? How would I move my legend to inside the graph right under where my title is? plt.plot([1, 2], [3, 4], color='r', label="Apple") plt.plot([3, 4], [5, 6], color='g', label="Pear") plt.title("Total Profit Trend by Month") plt.legend() plt.show() A: You can use: plt.legend(loc='upper left') or you can replace 'upper left' by the following locations: upper right, lower left, lower right, right, center left, center right, lower center, upper center, center.
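For the specific goal of placing the legend inside the axes right under the title, `loc='upper center'` is the closest built-in position. A minimal sketch of the question's plot (the Agg backend line is only there so it runs headless; drop it for interactive use):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; omit for interactive sessions
import matplotlib.pyplot as plt

plt.plot([1, 2], [3, 4], color='r', label="Apple")
plt.plot([3, 4], [5, 6], color='g', label="Pear")
plt.title("Total Profit Trend by Month")
# 'upper center' places the legend inside the axes, just below the title
leg = plt.legend(loc='upper center')
```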
On Matplotlib, how do I move my legend where I want it?
How would I move my legend to inside the graph right under where my title is? plt.plot([1, 2], [3, 4], color='r', label="Apple") plt.plot([3, 4], [5, 6], color='g', label="Pear") plt.title("Total Profit Trend by Month") plt.legend() plt.show()
[ "You can use: plt.legend(loc='upper left') or you can replace 'upper left' by the following locations:\nupper right,\nlower left,\nlower right,\nright,\ncenter left,\ncenter right,\nlower center,\nupper center,\ncenter.\n" ]
[ 0 ]
[]
[]
[ "matplotlib", "python" ]
stackoverflow_0074505551_matplotlib_python.txt
Q: How to remove the none error from the output I'm creating a recursive function that creates n lines of asterisks. I do not have problems with writing code, but just am wondering why None appears in my output. Here is my code: def recursive_lines(n): for n in range(0,n): print ('*' + ('*'*n)) # Print asterisk print(recursive_lines(5)) # Enter an integer here And this is the result: * ** *** **** ***** None I don't think I used any int(print()) kind of statement here. Then why does this error keep appearing? A: You are printing out recursive_lines(5), but inside the function, you are already printing the values. Simply remove the print that is around recursive_lines(5) A: The None is printing because you are using print(recursive_lines(5)) even though your function is not returning anything. Remove the print statement while calling the function. def recursive_lines(n): for n in range(0,n): print ('*' + ('*'*n)) recursive_lines(5) A: Your function isn't returning anything. By default it'll return none. And if you don't want it to print none you can just not print recursive_lines(5). If you add a return 0 statement it'll return 0 instead of none
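The point the answers make can be seen in isolation: a function with no `return` statement implicitly returns `None`, so printing its result prints `None`. A tiny sketch:

```python
def greet():
    print("hello")   # side effect only; there is no return statement

result = greet()     # prints "hello"
print(result)        # prints "None", because greet() returned None
```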
How to remove the none error from the output
I'm creating a recursive function that creates n lines of asterisks. I do not have problems with writing code, but just am wondering why None appears in my output. Here is my code: def recursive_lines(n): for n in range(0,n): print ('*' + ('*'*n)) # Print asterisk print(recursive_lines(5)) # Enter an integer here And this is the result: * ** *** **** ***** None I don't think I used any int(print()) kind of statement here. Then why does this error keep appearing?
[ "You are printing out recursive_lines(5), but inside the function, you are already printing the values. Simply remove the print that is around recursive_lines(5)\n", "The None is printing because you are using print(recursive_lines(5)) even though your function is not returning anything. Remove the print statement while calling the function.\ndef recursive_lines(n):\n for n in range(0,n):\n print ('*' + ('*'*n)) \n \nrecursive_lines(5) \n\n", "Your function isn't returning anything. By default it'll return none. And if you don't want it to print none you can just not print recursive_lines(5).\nIf you add a return 0 statement it'll return 0 instead of none\n" ]
[ 0, 0, 0 ]
[]
[]
[ "python", "python_3.x" ]
stackoverflow_0074505681_python_python_3.x.txt
Q: How to remove all items that are in between two duplicates in a list How do I write a program that will remove all the items that are in between two duplicates in a list and it will also remove the second duplicate. For example, a = [ (0,0) , (1,0) , (2,0) , (3,0) , (1,0) ] In the list a, we see that (1,0) occurs more than once in the list. Thus I want to remove all the items in between the 2 duplicates and I want to remove the second occurrence of (1,0). Thus, in this example, I want to remove (2,0),(3,0) and the second occurrence of (1,0). Now my list would look like this : a = [(0,0),(1,0)] I was able to do this, however the problem occurs when I have more than one duplicate in my list. For example, b = [ (0,0) , (1,0) , (2,0) , (3,0) , (1,0) , (5,0) , (6,0) , (7,0) , (8,0) , (5,0), (9,0) , (10,0) ] In this example, we see that I have 2 items that are duplicates. I have (1,0) and I have (5,0). Thus, I want to remove all the items between (1,0) and the second occurrence of (1,0) including its second occurrence and I want to remove all the items between (5,0) and the second occurrence of (5,0). In the end, my list should look like this : b = [ (0,0) , (1,0) ,(5,0) , (9,0) ] This is what I have thus far: a = [ (0,0) , (1,0) , (2,0) , (3,0) , (1,0) ] indexes_of_duplicates = [] for i,j in enumerate(a): if a.count(j) > 1 : indexes_of_duplicates.append(i) for k in range(indexes_of_duplicates[0]+1,indexes_of_duplicates[1]+1): a.pop(indexes_of_duplicates[0]+1) print(a) Output : [(0, 0), (1, 0)] As you can see, this code would only work if I have only 1 duplicate in my list, but I have no idea how to do it if I have more than one duplicate. PS : I can't obtain a list with overlaps like this [(1, 0), (2, 0), (3, 0), (1, 0), (2, 0)]. 
thus, you can ignore lists of this kind A: Here's one way to do that by using index: lst = [(0,0), (1,0), (2,0), (3,0), (1,0), (5,0), (6,0), (7,0), (8,0), (5,0), (9,0), (10,0)] output = [] while lst: # while `lst` is non-empty x, *lst = lst # if lst = [1,2,3], for example, now x = 1 and lst = [2,3] output.append(x) try: # try finding the x in lst lst = lst[lst.index(x)+1:] # if found, reduce the lst (i.e., skip the first lst.index(x)+1 elememts except ValueError: # if not found pass # do nothing print(output) # [(0, 0), (1, 0), (5, 0), (9, 0), (10, 0)] Note that lst will be exhausted. If you want to preserve it, you can copy it beforehand. A: Here's one way. from collections import Counter a = [0, 'x', 2, 3, 'x', 4, 'y', 'y', 6] # Count the number of occurrence of each unique value counts = Counter(a) removing = None new_list = [] for item in a: if removing: if item == removing: removing = None continue if counts[item] > 1: removing = item new_list.append(item) print(new_list) Output: [0, 'x', 4, 'y', 6]
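The first answer's index-based approach, repackaged here as a function so the input list is left untouched. Note that `(10, 0)` is kept, since it has no duplicate — only items between a value and its second occurrence (plus that second occurrence) are dropped:

```python
def collapse_duplicates(lst):
    # Walk the list; whenever the current head value reappears later,
    # skip everything up to and including that second occurrence.
    out = []
    while lst:
        x, *lst = lst          # pop the head without mutating the caller's list
        out.append(x)
        try:
            lst = lst[lst.index(x) + 1:]  # cut past the second occurrence
        except ValueError:
            pass                # no duplicate of x ahead; keep going
    return out


b = [(0, 0), (1, 0), (2, 0), (3, 0), (1, 0), (5, 0),
     (6, 0), (7, 0), (8, 0), (5, 0), (9, 0), (10, 0)]
print(collapse_duplicates(b))  # [(0, 0), (1, 0), (5, 0), (9, 0), (10, 0)]
```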
How to remove all items that are in between two duplicates in a list
How do I write a program that will remove all the items that are in between two duplicates in a list and it will also remove the second duplicate. For example, a = [ (0,0) , (1,0) , (2,0) , (3,0) , (1,0) ] In the list a, we see that (1,0) occurs more than once in the list. Thus I want to remove all the items in between the 2 duplicates and I want to remove the second occurrence of (1,0). Thus, in this example, I want to remove (2,0),(3,0) and the second occurrence of (1,0). Now my list would look like this : a = [(0,0),(1,0)] I was able to do this, however the problem occurs when I have more than one duplicate in my list. For example, b = [ (0,0) , (1,0) , (2,0) , (3,0) , (1,0) , (5,0) , (6,0) , (7,0) , (8,0) , (5,0), (9,0) , (10,0) ] In this example, we see that I have 2 items that are duplicates. I have (1,0) and I have (5,0). Thus, I want to remove all the items between (1,0) and the second occurrence of (1,0) including its second occurrence and I want to remove all the items between (5,0) and the second occurrence of (5,0). In the end, my list should look like this : b = [ (0,0) , (1,0) ,(5,0) , (9,0) ] This is what I have thus far: a = [ (0,0) , (1,0) , (2,0) , (3,0) , (1,0) ] indexes_of_duplicates = [] for i,j in enumerate(a): if a.count(j) > 1 : indexes_of_duplicates.append(i) for k in range(indexes_of_duplicates[0]+1,indexes_of_duplicates[1]+1): a.pop(indexes_of_duplicates[0]+1) print(a) Output : [(0, 0), (1, 0)] As you can see, this code would only work if I have only 1 duplicate in my list, but I have no idea how to do it if I have more than one duplicate. PS : I can't obtain a list with overlaps like this [(1, 0), (2, 0), (3, 0), (1, 0), (2, 0)]. Thus, you can ignore lists of this kind
[ "Here's one way to do that by using index:\nlst = [(0,0), (1,0), (2,0), (3,0), (1,0), (5,0), (6,0), (7,0), (8,0), (5,0), (9,0), (10,0)]\n\noutput = []\n\nwhile lst: # while `lst` is non-empty\n x, *lst = lst # if lst = [1,2,3], for example, now x = 1 and lst = [2,3]\n output.append(x)\n try: # try finding the x in lst\n lst = lst[lst.index(x)+1:] # if found, reduce the lst (i.e., skip the first lst.index(x)+1 elememts\n except ValueError: # if not found\n pass # do nothing\n\nprint(output) # [(0, 0), (1, 0), (5, 0), (9, 0), (10, 0)]\n\nNote that lst will be exhausted. If you want to preserve it, you can copy it beforehand.\n", "Here's one way.\nfrom collections import Counter\n\na = [0, 'x', 2, 3, 'x', 4, 'y', 'y', 6]\n\n# Count the number of occurrence of each unique value\ncounts = Counter(a)\n\nremoving = None\nnew_list = []\nfor item in a:\n if removing: \n if item == removing:\n removing = None\n continue\n if counts[item] > 1:\n removing = item\n new_list.append(item)\n\nprint(new_list)\n\nOutput:\n[0, 'x', 4, 'y', 6]\n\n" ]
[ 2, 0 ]
[]
[]
[ "duplicates", "list", "python", "python_3.x" ]
stackoverflow_0074505592_duplicates_list_python_python_3.x.txt
Q: Create a column based on conditions and calculation Below is my dataframe: df = pd.DataFrame({"ID" : [1, 1, 2, 2, 2, 3, 3], "length" : [0.7, 0.7, 0.8, 0.6, 0.6, 0.9, 0.9], "comment" : ["typed", "handwritten", "typed", "typed", "handwritten", "handwritten", "handwritten"]}) df ID length comment 0 1 0.7 typed 1 1 0.7 handwritten 2 2 0.8 typed 3 2 0.6 typed 4 2 0.6 handwritten 5 3 0.9 handwritten 6 3 0.9 handwritten I want to be able to do the following: For any group of ID, if the length are the same but the comments are different, use the "typed" formula (5 x length) for the calculated length of that group of ID, otherwise use the formula that apply to each comment to get the calculated length. typed = 5 x length, handwritten = 7*length. Required Output will be as below: ID length comment Calculated Length 0 1 0.7 typed 5*length 1 1 0.7 handwritten 5*length 2 2 0.8 typed 5*length 3 2 0.6 typed 5*length 4 2 0.6 handwritten 7*length 5 3 0.9 handwritten 7*length 6 3 0.9 handwritten 7*length Thank you. A: Find the IDs that satisfy the special condition using groupby. Using the IDs and the comment, compute the Calculated length using np.where as follows >>> grp_ids = df.groupby("ID")[["length", "comment"]].nunique() >>> grp_ids length comment ID 1 1 2 2 2 2 3 1 1 >>> idx = grp_ids.index[(grp_ids["length"] == 1) & (grp_ids["comment"] != 1)] >>> idx Int64Index([1], dtype='int64', name='ID') >>> df["Calculated length"] = np.where( df["ID"].isin(idx) | (df["comment"] == "typed"), df["length"] * 5, df["length"] * 7 ) >>> df ID length comment Calculated length 0 1 0.7 typed 3.5 1 1 0.7 handwritten 3.5 2 2 0.8 typed 4.0 3 2 0.6 typed 3.0 4 2 0.6 handwritten 4.2 5 3 0.9 handwritten 6.3 6 3 0.9 handwritten 6.3
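The answer's logic as one runnable sketch: first collect the IDs whose rows all share a single length but mix comments, then branch with `np.where`. The expected values follow the required output (5*length for ID 1 and all "typed" rows, 7*length otherwise):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"ID": [1, 1, 2, 2, 2, 3, 3],
                   "length": [0.7, 0.7, 0.8, 0.6, 0.6, 0.9, 0.9],
                   "comment": ["typed", "handwritten", "typed", "typed",
                               "handwritten", "handwritten", "handwritten"]})

# IDs whose rows all share one length but carry differing comments
grp = df.groupby("ID")[["length", "comment"]].nunique()
special_ids = grp.index[(grp["length"] == 1) & (grp["comment"] > 1)]

# Those IDs (and every "typed" row) use 5*length; everything else 7*length
use_typed = df["ID"].isin(special_ids) | df["comment"].eq("typed")
df["Calculated Length"] = np.where(use_typed, df["length"] * 5, df["length"] * 7)
print(df)
```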
Create a column based on conditions and calculation
Below is my dataframe: df = pd.DataFrame({"ID" : [1, 1, 2, 2, 2, 3, 3], "length" : [0.7, 0.7, 0.8, 0.6, 0.6, 0.9, 0.9], "comment" : ["typed", "handwritten", "typed", "typed", "handwritten", "handwritten", "handwritten"]}) df ID length comment 0 1 0.7 typed 1 1 0.7 handwritten 2 2 0.8 typed 3 2 0.6 typed 4 2 0.6 handwritten 5 3 0.9 handwritten 6 3 0.9 handwritten I want to be able to do the following: For any group of ID, if the length are the same but the comments are different, use the "typed" formula (5 x length) for the calculated length of that group of ID, otherwise use the formula that apply to each comment to get the calculated length. typed = 5 x length, handwritten = 7*length. Required Output will be as below: ID length comment Calculated Length 0 1 0.7 typed 5*length 1 1 0.7 handwritten 5*length 2 2 0.8 typed 5*length 3 2 0.6 typed 5*length 4 2 0.6 handwritten 7*length 5 3 0.9 handwritten 7*length 6 3 0.9 handwritten 7*length Thank you.
[ "Find the IDs that satisfy the special condition using groupby. Using the IDs and the comment, compute the Calculated length using np.where as follows\n>>> grp_ids = df.groupby(\"ID\")[[\"length\", \"comment\"]].nunique()\n>>> grp_ids\n length comment\nID\n1 1 2\n2 2 2\n3 1 1\n>>> idx = grp_ids.index[(grp_ids[\"length\"] == 1) & (grp_ids[\"comment\"] != 1)]\n>>> idx\nInt64Index([1], dtype='int64', name='ID')\n>>> df[\"Calculated length\"] = np.where(\n df[\"ID\"].isin(idx) | (df[\"comment\"] == \"typed\"),\n df[\"length\"] * 5,\n df[\"length\"] * 7\n )\n>>> df\n ID length comment Calculated length\n0 1 0.7 typed 3.5\n1 1 0.7 handwritten 3.5\n2 2 0.8 typed 4.0\n3 2 0.6 typed 3.0\n4 2 0.6 handwritten 4.2\n5 3 0.9 handwritten 6.3\n6 3 0.9 handwritten 6.3\n\n" ]
[ 0 ]
[ "use np.where if comment column exist only typed or handwritten.\nimport numpy as np\ncond1 = df['comment'] == 'typed'\ndf.assign(Calculated_Length=np.where(cond1, df['length'] * 5, df['length'] * 7))\n\noutput:\n ID length comment Calculated_Length\n0 1 0.7 typed 3.5\n1 1 0.7 handwritten 4.9\n2 2 0.8 typed 4.0\n3 2 0.6 typed 3.0\n4 2 0.6 handwritten 4.2\n5 3 0.9 handwritten 6.3\n6 3 0.9 handwritten 6.3\n\nedit after comment\ncond1 = df['comment'] == 'typed'\ncond2 = df.groupby('ID')['length'].transform(lambda x: (x.max() == x.min()) & (df.loc[x.index, 'comment'].eq('typed').sum() > 0))\ndf.assign(Caculated_Length=np.where((cond1 | cond2), df['length']*5, df['length']*7))\n\noutput:\n ID length comment Caculated_Length\n0 1 0.7 typed 3.5\n1 1 0.7 handwritten 3.5\n2 2 0.8 typed 4.0\n3 2 0.6 typed 3.0\n4 2 0.6 handwritten 4.2\n5 3 0.9 handwritten 6.3\n6 3 0.9 handwritten 6.3\n\n" ]
[ -1 ]
[ "numpy", "pandas", "python" ]
stackoverflow_0074505643_numpy_pandas_python.txt
Q: How to set PyQt element text from another running script? I have a client socket program and a server socket program in python. The client sends a message and the server echos the message as well as stores some variables about the clients ip and port number. I made a GUI in PyQt with some text fields to store the clients ip and port number. The problem is I need to run both the server socket python file and the PyQt GUI at the same time, as well as update the GUI's text fields from the variables in the server socket file(client ip and port number). I have tried creating a new thread in the server socket file right before the server accepts clients which does start the gui and lets the server run. But when the code to set the client ip and port for the gui gets executed I get an error AttributeError: 'Ui_MainWindow' object has no attribute 'client_ip'. gui.py from PyQt6 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 600) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.label = QtWidgets.QLabel(self.centralwidget) self.label.setGeometry(QtCore.QRect(170, 150, 66, 18)) self.label.setObjectName("label") self.label_2 = QtWidgets.QLabel(self.centralwidget) self.label_2.setGeometry(QtCore.QRect(460, 150, 81, 18)) self.label_2.setObjectName("label_2") # CLIENT IP self.client_ip = QtWidgets.QLabel(self.centralwidget) self.client_ip.setGeometry(QtCore.QRect(250, 150, 66, 18)) self.client_ip.setText("") self.client_ip.setObjectName("client_ip") # CLIENT PORT self.client_port = QtWidgets.QLabel(self.centralwidget) self.client_port.setGeometry(QtCore.QRect(560, 150, 66, 18)) self.client_port.setText("") self.client_port.setObjectName("client_port") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 28)) 
self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.label.setText(_translate("MainWindow", "Client IP:")) self.label_2.setText(_translate("MainWindow", "Client Port:")) def set_client_ip(self, ip): self.client_ip.setText(ip) def set_client_port(self, port): self.client_port.setText(port) def main(self): import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec()) server.py import socket from myUtils import add_arguments from gui import Ui_MainWindow from threading import Thread def handle_client(client_socket, client_address, gui): """ Handles the client, receives and sends ack back :param client_socket: client socket :param client_address: client IP address :return: """ print(f"[+] Connected to Client: {client_address[0]} PORT: {address[1]}") try: # SET GUI CLIENT IP AND PORT client_ip = client_address[0] client_port = client_address[1] gui.set_client_ip(client_ip) gui.set_client_port(client_port) with client_socket: client_data = client_socket.recv(1024).decode() while client_data: print(f"[+] From: {client_address[0]}: {client_data}") response = f"from server: {len(client_data)}".encode() client_socket.send(response) client_data = client_socket.recv(1024).decode() except ConnectionResetError: print(f"[-] Closing connection to client: {client_address[0]}") client_socket.close() if __name__ == '__main__': gui = Ui_MainWindow() gui_thread = Thread(target=gui.main) gui_thread.start() # DEFAULTS PORT = 65002 HOST = '0.0.0.0' args = add_arguments("Start the server", ['-p'], 
['--port'], [int], ['?'], [False], ["The port number."]) if args.port: PORT = args.port try: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind((HOST, PORT)) s.listen() print(f"===== Server started on HOST: {HOST}\tPORT: {PORT} =====") while True: connection, address = s.accept() handle_client(connection, address, gui) except KeyboardInterrupt: print("Caught keyboard interrupt, exiting.") A: musicamantes advice worked: The QApplication and any UI element must be in the main thread, anything else is in other threads. Use QThread subclasses and custom signals, and also don't modify pyuic files (as clearly written in their headers), but follow the official guidelines about using Designer instead. I used QThread to start a worker thread that runs my socket script.
How to set PyQt element text from another running script?
I have a client socket program and a server socket program in python. The client sends a message and the server echos the message as well as stores some variables about the clients ip and port number. I made a GUI in PyQt with some text fields to store the clients ip and port number. The problem is I need to run both the server socket python file and the PyQt GUI at the same time, as well as update the GUI's text fields from the variables in the server socket file(client ip and port number). I have tried creating a new thread in the server socket file right before the server accepts clients which does start the gui and lets the server run. But when the code to set the client ip and port for the gui gets executed I get an error AttributeError: 'Ui_MainWindow' object has no attribute 'client_ip'. gui.py from PyQt6 import QtCore, QtGui, QtWidgets class Ui_MainWindow(object): def setupUi(self, MainWindow): MainWindow.setObjectName("MainWindow") MainWindow.resize(800, 600) self.centralwidget = QtWidgets.QWidget(MainWindow) self.centralwidget.setObjectName("centralwidget") self.label = QtWidgets.QLabel(self.centralwidget) self.label.setGeometry(QtCore.QRect(170, 150, 66, 18)) self.label.setObjectName("label") self.label_2 = QtWidgets.QLabel(self.centralwidget) self.label_2.setGeometry(QtCore.QRect(460, 150, 81, 18)) self.label_2.setObjectName("label_2") # CLIENT IP self.client_ip = QtWidgets.QLabel(self.centralwidget) self.client_ip.setGeometry(QtCore.QRect(250, 150, 66, 18)) self.client_ip.setText("") self.client_ip.setObjectName("client_ip") # CLIENT PORT self.client_port = QtWidgets.QLabel(self.centralwidget) self.client_port.setGeometry(QtCore.QRect(560, 150, 66, 18)) self.client_port.setText("") self.client_port.setObjectName("client_port") MainWindow.setCentralWidget(self.centralwidget) self.menubar = QtWidgets.QMenuBar(MainWindow) self.menubar.setGeometry(QtCore.QRect(0, 0, 800, 28)) self.menubar.setObjectName("menubar") MainWindow.setMenuBar(self.menubar) 
self.statusbar = QtWidgets.QStatusBar(MainWindow) self.statusbar.setObjectName("statusbar") MainWindow.setStatusBar(self.statusbar) self.retranslateUi(MainWindow) QtCore.QMetaObject.connectSlotsByName(MainWindow) def retranslateUi(self, MainWindow): _translate = QtCore.QCoreApplication.translate MainWindow.setWindowTitle(_translate("MainWindow", "MainWindow")) self.label.setText(_translate("MainWindow", "Client IP:")) self.label_2.setText(_translate("MainWindow", "Client Port:")) def set_client_ip(self, ip): self.client_ip.setText(ip) def set_client_port(self, port): self.client_port.setText(port) def main(self): import sys app = QtWidgets.QApplication(sys.argv) MainWindow = QtWidgets.QMainWindow() ui = Ui_MainWindow() ui.setupUi(MainWindow) MainWindow.show() sys.exit(app.exec()) server.py import socket from myUtils import add_arguments from gui import Ui_MainWindow from threading import Thread def handle_client(client_socket, client_address, gui): """ Handles the client, receives and sends ack back :param client_socket: client socket :param client_address: client IP address :return: """ print(f"[+] Connected to Client: {client_address[0]} PORT: {address[1]}") try: # SET GUI CLIENT IP AND PORT client_ip = client_address[0] client_port = client_address[1] gui.set_client_ip(client_ip) gui.set_client_port(client_port) with client_socket: client_data = client_socket.recv(1024).decode() while client_data: print(f"[+] From: {client_address[0]}: {client_data}") response = f"from server: {len(client_data)}".encode() client_socket.send(response) client_data = client_socket.recv(1024).decode() except ConnectionResetError: print(f"[-] Closing connection to client: {client_address[0]}") client_socket.close() if __name__ == '__main__': gui = Ui_MainWindow() gui_thread = Thread(target=gui.main) gui_thread.start() # DEFAULTS PORT = 65002 HOST = '0.0.0.0' args = add_arguments("Start the server", ['-p'], ['--port'], [int], ['?'], [False], ["The port number."]) if args.port: PORT = 
args.port try: with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s: s.bind((HOST, PORT)) s.listen() print(f"===== Server started on HOST: {HOST}\tPORT: {PORT} =====") while True: connection, address = s.accept() handle_client(connection, address, gui) except KeyboardInterrupt: print("Caught keyboard interrupt, exiting.")
[ "musicamantes advice worked:\n\nThe QApplication and any UI element must be in the main thread, anything else is in other threads. Use QThread subclasses and custom signals, and also don't modify pyuic files (as clearly written in their headers), but follow the official guidelines about using Designer instead.\n\nI used QThread to start a worker thread that runs my socket script.\n" ]
[ 0 ]
[]
[]
[ "pyqt", "pyqt6", "python", "python_3.x", "python_multithreading" ]
stackoverflow_0074504651_pyqt_pyqt6_python_python_3.x_python_multithreading.txt
Q: Regular expression for the name O`Malley, John F I am unable to generate a regular expression for the name O`Malley, John F. Right now, I have the following. re.compile(r'^[A-Z][a-z]+`, [A-Z][a-z]+ [A-Z][a-z]+.$') Any help or what am I doing wrong? A: For that specific name (format), the back tick is in the wrong place: re.compile(r'^[A-Z]`{0,1}[a-z]+, [A-Z][a-z]+ [A-Z][a-z]+.$') You are asking for the regex for that specific name format, the above will catch a name with or without the back tick on the second position. You should take into account the comments regarding the purpose of this exercise.
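One detail worth noting: neither pattern above actually matches this name, because the M right after the back tick in O`Malley is uppercase, so `[a-z]+` fails at that position. A sketch that allows an optional back-tick-plus-capital after the leading letter, and escapes the trailing dot (an unescaped `.` matches any character):

```python
import re

# [A-Z]        leading capital of the surname
# (?:`[A-Z])?  optional back tick followed by another capital (O`Malley)
# [a-z]+       rest of the surname
# , [A-Z][a-z]+  comma, space, first name
#  [A-Z]\.?    middle initial with an optional literal dot
pat = re.compile(r"[A-Z](?:`[A-Z])?[a-z]+, [A-Z][a-z]+ [A-Z]\.?")

print(bool(pat.fullmatch("O`Malley, John F.")))  # True
print(bool(pat.fullmatch("Smith, Jane Q")))      # True
```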
Regular expression for the name O`Malley, John F
I am unable to generate a regular expression for the name O`Malley, John F. Right now, I have the following. re.compile(r'^[A-Z][a-z]+`, [A-Z][a-z]+ [A-Z][a-z]+.$') Any help or what am I doing wrong?
[ "For that specific name (format), the back tick is in the wrong place:\nre.compile(r'^[A-Z]`{0,1}[a-z]+, [A-Z][a-z]+ [A-Z][a-z]+.$')\n\nYou are asking for the regex for that specific name format, the above will catch a name with or without the back tick on the second position.\nYou should take into account the comments regarding the purpose of this exercise.\n" ]
[ 0 ]
[]
[]
[ "python", "regex" ]
stackoverflow_0074505636_python_regex.txt
Q: asyncio.sleep(0) does not yield control to the event loop I have a simple async setup which includes two coroutines: light_job and heavy_job. light_job halts in the middle and heavy_job starts. I want heavy_job to yield control in the middle and allow light_job to finish, but asyncio.sleep(0) is not working as I expect. This is the setup: import asyncio import time loop = asyncio.get_event_loop() async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") await asyncio.sleep(0) print("heavy halt ended") time.sleep(3) print("heavy done") loop.run_until_complete(asyncio.gather( light_job(), heavy_job() )) If I run this code, light_job will not continue until after heavy_job is done. This is the output: hello 1668793123.159075 heavy start heavy halt started heavy halt ended heavy done 1668793129.1706061 world! But if I change asyncio.sleep(0) to asyncio.sleep(0.0001), the code will work as expected: hello 1668793379.599066 heavy start heavy halt started 1668793382.605899 world! heavy halt ended heavy done Based on the documentation and related threads, I expect asyncio.sleep(0) to work exactly like asyncio.sleep(0.0001). What is off here? A: Call asyncio.sleep(0) 3 times: import asyncio import time async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") for _ in range(3): await asyncio.sleep(0) print("heavy halt ended") time.sleep(3) print("heavy done") async def test(): await asyncio.gather( light_job(), heavy_job() ) asyncio.run(test()) This results in: hello 1668844526.157173 heavy start heavy halt started 1668844529.1575627 world!
heavy halt ended heavy done Looking at "asyncio/base_events.py", "_run_once" goes over pending timers first then runs everything it sees after calculating that. asyncio.sleep can only skip one iteration of the event loop. Multiple sleeps are required because asyncio.sleep(1) schedules a future which takes one extra iteration before giving back control to light_job by adding light_job back to the queue, and asyncio happens to run newly queued jobs last. For a clearer picture, it is possible to add more print statements: import asyncio import time async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") # Sleep to yield to the event loop. light_job isn't detected as ready so this iteration of the loop will finish await asyncio.sleep(0) print("after 1 sleep") # We are still in front of the event loop. Yield so that the 1 second timer in light_job runs. # The timer will realize it itself has expired, then put light_job back onto the queue. await asyncio.sleep(0) # Again the current Python implementation puts us in front. Yield so that the light_job runs print("after 2 sleeps") await asyncio.sleep(0) print("heavy halt ended") time.sleep(3) print("heavy done") async def test(): await asyncio.gather( light_job(), heavy_job() ) asyncio.run(test()) Then add breakpoints in "def _run_once(self):" of "asyncio/base_events.py". Add a breakpoint printing "loop start" on line 1842 at the start a.k.a "sched_count =". Add another one at line 1910 at the end a.k.a "handle = None" printing "loop end". Then add one before each task is run on line 1897 a.k.a "if self._debug:" evaluating and printing "_format_handle(handle)". 
The sequence of events is revealed: loop start <Task pending name='Task-1' coro=<test() running at /home/home/PycharmProjects/sandbox/notsync.py:34> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]> loop end loop start <Task pending name='Task-2' coro=<light_job() running at /home/home/PycharmProjects/sandbox/notsync.py:5> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]> hello 1668844827.5052986 <Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:13> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]> heavy start heavy halt started loop end loop start <Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:18> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]> after 1 sleep <TimerHandle when=37442.097934711 _set_result_unless_cancelled(<Future pendi...ask_wakeup()]>, None) at /usr/lib/python3.11/asyncio/futures.py:317> loop end loop start <Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:23> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]> after 2 sleeps <Task pending name='Task-2' coro=<light_job() running at /home/home/PycharmProjects/sandbox/notsync.py:8> wait_for=<Future finished result=None> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]> 1668844830.9250844 world! 
loop end loop start <Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:27> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]> heavy halt ended heavy done <Handle gather.<locals>._done_callback(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/tasks.py:759> loop end loop start <Handle gather.<locals>._done_callback(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/tasks.py:759> loop end loop start <Task pending name='Task-1' coro=<test() running at /home/home/PycharmProjects/sandbox/notsync.py:35> wait_for=<_GatheringFuture finished result=[None, None]> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]> loop end loop start <Handle _run_until_complete_cb(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/base_events.py:180> loop end loop start <Task pending name='Task-4' coro=<BaseEventLoop.shutdown_asyncgens() running at /usr/lib/python3.11/asyncio/base_events.py:539> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]> loop end loop start <Handle _run_until_complete_cb(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/base_events.py:180> loop end loop start <Task pending name='Task-5' coro=<BaseEventLoop.shutdown_default_executor() running at /usr/lib/python3.11/asyncio/base_events.py:564> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]> loop end loop start <Handle _run_until_complete_cb(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/base_events.py:180> loop end A: I think this subject needs some more discussion. I intend this post as an appendix to Daniel T's excellent and very clever answer - that's a fine piece of work. But Dan Getz's comment made me think that some more detail would be helpful. Dan suggests that there is no general way to yield to another task. 
This is correct because there is no guarantee that any other Task is ready to run, nor is there any guarantee of the execution order of the various Tasks. The example program fails to meet expectations because of details in the event loop implementation, which I discuss below. There are, however, tools for unambiguously synchronizing work between different Tasks. It's probably a bad idea to rely on time intervals in asyncio.sleep() for this purpose. Consider the following program, which uses an asyncio.Event to force light_job() to finish before heavy_job() can enter its second time.sleep delay. This will always work because the program logic is explicit: import asyncio import time event = asyncio.Event() async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") event.set() async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") # await asyncio.sleep(0) await event.wait() print("heavy halt ended") time.sleep(3) print("heavy done") async def main(): await asyncio.gather(light_job(), heavy_job()) asyncio.run(main()) Even simpler is this approach, which avoids the use of Event and even of gather: import asyncio import time async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): light = asyncio.create_task(light_job()) print("heavy start") time.sleep(3) print("heavy halt started") # await asyncio.sleep(0) await light print("heavy halt ended") time.sleep(3) print("heavy done") async def main(): await heavy_job() asyncio.run(main()) As for why the original script failed, the explanation can be found in the event loop implementation. An event loop keeps track of two things: a list of "ready" items, representing Tasks that are able to execute right now; and a list of "scheduled" items, representing Tasks that are waiting for some time interval to expire. 
Every time the event loop goes through a cycle, its first step is to examine the list of scheduled items and see if any are ready to proceed. It appends any of those items to the "ready" list. Then it executes this simple loop to run all the ready Tasks (I have omitted some diagnostic code; this is from Python3.10 standard library module base_events.py). Here, _ready is a deque. The items in the queue all have a run method that causes the Task to take one step forward, or in other words, to resume the Task at the point where it previously was suspended (typically an await expression). ntodo = len(self._ready) for i in range(ntodo): handle = self._ready.popleft() if handle._cancelled: continue else: handle._run() It's also the case that await asyncio.sleep(0) is implemented differently from await asyncio.sleep(x) where x > 0. In the first case, the await expression yields a value of None. The Task object simply appends an item to the "ready" list. In the second case, the await expression executes a loop.call_later function call, which creates a Future. The Task object appends an item to the "scheduled" list. Here is the implementation of asyncio.sleep in tasks.py: @types.coroutine def __sleep0(): """Skip one event loop run cycle. This is a private helper for 'asyncio.sleep()', used when the 'delay' is set to 0. It uses a bare 'yield' expression (which Task.__step knows how to handle) instead of creating a Future object. """ yield async def sleep(delay, result=None): """Coroutine that completes after a given time (in seconds).""" if delay <= 0: await __sleep0() return result loop = events.get_running_loop() future = loop.create_future() h = loop.call_later(delay, futures._set_result_unless_cancelled, future, result) try: return await future finally: h.cancel() So in the example script in the original post, the Task test will start with two items in its "ready" list: [light_job, heavy_job]. The scheduled list is empty. 
Light_job starts and hits await asyncio.sleep(1), so an item is appended to the "scheduled" list that represents this time delay. Now heavy_job runs for three seconds and hits await asyncio.sleep(0), so an item is appended to the "ready" list which indicates that this Task is to proceed without delay. That's the end of one full cycle of the event loop. The cycle ends even though the ready list isn't empty at that point, because the await with a zero delay caused heavy_job to be appended to the ready list immediately. In the next cycle of the event loop, the ready list has one item, which was placed there on the previous cycle: [heavy_job]. The scheduled list also has one item: [light_job]. The event loop examines the scheduled list and sees that light_job is now ready, so it appends light_job to ready_list, which now looks like this: [heavy_job, light_job]. So the code logic has essentially caused the order of the Tasks to get switched. Result: heavy_job runs twice in a row, once at the end of the first cycle and once at the beginning of the second. This also explains what happened when you replaced await asyncio.sleep(0) with await asyncio.sleep(0.0001). In that case, the Task got appended to the scheduled list rather than the ready list. Then ready=[] and scheduled=[light_job, heavy_job]. On the next cycle of the loop both Tasks are ready, but the order will once again be [light_job, heavy_job]. This machinery is invisible to client code, as it should be, but it has a weird consequence in this particular script. Whether or not this should be called a "bug" is a matter of debate. I assume there are good performance reasons why asyncio.sleep(0) is implemented differently from asyncio.sleep(nonzero).
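The append-to-the-ready-list behavior of asyncio.sleep(0) is easy to observe directly. This small sketch (my own example, not from the original post) shows that a task hitting await asyncio.sleep(0) simply goes to the back of the ready queue, behind any other task that was already waiting to run:

```python
import asyncio

log = []

async def worker(name):
    log.append(f"{name} start")
    # Bare yield: this task is appended to the *ready* list and
    # resumes on the next pass of the event loop, after any task
    # that was already queued ahead of it.
    await asyncio.sleep(0)
    log.append(f"{name} end")

async def main():
    await asyncio.gather(worker("a"), worker("b"))

asyncio.run(main())
print(log)  # ['a start', 'b start', 'a end', 'b end']
```

Both tasks start before either finishes: each sleep(0) pushes its task to the back of the ready deque, behind the task already queued there, exactly as described above.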
asyncio.sleep(0) does not yield control to the event loop
I have a simple async setup which includes two coroutines: light_job and heavy_job. light_job halts in the middle and heavy_job starts. I want heavy_job to yield control in the middle and allow light_job to finish, but asyncio.sleep(0) is not working as I expect. this is the setup: import asyncio import time loop = asyncio.get_event_loop() async def light_job(): print("hello ") print(time.time()) await asyncio.sleep(1) print(time.time()) print("world!") async def heavy_job(): print("heavy start") time.sleep(3) print("heavy halt started") await asyncio.sleep(0) print("heavy halt ended") time.sleep(3) print("heavy done") loop.run_until_complete(asyncio.gather( light_job(), heavy_job() )) if I run this code, light_job will not continue until after heavy_job is done. this is the output: hello 1668793123.159075 heavy start heavy halt started heavy halt ended heavy done 1668793129.1706061 world! but if I change asyncio.sleep(0) to asyncio.sleep(0.0001), the code will work as expected: hello 1668793379.599066 heavy start heavy halt started 1668793382.605899 world! heavy halt ended heavy done based on the documentation and related threads, I expect asyncio.sleep(0) to work exactly as asyncio.sleep(0.0001). what is off here?
[ "Call asyncio.sleep(0) 3 times:\nimport asyncio\nimport time\n\n\nasync def light_job():\n print(\"hello \")\n print(time.time())\n await asyncio.sleep(1)\n print(time.time())\n print(\"world!\")\n\n\nasync def heavy_job():\n print(\"heavy start\")\n time.sleep(3)\n print(\"heavy halt started\")\n for _ in range(3):\n await asyncio.sleep(0)\n print(\"heavy halt ended\")\n time.sleep(3)\n print(\"heavy done\")\n\n\nasync def test():\n await asyncio.gather(\n light_job(),\n heavy_job()\n )\n\nasyncio.run(test())\n\nThis results in:\nhello \n1668844526.157173\nheavy start\nheavy halt started\n1668844529.1575627\nworld!\nheavy halt ended\nheavy done\n\nLooking at \"asyncio/base_events.py\", \"_run_once\" goes over pending timers first then runs everything it sees after calculating that. asyncio.sleep can only skip one iteration of the event loop. Multiple sleeps are required because asyncio.sleep(1) schedules a future which takes one extra iteration before giving back control to light_job by adding light_job back to the queue, and asyncio happens to run newly queued jobs last.\nFor a clearer picture, it is possible to add more print statements:\nimport asyncio\nimport time\n\n\nasync def light_job():\n print(\"hello \")\n print(time.time())\n await asyncio.sleep(1)\n print(time.time())\n print(\"world!\")\n\n\nasync def heavy_job():\n print(\"heavy start\")\n time.sleep(3)\n print(\"heavy halt started\")\n # Sleep to yield to the event loop. light_job isn't detected as ready so this iteration of the loop will finish\n await asyncio.sleep(0)\n\n print(\"after 1 sleep\")\n # We are still in front of the event loop. Yield so that the 1 second timer in light_job runs.\n # The timer will realize it itself has expired, then put light_job back onto the queue.\n await asyncio.sleep(0)\n\n # Again the current Python implementation puts us in front. 
Yield so that the light_job runs\n print(\"after 2 sleeps\")\n await asyncio.sleep(0)\n\n print(\"heavy halt ended\")\n time.sleep(3)\n print(\"heavy done\")\n\n\nasync def test():\n await asyncio.gather(\n light_job(),\n heavy_job()\n )\n\nasyncio.run(test())\n\nThen add breakpoints in \"def _run_once(self):\" of \"asyncio/base_events.py\". Add a breakpoint printing \"loop start\" on line 1842 at the start a.k.a \"sched_count =\". Add another one at line 1910 at the end a.k.a \"handle = None\" printing \"loop end\". Then add one before each task is run on line 1897 a.k.a \"if self._debug:\" evaluating and printing \"_format_handle(handle)\". The sequence of events is revealed:\nloop start\n<Task pending name='Task-1' coro=<test() running at /home/home/PycharmProjects/sandbox/notsync.py:34> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]>\nloop end\nloop start\n<Task pending name='Task-2' coro=<light_job() running at /home/home/PycharmProjects/sandbox/notsync.py:5> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]>\nhello \n1668844827.5052986\n<Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:13> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]>\nheavy start\nheavy halt started\nloop end\nloop start\n<Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:18> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]>\nafter 1 sleep\n<TimerHandle when=37442.097934711 _set_result_unless_cancelled(<Future pendi...ask_wakeup()]>, None) at /usr/lib/python3.11/asyncio/futures.py:317>\nloop end\nloop start\n<Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:23> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]>\nafter 2 sleeps\n<Task pending name='Task-2' coro=<light_job() 
running at /home/home/PycharmProjects/sandbox/notsync.py:8> wait_for=<Future finished result=None> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]>\n1668844830.9250844\nworld!\nloop end\nloop start\n<Task pending name='Task-3' coro=<heavy_job() running at /home/home/PycharmProjects/sandbox/notsync.py:27> cb=[gather.<locals>._done_callback() at /usr/lib/python3.11/asyncio/tasks.py:759]>\nheavy halt ended\nheavy done\n<Handle gather.<locals>._done_callback(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/tasks.py:759>\nloop end\nloop start\n<Handle gather.<locals>._done_callback(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/tasks.py:759>\nloop end\nloop start\n<Task pending name='Task-1' coro=<test() running at /home/home/PycharmProjects/sandbox/notsync.py:35> wait_for=<_GatheringFuture finished result=[None, None]> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]>\nloop end\nloop start\n<Handle _run_until_complete_cb(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/base_events.py:180>\nloop end\nloop start\n<Task pending name='Task-4' coro=<BaseEventLoop.shutdown_asyncgens() running at /usr/lib/python3.11/asyncio/base_events.py:539> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]>\nloop end\nloop start\n<Handle _run_until_complete_cb(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/base_events.py:180>\nloop end\nloop start\n<Task pending name='Task-5' coro=<BaseEventLoop.shutdown_default_executor() running at /usr/lib/python3.11/asyncio/base_events.py:564> cb=[_run_until_complete_cb() at /usr/lib/python3.11/asyncio/base_events.py:180]>\nloop end\nloop start\n<Handle _run_until_complete_cb(<Task finishe...> result=None>) at /usr/lib/python3.11/asyncio/base_events.py:180>\nloop end\n\n", "I think this subject needs some more discussion. 
I intend this post as an appendix to Daniel T's excellent and very clever answer - that's a fine piece of work. But Dan Getz's comment made me think that some more detail would be helpful.\nDan suggests that there is no general way to yield to another task. This is correct because there is no guarantee that any other Task is ready to run, nor is there any guarantee of the execution order of the various Tasks. The example program fails to meet expectations because of details in the event loop implementation, which I discuss below.\nThere are, however, tools for unambiguously synchronizing work between different Tasks. It's probably a bad idea to rely on time intervals in asyncio.sleep() for this purpose. Consider the following program, which uses an asyncio.Event to force light_job() to finish before heavy_job() can enter its second time.sleep delay. This will always work because the program logic is explicit:\nimport asyncio\nimport time\n\nevent = asyncio.Event()\n\nasync def light_job():\n print(\"hello \")\n print(time.time())\n await asyncio.sleep(1)\n print(time.time())\n print(\"world!\")\n event.set()\n\n\nasync def heavy_job():\n print(\"heavy start\")\n time.sleep(3)\n print(\"heavy halt started\")\n # await asyncio.sleep(0)\n await event.wait()\n print(\"heavy halt ended\")\n time.sleep(3)\n print(\"heavy done\")\n \nasync def main():\n await asyncio.gather(light_job(), heavy_job())\n\nasyncio.run(main())\n\nEven simpler is this approach, which avoids the use of Event and even of gather:\nimport asyncio\nimport time\n\nasync def light_job():\n print(\"hello \")\n print(time.time())\n await asyncio.sleep(1)\n print(time.time())\n print(\"world!\")\n\nasync def heavy_job():\n light = asyncio.create_task(light_job())\n print(\"heavy start\")\n time.sleep(3)\n print(\"heavy halt started\")\n # await asyncio.sleep(0)\n await light\n print(\"heavy halt ended\")\n time.sleep(3)\n print(\"heavy done\")\n \nasync def main():\n await 
heavy_job()\n\nasyncio.run(main())\n\nAs for why the original script failed, the explanation can be found in the event loop implementation. An event loop keeps track of two things: a list of \"ready\" items, representing Tasks that are able to execute right now; and a list of \"scheduled\" items, representing Tasks that are waiting for some time interval to expire.\nEvery time the event loop goes through a cycle, its first step is to examine the list of scheduled items and see if any are ready to proceed. It appends any of those items to the \"ready\" list. Then it executes this simple loop to run all the ready Tasks (I have omitted some diagnostic code; this is from Python3.10 standard library module base_events.py). Here, _ready is a deque. The items in the queue all have a run method that causes the Task to take one step forward, or in other words, to resume the Task at the point where it previously was suspended (typically an await expression).\n ntodo = len(self._ready)\n for i in range(ntodo):\n handle = self._ready.popleft()\n if handle._cancelled:\n continue\n else:\n handle._run()\n\nIt's also the case that await asyncio.sleep(0) is implemented differently from await asyncio.sleep(x) where x > 0. In the first case, the await expression yields a value of None. The Task object simply appends an item to the \"ready\" list. In the second case, the await expression executes a loop.call_later function call, which creates a Future. The Task object appends an item to the \"scheduled\" list. Here is the implementation of asyncio.sleep in tasks.py:\n@types.coroutine\ndef __sleep0():\n \"\"\"Skip one event loop run cycle.\n\n This is a private helper for 'asyncio.sleep()', used\n when the 'delay' is set to 0. 
It uses a bare 'yield'\n expression (which Task.__step knows how to handle)\n instead of creating a Future object.\n \"\"\"\n yield\n\n\nasync def sleep(delay, result=None):\n \"\"\"Coroutine that completes after a given time (in seconds).\"\"\"\n if delay <= 0:\n await __sleep0()\n return result\n\n loop = events.get_running_loop()\n future = loop.create_future()\n h = loop.call_later(delay,\n futures._set_result_unless_cancelled,\n future, result)\n try:\n return await future\n finally:\n h.cancel()\n\nSo in the example script in the original post, the Task test will start with two items in its \"ready\" list: [light_job, heavy_job]. The scheduled list is empty. Light_job starts and hits await asyncio.sleep(1), so an item is appended to the \"scheduled\" list that represents this time delay. Now heavy_job runs for three seconds and hits await asyncio.sleep(0), so an item is appended to the \"ready\" list which indicates that this Task is to proceed without delay. That's the end of one full cycle of the event loop. The cycle ends even though the ready list isn't empty at that point, because the await with a zero delay caused heavy_job to be appended to the ready list immediately.\nIn the next cycle of the event loop, the ready list has one item, which was placed there on the previous cycle: [heavy_job]. The scheduled list also has one item: [light_job]. The event loop examines the scheduled list and sees that light_job is now ready, so it appends light_job to ready_list, which now looks like this: [heavy_job, light_job]. So the code logic has essentially caused the order of the Tasks to get switched. Result: heavy_job runs twice in a row, once at the end of the first cycle and once at the beginning of the second.\nThis also explains what happened when you replaced await asyncio.sleep(0) with await asyncio.sleep(0.0001). In that case, the Task got appended to the scheduled list rather than the ready list. Then ready=[] and scheduled=[light_job, heavy_job]. 
On the next cycle of the loop both Tasks are ready, but the order will once again be [light_job, heavy_job].\nThis machinery is invisible to client code, as it should be, but it has a weird consequence in this particular script. Whether or not this should be called a \"bug\" is a matter of debate. I assume there are good performance reasons why asyncio.sleep(0) is implemented differently from asyncio.sleep(nonzero).\n" ]
[ 4, 3 ]
[]
[]
[ "python", "python_asyncio" ]
stackoverflow_0074493571_python_python_asyncio.txt
Q: If always true when checking strings I'm developing a chatbot project for college, and in the following code block, the first if always evaluates as true, no matter what. I really need help and don't know what to do, cause this project is due on monday. def registeredClient(): print('Olá, bem-vindo a WE-RJ Telecom!') userInputString = str(input('O que você precisa?\nCaso queira contratar ou trocar de plano escreva “Quero contratar” ou “Quero trocar de plano”.\nCaso esteja com problemas de conexão, escreva "suporte".\nCaso queira seu boleto, digite "boleto":\n')) userInputString = userInputString.lower() if 'contratar' or 'trocar plano' or 'aumentar velocidade' or 'mudar plano' or 'velocidade' or 'plano' in userInputString: newPlanOption() elif 'suporte' or 'lenta' or 'internet lenta' or 'internet esta lenta' or 'problema' or 'velocidade' in userInputString: supportOption() elif 'boleto' or 'segunda via' or '2ª via' or 'fatura' in userInputString: billingOption() else: print('Não foi posível entender a sua mensagem, seu atendimento será encerrado.') return False A: I updated the conditions. In your case your conditions were checking if the strings themselves were truthy, which is why your first case would result in true.
def registeredClient(): print('Olá, bem-vindo a WE-RJ Telecom!') userInputString = str(input('O que você precisa?\nCaso queira contratar ou trocar de plano escreva “Quero contratar” ou “Quero trocar de plano”.\nCaso esteja com problemas de conexão, escreva "suporte".\nCaso queira seu boleto, digite "boleto":\n')) userInputString = userInputString.lower() if any(x in userInputString for x in ['contratar', 'trocar plano' , 'aumentar velocidade' , 'mudar plano' , 'velocidade' , 'plano']): print("Case A") elif any(x in userInputString for x in ['suporte', 'lenta' , 'internet lenta' , 'internet esta lenta' , 'problema' , 'velocidade']): print("Case B") elif any(x in userInputString for x in ['boleto' , 'segunda via' , '2ª via' , 'fatura']): print("Case C") else: print('Não foi posível entender a sua mensagem, seu atendimento será encerrado.') return False registeredClient(); A: The first if block is understood by python as the following if block : (if 'contratar') or ('trocar plano') or ('aumentar velocidade') or ('mudar plano') or ('velocidade') or ('plano' in userInputString): which is always True as the strings are not vacant and thus truthy type. What you need is this as the first if block : if any(i in userInputString for i in ['contratar', 'trocar plano', 'aumentar velocidade', 'mudar plano', 'velocidade', 'plano']): Similarly you need to change your elif statements too. 
Try this : def registeredClient(): print('Olá, bem-vindo a WE-RJ Telecom!') userInputString = str(input('O que você precisa?\nCaso queira contratar ou trocar de plano escreva “Quero contratar” ou “Quero trocar de plano”.\nCaso esteja com problemas de conexão, escreva "suporte".\nCaso queira seu boleto, digite "boleto":\n')) userInputString = userInputString.lower() checkString = lambda l: any(i in userInputString for i in l) if checkString(['contratar', 'trocar plano', 'aumentar velocidade', 'mudar plano', 'velocidade', 'plano']): newPlanOption() elif checkString(['suporte', 'lenta', 'internet lenta', 'internet esta lenta', 'problema', 'velocidade']): supportOption() elif checkString(['boleto', 'segunda via', '2ª via', 'fatura']): billingOption() else: print('Não foi posível entender a sua mensagem, seu atendimento será encerrado.') return False
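To see the difference in isolation, here is a minimal sketch (the keywords and message are made up for illustration) contrasting the always-true chained or with the any() membership test:

```python
keywords = ['contratar', 'trocar plano', 'plano']
msg = 'quero suporte'  # contains none of the keywords

# Broken form: Python reads this as ('contratar') or (...), and a
# non-empty string literal is truthy, so the result is always True.
broken = bool('contratar' or 'trocar plano' or 'plano' in msg)

# Correct form: test each keyword for membership individually.
correct = any(k in msg for k in keywords)

print(broken, correct)  # True False
```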
If always true when checking strings
I'm developing a chatbot project for college, and in the following code block, the first if is always going as a true value, no matter what. I really need help and don't know what to do, cause this project is due on monday. def registeredClient(): print('Olá, bem-vindo a WE-RJ Telecom!') userInputString = str(input('O que você precisa?\nCaso queira contratar ou trocar de plano escreva “Quero contratar” ou “Quero trocar de plano”.\nCaso esteja com problemas de conexão, escreva "suporte".\nCaso queira seu boleto, digite "boleto":\n')) userInputString = userInputString.lower() if 'contratar' or 'trocar plano' or 'aumentar velocidade' or 'mudar plano' or 'velocidade' or 'plano' in userInputString: newPlanOption() elif 'suporte' or 'lenta' or 'internet lenta' or 'internet esta lenta' or 'problema' or 'velocidade' in userInputString: supportOption() elif 'boleto' or 'segunda via' or '2ª via' or 'fatura' in userInputString: billingOption() else: print('Não foi posível entender a sua mensagem, seu atendimento será encerrado.') return False
[ "I updated the conditions. In your case your conditions were checking if the strings themselves were truthly which is why your first case would result in true.\n\n\ndef registeredClient():\n print('Olá, bem-vindo a WE-RJ Telecom!')\n\n userInputString = str(input('O que você precisa?\\nCaso queira contratar ou trocar de plano escreva “Quero contratar” ou “Quero trocar de plano”.\\nCaso esteja com problemas de conexão, escreva \"suporte\".\\nCaso queira seu boleto, digite \"boleto\":\\n'))\n\n userInputString = userInputString.lower()\n\n\n if any(x in userInputString for x in ['contratar', 'trocar plano' , 'aumentar velocidade' , 'mudar plano' , 'velocidade' , 'plano']):\n print(\"Case A\")\n elif any(x in userInputString for x in ['suporte', 'lenta' , 'internet lenta' , 'internet esta lenta' , 'problema' , 'velocidade']):\n print(\"Case B\")\n elif any(x in userInputString for x in ['boleto' , 'segunda via' , '2ª via' , 'fatura']):\n print(\"Case C\")\n else:\n print('Não foi posível entender a sua mensagem, seu atendimento será encerrado.')\n return False\n \nregisteredClient();\n\n\n\n", "The first if block is understood by python as the following if block :\n(if 'contratar') or ('trocar plano') or ('aumentar velocidade') or ('mudar plano') or ('velocidade') or ('plano' in userInputString): \n\nwhich is always True as the strings are not vacant and thus truthy type.\nWhat you need is this as the first if block :\nif any(i in userInputString for i in ['contratar', 'trocar plano', 'aumentar velocidade', 'mudar plano', 'velocidade', 'plano']):\n\nSimilarly you need to change your elif statements too.\nTry this :\ndef registeredClient():\n print('Olá, bem-vindo a WE-RJ Telecom!')\n\n userInputString = str(input('O que você precisa?\\nCaso queira contratar ou trocar de plano escreva “Quero contratar” ou “Quero trocar de plano”.\\nCaso esteja com problemas de conexão, escreva \"suporte\".\\nCaso queira seu boleto, digite \"boleto\":\\n'))\n\n userInputString = 
userInputString.lower()\n checkString = lambda l: any(i in userInputString for i in l)\n\n if checkString(['contratar', 'trocar plano', 'aumentar velocidade', 'mudar plano', 'velocidade', 'plano']):\n newPlanOption()\n elif checkString(['suporte', 'lenta', 'internet lenta', 'internet esta lenta', 'problema', 'velocidade']):\n supportOption()\n elif checkString(['boleto', 'segunda via', '2ª via', 'fatura']):\n billingOption()\n else:\n print('Não foi posível entender a sua mensagem, seu atendimento será encerrado.')\n return False\n\n" ]
[ 1, 0 ]
[]
[]
[ "if_statement", "python", "python_3.x", "string" ]
stackoverflow_0074505753_if_statement_python_python_3.x_string.txt
Q: How can I iterate a list of data using 2D list in python? I want to create a variable containing a 2D (nested) list literal of 2 rows and 3 columns, with values like this: 3 14 67 13 24 19 the code I have now is something like this but it doesn't give me the outcome I want: for row in range(2): new_list = [] for col in range(3): new_list.append(a_list) print(new_list) A: You can use my code: a_list = [3, 14, 67, 13, 24, 19] new_list = [] new_list += [a_list[0:3]] new_list += [a_list[3:6]] A: Your problem is two-fold: you need to instantiate the correct number of lists to hold your elements; and you also need to pull elements from a_list in order. You need to accumulate the elements of a_list into "rows" and you separately need to accumulate those rows into your outer list so that you end up with the structure: [[3, 14, 67], [13, 24, 19]] First, initialize an outer_list outside the loops. Then on each iteration of the outer loop, initialize an empty row list. Append the items from a_list to row in the inner loop. Then append row to the outer list at the end of the inner loop: a_list = [3, 14, 67, 13, 24, 19] num_rows = 2 num_cols = 3 # Make sure that num_rows * num_cols is equal to the length of the original list! outer_list = [] for i in range(num_rows): row = [] for j in range(num_cols): # row.append() <- grab the appropriate element from a_list outer_list.append(row) I've left the hard part up to you, which is to figure out how to get an index using i and j that can access each element in a_list in order. An easier approach would be to turn a_list into an iterator and repeatedly call next() on it, which negates the need for any tricky indexing. However, you should attempt to figure out how to index into a_list first before moving on so that you can get a decent understanding of how to work with lists.
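For completeness, here is a short sketch of the iterator approach mentioned in the second answer: next() pulls the elements of a_list in order, so no index arithmetic with i and j is needed:

```python
a_list = [3, 14, 67, 13, 24, 19]
num_rows, num_cols = 2, 3

it = iter(a_list)  # next(it) yields 3, 14, 67, ... in order
outer_list = []
for _ in range(num_rows):
    row = []
    for _ in range(num_cols):
        row.append(next(it))  # take the next element, no indexing required
    outer_list.append(row)

print(outer_list)  # [[3, 14, 67], [13, 24, 19]]
```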
How can I iterate a list of data using 2D list in python?
I want to create a variable called containing a 2D (nested) list of 2 rows and 3 columns literal containing the values like this: 3 14 67 13 24 19 the code I have now is sth like this but the outcome doesn't give me the outcome I want: for row in range(2): new_list = [] for col in range(3): new_list.append(a_list) print(new_list)
[ "You can use my code:\na_list = [3, 14, 67, 13, 24, 19] \nnew_list = []\nnew_list += [a_list[0:3]]\nnew_list += [a_list[3:6]]\n\n", "Your problem is two-fold, you need to instantiate the correct number of lists to hold your elements; and you also need to pull elements from a_list in order.\nYou need to accumulate the elements of a_list into \"rows\" and you separately need to accumulate those rows into your outer list so that you end up with the structure:\n[[3, 14, 67], [13, 24, 19]]\n\nFirst, initialize an outer_list outside the loops. Then on each iteration of the outer loop, initialize an empty row list. Append the items from a_list to row in the inner loop. Then append row to the outer list at the end of the inner loop:\na_list = [3, 14, 67, 13, 24, 19]\n\nnum_rows = 2\nnum_cols = 3\n\n# Make sure that num_rows * num_cols is equal to the length of the original list!\n\nouter_list = []\nfor i in range(num_rows):\n row = []\n for j in range(num_cols):\n # row.append() <- grab the appropriate element from a_list\n outer_list.append(row)\n\nI've left the hard part up to you, which is to figure out how to get an index using i and j that can access each element in a_list in order.\nAn easier approach would be to turn a_list into an iterator and repeatedly call next() on it, which negates the need for any tricky indexing. However, you should attempt to figure out how to index into a_list first before moving on so that you can get a decent understanding of how to work with lists.\n" ]
[ 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0074505648_python.txt
Q: Unable to create jira Bug using python I am using below code to create ticket in jira. I am able to create only TASK. When I create Bug or Story I am getting the error below. issue_dict = { 'project': {'key': 'TEST'}, 'summary': 'New issue from jira-python', 'description': 'Look into this one', 'issuetype': {'name': 'Bug'} } new_issue = jira.create_issue(issue_dict) print(new_issue) errors : jira.exceptions.JIRAError: JiraError HTTP 400 url: https://soubhagyapradhan.atlassian.net/rest/api/2/issue response headers = {'Date': 'Sun, 20 Nov 2022 04:23:17 GMT', 'Content-Type': 'application/json;charset=UTF-8', 'Server': 'AtlassianEdge', 'Timing-Allow-Origin': '*', 'X-Arequestid': '023a3f63bfd3ed36e1b1f23637fa115d', 'X-Aaccountid': '5c2cfc199760f569b62799f9', 'Cache-Control': 'no-cache, no-store, no-transform', 'Expect-Ct': 'report-uri="https://web-security-reports.services.atlassian.com/expect-ct-report/atlassian-proxy", max-age=86400', 'Strict-Transport-Security': 'max-age=63072000; preload', 'X-Content-Type-Options': 'nosniff', 'X-Xss-Protection': '1; mode=block', 'Atl-Traceid': '8fa4c1f91d6f9fe1', 'Report-To': '{"endpoints": [{"url": "https://dz8aopenkvv6s.cloudfront.net"}], "group": "endpoint-1", "include_subdomains": true, "max_age": 600}', 'Nel': '{"failure_fraction": 0.001, "include_subdomains": true, "max_age": 600, "report_to": "endpoint-1"}', 'Transfer-Encoding': 'chunked'} response text = {"errorMessages":[],"errors":{"issuetype":"Specify an issue type"}} Please take a look. How can I solve this error?
Note that in Jira the Issue Types can be customised, so the issue type that you're trying to create might not always exist / have the same name. You can try getting all issue types for the user to see if that's the case.
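As a runnable sanity check of the advice above: the jira library exposes jira.issue_types() for listing the types a site actually has, and create_issue also accepts an id instead of a name. The resolve_issue_type helper below is hypothetical (not part of the library), and the (id, name) pairs are made-up sample data:

```python
# Hypothetical helper (not part of the jira library): resolve an issue-type
# name case-insensitively to its id, given (id, name) pairs such as those
# you could build from jira.issue_types().
def resolve_issue_type(issue_types, wanted):
    for type_id, name in issue_types:
        if name.lower() == wanted.lower():
            return type_id
    available = ", ".join(name for _, name in issue_types)
    raise ValueError(f"No issue type named {wanted!r}; available: {available}")

# Made-up sample data; on a real site you could build it with
#   pairs = [(t.id, t.name) for t in jira.issue_types()]
pairs = [("10001", "Task"), ("10004", "Bug"), ("10002", "Story")]

issue_dict = {
    "project": {"key": "TEST"},
    "summary": "New issue from jira-python",
    "description": "Look into this one",
    "issuetype": {"id": resolve_issue_type(pairs, "Bug")},  # id, not name
}
print(issue_dict["issuetype"])  # {'id': '10004'}
```

Passing {'id': ...} instead of {'name': ...} sidesteps renamed or localized issue-type names entirely.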
Unable to create jira Bug using python
I am using below code to create ticket in jira. I am able to create only TASK. When i create Bug or Story I am getting below error . issue_dict = { 'project': {'key': 'TEST'}, 'summary': 'New issue from jira-python', 'description': 'Look into this one', 'issuetype': {'name': 'Bug'} } new_issue = jira.create_issue(issue_dict) print(new_issue) errors : jira.exceptions.JIRAError: JiraError HTTP 400 url: https://soubhagyapradhan.atlassian.net/rest/api/2/issue response headers = {'Date': 'Sun, 20 Nov 2022 04:23:17 GMT', 'Content-Type': 'application/json;charset=UTF-8', 'Server': 'AtlassianEdge', 'Timing-Allow-Origin': '*', 'X-Arequestid': '023a3f63bfd3ed36e1b1f23637fa115d', 'X-Aaccountid': '5c2cfc199760f569b62799f9', 'Cache-Control': 'no-cache, no-store, no-transform', 'Expect-Ct': 'report-uri="https://web-security-reports.services.atlassian.com/expect-ct-report/atlassian-proxy", max-age=86400', 'Strict-Transport-Security': 'max-age=63072000; preload', 'X-Content-Type-Options': 'nosniff', 'X-Xss-Protection': '1; mode=block', 'Atl-Traceid': '8fa4c1f91d6f9fe1', 'Report-To': '{"endpoints": [{"url": "https://dz8aopenkvv6s.cloudfront.net"}], "group": "endpoint-1", "include_subdomains": true, "max_age": 600}', 'Nel': '{"failure_fraction": 0.001, "include_subdomains": true, "max_age": 600, "report_to": "endpoint-1"}', 'Transfer-Encoding': 'chunked'} response text = {"errorMessages":[],"errors":{"issuetype":"Specify an issue type"}} Please take a look how can i solve this error
[ "If you look at the error, you can see it says:\n\n\"errors\":{\"issuetype\":\"Specify an issue type\"}\n\nSo clearly something must be wrong with how you've set issuetype.\n\nHave you tried looking at the API docs? It seems you could try:\n\nSpecifying the Issue Type via the issuetypeNames parameter rather than just issuetype; or\nThat you should specify an Issue Type ID rather than an issue type name.\n\nNote that in Jira the Issue Types can be customised, so the issue type that you're trying to create might not always exist / have the same name. You can try getting all issue types for the user to see if that's the case.\n" ]
[ 0 ]
[]
[]
[ "jira", "python" ]
stackoverflow_0074505631_jira_python.txt
Q: How to combine two code points to get one? I know that unicode code point for Á is U+00C1. I read on internet and many forums and articles that I can also make an Á by combining characters ´ (unicode: U+00B4) and A (unicode: U+0041). My question is simple. How to do it? I tried something like this. I decided to try it in golang, but it's perfectly fine if someone knows how to do it in python (or some other programming language). It doesn't matter to me. Okay, so I tried next. A in binary is: 01000001 ´ in binary is: 10110100 It together takes 15 bits, so I need UTF-8 3 bytes format (1110xxxx 10xxxxxx 10xxxxxx) By filling the bits from A and ´ (first A) in the places of x, the following is obtained: 11100100 10000110 10110100. Then I converted the resulting three bytes back into hexadecimal values: E4 86 B4. However, when I tried to write it in code, I got a completely different character. In other words, my solution is not working as I expected. package main import ( "fmt" ) func main() { r := "\xE4\x86\xB4" fmt.Println(r) // It wrote 䆴 instead of Á } A: It looks like the ´ (U+00B4) character you provided is not actually a combining character as Unicode defines it. >>> "A\u00b4" 'A´' If we use ◌́ (U+0301) instead, then we can just place it in sequence with a character like A and get the expected output: >>> "A\u0301" 'Á' Unless I'm misunderstanding what you mean, it doesn't look like any binary manipulation or trickery is necessary here. A: As StardustGogeta explains in their answer, the correct combining unicode character for an "acute" accent is U+0301 (Combining Acute Accent). But in Go, a string consisting of the single U+00C1 (Latin Capital Letter A with Acute) character is not equal to a string consisting of a U+0041 (Latin Capital Letter A) followed by a U+0301 (Combining Acute Accent) If you want to compare strings, you need to normalise both to the same normalisation form. See blog post Text normalization in Go for more details. 
The following code snippet shows how to do that: package main import ( "fmt" "golang.org/x/text/unicode/norm" ) func main() { combined := "\u00c1" combining := "A\u0301" fmt.Printf("combined = %s, combining = %s\n", combined, combining) fmt.Printf("combined == combining: %t\n", combined == combining) combiningNormalised := string(norm.NFC.Bytes([]byte(combining))) fmt.Printf("combined == combiningNormalised: %t\n", combined == combiningNormalised) } Output: combined = Á, combining = Á combined == combining: false combined == combiningNormalised: true
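The same normalization round-trip is available in Python's standard library via unicodedata, which makes the NFC/NFD relationship discussed above easy to check:

```python
import unicodedata

combined = "\u00c1"    # Á as a single code point (U+00C1)
combining = "A\u0301"  # A followed by COMBINING ACUTE ACCENT (U+0301)

# The two render identically but are different strings...
print(combined == combining)                                # False
# ...until you normalize: NFC composes, NFD decomposes.
print(unicodedata.normalize("NFC", combining) == combined)  # True
print(unicodedata.normalize("NFD", combined) == combining)  # True
```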
How to combine two code points to get one?
I know that unicode code point for Á is U+00C1. I read on internet and many forums and articles that I can also make an Á by combining characters ´ (unicode: U+00B4) and A (unicode: U+0041). My question is simple. How to do it? I tried something like this. I decided to try it in golang, but it's perfectly fine if someone knows how to do it in python (or some other programming language). It doesn't matter to me. Okay, so I tried next. A in binary is: 01000001 ´ in binary is: 10110100 It together takes 15 bits, so I need UTF-8 3 bytes format (1110xxxx 10xxxxxx 10xxxxxx) By filling the bits from A and ´ (first A) in the places of x, the following is obtained: 11100100 10000110 10110100. Then I converted the resulting three bytes back into hexadecimal values: E4 86 B4. However, when I tried to write it in code, I got a completely different character. In other words, my solution is not working as I expected. package main import ( "fmt" ) func main() { r := "\xE4\x86\xB4" fmt.Println(r) // It wrote 䆴 instead of Á }
[ "It looks like the ´ (U+00B4) character you provided is not actually a combining character as Unicode defines it.\n>>> \"A\\u00b4\"\n'A´'\n\nIf we use ◌́ (U+0301) instead, then we can just place it in sequence with a character like A and get the expected output:\n>>> \"A\\u0301\"\n'Á'\n\nUnless I'm misunderstanding what you mean, it doesn't look like any binary manipulation or trickery is necessary here.\n", "As StardustGogeta explains in their answer, the correct combining unicode character for an \"acute\" accent is U+0301 (Combining Acute Accent).\nBut in Go, a string consisting of the single U+00C1 (Latin Capital Letter A with Acute) character is not equal to a string consisting of a U+0041 (Latin Capital Letter A) followed by a U+0301 (Combining Acute Accent)\nIf you want to compare strings, you need to normalise both to the same normalisation form. See blog post Text normalization in Go for more details.\nThe following code snippet shows how to do that:\npackage main\n\nimport (\n \"fmt\"\n\n \"golang.org/x/text/unicode/norm\"\n)\n\nfunc main() {\n combined := \"\\u00c1\"\n combining := \"A\\u0301\"\n fmt.Printf(\"combined = %s, combining = %s\\n\", combined, combining)\n fmt.Printf(\"combined == combining: %t\\n\", combined == combining)\n combiningNormalised := string(norm.NFC.Bytes([]byte(combining)))\n fmt.Printf(\"combined == combiningNormalised: %t\\n\", combined == combiningNormalised)\n}\n\nOutput:\ncombined = Á, combining = Á\ncombined == combining: false\ncombined == combiningNormalised: true\n\n" ]
[ 2, 1 ]
[]
[]
[ "go", "python", "unicode", "utf", "utf_8" ]
stackoverflow_0074505405_go_python_unicode_utf_utf_8.txt
Q: Specific tensor decomposition I want to decompose a 3-dimensional tensor using SVD. I am not quite sure if and, how following decomposition can be achieved. I already know how I can split the tensor horizontally from this tutorial: tensors.org Figure 2.2b d = 10; A = np.random.rand(d,d,d) Am = A.reshape(d**2,d) Um,Sm,Vh = LA.svd(Am,full_matrices=False) U = Um.reshape(d,d,d); S = np.diag(Sm) A: Matrix methods can be naturally extended to higher-orders. SVD, for instance, can be generalized to tensors e.g. with the Tucker decomposition, sometimes called a higher-order SVD. We maintain a Python library for tensor methods, TensorLy, which lets you do this easily. In this case you want a partial Tucker as you want to leave one of the modes uncompressed. Let's import the necessary parts: import tensorly as tl from tensorly import random from tensorly.decomposition import partial_tucker For testing, let's create a 3rd order tensor of size (10, 10, 10): size = 10 order = 3 shape = (size, )*order tensor = random.random_tensor(shape) You can now decompose the tensor using the tensor decomposition. In your case, you want to leave one of the dimensions untouched, so you'll only have two factors (your U and V) and a core tensor (your S): core, factors = partial_tucker(tensor, rank=size, modes=[0, 2]) You can reconstruct the original tensor from your approximation using a series of n-mode products to contract the core with the factors: from tensorly import tenalg rec = tenalg.multi_mode_dot(core, factors, modes=[0, 2]) rec_error = tl.norm(rec - tensor)/tl.norm(tensor) print(f'Relative reconstruction error: {rec_error}') In my case, I get Relative reconstruction error: 9.66027176805661e-16 A: You can also use "tensorlearn" package in python for example using tensor-train (TT) SVD algorithm. 
https://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition import numpy as np import tensorlearn as tl #lets generate an arbitrary array tensor = np.arange(0,1000) #reshaping it into a higher (3) dimensional tensor tensor = np.reshape(tensor,(10,20,5)) epsilon=0.05 #decompose the tensor to its factors tt_factors=tl.auto_rank_tt(tensor, epsilon) #epsilon is the error bound #tt_factors is a list of three arrays which are the tt-cores #rebuild (estimating) the tensor using the factors again as tensor_hat tensor_hat=tl.tt_to_tensor(tt_factors) #lets see the error error_tensor=tensor-tensor_hat error=tl.tensor_frobenius_norm(error_tensor)/tl.tensor_frobenius_norm(tensor) print('error (%)= ',error*100) #which is less than epsilon # one usage of tensor decomposition is data compression # So, lets calculate the compression ratio data_compression_ratio=tl.tt_compression_ratio(tt_factors) #data saving data_saving=1-(1/data_compression_ratio) print('data_saving (%): ', data_saving*100)
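As a sanity check on the reshape-then-SVD idea from the question: keeping all singular values makes the factorization exact, so the original tensor can be rebuilt from the factors with plain NumPy, no extra packages:

```python
import numpy as np

d = 10
rng = np.random.default_rng(0)
A = rng.random((d, d, d))

# Unfold to (d*d, d), take the thin SVD, then fold back.
Am = A.reshape(d * d, d)
Um, Sm, Vh = np.linalg.svd(Am, full_matrices=False)

# With all d singular values kept, the reconstruction is exact
# up to floating-point tolerance:
A_rebuilt = (Um @ np.diag(Sm) @ Vh).reshape(d, d, d)
print(np.allclose(A, A_rebuilt))  # True
```

Truncating Sm to fewer singular values before multiplying back gives the low-rank approximation instead.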
Specific tensor decomposition
I want to decompose a 3-dimensional tensor using SVD. I am not quite sure if and, how following decomposition can be achieved. I already know how I can split the tensor horizontally from this tutorial: tensors.org Figure 2.2b d = 10; A = np.random.rand(d,d,d) Am = A.reshape(d**2,d) Um,Sm,Vh = LA.svd(Am,full_matrices=False) U = Um.reshape(d,d,d); S = np.diag(Sm)
[ "Matrix methods can be naturally extended to higher-orders. SVD, for instance, can be generalized to tensors e.g. with the Tucker decomposition, sometimes called a higher-order SVD.\nWe maintain a Python library for tensor methods, TensorLy, which lets you do this easily. In this case you want a partial Tucker as you want to leave one of the modes uncompressed.\nLet's import the necessary parts:\nimport tensorly as tl\nfrom tensorly import random\nfrom tensorly.decomposition import partial_tucker\n\nFor testing, let's create a 3rd order tensor of size (10, 10, 10):\nsize = 10\norder = 3\nshape = (size, )*order\ntensor = random.random_tensor(shape)\n\nYou can now decompose the tensor using the tensor decomposition. In your case, you want to leave one of the dimensions untouched, so you'll only have two factors (your U and V) and a core tensor (your S):\ncore, factors = partial_tucker(tensor, rank=size, modes=[0, 2])\n\nYou can reconstruct the original tensor from your approximation using a series of n-mode products to contract the core with the factors:\nfrom tensorly import tenalg\nrec = tenalg.multi_mode_dot(core, factors, modes=[0, 2])\nrec_error = tl.norm(rec - tensor)/tl.norm(tensor)\nprint(f'Relative reconstruction error: {rec_error}')\n\nIn my case, I get\nRelative reconstruction error: 9.66027176805661e-16\n\n", "You can also use \"tensorlearn\" package in python for example using tensor-train (TT) SVD algorithm.\nhttps://github.com/rmsolgi/TensorLearn/tree/main/Tensor-Train%20Decomposition\nimport numpy as np\nimport tensorlearn as tl\n\n#lets generate an arbitrary array \ntensor = np.arange(0,1000) \n\n#reshaping it into a higher (3) dimensional tensor\n\ntensor = np.reshape(tensor,(10,20,5)) \nepsilon=0.05 \n#decompose the tensor to its factors\ntt_factors=tl.auto_rank_tt(tensor, epsilon) #epsilon is the error bound\n\n#tt_factors is a list of three arrays which are the tt-cores\n\n#rebuild (estimating) the tensor using the factors again as 
tensor_hat\n\ntensor_hat=tl.tt_to_tensor(tt_factors)\n\n#lets see the error\n\nerror_tensor=tensor-tensor_hat\n\nerror=tl.tensor_frobenius_norm(error_tensor)/tl.tensor_frobenius_norm(tensor)\n\nprint('error (%)= ',error*100) #which is less than epsilon\n# one usage of tensor decomposition is data compression\n# So, lets calculate the compression ratio\ndata_compression_ratio=tl.tt_compression_ratio(tt_factors)\n\n#data saving\ndata_saving=1-(1/data_compression_ratio)\n\nprint('data_saving (%): ', data_saving*100)\n\n" ]
[ 2, 0 ]
[]
[]
[ "numpy", "python", "tensor" ]
stackoverflow_0066753122_numpy_python_tensor.txt
Q: Can't import UDF from python to Excel using xlwings I am using python to write a function and then using xlwings I am trying to import it into Excel but I faced the following error: My xlwings version is 0.28.5, and python's is 3.10, and I am using Excel 2013. also both xlwings32-0.28.5.dll and xlwings64-0.28.5 are in the same folder as the python3.10.exe the name of the python file and Excel file are the same as instructed by the documentation I have also tried specifying the python interpreter path in the interpreter box located in the xlwings ribbon in Excel but with no results. I have read the following issues but also with no result: issue766 issue764 my python function : finally I have obviously added the xlwings add-in in Excel, and I trust access to the VBA project object model NOTE: Run Python works just fine with no error. so, can anyone point to what I am doing wrong or missing? thanks in advance. A: I solved this by uninstalling the Python version I had and then reinstalling it using the Anaconda3 distribution. After that, run xlwings addin install from the Anaconda prompt and everything worked fine.
Can't import UDF from python to Excel using xlwings
I am using python to write a function and then using xlwings I am trying to import it into Excel but I faced the following error: My xlwings version is 0.28.5, and python's is 3.10, and I am using Excel 2013. also both xlwings32-0.28.5.dll and xlwings64-0.28.5 are in the same folder as the python3.10.exe the name of the python file and Excel file are the same as instructed by the documentation I have also tried specifying the python interpreter path in the interpreter box located in the xlwings ribbon in Excel but with no results. I have read the following issues but also with no result: issue766 issue764 my python function : finally I have obviously added the xlwings add-in in Excel, and I trust access to the VBA project object model NOTE: Run Python works just fine with no error. so, can anyone point to what I am doing wrong or missing? thanks in advance.
[ "I have solved this by uninstalling the python version that I have and the reinstalling it using anaconda3 distribution. after that from the anaconda prompt type xlwings addin install and every thing worked fine.\n" ]
[ 0 ]
[]
[]
[ "excel", "python", "xlwings" ]
stackoverflow_0074456094_excel_python_xlwings.txt
Q: Date conversion in Pyspark Dataframe I have a date in Pyspark dataframe in "String" format as "dd-MMM-yyyy ( eg "01-Jan-2022"). I want to convert this to date with the same format so the Output should be 01-Jan-2022 The code i am using for this is as below, but the format doesn't convert properly. It converts the date to "dd-MM-yyyy" format (ie 01-01-2022), whereas i want it in "dd-MMM-yyyy"(ie "01-Jan-2022") format. My code is here: df = df.withColumn("mydate",F.to_date(df.mydate,"dd-MMM-yyyy")) This results in date type converted to "date" from "string" but the format doesn't convert properly. A: The documentation of to_date links to the format definition here. Have you tried using dd-LLL-yyyy? A: Assume your original data has date as string: df = spark.createDataFrame(data=[["01-Jan-2022",],["31-Dec-2022",]], schema=["date_initial"]) df.show() +------------+ |date_initial| +------------+ | 01-Jan-2022| | 31-Dec-2022| +------------+ Parse this string date to Date type. The "yyyy-MM-dd" output here is just the toString representation of the Date type. You can find the implementation details in this source code link. Look for __repr__ method in class date. df = df.withColumn("date_dt", F.to_date("date_initial", format="dd-MMM-yyyy")) df.show() +------------+----------+ |date_initial| date_dt| +------------+----------+ | 01-Jan-2022|2022-01-01| | 31-Dec-2022|2022-12-31| +------------+----------+ You can format the Date type back to string to confirm: df = df.withColumn("date_str", F.date_format("date_dt", format="dd-MMM-yyyy")) df.show() +------------+----------+-----------+ |date_initial| date_dt| date_str| +------------+----------+-----------+ | 01-Jan-2022|2022-01-01|01-Jan-2022| | 31-Dec-2022|2022-12-31|31-Dec-2022| +------------+----------+-----------+
Date conversion in Pyspark Dataframe
I have a date in Pyspark dataframe in "String" format as "dd-MMM-yyyy ( eg "01-Jan-2022"). I want to convert this to date with the same format so the Output should be 01-Jan-2022 The code i am using for this is as below, but the format doesn't convert properly. It converts the date to "dd-MM-yyyy" format (ie 01-01-2022), whereas i want it in "dd-MMM-yyyy"(ie "01-Jan-2022") format. My code is here: df = df.withColumn("mydate",F.to_date(df.mydate,"dd-MMM-yyyy")) This results in date type converted to "date" from "string" but the format doesn't convert properly.
[ "The documentation of to_date links to the format definition here.\nHave you tried using dd-LLL-yyyy?\n", "Assume your original data has date as string:\ndf = spark.createDataFrame(data=[[\"01-Jan-2022\",],[\"31-Dec-2022\",]], schema=[\"date_initial\"])\ndf.show()\n\n+------------+\n|date_initial|\n+------------+\n| 01-Jan-2022|\n| 31-Dec-2022|\n+------------+\n\nParse this string date to Date type. The \"yyyy-MM-dd\" output here is just the toString representation of the Date type. You can find the implementation details in this source code link. Look for __repr__ method in class date.\ndf = df.withColumn(\"date_dt\", F.to_date(\"date_initial\", format=\"dd-MMM-yyyy\"))\ndf.show()\n\n+------------+----------+\n|date_initial| date_dt|\n+------------+----------+\n| 01-Jan-2022|2022-01-01|\n| 31-Dec-2022|2022-12-31|\n+------------+----------+\n\nYou can format the Date type back to string to confirm:\ndf = df.withColumn(\"date_str\", F.date_format(\"date_dt\", format=\"dd-MMM-yyyy\"))\ndf.show()\n\n+------------+----------+-----------+\n|date_initial| date_dt| date_str|\n+------------+----------+-----------+\n| 01-Jan-2022|2022-01-01|01-Jan-2022|\n| 31-Dec-2022|2022-12-31|31-Dec-2022|\n+------------+----------+-----------+\n\n" ]
[ 0, 0 ]
[]
[]
[ "azure_databricks", "pyspark", "python" ]
stackoverflow_0074504540_azure_databricks_pyspark_python.txt
Q: Creating 3-Way Data Tensor in Python and performing PARAFAC decomposition I'm new to Python and Data Science and replicating a research paper I found on Vehicle Maintenance. I'm trying to analyze vehicle maintenance data to find seasonal patterns in component maintenance over absolute time and also component maintenance patterns over the age of a vehicle. By component I mean a specific part. I want to create a 3-way data tensor with vehicle number on the vertical axis, component number on the horizontal axis and the depth representing time(absolute or vehicle age). Each element will represent the count of jobs performed on the component at a given vehicle number, component number and time. I will appreciate it if someone can point me in the right direction to understand how to create a 3D tensor with the described data. The resources I've found so far deal with numpy matrices only, but my data is alpha numeric with the time unit being month. Direction on available resources on PARAFAC decomposition in Python will also be greatly appreciated. Thanks! A: You can use TensorLy which implements tensor operations, decompositions and regressions, and in particular, allows you to apply PARAFAC easily. Also checkout the notebooks for an introduction to tensor methods with TensorLy. There is also a chapter on tensor decomposition that includes Parafac and demonstrates how to apply it in practice. A: You can also use tensorlearn package. Especially for PARAFAC see the link for a complete explanation and example.
Creating 3-Way Data Tensor in Python and performing PARAFAC decomposition
I'm new to Python and Data Science and replicating a research paper I found on Vehicle Maintenance. I'm trying to analyze vehicle maintenance data to find seasonal patterns in component maintenance over absolute time and also component maintenance patterns over the age of a vehicle. By component I mean a specific part. I want to create a 3-way data tensor with vehicle number on the vertical axis, component number on the horizontal axis and the depth representing time(absolute or vehicle age). Each element will represent the count of jobs performed on the component at a given vehicle number, component number and time. I will appreciate it if someone can point me in the right direction to understand how to create a 3D tensor with the described data. The resources I've found so far deal with numpy matrices only, but my data is alpha numeric with the time unit being month. Direction on available resources on PARAFAC decomposition in Python will also be greatly appreciated. Thanks!
[ "You can use TensorLy which implements tensor operations, decompositions and regressions, and in particular, allows you to apply PARAFAC easily.\nAlso checkout the notebooks for an introduction to tensor methods with TensorLy. There is also a chapter on tensor decomposition that includes Parafac and demonstrates how to apply it in practice.\n", "You can also use tensorlearn package. Especially for PARAFAC see the link for a complete explanation and example.\n" ]
[ 3, 0 ]
[]
[]
[ "data_science", "multidimensional_array", "python", "tensor" ]
stackoverflow_0048327766_data_science_multidimensional_array_python_tensor.txt
Q: Re-compose a Tensor after tensor factorization I am trying to decompose a 3D matrix using python library scikit-tensor. I managed to decompose my Tensor (with dimensions 100x50x5) into three matrices. My question is how can I compose the initial matrix again using the decomposed matrix produced with Tensor factorization? I want to check if the decomposition has any meaning. My code is the following: import logging from scipy.io.matlab import loadmat from sktensor import dtensor, cp_als import numpy as np //Set logging to DEBUG to see CP-ALS information logging.basicConfig(level=logging.DEBUG) T = np.ones((400, 50)) T = dtensor(T) P, fit, itr, exectimes = cp_als(T, 10, init='random') // how can I re-compose the Matrix T? TA = np.dot(P.U[0], P.U[1].T) I am using the canonical decomposition as provided from the scikit-tensor library function cp_als. Also what is the expected dimensionality of the decomposed matrices? A: The CP product of, for example, 4 matrices can be expressed using Einstein notation as or in numpy as numpy.einsum('az,bz,cz,dz -> abcd', A, B, C, D) so in your case you would use numpy.einsum('az,bz->ab', P.U[0], P.U[1]) or, in your 3-matrix case numpy.einsum('az,bz,cz->abc', P.U[0], P.U[1], P.U[2]) sktensor.ktensor.ktensor also have a method totensor() that does exactly this: np.allclose(np.einsum('az,bz->ab', P.U[0], P.U[1]), P.totensor()) >>> True A: See an explanation of CP here. You may also use tensorlearn package to rebuild the tensor.
Re-compose a Tensor after tensor factorization
I am trying to decompose a 3D matrix using python library scikit-tensor. I managed to decompose my Tensor (with dimensions 100x50x5) into three matrices. My question is how can I compose the initial matrix again using the decomposed matrix produced with Tensor factorization? I want to check if the decomposition has any meaning. My code is the following: import logging from scipy.io.matlab import loadmat from sktensor import dtensor, cp_als import numpy as np //Set logging to DEBUG to see CP-ALS information logging.basicConfig(level=logging.DEBUG) T = np.ones((400, 50)) T = dtensor(T) P, fit, itr, exectimes = cp_als(T, 10, init='random') // how can I re-compose the Matrix T? TA = np.dot(P.U[0], P.U[1].T) I am using the canonical decomposition as provided from the scikit-tensor library function cp_als. Also what is the expected dimensionality of the decomposed matrices?
[ "The CP product of, for example, 4 matrices\n\ncan be expressed using Einstein notation as\n\nor in numpy as\nnumpy.einsum('az,bz,cz,dz -> abcd', A, B, C, D)\n\nso in your case you would use\nnumpy.einsum('az,bz->ab', P.U[0], P.U[1])\n\nor, in your 3-matrix case\nnumpy.einsum('az,bz,cz->abc', P.U[0], P.U[1], P.U[2])\n\nsktensor.ktensor.ktensor also have a method totensor() that does exactly this:\nnp.allclose(np.einsum('az,bz->ab', P.U[0], P.U[1]), P.totensor())\n>>> True\n\n", "See an explanation of CP here. You may also use tensorlearn package to rebuild the tensor.\n" ]
[ 7, 0 ]
[]
[]
[ "data_science", "math", "python", "scikits" ]
stackoverflow_0039748285_data_science_math_python_scikits.txt
Q: Keep only functions in a Python script Assume I have a Python script or module bar.py like this one # bar.py some_variable = 1 print(some_variable) def some_function(): print('hello') I need to create a copy of the script that only keeps the functions and does not contain any module-level code. For example, I would need to automatically create a copy of the script bar_fun.py that is defined as # bar_fun.py def some_function(): print('hello') Any suggestion on how to do this? A: In this way you can find all callable objects in the bar: >>> import bar >>> list(filter(lambda item: callable(getattr(bar, item)), bar.__dir__())) ['f'] To copy them you can store all callable objects in a list, tuple or any other data structure that fits your need. >>> list(map(lambda attr: getattr(bar, attr), filter(lambda item: callable(getattr(bar, item)), bar.__dir__()))) [<function f at 0x7f82d9aa6950>] And if you need to store it into a file, you can use pickle: >>> callables = list(map(lambda attr: getattr(bar, attr), filter(lambda item: callable(getattr(bar, item)), bar.__dir__()))) >>> import pickle >>> file = open("functions", "wb") >>> pickle.dump(callables, file)
Keep only functions in a Python script
Assume I have a Python script or module bar.py like this one # bar.py some_variable = 1 print(some_variable) def some_function(): print('hello') I need to create a copy of the script that only keeps the functions and does not contain any module-level code. For example, I would need to automatically create a copy of the script bar_fun.py that is defined as # bar_fun.py def some_function(): print('hello') Any suggestion on how to do this?
[ "In this way you can find all callable objects in the bar:\n>>> import bar\n>>> list(filter(lambda item: callable(getattr(bar, item)), bar.__dir__()))\n['f']\n\nTo copy them you can store all callable objects in a list, tuple or any other data structure that fits your need.\n>>> list(map(lambda attr: getattr(bar, attr), filter(lambda item: callable(getattr(bar, item)), bar.__dir__())))\n[<function f at 0x7f82d9aa6950>]\n\nAnd if you need to store it into a file, you can use pickle:\n>>> callables = list(map(lambda attr: getattr(bar, attr), filter(lambda item: callable(getattr(bar, item)), bar.__dir__())))\n>>> import pickle\n>>> file = open(\"functions\", \"wb\")\n>>> pickle.dump(callables, file)\n\n" ]
[ 0 ]
[]
[]
[ "python", "python_3.x", "python_import" ]
stackoverflow_0074505931_python_python_3.x_python_import.txt
Q: Is there a way in Python to return a value via an output parameter? Some languages have the feature to return values using parameters also like C#. Let’s take a look at an example: class OutClass { static void OutMethod(out int age) { age = 26; } static void Main() { int value; OutMethod(out value); // value is now 26 } } So is there anything similar in Python to get a value using parameter, too? A: Python can return a tuple of multiple items: def func(): return 1,2,3 a,b,c = func() But you can also pass a mutable parameter, and return values via mutation of the object as well: def func(a): a.append(1) a.append(2) a.append(3) L=[] func(L) print(L) # [1,2,3] A: You mean like passing by reference? For Python object the default is to pass by reference. However, I don't think you can change the reference in Python (otherwise it won't affect the original object). For example: def addToList(theList): # yes, the caller's list can be appended theList.append(3) theList.append(4) def addToNewList(theList): # no, the caller's list cannot be reassigned theList = list() theList.append(5) theList.append(6) myList = list() myList.append(1) myList.append(2) addToList(myList) print(myList) # [1, 2, 3, 4] addToNewList(myList) print(myList) # [1, 2, 3, 4] A: Pass a list or something like that and put the return value in there. A: In addition, if you feel like reading some code, I think that pywin32 has a way to handle output parameters. In the Windows API it's common practice to rely heavily on output parameters, so I figure they must have dealt with it in some way. A: You can do that with mutable objects, but in most cases it does not make sense because you can return multiple values (or a dictionary if you want to change a function's return value without breaking existing calls to it). I can only think of one case where you might need it - that is threading, or more exactly, passing a value between threads. 
def outer(): class ReturnValue: val = None ret = ReturnValue() def t(): # ret = 5 won't work obviously because that will set # the local name "ret" in the "t" function. But you # can change the attributes of "ret": ret.val = 5 threading.Thread(target = t).start() # Later, you can get the return value out of "ret.val" in the outer function A: Adding to Tark-Tolonen's answer: Please absolutely avoid altering the object reference of the output argument in your function, otherwise the output argument won't work. For instance, I wish to pass an ndarray into a function my_fun and modify it def my_fun(out_arr) out_arr = np.ones_like(out_arr) print(out_arr) # prints 1, 1, 1, ...... print(id(out_arr)) a = np.zeros(100) my_fun(a) print(a) # prints 0, 0, 0, .... print(id(a)) After calling my_fun, array a stills remains all zeros since the function np.ones_like returns a reference to another array full of ones and assigns it to out_arr instead of modifying the object reference passed by out_arr directly. Running this code you will find that two print(id()) gives different memory locations. Also, beware of the array operators from numpy, they usually returns a reference to another array if you write something like this def my_fun(arr_a, arr_b, out_arr) out_arr = arr_a - arr_b Using the - and = operator might cause similar problems. To prevent having out_arr's memory location altered, you can use the numpy functions that does the exactly same operations but has a out parameter built in. The proceeding code should be rewritten as def my_fun(arr_a, arr_b, out_arr): np.subtract(arr_a, arr_b, out = out_arr) And the memory location of out_arr remains the same before and after calling my_fun while its values gets modified successfully.
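The out= convention mentioned in the last answer is easy to check: the caller's array is filled in place and its identity is preserved, which is about the closest NumPy gets to a C#-style out parameter:

```python
import numpy as np

a = np.array([5.0, 7.0, 9.0])
b = np.array([1.0, 2.0, 3.0])
out = np.empty(3)

before = id(out)
np.subtract(a, b, out=out)   # fills the caller's array in place
print(out)                   # [4. 5. 6.]
print(id(out) == before)     # True: same object, never rebound
```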
Is there a way in Python to return a value via an output parameter?
Some languages have the feature to return values using parameters also like C#. Let’s take a look at an example: class OutClass { static void OutMethod(out int age) { age = 26; } static void Main() { int value; OutMethod(out value); // value is now 26 } } So is there anything similar in Python to get a value using parameter, too?
[ "Python can return a tuple of multiple items:\ndef func():\n return 1,2,3\n\na,b,c = func()\n\nBut you can also pass a mutable parameter, and return values via mutation of the object as well:\ndef func(a):\n a.append(1)\n a.append(2)\n a.append(3)\n\nL=[]\nfunc(L)\nprint(L) # [1,2,3]\n\n", "You mean like passing by reference?\nFor Python object the default is to pass by reference. However, I don't think you can change the reference in Python (otherwise it won't affect the original object).\nFor example:\ndef addToList(theList): # yes, the caller's list can be appended\n theList.append(3)\n theList.append(4)\n\ndef addToNewList(theList): # no, the caller's list cannot be reassigned\n theList = list()\n theList.append(5)\n theList.append(6)\n\nmyList = list()\nmyList.append(1)\nmyList.append(2)\naddToList(myList)\nprint(myList) # [1, 2, 3, 4]\naddToNewList(myList)\nprint(myList) # [1, 2, 3, 4]\n\n", "Pass a list or something like that and put the return value in there.\n", "In addition, if you feel like reading some code, I think that pywin32 has a way to handle output parameters.\nIn the Windows API it's common practice to rely heavily on output parameters, so I figure they must have dealt with it in some way.\n", "You can do that with mutable objects, but in most cases it does not make sense because you can return multiple values (or a dictionary if you want to change a function's return value without breaking existing calls to it).\nI can only think of one case where you might need it - that is threading, or more exactly, passing a value between threads.\ndef outer():\n class ReturnValue:\n val = None\n ret = ReturnValue()\n def t():\n # ret = 5 won't work obviously because that will set\n # the local name \"ret\" in the \"t\" function. 
But you\n # can change the attributes of \"ret\":\n ret.val = 5\n\n threading.Thread(target = t).start()\n\n # Later, you can get the return value out of \"ret.val\" in the outer function\n\n", "Adding to Tark-Tolonen's answer:\nPlease absolutely avoid altering the object reference of the output argument in your function, otherwise the output argument won't work. For instance, I wish to pass an ndarray into a function my_fun and modify it\ndef my_fun(out_arr)\n out_arr = np.ones_like(out_arr)\n print(out_arr) # prints 1, 1, 1, ......\n print(id(out_arr))\n\na = np.zeros(100)\nmy_fun(a)\nprint(a) # prints 0, 0, 0, ....\nprint(id(a))\n\nAfter calling my_fun, array a stills remains all zeros since the function np.ones_like returns a reference to another array full of ones and assigns it to out_arr instead of modifying the object reference passed by out_arr directly. Running this code you will find that two print(id()) gives different memory locations.\nAlso, beware of the array operators from numpy, they usually returns a reference to another array if you write something like this\ndef my_fun(arr_a, arr_b, out_arr)\n out_arr = arr_a - arr_b\n\nUsing the - and = operator might cause similar problems. To prevent having out_arr's memory location altered, you can use the numpy functions that does the exactly same operations but has a out parameter built in. The proceeding code should be rewritten as\ndef my_fun(arr_a, arr_b, out_arr):\n np.subtract(arr_a, arr_b, out = out_arr)\n\nAnd the memory location of out_arr remains the same before and after calling my_fun while its values gets modified successfully.\n" ]
[ 83, 8, 4, 1, 1, 0 ]
[]
[]
[ "python" ]
stackoverflow_0004702249_python.txt
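The patterns in the answers above — returning a tuple, mutating a passed-in object, and smuggling a value out of a thread through a holder object — can be sketched runnably; the function and class names here are made up for illustration:

```python
import threading

# Idiomatic Python: return multiple values as a tuple.
def min_max(values):
    return min(values), max(values)

# C#-style "out" parameter emulated by mutating a passed-in object.
def fill_stats(values, out):
    out["count"] = len(values)
    out["total"] = sum(values)

# Passing a value out of a thread via a mutable holder object,
# as in the threading answer above.
class Holder:
    val = None

def compute_in_thread(holder):
    holder.val = 26

lo, hi = min_max([3, 1, 2])           # lo == 1, hi == 3
stats = {}
fill_stats([3, 1, 2], stats)          # stats is mutated in place
ret = Holder()
t = threading.Thread(target=compute_in_thread, args=(ret,))
t.start()
t.join()                              # ret.val == 26 after join
```

Note that the caller only sees the mutation because the same object is shared; rebinding the parameter inside the function (as the numpy answer warns) would not propagate.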
Q: How can I change an html based on which link the user clicks? I've created a database of recipes and on a recipes.html I display all the recipes and made the names links that I want to bring you to a new page that displays all of that recipe's information. How can I make another html page for the single recipe that will change depending on which recipe a user chooses? I don't want to hard code a lot of html pages, especially because a user could add a recipe to the database. I'm working with Flask. The table I have on recipes.html <table> <thead> <tr> <th>Recipe Title</th> <th>Total Time</th> <th>Difficulty</th> </tr> </thead> <tbody> {% for row in names %} <tr> <td><a href="single-recipe.html">{{row[0]}}</a></td> <td>{{row[1]}}</td> <td>{{row[2]}}</td> </tr> {% endfor %} </tbody> </table> A: You can use href with parameters and pass in an argument, such as single-recipe?id=apple_pie. Then in Flask you can get the id by doing @app.route('/single-recipe') def single_recipe(): id = request.args.get('id') and return the relevant page
How can I change an html based on which link the user clicks?
I've created a database of recipes and on a recipes.html I display all the recipes and made the names links that I want to bring you to a new page that displays all of that recipes information. How can I make another html page for the single recipe that will change depending on which recipe a user chooses? I don't want to hard code a lot of html pages especially because a user could add a recipe to the data base. I'm working with Flask. The table I have on recipes.html <table> <thead> <tr> <th>Recipe Title</th> <th>Total Time</th> <th>Difficulty</th> </tr> </thead> <tbody> {% for row in names %} <tr> <td><a href="single-recipe.html">{{row[0]}}</a></td> <td>{{row[1]}}</td> <td>{{row[2]}}</td> </tr> {% endfor %} </tbody> </table>
[ "You can use href with parameters and pass in an argument, such as single-recipe?id=apple_pie.\nThen in flask you can get the id by doing\n@app.route(...)\ndef single-recipe():\n id = request.args.get('id')\n\nAnd return the relevant page\n" ]
[ 1 ]
[]
[]
[ "flask", "python", "sql" ]
stackoverflow_0074505531_flask_python_sql.txt
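The idea in the answer — encode the recipe key in the link's query string and look it up server-side — can be sketched with only the standard library (in Flask, `request.args.get('id')` performs the query parsing shown here; the recipe data is made up):

```python
from urllib.parse import urlencode, urlparse, parse_qs

# Hypothetical recipe table, standing in for the database rows.
recipes = {"apple_pie": {"title": "Apple Pie", "total_time": "90 min"}}

# The link the template would emit: single-recipe?id=apple_pie
link = "single-recipe?" + urlencode({"id": "apple_pie"})

# What the server side does with the request URL's query string.
query = parse_qs(urlparse(link).query)
recipe_id = query["id"][0]
recipe = recipes.get(recipe_id)  # one template can now render any recipe
```

This is why a single `single-recipe` template suffices: the `id` parameter selects the row, so new recipes added to the database need no new HTML pages.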
Q: Assigning value in python list I tried to create a tic tac toe program with python list: theBoard=[' '' '' ']*3 def userInput(board): loop=True while loop: userInput=input("Please enter (row,column)") row=int(userInput[0]) column=int(userInput[2]) if row<1 or row>3: print('[ERROR: Invalid Input]') loop=True elif column<1 or column>3: print('[ERROR: Invalid Input]') loop=True else: board[row-1][column-1]='X' loop=False def drawBoard(board): #Function that prints out board print(board[0][0]+' | '+board[0][1]+' | '+board[0][2]) print('---------') print(board[1][0]+' | '+board[1][1]+' | '+board[1][2]) print('---------') print(board[2][0]+' | '+board[2][1]+' | '+board[2][2]) print('---------') userInput(theBoard) drawBoard(theBoard) Error I got: TypeError: 'str' object does not support item assignment edit: sorry, I forgot to add the error line. I don't know why, but the program mistook theBoard as a string rather than a list. *A lot of people asked me to change theBoard=[' '' '' ']*3 to theBoard=[' ',' ',' ']*3 which I did; however, I am still receiving the same error A: In the line theBoard=[' '' '' ']*3 you are creating a list of three 3-character strings (the adjacent string literals are concatenated into one string before the list is built). In the line board[row-1][column-1] you are treating the list as if it is a 2d list. To make theBoard into a 2d list, try: theBoard = [[' ',' ',' '] for _ in range(3)] (building a fresh row each time; writing [row,row,row] with one shared row list would make a change to one row show up in all three). A: Well, it's not the program, it's you who mistook a string for a list. You declare the board as: theBoard=[' '' '' ']*3 So, you're passing a single string (technically three string literals, but they're concatenated into one). The output is a list with three strings. Therefore, when you call theBoard[0][1] you are trying to access the second character of the first string. And that is ok, but, as strings are immutable, changing it is not allowed.
Declaring the board as below is pretty much what you wanted, but it still gives you a one-dimensional list (therefore, you should access the last element by calling theBoard[8], not theBoard[2][2]): theBoard=[' ',' ',' ']*3 If you want it to be two-dimensional, try: theBoard = [["","",""] for i in range(3)]
Assigning value in python list
I tried to create a tic tac toe program with python list: theBoard=[' '' '' ']*3 def userInput(board): loop=True while loop: userInput=input("Please enter (row,column)") row=int(userInput[0]) column=int(userInput[2]) if row<1 or row>3: print('[ERROR: Invalid Input]') loop=True elif column<1 or column>3: print('[ERROR: Invalid Input]') loop=True else: board[row-1][column-1]='X' loop=False def drawBoard(board): #Function that prints out board print(board[0][0]+' | '+board[0][1]+' | '+board[0][2]) print('---------') print(board[1][0]+' | '+board[1][1]+' | '+board[1][2]) print('---------') print(board[2][0]+' | '+board[2][1]+' | '+board[2][2]) print('---------') userInput(theBoard) drawBoard(theBoard) Error I got: TypeError: 'str' object does not support item assignment edit: sorry, i forgot to add the error line I dont know why but the program mistook theBoard as a string rather than a list. *A lot of people asked me to change theBoard=[' '' '' ']*3 to theBoard=[' ',' ',' ']*3 which i did however, I am still receiving the same error
[ "In the line\ntheBoard=[' '' '' ']*3\n\nYou are creating a list of size 9\nin the line\nboard[row-1][column-1]\n\nYou are treating the list as if it is a 2d list\nTo make theBoard in to a 2d list try:\ntheBoard=[' ',' ',' ']\ntheBoard = [theBoard,theBoard,theBoard]\n\n", "Well, its not the program, its you who mistook string for a list. You declare the board as:\ntheBoard=[' '' '' ']*3\n\nSo, youre passing a single string (technically three strings, but passed as one, so for your comfort, theyre concatenated). Output is a list with three strings.\nTherefore, when you call theBoard[0][1] - you are trying to access the second character of the first string. And that is ok, but, as strings are immutable, changing it is not allowed.\nDeclaring board like below is pretty much what you wanted, but it still gives you one dimentional table (thereforo, you should access the last element by calling theBoard[8], and not theBoard[2][2]\ntheBoard=[' ',' ',' ']*3\nIf you want it to be two dimensional, try:\ntheBoard = [[\"\",\"\",\"\"] for i in range(3)]\n\n" ]
[ 0, 0 ]
[]
[]
[ "list", "python" ]
stackoverflow_0074506004_list_python.txt
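The aliasing pitfall touched on in the answers is easy to demonstrate: repeating one row object gives a board whose "rows" are all the same list, while a comprehension builds independent rows:

```python
# Aliased rows: all three entries are the same list object.
row = [' ', ' ', ' ']
shared = [row, row, row]
shared[0][0] = 'X'              # shows up in every "row"

# Independent rows: the comprehension builds a new inner list each time.
board = [[' '] * 3 for _ in range(3)]
board[0][0] = 'X'               # only affects row 0
```

The same aliasing happens with `[[' ']*3]*3`, which is why the comprehension form is the usual way to build a fresh tic-tac-toe grid.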
Q: Streamlit app keeps showing "Please wait..." and gives an error in the terminal The following error occurred in the terminal in PyCharm by running streamlit run app.py 2022-08-19 20:50:02.531 Uncaught exception Traceback (most recent call last): File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\http1connection.py", line 276, in _read_message delegate.finish() File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\routing.py", line 268, in finish self.delegate.finish() File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\web.py", line 2322, in finish self.execute() File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\web.py", line 2344, in execute self.handler = self.handler_class( File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\websocket.py", line 224, in __init__ super().__init__(application, request, **kwargs) File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\web.py", line 215, in __init__ super().__init__() File "C:\Python39\lib\typing.py", line 1083, in _no_init raise TypeError('Protocols cannot be instantiated') TypeError: Protocols cannot be instantiated A: I had the same issue. Uninstall Streamlit and install version 1.11.0. Type into the terminal: pip uninstall streamlit pip install streamlit==1.11.0 A: This problem shows up because of the Streamlit version. You can uninstall and reinstall a previous version. Try this command: pip uninstall streamlit pip install streamlit==1.11.0
Streamlit app keep showing "Please wait..." and give error in terminal
The following error occurred in the terminal in Pycharm by running streamlit run app.py 2022-08-19 20:50:02.531 Uncaught exception Traceback (most recent call last): File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\http1connection.py", line 276, in _read_message delegate.finish() File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\routing.py", line 268, in finish self.delegate.finish() File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\web.py", line 2322, in finish self.execute() File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\web.py", line 2344, in execute self.handler = self.handler_class( File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\websocket.py", line 224, in __init__ super().__init__(application, request, **kwargs) File "e:\project\movies-recommender-system\venv\lib\site-packages\tornado\web.py", line 215, in __init__ super().__init__() File "C:\Python39\lib\typing.py", line 1083, in _no_init raise TypeError('Protocols cannot be instantiated`enter code here`') TypeError: Protocols cannot be instantiated
[ "I had the same issue. Uninstall streamlit and install the version 1.11.0\nType into the terminal:\npip uninstall streamlit\n\npip install streamlit==1.11.0\n\n", "This problem shows up because of the streaming version. You can uninstall and reinstall the previous version.\nTry this command:\n\n\npip uninstall streamlit\npip install streamlit==1.11.0\n\n\n\n" ]
[ 4, 0 ]
[]
[]
[ "python", "streamlit", "web_applications" ]
stackoverflow_0073419067_python_streamlit_web_applications.txt
Q: Sphinx cannot find my python files. Says 'no module named ...' I have a question regarding the Sphinx autodoc generation. I feel that what I am trying to do should be very simple, but for some reason, it won't work. I have a Python project of which the directory is named slotting_tool. This directory is located at C:\Users\Sam\Desktop\picnic-data-shared-tools\standalone\slotting_tool I set up Sphinx using sphinx-quickstart. Then my directory structure (simplified) is as follows: slotting_tool/ |_ build/ |_ source/ |___ conf.py |___ index.rst |_ main/ |___ run_me.py Now, I set the root directory of my project to slotting_tool by adding the following to the conf.py file. import os import sys sys.path.insert(0, os.path.abspath('..')) Next, I update my index.rst file to look like this: .. toctree:: :maxdepth: 2 :caption: Contents: .. automodule:: main.run_me :members: When trying to build my html using the sphinx-build -b html source .\build command, I get the following output, with the no module named error: (base) C:\Users\Sam\Desktop\picnic-data-shared-tools\standalone\slotting_tool>sphinx-build -b html source .\build Running Sphinx v1.8.1 loading pickled environment... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 1 source files that are out of date updating environment: [] 0 added, 1 changed, 0 removed reading sources... [100%] index WARNING: autodoc: failed to import module 'run_me' from module 'main'; the following exception was raised: No module named 'standalone' looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [100%] index generating indices... genindex writing additional pages... search copying static files... done copying extra files... done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded, 1 warning. The HTML pages are in build. 
There are no HTML pages that refer to run_me.py in build. I have tried setting my root directory to all different kinds of directories and I have tried replacing all dots . with backslashes \ and so forth, but can't seem to find out what I'm doing wrong. By the way, the statement that standalone is not a module is in fact true, it is just a directory without an __init__.py. Don't know if that might have caused some trouble? Anyone have an idea? A: This is the usual "canonical approach" to "getting started" applied to the case when your source code resides in a src directory like Project/src instead of simply being inside the Project base directory. Follow these steps: Create a docs directory in your Project directory (it's from this docs directory the commands in the following steps are executed). sphinx-quickstart (choose separate source from build. Places .html and .rst files in different folders). sphinx-apidoc -o ./source ../src make html This would yield the following structure (provided your .py source files reside in Project/src): Project | ├───docs │ │ make.bat │ │ Makefile │ │ │ ├───build │ └───source │ │ conf.py │ │ index.rst │ │ modules.rst │ │ stack.rst │ │ │ ├───_static │ └───_templates └───src stack.py In your conf.py you'd add (after step 2): import os import sys sys.path.insert(0, os.path.abspath(os.path.join('..', '..', 'src'))) Also include in conf.py: extensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon'] And in index.rst you'd link modules.rst: Welcome to Project's documentation! ================================ .. toctree:: :maxdepth: 2 :caption: Contents: modules Indices and tables ================== * :ref:`genindex` * :ref:`modindex` * :ref:`search` Your stack.rst and modules.rst were auto-generated by sphinx-apidoc, no need to change them (at this point). But just so you know this is what they look like: stack.rst: stack module ============ .. automodule:: stack :members: :undoc-members: :show-inheritance: modules.rst: src === ..
toctree:: :maxdepth: 4 stack After `make html` open `Project/docs/build/index.html` in your browser to see the results (the original answer showed screenshots of the rendered pages here). A: Let's take an example with a project: dl4sci-school-2020 on master branch, commit: 6cbcc2c72d5dc74d2defa56bf63706fd628d9892: ├── dl4sci-school-2020 │   ├── LICENSE │   ├── README.md │   ├── src │   │   └── __init__.py │   └── utility │   ├── __init__.py │   └── utils.py and the utility package has a utils.py module. Follow this process (FYI, I'm using sphinx-build 3.1.2): create a docs/ directory under your project: mkdir docs cd docs start sphinx within docs/, and just pass your project_name, your_name & a version of your choice, keeping the rest as defaults. sphinx-quickstart You will get the below auto-generated in your docs/ folder ├── docs │   ├── Makefile │   ├── build │   ├── make.bat │   └── source │   ├── _static │   ├── _templates │   ├── conf.py │   └── index.rst Since we created a separate docs directory, we need to tell sphinx where to find the build files and the python src module. So, edit the conf.py file; you can use my conf.py file too import os import sys basedir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..')) sys.path.insert(0, basedir) Now, to enable access to nested multiple packages & modules if any, you need to edit the index.rst file. .. toctree:: :maxdepth: 2 :caption: Description of my CodeBase: modules The modules entry picks up content from the modules.rst file, which we will create below. Make sure you're still in docs/ to run the below command sphinx-apidoc -o ./source ..
The output you get: ├── docs │   ├── Makefile │   ├── build │   ├── make.bat │   └── source │   ├── _static │   ├── _templates │   ├── conf.py │   ├── index.rst │   ├── modules.rst │   ├── src.rst │   └── utility.rst now run: make html Now go and open, in the browser of your choice, file:///<absolute_path_to_your_project>/dl4sci-school-2020/docs/build/html/index.html and you have your beautiful documentation ready https://imgur.com/5t1uguh FYI, you can switch to any theme of your choice; I found sphinx_rtd_theme and the extension sphinxcontrib.napoleon super dope! Thanks to their creators, so I used them. The below does the work! pip install sphinxcontrib-napoleon pip install sphinx-rtd-theme You can host your documentation on readthedocs. Enjoy documenting your code! A: sys.path.insert(0, os.path.abspath('../..')) That's not correct. Steve Piercy's comment is not entirely on point (you don't need to add a __init__.py since you're using a simple module) but they're right that autodoc will try to import the module and then inspect the content. However, assuming your tree is doc/conf.py src/stack.py then you're just adding the folder which contains your repository to the sys.path, which is completely useless. What you need to do is add the src folder to sys.path, such that when sphinx tries to import stack it finds your module. So your line should be: sys.path.insert(0, os.path.abspath('../src')) (the path should be relative to conf.py). Of note: since you have something which is completely synthetic and should contain no secrets, an accessible repository or a zip file of the entire thing makes it much easier to diagnose issues and provide relevant help: the less has to be inferred, the less can be wrong in the answer. A: IMHO running pip install --no-deps -e . in the top project folder (or wherever setup.py is) to get an "editable" install is a better alternative to get your package modules on the PYTHONPATH than altering it in docs/conf.py using sys.path.
Sphinx cannot find my python files. Says 'no module named ...'
I have a question regarding the Sphinx autodoc generation. I feel that what I am trying to do should be very simple, but for some reason, it won't work. I have a Python project of which the directory is named slotting_tool. This directory is located at C:\Users\Sam\Desktop\picnic-data-shared-tools\standalone\slotting_tool I set up Sphinx using sphinx-quickstart. Then my directory structure (simplified) is as follows: slotting_tool/ |_ build/ |_ source/ |___ conf.py |___ index.rst |_ main/ |___ run_me.py Now, I set the root directory of my project to slotting_tool by adding the following to the conf.py file. import os import sys sys.path.insert(0, os.path.abspath('..')) Next, I update my index.rst file to look like this: .. toctree:: :maxdepth: 2 :caption: Contents: .. automodule:: main.run_me :members: When trying to build my html using the sphinx-build -b html source .\build command, I get the following output, with the no module named error: (base) C:\Users\Sam\Desktop\picnic-data-shared-tools\standalone\slotting_tool>sphinx-build -b html source .\build Running Sphinx v1.8.1 loading pickled environment... done building [mo]: targets for 0 po files that are out of date building [html]: targets for 1 source files that are out of date updating environment: [] 0 added, 1 changed, 0 removed reading sources... [100%] index WARNING: autodoc: failed to import module 'run_me' from module 'main'; the following exception was raised: No module named 'standalone' looking for now-outdated files... none found pickling environment... done checking consistency... done preparing documents... done writing output... [100%] index generating indices... genindex writing additional pages... search copying static files... done copying extra files... done dumping search index in English (code: en) ... done dumping object inventory... done build succeeded, 1 warning. The HTML pages are in build. There are no HTML pages that refer to run_me.py in build. 
I have tried setting my root directory to all different kinds of directories and I have tried replacing all dots . with backslashes \ and so forth, but can't seem to find out what I'm doing wrong. By the way, the statement that standalone is not a module is in fact true, it is just a directory without an __init__.py. Don't know if that might have caused some trouble? Anyone have an idea?
[ "This is the usual \"canonical approach\" to \"getting started\" applied to the case when your source code resides in a src directory like Project/src instead of simply being inside the Project base directory.\nFollows these steps:\n\nCreate a docs directory in your Project directory (it's from this docs directory the commands in the following steps are executed).\n\nsphinx-quickstart (choose separate source from build. Places .html and .rst files in different folders).\n\nsphinx-apidoc -o ./source ../src\n\nmake html\n\n\nThis would yield the following structure (provided you .py source files reside in Project/src):\nProject\n|\n├───docs\n│ │ make.bat\n│ │ Makefile\n│ │\n│ ├───build\n│ └───source\n│ │ conf.py\n│ │ index.rst\n│ │ modules.rst\n│ │ stack.rst\n│ │\n│ ├───_static\n│ └───_templates\n└───src\n stack.py\n\nIn your conf.py you'd add (after step 2):\nimport os\nimport sys\nsys.path.insert(0, os.path.abspath(os.path.join('..', '..', 'src')))\n\nAlso include in conf.py:\nextensions = ['sphinx.ext.autodoc', 'sphinx.ext.napoleon']\nAnd in index.rst you'd link modules.rst:\nWelcome to Project's documentation!\n================================\n\n.. toctree::\n :maxdepth: 2\n :caption: Contents:\n\n modules\n \n \nIndices and tables\n==================\n\n* :ref:`genindex`\n* :ref:`modindex`\n* :ref:`search`\n\n\nYour stack.rst and modules.rst were auto-generated by sphinx-apidoc, no need to change them (at this point). But just so you know this is what they look like:\nstack.rst:\nstack module\n============\n\n.. automodule:: stack\n :members:\n :undoc-members:\n :show-inheritance:\n\nmodules.rst:\nsrc\n===\n\n.. 
toctree::\n :maxdepth: 4\n\n stack\n\n\n\nAfter `make html` open `Project/docs/build/index.html` in your browser, the results:\n\nand:\n\n", "Let's take an example with a project: dl4sci-school-2020 on master branch, commit: 6cbcc2c72d5dc74d2defa56bf63706fd628d9892:\n├── dl4sci-school-2020\n│   ├── LICENSE\n│   ├── README.md\n│   ├── src\n│   │   └── __init__.py\n│   └── utility\n│   ├── __init__.py\n│   └── utils.py\n\nand utility package has a utils.py module:\nFollow this process(FYI, I'm using sphinx-build 3.1.2):\n\ncreate a docs/ directory under you project:\n\nmkdir docs\ncd docs\n\n\nstart sphinx within docs/, and just pass your project_name, your_name & version of your choice and rest keep defaults.\n\nsphinx-quickstart\n\nyou will get below auto-generated in your docs/ folder\n├── docs\n│   ├── Makefile\n│   ├── build\n│   ├── make.bat\n│   └── source\n│   ├── _static\n│   ├── _templates\n│   ├── conf.py\n│   └── index.rst\n\nSince, we created a separate docs directory so we need sphinx find\nwhere to find build files and python src module.\nSo, edit the conf.py file, you can use my conf.py file too\nimport os\nimport sys\nbasedir = os.path.abspath(os.path.join(os.path.dirname(__file__), '..', '..'))\nsys.path.insert(0, basedir)\n\nNow, to enable access to nested multiple packages & modules if any, you need to edit index.rst file.\n.. 
toctree::\n :maxdepth: 2\n :caption: Description of my CodeBase:\n\n modules\n\nThe modules picks up content from modules.rst file which we will create below:\nMake sure you're still in doc/ to run the below command\nsphinx-apidoc -o ./source ..\n\nThe output you get:\n├── docs\n│   ├── Makefile\n│   ├── build\n│   ├── make.bat\n│   └── source\n│   ├── _static\n│   ├── _templates\n│   ├── conf.py\n│   ├── index.rst\n│   ├── modules.rst\n│   ├── src.rst\n│   └── utility.rst\n\nnow run:\nmake html\n\nNow, go and open in browser of your choice,\nfile:///<absolute_path_to_your_project>/dl4sci-school-2020/docs/build/html/index.html\nhave you beautiful documentation ready\n\nhttps://imgur.com/5t1uguh\nFYI, You can switch any theme of your choice, I found sphinx_rtd_theme and extension sphinxcontrib.napoleon super dope!. Thanks to their creators, so I used it.\nbelow does the work!\npip install sphinxcontrib-napoleon\npip install sphinx-rtd-theme\n\nYou can host your documentation it on readthedocs\nenjoy documenting your code!\n", "\nsys.path.insert(0, os.path.abspath('../..'))\n\n\nThat's not correct. Steve Piercy's comment is not entirely on point (you don't need to add a __init__.py since you're using a simple module) but they're right that autodoc will try to import the module and then inspect the content.\nHoever assuming your tree is\ndoc/conf.py\nsrc/stack.py\n\nthen you're just adding the folder which contains your repository to the sys.path which is completely useless. What you need to do is add the src folder to sys.path, such that when sphinx tries to import stack it finds your module. 
So your line should be:\n\nsys.path.insert(0, os.path.abspath('../src')\n\n\n(the path should be relative to conf.py).\nOf note: since you have something which is completely synthetic and should contain no secrets, an accessible repository or a zip file of the entire thing makes it much easier to diagnose issues and provide relevant help: the less has to be inferred, the less can be wrong in the answer.\n", "IMHO running pip install --no-deps -e . in the top project folder (or where ever setup.py is) to get an \"editable\" install is a better alternative to get your package modules on the PYTHONPATH than altering it in docs/conf.py using sys.path.\n" ]
[ 21, 5, 3, 0 ]
[ "For me installing the package via setup.py file and re-running corresponding commands fixed the problem:\n$ python setup.py install\n\n" ]
[ -2 ]
[ "autodoc", "python", "python_3.x", "python_sphinx" ]
stackoverflow_0053668052_autodoc_python_python_3.x_python_sphinx.txt
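The path arithmetic behind the `sys.path.insert` lines in these answers can be checked with plain `os.path`; the directory names below mirror the hypothetical `Project/docs/source` layout from the first answer:

```python
import os

# Pretend conf.py lives at Project/docs/source/conf.py.
conf_dir = os.path.join("Project", "docs", "source")

# sys.path.insert(0, os.path.abspath(os.path.join('..', '..', 'src')))
# resolves, relative to conf.py, to Project/src:
src_dir = os.path.normpath(os.path.join(conf_dir, "..", "..", "src"))
```

Checking the resolved path this way (or printing `sys.path` at the top of `conf.py`) is a quick diagnostic for the "no module named ..." autodoc warning: if the folder that actually contains your package is not the one the relative path lands on, imports fail exactly as in the question.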
Q: How do I convert TensorFlow Dense(kernel_constraint=max_norm) to PyTorch code? Dense(self.latent_dim, kernel_constraint=max_norm(0.5))(en_conv) I want to convert the above TensorFlow code to PyTorch, but I don't understand kernel_constraint=max_norm(0.5). How can I convert it?
How I convert tensoflow Linear(kernel_constraint=max_norm) to pyotch code?
Dense(self.latent_dim, kernel_constraint=max_norm(0.5))(en_conv) I want to convert the above tensoflow code to pytorch, but I don't understand kernel_constraint=max_norm(0.5). How can I convert it?
[]
[]
[ "one way possible is to do it by a custom layer that you can use in the model as a custom layer. Kernel constrain is the same as you do by initializing the value in the simple Dense layer.\n\nSample: Dense layer with initial weight, you can use tf.zeros() or tf.ones() or random function or tf.constant() but the model training result does not always converge at the single points. To find possibilities you need to initial it from specific but running you may start from trained values.\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Simply Dense\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass SimpleDense(tf.keras.layers.Layer):\n\n def __init__(self, units=32):\n super(SimpleDense, self).__init__()\n self.units = units\n\n def build(self, input_shape):\n self.w = self.add_weight(shape=(input_shape[-1], self.units),\n initializer='random_normal',\n trainable=True)\n self.b = self.add_weight(shape=(self.units,),\n initializer='random_normal',\n trainable=True)\n\n def call(self, inputs):\n return tf.matmul(inputs, self.w) + self.b\n\n\nSample: As the question requirements, Dense layer with an initializer of the MaxNorm constrain.\n\nimport tensorflow as tf\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]\nNone\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nphysical_devices = tf.config.experimental.list_physical_devices('GPU')\nassert len(physical_devices) > 0, \"Not enough GPU hardware devices available\"\nconfig = tf.config.experimental.set_memory_growth(physical_devices[0], 
True)\nprint(physical_devices)\nprint(config)\n\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Class / Funtions\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\nclass MaxNorm(tf.keras.layers.Layer):\n def __init__(self, max_value=2, axis=1):\n super(MaxNorm, self).__init__()\n # self.units = units\n self._out_shape = None\n self.max_value = max_value\n self.axis = axis\n\n def build(self, input_shape):\n self._out_shape = input_shape\n\n def call(self, inputs):\n temp = tf.keras.layers.Dense(inputs.shape[1], kernel_constraint=tf.keras.constraints.MaxNorm(max_value=self.max_value, axis=self.axis), activation=None)( inputs )\n\n return temp\n \n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\n: Tasks\n\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\"\" \ntemp = tf.constant([[ 0.00346701, -0.00676209, -0.00109781, -0.0005832 , 0.00047849, 0.00311204, 0.00843922, -0.00400238, 0.00127922, -0.0026469 ,\n-0.00232184, -0.00686269, 0.00021552, -0.0039388 , 0.00753652,\n-0.00405236, -0.0008759 , 0.00275771, 0.00144688, -0.00361056,\n-0.0036177 , 0.00778807, -0.00116923, 0.00012773, 0.00276652,\n0.00438983, -0.00769166, -0.00432891, -0.00211244, -0.00594028,\n0.01009954, 0.00581804, -0.0062736 , -0.00921499, 0.00710281,\n0.00022364, 0.00051054, -0.00204145, 0.00928543, -0.00129213,\n-0.00209933, -0.00212295, -0.00452125, -0.00601313, -0.00239222,\n0.00663724, 0.00228883, 0.00359715, 0.00090024, 0.01166699,\n-0.00281386, -0.00791688, 0.00055902, 0.00070648, 0.00052972,\n0.00249906, 0.00491098, 0.00528313, -0.01159694, -0.00370812,\n-0.00950641, 0.00408999, 0.00800613, 0.0014898 ]], dtype=tf.float32)\n\nlayer = MaxNorm(max_value=2)\nprint( layer( temp )[0][tf.math.argmax(layer( temp )[0]).numpy()] )\nlayer = 
MaxNorm(max_value=4)\nprint( layer( temp )[0][tf.math.argmax(layer( temp )[0]).numpy()] )\nlayer = MaxNorm(max_value=10)\nprint( layer( temp )[0][tf.math.argmax(layer( temp )[0]).numpy()] )\n\n\nOutput: The custom modified creation of a new layer, one way to prove the answer is initial from near zero or where you know about results. Starting from zero you pay attention in less vary but none zeros you do most at the magnitudes of the process.\n\ntf.Tensor(-0.8576179, shape=(), dtype=float32)\ntf.Tensor(0.6010429, shape=(), dtype=float32)\ntf.Tensor(2.2286513, shape=(), dtype=float32)\n\n" ]
[ -1 ]
[ "python", "pytorch", "tensorflow" ]
stackoverflow_0074505815_python_pytorch_tensorflow.txt
Q: Reading in a web based text file I am getting a ton of errors with json() I am getting a bunch of errors with respect to the json function. My code is below and it's pretty straight forward. I am trying to request this text file on the web and write it to a new file. Then parse the data to get the first IP address in each row. I am first just trying to get past all of these errors. #extracting text from website https://isc.sans.edu/block.txt import requests import json url = "https://isc.sans.edu/block.txt" webtext = requests.get(url).json() # writing to file webtextfile = open('webtextfile.txt', 'w') webtextfile.writelines(str(webtext)) webtextfile.close() # Using readlines() webtextfile = open('webtextfile.txt', 'r') Lines = webtextfile.readlines() print(Lines) #parsing the lines Errors below: Traceback (most recent call last): File "/home/allen/Education/Masters/VT/5480/Project_6/P6.py", line 8, in <module> webtext = requests.get(url).json() File "/usr/lib/python3/dist-packages/requests/models.py", line 900, in json return complexjson.loads(self.text, **kwargs) File "/usr/lib/python3.10/json/__init__.py", line 346, in loads return _default_decoder.decode(s) File "/usr/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) I am getting a bunch of errors with respect to the json function. My code is below and it's pretty straight forward. I am trying to request this text file on the web and write it to a new file. Then parse the data to get the first IP address in each row. I am first just trying to get past all of these errors. I have tried without json. but I am not completely sure why I am using json. A: This may not be exactly optimized but it works and probably can be fixed up a bit. 
As mentioned in the comments you were trying to read the site as json data when it is a text file so I changed webtext = requests.get(url).json() to webtext = requests.get(url).text and added some parsing below your line Lines = webtextfile.readlines(). So the data you want starts at the first line without a '#" so the first for loop will count up until it doesn't find the '#'. This line would be the header. After that loop through starting at the header+1. This would be the table data. Each row is in its own list so you just need the first element from that list which is the ip column you need. Explanations in code block as well. #extracting text from website https://isc.sans.edu/block.txt import requests import json url = "https://isc.sans.edu/block.txt" webtext = requests.get(url).text # webtext = webtext.split() # writing to file webtextfile = open('webtextfile.txt', 'w') webtextfile.writelines(str(webtext)) webtextfile.close() # Using readlines() webtextfile = open('webtextfile.txt', 'r') Lines = webtextfile.readlines() # clean the text from Lines which is a list and split on tab,newslines,whitespaces # cleaned will become a list of lists cleaned = [item.split() for item in Lines] count = 0 start_position = 0 # if the item contains a '#' keep counting up if it doesn't that's the start position for line in cleaned: if "#" in line: count = count + 1 else: start_position = count break # skip header. (currently it would be at the header line. beginning with 'start') start_position += 1 ip_list = [] for i in range(start_position,len(Lines)): #each row is its own list print(cleaned[i]) #element 0 will be the ip ip_list.append(cleaned[i][0]) print(ip_list)
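A shorter variant of the same idea, in case it helps: skip the '#' comment lines and take the first whitespace-separated field of each remaining row. The sample string below just mimics the layout of block.txt so the sketch runs offline; against the real file you would loop over requests.get(url).text.splitlines() instead.

```python
# Skip '#' comment/header lines and keep the first column (the start IP).
# The sample mimics the layout of https://isc.sans.edu/block.txt so this
# snippet runs without network access.
sample = """\
# Recommended block list
# comments end here
10.0.0.0\t10.0.0.255\t24\tattacks
192.0.2.0\t192.0.2.255\t24\tattacks
"""

ip_list = []
for line in sample.splitlines():
    line = line.strip()
    if not line or line.startswith("#"):
        continue                      # comment/header rows
    ip_list.append(line.split()[0])   # first column = start IP

print(ip_list)  # ['10.0.0.0', '192.0.2.0']
```

If the real file also carries a non-comment header row (the answer above mentions one), skip the first remaining line as well before collecting IPs.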
Reading in a web based text file I am getting a ton of errors with json()
I am getting a bunch of errors with respect to the json function. My code is below and it's pretty straight forward. I am trying to request this text file on the web and write it to a new file. Then parse the data to get the first IP address in each row. I am first just trying to get past all of these errors. #extracting text from website https://isc.sans.edu/block.txt import requests import json url = "https://isc.sans.edu/block.txt" webtext = requests.get(url).json() # writing to file webtextfile = open('webtextfile.txt', 'w') webtextfile.writelines(str(webtext)) webtextfile.close() # Using readlines() webtextfile = open('webtextfile.txt', 'r') Lines = webtextfile.readlines() print(Lines) #parsing the lines Errors below: Traceback (most recent call last): File "/home/allen/Education/Masters/VT/5480/Project_6/P6.py", line 8, in <module> webtext = requests.get(url).json() File "/usr/lib/python3/dist-packages/requests/models.py", line 900, in json return complexjson.loads(self.text, **kwargs) File "/usr/lib/python3.10/json/__init__.py", line 346, in loads return _default_decoder.decode(s) File "/usr/lib/python3.10/json/decoder.py", line 337, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/usr/lib/python3.10/json/decoder.py", line 355, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) I am getting a bunch of errors with respect to the json function. My code is below and it's pretty straight forward. I am trying to request this text file on the web and write it to a new file. Then parse the data to get the first IP address in each row. I am first just trying to get past all of these errors. I have tried without json. but I am not completely sure why I am using json.
[ "This may not be exactly optimized but it works and probably can be fixed up a bit. As mentioned in the comments you were trying to read the site as json data when it is a text file so I changed webtext = requests.get(url).json() to webtext = requests.get(url).text and added some parsing below your line Lines = webtextfile.readlines().\nSo the data you want starts at the first line without a '#\" so the first for loop will count up until it doesn't find the '#'. This line would be the header. After that loop through starting at the header+1. This would be the table data. Each row is in its own list so you just need the first element from that list which is the ip column you need.\nExplanations in code block as well.\n#extracting text from website https://isc.sans.edu/block.txt\n\nimport requests\nimport json\n\nurl = \"https://isc.sans.edu/block.txt\"\n\nwebtext = requests.get(url).text\n\n# webtext = webtext.split()\n# writing to file\nwebtextfile = open('webtextfile.txt', 'w')\nwebtextfile.writelines(str(webtext))\nwebtextfile.close()\n \n# Using readlines()\nwebtextfile = open('webtextfile.txt', 'r')\n\nLines = webtextfile.readlines()\n\n# clean the text from Lines which is a list and split on tab,newslines,whitespaces\n# cleaned will become a list of lists\ncleaned = [item.split() for item in Lines]\ncount = 0\nstart_position = 0\n\n# if the item contains a '#' keep counting up if it doesn't that's the start position\nfor line in cleaned:\n if \"#\" in line:\n count = count + 1\n else:\n start_position = count\n break\n\n\n# skip header. (currently it would be at the header line. beginning with 'start')\nstart_position += 1\nip_list = []\nfor i in range(start_position,len(Lines)):\n #each row is its own list \n print(cleaned[i])\n #element 0 will be the ip\n ip_list.append(cleaned[i][0])\n\nprint(ip_list)\n\n" ]
[ 0 ]
[]
[]
[ "json", "parsing", "python", "text_files" ]
stackoverflow_0074505872_json_parsing_python_text_files.txt
Q: Measure distance between meshes For my project, I need to measure the distance between two STL files. I wrote a script that allows reading the files, positioning them in relation to each other in the desired position. Now, in the next step I need to check the distance from one object to the other. Is there a function or script available on a library that allows me to carry out this process? Because then I’m going to want to define metrics like interpenetration area, maximum negative distance etc etc so I need to check first the distance between those objects and see if there is like mesh intersection and mesure that distance. I put the url for the combination of the 2 objects that I want to mesure the distance: https://imgur.com/wgNaalh A: Pyvista offers a really easy way of calculating just that: import pyvista as pv import numpy as np mesh_1 = pv.read(**path to mesh 1**) mesh_2 = pv.read(**path to mesh 2**) closest_cells, closest_points = mesh_2.find_closest_cell(mesh_1.points, return_closest_point=True) d_exact = np.linalg.norm(mesh_1 .points - closest_points, axis=1) print(f'mean distance is: {np.mean(d_exact)}') For more methods and examples, have a look at: https://docs.pyvista.org/examples/01-filter/distance-between-surfaces.html#using-pyvista-filter A: To calculate the distance between two meshes, first one needs to check whether these meshes intersect. If not, then the resulting distance can be computed as the distance between two closest points, one from each mesh (as on the picture below). If the meshes do intersect, then it is necessary to find the part of each mesh, which is inside the other mesh, then find two most distant points, one from each inner part. The distance between these points will be the maximum deepness of the meshes interpenetration. It can be returned with negative sign to distinguish it from the distance between separated meshes. 
In Python, one can use the MeshLib library and its findSignedDistance function as follows: import meshlib.mrmeshpy as mr mesh1 = mr.loadMesh("Cube.stl") mesh2 = mr.loadMesh("Torus.stl") z = mr.findSignedDistance(mesh1.value(), mesh2.value()) print(z.signedDist) # 0.3624192774295807
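For readers without either library, the "two closest points, one from each mesh" idea can be sketched with plain NumPy on the vertex arrays. This is only a brute-force, vertex-to-vertex approximation (it ignores points in the interior of faces), and the coordinates are made up for illustration:

```python
import numpy as np

# Brute-force nearest vertex-pair distance between two small point clouds.
# This approximates the surface-to-surface distance using vertices only,
# but it illustrates the "closest points, one from each mesh" idea.
verts_a = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
verts_b = np.array([[3.0, 0.0, 0.0], [4.0, 1.0, 0.0]])

# (n, 1, 3) - (1, m, 3) -> (n, m) pairwise distances via broadcasting
pairwise = np.linalg.norm(verts_a[:, None, :] - verts_b[None, :, :], axis=-1)
print(pairwise.min())  # 2.0 (between [1,0,0] and [3,0,0])
```

For dense meshes the broadcasting becomes memory-hungry, which is why dedicated libraries (pyvista, MeshLib) use spatial trees instead.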
Measure distance between meshes
For my project, I need to measure the distance between two STL files. I wrote a script that allows reading the files, positioning them in relation to each other in the desired position. Now, in the next step I need to check the distance from one object to the other. Is there a function or script available on a library that allows me to carry out this process? Because then I’m going to want to define metrics like interpenetration area, maximum negative distance etc etc so I need to check first the distance between those objects and see if there is like mesh intersection and mesure that distance. I put the url for the combination of the 2 objects that I want to mesure the distance: https://imgur.com/wgNaalh
[ "Pyvista offers a really easy way of calculating just that:\nimport pyvista as pv\nimport numpy as np\n\nmesh_1 = pv.read(**path to mesh 1**)\nmesh_2 = pv.read(**path to mesh 2**)\n\nclosest_cells, closest_points = mesh_2.find_closest_cell(mesh_1.points, return_closest_point=True)\nd_exact = np.linalg.norm(mesh_1 .points - closest_points, axis=1)\nprint(f'mean distance is: {np.mean(d_exact)}')\n\nFor more methods and examples, have a look at:\nhttps://docs.pyvista.org/examples/01-filter/distance-between-surfaces.html#using-pyvista-filter\n", "To calculate the distance between two meshes, first one needs to check whether these meshes intersect. If not, then the resulting distance can be computed as the distance between two closest points, one from each mesh (as on the picture below).\n\nIf the meshes do intersect, then it is necessary to find the part of each mesh, which is inside the other mesh, then find two most distant points, one from each inner part. The distance between these points will be the maximum deepness of the meshes interpenetration. It can be returned with negative sign to distinguish it from the distance between separated meshes.\nIn Python, one can use MeshLib library and findSignedDistance function from it as follows:\nimport meshlib.mrmeshpy as mr\nmesh1 = mr.loadMesh(\"Cube.stl\")\nmesh2 = mr.loadMesh(\"Torus.stl\"))\nz = mr.findSignedDistance(mesh1.value(), mesh2.value())\nprint(z.signedDist) // 0.3624192774295807\n\n" ]
[ 1, 0 ]
[]
[]
[ "ascii", "distance", "intersection", "python", "stl_format" ]
stackoverflow_0061159587_ascii_distance_intersection_python_stl_format.txt
Q: understanding librosa.feature.spectral_contrast I am using Python and I am trying to use this function but I am struggling with it. def extract_feature_for_one_signal(signal): signal = signal.astype(float) mel = np.mean(librosa.feature.melspectrogram(signal, sr=SAMPLE_RATE, n_fft=N_FFT, hop_length=HOP_LENGTH).T, axis=0) mfccs = np.mean(librosa.feature.mfcc(y=signal, sr=SAMPLE_RATE, n_mfcc=40).T, axis=0) stft = np.abs(librosa.stft(signal)) chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=SAMPLE_RATE).T, axis=0) contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=SAMPLE_RATE).T, axis=0) tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(signal), sr=SAMPLE_RATE).T, axis=0) average_distance = [] for std in STD_NUMS: average_distance.append(average_distance_between_spikes(np.abs(signal), std, 320)) average_distance.append(average_distance_between_spikes(signal, std, 320)) return mfccs, chroma, mel, contrast, tonnetz, average_distance The program fails here: contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=SAMPLE_RATE).T, axis=0) SAMPLE_RATE = 1000 (it must be 1000.....) what can I do to make it work? A: With SAMPLE_RATE = 1000 the Nyquist frequency is only 500 Hz, so the default spectral-contrast bands would extend above it. Try reducing the number of filter bands from the default 6 to maybe 3 or 4. You can also reduce your fmin to say 50. The sampling rate you have chosen is too small. Keep it around 44100, which is the standard. It should work fine then
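The constraint behind the error can be checked with simple arithmetic: spectral contrast uses octave-spaced bands starting at fmin, so the top band edge is roughly fmin * 2**n_bands and must stay below the Nyquist frequency sr / 2. A small sketch of that check (the exact band layout inside librosa may differ slightly, but the inequality captures why sr=1000 fails with the defaults):

```python
# Octave-spaced bands: the top band edge is roughly fmin * 2**n_bands and
# must stay below the Nyquist frequency (sr / 2) or librosa raises an error.
# Defaults shown here (fmin=200, n_bands=6) mirror librosa's documented ones.
def contrast_params_ok(sr, fmin=200.0, n_bands=6):
    nyquist = sr / 2
    top_edge = fmin * 2 ** n_bands
    return top_edge <= nyquist

print(contrast_params_ok(sr=1000))                      # False: 200 * 64 = 12800 > 500
print(contrast_params_ok(sr=1000, fmin=50, n_bands=3))  # True:  50 * 8 = 400 <= 500
print(contrast_params_ok(sr=44100))                     # True:  12800 <= 22050
```

So if the sample rate really must stay at 1000 Hz, lowering fmin and n_bands together is the way to keep the bands inside the representable spectrum.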
understanding librosa.feature.spectral_contrast
i am using python and I am trying to use this function but i am struggling with it. def extract_feature_for_one_signal(signal): signal = signal.astype(float) mel = np.mean(librosa.feature.melspectrogram(signal, sr=SAMPLE_RATE, n_fft=N_FFT, hop_length=HOP_LENGTH).T, axis=0) mfccs = np.mean(librosa.feature.mfcc(y=signal, sr=SAMPLE_RATE, n_mfcc=40).T, axis=0) stft = np.abs(librosa.stft(signal)) chroma = np.mean(librosa.feature.chroma_stft(S=stft, sr=SAMPLE_RATE).T, axis=0) **contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=SAMPLE_RATE).T, axis=0)** tonnetz = np.mean(librosa.feature.tonnetz(y=librosa.effects.harmonic(signal), sr=SAMPLE_RATE).T, axis=0) average_distance = [] for std in STD_NUMS: average_distance.append(average_distance_between_spikes(np.abs(signal), std, 320)) average_distance.append(average_distance_between_spikes(signal, std, 320)) return mfccs, chroma, mel, contrast, tonnetz, average_distance The program falls here: contrast = np.mean(librosa.feature.spectral_contrast(S=stft, sr=SAMPLE_RATE).T, axis=0) SAMPLE_RATE = 1000 (it must be 1000.....) what can i do to make it work?
[ "With SAMPLE_RATE = 1000 the Nyquist frequency is only 500 Hz, so the default spectral-contrast bands would extend above it. Try reducing the number of filter bands from the default 6 to maybe 3 or 4. You can also reduce your fmin to say 50.\nThe sampling rate you have chosen is too small. Keep it around 44100, which is the standard. It should work fine then\n" ]
[ 0 ]
[]
[]
[ "librosa", "python" ]
stackoverflow_0064119762_librosa_python.txt
Q: Fill NaN with the max value from a group I have an input data as shown: df = pd.DataFrame({"colony" : [22, 22, 22, 33, 33, 33], "measure" : [np.nan, 7, 11, 13, np.nan, 9,], "net/gross" : [np.nan, "gross", "net", "gross", "np.nan", "net"]}) df colony measure net/gross 0 22 NaN NaN 1 22 7 gross 2 22 11 net 3 33 13 gross 4 33 NaN NaN 5 33 9 net I want to fill the NaN in the measure column with maximum value from each group of the colony, then fill the NaN in the net/gross column with the net/gross value at the row where the measure was maximum (e.g fill the NaN at index 0 with the value corresponding to where the measure was max which is "net") and create a remark column to document all the NaN filled rows as "max_filled" and the other rows as "unchanged" to arrive at an output as below: colony measure net/gross remarks 0 22 11 net max_filled 1 22 7 gross unchanged 2 22 11 net unchanged 3 33 13 gross unchanged 4 33 13 gross max_filled 5 33 9 net unchanged A: You can use SeriesGroupBy.transform to get the maximum value for each group then use pandas.Series.fillna. Try this : df['measure']= df['measure'].fillna(df.groupby('colony')['measure'].transform('max')) # Output : print(df) colony measure 0 22 11.0 1 22 7.0 2 22 11.0 3 33 13.0 4 33 13.0 5 33 9.0
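The answer above fills measure; the net/gross and remarks columns from the expected output can be sketched the same way with groupby().idxmax(). Note the question's input accidentally has the string "np.nan" in one row; the sketch below uses a real NaN there, as the expected output implies:

```python
import numpy as np
import pandas as pd

# Same input as the question, but with a real NaN where the question
# accidentally wrote the string "np.nan".
df = pd.DataFrame({"colony": [22, 22, 22, 33, 33, 33],
                   "measure": [np.nan, 7, 11, 13, np.nan, 9],
                   "net/gross": [np.nan, "gross", "net", "gross", np.nan, "net"]})

# Mark the rows we are about to fill before touching them.
df["remarks"] = np.where(df["measure"].isna(), "max_filled", "unchanged")

# idxmax skips NaN, so this picks the row of each colony's max measure.
max_rows = df.loc[df.groupby("colony")["measure"].idxmax()].set_index("colony")

df["measure"] = df["measure"].fillna(df["colony"].map(max_rows["measure"]))
df["net/gross"] = df["net/gross"].fillna(df["colony"].map(max_rows["net/gross"]))
print(df)
```

Mapping through the colony column keeps the fill values aligned per group, so the net/gross value always comes from the same row as the group's maximum measure.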
Fill NaN with the max value from a group
I have an input data as shown: df = pd.DataFrame({"colony" : [22, 22, 22, 33, 33, 33], "measure" : [np.nan, 7, 11, 13, np.nan, 9,], "net/gross" : [np.nan, "gross", "net", "gross", "np.nan", "net"]}) df colony measure net/gross 0 22 NaN NaN 1 22 7 gross 2 22 11 net 3 33 13 gross 4 33 NaN NaN 5 33 9 net I want to fill the NaN in the measure column with maximum value from each group of the colony, then fill the NaN in the net/gross column with the net/gross value at the row where the measure was maximum (e.g fill the NaN at index 0 with the value corresponding to where the measure was max which is "net") and create a remark column to document all the NaN filled rows as "max_filled" and the other rows as "unchanged" to arrive at an output as below: colony measure net/gross remarks 0 22 11 net max_filled 1 22 7 gross unchanged 2 22 11 net unchanged 3 33 13 gross unchanged 4 33 13 gross max_filled 5 33 9 net unchanged
[ "You can use SeriesGroupBy.transform to get the maximum value for each group then use pandas.Series.fillna.\nTry this :\ndf['measure']= df['measure'].fillna(df.groupby('colony')['measure'].transform('max'))\n\n# Output :\nprint(df)\n\n colony measure\n0 22 11.0\n1 22 7.0\n2 22 11.0\n3 33 13.0\n4 33 13.0\n5 33 9.0\n\n" ]
[ 0 ]
[]
[]
[ "numpy", "pandas", "python" ]
stackoverflow_0074506156_numpy_pandas_python.txt
Q: Check for substrings in sequential order using python I’m currently developing a Blender Add-on that is a lip-sync tool for 2D and 3D animations, and this Add-on includes a Phoneme extractor tool that extracts phonemes from each word. For example, the sentence I love pizza which is aɪ lʌv ˈpiːtsə. That’s the reason why I’m making a script that will evaluate each character looking for phonemes in each word (there are like 44 phonemes or something). But to put it simply: say you have string = bcda I need something like *b detected, do something *c detected, do something *d detected, do something *a detected, do something and in case it is string = abcd *a detected, do something *b detected, do something *c detected, do something *d detected, do something But whatever I do in Python I always get abcd and I need the sequential order! And it’s even worse because I tried doing this in C# and I did succeed (and I tried using regex, text1 in text2 and .find) text2 = "bca" aString = "a" bString = "b" cString = "c" if aString in text2: print("contains a") if bString in text2: print("contains b") if cString in text2: print("contains c") I tried using .find, using text1 in text2, and even using regex and it works, but not in sequential order A: You could do this just by looping through the string with a for loop and just having a massive switch statement. You could also have a dictionary of phonemes and their according functions. aString = "bca" """ it got a bit convoluted but the lambda: print("contains a") really just allows you to call a function (print) with specific attributes, if you wanted your own function you probably wouldn't need the lambda. """ functions = {"a":lambda: print("contains a"),"b":lambda: print("contains b"),"c":lambda: print("contains c")} for c in aString: # just loops through each letter and calls the according function found in the dictionary.
functions[c]() which produces: contains b contains c contains a If you're looking to find multiple-letter occurrences, here's an approach: aString = "ac" bString = "ba" cString = "cb" dString = "cba" sampleStr = "bcbabac" print("**** Iterate over each character using for loop****") for elem in range(len(sampleStr)): # Here rather than looping through the characters we loop through the index of each character """ To break down the if statements: The sampleStr[a:b] just takes a substring of sampleStr from a-b (not including b) Here the range is from elem to elem+len(aString) (so a substring the size of the phonemes) And then the final min() just avoids situations where the phonemes would be too long and wrap back around. So in all we're looping through the String and taking substrings of the size of the phonemes we're comparing it to. """ if aString == sampleStr[elem:min(elem+len(aString),len(sampleStr))]: print("detected " + sampleStr[elem:elem+len(aString)]) if bString == sampleStr[elem:min(elem+len(bString),len(sampleStr))]: print("detected " + sampleStr[elem:elem+len(bString)]) if cString == sampleStr[elem:min(elem+len(cString),len(sampleStr))]: print("detected " + sampleStr[elem:elem+len(cString)]) if dString == sampleStr[elem:min(elem+len(dString),len(sampleStr))]: print("detected " + sampleStr[elem:elem+len(dString)]) which produces: **** Iterate over each character using for loop**** detected cb detected cba detected ba detected ba detected ac
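For the actual phoneme use case, where symbols can be one or two characters long, a common trick is a greedy longest-match scan: at each position try the longest known symbol first, so multi-character phonemes win over their single-character prefixes. A minimal sketch with a made-up symbol set (not a real phoneme inventory):

```python
# Greedy longest-match scan: at every position try the longest known symbol
# first, so a multi-character symbol like "cb" wins over a lone "c".
# The symbol set here is illustrative, not a real phoneme inventory.
symbols = {"a", "b", "c", "cb", "ba"}
max_len = max(len(s) for s in symbols)

def tokenize(text):
    tokens, i = [], 0
    while i < len(text):
        for size in range(max_len, 0, -1):        # longest candidate first
            chunk = text[i:i + size]
            if chunk in symbols:
                tokens.append(chunk)
                i += size
                break
        else:                                     # no symbol matched here
            i += 1                                # skip the unknown character
    return tokens

print(tokenize("bcba"))  # ['b', 'cb', 'a'] -- in sequential order
```

Each token is emitted in the order it appears in the string, which is exactly the "detected, do something" loop the question asks for: replace tokens.append(chunk) with the per-phoneme action.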
Check for substrings in sequential order using python
I’m currently developing a Blender Add-on that is a lip-sync tool for 2D and 3D animations, and this Add-on includes a Phoneme extractor tool that extracts phonemes from each word. for example, the sentence I love pizza which is aɪ lʌv ˈpiːtsə. That’s the reason why I’m making a script that will evaluate each character looking for phonemes in each word (there are like 44 phonemes or something). But to put it simply: say you have string = bcda I need something like *b detected, do something *c detected, do something *d detected, do something *a detected, do something and in case it is string = abcd *a detected, do something *b detected, do something *c detected, do something *d detected, do something But whatever I do in python I always get abcd and I need the sequential order! And it’s even worst because I tried doing this in c# and I did succeed (and I tried using regex, text1 in text2 and .find) text2 = "bca" aString = "a" bString = "b" cString = "c" if aString in text2: print("contains a") if bString in text2: print("contains b") if cString in text2: print("contains c") I tried using .find, using text1 in text2, and even using regex and it works, but not in sequential order
[ "You could do this just by looping through the string with a forloop and just having a massive switch statment. You could also have a dictionary of phonemes and their acording functions.\naString = \"bca\"\n\"\"\"\nit got a bit convoluted but the lambda: print(\"contains a\") really just allows you\nto call a function (print) with specific attributes, if you wanted your own\nfunction you probably wouldn't need the lambda.\n\"\"\"\nfunctions = {\"a\":lambda: print(\"contains a\"),\"b\":lambda: print(\"contains b\"),\"c\":lambda: print(\"contains c\")}\n\nfor c in aString: # just loops through each letter and calls the according function found in the dictionary.\n functions[c]() \n\nwhich produces:\ncontains b\ncontains c\ncontains a\n\nIf your looking for finding multiple letter occurences here's an approach:\naString = \"ac\" \nbString = \"ba\" \ncString = \"cb\" \ndString = \"cba\" \nsampleStr = \"bcbabac\" \nprint(\"**** Iterate over each character using for loop****\") \nfor elem in range(len(sampleStr)): # Here rather than looping through the characters we loop through the index of each characters\n \"\"\"\n To break down the if statments:\n The sampleStr[a:b] just takes a substring of sampleStr from a-b (not including b)\n Here the range is from elem to elem+len(aString) (so a substring the size of the phonemes) \n And then the final min() just avoids situations where the phonemes would be too long and wrap back around.\n So in all we're looping through the String and taking substrings of the size of the phonemes we're comparing it to.\n \"\"\"\n if aString == sampleStr[elem:min(elem+len(aString),len(sampleStr))]:\n print(\"detected \" + sampleStr[elem:elem+len(aString)])\n if bString == sampleStr[elem:min(elem+len(bString),len(sampleStr))]: \n print(\"detected \" + sampleStr[elem:elem+len(bString)])\n if cString == sampleStr[elem:min(elem+len(cString),len(sampleStr))]:\n print(\"detected \" + sampleStr[elem:elem+len(cString)]) \n if dString == 
sampleStr[elem:min(elem+len(dString),len(sampleStr))]:\n print(\"detected \" + sampleStr[elem:elem+len(dString)]) \n\nwhich produces:\n**** Iterate over each character using for loop****\ndetected cb\ndetected cba\ndetected ba\ndetected ba\ndetected ac\n\n" ]
[ 0 ]
[]
[]
[ "blender", "evaluate", "find", "python", "string" ]
stackoverflow_0074505647_blender_evaluate_find_python_string.txt
Q: How can i get chemical element list? I'd like to share something with a chemical formula. For example C14H19NO, C10H12O2, C15H26O to {"C14","H19","N","O","C10","H12","O2","C15","H26","O"} like this I also want to know how to process .txt at once please help me.. num=["1","2","3","4","5","6","7","8","9","0"] text=input("C9H8Cl3") lis=list(text) for i in range(len(text)): if lis[i] in num: lis[i]=int(lis[i]) lis2=lis[:] k=1 for i in range(len(text)-1): if type(lis[i])==int and type(lis[i+1])==str: lis2.insert(i+k, "|") k+=1 for i in range(len(lis2)): if type(lis2[i])==int: lis2[i]=str(lis2[i]) result="" for i in range(len(lis2)): result+=lis2[i] print(result) I tried this, but only one can be converted at a time, and neither is converted. I want another code.. help me A: Generally, we can use re.findall here: import re inp = ["C14H19NO", "C10H12O2", "C15H26O"] for f in inp: atoms = re.findall(r'[A-Z][a-z]?[0-9]*', f) print(atoms) This prints: ['C14', 'H19', 'N', 'O'] ['C10', 'H12', 'O2'] ['C15', 'H26', 'O']
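For the "process a .txt at once" part of the question, the same regex can be applied line by line to a file. The file name and contents below are made up for the demo:

```python
import re

# Apply the element-token regex to every line of a text file.
# "formulas.txt" and its contents are invented for this demo.
with open("formulas.txt", "w") as f:
    f.write("C14H19NO\nC10H12O2\nC15H26O\n")

all_atoms = []
with open("formulas.txt") as f:
    for line in f:
        formula = line.strip()
        if formula:  # skip blank lines
            all_atoms.extend(re.findall(r"[A-Z][a-z]?[0-9]*", formula))

print(all_atoms)
# ['C14', 'H19', 'N', 'O', 'C10', 'H12', 'O2', 'C15', 'H26', 'O']
```

The pattern [A-Z][a-z]?[0-9]* also handles two-letter elements such as Cl in the question's C9H8Cl3 example, because the optional lowercase letter is matched before the digits.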
How can i get chemical element list?
I'd like to share something with a chemical formula. For example C14H19NO, C10H12O2, C15H26O to {"C14","H19","N","O","C10","H12","O2","C15","H26","O"} like this I also want to know how to process .txt at once please help me.. num=["1","2","3","4","5","6","7","8","9","0"] text=input("C9H8Cl3") lis=list(text) for i in range(len(text)): if lis[i] in num: lis[i]=int(lis[i]) lis2=lis[:] k=1 for i in range(len(text)-1): if type(lis[i])==int and type(lis[i+1])==str: lis2.insert(i+k, "|") k+=1 for i in range(len(lis2)): if type(lis2[i])==int: lis2[i]=str(lis2[i]) result="" for i in range(len(lis2)): result+=lis2[i] print(result) I tried this, but only one can be converted at a time, and neither is converted. I want another code.. help me
[ "Generally, we can use re.findall here:\nimport re\n\ninp = [\"C14H19NO\", \"C10H12O2\", \"C15H26O\"]\nfor f in inp:\n atoms = re.findall(r'[A-Z][a-z]?[0-9]*', f)\n print(atoms)\n\nThis prints:\n['C14', 'H19', 'N', 'O']\n['C10', 'H12', 'O2']\n['C15', 'H26', 'O']\n\n" ]
[ 1 ]
[]
[]
[ "python" ]
stackoverflow_0074506148_python.txt
Q: WinError 267 The directory name is invalid I tried this code in jupyter notebook, and this error occured. Error : [WinError 267] The directory name is invalid: 'plantdisease/PlantVillage/Pepper__bell___Bacterial_spot/0022d6b7-d47c-4ee2-ae9a-392a53f48647___JR_B.Spot 8964.JPG/' I'm using python 3.6 in anaconda environment, I tried running this code but it showed error. I can't figure out what the problem is.The file location actually exists at the given location, still it shows invalid. image_list, label_list = [], [] try: print("[INFO] Loading images ...") root_dir = listdir(directory_root) for directory in root_dir : # remove .DS_Store from list if directory == ".DS_Store" : root_dir.remove(directory) for plant_folder in root_dir : plant_disease_folder_list = listdir(f"{directory_root}/{plant_folder}") for disease_folder in plant_disease_folder_list : # remove .DS_Store from list if disease_folder == ".DS_Store" : plant_disease_folder_list.remove(disease_folder) for plant_disease_folder in plant_disease_folder_list: print(f"[INFO] Processing {plant_disease_folder} ...") plant_disease_image_list = listdir(f"{directory_root}/{plant_folder}/{plant_disease_folder}/") for single_plant_disease_image in plant_disease_image_list : if single_plant_disease_image == ".DS_Store" : plant_disease_image_list.remove(single_plant_disease_image) for image in plant_disease_image_list[:200]: image_directory = f"{directory_root}/{plant_folder}/{plant_disease_folder}/{image}" if image_directory.endswith(".jpg") == True or image_directory.endswith(".JPG") == True: image_list.append(convert_image_to_array(image_directory)) label_list.append(plant_disease_folder) print("[INFO] Image loading completed") except Exception as e: print(f"Error : {e}") [SOLVED] the problem was in loading the root director make sure you're root directory is loaded, if your root directory is plantDiseases then, keep it similar, son't get deep in the directory. 
A: Your path is invalid because it is not a directory. It is a file A: I have changed the path to this directory_root = 'D:\Coding files\Plant disease/x/' x is the folder in which PlantVillage dataset is saved in
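The underlying problem is that listdir() was called on a path that is actually a file (a .JPG), not a directory. Guarding with os.path.isdir(), or letting os.walk() do the recursion (it only descends into directories), avoids the error. A small self-contained sketch with a made-up folder layout:

```python
import os
import tempfile

# os.walk only descends into directories, so image files are never treated
# as directories. The folder/file names below are invented for the demo.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "PlantVillage", "Pepper_spot"))
open(os.path.join(root, "PlantVillage", "Pepper_spot", "leaf1.jpg"), "w").close()
open(os.path.join(root, "PlantVillage", "Pepper_spot", "notes.txt"), "w").close()

images = []
for dirpath, dirnames, filenames in os.walk(root):
    for name in filenames:
        if name.lower().endswith(".jpg"):
            images.append(os.path.join(dirpath, name))

print(len(images))  # 1 (only leaf1.jpg is collected)
```

This replaces the three nested listdir() loops in the question with one walk, and name.lower() covers both ".jpg" and ".JPG" in a single check.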
WinError 267 The directory name is invalid
I tried this code in jupyter notebook, and this error occured. Error : [WinError 267] The directory name is invalid: 'plantdisease/PlantVillage/Pepper__bell___Bacterial_spot/0022d6b7-d47c-4ee2-ae9a-392a53f48647___JR_B.Spot 8964.JPG/' I'm using python 3.6 in anaconda environment, I tried running this code but it showed error. I can't figure out what the problem is.The file location actually exists at the given location, still it shows invalid. image_list, label_list = [], [] try: print("[INFO] Loading images ...") root_dir = listdir(directory_root) for directory in root_dir : # remove .DS_Store from list if directory == ".DS_Store" : root_dir.remove(directory) for plant_folder in root_dir : plant_disease_folder_list = listdir(f"{directory_root}/{plant_folder}") for disease_folder in plant_disease_folder_list : # remove .DS_Store from list if disease_folder == ".DS_Store" : plant_disease_folder_list.remove(disease_folder) for plant_disease_folder in plant_disease_folder_list: print(f"[INFO] Processing {plant_disease_folder} ...") plant_disease_image_list = listdir(f"{directory_root}/{plant_folder}/{plant_disease_folder}/") for single_plant_disease_image in plant_disease_image_list : if single_plant_disease_image == ".DS_Store" : plant_disease_image_list.remove(single_plant_disease_image) for image in plant_disease_image_list[:200]: image_directory = f"{directory_root}/{plant_folder}/{plant_disease_folder}/{image}" if image_directory.endswith(".jpg") == True or image_directory.endswith(".JPG") == True: image_list.append(convert_image_to_array(image_directory)) label_list.append(plant_disease_folder) print("[INFO] Image loading completed") except Exception as e: print(f"Error : {e}") [SOLVED] the problem was in loading the root director make sure you're root directory is loaded, if your root directory is plantDiseases then, keep it similar, son't get deep in the directory.
[ "Your path is invalid because it is not a directory. It is a file\n", "I have changed the path to this\ndirectory_root = 'D:\\Coding files\\Plant disease/x/'\nx is the folder in which PlantVillage dataset is saved in\n\n\n" ]
[ 0, 0 ]
[]
[]
[ "artificial_intelligence", "python" ]
stackoverflow_0059332004_artificial_intelligence_python.txt
Q: Can I make my padx and pady in place() [In python GUI] I was trying to use padx and pady with place() in a Python tkinter GUI. Something I tried: I want to know how I can use padx and pady in place() in this way: from tkinter import * app = Tk() app.geometry("433x255") border = Frame(background = "red") aboutme = Label(border, text = "welcome to my tkinter GUI").place(padx=1, pady=1) border.place() app.mainloop() A: I don't think you can use padx and pady configurations on place. Here is the full list of what you can use: https://tcl.tk/man/tcl8.6/TkCmd/place.htm#M6 You can, however, emulate outer padding with place() by offsetting the widget with the x and y options (e.g. place(x=1, y=1)), or by switching to pack()/grid(), which do support padx and pady.
Can I make my padx and pady in place() [In python GUI]
I was trying to use padx and pady with place() in a Python tkinter GUI. Something I tried: I want to know how I can use padx and pady in place() in this way: from tkinter import * app = Tk() app.geometry("433x255") border = Frame(background = "red") aboutme = Label(border, text = "welcome to my tkinter GUI").place(padx=1, pady=1) border.place() app.mainloop()
[ "I don't think you can use padx and pady configurations on place.\nHere is full list of what you can use.\nhttps://tcl.tk/man/tcl8.6/TkCmd/place.htm#M6\n" ]
[ 1 ]
[]
[]
[ "python", "tkinter", "user_interface" ]
stackoverflow_0074506174_python_tkinter_user_interface.txt
Q: Simple python iteration exercise..stuck with try and except Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered, print out the total, count, and average of the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number. This is what I have. total = 0 count = 0 average = 0 while True: number = input("Enter a number:") if number == "done": break try: total += numbers count += 1 average = total / len(number) except: print ("Invalid input") continue print (total, count, average) When I run this, I always get invalid input for some reason. My except part must be wrong. EDIT: This is what I have now and it works. I do need, however, try and except, for non numbers. total = 0 count = 0 average = 0 while True: number = input("Enter a number:") if number == "done": break total += float(number) count += 1 average = total / count print (total, count, average) I think I got it?!?! total = 0 count = 0 average = 0 while True: number = input("Enter a number:") try: if number == "done": break total += float(number) count += 1 average = total / count except: print ("Invalid input") print ("total:", total, "count:", count, "average:", average) Should I panic if this took me like an hour? This isn't my first programming language but it's been a while. A: I know this is old, but thought I'd throw my 2-cents in there (since I myself many years later am using the same examples to learn). You could try: values=[] while True: A=input('Please type in a number.\n') if A == 'done': break try: B=int(A) values.append(B) except: print ('Invalid input') total=sum(values) average=total/(len(values)) print (total, len(values), average) I find this a tad cleaner (and personally easier to follow). A: The problem is when you try to use your input: try: total += numbers First, there is no value numbers; your variable is singular, not plural. 
Second, you have to convert the text input to a number. Try this: try: total += int(number) A: It's because there is no len(number) when number is an int. len is for finding the length of lists/arrays. you can test this for yourself by commenting out the try/except/continue. I think the code below is more what you are after? total = 0 count = 0 average = 0 while True: number = input("Enter a number:") if number == "done": break try: total += float(number) count += 1 average = total / count except: print ("Invalid input") continue print (total, count, average) note there are still some issues. for example you literally have to type "done" in the input box in order to not get an error, but this fixes your initial problem because you had len(number) instead of count in your average. also note that you had total += numbers. when your variable is number not numbers. be careful with your variable names/usage. A: A solution... total = 0 count = 0 average = 0 while True: number = input("Enter a number:") if number == "done": break else: try: total += int(number) count += 1 average = total / count except ValueError as ex: print ("Invalid input") print('"%s" cannot be converted to an int: %s' % (number, ex)) print (total, count, average) Problems with your code: total+=numbers # numbers doesn't exist; it's number len(number) # number is a string. for the average you need count. if it is not done, process it Use try ... except ValueError to catch problems when converting the number to int. Also, you can use try ... except ValueError as ex to get a more comprehensible error message.
A: So, after several attempts, I got the solution num = 0 count = 0 total = 0 average = 0 while True: num = input('Enter a number: ') if num == "done": break try: float(num) except: continue total = total + float(num) count = count + 1 average = total / count print(total, count, average) A: Old problem with Update solutions num = 0 total = 0.0 while True: number = input("Enter a number") if number == 'done': break try : num1 = float(number) except: print('Invailed Input') continue num = num+1 total = total + num1 print ('all done') print (total,num,total/num) Write and Run picture A: Covers all error and a few more things. Even rounds the results to two decimal places. count = 0 total = 0 average = 0 print() print('Enter integers and type "done" when finished.') print('Results are rounded to two decimals.') while True: inp = input("Enter a number: ") try: if count >= 2 and inp == 'done': #only breaks if more than two integers are entered break count = count + 1 total += float(inp) average = total / count except: if count <=1 and inp == 'done': print('Enter at least 2 integers.') else: print('Bad input') count = count - 1 print() print('Done!') print('Count: ' , count, 'Total: ' , round(total, 2), 'Average: ' , round(average, 2))
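Pulling the working approach above into a function makes it easy to test without typing at a prompt (a sketch; tally and the entries list standing in for input() are my own, hypothetical names):

```python
def tally(entries):
    """Sum and count numeric strings until 'done', skipping invalid input."""
    total = count = 0
    for entry in entries:
        if entry == "done":
            break
        try:
            total += float(entry)  # raises ValueError for non-numbers
        except ValueError:
            print("Invalid input")
            continue
        count += 1
    # Guard against division by zero when no valid numbers were entered.
    average = total / count if count else 0
    return total, count, average
```

For example, tally(["4", "5", "bad", "7", "done"]) skips "bad", stops at "done", and returns the total, count, and average of 4, 5 and 7.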
Simple python iteration exercise..stuck with try and except
Write a program which repeatedly reads numbers until the user enters "done". Once "done" is entered, print out the total, count, and average of the numbers. If the user enters anything other than a number, detect their mistake using try and except and print an error message and skip to the next number. This is what I have. total = 0 count = 0 average = 0 while True: number = input("Enter a number:") if number == "done": break try: total += numbers count += 1 average = total / len(number) except: print ("Invalid input") continue print (total, count, average) When I run this, I always get invalid input for some reason. My except part must be wrong. EDIT: This is what I have now and it works. I do need, however, try and except, for non numbers. total = 0 count = 0 average = 0 while True: number = input("Enter a number:") if number == "done": break total += float(number) count += 1 average = total / count print (total, count, average) I think I got it?!?! total = 0 count = 0 average = 0 while True: number = input("Enter a number:") try: if number == "done": break total += float(number) count += 1 average = total / count except: print ("Invalid input") print ("total:", total, "count:", count, "average:", average) Should I panic if this took me like an hour? This isn't my first programming language but it's been a while.
[ "I know this is old, but thought I'd throw my 2-cents in there (since I myself many years later am using the same examples to learn). You could try: \nvalues=[]\nwhile True: \n A=input('Please type in a number.\\n')\n if A == 'done':\n break\n try:\n B=int(A)\n values.append(B)\n except:\n print ('Invalid input')\n\ntotal=sum(values)\naverage=total/(len(values))\nprint (total, len(values), average)\n\nI find this a tad cleaner (and personally easier to follow). \n", "The problem is when you try to use your input:\ntry:\n total += numbers\n\nFirst, there is no value numbers; your variable is singular, not plural. Second, you have to convert the text input to a number. Try this:\ntry:\n total += int(number)\n\n", "It's because there is no len(number) when number is an int. len is for finding the length of lists/arrays. you can test this for yourself by commenting out the try/except/continue. I think the code below is more what you are after?\ntotal = 0\ncount = 0\naverage = 0\nwhile True:\n number = input(\"Enter a number:\")\n if number == \"done\":\n break\n try:\n total += number\n count += 1\n average = total / count\n except:\n print (\"Invalid input\")\n continue\nprint (total, count, average)\n\nnote there are still some issues. for example you literally have to type \"done\" in the input box in order to not get an error, but this fixes your initial problem because you had len(number) instead of count in your average. also note that you had total += numbers. when your variable is number not numbers. 
be careful with your variable names/usage.\n", "A solution...\ntotal = 0\ncount = 0\naverage = 0\nwhile True:\n number = input(\"Enter a number:\")\n if number == \"done\":\n break\n else:\n try:\n total += int(number)\n count += 1\n average = total / count\n except ValueError as ex:\n print (\"Invalid input\")\n print('\"%s\" cannot be converted to an int: %s' % (number, ex))\nprint (total, count, average)\n\nProblems with your code:\n\ntotal+=numbers # numbers don't exist; is number\nlen(number) # number is a string. for the average you need count\nif is not done, else process it\nUse try ... except ValueError to catch problem when convert the number to int.\nAlso, you can use try ... except ValueError as ex to get an error message more comprehensible.\n\n", "So, after several attempts, I got the solution\nnum = 0\ncount = 0\ntotal = 0\naverage = 0\nwhile True:\n num = input('Enter a number: ')\n if num == \"done\":\n break\n try:\n float(num)\n except:\n continue\n total = total + float(num)\n count = count + 1\n average = total / count\n print(total, count, average)\n\n", "Old problem with Update solutions\nnum = 0\ntotal = 0.0\nwhile True:\nnumber = input(\"Enter a number\")\nif number == 'done':\n break\ntry :\n num1 = float(number)\nexcept:\n print('Invailed Input')\n continue\nnum = num+1\ntotal = total + num1\nprint ('all done')\nprint (total,num,total/num)\n\nWrite and Run picture\n", "Covers all error and a few more things. 
Even rounds the results to two decimal places.\ncount = 0\ntotal = 0\naverage = 0\nprint()\nprint('Enter integers and type \"done\" when finished.')\nprint('Results are rounded to two decimals.')\nwhile True:\n inp = input(\"Enter a number: \")\n\n try:\n if count >= 2 and inp == 'done': #only breaks if more than two integers are entered\n break\n count = count + 1\n total += float(inp)\n average = total / count\n except:\n if count <=1 and inp == 'done':\n print('Enter at least 2 integers.')\n else:\n print('Bad input')\n count = count - 1\n\nprint()\nprint('Done!')\nprint('Count: ' , count, 'Total: ' , round(total, 2), 'Average: ' , round(average, 2))\n\n" ]
[ 1, 0, 0, 0, 0, 0, 0 ]
[]
[]
[ "python" ]
stackoverflow_0039175218_python.txt
Q: Clicking a button by class name using selenium with python Probably a silly question, but I have spent a ridiculous amount of time trying to figure this out. I am building a scraper bot using selenium in python, and I am just trying to click a button on a web page. The web page opens and resizes... def initalize_browser(): driver.get("**website name**") driver.maximize_window() but I cannot get it to click a specific button. This is the button's HTML code: <button class="mx-auto green-btn btnHref" onclick="window.location ='/medical'" onkeypress="window.location='/medical'"> Medical and Hospital Costs </button> And this is my code: click_button=driver.find_element(by=By.CLASS_NAME, value="mx-auto green-btn btnHref") click_button.click() This is the error I get for this code: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".mx-auto green-btn btnHref"} I have tried out so many variations of this, including: driver.find_element_by_xpath('//button[@class="mx-auto green-btn btnHref"]').click() Where I get this error: AttributeError: 'WebDriver' object has no attribute 'find_element_by_xpath' I have also checked to see if there are perhaps any other attributes with the same class name, but there is not. Any help would be super appreciated, thank you! A: The method find_element_by_xpath is deprecated now. Use this line: driver.find_element(By.XPATH, '//button[@class="mx-auto green-btn btnHref"]').click() instead of: driver.find_element_by_xpath('//button[@class="mx-auto green-btn btnHref"]').click() And be sure you have this in imports: from selenium.webdriver.common.by import By The locator click_button=driver.find_element(by=By.CLASS_NAME, value="mx-auto green-btn btnHref") doesn't work because By.CLASS_NAME needs only one class name to find an element, but you gave it 3 class names. The html attribute class consists of a list of class names separated by spaces.
So, in this html code <button class="mx-auto green-btn btnHref" onclick="window.location ='/medical'" onkeypress="window.location='/medical'"> Medical and Hospital Costs </button> the attribute class has three class names: mx-auto, green-btn and btnHref. You can't use all three classes with By.CLASS_NAME, but you can use all of them using By.XPATH
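Since the class attribute is a space-separated list, the other standard option besides XPath is a CSS selector that chains the classes with dots (Selenium's By.CSS_SELECTOR). The selector string can be built mechanically; the sketch below only derives the string (css_from_classes is a made-up helper name, and the driver call is left as a comment because it needs a live browser):

```python
def css_from_classes(class_attr: str) -> str:
    # A CSS selector matching an element with ALL of the given classes
    # joins them with dots and no spaces: ".a.b.c"
    return "." + ".".join(class_attr.split())

selector = css_from_classes("mx-auto green-btn btnHref")
print(selector)  # .mx-auto.green-btn.btnHref

# With a live WebDriver this would be:
# driver.find_element(By.CSS_SELECTOR, selector).click()
```

Note this is exactly the selector Selenium tried to build from By.CLASS_NAME in the error message, except there the spaces were left in, which made it an invalid descendant selector.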
Clicking a button by class name using selenium with python
Probably a silly question, but I have spent a ridiculous amount of time trying to figure this out. I am building a scraper bot using selenium in python, and I am just trying to click a button on a web page. The web page opens and resizes... def initalize_browser(): driver.get("**website name**") driver.maximize_window() but I cannot get it to click a specific button. This is the button's HTML code: <button class="mx-auto green-btn btnHref" onclick="window.location ='/medical'" onkeypress="window.location='/medical'"> Medical and Hospital Costs </button> And this is my code: click_button=driver.find_element(by=By.CLASS_NAME, value="mx-auto green-btn btnHref") click_button.click() This is the error I get for this code: selenium.common.exceptions.NoSuchElementException: Message: no such element: Unable to locate element: {"method":"css selector","selector":".mx-auto green-btn btnHref"} I have tried out so many variations of this, including: driver.find_element_by_xpath('//button[@class="mx-auto green-btn btnHref"]').click() Where I get this error: AttributeError: 'WebDriver' object has no attribute 'find_element_by_xpath' I have also checked to see if there are perhaps any other attributes with the same class name, but there is not. Any help would be super appreciated, thank you!
[ "The method find_element_by_xpath is deprecated now. Use this line:\ndriver.find_element(By.XPATH, '//button[@class=\"mx-auto green-btn btnHref\"]').click()\n\ninstead of:\ndriver.find_element_by_xpath('//button[@class=\"mx-auto green-btn btnHref\"]').click()\n\nAnd be sure you have this in imports:\nfrom selenium.webdriver.common.by import By\n\nThe locator click_button=driver.find_element(by=By.CLASS_NAME, value=\"mx-auto green-btn btnHref\") doesn't work because By.CLASS_NAME needs only one class name to find an element, but you gave it 3 class names. The html attribute class consists of a list of elements divided by space. So, in this html code\n<button class=\"mx-auto green-btn btnHref\" onclick=\"window.location ='/medical'\" onkeypress=\"window.location='/medical'\">\n Medical and Hospital Costs\n </button>\n\nthe attribute class has 3 class names mx-auto, green-btn and btnHref\nYou can't use all the 3 classes with By.CLASS_NAME but you can use all of them using the By.XPATH\n" ]
[ 1 ]
[]
[]
[ "python", "selenium", "selenium_chromedriver" ]
stackoverflow_0074333322_python_selenium_selenium_chromedriver.txt
Q: How would I sort this json based off of each id's score value using Python? I'm trying to put these ID's in order based off of each one's score value, highest being on the top and lowest being on the bottom { "Users": { "586393728470745123": { "score": 150, "name": "user1" }, "437465122378874895": { "score": 115, "name": "user2" }, "904032786854346795": { "score": 65, "name": "user3" }, "397930609894490122": { "score": 810, "name": "user4" }, "384814725164433408": { "score": 10, "name": "user5" }, "337104925387390977": { "score": 1, "name": "user6" }, "243452651541495808": { "score": 10, "name": "user7"} } } I tried using Python's sorted() function but couldn't figure it out myself A: Well, i don't know what data structure you want to get, but in Python a dict itself can't be sorted, only its keys extracted into a sorted list. So, if you want a list of user id's sorted by their score, try something like this (Scores is the dict you posted): sorted(list(Scores["Users"]), key=lambda x: Scores["Users"][x]["score"]) Also, notice that the first id is the one with the lowest score. If you want it the other way round, add reverse=True after the lambda.
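Putting the answer together with the question's data (trimmed to three users here to keep it short), reverse=True gives the highest score first:

```python
data = {
    "Users": {
        "586393728470745123": {"score": 150, "name": "user1"},
        "904032786854346795": {"score": 65, "name": "user3"},
        "397930609894490122": {"score": 810, "name": "user4"},
    }
}

# Iterating a dict yields its keys, so sorted() here returns the IDs,
# ordered by each ID's score, highest first.
ranked = sorted(data["Users"],
                key=lambda uid: data["Users"][uid]["score"],
                reverse=True)
print(ranked[0])  # 397930609894490122 (user4, score 810)
```

If the data starts as a JSON string rather than a dict, run it through json.loads first; the sorting step is identical.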
How would I sort this json based off of each id's score value using Python?
I'm trying to put these ID's in order based off of each one's score value, highest being on the top and lowest being on the bottom { "Users": { "586393728470745123": { "score": 150, "name": "user1" }, "437465122378874895": { "score": 115, "name": "user2" }, "904032786854346795": { "score": 65, "name": "user3" }, "397930609894490122": { "score": 810, "name": "user4" }, "384814725164433408": { "score": 10, "name": "user5" }, "337104925387390977": { "score": 1, "name": "user6" }, "243452651541495808": { "score": 10, "name": "user7"} } } I tried using Python's sorted() function but couldn't figure it out myself
[ "Well, i don't know what data structure you want to get, but in Python a dict itself can't be sorted, only its keys extracted into a sorted list. So, if you want a list of user id's sorted by their score, try something like this (Scores is the dict you posted):\nsorted(list(Scores[\"Users\"]), key=lambda x: Scores[\"Users\"][x][\"score\"])\n\nAlso, notice that the first id is the one with the lowest score. If you want it the other way round, add reverse=True after the lambda.\n" ]
[ 0 ]
[]
[]
[ "json", "python" ]
stackoverflow_0074506230_json_python.txt
Q: How do I merge and sort JSON objects using its counts? I got two json objects that I need to combine together based on ID and do count and sort operations on it. Here is the first object comments: [ { "userId": 1, "id": 1, "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit", "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" }, { "userId": 1, "id": 2, "title": "qui est esse", "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla" }, { "userId": 1, "id": 3, "title": "ea molestias quasi exercitationem repellat qui ipsa sit aut", "body": "et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut" }, { "userId": 1, "id": 4, "title": "eum et est occaecati", "body": "ullam et saepe reiciendis voluptatem adipisci\nsit amet autem assumenda provident rerum culpa\nquis hic commodi nesciunt rem tenetur doloremque ipsam iure\nquis sunt voluptatem rerum illo velit" }, ] This is second json object: [ { "postId": 1, "id": 1, "name": "id labore ex et quam laborum", "email": "Eliseo@gardner.biz", "body": "laudantium enim quasi est quidem magnam voluptate ipsam eos\ntempora quo necessitatibus\ndolor quam autem quasi\nreiciendis et nam sapiente accusantium" }, { "postId": 1, "id": 2, "name": "quo vero reiciendis velit similique earum", "email": "Jayne_Kuhic@sydney.com", "body": "est natus enim nihil est dolore omnis voluptatem numquam\net omnis occaecati quod ullam at\nvoluptatem error expedita pariatur\nnihil sint nostrum voluptatem reiciendis et" }, { "postId": 1, "id": 3, "name": "odio adipisci rerum aut animi", "email": "Nikita@garfield.biz", "body": 
"quia molestiae reprehenderit quasi aspernatur\naut expedita occaecati aliquam eveniet laudantium\nomnis quibusdam delectus saepe quia accusamus maiores nam est\ncum et ducimus et vero voluptates excepturi deleniti ratione" }, { "postId": 1, "id": 4, "name": "alias odio sit", "email": "Lew@alysha.tv", "body": "non et atque\noccaecati deserunt quas accusantium unde odit nobis qui voluptatem\nquia voluptas consequuntur itaque dolor\net qui rerum deleniti ut occaecati" }, { "postId": 2, "id": 5, "name": "et fugit eligendi deleniti quidem qui sint nihil autem", "email": "Presley.Mueller@myrl.com", "body": "doloribus at sed quis culpa deserunt consectetur qui praesentium\naccusamus fugiat dicta\nvoluptatem rerum ut voluptate autem\nvoluptatem repellendus aspernatur dolorem in" }, { "postId": 2, "id": 6, "name": "repellat consequatur praesentium vel minus molestias voluptatum", "email": "Dallas@ole.me", "body": "maiores sed dolores similique labore et inventore et\nquasi temporibus esse sunt id et\neos voluptatem aliquam\naliquid ratione corporis molestiae mollitia quia et magnam dolor" }, ] Object one is basically posts with poster details and object two is comments with commenter details. So expected that object one has one to many relationships with second object. For example one post has many comments. This relationship is based on id in object one is postId in object two. The ultimate objective is to count and sort post by number of comments. I attempt the problem with simple for loops and creating new json object, I managed to combine them together, but I dont know how to count and sort them properly. in the views: for i in posts: if (id==postId): newobj.append(objtwo[i]) count+=1 else: newobj.append(count) count=0 Normally I use django ORM to sort this but I dont have access to the database and model of the table. How to count and sort the new object so it can return list of posts with most comments counts and descend to lower comments counts? 
A: Assuming your posts and comments data structures are lists, you can use python's defaultdict to count the comments. Then, use posts.sort(key=...) to sort your posts based on the collected counts using the key parameter. Altogether, it could like like this: import json from collections import defaultdict posts = [ ... ] comments = [ ... ] # data structure to count the to comments # automatically initializes to 0 comments_per_post = defaultdict(int) # iterate through the comments to increase the count for the posts for comment in comments: comments_per_post[comment['postId']] += 1 # add comment count to post for post in posts: post['number_of_comments'] = comments_per_post[post['id']] # sort the posts based on the counts collected posts.sort(key=lambda post: post['number_of_comments'], reverse=True) # print them to verify # number of comments per Post will be in the `number_of_comments` key on the post dict. print(json.dumps(posts, indent=2)) Note: this sorts the posts array in-place. If you don't want this, you can use sorted_posts = sorted(posts, key=... instead. A: My answer is very similar to Byted's answer. I would use Counter from the built-in collections to count the number of postIds in the second object. Then sort the first object by using these counts from the previous step as a sorting key. Counter object returns 0 if a key is not present in it, so just use it as a lookup as a sorting key. The negative sign ensures a descending order (because sorted() sorts in ascending order by default). import json from collections import Counter # count the comments counts = Counter([d['postId'] for d in objtwo]) # add the counts to each post for d in objone: d["number of comments"] = counts[d['id']] # sort posts by number of comments in descending order objone.sort(key=lambda x: -x['number of comments']) # convert to json json.dumps(objone, indent=4) Intermediate output for this input: print(counts) # Counter({1: 4, 2: 2})
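Either answer can be sanity-checked with stub data; with two posts and three comments, the post with more comments sorts first (the number_of_comments field name follows the first answer):

```python
from collections import Counter

# Minimal stand-ins for the question's posts and comments.
posts = [{"id": 1, "title": "a"}, {"id": 2, "title": "b"}]
comments = [{"postId": 2}, {"postId": 2}, {"postId": 1}]

# Count comments per postId, attach the count to each post, sort descending.
counts = Counter(c["postId"] for c in comments)
for p in posts:
    p["number_of_comments"] = counts[p["id"]]
posts.sort(key=lambda p: p["number_of_comments"], reverse=True)

print([p["id"] for p in posts])  # [2, 1]
```

A post with no comments at all still works: Counter returns 0 for missing keys, so such posts simply sort last.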
How do I merge and sort JSON objects using its counts?
I got two json objects that I need to combine together based on ID and do count and sort operations on it. Here is the first object comments: [ { "userId": 1, "id": 1, "title": "sunt aut facere repellat provident occaecati excepturi optio reprehenderit", "body": "quia et suscipit\nsuscipit recusandae consequuntur expedita et cum\nreprehenderit molestiae ut ut quas totam\nnostrum rerum est autem sunt rem eveniet architecto" }, { "userId": 1, "id": 2, "title": "qui est esse", "body": "est rerum tempore vitae\nsequi sint nihil reprehenderit dolor beatae ea dolores neque\nfugiat blanditiis voluptate porro vel nihil molestiae ut reiciendis\nqui aperiam non debitis possimus qui neque nisi nulla" }, { "userId": 1, "id": 3, "title": "ea molestias quasi exercitationem repellat qui ipsa sit aut", "body": "et iusto sed quo iure\nvoluptatem occaecati omnis eligendi aut ad\nvoluptatem doloribus vel accusantium quis pariatur\nmolestiae porro eius odio et labore et velit aut" }, { "userId": 1, "id": 4, "title": "eum et est occaecati", "body": "ullam et saepe reiciendis voluptatem adipisci\nsit amet autem assumenda provident rerum culpa\nquis hic commodi nesciunt rem tenetur doloremque ipsam iure\nquis sunt voluptatem rerum illo velit" }, ] This is second json object: [ { "postId": 1, "id": 1, "name": "id labore ex et quam laborum", "email": "Eliseo@gardner.biz", "body": "laudantium enim quasi est quidem magnam voluptate ipsam eos\ntempora quo necessitatibus\ndolor quam autem quasi\nreiciendis et nam sapiente accusantium" }, { "postId": 1, "id": 2, "name": "quo vero reiciendis velit similique earum", "email": "Jayne_Kuhic@sydney.com", "body": "est natus enim nihil est dolore omnis voluptatem numquam\net omnis occaecati quod ullam at\nvoluptatem error expedita pariatur\nnihil sint nostrum voluptatem reiciendis et" }, { "postId": 1, "id": 3, "name": "odio adipisci rerum aut animi", "email": "Nikita@garfield.biz", "body": "quia molestiae reprehenderit quasi aspernatur\naut expedita 
occaecati aliquam eveniet laudantium\nomnis quibusdam delectus saepe quia accusamus maiores nam est\ncum et ducimus et vero voluptates excepturi deleniti ratione" }, { "postId": 1, "id": 4, "name": "alias odio sit", "email": "Lew@alysha.tv", "body": "non et atque\noccaecati deserunt quas accusantium unde odit nobis qui voluptatem\nquia voluptas consequuntur itaque dolor\net qui rerum deleniti ut occaecati" }, { "postId": 2, "id": 5, "name": "et fugit eligendi deleniti quidem qui sint nihil autem", "email": "Presley.Mueller@myrl.com", "body": "doloribus at sed quis culpa deserunt consectetur qui praesentium\naccusamus fugiat dicta\nvoluptatem rerum ut voluptate autem\nvoluptatem repellendus aspernatur dolorem in" }, { "postId": 2, "id": 6, "name": "repellat consequatur praesentium vel minus molestias voluptatum", "email": "Dallas@ole.me", "body": "maiores sed dolores similique labore et inventore et\nquasi temporibus esse sunt id et\neos voluptatem aliquam\naliquid ratione corporis molestiae mollitia quia et magnam dolor" }, ] Object one is basically posts with poster details and object two is comments with commenter details. So expected that object one has one to many relationships with second object. For example one post has many comments. This relationship is based on id in object one is postId in object two. The ultimate objective is to count and sort post by number of comments. I attempt the problem with simple for loops and creating new json object, I managed to combine them together, but I dont know how to count and sort them properly. in the views: for i in posts: if (id==postId): newobj.append(objtwo[i]) count+=1 else: newobj.append(count) count=0 Normally I use django ORM to sort this but I dont have access to the database and model of the table. How to count and sort the new object so it can return list of posts with most comments counts and descend to lower comments counts?
[ "Assuming your posts and comments data structures are lists, you can use python's defaultdict to count the comments. Then, use posts.sort(key=...) to sort your posts based on the collected counts using the key parameter. Altogether, it could like like this:\nimport json\nfrom collections import defaultdict\n\nposts = [ ... ]\ncomments = [ ... ]\n\n# data structure to count the to comments\n# automatically initializes to 0\ncomments_per_post = defaultdict(int)\n# iterate through the comments to increase the count for the posts\nfor comment in comments:\n comments_per_post[comment['postId']] += 1\n\n# add comment count to post\nfor post in posts:\n post['number_of_comments'] = comments_per_post[post['id']]\n\n# sort the posts based on the counts collected\nposts.sort(key=lambda post: post['number_of_comments'], reverse=True)\n\n# print them to verify\n# number of comments per Post will be in the `number_of_comments` key on the post dict.\nprint(json.dumps(posts, indent=2))\n\nNote: this sorts the posts array in-place. If you don't want this, you can use sorted_posts = sorted(posts, key=... instead.\n", "My answer is very similar to Byted's answer.\nI would use Counter from the built-in collections to count the number of postIds in the second object.\nThen sort the first object by using these counts from the previous step as a sorting key. Counter object returns 0 if a key is not present in it, so just use it as a lookup as a sorting key. 
The negative sign ensures a descending order (because sorted() sorts in ascending order by default).\nimport json\nfrom collections import Counter\n\n# count the comments\ncounts = Counter([d['postId'] for d in objtwo])\n\n# add the counts to each post\nfor d in objone:\n d[\"number of comments\"] = counts[d['id']]\n\n# sort posts by number of comments in descending order\nobjone.sort(key=lambda x: -x['number of comments'])\n\n# convert to json\njson.dumps(objone, indent=4)\n\nIntermediate output for this input:\nprint(counts)\n# Counter({1: 4, 2: 2})\n\n" ]
[ 2, 1 ]
[]
[]
[ "django", "json", "python", "sorting" ]
stackoverflow_0074505491_django_json_python_sorting.txt
Q: I want to install scipy in debian10/armv7l environment, but it fails root@ZZZZZ:/home/dev/packages/scipy-1.9.3# pip install . Processing /home/dev/packages/scipy-1.9.3 Installing build dependencies ... error error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> [369 lines of output] Ignoring numpy: markers 'python_version == "3.8" and platform_machine == "aarch64" and platform_python_implementation != "PyPy"' don't match your environment Ignoring numpy: markers 'python_version == "3.8" and platform_machine == "arm64" and platform_system == "Darwin"' don't match your environment Ignoring numpy: markers 'python_version == "3.9" and platform_machine == "arm64" and platform_system == "Darwin"' don't match your environment Ignoring numpy: markers 'platform_machine == "loongarch64"' don't match your environment Ignoring numpy: markers 'python_version == "3.10" and platform_system == "Windows" and platform_python_implementation != "PyPy"' don't match your environment Ignoring numpy: markers 'python_version == "3.8" and (platform_machine != "arm64" or platform_system != "Darwin") and platform_machine != "aarch64" and platform_machine != "loongarch64" and platform_python_implementation != "PyPy"' don't match your environment Ignoring numpy: markers 'python_version == "3.10" and (platform_system != "Windows" and platform_machine != "loongarch64") and platform_python_implementation != "PyPy"' don't match your environment Ignoring numpy: markers 'python_version == "3.11" and platform_python_implementation != "PyPy"' don't match your environment Ignoring numpy: markers 'python_version >= "3.12"' don't match your environment Ignoring numpy: markers 'python_version >= "3.8" and platform_python_implementation == "PyPy"' don't match your environment Collecting meson-python>=0.9.0 Using cached meson_python-0.10.0-py3-none-any.whl (18 kB) Collecting Cython<3.0,>=0.29.32 Using cached 
Cython-0.29.32-py2.py3-none-any.whl (986 kB) Collecting pybind11<2.11.0,>=2.4.3 Using cached pybind11-2.10.1-py3-none-any.whl (216 kB) Collecting pythran<0.13.0,>=0.9.12 Using cached pythran-0.12.0-py3-none-any.whl (4.2 MB) Collecting wheel<0.38.0 Using cached wheel-0.37.1-py2.py3-none-any.whl (35 kB) Collecting numpy==1.23.4 Using cached numpy-1.23.4-cp39-cp39-linux_armv7l.whl Collecting pyproject-metadata>=0.5.0 Using cached pyproject_metadata-0.6.1-py3-none-any.whl (7.4 kB) Collecting tomli>=1.0.0 Using cached tomli-2.0.1-py3-none-any.whl (12 kB) Collecting meson>=0.62.0 Using cached meson-0.64.0-py3-none-any.whl (895 kB) Collecting ninja Using cached ninja-1.11.1.tar.gz (27 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting ply>=3.4 Using cached ply-3.11-py2.py3-none-any.whl (49 kB) Collecting beniget~=0.4.0 Using cached beniget-0.4.1-py3-none-any.whl (9.4 kB) Collecting gast~=0.5.0 Using cached gast-0.5.3-py3-none-any.whl (19 kB) Collecting packaging>=19.0 Using cached packaging-21.3-py3-none-any.whl (40 kB) Collecting pyparsing!=3.0.5,>=2.0.2 Using cached pyparsing-3.0.9-py3-none-any.whl (98 kB) Building wheels for collected packages: ninja Building wheel for ninja (pyproject.toml): started Building wheel for ninja (pyproject.toml): still running... Building wheel for ninja (pyproject.toml): still running... Building wheel for ninja (pyproject.toml): still running... Building wheel for ninja (pyproject.toml): still running... Building wheel for ninja (pyproject.toml): still running... Building wheel for ninja (pyproject.toml): finished with status 'error' error: subprocess-exited-with-error × Building wheel for ninja (pyproject.toml) did not run successfully. 
│ exit code: 1 ╰─> [304 lines of output] -------------------------------------------------------------------------------- -- Trying "Ninja" generator -------------------------------- --------------------------- ---------------------- ----------------- ------------ ------- -- Not searching for unused variables given on the command line. CMake Error at CMakeLists.txt:2 (PROJECT): Running '/usr/bin/ninja' '--version' failed with: Traceback (most recent call last): File "/usr/bin/ninja", line 5, in <module> from ninja import ninja ModuleNotFoundError: No module named 'ninja' -- Configuring incomplete, errors occurred! See also "/tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_cmake_test_compile/build/CMakeFiles/CMakeOutput.log". -- ------- ------------ ----------------- ---------------------- --------------------------- -------------------------------- -- Trying "Ninja" generator - failure -------------------------------------------------------------------------------- -------------------------------------------------------------------------------- -- Trying "Unix Makefiles" generator -------------------------------- --------------------------- ---------------------- ----------------- ------------ ------- -- Not searching for unused variables given on the command line. 
-- The C compiler identification is GNU 8.3.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- The CXX compiler identification is GNU 8.3.0 -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- Configuring done -- Generating done -- Build files have been written to: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_cmake_test_compile/build -- ------- ------------ ----------------- ---------------------- --------------------------- -------------------------------- -- Trying "Unix Makefiles" generator - success -------------------------------------------------------------------------------- Configuring Project Working directory: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build Command: cmake /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038 -G 'Unix Makefiles' -DCMAKE_INSTALL_PREFIX:PATH=/tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-install -DPYTHON_VERSION_STRING:STRING=3.9.7 -DSKBUILD:INTERNAL=TRUE -DCMAKE_MODULE_PATH:PATH=/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/resources/cmake -DPYTHON_EXECUTABLE:PATH=/usr/bin/python3.9 -DPYTHON_INCLUDE_DIR:PATH=/usr/include/python3.9 -DPYTHON_LIBRARY:PATH=/usr/lib/libpython3.9.a -DPython_EXECUTABLE:PATH=/usr/bin/python3.9 -DPython_ROOT_DIR:PATH=/usr -DPython_INCLUDE_DIR:PATH=/usr/include/python3.9 -DPython_FIND_REGISTRY:STRING=NEVER -DPython3_EXECUTABLE:PATH=/usr/bin/python3.9 -DPython3_ROOT_DIR:PATH=/usr -DPython3_INCLUDE_DIR:PATH=/usr/include/python3.9 -DPython3_FIND_REGISTRY:STRING=NEVER -DCMAKE_BUILD_TYPE:STRING=Release -- The C compiler identification is 
GNU 8.3.0 -- The CXX compiler identification is GNU 8.3.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- ********************************************* -- Ninja Python Distribution -- -- BUILD_VERBOSE : OFF -- RUN_NINJA_TEST : ON -- -- ARCHIVE_DOWNLOAD_DIR : /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build -- -- src_archive : unix_source -- <src_archive>_url : https://github.com/Kitware/ninja/archive/v1.11.1.g95dee.kitware.jobserver-1.tar.gz -- <src_archive>_sha256 : 7ba84551f5b315b4270dc7c51adef5dff83a2154a3665a6c9744245c122dd0db -- ********************************************* CMake Warning (dev) at /usr/local/share/cmake-3.25/Modules/ExternalProject.cmake:3075 (message): The DOWNLOAD_EXTRACT_TIMESTAMP option was not given and policy CMP0135 is not set. The policy's OLD behavior will be used. When using a URL download, the timestamps of extracted files should preferably be that of the time of extraction, otherwise code that depends on the extracted contents might not be rebuilt if the URL changes. The OLD behavior preserves the timestamps from the archive instead, but this is usually not what you want. Update your project to the NEW behavior or specify the DOWNLOAD_EXTRACT_TIMESTAMP option with a value of true to avoid this robustness issue. Call Stack (most recent call first): /usr/local/share/cmake-3.25/Modules/ExternalProject.cmake:4185 (_ep_add_download_command) CMakeLists.txt:65 (ExternalProject_add) This warning is for project developers. Use -Wno-dev to suppress it. 
-- download_ninja_source - URL: https://github.com/Kitware/ninja/archive/v1.11.1.g95dee.kitware.jobserver-1.tar.gz -- SuperBuild - CMAKE_BUILD_TYPE: Release -- Configuring done -- Generating done CMake Warning: Manually-specified variables were not used by the project: PYTHON_EXECUTABLE PYTHON_INCLUDE_DIR PYTHON_LIBRARY PYTHON_VERSION_STRING Python3_EXECUTABLE Python3_FIND_REGISTRY Python3_INCLUDE_DIR Python3_ROOT_DIR Python_EXECUTABLE Python_FIND_REGISTRY Python_INCLUDE_DIR Python_ROOT_DIR SKBUILD -- Build files have been written to: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build [ 5%] Creating directories for 'download_ninja_source' [ 11%] Performing download step (download, verify and extract) for 'download_ninja_source' -- Downloading... dst='/tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build/v1.11.1.g95dee.kitware.jobserver-1.tar.gz' timeout='none' inactivity timeout='none' -- Using src='https://github.com/Kitware/ninja/archive/v1.11.1.g95dee.kitware.jobserver-1.tar.gz' -- verifying file... file='/tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build/v1.11.1.g95dee.kitware.jobserver-1.tar.gz' -- Downloading... done -- extracting... src='/tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build/v1.11.1.g95dee.kitware.jobserver-1.tar.gz' dst='/tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/Ninja-src' -- extracting... [tar xfz] -- extracting... [analysis] -- extracting... [rename] -- extracting... [clean up] -- extracting... 
done [ 16%] No update step for 'download_ninja_source' [ 22%] No patch step for 'download_ninja_source' [ 27%] No configure step for 'download_ninja_source' [ 33%] No build step for 'download_ninja_source' [ 38%] No install step for 'download_ninja_source' [ 44%] Completed 'download_ninja_source' [ 44%] Built target download_ninja_source [ 50%] Creating directories for 'build_ninja' [ 55%] No download step for 'build_ninja' [ 61%] No update step for 'build_ninja' [ 66%] No patch step for 'build_ninja' [ 72%] Performing configure step for 'build_ninja' loading initial cache file /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build/build_ninja-prefix/tmp/build_ninja-cache-Release.cmake -- The C compiler identification is GNU 8.3.0 -- The CXX compiler identification is GNU 8.3.0 -- Detecting C compiler ABI info -- Detecting C compiler ABI info - done -- Check for working C compiler: /usr/bin/cc - skipped -- Detecting C compile features -- Detecting C compile features - done -- Detecting CXX compiler ABI info -- Detecting CXX compiler ABI info - done -- Check for working CXX compiler: /usr/bin/c++ - skipped -- Detecting CXX compile features -- Detecting CXX compile features - done -- IPO / LTO enabled -- Performing Test flag_no_deprecated -- Performing Test flag_no_deprecated - Success -- Performing Test flag_color_diag -- Performing Test flag_color_diag - Success CMake Warning at CMakeLists.txt:49 (message): re2c was not found; changes to src/*.in.cc will not affect your build. 
-- Looking for fork -- Looking for fork - found -- Looking for pipe -- Looking for pipe - found -- Configuring done -- Generating done -- Build files have been written to: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build/Ninja-build [ 77%] Performing build step for 'build_ninja' [ 1%] Building CXX object CMakeFiles/libninja-re2c.dir/src/depfile_parser.cc.o [ 2%] Building CXX object CMakeFiles/libninja-re2c.dir/src/lexer.cc.o [ 2%] Built target libninja-re2c [ 4%] Building CXX object CMakeFiles/libninja.dir/src/build_log.cc.o [ 5%] Building CXX object CMakeFiles/libninja.dir/src/build.cc.o [ 7%] Building CXX object CMakeFiles/libninja.dir/src/clean.cc.o [ 8%] Building CXX object CMakeFiles/libninja.dir/src/clparser.cc.o [ 10%] Building CXX object CMakeFiles/libninja.dir/src/dyndep.cc.o [ 11%] Building CXX object CMakeFiles/libninja.dir/src/dyndep_parser.cc.o [ 13%] Building CXX object CMakeFiles/libninja.dir/src/debug_flags.cc.o [ 14%] Building CXX object CMakeFiles/libninja.dir/src/deps_log.cc.o [ 16%] Building CXX object CMakeFiles/libninja.dir/src/disk_interface.cc.o [ 17%] Building CXX object CMakeFiles/libninja.dir/src/edit_distance.cc.o [ 19%] Building CXX object CMakeFiles/libninja.dir/src/eval_env.cc.o [ 20%] Building CXX object CMakeFiles/libninja.dir/src/graph.cc.o [ 22%] Building CXX object CMakeFiles/libninja.dir/src/graphviz.cc.o [ 23%] Building CXX object CMakeFiles/libninja.dir/src/json.cc.o [ 25%] Building CXX object CMakeFiles/libninja.dir/src/line_printer.cc.o [ 26%] Building CXX object CMakeFiles/libninja.dir/src/manifest_parser.cc.o [ 28%] Building CXX object CMakeFiles/libninja.dir/src/metrics.cc.o [ 29%] Building CXX object CMakeFiles/libninja.dir/src/missing_deps.cc.o [ 31%] Building CXX object CMakeFiles/libninja.dir/src/parser.cc.o [ 32%] Building CXX object CMakeFiles/libninja.dir/src/state.cc.o [ 34%] Building CXX object CMakeFiles/libninja.dir/src/status.cc.o [ 35%] Building CXX object 
CMakeFiles/libninja.dir/src/string_piece_util.cc.o [ 37%] Building CXX object CMakeFiles/libninja.dir/src/tokenpool-gnu-make.cc.o [ 38%] Building CXX object CMakeFiles/libninja.dir/src/util.cc.o [ 40%] Building CXX object CMakeFiles/libninja.dir/src/version.cc.o [ 41%] Building CXX object CMakeFiles/libninja.dir/src/subprocess-posix.cc.o [ 43%] Building CXX object CMakeFiles/libninja.dir/src/tokenpool-gnu-make-posix.cc.o [ 43%] Built target libninja [ 44%] Generating build/browse_py.h [ 46%] Building CXX object CMakeFiles/ninja.dir/src/ninja.cc.o [ 47%] Building CXX object CMakeFiles/ninja.dir/src/browse.cc.o [ 49%] Linking CXX executable ninja [ 49%] Built target ninja [ 50%] Building CXX object CMakeFiles/ninja_test.dir/src/build_log_test.cc.o [ 52%] Building CXX object CMakeFiles/ninja_test.dir/src/build_test.cc.o [ 53%] Building CXX object CMakeFiles/ninja_test.dir/src/clean_test.cc.o [ 55%] Building CXX object CMakeFiles/ninja_test.dir/src/clparser_test.cc.o [ 56%] Building CXX object CMakeFiles/ninja_test.dir/src/depfile_parser_test.cc.o [ 58%] Building CXX object CMakeFiles/ninja_test.dir/src/deps_log_test.cc.o [ 59%] Building CXX object CMakeFiles/ninja_test.dir/src/disk_interface_test.cc.o [ 61%] Building CXX object CMakeFiles/ninja_test.dir/src/dyndep_parser_test.cc.o [ 62%] Building CXX object CMakeFiles/ninja_test.dir/src/edit_distance_test.cc.o [ 64%] Building CXX object CMakeFiles/ninja_test.dir/src/graph_test.cc.o [ 65%] Building CXX object CMakeFiles/ninja_test.dir/src/json_test.cc.o [ 67%] Building CXX object CMakeFiles/ninja_test.dir/src/lexer_test.cc.o [ 68%] Building CXX object CMakeFiles/ninja_test.dir/src/manifest_parser_test.cc.o [ 70%] Building CXX object CMakeFiles/ninja_test.dir/src/missing_deps_test.cc.o [ 71%] Building CXX object CMakeFiles/ninja_test.dir/src/ninja_test.cc.o [ 73%] Building CXX object CMakeFiles/ninja_test.dir/src/state_test.cc.o [ 74%] Building CXX object CMakeFiles/ninja_test.dir/src/string_piece_util_test.cc.o [ 76%] 
Building CXX object CMakeFiles/ninja_test.dir/src/subprocess_test.cc.o [ 77%] Building CXX object CMakeFiles/ninja_test.dir/src/test.cc.o [ 79%] Building CXX object CMakeFiles/ninja_test.dir/src/tokenpool_test.cc.o [ 80%] Building CXX object CMakeFiles/ninja_test.dir/src/util_test.cc.o [ 82%] Linking CXX executable ninja_test [ 82%] Built target ninja_test [ 83%] Building CXX object CMakeFiles/build_log_perftest.dir/src/build_log_perftest.cc.o [ 85%] Linking CXX executable build_log_perftest [ 85%] Built target build_log_perftest [ 86%] Building CXX object CMakeFiles/canon_perftest.dir/src/canon_perftest.cc.o [ 88%] Linking CXX executable canon_perftest [ 88%] Built target canon_perftest [ 89%] Building CXX object CMakeFiles/clparser_perftest.dir/src/clparser_perftest.cc.o [ 91%] Linking CXX executable clparser_perftest [ 91%] Built target clparser_perftest [ 92%] Building CXX object CMakeFiles/depfile_parser_perftest.dir/src/depfile_parser_perftest.cc.o [ 94%] Linking CXX executable depfile_parser_perftest [ 94%] Built target depfile_parser_perftest [ 95%] Building CXX object CMakeFiles/hash_collision_bench.dir/src/hash_collision_bench.cc.o [ 97%] Linking CXX executable hash_collision_bench [ 97%] Built target hash_collision_bench [ 98%] Building CXX object CMakeFiles/manifest_parser_perftest.dir/src/manifest_parser_perftest.cc.o [100%] Linking CXX executable manifest_parser_perftest [100%] Built target manifest_parser_perftest [ 83%] Stripping CMake executables [ 88%] Running Ninja test suite make[2]: *** [CMakeFiles/build_ninja.dir/build.make:120: build_ninja-prefix/src/build_ninja-stamp/build_ninja-run_ninja_test_suite] Error 130 make[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/build_ninja.dir/all] Error 2 make: *** [Makefile:136: all] Error 2 Traceback (most recent call last): File "/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/setuptools_wrap.py", line 640, in setup cmkr.make(make_args, install_target=cmake_install_target, env=env) 
File "/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 670, in make self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env) File "/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 701, in make_impl raise SKBuildError( An error occurred while building with CMake. Command: cmake --build . --target install --config Release -- Install target: install Source directory: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038 Working directory: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build Please check the install target is valid and see CMake's output for more information. [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for ninja Failed to build ninja ERROR: Could not build wheels for ninja, which is required to install pyproject.toml-based projects [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error × pip subprocess to install build dependencies did not run successfully. │ exit code: 1 ╰─> See above for output. note: This error originates from a subprocess, and is likely not a problem with pip.

Execution environment:

root@ZZZZZ:/home/dev/packages/scipy-1.9.3# python3 -V
Python 3.9.7
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# pip list
Cython     0.29.32
joblib     1.2.0
meson      0.64.0
ninja      1.11.1
numpy      1.23.4
pip        22.3.1
setuptools 65.6.0
wheel      0.38.4
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# /usr/bin/ninja --version
1.11.1.git.kitware.jobserver-1
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# cmake --version
cmake version 3.25.0

I really want to install pandas and scikit-learn, but I get an error. So I installed them individually and found that scipy is the cause. 
Also, the default is numpy==1.19.3, but I changed it to numpy==1.23.4 in pyproject.toml. Why can't it recognize ninja if it is already installed? Is there a switch to make this work? If something needs to be modified, which module is it?

A: This is an error that happens with your pip version. Try downgrading it to pip==19.0 and see if that works. If downgrading pip doesn't help, you can also try the solution here.
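The telltale line in the log is CMake's generator probe failing with ModuleNotFoundError: No module named 'ninja'. On this machine /usr/bin/ninja is the launcher script installed by the ninja pip package (it does "from ninja import ninja"), but pip builds scipy's build dependencies in an isolated environment where that Python package is absent, so the launcher crashes, CMake rejects the Ninja generator, and pip falls back to rebuilding ninja from source (whose test suite then fails with Error 130). A minimal sketch of that probe, using only the standard library (this is illustrative, not the asker's exact setup):

```python
import importlib.util
import shutil

def ninja_probe():
    """Mimic CMake's check: is a 'ninja' command on PATH, and could a
    pip-style launcher ('from ninja import ninja') import the package?"""
    on_path = shutil.which("ninja") is not None                   # launcher or binary found on PATH
    importable = importlib.util.find_spec("ninja") is not None    # ninja Python package importable
    return on_path, importable

# On the asker's host both would be True (pip list shows ninja 1.11.1);
# inside pip's freshly created isolated build env the second value is
# False, so the /usr/bin/ninja launcher raises ModuleNotFoundError and
# CMake's "ninja --version" probe fails.
print(ninja_probe())
```

Two commonly suggested workarounds for this class of failure (hedged, untested on this exact board): install the distribution's native ninja so /usr/bin/ninja is a real binary instead of a Python launcher (apt install ninja-build), or run pip install . --no-build-isolation so the already-installed numpy, Cython, meson, and ninja are reused instead of being rebuilt inside an isolated environment.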
Building CXX object CMakeFiles/ninja_test.dir/src/subprocess_test.cc.o [ 77%] Building CXX object CMakeFiles/ninja_test.dir/src/test.cc.o [ 79%] Building CXX object CMakeFiles/ninja_test.dir/src/tokenpool_test.cc.o [ 80%] Building CXX object CMakeFiles/ninja_test.dir/src/util_test.cc.o [ 82%] Linking CXX executable ninja_test [ 82%] Built target ninja_test [ 83%] Building CXX object CMakeFiles/build_log_perftest.dir/src/build_log_perftest.cc.o [ 85%] Linking CXX executable build_log_perftest [ 85%] Built target build_log_perftest [ 86%] Building CXX object CMakeFiles/canon_perftest.dir/src/canon_perftest.cc.o [ 88%] Linking CXX executable canon_perftest [ 88%] Built target canon_perftest [ 89%] Building CXX object CMakeFiles/clparser_perftest.dir/src/clparser_perftest.cc.o [ 91%] Linking CXX executable clparser_perftest [ 91%] Built target clparser_perftest [ 92%] Building CXX object CMakeFiles/depfile_parser_perftest.dir/src/depfile_parser_perftest.cc.o [ 94%] Linking CXX executable depfile_parser_perftest [ 94%] Built target depfile_parser_perftest [ 95%] Building CXX object CMakeFiles/hash_collision_bench.dir/src/hash_collision_bench.cc.o [ 97%] Linking CXX executable hash_collision_bench [ 97%] Built target hash_collision_bench [ 98%] Building CXX object CMakeFiles/manifest_parser_perftest.dir/src/manifest_parser_perftest.cc.o [100%] Linking CXX executable manifest_parser_perftest [100%] Built target manifest_parser_perftest [ 83%] Stripping CMake executables [ 88%] Running Ninja test suite make[2]: *** [CMakeFiles/build_ninja.dir/build.make:120: build_ninja-prefix/src/build_ninja-stamp/build_ninja-run_ninja_test_suite] Error 130 make[1]: *** [CMakeFiles/Makefile2:111: CMakeFiles/build_ninja.dir/all] Error 2 make: *** [Makefile:136: all] Error 2 Traceback (most recent call last): File "/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/setuptools_wrap.py", line 640, in setup cmkr.make(make_args, install_target=cmake_install_target, env=env) 
  File "/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 670, in make
    self.make_impl(clargs=clargs, config=config, source_dir=source_dir, install_target=install_target, env=env)
  File "/tmp/pip-build-env-we6iehs1/overlay/lib/python3.9/site-packages/skbuild/cmaker.py", line 701, in make_impl
    raise SKBuildError(
An error occurred while building with CMake.
  Command: cmake --build . --target install --config Release --
  Install target: install
  Source directory: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038
  Working directory: /tmp/pip-install-h317wd1u/ninja_ea80f17956454895b214a420d61cc038/_skbuild/linux-armv7l-3.9/cmake-build
Please check the install target is valid and see CMake's output for more information.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for ninja
Failed to build ninja
ERROR: Could not build wheels for ninja, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.

Execution environment:
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# python3 -V
Python 3.9.7
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# pip list
Cython     0.29.32
joblib     1.2.0
meson      0.64.0
ninja      1.11.1
numpy      1.23.4
pip        22.3.1
setuptools 65.6.0
wheel      0.38.4
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# /usr/bin/ninja --version
1.11.1.git.kitware.jobserver-1
root@ZZZZZ:/home/dev/packages/scipy-1.9.3# cmake --version
cmake version 3.25.0

I really want to install pandas and scikit-learn, but I get an error. So I installed them individually and found that scipy is the cause. Also, the default is numpy==1.19.3, but I changed it to numpy==1.23.4 in pyproject.toml. Why can't pip recognize ninja if it is already installed? Is there a switch to make this work? If something needs to be modified, which module is it?
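One switch worth knowing here (an editorial note, not taken from this thread): by default pip installs pyproject.toml build requirements such as ninja into an isolated, throwaway build environment, so the ninja 1.11.1 visible in pip list is never consulted. A hedged sketch, assuming every build requirement of scipy is already present in the environment:

```shell
# Assumption: scipy's build requirements (Cython, meson, ninja, numpy, ...)
# are already installed, as the pip list output above suggests.
# --no-build-isolation tells pip to build against the current environment
# instead of an isolated one, so the system ninja is actually used.
cd /home/dev/packages/scipy-1.9.3
pip install --no-build-isolation .
```

If a build requirement is missing, pip will fail with an import error instead of trying to build it, so the assumption above matters.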
[ "This is an error that happens with your pip version. Try to downgrade it to pip=19.0 and try again to see if it works. Also you can try the solution here if downgrading pip doesn't work.\n" ]
[ 0 ]
[]
[]
[ "pip", "python", "python_wheel", "scipy" ]
stackoverflow_0074506114_pip_python_python_wheel_scipy.txt
Q: Error: File could not be downloaded from url: 2Captcha API
I am trying to solve a normal captcha using the 2Captcha Python API, but it reports that the file could not be downloaded. I don't know why this is happening, as I can download it manually from the browser with "Save as .png". Below is the code:

import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
from twocaptcha import TwoCaptcha

solver = TwoCaptcha(apikey)
try:
    result = solver.normal('https://v2.gcchmc.org/captcha/image/aa699f305917812978c911e87ab126a782f726e7/')
except Exception as e:
    sys.exit(e)
else:
    sys.exit('solved: ' + str(result))

I also tried to download the file with requests and then give it to the API, but that also raises an error. The requests code is:

url = 'https://v2.gcchmc.org/captcha/image/aa699f305917812978c911e87ab126a782f726e7/'
import requests
from PIL import Image
from io import BytesIO

response = requests.get(url)
img = Image.open(BytesIO(response.content))  # error occurs here
img.save('output.png')

The error is:

raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x00000141224AB2C0>

If anyone can help me download the image with a script, I will be thankful. The captcha is shown at the following URL: https://v2.gcchmc.org/book-appointment/

A: Your code is fine; the problem is caused by headers. The URL expects headers from you and you are not providing them, so the server returns an error response which the PIL library cannot understand.
The working code will be:

url = 'https://v2.gcchmc.org/captcha/image/aa699f305917812978c911e87ab126a782f726e7/'
import requests
from PIL import Image
from io import BytesIO

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0',
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.5',
    # 'Accept-Encoding': 'gzip, deflate, br',
    'DNT': '1',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-User': '?1',
}

# headers must be passed by keyword: the second positional argument of
# requests.get() is params, so requests.get(url, headers) would send the
# dict as query parameters instead of request headers.
response = requests.get(url, headers=headers)
img = Image.open(BytesIO(response.content))
img.save('output.png')
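A small extra guard (an editorial sketch; the helper name is invented for illustration): checking the Content-Type header before handing the bytes to PIL turns the opaque UnidentifiedImageError into a clearer failure whenever the server returns an HTML error page instead of an image.

```python
def looks_like_image(content_type: str) -> bool:
    """True when a Content-Type header plausibly denotes image bytes,
    e.g. 'image/png' or 'Image/JPEG; charset=binary'."""
    media_type = content_type.split(";")[0].strip().lower()
    return media_type.startswith("image/")

# Hypothetical usage with the response from the code above:
# if not looks_like_image(response.headers.get("Content-Type", "")):
#     raise RuntimeError("server did not return an image; check the request headers")
```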
Error: File could not be downloaded from url: 2Captcha API
I am trying to solve a normal captcha using the 2Captcha Python API, but it reports that the file could not be downloaded. I don't know why this is happening, as I can download it manually from the browser with "Save as .png". Below is the code:

import sys
import os
sys.path.append(os.path.dirname(os.path.dirname(os.path.realpath(__file__))))
from twocaptcha import TwoCaptcha

solver = TwoCaptcha(apikey)
try:
    result = solver.normal('https://v2.gcchmc.org/captcha/image/aa699f305917812978c911e87ab126a782f726e7/')
except Exception as e:
    sys.exit(e)
else:
    sys.exit('solved: ' + str(result))

I also tried to download the file with requests and then give it to the API, but that also raises an error. The requests code is:

url = 'https://v2.gcchmc.org/captcha/image/aa699f305917812978c911e87ab126a782f726e7/'
import requests
from PIL import Image
from io import BytesIO

response = requests.get(url)
img = Image.open(BytesIO(response.content))  # error occurs here
img.save('output.png')

The error is:

raise UnidentifiedImageError(
PIL.UnidentifiedImageError: cannot identify image file <_io.BytesIO object at 0x00000141224AB2C0>

If anyone can help me download the image with a script, I will be thankful. The captcha is shown at the following URL: https://v2.gcchmc.org/book-appointment/
[ "Your code is fine and it is the problem caused by headers. The url expects headers from you and you are not providing headers. This causes error response which the PIL library can not understand.\nThe working code will be\nurl = 'https://v2.gcchmc.org/captcha/image/aa699f305917812978c911e87ab126a782f726e7/'\nimport requests\nfrom PIL import Image\nfrom io import BytesIO\nheaders = {\n 'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:107.0) Gecko/20100101 Firefox/107.0',\n 'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',\n 'Accept-Language': 'en-US,en;q=0.5',\n # 'Accept-Encoding': 'gzip, deflate, br',\n 'DNT': '1',\n 'Connection': 'keep-alive',\n 'Upgrade-Insecure-Requests': '1',\n 'Sec-Fetch-Dest': 'document',\n 'Sec-Fetch-Mode': 'navigate',\n 'Sec-Fetch-Site': 'none',\n 'Sec-Fetch-User': '?1',\n}\nresponse = requests.get(url, headers=headers)\nimg = Image.open(BytesIO(response.content))\nimg.save('output.png')\n\n" ]
[ 2 ]
[]
[]
[ "2captcha", "python", "python_imaging_library", "python_requests" ]
stackoverflow_0074455872_2captcha_python_python_imaging_library_python_requests.txt
Q: python tkinter main window
I was trying to run some code with PyCharm, and the following lines are the beginning, but it doesn't open any window. What should I do?

import tkinter
mainwindow = tkinter.Tk()
mainwindow.title("Calculator")
mainwindow.geometry('480x240')
buttonOne = tkinter.Button(mainwindow, text='1')

It runs and instantly closes without opening any window.

A: In order to make sure the window doesn't close, you need the mainloop function.

import tkinter
mainwindow = tkinter.Tk()
mainwindow.title("Calculator")
mainwindow.geometry('480x240')
buttonOne = tkinter.Button(mainwindow, text='1')
tkinter.mainloop()
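One more detail worth flagging (an editorial note, not from the accepted answer): even with mainloop() the button will not appear, because it is never handed to a geometry manager. A minimal sketch, assuming a helper function name of my own invention:

```python
import tkinter

def build_calculator(root):
    """Attach a single '1' button to the given window.
    A widget only becomes visible after pack()/grid()/place()."""
    button_one = tkinter.Button(root, text="1")
    button_one.pack()  # without this the button exists but is never drawn
    return button_one

# Typical use (requires a display):
# mainwindow = tkinter.Tk()
# mainwindow.title("Calculator")
# mainwindow.geometry("480x240")
# build_calculator(mainwindow)
# mainwindow.mainloop()
```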
python tkinter main window
I was trying to run some code with PyCharm, and the following lines are the beginning, but it doesn't open any window. What should I do?

import tkinter
mainwindow = tkinter.Tk()
mainwindow.title("Calculator")
mainwindow.geometry('480x240')
buttonOne = tkinter.Button(mainwindow, text='1')

It runs and instantly closes without opening any window.
[ "In order to make sure the window doesn't close you need the mainloop function.\nimport tkinter\nmainwindow=tkinter.Tk()\nmainwindow.title(\"Calculator\")\nmainwindow.geometry('480x240')\nbuttonOne= tkinter.Button(mainwindow,text='1')\ntkinter.mainloop()\n\n" ]
[ 0 ]
[]
[]
[ "python", "tkinter" ]
stackoverflow_0074506323_python_tkinter.txt