| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,255,983
| 19,336,534
|
Weird behaviour in tensorflow metric
|
<p>I have created a tensorflow metric as seen below:</p>
<pre><code>def AttackAcc(y_true, y_pred):
r = tf.random.uniform(shape=(), minval=0, maxval=11, dtype=tf.int32)
if tf.math.greater(r,tf.constant(5) ):
return tf.math.equal( tf.constant(0.6) , tf.constant(0.2) )
else:
return tf.math.equal( tf.constant(0.6) , tf.constant(0.6) )
</code></pre>
<p>The metric is added to the <code>model.compile</code> as :</p>
<pre><code>metrics=[AttackAcc]
</code></pre>
<p>This should return 0 half of the time and 1 the other half. So while training my model I should see a value of around 0.5 for this metric.
However, it is always 0.<br />
Any ideas why?</p>
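One detail worth noting before the graph-mode question: with maxval=11 the draw is an integer in [0, 10], so r > 5 holds for only 5 of the 11 outcomes, roughly 0.45 rather than exactly 0.5. A plain-Python simulation of just that arithmetic (a sketch, not TensorFlow code):

```python
import random

# Simulate tf.random.uniform(shape=(), minval=0, maxval=11, dtype=tf.int32):
# randrange(0, 11) draws an integer in {0, ..., 10}, and the condition
# r > 5 is satisfied by {6, ..., 10}, i.e. 5 of 11 outcomes.
random.seed(0)
trials = 100_000
hits = sum(1 for _ in range(trials) if random.randrange(0, 11) > 5)
frac = hits / trials
print(frac)  # close to 5/11, not 0.5
```

So even once the always-zero behaviour is fixed, the metric would not average exactly 0.5 with these bounds.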
|
<python><python-3.x><tensorflow><tensorflow2.0>
|
2023-01-27 09:08:29
| 1
| 551
|
Los
|
75,255,653
| 940,208
|
Discriminated union in Python
|
<p>Imagine I have a base class and two derived classes. I also have a factory method, that returns an object of one of the classes. The problem is, mypy or IntelliJ can't figure out which type the object is. They know it can be both, but not which one exactly. Is there any way I can help mypy/IntelliJ to figure this out WITHOUT putting a type hint next to the <code>conn</code> variable name?</p>
<pre><code>import abc
import enum
import typing
class BaseConnection(abc.ABC):
@abc.abstractmethod
def sql(self, query: str) -> typing.List[typing.Any]:
...
class PostgresConnection(BaseConnection):
def sql(self, query: str) -> typing.List[typing.Any]:
return "This is a postgres result".split()
def only_postgres_things(self):
pass
class MySQLConnection(BaseConnection):
def sql(self, query: str) -> typing.List[typing.Any]:
return "This is a mysql result".split()
def only_mysql_things(self):
pass
class ConnectionType(enum.Enum):
POSTGRES = 1
MYSQL = 2
def connect(conn_type: ConnectionType) -> typing.Union[PostgresConnection, MySQLConnection]:
if conn_type is ConnectionType.POSTGRES:
return PostgresConnection()
if conn_type is ConnectionType.MYSQL:
return MySQLConnection()
conn = connect(ConnectionType.POSTGRES)
conn.only_postgres_things()
</code></pre>
<p>Look at how IntelliJ handles this:
<a href="https://i.sstatic.net/0DBY4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0DBY4.png" alt="enter image description here" /></a></p>
<p>As you can see, both methods, <code>only_postgres_things</code> and <code>only_mysql_things</code>, are suggested, when I'd like IntelliJ/mypy to figure it out from the type I'm passing to the <code>connect</code> function.</p>
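One possible workaround (a sketch, not necessarily the only answer): overload the factory on Literal enum values, so the checker can narrow the return type per call site.

```python
import enum
import typing

class PostgresConnection:
    def only_postgres_things(self) -> str:
        return "pg"

class MySQLConnection:
    def only_mysql_things(self) -> str:
        return "mysql"

class ConnectionType(enum.Enum):
    POSTGRES = 1
    MYSQL = 2

# Each overload pins one Literal member to one concrete return type, so
# connect(ConnectionType.POSTGRES) is inferred as PostgresConnection.
@typing.overload
def connect(conn_type: typing.Literal[ConnectionType.POSTGRES]) -> PostgresConnection: ...
@typing.overload
def connect(conn_type: typing.Literal[ConnectionType.MYSQL]) -> MySQLConnection: ...
def connect(conn_type: ConnectionType) -> typing.Union[PostgresConnection, MySQLConnection]:
    if conn_type is ConnectionType.POSTGRES:
        return PostgresConnection()
    return MySQLConnection()

conn = connect(ConnectionType.POSTGRES)
print(conn.only_postgres_things())  # accepted; only_mysql_things is not suggested
```

The classes are trimmed here for brevity; the same overloads work unchanged on the BaseConnection hierarchy from the question.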
|
<python><pycharm><python-typing><mypy><discriminated-union>
|
2023-01-27 08:32:58
| 2
| 17,398
|
mnowotka
|
75,255,520
| 17,788,573
|
WinError 10060- A connection attempt failed because the connected party did not properly respond after a period of time
|
<p>I'm trying a tutorial for spaCy (NLP), and when I try to pip install the requirements (as mentioned on the spaCy website), I'm getting this error. Can anybody please tell me what I'm doing wrong? The same error showed up while installing NLTK as well (I figured spaCy would work, but the same error pops up again).</p>
<p>I've looked at other similar questions and found no similarities between the contexts. I'm a beginner in ML and I'm finding it tough to deal with errors at each and every step. Please help!</p>
<p>Here's the full error.</p>
<pre><code>TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 703, in urlopen
httplib_response = self._make_request(
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 386, in _make_request
self._validate_conn(conn)
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 1042, in _validate_conn
conn.connect()
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\connection.py", line 358, in connect
self.sock = conn = self._new_conn()
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\connection.py", line 186, in _new_conn
raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPSConnection object at 0x000001C494D17640>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\krish\anaconda3\lib\site-packages\requests\adapters.py", line 489, in send
resp = conn.urlopen(
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\connectionpool.py", line 787, in urlopen
retries = retries.increment(
File "C:\Users\krish\anaconda3\lib\site-packages\urllib3\util\retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001C494D17640>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "C:\Users\krish\anaconda3\lib\runpy.py", line 197, in _run_module_as_main
return _run_code(code, main_globals, None,
File "C:\Users\krish\anaconda3\lib\runpy.py", line 87, in _run_code
exec(code, run_globals)
File "C:\Users\krish\anaconda3\lib\site-packages\spacy\__main__.py", line 4, in <module>
setup_cli()
File "C:\Users\krish\anaconda3\lib\site-packages\spacy\cli\_util.py", line 71, in setup_cli
command(prog_name=COMMAND)
File "C:\Users\krish\anaconda3\lib\site-packages\click\core.py", line 1128, in __call__
return self.main(*args, **kwargs)
File "C:\Users\krish\anaconda3\lib\site-packages\typer\core.py", line 778, in main
return _main(
File "C:\Users\krish\anaconda3\lib\site-packages\typer\core.py", line 216, in _main
rv = self.invoke(ctx)
File "C:\Users\krish\anaconda3\lib\site-packages\click\core.py", line 1659, in invoke
return _process_result(sub_ctx.command.invoke(sub_ctx))
File "C:\Users\krish\anaconda3\lib\site-packages\click\core.py", line 1395, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "C:\Users\krish\anaconda3\lib\site-packages\click\core.py", line 754, in invoke
return __callback(*args, **kwargs)
File "C:\Users\krish\anaconda3\lib\site-packages\typer\main.py", line 683, in wrapper
return callback(**use_params) # type: ignore
File "C:\Users\krish\anaconda3\lib\site-packages\spacy\cli\download.py", line 36, in download_cli
download(model, direct, sdist, *ctx.args)
File "C:\Users\krish\anaconda3\lib\site-packages\spacy\cli\download.py", line 70, in download
compatibility = get_compatibility()
File "C:\Users\krish\anaconda3\lib\site-packages\spacy\cli\download.py", line 97, in get_compatibility
r = requests.get(about.__compatibility__)
File "C:\Users\krish\anaconda3\lib\site-packages\requests\api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "C:\Users\krish\anaconda3\lib\site-packages\requests\api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "C:\Users\krish\anaconda3\lib\site-packages\requests\sessions.py", line 587, in request
resp = self.send(prep, **send_kwargs)
File "C:\Users\krish\anaconda3\lib\site-packages\requests\sessions.py", line 701, in send
r = adapter.send(request, **kwargs)
File "C:\Users\krish\anaconda3\lib\site-packages\requests\adapters.py", line 565, in send
raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='raw.githubusercontent.com', port=443): Max retries exceeded with url: /explosion/spacy-models/master/compatibility.json (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x000001C494D17640>: Failed to establish a new connection: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond'))
</code></pre>
|
<python><machine-learning><nlp><nltk><spacy>
|
2023-01-27 08:18:25
| 0
| 311
|
Vaikruta
|
75,255,453
| 10,729,772
|
Convert the output of protoc --decode_raw into json
|
<p>I'm trying to convert the message of a protobuf blob into JSON, without the corresponding schema. This is the code I'm using, but it doesn't handle nested objects. Maybe there is a way to convert blobs without the schema? I just need JSON; the variable names don't matter to me.</p>
<pre><code>message_dict = {}
for line in result.stdout.split("\n"):
if not line:
continue
parts = line.split(": ")
field_number = parts[0]
value = parts[1] if len(parts) > 1 else None
message_dict[field_number] = value
</code></pre>
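The flat split misses nested messages, which decode_raw prints as `field { ... }` blocks. A recursive sketch (a hypothetical helper, not part of protoc; keys stay the raw tag numbers as strings, leaf values stay strings, and repeated fields are collected into lists):

```python
def parse_decode_raw(lines, i=0):
    """Parse protoc --decode_raw output lines into nested dicts."""
    msg = {}

    def put(key, value):
        if key in msg:                        # repeated field -> list
            if not isinstance(msg[key], list):
                msg[key] = [msg[key]]
            msg[key].append(value)
        else:
            msg[key] = value

    while i < len(lines):
        line = lines[i].strip()
        i += 1
        if not line:
            continue
        if line == "}":                       # end of this nested message
            break
        if line.endswith("{"):                # start of a nested message
            child, i = parse_decode_raw(lines, i)
            put(line[:-1].strip(), child)
        else:
            key, _, value = line.partition(": ")
            put(key, value)
    return msg, i

sample = '1: "hello"\n2 {\n  3: 42\n}\n2 {\n  3: 43\n}'
parsed, _ = parse_decode_raw(sample.split("\n"))
print(parsed)
```

The resulting dict can then be serialised with json.dumps directly.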
|
<python><protocol-buffers><grpc>
|
2023-01-27 08:10:43
| 2
| 468
|
Georg K.
|
75,255,445
| 12,940,363
|
Streams data from multiple while loops
|
<p>I am using psutil to analyze open_files() at any given point in time. As soon as a program creates a file with a specific extension of interest (could be .xlsx, .csv, .docx, .dat), I want to save the file in another directory before it is modified and then perform sequential operations on it.</p>
<p>Right now I am using nested while loops, but if the program outputs multiple files, they aren't all detected, as the loop might not have finished processing the previous file.</p>
<pre><code>files_of_interest = []
process = psutil.Process(pid)
while True:
for i in process.open_files():
print(i.path) # warning, traffic in terminal
if i.path.endswith(".xlsx"):
print("This is a file of interest")
files_of_interest.append(i.path)
# Second while loop
safe_location = "C:\\safe"
while True:
if len(files_of_interest) > 0:
try:
os.rename(files_of_interest[0], files_of_interest[0]) # check file in use
# temporarily copy to safe location for processing
shutil.copy(files_of_interest[0], safe_location)
files_of_interest.pop()
break
except OSError:
continue
# Third to process safe folder items one by one
while True:
if len(files_in_safe_location) > 0:
do_something()
</code></pre>
<p>I need to optimize this, run the while loops in parallel and get the best possible outcome without any race conditions (as far as possible).</p>
<p>I know how to do this in dart (isolates) and nodejs (web workers), but the process in python seems very different since I assumed python does not have streams like async functionality (dynamically react to changes in instance data).</p>
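Python does have the equivalent building blocks: a queue.Queue plus threads gives the same producer/consumer shape as isolates or web workers, and the queue serialises the hand-off so the loops no longer race. A minimal sketch with the file watching and copying replaced by stand-ins:

```python
import queue
import threading

def watcher(found, paths):
    # Stand-in for the psutil polling loop: push matching paths as found.
    for p in paths:
        if p.endswith(".xlsx"):
            found.put(p)
    found.put(None)  # sentinel: no more files

def worker(found, results):
    # Stand-in for the copy-to-safe-location + do_something() loops.
    while True:
        p = found.get()
        if p is None:
            break
        results.append(p.upper())

found = queue.Queue()
results = []
t1 = threading.Thread(target=watcher, args=(found, ["a.xlsx", "b.txt", "c.xlsx"]))
t2 = threading.Thread(target=worker, args=(found, results))
t1.start(); t2.start()
t1.join(); t2.join()
print(results)
```

In the real script the watcher would loop forever around process.open_files() and the worker would do the os.rename/shutil.copy probing; the point is that no file is dropped while another is still being processed.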
|
<python><python-multiprocessing><psutil>
|
2023-01-27 08:09:52
| 1
| 327
|
RealRK
|
75,255,070
| 15,781,591
|
How to groupby multiple columns in dataframe, except one in python
|
<p>I have the following dataframe:</p>
<pre><code> ID Code Color Value
-----------------------------------
0 111 AAA Blue 23
1 111 AAA Red 43
2 111 AAA Green 4
3 121 ABA Green 45
4 121 ABA Green 23
5 121 ABA Red 75
6 122 AAA Red 52
7 122 ACA Blue 24
8 122 ACA Blue 53
9 122 ACA Green 14
...
</code></pre>
<p>I want to group this dataframe by the columns "ID" and "Code", and sum the values from the "Value" column, while excluding the "Color" column from this grouping. In other words, I want to group by all non-Value columns except for the "Color" column, and then sum the values from the "Value" column. I am using Python for this.</p>
<p>What I am thinking of doing is creating a list of all column names that are not "Color" or "Value", calling it "column_list", and then simply running:</p>
<pre><code>df.groupby['column_list'].sum()
</code></pre>
<p>Though this will not work. How might I augment this code so that I can properly groupby as intended?</p>
<p>EDIT:</p>
<p>This code works:</p>
<pre><code>bins = df.groupby([df.columns[0],
                   df.columns[1],
                   df.columns[2]]).count()
bins["Weight"] = bins / bins.groupby(df.columns[0]).sum()
bins.reset_index(inplace=True)
bins['Weight'] = bins['Weight'].round(4)
display(HTML(bins.to_html()))
</code></pre>
<p>Full code that is not working:</p>
<pre><code>column_list = [c for c in df.columns if c not in ['Value']]
bins = df.groupby(column_list, as_index=False)['Value'].count()
bins["Weight"] = bins / bins.groupby(df.columns[0]).sum()
bins.reset_index(inplace=True)
bins['Weight'] = bins['Weight'].round(4)
display(HTML(bins.to_html()))
</code></pre>
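For the original grouping goal, a sketch with made-up data showing the two fixes the one-liner above needs: groupby is called with parentheses, and it receives the list object itself rather than the string 'column_list':

```python
import pandas as pd

df = pd.DataFrame({
    "ID":    [111, 111, 111, 121],
    "Code":  ["AAA", "AAA", "AAA", "ABA"],
    "Color": ["Blue", "Red", "Green", "Green"],
    "Value": [23, 43, 4, 45],
})

# Group by every column except Color and Value, then sum Value.
column_list = [c for c in df.columns if c not in ("Color", "Value")]
out = df.groupby(column_list, as_index=False)["Value"].sum()
print(out)
```

as_index=False keeps ID and Code as regular columns in the result instead of a MultiIndex.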
|
<python><pandas>
|
2023-01-27 07:18:24
| 1
| 641
|
LostinSpatialAnalysis
|
75,254,841
| 18,125,194
|
Define which neighbours a Ball tree should return
|
<p>I have a dataframe with several locations. I want to find each location's nearest neighbours.</p>
<p>To do this, I am using a ball tree. However, the output seems to compare all of the locations with each other, including the original location. For example, if I have locations A, B, C..., the output will list A as a neighbour of A.</p>
<p>Also, I have a column for time that I want to use in my analysis. I have set the time column as the index before fitting the ball tree, but the output will return A at time 1, A at time 2, A at time 3 as neighbours of A.</p>
<p>I created a smaller dataframe with fake data to mirror my own (displayed below), and using this smaller dataset I can run the tree with a larger number of neighbours than I would otherwise use and remove the 'wrong' neighbours from the output.</p>
<p>However, this method is too computationally expensive to use with my real data.</p>
<p>Is there a way of defining which neighbours the ball tree should return?</p>
<p>sample code:</p>
<pre><code>from sklearn.neighbors import BallTree
import numpy as np
import pandas as pd
test_data = pd.DataFrame({'latitude':[51.51, 51.52,61.53,61.54,71.55, 71.56,
51.51, 51.52,61.53,61.54,71.55, 71.56,
51.51, 51.52,61.53,61.54,71.55, 71.56],
'longitude':[-0.13,-0.13,-0.13,-0.14,-0.13,-0.13,
-0.13,-0.13,-0.13,-0.14,-0.13,-0.13,
-0.13,-0.13,-0.13,-0.14,-0.13,-0.13],
'id':['A','B','C','D','E','F',
'A','B','C','D','E','F',
'A','B','C','D','E','F'],
'target':[35,410,1,100,114,78,
14,254,101,278,3578,435,
254,254,37,47,38,101],
'time':['2019-03-10 11:00:00','2019-03-10 11:00:00','2019-03-10 11:00:00','2019-03-10 11:00:00','2019-03-10 11:00:00','2019-03-10 11:00:00',
'2019-03-10 11:10:00','2019-03-10 11:10:00','2019-03-10 11:10:00','2019-03-10 11:10:00','2019-03-10 11:10:00','2019-03-10 11:10:00',
'2019-03-10 11:20:00','2019-03-10 11:20:00','2019-03-10 11:20:00','2019-03-10 11:20:00','2019-03-10 11:20:00','2019-03-10 11:20:00',
]})
# --- STEP 1) Preparing the data
test_data=test_data.reset_index()
# Convert latitude and longitude to radians
for column in test_data[['latitude','longitude']]:
rad = np.deg2rad(test_data[column].values)
test_data[f'{column}'] = rad
# Creating a duplicate of the time column, one will be set as the index
test_data['time2']=test_data['time']
# Convert time to datetime
test_data['time']=pd.to_datetime(test_data['time'])
test_data = test_data.set_index('time').astype('str')
# --- STEP 2) FITTING THE BALL TREE
locations_a = test_data
locations_b = test_data
col_name = 'ss_id'
latitude = "latitude"
longitude = "longitude"
# make ball tree
ball = BallTree(locations_a[[latitude, longitude]].values, metric='haversine')
# The amount of neighbors to return
k = 6
# Calculating distances
distances, indices = ball.query(locations_b[[latitude, longitude]].values, k = k)
# --- STEP 3) Merging Results into dataframe
dists = pd.DataFrame(distances).stack()
rel = pd.DataFrame(indices).stack()
# Create dataframe
neighbor_info_df = pd.merge(dists.rename('distance'), rel.rename('neighbor_idx'), right_index=True, left_index=True)
# Resetting and renaming indexes
neighbor_info_df = neighbor_info_df.reset_index(level=1).rename({'level_1': 'neighbor_number'}, axis=1)
neighbor_info_df = neighbor_info_df.reset_index().rename({'index': 'id_index_no'}, axis=1)
neighbor_info_df.head(10)
</code></pre>
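One common pattern (a sketch with made-up coordinates, in plain NumPy so the indexing is visible): ask for one extra neighbour, then slice off column 0, which is always the query point itself at distance 0. The same distances[:, 1:] / indices[:, 1:] slicing applies directly to the arrays returned by ball.query(..., k=k+1). The duplicate-id-at-different-times problem would still need filtering by id, for example by querying one deduplicated snapshot of locations per timestamp.

```python
import numpy as np

pts = np.array([[0.0, 0.0], [0.0, 1.0], [2.0, 0.0], [5.0, 6.0]])

# Brute-force pairwise squared distances; column 0 of the argsort is each
# point's own index because its self-distance is 0.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
order = np.argsort(d2, axis=1)

k = 2
indices = order[:, 1:k + 1]  # drop the self-match, keep k real neighbours
print(indices)
```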
|
<python><pandas><scikit-learn><nearest-neighbor>
|
2023-01-27 06:48:39
| 1
| 395
|
Rebecca James
|
75,254,838
| 10,694,247
|
Dynamodb lambda function Error Missing required parameter in AttributeDefinitions[0]
|
<p>I am working with AWS Lambda functions and want to integrate them with DynamoDB, to track my CloudWatch metric data and record in the DB whether an alert message has already been sent.</p>
<p>The function works perfectly fine if I don't use DynamoDB; below is the code with DynamoDB.
I have never used DynamoDB before, I just started.</p>
<h2>Code:</h2>
<pre><code>import json
import os
import boto3
from datetime import datetime, timedelta
from boto3.dynamodb.conditions import Key
from botocore.exceptions import ClientError
def lambda_handler(event, context):
fsx = boto3.client('fsx')
cloudwatch = boto3.client('cloudwatch')
ses = boto3.client('ses')
region_name = os.environ['AWS_REGION']
dynamodb = boto3.resource('dynamodb', region_name=region_name)
now = datetime.utcnow()
start_time = (now - timedelta(minutes=5)).strftime('%Y-%m-%dT%H:%M:%SZ')
end_time = now.strftime('%Y-%m-%dT%H:%M:%SZ')
table = []
result = []
next_token = None
while True:
if next_token:
response = fsx.describe_file_systems(NextToken=next_token)
else:
response = fsx.describe_file_systems()
for filesystem in response.get('FileSystems'):
filesystem_id = filesystem.get('FileSystemId')
table.append(filesystem_id)
next_token = response.get('NextToken')
if not next_token:
break
try:
# Create the DynamoDB table if it does not exist
table = dynamodb.create_table(
TableName='FsxNMonitorFsx',
KeySchema=[
{
'AttributeName': 'filesystem_id',
'KeyType': 'HASH' #Partition key
}
],
AttributeDefinitions=[
{
'attributeName': 'filesystem_id',
'AttributeType': 'S'
},
{
'attributeName': 'alert_sent',
'attributeType': 'BOOL'
}
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
)
# Wait for the table to be created
table.meta.client.get_waiter('table_exists').wait(TableName='FsxNMonitorFsx')
except ClientError as e:
if e.response['Error']['Code'] != 'ResourceInUseException':
raise
# Code to retrieve metric data and check if alert needs to be sent
for filesystem_id in table:
response = cloudwatch.get_metric_data(
MetricDataQueries=[
{
'Id': 'm1',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/FSx',
'MetricName': 'StorageCapacity',
'Dimensions': [
{
'Name': 'FileSystemId',
'Value': filesystem_id
},
{
'Name': 'StorageTier',
'Value': 'SSD'
},
{
'Name': 'DataType',
'Value': 'All'
}
]
},
'Period': 60,
'Stat': 'Sum'
},
'ReturnData': True
},
{
'Id': 'm2',
'MetricStat': {
'Metric': {
'Namespace': 'AWS/FSx',
'MetricName': 'StorageUsed',
'Dimensions': [
{
'Name': 'FileSystemId',
'Value': filesystem_id
},
{
'Name': 'StorageTier',
'Value': 'SSD'
},
{
'Name': 'DataType',
'Value': 'All'
}
]
},
'Period': 60,
'Stat': 'Sum'
},
'ReturnData': True
}
],
StartTime=start_time,
EndTime=end_time
)
storage_capacity = response['MetricDataResults'][0]['Values']
storage_used = response['MetricDataResults'][1]['Values']
if storage_capacity:
storage_capacity = storage_capacity[0]
else:
storage_capacity = None
if storage_used:
storage_used = storage_used[0]
else:
storage_used = None
if storage_capacity and storage_used:
percent_used = (storage_used / storage_capacity) * 100
else:
percent_used = None
response = dynamodb.get_item(
TableName='FsxNMonitorFsx',
Key={'filesystem_id': {'S': filesystem_id}}
)
if 'Item' in response:
alert_sent = response['Item']['alert_sent']['BOOL']
else:
alert_sent = False
# Send alert if storage usage exceeds threshold and no alert has been sent yet
if percent_used > 80 and not alert_sent:
email_body = "... some code..."
ses.send_email(
Source='abc@example.com',
Destination={
'ToAddresses': ['examp_mail@example.com'],
},
Message={
'Subject': {
'Data': email_subject
},
'Body': {
'Html': {
'Data': email_body
}
}
}
)
# Update FsxNMonitorFsx in DynamoDB
dynamodb.update_item(
TableName='FsxNMonitorFsx',
Key={'filesystem_id': {'S': filesystem_id}},
UpdateExpression='SET alert_sent = :val',
ExpressionAttributeValues={':val': {'BOOL': True}}
)
return {
'statusCode': 200,
'body': json.dumps('Email sent!')
}
</code></pre>
<h2>Error :</h2>
<pre><code>Response
{
"errorMessage": "Parameter validation failed:\nMissing required parameter in AttributeDefinitions[0]: \"AttributeName\"\nUnknown parameter in AttributeDefinitions[0]: \"attributeName\", must be one of: AttributeName, AttributeType\nMissing required parameter in AttributeDefinitions[1]: \"AttributeName\"\nMissing required parameter in AttributeDefinitions[1]: \"AttributeType\"\nUnknown parameter in AttributeDefinitions[1]: \"attributeName\", must be one of: AttributeName, AttributeType\nUnknown parameter in AttributeDefinitions[1]: \"attributeType\", must be one of: AttributeName, AttributeType",
"errorType": "ParamValidationError",
"requestId": "54de7194-f8e8-4a9f-91d6-0f77575de775",
Function Logs
START RequestId: 54de7194-f8e8-4a9f-91d6-0f77575de775 Version: $LATEST
[ERROR] ParamValidationError: Parameter validation failed:
Missing required parameter in AttributeDefinitions[0]: "AttributeName"
Unknown parameter in AttributeDefinitions[0]: "attributeName", must be one of: AttributeName, AttributeType
Missing required parameter in AttributeDefinitions[1]: "AttributeName"
Missing required parameter in AttributeDefinitions[1]: "AttributeType"
Unknown parameter in AttributeDefinitions[1]: "attributeName", must be one of: AttributeName, AttributeType
Unknown parameter in AttributeDefinitions[1]: "attributeType", must be one of: AttributeName, AttributeType
</code></pre>
<h2>Edit:</h2>
<p>In the below section I just changed <code>'filesystem_id'</code> to <code>filesystem_id</code>, and changed the all-lowercase <code>attribute</code> keys to first-letter-uppercase ones like <code>Attribute</code>.</p>
<pre><code> {
'AttributeName': filesystem_id,
'KeyType': 'HASH' #Partition key
}
],
AttributeDefinitions=[
{
'AttributeName': filesystem_id,
'AttributeType': 'S'
},
{
'attributeName': 'alert_sent',
'attributeType': 'BOOL'
}
],
ProvisionedThroughput={
'ReadCapacityUnits': 10,
'WriteCapacityUnits': 10
}
</code></pre>
<h2>Now the New Error:</h2>
<pre><code>Response
{
"errorMessage": "An error occurred (ValidationException) when calling the CreateTable operation: 1 validation error detected: Value 'BOOL' at 'attributeDefinitions.2.member.attributeType' failed to satisfy constraint: Member must satisfy enum value set: [B, N, S]",
"errorType": "ClientError",
</code></pre>
<p>Can someone please help with this?</p>
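Putting the error messages together, a sketch of what the CreateTable parameters would need: the key names are case-sensitive (AttributeName/AttributeType), and AttributeDefinitions may only describe key attributes, whose types are limited to 'S', 'N' and 'B'. alert_sent is a plain item attribute, so it is simply not declared at table-creation time (BOOL is not a valid key type anyway).

```python
# Corrected parameters for dynamodb.create_table (a sketch; no AWS call here).
create_params = {
    "TableName": "FsxNMonitorFsx",
    "KeySchema": [
        {"AttributeName": "filesystem_id", "KeyType": "HASH"},  # partition key
    ],
    "AttributeDefinitions": [
        # Only key attributes belong here; valid AttributeTypes are S, N, B.
        {"AttributeName": "filesystem_id", "AttributeType": "S"},
    ],
    "ProvisionedThroughput": {"ReadCapacityUnits": 10, "WriteCapacityUnits": 10},
}
# table = dynamodb.create_table(**create_params)
print(sorted(create_params))
```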
|
<python><aws-lambda><amazon-dynamodb>
|
2023-01-27 06:48:21
| 1
| 488
|
user2023
|
75,254,698
| 5,807,808
|
Pass dataframe column values as strings in a duckdb SQL query using a loop
|
<p>I am using duckdb to run a SQL query on the following dataframe <code>df</code>:</p>
<p><a href="https://i.sstatic.net/iII8h.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iII8h.png" alt="enter image description here" /></a></p>
<p>In this SQL query, I need to pass the values from the dataframe column col3 using a loop:</p>
<pre class="lang-py prettyprint-override"><code>aa = ps.sqldf("select * from result where col3= 'id1'")
print(aa)
</code></pre>
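A sketch of the loop (the frame here is made up; with duckdb the value would be passed as a bound parameter, e.g. duckdb.execute("select * from result where col3 = ?", [val]).df(), rather than formatted into the query string):

```python
import pandas as pd

result = pd.DataFrame({"col1": [1, 2, 3],
                       "col2": ["x", "y", "z"],
                       "col3": ["id1", "id2", "id1"]})

# Iterate over the distinct col3 values; each iteration selects the rows
# for one id, which is what the hard-coded 'id1' query did for one value.
for val in result["col3"].unique():
    subset = result[result["col3"] == val]
    print(val, len(subset))
```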
|
<python><pandas><dataframe>
|
2023-01-27 06:24:22
| 1
| 401
|
peter
|
75,254,643
| 6,212,530
|
Why are TypedDict types without field and with NotRequired field incompatible?
|
<p>I am trying to create a few functions which will return values of different TypedDict types. Most of the fields in them will be the same, so I want to generate the base dictionary with the same function in all cases. However, I am getting stumped trying to type this correctly.</p>
<p>My idea was to create base type <code>Parent</code> and inherit from it, adding only <code>NotRequired</code> fields.</p>
<pre><code>from typing_extensions import NotRequired, TypedDict
class Parent(TypedDict):
parent_field: str
class Child(Parent):
child_field: NotRequired[bool | None]
def create_parent() -> Parent:
return {"parent_field": "example"}
child: Child = create_parent()
# Error:
# Expression of type "Parent" cannot be assigned to declared type "Child"
# "child_field" is missing from "Type[Parent]"
</code></pre>
<p>However, this fails since the field <code>child_field</code> is reported missing, even though its type is <code>NotRequired</code>. Why does it fail, and how can I avoid this problem?</p>
<p>EDIT: I am using pylance (so pyright) for typechecking.</p>
<p><code>mypy</code> (<a href="https://mypy-play.net/?mypy=master&python=3.10&gist=15a201f20618e94330221df384bdf738" rel="nofollow noreferrer">playground</a>) gives similar error message:</p>
<blockquote>
<p>Incompatible types in assignment (expression has type "Parent", variable has type "Child") [assignment]</p>
</blockquote>
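The rejection is by design: TypedDicts are structural, and a value typed as Parent might at runtime be some other subclass carrying a child_field of an incompatible type, so the checker cannot safely narrow Parent to Child. One way around it (a sketch) is to give the builder the precise target type, since a Child is allowed to omit its NotRequired field:

```python
try:  # NotRequired is in typing from Python 3.11; fall back otherwise
    from typing import NotRequired, TypedDict
except ImportError:
    from typing_extensions import NotRequired, TypedDict

class Parent(TypedDict):
    parent_field: str

class Child(Parent):
    child_field: NotRequired[bool]

def create_child() -> Child:
    # Valid as Child: child_field may be absent because it is NotRequired.
    return {"parent_field": "example"}

child: Child = create_child()
print(child)
```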
|
<python><mypy><python-typing><python-3.10><pylance>
|
2023-01-27 06:14:13
| 1
| 1,028
|
Matija Sirk
|
75,254,591
| 317,797
|
Create Altair Chart of Value Counts of Multiple Columns
|
<p>Using Altair charting, how can I create a chart of value_counts() of multiple columns? This is easily done by matplotlib. How can the identical chart be created using Altair?</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
df = pd.DataFrame({'Col1':[0,1,2,3],
'Col2':[0,1,2,2],
'Col3':[2,3,3,3]})
pd.DataFrame({col:df[col].value_counts(normalize=True) for col in df}).plot(kind='bar')
</code></pre>
<p><a href="https://i.sstatic.net/Yifks.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yifks.png" alt="enter image description here" /></a></p>
|
<python><bar-chart><altair>
|
2023-01-27 06:04:33
| 1
| 9,061
|
BSalita
|
75,254,524
| 3,713,236
|
Preserve original column name in pd.get_dummies()
|
<p>I have a list of columns whose values are all strings. I need to one hot encode them with <code>pd.get_dummies()</code>.</p>
<p>I want to keep the original name of those columns along with the value.
So let's say I have a column named <code>Street</code>, and its values are <code>Paved</code> and <code>Not Paved</code>.
After running <code>get_dummies()</code>, I would like the 2 resulting columns to be entitled <code>Street_Paved</code> and <code>Street_Not_Paved</code>. Is this possible? Basically the format for the <code>prefix</code> parameter is <code>{i}_{value}</code>, with <code>i</code> referring to the <code>for i in cols</code> common nomenclature.</p>
<p>My code is:</p>
<pre><code>cols = ['Street', 'Alley', 'CentralAir', 'Utilities', 'LandSlope', 'PoolQC']
pd.get_dummies(df, columns = cols, prefix = '', prefix_sep = '')
</code></pre>
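It may be enough to drop the prefix arguments entirely: get_dummies already uses each source column's own name as the prefix by default, producing Street_Paved style names. (Spaces inside values are kept, as in Street_Not Paved, and would need a separate rename if underscores are required.) A small sketch:

```python
import pandas as pd

df = pd.DataFrame({"Street": ["Paved", "Not Paved"], "CentralAir": ["Y", "N"]})

# Default prefix behaviour: <column name>_<value>
out = pd.get_dummies(df, columns=["Street", "CentralAir"])
print(list(out.columns))
```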
|
<python><pandas><dataframe>
|
2023-01-27 05:53:28
| 1
| 9,075
|
Katsu
|
75,254,325
| 2,989,642
|
shutil.move() not working after reading object with pywin32
|
<p>I've got a script that is intended to sort my photo/video collection (Windows). The photos work fine as they are sortable by EXIF which is easily accessed.</p>
<p>Videos are harder because I have to get the file's "Media Creation Date" which is readable by only pywin32, to my understanding. However, once I've accessed the media creation date, <code>shutil.move()</code> does not work. It throws no error, it just runs indefinitely without progress until I manually kill the script:</p>
<p>Here's the snippet in question:</p>
<pre><code>from datetime import datetime
import exifread
import os
from pathlib import Path
import shutil
from win32com.propsys import propsys, pscon
# get the file list, do stuff with photos, etc
# f is the file
# cr is the path root to which it will be moved
elif str(f).lower().endswith(("mp4", "mov")):
props = propsys.SHGetPropertyStoreFromParsingName(f)
dt = props.GetValue(pscon.PKEY_Media_DateEncoded).GetValue()
year, month = str(dt.year), str(dt.month).zfill(2)
new_fn = dt.strftime("%Y-%m-%d_%H%M%S")
new_fn = f"{new_fn}{os.path.splitext(f)[1]}"
move_path = os.path.join(cr, year, month, new_fn)
print(f"SRC: {f}")
print(f"DESTINATION: {move_path}")
print("----------------------------------")
shutil.move(f, move_path)
</code></pre>
<p>It prints the source correctly, and the destination correctly, but does not move the file. I have also tried <code>os.rename()</code> and <code>os.replace()</code> with the same result, which suggests that perhaps the <code>propsys</code> method still has a lock on the file? How do I free up this file for moving?</p>
|
<python><windows><pywin32><shutil>
|
2023-01-27 05:17:11
| 1
| 549
|
auslander
|
75,254,132
| 14,808,637
|
Why is numpy.where() returning 2 arrays?
|
<p>I am confused with <code>numpy</code> function <code>np.where()</code>. For example if we have:</p>
<pre><code>b = np.array([[1, 2, 3, 4, 5,6], [1,0,3,0,9,6]])
f = np.where(b)
</code></pre>
<p><strong>output</strong></p>
<pre><code>print(f)
(array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1]), array([0, 1, 2, 3, 4, 5, 0, 2, 4, 5]))
</code></pre>
<p>Here, array <code>b</code> contains 2 rows and 6 columns. I am unsure why <code>np.where</code> outputs two arrays, though the 2-D input might be the reason. However, each array contains ten elements; where does that number come from?</p>
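For reference, np.where with a single argument behaves as np.nonzero: it returns one index array per dimension of the input, and each array has one entry per non-zero element. b holds 12 values of which 2 are zero, hence two arrays (row indices, column indices) of 10 entries each. A sketch showing how the pair is used:

```python
import numpy as np

b = np.array([[1, 2, 3, 4, 5, 6],
              [1, 0, 3, 0, 9, 6]])

# One index array per dimension: rows[i], cols[i] addresses the i-th
# non-zero element of b.
rows, cols = np.where(b)
print(b[rows, cols])  # the 10 non-zero values
```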
|
<python><arrays><numpy>
|
2023-01-27 04:38:36
| 2
| 774
|
Ahmad
|
75,254,131
| 2,397,119
|
Unable to Click to next page while crawling selenium
|
<p>I'm trying to scrape the list of services from <a href="https://www.tamm.abudhabi/en/life-events/individual/HousingProperties" rel="nofollow noreferrer">this site</a>, but I am not able to click through to the next page.<br />
This is what I've tried so far using selenium & bs4:</p>
<pre class="lang-py prettyprint-override"><code>#attempt1
next_pg_btn = browser.find_elements(By.CLASS_NAME, 'ui-lib-pagination_item_nav')
next_pg_btn.click() # nothing happens
#attemp2
browser.find_element(By.XPATH, "//div[@role = 'button']").click() # nothing happens
#attempt3 - saw in some stackoverflow post that sometimes we need to scroll to the
#bottom of page to have the button clickable, so tried that
browser.execute_script("window.scrollTo(0,2500)")
browser.find_element(By.XPATH, "//div[@role = 'button']").click() # nothing happens
</code></pre>
<p>I'm not so experienced with scrapping, pls advice how to handle this and where I'm going wrong.<br />
Thanks</p>
|
<python><selenium><selenium-webdriver><web-scraping><webdriverwait>
|
2023-01-27 04:38:21
| 1
| 1,241
|
Aman Singh
|
75,254,029
| 11,803,187
|
Create new dataframe fields from row calculations
|
<p>I'd like to create some new columns based on calculations over each row's values.</p>
<p>For example, input:</p>
<pre><code>data = {"c1": [10], "c2": [20], "c3":[30], "c4":[40], "c5":[50], "c6":[10]}
df = pd.DataFrame(data=data)
</code></pre>
<p>Let us say we take values from series=c2:c6, [20 30 40 50 10]</p>
<pre><code>new_column1= np.mean(series[0:2]). # np.mean([20,30]) = 25
new_column2 = np.mean(series[2:4]) # np.mean(40,50) = 45
new_column3 = new_column1+new_column2 # 70
</code></pre>
<p>output:</p>
<pre><code> c1 c2 c3 c4 c5 c6 new_column1 new_column_2 new_column_3
0 10 20 30 40 50 10. 25. 45 70
</code></pre>
<p>I am looking for an efficient way (a list comprehension or an apply function?) to do this instead of iterrows.</p>
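A sketch of a vectorised version (no iterrows; each line operates on every row at once):

```python
import pandas as pd

df = pd.DataFrame({"c1": [10], "c2": [20], "c3": [30],
                   "c4": [40], "c5": [50], "c6": [10]})

# series[0:2] of c2:c6 is (c2, c3); series[2:4] is (c4, c5).
df["new_column1"] = df[["c2", "c3"]].mean(axis=1)
df["new_column2"] = df[["c4", "c5"]].mean(axis=1)
df["new_column3"] = df["new_column1"] + df["new_column2"]
print(df)
```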
|
<python><pandas><dataframe>
|
2023-01-27 04:14:39
| 1
| 547
|
tudou
|
75,254,024
| 11,998,382
|
Expression that returns mutated list
|
<p>I'm looking for a single expression that mutates an element and returns the modified list.</p>
<p>The following is a bit verbose</p>
<pre><code># key=0; value=3; rng=[1,2]
[(v if i != key else value) for i, v in enumerate(rng)]
</code></pre>
<hr />
<p><strong>Edit</strong>:</p>
<p>I'm looking for a way to inline the following function in a single expression</p>
<pre><code>def replace(rng: List, key: int, value):
a = list(rng)
a[key] = value
return a
</code></pre>
<hr />
<p><strong>Edit 2</strong>: the code that actually motivated this question</p>
<pre><code>class TextDecoder(nn.Module):
def forward(self, x: Tensor, kv_cache: Tensor):
        kv_cache_write = torch.zeros((_ := list(kv_cache.shape)).__setitem__(-2, x.shape[-1]) or _)
...
</code></pre>
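<p>For reference, the <code>replace</code> helper above can be collapsed into a single slicing expression (a sketch; it builds a new list rather than mutating in place, and assumes a non-negative <code>key</code>):</p>

```python
rng, key, value = [1, 2], 0, 3

# single expression: everything before key, the new value, everything after key
result = [*rng[:key], value, *rng[key + 1:]]
print(result)  # [3, 2]
```

<p>Unlike in-place assignment, the original <code>rng</code> is left untouched, which matches the semantics of the <code>replace</code> function.</p>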
|
<python><functional-programming><python-3.8><python-assignment-expression>
|
2023-01-27 04:13:48
| 3
| 3,685
|
Tom Huntington
|
75,253,943
| 12,574,341
|
Selenium wait for presence of nested element query
|
<p>In Selenium you can wait for a DOM element to load using <code>WebDriverWait</code> and <code>EC.presence_of_element()</code></p>
<pre class="lang-py prettyprint-override"><code>WebDriverWait(driver, 5).until(
EC.presence_of_element_located((By.TAG_NAME, 'button')))
</code></pre>
<p>However, this queries the entire DOM. This does not suit scenarios where you have multiple <code>button</code>s on the page and are querying for a specific one. The <code>presence_of_element_located()</code> will trigger on the first instance of a <code>button</code></p>
<pre class="lang-html prettyprint-override"><code><body>
<div class="sign-in">
<button>Sign In!</button>
</div>
<div class="controls">
<button>play!</button> <!-- The button we care about-->
</div>
</body>
</code></pre>
<p>I want to perform a <code>WebDriverWait</code> for the <code>button</code> inside of <code>.controls</code> specifically.</p>
<p>Selenium allows you to perform nested queries by chaining <code>.find_element()</code></p>
<pre class="lang-py prettyprint-override"><code>control_button = driver.find_element(
By.CLASS_NAME, 'controls').find_element(
By.TAG_NAME, 'button')
</code></pre>
<p>is there a comparable technique for <code>presence_of_element()</code>?</p>
|
<python><selenium>
|
2023-01-27 03:57:52
| 1
| 1,459
|
Michael Moreno
|
75,253,893
| 19,980,284
|
Find overlapping time intervals based on condition in another column pandas
|
<p>I have cleaned up a data set to get it into this format. The <code>assigned_pat_loc</code> represents a room number, so I am trying to identify when two different patients (<code>patient_id</code>) are in the same room at the same time; i.e., overlapping <code>start_time</code> and <code>end_time</code> between rows with the same <code>assigned_pat_loc</code> but different <code>patient_id</code>'s. The <code>start_time</code> and <code>end_time</code> represent the times that that particular patient was in that room. So if those times are overlapping between two patients in the same room, it means that they shared the room together. This is what I'm ultimately looking for. Here is the base data set from which I want to construct these changes:</p>
<pre class="lang-py prettyprint-override"><code> patient_id assigned_pat_loc start_time end_time
0 19035648 SICU^6108 2009-01-10 18:27:48 2009-02-25 15:45:54
1 19039244 85^8520 2009-01-02 06:27:25 2009-01-05 10:38:41
2 19039507 55^5514 2009-01-01 13:25:45 2009-01-01 13:25:45
3 19039555 EIAB^EIAB 2009-01-15 01:56:48 2009-02-23 11:36:34
4 19039559 EIAB^EIAB 2009-01-16 11:24:18 2009-01-19 18:41:33
... ... ... ... ...
140906 46851413 EIAB^EIAB 2011-12-31 22:28:38 2011-12-31 23:15:49
140907 46851422 EIAB^EIAB 2011-12-31 21:52:44 2011-12-31 22:50:08
140908 46851430 4LD^4LDX 2011-12-31 22:41:10 2011-12-31 22:44:48
140909 46851434 EIC^EIC 2011-12-31 23:45:22 2011-12-31 23:45:22
140910 46851437 EIAB^EIAB 2011-12-31 22:54:40 2011-12-31 23:30:10
</code></pre>
<p>I am thinking I should approach this with a groupby of some sort, but I'm not sure exactly how to implement. I would show an attempt but it took me about 6 hours to even get to this point so I would appreciate even just some thoughts.</p>
<p><strong>EDIT</strong></p>
<p>Example of original data:</p>
<pre><code>id Date Time assigned_pat_loc prior_pat_loc Activity
1 May/31/11 8:00 EIAB^EIAB^6 Admission
1 May/31/11 9:00 8w^201 EIAB^EIAB^6 Transfer
1 Jun/8/11 15:00 8w^201 Discharge
2 May/31/11 5:00 EIAB^EIAB^4 Admission
2 May/31/11 7:00 10E^45 EIAB^EIAB^4 Transfer
2 Jun/1/11 1:00 8w^201 10E^45 Transfer
2 Jun/1/11 8:00 8w^201 Discharge
3 May/31/11 9:00 EIAB^EIAB^2 Admission
3 Jun/1/11 9:00 8w^201 EIAB^EIAB^2 Transfer
3 Jun/5/11 9:00 8w^201 Discharge
4 May/31/11 9:00 EIAB^EIAB^9 Admission
4 May/31/11 7:00 10E^45 EIAB^EIAB^9 Transfer
4 Jun/1/11 8:00 10E^45 Death
</code></pre>
<p>Example of desired output:</p>
<pre><code>id r_id start_date start_time end_date end_time length location
1 2 Jun/1/11 1:00 Jun/1/11 8:00 7 8w^201
1 3 Jun/1/11 9:00 Jun/5/11 9:00 96 8w^201
2 4 May/31/11 7:00 Jun/1/11 1:00 18 10E^45
2 1 Jun/1/11 1:00 Jun/1/11 8:00 7 8w^201
3 1 Jun/1/11 9:00 Jun/5/11 9:00 96 8w^201
</code></pre>
<p>Where <code>r_id</code> is the "other" patient who is sharing the same room as another one, and <code>length</code> is the number of time in hours that the room was shared.</p>
<p>In this example:</p>
<ol>
<li>r_id is the name of the variable you will generate for the id of the other patient.</li>
<li>patient 1 had two room-sharing episodes, both in 8w^201 (room 201 of unit 8w); he shared the room with patient 2 for 7 hours (1 am to 8 am on June 1) and with patient 3 for 96 hours (9 am on June 1 to 9 am on June 5).</li>
<li>Patient 2 also had two room-sharing episodes. The first one was with patient 4 in 10E^45 (room 45 of unit 10E) and lasted 18 hours (7 am May 31 to 1 am June 1); the second one is the 7-hour episode with patient 1 in 8w^201.</li>
<li>Patient 3 had only one room-sharing episode with patient 1 in room 8w^201, lasting 96 hours.</li>
<li>Patient 4, also, had only one room-sharing episode, with patient 2 in room 10E^45, lasting 18 hours.</li>
<li>Note: room-sharing episodes are listed twice, once for each patient.</li>
</ol>
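<p>One possible approach (a sketch on a tiny hypothetical frame, not the full data): self-merge on <code>assigned_pat_loc</code>, drop same-patient pairs, and keep pairs where each stay starts before the other ends:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "assigned_pat_loc": ["8w^201", "8w^201", "10E^45"],
    "start_time": pd.to_datetime(["2011-05-31 09:00", "2011-06-01 01:00", "2011-05-31 07:00"]),
    "end_time": pd.to_datetime(["2011-06-08 15:00", "2011-06-01 08:00", "2011-06-01 01:00"]),
})

pairs = df.merge(df, on="assigned_pat_loc", suffixes=("", "_r"))
pairs = pairs[pairs["patient_id"] != pairs["patient_id_r"]]

# two intervals overlap iff each one starts before the other ends
overlap = pairs[(pairs["start_time"] < pairs["end_time_r"])
                & (pairs["start_time_r"] < pairs["end_time"])].copy()

overlap["shared_start"] = overlap[["start_time", "start_time_r"]].max(axis=1)
overlap["shared_end"] = overlap[["end_time", "end_time_r"]].min(axis=1)
overlap["length"] = (overlap["shared_end"] - overlap["shared_start"]).dt.total_seconds() / 3600
print(overlap[["patient_id", "patient_id_r", "assigned_pat_loc", "length"]])
```

<p>Each sharing episode appears twice, once per patient, matching the desired output. For very large data the self-merge can blow up per room, so chunking by room may be needed.</p>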
|
<python><pandas><dataframe><group-by><overlap>
|
2023-01-27 03:47:28
| 2
| 671
|
hulio_entredas
|
75,253,793
| 16,971,617
|
Is global variable static in python
|
<p>This is my utils.py</p>
<pre><code>detector = cv2.mcc.CCheckerDetector_create()
def process():
print(detector)
</code></pre>
<p>main.py</p>
<pre><code>for i in range(100):
process()
</code></pre>
<p>This question might sound stupid.
If I put <code>detector</code> inside <code>process()</code>, it would be initialized every time I call <code>process()</code>, so I made it a global (module-level) variable instead. Will it be created only once?</p>
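<p>Yes — module-level code runs exactly once per process, when the module is first imported, so a module-level <code>detector</code> is created once and reused by every <code>process()</code> call. If you want that to be explicit (or to defer the expensive construction), a common pattern is lazy initialization — a sketch, with <code>object()</code> standing in for the OpenCV detector:</p>

```python
_detector = None  # created at most once per process

def get_detector():
    global _detector
    if _detector is None:
        _detector = object()  # stand-in for cv2.mcc.CCheckerDetector_create()
    return _detector

def process():
    print(get_detector())

# every call sees the same object
assert get_detector() is get_detector()
```

<p>Note this is once per <em>process</em>: a fresh interpreter (or multiprocessing worker) re-imports the module and re-creates the detector.</p>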
|
<python><variables>
|
2023-01-27 03:22:25
| 1
| 539
|
user16971617
|
75,253,751
| 169,992
|
How to draw a line in the Poincaré disk using geoopt for Python?
|
<p>I am trying to learn the API to geoopt and Python at once, but have this example so far:</p>
<pre><code>import geoopt
import torch
import matplotlib.pyplot as plt
# Create the Poincare ball model
poincare = geoopt.PoincareBall()
# Define two points in the hyperbolic space
point1 = torch.tensor([0.1, 0.2])
point2 = torch.tensor([0.3, 0.4])
#Map the points to the tangent space at the identity element
point1_tangent = poincare.expmap(point1, torch.tensor([1.0,0.0,0.0,1.0]))
point2_tangent = poincare.expmap(point2, torch.tensor([1.0,0.0,0.0,1.0]))
# Map the points back to the hyperbolic space
point1_hyperbolic = poincare.logmap(point1_tangent, torch.tensor([1.0,0.0,0.0,1.0]))
point2_hyperbolic = poincare.logmap(point2_tangent, torch.tensor([1.0,0.0,0.0,1.0]))
# Transform the points using the Poincare ball model
transformed_point1 = poincare.mobius_add(torch.tensor([1.0,0.0,0.0,1.0]), point1_hyperbolic)
transformed_point2 = poincare.mobius_add(torch.tensor([1.0,0.0,0.0,1.0]), point2_hyperbolic)
# Plot the line connecting the two points
plt.plot([transformed_point1[0], transformed_point2[0]], [transformed_point1[1], transformed_point2[1]])
plt.show()
</code></pre>
<p>This doesn't quite work, any ideas how to get a simple line drawn in the hyperbolic plane using <a href="https://github.com/geoopt/geoopt" rel="nofollow noreferrer">geoopt</a>, something as simple as this (or simpler) is all I'm trying to do:</p>
<p><a href="https://i.sstatic.net/IaBT4.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IaBT4.jpg" alt="enter image description here" /></a></p>
|
<python><hyperbolic-function>
|
2023-01-27 03:13:01
| 0
| 80,366
|
Lance Pollard
|
75,253,564
| 7,984,318
|
SQLAlchemy pass parameters with like '%STRING%' in raw sql
|
<p>I have a sql:</p>
<pre><code>select *
from my_table
where exists (
select
from unnest(procedure) elem
where elem like '%String%'
)
</code></pre>
<p>It works well when directly execute in database ide.</p>
<p>My question is: how can I set <code>'String'</code> as a parameter?</p>
<p>I have tried:</p>
<pre><code>String = 'hello world'
my_query = '''
select *
from my_table
where exists (
select
from unnest(procedure) elem
where elem like '% :String %'
)
'''
result=connection.execute(text(my_query),String=String)
</code></pre>
<p>However, it returns nothing. I think maybe the <code>%</code> wildcard in <code>LIKE</code> is treated specially?</p>
<p>Can anyone help?</p>
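<p>The <code>:String</code> placeholder inside the quoted <code>'% :String %'</code> is literal text, so the bind never happens. The usual fix is <code>like :pat</code> with the wildcards placed in the bound value itself, e.g. <code>connection.execute(text(my_query), {"pat": f"%{String}%"})</code>. A sketch of the principle with the stdlib <code>sqlite3</code> driver (the Postgres/SQLAlchemy case works the same way):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table t (name text)")
conn.executemany("insert into t values (?)", [("hello world",), ("goodbye",)])

needle = "hello"
# the wildcards live in the bound value, not in the SQL string
rows = conn.execute("select name from t where name like ?", (f"%{needle}%",)).fetchall()
print(rows)  # [('hello world',)]
```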
|
<python><sql><database><sqlalchemy><flask-sqlalchemy>
|
2023-01-27 02:25:45
| 1
| 4,094
|
William
|
75,253,431
| 8,888,469
|
How to replace a part of column value with values from another two columns based on a condition in pandas
|
<p>I have a dataframe <code>df</code> as shown below. I want to replace all the <code>temp_id</code> column values that contain an underscore (<code>_</code>) with a new value: the numeric part of <code>temp_id</code> + the <code>city</code> + <code>country</code> column values.</p>
<p><strong>df</strong></p>
<pre><code> temp_id city country
12225IND DELHI IND
14445UX_TY AUSTIN US
56784SIN BEDOK SIN
72312SD_IT_UZ NEW YORK US
47853DUB DUBAI UAE
80976UT_IS_SZ SYDENY AUS
89012TY_JP_IS TOKOYO JPN
51309HJ_IS_IS
42087IND MUMBAI IND
</code></pre>
<p><strong>Expected Output</strong></p>
<pre><code>temp_id city country
12225IND DELHI IND
14445AUSTINUS AUSTIN US
56784SIN BEDOK SIN
72312NEWYORKUS NEW YORK US
47853DUB DUBAI UAE
80976SYDENYAUS SYDENY AUS
89012TOKOYOJPN TOKOYO JPN
51309HJ_IS_IS
42087IND MUMBAI IND
</code></pre>
<p>How can this be done in pandas/Python?</p>
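<p>A possible sketch: select the rows whose <code>temp_id</code> contains <code>_</code> and whose <code>city</code> is non-empty, then rebuild the id from the leading digits plus <code>city</code> (spaces stripped) and <code>country</code>. It assumes the blank rows hold empty strings; if they are <code>NaN</code>, add a <code>fillna("")</code> first:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "temp_id": ["12225IND", "14445UX_TY", "72312SD_IT_UZ", "51309HJ_IS_IS"],
    "city": ["DELHI", "AUSTIN", "NEW YORK", ""],
    "country": ["IND", "US", "US", ""],
})

# only rewrite ids that contain "_" AND have a city to build the new id from
mask = df["temp_id"].str.contains("_") & df["city"].ne("")
digits = df["temp_id"].str.extract(r"^(\d+)")[0]
df.loc[mask, "temp_id"] = digits + df["city"].str.replace(" ", "") + df["country"]
print(df["temp_id"].tolist())
# ['12225IND', '14445AUSTINUS', '72312NEWYORKUS', '51309HJ_IS_IS']
```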
|
<python><python-3.x><pandas><regex><dataframe>
|
2023-01-27 01:57:30
| 2
| 933
|
aeapen
|
75,253,235
| 17,243,835
|
Scraping Dynamic Webpages w/ a date selector
|
<p>I am looking to use the requests module in python to scrape:</p>
<pre><code>https://www.lines.com/betting/nba/odds
</code></pre>
<p>This site contains historical betting odds data.</p>
<p>The main issue is that there is a date selector on this page, and I cannot seem to find where the date value is stored. I've tried looking in the headers and the cookies, and still can't find where the date is stored, in order to programmatically change it and scrape data from different dates.</p>
<p>Looking on the network tab, it seems like it is pulling this data from:</p>
<pre><code>https://www.lines.com/betting/nba/odds/best-line?date=2023-01-23'
</code></pre>
<p>However, even when using the headers, I am unable to access this URL. It just returns the data from:</p>
<pre><code>https://www.lines.com/betting/nba/odds
</code></pre>
<p>which is the current date.</p>
<p>I am looking to do so without using a different method (e.g. Selenium), which seems pretty straightforward (Open Page -> Download Data -> Click Previous Date -> Repeat).</p>
<p>Here is my code to do so:</p>
<pre><code>import requests
url = 'https://www.lines.com/betting/nba/odds/'
requests.get(url).text
</code></pre>
<p>Thanks!</p>
|
<python><http><web-scraping><request>
|
2023-01-27 01:15:40
| 1
| 355
|
drew wood
|
75,253,022
| 8,682,074
|
error while parsing JSON file, probally a hidden value on the JSON content
|
<p>I have this JSON file:</p>
<p><a href="https://drive.google.com/file/d/1zh_fJJNWs9GaPnlLZ459twSubsYkzMi5/view?usp=share_link" rel="nofollow noreferrer">https://drive.google.com/file/d/1zh_fJJNWs9GaPnlLZ459twSubsYkzMi5/view?usp=share_link</a></p>
<p>It looks normal at first, even to online <strong>JSON schema validators</strong>.
However, when parsing it locally I get an error.
I tried it with Python, Node.js and Go, but it's not working.
I think it probably has some hidden value that makes it impossible to parse.</p>
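<p>A frequent culprit is an invisible character — a UTF-8 BOM or an unescaped control character — that browsers and online validators quietly tolerate. <code>json.JSONDecodeError.pos</code> pinpoints the exact offset; a sketch with a hypothetical BOM-prefixed string:</p>

```python
import json

raw = "\ufeff{\"ok\": true}"  # BOM in front — looks normal, fails to parse

try:
    json.loads(raw)
except json.JSONDecodeError as e:
    # e.pos gives the offending offset; repr() makes hidden characters visible
    print(e.msg, "at", e.pos, "->", repr(raw[max(0, e.pos - 5):e.pos + 5]))
    cleaned = raw.lstrip("\ufeff")
    data = json.loads(cleaned)
    print(data)
```

<p>For a file, opening it with <code>encoding="utf-8-sig"</code> also strips a BOM automatically.</p>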
|
<python><node.js><json><go>
|
2023-01-27 00:31:37
| 1
| 2,916
|
John Balvin Arias
|
75,252,933
| 4,506,929
|
Merging several variables of an xarray.Dataset into one using a new dimension
|
<p>I have a Dataset with several variables named something like <code>t1</code>, <code>t2</code>, <code>t3</code>, etc. I'm looking for a simple function to merge them all into one variable <code>t</code> through the use of an extra dimension.</p>
<p>Basically I want the output that I'm getting in the MWE below:</p>
<pre class="lang-py prettyprint-override"><code>import xarray as xr
import numpy as np
ds = xr.Dataset({"t1": (("y", "x"), np.random.rand(6).reshape(2, 3)),
"t2": (("y", "x"), np.random.rand(6).reshape(2, 3)),
"t3": (("y", "x"), np.random.rand(6).reshape(2, 3)),
}, coords={"y": [0, 1], "x": [10, 20, 30]},)
t_list = []
for i, v in enumerate(ds.variables.keys()):
if v.startswith("t"):
t_list.append(ds[v].expand_dims("α").assign_coords(α=[i]))
ds = ds.drop(v)
ds["t"] = xr.concat(t_list, dim="α")
</code></pre>
<p>This pretty much achieves what I want, but I'm pretty sure there's an xarray function for it already. Or at least I'm convinced it can be done in fewer lines.</p>
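<p>There is indeed a built-in: <code>Dataset.to_array</code> stacks data variables along a new dimension, which replaces the whole loop — a sketch:</p>

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"t1": (("y", "x"), np.random.rand(2, 3)),
                 "t2": (("y", "x"), np.random.rand(2, 3)),
                 "t3": (("y", "x"), np.random.rand(2, 3))},
                coords={"y": [0, 1], "x": [10, 20, 30]})

# stack the t* variables along a new "α" dimension (its coord holds the names)
t = ds[["t1", "t2", "t3"]].to_array(dim="α")
print(t.dims, t.shape)  # ('α', 'y', 'x') (3, 2, 3)
```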
|
<python><python-xarray>
|
2023-01-27 00:17:32
| 1
| 3,547
|
TomCho
|
75,252,878
| 17,724,172
|
string .join method confusion
|
<p>I tried to join a sample string in three ways, first entered by the code and then entered by user input. I got different results.</p>
<p>#Why isn't the output the same for these (in python 3.10.6):</p>
<pre><code>sampleString = 'Fred','you need a nap! (your mother)'
ss1 = ' - '.join(sampleString)
print(ss1), print()
sampleString = input('please enter something: ') #entered 'Fred','you need a nap! (your mother)'
ss2 = ' - '.join(sampleString)
print(ss2)
sampleString = input(['Fred','you need a nap! (your mother)'])
ss2 = ' - '.join(sampleString)
print(ss2)
</code></pre>
<p>output:</p>
<pre><code>Fred - you need a nap! (your mother)
please enter something: 'Fred','you need a nap! (your mother)'
' - F - r - e - d - ' - , - ' - y - o - u - - n - e - e - d - - a - - n - a - p - ! - - ( - y - o - u - r - - m - o - t - h - e - r - ) - '
['Fred', 'you need a nap! (your mother)']
</code></pre>
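<p>What happened: the first <code>sampleString</code> is a <em>tuple</em> of two strings, while <code>input()</code> always returns a single string, and <code>str.join</code> over a string iterates its characters. A sketch:</p>

```python
# 'a', 'b' creates a tuple of two strings; join inserts the separator between items
sample_tuple = "Fred", "you need a nap!"
assert " - ".join(sample_tuple) == "Fred - you need a nap!"

# join over a plain string iterates its characters
assert " - ".join("Fred") == "F - r - e - d"

# input() returns ONE string, so split it into items first
typed = "Fred,you need a nap!"
assert " - ".join(typed.split(",")) == "Fred - you need a nap!"
```

<p>The third call, <code>input(['Fred', ...])</code>, just used the list as the prompt; whatever was typed at that prompt is again one string.</p>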
|
<python><string><join>
|
2023-01-27 00:06:34
| 2
| 418
|
gerald
|
75,252,853
| 2,706,344
|
Set column with different time zones as index
|
<p>I have a DataFrame with time values from different timezones. See here:</p>
<p><a href="https://i.sstatic.net/gO3Dr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gO3Dr.png" alt="enter image description here" /></a></p>
<p>The start of the data is the usual time and the second half is daylight savings time. As you can see I want to convert it to a datetime column but because of the different time zones it doesn't work. My goal is to set this column as index. How can I do that?</p>
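<p>Mixed UTC offsets (standard time plus DST) keep the column as <code>object</code> unless they are normalized; <code>pd.to_datetime(..., utc=True)</code> converts everything to UTC so the column can become a proper <code>DatetimeIndex</code>, after which <code>tz_convert</code> can restore the local zone. A sketch with hypothetical values:</p>

```python
import pandas as pd

df = pd.DataFrame({"time": ["2021-03-27 12:00:00+01:00",   # standard time
                            "2021-03-29 12:00:00+02:00"],  # daylight saving time
                   "value": [1, 2]})

df["time"] = pd.to_datetime(df["time"], utc=True)  # normalize mixed offsets to UTC
df = df.set_index("time")
print(df.index.tz)  # UTC
# optionally: df.index = df.index.tz_convert("Europe/Berlin")
```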
|
<python><pandas>
|
2023-01-27 00:01:42
| 1
| 4,346
|
principal-ideal-domain
|
75,252,830
| 420,157
|
Python Swig interface for C function allocating a list of structures
|
<p>I'm trying to get the following C function to be exposed as a python interface.</p>
<pre><code>void myfunc(struct MyStruct** list, int* size) {
int n = 10;
*size = n;
*list = (struct MyStruct*) malloc(n * sizeof(struct MyStruct));
for (int i = 0; i < n; i++) {
(*list)[i].x = i;
(*list)[i].y = i * 0.1;
}
}
</code></pre>
<p>Unfortunately the SWIG documentation didn't help me narrow down a solution. Could you please help with any pointers or code references on how to write a corresponding SWIG interface file so this function can be called from Python?</p>
<p>It would be a bonus if I could access this as a list of objects in python.</p>
|
<python><c><python-3.x><list><swig>
|
2023-01-26 23:57:15
| 2
| 777
|
Maverickgugu
|
75,252,747
| 2,339,664
|
Regular expression for YYYY-MM-DDTHH:MM:SS is not detecting the presence of .00Z
|
<p>I have the following regular expression to match the date time format:</p>
<pre><code>start_time = "1970-01-08T00:38:45"
regular_exp = '\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}'
import re
if not re.match(regular_exp, start_time):
print('error')
</code></pre>
<p>If I use the date/time: start_time = "1970-01-08T00:38:45:<strong>00Z</strong>" <-- Note the <strong>00Z</strong></p>
<p>the regular expression is not detecting it.</p>
<p>How can I ensure that the format with the <strong>00Z</strong> does not get through?</p>
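<p><code>re.match</code> anchors only at the start of the string, so trailing text like <code>:00Z</code> still matches. <code>re.fullmatch</code> (or a trailing <code>\Z</code> anchor) requires the pattern to cover the whole string — a sketch:</p>

```python
import re

pattern = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}")

assert pattern.fullmatch("1970-01-08T00:38:45")              # accepted
assert pattern.fullmatch("1970-01-08T00:38:45:00Z") is None  # rejected

# equivalent with match: anchor the end explicitly
anchored = re.compile(r"\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\Z")
assert anchored.match("1970-01-08T00:38:45:00Z") is None
```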
|
<python><regex><python-re>
|
2023-01-26 23:38:33
| 0
| 4,917
|
Harry Boy
|
75,252,728
| 344,669
|
Python SQLAlchemy PostgreSQL find by primary key Deprecate message
|
<p>I use the code below to find the object by primary key. Now I am getting this <code>Deprecated features detected</code> message.</p>
<p>How can I re-write this query to fix the deprecated message.</p>
<p><strong>Code:</strong></p>
<pre><code> def find_by_id(self, obj_id):
with self.session() as s:
x = s.query(User).get(obj_id)
return x
</code></pre>
<p><strong>Warning:</strong></p>
<pre><code> LegacyAPIWarning: Deprecated API features detected! These feature(s) are not compatible with SQLAlchemy 2.0. To prevent incompati
ble upgrades prior to updating applications, ensure requirements files are pinned to "sqlalchemy<2.0". Set environment variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set environment variable SQLALCHEMY_SILENCE_UBER
_WARNING=1 to silence this message. (Background on SQLAlchemy 2.0 at: https://sqlalche.me/e/b8d9)
x = s.query(User).get(obj_id)
</code></pre>
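<p>In SQLAlchemy 1.4+, the 2.0-style replacement is <code>Session.get(Entity, pk)</code>. A self-contained sketch against in-memory SQLite (the <code>User</code> model here is a stand-in for yours):</p>

```python
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class User(Base):
    __tablename__ = "users"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as s:
    s.add(User(id=1, name="alice"))
    s.commit()
    user = s.get(User, 1)  # replaces s.query(User).get(1)
    print(user.name)
```

<p>So in the original method, <code>s.query(User).get(obj_id)</code> becomes <code>s.get(User, obj_id)</code>, which also uses the identity map the same way.</p>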
|
<python><sqlalchemy><flask-sqlalchemy><sqlalchemy-utils>
|
2023-01-26 23:35:01
| 1
| 19,251
|
sfgroups
|
75,252,687
| 2,056,067
|
Avoid deeply nested function calls Pythonically
|
<p>I have a series of functions, where the result of one is fed into the next, plus other inputs:</p>
<pre class="lang-py prettyprint-override"><code>result = function1(
function2(
function3(
function4(x, y, z),
a, b, c),
d, e, f),
g, h, i)
</code></pre>
<p>This is ugly and hard to understand. In particular it isn't obvious that <code>function1</code> is actually the last one to be called.</p>
<p>How can this code be written nicer, in a Pythonic way?</p>
<p>I could e.g. assign intermediate results to variables:</p>
<pre class="lang-py prettyprint-override"><code>j = function4(x, y, z)
k = function3(j, a, b, c)
l = function2(k, d, e, f)
result = function1(l, g, h, i)
</code></pre>
<p>But this also puts additional variables for things I don't need into the namespace, and may keep a large amount of memory from being freed – unless I add a <code>del j, k, l</code>, which is its own kind of ugly. Plus, I have to come up with names.</p>
<p>Or I could use the name of the final result also for the intermediate results:</p>
<pre class="lang-py prettyprint-override"><code>result = function4(x, y, z)
result = function3(result, a, b, c)
result = function2(result, d, e, f)
result = function1(result, g, h, i)
</code></pre>
<p>The disadvantage here is that the same name is used for possibly wildly different things, which may make reading and debugging harder.</p>
<p>Then maybe <code>_</code> or <code>__</code>?</p>
<pre class="lang-py prettyprint-override"><code>__ = function4(x, y, z)
__ = function3(__, a, b, c)
__ = function2(__, d, e, f)
result = function1(__, g, h, i)
</code></pre>
<p>A bit better, but again not super clear. And I might have to add a <code>del __</code> at the end.</p>
<p>Is there a better, more Pythonic way?</p>
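<p>Another Pythonic option is a tiny <code>pipe</code> helper built on <code>functools.reduce</code>, so the stages read top-to-bottom in application order and no throwaway names are introduced (sketch with toy stand-ins for <code>function1</code>..<code>function4</code>):</p>

```python
from functools import reduce

def pipe(value, *funcs):
    """Thread value through funcs, left to right."""
    return reduce(lambda acc, f: f(acc), funcs, value)

# toy stand-ins for function4..function1
f4 = lambda x, y, z: x + y + z
f3 = lambda r, a: r * a
f2 = lambda r, d: r - d
f1 = lambda r, g: r // g

result = pipe(
    f4(1, 2, 3),          # innermost call first...
    lambda r: f3(r, 10),  # ...then each later stage, in reading order
    lambda r: f2(r, 4),
    lambda r: f1(r, 2),
)
print(result)  # 28
```

<p>Intermediate values live only inside <code>reduce</code>, so nothing extra is left in the namespace.</p>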
|
<python>
|
2023-01-26 23:28:08
| 3
| 8,495
|
A. Donda
|
75,252,652
| 344,669
|
Python SQLAlchemy PostgreSQL Deprecated API features
|
<p>I am using the following code to create the function and trigger that update the <code>created_at</code> and <code>updated_at</code> fields. After upgrading to a newer module version I'm getting the deprecated-API warning.</p>
<p>How can I rewrite the <code>engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema)))</code> line to remove the warning message?</p>
<p><strong>Code:</strong></p>
<pre><code>mapper_registry.metadata.create_all(engine, checkfirst=True)
create_refresh_updated_at_func = """
CREATE OR REPLACE FUNCTION {schema}.refresh_updated_at()
RETURNS TRIGGER AS $$
BEGIN
NEW.updated_at = NOW();
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
"""
my_schema = "public"
engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema)))
</code></pre>
<p><strong>Warning:</strong></p>
<blockquote>
<p>RemovedIn20Warning: Deprecated API features detected! These
feature(s) are not compatible with SQLAlchemy 2.0. To prevent
incompatible upgrades prior to updating applications, ensure
requirements files are pinned to "sqlalchemy<2.0". Set environment
variable SQLALCHEMY_WARN_20=1 to show all deprecation warnings. Set
environment variable SQLALCHEMY_SILENCE_UBER_WARNING=1 to silence this
message. (Background on SQLAlchemy 2.0 at: <a href="https://sqlalche.me/e/b8d9" rel="nofollow noreferrer">https://sqlalche.me/e/b8d9</a>)
engine.execute(sa.text(create_refresh_updated_at_func.format(schema=my_schema)))</p>
</blockquote>
|
<python><sqlalchemy><flask-sqlalchemy><sqlalchemy-utils>
|
2023-01-26 23:22:23
| 1
| 19,251
|
sfgroups
|
75,252,625
| 8,554,833
|
for loop through multiple items based on a dataframe
|
<p>So I have several dataframes of different widths.</p>
<p>I want a for loop that will perform an operation on each dataframe's columns:</p>
<p>Table 1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>col1</th>
<th>col2</th>
<th>col3</th>
</tr>
</thead>
<tbody>
<tr>
<td>Hi</td>
<td>1</td>
<td>Jake</td>
</tr>
<tr>
<td>Bye</td>
<td>2</td>
<td>Mike</td>
</tr>
<tr>
<td>Red</td>
<td>Blue</td>
<td>Pink</td>
</tr>
</tbody>
</table>
</div>
<p>Table 2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>cl1</th>
<th>cl2</th>
<th>cl3</th>
<th>c4</th>
</tr>
</thead>
<tbody>
<tr>
<td>Frank</td>
<td>Toy</td>
<td>Hello</td>
<td>Present</td>
</tr>
<tr>
<td>Bike</td>
<td>Ride</td>
<td>Blue</td>
<td>Mike</td>
</tr>
<tr>
<td>Red</td>
<td>Blue</td>
<td>Pink</td>
<td>Fred</td>
</tr>
</tbody>
</table>
</div>
<p>These tables are in the form a list of tuples.</p>
<p>I want to take these two loops and effectively just have one loop that uses the number of headers as the number of items to loop through.</p>
<pre><code> row = 1
col = 0
for col1, col2, col3 in (table):
worksheet.write(row, col, col1)
worksheet.write(row, col + 1, col2)
worksheet.write(row, col + 2, col3)
row += 1
</code></pre>
<br>
<pre><code> row = 1
col = 0
    for cl1, cl2, cl3, cl4 in (table):
        worksheet.write(row, col, cl1)
        worksheet.write(row, col + 1, cl2)
        worksheet.write(row, col + 2, cl3)
        worksheet.write(row, col + 3, cl4)
        row += 1
</code></pre>
<br>
<p>Here's what I want</p>
<p>I want to iterate through each column in the table, no matter the number of columns. Here's what I think it would look like:</p>
<pre><code>row = 1
col = 0
elements = table.column.names
for elements in (table):
for i in elements:
worksheet.write(row, col, i)
col = col +1
row = row +1
</code></pre>
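<p>A width-agnostic version of the loops above: <code>enumerate</code> supplies the column index for each value, so one loop handles 3- and 4-column tables alike. The <code>write</code> callable below is a stand-in for <code>worksheet.write</code> so the iteration logic can be demonstrated without a workbook:</p>

```python
def write_table(table, write, start_row=1):
    """table: list of tuples; write(row, col, value), e.g. worksheet.write."""
    for r, record in enumerate(table, start=start_row):
        for col, value in enumerate(record):
            write(r, col, value)

# demo with a collecting stub instead of a real worksheet
cells = []
write_table([("Hi", 1, "Jake"), ("Bye", 2, "Mike")],
            lambda r, c, v: cells.append((r, c, v)))
print(cells[:3])  # [(1, 0, 'Hi'), (1, 1, 1), (1, 2, 'Jake')]
```

<p>With a real xlsxwriter worksheet you would call <code>write_table(table, worksheet.write)</code>.</p>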
|
<python><loops><xlsxwriter>
|
2023-01-26 23:16:33
| 2
| 728
|
David 54321
|
75,252,581
| 333,262
|
Flask-Babel shows translated text locally but not on production on GAE
|
<p>Once the app is deployed on Google App Engine, pages show the msgid and not the translated text. I can't see any error either, but the locale seems to be ignored on GAE (see the different output in the debug messages below).</p>
<p>Other similar questions on SO mention that sometimes this is due to case-sensitive folder names, or to different references to the translation file paths. But I double-checked everything.</p>
<p>What else can I debug?</p>
<p><strong>babel.cfg</strong></p>
<pre><code>[python: app/**.py]
[jinja2: app/templates/**.html]
[jinja2: app/main/templates/**.html]
extensions = jinja2.ext.autoescape, jinja2.ext.with_
encoding = utf-8
</code></pre>
<p><strong>config.py</strong></p>
<pre><code>"""Flask config class."""
import os
basedir = os.path.abspath(os.path.dirname(__file__))
class Config(object):
"""Babel config"""
LANGUAGES = ['en','es']
BABEL_TRANSLATION_DIRECTORIES = os.path.join(basedir, 'app/translations')
</code></pre>
<p><strong>__init__.py</strong></p>
<pre><code>from flask import Flask
from flask import request
from flask import current_app
from config import Config
from flask_babel import Babel
babel = Babel()
def create_app(config_class=Config):
app = Flask(__name__)
app.config.from_object(config_class)
babel.init_app(app, locale_selector=get_locale)
from app.main import main_bp
app.register_blueprint(main_bp)
return app
def get_locale():
print('\n\n--> Babel debug')
print(current_app.config['LANGUAGES'])
print(babel.list_translations())
print('<-- Babel debug\n\n')
return request.accept_languages.best_match(current_app.config['LANGUAGES'])
</code></pre>
<p><strong>OUTPUT on DEV</strong></p>
<pre><code>--> Babel debug
/Users/???/???/???/???/app
/Users/???/???/???/???/app/translations
['en', 'es']
[Locale('es'), Locale('en'), Locale('en')]
<-- Babel debug
</code></pre>
<p><strong>OUTPUT on GAE</strong></p>
<pre><code>--> Babel debug
/srv/app
/srv/app/translations
['en', 'es']
[Locale('en')]
<-- Babel debug
</code></pre>
|
<python><flask><google-app-engine><flask-babel>
|
2023-01-26 23:11:32
| 1
| 658
|
Andrea
|
75,252,561
| 14,425,076
|
How to adapt this python script to apt installed matplotlib vs pip3 installed
|
<p>I have a script (MWE supplied)</p>
<pre><code>import matplotlib.pyplot as plt
import matplotlib
s_xLocs = [864]
s_yLocs = [357]
s_score = [0.33915146615180547]
sMax = 0.34704810474264386
for i in range(len(s_xLocs)):
plt.scatter(s_xLocs[i], s_yLocs[i], c=s_score[i], s=(20*(s_score[i]+1.5)**4), cmap="plasma", marker='.', vmin=0, vmax=sMax)
matplotlib.pyplot.close()
</code></pre>
<p>which was being used to generate some plots using matplotlib. On my dev machine, I used matplotlib installed via <code>pip3</code>. The script is now being used on some other machines managed by IT and limited to using the version of matplotlib installed via <code>apt install python3-matplotlib</code>. This has caused my script to fail, throwing the error</p>
<pre><code>Traceback (most recent call last):
File "./heatmaps.py", line 9, in <module>
plt.scatter(s_xLocs[i], s_yLocs[i], c=s_score[i], s=(20*(s_score[i]+1.5)**4), cmap="plasma", marker='.', vmin=0, vmax=sMax)
File "/usr/lib/python3/dist-packages/matplotlib/pyplot.py", line 2836, in scatter
__ret = gca().scatter(
File "/usr/lib/python3/dist-packages/matplotlib/__init__.py", line 1601, in inner
return func(ax, *map(sanitize_sequence, args), **kwargs)
File "/usr/lib/python3/dist-packages/matplotlib/axes/_axes.py", line 4451, in scatter
self._parse_scatter_color_args(
File "/usr/lib/python3/dist-packages/matplotlib/axes/_axes.py", line 4264, in _parse_scatter_color_args
n_elem = c_array.shape[0]
IndexError: tuple index out of range
</code></pre>
<p>After reading <a href="https://stackoverflow.com/questions/59466371/tuple-index-out-of-range-in-scatter-plot">this Q/A</a> I was able to seemingly narrow down the issue to the colormap <code>c</code> argument. After also reading the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.scatter.html" rel="nofollow noreferrer">documentation</a> I also tried passing in the entire list of <code>s_score</code> with no indexing ala</p>
<pre><code>plt.scatter(s_xLocs[i], s_yLocs[i], c=s_score, s=(20*(s_score[i]+1.5)**4), cmap="plasma", marker='.', vmin=0, vmax=sMax)
</code></pre>
<p>but that gave a different and more confusing (IMO) error:</p>
<pre><code>...
ValueError: Invalid RGBA argument: 0.33915146615180547
During handling of the above exception, another exception occurred:
...
ValueError: 'c' argument has 1 elements, which is not acceptable for use with 'x' with size 1, 'y' with size 1.
</code></pre>
<p>I am hoping someone can provide a solution to this issue which will work with <code>python3-matplotlib</code> and perhaps also clarify the errors/what is different between the version installed with <code>pip3</code> vs <code>apt</code>.</p>
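<p>In the older matplotlib that apt ships, a scalar <code>c</code> together with scalar <code>x</code>/<code>y</code> trips the <code>_parse_scatter_color_args</code> path shown in the traceback. Wrapping the coordinates and the color value as length-1 sequences keeps all the sizes consistent on both old and new versions — a sketch (headless Agg backend):</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for the sketch
import matplotlib.pyplot as plt

s_xLocs, s_yLocs = [864], [357]
s_score = [0.33915146615180547]
sMax = 0.34704810474264386

for i in range(len(s_xLocs)):
    # length-1 sequences for x, y and c so c is treated as colormap data
    pc = plt.scatter([s_xLocs[i]], [s_yLocs[i]], c=[s_score[i]],
                     s=20 * (s_score[i] + 1.5) ** 4, cmap="plasma",
                     marker=".", vmin=0, vmax=sMax)
plt.close()
```

<p>Passing the whole <code>s_score</code> list with a single x/y point failed for the opposite reason: <code>c</code> must have the same length as <code>x</code> and <code>y</code> when used as mapped data.</p>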
|
<python><python-3.x><matplotlib><plot>
|
2023-01-26 23:08:30
| 2
| 853
|
Douglas B
|
75,252,499
| 12,906,445
|
Proper way converting TorchScript Version to Core ML
|
<p>I'm trying to convert <code>PyTorch ml model</code> into <code>Core ML</code>. As shown in this <a href="https://developer.apple.com/videos/play/tech-talks/10154/" rel="nofollow noreferrer">WWDC video</a>, I first converted it to <code>TorchScript Version</code>.</p>
<p>But the problem happens when converting <code>TorchScript Version</code> to <code>Core ML</code>.</p>
<p>At first, I did as follows:</p>
<pre><code>import coremltools as ct
model = ct.convert(
traced_model,
inputs=[ct.TensorType(shape=input_ids.shape), ct.TensorType(shape=attention_mask.shape)],
outputs=[ct.TensorType(shape=decoder_input_ids.shape)]
)
</code></pre>
<p>But it was giving me an error saying:</p>
<pre><code>ValueError: The 'shape' argument must not be specified for the outputs, since it is automatically inferred from the input shapes and the ops in the model
</code></pre>
<p>So I used following code for converting it to <code>Core ml</code>:</p>
<pre><code>import coremltools as ct
model = ct.convert(
traced_model,
inputs=[ct.TensorType(shape=input_ids.shape), ct.TensorType(shape=attention_mask.shape), ct.TensorType(shape=decoder_input_ids.shape)]
)
</code></pre>
<p>This time the code block ran successfully, but when I actually downloaded the converted Core ML model, it was not detecting <code>decoder_input_ids</code> as one of the inputs, as shown below:</p>
<p><a href="https://i.sstatic.net/VuJgX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VuJgX.png" alt="enter image description here" /></a></p>
<p>How can I fix this problem, and what am I doing wrong? By the way, the model is a Seq2Seq model.</p>
|
<python><machine-learning><pytorch><coreml><coremltools>
|
2023-01-26 22:57:35
| 0
| 1,002
|
Seungjun
|
75,252,374
| 8,354,766
|
Tensorflow gelu out of memory error, but not relu
|
<p>Gelu activation causes my model to return an OOM error when training, but when I switch to relu the problem goes away. Even if the model is doubled in size, the model with relu performs fine.</p>
<pre><code>if activation=="gelu":
out = tfa.layers.GELU()(out)
elif activation=="relu":
out = KL.ReLU()(out)
</code></pre>
<p>The OOM error does not happen on the gelu function, but since the two models are the same except for the difference in activation function, I don't think this is the issue.</p>
<pre><code> File ".../python3.9/site-packages/keras/backend.py", line 3693, in resize_images
x = tf.image.resize(x, new_shape, method=interpolations[interpolation])
Node: 'model/up_sampling2d_2/resize/ResizeNearestNeighbor'
2 root error(s) found.
(0) RESOURCE_EXHAUSTED: OOM when allocating tensor with shape[8,320,240,64] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node model/up_sampling2d_2/resize/ResizeNearestNeighbor}}]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info. This isn't available when running in Eager mode.
</code></pre>
|
<python><tensorflow><out-of-memory><relu>
|
2023-01-26 22:40:52
| 0
| 488
|
Alberto MQ
|
75,252,125
| 1,318,266
|
How to create model's tables in MySQL?
|
<p>I'm trying to replicate a PHP/Symfony project in Python/Django as a python learning exercise. Platform = Windows 10. The expected result is that a <code>migrate</code> command will add tables related to all of the entries in <code>settings.py INSTALLED_APPS{...}</code>. Instead, the <code>migrate</code> command adds all <code>Django</code> tables but none of the tables of <code>models.py</code>.</p>
<p>What, then, must be done to allow <code>migrate</code> to add the 5 MySQL tables?</p>
<p>Result:</p>
<pre><code>mysql> use diet_py;
Database changed
mysql> show tables;
+----------------------------+
| Tables_in_diet_py |
+----------------------------+
| auth_group |
| auth_group_permissions |
| auth_permission |
| auth_user |
| auth_user_groups |
| auth_user_user_permissions |
| django_admin_log |
| django_content_type |
| django_migrations |
| django_session |
+----------------------------+
</code></pre>
<p>Following Django tutorial documentation, with slight modifications, I have these directories & files:</p>
<p><code>Tree:</code></p>
<pre><code>...DB1-PROJECT
│ db.sqlite3
│ manage.py
│
├───diet
│ │ admin.py
│ │ apps.py
│ │ models.py
│ │ tests.py
│ │ views.py
│ │ __init__.py
│ │
│ ├───migrations
│ │ │ __init__.py
│ │ │
│ │ └───__pycache__
│ │ __init__.cpython-311.pyc
│ │
│ └───__pycache__
│ admin.cpython-311.pyc
│ apps.cpython-311.pyc
│ models.cpython-311.pyc
│ __init__.cpython-311.pyc
│
└───mysite
│ asgi.py
│ settings.py
│ urls.py
│ wsgi.py
│ __init__.py
│
└───__pycache__
settings.cpython-311.pyc
urls.cpython-311.pyc
wsgi.cpython-311.pyc
__init__.cpython-311.pyc
</code></pre>
<p><code>..\diet\models.py</code></p>
<pre><code>from django.db import models

# Create your models here.


class Food(models.Model):
    food_name = models.CharField(max_length=255)

    class Meta:
        db_table = 'food'


class Gut(models.Model):
    description = models.CharField(max_length=255, blank=True, null=True)
    datetime = models.DateTimeField()
    reaction_id = models.IntegerField()

    class Meta:
        db_table = 'gut'


class Meal(models.Model):
    meal_type = models.CharField(max_length=255)
    date = models.DateTimeField()

    class Meta:
        db_table = 'meal'


class MealFood(models.Model):
    meal = models.OneToOneField(Meal, models.DO_NOTHING, primary_key=True)
    food = models.ForeignKey(Food, models.DO_NOTHING)

    class Meta:
        db_table = 'meal_food'
        unique_together = (('meal', 'food'),)


class Reaction(models.Model):
    reaction = models.CharField(max_length=45)

    class Meta:
        db_table = 'reaction'
</code></pre>
<p><code>...\mysite\settings.py</code></p>
<pre><code>from pathlib import Path
import pymysql
pymysql.install_as_MySQLdb()
...
INSTALLED_APPS = [
'diet.apps.DietConfig',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
...
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.mysql',
'NAME': 'diet_py',
'USER': 'username',
'PASSWORD': 'password',
'HOST': '127.0.0.1',
'PORT': '3306',
}
}
...
</code></pre>
|
<python><mysql><django>
|
2023-01-26 22:03:21
| 1
| 4,728
|
geoB
|
75,252,015
| 13,231,896
|
How to represent SVG polygon in HTML using deffinition coming from POSTGIS ST_AsSVG function
|
<p>In my django app I have a PostGIS database. I tried to get a polygon as an SVG, so I could represent that polygon in HTML using the SVG standard.</p>
<p>I use the following query:</p>
<pre><code>SELECT ST_AsSVG(geom) from country_limit cl where cl.id=3;
</code></pre>
<p>And It returned the following result:</p>
<pre><code>M -85.941653 -12.285635 L -85.941653 -12.291673 -85.927577 -12.291673 -85.927577 -12.285635 Z
</code></pre>
<p>But when I try to represent that result inside an SVG in HTML, it does not display the polygon.
Here is my code.</p>
<pre><code> <svg height="210" width="400">
<path d="M -85.941653 -12.285635 L -85.941653 -12.291673 -85.927577 -12.291673 -85.927577 -12.285635 Z" />
</svg>
</code></pre>
<p>How can I use the result from PostGIS ST_AsSVG to represent a geometry as SVG in HTML?</p>
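<p>A likely reason nothing shows is that the polygon's coordinates are tiny and negative (ST_AsSVG negates the y values for SVG's downward y axis), so they fall outside the default viewport. Giving the <code>&lt;svg&gt;</code> element a <code>viewBox</code> that encloses the geometry usually fixes it. A small sketch that derives such a viewBox from the path in the question:</p>

```python
import re

# Path string taken from the question.
path_d = ("M -85.941653 -12.285635 L -85.941653 -12.291673 "
          "-85.927577 -12.291673 -85.927577 -12.285635 Z")

# Pull every coordinate out of the path and compute a bounding box,
# so the tiny negative coordinates land inside the visible area.
nums = [float(n) for n in re.findall(r"-?\d+(?:\.\d+)?", path_d)]
xs, ys = nums[0::2], nums[1::2]
min_x, min_y = min(xs), min(ys)
width, height = max(xs) - min_x, max(ys) - min_y
print(f'viewBox="{min_x} {min_y} {width} {height}"')
```

<p>The printed value can be dropped onto the <code>&lt;svg&gt;</code> element while the path stays unchanged. Because the polygon is only about 0.014 units wide, the viewBox (not fixed pixel width/height alone) is what scales it up to the visible canvas.</p>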
|
<python><django><svg><postgis><geodjango>
|
2023-01-26 21:52:39
| 1
| 830
|
Ernesto Ruiz
|
75,251,981
| 6,367,971
|
Convert hours-minutes-seconds duration in dataframe to minutes
|
<p>I have a csv with a column that represents time durations of two discrete events.</p>
<pre><code>Day,Duration
Mon,"S: 3h0s, P: 18m0s"
Tues,"S: 3h0s, P: 18m0s"
Wed,"S: 4h0s, P: 18m0s"
Thurs,"S: 30h, P: 10m0s"
Fri,"S: 15m, P: 3h0s"
</code></pre>
<p>I want to split that duration into two distinct columns and consistently represent the time in <code>minutes</code>. Right now, it is shown in <code>hours</code>, <code>minutes</code>, and <code>seconds</code>, like <code>S: 3h0s, P: 18m0s</code>. So the output should look like this:</p>
<pre><code> Day Duration S(min) P(min)
0 Mon S: 3h0s, P: 18m0s 180 18
1 Tues S: 3h0s, P: 18m0s 180 18
2 Wed S: 4h0s, P: 18m0s 240 18
3 Thur S: 30h0s, P: 10m0s 1800 10
4 Fri S: 15m, P: 3h0s 15 180
</code></pre>
<p>But when I do it with <code>str.replace</code>:</p>
<pre><code>import pandas as pd
df = pd.read_csv("/file.csv")
df["S(min)"] = df['Duration'].str.split(',').str[0]
df["P(min)"] = df['Duration'].str.split(',').str[-1]
df['S(min)'] = df['S(min)'].str.replace("S: ", '').str.replace("h", '*60').str.replace('m','*1').str.replace('s','*(1/60)').apply(eval)
df['P(min)'] = df['P(min)'].str.replace("P: ", '').str.replace("h", '*60').str.replace('m','*1').str.replace('s','*(1/60)').apply(eval)
</code></pre>
<p>some of the calculations are off:</p>
<pre><code> Day Duration S(min) P(min)
0 Mon S: 3h0s, P: 18m0s 30.0 3.000000
1 Tues S: 3h0s, P: 18m0s 30.0 3.000000
2 Wed S: 4h0s, P: 18m0s 40.0 3.000000
3 Thurs S: 30h, P: 10m0s 1800.0 1.666667
4 Fri S: 15m, P: 3h0s 15.0 30.000000
</code></pre>
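<p>The chained replacements corrupt the digits: <code>"3h0s"</code> becomes <code>"3*600s"</code> after the <code>h</code> substitution (the following <code>0</code> glues onto <code>60</code>), and finally <code>"3*600*(1/60)"</code>, which evaluates to 30. A sketch that parses each unit with a regex instead (data abbreviated to a few rows):</p>

```python
import re
import pandas as pd

def to_minutes(s):
    """Convert a duration like '3h0s', '18m0s' or '15m' to minutes."""
    factors = {"h": 60, "m": 1, "s": 1 / 60}
    return sum(int(v) * factors[u] for v, u in re.findall(r"(\d+)([hms])", s))

df = pd.DataFrame({"Duration": ["S: 3h0s, P: 18m0s", "S: 30h, P: 10m0s", "S: 15m, P: 3h0s"]})

# Split the two events, then convert each piece independently.
parts = df["Duration"].str.extract(r"S: (?P<S>[^,]+), P: (?P<P>.+)")
df["S(min)"] = parts["S"].map(to_minutes)
df["P(min)"] = parts["P"].map(to_minutes)
print(df)
```

<p>Because each number/unit pair is matched together, adjacent digits can no longer merge, and missing components (like the seconds in <code>30h</code>) simply contribute nothing.</p>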
|
<python><pandas><dataframe><timedelta>
|
2023-01-26 21:48:19
| 3
| 978
|
user53526356
|
75,251,927
| 539,490
|
Plot.ly move y_title from overlapping with subplot yaxis_title and yaxis2_title
|
<p>Is it possible to configure plot.ly to stop the y axis titles and subplot titles from overlapping?</p>
<p><a href="https://i.sstatic.net/sTO0L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/sTO0L.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>import plotly.graph_objects as go
from plotly.subplots import make_subplots
max_y_counts = 400
max_y_month_counts = 60
fig = make_subplots(
rows=2, cols=1,
shared_xaxes=True, vertical_spacing=0.08,
row_heights=[3,1],
y_title="__________Y Title__________",
)
fig.update_xaxes(range=["2020-01-01", "2020-07-01"], row=1, col=1)
fig.update_xaxes(range=["2020-01-01", "2020-07-01"], row=2, col=1)
fig.update_yaxes(range=[0, max_y_counts], row=1, col=1)
fig.update_yaxes(range=[0, max_y_month_counts], row=2, col=1)
for row_max_y in [[1, max_y_counts], [2, max_y_month_counts]]:
fig.add_trace(
go.Scatter(
x=["2020-06-01", "2020-06-01"],
y=[0, row_max_y[1]],
mode="lines",
line=dict(color="#ac231b"),
),
row=row_max_y[0], col=1,
secondary_y=False,
)
def merge (dict1, dict2):
merged = dict1.copy()
merged.update(dict2)
return merged
# Define axis_style
def axis_style (**kwargs):
return merge(dict(zeroline=False, showline=True, mirror=True), kwargs)
# Add labels to vertical event lines
# Specify layout style
layout = dict(
width=750, height=600,
xaxis=axis_style(),
showlegend=False,
xaxis_title="",
yaxis=axis_style(
title=dict(
text="_____Y-Axis 1_____", standoff=0,
)
),
yaxis2=axis_style(title="_____Y-Axis 2_____"),
# margin=dict(l=100,)
)
fig.update_layout(layout)
fig.write_html("first_figure.html", auto_open=True)
</code></pre>
|
<python><plotly>
|
2023-01-26 21:42:50
| 1
| 29,009
|
AJP
|
75,251,911
| 6,761,231
|
How to find the sklearn version of pickled model?
|
<p>I have a pickled sklearn model, which I need to get to run. This model, however, is trained in unknown version of sklearn.</p>
<p>When I look up the model in debugger, I find that there is a bunch of strange tracebacks inside, instead of the keys you'd expect, for example:</p>
<pre><code>decision_function -> 'RandomForestClassifier' object has no attribute 'decision_function'
fit_predict -> 'RandomForestClassifier' object has no attribute 'fit_predict'
score_samples -> 'RandomForestClassifier' object has no attribute 'score_samples'
</code></pre>
<p>How can I get this model to run? Do these error messages hint at anything?</p>
<p>EDIT: The solution is to brute-force search the sklearn version. In my case, once I got to the correct major version, the error message pointed me to the correct minor version.</p>
|
<python><scikit-learn><pickle>
|
2023-01-26 21:40:59
| 2
| 303
|
Artur Pschybysz
|
75,251,874
| 19,916,174
|
Declaring and Looping over a variable in one line
|
<p>Just for fun, I am trying to compress a programming problem into one line. I know this is typically a bad practice, but it is a fun challenge that I am asking for your help on.</p>
<p>I have a piece of code which declares the variables and in the second line which loops over a list created in the first line, until a number is not found anymore. Finally it returns that value.</p>
<p>The programming question is as follows. Given a sentence, convert each character to its ASCII value. Then convert that ASCII value to binary (padding with leading zeros if the binary number is less than 8 digits), and combine the numbers into one string. Starting from the number 0, convert it to binary and check if it is in the string. If it is, add one to the number and check again. Return the last consecutive binary number that is in the string.
Ex)</p>
<p>string = "0000010"</p>
<p>0 in string: add 1</p>
<p>1 in string: add 1</p>
<p>10 in string: add 1</p>
<p>11 not in string: the last consecutive binary number was 10<sub>2</sub>=2<sub>10</sub>. Return 2</p>
<p>You can see my code below</p>
<pre><code>def findLastBinary(s: str):
string, n = ''.join(['0'*(10-len(bin(ord(char))))+bin(ord(char))[2:] for char in s]), 0
while bin(n)[2:] in string: n+=1
return n-1
</code></pre>
<p>It would also be nice if I could combine the return statement and loop into one line as well.</p>
<h2>EDIT</h2>
Fixed the code (it should work now). Also below, you will see a sample test case. Hope this helps with answering this question.
<p>Sample test case</p>
<p>Input:</p>
<p>s="Roses and thorns"</p>
<p>Below you will see the steps my code follows to get the correct answer (obviously made more readable)</p>
<p>Organized into columns in the following order:</p>
<p>Character-Ascii-Binary Representation of ascii value:</p>
<p>R - 82 - 01010010</p>
<p>o - 111 - 01101111</p>
<p>s - 115 - 01110011</p>
<p>etc.</p>
<p>Keep in mind that if the binary number has less than 8 digits, zeros should be added to the beginning of the number until it is 8 digits.</p>
<p>Each binary integer is then concatenated into a single string (I added spaces for readability only):</p>
<p>01010010 01101111 01110011 01100101 01110011 00100000 01100001 01101110 01100100 00100000 01110100 01101000 01101111 01110010 01101110 01110011</p>
<p>Now we start from the binary number 0, and check if it is in the string. It is, so we move on to 1. 1 is in the string, so we move on to 10. 10 is in the string. And so we continue until we find that the binary string 11111 is not in our string. 11111<sub>2</sub>=31<sub>10</sub>. Since 31 was the first number whose binary representation was not in the string, we return the last number whose binary representation was in the string: namely, 31-1=30. 30 is what the function should return.</p>
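<p>One way to fold the string construction, the loop, and the return into a single statement (readability aside, since this is for fun) is a generator expression whose one-element outer loop plays the role of the assignment; a sketch:</p>

```python
from itertools import count

def find_last_binary(s: str) -> int:
    # "for bits in [...]" binds the bit string exactly once; the inner
    # count() then finds the first missing binary number, minus one.
    return next(n for bits in ["".join(format(ord(c), "08b") for c in s)]
                for n in count() if bin(n)[2:] not in bits) - 1

print(find_last_binary("Roses and thorns"))  # 30
```

<p><code>format(ord(c), "08b")</code> replaces the manual zero-padding, and <code>next(...)</code> over <code>itertools.count()</code> replaces the <code>while</code> loop.</p>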
|
<python><python-3.x><string><binary><ascii>
|
2023-01-26 21:36:25
| 1
| 344
|
Jason Grace
|
75,251,687
| 1,210,614
|
Python 3.9.12 build failed - generate-posix-vars failed
|
<p>New to python. Trying to install <code>3.9.12</code> via pyenv. Getting the following error:</p>
<pre><code>pyenv install 3.9.12
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.9.12.tar.xz...
-> https://www.python.org/ftp/python/3.9.12/Python-3.9.12.tar.xz
Installing Python-3.9.12...
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
BUILD FAILED (OS X 12.6.1 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/dz/3d8j_wx508jgkxqzjrfwhbt40000gp/T/python-build.20230125165700.93087
Results logged to /var/folders/dz/3d8j_wx508jgkxqzjrfwhbt40000gp/T/python-build.20230125165700.93087.log
Last 10 log lines:
DYLD_LIBRARY_PATH=/var/folders/dz/3d8j_wx508jgkxqzjrfwhbt40000gp/T/python-build.20230125165700.93087/Python-3.9.12 ./python.exe -E -S -m sysconfig --generate-posix-vars ;\
if test $? -ne 0 ; then \
echo "generate-posix-vars failed" ; \
rm -f ./pybuilddir.txt ; \
exit 1 ; \
fi
dyld[4402]: symbol not found in flat namespace (_libintl_bindtextdomain)
/bin/sh: line 1: 4402 Abort trap: 6 DYLD_LIBRARY_PATH=/var/folders/dz/3d8j_wx508jgkxqzjrfwhbt40000gp/T/python-build.20230125165700.93087/Python-3.9.12 ./python.exe -E -S -m sysconfig --generate-posix-vars
generate-posix-vars failed
make: *** [pybuilddir.txt] Error 1
</code></pre>
<p>I'm on an M1. Not sure if that has anything to do with it.</p>
|
<python><python-3.x><pyenv>
|
2023-01-26 21:12:33
| 1
| 10,962
|
jordan
|
75,251,635
| 4,414,359
|
'ZipExtFile' object has no attribute 'open'
|
<p>I have a file <code>f</code> that I pulled from a URL response:</p>
<p><code><zipfile.ZipExtFile name='some_data_here.csv' mode='r' compress_type=deflate></code></p>
<p>But for some reason I can't do <code>f.open('some_data_here.csv')</code></p>
<p>I get an error saying:
<code>'ZipExtFile' object has no attribute 'open'</code></p>
<p>I'm really confused because isn't that one of the attributes of ZipExtFile?</p>
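<p><code>open()</code> is a method of <code>ZipFile</code>, not of <code>ZipExtFile</code>: the object you were handed is the already-opened member, so it only supports read-style methods. A minimal stdlib sketch (the member name is from the question; the archive itself is fabricated in memory):</p>

```python
import io
import zipfile

# Build a small in-memory archive as a stand-in for the downloaded zip.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("some_data_here.csv", "a,b\n1,2\n")

# ZipFile has .open(); the ZipExtFile it returns is the opened member,
# which you simply read from.
with zipfile.ZipFile(buf) as zf:
    with zf.open("some_data_here.csv") as f:  # f is a ZipExtFile
        data = f.read().decode()
print(data)
```

<p>So if you already hold a <code>ZipExtFile</code>, call <code>f.read()</code> on it directly; the <code>open()</code> step has effectively been done for you.</p>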
|
<python><zip>
|
2023-01-26 21:07:37
| 1
| 1,727
|
Raksha
|
75,251,570
| 11,648,332
|
Can't unlock pgpy private key: NotImplementedError: <SymmetricKeyAlgorithm.Plaintext: 0>
|
<p>I am trying to load a pgpy private key and decrypt an encrypted text message.</p>
<pre><code>Private_key_path = r"/content/Private_KEY.sub"
privkey, _ = pgpy.PGPKey.from_file(Private_key_path)
</code></pre>
<p>The key and the encrypted message manage to load without problems:</p>
<pre><code>file_path=r"/content/we.csv"
message_from_file = pgpy.PGPMessage.from_file(file_path)
print(message_from_file)
</code></pre>
<p>The encrypted message displays correctly:</p>
<pre><code>-----BEGIN PGP MESSAGE-----
Version: BouncyCastle.NET Cryptography (net6.0) v2.0.0+4f800b4d32
hQEMA7yVCvCXVxN9AQf/Wegn6Dxk/zLWn594RJ5QAgCtmU0F+Yh3P4moL8UKuTLc
eifxnuG88dtpUpOuzc5cu9w84EBnQq+l8fMszuy0dMB6wkvbNtRZ03bOzJv1vkAD
4tudbbEH1+YfGqYj2gJRZ9LAprH/KtqL52SzUBmXdG9NrUnjFhIT3sWw6b+tfvMR
pZgpg6O1PsyIw1xdvCjoRjLyNT1eyvNw1nUP1wEi9G2blFlvsxAJnUo/SxD2qTVr
...
/dAc+aGk+DU0cHeA+P/Gon9Io2jPpgt3Ur9uahQ3mRvgpLBgvDsxD1ZhXZd44Dj4
4h+p30SJoeQUYh9lD7wsrQl9wUspo4p+jULSQRmps4wDv4KKLk/pt+ZBEQhhnJmR
/O9d2ZfL31BY1GV9bg==
=hw2Z
-----END PGP MESSAGE-----
</code></pre>
<p>When I try to unlock the key with the passphrase though:</p>
<pre><code>with privkey.unlock('correctpassphrase') as key:
raw_message = key.decrypt(message_from_file).message
print(raw_message)
</code></pre>
<p>I get the following error:</p>
<pre><code>NotImplementedError Traceback (most recent call last)
<ipython-input-139-f188768ef141> in <module>
----> 1 with privkey.unlock(PASSPHRASE) as key:
2 raw_message = key.decrypt(message_from_file).message
3 print(raw_message)
6 frames
/usr/lib/python3.8/contextlib.py in __enter__(self)
111 del self.args, self.kwds, self.func
112 try:
--> 113 return next(self.gen)
114 except StopIteration:
115 raise RuntimeError("generator didn't yield") from None
/usr/local/lib/python3.8/dist-packages/pgpy/pgp.py in unlock(self, passphrase)
1809 try:
1810 for sk in itertools.chain([self], self.subkeys.values()):
-> 1811 sk._key.unprotect(passphrase)
1812 del passphrase
1813 yield self
/usr/local/lib/python3.8/dist-packages/pgpy/packet/packets.py in unprotect(self, passphrase)
939
940 def unprotect(self, passphrase):
--> 941 self.keymaterial.decrypt_keyblob(passphrase)
942 del passphrase
943
/usr/local/lib/python3.8/dist-packages/pgpy/packet/fields.py in decrypt_keyblob(self, passphrase)
1351
1352 def decrypt_keyblob(self, passphrase):
-> 1353 kb = super(RSAPriv, self).decrypt_keyblob(passphrase)
1354 del passphrase
1355
/usr/local/lib/python3.8/dist-packages/pgpy/packet/fields.py in decrypt_keyblob(self, passphrase)
1252
1253 # derive the session key from our passphrase, and then unreference passphrase
-> 1254 sessionkey = self.s2k.derive_key(passphrase)
1255 del passphrase
1256
/usr/local/lib/python3.8/dist-packages/pgpy/packet/fields.py in derive_key(self, passphrase)
1017 def derive_key(self, passphrase):
1018 ##TODO: raise an exception if self.usage is not 254 or 255
-> 1019 keylen = self.encalg.key_size
1020 hashlen = self.halg.digest_size * 8
1021
/usr/local/lib/python3.8/dist-packages/pgpy/constants.py in key_size(self)
237 return ks[self]
238
--> 239 raise NotImplementedError(repr(self))
240
241 def gen_iv(self):
NotImplementedError: <SymmetricKeyAlgorithm.Plaintext: 0>
</code></pre>
<p>I can't find any reference in the documentation or on the internet whatsoever and I'm totally lost.
The passphrase is an ordinary Python string and should be correct; even if it were wrong, it should give a different error. I think the problem might lie with the key I loaded (which I was given and am not sure how it was created).</p>
<p>Can anybody help me with any suggestions?</p>
|
<python><encryption><encryption-asymmetric>
|
2023-01-26 21:00:39
| 0
| 447
|
9879ypxkj
|
75,251,551
| 919,264
|
Several lines on the same diagram with Pandas plot() grouping
|
<p>I have a CSV with 3 data sets, each coresponding to a line to plot. I use Pandas <code>plot()</code> grouping to group the entries for the 3 lines. This generates 3 separate diagrams, but I would like to plot all 3 lines on the same diagram.</p>
<p>The CSV:</p>
<pre class="lang-none prettyprint-override"><code>shop,timestamp,sales
north,2023-01-01,235
north,2023-01-02,147
north,2023-01-03,387
north,2023-01-04,367
north,2023-01-05,197
south,2023-01-01,235
south,2023-01-02,98
south,2023-01-03,435
south,2023-01-04,246
south,2023-01-05,273
east,2023-01-01,197
east,2023-01-02,389
east,2023-01-03,87
east,2023-01-04,179
east,2023-01-05,298
</code></pre>
<p>The code (tested in Jupyter Lab):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
csv = pd.read_csv('./tmp/sample.csv')
csv.timestamp = pd.to_datetime(csv.timestamp)
csv.plot(x='timestamp', by='shop')
</code></pre>
<p>This gives the following:</p>
<p><a href="https://i.sstatic.net/Ah0gt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ah0gt.png" alt="result" /></a></p>
<p>Any idea how to render all three on a single diagram?</p>
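<p>One common approach is to pivot the long-format data so each shop becomes a column; a single <code>plot()</code> call then draws one line per column on the same axes. A sketch with a shortened copy of the CSV:</p>

```python
import io
import pandas as pd

csv = io.StringIO(
    "shop,timestamp,sales\n"
    "north,2023-01-01,235\nnorth,2023-01-02,147\n"
    "south,2023-01-01,235\nsouth,2023-01-02,98\n"
    "east,2023-01-01,197\neast,2023-01-02,389\n"
)
df = pd.read_csv(csv, parse_dates=["timestamp"])

# Wide format: one column per shop, timestamps as the shared x index.
wide = df.pivot(index="timestamp", columns="shop", values="sales")
print(wide)
# wide.plot()  # one diagram, three lines (requires matplotlib)
```

<p>The <code>by=</code> argument you used deliberately produces one subplot per group, which is why three diagrams appeared.</p>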
|
<python><pandas><plot>
|
2023-01-26 20:58:02
| 3
| 2,327
|
Florent Georges
|
75,251,484
| 7,077,761
|
adding python binary distribution to path
|
<p>I am trying to get the python package <code>simms</code> to work on my computing cluster. Unfortunately there isn't much documentation. I have
installed it inside my conda env via</p>
<pre><code>pip install simms
</code></pre>
<p>I also git cloned <a href="https://github.com/ratt-ru/simms.git" rel="nofollow noreferrer">https://github.com/ratt-ru/simms.git</a> to</p>
<pre><code>my_dir/simms
</code></pre>
<p>Then inside my conda env, running the suggested:</p>
<pre><code>python my_dir/simms/tests/tests.py
</code></pre>
<p>gives an error</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory:
'casa': 'casa'
</code></pre>
<p>I have followed the instructions at
<a href="https://github.com/ratt-ru/simms/blob/master/tests/test.py" rel="nofollow noreferrer">https://github.com/ratt-ru/simms/blob/master/tests/test.py</a> to install the
CASA package (from <a href="https://casa.nrao.edu/installlinux.shtml" rel="nofollow noreferrer">https://casa.nrao.edu/installlinux.shtml</a>) . After
uploading and unzipping, I added it to my path variable with</p>
<pre><code>export PATH=$PATH: mydir/casa-6.5.2-26-py3.8
</code></pre>
<p>Note that <code>casa-6.5.2-26-py3.8</code> has a subfolder <code>bin/</code>, so the shell should know where to look, but I still get the same <code>FileNotFoundError</code>.</p>
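<p>Two things stand out in the export line: there is a space after the colon, and <code>PATH</code> points at the top-level directory rather than its <code>bin/</code> subfolder, so the <code>casa</code> executable is still not findable. A stdlib sketch of the equivalent check from Python (the paths are the question's placeholders, not real locations):</p>

```python
import os
import shutil

# Directory name from the question; "mydir" is a placeholder path.
casa_bin = os.path.join("mydir", "casa-6.5.2-26-py3.8", "bin")

# Append the *bin* directory, with no stray space around the separator.
os.environ["PATH"] += os.pathsep + casa_bin

# simms shells out to `casa`, so this must stop returning None:
print(shutil.which("casa"))
```

<p>The shell equivalent would be <code>export PATH=$PATH:mydir/casa-6.5.2-26-py3.8/bin</code>, with no space after the colon; once <code>shutil.which("casa")</code> returns a path, the subprocess call should find it.</p>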
|
<python><installation><package>
|
2023-01-26 20:48:41
| 0
| 966
|
math_lover
|
75,251,349
| 4,876,561
|
Place ellipse on seaborn catplot
|
<p>I have a <a href="https://seaborn.pydata.org/generated/seaborn.catplot.html" rel="nofollow noreferrer"><code>seaborn.catplot</code></a> that looks like this:</p>
<p><a href="https://i.sstatic.net/vPtc6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vPtc6.png" alt="enter image description here" /></a></p>
<p>What I am trying to do is highlight differences in the graph with the following rules:</p>
<ul>
<li>If A-B > 4, color it green</li>
<li>If A-B < -1, color it red</li>
<li>If A-B is between 0 and 2 (inclusive), color it blue</li>
</ul>
<p>I am looking to produce something akin to the below image:
<a href="https://i.sstatic.net/8EcHe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8EcHe.png" alt="enter image description here" /></a></p>
<p>I have an <a href="https://stackoverflow.com/help/minimal-reproducible-example">MRE</a> here:</p>
<pre><code># Stack Overflow Example
import numpy as np, pandas as pd, seaborn as sns
import matplotlib.pyplot as plt
from random import choice
from string import ascii_lowercase, digits
chars = ascii_lowercase + digits
lst = [''.join(choice(chars) for _ in range(2)) for _ in range(100)]
np.random.seed(8)
t = pd.DataFrame(
{
'Key': [''.join(choice(chars) for _ in range(2)) for _ in range(5)]*2,
'Value': np.random.uniform(low=1, high=10, size=(10,)),
'Type': ['A', 'A', 'A', 'A', 'A', 'B', 'B', 'B', 'B', 'B']
}
)
ax = sns.catplot(data=t, x='Value', y='Key', hue='Type', palette="dark").set(title="Stack Overflow Help Me")
plt.show()
</code></pre>
<p>I believe an ellipse will need to be plotted around the points of interest, and I have looked into some questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/20126061/creating-a-confidence-ellipses-in-a-sccatterplot-using-matplotlib">Creating a Confidence Ellipses in a sccatterplot using matplotlib</a></li>
<li><a href="https://stackoverflow.com/questions/46762633/plot-ellipse-in-a-seaborn-scatter-plot">plot ellipse in a seaborn scatter plot</a></li>
</ul>
<p>But none seem to be doing this with <code>catplot</code> in particular, or with customizing their color and with rules.</p>
<p>How can I achieve the desired result with my toy example?</p>
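<p>A matplotlib <code>Ellipse</code> patch can be drawn on the catplot's underlying axes (for a single-facet <code>FacetGrid</code> returned by <code>catplot</code>, that is <code>g.ax</code>). A minimal sketch with made-up values, applying the colour rules above to one (A, B) pair on a plain axes object:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

def pair_color(a, b):
    # Colour rules from the question; pairs matching no rule get no ellipse.
    diff = a - b
    if diff > 4:
        return "green"
    if diff < -1:
        return "red"
    if 0 <= diff <= 2:
        return "blue"
    return None

fig, ax = plt.subplots()
a, b, y = 8.0, 3.0, 1  # one hypothetical (A, B) pair on row y
ax.scatter([a, b], [y, y])
color = pair_color(a, b)
if color:
    # Centre the ellipse between the two points, wide enough to enclose them.
    ax.add_patch(Ellipse(xy=((a + b) / 2, y), width=abs(a - b) + 1,
                         height=0.4, fill=False, edgecolor=color, lw=2))
```

<p>In the real plot you would loop over the keys, look up the A and B x-positions per key, and add one patch per qualifying pair on <code>g.ax</code>.</p>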
|
<python><matplotlib><seaborn><scatter-plot><catplot>
|
2023-01-26 20:31:51
| 1
| 7,351
|
artemis
|
75,251,348
| 11,280,068
|
Python selenium wait until element has text (when it didn't before)
|
<p>There is a div that is always present on the page but doesn't contain anything until a button is pressed.<br />
Is there a way to wait until the div contains text, when it didn't before, and then get the text?</p>
|
<python><selenium><xpath><webdriverwait><expected-condition>
|
2023-01-26 20:31:46
| 2
| 1,194
|
NFeruch - FreePalestine
|
75,251,221
| 20,959,773
|
Python interprets list as a tuple by mistake
|
<p>I have this piece of code:</p>
<pre><code> for keys in self.keys:
if has_classes := self.class_checker(keys[0]):
        print(type(keys[0]))  # just to demonstrate that it is actually a list
keys[0] = [x for x in keys[0] if 'class="' not in x]
for classes in has_classes:
keys[0].append(f'class="{classes}"')
</code></pre>
<p>I want to change the list by using list comprehension and it is showing this error:</p>
<pre><code> <class 'list'>
Traceback (most recent call last):
File "C:\Users\USER\OneDrive\Desktop\XPATH\base\main.py", line 300, in <module>
XPanther('<h1 class="Uo8X3b OhScic zsYMMe">Lidhjet e qasshmërisë</h1>', 'C:\\Users\\USER\\OneDrive\\Desktop\\XPATH\\xpath_test_case.txt').capture()
File "C:\Users\USER\OneDrive\Desktop\XPATH\base\main.py", line 100, in capture
keys[0] = [x for x in keys[0] if 'class="' not in x]
~~~~^^^
TypeError: 'tuple' object does not support item assignment
</code></pre>
<p>As you can see on the first line of the error output, it prints the type of keys[0] as a list (which I already know it is), and then it suddenly becomes a tuple?</p>
<p>I'm very confused, please someone help me!</p>
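<p>The key point is that the error is about the container, not the element: <code>type(keys[0])</code> really is <code>list</code>, but <code>keys[0] = ...</code> rebinds a slot of <code>keys</code>, and <code>keys</code> itself is a tuple. A minimal reproduction with illustrative data:</p>

```python
t = (["a", "b", "c"], "metadata")   # a tuple whose first element is a list

print(type(t[0]))                   # <class 'list'> - reading the element is fine

try:
    t[0] = [x for x in t[0] if x != "a"]   # rebinding the tuple slot fails
except TypeError as e:
    print(e)                        # 'tuple' object does not support item assignment

# Mutating the list *in place* works, because the tuple still holds the
# same list object; only the slot itself is read-only:
t[0][:] = [x for x in t[0] if x != "a"]
print(t)
```

<p>So replacing <code>keys[0] = [...]</code> with the slice assignment <code>keys[0][:] = [...]</code> keeps the rest of the loop unchanged.</p>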
|
<python><list><tuples>
|
2023-01-26 20:18:56
| 1
| 347
|
RifloSnake
|
75,251,110
| 1,628,353
|
How to check if Paramiko SFTP client is still open
|
<p>How do I check whether an SFTP client opened earlier via Paramiko is still active?</p>
<pre><code>self.sftp = ssh.open_sftp()
</code></pre>
<p>My application logic at times keeps the SFTP connection idle from anywhere between 0.5 sec to 5 minutes.</p>
|
<python><python-2.7><ssh><sftp><paramiko>
|
2023-01-26 20:06:43
| 2
| 1,547
|
chirag7jain
|
75,250,954
| 7,255,965
|
Use `sympy.Order` with functions, instead of symbols
|
<p>I have a problem of the form f(u(x,y), v(x,y)) = 0. For a simple example we could choose f = u^2*v. I want to perturb the state with u = u_0 + du, v = v_0 + dv.
Doing it in sympy like this:</p>
<pre><code>import sympy as sp
x, y = sp.symbols("x, y")
u0, v0 = sp.symbols("u_0, v_0")
du = sp.Function("\\delta u")(x, y)
dv = sp.Function("\\delta v")(x, y)
u = u0 + du
v = v0 + dv
f = u**2 * v
print(sp.expand(u**2 * v))
</code></pre>
<p>I get <code>u_0**2*v_0 + u_0**2*\delta v(x, y) + 2*u_0*v_0*\delta u(x, y) + 2*u_0*\delta u(x, y)*\delta v(x, y) + v_0*\delta u(x, y)**2 + \delta u(x, y)**2*\delta v(x, y)</code></p>
<p>Is there a way to tell sympy that any product of deltas can be treated as zero?
I tried using <code>sp.Order</code>, but it doesn't work when the expansion is in powers of a function.</p>
<p>Thanks</p>
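<p>A standard trick (a sketch, not the only way) is to tag each perturbation with a bookkeeping symbol <code>eps</code>: truncating the series in <code>eps</code> at second order kills every product of deltas, and substituting <code>eps = 1</code> afterwards restores the original expression:</p>

```python
import sympy as sp

x, y = sp.symbols("x y")
u0, v0, eps = sp.symbols("u_0 v_0 epsilon")
du = sp.Function("du")(x, y)
dv = sp.Function("dv")(x, y)

# eps marks "smallness"; the series in eps is taken over an ordinary
# symbol, so the functions of (x, y) cause no trouble.
f = (u0 + eps * du) ** 2 * (v0 + eps * dv)
linear = sp.expand(f.series(eps, 0, 2).removeO().subs(eps, 1))
print(linear)
```

<p>This leaves exactly the zeroth- and first-order terms, <code>u_0**2*v_0 + u_0**2*dv + 2*u_0*v_0*du</code>.</p>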
|
<python><sympy>
|
2023-01-26 19:46:14
| 1
| 322
|
Yotam Ohad
|
75,250,814
| 3,130,926
|
How to make `concurrent.futures.ProcessPoolExecutor().map` work with kwonly args?
|
<p>How to make <code>concurrent.futures.ProcessPoolExecutor().map</code> work with kwonly args? Is this even possible?</p>
<p>With positional args:</p>
<pre class="lang-py prettyprint-override"><code>
def worker_function(x):
# Return the square of the passed argument:
return x ** 2
# concurrent.futures example:
from concurrent.futures import ProcessPoolExecutor
with ProcessPoolExecutor() as executor:
squares = list(executor.map(worker_function, (1, 2, 3, 4, 5, 6)))
</code></pre>
<p>How to make this work for a function accepting keyword only args:</p>
<pre><code>def worker_function(*, x):
# Return the square of the passed argument:
return x ** 2
</code></pre>
<p>I tried this</p>
<pre><code>with ProcessPoolExecutor() as executor:
squares = list(executor.map(worker_function, (dict(x=x) for x in (1, 2))))
</code></pre>
<p>which fails because the dict is passed as a single positional argument rather than being unpacked as kwargs:</p>
<pre><code>TypeError: worker_function() takes 0 positional arguments but 1 was given
</code></pre>
|
<python><python-multiprocessing><concurrent.futures>
|
2023-01-26 19:31:04
| 1
| 14,217
|
muon
|
75,250,803
| 8,383,726
|
Python how to create a dictionary using the values in multiple pandas dataframe columns as tuple keys and a single column as value
|
<p>I would like to create a dictionary using the values of multiple pandas DataFrame columns as tuple keys and a single column as the values. Where a particular tuple pair has no value, I would like to assign a generic value of, say, 99999. This latter part is proving to be a challenge, and I would welcome help from this forum on how to achieve it. Thank you.</p>
<p>Sample extract data:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>Periods(Days)</th>
<th>Factory</th>
<th>Warehouse</th>
<th>Sales Outlets</th>
<th>Products</th>
<th>Dist Fact-Whse</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>Berlin</td>
<td>Teltow</td>
<td>Magdeburg</td>
<td>Maracuja</td>
<td>19.6</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>Hamburg</td>
<td>Wismar</td>
<td>Lubeck</td>
<td>Himbeer</td>
<td>126.2</td>
</tr>
<tr>
<td>2</td>
<td>3</td>
<td>Berlin</td>
<td>Kleinmachnow</td>
<td>Halle</td>
<td>Malaga</td>
<td>26.9</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>Hamburg</td>
<td>Wismar</td>
<td>Lubeck</td>
<td>Waldmeister</td>
<td>126.2</td>
</tr>
<tr>
<td>4</td>
<td>5</td>
<td>Berlin</td>
<td>Kleinmachnow</td>
<td>Leipzig</td>
<td>Walnuss</td>
<td>26.9</td>
</tr>
</tbody>
</table>
</div>
<p>Based on the above data set, the following piece of code is how I am creating my dictionary object(s) from the data frame:</p>
<pre><code>F = df.Factory.drop_duplicates().to_list()
W = df.Warehouse.drop_duplicates().to_list()
dist1 = {}
for i in df.index:
key = (df.at[i, 'Factory'], df.at[i, 'Warehouse'])
value = df.at[i, 'Dist Fact-Whse']
dicT = {key : value}
dist1.update(dicT)
for f in F:
for w in W:
if (f, w) not in dist1:
dist1[(f, w)] = 9999
</code></pre>
<p>I get my desired or expected outcome: <code>{('Berlin', 'Teltow'): 19.6, ('Hamburg', 'Wismar'): 126.2, ('Berlin', 'Kleinmachnow'): 26.9, ('Berlin', 'Wismar'): 9999, ('Hamburg', 'Teltow'): 9999, ('Hamburg', 'Kleinmachnow'): 9999}</code></p>
<p>but this is too elaborate, time-consuming, and inefficient, as I have a lot of other parameters similar to "dist1" to create in my code.</p>
<p>I kindly welcome a more elegant and smart solution to this issue.</p>
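<p>One compact pattern (a sketch over a shortened copy of the data) is to seed every (factory, warehouse) pair with the sentinel via <code>itertools.product</code>, then overwrite the pairs that actually occur, using vectorised <code>zip</code> over the columns instead of a row loop:</p>

```python
from itertools import product

import pandas as pd

df = pd.DataFrame({
    "Factory":        ["Berlin", "Hamburg", "Berlin"],
    "Warehouse":      ["Teltow", "Wismar", "Kleinmachnow"],
    "Dist Fact-Whse": [19.6, 126.2, 26.9],
})

# Seed every combination with the sentinel, then overwrite observed pairs.
dist1 = dict.fromkeys(product(df["Factory"].unique(), df["Warehouse"].unique()), 9999)
dist1.update(zip(zip(df["Factory"], df["Warehouse"]), df["Dist Fact-Whse"]))
print(dist1)
```

<p>The same two lines work for each of your other parameters; only the column names and the sentinel change.</p>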
|
<python><dataframe><data-science>
|
2023-01-26 19:29:00
| 1
| 335
|
Vondoe79
|
75,250,792
| 6,077,239
|
Keep nan in result when performing statsmodels OLS regression in Python
|
<p>I want to perform OLS regression using Python's statsmodels package, but my dataset has nans in it. I know I can use the missing='drop' option when performing the regression, but then some of the results (fitted values or residuals) have a different length from the original y variable.</p>
<p>I have the following code as an example:</p>
<pre><code>import numpy as np
import statsmodels.api as sm
yvars = np.array([1.0, 6.0, 3.0, 2.0, 8.0, 4.0, 5.0, 2.0, np.nan, 3.0])
xvars = np.array(
[
[1.0, 8.0],
[8.0, np.nan],
[np.nan, 3.0],
[3.0, 6.0],
[5.0, 3.0],
[2.0, 7.0],
[1.0, 3.0],
[2.0, 2.0],
[7.0, 9.0],
[3.0, 1.0],
]
)
res = sm.OLS(yvars, sm.add_constant(xvars), missing='drop').fit()
res.resid
</code></pre>
<p>The result is as follows:</p>
<pre><code>array([-0.71907958, -1.9012464 , 1.78811122, 1.18983701, 2.63854267,
-1.45254075, -1.54362416])
</code></pre>
<p>My question is that the result is an array of length 7 (after dropping nans), but the length of yvars is 10. What if I want the residuals returned with the same length as yvars, with nan in every position where there is at least one nan in either yvars or xvars?</p>
<p>Basically, the result I want to get is:</p>
<pre><code>array([-0.71907958, nan , nan , -1.9012464 , 1.78811122, 1.18983701, 2.63854267,
-1.45254075, nan , -1.54362416])
</code></pre>
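<p>One way (a numpy sketch with a shortened example and made-up residuals) is to rebuild the drop mask yourself and scatter the short residual vector back into a full-length array; the mask reproduces what <code>missing='drop'</code> removes, namely any row with a nan in y or in any x column:</p>

```python
import numpy as np

yvar = np.array([1.0, 6.0, 3.0, np.nan, 8.0])
xvars = np.array([[1.0, 8.0],
                  [8.0, np.nan],
                  [3.0, 6.0],
                  [5.0, 3.0],
                  [2.0, 7.0]])

# Rows that statsmodels keeps with missing='drop': no nan in y or any x.
keep = ~(np.isnan(yvar) | np.isnan(xvars).any(axis=1))

# Stand-in for res.resid, which only covers the kept rows:
resid_short = np.array([0.5, -0.3, 0.1])

resid_full = np.full(yvar.shape, np.nan)
resid_full[keep] = resid_short
print(resid_full)
```

<p>The same expansion works for fitted values or any other per-observation output of the results object.</p>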
|
<python><numpy><statsmodels>
|
2023-01-26 19:27:57
| 1
| 1,153
|
lebesgue
|
75,250,788
| 13,885,312
|
How to prevent python3.11 TaskGroup from canceling all the tasks
|
<p>I just discovered new features of Python 3.11 like ExceptionGroup and TaskGroup, and I'm confused by the following TaskGroup behavior: if one or more tasks inside the group fail, then all other running tasks are cancelled, and <strong>I have no chance to change that behavior</strong>.
Example:</p>
<pre><code>import asyncio

async def f_error():
raise ValueError()
async def f_normal(arg):
print('starting', arg)
await asyncio.sleep(1)
print('ending', arg)
async with asyncio.TaskGroup() as tg:
tg.create_task(f_normal(1))
tg.create_task(f_normal(2))
tg.create_task(f_error())
# starting 1
# starting 2
#----------
#< traceback of the error here >
</code></pre>
<p>In the example above I cannot make "ending 1" and "ending 2" be printed. Meanwhile, it would be very useful to have something like the <code>asyncio.gather(return_exceptions=True)</code> option, so the remaining tasks are not cancelled when an error occurs.</p>
<p>You can say "just do not use TaskGroup if you do not want this cancellation behavior", but the answer is that I want to use the new <strong>exception groups</strong> feature, and it is strictly bound to TaskGroup.</p>
<p>So the questions are:</p>
<ol>
<li>May I somehow utilize exception groups in asyncio without this all-or-nothing cancellation policy in TaskGroup?</li>
<li>If for the previous the answer is "NO": why python developers eliminated the possibility to disable cancellation in the TaskGroup API?</li>
</ol>
|
<python><asynchronous><python-asyncio><python-3.11>
|
2023-01-26 19:27:11
| 4
| 415
|
Anton M.
|
75,250,578
| 17,835,656
|
how can i hash a specific tag from XML file in a correct way?
|
<p>We are working on e-invoicing and sending the invoices to the government.</p>
<p>and they gave us what they want from us to do like signing some tags and hashing another.</p>
<p>My problem now is with hashing: when I hash the specific tag, after doing everything else right, and then send the invoice, I get errors about the hash only.</p>
<p>They gave us some samples, so I took a sample, extracted the tag that causes the error, hashed it myself, and compared it with the hash stored in the same file: I get a different one, not the same.</p>
<p>I called them about this problem and they said: when you take the tag, you are taking it in the wrong way.</p>
<p>the hash is : <code>sha256</code></p>
<p>this is the invoice as XML:</p>
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<Invoice xmlns="urn:oasis:names:specification:ubl:schema:xsd:Invoice-2" xmlns:cac="urn:oasis:names:specification:ubl:schema:xsd:CommonAggregateComponents-2" xmlns:cbc="urn:oasis:names:specification:ubl:schema:xsd:CommonBasicComponents-2" xmlns:ext="urn:oasis:names:specification:ubl:schema:xsd:CommonExtensionComponents-2"><ext:UBLExtensions>
<ext:UBLExtension>
<ext:ExtensionURI>urn:oasis:names:specification:ubl:dsig:enveloped:xades</ext:ExtensionURI>
<ext:ExtensionContent>
<!-- Please note that the signature values are sample values only -->
<sig:UBLDocumentSignatures xmlns:sig="urn:oasis:names:specification:ubl:schema:xsd:CommonSignatureComponents-2" xmlns:sac="urn:oasis:names:specification:ubl:schema:xsd:SignatureAggregateComponents-2" xmlns:sbc="urn:oasis:names:specification:ubl:schema:xsd:SignatureBasicComponents-2">
<sac:SignatureInformation>
<cbc:ID>urn:oasis:names:specification:ubl:signature:1</cbc:ID>
<sbc:ReferencedSignatureID>urn:oasis:names:specification:ubl:signature:Invoice</sbc:ReferencedSignatureID>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#" Id="signature">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2006/12/xml-c14n11"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#ecdsa-sha256"/>
<ds:Reference Id="invoiceSignedData" URI="">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
<ds:XPath>not(//ancestor-or-self::ext:UBLExtensions)</ds:XPath>
</ds:Transform>
<ds:Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
<ds:XPath>not(//ancestor-or-self::cac:Signature)</ds:XPath>
</ds:Transform>
<ds:Transform Algorithm="http://www.w3.org/TR/1999/REC-xpath-19991116">
<ds:XPath>not(//ancestor-or-self::cac:AdditionalDocumentReference[cbc:ID='QR'])</ds:XPath>
</ds:Transform>
<ds:Transform Algorithm="http://www.w3.org/2006/12/xml-c14n11"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>RvCSpMYz8009KbJ3ku72oaCFWpzEfQNcpc+5bulh3Jk=</ds:DigestValue>
</ds:Reference>
<ds:Reference Type="http://www.w3.org/2000/09/xmldsig#SignatureProperties" URI="#xadesSignedProperties">
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>OGU1M2Q3NGFkOTdkYTRiNDVhOGZmYmU2ZjE0YzI3ZDhhNjlmM2EzZmQ4MTU5NTBhZjBjNDU2MWZlNjU3MWU0ZQ==</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>MEYCIQDYsDnviJYPgYjyCIYAyzETeYthIoJaQhChblP4eAAPPAIhAJl6zfHgiKmWTtsfUz8YBZ8QkQ9rBL4Uy7mK0cxvWooH</ds:SignatureValue>
<ds:KeyInfo>
<ds:X509Data>
<ds:X509Certificate>MIID6TCCA5CgAwIBAgITbwAAf8tem6jngr16DwABAAB/yzAKBggqhkjOPQQDAjBjMRUwEwYKCZImiZPyLGQBGRYFbG9jYWwxEzARBgoJkiaJk/IsZAEZFgNnb3YxFzAVBgoJkiaJk/IsZAEZFgdleHRnYXp0MRwwGgYDVQQDExNUU1pFSU5WT0lDRS1TdWJDQS0xMB4XDTIyMDkxNDEzMjYwNFoXDTI0MDkxMzEzMjYwNFowTjELMAkGA1UEBhMCU0ExEzARBgNVBAoTCjMxMTExMTExMTExDDAKBgNVBAsTA1RTVDEcMBoGA1UEAxMTVFNULTMxMTExMTExMTEwMTExMzBWMBAGByqGSM49AgEGBSuBBAAKA0IABGGDDKDmhWAITDv7LXqLX2cmr6+qddUkpcLCvWs5rC2O29W/hS4ajAK4Qdnahym6MaijX75Cg3j4aao7ouYXJ9GjggI5MIICNTCBmgYDVR0RBIGSMIGPpIGMMIGJMTswOQYDVQQEDDIxLVRTVHwyLVRTVHwzLWE4NjZiMTQyLWFjOWMtNDI0MS1iZjhlLTdmNzg3YTI2MmNlMjEfMB0GCgmSJomT8ixkAQEMDzMxMTExMTExMTEwMTExMzENMAsGA1UEDAwEMTEwMDEMMAoGA1UEGgwDVFNUMQwwCgYDVQQPDANUU1QwHQYDVR0OBBYEFDuWYlOzWpFN3no1WtyNktQdrA8JMB8GA1UdIwQYMBaAFHZgjPsGoKxnVzWdz5qspyuZNbUvME4GA1UdHwRHMEUwQ6BBoD+GPWh0dHA6Ly90c3RjcmwuemF0Y2EuZ292LnNhL0NlcnRFbnJvbGwvVFNaRUlOVk9JQ0UtU3ViQ0EtMS5jcmwwga0GCCsGAQUFBwEBBIGgMIGdMG4GCCsGAQUFBzABhmJodHRwOi8vdHN0Y3JsLnphdGNhLmdvdi5zYS9DZXJ0RW5yb2xsL1RTWkVpbnZvaWNlU0NBMS5leHRnYXp0Lmdvdi5sb2NhbF9UU1pFSU5WT0lDRS1TdWJDQS0xKDEpLmNydDArBggrBgEFBQcwAYYfaHR0cDovL3RzdGNybC56YXRjYS5nb3Yuc2Evb2NzcDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0lBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMDMCcGCSsGAQQBgjcVCgQaMBgwCgYIKwYBBQUHAwIwCgYIKwYBBQUHAwMwCgYIKoZIzj0EAwIDRwAwRAIgOgjNPJW017lsIijmVQVkP7GzFO2KQKd9GHaukLgIWFsCIFJF9uwKhTMxDjWbN+1awsnFI7RLBRxA/6hZ+F1wtaqU</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
<ds:Object>
<xades:QualifyingProperties xmlns:xades="http://uri.etsi.org/01903/v1.3.2#" Target="signature">
<xades:SignedProperties Id="xadesSignedProperties">
<xades:SignedSignatureProperties>
<xades:SigningTime>2022-09-15T00:41:21Z</xades:SigningTime>
<xades:SigningCertificate>
<xades:Cert>
<xades:CertDigest>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>YTJkM2JhYTcwZTBhZTAxOGYwODMyNzY3NTdkZDM3YzhjY2IxOTIyZDZhM2RlZGJiMGY0NDUzZWJhYWI4MDhmYg==</ds:DigestValue>
</xades:CertDigest>
<xades:IssuerSerial>
<ds:X509IssuerName>CN=TSZEINVOICE-SubCA-1, DC=extgazt, DC=gov, DC=local</ds:X509IssuerName>
<ds:X509SerialNumber>2475382886904809774818644480820936050208702411</ds:X509SerialNumber>
</xades:IssuerSerial>
</xades:Cert>
</xades:SigningCertificate>
</xades:SignedSignatureProperties>
</xades:SignedProperties>
</xades:QualifyingProperties>
</ds:Object>
</ds:Signature>
</sac:SignatureInformation>
</sig:UBLDocumentSignatures>
</ext:ExtensionContent>
</ext:UBLExtension>
</ext:UBLExtensions>
<cbc:ProfileID>reporting:1.0</cbc:ProfileID>
<cbc:ID>SME00010</cbc:ID>
<cbc:UUID>8e6000cf-1a98-4174-b3e7-b5d5954bc10d</cbc:UUID>
<cbc:IssueDate>2022-08-17</cbc:IssueDate>
<cbc:IssueTime>17:41:08</cbc:IssueTime>
<cbc:InvoiceTypeCode name="0200000">388</cbc:InvoiceTypeCode>
<cbc:Note languageID="ar">ABC</cbc:Note>
<cbc:DocumentCurrencyCode>SAR</cbc:DocumentCurrencyCode>
<cbc:TaxCurrencyCode>SAR</cbc:TaxCurrencyCode>
<cac:AdditionalDocumentReference>
<cbc:ID>ICV</cbc:ID>
<cbc:UUID>10</cbc:UUID>
</cac:AdditionalDocumentReference>
<cac:AdditionalDocumentReference>
<cbc:ID>PIH</cbc:ID>
<cac:Attachment>
<cbc:EmbeddedDocumentBinaryObject mimeCode="text/plain">NWZlY2ViNjZmZmM4NmYzOGQ5NTI3ODZjNmQ2OTZjNzljMmRiYzIzOWRkNGU5MWI0NjcyOWQ3M2EyN2ZiNTdlOQ==</cbc:EmbeddedDocumentBinaryObject>
</cac:Attachment>
</cac:AdditionalDocumentReference>
<cac:AdditionalDocumentReference>
<cbc:ID>QR</cbc:ID>
<cac:Attachment>
<cbc:EmbeddedDocumentBinaryObject mimeCode="text/plain">ARNBY21lIFdpZGdldOKAmXMgTFREAg8zMTExMTExMTExMDExMTMDFDIwMjItMDgtMTdUMTc6NDE6MDhaBAYyMzEuMTUFBTMwLjE1BixSdkNTcE1ZejgwMDlLYkoza3U3Mm9hQ0ZXcHpFZlFOY3BjKzVidWxoM0prPQdgTUVZQ0lRRFlzRG52aUpZUGdZanlDSVlBeXpFVGVZdGhJb0phUWhDaGJsUDRlQUFQUEFJaEFKbDZ6ZkhnaUttV1R0c2ZVejhZQlo4UWtROXJCTDRVeTdtSzBjeHZXb29ICFgwVjAQBgcqhkjOPQIBBgUrgQQACgNCAARhgwyg5oVgCEw7+y16i19nJq+vqnXVJKXCwr1rOawtjtvVv4UuGowCuEHZ2ocpujGoo1++QoN4+GmqO6LmFyfRCUYwRAIgOgjNPJW017lsIijmVQVkP7GzFO2KQKd9GHaukLgIWFsCIFJF9uwKhTMxDjWbN+1awsnFI7RLBRxA/6hZ+F1wtaqU</cbc:EmbeddedDocumentBinaryObject>
</cac:Attachment>
</cac:AdditionalDocumentReference><cac:Signature>
<cbc:ID>urn:oasis:names:specification:ubl:signature:Invoice</cbc:ID>
<cbc:SignatureMethod>urn:oasis:names:specification:ubl:dsig:enveloped:xades</cbc:SignatureMethod>
</cac:Signature><cac:AccountingSupplierParty>
<cac:Party>
<cac:PartyIdentification>
<cbc:ID schemeID="CRN">324223432432432</cbc:ID>
</cac:PartyIdentification>
<cac:PostalAddress>
<cbc:StreetName>الامير سلطان</cbc:StreetName>
<cbc:BuildingNumber>3242</cbc:BuildingNumber>
<cbc:PlotIdentification>4323</cbc:PlotIdentification>
<cbc:CitySubdivisionName>32423423</cbc:CitySubdivisionName>
<cbc:CityName>الرياض | Riyadh</cbc:CityName>
<cbc:PostalZone>32432</cbc:PostalZone>
<cac:Country>
<cbc:IdentificationCode>SA</cbc:IdentificationCode>
</cac:Country>
</cac:PostalAddress>
<cac:PartyTaxScheme>
<cbc:CompanyID>311111111101113</cbc:CompanyID>
<cac:TaxScheme>
<cbc:ID>VAT</cbc:ID>
</cac:TaxScheme>
</cac:PartyTaxScheme>
<cac:PartyLegalEntity>
<cbc:RegistrationName>Acme Widget’s LTD</cbc:RegistrationName>
</cac:PartyLegalEntity>
</cac:Party>
</cac:AccountingSupplierParty>
<cac:AccountingCustomerParty>
<cac:Party>
<cac:PostalAddress>
<cbc:StreetName/>
<cbc:CitySubdivisionName>32423423</cbc:CitySubdivisionName>
<cac:Country>
<cbc:IdentificationCode>SA</cbc:IdentificationCode>
</cac:Country>
</cac:PostalAddress>
<cac:PartyTaxScheme>
<cac:TaxScheme>
<cbc:ID>VAT</cbc:ID>
</cac:TaxScheme>
</cac:PartyTaxScheme>
<cac:PartyLegalEntity>
<cbc:RegistrationName/>
</cac:PartyLegalEntity>
</cac:Party>
</cac:AccountingCustomerParty>
<cac:PaymentMeans>
<cbc:PaymentMeansCode>10</cbc:PaymentMeansCode>
</cac:PaymentMeans>
<cac:AllowanceCharge>
<cbc:ChargeIndicator>false</cbc:ChargeIndicator>
<cbc:AllowanceChargeReason>discount</cbc:AllowanceChargeReason>
<cbc:Amount currencyID="SAR">0.00</cbc:Amount>
<cac:TaxCategory>
<cbc:ID schemeID="UN/ECE 5305" schemeAgencyID="6">S</cbc:ID>
<cbc:Percent>15</cbc:Percent>
<cac:TaxScheme>
<cbc:ID schemeID="UN/ECE 5153" schemeAgencyID="6">VAT</cbc:ID>
</cac:TaxScheme>
</cac:TaxCategory>
<cac:TaxCategory>
<cbc:ID schemeID="UN/ECE 5305" schemeAgencyID="6">S</cbc:ID>
<cbc:Percent>15</cbc:Percent>
<cac:TaxScheme>
<cbc:ID schemeID="UN/ECE 5153" schemeAgencyID="6">VAT</cbc:ID>
</cac:TaxScheme>
</cac:TaxCategory>
</cac:AllowanceCharge>
<cac:TaxTotal>
<cbc:TaxAmount currencyID="SAR">30.15</cbc:TaxAmount>
</cac:TaxTotal>
<cac:TaxTotal>
<cbc:TaxAmount currencyID="SAR">30.15</cbc:TaxAmount>
<cac:TaxSubtotal>
<cbc:TaxableAmount currencyID="SAR">201.00</cbc:TaxableAmount>
<cbc:TaxAmount currencyID="SAR">30.15</cbc:TaxAmount>
<cac:TaxCategory>
<cbc:ID schemeID="UN/ECE 5305" schemeAgencyID="6">S</cbc:ID>
<cbc:Percent>15.00</cbc:Percent>
<cac:TaxScheme>
<cbc:ID schemeID="UN/ECE 5153" schemeAgencyID="6">VAT</cbc:ID>
</cac:TaxScheme>
</cac:TaxCategory>
</cac:TaxSubtotal>
</cac:TaxTotal>
<cac:LegalMonetaryTotal>
<cbc:LineExtensionAmount currencyID="SAR">201.00</cbc:LineExtensionAmount>
<cbc:TaxExclusiveAmount currencyID="SAR">201.00</cbc:TaxExclusiveAmount>
<cbc:TaxInclusiveAmount currencyID="SAR">231.15</cbc:TaxInclusiveAmount>
<cbc:AllowanceTotalAmount currencyID="SAR">0.00</cbc:AllowanceTotalAmount>
<cbc:PrepaidAmount currencyID="SAR">0.00</cbc:PrepaidAmount>
<cbc:PayableAmount currencyID="SAR">231.15</cbc:PayableAmount>
</cac:LegalMonetaryTotal>
<cac:InvoiceLine>
<cbc:ID>1</cbc:ID>
<cbc:InvoicedQuantity unitCode="PCE">33.000000</cbc:InvoicedQuantity>
<cbc:LineExtensionAmount currencyID="SAR">99.00</cbc:LineExtensionAmount>
<cac:TaxTotal>
<cbc:TaxAmount currencyID="SAR">14.85</cbc:TaxAmount>
<cbc:RoundingAmount currencyID="SAR">113.85</cbc:RoundingAmount>
</cac:TaxTotal>
<cac:Item>
<cbc:Name>كتاب</cbc:Name>
<cac:ClassifiedTaxCategory>
<cbc:ID>S</cbc:ID>
<cbc:Percent>15.00</cbc:Percent>
<cac:TaxScheme>
<cbc:ID>VAT</cbc:ID>
</cac:TaxScheme>
</cac:ClassifiedTaxCategory>
</cac:Item>
<cac:Price>
<cbc:PriceAmount currencyID="SAR">3.00</cbc:PriceAmount>
<cac:AllowanceCharge>
<cbc:ChargeIndicator>false</cbc:ChargeIndicator>
<cbc:AllowanceChargeReason>discount</cbc:AllowanceChargeReason>
<cbc:Amount currencyID="SAR">0.00</cbc:Amount>
</cac:AllowanceCharge>
</cac:Price>
</cac:InvoiceLine>
<cac:InvoiceLine>
<cbc:ID>2</cbc:ID>
<cbc:InvoicedQuantity unitCode="PCE">3.000000</cbc:InvoicedQuantity>
<cbc:LineExtensionAmount currencyID="SAR">102.00</cbc:LineExtensionAmount>
<cac:TaxTotal>
<cbc:TaxAmount currencyID="SAR">15.30</cbc:TaxAmount>
<cbc:RoundingAmount currencyID="SAR">117.30</cbc:RoundingAmount>
</cac:TaxTotal>
<cac:Item>
<cbc:Name>قلم</cbc:Name>
<cac:ClassifiedTaxCategory>
<cbc:ID>S</cbc:ID>
<cbc:Percent>15.00</cbc:Percent>
<cac:TaxScheme>
<cbc:ID>VAT</cbc:ID>
</cac:TaxScheme>
</cac:ClassifiedTaxCategory>
</cac:Item>
<cac:Price>
<cbc:PriceAmount currencyID="SAR">34.00</cbc:PriceAmount>
<cac:AllowanceCharge>
<cbc:ChargeIndicator>false</cbc:ChargeIndicator>
<cbc:AllowanceChargeReason>discount</cbc:AllowanceChargeReason>
<cbc:Amount currencyID="SAR">0.00</cbc:Amount>
</cac:AllowanceCharge>
</cac:Price>
</cac:InvoiceLine>
</Invoice>
</code></pre>
<p>and the specific tag that I need to extract correctly and hash is:</p>
<blockquote>
<p>xades:SignedProperties
this is its ID:
Id="xadesSignedProperties"</p>
</blockquote>
<p>when I <code>hash</code> the tag and <code>encode</code> it into <code>base64</code>, it needs to be the same as this result:</p>
<blockquote>
<p>OGU1M2Q3NGFkOTdkYTRiNDVhOGZmYmU2ZjE0YzI3ZDhhNjlmM2EzZmQ4MTU5NTBhZjBjNDU2MWZlNjU3MWU0ZQ==</p>
</blockquote>
<p>because it is the result in the sample.</p>
<hr />
<p>What I have tried is:</p>
<p>I did a canonicalization of the XML file using <code>Python code</code>, after that I took the tag, hashed it, and encoded the hash into base64. This is my code:</p>
<pre class="lang-py prettyprint-override"><code>import lxml.etree as ET
import hashlib
import base64
et = ET.parse("sample_Invoice.xml")
et.write_c14n("my_xml_file.xml")
my_xml = open("my_xml_file.xml","rb")
my_xml_result = my_xml.read().decode()
# i will split the tag that is before <xades:SignedProperties Id="xadesSignedProperties">
# to get the <xades:SignedProperties Id="xadesSignedProperties"> and the rest
SignedProperties_1 = my_xml_result.split('<xades:QualifyingProperties xmlns:xades="http://uri.etsi.org/01903/v1.3.2#" Target="signature">')[-1]
# i will split the tag that is after <xades:SignedProperties Id="xadesSignedProperties">
# to get the specific tag that i want only
SignedProperties_final = SignedProperties_1.split("</xades:QualifyingProperties>")[0]
# i will take the hash as hex
hashed_tag = hashlib.sha256(SignedProperties_final.encode()).hexdigest()
print(hashed_tag)
# i will encode the hex code into base64
print(base64.b64encode(hashed_tag.encode()))
</code></pre>
<p>this is my result:</p>
<blockquote>
<p>ZjcyZjUyNmFmYmY0ZGRmYWM2NDBlNzljYWVlZWNjOTM5ZjU4ZTY4ZTA3Y2JjM2Q0NzA4MzgwY2ZmOWM2ZTAzMw==</p>
</blockquote>
<p>They are not the same at all.</p>
<p>I do not know what is wrong.</p>
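For reference, here is the kind of approach I suspect they mean by "taking the tag correctly": parse the document and canonicalize the actual element subtree instead of splitting strings (string-splitting drops namespace declarations inherited from ancestor elements, which changes the serialized bytes and therefore the hash). This is only a sketch on a toy document using the stdlib's C14N support (Python 3.8+); for the real invoice the <code>xades</code> namespace would have to be registered and preserved, so I do not expect it to reproduce the sample value as-is.

```python
import base64
import hashlib
import xml.etree.ElementTree as ET

def element_digest(xml_text, path):
    # Canonicalize exactly one element's subtree, sha256 it, and
    # base64-encode the hex digest (matching the sample's format).
    root = ET.fromstring(xml_text)
    elem = root.find(path)
    c14n = ET.canonicalize(ET.tostring(elem, encoding="unicode"))
    digest_hex = hashlib.sha256(c14n.encode("utf-8")).hexdigest()
    return base64.b64encode(digest_hex.encode("ascii")).decode("ascii")

# toy stand-in for <xades:SignedProperties Id="xadesSignedProperties">
doc = "<root><props Id='x'><time>2022-09-15T00:41:21Z</time></props></root>"
print(element_digest(doc, ".//props"))
```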
|
<python><xml><xml-parsing><invoice><ubl>
|
2023-01-26 19:04:00
| 2
| 721
|
Mohammed almalki
|
75,250,483
| 5,592,430
|
jupyter notebook display plots as separate output instead of updating existing one
|
<p>I'd like to draw an interactive plot with a dropdown widget. For this aim I use the following code in my jupyter notebook:</p>
<pre><code>import ipywidgets as widgets
import plotly.graph_objects as go
import pandas as pd
df = pd.DataFrame({'timestamp' : [1,2,3,4,5,6], 'close' : [11,22,33,44,55,66], 'open' : [111,222,333,444,555,666]})
def plot(feature):
fig = go.Figure(data=go.Scatter(x = df['timestamp'].values, y = df[feature].values),
layout_title_text = feature
)
fig.show()
_ = widgets.interact(plot, feature = ['close', 'open'])
</code></pre>
<p>Every time I select a value in the dropdown box, the corresponding plot is displayed as a separate output, but I'd like it to update the existing one:</p>
<p><a href="https://i.sstatic.net/IbcUB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IbcUB.png" alt="enter image description here" /></a></p>
<p>Please explain how to fix this issue.</p>
|
<python><jupyter-notebook><plotly>
|
2023-01-26 18:55:18
| 0
| 981
|
Roman Kazmin
|
75,250,429
| 2,789,788
|
Pandas dataframe to dictionary where values are lists
|
<p>I have a dataframe with two columns:</p>
<pre><code>key | value
"a" | 1
"a" | 2
"b" | 4
</code></pre>
<p>which I would like to map to a dictionary that would look like:</p>
<pre><code>my_dict["a"] = [1,2]
my_dict["b"] = [4]
</code></pre>
<p>My current implementation is</p>
<pre><code>for k in df["keys"].unique():
vals = df[df["keys"] == k]["value"]
my_dict[k] = vals
</code></pre>
<p>But this implementation takes a long time on my dataframe that has ~400k rows. Is there a better way to do this?</p>
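For what it's worth, the grouped one-liner <code>df.groupby("key")["value"].apply(list).to_dict()</code> builds the same dictionary in a single pass, avoiding the per-key boolean filtering that makes my loop slow. The same grouping logic, sketched with only the stdlib (plain lists standing in for the two columns):

```python
from collections import defaultdict

keys = ["a", "a", "b"]
values = [1, 2, 4]

my_dict = defaultdict(list)
for k, v in zip(keys, values):   # one O(n) pass instead of a scan per key
    my_dict[k].append(v)

print(dict(my_dict))  # {'a': [1, 2], 'b': [4]}
```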
|
<python><pandas>
|
2023-01-26 18:49:37
| 0
| 1,810
|
theQman
|
75,250,367
| 867,549
|
How do I write a proper test for this mocked context manager and ZipFile?
|
<p>How am I supposed to write this test? I've tried the various options listed but each returns the same failed test:</p>
<pre><code>import zipfile
from mock import Mock, patch
def unzip_file(fp):
with zipfile.ZipFile(fp, 'r') as z:
z.extractall('dir')
@patch('zipfile.ZipFile')
def test_unzip_file(m_zipfile):
# I have tried the following...
# m_zipfile.__enter__.extractall = Mock()
# m_zipfile.extractall = Mock()
# m_zipfile.return_value.__enter__.return_value = Mock()
m_zipfile.return_value.__enter__.return_value.extractall = Mock()
unzip_file('test')
m_zipfile.assert_called_with('test', 'r') # this test passes
m_zipfile.extractall.assert_called_with('dir') # this test fails
</code></pre>
<p>I tried to use <a href="https://stackoverflow.com/a/28852060/867549">this</a> answer as a guide but I'm still lost as to how to properly do this. The actual function in our code is more complex with additional parameters but I am trying to start at the base first.</p>
<p>The failure...</p>
<pre><code>E AssertionError: expected call not found.
E Expected: extractall('dir')
E Actual: not called.
</code></pre>
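For context, here is a self-contained sketch of the attribute chain I believe is involved: the object bound by <code>with zipfile.ZipFile(...) as z</code> is <code>return_value.__enter__.return_value</code> on the patched class, not the patched class itself, so the <code>extractall</code> assertion has to target that attribute.

```python
import zipfile
from unittest.mock import patch

def unzip_file(fp):
    with zipfile.ZipFile(fp, "r") as z:
        z.extractall("dir")

with patch("zipfile.ZipFile") as m_zipfile:
    unzip_file("test")
    # the patched class is called with the constructor arguments
    m_zipfile.assert_called_with("test", "r")
    # the object bound by `with ... as z` is __enter__'s return value
    m_zipfile.return_value.__enter__.return_value.extractall.assert_called_with("dir")
print("assertions passed")
```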
|
<python><pytest>
|
2023-01-26 18:42:06
| 1
| 21,781
|
TravisVOX
|
75,250,360
| 3,510,043
|
pandas fillna removes timezone when used with value=dict
|
<p>I bumped into an unexpected-to-me behaviour in pandas. Here is the code, running python 3.10.8.</p>
<pre class="lang-py prettyprint-override"><code>In [1]: from datetime import datetime, timezone
In [2]: import pandas
In [3]: pandas.__version__
Out[3]: '1.4.4'
In [4]: df = pandas.DataFrame(data={"end_date": [datetime(2022, 1, 20, tzinfo=timezone.utc)]})
In [5]: df.end_date.dt.tz
Out[5]: datetime.timezone.utc
In [6]: df.fillna(value={"end_date": datetime.now(tz=timezone.utc)}).end_date.dt.tz
In [7]: df.assign(end_date=lambda df: df["end_date"].fillna(datetime.now(tz=timezone.utc))).end_date.dt.tz
Out[7]: datetime.timezone.utc
</code></pre>
<p>As you can see, when using <code>.fillna(value={...})</code>, the timezone information is lost even if you do not have any value to fill. But it is kept when no dictionary is used.</p>
<p>Is it expected?</p>
<p>Thanks in advance.</p>
|
<python><pandas><datetime><timezone>
|
2023-01-26 18:41:28
| 0
| 820
|
Flavien Lambert
|
75,250,358
| 1,223,946
|
Howto process email from google
|
<p>I'm not sure if this is Google-specific or email in general, but I'm seeing some encoding that I'm not sure how to handle. Here is a snip from the Washington Post mailer in my Google account.</p>
<p>the subject</p>
<pre><code> b'=?UTF-8?Q?The_Morning:_Peru=E2=80=99s_deadly_protests?='
</code></pre>
<p>actually reads</p>
<p>The Morning: Peru’s deadly protests</p>
<p>part of the body.</p>
<pre><code><!DOCTYPE html><html xmlns=3D"http://www.w3.org/1999/xhtml" xmlns:v=3D"urn:=
schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-microsoft-com:office:offi=
ce"><head> <title>The Morning: Peru=E2=80=99s deadly protests</title> <!--[=
if !mso]><!-- --> <meta http-equiv=3D"X-UA-Compatible" content=3D"IE=3Dedge=
"> <!--<![endif]--> <meta http-equiv=3D"Content-Type" content=3D"text/html;=
charset=3DUTF-8"> <meta name=3D"viewport" content=3D"width=3Ddevice-width,=
initial-scale=3D1"> <style type=3D"text/css">#outlook a{padding:0}body{marg=
in:0;padding:0;-webkit-text-size-adjust:100%;-ms-text-size-adjust:100%}tabl=
e,td{border-collapse:collapse;mso-table-lspace:0;mso-table-rspace:0}img{bor=
der:0;height:auto;line-height:100%;outline:0;text-decoration:none;-ms-inter=
polation-mode:bicubic}p{display:block;margin:13px 0}p,ul{margin-top:0}@medi=
a (max-width:600px){body{padding:0 15px!important}}</style> <!--[if mso]>=
=0A <xml>=0A <o:OfficeDocumentSettings>=0A
</code></pre>
<p>Everything is line wrapped with a <code>'=\r\n'</code>, which is no problem to remove with</p>
<pre><code>bodytext = ''.join( bodytext.split('=\r\n') )
</code></pre>
<p>But there is other stuff in there like <code>=3D</code> and <code>=0D=0A</code>, and yes, they are ASCII encodings, but what decoder library do we use to decode this?</p>
<p>For reference or to try yourself: here is the python code.</p>
<pre><code>import email, datetime
from imapclient import IMAPClient
with IMAPClient('imap.gmail.com', port=None, use_uid=True, ssl=True, stream=False, ssl_context=None, timeout=None) as client:
client.login("#####@gmail.com", "######")
client.select_folder('INBOX')
SEARCH_SINCE = (datetime.datetime.now() - datetime.timedelta( 4 )).date()
search_for = "Morning"
seq_nums = client.search([u'SUBJECT', f'{search_for}', u'SINCE', SEARCH_SINCE ])
print('seq nums', seq_nums)
for seqid, objs in client.fetch( seq_nums, [b'ENVELOPE', b'BODY[TEXT]']).items():
msg_body = email.message_from_bytes( objs[b'BODY[TEXT]'] )
envelope = objs[b'ENVELOPE']
print('subject', envelope.subject)
print('body', msg_body.get_payload()[:1000])
</code></pre>
<p>I also use red box and imap_tools to do this stuff but they are in order of magnitude slower than this IMAPClient method.</p>
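In case it helps frame the question: as far as I can tell, the <code>=3D</code>/<code>=0D=0A</code> sequences are MIME quoted-printable (the part's <code>Content-Transfer-Encoding</code>), and the subject is an RFC 2047 encoded word. A sketch decoding both with the stdlib, using values taken from the snippets above (note <code>quopri</code> also removes the <code>=\r\n</code> soft line breaks, so the manual split would become unnecessary):

```python
import quopri
from email.header import decode_header, make_header

# Subject lines use RFC 2047 "encoded word" syntax
raw_subject = "=?UTF-8?Q?The_Morning:_Peru=E2=80=99s_deadly_protests?="
subject = str(make_header(decode_header(raw_subject)))
print(subject)  # The Morning: Peru’s deadly protests

# Bodies with =3D / =0D=0A and trailing "=" soft breaks are quoted-printable
raw_body = b"<meta charset=3DUTF-8> Peru=E2=80=99s protests=0D=0A"
body = quopri.decodestring(raw_body).decode("utf-8")
print(body)
```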
|
<python><email-parsing><imapclient>
|
2023-01-26 18:40:42
| 1
| 2,176
|
Peter Moore
|
75,250,345
| 9,105,621
|
best way to pickup different pandas column value if current column value is blank
|
<p>I'm trying to write logic that will pick up a different column's value if the current value is blank. Here is what I have so far:</p>
<pre><code>df['column1'] = df.apply(lambda x: x["column2"] if x["column1"].astype(str)=='' else x["column1"], axis=1)
</code></pre>
<p>Is there a more efficient way to test for blank/null?</p>
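For illustration, the vectorized alternative I am comparing against (a sketch with made-up data; <code>mask</code> keeps the original value except where the condition holds, with no row-wise <code>apply</code>):

```python
import pandas as pd

df = pd.DataFrame({"column1": ["a", "", "c"], "column2": ["x", "y", "z"]})

# replace blank (or whitespace-only) column1 values with column2's value
df["column1"] = df["column1"].mask(df["column1"].str.strip().eq(""), df["column2"])
print(df["column1"].tolist())  # ['a', 'y', 'c']
```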
|
<python><pandas>
|
2023-01-26 18:39:36
| 1
| 556
|
Mike Mann
|
75,250,300
| 75,103
|
setup.py command to show install_requires?
|
<p>Is there any way to get the list of packages specified in the <code>install_requires</code> parameter to the setup command (in setup.py)?</p>
<p>I'm looking for something similar to <code>pip show pkgname | grep -i requires</code>, but for local packages (and that reports version specifiers and filters).</p>
<p>The real task I'm solving is to check if versions specified in setup.py and requirements.txt have diverged, so if there is a tool that can do this directly...?</p>
|
<python>
|
2023-01-26 18:34:14
| 1
| 27,572
|
thebjorn
|
75,250,226
| 2,532,408
|
disable pytest plugin terminalreporter via conftest?
|
<p>Is there a programmatic equivalent (i.e. conftest.py) to setting <code>-p no:terminal</code> in pytest?</p>
<p>I know it's possible to add it to the <code>addopts</code> in pytest.ini. But ultimately I would like to be able to have one plugin disable or prevent another plugin from loading.</p>
<p>In my case, my plugin would replace terminalreporter entirely.</p>
|
<python><pytest>
|
2023-01-26 18:27:08
| 1
| 4,628
|
Marcel Wilson
|
75,250,117
| 147,562
|
What is the second type argument to ForeignKey in the django stubs typehints?
|
<p>What should I put in the place of the <strong><code>_GT</code></strong> type variable in this snippet?</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import ForeignKey, PROTECT
order = ForeignKey["ExtensionOrder", _GT](
"subscriptions.ExtensionOrder",
null=False,
on_delete=PROTECT
)
</code></pre>
<p>The stub is here: (<a href="https://github.com/typeddjango/django-stubs/blob/1.14.0/django-stubs/db/models/fields/related.pyi#L107" rel="nofollow noreferrer">source</a>)</p>
<pre class="lang-py prettyprint-override"><code>class ForeignKey(ForeignObject[_ST, _GT]):
_pyi_private_set_type: Union[Any, Combinable]
_pyi_private_get_type: Any
...
# class access
@overload # type: ignore
def __get__(self, instance: None, owner) -> ForwardManyToOneDescriptor: ...
# Model instance access
@overload
    def __get__(self, instance: Model, owner) -> _GT: ...
# non-Model instances
@overload
def __get__(self: _F, instance, owner) -> _F: ...
</code></pre>
<p>Pylance is griping about it, but I'm not sure what <strong><code>_GT</code></strong> should be</p>
|
<python><django><python-typing><pylance>
|
2023-01-26 18:15:34
| 1
| 18,247
|
boatcoder
|
75,249,741
| 5,212,614
|
Sankey Plot not Showing in Jupyter Notebook
|
<p>I'm pretty sure my code is fine, but I can't generate a plot of a simple Sankey chart. Maybe something is off with the code, not sure. Here's what I have now. Can anyone see a problem with this?</p>
<pre><code>import pandas as pd
import holoviews as hv
import plotly.graph_objects as go
import plotly.express as pex
hv.extension('bokeh')
data = [['TMD','TMD Create','Sub-Section 1',17],['TMD','TMD Create','Sub-Section 1',17],['C4C','Customer Tab','Sub-Section 1',10],['C4C','Customer Tab','Sub-Section 1',10],['C4C','Customer Tab','Sub-Section 1',17]]
df = pd.DataFrame(data, columns=['Source','Target','Attribute','Value'])
df
source = df["Source"].values.tolist()
target = df["Target"].values.tolist()
value = df["Value"].values.tolist()
labels = df["Attribute"].values.tolist()
import plotly.graph_objs as go
#create links
link = dict(source=source, target=target, value=value,
color=["turquoise","tomato"] * len(source))
#create nodes
node = dict(label=labels, pad=15, thickness=5)
#create a sankey object
chart = go.Sankey(link=link, node=node, arrangement="snap")
#build a figure
fig = go.Figure(chart)
fig.show()
</code></pre>
<p>I am trying to follow the basic example shown in the link below.</p>
<p><a href="https://python.plainenglish.io/create-a-sankey-diagram-in-python-e09e23cb1a75" rel="nofollow noreferrer">https://python.plainenglish.io/create-a-sankey-diagram-in-python-e09e23cb1a75</a></p>
|
<python><plotly><sankey-diagram><holoviews>
|
2023-01-26 17:35:38
| 1
| 20,492
|
ASH
|
75,249,718
| 8,372,336
|
python: hex to bin with binascii.a2b_hex results in binascii.Error: Odd-length string
|
<p>I am trying to convert a hex string to binary using <code>binascii.a2b_hex</code>, but I get <code>binascii.Error: Odd-length string</code> only with some strings, not every time.</p>
<p>for example this is the string throwing the error: <code>177B16283F6C72F52DB9F00DF2629EB6F925A67AEF85A93F5588C62DCDB0050</code></p>
<p>if i try to do <code>binascii.a2b_hex('177B16283F6C72F52DB9F00DF2629EB6F925A67AEF85A93F5588C62DCDB0050')</code> i get:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
binascii.Error: Odd-length string
</code></pre>
<p>Just in case: I get this string by converting a string of bits to hex using:</p>
<pre><code>"{0:0>4X}".format(int('0000000101110111101100010110001010000011111101101100011100101111010100101101101110011111000000001101111100100110001010011110101101101111100100100101101001100111101011101111100001011010100100111111010101011000100011000110001011011100110110110000000001010000',2))
</code></pre>
<p>I am not having this problem with other strings of bits different from this one.</p>
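For reference, my current guess at the cause: the bit string is 256 bits, but the format width in <code>{0:0>4X}</code> only pads to 4 characters, so a value whose top four bits are zero formats to 63 hex digits, an odd length. Padding back to an even length (or formatting with width 64, <code>{0:0>64X}</code>) makes <code>a2b_hex</code> accept it; a sketch with the failing string:

```python
import binascii

hex_str = "177B16283F6C72F52DB9F00DF2629EB6F925A67AEF85A93F5588C62DCDB0050"
if len(hex_str) % 2:          # odd length means a leading zero nibble was dropped
    hex_str = "0" + hex_str
raw = binascii.a2b_hex(hex_str)
print(len(raw))  # 32
```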
|
<python><binary><hex><binascii>
|
2023-01-26 17:33:23
| 1
| 1,730
|
91DarioDev
|
75,249,518
| 1,987,599
|
Right way to publish authors on PyPi from setuptools
|
<p>I currently use <code>setuptools</code> to build my Python's package and I have declared the two authors that way in my <code>pyproject.toml</code> file:</p>
<pre><code>authors = [
{name = "X Y", email = "x.y@tt.net"},
{name = "Z H", email = "z.h@tt.net"},
]
</code></pre>
<p>Everything works and I can publish it on PyPI, but only the first author is published. How can I display both authors?</p>
<p>I have tried to use the following syntax</p>
<pre><code>authors = ["X Y <x.y@tt.net>, Z H <z.h@tt.net>"]
</code></pre>
<p>But I have the following error</p>
<pre><code>ValueError: invalid pyproject.toml config: `project.authors[{data__authors_x}]`.
configuration error: `project.authors[{data__authors_x}]` must be object
</code></pre>
<p>Notice that I specify:</p>
<pre><code>[build-system]
requires = ["setuptools","numpy","scipy","wheel"]
build-backend = "setuptools.build_meta"
</code></pre>
|
<python><setuptools><pypi><python-packaging><pyproject.toml>
|
2023-01-26 17:12:02
| 1
| 609
|
Guuk
|
75,249,408
| 19,980,284
|
Convert object-type hours:minutes:seconds column to datetime type in Pandas
|
<p>I have a column called <code>Time</code> in a dataframe that looks like this:</p>
<pre><code>599359 12:32:25
326816 17:55:22
326815 17:55:22
358789 12:48:25
361553 12:06:45
...
814512 21:22:07
268266 18:57:31
659699 14:28:20
659698 14:28:20
268179 17:48:53
Name: Time, Length: 546967, dtype: object
</code></pre>
<p>And right now it is an <code>object</code> dtype. I've tried the following to convert it to a datetime:</p>
<p><code>df['Time'] = pd.to_datetime(df['Time'], format='%H:%M:%S', errors='coerce', utc = True).dt.time</code></p>
<p>And I understand that the <code>.dt.time</code> method is needed to prevent the year and month from being added, but I believe this is causing the dtype to revert to object.</p>
<p>Any workarounds? I know I could do</p>
<p><code>df['Time'] = df['Time'].apply(pd.to_datetime, format='%H:%M:%S', errors='coerce', utc = True)</code></p>
<p>but I have over 500,000 rows and this is taking forever.</p>
|
<python><pandas><datetime><object><dtype>
|
2023-01-26 17:03:45
| 1
| 671
|
hulio_entredas
|
75,249,095
| 4,391,249
|
Return a 3rd party pybind type
|
<p>My C++ library depends on a 3rd party C++ library with its own bindings.</p>
<p>I bind a <code>struct</code> that uses <code>def_readwrite</code> to expose its members. One of its members is a type from the 3rd party library.</p>
<p>Basically I have:</p>
<pre class="lang-cpp prettyprint-override"><code>struct MyStruct {
ClassFromThirdParty member{};
}
py::class_<MyStruct>(m, "MyStruct")
.def_readwrite("member", &MyStruct::member)
</code></pre>
<p>In Python I try:</p>
<pre class="lang-py prettyprint-override"><code>from my_bindings import MyStruct
obj = MyStruct()
print(obj.member)
</code></pre>
<p>but this raises <code>TypeError: Unable to convert function return value to a Python type!</code>.</p>
<p>It's worth also noting that if I do:</p>
<pre class="lang-py prettyprint-override"><code>import ThirdPartyLibrary
from my_bindings import MyStruct
obj = MyStruct()
print(obj.member)
</code></pre>
<p>no error is raised. But I don't like this solution as the Python user would have to import <code>ThirdPartyLibrary</code> even if they don't explicitly need it.</p>
<p>How do I write my binding such that the first Python snippet works?</p>
<p>PS: The third party binding in question can be found <a href="https://github.com/RobotLocomotion/drake/blob/6274f77de4f9df4c96a4314cfe5097967915dcc9/bindings/pydrake/math_py.cc#L56" rel="nofollow noreferrer">here</a>. In the absence of a general answer, I'd also be happy to hear answers related to that library in particular.</p>
|
<python><c++><pybind11><drake>
|
2023-01-26 16:38:40
| 1
| 3,347
|
Alexander Soare
|
75,249,016
| 12,304,000
|
A DataFrame object does not have an attribute select
|
<p>In palantir foundry, I am trying to read all xml files from a dataset. Then, in a for loop, I parse the xml files.</p>
<p>Until the second-to-last line, the code runs fine without errors.</p>
<pre><code>from transforms.api import transform, Input, Output
from transforms.verbs.dataframes import sanitize_schema_for_parquet
from bs4 import BeautifulSoup
import pandas as pd
import lxml
@transform(
output=Output("/Spring/xx/datasets/mydataset2"),
source_df=Input("ri.foundry.main.dataset.123"),
)
def read_xml(ctx, source_df, output):
df = pd.DataFrame()
filesystem = source_df.filesystem()
hadoop_path = filesystem.hadoop_path
files = [f"{hadoop_path}/{f.path}" for f in filesystem.ls()]
for i in files:
with open(i, 'r') as f:
file = f.read()
soup = BeautifulSoup(file,'xml')
data = []
for e in soup.select('offer'):
data.append({
'meldezeitraum': e.find_previous('data').get('meldezeitraum'),
'id':e.get('id'),
'parent_id':e.get('parent_id'),
})
df = df.append(data)
output.write_dataframe(sanitize_schema_for_parquet(df))
</code></pre>
<p>However, as soon as I add the last line:</p>
<pre><code>output.write_dataframe(sanitize_schema_for_parquet(df))
</code></pre>
<p>I get this error:</p>
<pre><code>Missing transform attribute
A DataFrame object does not have an attribute select. Please check the spelling and/or the datatype of the object.
/transforms-python/src/myproject/datasets/mydataset.py
output.write_dataframe(sanitize_schema_for_parquet(df))
</code></pre>
<p>What am I doing wrong?</p>
|
<python><pandas><pyspark><palantir-foundry><foundry-code-repositories>
|
2023-01-26 16:32:12
| 1
| 3,522
|
x89
|
75,248,993
| 825,924
|
With Selenium, I want to find elements by text like Ctrl-F “Find in Page” in Chrome does
|
<p>Working with Selenium I need to find an element. Contrary to what is recommended, I want to find that element based on what the user sees. I know that this is difficult to solve
in the general case, but at least I want to be able to go for elements based on what text/label they carry.</p>
<p>In my case I have an <code><input type=submit value=text_I_see></code>, but it might also be a <code><button></code>, a normal link or even a not so normal link (e.g. JavaScript onclick-thingy) etc.</p>
<p><strong>The browser’s <kbd>Ctrl</kbd>-<kbd>F</kbd> “Find in page...” search box finds what I want. It highlights it perfectly. And I want to click exactly that!</strong> How do I do this?</p>
<p>I <strong>do not</strong> want to find it using:</p>
<ul>
<li>id</li>
<li>class</li>
<li>element/tag name</li>
<li>element specifics</li>
<li>information only seen in the HTML source</li>
</ul>
<p>Is this standard use case not supported at all?</p>
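For context, one way such a "find by visible label" lookup is often approximated is with an XPath that matches on the visible text. Below is a hedged sketch — the helper name and the exact set of element kinds it covers are my own choices, not part of Selenium's API — that matches plain text nodes, `<input>` values and `<button>` contents (it assumes the label contains no double quotes):

```python
def xpath_for_visible_text(text: str) -> str:
    """Build an XPath matching elements labelled with `text`:
    plain text nodes, <input>'s value attribute, and <button> contents."""
    t = f'"{text}"'
    return (
        f"//*[normalize-space(text()) = {t}]"
        f" | //input[@value = {t}]"
        f" | //button[normalize-space() = {t}]"
    )

# With Selenium this could then be used as (untested sketch):
#   driver.find_element(By.XPATH, xpath_for_visible_text("text_I_see"))
print(xpath_for_visible_text("Submit"))
```

This is only an approximation of what Ctrl-F does: it will not find text split across nested inline elements, for which `normalize-space(.)` on the parent or a JavaScript-side walk would be needed.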
|
<python><selenium><selenium-webdriver><selenium-chromedriver><selenium-webdriver-python>
|
2023-01-26 16:30:25
| 0
| 35,122
|
Robert Siemer
|
75,248,819
| 17,945,841
|
Get the rows that are not shared between two data frames
|
<p>I have two data frames with the exact same columns, but one of them has 1000 rows (<code>df1</code>) and the other has 500 (<code>df2</code>). The rows of <code>df2</code> are also found in the data frame <code>df1</code>, but I want the rows that are not.</p>
<p>For example, lets say this is <code>df1</code>:</p>
<pre><code> Gender Age
1 F 43
3 F 56
33 M 76
476 F 30
810 M 29
</code></pre>
<p>and <code>df2</code>:</p>
<pre><code> Gender Age
3 F 56
476 F 30
</code></pre>
<p>I want a new data frame, <code>df3</code>, that have the unshared rows:</p>
<pre><code> Gender Age
1 F 43
33 M 76
810 M 29
</code></pre>
<p>How can I do that?</p>
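This is an anti-join. A sketch of one common way to do it, a left merge with the `indicator` flag (the example data is reconstructed from the question; it assumes the rows of `df2` are unique, since duplicate matches would expand the merge result):

```python
import pandas as pd

df1 = pd.DataFrame({"Gender": ["F", "F", "M", "F", "M"],
                    "Age": [43, 56, 76, 30, 29]},
                   index=[1, 3, 33, 476, 810])
df2 = df1.loc[[3, 476]]  # stand-in for the 500 shared rows

# Left merge with indicator: rows present only in df1 are tagged "left_only"
merged = df1.merge(df2, how="left", indicator=True)
df3 = df1[merged["_merge"].eq("left_only").to_numpy()]
print(df3)
```

If the shared rows also share index labels, `df1.drop(df2.index)` is an even shorter alternative.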
|
<python><pandas>
|
2023-01-26 16:16:34
| 3
| 1,352
|
Programming Noob
|
75,248,774
| 3,922,727
|
Python reshaping for the transpose is returning an error of different dimensions when dot product is used
|
<p>I need to calculate the weights of linear regression model using the expression below:</p>
<p><a href="https://i.sstatic.net/nDoXD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nDoXD.png" alt="enter image description here" /></a></p>
<p>We should use reshape in order to get the transpose of X having the shape of (100, 3).</p>
<p>I used:</p>
<pre><code>xT = x.reshape(-1, 1)
</code></pre>
<p>then, I wanted to get the dot of X transpose with X:</p>
<pre><code>xDot = np.dot(Xtrain, xT)
</code></pre>
<p>And I got the following error:</p>
<blockquote>
<p>shapes (100,3) and (300,1) not aligned: 3 (dim 1) != 300 (dim 0)</p>
</blockquote>
<p>How should the reshape be done in that case?</p>
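For a (100, 3) matrix, the transpose is `X.T` with shape (3, 100); `reshape(-1, 1)` instead flattens the data into (300, 1) and scrambles its layout, which is exactly the shape mismatch in the error. A sketch with synthetic data (`Xtrain`, `ytrain` and `true_w` are placeholders for the real training data):

```python
import numpy as np

rng = np.random.default_rng(0)
Xtrain = rng.normal(size=(100, 3))   # design matrix, shape (100, 3)
true_w = np.array([1.5, -2.0, 0.5])
ytrain = Xtrain @ true_w             # noiseless targets for the demo

# Normal equation: w = (X^T X)^{-1} X^T y ; X.T is the actual transpose
xT = Xtrain.T                        # shape (3, 100), not reshape(-1, 1)
w = np.linalg.solve(xT @ Xtrain, xT @ ytrain)
print(w)
```

`np.linalg.solve` is used instead of explicitly inverting, which is the usual numerically safer choice.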
|
<python><numpy><reshape>
|
2023-01-26 16:12:57
| 0
| 5,012
|
alim1990
|
75,248,668
| 2,463,948
|
Why subprocess.run() freezes on certain application?
|
<p>Why does subprocess.run() freeze on this application?</p>
<pre><code>import subprocess
subprocess.run('eumdac.exe')
</code></pre>
<p>The app is from official source: <a href="https://gitlab.eumetsat.int/eumetlab/data-services/eumdac/-/releases/1.2.0" rel="nofollow noreferrer">https://gitlab.eumetsat.int/eumetlab/data-services/eumdac/-/releases/1.2.0</a></p>
<p><em>Windows Binary</em>: <a href="https://gitlab.eumetsat.int/eumetlab/data-services/eumdac/uploads/ddc0cac2c969efa51f000f4a5eccca59/eumdac-1.2.0-win.zip" rel="nofollow noreferrer">https://gitlab.eumetsat.int/eumetlab/data-services/eumdac/uploads/ddc0cac2c969efa51f000f4a5eccca59/eumdac-1.2.0-win.zip</a></p>
<p>This is what I am getting by running it in cmd.exe:</p>
<pre><code>(project_directory)>eumdac
usage: eumdac [-h] [--version] [--set-credentials ConsumerKey ConsumerSecret] [--debug]
{describe,search,download,subscribe,tailor} ...
EUMETSAT Data Access Client
positional arguments:
{describe,search,download,subscribe,tailor}
describe describe a collection or product
search search for products at the collection level
download download product(s) from a collection
subscribe subscribe a server for a collection
tailor tailoring product(s) from collection
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
--set-credentials ConsumerKey ConsumerSecret
permanently set consumer key and secret and exit, see https://api.eumetsat.int/api-key
--debug show backtrace for errors
</code></pre>
<p>PS: "cmd /c eumdac.exe" doesn't work either.</p>
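A common cause of such a hang is the child process waiting for console input. A hedged sketch that guards against this by closing stdin and bounding the wait with a timeout — `sys.executable` is used here as a runnable stand-in for `eumdac.exe`:

```python
import subprocess
import sys

# Stand-in command; in the real case replace the list with ["eumdac.exe"].
# DEVNULL prevents hangs when the child reads stdin; timeout bounds the wait.
result = subprocess.run(
    [sys.executable, "-c", "print('usage: eumdac ...')"],
    stdin=subprocess.DEVNULL,
    capture_output=True,
    text=True,
    timeout=30,
)
print(result.returncode, result.stdout.strip())
```

If the real program still hangs, capturing stderr this way at least shows whether it is blocked on a prompt.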
|
<python><python-3.x><windows>
|
2023-01-26 16:02:59
| 1
| 12,087
|
Flash Thunder
|
75,248,577
| 11,986,167
|
find pattern in a string without using regex
|
<p>I'm trying to find a pattern in a string. Example:</p>
<p>trail = '<code>AABACCCACCACCACCACCACC</code>'; one can note the "<code>ACC</code>" repetition after a prefix of AAB, so the result should be AAB(ACC).</p>
<p>Without using regex (<code>import re</code>), how can I do this? What I did so far:</p>
<pre><code> def get_pattern(trail):
for j in range(0,len(trail)):
k = j+1
while k<len(trail) and trail[j]!=trail[k]:
k+=1
if k==len(trail)-1:
continue
window = ''
stop = trail[j]
m = j
while m<len(trail) and k<len(trail) and trail[m]==trail[k]:
window+=trail[m]
m+=1
k+=1
if trail[m]==stop and len(window)>1:
break
if len(window)>1:
prefix=''
if j>0:
prefix = trail[0:j]
return prefix+'('+window+')'
return False
</code></pre>
<p>This will (almost) do the trick, but in a use case like this:
"<code>AAAAAAAAAAAAAAAAAABDBDBDBDBDBDBDBDBDBDBDBDBDBDBDBD</code>"
the result is <code>AA</code> when it should be: <code>AAAAAAAAAAAAAAAAAA(BD)</code></p>
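One way to avoid that failure mode is to restructure the search: for every possible prefix length (shortest first), check whether the remainder is an exact repetition of some unit, at least twice. A minimal sketch (the function name is mine, and this brute force is O(n³) in the worst case):

```python
def find_repetition(trail):
    """Return 'prefix(unit)' if trail is a prefix followed by a unit
    repeated at least twice; otherwise return False."""
    n = len(trail)
    for start in range(n):                      # try the shortest prefix first
        rest = trail[start:]
        # unit at most half of the remainder, so it repeats >= 2 times
        for size in range(1, len(rest) // 2 + 1):
            reps, leftover = divmod(len(rest), size)
            if leftover == 0 and rest[:size] * reps == rest:
                return f"{trail[:start]}({rest[:size]})"
    return False

print(find_repetition("AAB" + "ACC" * 5))      # AAB(ACC)
print(find_repetition("A" * 18 + "BD" * 25))   # AAAAAAAAAAAAAAAAAA(BD)
```

Scanning prefixes from shortest to longest is what makes the long-`A`-run case come out as `AAAAAAAAAAAAAAAAAA(BD)` rather than stopping early.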
|
<python><algorithm><pattern-matching>
|
2023-01-26 15:55:51
| 1
| 741
|
ProcolHarum
|
75,248,508
| 3,003,432
|
Celery, RabbitMQ removes worker from consumers list while it is performing tasks
|
<p>I have started my celery worker, which uses RabbitMQ as broker, like this:</p>
<pre><code>celery -A my_app worker -l info -P gevent -c 100 --prefetch-multiplier=1 -Q my_app
</code></pre>
<p>Then I have a task which looks like this:</p>
<pre><code>@shared_task(queue='my_app', default_retry_delay=10, max_retries=1, time_limit=8 * 60)
def example_task():
# getting queryset with some filtering
my_models = MyModel.objects.filter(...)
for my_model in my_models.iterator():
my_model.execute_something()
</code></pre>
<p>Sometimes this task can be finished in less than a minute and sometimes, during high load, it requires more than 5 minutes to finish.</p>
<p>The main problem is that RabbitMQ constantly removes my worker from the consumers list. It looks really random. Because of that I need to restart the worker again.</p>
<p>Workers also start throwing these errors:</p>
<pre><code>SSLEOFError: EOF occurred in violation of protocol (_ssl.c:2396)
</code></pre>
<p>Sometimes these errors:</p>
<pre><code>consumer: Cannot connect to amqps://my_app:**@example.com:5671/prod: SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:997)').
Couldn't ack 2057, reason:"RecoverableConnectionError(None, 'connection already closed', None, '')"
</code></pre>
<p>I have tried to add <code>--without-heartbeat</code> but it does nothing.</p>
<p>How can I solve these problems? Sometimes my tasks take more than 30 minutes to finish, and I can't constantly monitor whether workers were kicked out of RabbitMQ.</p>
|
<python><django><rabbitmq><celery>
|
2023-01-26 15:48:45
| 1
| 7,963
|
Mr.D
|
75,248,450
| 1,860,089
|
custom python IDEA debugger type renderer that requires an import
|
<p>I am trying to add a <a href="https://www.jetbrains.com/help/idea/customizing-views.html#renderers" rel="nofollow noreferrer">custom type renderer</a> in the IDEA Python debugger.
Specifically, I would like to render an <a href="https://docs.python.org/3/library/xml.etree.elementtree.html#xml.etree.ElementTree.Element" rel="nofollow noreferrer">xml Element</a> from the standard xml package as a string, e.g. <code><x a=1><y>2</y></x></code></p>
<p>The code to do so is <code>ElementTree.tostring(self)</code> where <code>self</code> represents the variable or watch in the debugger.</p>
<p>ElementTree needs to be imported so I have unsuccessful tried:</p>
<pre class="lang-py prettyprint-override"><code>xml.etree.ElementTree.tostring(self)
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>from xml.etree import ElementTree
ElementTree.tostring(self)
</code></pre>
<p>In both cases I got an error <code>Unable to evaluate: name 'xml' is not defined</code> in the debugger watch window. See screenshot:</p>
<p><a href="https://i.sstatic.net/CtB60.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CtB60.png" alt="custom renderer error" /></a></p>
<p>The docs don't mention such cases where importing of the rendering function is required.</p>
<p>Has anyone been able to do so?</p>
|
<python><debugging><intellij-idea><pycharm>
|
2023-01-26 15:43:32
| 1
| 2,330
|
Amnon
|
75,248,321
| 4,784,683
|
PyQt - what if you do not call .show()
|
<pre><code>import sys
from PySide6.QtWidgets import QApplication, QLabel
app = QApplication(sys.argv)
label = QLabel("<font color=red size=40>Hello World!</font>")
# label.show()
app.exec()
</code></pre>
<p>What actually happens if you do not call <code>label.show()</code>?<br />
I see that the window does not appear but . . .</p>
<p>Is there a way to direct a "close window event" to the non-shown window?</p>
|
<python><qt><pyqt>
|
2023-01-26 15:33:36
| 0
| 5,180
|
Bob
|
75,248,097
| 4,806,787
|
Delete rows with overlapping intervals efficiently
|
<p>Consider the following DataFrame</p>
<pre><code>>>> df
Start End Tiebreak
0 1 6 0.376600
1 5 7 0.050042
2 15 20 0.628266
3 10 15 0.984022
4 11 12 0.909033
5 4 8 0.531054
</code></pre>
<p>Whenever the <code>[Start, End]</code> intervals of two rows overlap I want the row with lower tiebreaking value to be removed. The result of the example would be</p>
<pre><code>>>> df
Start End Tiebreak
2 15 20 0.628266
3 10 15 0.984022
5 4 8 0.531054
</code></pre>
<p>I have a double-loop which does the job inefficiently and was wondering whether there exists an approach which exploits built-ins and works columnwise.</p>
<pre><code>import pandas as pd
import numpy as np
# initial data
df = pd.DataFrame({
'Start': [1, 5, 15, 10, 11, 4],
'End': [6, 7, 20, 15, 12, 8],
'Tiebreak': np.random.uniform(0, 1, 6)
})
# checking for overlaps
list_idx_drop = []
for i in range(len(df) - 1):
for j in range(i + 1, len(df)):
idx_1 = df.index[i]
idx_2 = df.index[j]
cond_1 = (df.loc[idx_1, 'Start'] < df.loc[idx_2, 'End'])
cond_2 = (df.loc[idx_2, 'Start'] < df.loc[idx_1, 'End'])
# if rows overlaps
if cond_1 & cond_2:
tie_1 = df.loc[idx_1, 'Tiebreak']
tie_2 = df.loc[idx_2, 'Tiebreak']
# delete row with lower tiebreaking value
if tie_1 < tie_2:
df.drop(idx_1, inplace=True)
else:
df.drop(idx_2, inplace=True)
</code></pre>
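One way to avoid the quadratic pairwise loop with in-place drops is a greedy pass: sort by `Tiebreak` descending and keep a row only if it overlaps none of the rows already kept. This is not literally the same elimination order as the pairwise loop, but it reproduces the expected rows 2, 3 and 5 on the example (the `Tiebreak` values are copied from the question so the run is deterministic):

```python
import pandas as pd

df = pd.DataFrame({
    "Start": [1, 5, 15, 10, 11, 4],
    "End": [6, 7, 20, 15, 12, 8],
    "Tiebreak": [0.376600, 0.050042, 0.628266, 0.984022, 0.909033, 0.531054],
})

kept = []  # (Start, End, index) of accepted rows, best tiebreak first
for idx, row in df.sort_values("Tiebreak", ascending=False).iterrows():
    clash = any(row["Start"] < e and s < row["End"] for s, e, _ in kept)
    if not clash:
        kept.append((row["Start"], row["End"], idx))

result = df.loc[sorted(i for _, _, i in kept)]
print(result)
```

The overlap check against the kept set is still O(n) per row; for large frames an interval tree or a sweep over sorted endpoints would cut that down further.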
|
<python><pandas><dataframe><for-loop><coding-efficiency>
|
2023-01-26 15:16:27
| 2
| 313
|
clueless
|
75,248,003
| 11,182,916
|
Derivatives of delta
|
<p>I want to calculate <code>E</code> with this equation, but I am not sure if I can obtain the results with the <code>numpy.diff</code> module, since it returns only 4 points.</p>
<p><a href="https://i.sstatic.net/HjCei.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HjCei.png" alt="enter image description here" /></a></p>
<pre class="lang-py prettyprint-override"><code>from numpy import diff
x = [395.33, 472.12, 560.45, 652.72, 732.55]
y = [0.17, 0.22, 0.28, 0.34, 0.41]
E = diff(y) / diff(x)
print(E)
</code></pre>
<p>Output:</p>
<pre class="lang-py prettyprint-override"><code>[0.00065113 0.00067927 0.00065027 0.00087686]
</code></pre>
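If the goal is a derivative estimate at every one of the 5 sample points (rather than 4 interval slopes), `np.gradient` may be what's needed: it uses central differences in the interior and one-sided differences at the two ends, so the output has the same length as the input:

```python
import numpy as np

x = np.array([395.33, 472.12, 560.45, 652.72, 732.55])
y = np.array([0.17, 0.22, 0.28, 0.34, 0.41])

# dy/dx at each of the 5 sample points, for non-uniform spacing in x
E = np.gradient(y, x)
print(E)
```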
|
<python><numpy>
|
2023-01-26 15:10:05
| 1
| 405
|
Binh Thien
|
75,247,606
| 4,876,561
|
Add hue to Seaborn Histogram annotation
|
<p>I have a snippet of code that produces 2 <code>seaborn.histogram</code> plots on the same axes, split by <code>hue</code>, and annotated:</p>
<p><a href="https://i.sstatic.net/4KyVD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4KyVD.png" alt="enter image description here" /></a></p>
<p>The two histograms are appropriately colored differently using the <code>hue</code> parameter, and the counts of data in each bin are also appropriately annotated. However, can I also color the <strong>annotations / counts of what is in each bin</strong>?</p>
<p>Current <a href="https://stackoverflow.com/help/minimal-reproducible-example">MRE</a>:</p>
<pre><code>np.random.seed(8)
t = pd.DataFrame(
{
'Value': np.random.uniform(low=100000, high=500000, size=(50,)),
'Type': ['B' if x < 6 else 'R' for x in np.random.uniform(low=1, high=10, size=(50,))]
}
)
ax = sns.histplot(data=t, x='Value', bins=5, hue='Type', palette="dark")
ax.set(title="R against B")
ax.xaxis.set_major_formatter(FormatStrFormatter('%.0f'))
for p in ax.patches:
ax.annotate(f'{p.get_height():.0f}\n',
(p.get_x() + p.get_width() / 2, p.get_height()), ha='center', va='center', color='crimson')
plt.show()
</code></pre>
|
<python><matplotlib><annotations><seaborn><histogram>
|
2023-01-26 14:39:52
| 1
| 7,351
|
artemis
|
75,247,505
| 5,896,319
|
How to make filtering with SerializerMethodField()?
|
<p>I'm creating a table that shows objects of a model, and I have a SerializerMethodField that shows a value from a different table with the same transaction ID.</p>
<p>The problem is I'm using the serializers for the filtering table, and chargeback is not working there. How can I make it filterable?</p>
<p>Simplifying the code, I have this serializer:</p>
<pre><code>class PSerializer(serializers.ModelSerializer):
...
chargeback = serializers.SerializerMethodField()
    def get_chargeback(self, obj):
ctransaction = CTransaction.objects.raw('SELECT * '
'FROM ctransaction '
'WHERE TRIM(mid)=TRIM(%s) '
'AND TRIM(unique_id)=TRIM(%s) '
'AND TRIM(num)=TRIM(%s) '
'AND rc_code IN (%s, %s, %s)',
[obj.mid, obj.unique_id, obj.num, '1', '1', '1'])
        if len(ctransaction) > 0:
return 'Yes'
return 'No'
</code></pre>
|
<python><django><django-rest-framework>
|
2023-01-26 14:30:19
| 1
| 680
|
edche
|
75,247,456
| 456,396
|
Django Api-Key with unit test
|
<p>I am trying to add unit tests to an existing project; the project uses Api-Keys to access and authenticate against the API endpoints.</p>
<p>if I do the following via postman or command line:</p>
<pre><code>curl --location --request GET 'http://127.0.0.1:8000/api/user_db' \
--header 'Authorization: Api-Key REDACTED' \
--header 'Content-Type: application/json' \
--data-raw '{
"username" : "test@testing.local"
}'
</code></pre>
<p>This will call the following view function and return the user details with the corresponding oid (json response) without error.</p>
<pre><code>from django.shortcuts import render
from rest_framework_api_key.permissions import HasAPIKey
from rest_framework.permissions import IsAuthenticated
from rest_framework.response import Response
from rest_framework.views import APIView
from user_api.classes.UserController import (
GetBusinessUser,
CreateBusinessUser,
UpdateBusinessUser,
DeleteBusinesssUser
)
from celery.utils.log import get_task_logger
import environ
logger = get_task_logger(__name__)
env = environ.Env()
class ProcessUserRequest(APIView):
permission_classes = [HasAPIKey |IsAuthenticated ]
def get(self, request):
logger.info("Get Business User Request Received")
result = GetBusinessUser(request)
        return Response(result["result"],
                        content_type='application/json charset=utf-8',
                        status=result["statuscode"])
</code></pre>
<p>This additionally calls the following shortened function:</p>
<pre><code>def GetBusinessUser(request) -> Dict[str, Union[str, int]]:
logger.info(f"Processing Get Username Request: {request.data}")
valid_serializer = ValidateGetBusinessUserFormSerializer(data=request.data)
valid_serializer.is_valid(raise_exception=True)
username = valid_serializer.validated_data['username']
return BusinessUser.objects.filter(username=username).first()
</code></pre>
<p>As I wish to make unit test cases to ensure I can validate prior to deployment, I have implemented the following in the modules tests.py file:</p>
<pre><code>from rest_framework.test import APITestCase, APIClient
from rest_framework_api_key.models import APIKey
from user_api.classes.UserController import GetBusinessUser
from django.urls import reverse
# Class Method for GetBusinessUser (truncated)
# try except handling and other user checks removed for stack
class ProcessUserRequestTest(APITestCase):
def setUp(self):
self.client = APIClient()
# have also tried: self.headers = {'HTTP_AUTHORIZATION': f'Api-Key {self.api_key.key}'}
self.client.credentials(HTTP_AUTHORIZATION='Api-Key SomeApiKeyValue')
self.url = reverse('business_user')
self.valid_payload = {'username': 'test@testing.local'}
self.invalid_payload = {'param1': '', 'param2': 'value2'}
def test_get_business_user_request(self):
# also tried based on above:
# response = self.client.get(self.url, **self.headers, format='json')
response = self.client.get(self.url, data=self.valid_payload, format='json')
self.assertEqual(response.status_code, 200)
self.assertEqual(response.data, GetBusinessUser(response.data).data)
</code></pre>
<p>No matter what I seem to do, the following is always returned, so it appears from testing that adding authentication headers or using <code>client.credentials</code> does not work with <code>Authorization: Api-Key somekey</code> as a header?</p>
<pre><code>creating test database for alias 'default'...
System check identified no issues (0 silenced).
{'detail': ErrorDetail(string='Authentication credentials were not provided.', code='not_authenticated')}
F
======================================================================
FAIL: test_get_business_user_request (user_api.tests.ProcessUserRequestTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
File "../truncated/tests.py", line 19, in test_get_business_user_request
self.assertEqual(response.status_code, 200)
AssertionError: 403 != 200
----------------------------------------------------------------------
Ran 1 test in 0.018s
FAILED (failures=1)
Destroying test database for alias 'default'...
</code></pre>
<p>Has this been encountered before and is there a workable solution so I can create unit tests?</p>
|
<python><django><django-rest-framework>
|
2023-01-26 14:26:21
| 3
| 356
|
Psymon25
|
75,247,445
| 11,696,358
|
Error when making a dash DataTable filterable by columns values
|
<p>Only when I add the property <code>filter_action="native"</code> in a <code>dash.DataTable</code> in order to make it possible for the user to filter rows by column values I get an error that varies with the browser I run the webapp on:</p>
<ul>
<li>Edge: <code>Cannot read properties of undefined (reading 'placeholder_text') (This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.) TypeError: Cannot read properties of undefined (reading 'placeholder_text') at s.value (http://localhost:8050/_dash-component-suites/dash/dash_table/async-table.js:2:236716)</code>...</li>
<li>Chrome: same as Edge</li>
<li>Firefox: <code>r is undefined (This error originated from the built-in JavaScript code that runs Dash apps. Click to see the full stack trace or open your browser's console.) value@http://localhost:8050/_dash-component-suites/dash/dash_table/async-table.js:2:236702</code>...</li>
</ul>
<p>Note that without setting that single property the app works perfectly.</p>
<p>I terribly need the user to be able to filter rows by column values: what can I do to solve this issue?</p>
<h2>environment</h2>
<p>python 3.7, Flask 2.2.2, dash 2.8.0</p>
|
<python><reactjs><plotly-dash>
|
2023-01-26 14:25:36
| 1
| 478
|
user11696358
|
75,247,232
| 1,441,720
|
Spark Dataframe column name change does not reflect
|
<p>I am trying to rename some special characters from my spark dataframe. For some weird reason, it shows the updated column name when I print the schema, but any attempt to access the data results in an error complaining about the old column name. Here is what I am trying:</p>
<pre><code># Original Schema
upsertDf.columns
# Output: ['col 0', 'col (0)', 'col {0}', 'col =0', 'col, 0', 'col; 0']
for c in upsertDf.columns:
upsertDf = upsertDf.withColumnRenamed(c, c.replace(" ", "_").replace("(","__").replace(")","__").replace("{","___").replace("}","___").replace(",","____").replace(";","_____").replace("=","_"))
upsertDf.columns
# Works and returns expected result
# Output: ['col_0', 'col___0__', 'col____0___', 'col__0', 'col_____0', 'col______0']
# Print contents of dataframe
# Throws error for original attribute name "
upsertDf.show()
AnalysisException: 'Attribute name "col 0" contains invalid character(s) among " ,;{}()\\n\\t=". Please use alias to rename it.;'
</code></pre>
<p>I have tried other options to rename the column (using alias etc...) and they all return the same error. Its almost as if the show operation is using a cached version of the schema but I can't figure out how to force it to use the new names.</p>
<p>Has anyone run into this issue before?</p>
|
<python><apache-spark>
|
2023-01-26 14:08:24
| 1
| 333
|
jawsnnn
|
75,247,085
| 7,624,196
|
Case when method in mixin depends on a method of the class that mixin-ed to
|
<p>I'd like to ask about the design pattern for when a method in a mixin depends on a method of the class it is mixed into. The example below is in Python, but the question applies to other languages as well, I believe.</p>
<p>For example, say I have the following two mixins that I'd like to inject into some class. As in the code below, I'd like to inject <code>f</code>, but <code>f</code> requires that the class it is mixed into implements <code>g</code>, because <code>g</code> will be used in <code>f</code></p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
class MixinBase(ABC):
@abstractmethod
def f(self, a: int) -> int: ...
# the main function that we want to mix-in
@abstractmethod
def g(self, a: int) -> int: ...
# a method that we know that is used in f()
class Mixin1(MixinBase):
def f(self, a: int) -> int: return self.g(a) ** 2
class Mixin2(MixinBase):
def f(self, a: int) -> int: return self.g(a) + 2
</code></pre>
<p>Now, my question is, what is the better practice to inject such mixins?</p>
<h3>example</h3>
<p>I could come up with the following two ways to mix in. The first one is the implicit one:</p>
<pre class="lang-py prettyprint-override"><code>class ImplicitExample:
def g(self, a: int): return a
## and other methods ...
class ImplicitExampleWithMixin1(ImplicitExample, Mixin1): ...
class ImplicitExampleWithMixin2(ImplicitExample, Mixin2): ...
</code></pre>
<p>This mixing is implicit in the sense that the implementer of <code>ImplicitExample</code> must implicitly know the dependency of the mixins on <code>ImplicitExample</code>.</p>
<p>Another way of mixing is to explicitly inherit from <code>MixinBase</code> so that <code>g</code> is guaranteed to be implemented.</p>
<pre class="lang-py prettyprint-override"><code>class ExplicitExample(MixinBase):
def g(self, a: int): return a
# and other methods ...
class ExplicitExampleWithMixin1(ExplicitExample, Mixin1): ...
class ExplicitExampleWithMixin2(ExplicitExample, Mixin2): ...
</code></pre>
<p>I think the above two examples have pros and cons. The first, implicit one has a simpler dependency graph, but the implementer must be aware of the implicit dependency. On the other hand, for the second, explicit example, the mental load on the implementer is lower, but this causes a diamond dependency graph. If there are only a few mixins that is OK, but with many of them the mental load could become intense.</p>
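A third option worth noting, as a sketch: keep the mixins free of inheritance and annotate <code>self</code> with a <code>typing.Protocol</code>, so a static type checker enforces that any class the mixin is combined with provides <code>g</code>, without the diamond (the class names here are illustrative):

```python
from typing import Protocol

class HasG(Protocol):
    def g(self, a: int) -> int: ...

class Mixin1:
    # Annotating `self` tells a type checker (e.g. mypy) that this mixin
    # may only be combined with classes implementing g(); no runtime base
    # class is needed, so there is no diamond in the MRO.
    def f(self: HasG, a: int) -> int:
        return self.g(a) ** 2

class Example(Mixin1):
    def g(self, a: int) -> int:
        return a

print(Example().f(3))  # 9
```

The trade-off is that the `g` requirement is only checked statically, not at class-definition time the way an abstract base class would check it.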
|
<python><multiple-inheritance><mixins>
|
2023-01-26 13:56:21
| 1
| 1,623
|
HiroIshida
|
75,247,082
| 14,667,788
|
find pixel that is closest to the corners
|
<p>I have the following problem. I would like to find the coordinates of the non-empty pixels that are closest to the lower-left corner and the upper-right corner, respectively.</p>
<p>This function returns the upper-left and lower-right coordinates instead, and I cannot figure out why:</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
def find_corner_pixels(img):
# Get image dimensions
height, width = img.shape[:2]
left_down = (height-1, width-1)
    right_up = (0, 0)
for i in range(height):
for j in range(width):
# non-black
if not np.array_equal(img[i,j], [0,0,0]):
if (i + j) < (left_down[0] + left_down[1]):
left_down = (i, j)
if (i + j) > (right_up[0] + right_up[1]):
right_up = (i, j)
return left_down, right_up
</code></pre>
<p>Can you help me to find the mistake, please?</p>
<p>The output is obviously wrong, see picture with red dots that should denote the corner:</p>
<p><a href="https://i.sstatic.net/fITpl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fITpl.png" alt="enter image description here" /></a></p>
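For reference, minimizing <code>i + j</code> measures closeness to the upper-left corner (0, 0), and maximizing it measures closeness to the lower-right one, which matches the behavior observed. A vectorized sketch that instead minimizes the true squared distance to the intended corners (the test image below is synthetic):

```python
import numpy as np

def find_corner_pixels(img):
    h, w = img.shape[:2]
    # (row, col) of every non-black pixel
    pts = np.argwhere(img.any(axis=-1))
    # squared distances to lower-left (h-1, 0) and upper-right (0, w-1)
    left_down = pts[np.argmin(((pts - [h - 1, 0]) ** 2).sum(axis=1))]
    right_up = pts[np.argmin(((pts - [0, w - 1]) ** 2).sum(axis=1))]
    return tuple(int(v) for v in left_down), tuple(int(v) for v in right_up)

img = np.zeros((10, 10, 3), dtype=np.uint8)
img[8, 1] = img[1, 8] = img[5, 5] = 255
print(find_corner_pixels(img))   # ((8, 1), (1, 8))
```

Euclidean distance is assumed here; swap in another metric if "closest" means something else in your application.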
|
<python><numpy>
|
2023-01-26 13:56:06
| 1
| 1,265
|
vojtam
|
75,247,001
| 11,622,712
|
How to calculate the duration between rows with the same stage value and then get the cumulative duration of each stage?
|
<p>I have the following dataframe:</p>
<pre><code>dt_datetime stage proc_val
2011-11-13 11:00 0 20
2011-11-13 11:10 0 21
2011-11-13 11:30 1 25
2011-11-13 11:40 2 22
2011-11-13 11:55 2 28
2011-11-13 12:00 2 29
</code></pre>
<p>I need to add a new column called <code>stage_duration</code> and get the following result:</p>
<pre><code>dt_datetime stage proc_val stage_duration
2011-11-13 11:00 0 20 30
2011-11-13 11:10 0 21 30
2011-11-13 11:30 1 25 10
2011-11-13 11:40 2 22 20
2011-11-13 11:55 2 28 20
2011-11-13 12:00 2 29 20
</code></pre>
<p>How can I do it?</p>
<p>This is my current code snippet, but it does not produce the expected result. It should calculate the duration between rows with the same stage value and then get the cumulative duration of each stage, but it doesn't.</p>
<pre><code>df['stage_duration'] = df.groupby('stage')['dt_datetime'].diff().dt.total_seconds() / 60
df['stage_duration'] = df['stage_duration'].cumsum()
</code></pre>
<p><strong>Update:</strong></p>
<p>The solution should also work if the dataframe contains multiple groups of stages, e.g. see stage 0 that starts at <code>2011-11-13 11:00</code> and <code>2011-11-13 12:00</code>. It has different durations in both cases.</p>
<pre><code>dt_datetime stage proc_val stage_duration
2011-11-13 11:00 0 20 30
2011-11-13 11:10 0 21 30
2011-11-13 11:30 1 25 10
2011-11-13 11:40 2 22 20
2011-11-13 11:55 2 28 20
2011-11-13 12:00 2 29 20
2011-11-13 12:00 0 20 70
2011-11-13 13:10 0 21 70
</code></pre>
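A sketch for the updated requirement: give each consecutive run of equal stage values its own id via the shift/cumsum idiom, then take a run's duration as the gap from its first timestamp to the next run's first timestamp (reading the expected output as: the last run ends at its own last timestamp). On the first example this yields 30/10/20 minutes:

```python
import pandas as pd

df = pd.DataFrame({
    "dt_datetime": pd.to_datetime([
        "2011-11-13 11:00", "2011-11-13 11:10", "2011-11-13 11:30",
        "2011-11-13 11:40", "2011-11-13 11:55", "2011-11-13 12:00",
    ]),
    "stage": [0, 0, 1, 2, 2, 2],
})

# New run id whenever the stage value changes, so a later re-occurrence
# of the same stage number forms a separate run
run_id = df["stage"].ne(df["stage"].shift()).cumsum()

starts = df.groupby(run_id)["dt_datetime"].min()
ends = starts.shift(-1)
ends.iloc[-1] = df["dt_datetime"].iloc[-1]   # last run: its own last row
df["stage_duration"] = run_id.map((ends - starts).dt.total_seconds() / 60)
print(df)
```

Because the run id (not the stage value) is the group key, a stage 0 that reappears later gets its own duration, as in the updated example.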
|
<python><pandas>
|
2023-01-26 13:48:01
| 1
| 2,998
|
Fluxy
|
75,246,963
| 13,518,907
|
Streamlit: Display Text after text, not all at once
|
<p>I want to build a data annotation interface. I read in an excel file, where only a text column is relevant. So "df" could also be replaced by a list of texts. This is my code:</p>
<pre><code>import streamlit as st
import pandas as pd
import numpy as np
st.title('Text Annotation')
df = pd.read_excel('mini_annotation.xlsx')
for i in range(len(df)):
text = df.iloc[i]['Text']
st.write(f"Text {i} out of {len(df)}")
st.write("Please classify the following text:")
st.write("")
st.write(text)
text_list = []
label_list = []
label = st.selectbox("Classification:", ["HOF", "NOT","Not Sure"])
if st.button("Submit"):
text_list.append(text)
label_list.append(label)
df_annotated = pd.DataFrame(columns=['Text', 'Label'])
df_annotated["Text"] = text_list
df_annotated["Label"] = label_list
df_annotated.to_csv("annotated_file.csv", sep=";")
</code></pre>
<p>The interface looks like this:
<a href="https://i.sstatic.net/7GDRU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7GDRU.png" alt="enter image description here" /></a></p>
<p>However, I want the interface to display just one text, e.g. the first text of my dataset. After the user has submitted his choice via the "Submit" button, I want the first text to disappear and the second text to be displayed. This process should continue until the last text of the dataset is reached.
How do I do this?
(I am aware of the error message; for that I just have to add a key to the selectbox, but I am not sure if this is needed in the end.)</p>
|
<python><annotations><streamlit>
|
2023-01-26 13:44:46
| 1
| 565
|
Maxl Gemeinderat
|
75,246,929
| 13,158,157
|
pandas groupby: selecting unique latest entries
|
<p>In the following pandas Data Frame:</p>
<pre><code> Name v date_modified
0 A 0 2023-01-01
1 A 1 2023-01-02
2 A 2 2023-01-03
3 B 0 2023-01-30
4 B 1 2023-01-02
5 B 2 2023-01-03
6 C 0 2023-01-30
7 C 1 2023-01-03
8 C 2 2023-01-03
</code></pre>
<p>How can I get the two latest versions with the most recent unique date_modified per group ['Name', 'v']?</p>
<p>In this example there are duplicate date_modified values on <code>df.Name == C</code>. So far I tried to do something like this:
<code>df.sort_values('date_modified').groupby(['Name', 'v']).tail(2)</code>. This does not omit duplicates on date_modified and also, for some reason, returns all rows, not just the tail of two.</p>
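For what it's worth, grouping by <code>['Name', 'v']</code> makes every group a single row, which is why <code>tail(2)</code> returns everything. Reading the goal as "per Name, the two rows with the most recent distinct date_modified", a sketch: deduplicate dates within each Name first, then take the two latest per Name (ties on a date are broken here by keeping the higher <code>v</code> — an assumption):

```python
import pandas as pd

df = pd.DataFrame({
    "Name": list("AAABBBCCC"),
    "v": [0, 1, 2] * 3,
    "date_modified": pd.to_datetime([
        "2023-01-01", "2023-01-02", "2023-01-03",
        "2023-01-30", "2023-01-02", "2023-01-03",
        "2023-01-30", "2023-01-03", "2023-01-03",
    ]),
})

out = (df.sort_values(["Name", "date_modified", "v"])
         .drop_duplicates(subset=["Name", "date_modified"], keep="last")
         .groupby("Name")
         .tail(2))
print(out)
```

`groupby(...).tail(2)` keeps the sorted order within each group, so the two kept rows per Name are the latest distinct dates.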
|
<python><pandas><group-by><unique>
|
2023-01-26 13:41:13
| 1
| 525
|
euh
|
75,246,927
| 15,724,084
|
python scrapy is slowing down over time while it is parsing
|
<p>I have a scraper bot which works fine, but as time passes while it is scraping, the speed drops.
I added <code>concurrent requests</code>, <code>download_delay: 0</code> and <code>'AUTOTHROTTLE_ENABLED': False</code>, but the result is the same. It starts at a fast pace but gets slower.
I guess it is about caching, but I do not know whether I have to clear a cache, or why it behaves this way.
The code is below; I would like to hear comments:</p>
<pre><code>import scrapy
from scrapy.crawler import CrawlerProcess
import pandas as pd
import scrapy_xlsx

itemList = []

class plateScraper(scrapy.Spider):
    name = 'scrapePlate'
    allowed_domains = ['dvlaregistrations.dvla.gov.uk']
    FEED_EXPORTERS = {'xlsx': 'scrapy_xlsx.XlsxItemExporter'}
    custom_settings = {'FEED_EXPORTERS': FEED_EXPORTERS, 'FEED_FORMAT': 'xlsx',
                       'FEED_URI': 'output_r00.xlsx', 'LOG_LEVEL': 'INFO',
                       'DOWNLOAD_DELAY': 0, 'CONCURRENT_ITEMS': 300,
                       'CONCURRENT_REQUESTS': 30, 'AUTOTHROTTLE_ENABLED': False}

    def start_requests(self):
        df = pd.read_excel('data.xlsx')
        columnA_values = df['PLATE']
        for row in columnA_values:
            global plate_num_xlsx
            plate_num_xlsx = row
            base_url = f"https://dvlaregistrations.dvla.gov.uk/search/results.html?search={plate_num_xlsx}&action=index&pricefrom=0&priceto=&prefixmatches=&currentmatches=&limitprefix=&limitcurrent=&limitauction=&searched=true&openoption=&language=en&prefix2=Search&super=&super_pricefrom=&super_priceto="
            url = base_url
            yield scrapy.Request(url, callback=self.parse,
                                 cb_kwargs={'plate_num_xlsx': plate_num_xlsx})

    def parse(self, response, plate_num_xlsx=None):
        plate = response.xpath('//div[@class="resultsstrip"]/a/text()').extract_first()
        price = response.xpath('//div[@class="resultsstrip"]/p/text()').extract_first()
        try:
            a = plate.replace(" ", "").strip()
            if plate_num_xlsx == plate.replace(" ", "").strip():
                item = {"plate": plate_num_xlsx, "price": price.strip()}
                itemList.append(item)
                print(item)
                yield item
            else:
                item = {"plate": plate_num_xlsx, "price": "-"}
                itemList.append(item)
                print(item)
                yield item
        except:
            item = {"plate": plate_num_xlsx, "price": "-"}
            itemList.append(item)
            print(item)
            yield item

process = CrawlerProcess()
process.crawl(plateScraper)
process.start()

import winsound
winsound.Beep(555, 333)
</code></pre>
<p>EDIT: "log_stats"</p>
<pre><code>{'downloader/request_bytes': 1791806,
'downloader/request_count': 3459,
'downloader/request_method_count/GET': 3459,
'downloader/response_bytes': 38304184,
'downloader/response_count': 3459,
'downloader/response_status_count/200': 3459,
'dupefilter/filtered': 6,
'elapsed_time_seconds': 3056.810985,
'feedexport/success_count/FileFeedStorage': 1,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2023, 1, 27, 22, 31, 17, 17188),
'httpcompression/response_bytes': 238767410,
'httpcompression/response_count': 3459,
'item_scraped_count': 3459,
'log_count/INFO': 61,
'log_count/WARNING': 2,
'response_received_count': 3459,
'scheduler/dequeued': 3459,
'scheduler/dequeued/memory': 3459,
'scheduler/enqueued': 3459,
'scheduler/enqueued/memory': 3459,
'start_time': datetime.datetime(2023, 1, 27, 21, 40, 20, 206203)}
2023-01-28 02:31:17 [scrapy.core.engine] INFO: Spider closed (finished)
Process finished with exit code 0
</code></pre>
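<p>One possible culprit — an assumption, since the stats above don't show memory use — is the ever-growing module-level <code>itemList</code> together with the <code>print</code> call per item; neither is needed, because the feed exporter already persists every yielded item. A minimal sketch of the matching logic as a pure helper (the name <code>build_item</code> is invented for illustration):</p>

```python
def build_item(plate_num_xlsx, plate, price):
    """Build the dict to yield without touching any shared mutable state.

    Sketch only: assumes the slowdown comes from the growing module-level
    itemList plus per-item printing, which this helper avoids entirely.
    """
    if plate and plate_num_xlsx == plate.replace(" ", "").strip():
        return {"plate": plate_num_xlsx,
                "price": price.strip() if price else "-"}
    return {"plate": plate_num_xlsx, "price": "-"}

print(build_item("AB12CDE", " AB 12 CDE ", " £250 "))
```

<p>If that guess is right, dropping <code>itemList</code> and the prints from <code>parse</code> and simply yielding the dict should keep the per-item cost constant over the run.</p>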
|
<python><caching><scrapy><slowdown>
|
2023-01-26 13:40:55
| 4
| 741
|
xlmaster
|
75,246,894
| 8,778,855
|
pandas group by time intervall with dynamic intervall start
|
<p>I have a dataframe defined as:</p>
<pre><code>datas = [['A', 51, 'id1', '2020-05-27 05:50:43.346'], ['A', 51, 'id2',
'2020-05-27 05:51:08.347'], ['B', 45, 'id3', '2020-05-24 17:24:05.142'],['B', 45, 'id4', '2020-05-24 17:23:30.141'], ['C', 34,
'id5', '2020-05-23 17:31:10.341']]
df = pd.DataFrame(datas, columns = ['col1', 'col2', 'cold_id',
'dates'])
df['dates'] = pd.to_datetime(df.dates)
</code></pre>
<p>looking like this</p>
<pre><code> col1 col2 cold_id dates
0 A 51 id1 2020-05-27 05:50:43.346
1 A 51 id2 2020-05-27 05:51:08.347
2 B 45 id3 2020-05-24 17:24:05.142
3 B 45 id4 2020-05-24 17:23:30.141
4 C 34 id5 2020-05-23 17:31:10.341
</code></pre>
<p>I want to group the rows so that all rows whose dates are less than 2 minutes apart belong to the same group. I tried the following approach:</p>
<pre><code>df.groupby([pd.Grouper(key='dates', freq='2 min'), 'col1']).agg(','.join).reset_index().sort_values('col1').reset_index(drop=True)
</code></pre>
<p>Which yields:</p>
<pre><code>dates col1 cold_id
0 2020-05-27 05:50:00 A id1,id2
1 2020-05-24 17:22:00 B id4
2 2020-05-24 17:24:00 B id3
3 2020-05-23 17:30:00 C id5
</code></pre>
<p>This is not what I am looking for, since the rows with id3 and id4 should be in the same group: they are only about 35 seconds apart.</p>
<p>My preferred output looks like this:</p>
<pre><code>dates col1 cold_id
0 2020-05-27 05:50:43.346 A id1,id2
1 2020-05-24 17:23:30.141 B id3, id4
3 2020-05-23 17:31:10.341 C id5
</code></pre>
<p>How can it be achieved?</p>
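<p>One common approach — a sketch, assuming groups may be formed purely from time gaps — is to sort by <code>dates</code>, start a new group whenever the gap to the previous row exceeds 2 minutes, and then aggregate:</p>

```python
import pandas as pd

datas = [['A', 51, 'id1', '2020-05-27 05:50:43.346'],
         ['A', 51, 'id2', '2020-05-27 05:51:08.347'],
         ['B', 45, 'id3', '2020-05-24 17:24:05.142'],
         ['B', 45, 'id4', '2020-05-24 17:23:30.141'],
         ['C', 34, 'id5', '2020-05-23 17:31:10.341']]
df = pd.DataFrame(datas, columns=['col1', 'col2', 'cold_id', 'dates'])
df['dates'] = pd.to_datetime(df.dates)

# Sort by time, then open a new group whenever the gap to the previous
# row exceeds 2 minutes; cumsum turns those breaks into group ids.
df = df.sort_values('dates')
grp = df['dates'].diff().gt(pd.Timedelta('2 min')).cumsum()
out = (df.groupby(grp)
         .agg(dates=('dates', 'first'),
              col1=('col1', 'first'),
              cold_id=('cold_id', ','.join))
         .reset_index(drop=True))
print(out)
```

<p>Note the result comes out in time order, so the joined ids follow the sorted timestamps (here <code>id4,id3</code> rather than <code>id3, id4</code>).</p>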
|
<python><pandas><group-by>
|
2023-01-26 13:37:59
| 1
| 477
|
volfi
|
75,246,762
| 774,575
|
Function vectorization says there is a 0-dimensional argument while the argument is an array
|
<p>I'm implementing this equation and using it for the set of frequencies <code>nos</code>:</p>
<img src="https://i.sstatic.net/EexME.png" width="230" />
<p>The non vectorized code works:</p>
<pre><code>import numpy as np
h = np.array([1,2,3])
nos = np.array([4, 5, 6, 7])
func = lambda h, no: np.sum([hk * np.exp(-1j * no * k) for k, hk in enumerate(h)])
# Not vectorized
resps = np.zeros(nos.shape[0], dtype='complex')
for i, no in enumerate(nos):
resps[i] = func(h, no)
print(resps)
> Out: array([-0.74378734-1.45446975j,
> -0.94989022+3.54991188j,
> 5.45190245+2.16854975j,
> 2.91801616-4.28579526j])
</code></pre>
<p>I'd like to vectorize the call in order to pass <code>nos</code> at once instead of explicitly iterating:</p>
<pre><code>H = np.vectorize(func, excluded={'h'}, signature='(k),(n)->(n)')
resps = H(h, nos)
</code></pre>
<p>When calling <code>H</code>:</p>
<blockquote>
<p>Error: ValueError: 0-dimensional argument does not have enough dimensions for all core dimensions ('n',)</p>
</blockquote>
<p>I'm using the signature parameter but I'm not sure I use it in the correct way. Without this parameter there is an error in <code>func</code>:</p>
<blockquote>
<p>TypeError: 'numpy.int32' object is not iterable</p>
</blockquote>
<p>I don't understand where the problem is.</p>
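<p>Since <code>np.vectorize</code> is essentially a Python-level loop anyway, the whole sum can instead be written with broadcasting — a sketch that reproduces the loop result:</p>

```python
import numpy as np

h = np.array([1, 2, 3])
nos = np.array([4, 5, 6, 7])

# nos[:, None] has shape (4, 1) and k has shape (3,), so the exponent
# broadcasts to a (4, 3) matrix; summing over axis 1 sums over k.
k = np.arange(len(h))
resps = (h * np.exp(-1j * nos[:, None] * k)).sum(axis=1)
print(resps)
```

<p>If <code>np.vectorize</code> is preferred, the signature likely wants a scalar core for the second argument — e.g. <code>signature='(k),()->()'</code> rather than <code>(k),(n)->(n)</code> — so that each element of <code>nos</code> is passed to <code>func</code> as a scalar; worth verifying against the NumPy docs.</p>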
|
<python><numpy><vectorization>
|
2023-01-26 13:26:05
| 1
| 7,768
|
mins
|
75,246,672
| 2,469,182
|
Dash is ignoring url_base_pathname for requests
|
<p>I am trying to get Dash to add a prefix to the URLs it fetches data from, so instead of loading a resource like /dash-layout it requests /my/prefix/dash-layout.</p>
<p>I understand that there is a parameter url_base_pathname (<a href="https://dash.plotly.com/reference" rel="nofollow noreferrer">https://dash.plotly.com/reference</a>) that does just that. There are two more parameters, requests_pathname_prefix and routes_pathname_prefix, that have a similar function; their values default to url_base_pathname.</p>
<p>Setting requests_pathname_prefix can be done either by passing requests_pathname_prefix="/my/prefix/" to the Dash constructor or by using the DASH_URL_BASE_PATHNAME environment variable. Either method appears to be recognized, as Dash prints:</p>
<p>Dash is running on <a href="http://127.0.0.1:8050/my/custom/path/" rel="nofollow noreferrer">http://127.0.0.1:8050/my/custom/path/</a></p>
<p>However, I see that while loading up <a href="http://127.0.0.1:8050/my/custom/path/" rel="nofollow noreferrer">http://127.0.0.1:8050/my/custom/path/</a> does indeed work, Dash is still requesting without the prefix, ie calling /dash-layout (among other resources). I examined the source code with the Developer Tools and came to find that the prefix appears to not be set as it seems to be null.</p>
<p><a href="https://i.sstatic.net/P20Ym.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P20Ym.png" alt="dash-layout being requested without prefix" /></a>
<a href="https://i.sstatic.net/4INgW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4INgW.png" alt="url_base_pathname is null" /></a></p>
<p>This is despite the program printing:</p>
<pre><code>Dash is running on http://127.0.0.1:8050/my/custom/path/
* Serving Flask app 'main'
* Debug mode: on
</code></pre>
|
<python><plotly-dash>
|
2023-01-26 13:18:02
| 0
| 462
|
wtf8_decode
|
75,246,622
| 6,768,126
|
Unwanted type conversion pandas apply (int64 --> float64)
|
<p>Why does pandas automatically convert <code>int64</code> to <code>float64</code>?<br />
I've checked out these questions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/50300983/involuntary-conversion-of-int64-to-float64-in-pandas">Involuntary conversion of int64 to float64 in pandas</a></li>
<li><a href="https://stackoverflow.com/questions/50300983/involuntary-conversion-of-int64-to-float64-in-pandas">Unwanted automatic type conversion</a></li>
<li><a href="https://stackoverflow.com/questions/68851447/pandas-dtypes-float64-to-object-conversion">Pandas Dtypes : float64 to 'Object' Conversion</a></li>
</ul>
<p>but none of them is as simple as my case, as far as I understand.<br />
I am running the code on JupyterLab.</p>
<pre class="lang-py prettyprint-override"><code>>>> df.dtypes
cd_fndo int64
dif float64
dtype: object
</code></pre>
<p>so the types are <code>int64</code> and <code>float64</code>. However, applying the identity function row-wise changes the type:</p>
<pre class="lang-py prettyprint-override"><code>>>> df.apply(lambda x: x, axis=1).dtypes
cd_fndo float64
dif float64
dtype: object
</code></pre>
<p>However, when considering only the first column, the type <code>int64</code> remains the same:</p>
<pre class="lang-py prettyprint-override"><code>>>> df.iloc[:, :1].apply(lambda x: x, axis=1).dtypes
cd_fndo int64
dtype: object
</code></pre>
<p>Could someone please explain the causes of this type change?</p>
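<p>A sketch illustrating the likely cause: with <code>axis=1</code> each row is handed to the function as a <code>Series</code>, and a <code>Series</code> holds a single dtype, so the integer value is upcast to <code>float64</code> to share a row with the floats (the sample frame here is invented for illustration):</p>

```python
import pandas as pd

df = pd.DataFrame({'cd_fndo': [1, 2, 3], 'dif': [0.5, 1.5, 2.5]})

# Materializing one row shows the upcast: a mixed int/float row
# becomes a float64 Series.
row = df.iloc[0]
print(row.dtype)

# Column-wise apply never mixes dtypes across columns, so each
# column keeps its own dtype.
cols = df.apply(lambda c: c, axis=0)
print(cols.dtypes)
```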
|
<python><pandas><types><int64>
|
2023-01-26 13:14:25
| 1
| 422
|
Leonardo Maffei
|
75,246,549
| 10,722,752
|
how to sort within each group of a dataframe while retaining other column
|
<p>I am working with a large dataframe with millions of rows.</p>
<p>Sample data:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'id' : ['c1','c2','c1','c3','c2','c1','c3'],
'it' : ['it1','it2','it1','it5','it3','it7','it'],
'score' : [.8,.5,1.1,.65,.89,1.2,.91]})
df
id it score
0 c1 it1 0.8
1 c2 it2 0.5
2 c1 it1 1.1
3 c3 it5 0.65
4 c2 it3 0.89
5 c1 it7 1.2
6 c3 it 0.91
</code></pre>
<p>I am sorting the dataframe within each groups using:</p>
<pre><code>df.groupby('id', as_index = False).\
apply(pd.DataFrame.sort_values, 'score', ascending=False)
id it score
0 5 c1 it7 1.2
0 2 c1 it1 1.1
0 0 c1 it1 0.8
1 4 c2 it3 0.89
1 1 c2 it2 0.5
2 6 c3 it 0.91
2 3 c3 it5 0.65
</code></pre>
<p>But because of the large size of the data, the process takes a long time with <code>apply</code>.
Could someone please let me know how to perform the same operation in a more time-efficient way?</p>
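<p>Since the grouped sort only orders rows by <code>id</code> and then by <code>score</code> descending within each id, one likely faster alternative — a sketch, not benchmarked here — is a single vectorized <code>sort_values</code> with no <code>groupby</code>/<code>apply</code> at all:</p>

```python
import pandas as pd

df = pd.DataFrame({'id': ['c1', 'c2', 'c1', 'c3', 'c2', 'c1', 'c3'],
                   'it': ['it1', 'it2', 'it1', 'it5', 'it3', 'it7', 'it'],
                   'score': [.8, .5, 1.1, .65, .89, 1.2, .91]})

# One vectorized sort: ascending by id, descending by score within id.
out = df.sort_values(['id', 'score'], ascending=[True, False],
                     ignore_index=True)
print(out)
```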
|
<python><pandas>
|
2023-01-26 13:07:55
| 1
| 11,560
|
Karthik S
|