| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,682,498
| 18,482,459
|
Combine list of dataframes into one big dataframe avoiding duplicates on columns and indices
|
<p>Multiple data points are in a list. I want to combine them into one pandas DataFrame. Minimal example:</p>
<pre><code>list_of_frames = [pd.DataFrame({'name':'adam', 'height':'180'}, index=[0]), pd.DataFrame({'name':'adam', 'weight':'80'}, index=[1]), pd.DataFrame({'name':'eve', 'height':'190'}, index=[2])]
</code></pre>
<p>How do I obtain the following DataFrame?</p>
<pre><code> name height weight
0 adam 180 80
1 eve 190 NaN
</code></pre>
<p>If I call <code>pd.concat(list_of_frames)</code> I obtain one row per entry:</p>
<pre><code> name height weight
0 adam 180 NaN
1 adam NaN 80
2 eve 190 NaN
</code></pre>
<p>Obviously the <code>height</code> variable hasn't been 'merged'. Can I collapse this DataFrame?</p>
<p>Alternatively I tried <code>reduce(lambda l, r: pd.merge(l, r, on='name', how='outer'), list_of_frames)</code> which leads to</p>
<pre><code> name height_x weight height_y
0 adam 180 80 NaN
1 eve NaN NaN 190
</code></pre>
<p>Here we have separate column names. I feel like I am missing something obvious. Thanks for the help!</p>
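<p>A sketch of one possible approach (my own, not part of the original question): after <code>pd.concat</code>, group by <code>name</code> and take the first non-null value in each column:</p>

```python
import pandas as pd

list_of_frames = [
    pd.DataFrame({'name': 'adam', 'height': '180'}, index=[0]),
    pd.DataFrame({'name': 'adam', 'weight': '80'}, index=[1]),
    pd.DataFrame({'name': 'eve', 'height': '190'}, index=[2]),
]

# GroupBy.first() returns the first non-null entry per column,
# so rows belonging to the same name collapse into one.
df = pd.concat(list_of_frames).groupby('name', as_index=False).first()
print(df)
```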
|
<python><pandas><dataframe>
|
2024-06-28 12:36:06
| 2
| 405
|
Firefighting Physicist
|
78,682,424
| 1,722,444
|
SeleniumBase: Web Crawler being affected by human verification system (Select Similar Images)
|
<p>While crawling a courier website with SeleniumBase to take a screenshot of the tracking page, the bot is detected and a "Select Similar Images" verification box is shown.</p>
<p>URL: <a href="https://www.royalmail.com/track-your-item#/tracking-results/QF085212272GB" rel="nofollow noreferrer">https://www.royalmail.com/track-your-item#/tracking-results/QF085212272GB</a></p>
<p>Code:</p>
<pre><code>from seleniumbase import SB
fallback_ua = 'Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Mobile Safari/537.36'
chromium_arg = '--disable-gpu,--disable-blink-features=AutomationControlled'
with SB(uc=True, headless2=True, agent=fallback_ua, chromium_arg=chromium_arg) as sb:
url = "https://www.royalmail.com/track-your-item#/tracking-results/QF085212272GB"
sb.uc_open_with_reconnect(url, 3)
sb.sleep(1)
sb.save_screenshot('ss.png')
</code></pre>
<p>Expected screenshot:</p>
<p><a href="https://i.sstatic.net/78H07zeK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/78H07zeK.png" alt="site humbly submits to being scraped" /></a></p>
<p>Actual screenshot:</p>
<p><a href="https://i.sstatic.net/WxPg1Jmw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WxPg1Jmw.png" alt="CAPTCHA" /></a></p>
<h3>Versions</h3>
<ul>
<li><p>Dockerized environment: <code>python3.9-slim</code></p>
</li>
<li><p><code>seleniumbase</code>: 4.28.0</p>
</li>
<li><p>Chromium Browser: Chromium 126.0.6478.126 built on Debian 12.5, running on Debian 12.5</p>
</li>
<li><p>Chrome Driver: ChromeDriver 126.0.6478.126 (d36ace6122e0a59570e258d82441395206d60e1c-refs/branch-heads/6478@{#1591})</p>
</li>
</ul>
|
<python><selenium-chromedriver><undetected-chromedriver><seleniumbase>
|
2024-06-28 12:20:25
| 1
| 345
|
Vivek
|
78,682,382
| 5,418,176
|
Json & PySpark - read value from a struct that may be null
|
<p>I have a list of <code>.json</code> files that contain person information. One file contains the info of one person. I want to load this data into a table using <code>pyspark</code> in an Azure Databricks notebook.</p>
<p>Let's say the files are built like this:</p>
<pre><code>{
"id": 1,
"name": "Homer",
"address": {
"street": "742 Evergreen Terrace"
"city": "Springfield"
}
}
</code></pre>
<p>Fairly simple JSON here, which I can read into a dataframe with this code:</p>
<pre><code>from pyspark.sql.functions import *
sourcejson = spark.read.json("path/to/json")
df = (
sourcejson.select(
col('id'),
col('name'),
col('address.street').alias('street'),
col('address.city').alias('city')
)
)
</code></pre>
<p>which gives the expected result:</p>
<pre><code>id | name | street | city
1 | Homer | 742 Evergreen Terrace | Springfield
</code></pre>
<p>However, the problem starts when the address is unknown. In that case, the whole address struct in the JSON will just be <code>null</code>:</p>
<pre><code>{
"id": 2,
"name": "Ned",
"address": null
}
</code></pre>
<p>In the example file above, we don't know Ned's address so we have a null. Using the code from before, I would expect a result like this:</p>
<pre><code>id | name | street | city
2 | Ned | null | null
</code></pre>
<p>however, running the code results in an error:</p>
<pre><code>[INVALID_EXTRACT_BASE_FIELD_TYPE] Can't extract a value from "address". Need a complex type [STRUCT, ARRAY, MAP] but got "STRING"
</code></pre>
<p>I understand the reason behind the error but I can't find any solution for it. Any ideas on how we could handle this?</p>
|
<python><json><pyspark><databricks><azure-databricks>
|
2024-06-28 12:13:51
| 2
| 928
|
DenStudent
|
78,682,381
| 386,861
|
How to debug a module that is imported using a notebook file in VSCode
|
<p>I've built a utilities.py script to hold functions that I need to regularly perform on a dataset. It is largely in pandas.</p>
<p>I then have a notebook, call it process.ipynb, that imports the script with <code>import utilities as ut</code>. So far so good.</p>
<p>I've got a step in the process that I know I want to fix but I want to see what a variable containing a dataframe looks like.</p>
<p>The instructions should be simple:</p>
<ol>
<li>Click to set the red breakpoint function in the script where I want the code to stop.</li>
<li>Go to the notebook and in the cell that runs the code and click the left hand side of the cell and click 'debug cell'.</li>
<li>Code runs</li>
</ol>
<p>And in system output I get</p>
<pre class="lang-none prettyprint-override"><code>0.00s - make the debugger miss breakpoints. Please pass -Xfrozen_modules=off
0.00s - to python to disable frozen modules.
0.00s - Note: Debugging will proceed. Set PYDEVD_DISABLE_FILE_VALIDATION=1 to disable this validation.
</code></pre>
<p>So far, so good.</p>
<p>The code reaches breakpoint and I think I can interrogate variables.</p>
<p>Nothing shows in the variables window.</p>
<p><a href="https://i.sstatic.net/829NMFlT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/829NMFlTt.png" alt="enter image description here" /></a></p>
<p>I've tried <code>dir()</code> but I don't recognise any variables, and similarly I don't recognise the output of <code>locals()</code>.</p>
<p>I've typed <code>mgd</code>, the name of the variable, and <code>print(mgd)</code> but get the response:</p>
<pre><code>Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'mgd' is not defined
</code></pre>
<p>I'm keen to avoid using print statements where possible but this is confusing. What's the next step?</p>
<p>I got asked for a launch.json file. I used:</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "0.2.0",
"configurations": [
{
"name": "Python: Current File",
"type": "debugpy",
"request": "launch",
"program": "${file}",
"console": "integratedTerminal"
}
]
}
</code></pre>
<p>EDIT: my example.</p>
<p>data_demo.py</p>
<pre><code>import pandas as pd
class City:
def __init__(self, df=pd.DataFrame()):
self.df = df
def transform_city(self):
print('Transforming city data...')
df = self.df.T
return df
</code></pre>
<p>notebook:</p>
<pre><code>import pandas as pd
import data_demo as dd
data = {
'Name': ['John', 'Anna', 'Peter', 'Linda'],
'Age': [28, 34, 29, 32],
'City': ['New York', 'Paris', 'Berlin', 'London']
}
# Create DataFrame
df = pd.DataFrame(data)
# Initialize the City class with the DataFrame
city_instance = dd.City(df)
# Now, call the transform_city method
transformed_city = city_instance.transform_city()
# You can put a breakpoint on the line below in VS Code
print(transformed_city)
</code></pre>
<p>I've put the red dot breakpoint on the line in the script</p>
<pre><code> df = self.df.T
</code></pre>
<p>Debug console:</p>
<pre><code>df
Traceback (most recent call last):
File "<string>", line 1, in <module>
NameError: name 'df' is not defined
</code></pre>
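<p>One setting worth checking (an assumption on my part, not something confirmed in the question): by default the VS Code Jupyter debugger only steps through and inspects user code, which can leave frames from imported modules invisible. Disabling "just my code" for notebook debugging in <code>settings.json</code> may help:</p>

```json
{
    "jupyter.debugJustMyCode": false
}
```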
|
<python><visual-studio-code>
|
2024-06-28 12:13:33
| 0
| 7,882
|
elksie5000
|
78,682,369
| 5,180,979
|
Create limited Tables from ForeignKey dependency map in SQLAlchemy ORM
|
<p>I have the following code:</p>
<pre><code>from sqlalchemy import (create_engine, Column, String,
Integer, Float, ForeignKeyConstraint)
from sqlalchemy.orm import (DeclarativeBase, relationship)
class Base(DeclarativeBase):
pass
class Process(Base):
__tablename__ = 'process'
run_id = Column(Integer, primary_key=True)
process_id = Column(String, primary_key=True)
process_attrs = Column(String, nullable=True)
class ProcessWeekly(Base):
__tablename__ = 'process_weekly'
run_id = Column(Integer, primary_key=True)
process_id = Column(String, primary_key=True)
week = Column(Integer, primary_key=True)
loss_factor = Column(Float, default=1.0)
process = relationship(Process)
__table_args__ = (
ForeignKeyConstraint([run_id, process_id],
[Process.run_id, Process.process_id]),
)
class Location(Base):
__tablename__ = "location"
run_id = Column(Integer, primary_key=True)
location_id = Column(String, primary_key=True)
engine = create_engine("sqlite:///test.db")
Base.metadata.create_all(engine, tables=[ProcessWeekly.__table__])
</code></pre>
<p>This code creates just the <code>ProcessWeekly</code> table. However, since <code>ProcessWeekly</code> has a foreign-key constraint on the <code>Process</code> table, I want <code>Process</code> to be created as well. The <code>Location</code> table has no foreign-key dependency, so I do not want to create it.</p>
<p>This is a simple example with one level of foreign-key constraints. In my actual use case there are multi-level dependencies, i.e., the <code>Process</code> table can itself have a foreign-key dependency on further tables.</p>
<p>How can I modify my code so that it creates all the tables passed in the <code>tables</code> argument of <code>Base.metadata.create_all(engine, tables=[...])</code>, plus all further downstream tables they depend on via foreign keys?</p>
<p>I'm using <code>sqlalchemy==2.0.25</code>.</p>
|
<python><oracle-database><sqlalchemy><python-3.11>
|
2024-06-28 12:11:44
| 1
| 315
|
CharcoalG
|
78,682,300
| 353,337
|
B-Spline through 3 points in 3D space
|
<p>I have three points in 3D and I'd like to find a smooth curve (e.g., a spline) that goes through all points. With 4 points or more, I can use SciPy's B-splines:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.interpolate import splev, splprep
import matplotlib.pyplot as plt
dim = 3
# doesn't work for 3 points
num_points = 4
X = np.random.rand(dim, num_points)
tck, u = splprep(X, s=0)
new_points = splev(np.linspace(0, 1, 100), tck)
# plot
ax = plt.figure().add_subplot(projection="3d")
ax.scatter(*X)
ax.plot(*new_points)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/KnYLjQNG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KnYLjQNG.png" alt="enter image description here" /></a></p>
<p>Why doesn't this work for three points? What could I do instead?</p>
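<p>A sketch of a likely explanation and workaround (my reading, not a confirmed answer): <code>splprep</code> defaults to cubic splines (<code>k=3</code>), which need at least <code>k + 1 = 4</code> points; with three points, the degree can be lowered to quadratic:</p>

```python
import numpy as np
from scipy.interpolate import splev, splprep

rng = np.random.default_rng(0)
X = rng.random((3, 3))  # 3 dimensions, 3 points

# k=2 (quadratic) only needs 3 points; s=0 keeps it interpolating,
# so the curve passes through all input points.
tck, u = splprep(X, s=0, k=2)
new_points = np.asarray(splev(np.linspace(0, 1, 100), tck))
```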
|
<python><scipy>
|
2024-06-28 11:55:56
| 1
| 59,565
|
Nico Schlömer
|
78,682,002
| 7,123,797
|
Definition of the term "value of an object" in CPython
|
<p>Is it possible to give a formal definition of the term "value of an object" in CPython? After briefly reading some reference articles and Python tutorials, I came to the conclusion that in most cases this term means one of the following two things (depending on the context). Is this conclusion correct or not?</p>
<ul>
<li><p>In a broad sense, "value of an object" means all the data stored in the object's struct (this struct is derived from the PyObject struct), excluding the type designator and reference counter. In other words, it is data (collection of bytes) stored in object's struct fields, excluding <code>ob_type</code> and <code>ob_refcnt</code> fields (and maybe some other bookkeeping fields related to garbage collection). It seems that Python object's instance variables are also stored in this struct. However, I am not sure about bound methods – where they are stored after creation.<br />
For example, every object of type <code>int</code> is generated by the following struct (after expanding C macros):</p>
<pre><code> struct _longobject {
Py_ssize_t ob_refcnt; /* object reference counter */
PyTypeObject* ob_type; /* object type designator*/
Py_ssize_t ob_size;
digit ob_digit[1];
};
</code></pre>
<p>Hence, its value is the data (collection of bytes) stored in <code>ob_size</code> and <code>ob_digit[1]</code>.</p>
</li>
<li><p>In a narrow sense, "value of an object" means the string representation of the object that is evaluated by either <code>obj.__repr__()</code> or <code>obj.__str__()</code>. This representation isn't usually stored explicitly in the PyObject struct, but is evaluated by the above-mentioned bound methods, which use several fields of the PyObject struct during this evaluation. Note that for objects of some types (for example, function objects or file objects) this string representation is trivial and doesn't contain any valuable information.</p>
</li>
</ul>
<hr />
<p>The official Python reference has no definition of the value of an object. It just says that</p>
<blockquote>
<p>Every object has an identity, a type, and a value.</p>
</blockquote>
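<p>A small illustration of that triple (my own example, not from the reference): two distinct list objects can share a type and a value while having different identities:</p>

```python
a = [1, 2, 3]
b = [1, 2, 3]

print(a == b)              # True  - equal values
print(a is b)              # False - distinct identities
print(id(a) == id(b))      # False - identity is what id() reports
print(type(a) is type(b))  # True  - same type
```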
<p>The Python glossary doesn't have such a definition either (although the term "value" appears in it 34 times). Moreover, it is not really clear how we should interpret the term "value" in its <a href="https://docs.python.org/3/glossary.html#term-object" rel="nofollow noreferrer">definition of object</a>:</p>
<blockquote>
<p>Object - any data with state (attributes or value) and defined behavior (methods).</p>
</blockquote>
<p>This definition contrasts "value" with "attributes". Maybe we should interpret "value" here in the narrow sense as I defined it above, but I am not sure.</p>
<hr />
<p>The answer below is a good explanation of the concept of "value" in Python. Now, I think the definition written in my first bullet can be called the definition of "memory representation of a value of an object" (synonym: "byte representation of a value of an object"). This representation is not unique because it is machine-dependent. And the definition written in my second bullet should be called the definition of "string representation of an object" (synonym: "string representation of a value of an object").</p>
|
<python><object><language-lawyer><cpython>
|
2024-06-28 10:44:48
| 2
| 355
|
Rodvi
|
78,681,955
| 2,706,344
|
Force python to write namespace in root tag
|
<p>I'm using Python to generate an XML document for a third party. The third party has very precise ideas about how the final XML file shall look. In my case it wants me to define a namespace in the root tag which is never used in the document. The
<code>ET.ElementTree.write</code> method is quite smart and only declares namespaces in the root tag that are actually used in the document. Is there a way to overcome its smartness and force it to write an additional namespace declaration into the root tag?</p>
<p>The respective attribute in the root tag shall look like that:</p>
<pre><code>xmlns:SPECIAL_NAMESPACE="NAMESPACE_URL"
</code></pre>
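<p>One workaround sketch (an assumption on my part, not an officially documented feature): since a namespace declaration is serialised like any other attribute, it can be set on the root element by hand:</p>

```python
import xml.etree.ElementTree as ET

root = ET.Element("root")
# ElementTree writes this out verbatim as an attribute, which is
# exactly the xmlns:... declaration the third party expects.
root.set("xmlns:SPECIAL_NAMESPACE", "NAMESPACE_URL")
xml_bytes = ET.tostring(root)
print(xml_bytes)
```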
|
<python><xml><elementtree><xml-namespaces>
|
2024-06-28 10:33:36
| 0
| 4,346
|
principal-ideal-domain
|
78,681,656
| 5,269,892
|
Pandas read csv simultaneously passing usecols and names args
|
<p>When reading a CSV file as a pandas dataframe, an error is raised when trying to select a subset of columns based on original column names (<code>usecols=</code>) and renaming the selected columns (<code>names=</code>). Passing renamed column names to <code>usecols</code> works, but all columns must be passed to <code>names</code> to correctly select columns.</p>
<pre><code># read the entire CSV
df1a = pd.read_csv(folder_csv+'test_read_csv.csv')
# select a subset of columns while reading the CSV
df1b = pd.read_csv(folder_csv+'test_read_csv.csv', usecols=['Col1','Col3'])
# rename columns while reading the CSV
df1c = pd.read_csv(folder_csv+'test_read_csv.csv', names=['first', 'second', 'third'], header=0)
# select a subset of columns and rename them while reading the CSV;
# throws error "ValueError: Usecols do not match columns, columns expected but not found: ['Col3', 'Col1']"
df1d = pd.read_csv(folder_csv+'test_read_csv.csv', usecols=['Col1','Col3'], names=['first','third'])
# selects columns 1 and 2, calling them 1 and 3
df1e = pd.read_csv(folder_csv+'test_read_csv.csv', usecols=['first','third'], names=['first','third'])
# selects columns 1 and 3 correctly
df1f = pd.read_csv(folder_csv+'test_read_csv.csv', usecols=['first','third'], names=['first','second','third'])
</code></pre>
<p>The CSV file <em>test_read_csv.csv</em> is:</p>
<pre><code>Col1,Col2,Col3
val1a,val2a,val3a
val1b,val2b,val3b
val1c,val2c,val3c
val1d,val2d,val3d
val1e,val2e,val3e
</code></pre>
<p><strong>Wouldn't it be a fairly common use case to select certain columns based on the <em>original</em> column names and then renaming only those columns while reading the data?</strong></p>
<p>Of course, it is possible to select the columns and rename them after loading the entire CSV file:</p>
<pre><code>df1 = df1[['Col1','Col3']]
df1.columns = ['first', 'third']
</code></pre>
<p>But I don't know how and whether this can be integrated directly when reading the data. The same holds also for <code>pd.read_excel()</code>.</p>
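<p>For what it's worth, a two-step sketch (my own workaround, not a single-call solution): select on the original names with <code>usecols</code>, then rename only the selected columns with <code>rename</code>:</p>

```python
import io
import pandas as pd

csv_text = """Col1,Col2,Col3
val1a,val2a,val3a
val1b,val2b,val3b
"""

# usecols filters on the *original* header; rename then maps only
# the selected columns to their new names.
df = (pd.read_csv(io.StringIO(csv_text), usecols=['Col1', 'Col3'])
        .rename(columns={'Col1': 'first', 'Col3': 'third'}))
print(df.columns.tolist())
```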
|
<python><pandas><dataframe><read-csv>
|
2024-06-28 09:26:10
| 3
| 1,314
|
silence_of_the_lambdas
|
78,681,428
| 17,614,576
|
Temporal SDK does not return all workflow executions
|
<p>I have an error in logs which tells that I have error in workflow with 'execute-flow-97bc4090-4cb4-4652-9841-7f01b8fda6f8' id:</p>
<pre><code>Completing activity as failed ({'activity_id': '1', 'activity_type': 'send_email', 'attempt': 23928, 'namespace': 'default', 'task_queue': 'execute-flow-task-queue', 'workflow_id': 'execute-flow-97bc4090-4cb4-4652-9841-7f01b8fda6f8', 'workflow_run_id': 'c2baa95b-8660-42e5-a451-221a05cf54c0', 'workflow_type': 'ExecuteFlowWorkflow'})
</code></pre>
<p>So I want to run a debug script in Python to find similar problems in other workflows. But <code>temporal_client.list_workflows()</code> does not return all workflows that are currently running.</p>
<p>To prove it, I wrote a simple Python script that retrieves the problematic workflow by its id, then tries to find it via <code>temporal_client.list_workflows()</code>; the workflow wasn't found, and the number of workflows returned by <code>temporal_client.list_workflows()</code> does not exceed the page size limit.</p>
<pre><code>import asyncio

async def main():
temporal_settings = TemporalSettings()
temporal_client = await create_temporal_client(temporal_settings)
workflow_handle = temporal_client.get_workflow_handle("execute-flow-97bc4090-4cb4-4652-9841-7f01b8fda6f8")
print("Workflow id:", workflow_handle.id)
workflow_executions_count = 0
async for workflow_execution in temporal_client.list_workflows(page_size=1000):
workflow_executions_count += 1
if workflow_execution.id != workflow_handle.id:
continue
print("Workflow execution found")
break
else:
print("Workflow execution not found")
print("Workflow executions count:", workflow_executions_count)
asyncio.run(main())
</code></pre>
<p>Output:</p>
<pre><code>Workflow id: execute-flow-97bc4090-4cb4-4652-9841-7f01b8fda6f8
Workflow execution not found
Workflow executions count: 145
</code></pre>
|
<python><temporal><temporal-workflow>
|
2024-06-28 08:34:14
| 1
| 436
|
Prosto_Oleg
|
78,681,318
| 289,784
|
Using a programmatically generated type in type hints
|
<p>For some external reasons I'm generating a set of dataclasses dynamically with <code>make_dataclass</code>. In other parts of my codebase, I want to use these types in type hints. But both <code>mypy</code> and <code>pyright</code> complain.</p>
<pre class="lang-none prettyprint-override"><code>$ pyright dynamic_types.py
/home/user/testing/dynamic_types.py
/home/user/testing/dynamic_types.py:23:18 - error: Variable not allowed in type expression (reportInvalidTypeForm)
1 error, 0 warnings, 0 informations
$ mypy dynamic_types.py
dynamic_types.py:23: error: Variable "dynamic_types.mydc" is not valid as a type [valid-type]
dynamic_types.py:23: note: See https://mypy.readthedocs.io/en/stable/common_issues.html#variables-vs-type-aliases
Found 1 error in 1 file (checked 1 source file)
</code></pre>
<p>I understand the argument, but in my case the "dynamic" part is a dictionary within the same module. Is there some way I could get this to work?</p>
<p>MWE:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import asdict, field, make_dataclass
class Base:
def my_method(self):
pass
spec = {
"kind_a": [("foo", int, field(default=None)), ("bar", str, field(default=None))]
}
def make_mydc(kind: str) -> type:
"""Create dataclass from list of fields and types."""
fields = spec[kind]
return make_dataclass("mydc_t", fields, bases=(Base,))
mydc = make_mydc("kind_a")
def myfunc(data: mydc):
print(asdict(data))
data = mydc(foo=42, bar="test")
myfunc(data)
</code></pre>
<p>I can't use <code>Base</code> as the type hint because it doesn't have all the attributes.</p>
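<p>A sketch of one possible workaround (my own, under the assumption that the field spec is knowable statically within the module): give the type checker a static stand-in under <code>typing.TYPE_CHECKING</code>, while runtime keeps <code>make_dataclass</code>:</p>

```python
from dataclasses import asdict, field, make_dataclass
from typing import TYPE_CHECKING, Optional

if TYPE_CHECKING:
    from dataclasses import dataclass

    @dataclass
    class mydc:  # static declaration that mypy/pyright see
        foo: Optional[int] = None
        bar: Optional[str] = None
else:
    # what actually runs: the dynamically built class
    mydc = make_dataclass(
        "mydc_t",
        [("foo", int, field(default=None)), ("bar", str, field(default=None))],
    )

def myfunc(data: "mydc") -> dict:
    return asdict(data)

result = myfunc(mydc(foo=42, bar="test"))
print(result)
```

<p>The price is keeping the static stub in sync with <code>spec</code>; whether that is acceptable depends on how dynamic the specs really are.</p>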
|
<python><python-typing><python-dataclasses>
|
2024-06-28 08:06:08
| 1
| 4,704
|
suvayu
|
78,681,300
| 4,983,469
|
S3 PutObject works but List/Delete fails
|
<p>I am creating a session token by setting the permissions like this. This is done in Kotlin.</p>
<pre><code> val stsClient = applicationContext.getBean("awsSecurityTokenClient", awsClient.getBasicCredentials()) as AWSSecurityTokenService
val folderName = "<folder>"
val keyRights = Statement(Statement.Effect.Allow)
keyRights.actions.addAll(listOf(S3Actions.PutObject, S3Actions.DeleteObject, S3Actions.ListObjects, S3Actions.GetObject))
keyRights.setResources(arrayListOf(Resource("arn:aws:s3:::$bucket/${folderName}/*")))
val statementList = listOf(keyRights)
val federationToken = stsClient.getFederationToken(getFederationTokenRequest(bucket, statementList))
val sessionToken = SessionToken(federationToken)
</code></pre>
<pre><code>private fun getFederationTokenRequest(userId: Any, accessStatements: List<Statement>): GetFederationTokenRequest {
val tokenRequest = GetFederationTokenRequest()
val policy = Policy()
policy.statements = accessStatements
tokenRequest.policy = policy.toJson()
tokenRequest.name = "$userId"
tokenRequest.durationSeconds = expireAt * 3600
return tokenRequest
}
</code></pre>
<p>The policy json from debugging is as -</p>
<pre><code>{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:DeleteObject",
"s3:ListBucket",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::<bucket_name>/<path>/"
]
}
]
}
</code></pre>
<p>The session token is then used to upload files to the allowed path. This works: I can see the upload succeed and the files appear in S3. This part is in Python.</p>
<p>But with the same token, list and delete fail.</p>
<pre><code> session = boto3.Session(aws_access_key_id=s3_meta['accessToken'],
aws_secret_access_key=s3_meta['secureToken'],
aws_session_token=s3_meta['sessionToken'])
s3 = session.resource('s3')
</code></pre>
<p>This fails:</p>
<pre><code> bucket = s3.Bucket(bucket_name)
bucket.objects.filter(Prefix=path_in_bucket).delete()
#Failue: ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied
</code></pre>
<p>This works:</p>
<pre><code>s3.meta.client.upload_file(file, bucket_name, bucket_upload_path)
</code></pre>
<p>I have clearly set list and delete in the allowed actions. How is Put working while the other two fail? What am I missing here?</p>
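<p>A possible explanation (my reading, not verified against the actual account): <code>s3:ListBucket</code> is a bucket-level action, so it must be granted on the bucket ARN itself rather than on an object path, and <code>bucket.objects.filter(...).delete()</code> performs a list first, which would make delete appear to fail too. A commonly used policy shape splits the statements like this:</p>

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject", "s3:GetObject"],
      "Resource": "arn:aws:s3:::<bucket_name>/<path>/*"
    },
    {
      "Sid": "ListAccess",
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::<bucket_name>",
      "Condition": {"StringLike": {"s3:prefix": ["<path>/*"]}}
    }
  ]
}
```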
|
<python><amazon-web-services><kotlin><amazon-s3><boto3>
|
2024-06-28 08:01:43
| 1
| 1,997
|
leoOrion
|
78,681,138
| 6,141,238
|
In Python: Can I reverse the order in which several matplotlib plot windows are visible / in focus?
|
<p>When I display several (say, 10 or so) matplotlib plots in either IDLE or VS Code, they generally appear as a cascading stack of plots from top left to bottom right. In this stack, the first plot is on the bottom and last plot is on top (that is, fully visible). To view any plot in the stack other than the last, I would click on its title bar, which is visible because the plots in the stack are cascaded, or staggered, from upper left to lower right.</p>
<p>The downside of this default behavior is that I (and plausibly other Python users) often want to view the first plot first and the last plot last. So it would be more convenient to have the visibility of the plots ordered from last to first, and to do so without changing the figure numbers (Figure 1 through Figure 10 if there are 10 figures).</p>
<p>As a hack, we might try to accomplish this in the code by reversing the order in which the figures are plotted, and also reversing the figure numbers. But this solution can be cumbersome — for readability, we may want to generate the plots at the place in the code where the plotted variables are constructed, rather than postponing all plotting to the end.</p>
<p>Is there a better solution to this problem?</p>
|
<python><matplotlib><plot><window>
|
2024-06-28 07:22:54
| 1
| 427
|
SapereAude
|
78,680,919
| 7,227,146
|
ModuleNotFoundError: No module named 'setuptools.extern.six'
|
<p>I'm trying to <code>pip install streamlit-wordcloud</code> (<a href="https://github.com/rezaho/streamlit-wordcloud" rel="nofollow noreferrer">here</a>), but I get the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'setuptools.extern.six'
</code></pre>
<p>My Python and pip versions are:</p>
<pre><code>> python --version
Python 3.12.3
> pip --version
pip 24.1.1 from C:\Users\...\.venv\Lib\site-packages\pip (python 3.12)
</code></pre>
<p>I tried doing <code>pip install --upgrade setuptools</code> as suggested <a href="https://stackoverflow.com/questions/78124109/heroku-deploy-modulenotfounderror-no-module-named-setuptools-extern-six">here</a> but it doesn't work.</p>
<p>Before getting this error, I got <code>ModuleNotFoundError: No module named 'distutils'</code>, for which <code>pip install setuptools</code> worked.</p>
|
<python><pip><setuptools>
|
2024-06-28 06:22:41
| 0
| 679
|
zest16
|
78,680,681
| 200,794
|
Why are pyparsing's `DelimitedList` and `Dict` so awkward to use together?
|
<p>Pyparsing offers the <code>ParseElementEnhance</code> subclass <code>DelimitedList</code> for parsing (typically comma-separated) lists:</p>
<pre class="lang-py prettyprint-override"><code>>>> kv_element = pp.Word(pp.alphanums)
>>> kv_list = pp.DelimitedList(kv_element)
>>> kv_list.parse_string('red, green, blue')
ParseResults(['red', 'green', 'blue'], {})
</code></pre>
<p>And it provides the <code>TokenConverter</code> subclass <code>Dict</code>, for transforming a repeating expression into a dictionary:</p>
<pre class="lang-py prettyprint-override"><code>>>> key = value = pp.Word(pp.alphanums)
>>> kv_pair = key + pp.Suppress("=") + value
>>> kv_dict = pp.Dict(pp.Group(kv_pair)[...])
>>> kv_dict.parse_string('R=red G=green B=blue')
ParseResults([
ParseResults(['R', 'red'], {}),
ParseResults(['G', 'green'], {}),
ParseResults(['B', 'blue'], {})
], {'R': 'red', 'G': 'green', 'B': 'blue'})
</code></pre>
<p>But combining them feels awkward. It's possible to build a successful combined <code>ParserElement</code> for parsing a dict out of a delimited list, but compared to the above it requires:</p>
<ol>
<li>Redefining the <code>DelimitedList</code> to output <code>Group()</code>s</li>
<li>Repeating the <code>DelimitedList</code> when constructing the <code>Dict()</code> around it, to appease the type checker.<sup>1</sup></li>
</ol>
<pre class="lang-py prettyprint-override"><code>>>> kv_pair = key + pp.Suppress("=") + value
>>> kv_pairlist = pp.DelimitedList(pp.Group(kv_pair))
>>> kv_pairdict = pp.Dict(kv_pairlist[...])
>>> kv_pairdict.parse_string('R=red, G=green, B=blue')
ParseResults([
ParseResults(['R', 'red'], {}),
ParseResults(['G', 'green'], {}),
ParseResults(['B', 'blue'], {})
], {'R': 'red', 'G': 'green', 'B': 'blue'})
</code></pre>
<p>The whole effect reads like you're defining a parser to create a dictionary from a <em>series</em> of 1-element delimited lists, each containing a single key-value pair match. (In fact, I'm not entirely sure that isn't what's <strong>actually happening</strong> in the parser.)</p>
<p>Writing code to express the intent — a parser definition to match a single delimited list, containing a series of key-value pair matches — feels like a struggle against the API. (The fact that using <code>kv_pairdict = pp.Dict(kv_pairlist)</code> will <em>function</em> the same as above, but runs afoul of the type checker, is especially vexing.)</p>
<p>Is there a cleaner way to express the intended parser definition, within the Pyparsing API? If not, is that a deficiency of my design, of Pyparsing's API, or something else?</p>
<p>(Do I have the definition inside out? <code>DelimitedList(Dict(Group(kv_pair)[1, ...]))</code> does also work, but feels even more conceptually backwards to me. But it doesn't involve nearly as much fighting against the API, so maybe I'm just looking at it wrong.)</p>
<h3>Notes</h3>
<ol>
<li>(Otherwise, at least in VSCode, it gets this vaguely insane-sounding annotation:)
<blockquote>
<p>No overload variant of "dict" matches argument type
"DelimitedList" (mypycall-overload)</p>
<p>Possible overload variants:</p>
<pre class="lang-py prettyprint-override"><code> def [_KT, _VT] __init__(self) -> dict[_KT, _VT]
def [_KT, _VT] __init__(self, **kwargs: _VT) -> dict[str, _VT]
def [_KT, _VT] __init__(self, SupportsKeysAndGetItem[_KT, _VT], /) -> dict[_KT, _VT]
def [_KT, _VT] __init__(self, SupportsKeysAndGetItem[str, _VT], /, **kwargs: _VT) -> dict[str, _VT]
def [_KT, _VT] __init__(self, Iterable[tuple[_KT, _VT]], /) -> dict[_KT, _VT]
def [_KT, _VT] __init__(self, Iterable[tuple[str, _VT]], /, **kwargs: _VT) -> dict[str, _VT]
def [_KT, _VT] __init__(self, Iterable[list[str]], /) -> dict[str, str]
def [_KT, _VT] __init__(self, Iterable[list[bytes]], /) -> dict[bytes, bytes]  mypy(note)
</code></pre>
</blockquote>
</li>
</ol>
|
<python><parsing><pyparsing>
|
2024-06-28 04:42:43
| 1
| 2,191
|
FeRD
|
78,680,547
| 11,143,113
|
How to not show double quotes in ansible run output while displaying contents of a file?
|
<p>The clean.txt file below contains data without any double quotes.</p>
<pre><code>[wladmin@linuxhost ~]$ cat clean.txt
REPL session history will not be persisted.
WDA-EMEA-PROD
{
db: 'WDA-EMEA-PROD',
collections: 53,
ok: 1,
'$clusterTime': {
clusterTime: Timestamp({ t: 1719475282, i: 1 }),
signature: {
hash: Binary.createFromBase64('jXjZEjVC7wm8wCNuJVTSvGWZNO4=', 0),
keyId: Long('7348787707444723714')
}
},
operationTime: Timestamp({ t: 1719475282, i: 1 })
}
Get Collection Count
</code></pre>
<p>Ansible playbook trying to display the contents of the file as-is:</p>
<pre><code>[wladmin@linuxhost ~]$ cat logformat.yml
---
- name: "Play 1"
hosts: localhost
gather_facts: no
tasks:
- name: Read Github logs
raw: "cat clean.txt"
register: file_contentone
- debug:
msg: "{{ file_contentone.stdout_lines }}"
</code></pre>
<p>Ansible playbook run showing double quotes in output.</p>
<pre><code>[wladmin@linuxhost ~]$ ansible-playbook logformat.yml
PLAY [Play 1] **************************************************************************************************************************************************************
TASK [Read Github logs] ****************************************************************************************************************************************************
Thursday 27 June 2024 21:51:52 -0500 (0:00:00.013) 0:00:00.013 *********
changed: [localhost]
TASK [debug] ***************************************************************************************************************************************************************
Thursday 27 June 2024 21:51:52 -0500 (0:00:00.024) 0:00:00.038 *********
ok: [localhost] => {
"msg": [
"REPL session history will not be persisted.",
" EMEA-PROD",
"{",
" collections: 53,",
" ok: 1,",
" '$clusterTime': {",
" clusterTime: Timestamp({ t: 1719475282, i: 1 }),",
" signature: {",
" hash: Binary.createFromBase64('jXjZEjVC7wm8wCNuJVTSvGWZNO4=', 0),",
" keyId: Long('7348787707444723714')",
" }",
" },",
" operationTime: Timestamp({ t: 1719475282, i: 1 })",
"}",
"Get Collection Count"
]
}
PLAY RECAP *****************************************************************************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
Thursday 27 June 2024 21:51:52 -0500 (0:00:00.039) 0:00:00.077 *********
</code></pre>
<p>I understand that the file contents are without any double quotes; however, can I somehow print the output without double quotes around each line?</p>
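<p>As far as I can tell, the quotes are not in the file at all: <code>stdout_lines</code> is a list, and the <code>debug</code> module appears to render list values as JSON, which quotes each element. A minimal Python sketch of that difference (assuming the JSON-style rendering; in Ansible, printing the single string <code>file_contentone.stdout</code> instead of <code>stdout_lines</code> might avoid the per-line quoting):</p>

```python
import json

# Stand-in for file_contentone.stdout_lines: a list of strings.
lines = ["REPL session history will not be persisted.", "{", "  ok: 1,", "}"]

# Rendering the list as JSON (what debug does with a list value)
# puts quotes around every element:
rendered = json.dumps(lines, indent=4)
print(rendered)

# Rendering one joined string keeps the text quote-free:
joined = "\n".join(lines)
print(joined)
```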
|
<python><ansible><text-formatting>
|
2024-06-28 03:23:38
| 0
| 3,175
|
Ashar
|
78,680,411
| 5,049,813
|
How to predict the resulting type after indexing a Pandas DataFrame
|
<p>I have a Pandas <code>DataFrame</code>, as defined <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.index.html" rel="nofollow noreferrer">here</a>:</p>
<pre class="lang-py prettyprint-override"><code>df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'],
'Age': [25, 30, 35],
'Location': ['Seattle', 'New York', 'Kona']},
index=([10, 20, 30]))
</code></pre>
<p><strong>However, when I index into this <code>DataFrame</code>, I can't accurately predict what type of object is going to result from the indexing:</strong></p>
<pre class="lang-py prettyprint-override"><code># (1) str
df.iloc[0, df.columns.get_loc('Name')]
# (2) Series
df.iloc[0:1, df.columns.get_loc('Name')]
# (3) Series
df.iloc[0:2, df.columns.get_loc('Name')]
# (4) DataFrame
df.iloc[0:2, df.columns.get_loc('Name'):df.columns.get_loc('Age')]
# (5) Series
df.iloc[0, df.columns.get_loc('Name'):df.columns.get_loc('Location')]
# (6) DataFrame
df.iloc[0:1, df.columns.get_loc('Name'):df.columns.get_loc('Location')]
</code></pre>
<p>Note that each of the pairs above <em>contain the same data</em>. (e.g. <code>(2)</code> is a Series that contains a single string, <code>(4)</code> is a DataFrame that contains a single column, etc.)</p>
<p><strong>Why do they output different types of objects? How can I predict what type of object will be output?</strong></p>
<p>Given the data, it looks like the rule is based on how many slices (colons) you have in the index:</p>
<ul>
<li>0 slices (<code>(1)</code>): scalar value</li>
<li>1 slice (<code>(2)</code>, <code>(3)</code>, <code>(5)</code>): <code>Series</code></li>
<li>2 slices (<code>(4)</code>, <code>(6)</code>): <code>DataFrame</code></li>
</ul>
<p>However, I'm not sure if this is always true, and even if it is always true, <strong>I want to know the underlying mechanism as to why it is like that.</strong></p>
<p>I've spent a while looking at the <a href="https://pandas.pydata.org/docs/user_guide/indexing.html" rel="nofollow noreferrer">indexing documentation</a>, but it doesn't seem to describe this behavior clearly. The <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.iloc.html" rel="nofollow noreferrer">documentation for the <code>iloc</code> function</a> also doesn't describe the return types.</p>
<p><strong>I'm also interested in the same question for <code>loc</code></strong> instead of <code>iloc</code>, but, since <a href="https://stackoverflow.com/questions/49962417/why-does-loc-have-inclusive-behavior-for-slices"><code>loc</code> is inclusive</a>, the results aren't quite as bewildering. (That is, you can't get pairs of indexes with different types where the indexes should pull out the exact same data.)</p>
|
<python><pandas><dataframe><indexing>
|
2024-06-28 02:14:41
| 1
| 5,220
|
Pro Q
|
78,680,399
| 1,033,217
|
Create Process in Debug Using Python ctypes
|
<p>The following code is supposed to start a new process, <code>calc.exe</code>, in debug mode. However, it fails with code 2, <code>ERROR_FILE_NOT_FOUND</code>, even though <code>calc.exe</code> does exist on the system. What could be wrong with this code? Is there an issue with how the path is passed that can be fixed so this works?</p>
<pre class="lang-py prettyprint-override"><code>from ctypes import *
kernel32 = windll.kernel32
WORD = c_ushort
DWORD = c_ulong
LPBYTE = POINTER(c_ubyte)
LPTSTR = POINTER(c_char)
HANDLE = c_void_p
class STARTUPINFO(Structure):
_fields_ = [
("cb", DWORD),
("lpReserved", LPTSTR),
("lpDesktop", LPTSTR),
("lpTitle", LPTSTR),
("dwX", DWORD),
("dwY", DWORD),
("dwXSize", DWORD),
("dwYSize", DWORD),
("dwXCountChars", DWORD),
("dwYCountChars", DWORD),
("dwFillAttribute",DWORD),
("dwFlags", DWORD),
("wShowWindow", WORD),
("cbReserved2", WORD),
("lpReserved2", LPBYTE),
("hStdInput", HANDLE),
("hStdOutput", HANDLE),
("hStdError", HANDLE),
]
class PROCESS_INFORMATION(Structure):
_fields_ = [
("hProcess", HANDLE),
("hThread", HANDLE),
("dwProcessId", DWORD),
("dwThreadId", DWORD),
]
DEBUG_PROCESS = 0x00000001
creation_flags = DEBUG_PROCESS
startupinfo = STARTUPINFO()
startupinfo.dwFlags = 0x1
startupinfo.wShowWindow = 0x0
startupinfo.cb = sizeof(startupinfo)
process_information = PROCESS_INFORMATION()
result = kernel32.CreateProcessA("C:\\Windows\\System32\\calc.exe",
None,
None,
None,
None,
creation_flags,
None,
None,
byref(startupinfo),
byref(process_information)
)
print(result)
print(kernel32.GetLastError())
</code></pre>
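<p>One hypothesis I am considering (unverified): <code>CreateProcessA</code> is the ANSI entry point, so in Python 3 the executable path should be <code>bytes</code>, not <code>str</code>. Without declared <code>argtypes</code>, ctypes marshals a <code>str</code> as UTF-16 wide characters, which the ANSI API would misread, so the garbled path is reported as not found. A platform-neutral sketch of the two path forms:</p>

```python
# Hypothesis (labeled as such): an *A function wants a bytes (ANSI) path,
# while CreateProcessW wants Python 3's native str (wide) path.
path = "C:\\Windows\\System32\\calc.exe"
ansi_path = path.encode("ascii")   # the form CreateProcessA expects
wide_path = path                   # the form CreateProcessW expects
print(ansi_path, wide_path)
```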
|
<python><python-3.x><windows><ctypes><kernel32>
|
2024-06-28 02:06:31
| 1
| 795
|
Utkonos
|
78,680,132
| 7,773,898
|
NLTK throws FileNotFoundError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']
|
<p>Can anyone tell me the reason for <code>FileNotFoundError: [Errno 2] No usable temporary directory found in ['/tmp', '/var/tmp', '/usr/tmp', '/']</code>?</p>
<p>I am running this as a Docker ECS Fargate task, and it throws this error.</p>
<p>I have tried building and running the image locally and it works fine, but on ECS Fargate it throws this error.</p>
<p>Below is my Dockerfile:</p>
<pre><code>FROM python:3.10-slim-bullseye
RUN apt-get update \
&& apt-get install --yes build-essential python3-dev default-libmysqlclient-dev libpq-dev pkg-config\
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN groupadd -r runner && useradd -r -g runner runner
COPY requirements /requirements
RUN pip install -U pip
RUN pip install -r requirements/base.txt
COPY handler.py /
COPY document_generation /document_generation
RUN export PYTHONPATH="${PYTHONPATH}:$(pwd)";
WORKDIR /
# Create a temporary directory and set permissions
RUN mkdir -p /documents_api_tmp && chmod 777 -R /documents_api_tmp && chmod o+t -R /documents_api_tmp
ENV TMPDIR=/documents_api_tmp
# Debugging step: Check directory permissions and disk usage
RUN df -h \
&& du -sh /documents_api_tmp
USER runner
CMD ["python", "handler.py"]
</code></pre>
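<p>For reference, Python's <code>tempfile</code> module reads <code>TMPDIR</code> the first time a temp directory is requested, so the <code>ENV TMPDIR</code> line should normally take effect. A local sketch of that mechanism (<code>/tmp</code> here is a stand-in for <code>/documents_api_tmp</code>):</p>

```python
import os
import tempfile

# tempfile caches its directory choice; clearing the cache forces it to
# re-read TMPDIR, which is how the Dockerfile's ENV TMPDIR takes effect.
os.environ["TMPDIR"] = "/tmp"      # stand-in for /documents_api_tmp
tempfile.tempdir = None
chosen = tempfile.gettempdir()
print(chosen)
```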
|
<python><docker-compose><amazon-ecs>
|
2024-06-27 23:24:56
| 0
| 383
|
ALTAF HUSSAIN
|
78,680,120
| 12,946,401
|
Video codec for 3D volumetric video datasets
|
<p>I have 3D volumetric MRI scans representing a time series. So the sequence of these 3D volumes presents a video composed of voxels rather than the conventional 2D pixels from traditional 2D videos. But I noticed that 2D or RGB videos have several codecs available to compress the video so it can be efficiently stored and viewed during playback. However, I am in search of the 3D equivalent and was not able to find a video codec for voxel data. The closest I have come is information on 3D volume compression, but not on 3D video volume compression (<a href="https://eisenwave.github.io/voxel-compression-docs/" rel="nofollow noreferrer">https://eisenwave.github.io/voxel-compression-docs/</a>). The other alternative is to use Python with zarr arrays and store the N-dimensional array in appropriately sized chunks with the available numcodecs compression algorithms. Is the recommended approach to individually compress each volume, or are there better approaches to storing 3D video data?</p>
|
<python><multidimensional-array><3d><compression><image-compression>
|
2024-06-27 23:14:39
| 0
| 939
|
Jeff Boker
|
78,680,003
| 1,592,821
|
Az.cmd closes immediately
|
<p>When I run az.cmd on my Windows 10 machine, some information flashes by and the window closes immediately.
When I right-click the shortcut and choose Edit, I see</p>
<pre><code>::
:: Microsoft Azure CLI - Windows Installer - Author file components script
:: Copyright (C) Microsoft Corporation. All Rights Reserved.
::

@IF EXIST "%~dp0..\python.exe" (
    SET AZ_INSTALLER=MSI
    "%~dp0..\python.exe" -IBm azure.cli %*
) ELSE (
    echo Failed to load python executable.
    exit /b 1
)
</code></pre>
<p>I reinstalled python and rebooted but this did not help.</p>
<p>I managed to capture this as it flashed by
<a href="https://i.sstatic.net/BHUVWbuz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BHUVWbuz.png" alt="Screen capture" /></a></p>
|
<python><powershell><azure-cli>
|
2024-06-27 22:23:13
| 1
| 18,788
|
Kirsten
|
78,679,962
| 25,413,271
|
Gitlab CI, artefacts
|
<p>I am doing my first CI project and have recently become confused about artefacts...</p>
<p>Say I have config with next jobs:</p>
<pre><code>cleanup_build:
tags:
- block_autotest
stage: cleanup
script:
- Powershell $env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe --log-level=error --run $env:JOBS_FOLDER_PATH\clear.py
install_block:
tags:
- block_autotest
stage: installation
script:
- Powershell $env:P7_TESTING_INSTALLATION_PATH\client\p7batch.exe --log-level=error --run $env:JOBS_FOLDER_PATH\setup_block.py
</code></pre>
<p>The "install_block" job must not run if the "cleanup_build" job has failed. So I have to create some kind of artefact after "cleanup_build" has succeeded, so that this artefact is visible at the "installation" stage for the "install_block" job. In the "install_block" job I could use Python to check the artefact and ensure that it exists.</p>
<p>I have also created a special folder for artefacts:</p>
<pre><code>ARTEFACTS_FOLDER_PATH: $CI_PROJECT_DIR\autotest\artefacts
</code></pre>
<p>So within the "cleanup_build" job I create a file "clean" in the artefact folder. But it seems that CI reloads the repository into the project directory, because if I keep just the "cleanup_build" job (deleting "install_block" from the yml) I can see the "clean" file in the project, but if I keep both jobs this file disappears before the "install_block" job begins...</p>
|
<python><gitlab><gitlab-ci><gitlab-ci-runner>
|
2024-06-27 22:04:09
| 1
| 439
|
IzaeDA
|
78,679,925
| 119,527
|
How to get groups of rows bounded by specific values in Pandas?
|
<h2>Data</h2>
<p>I have data which resembles the following:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame(
[
["start", ""],
["data", 10],
["data", 11],
["stop", ""],
["start", ""],
["data", 20],
["data", 21],
["stop", ""],
],
columns=["type", "value"],
)
</code></pre>
<pre><code> type value
0 start
1 data 10
2 data 11
3 stop
4 start
5 data 20
6 data 21
7 stop
</code></pre>
<h2>Goal</h2>
<p>My goal is to iterate over plain lists of <code>data</code>, as bounded by <code>start</code> and <code>stop</code>:</p>
<pre><code>[10, 11]
[20, 21]
</code></pre>
<p>To do this, I would like to be able to iterate over groups of dataframes which are located by these specific values in specific columns.</p>
<h2>Attempt</h2>
<p>I can make this work by iterating:</p>
<pre><code>def iter_groups(df):
start_idx = None
for idx, row in df.iterrows():
if row["type"] == "start":
assert start_idx is None
start_idx = idx
continue
if row["type"] == "stop":
assert start_idx is not None
yield df.iloc[start_idx : idx+1]
start_idx = None
</code></pre>
<p>But this is, unsurprisingly, quite slow. How can I do this using Pandas methods?</p>
<h2>Simplification</h2>
<p>I think it is safe to assume that there are never any rows between the <code>stop</code> of one group and the <code>start</code> of the next group.</p>
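<p>One vectorized approach worth considering (a sketch, not necessarily the fastest option): label each group with a cumulative sum of the <code>start</code> markers, then group the <code>data</code> rows by that label.</p>

```python
import pandas as pd

df = pd.DataFrame(
    [["start", ""], ["data", 10], ["data", 11], ["stop", ""],
     ["start", ""], ["data", 20], ["data", 21], ["stop", ""]],
    columns=["type", "value"],
)

# Each "start" row begins a new group; a cumulative sum of the start
# markers therefore labels every row with its group number.
group_id = (df["type"] == "start").cumsum()

# Keep only the data rows and group them by that label
# (the grouper Series aligns with the subset by index).
data_rows = df[df["type"] == "data"]
groups = [g["value"].tolist() for _, g in data_rows.groupby(group_id)]
print(groups)
```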
|
<python><pandas>
|
2024-06-27 21:45:27
| 3
| 138,383
|
Jonathon Reinhart
|
78,679,917
| 3,311,728
|
How do I edit a yaml file with python in a way that preserves format, comments, anchors, etc.?
|
<p>Let's say I have this source yaml file:</p>
<pre class="lang-yaml prettyprint-override"><code>top_level: &TL
field0: somestring
field1: "this one has quotes"
field2: [1,2,3]
# This comment describes something very important about field3
field3:
deep_list:
- name: foo
field1: bar
field2: baz
quoxx: { flow_form: 42, bar: [one, two], field}
# Here is another super important comment about the blockform field
blockform:
- one
- two
- three
- name: bar
message: I was written by a human
and deliberately formatted
this way for a good reason.
level0:
level1:
level2:
- item
- item2: 23
tl: *TL
</code></pre>
<p>And I would like to modify it with a python program. For example, perhaps I want my python program to add a new element to <code>top_level.field3.deep_list</code>. But as you might have noticed from the content of this file, it's important for this modification/update operation preserve the formatting, string quoting conventions, anchors, and comments of the rest of the file. In other words, I want the modification to be <em>minimally invasive</em>: it should only add the new lines necessary to update the list and all other lines should be left completely untouched. Is there an easy way to accomplish this?</p>
<h2>Current solution</h2>
<p>Based on my research so far, the most viable option seems to be <a href="https://yaml.readthedocs.io/en/latest/" rel="nofollow noreferrer"><code>ruamel.yaml</code></a>, which is a yaml serializer/deserializer that is capable of round-tripping a yaml document in such a way that preserves many aspects of the original formatting and comments. However, <code>ruamel.yaml</code> is still an imperfect solution for my use case. Although <code>ruamel.yaml</code> gets me <strong>way</strong> closer to what I want than <code>PyYaml</code>, it seems that it is still not capable of preserving some formatting decisions from the original yaml document. For example, from what I understand, <code>ruamel.yaml</code> can't preserve whitespace or line breaking of strings.</p>
<p>So although <code>ruamel.yaml</code> is a somewhat viable option for my use case, I want to know if there is an even better solution out there that can give me what I really want, which is a minimally invasive edit, i.e. an edit operation that ONLY changes the lines directly affected by the data modification and leaves all other lines as they exist in the original version verbatim.</p>
|
<python><yaml><formatting><roundtrip>
|
2024-06-27 21:43:01
| 0
| 2,461
|
RBF06
|
78,679,818
| 5,049,813
|
Terminology for ordinal index versus dataframe index
|
<p>I've <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.index.html" rel="nofollow noreferrer">set up a dataframe</a>:</p>
<pre><code>df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'],
'Age': [25, 30, 35],
'Location': ['Seattle', 'New York', 'Kona']},
index=([10, 20, 30]))
</code></pre>
<p>I ask the user to put in which row of the data they want to see. They input a <code>0</code>, indicating the first row.</p>
<p>However, <code>df.loc[0]</code> does not refer to the first row. Instead, it doesn't exist, because the <code>index</code> only has the values 10, 20, and 30.</p>
<p>Is there terminology to distinguish these two types of indexes? The best I could come up with is <em>ordinal indexes</em> (for "What number row is this?") and <em>dataframe indexes</em> (for "What is the index of this row in the dataframe?").</p>
<p>To clarify, under my definitions, <code>df.index(ordinal_index) == df_index</code>.</p>
<p>Is there a standard terminology for this?</p>
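<p>For what it's worth, the pandas documentation itself describes <code>iloc</code> as (integer) position-based and <code>loc</code> as label-based indexing, which suggests "position" and "label" as candidate terms:</p>

```python
import pandas as pd

df = pd.DataFrame({'Name': ['Alice', 'Bob', 'Aritra'],
                   'Age': [25, 30, 35]},
                  index=[10, 20, 30])

by_position = df.iloc[0]['Name']   # integer position 0 -> first row
by_label = df.loc[10]['Name']      # index label 10 -> the same row
print(by_position, by_label)
```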
|
<python><pandas><dataframe><terminology>
|
2024-06-27 21:08:56
| 1
| 5,220
|
Pro Q
|
78,679,802
| 12,158,757
|
How to make a `functools.reduce` implementation that looks similarly as `Reduce` in R?
|
<p>Here is an R example of using <code>Reduce</code></p>
<pre><code>x <- c(1, 2, 2, 4, 10, 5, 5, 7)
Reduce(\(a, b) if (tail(a, 1) != b) c(a, b) else a, x) # equivalent to `rle(x)$values`
</code></pre>
<p>The code above is to sort out the extract unique values in terms of run length, which can be easily obtained by <code>rle(x)$values</code>.</p>
<hr />
<p>I know in Python there is <code>itertools.groupby</code> that performs the same thing as <code>rle</code> in R, BUT, <strong>what I am curious about is</strong>: Is it possible to have a highly similar translation by using <code>functools.reduce</code> in Python to achieve the same functionality, say, for example</p>
<pre><code>from functools import reduce
x = [1,2,2,4,10,5,5,7]
reduce(lambda a, b: a + [b] if a[-1]!= b else a, x)
</code></pre>
<p>but which unfortunately gives errors like</p>
<pre><code>{
"name": "TypeError",
"message": "'int' object is not subscriptable",
"stack": "---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[58], line 4
1 from functools import reduce
2 x = [1,2,2,4,10,5,5,7]
----> 4 reduce(lambda a, b: a + [b] if a[-1]!= b else a, x)
Cell In[58], line 4, in <lambda>(a, b)
1 from functools import reduce
2 x = [1,2,2,4,10,5,5,7]
----> 4 reduce(lambda a, b: a + [b] if a[-1]!= b else a, x)
TypeError: 'int' object is not subscriptable"
}
</code></pre>
<hr />
<p><strong>My question is</strong>: Is there any one-liner of <code>reduce</code> in Python that looks like R code?</p>
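<p>One possible fix (a sketch): the <code>TypeError</code> occurs because <code>reduce</code> uses the bare first element <code>x[0]</code> (an <code>int</code>) as the initial accumulator, so <code>a[-1]</code> fails on the first call. Supplying a one-element list as the initializer mirrors the R behavior:</p>

```python
from functools import reduce

x = [1, 2, 2, 4, 10, 5, 5, 7]

# Seed the accumulator with a one-element list so a[-1] is always valid;
# R's Reduce effectively starts from x[1] as well, but tail() works on
# scalars in R, whereas Python's int has no [-1].
result = reduce(lambda a, b: a + [b] if a[-1] != b else a, x[1:], [x[0]])
print(result)
```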
|
<python><r><arrays><reduce>
|
2024-06-27 21:04:06
| 2
| 105,741
|
ThomasIsCoding
|
78,679,769
| 2,661,491
|
Multi-key GroupBy with shared data on one key
|
<p>I am working with a large dataset that includes multiple unique groups of data identified by a date and a group ID. Each group contains multiple IDs, each with several attributes. Here’s a simplified structure of my data:</p>
<pre><code>| date | group_id | inner_id | attr_a | attr_b | attr_c |
|------------|----------|----------|--------|--------|--------|
| 2023-06-01 | A1 | 001 | val | val | val |
| 2023-06-01 | A1 | 002 | val | val | val |
...
</code></pre>
<p>Additionally, for each date, I have a large matrix associated with it:</p>
<pre><code>| date | matrix |
|------------|--------------|
| 2023-06-01 | [[...], ...] |
...
</code></pre>
<p>I need to apply a function for each date and group_id that processes data using both the group attributes and the matrix associated with that date. The function looks like this:</p>
<pre class="lang-py prettyprint-override"><code>def run(group_data: pd.DataFrame, matrix) -> pd.DataFrame:
# process data
return processed_data
</code></pre>
<p>Here, <code>group_data</code> contains the attributes for a specific group:</p>
<pre><code>| inner_id | attr_a | attr_b | attr_c |
|----------|--------|--------|--------|
| 001 | val | val | val |
...
</code></pre>
<p>Here is my current implementation, it works but I can only run ~200 dates at a time because I am broadcasting all data to all workers (I have ~2k dates, ~100 groups per date, ~150 inner elements per group)</p>
<pre class="lang-py prettyprint-override"><code>def calculate_metrics(data: DataFrame, matrices: DataFrame) -> DataFrame:
# Convert matrices to a dictionary mapping dates to matrix
date_matrices = matrices.rdd.collectAsMap()
# Broadcast the matrices
broadcasted_matrices = spark_context.broadcast(date_matrices)
# Function to apply calculations
def apply_calculation(group_key: Tuple[str, str], data_group: pd.DataFrame) -> pd.DataFrame:
date = group_key[1]
return custom_calculation_function(broadcasted_matrices.value[date], data_group)
# Apply the function to each group
return data.groupby('group_id', 'date').applyInPandas(apply_calculation, schema_of_result)
</code></pre>
<p>How can I optimize this computation to parallelize the processing effectively, ensuring that the matrices are not redundantly loaded into memory more than necessary?</p>
|
<python><pandas><pyspark>
|
2024-06-27 20:52:42
| 1
| 5,572
|
evan.oman
|
78,679,538
| 1,028,270
|
How is protobuf generating this method and is it impossible to get auto-complete for it?
|
<p>I was looking at a code base (GCP SDK's monitoring API) trying to drill down to get familiar with some methods, but I hit a wall here: <a href="https://cloud.google.com/monitoring/custom-metrics/creating-metrics#monitoring_create_metric-python" rel="nofollow noreferrer">https://cloud.google.com/monitoring/custom-metrics/creating-metrics#monitoring_create_metric-python</a></p>
<p>Specifically this line <code>descriptor = ga_metric.MetricDescriptor()</code>. How does <code>MetricDescriptor()</code> get generated?</p>
<p>According to comments in <code>metric_pb2</code> (ga_metric is an alias to it) that file was generated by protobuf. In that module file I see no definition for <code>MetricDescriptor()</code> though. How am I able to call <code>ga_metric.MetricDescriptor()</code>? What part of the code here is generating the <code>MetricDescriptor()</code> method that I'm able to call?</p>
<pre><code># metric_pb2.py
"""Generated protocol buffer code."""
from google.protobuf import descriptor as _descriptor
from google.protobuf import descriptor_pool as _descriptor_pool
from google.protobuf import symbol_database as _symbol_database
from google.protobuf.internal import builder as _builder
# @@protoc_insertion_point(imports)
_sym_db = _symbol_database.Default()
from google.api import label_pb2 as google_dot_api_dot_label__pb2
from google.api import launch_stage_pb2 as google_dot_api_dot_launch__stage__pb2
from google.protobuf import duration_pb2 as google_dot_protobuf_dot_duration__pb2
DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(
b'\n\x17google/api/metric.proto\x12\ngoogle.api\x1a\x16google/api/label.proto\x1a\x1dgoogle/api/launch_stage.proto\x1a\x1egoogle/protobuf/duration.proto"\x9f\x06\n\x10MetricDescriptor\x12\x0c\n\x04name\x18\x01 \x01(\t\x12\x0c\n\x04type\x18\x08 \x01(\t\x12+\n\x06labels\x18\x02 \x03(\x0b\x32\x1b.google.api.LabelDescriptor\x12<\n\x0bmetric_kind\x18\x03 \x01(\x0e\x32\'.google.api.MetricDescriptor.MetricKind\x12:\n\nvalue_type\x18\x04 \x01(\x0e\x32&.google.api.MetricDescriptor.ValueType\x12\x0c\n\x04unit\x18\x05 \x01(\t\x12\x13\n\x0b\x64\x65scription\x18\x06 \x01(\t\x12\x14\n\x0c\x64isplay_name\x18\x07 \x01(\t\x12G\n\x08metadata\x18\n \x01(\x0b\x32\x35.google.api.MetricDescriptor.MetricDescriptorMetadata\x12-\n\x0claunch_stage\x18\x0c \x01(\x0e\x32\x17.google.api.LaunchStage\x12 \n\x18monitored_resource_types\x18\r \x03(\t\x1a\xb0\x01\n\x18MetricDescriptorMetadata\x12\x31\n\x0claunch_stage\x18\x01 \x01(\x0e\x32\x17.google.api.LaunchStageB\x02\x18\x01\x12\x30\n\rsample_period\x18\x02 \x01(\x0b\x32\x19.google.protobuf.Duration\x12/\n\x0cingest_delay\x18\x03 \x01(\x0b\x32\x19.google.protobuf.Duration"O\n\nMetricKind\x12\x1b\n\x17METRIC_KIND_UNSPECIFIED\x10\x00\x12\t\n\x05GAUGE\x10\x01\x12\t\n\x05\x44\x45LTA\x10\x02\x12\x0e\n\nCUMULATIVE\x10\x03"q\n\tValueType\x12\x1a\n\x16VALUE_TYPE_UNSPECIFIED\x10\x00\x12\x08\n\x04\x42OOL\x10\x01\x12\t\n\x05INT64\x10\x02\x12\n\n\x06\x44OUBLE\x10\x03\x12\n\n\x06STRING\x10\x04\x12\x10\n\x0c\x44ISTRIBUTION\x10\x05\x12\t\n\x05MONEY\x10\x06"u\n\x06Metric\x12\x0c\n\x04type\x18\x03 \x01(\t\x12.\n\x06labels\x18\x02 \x03(\x0b\x32\x1e.google.api.Metric.LabelsEntry\x1a-\n\x0bLabelsEntry\x12\x0b\n\x03key\x18\x01 \x01(\t\x12\r\n\x05value\x18\x02 \x01(\t:\x02\x38\x01\x42_\n\x0e\x63om.google.apiB\x0bMetricProtoP\x01Z7google.golang.org/genproto/googleapis/api/metric;metric\xa2\x02\x04GAPIb\x06proto3'
)
_globals = globals()
_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, "google.api.metric_pb2", _globals)
if _descriptor._USE_C_DESCRIPTORS == False:
DESCRIPTOR._options = None
DESCRIPTOR._serialized_options = b"\n\016com.google.apiB\013MetricProtoP\001Z7google.golang.org/genproto/googleapis/api/metric;metric\242\002\004GAPI"
_METRICDESCRIPTOR_METRICDESCRIPTORMETADATA.fields_by_name[
"launch_stage"
]._options = None
_METRICDESCRIPTOR_METRICDESCRIPTORMETADATA.fields_by_name[
"launch_stage"
]._serialized_options = b"\030\001"
_METRIC_LABELSENTRY._options = None
_METRIC_LABELSENTRY._serialized_options = b"8\001"
_globals["_METRICDESCRIPTOR"]._serialized_start = 127
_globals["_METRICDESCRIPTOR"]._serialized_end = 926
_globals["_METRICDESCRIPTOR_METRICDESCRIPTORMETADATA"]._serialized_start = 554
_globals["_METRICDESCRIPTOR_METRICDESCRIPTORMETADATA"]._serialized_end = 730
_globals["_METRICDESCRIPTOR_METRICKIND"]._serialized_start = 732
_globals["_METRICDESCRIPTOR_METRICKIND"]._serialized_end = 811
_globals["_METRICDESCRIPTOR_VALUETYPE"]._serialized_start = 813
_globals["_METRICDESCRIPTOR_VALUETYPE"]._serialized_end = 926
_globals["_METRIC"]._serialized_start = 928
_globals["_METRIC"]._serialized_end = 1045
_globals["_METRIC_LABELSENTRY"]._serialized_start = 1000
_globals["_METRIC_LABELSENTRY"]._serialized_end = 1045
# @@protoc_insertion_point(module_scope)
</code></pre>
<p>Per DazWilkin, I was able to locate all the packages with proto files and generate pyi files for them. This works reasonably well: Pylance finds them (though pylint does not?). Also, there is some gnarly bug with protoc, and I had to run it via grpc_tools to get it to work.</p>
<pre><code>packages_paths = site.getsitepackages()[0]
proto_folders: list[str] = []
for name in glob.glob(f"{packages_paths}/**/*.proto", recursive=True):
proto_folder = os.path.dirname(name)
proto_folders.append(proto_folder)
proto_folders = list(set(proto_folders))
for proto_folder in proto_folders:
os.chdir(proto_folder)
# If we wildcard and there is a single failure the rest are skipped
# So just loop over each file and run protoc for each one
for proto_file in glob.glob(f"{proto_folder}/*.proto", recursive=True):
file_name = os.path.basename(proto_file)
cmd = f"python -m grpc_tools.protoc --proto_path=. --pyi_out=. {file_name}"
< RUN CMD >
</code></pre>
|
<python><protocol-buffers><protobuf-python>
|
2024-06-27 19:34:33
| 1
| 32,280
|
red888
|
78,679,512
| 5,009,293
|
Dtypes Data Frame Assigns nvarchar(MAX) by default
|
<p>So I have this snippet. When the code runs to insert into a new table, it declares all columns as nvarchar(max). Clearly, this is undesirable behavior. My question is, is there a way to define a length here? So that it isn't MAX?</p>
<p>I know I have two options from my research, which are:</p>
<ol>
<li>Use a dict to pre-define all columns with appropriate data types.</li>
<li>Maintain the staging table and Append as opposed to replace. This of course requires a truncate first.</li>
</ol>
<p>Is there a way to do something like this <code>dtype=NVARCHAR(100)</code>? Or is there some other option I haven't thought of yet?</p>
<pre><code>data.to_sql
(
name=f'{table_name}'
, schema='stage'
, con=con
, if_exists='replace'
, index=False
, dtype=NVARCHAR
)
</code></pre>
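<p>A sketch of option (1) that avoids spelling out every column by hand (assuming SQLAlchemy's <code>NVARCHAR</code> type; the frame below is a stand-in for <code>data</code>): <code>to_sql</code> accepts a per-column dict for <code>dtype</code>, so only the text columns need a bounded <code>NVARCHAR</code>:</p>

```python
import pandas as pd
from sqlalchemy.types import NVARCHAR

# Hypothetical frame standing in for `data`.
data = pd.DataFrame({"name": ["a", "b"], "qty": [1, 2]})

# Map only the text (object-dtype) columns to a bounded NVARCHAR;
# numeric columns keep their inferred SQL types.
dtype_map = {col: NVARCHAR(100)
             for col in data.select_dtypes(include="object").columns}
print(dtype_map)
```

<p>The resulting dict could then be passed as <code>dtype=dtype_map</code> in the <code>to_sql</code> call above.</p>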
|
<python><pandas><dataframe>
|
2024-06-27 19:27:19
| 1
| 7,207
|
Doug Coats
|
78,679,475
| 9,363,441
|
Python serial send byte array with byte values
|
<p>I would like to send an array like this via pySerial, for example:</p>
<pre><code>[49, 56, 48] or [82]
</code></pre>
<p>I tried such solution before sending:</p>
<pre><code>print(list(str(180).encode()))
</code></pre>
<p>And it gives me the array I want. But when I tried to send it via pySerial, it said that I need a buffer instead of a list. In general, I'm trying to get an analogue of this Kotlin code:</p>
<pre><code>byteArrayOf('O'.code.toByte())
"70".toByteArray()
</code></pre>
<p>Which gives me such arrays for sending them to Arduino. I tried sending maps, dicts, and so on, but did not succeed. Whether it's possible to have something similar in Python?
python sending code:</p>
<pre><code>import json
import struct
import serial
ser = serial.Serial('/dev/tty.usbmodem14401', 9600)
data = [49, 56, 48]
ser.write(json.dumps(data) + '\n')
ser.write(str('R').encode())
ser.write(str(80).encode())
</code></pre>
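<p>A sketch of the conversions that appear to be needed (verifiable without a serial port): <code>write()</code> wants a bytes-like object, and <code>bytes()</code> accepts a list of integer byte values directly, so the <code>json.dumps</code> call could presumably be replaced by <code>ser.write(bytes(data))</code>:</p>

```python
# pySerial's write() accepts bytes-like data; a list of ints converts
# directly, and single characters / numbers-as-text encode to bytes too.
data = [49, 56, 48]
payload = bytes(data)          # the ASCII codes for '1', '8', '0'
single = b'R'                  # like Kotlin's 'R'.code.toByte()
as_text = str(80).encode()     # like "80".toByteArray()
print(payload, single, as_text)
```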
|
<python><arduino><pyserial><python-bytearray>
|
2024-06-27 19:17:27
| 1
| 2,187
|
Andrew
|
78,679,465
| 6,622,697
|
Session management in Flask-SQLAlchemy-Lite
|
<p>I'm moving from vanilla SQLAlchemy to Flask-SQLAlchemy-Lite.</p>
<p>What is the best practice for session management? Can I still do something like this:</p>
<pre><code> with db.session as session, session.begin():
</code></pre>
<p>if I want the block to automatically commit?</p>
|
<python><sqlalchemy><flask-sqlalchemy-lite>
|
2024-06-27 19:15:51
| 2
| 1,348
|
Peter Kronenberg
|
78,679,339
| 1,867,328
|
Calculate 6 months forward date from a dataframe of dates
|
<p>I have below code</p>
<pre><code>import pandas as pd
from dateutil.relativedelta import relativedelta
date = pd.to_datetime(['2001-01-01', '2003-01-01'])
date + relativedelta(months = +6)
</code></pre>
<p>Basically, I am trying to calculate the date 6 months ahead for each entry in a collection of dates.</p>
<p>Above code is failing.</p>
<pre><code>TypeError: unsupported operand type(s) for +: 'DatetimeArray' and 'relativedelta'
</code></pre>
<p>Could you please help to correct this code?</p>
|
<python><pandas>
|
2024-06-27 18:39:53
| 1
| 3,832
|
Bogaso
|
78,679,269
| 19,155,645
|
Milvus ConnectionRefusedError: how to connect locally
|
<p>I am trying to run a RAG pipeline using haystack & Milvus.</p>
<p>It's my first time using Milvus, and I seem to have an issue with it.</p>
<p>I'm following this tutorial, with some basic changes: <a href="https://milvus.io/docs/integrate_with_haystack.md" rel="nofollow noreferrer">https://milvus.io/docs/integrate_with_haystack.md</a></p>
<p>Here is my code:</p>
<pre><code>import os
import urllib.request
from haystack import Pipeline
from haystack.components.converters import MarkdownToDocument
from haystack_integrations.components.embedders.ollama import OllamaDocumentEmbedder, OllamaTextEmbedder
from haystack.components.preprocessors import DocumentSplitter
from haystack.components.writers import DocumentWriter
from milvus_haystack import MilvusDocumentStore
from milvus_haystack.milvus_embedding_retriever import MilvusEmbeddingRetriever
url = "https://www.gutenberg.org/cache/epub/7785/pg7785.txt"
file_path = "./davinci.txt"
if not os.path.exists(file_path):
urllib.request.urlretrieve(url, file_path)
document_store = MilvusDocumentStore(
connection_args={"uri": "./milvus.db"},
drop_old=True,
)
indexing_pipeline = Pipeline()
indexing_pipeline.add_component("converter", MarkdownToDocument())
indexing_pipeline.add_component(
"splitter", DocumentSplitter(split_by="sentence", split_length=2)
)
indexing_pipeline.add_component("embedder", OllamaDocumentEmbedder())
indexing_pipeline.add_component("writer", DocumentWriter(document_store))
indexing_pipeline.connect("converter", "splitter")
indexing_pipeline.connect("splitter", "embedder")
indexing_pipeline.connect("embedder", "writer")
indexing_pipeline.draw('./pipeline_diagram.png')
indexing_pipeline.run({"converter": {"sources": [file_path]}})
</code></pre>
<p>It all works well until the last line, where I get a ConnectionRefusedError.
First the conversion (from markdown to document) runs well, but then the code fails.</p>
<p>I am not sure why it happens, as I see the <code>milvus.db</code> and <code>milvus.db.lock</code> files created as expected.</p>
<p>The full error is:</p>
<pre><code>---------------------------------------------------------------------------
ConnectionRefusedError Traceback (most recent call last)
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connection.py:203, in HTTPConnection._new_conn(self)
202 try:
--> 203 sock = connection.create_connection(
204 (self._dns_host, self.port),
205 self.timeout,
206 source_address=self.source_address,
207 socket_options=self.socket_options,
208 )
209 except socket.gaierror as e:
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/util/connection.py:85, in create_connection(address, timeout, source_address, socket_options)
84 try:
---> 85 raise err
86 finally:
87 # Break explicitly a reference cycle
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/util/connection.py:73, in create_connection(address, timeout, source_address, socket_options)
72 sock.bind(source_address)
---> 73 sock.connect(sa)
74 # Break explicitly a reference cycle
ConnectionRefusedError: [Errno 61] Connection refused
The above exception was the direct cause of the following exception:
NewConnectionError Traceback (most recent call last)
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connectionpool.py:791, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
790 # Make the request on the HTTPConnection object
--> 791 response = self._make_request(
792 conn,
793 method,
794 url,
795 timeout=timeout_obj,
796 body=body,
797 headers=headers,
798 chunked=chunked,
799 retries=retries,
800 response_conn=response_conn,
801 preload_content=preload_content,
802 decode_content=decode_content,
803 **response_kw,
804 )
806 # Everything went great!
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connectionpool.py:497, in HTTPConnectionPool._make_request(self, conn, method, url, body, headers, retries, timeout, chunked, response_conn, preload_content, decode_content, enforce_content_length)
496 try:
--> 497 conn.request(
498 method,
499 url,
500 body=body,
501 headers=headers,
502 chunked=chunked,
503 preload_content=preload_content,
504 decode_content=decode_content,
505 enforce_content_length=enforce_content_length,
506 )
508 # We are swallowing BrokenPipeError (errno.EPIPE) since the server is
509 # legitimately able to close the connection after sending a valid response.
510 # With this behaviour, the received response is still readable.
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connection.py:395, in HTTPConnection.request(self, method, url, body, headers, chunked, preload_content, decode_content, enforce_content_length)
394 self.putheader(header, value)
--> 395 self.endheaders()
397 # If we're given a body we start sending that in chunks.
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/http/client.py:1289, in HTTPConnection.endheaders(self, message_body, encode_chunked)
1288 raise CannotSendHeader()
-> 1289 self._send_output(message_body, encode_chunked=encode_chunked)
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/http/client.py:1048, in HTTPConnection._send_output(self, message_body, encode_chunked)
1047 del self._buffer[:]
-> 1048 self.send(msg)
1050 if message_body is not None:
1051
1052 # create a consistent interface to message_body
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/http/client.py:986, in HTTPConnection.send(self, data)
985 if self.auto_open:
--> 986 self.connect()
987 else:
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connection.py:243, in HTTPConnection.connect(self)
242 def connect(self) -> None:
--> 243 self.sock = self._new_conn()
244 if self._tunnel_host:
245 # If we're tunneling it means we're connected to our proxy.
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connection.py:218, in HTTPConnection._new_conn(self)
217 except OSError as e:
--> 218 raise NewConnectionError(
219 self, f"Failed to establish a new connection: {e}"
220 ) from e
222 # Audit hooks are only available in Python 3.8+
NewConnectionError: <urllib3.connection.HTTPConnection object at 0x30ca49690>: Failed to establish a new connection: [Errno 61] Connection refused
The above exception was the direct cause of the following exception:
MaxRetryError Traceback (most recent call last)
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/requests/adapters.py:486, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
485 try:
--> 486 resp = conn.urlopen(
487 method=request.method,
488 url=url,
489 body=request.body,
490 headers=request.headers,
491 redirect=False,
492 assert_same_host=False,
493 preload_content=False,
494 decode_content=False,
495 retries=self.max_retries,
496 timeout=timeout,
497 chunked=chunked,
498 )
500 except (ProtocolError, OSError) as err:
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/connectionpool.py:845, in HTTPConnectionPool.urlopen(self, method, url, body, headers, retries, redirect, assert_same_host, timeout, pool_timeout, release_conn, chunked, body_pos, preload_content, decode_content, **response_kw)
843 new_e = ProtocolError("Connection aborted.", new_e)
--> 845 retries = retries.increment(
846 method, url, error=new_e, _pool=self, _stacktrace=sys.exc_info()[2]
847 )
848 retries.sleep()
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/urllib3/util/retry.py:515, in Retry.increment(self, method, url, response, error, _pool, _stacktrace)
514 reason = error or ResponseError(cause)
--> 515 raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type]
517 log.debug("Incremented Retry for (url='%s'): %r", url, new_retry)
MaxRetryError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x30ca49690>: Failed to establish a new connection: [Errno 61] Connection refused'))
During handling of the above exception, another exception occurred:
ConnectionError Traceback (most recent call last)
Cell In[15], line 1
----> 1 indexing_pipeline.run({"converter": {"sources": [file_path]}})
3 print("Number of documents:", document_store.count_documents())
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/haystack/core/pipeline/pipeline.py:197, in Pipeline.run(self, data, debug, include_outputs_from)
195 span.set_content_tag("haystack.component.input", last_inputs[name])
196 logger.info("Running component {component_name}", component_name=name)
--> 197 res = comp.run(**last_inputs[name])
198 self.graph.nodes[name]["visits"] += 1
200 if not isinstance(res, Mapping):
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/haystack_integrations/components/embedders/ollama/document_embedder.py:139, in OllamaDocumentEmbedder.run(self, documents, generation_kwargs)
136 raise TypeError(msg)
138 texts_to_embed = self._prepare_texts_to_embed(documents=documents)
--> 139 embeddings, meta = self._embed_batch(
140 texts_to_embed=texts_to_embed, batch_size=self.batch_size, generation_kwargs=generation_kwargs
141 )
143 for doc, emb in zip(documents, embeddings):
144 doc.embedding = emb
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/haystack_integrations/components/embedders/ollama/document_embedder.py:107, in OllamaDocumentEmbedder._embed_batch(self, texts_to_embed, batch_size, generation_kwargs)
105 batch = texts_to_embed[i] # Single batch only
106 payload = self._create_json_payload(batch, generation_kwargs)
--> 107 response = requests.post(url=self.url, json=payload, timeout=self.timeout)
108 response.raise_for_status()
109 result = response.json()
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/requests/api.py:115, in post(url, data, json, **kwargs)
103 def post(url, data=None, json=None, **kwargs):
104 r"""Sends a POST request.
105
106 :param url: URL for the new :class:`Request` object.
(...)
112 :rtype: requests.Response
113 """
--> 115 return request("post", url, data=data, json=json, **kwargs)
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/requests/api.py:59, in request(method, url, **kwargs)
55 # By using the 'with' statement we are sure the session is closed, thus we
56 # avoid leaving sockets open which can trigger a ResourceWarning in some
57 # cases, and look like a memory leak in others.
58 with sessions.Session() as session:
---> 59 return session.request(method=method, url=url, **kwargs)
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/requests/sessions.py:589, in Session.request(self, method, url, params, data, headers, cookies, files, auth, timeout, allow_redirects, proxies, hooks, stream, verify, cert, json)
584 send_kwargs = {
585 "timeout": timeout,
586 "allow_redirects": allow_redirects,
587 }
588 send_kwargs.update(settings)
--> 589 resp = self.send(prep, **send_kwargs)
591 return resp
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/requests/sessions.py:703, in Session.send(self, request, **kwargs)
700 start = preferred_clock()
702 # Send the request
--> 703 r = adapter.send(request, **kwargs)
705 # Total elapsed time of the request (approximately)
706 elapsed = preferred_clock() - start
File /opt/anaconda3/envs/haystack_milvus_playground/lib/python3.11/site-packages/requests/adapters.py:519, in HTTPAdapter.send(self, request, stream, timeout, verify, cert, proxies)
515 if isinstance(e.reason, _SSLError):
516 # This branch is for urllib3 v1.22 and later.
517 raise SSLError(e, request=request)
--> 519 raise ConnectionError(e, request=request)
521 except ClosedPoolError as e:
522 raise ConnectionError(e, request=request)
ConnectionError: HTTPConnectionPool(host='localhost', port=11434): Max retries exceeded with url: /api/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x30ca49690>: Failed to establish a new connection: [Errno 61] Connection refused'))
</code></pre>
<p>Any help resolving this would be appreciated. My assumption is that it is something very simple in creating the local Milvus database connection, but I don't know what it is.</p>
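<p>For what it's worth, the traceback shows the refused connection is to <code>localhost:11434</code> with path <code>/api/embeddings</code>, which is the endpoint <code>OllamaDocumentEmbedder</code> posts to by default, not Milvus itself. A minimal sanity check that can be run before the pipeline (the URL is the default Ollama address, an assumption; adjust if yours differs):</p>

```python
import urllib.error
import urllib.request

# Default Ollama address (an assumption -- adjust if yours differs).
OLLAMA_URL = "http://localhost:11434"

def ollama_is_up(url: str = OLLAMA_URL, timeout: float = 5.0) -> bool:
    """Return True if anything answers HTTP at `url` (even with an error status)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout):
            return True
    except urllib.error.HTTPError:
        return True   # server answered, just with a non-2xx status
    except OSError:
        return False  # connection refused, timeout, DNS failure, ...

if __name__ == "__main__":
    if not ollama_is_up():
        print("Ollama is not reachable -- start it with `ollama serve` "
              "before running the indexing pipeline")
```

<p>If this reports the server as unreachable, the pipeline will raise the same ConnectionRefusedError regardless of how Milvus is configured.</p>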
|
<python><database-connection><large-language-model><milvus><haystack>
|
2024-06-27 18:09:32
| 3
| 512
|
ArieAI
|
78,679,112
| 5,405,684
|
Issue with ANSI Escape Codes in Output When Running Shell Script via Python Subprocess
|
<p>I have a bioinformatics pipeline that builds and runs <code>.sh</code> files in Python. I run the <code>.sh</code> through a subprocess module so I can use parallel computing.</p>
<p>This worked about 6 months ago, but has now stopped. It seems like for some reason ANSI escape codes are being inserted into the command-line arguments in some of the Perl scripts used in the pipeline. I'll paste the line in the error logs where the code fails from a script run through Python's subprocess:</p>
<pre class="lang-none prettyprint-override"><code>2024-06-27 13:16:29 /Users/kevin/VirVarSeq/map_vs_consensus.pl !---- Paired end mapping using [35m./Sample_Q23.17-NHG_3BNC-10_E8-240314/Q23.17-NHG_3BNC-10_E8-240314_R1.fastq.gz[m[m [35m./Sample_Q23.17-NHG_3BNC-10_E8-240314/Q23.17-NHG_3BNC-10_E8-240314_R2.fastq.gz[m[m ...
[M::bwa_idx_load_from_disk] read 0 ALT contigs
[E::main_mem] fail to open file `[35m./Sample_Q23.17-NHG_3BNC-10_E8-240314/Q23.17-NHG_3BNC-10_E8-240314_R1.fastq.gz[m[m'.
</code></pre>
<p>However if I just run the EXACT same script from the shell, it runs fine! I don't know why. Here is the same line from the log when I run the <code>.sh</code> file from the terminal:</p>
<pre class="lang-none prettyprint-override"><code>2024-06-27 13:08:04 /Users/kevin/VirVarSeq/map_vs_consensus.pl !---- Paired end mapping using ./Sample_Q23.17-NHG_3BNC-10_E8-240314/Q23.17-NHG_3BNC-10_E8-240314_R1.fastq.gz ./Sample_Q23.17-NHG_3BNC-10_E8-240314/Q23.17-NHG_3BNC-10_E8-240314_R2.fastq.gz ...
[M::bwa_idx_load_from_disk] read 0 ALT contigs
[M::process] read 41262 sequences (6098025 bp)...
</code></pre>
<p>I've tried starting the subprocess a few ways and still get this same error. As I said, if I just run the script in the zsh shell it runs without issue.</p>
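<p>Not the original launcher code (which isn't shown), just a sketch of one way to run the script through <code>subprocess</code> with colour-related environment variables neutralised, a common way to rule out environment-driven ANSI colouring. The shell path and variable names are assumptions:</p>

```python
import os
import subprocess

def run_clean(script_path: str, shell: str = "/bin/zsh") -> subprocess.CompletedProcess:
    """Run a shell script with colour-related env vars neutralised."""
    env = os.environ.copy()
    env["TERM"] = "dumb"            # advertise a terminal with no colour support
    for var in ("CLICOLOR", "CLICOLOR_FORCE", "FORCE_COLOR"):
        env.pop(var, None)          # drop common colour-forcing switches, if set
    return subprocess.run(
        [shell, script_path],
        env=env,
        capture_output=True,
        text=True,
    )
```

<p>If the Perl script's arguments come out clean with this environment, the colouring is being triggered by something in the environment the subprocess inherits.</p>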
<p>This is the <code>.sh</code> script up to the point where it errors:</p>
<pre><code>#!/bin/zsh
#VirVarSeq Pipeline
seq_header=Q23.17-NHG_3BNC-10_E8-240314
ln -s /Users/kevin/VirVarSeq/R ./
mkdir Sample_$seq_header
ln -s $PWD/*fastq.gz ./Sample_$seq_header
echo $seq_header > samples.txt
#make directories as VirVar would have:
mkdir VirVarResults
mkdir VirVarResults/codon_table
mkdir VirVarResults/consensus
mkdir VirVarResults/map_vs_consensus
mkdir VirVarResults/map_vs_ref
mkdir VirVarResults/mixture_model
#use bam file from lofreq mapping (unfortunately need it in SAM format):
samtools view -h -o ./VirVarResults/map_vs_ref/$seq_header.sam $seq_header.bam "Q23.17-NHG"
indir=.
outdir=./VirVarResults
samples=./samples.txt
ref=/Users/kevin/Documents/Bienasz_Lab/HIV1_bNab_Env_Escape/Sequencing_analysis/Variant_calling/3BNC117/240621_MiSeqRun/Reference_Genomes/Q23.17-NHG.fasta
startpos=6170
endpos=8829
region_start=6221
region_len=853
qv=0
consensus.pl --samplelist $samples --ref $ref --indir $indir --outdir $outdir --start $startpos --end $endpos >> VirVarSeq.log 2>&1
</code></pre>
<p><a href="https://i.sstatic.net/AJKJVYV8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJKJVYV8.png" alt="picture of error message as shown in TextMate (plain text editor)" /></a></p>
|
<python><shell><subprocess><ansi-escape>
|
2024-06-27 17:31:52
| 1
| 537
|
lstbl
|
78,679,016
| 13,187,876
|
Load Registered Component in Azure ML for Pipeline using Python sdk v2
|
<p>I'm working in Azure Machine Learning Studio to create components that I will run together in a pipeline. In this basic example, I have a single <code>python</code> script and a single <code>yml</code> file that make up my component, along with a notebook I am using to define, instantiate and run a pipeline. See an overview of the folder structure I have below for this component.</p>
<pre><code>📦component
┣ 📜notebook.ipynb
┣ 📜component_script.py
┗ 📜component_def.yml
</code></pre>
<p>Inside my notebook I can then load the component and register it to the workspace using the code below (note that here I have already instantiated my <code>ml_client</code> object).</p>
<pre><code># importing the Component Package
from azure.ai.ml import load_component
# Loading the component from the yml file
component = load_component("component_def.yml")
# Now we register the component to the workspace
component = ml_client.create_or_update(component)
</code></pre>
<p>I can then pass this component into a pipeline successfully. My question is: now that I have registered my component, I should no longer need to instantiate it with <code>component = load_component("component_def.yml")</code>, which requires access to the <code>yml</code> file. Instead, I should be able to instantiate the component object from the registered component. How can I do this?</p>
|
<python><azure><machine-learning><components><azureml-python-sdk>
|
2024-06-27 17:10:34
| 1
| 773
|
Matt_Haythornthwaite
|
78,678,993
| 20,122,390
|
Is redis in Python asynchronous?
|
<p>I have the following Python code:</p>
<pre><code>import redis
from app.infra.services.notifications import INotifier
from app.schemas.notification import NotificationMessage
from app.config import settings
REDIS_CLIENT_PUBSUB = redis.StrictRedis(
host=settings.REDIS_HOST,
port=settings.REDIS_PORT,
password=settings.REDIS_PASSWORD,
)
class RedisNotifier(INotifier):
def __init__(self):
self.redis_client = REDIS_CLIENT_PUBSUB
async def send_notification(self, room: str, notification: NotificationMessage) -> None:
await self.redis_client.publish(room, notification.json())
redis_notifier = RedisNotifier()
</code></pre>
<p>I have a web application written in FastAPI that will use redis_notifier to post messages to channels. My question is whether
<code>self.redis_client.publish(room, notification.json())</code>
is really asynchronous, that is, will my web application be able to suspend this coroutine and do other things while the publish completes?
I'm a little confused by the redis and aioredis libraries, and I don't know whether my code makes sense or I'm doing something wrong.</p>
|
<python><async-await><redis>
|
2024-06-27 17:03:48
| 2
| 988
|
Diego L
|
78,678,954
| 830,429
|
Visual Studio Code Terminal is Not Using Correct Python Interpreter
|
<p>I have Visual Studio Code (v 1.9) running on a Windows 10 machine. I want to use ESRI's Python interpreter and I selected that per this image:</p>
<p><a href="https://i.sstatic.net/f5iDdWA6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5iDdWA6.png" alt="Compiler Selection" /></a></p>
<p>After that, if I use the Run (right arrow) toward the top right of a python file then the correct interpreter is selected and there are no issues. However, if I run the python file using commands like in this screen:</p>
<p><a href="https://i.sstatic.net/gwebYyhI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwebYyhI.png" alt="Terminal Run of the Script" /></a></p>
<p>then some Python interpreter from Windows is selected. Please note: the "file not found" error in this image is expected, because the file really doesn't exist, but it confirms that the Terminal is using the wrong interpreter, unlike the Run button.</p>
<p>I have tried creating a new environment and selecting the correct interpreter, but the Terminal still uses the wrong one. The Terminal does show the correct interpreter, as in this image:</p>
<p><a href="https://i.sstatic.net/jtT57nZF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jtT57nZF.png" alt="Terminal View of the Compiler" /></a></p>
<p>I prefer to run the Python script from the Terminal using keyboard shortcuts instead of the Run button in the IDE. What can I do to fix this problem?</p>
|
<python><visual-studio-code><pythoninterpreter>
|
2024-06-27 16:54:02
| 2
| 1,779
|
IrfanClemson
|
78,678,899
| 2,153,235
|
Python code elements to allow indention without flow control structures
|
<p>When using Vim, I prefer to have one scheme for demarcating folds across all programming languages and even text files for meeting notes. It saves me from having to finagle different schemes. It has worked for decades, but now that I'm trying to develop Python experience, it is running into a kink. I can't arbitrarily indent blocks of code. This is what I might try in another language:</p>
<pre><code># Calculate flux capacitances
This is code
It calculates flux capacitance
# Generate summary statistics for each group
This is more code
It groups the flux capacitances by spaceship size
For each group, it generates min, max, mean
</code></pre>
<p>The indented code becomes <em>automatically</em> hidden away by folds. However, Python requires flow control structures for indentation. To fulfill this, I prepend <code>if True:</code>:</p>
<pre><code>if True: # Calculate flux capacitances
This is code
It calculates flux capacitance
if True: # Generate summary statistics for each group
This is more code
It groups the flux capacitances by spaceship size
For each group, it generates min, max, mean
</code></pre>
<p>It adds some cognitive noise. In lengthy, complex code, anything that cuts down on cognitive noise helps. Is there a less noisy code element/scheme that I can use? I would prefer not to depart from my simple fold scheme based on indentation.</p>
<p>Another reason for seeking an alternative to <code>if True : # Section heading comment</code> is that some section headings require a multi-line comment, in which case <code>if True:</code> needs to occupy its own line <em>after</em> the heading, i.e., yet more cognitive noise.</p>
<pre><code># This is a multi-line
# section heading
if True:
More code
It does stuff
</code></pre>
<p>One workaround might be the following, though it is rather unconventional and therefore could interfere with trying to make sense of lots of code at once:</p>
<pre><code>if True: # This is a multi-line
# section heading
More code
It does stuff
</code></pre>
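<p>One further possibility (a suggestion, not established practice): <code>contextlib.nullcontext()</code> creates an indentable block via <code>with</code>, which reads slightly less like flow control than <code>if True:</code> and still accepts a heading comment on the same line:</p>

```python
from contextlib import nullcontext

with nullcontext():  # Calculate flux capacitances
    flux = 1.21          # placeholder code

with nullcontext():  # Generate summary statistics for each group
    stats = {"min": flux, "max": flux, "mean": flux}

print(stats["mean"])  # names defined inside the blocks stay in scope
```

<p>Like <code>if True:</code>, it introduces no new scope, so variables defined inside remain visible afterwards.</p>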
|
<python><indentation>
|
2024-06-27 16:38:31
| 0
| 1,265
|
user2153235
|
78,678,587
| 14,301,545
|
Numpy for loop vectorization - points, triangles, area, volume calculation
|
<p>I have two numpy arrays - one containing coordinates of 3D points and the second with triangles composed from those points. I need to calculate the top-surface 2D area and the volume of these triangles (tetrahedrons?) with the base at the height of the lowest point.</p>
<p>Here is a minimal working example of what I have. It works OK, but it is slow.</p>
<pre><code>import numpy as np
pts = np.array([[744, 547, 695], [784, 511, 653], [779, 546, 746], [784, 489, 645], [834, 423, 614], [619, 541, 598]])
trs = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 0], [5, 0, 1]])
h_min = np.min(pts[:, 2])
print(f"H min : {h_min} m", )
a = 0
v = 0
for tr in trs:
p1, p2, p3 = tr
x1, y1, z1 = pts[p1]
x2, y2, z2 = pts[p2]
x3, y3, z3 = pts[p3]
area2d = abs(0.5 * (((x2 - x1) * (y3 - y1)) - ((x3 - x1) * (y2 - y1))))
h_mean = (z1 + z2 + z3) / 3 - h_min
v_tr = area2d * h_mean
a += area2d
v += v_tr
print(f"AREA : {a} m2")
print(f"VOLUME: {v} m3")
</code></pre>
<p>The problem is that those arrays really contain millions of points and triangles, and this calculation takes far too long. I have found a method called NumPy vectorization, but have no idea how to make it work in my case. Can anyone explain to me whether it is even possible? Fast volume calculation is a must; area would be great. Thank you!</p>
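<p>For illustration, a vectorised sketch of the same computation on the sample data above: the per-triangle loop is replaced with fancy indexing (<code>pts[trs]</code>) and the shoelace formula applied to whole arrays at once, which should give identical results:</p>

```python
import numpy as np

pts = np.array([[744, 547, 695], [784, 511, 653], [779, 546, 746],
                [784, 489, 645], [834, 423, 614], [619, 541, 598]], dtype=float)
trs = np.array([[1, 2, 3], [2, 3, 4], [3, 4, 5], [4, 5, 0], [5, 0, 1]])

h_min = pts[:, 2].min()

# Gather all three corners of every triangle at once: shape (n_triangles, 3, 3)
tri = pts[trs]
x, y, z = tri[:, :, 0], tri[:, :, 1], tri[:, :, 2]

# Projected (2D) triangle areas via the shoelace formula, computed in one shot
area2d = 0.5 * np.abs((x[:, 1] - x[:, 0]) * (y[:, 2] - y[:, 0])
                      - (x[:, 2] - x[:, 0]) * (y[:, 1] - y[:, 0]))

# Mean height of each triangle above the lowest point, then prism volumes
h_mean = z.mean(axis=1) - h_min
a = area2d.sum()
v = (area2d * h_mean).sum()

print(f"AREA  : {a} m2")
print(f"VOLUME: {v} m3")
```

<p>The intermediate arrays are all of shape <code>(n_triangles,)</code> or <code>(n_triangles, 3)</code>, so memory grows linearly with the number of triangles.</p>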
|
<python><numpy><vectorization>
|
2024-06-27 15:28:22
| 1
| 369
|
dany
|
78,678,566
| 20,122,390
|
Which is more efficient between Socket.io or WebSockets raw with Redis Pub/Sub?
|
<p>I currently have an application that uses Socket.IO to send notifications in real time (using server-side Python). We have about 30 instances of the service that handles these websockets. However, we have had problems with this functionality, so we have decided to migrate the websockets part directly to Go. In Go there is no good support for socket.io so we decided to use Gorilla Websockets. For the rooms concept, we decided to use Redis pub/sub. The idea then is that a websocket is established between the Frontend and Backend for each user, the backend subscribes to the channels necessary for each user and each message received from the channels is delivered to the frontend through the websocket connection. Likewise, the different microservices of the system that perform actions are connected to Redis and publish messages in the channel that is necessary. So, the architecture now would be something like this:</p>
<p><a href="https://i.sstatic.net/fzW0aNY6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fzW0aNY6.png" alt="enter image description here" /></a></p>
<p>(Previously, socket.io rooms were simply used, but also with AsyncRedisManager.)
Some additional considerations:</p>
<ul>
<li>It is expected to have about 1000 active users at a time</li>
<li>Each user needs to listen to approximately 50 Redis channels.</li>
<li>In most channels messages are not published constantly, but in around 25% of them they are published every second.</li>
</ul>
<p>The problem is that I previously, mistakenly, thought that socket.io used a Redis channel for each room, but I'm not sure about that now; as far as I can tell from the configuration, only one channel is used. I'm afraid that my new implementation, which creates many more channels, will end up being less efficient. Is the new design a commonly used model? Does it make sense? With the context provided, does this seem like a good solution?</p>
|
<python><go><websocket><redis><socket.io>
|
2024-06-27 15:24:16
| 0
| 988
|
Diego L
|
78,678,322
| 10,795,473
|
StatsForecast AutoARIMA taking too long to fit
|
<p>I'm experiencing a rather “strange” issue with Nixtla's StatsForecast that's severely blocking my progress. I'm explaining it here to see if anyone has any ideas to try, or if those of you who have it installed can test if it works for you.</p>
<p>The case is that until yesterday it was running reasonably fast, and today suddenly it's about 100 times slower, no exaggeration.</p>
<p>The documentation mentions fitting 10 models on 1 million series in under 5 minutes... Well, I run this code and my computer can't finish (over 30 minutes and still running):</p>
<pre class="lang-py prettyprint-override"><code>import random
import pandas as pd
from statsforecast import StatsForecast
from statsforecast.models import AutoARIMA
import time
random.seed(1)
points = 500
test_data = pd.DataFrame({
"ds": pd.date_range(start="2024-06-01", periods=points, freq="W"),
"unique_id": "test",
"y": random.choices(range(0, 100), k=points)
})
model = StatsForecast(
models=[AutoARIMA(season_length=52)],
freq="W",
n_jobs=-1,
)
start = time.time()
results = model.forecast(df=test_data, h=6)
print(time.time() - start)
results= model.forecast(df=test_data, h=6)
results
</code></pre>
<p>Tests I've done (none worked):</p>
<ul>
<li>Restart the computer.</li>
<li>Create a new Python 3.10 environment from scratch and run it via terminal without PyCharm or anything.</li>
<li>Play with the number of cores, it doesn't seem to affect and it never uses more than 100% CPU.</li>
<li>Test on other machines: It seems to run much faster and with the same version of statsforecast 1.7.5.</li>
</ul>
<p>Could it be something about my PC configuration? I'm running Ubuntu 20.04 with 16 GiB RAM and an 8-core Intel Core i7 at 1.8 GHz.</p>
<p>Any ideas?</p>
<p>EDIT: Fixed the seed and added time counting.</p>
|
<python><time-series><statsforecast>
|
2024-06-27 14:38:47
| 0
| 309
|
aarcas
|
78,678,288
| 2,397,318
|
Plotting rolling average on top of a stacked bar chart in pandas
|
<p>I am trying to plot how many daily events of each category happened, together with a 7-day rolling average of the total. Concretely, my data has the shape:</p>
<pre><code>date_range = pd.date_range(start='2023-01-01', periods=10)
data = {
'Category A': np.random.randint(1, 10, 10),
'Category B': np.random.randint(1, 10, 10),
}
df = pd.DataFrame(data, index=date_range)
</code></pre>
<p>adding the rolling average:</p>
<pre><code>df['total'] = df.sum(axis=1)
df['rolling'] = df['total'].rolling(window=7).mean()
</code></pre>
<p>Then I thought I could simply do</p>
<pre><code>ax = df[['Category A', 'Category B']].plot(kind='bar', stacked=True)
df['rolling'].plot(kind='line', ax=ax)
</code></pre>
<p>However, I can only see the second plot. Is there a way to add one on top of the other without overwriting the first? <code>alpha</code> does not seem to help here.</p>
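<p>A sketch of one likely fix (not a definitive diagnosis): pandas bar charts place bars at categorical integer positions 0 to n-1 rather than at the dates, so a line plotted against the <code>DatetimeIndex</code> lands far outside the bars' x-range and the axis rescales. Drawing the line against the same integer positions keeps both on one axis:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs in a script
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

date_range = pd.date_range(start='2023-01-01', periods=10)
df = pd.DataFrame({
    'Category A': np.random.randint(1, 10, 10),
    'Category B': np.random.randint(1, 10, 10),
}, index=date_range)
df['rolling'] = df.sum(axis=1).rolling(window=7).mean()

ax = df[['Category A', 'Category B']].plot(kind='bar', stacked=True)
# Bars sit at x = 0..n-1 (categorical positions), so plot the line there
# too, instead of against the DatetimeIndex.
ax.plot(range(len(df)), df['rolling'].to_numpy(), color='black', label='7d avg')
ax.legend()
plt.savefig('stacked_with_rolling.png')
```

<p>The bar labels still show the dates, so the chart reads the same; only the line's x-coordinates change.</p>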
|
<python><pandas><matplotlib><rolling-average>
|
2024-06-27 14:31:40
| 1
| 3,769
|
meto
|
78,678,138
| 4,532,062
|
How to apply hierarchical numbering to indented titles?
|
<p>I have a table of contents that uses indentation to indicate the hierarchy, like:</p>
<pre><code>- title1
-- title1-1
-- title1-2
--- title1-2-1
--- title1-2-2
- title2
-- title2-1
-- title2-2
- title3
- title4
</code></pre>
<p>I want to translate it into a numbering format like:</p>
<pre><code>1 title1
1.1 title1-1
1.2 title1-2
1.2.1 title1-2-1
1.2.2 title1-2-2
2 title2
2.1 title2-1
2.2 title2-2
3 title3
4 title4
</code></pre>
<p>This is just an example; the string "title-*" could be any heading text. Also, the indentation depth could be greater than in this example.</p>
<p>This comes from my real work, where I collect headings (or manually hand-written headings) in a Word document and reformat these candidate headings from beginning to end, aiming to correct any wrong ordering and indentation.</p>
<p>I have tried this myself, and while most of these headings were transformed into the desired format, for some it did not work. How should this be done?</p>
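<p>A minimal sketch of the translation on the sample input above (plain strings only; extracting headings from the Word document is a separate problem). It keeps one counter per indentation level and truncates the counter stack whenever the depth decreases:</p>

```python
def number_headings(lines):
    """Turn '-'-prefixed depth markers into hierarchical numbers."""
    counters = []
    out = []
    for line in lines:
        dashes, _, title = line.partition(' ')
        level = len(dashes)                    # '-' -> 1, '--' -> 2, ...
        if level > len(counters):              # going deeper: open new counters
            counters.extend([0] * (level - len(counters)))
        else:                                  # same level or shallower: truncate
            counters = counters[:level]
        counters[level - 1] += 1               # bump the counter at this depth
        out.append('.'.join(map(str, counters)) + ' ' + title.strip())
    return out

toc = ["- title1", "-- title1-1", "-- title1-2", "--- title1-2-1",
       "--- title1-2-2", "- title2", "-- title2-1", "-- title2-2",
       "- title3", "- title4"]
for entry in number_headings(toc):
    print(entry)
```

<p>Truncating the stack on the way back up is what resets the deeper counters, so "2.1" follows "1.2.2" correctly.</p>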
|
<python><tableofcontents>
|
2024-06-27 14:01:08
| 2
| 839
|
Jim
|
78,678,067
| 3,925,023
|
How to get spaCy to correctly detect GPE entities
|
<p>I have a set of strings where I need to detect the country each one belongs to, based on the detected GPE entities.</p>
<pre><code>sentences = [
"I watched TV in germany",
"Mediaset ITA canale 5",
"Je viens d'Italie",
"Ich komme aus Deutschland",
"He is from the UK",
"Soy de Inglaterra",
"Sono del Regno Unito"
]
</code></pre>
<p>So my expectation is:
"I watched TV in germany" => DE
"Soy de Inglaterra" => UK
etc.</p>
<p>I've tried with this code:</p>
<pre><code>import spacy
from spacy.pipeline import EntityRuler
nlp = spacy.load('xx_ent_wiki_sm') # Multilingual model
def detect_country(text):
doc = nlp(text)
countries = []
for ent in doc.ents:
if ent.label_ == 'GPE':
countries.append(ent.text)
return countries
for sentence in sentences:
countries = detect_country(sentence)
print(f"Countries detected in '{sentence}': {countries}")
</code></pre>
<p>But results are completely empty:</p>
<pre><code>Countries detected in 'I watched TV in germany': []
Countries detected in 'Mediaset ITA canale 5': []
Countries detected in 'Je viens d'Italie': []
Countries detected in 'Ich komme aus Deutschland': []
Countries detected in 'He is from the UK': []
Countries detected in 'Soy de Inglaterra': []
Countries detected in 'Sono del Regno Unito': []
</code></pre>
<p>I don't understand if I'm missing something in the spaCy pipeline, or whether I've actually made a good choice of multilingual model.
Thanks in advance for any advice.</p>
|
<python><nlp><spacy-3>
|
2024-06-27 13:44:57
| 0
| 687
|
user3925023
|
78,677,834
| 5,377
|
Python function to assert a type and cast
|
<p>Functions like <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.fsolve.html" rel="nofollow noreferrer">scipy.optimize.fsolve</a> have, uh, interesting APIs where setting the <code>full_output</code> parameter to a truthy value changes the return type in a way that type checkers can't infer.</p>
<p>I'd like a clean way to deal with statically typing this. Right now I am doing something like this:</p>
<pre class="lang-py prettyprint-override"><code>result = fsolve(...)
assert isinstance(result, numpy.ndarray)
# .. call numpy array methods on `result` without any type errors ...
</code></pre>
<p>It'd be trivial to define a function to both cast and assert:</p>
<pre class="lang-py prettyprint-override"><code>def assert_and_cast[T](obj: Any, typ: type[T]) -> T:
assert isinstance(obj, typ)
return obj # no need to cast(), checkers know it's type T
</code></pre>
<p>This would also be nice for cases where you know an optional variable is not-None:</p>
<pre class="lang-py prettyprint-override"><code>my_var: Optional[dict] = ...
# ... some case where I know my_var isn't actually None ...
assert_and_cast(my_var, dict)['abc'] # this is not a type error
</code></pre>
<p><strong>Is there an existing function that does this?</strong> It seems like there would be, since it's common in other languages (e.g. Guava's <code>checkNotNull</code> in Java, or TypeScript's non-null assertion operator, which both asserts and narrows the type), but I can't find a common existing one.</p>
|
<python><python-typing>
|
2024-06-27 13:02:29
| 0
| 6,458
|
aaronstacy
|
78,677,547
| 5,530,553
|
Django web app was blocked due to MIME type (“text/html”) mismatch
|
<p>I am trying to build a web-app with Django with a front-end made with Vue.js.</p>
<p>Here is how my dir is organized -</p>
<pre><code>reco_system_app/urls.py
reco_system_app/settings.py
recommender/urls.py
recommender/views.py
vueapp/
static/
templates/
</code></pre>
<p><code>reco_system_app/urls.py</code> is</p>
<pre><code>from django.contrib import admin
from recommender.views import home
from django.urls import path, include
from django.conf import settings
from django.conf.urls.static import static
urlpatterns = [
    path('admin/', admin.site.urls),
    path('api/', include('recommender.urls')),
    path('', home, name='home'),
] + static(settings.STATIC_URL, document_root=settings.STATIC_ROOT)
</code></pre>
<p><code>settings.py</code> has these two set</p>
<pre><code>STATIC_URL = '/static/'
STATICFILES_DIRS = [
    os.path.join(BASE_DIR, "static"),
]
</code></pre>
<p><code>recommender/urls.py</code> is</p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
    path('', views.IndexView.as_view(), name='index'),
    path('movies/', views.get_random_movies, name='get_random_movies'),
    path('recommend/', views.get_recommendations, name='get_recommendations'),
]
</code></pre>
<p><code>views.py</code> is</p>
<pre><code>import random
from django.shortcuts import render
from django.views.generic import TemplateView
from django.http import JsonResponse


class IndexView(TemplateView):
    template_name = 'index.html'


def home(request):
    return render(request, "index.html")


def get_random_movies(request):
    # Simulate a list of movies
    sample_movies = ["The Shawshank Redemption", "The Godfather", "The Dark Knight", "Pulp Fiction", "The Lord of the Rings: The Return of the King"]
    # Return a random sample of 3 movies
    movies = random.sample(sample_movies, 3)
    return JsonResponse({"movies": movies})


def get_recommendations(request):
    # For simplicity, assume the movies are passed as a comma-separated string in the query parameter
    selected_movies = request.GET.get('movies', '')
    selected_movies_list = selected_movies.split(',')
    # Simulate recommendations by shuffling the input movies
    random.shuffle(selected_movies_list)
    recommendations = selected_movies_list  # In a real scenario, you'd query VertexAI here
    return JsonResponse({"recommendations": recommendations})
</code></pre>
<p>My front-end made with Vue.js is working fine. When I run <code>npm run serve</code>, I am able to see the front-end that I have created.</p>
<p>The Django server is also working I think, when I run <code>python manage.py runserver 0.0.0.0:8000</code>, the server is running and I can do <code>curl http://0.0.0.0:8000/api/movies/</code> to get a list of movies.</p>
<p>But Django is not opening up the front-end webpage that I want to see. When I inspect the page, I can see the errors,</p>
<pre><code>The resource from “http://0.0.0.0:8000/js/app.d193da3f.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff).
The resource from “http://0.0.0.0:8000/js/chunk-vendors.5c940eb9.js” was blocked due to MIME type (“text/html”) mismatch (X-Content-Type-Options: nosniff).
</code></pre>
<p>I have made sure that the files that I build with <code>npm run build</code> are in the right locations. But I am still getting the error.</p>
<p>I am running the whole thing inside a docker container with</p>
<pre><code>docker run -it --entrypoint /bin/sh -v ${PWD}:/workspace -p 8080:8080 -p 8000:8000 recommendation
</code></pre>
<p>Both the ports are exposed in the Dockerfile as well. What could be going wrong?</p>
<p>EDIT: <code>DEBUG = True</code> is also set.</p>
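<p>For what it's worth, the error message itself is a clue: the browser asked for <code>/js/app.d193da3f.js</code>, but Django has no route for <code>/js/…</code> (static files are mounted under <code>/static/</code>), so it answers with an HTML page (e.g. a 404) whose <code>text/html</code> content type fails the <code>nosniff</code> check for a script. A tiny illustration of the type mismatch using only the standard library:</p>

```python
import mimetypes

# the browser expects a JavaScript type for the script URL...
guessed, _ = mimetypes.guess_type("app.d193da3f.js")
print(guessed)

# ...so a response declaring text/html is rejected under X-Content-Type-Options: nosniff
assert guessed != "text/html"
```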
|
<python><python-3.x><django><docker><vue.js>
|
2024-06-27 12:07:31
| 0
| 3,300
|
Ananda
|
78,677,469
| 3,197,404
|
Remove gap between two graphs with shared axis
|
<h2>Objective</h2>
<p>I want to generate a 'double ended' barchart, showing gained points (of some metric) vis-a-vis missed points, something like this:</p>
<p><a href="https://i.sstatic.net/65CTgW6B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65CTgW6B.png" alt="graph I want to make" /></a></p>
<h2>Result so far</h2>
<p>I managed to do this</p>
<pre><code>import altair as alt
import pandas as pd
source = pd.DataFrame(
    {
        "cohort": ["A", "B", "C", "D", "E", "F", "G", "H", "I"],
        "gained": [28, 55, 43, 91, 81, 53, 19, 87, 52],
        "missed": [5, 8, 34, 21, 16, 22, 9, 7, 11],
    }
)

up = (
    alt.Chart(source)
    .mark_bar(color="blue")
    .encode(
        x=alt.X("cohort:N").axis(labels=False, title=None, ticks=False),
        y=alt.Y("gained:Q"),
    )
)

down = (
    alt.Chart(source)
    .mark_bar(color="red")
    .encode(
        x=alt.X("cohort:N").axis(labelAngle=0),
        y=alt.Y("missed:Q", scale=alt.Scale(reverse=True)),
    )
)

alt.vconcat(up, down).resolve_scale(x="shared")
</code></pre>
<p>which generates this:
<a href="https://i.sstatic.net/26fpdh1M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/26fpdh1M.png" alt="results so far" /></a></p>
<p>Is there any way I can remove the gap? Or perhaps go about it completely differently with Vega-Altair?</p>
|
<python><visualization><altair>
|
2024-06-27 11:56:26
| 2
| 931
|
dkapitan
|
78,677,364
| 5,790,653
|
How to store all key/values of a dictionary in a single line?
|
<p>This is the sample code:</p>
<pre class="lang-py prettyprint-override"><code>my_list = [
{'name': 'Saeed', 'source_country': 'IR', 'destination_country': 'IT', 'email': 'sample1@gmail.com'},
{'name': 'Vahid', 'source_country': 'US', 'destination_country': 'DE', 'email': 'sample2@gmail.com'},
{'name': 'Ali', 'source_country': 'UK', 'destination_country': 'JP', 'email': 'sample3@gmail.com'},
{'name': 'Joe', 'source_country': 'FR', 'destination_country': 'KR', 'email': 'sample4@gmail.com'},
]
html = '\n'.join([f"<p>{b.replace('_', ' ').title()}: {p[b]}</p>" for p in my_list for b in p])
</code></pre>
<p>This is current output:</p>
<pre><code>>>> print(html)
<p>Name: Saeed</p>
<p>Source Country: IR</p>
<p>Destination Country: IT</p>
<p>Email: sample1@gmail.com</p>
<p>Name: Vahid</p>
<p>Source Country: US</p>
<p>Destination Country: DE</p>
<p>Email: sample2@gmail.com</p>
<p>Name: Ali</p>
<p>Source Country: UK</p>
<p>Destination Country: JP</p>
<p>Email: sample3@gmail.com</p>
<p>Name: Joe</p>
<p>Source Country: FR</p>
<p>Destination Country: KR</p>
<p>Email: sample4@gmail.com</p>
</code></pre>
<p>I'm going to have this (expected output):</p>
<pre><code><p>Name: Saeed, Source Country: IR, Destination Country: IT, Email: sample1@gmail.com</p>
<p>Name: Vahid, Source Country: US, Destination Country: DE, Email: sample2@gmail.com</p>
<p>Name: Ali, Source Country: UK, Destination Country: JP, Email: sample3@gmail.com</p>
<p>Name: Joe, Source Country: FR, Destination Country: KR, Email: sample4@gmail.com</p>
</code></pre>
<p>How can I store and print the expected output?</p>
<p><strong>Edit1</strong></p>
<p>I know this way to save the output, but I don't mean it:</p>
<pre class="lang-py prettyprint-override"><code>html = '\n'.join([f"<p>Name: {p['name']}, Source Country: {p['source_country']}, Destination Country: {p['destination_country']}, Email: {p['email']}</p>" for p in my_list])
</code></pre>
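<p>For what it's worth, the hard-coded keys in the Edit1 version can be avoided by joining the key/value pairs of each dictionary first and only then joining the rows; a sketch assuming the same data as above:</p>

```python
my_list = [
    {'name': 'Saeed', 'source_country': 'IR', 'destination_country': 'IT', 'email': 'sample1@gmail.com'},
    {'name': 'Vahid', 'source_country': 'US', 'destination_country': 'DE', 'email': 'sample2@gmail.com'},
]

# one <p> per dictionary: the inner join builds "Key: value" pairs,
# the outer join builds one line per dictionary
html = '\n'.join(
    '<p>' + ', '.join(f"{k.replace('_', ' ').title()}: {v}" for k, v in p.items()) + '</p>'
    for p in my_list
)
print(html)
```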
|
<python><dictionary>
|
2024-06-27 11:34:42
| 1
| 4,175
|
Saeed
|
78,677,266
| 2,876,079
|
How to debug scripts for PyPSA-EUR?
|
<p>I would like to set some VisualStudioCode or PyCharm breakpoints for the script</p>
<p><a href="https://github.com/PyPSA/pypsa-eur/blob/master/scripts/prepare_sector_network.py" rel="nofollow noreferrer">https://github.com/PyPSA/pypsa-eur/blob/master/scripts/prepare_sector_network.py</a></p>
<p>and then run and debug it to better understand how it works.</p>
<p>Usually the scripts of PyPSA-EUR are run as part of a snakemake workflow.
Therefore, I currently see a few strategies:</p>
<p><strong>a)</strong> Create a run configuration (launch.json) that uses <strong>snakemake</strong> as python <strong>module</strong> or executable</p>
<p><strong>b)</strong> Run the script itself with <strong>python executable</strong> and mock the usage of snakemake workflow in a corresponding <strong>main</strong> function of the script if "snakemake" is not in globals()</p>
<p><strong>c)</strong> Use some snakemake specific <strong>plugin</strong> like</p>
<p><a href="https://github.com/JetBrains-Research/snakecharm" rel="nofollow noreferrer">https://github.com/JetBrains-Research/snakecharm</a></p>
<p><a href="https://open-vsx.org/extension/snakemake/snakemake-lang" rel="nofollow noreferrer">https://open-vsx.org/extension/snakemake/snakemake-lang</a></p>
<p>(do not support debugging, yet)</p>
<p><strong>=> What is the recommended way to do it and where can I find instructions?</strong></p>
<p>Actually, the script file does include a main section at the end of the file:</p>
<pre><code>if __name__ == "__main__":
    # ...
    # snakemake = mock_snakemake(...)
</code></pre>
<p>However, that functionality seems to be outdated and/or serves a different purpose?
I created a corresponding bug ticket here:</p>
<p><a href="https://github.com/PyPSA/pypsa-eur/issues/1118" rel="nofollow noreferrer">https://github.com/PyPSA/pypsa-eur/issues/1118</a></p>
<p>If instead <strong>a)</strong> is the recommended way to do it, could someone please provide example vscode launch.json and PyCharm run configuration settings?</p>
<p>I tried to</p>
<p><strong>a1)</strong> Use snakemake as a module in a PyCharm run configuration</p>
<p><a href="https://i.sstatic.net/8DUgLnTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8DUgLnTK.png" alt="enter image description here" /></a></p>
<p><strong>a2)</strong> Create a dummy start script <code>snake.py</code>:</p>
<pre><code>import sys
from snakemake.cli import main
if __name__ == "__main__":
    arguments = sys.argv[1:]
    main(arguments)
</code></pre>
<p>Setting a breakpoint in this script works and the snakemake workflow
can be run with</p>
<pre><code>python snake.py -call all
</code></pre>
<p>However, breakpoints inside the script that is referenced from Snakemake file do not work.</p>
<p><strong>Related:</strong></p>
<p><a href="https://github.com/snakemake/snakemake/issues/2932" rel="nofollow noreferrer">https://github.com/snakemake/snakemake/issues/2932</a></p>
<p><a href="https://github.com/PyPSA/pypsa-eur/pull/107" rel="nofollow noreferrer">https://github.com/PyPSA/pypsa-eur/pull/107</a></p>
<p><a href="https://stackoverflow.com/questions/77887199/how-to-debug-snakemake-snakefile-in-visual-studio-code">How to debug snakemake snakefile in visual studio code?</a></p>
<p><a href="https://github.com/JetBrains-Research/snakecharm/issues/142" rel="nofollow noreferrer">https://github.com/JetBrains-Research/snakecharm/issues/142</a></p>
<p><a href="https://github.com/JetBrains-Research/snakecharm/issues/25" rel="nofollow noreferrer">https://github.com/JetBrains-Research/snakecharm/issues/25</a></p>
<p><a href="https://github.com/snakemake/snakemake/issues/247" rel="nofollow noreferrer">https://github.com/snakemake/snakemake/issues/247</a></p>
<p><a href="https://github.com/snakemake/snakemake/issues/1607" rel="nofollow noreferrer">https://github.com/snakemake/snakemake/issues/1607</a></p>
|
<python><snakemake><pypsa>
|
2024-06-27 11:11:49
| 2
| 12,756
|
Stefan
|
78,677,249
| 13,823,647
|
Second instance of a class suppresses/deletes the first one in my Python program
|
<p>I am creating the Pong game using the Turtle module in Python. So far, I have one file for the paddles (the bars that players control to hit the ball) called paddles.py and a second file for the main gameplay control called game.py. I still plan on making a file for the ball and another for score-keeping.</p>
<p>In my paddles file, I create a <code>Paddle</code> class using the <code>Turtle</code> class. It doesn't exactly inherit from <code>Turtle</code>, but instead creates several <code>Turtle</code> instances inside each <code>Paddle</code>. I made a condition inside the <code>__init__</code> of <code>Paddle</code> which sets a "left" paddle to the left and a "right" paddle to the right. Then I import the <code>Paddle</code> file to game.py and create a <code>left_pad</code> and a <code>right_pad</code> as follows:</p>
<pre><code>left_pad = Paddle("left")
right_pad = Paddle("right")
</code></pre>
<p>On running the game.py file, the <code>left_pad</code> appears briefly then disappears, then only the <code>right_pad</code> is actually showing on the screen.</p>
<p><a href="https://i.sstatic.net/AJ4mSP78.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJ4mSP78.png" alt="enter image description here" /></a></p>
<p>The shape is correct, the positioning is correct, but the <code>left_pad</code> is simply not there.</p>
<p>Here is the code for paddles.py (for now I am not worried about the moving functions):</p>
<pre><code>from turtle import Turtle
class Paddle:
    side = ""
    paddle = [Turtle(shape="square") for i in range(4)]
    paddle_bottom = -30
    paddle_range = []

    def __init__(self, side):
        self.side = side
        self.paddle_range = [self.paddle[0].screen.canvheight-10, -self.paddle[0].screen.canvheight+10]
        for box in range(len(self.paddle)):
            self.paddle[box].penup()
        # Set to middle height
        for box in range(len(self.paddle)):
            self.paddle[box].sety(self.paddle_bottom + 20*box)
        # Set to sides
        if side == "left":
            for box in range(len(self.paddle)):
                self.paddle[box].setx((-self.paddle[0].screen.canvwidth / 2) + 20)
        else:
            for box in range(len(self.paddle)):
                self.paddle[box].setx((self.paddle[0].screen.canvwidth / 2) - 20)

        def move():
            global paddle
            for box in len(range(paddle)):
                paddle[box].forward(1)
            paddle[0].screen.update()

        def move_up():
            global paddle_range
            global move
            for box in len(range(paddle)):
                paddle[box].setheading(90)
            while paddle[-1].ycor() < paddle_range[0]:
                move()

        def move_down():
            for box in len(range(paddle)):
                paddle[box].setheading(270)
            while paddle[0].ycor() > paddle_range[1]:
                move()

        if side == "left":
            paddle[0].screen.onkey(key="w", fun=move_up)
            paddle[0].screen.onkey(key="s", fun=move_down)
        else:
            paddle[0].screen.onkey(key="Up", fun=move_up)
            paddle[0].screen.onkey(key="Down", fun=move_down)
</code></pre>
<p>and the game.py:</p>
<pre><code>from turtle import Screen
from paddles import Paddle
screen = Screen()
screen.bgcolor("beige")
screen.title("Pong!")
screen.setup(width=820, height=620)
screen.screensize(canvwidth=800, canvheight=600)
screen.tracer(0)
screen.listen()
left_pad = Paddle("left")
screen.update()
right_pad = Paddle("right")
screen.update()
screen.exitonclick()
</code></pre>
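<p>For reference, the disappearing paddle is consistent with <code>paddle</code> being a <em>class</em> attribute: both <code>Paddle</code> instances would then share the same four <code>Turtle</code> objects, and the second <code>__init__</code> simply repositions the turtles the first one placed. A minimal sketch of the difference between class and instance attributes, using a plain class so it runs without a display:</p>

```python
class Shared:
    boxes = []  # class attribute: ONE list shared by every instance

    def __init__(self, tag):
        self.boxes.append(tag)  # mutates the shared list


class Own:
    def __init__(self, tag):
        self.boxes = [tag]  # instance attribute: each instance gets its own list


a, b = Shared("left"), Shared("right")
print(a.boxes)  # ['left', 'right'] -- both instances see the same list

c, d = Own("left"), Own("right")
print(c.boxes)  # ['left'] -- unaffected by d
```

<p>Moving the <code>paddle = [Turtle(...) for i in range(4)]</code> line into <code>__init__</code> (as <code>self.paddle = ...</code>) would give each <code>Paddle</code> its own turtles.</p>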
|
<python><class><instance><turtle-graphics><python-turtle>
|
2024-06-27 11:08:01
| 0
| 417
|
Aya Noaman
|
78,676,973
| 10,200,497
|
How can I preserve the previous value to find the row that is greater than it?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
    {
        'start': [3, 11, 9, 19, 22],
        'end': [10, 17, 10, 25, 30]
    }
)
</code></pre>
<p>And expected output is creating column <code>x</code>:</p>
<pre><code> start end x
0 3 10 10
1 11 17 17
2 9 10 NaN
3 19 25 25
4 22 30 NaN
</code></pre>
<p>Logic:</p>
<p>I explain it row by row. For row <code>0</code>, <code>x</code> is <code>df.end.iloc[0]</code>. Now this value of <code>x</code> needs to be preserved until a greater value is found in the next rows and in the <code>start</code> column.</p>
<p>So 10 is saved and the process moves to row <code>1</code>. Is 11 > 10? Yes, so <code>x</code> for the second row is 17. For the next row: is 9 > 17? No, so the value is <code>NaN</code>.</p>
<p>The process moves to the next row. Since no value greater than 17 was found, 17 is still preserved. Is 19 > 17? Yes, so <code>x</code> is set to 25. For the last row, 22 < 25, so <code>NaN</code> is selected.</p>
<p>I have provided additional examples with different <code>df</code> and the desired outputs:</p>
<pre><code>df = pd.DataFrame({'start': [3, 20, 11, 19, 22],'end': [10, 17, 21, 25, 30]})
start end x
0 3 10 10.0
1 20 17 17.0
2 11 21 NaN
3 19 25 25.0
4 22 30 NaN
df = pd.DataFrame({'start': [3, 9, 11, 19, 22],'end': [10, 17, 21, 25, 30]})
start end x
0 3 10 10.0
1 9 17 NaN
2 11 21 21.0
3 19 25 NaN
4 22 30 30.0
df = pd.DataFrame({'start': [3, 11, 9, 19, 22],'end': [10, 17, 21, 25, 30]})
start end x
0 3 10 10.0
1 11 17 17.0
2 9 21 NaN
3 19 25 25.0
4 22 30 NaN
</code></pre>
<p>This gives me the result. Is there a vectorized way to do this?</p>
<pre><code>import numpy as np

l = []
for ind, row in df.iterrows():
    if ind == 0:
        x = row['end']
        l.append(x)
        continue
    if row['start'] > x:
        x = row['end']
        l.append(x)
    else:
        l.append(np.nan)
</code></pre>
|
<python><pandas><dataframe>
|
2024-06-27 10:11:33
| 4
| 2,679
|
AmirX
|
78,676,966
| 5,838,180
|
How to access data in an h5 file?
|
<p>I downloaded an astronomical data file called <code>sources.h5</code> from this <a href="https://portal.nersc.gov/project/sobs/users/Radio_WebSky/" rel="nofollow noreferrer">webpage</a>. I am trying to access it, to no avail. What I have achieved so far is:</p>
<pre><code>source_cat = h5py.File('sources.h5', 'r')
print(source_cat.keys())
>>> <KeysViewHDF5 ['_types', 'sources']>
print(source_cat['sources'])
>>> <HDF5 dataset "sources": shape (), type "|V136">
data = source_cat['sources'][()]
print(data)
>>> (<HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, <HDF5 object reference>, 230355498, 51400878)
</code></pre>
<p>But if I do</p>
<pre><code>print(source_cat['sources'][:10])
</code></pre>
<p>I get the error message <code>ValueError: Illegal slicing argument for scalar dataspace</code></p>
<p>Do you have any ideas how I can get to the data inside the file? And not only for the <code>sources</code> key, but also <code>_types</code>? Further, if you can share a program/code that would show me the tree of the h5 file, that would also be very helpful! Thanks</p>
|
<python><h5>
|
2024-06-27 10:09:43
| 1
| 2,072
|
NeStack
|
78,676,877
| 2,307,441
|
Read Excel merged cells strike and non strike data and write to separate files
|
<p>I have an Excel file where some of the cells are merged and some cells contain wrap text with both struck-through and non-struck-through runs.</p>
<p><a href="https://i.sstatic.net/Ap7Anz8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ap7Anz8J.png" alt="Sampledata_Image" /></a></p>
<p>I want to write the non-struck records to one Excel file and the struck records to another, keeping the strikethrough formatting in the cell.</p>
<p>My output1:</p>
<pre><code>col1 | col2 | col3 | col4 |col5 |
Sampletext1 | Combines TextC2 |Sample3 | text4 |text5 |
Sampletext2 | Combines TextC2 |Sample3_1 | text4_1 |text5 |
Sampletext2 | Combines TextC2 |Sample3_1 | text4_2 |text5 |
</code></pre>
<p>My output2:</p>
<pre><code>col1 | col2 | col3 | col4 |col5 |
Sampletext2 | Combines TextC2 |Sample3_1 | text4_3 |text5 |
</code></pre>
<p><a href="https://i.sstatic.net/AJLgp2Y8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJLgp2Y8.png" alt="My Expected output2" /></a></p>
<blockquote>
<p><code>text4_3</code> should be struck out in the output file as well.</p>
</blockquote>
<p>I have tried using <code>openpyxl</code> in python</p>
<pre class="lang-py prettyprint-override"><code>
from openpyxl import load_workbook
from openpyxl import Workbook
import pandas as pd

input_file = 'myexcel.xlsx'
wb = load_workbook(input_file)  # don't shadow the imported Workbook class

for i in wb.worksheets:
    if i.sheet_state == "visible":
        sheetname = i.title  # worksheet title, instead of parsing repr()
        ws = wb[sheetname]
        # unmerging the cells (copy the range list first: unmerging mutates it)
        for merged_cell in list(ws.merged_cells.ranges):
            min_row, min_col, max_row, max_col = merged_cell.min_row, merged_cell.min_col, merged_cell.max_row, merged_cell.max_col
            data = ws.cell(row=min_row, column=min_col).value
            ws.unmerge_cells(start_row=min_row, start_column=min_col, end_row=max_row, end_column=max_col)
            for row in ws.iter_rows(min_row=min_row, min_col=min_col, max_row=max_row, max_col=max_col):
                for cell in row:
                    cell.value = data
        data_all = [[cell for cell in row] for row in ws.iter_rows(values_only=True)]
        df_raw = pd.DataFrame(data_all[1:], columns=headercols)  # headercols defined elsewhere
        df_raw["strike_flag"] = [any(cell.font.strikethrough for cell in row) for row in ws.iter_rows(min_row=2)]
</code></pre>
<p>With the above code I was able to find whether a cell has strikethrough or not, but I am not sure how to separate struck and non-struck records.</p>
<p>In the above code <code>data_all</code> is a list of lists. <code>data_all[2]</code> contains data with wrap text along with struck-out and non-struck-out runs.</p>
<p><a href="https://i.sstatic.net/okG7NcA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/okG7NcA4.png" alt="enter image description here" /></a></p>
<p>Now I want to extract the item at index 3. In the current example it is index 3, but it can be any index.</p>
<p>To find whether RichText exists in the list or not, I have tried the code below, which is giving an error:</p>
<blockquote>
<p>AttributeError: 'str' object has no attribute 'value'</p>
</blockquote>
<pre><code>from openpyxl import load_workbook
import pandas as pd
from openpyxl.cell.rich_text import TextBlock, CellRichText
from openpyxl.utils import range_boundaries

for cell in data_all[2]:
    if type(cell.value) == CellRichText:
        for text in cell.value:
            if text.font.strike:
                print("struck", text)
            else:
                print("unstruck", text)
    else:
        print("unstruck", cell.value)
</code></pre>
|
<python><excel><openpyxl>
|
2024-06-27 09:55:14
| 1
| 1,075
|
Roshan
|
78,676,853
| 4,784,914
|
How could I accomplish the equivalent of a ROS service in ZeroMQ?
|
<p>In short: <strong>How could I realize something like a "Service" from ROS in ZMQ?</strong></p>
<hr />
<p>ROS2 is typically based on subscribers and publishers to topics, in the same way that ZeroMQ uses them: messages are published under a path and only clients that are subscribed to (a base of) that path receive it.<br />
But ROS2 also has a concept called services (<a href="https://docs.ros.org/en/jazzy/Tutorials/Beginner-CLI-Tools/Understanding-ROS2-Services/Understanding-ROS2-Services.html#background" rel="nofollow noreferrer">docs</a>), where a request is pushed to an address and a server-like thing listening to that address will send a reply. A nice GIF:</p>
<p><img src="https://docs.ros.org/en/jazzy/_images/Service-MultipleServiceClient.gif" alt="" /></p>
<p>What is great about this concept, is the service client doesn't need to know who hosts the service. They are matched only by a name.</p>
<p>In ZMQ I am hoping to achieve something similar: I have a bunch of functions (services?) that I want to distribute over a set of sockets (servers). I want those services to connect to a single client and have this client make calls <em>without</em> the client having to know who is going to be connected beforehand (e.g. by calling a service by name only).</p>
<p>So the client might make calls to:</p>
<pre><code>/machine/valve
/machine/leds
/global/state
</code></pre>
<p>And will receive replies from them. But maybe from a single or 2 or 3 different nodes, depending on what's running.</p>
<hr />
<p>One option would be to use <code>PUB-SUB</code> and <code>SUB-PUB</code> in ZMQ for these kinds of commands. The client would also bind a subscriber to listen for confirmations published for the commands. But there are some limitations:<br />i) the publisher won't know if there are no subscribers,<br />ii) there is no way to tell if the published message got dropped,<br />iii) there is no way to prevent multiple nodes listening to the same topic, resulting in double execution of functions.</p>
<p>The other option is <code>REQ-REP</code>, where the client binds a <code>REQ</code> for each feature and nodes connect with <code>REP</code>. The main downside here is that the client already has to split up the functionality beforehand.</p>
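<p>The name-based matching itself can be prototyped without any sockets; the toy registry below (all names made up) maps service paths to handlers, and is essentially the bookkeeping a ZeroMQ <code>ROUTER</code>-based broker would do over the wire. It also shows how a missing provider and a duplicate provider can both become explicit errors:</p>

```python
class ServiceBroker:
    # toy in-process stand-in for a broker: services register under a path,
    # clients call by path and never learn which node hosts the service
    def __init__(self):
        self.services = {}

    def register(self, path, handler):
        if path in self.services:
            # prevents double execution by two providers of the same service
            raise ValueError(f"{path} already has a provider")
        self.services[path] = handler

    def call(self, path, request):
        if path not in self.services:
            # the caller finds out that nobody is listening
            raise LookupError(f"no provider for {path}")
        return self.services[path](request)


broker = ServiceBroker()
broker.register("/machine/valve", lambda req: {"ok": True, "cmd": req})
print(broker.call("/machine/valve", "open"))  # {'ok': True, 'cmd': 'open'}
```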
|
<python><network-programming><ros><zeromq>
|
2024-06-27 09:51:03
| 2
| 1,123
|
Roberto
|
78,676,837
| 1,520,647
|
How to convert a string in python to separate strings
|
<p>I have a pandas dataframe with only one column containing symbols. I need to separate those symbols into groups of 13 and 39, each group joined into a single string.</p>
<pre><code>symbol
3IINFOTECH
3MINDIA
3PLAND
20MICRONS
3RDROCK
5PAISA
63MOONS
7SEASL
ANNAINFRA
AFEL
ABCOTS.ST
ABINFRA
AKCAPIT
AAL
ASMTEC
AAPLUSTRAD
A2ZINFRA
AMJUMBO
AARSHYAM
AARVINFRA
ABBOTINDIA
AARCOM
ABCINDQ
</code></pre>
<p>I have extracted the pandas DataFrame column to a list and then joined it into a string in the following way:</p>
<pre><code>nse_disjoined_list = nse_disjoined_df['symbol'].tolist()
nse_string = ','.join(nse_disjoined_list)
nse_string = '3RDROCK,AARON,AARVI,ABCOTS.ST,ABINFRA,ABMINTLLTD,ACCORD,ACCURACY,ACEINTEG,AGROPHOS,AHIMSA,AHLADA,AILIMITED,AIROLAM,AJOONI,AKASH,AKG,AMBANIORG,AMJUMBO,ANTGRAPHIC,APOLSINHOT,ARTNIRMAN,ARVEE,ASCOM,ASLIND,ATALREAL,AURDIS,AVSL,BANKA,BANKNIFTY,BDR,BETA,BEWLTD,BMETRICS,BOHRA,BSE,BSHSL,CADSYS,CDSL,CNX100,CNX200,CNXMIDCAP,CNXPSUBANK,CONSOFINVT,CONSUMBEES,COOLCAPS.ST,CROWN,DANGEE,DESTINY,DIGJAMLMTD,DIL,DKEGL,DPABHUSHAN,DPWIRES,DRSDILIP,DUGLOBAL,EIFFL,ELGIRUBCO,EMKAYTOOLS,EUROBOND.ST,FELIX,FOCE,GANGAFORGE,GEEKAYWIRE,GIRIRAJ,GIRRESORTS,GKWLIMITED,GOLDSTAR,GRETEX,GROBTEA,HECPROJECT,HINDCON,ICEMAKE,INNOVANA,IRISDOREME,JAINAM,JAIPURKURT,JAKHARIA,JALAN,JASH,JETKNIT,JMA,JOCIL,JSLL.ST,KEERTI,KKVAPOW,KNAGRI.ST,KOTARISUG,KOTHARISUG,KOTYARK.ST,KRISHANA,KRISHIVAL.ST,KRISHNADEF,KRITIKA,KSHITIJPOL,KSOLVES,LAGNAM,LATTEYS,LAXMICOT,LEMERITE.ST,LEXUS,LFIC,LGHL,MACPOWER,MADHAVBAUG,MAHESHWARI,MAHICKRA,MANAV,MANGTIMBER,MARSHALL,MBAPL,MCL,MDL,MGEL,MHHL,MILTON,MITTAL,MKPL,MMP,MOKSH,MPTODAY,MSTCLTD,NDGL,NIDAN,NIDAN.ST,NIFTY,NIFTYALPHA50,NIRAJISPAT,NITIRAJ,NPST,NRL,OMFURN,ONEPOINT,OSEINTRUST,OSIAHYPER,OSWALSEEDS,PANSARI,PAR,PARIN,PARTYCRUS,PASHUPATI,PAVNAIND,PENTAGOLD,PERFECT,PKTEA,PRECISION.ST,PRECOT,PRITI,PROLIFE,PROPEQUITY.ST,QUADPRO,RAJMET,RELIABLE,REPL,REXPIPES,RKEC,RMDRIP,ROHITFERRO,RPPL,SAGARDEEP,SAKAR,SANGINITA,SECL,SERVOTECH,SHAIVAL,SHANTI,SHIGAN.ST,SHIVAUM,SHRADHA,SHREMINVIT,SHRENIK,SHUBHLAXMI,SIDDHIKA,SIGMA,SIKKO,SILGO,SILLYMONKS,SINTERCOM,SKSTEXTILE,SMVD,SOLEX,SONAHISONA,SONAMCLOCK,SOUTHWEST,SPCENET,SPRL.ST,SRIRAM,SRPL,STEELCITY,SUMIT,SUPREMEENG,SURANI,SUULD,SVLL,SWARAJ,TARACHAND,TEMBO,THEJO,TIRUPATI,TIRUPATIFL,TOTAL,TOUCHWOOD,UCL,UNIINFO,UNITEDPOLY,UNITEDTEA,UNIVASTU,URAVI,UWCSL,VAISHALI,VARDHACRLC,VASA,VCL,VENKEYS,VERA,VERTOZ,VIRESCENT,VMARCIND'
</code></pre>
<p>I want to split this string into separate strings of 13 and 39 symbols each. 13-symbol string example:</p>
<pre><code>'3RDROCK,AARON,AARVI,ABCOTS.ST,ABINFRA,ABMINTLLTD,ACCORD,ACCURACY,ACEINTEG,AGROPHOS,AHIMSA,AHLADA,AILIMITED'
'AIROLAM,AJOONI,AKASH,AKG,AMBANIORG,AMJUMBO,ANTGRAPHIC,APOLSINHOT,ARTNIRMAN,ARVEE,ASCOM,ASLIND,AHLADA'
</code></pre>
<p>Similarly, I need 39-symbol strings.</p>
<p>Above is done manually</p>
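<p>(For comparison, batching works as intended when applied to the list of symbols rather than the joined string; a sketch with a short made-up sample and the hypothetical helper <code>chunk_strings</code>:)</p>

```python
symbols = ["3RDROCK", "AARON", "AARVI", "ABCOTS.ST", "ABINFRA", "ABMINTLLTD"]

def chunk_strings(items, size):
    # join each group of `size` symbols into its own comma-separated string
    return [",".join(items[i:i + size]) for i in range(0, len(items), size)]

print(chunk_strings(symbols, 3))
# ['3RDROCK,AARON,AARVI', 'ABCOTS.ST,ABINFRA,ABMINTLLTD']
```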
<p>I have tried itertools batched</p>
<pre><code>list(it.batched(symbols, 13))
</code></pre>
<p>and the answers from <a href="https://stackoverflow.com/questions/312443/how-do-i-split-a-list-into-equally-sized-chunks">here</a>, but they all split it character by character, producing this output:</p>
<pre><code>[('3', 'R', 'D', 'R', 'O', 'C', 'K', ',', 'A', 'A', 'R', 'O', 'N'),
(',', 'A', 'A', 'R', 'V', 'I', ',', 'A', 'B', 'C', 'O', 'T', 'S'),
('.', 'S', 'T', ',', 'A', 'B', 'I', 'N', 'F', 'R', 'A', ',', 'A'),
('B', 'M', 'I', 'N', 'T', 'L', 'L', 'T', 'D', ',', 'A', 'C', 'C'),
('O', 'R', 'D', ',', 'A', 'C', 'C', 'U', 'R', 'A', 'C', 'Y', ','),
('A', 'C', 'E', 'I', 'N', 'T', 'E', 'G', ',', 'A', 'G', 'R', 'O'),
('P', 'H', 'O', 'S', ',', 'A', 'H', 'I', 'M', 'S', 'A', ',', 'A'),
('H', 'L', 'A', 'D', 'A', ',', 'A', 'I', 'L', 'I', 'M', 'I', 'T'),
('E', 'D', ',', 'A', 'I', 'R', 'O', 'L', 'A', 'M', ',', 'A', 'J'),
('O', 'O', 'N', 'I', ',', 'A', 'K', 'A', 'S', 'H', ',', 'A', 'K'),
('G', ',', 'A', 'M', 'B', 'A', 'N', 'I', 'O', 'R', 'G', ',', 'A'),
('M', 'J', 'U', 'M', 'B', 'O', ',', 'A', 'N', 'T', 'G', 'R', 'A'),
('P', 'H', 'I', 'C', ',', 'A', 'P', 'O', 'L', 'S', 'I', 'N', 'H'),
('O', 'T', ',', 'A', 'R', 'T', 'N', 'I', 'R', 'M', 'A', 'N', ','),
('A', 'R', 'V', 'E', 'E', ',', 'A', 'S', 'C', 'O', 'M', ',', 'A'),
('S', 'L', 'I', 'N', 'D', ',', 'A', 'T', 'A', 'L', 'R', 'E', 'A'),
('L', ',', 'A', 'U', 'R', 'D', 'I', 'S', ',', 'A', 'V', 'S', 'L'),
(',', 'B', 'A', 'N', 'K', 'A', ',', 'B', 'A', 'N', 'K', 'N', 'I'),
('F', 'T', 'Y', ',', 'B', 'D', 'R', ',', 'B', 'E', 'T', 'A', ','),
('B', 'E', 'W', 'L', 'T', 'D', ',', 'B', 'M', 'E', 'T', 'R', 'I'),
('C', 'S', ',', 'B', 'O', 'H', 'R', 'A', ',', 'B', 'S', 'E', ','),
('B', 'S', 'H', 'S', 'L', ',', 'C', 'A', 'D', 'S', 'Y', 'S', ','),
('C', 'D', 'S', 'L', ',', 'C', 'N', 'X', '1', '0', '0', ',', 'C'),
('N', 'X', '2', '0', '0', ',', 'C', 'N', 'X', 'M', 'I', 'D', 'C'),
('A', 'P', ',', 'C', 'N', 'X', 'P', 'S', 'U', 'B', 'A', 'N', 'K'),
(',', 'C', 'O', 'N', 'S', 'O', 'F', 'I', 'N', 'V', 'T', ',', 'C'),
('O', 'N', 'S', 'U', 'M', 'B', 'E', 'E', 'S', ',', 'C', 'O', 'O'),
('L', 'C', 'A', 'P', 'S', '.', 'S', 'T', ',', 'C', 'R', 'O', 'W'),
('N', ',', 'D', 'A', 'N', 'G', 'E', 'E', ',', 'D', 'E', 'S', 'T'),
('I', 'N', 'Y', ',', 'D', 'I', 'G', 'J', 'A', 'M', 'L', 'M', 'T'),
('D', ',', 'D', 'I', 'L', ',', 'D', 'K', 'E', 'G', 'L', ',', 'D'),
('P', 'A', 'B', 'H', 'U', 'S', 'H', 'A', 'N', ',', 'D', 'P', 'W'),
('I', 'R', 'E', 'S', ',', 'D', 'R', 'S', 'D', 'I', 'L', 'I', 'P'),
(',', 'D', 'U', 'G', 'L', 'O', 'B', 'A', 'L', ',', 'E', 'I', 'F'),
('F', 'L', ',', 'E', 'L', 'G', 'I', 'R', 'U', 'B', 'C', 'O', ','),
('E', 'M', 'K', 'A', 'Y', 'T', 'O', 'O', 'L', 'S', ',', 'E', 'U'),
('R', 'O', 'B', 'O', 'N', 'D', '.', 'S', 'T', ',', 'F', 'E', 'L'),
('I', 'X', ',', 'F', 'O', 'C', 'E', ',', 'G', 'A', 'N', 'G', 'A'),
('F', 'O', 'R', 'G', 'E', ',', 'G', 'E', 'E', 'K', 'A', 'Y', 'W'),
('I', 'R', 'E', ',', 'G', 'I', 'R', 'I', 'R', 'A', 'J', ',', 'G'),
('I', 'R', 'R', 'E', 'S', 'O', 'R', 'T', 'S', ',', 'G', 'K', 'W'),
('L', 'I', 'M', 'I', 'T', 'E', 'D', ',', 'G', 'O', 'L', 'D', 'S'),
('T', 'A', 'R', ',', 'G', 'R', 'E', 'T', 'E', 'X', ',', 'G', 'R'),
('O', 'B', 'T', 'E', 'A', ',', 'H', 'E', 'C', 'P', 'R', 'O', 'J'),
('E', 'C', 'T', ',', 'H', 'I', 'N', 'D', 'C', 'O', 'N', ',', 'I'),
('C', 'E', 'M', 'A', 'K', 'E', ',', 'I', 'N', 'N', 'O', 'V', 'A'),
('N', 'A', ',', 'I', 'R', 'I', 'S', 'D', 'O', 'R', 'E', 'M', 'E'),
(',', 'J', 'A', 'I', 'N', 'A', 'M', ',', 'J', 'A', 'I', 'P', 'U'),
('R', 'K', 'U', 'R', 'T', ',', 'J', 'A', 'K', 'H', 'A', 'R', 'I'),
('A', ',', 'J', 'A', 'L', 'A', 'N', ',', 'J', 'A', 'S', 'H', ','),
('J', 'E', 'T', 'K', 'N', 'I', 'T', ',', 'J', 'M', 'A', ',', 'J'),
('O', 'C', 'I', 'L', ',', 'J', 'S', 'L', 'L', '.', 'S', 'T', ','),
('K', 'E', 'E', 'R', 'T', 'I', ',', 'K', 'K', 'V', 'A', 'P', 'O'),
('W', ',', 'K', 'N', 'A', 'G', 'R', 'I', '.', 'S', 'T', ',', 'K'),
('O', 'T', 'A', 'R', 'I', 'S', 'U', 'G', ',', 'K', 'O', 'T', 'H'),
('A', 'R', 'I', 'S', 'U', 'G', ',', 'K', 'O', 'T', 'Y', 'A', 'R'),
('K', '.', 'S', 'T', ',', 'K', 'R', 'I', 'S', 'H', 'A', 'N', 'A'),
(',', 'K', 'R', 'I', 'S', 'H', 'I', 'V', 'A', 'L', '.', 'S', 'T'),
(',', 'K', 'R', 'I', 'S', 'H', 'N', 'A', 'D', 'E', 'F', ',', 'K'),
('R', 'I', 'T', 'I', 'K', 'A', ',', 'K', 'S', 'H', 'I', 'T', 'I'),
('J', 'P', 'O', 'L', ',', 'K', 'S', 'O', 'L', 'V', 'E', 'S', ','),
('L', 'A', 'G', 'N', 'A', 'M', ',', 'L', 'A', 'T', 'T', 'E', 'Y'),
('S', ',', 'L', 'A', 'X', 'M', 'I', 'C', 'O', 'T', ',', 'L', 'E'),
('M', 'E', 'R', 'I', 'T', 'E', '.', 'S', 'T', ',', 'L', 'E', 'X'),
('U', 'S', ',', 'L', 'F', 'I', 'C', ',', 'L', 'G', 'H', 'L', ','),
('M', 'A', 'C', 'P', 'O', 'W', 'E', 'R', ',', 'M', 'A', 'D', 'H'),
('A', 'V', 'B', 'A', 'U', 'G', ',', 'M', 'A', 'H', 'E', 'S', 'H'),
('W', 'A', 'R', 'I', ',', 'M', 'A', 'H', 'I', 'C', 'K', 'R', 'A'),
(',', 'M', 'A', 'N', 'A', 'V', ',', 'M', 'A', 'N', 'G', 'T', 'I'),
('M', 'B', 'E', 'R', ',', 'M', 'A', 'R', 'S', 'H', 'A', 'L', 'L'),
(',', 'M', 'B', 'A', 'P', 'L', ',', 'M', 'C', 'L', ',', 'M', 'D'),
('L', ',', 'M', 'G', 'E', 'L', ',', 'M', 'H', 'H', 'L', ',', 'M'),
('I', 'L', 'T', 'O', 'N', ',', 'M', 'I', 'T', 'T', 'A', 'L', ','),
('M', 'K', 'P', 'L', ',', 'M', 'M', 'P', ',', 'M', 'O', 'K', 'S'),
('H', ',', 'M', 'P', 'T', 'O', 'D', 'A', 'Y', ',', 'M', 'S', 'T'),
('C', 'L', 'T', 'D', ',', 'N', 'D', 'G', 'L', ',', 'N', 'I', 'D'),
('A', 'N', ',', 'N', 'I', 'D', 'A', 'N', '.', 'S', 'T', ',', 'N'),
('I', 'F', 'T', 'Y', ',', 'N', 'I', 'F', 'T', 'Y', 'A', 'L', 'P'),
('H', 'A', '5', '0', ',', 'N', 'I', 'R', 'A', 'J', 'I', 'S', 'P'),
('A', 'T', ',', 'N', 'I', 'T', 'I', 'R', 'A', 'J', ',', 'N', 'P'),
('S', 'T', ',', 'N', 'R', 'L', ',', 'O', 'M', 'F', 'U', 'R', 'N'),
(',', 'O', 'N', 'E', 'P', 'O', 'I', 'N', 'T', ',', 'O', 'S', 'E'),
('I', 'N', 'T', 'R', 'U', 'S', 'T', ',', 'O', 'S', 'I', 'A', 'H'),
('Y', 'P', 'E', 'R', ',', 'O', 'S', 'W', 'A', 'L', 'S', 'E', 'E'),
('D', 'S', ',', 'P', 'A', 'N', 'S', 'A', 'R', 'I', ',', 'P', 'A'),
('R', ',', 'P', 'A', 'R', 'I', 'N', ',', 'P', 'A', 'R', 'T', 'Y'),
('C', 'R', 'U', 'S', ',', 'P', 'A', 'S', 'H', 'U', 'P', 'A', 'T'),
('I', ',', 'P', 'A', 'V', 'N', 'A', 'I', 'N', 'D', ',', 'P', 'E'),
('N', 'T', 'A', 'G', 'O', 'L', 'D', ',', 'P', 'E', 'R', 'F', 'E'),
('C', 'T', ',', 'P', 'K', 'T', 'E', 'A', ',', 'P', 'R', 'E', 'C'),
('I', 'S', 'I', 'O', 'N', '.', 'S', 'T', ',', 'P', 'R', 'E', 'C'),
('O', 'T', ',', 'P', 'R', 'I', 'T', 'I', ',', 'P', 'R', 'O', 'L'),
('I', 'F', 'E', ',', 'P', 'R', 'O', 'P', 'E', 'Q', 'U', 'I', 'T'),
('Y', '.', 'S', 'T', ',', 'Q', 'U', 'A', 'D', 'P', 'R', 'O', ','),
('R', 'A', 'J', 'M', 'E', 'T', ',', 'R', 'E', 'L', 'I', 'A', 'B'),
('L', 'E', ',', 'R', 'E', 'P', 'L', ',', 'R', 'E', 'X', 'P', 'I'),
('P', 'E', 'S', ',', 'R', 'K', 'E', 'C', ',', 'R', 'M', 'D', 'R'),
('I', 'P', ',', 'R', 'O', 'H', 'I', 'T', 'F', 'E', 'R', 'R', 'O'),
(',', 'R', 'P', 'P', 'L', ',', 'S', 'A', 'G', 'A', 'R', 'D', 'E'),
('E', 'P', ',', 'S', 'A', 'K', 'A', 'R', ',', 'S', 'A', 'N', 'G'),
('I', 'N', 'I', 'T', 'A', ',', 'S', 'E', 'C', 'L', ',', 'S', 'E'),
('R', 'V', 'O', 'T', 'E', 'C', 'H', ',', 'S', 'H', 'A', 'I', 'V'),
('A', 'L', ',', 'S', 'H', 'A', 'N', 'T', 'I', ',', 'S', 'H', 'I'),
('G', 'A', 'N', '.', 'S', 'T', ',', 'S', 'H', 'I', 'V', 'A', 'U'),
('M', ',', 'S', 'H', 'R', 'A', 'D', 'H', 'A', ',', 'S', 'H', 'R'),
('E', 'M', 'I', 'N', 'V', 'I', 'T', ',', 'S', 'H', 'R', 'E', 'N'),
('I', 'K', ',', 'S', 'H', 'U', 'B', 'H', 'L', 'A', 'X', 'M', 'I'),
(',', 'S', 'I', 'D', 'D', 'H', 'I', 'K', 'A', ',', 'S', 'I', 'G'),
('M', 'A', ',', 'S', 'I', 'K', 'K', 'O', ',', 'S', 'I', 'L', 'G'),
('O', ',', 'S', 'I', 'L', 'L', 'Y', 'M', 'O', 'N', 'K', 'S', ','),
('S', 'I', 'N', 'T', 'E', 'R', 'C', 'O', 'M', ',', 'S', 'K', 'S'),
('T', 'E', 'X', 'T', 'I', 'L', 'E', ',', 'S', 'M', 'V', 'D', ','),
('S', 'O', 'L', 'E', 'X', ',', 'S', 'O', 'N', 'A', 'H', 'I', 'S'),
('O', 'N', 'A', ',', 'S', 'O', 'N', 'A', 'M', 'C', 'L', 'O', 'C'),
('K', ',', 'S', 'O', 'U', 'T', 'H', 'W', 'E', 'S', 'T', ',', 'S'),
('P', 'C', 'E', 'N', 'E', 'T', ',', 'S', 'P', 'R', 'L', '.', 'S'),
('T', ',', 'S', 'R', 'I', 'R', 'A', 'M', ',', 'S', 'R', 'P', 'L'),
(',', 'S', 'T', 'E', 'E', 'L', 'C', 'I', 'T', 'Y', ',', 'S', 'U'),
('M', 'I', 'T', ',', 'S', 'U', 'P', 'R', 'E', 'M', 'E', 'E', 'N'),
('G', ',', 'S', 'U', 'R', 'A', 'N', 'I', ',', 'S', 'U', 'U', 'L'),
('D', ',', 'S', 'V', 'L', 'L', ',', 'S', 'W', 'A', 'R', 'A', 'J'),
(',', 'T', 'A', 'R', 'A', 'C', 'H', 'A', 'N', 'D', ',', 'T', 'E'),
('M', 'B', 'O', ',', 'T', 'H', 'E', 'J', 'O', ',', 'T', 'I', 'R'),
('U', 'P', 'A', 'T', 'I', ',', 'T', 'I', 'R', 'U', 'P', 'A', 'T'),
('I', 'F', 'L', ',', 'T', 'O', 'T', 'A', 'L', ',', 'T', 'O', 'U'),
('C', 'H', 'W', 'O', 'O', 'D', ',', 'U', 'C', 'L', ',', 'U', 'N'),
('I', 'I', 'N', 'F', 'O', ',', 'U', 'N', 'I', 'T', 'E', 'D', 'P'),
('O', 'L', 'Y', ',', 'U', 'N', 'I', 'T', 'E', 'D', 'T', 'E', 'A'),
(',', 'U', 'N', 'I', 'V', 'A', 'S', 'T', 'U', ',', 'U', 'R', 'A'),
('V', 'I', ',', 'U', 'W', 'C', 'S', 'L', ',', 'V', 'A', 'I', 'S'),
('H', 'A', 'L', 'I', ',', 'V', 'A', 'R', 'D', 'H', 'A', 'C', 'R'),
('L', 'C', ',', 'V', 'A', 'S', 'A', ',', 'V', 'C', 'L', ',', 'V'),
('E', 'N', 'K', 'E', 'Y', 'S', ',', 'V', 'E', 'R', 'A', ',', 'V'),
('E', 'R', 'T', 'O', 'Z', ',', 'V', 'I', 'R', 'E', 'S', 'C', 'E'),
('N', 'T', ',', 'V', 'M', 'A', 'R', 'C', 'I', 'N', 'D')]
</code></pre>
<p>Thank you in advance for your time.</p>
|
<python><string><replace><text-extraction>
|
2024-06-27 09:46:41
| 2
| 1,841
|
Hamza Ahmed
|
78,676,743
| 2,049,685
|
Unable to strip whitespace from a string using pyparsing set_parse_action()
|
<p>I've got a generic "text block" element, for which I copied the whitespace-stripping code from the <a href="https://pyparsing-docs.readthedocs.io/en/latest/pyparsing.html" rel="nofollow noreferrer">documentation</a>:</p>
<pre><code>import pyparsing as pp
text_block = pp.Group(
pp.OneOrMore(
pp.SkipTo(pp.LineEnd()) + pp.LineEnd().suppress(),
stopOn=pp.StringEnd() | (pp.LineStart() + (pp.Literal("E)") | pp.Literal("F)")))
)
).set_parse_action(pp.token_map(str.strip))
</code></pre>
<p>Unfortunately this returns an error:</p>
<blockquote>
<p>FAIL-EXCEPTION: TypeError: descriptor 'strip' for 'str' objects
doesn't apply to a 'ParseResults' object</p>
</blockquote>
<p>I replaced the use of <code>token_map</code> with a function:</p>
<pre><code>def _strip_whitespace(tokens):
return [token.str.strip() for token in tokens]
text_block = pp.Group(
pp.OneOrMore(
pp.SkipTo(pp.LineEnd()) + pp.LineEnd().suppress(),
stopOn=pp.StringEnd() | (pp.LineStart() + (pp.Literal("E)") | pp.Literal("F)")))
)
).set_parse_action(_strip_whitespace)
</code></pre>
<p>...but now it deletes all the text(!)</p>
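<p>For reference, a minimal sketch (assuming pyparsing 3.x) of one way around this: attach the stripping action to the per-line expression instead of to the <code>Group</code>. Attached to the <code>Group</code>, <code>token_map(str.strip)</code> receives a nested <code>ParseResults</code>, hence the <code>TypeError</code>; attached to the line, it only ever sees plain strings. The <code>stopOn</code> alternatives are omitted here for brevity:</p>

```python
import pyparsing as pp

# Strip each captured line before it is collected into the Group; at this
# level token_map(str.strip) receives strings, not ParseResults objects.
line = (pp.SkipTo(pp.LineEnd()) + pp.LineEnd().suppress()).set_parse_action(
    pp.token_map(str.strip)
)
text_block = pp.Group(pp.OneOrMore(line, stop_on=pp.StringEnd()))

result = text_block.parse_string("  first line  \n  second line  \n")
print(result[0].as_list())
```

<p>The original <code>stopOn=pp.StringEnd() | (pp.LineStart() + ...)</code> expression should drop in unchanged.</p>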
|
<python><string><whitespace><pyparsing><removing-whitespace>
|
2024-06-27 09:30:30
| 1
| 631
|
Michael Henry
|
78,676,602
| 2,575,970
|
How can I determine whether a Word document has a password?
|
<p>I am trying to read Word documents using Python. However, I am stuck wherever a document is password protected, as I do not have the password for the file(s).</p>
<p>How can I detect whether a file has a password, so that I can skip opening such files?</p>
<p>Currently, the below code opens a dialog/prompt window in MS-Word to enter the password and keeps waiting for a response.</p>
<pre><code>word = win32.gencache.EnsureDispatch('Word.Application')
doc = word.Documents.Open(r"D:\appointment\PasswordProtectedDoc.doc")
</code></pre>
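<p>One library-free heuristic (an assumption on my part, and valid for <code>.docx</code>/OOXML files only): an unencrypted <code>.docx</code> is a ZIP archive, while a password-protected one is repackaged as an OLE compound file, so a ZIP signature check fails. Legacy binary <code>.doc</code> files would need an OLE-level inspection instead, e.g. with a library such as <code>olefile</code> or <code>msoffcrypto-tool</code>:</p>

```python
import zipfile

def docx_is_encrypted(path):
    """Heuristic: a readable .docx is a ZIP container; a password-protected
    one is stored as an OLE compound file and fails the ZIP check."""
    return not zipfile.is_zipfile(path)
```

<p>Files flagged this way could then be skipped before ever being handed to <code>Documents.Open</code>, avoiding the password prompt.</p>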
|
<python><passwords><win32com><doc>
|
2024-06-27 09:03:31
| 1
| 416
|
WhoamI
|
78,676,407
| 241,515
|
Polars: pandas equivalent of selecting column names from a list
|
<p>I have two DataFrames in polars, one that describes the metadata and one with the actual data (LazyFrames are used because the actual data is larger):</p>
<pre><code>import polars as pl
df = pl.LazyFrame(
{
"ID": ["CX1", "CX2", "CX3"],
"Sample1": [1, 1, 1],
"Sample2": [2, 2, 2],
"Sample3": [4, 4, 4],
}
)
df_meta = pl.LazyFrame(
{
        "sample": ["Sample1", "Sample2", "Sample3", "Sample4"],
"qc": ["pass", "pass", "fail", "pass"]
}
)
</code></pre>
<p>I need to select the <em>columns</em> in <code>df</code> for samples that have passing <code>qc</code> using the information in <code>df_meta</code>. As you can see, <code>df_meta</code> has an additional sample, which of course we are not interested in as it's not part of our data.</p>
<p>In pandas, I'd do (not very elegant but does the job):</p>
<pre><code>df.loc[:, df.columns.isin(df_meta.query("qc == 'pass'")["sample"])]
</code></pre>
<p>However I'm not sure about how doing this in polars. Reading through SO and the docs didn't give me a definite answer.</p>
<p>I've tried:</p>
<pre><code>df.with_context(
df_meta.filter(pl.col("qc") == "pass").select(pl.col("sample").alias("meta_ids"))
).with_columns(
pl.all().is_in("meta_ids")
).collect()
</code></pre>
<p>Which however raises an exception:</p>
<pre><code>InvalidOperationError: `is_in` cannot check for String values in Int64 data
</code></pre>
<p>I assume it's checking the content of the columns, but I'm interested in the column <em>names</em>.</p>
<p>I've also tried:</p>
<pre><code>meta_ids = df_meta.filter(pl.col("qc") == "pass").get_column("sample")
df.select(pl.col(meta_ids))
</code></pre>
<p>but as expected, an exception is raised as there's one sample not accounted for in the first dataFrame:</p>
<pre><code>ColumnNotFoundError: Sample4
</code></pre>
<p>What would be the correct way to do this?</p>
|
<python><python-polars>
|
2024-06-27 08:26:10
| 3
| 4,973
|
Einar
|
78,676,400
| 1,686,814
|
Customizing pygtk file chooser dialog
|
<p>I am creating a Gtk file chooser dialog as follows (see below for full example):</p>
<pre><code>dialog = Gtk.FileChooserDialog(
title="Select a File",
action=Gtk.FileChooserAction.OPEN)
</code></pre>
<p>I would also like to add a checkbox and dropdown combobox as extra widgets. Adding one extra widget works fine:</p>
<pre><code>cb = Gtk.CheckButton("Only media files")
dialog.set_extra_widget(cb)
</code></pre>
<p>However, I would like to have a label and a combo box as well. I tried this:</p>
<pre><code>cb = Gtk.CheckButton("Only media files")
dialog.set_extra_widget(cb)
db = Gtk.ComboBoxText()
db.append_text("Option 1")
db.append_text("Option 2")
db.set_active(0)
dialog.set_extra_widget(db)
</code></pre>
<p>However this only shows the combo box, not the check button. I thought maybe only one widget is allowed, so I created an hbox:</p>
<pre><code>cb = Gtk.CheckButton("Only media files")
db = Gtk.ComboBoxText()
db.append_text("Option 1")
db.append_text("Option 2")
db.set_active(0)
hbox = Gtk.HBox(spacing=10)
hbox.pack_start(cb, False, False, 0)
hbox.pack_start(db, False, False, 0)
dialog.set_extra_widget(hbox)
</code></pre>
<p>Nope, nothing is shown. That doesn't work either. Then I read <a href="https://python-gtk-3-tutorial.readthedocs.io/en/latest/dialogs.html#custom-dialogs" rel="nofollow noreferrer">in the manual</a> that "To pack widgets into a custom dialog, you should pack them into the Gtk.Box, available via Gtk.Dialog.get_content_area()." So I tried this:</p>
<pre><code> cb = Gtk.CheckButton("Only media files")
db = Gtk.ComboBoxText()
db.append_text("Option 1")
db.append_text("Option 2")
db.set_active(0)
hbox = Gtk.HBox(spacing=10)
hbox.pack_start(cb, False, False, 0)
hbox.pack_start(db, False, False, 0)
dbox = dialog.get_content_area()
dbox.pack_start(hbox, False, False, 0)
</code></pre>
<p>Thus, my question is this: how can I add multiple custom widgets to the standard file chooser dialog from pygtk?</p>
<p>Here is a minimal reproducible (I hope) example. Just exchange the code between the scissors with the fragments above if you want to test it.</p>
<pre><code>import gi
gi.require_version('Gtk', '3.0')
from gi.repository import Gtk
def on_button_clicked(button):
dialog = Gtk.FileChooserDialog(
title="Select a File",
action=Gtk.FileChooserAction.OPEN)
# 8< -------------------
cb = Gtk.CheckButton("Only media files")
db = Gtk.ComboBoxText()
db.append_text("Option 1")
db.append_text("Option 2")
db.set_active(0)
hbox = Gtk.HBox(spacing=10)
hbox.pack_start(cb, False, False, 0)
hbox.pack_start(db, False, False, 0)
dbox = dialog.get_content_area()
dbox.pack_start(hbox, False, False, 0)
# 8< -------------------
response = dialog.run()
dialog.destroy()
window = Gtk.Window()
window.set_default_size(300, 100)
window.connect("destroy", Gtk.main_quit)
button = Gtk.Button(label="Open FileChooserDialog")
button.connect("clicked", on_button_clicked)
window.add(button)
window.show_all()
Gtk.main()
</code></pre>
|
<python><user-interface><pygtk><filechooser>
|
2024-06-27 08:24:03
| 1
| 17,210
|
January
|
78,676,296
| 485,330
|
AWS Lambda Custom JWT Validation
|
<p>I've built a Lambda function that first validates the JWT token and then extracts the user's unique ID ("sub").</p>
<p>In a non Lambda environment the script works fine, however in the AWS Lambda I'm having an error message.</p>
<p><strong>What could be the problem?</strong></p>
<blockquote>
<p>Unexpected error during JWT validation: Unable to find an algorithm
for key: {'alg': 'RS256', 'e': 'AQAB', 'kid':
'dmAQX7bVDINFkTGxZc5YCxF5ZA/pcaRsQMUoBbRt4bw=', 'kty': 'RSA', 'n':
'u9hHbyMaI-PWsTG9MtaHjxwBmMez6VeV-ScqIgllBUSQkx8Ao...vGUIG39rb3nPmNVCunBw',
'use': 'sig'}</p>
</blockquote>
<p>This is my AWS Lambda code:</p>
<pre><code>import json
import os
import requests
from jose import jwt, jwk
def get_efs_keys(file_name="/mnt/efs/jwks.json"):
# The jkws.json is obtained from here:
# https://cognito-idp.<Region>.amazonaws.com/<userPoolId>/.well-known/jwks.json
try:
with open(file_name, 'r') as file:
jwks_data = json.load(file)
return jwks_data.get('keys', [])
except Exception as e:
print(f"An error occurred while fetching keys: {e}")
return []
def validate_jwt(jwt_token, keys):
if not jwt_token:
return False, False
try:
headers = jwt.get_unverified_headers(jwt_token)
kid = headers.get('kid')
if not kid:
return False, False
key = next((key for key in keys if key['kid'] == kid), None)
if key is None:
return False, False
public_key = jwk.construct(key)
decoded_token = jwt.decode(jwt_token, public_key, algorithms=['RS256'], audience=os.environ.get('APP_CLIENT_ID'))
return True, decoded_token.get('sub', False)
except jwt.JWTError as e:
print(f"JWT token validation error: {e}")
return False, False
except Exception as e:
print(f"Unexpected error during JWT validation: {e}")
return False, False
def lambda_handler(event, context):
# Get all headers from the event
headers = event.get('headers', {})
# Get the Authorization header
authorization_header = headers.get('Authorization', '')
# Parse the Bearer token to get only the access token (case-insensitive)
if authorization_header.lower().startswith('bearer '):
access_token = authorization_header[7:]
else:
access_token = None
# Get keys from EFS
keys = get_efs_keys()
# Validate the JWT token
jwt_valid, sub = validate_jwt(access_token, keys)
# Create a response
response_body = {
'access_token': access_token,
'jwt_valid': jwt_valid,
'sub': sub
}
response = {
'statusCode': 200,
'headers': {
'Content-Type': 'application/json'
},
'body': json.dumps(response_body)
}
return response
</code></pre>
<p>If the validation is successful, the "jwt_valid" must be "True" and the "sub" the respective unique value.</p>
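<p>One thing worth checking (an assumption, given that the same code works outside Lambda): <code>python-jose</code> raises this "Unable to find an algorithm for key" error when none of its cryptographic backends can be imported, which easily happens if the Lambda deployment package was built without one. Making the backend an explicit dependency rules that out, for example:</p>

```text
# hypothetical requirements.txt for the Lambda deployment package
python-jose[cryptography]   # bundles a backend that supports RS256
requests
```

<p>With a backend present, <code>jwk.construct(key)</code> can resolve the <code>RS256</code> algorithm from the JWK's <code>alg</code> field as expected.</p>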
|
<python><amazon-web-services><aws-lambda><jwt><amazon-cognito>
|
2024-06-27 08:05:38
| 1
| 704
|
Andre
|
78,676,276
| 8,005,367
|
pip install | How to install private project from the local file system when using requirements.txt?
|
<p>I'm trying to figure out how to properly use <code>python3.8 -m pip install</code> to install the requirements from <code>requirements.txt</code> AND simultaneously install certain packages from the local file system instead of from the repo. We have a few different private repos that we may need to work on simultaneously for larger changes.</p>
<p>I have found that I could do this as two separate commands. First <code>python3.8 -m pip install -r requirements.txt --index-url https://private/repo/dependency/simple</code> and then <code>python3.8 -m pip install path/to/dependency</code>. But I would like to combine these into one command if possible. This would avoid the need to generate and manage access tokens as part of the index url for each developer.</p>
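<p>For what it's worth, <code>requirements.txt</code> itself accepts both pip options and local paths, so a single file (sketched below with hypothetical names; assuming a reasonably recent pip) can cover both cases in one command:</p>

```text
# requirements.txt (hypothetical layout)
--index-url https://private/repo/dependency/simple
some-private-package==1.2.3
# a line that is a path installs straight from the local checkout:
./path/to/dependency
```

<p>after which <code>python3.8 -m pip install -r requirements.txt</code> is the only command each developer needs.</p>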
|
<python><python-3.x><pip>
|
2024-06-27 08:02:09
| 3
| 1,030
|
Ryan Pierce Williams
|
78,676,239
| 14,282,714
|
Column layout inside submit form streamlit
|
<p>I would like to have two <code>selectbox</code> elements side by side in a <a href="https://docs.streamlit.io/develop/api-reference/execution-flow/st.form" rel="nofollow noreferrer"><code>st.form</code></a> using a <code>st.columns</code> layout. Unfortunately, the select boxes end up outside the form. Here is some reproducible code:</p>
<pre><code>import streamlit as st
st.header("Selectbox side by side in form")
col1, col2 = st.columns(2)
with st.form('Form1'):
col1.selectbox("Select track", ["track 1", "track 2"])
col2.selectbox("Select track 2", ["track 1", "track 2"])
st.slider("Select your race finish position", 1, 12, key="number")
st.form_submit_button('Submit your race')
</code></pre>
<p>Output:</p>
<p><a href="https://i.sstatic.net/XIPht0Oc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XIPht0Oc.png" alt="enter image description here" /></a></p>
<p>As you can see, the select boxes are outside the form, which is not what I want. If you use only one selectbox it is of course inside the form, but I would like to have two select boxes side by side. So I was wondering if anyone knows how to use a column layout inside a form in streamlit?</p>
|
<python><streamlit>
|
2024-06-27 07:54:59
| 1
| 42,724
|
Quinten
|
78,676,206
| 20,920,790
|
How to avoid error ('cannot mix list and non-list, non-null values', 'Conversion failed for column position with type object') in Airflow?
|
<p>I got this task in my dag:</p>
<pre><code>@task
def get_result_from_records_api(api, tries: int, result_from_salons_api: list):
salon_ids_list = result_from_salons_api[1]
# result is 4 pd.DataFrames
result_from_records_api = get_data_or_raise_error_with_retry(api.get_records_staff_sales_of_services_goods, tries, salon_ids_list=salon_ids_list)
# make lists for dfs
records_lst = []
staff_from_records_lst = []
services_sales_lst = []
good_sales_lst = []
# put dfs in lists
records_lst.append(result_from_records_api[0])
staff_from_records_lst.append(result_from_records_api[1])
services_sales_lst.append(result_from_records_api[2])
good_sales_lst.append(result_from_records_api[3])
# make dict with lists
result_dict = {
'records': records_lst
, 'staff_from_records': staff_from_records_lst
, 'services_sales': services_sales_lst
, 'good_sales': good_sales_lst
}
return result_dict
</code></pre>
<p>This task returns 4 pandas.DataFrames without error (returned values is correct).</p>
<p>I've tried putting the result in a list, tuple, and dictionary, but every time I get this error:</p>
<pre><code>{xcom.py:664} ERROR - ('cannot mix list and non-list, non-null values', 'Conversion failed for column position with type object'). If you are using pickle instead of JSON for XCom, then you need to enable pickle support for XCom in your airflow config or make sure to decorate your object with attr.
</code></pre>
<p>How to avoid this error?</p>
<p>P.S. I already enabled XCom pickling in docker-compose.yml:</p>
<pre><code>AIRFLOW__CORE__ENABLE_XCOM_PICKLING=true
</code></pre>
<p>Airflow 2.8.2
Python 3.11</p>
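<p>A sketch of one workaround (my assumption about the cause: XCom trips over the mixed list/non-list <code>position</code> column while converting the DataFrames): serialise each DataFrame to a JSON string before returning it, and parse it back in the downstream task, so XCom only ever handles plain strings:</p>

```python
import io

import pandas as pd

# Stand-in for one of the DataFrames returned by the API; the
# "position" column mixes lists and scalars, which the conversion
# inside XCom cannot handle.
records = pd.DataFrame({"id": [1, 2], "position": [[10, 20], None]})

# Serialise explicitly so only JSON strings travel through XCom.
payload = {"records": records.to_json(orient="split")}

# In the downstream task, rebuild the DataFrame from the string.
restored = pd.read_json(io.StringIO(payload["records"]), orient="split")
print(restored.shape)
```

<p>The same pattern would apply to each of the four DataFrames in <code>result_dict</code>, and it also removes the need for <code>ENABLE_XCOM_PICKLING</code>.</p>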
|
<python><airflow>
|
2024-06-27 07:47:40
| 1
| 402
|
John Doe
|
78,676,114
| 1,064,416
|
How to serve static files for django-cms on DigitalOcean?
|
<p>I have deployed a django-cms app on DigitalOcean following the steps in this tutorial:</p>
<p><a href="https://www.digitalocean.com/community/tutorials/how-to-deploy-django-to-app-platform" rel="nofollow noreferrer">https://www.digitalocean.com/community/tutorials/how-to-deploy-django-to-app-platform</a></p>
<p>The app is up and running, but static-files are not served.</p>
<p>I created the additional static site referring to the same GitHub repository, and it is up and running too.</p>
<p><strong>settings.py</strong></p>
<pre><code>STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")
STATICFILES_DIRS = [
BASE_DIR / "project_name" / "static",
]
</code></pre>
<p>I did collect the static files with <code>python manage.py collectstatic</code> and the folder and files are present in the intended location.</p>
<p>How do I "connect" the app on DigitalOcean to my static files?</p>
<p>Some screenshots from DigitalOcean:</p>
<p><a href="https://i.sstatic.net/cWcf3vlg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cWcf3vlg.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/MlwUk4pB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MlwUk4pB.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/zqOwwV5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zqOwwV5n.png" alt="enter image description here" /></a></p>
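<p>One common alternative to the separate static-site component (an assumption on my part, since it sidesteps DigitalOcean's static site entirely) is to let the Django app serve its own collected static files with WhiteNoise. A hypothetical settings sketch:</p>

```python
# settings.py — hypothetical WhiteNoise setup (add "whitenoise" to requirements)
MIDDLEWARE = [
    "django.middleware.security.SecurityMiddleware",
    # WhiteNoise should come directly after SecurityMiddleware:
    "whitenoise.middleware.WhiteNoiseMiddleware",
    # ... the rest of the existing middleware ...
]

STATIC_URL = "/static/"
STATIC_ROOT = os.path.join(BASE_DIR, "staticfiles")

# Django >= 4.2 uses STORAGES; older versions use
# STATICFILES_STORAGE = "whitenoise.storage.CompressedManifestStaticFilesStorage"
STORAGES = {
    "staticfiles": {
        "BACKEND": "whitenoise.storage.CompressedManifestStaticFilesStorage",
    },
}
```

<p>With this in place, the files produced by <code>collectstatic</code> are served by the app itself under <code>/static/</code>, so no second component is needed.</p>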
|
<python><django><digital-ocean><django-cms>
|
2024-06-27 07:27:07
| 1
| 1,021
|
Rockbot
|
78,676,037
| 14,463,396
|
Turn a list of tuples into pandas dataframe with single column
|
<p>I have a list of tuples like:</p>
<pre><code>tuple_lst = [('foo', 'bar'), ('bar', 'foo'), ('ping', 'pong'), ('pong', 'ping')]
</code></pre>
<p>And I want to create a Dataframe with one column containing each tuple pair, like:</p>
<pre><code>| one col |
| -------- |
| ('foo', 'bar') |
| ('bar', 'foo') |
| ('ping', 'pong') |
| ('pong', 'ping') |
</code></pre>
<p>I tried:</p>
<pre><code>df = pd.DataFrame(tuple_lst, columns='one col')
</code></pre>
<p>But this throws an error as it's trying to split the tuples into 2 separate columns. I know if I pass a list of 2 column names here, it would produce a dataframe with 2 columns which is not what I want. I guess I could then put these two columns back together into a list of tuples, but this feels like a lot of work to break them up and put them back together, I feel there must be a simpler way to do this? I need the output to be a dataframe not a series so I can add other columns etc later on.</p>
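<p>In case it helps, wrapping the list in a <code>Series</code> first keeps each tuple as a single scalar value, so the constructor never tries to unpack it into separate columns (a sketch):</p>

```python
import pandas as pd

tuple_lst = [("foo", "bar"), ("bar", "foo"), ("ping", "pong"), ("pong", "ping")]

# A Series treats each tuple as one value; the DataFrame then has one column.
df = pd.DataFrame({"one col": pd.Series(tuple_lst)})
print(df.shape)
```

<p>Further columns can then be added to <code>df</code> as usual.</p>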
|
<python><pandas>
|
2024-06-27 07:09:19
| 1
| 3,395
|
Emi OB
|
78,675,981
| 4,578,454
|
python selenium Headless chrome doesn't scroll
|
<p>I'm working on a web scraper to collect Facebook post comments for analytics purposes.</p>
<p>On Facebook, after login, we can scroll the post page to get all the comments, which are loaded dynamically as the page scrolls. Unfortunately, I can't get the page to scroll in <code>headless</code> mode, though it works in non-headless mode.</p>
<p>I have referred the following posts - <a href="https://stackoverflow.com/questions/48257870/headless-chrome-with-selenium-can-only-find-ways-to-scroll-non-headless">Post 1</a> <a href="https://www.reddit.com/r/Python/comments/7qqfg6/python_selenium_chrome_possible_to_scroll_headless/?rdt=51118" rel="nofollow noreferrer">Post 2</a></p>
<p>Here's my code</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import re
import time
from decouple import config
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.wait import WebDriverWait
import yake
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import NoSuchElementException
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36"
options = Options()
options.add_argument('--disable-gpu-sandbox')
options.add_argument('--disable-gpu')
options.add_argument('--disable-software-rasterizer')
options.add_argument('--disable-dev-shm-usage')
options.add_argument('--no-sandbox')
options.add_argument("--window-size=1280,700")
options.add_argument("--headless=new")
options.add_argument(f"--user-agent={user_agent}")
driver = webdriver.Chrome(options=options)
driver.get("https://www.facebook.com/")
email_input = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.ID, "email"))
)
password_input = driver.find_element(By.ID, "pass")
email_input.send_keys(config("FB_EMAIL_INPUT"))
password_input.send_keys(config("FB_PASSWORD_INPUT"))
password_input.send_keys(Keys.RETURN)
time.sleep(1)
try:
profile = WebDriverWait(driver, 10).until(
EC.presence_of_element_located((By.XPATH, "//div[@aria-label='Your profile']"))
)
print("Login successful")
except NoSuchElementException:
print("Login failed")
POST_URL = "https://www.facebook.com/thebetterindia/posts/pfbid025Yo2f5Qsd8NDL4AoFoHuvjeAURiRVc7rQ4uZBbULMuUWCfZ9NURRfeVha7aPpnn3l"
driver.get(POST_URL)
def infinite_scroll(driver, timeout=10):
last_height = driver.execute_script("return document.body.scrollHeight")
while True:
driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
time.sleep(timeout)
new_height = driver.execute_script("return document.body.scrollHeight")
if new_height == last_height:
break
last_height = new_height
try:
infinite_scroll(driver, timeout=2)
except Exception as e:
print(f"An exception occurred: {e}")
</code></pre>
|
<python><google-chrome><selenium-webdriver>
|
2024-06-27 06:58:16
| 1
| 4,667
|
silverFoxA
|
78,675,968
| 6,936,582
|
Pandas compact rows when data is missing
|
<p>I have a list of dicts where each dict can have different keys. I want to create a dataframe with one row where each key is a column and the row is its value:</p>
<pre><code>import pandas as pd
data = [{"A":1}, {"B":2}, {"C":3}]
df = pd.DataFrame(data)
print(df.to_string(index=False))
# A B C
# 1.0 NaN NaN
# NaN 2.0 NaN
# NaN NaN 3.0
</code></pre>
<p>What I want:</p>
<pre><code># A B C
# 1.0 2.0 3.0
</code></pre>
<p>How can I drop/compact the rows so the NaN values disappear?</p>
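<p>Since every dict here contributes at most one value per column, one option (sketched below) is to merge the dicts before building the frame, so pandas never sees the gaps; an equivalent frame-level route is to back-fill and keep only the first row:</p>

```python
import pandas as pd

data = [{"A": 1}, {"B": 2}, {"C": 3}]

# Option 1: merge the dicts first, then build a one-row frame.
merged = {k: v for d in data for k, v in d.items()}
df = pd.DataFrame([merged])
print(df.to_string(index=False))

# Option 2: collapse an already-sparse frame by back-filling,
# then keeping the first row (values become floats due to the NaNs).
df2 = pd.DataFrame(data).bfill().iloc[[0]]
```

<p>Option 1 also keeps the original integer dtypes, since no NaN placeholders are ever created.</p>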
|
<python><pandas><dataframe>
|
2024-06-27 06:55:16
| 2
| 2,220
|
Bera
|
78,675,944
| 3,486,684
|
Indexing numpy array of shape `(A, B, C)` with `[[a, b], [c, d], :]` (`0 <= a, b < A`, `0 <= c, d < B`) produces shape `(2, C)` instead of `(2, 2, C)`
|
<p>Here's the example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
A = np.random.randint(100)
B = np.random.randint(100)
C = np.random.randint(100)
print(f"{A=}, {B=}, {C=}")
x = np.random.random((A, B, C))
print(f"{x.shape=}")
a = np.random.randint(0, A)
b = np.random.randint(0, A)
print(f"{a=}, {b=}")
c = np.random.randint(0, B)
d = np.random.randint(0, B)
print(f"{c=}, {d=}")
print(f"{x[[a, b], [c, d], :].shape=}")
print(f"{x[[a, b]][:, [c, d]].shape=}")
</code></pre>
<pre><code>A=7, B=40, C=57
x.shape=(7, 40, 57)
a=4, b=1
c=10, d=5
x[[a, b], [c, d], :].shape=(2, 57)
x[[a, b]][:, [c, d]].shape=(2, 2, 57)
</code></pre>
<p>I would have expected indexing with <code>[[a, b], [c, d], :]</code> to produce a shape <code>(2, 2, C)</code>?</p>
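<p>This is numpy's standard advanced-indexing behaviour: integer index arrays in multiple dimensions are broadcast against each other and walked <em>in parallel</em>, so <code>x[[a, b], [c, d], :]</code> selects just the two elements <code>x[a, c]</code> and <code>x[b, d]</code>. To get the outer-product (all-combinations) selection with shape <code>(2, 2, C)</code>, <code>np.ix_</code> builds the right broadcastable index arrays:</p>

```python
import numpy as np

x = np.arange(2 * 3 * 4).reshape(2, 3, 4)

paired = x[[0, 1], [1, 2], :]        # picks x[0, 1] and x[1, 2] -> (2, 4)
grid = x[np.ix_([0, 1], [1, 2])]     # all four combinations    -> (2, 2, 4)

print(paired.shape, grid.shape)
```

<p>The chained form <code>x[[a, b]][:, [c, d]]</code> gives the same result as <code>np.ix_</code>, but at the cost of an intermediate copy.</p>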
|
<python><numpy><numpy-slicing>
|
2024-06-27 06:50:32
| 1
| 4,654
|
bzm3r
|
78,675,541
| 7,478,147
|
Launch multiple server apps in localhost with different port numbers within the same container in Cloud Run
|
<p>I'm creating a webapp with 2 main services: Flask and <a href="https://docs.chainlit.io/get-started/pure-python" rel="nofollow noreferrer">Chainlit</a>.</p>
<p>When I launch my webapp locally on <code>localhost:8080</code>, on my landing page, when I click on the "chat" button, it redirects me to my second service <code>chainlit</code>, which is an AI chatbot (there are many other functionalities in the app). Here's how I launch my two services in production:</p>
<p>Flask: <code>poetry run gunicorn -b 0.0.0.0:8080 src.app.app:app</code></p>
<p>Chainlit: <code>poetry run chainlit run --headless src/chatbot.py</code></p>
<p>When I build my docker image and launch my container locally, everything works perfectly with the 2 ports exposed. But it doesn't work on Cloud Run. Here are the contents of my files :</p>
<hr />
<p><strong>app.py</strong></p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, redirect, render_template
app = Flask(__name__)
@app.route("/")
def home():
return render_template("index.html")
@app.route("/chat")
def chat():
return redirect("http://localhost:8000")
if __name__ == "__main__":
app.run(host="0.0.0.0", port=8080)
</code></pre>
<hr />
<p><strong>Dockerfile</strong></p>
<pre><code>FROM ollama/ollama
ENV DEBIAN_FRONTEND=noninteractive \
PATH="/root/.local/bin:$PATH"
ENV HOST 0.0.0.0
# Update package list and install pipx
RUN apt-get update -y && \
apt-get install -y ffmpeg && \
apt-get install -y pipx && \
pipx install poetry==1.7.1 && \
rm -rf /var/lib/apt/lists/*
# Set the working directory
WORKDIR /app
# Copy the current directory contents into the container
COPY . .
# Install dependencies with poetry
RUN poetry install --no-root --no-cache --without explo && \
chmod +x pull_ollama_models_and_launch_app_servers.sh
EXPOSE 8080 8000
ENTRYPOINT ["./pull_ollama_models_and_launch_app_servers.sh"]
</code></pre>
<hr />
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
# Start Ollama in the background.
ollama serve &
# Pause for Ollama to start.
sleep 5
# Pause before server launch
echo "🔵 Launching Servers (Flask and Chainlit) ..."
sleep 5
# Start Chainlit
poetry run chainlit run --headless src/chatbot.py &
# start flask landing page
poetry run gunicorn -b 0.0.0.0:8080 src.app.app:app &
# Wait for both background processes to exit
wait
</code></pre>
<hr />
<p>My problem is that on Cloud Run, the landing page (index.html) works fine when I access the application via the generated URL. However, the <code>chat</code> part doesn't work at all (<strong>This site can’t be reached error</strong>). <code>Default STARTUP TCP probe succeeded after 1 attempt for container "xxx" on port 8080</code> is the only stack trace available in the log, with no trace of the Chainlit service running on <code>port 8000</code>.</p>
<p>Can anyone help me?</p>
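<p>Two things seem to be at play (my reading of the setup): Cloud Run only exposes the single container port it health-checks (8080 here), and <code>redirect("http://localhost:8000")</code> sends the <em>browser</em> to port 8000 on the user's own machine, which only exists inside the container. A sketch of one workaround is to have Flask proxy <code>/chat</code> to the container-local Chainlit port instead of redirecting; the stub HTTP server below stands in for Chainlit so the sketch is self-contained:</p>

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

from flask import Flask, Response

# Stand-in for Chainlit listening on a container-local port.
class _Upstream(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"chainlit page")

    def log_message(self, *_):  # keep request logging quiet
        pass

upstream = HTTPServer(("127.0.0.1", 0), _Upstream)
threading.Thread(target=upstream.serve_forever, daemon=True).start()
CHAINLIT_PORT = upstream.server_address[1]  # would be 8000 in the container

app = Flask(__name__)

@app.route("/chat")
def chat():
    # Fetch from inside the container and relay the response, instead of
    # redirecting the browser to a localhost port it cannot reach.
    with urllib.request.urlopen(f"http://127.0.0.1:{CHAINLIT_PORT}/") as resp:
        return Response(resp.read(), status=resp.status)
```

<p>Note this naive relay does not forward WebSockets, which a real Chainlit front end also uses, so in production a proper reverse proxy (e.g. nginx in front of both services) may be the safer route.</p>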
|
<python><docker><flask><google-cloud-run><chainlit>
|
2024-06-27 04:32:45
| 1
| 359
|
Boubacar Traoré
|
78,675,466
| 2,955,827
|
Can I create a helper function for both async and sync environment?
|
<p>I'm trying to create a helper function for my projects, but some legacy projects are in a sync environment.</p>
<p>my helper function looks like:</p>
<pre class="lang-py prettyprint-override"><code>def func_for_async_and_sync(session, data):
statement = select(Obj).where(Obj.id == data['id'])
# and more code
if isinstance(session, AsyncSession):
obj = await session.execute(statement)
else:
obj = session.execute(statement)
# edit obj and return
return obj
</code></pre>
<p>Of course it does not work. How can I call <code>AsyncSession.execute</code> from a normal function?</p>
<p>The main difference between <a href="https://stackoverflow.com/questions/51762227/how-to-call-a-async-function-from-a-synchronized-code-python">How to call a async function from a synchronized code Python</a> is I need return value from async function.</p>
<p>If I rewrite my function to:</p>
<pre class="lang-py prettyprint-override"><code> loop = asyncio.get_running_loop()
obj = asyncio.run_coroutine_threadsafe(
session.get(model_class, primary_key_value), loop
).result(timeout=5)
</code></pre>
<p>I always get a TimeoutError: <code>Future.result()</code> never returns. This is my minimal test code:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
async def main():
"""main is async and started as normal with asyncio.run"""
print("BEGIN main")
loop = asyncio.get_running_loop()
timeout = 3
# Create a coroutine
coro = asyncio.sleep(1, result=3)
# Submit the coroutine to a given loop
future = asyncio.run_coroutine_threadsafe(coro, loop)
# Wait for the result with an optional timeout argument
assert future.result(timeout) == 3
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
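<p>Regarding the minimal example: <code>run_coroutine_threadsafe</code> is meant for submitting work to a loop running in <em>another</em> thread, and <code>future.result()</code> blocks the calling thread — called from the loop's own thread it blocks the only thread that could run the coroutine, hence the timeout. A sketch of one pattern that avoids this (hypothetical names; the awaited call stands in for <code>AsyncSession.execute</code>): keep a single async implementation and give sync callers a thin wrapper built on <code>asyncio.run</code>, which is valid whenever no event loop is already running in that thread:</p>

```python
import asyncio

async def get_obj_async(session, obj_id):
    # Single async implementation; the real AsyncSession.execute goes here.
    await asyncio.sleep(0)  # stands in for the awaited database call
    return {"id": obj_id}

def get_obj(session, obj_id):
    # Sync facade for legacy projects; only valid when no event loop
    # is already running in this thread.
    return asyncio.run(get_obj_async(session, obj_id))

print(get_obj(None, 7))
```

<p>Async callers simply <code>await get_obj_async(...)</code> directly, so the query-building logic is written once.</p>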
|
<python><asynchronous>
|
2024-06-27 03:44:43
| 1
| 3,295
|
PaleNeutron
|
78,675,294
| 12,231,242
|
Tkinter place geometry manager not overlapping
|
<p>Over the past several years I've been using a full-featured combobox widget that I made by stitching together other Tkinter widgets. (Because I like it better than ttk.Combobox.) For the dropdown I've always used a Toplevel widget since it overlaps other widgets instead of pushing them aside. I've never been able to get the place geometry manager to work as the dropdown, it won't overlap the other widgets. I know that some people say never use <code>place()</code> for anything anyway, but I just have to satisfy my curiosity as to whether it would be easier to use a <code>place</code>d Frame instead of using a borderless Toplevel window.</p>
<p>But I can't get the <code>place</code>d Frame to overlap at all, so can someone tell me what's wrong with this simple sample code, why doesn't the <code>dropdown</code> overlap other widgets? As soon as its <code>rely</code> is set to 1, it becomes invisible, covered by the other widgets.</p>
<pre><code>import tkinter as tk
def unplace():
dropdown.place_forget()
def show(evt=None):
dropdown.lift()
dropdown.place(relx=0, rely=0.3, anchor='nw', relwidth=1.0)#rely=1,
# dropdown.place(relx=0, rely=1, anchor='nw', relwidth=1.0)
dropdown_item.grid(sticky="ew")
root = tk.Tk()
root.geometry("400x400+600+300")
root.config(bg="red")
root.rowconfigure(0, weight=1)
root.columnconfigure(0, weight=1)
frm = tk.Frame(root, bg="blue")
frm.grid(sticky="news")
ent = tk.Entry(frm)
ent.grid()
ent.bind("<Button-1>", show)
dropdown = tk.Frame(ent, bg="tan")
dropdown_item = tk.Button(
dropdown, bg="steelblue", width=12, command=unplace, text="item")
# If these labels are commented, `dropdown` still won't overlap.
c = tk.Label(frm, text="Overlap Me")
c.grid()
d = tk.Label(frm, text="Overlap Me")
d.grid()
e = tk.Label(frm, text="Overlap Me")
e.grid()
f = tk.Label(frm, text="Overlap Me")
f.grid()
show()
root.mainloop()
</code></pre>
|
<python><tkinter>
|
2024-06-27 02:04:03
| 1
| 574
|
Luther
|
78,675,096
| 6,622,697
|
How to prevent flask-security from using db_session.query_property()
|
<p>I'm trying to set up flask-security using the example at <a href="https://flask-security-too.readthedocs.io/en/stable/quickstart.html" rel="nofollow noreferrer">https://flask-security-too.readthedocs.io/en/stable/quickstart.html</a>. I am using SQLAlchemy, not flask-sqlalchemy. I was able to get the example to work, but I'm having problems integrating it in my application. flask-security seems to require <code>Base.query = db_session.query_property()</code>, which, in turn, seems to require using <code>scoped_session</code>, which I don't use (there seem to be some strong opinions against using it)</p>
<p>This seems to be a pretty weird requirement of <code>flask-security</code>, which appears to be undocumented, except in the sample code. I'm just wondering if this will cause a problem with other parts of my application</p>
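<p>For context, here is a minimal sketch of what that line does in plain SQLAlchemy, with no Flask involved. <code>query_property()</code> is only available on a <code>scoped_session</code>, which is presumably why the quickstart uses one; the model and names below are illustrative, not from the quickstart itself.</p>

```python
from sqlalchemy import create_engine, Column, Integer
from sqlalchemy.orm import declarative_base, scoped_session, sessionmaker

engine = create_engine("sqlite://")
Base = declarative_base()

class User(Base):
    __tablename__ = "user"
    id = Column(Integer, primary_key=True)

Base.metadata.create_all(engine)

# query_property exists on scoped_session, not on a plain Session.
db_session = scoped_session(sessionmaker(bind=engine))
Base.query = db_session.query_property()

db_session.add(User())
db_session.commit()

# Every model can now issue queries without an explicit session in hand,
# which is the style the SQLAlchemySessionUserDatastore examples expect.
assert User.query.count() == 1
```
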
<p>There also seems to be some inconsistencies with various examples. Some of them attach the security object to the app with</p>
<pre><code>app.security = Security(app, user_datastore)
</code></pre>
<p>and later uses</p>
<pre><code>app.security.datastore.create_user(email="test@me.com"...
</code></pre>
<p>whereas in other places, I see</p>
<pre><code>security = Security(app, user_datastore)
user_datastore.create_user(email="test@me.com"...
</code></pre>
<p>I'm trying to get around it by doing something like this</p>
<p>database.py</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker
from root.config import config
def get_engine():
return create_engine(config.get('db').get('DATABASE_URI'), echo=False)
# Use this session for everything other than flask-security
def get_session():
engine = get_engine()
return sessionmaker(bind=engine)()
</code></pre>
<p>app.py</p>
<pre><code>import os
from flask import Flask
from flask_security import SQLAlchemySessionUserDatastore, Security, hash_password
from sqlalchemy.orm import scoped_session, sessionmaker
from root.db.ModelBase import ModelBase
from root.db.database import get_engine
from root.db.models import User, Role
from root.mail import mail_init
from root.views.calibration_views import calibration
nwm_app = Flask(__name__)
nwm_app.config["SECURITY_REGISTERABLE"] = True
# Use this session for flask-security
session = scoped_session(sessionmaker(bind=get_engine()))
# This is global. Is it going to affect other parts of my application if I'm using SQLAlchemy 'select'?
ModelBase.query = session.query_property()
user_datastore = SQLAlchemySessionUserDatastore(session, User, Role)
security = Security(nwm_app, user_datastore)
# Register blueprints or views here
nwm_app.register_blueprint(calibration)
# one time setup
with nwm_app.app_context():
# Create a user and role to test with
# nwm_app.security.datastore.find_or_create_role(
user_datastore.find_or_create_role(
# name="user", permissions={"user-read", "user-write"}
name="user"
)
print('created role')
session.commit()
# if not nwm_app.security.datastore.find_user(email="test@me.com"):
if not user_datastore.find_user(email="test@me.com"):
print('user not found')
# nwm_app.security.datastore.create_user(email="test@me.com",
user_datastore.create_user(email="test@me.com",
password=hash_password("password"), roles=["user"])
session.commit()
if __name__ == '__main__':
nwm_app.run()
</code></pre>
|
<python><flask><sqlalchemy><flask-security>
|
2024-06-27 00:04:24
| 1
| 1,348
|
Peter Kronenberg
|
78,675,005
| 9,952,624
|
Google Cloud Function error 401 by using service account key
|
<p>I am trying to authenticate in Python to GCP in order to call Cloud Functions.</p>
<p>I have set up a service account with roles as follows
<a href="https://i.sstatic.net/vvyDmTo7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vvyDmTo7.png" alt="enter image description here" /></a></p>
<p>and created a JSON key for it. Double-checking from the Cloud Function pane I can see that its permissions are correct (tried giving Admin as well)
<a href="https://i.sstatic.net/nSb5LqPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nSb5LqPN.png" alt="enter image description here" /></a></p>
<p>However, authentication seems to always succeed no matter what</p>
<pre><code>from google.oauth2 import service_account
import google.auth.transport.requests
key_path = "path/to/key.json"
scopes = ['https://www.googleapis.com/auth/cloud-platform']
credentials = service_account.Credentials.from_service_account_file(
key_path, scopes=scopes
)
auth_request = google.auth.transport.requests.Request()
credentials.refresh(auth_request)
print(credentials.token) # Bearer xxxxx
</code></pre>
<p>but later, the bearer token seems to not grant me access to the API, as I get error 401</p>
<blockquote>
<p>Bearer error="invalid_token" error_description="The access token could not be verified"</p>
</blockquote>
<p>I have tried regenerating the JSON key multiple times. The function itself works fine, because I tested a public version of it and it works without a problem.</p>
<hr />
<h2>EDIT</h2>
<p>I have also tried creating a JWT as per documentation with the following code. Still failing to authorize, never failing to authenticate</p>
<pre><code>import json
import datetime
import jwt
import requests
# Load the service account key file
key_file_path = "path/to/key.json"
with open(key_file_path) as f:
service_account_info = json.load(f)
# Extract the necessary information from the service account info
private_key = service_account_info['private_key']
client_email = service_account_info['client_email']
# Define the JWT headers and payload
headers = {
"alg": "RS256",
"typ": "JWT",
"kid": service_account_info['private_key_id']
}
now = datetime.datetime.utcnow()
expiry = now + datetime.timedelta(hours=1)
payload = {
"iss": client_email,
"sub": client_email,
"aud": "https://www.googleapis.com/oauth2/v4/token",
"iat": now,
"exp": expiry,
"scope": "https://www.googleapis.com/auth/cloud-platform"
}
# Generate the JWT
jwt_token = jwt.encode(payload, private_key, algorithm="RS256", headers=headers)
# Define the request to get the Google-signed ID token
token_url = "https://www.googleapis.com/oauth2/v4/token"
headers = {
"Content-Type": "application/x-www-form-urlencoded"
}
body = {
"grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
"assertion": jwt_token
}
# Make the request to get the ID token
response = requests.post(token_url, headers=headers, data=body)
# Print the response (contains the ID token)
if response.status_code == 200:
id_token = "bearer "+response.json().get('access_token')
print("ID Token:", id_token)
else:
print("Error:", response.json())
</code></pre>
|
<python><google-cloud-platform><google-cloud-functions><google-iam>
|
2024-06-26 23:09:58
| 1
| 462
|
davide m.
|
78,674,923
| 20,295,949
|
Selenium WebDriver returns empty DataFrame when scraping CoinGecko in headless mode
|
<p>I'm trying to scrape Bitcoin market data from CoinGecko using Selenium in headless mode, but the script returns an empty DataFrame. The table rows are not being detected even though I've added a wait time. Here is a simplified version of the code I'm using to set up the WebDriver, navigate to the page, and extract the table data using XPath. The relevant parts of the log indicate that the requests are being made correctly, but no elements are found. What could be causing this issue, and how can I ensure the table data is correctly scraped in headless mode?</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import pandas as pd
import time
# Path to your ChromeDriver
chrome_driver_path = 'C:\\Users\\hamid\\OneDrive\\Desktop\\chromedriver-win64\\chromedriver.exe'
# Set up headless mode
options = Options()
options.headless = True
options.add_argument("--window-size=1920,1080")
# Set up the WebDriver
driver = webdriver.Chrome(executable_path=chrome_driver_path, options=options)
# Navigate to the CoinGecko Bitcoin page
driver.get('https://www.coingecko.com/en/coins/bitcoin')
# Wait for the page to load
time.sleep(5)
# Extract data from the page
rows = driver.find_elements(By.XPATH, '//table[@class="table"]/tbody/tr')
market_data = []
for row in rows:
exchange = row.find_element(By.XPATH, './/td[2]/a').text
pair = row.find_element(By.XPATH, './/td[3]/a/b').text
price = row.find_element(By.XPATH, './/td[4]/span').text
volume_24h = row.find_element(By.XPATH, './/td[5]/span').text
volume_percentage = row.find_element(By.XPATH, './/td[6]').text
category = row.find_element(By.XPATH, './/td[7]').text
updated = row.find_element(By.XPATH, './/td[8]').text
market_data.append({
'exchange': exchange,
'pair': pair,
'price': price,
'volume_24h': volume_24h,
'volume_percentage': volume_percentage,
'category': category,
'updated': updated
})
# Close the WebDriver
driver.quit()
# Convert to DataFrame
df = pd.DataFrame(market_data)
print(df)
</code></pre>
<p>When I run the script, I get the following output:</p>
<pre><code>Empty DataFrame
Columns: []
Index: []
</code></pre>
|
<python><selenium-webdriver>
|
2024-06-26 22:40:55
| 1
| 319
|
HamidBee
|
78,674,771
| 1,601,831
|
How to access app dependency in FastAPI when not injecting to endpoint
|
<p>The FastAPI app is defined in <code>main.py</code>. I need to do some initialization where I load from a database into a cache.</p>
<p>When my app runs normally, it calls a get_db() function to return a <code>SqlServer</code> session and this get_db function is used in endpoints using <code>session=Depends(get_db)</code> in the parameters.</p>
<p>I need some kind of call to get the correct <code>get_db</code> function based on app dependencies, so that it will use the overridden version defined in <code>testconf.py</code> when I'm running pytests. How do I do this?</p>
<pre><code>#database.py
engine = create_engine("sqlserver connection string here")
SessionLocal = sessionmaker(autocommit=False,autoflush=False,bind=engine)
def get_db():
# standard way to get db in FastAPI
db = SessionLocal()
yield db
</code></pre>
<pre><code>#main.py
from database import get_db
# When running pytests, its not finding the override version in
# in the app dependencies
if fn := app.dependency_overrides.get(get_db):
sess = next(fn())
else:
sess = next(get_db())
CacheService.load_cache(sess)
</code></pre>
<pre><code>#testconf.py
#defines fixtures used by pytests
@pytest.fixture
def engine():
url = "sqlite://"
return create_engine(url, ...)
@pytest.fixture
def app_session(engine):
TestingSessionLocal = sessionmaker(..., bind=engine)
def override_get_db():
db = TestingSessionLocal()
yield db
app.dependency_overrides[get_db] = override_get_db
return app
@pytest.fixture
def client(app_session):
    return TestClient(app=app_session)
</code></pre>
<pre><code>#some test
def test_something(app_session, client):
# It loads the caches using a sqlserver session
client.get("url")
</code></pre>
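<p>The lookup in question reduces to a plain-dictionary pattern. Here is a stdlib-only sketch of it (no FastAPI required, names illustrative): check the overrides mapping for the original callable and fall back to it when no override is registered.</p>

```python
# Stand-ins for the real dependency and the test override.
def get_db():
    yield "real-session"

def override_get_db():
    yield "test-session"

dependency_overrides = {}  # plays the role of app.dependency_overrides

def resolve(dependency):
    # Use the override when one is registered, else the original generator.
    fn = dependency_overrides.get(dependency, dependency)
    return next(fn())

assert resolve(get_db) == "real-session"

dependency_overrides[get_db] = override_get_db  # what the pytest fixture does
assert resolve(get_db) == "test-session"
```

<p>The key detail is that the mapping is keyed by the original function object itself, which is why the lookup must use the very same <code>get_db</code> that the fixture registered.</p>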
|
<python><unit-testing><dependencies><pytest><fastapi>
|
2024-06-26 21:41:16
| 2
| 417
|
dam
|
78,674,718
| 20,295,949
|
How to Bypass HTTP 403 Error When Scraping CoinGecko with Python?
|
<p>I am trying to scrape the Bitcoin markets section from CoinGecko using Python. However, I keep encountering an HTTP 403 error. I have tried using the requests library with custom headers to mimic a real browser, but I still get the same error.</p>
<p>Here is the code I am using:</p>
<pre><code>import requests
import pandas as pd
# Base URL for Bitcoin markets on CoinGecko
base_url = "https://www.coingecko.com/en/coins/bitcoin"
# Function to fetch a single page
def fetch_page(url, page):
headers = {
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36",
"X-Requested-With": "XMLHttpRequest"
}
response = requests.get(f"{url}?page={page}", headers=headers)
if response.status_code != 200:
print(f"Failed to fetch page {page}: Status code {response.status_code}")
return None
return response.text
# Function to extract market data from a page
def extract_markets(html):
dfs = pd.read_html(html)
return dfs[0] if dfs else pd.DataFrame()
# Main function to scrape all pages
def scrape_all_pages(base_url, max_pages=10):
all_markets = []
for page in range(1, max_pages + 1):
print(f"Scraping page {page}...")
html = fetch_page(base_url, page)
if html is None:
break
df = extract_markets(html)
if df.empty:
break
all_markets.append(df)
return pd.concat(all_markets, ignore_index=True) if all_markets else pd.DataFrame()
# Scrape data and store in a DataFrame
max_pages = 10 # Adjust this to scrape more pages if needed
df = scrape_all_pages(base_url, max_pages)
# Display the DataFrame
print(df)
</code></pre>
<p>error:</p>
<pre><code>Scraping page 1...
Failed to fetch page 1: Status code 403
Empty DataFrame
Columns: []
Index: []
</code></pre>
<p>I also tried a suggested solution on stackoverflow, but it did not resolve the issue.</p>
<p>Could someone suggest a workaround or a more effective way to scrape this data? Any help would be greatly appreciated. Thank you in advance.</p>
|
<python><web-scraping><python-requests>
|
2024-06-26 21:23:37
| 1
| 319
|
HamidBee
|
78,674,589
| 8,322,295
|
Adjusting figure to center plots along bottom row
|
<p>I'm generating a figure with a variable number of plots. At the moment, if I use a number of plots that is not a multiple of three, the final plot(s) end up left-aligned:
<a href="https://i.sstatic.net/nKevFrPN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nKevFrPN.png" alt="enter image description here" /></a></p>
<p>Here's my plotting function:</p>
<pre><code>def mag_dist(dataset, magnames):
num_magnames = len(magnames)
num_cols = 3 if num_magnames > 3 else num_magnames
num_rows = (num_magnames + num_cols - 1) // num_cols
color = iter(cm.rainbow(np.linspace(0, 1, num_magnames)))
fig, axs = plt.subplots(num_rows, num_cols, figsize=(12, 4*num_rows), sharex=True, sharey=True)
axs = np.array(axs).reshape(num_rows, num_cols)
for i, band in enumerate(magnames):
bins = 50
alpha = 1
density = True
row = i // num_cols
col = i % num_cols
ax = axs[row, col]
counts, bins, _ = ax.hist(dataset[band], bins=bins, alpha=alpha, density=density, color=next(color), linewidth=3)
ax.grid(True)
ax.set_xlabel('')
ax.set_yscale('log')
max_x = np.max(dataset[band])
ax.text(0.95, 0.95, f'Max: {max_x:.2f}', ha='right', va='top', transform=ax.transAxes, color='black', fontsize=15)
# Remove empty subplots
for j in range(i + 1, num_rows * num_cols):
fig.delaxes(axs.flatten()[j])
# Adjust layout for single and double plots (Copilot helped me with this bit)
if num_magnames == 1:
ax = axs.flatten()[0]
ax.set_position([0.3, 0.3, 0.4, 0.4]) # Center the single plot
elif num_magnames == 2:
ax1, ax2 = axs.flatten()[:2]
ax1.set_position([0.1, 0.3, 0.35, 0.4]) # Space the first plot
ax2.set_position([0.55, 0.3, 0.35, 0.4]) # Space the second plot
plt.tight_layout(pad=3.0)
plt.show()
</code></pre>
<p>Here's what I'm trying to achieve:</p>
<ul>
<li>if there is only one plot left after deleting the unused ones, center it in the figure, and</li>
<li>if there are two plots, space them out evenly along the bottom row.</li>
</ul>
<p>Is there any way to do this?</p>
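<p>One possible approach (a sketch only, not tested against the function above, and without the <code>sharex</code>/<code>sharey</code> handling): build a <code>GridSpec</code> with <code>2 * num_cols</code> columns, give every plot a width of two grid cells, and shift the last row right by one half-cell per missing plot, which centers one leftover plot and evenly spaces two.</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this demo
import matplotlib.pyplot as plt
import numpy as np

def centered_grid(n_plots, num_cols=3):
    num_rows = (n_plots + num_cols - 1) // num_cols
    fig = plt.figure(figsize=(12, 4 * num_rows))
    gs = fig.add_gridspec(num_rows, 2 * num_cols)  # half-width cells
    axes = []
    for i in range(n_plots):
        row, pos = divmod(i, num_cols)
        n_in_row = num_cols if row < num_rows - 1 else n_plots - row * num_cols
        offset = num_cols - n_in_row           # half-cells of left padding
        start = offset + 2 * pos
        axes.append(fig.add_subplot(gs[row, start:start + 2]))
    return fig, axes

fig, axes = centered_grid(4)  # three on the top row, one centered below
for ax in axes:
    ax.hist(np.random.default_rng(0).normal(size=100), bins=20)
```

<p>With four plots the last subplot occupies grid columns 2:4 of six, i.e. the middle of the row; with five plots the last two are shifted by one half-cell each and sit evenly spaced.</p>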
|
<python><matplotlib><plot><figure>
|
2024-06-26 20:39:08
| 0
| 1,546
|
Jim421616
|
78,674,475
| 985,573
|
Django is too slow when processing exception
|
<p>The view where the exception happens is processing a big dictionary with roughly 350K entries. The exception is a KeyError.</p>
<p>From what I can see from the CProfile logs, Django is trying to iterate the dictionary to get traceback data.</p>
<p>This is simplified code from the view:</p>
<pre><code>@login_required
def export_plan_data(request):
try:
form = PortfolioPlanningForm(request.GET, user=request.user)
        file = get_export_file(form.get_params(), user=request.user)  # This is where the exception happens
filename = f"Plan {timezone.now().strftime('%Y-%m-%d %H:%M:%S')}.xlsx"
response = HttpResponse(file, content_type="application/vnd.openxmlformats-officedocument.spreadsheetml.sheet")
        response["Content-Disposition"] = f"attachment; filename={filename}"
return response
except Exception as e:
return JsonResponse({"error": "An error occurred while exporting the data"}, status=500)
</code></pre>
<p>If I surround the entire view in a try/except block and return my custom response when an exception occurs it returns in about 10 seconds, but if I remove the try/except block and let Django handle the exception it takes about 5 minutes.</p>
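<p>For what it's worth, the profile is dominated by <code>pprint.pformat</code>, which the debug 500 page runs over every local variable in every traceback frame. A small stdlib sketch of that cost (the dictionary here is an illustrative stand-in, not your data), alongside the bounded <code>reprlib</code> output for comparison:</p>

```python
import pprint
import reprlib

# Stand-in for a large local variable captured in the failing frame.
big_local = {i: {"name": f"row-{i}", "values": [1, 2, 3]} for i in range(5_000)}

full = pprint.pformat(big_local)   # what the DEBUG=True 500 page renders
bounded = reprlib.repr(big_local)  # truncated after a handful of items

# pformat walks and sorts the entire structure; reprlib's output stays tiny.
assert len(bounded) < 1_000 < len(full)
```

<p>Note that Django only renders this expensive technical traceback page when <code>DEBUG=True</code>; with <code>DEBUG=False</code> the uncaught exception takes the much cheaper production error path.</p>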
<p>Here's the Cprofile dump:</p>
<pre><code> 292357310 function calls (260422005 primitive calls) in 247.585 seconds
Ordered by: cumulative time
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 247.585 247.585 /usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py:44(inner)
1 0.000 0.000 241.196 241.196 /usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py:54(response_for_exception)
1 0.000 0.000 241.194 241.194 /usr/local/lib/python3.9/site-packages/django/core/handlers/exception.py:140(handle_uncaught_exception)
1 0.000 0.000 241.194 241.194 /usr/local/lib/python3.9/site-packages/django/views/debug.py:50(technical_500_response)
1 0.000 0.000 241.192 241.192 /usr/local/lib/python3.9/site-packages/django/views/debug.py:341(get_traceback_html)
429 0.001 0.000 241.047 0.562 /usr/local/lib/python3.9/site-packages/django/template/defaultfilters.py:916(pprint)
429 0.001 0.000 241.046 0.562 /usr/local/lib/python3.9/pprint.py:55(pformat)
429 0.001 0.000 241.044 0.562 /usr/local/lib/python3.9/pprint.py:151(pformat)
1 0.001 0.001 241.040 241.040 /usr/local/lib/python3.9/site-packages/django/views/debug.py:269(get_traceback_data)
3228318/429 12.640 0.000 240.785 0.561 /usr/local/lib/python3.9/pprint.py:163(_format)
721108/16 2.370 0.000 239.680 14.980 /usr/local/lib/python3.9/pprint.py:189(_pprint_dict)
721108/16 11.289 0.000 239.680 14.980 /usr/local/lib/python3.9/pprint.py:372(_format_dict_items)
2/1 0.002 0.001 239.667 239.667 /usr/local/lib/python3.9/pprint.py:446(_pprint_default_dict)
187641/61263 0.431 0.000 205.978 0.003 /usr/local/lib/python3.9/pprint.py:219(_pprint_list)
187647/61269 1.401 0.000 205.810 0.003 /usr/local/lib/python3.9/pprint.py:389(_format_items)
6187933 9.569 0.000 158.701 0.000 /usr/local/lib/python3.9/pprint.py:430(_repr)
6187933 6.127 0.000 147.533 0.000 /usr/local/lib/python3.9/pprint.py:439(format)
33189244/6187933 71.383 0.000 141.406 0.000 /usr/local/lib/python3.9/pprint.py:529(_safe_repr)
5405048 30.852 0.000 70.697 0.000 {built-in method builtins.sorted}
1635495 22.010 0.000 35.767 0.000 /usr/local/lib/python3.9/pprint.py:256(_pprint_str)
15906637 27.737 0.000 33.766 0.000 /usr/local/lib/python3.9/pprint.py:99(_safe_tuple)
41216386/41216385 10.506 0.000 10.536 0.000 {built-in method builtins.repr}
1635719 2.060 0.000 7.389 0.000 /usr/local/lib/python3.9/re.py:233(findall)
1 0.000 0.000 6.389 6.389 /usr/local/lib/python3.9/site-packages/django/core/handlers/base.py:160(_get_response)
1 0.000 0.000 6.389 6.389 /usr/local/lib/python3.9/site-packages/django/contrib/auth/decorators.py:18(_wrapped_view)
1 0.000 0.000 6.373 6.373 /home/app/webapp/apps/home/views/portfolio.py:733(export_plan_data)
1 0.000 0.000 6.358 6.358 /home/app/webapp/apps/home/views/portfolio.py:744(<lambda>)
1 0.001 0.001 6.358 6.358 /home/app/webapp/apps/home/usecases/portfolio/export.py:129(get_export_file)
1 0.098 0.098 6.347 6.347 /home/app/webapp/apps/home/usecases/portfolio/export.py:76(get_data_for_detail_export)
1 0.001 0.001 6.235 6.235 /home/app/webapp/apps/home/usecases/portfolio/get_dashboard_data.py:186(get_fees_for_chart)
1 0.000 0.000 6.224 6.224 /home/app/webapp/apps/home/usecases/portfolio/get_dashboard_data.py:60(get_dashboard_data)
1 0.684 0.684 6.221 6.221 /home/app/webapp/apps/home/usecases/portfolio/get_dashboard_data.py:37(get_enriched_fees_from_portfolio_items)
19596401 6.078 0.000 6.078 0.000 /usr/local/lib/python3.9/pprint.py:92(__lt__)
31813274 6.028 0.000 6.028 0.000 /usr/local/lib/python3.9/pprint.py:89(__init__)
14 0.037 0.003 5.071 0.362 /usr/local/lib/python3.9/site-packages/django/db/models/query.py:1322(_fetch_all)
4 0.000 0.000 5.015 1.254 /usr/local/lib/python3.9/site-packages/django/db/models/query.py:265(__iter__)
61235 0.115 0.000 4.969 0.000 /usr/local/lib/python3.9/site-packages/django/db/models/query.py:97(__iter__)
15 0.000 0.000 4.037 0.269 /usr/local/lib/python3.9/site-packages/silk/sql.py:64(execute_sql)
10 0.001 0.000 3.985 0.399 /usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py:1147(execute_sql)
9 0.000 0.000 3.970 0.441 /usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py:1126(results_iter)
20 0.000 0.000 3.746 0.187 /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:96(execute)
20 0.000 0.000 3.745 0.187 /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:65(execute)
20 0.000 0.000 3.745 0.187 /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:71(_execute_with_wrappers)
20 0.000 0.000 3.745 0.187 /usr/local/lib/python3.9/site-packages/django/db/backends/utils.py:77(_execute)
20 3.744 0.187 3.744 0.187 {method 'execute' of 'psycopg2.extensions.cursor' objects}
1635719 3.186 0.000 3.186 0.000 {method 'findall' of 're.Pattern' objects}
20916708 2.776 0.000 2.776 0.000 {method 'write' of '_io.StringIO' objects}
28180497/28179681 2.571 0.000 2.590 0.000 {built-in method builtins.len}
1635771 1.448 0.000 2.151 0.000 /usr/local/lib/python3.9/re.py:289(_compile)
20109045 1.950 0.000 1.950 0.000 {method 'append' of 'list' objects}
6188513 1.599 0.000 1.599 0.000 {method 'copy' of 'dict' objects}
82638 0.184 0.000 1.106 0.000 /usr/local/lib/python3.9/json/__init__.py:299(loads)
8791283 1.100 0.000 1.100 0.000 {built-in method builtins.id}
5564885/5564848 1.033 0.000 1.036 0.000 {method 'join' of 'str' objects}
6242997/6242534 0.963 0.000 0.972 0.000 {built-in method builtins.getattr}
9339835 0.917 0.000 0.917 0.000 {built-in method builtins.issubclass}
82638 0.195 0.000 0.901 0.000 /usr/local/lib/python3.9/json/decoder.py:332(decode)
61239 0.123 0.000 0.764 0.000 /usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py:1115(apply_converters)
1884024/1884009 0.738 0.000 0.739 0.000 {built-in method builtins.isinstance}
61234 0.044 0.000 0.641 0.000 /usr/local/lib/python3.9/site-packages/django/db/models/fields/json.py:75(from_db_value)
5405343 0.631 0.000 0.631 0.000 {method 'items' of 'dict' objects}
82638 0.605 0.000 0.605 0.000 /usr/local/lib/python3.9/json/decoder.py:343(raw_decode)
1635522 0.426 0.000 0.426 0.000 {method 'splitlines' of 'str' objects}
2777194 0.422 0.000 0.422 0.000 {method 'get' of 'dict' objects}
426 0.257 0.001 0.257 0.001 {method 'getvalue' of '_io.StringIO' objects}
620 0.001 0.000 0.231 0.000 /usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py:1640(cursor_iter)
620 0.002 0.000 0.228 0.000 /usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py:1646(<lambda>)
626 0.002 0.000 0.225 0.000 /usr/local/lib/python3.9/site-packages/django/db/utils.py:95(inner)
1636531 0.223 0.000 0.223 0.000 {method 'pop' of 'list' objects}
620 0.149 0.000 0.223 0.000 {method 'fetchmany' of 'psycopg2.extensions.cursor' objects}
758842 0.168 0.000 0.168 0.000 {method 'startswith' of 'str' objects}
455971 0.158 0.000 0.159 0.000 {built-in method builtins.next}
1 0.000 0.000 0.127 0.127 /usr/local/lib/python3.9/site-packages/django/template/base.py:164(render)
1 0.000 0.000 0.127 0.127 /usr/local/lib/python3.9/site-packages/django/template/base.py:161(_render)
341/1 0.004 0.000 0.127 0.127 /usr/local/lib/python3.9/site-packages/django/template/base.py:934(render)
6829/47 0.006 0.000 0.127 0.003 /usr/local/lib/python3.9/site-packages/django/template/base.py:897(render_annotated)
61234 0.122 0.000 0.122 0.000 /usr/local/lib/python3.9/site-packages/django/db/models/query.py:110(<dictcomp>)
87/6 0.010 0.000 0.121 0.020 /usr/local/lib/python3.9/site-packages/django/template/defaulttags.py:157(render)
2448 0.005 0.000 0.096 0.000 /usr/local/lib/python3.9/site-packages/django/template/base.py:986(render)
312/16 0.001 0.000 0.084 0.005 /usr/local/lib/python3.9/site-packages/django/template/defaulttags.py:300(render)
165406 0.076 0.000 0.076 0.000 {method 'match' of 're.Pattern' objects}
122468 0.065 0.000 0.074 0.000 /usr/local/lib/python3.9/site-packages/psycopg2/_json.py:159(typecast_json)
14 0.000 0.000 0.064 0.005 /usr/local/lib/python3.9/site-packages/django/db/models/query.py:45(__iter__)
2956 0.006 0.000 0.053 0.000 /usr/local/lib/python3.9/site-packages/django/template/base.py:668(resolve)
7 0.000 0.000 0.048 0.007 /usr/local/lib/python3.9/site-packages/django/db/models/query.py:261(__len__)
2448 0.008 0.000 0.041 0.000 /usr/local/lib/python3.9/site-packages/django/template/base.py:963(render_value_in_context)
25 0.001 0.000 0.038 0.002 /usr/local/lib/python3.9/site-packages/django/db/models/sql/compiler.py:503(as_sql)
187696 0.033 0.000 0.033 0.000 {built-in method builtins.iter}
1212/1211 0.001 0.000 0.032 0.000 /usr/local/lib/python3.9/site-packages/django/utils/functional.py:244(inner)
</code></pre>
|
<python><django><performance><exception>
|
2024-06-26 20:11:30
| 1
| 859
|
Aikanáro
|
78,674,456
| 4,691,343
|
Excel hangs in Python script to export excel as a pdf in Task Scheduler, but works normally
|
<p>I'm trying to just automate creating a pdf from an excel file to email it out everyday. It should be simple, but it isn't. If I run these files in command line, they will run, create the pdf, and close out of Excel correctly.</p>
<p>But if I run it in task scheduler, not only does it refuse to create the pdf, but Excel hangs and doesn't exit properly.</p>
<p>I've tried it in python and powershell</p>
<p>Python wincom32</p>
<pre><code>from os import getenv
from os.path import basename
from pathlib import Path
from datetime import datetime, timedelta, time
import pythoncom
import os
import shutil
import time
from win32com import client
import win32com.client as w3c
today = datetime.now().date()
path = "C:\\scripts\\"
excel_path = os.path.join(path , 'Book1.xlsx')
pdf_path = os.path.join(path ,'book1.pdf')
#convert to PDF
pythoncom.CoInitialize()
excel = w3c.Dispatch("Excel.Application")
sheets = excel.Workbooks.Open(excel_path)
work_sheets = sheets.Worksheets[1]
work_sheets.ExportAsFixedFormat(0, pdf_path)
sheets.Close(False)
excel.Workbooks.Close()
excel = None
pythoncom.CoUninitialize()
</code></pre>
<p>running the script:</p>
<pre><code>python c:\scripts\pycom.py
</code></pre>
<p>Python xlwings</p>
<pre><code>import os
import xlwings as xw
book = xw.Book(r'C:\\scripts\\Book1.xlsx')
sheet = book.sheets("Sheet1")
current_work_dir = os.getcwd()
pdf_path = os.path.join(current_work_dir, "Book1.pdf")
sheet.api.ExportAsFixedFormat(0, pdf_path)
app = xw.apps.active
app.quit()
</code></pre>
<p>running the script</p>
<pre><code>python c:\scripts\xlwings.py
</code></pre>
<p>powershell</p>
<pre><code>param (
[parameter(Mandatory=$true)]
[string]$ExcelFilePath, # Path to the Excel file
[parameter(Mandatory=$true)]
[string]$WorksheetName, # Name of the specific worksheet
[parameter(Mandatory=$true)]
[string]$OutputFolderPath # Path where PDF files will be saved
)
try {
$excel = New-Object -ComObject Excel.Application
$excel.Visible = $false
$excel.DisplayAlerts = $false
$workbook = $excel.Workbooks.Open($ExcelFilePath)
$worksheet = $workbook.Worksheets.Item($WorksheetName)
# Generate the PDF file path (customize as needed)
$pdfPath = Join-Path -Path $OutputFolderPath -ChildPath ($workbook.Name -replace '\.xlsx?', '.pdf')
# Export as PDF
$xlFixedFormat = "Microsoft.Office.Interop.Excel.XlFixedFormatType" -as [type]
$worksheet.ExportAsFixedFormat($xlFixedFormat::xlTypePDF, $pdfPath)
$workbook.Close()
$excel.Quit()
}
finally {
# Release COM objects
$worksheet, $workbook, $excel | ForEach-Object {
if ($_ -ne $null) {
[void][System.Runtime.InteropServices.Marshal]::ReleaseComObject($_)
}
}
}
</code></pre>
<p>powershell</p>
<pre><code>cd c:\scripts\
.\ConvertToPDF.ps1 -ExcelFilePath "c:\scripts\Book1.xlsx" -WorksheetName "Sheet1" -OutputFolderPath "c:\scripts\"
</code></pre>
<p>I've tried using a bat file to run the python scripts</p>
<pre><code>start "C:\Python3_10\python.exe" "C:\scripts\pycom.py"
</code></pre>
<p>I tested it with my complex excel file and my simple one. My simple file is just a book1.xlsx with the word blah in cell A1.</p>
<p>My Task Scheduler setup
I've tried the following</p>
<p><a href="https://i.sstatic.net/1L9GuZ3L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1L9GuZ3L.png" alt="Task Scheduler action: start a program, Python, argument pycom.py, start in C:\scripts" /></a></p>
<p><a href="https://i.sstatic.net/CUksxNar.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CUksxNar.png" alt="Task Scheduler action: start a program, c:\python3_10\python.exe, argument pycom.py, start in C:\scripts" /></a></p>
<p>I have it run with the highest privileges
<a href="https://i.sstatic.net/mdBwXGOD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/mdBwXGOD.png" alt="enter image description here" /></a></p>
<p>And these settings</p>
<p><a href="https://i.sstatic.net/0b1bsGlC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0b1bsGlC.png" alt="enter image description here" /></a></p>
<p>I've tried to investigate the event viewer, and through research, I found this article and followed the steps, but it didn't work. Excel still hangs in task scheduler, and my pdf doesn't get created.
<a href="https://shauncassells.wordpress.com/2015/09/28/windows-10-event-10016-fix-the-application-specific-permission-settings-do-not-grant-local-activation-permission-for-the-com-server-application-with-clsid-d63b10c5-bb46-4990-a94f-e40b9d520160-and-a/" rel="nofollow noreferrer">https://shauncassells.wordpress.com/2015/09/28/windows-10-event-10016-fix-the-application-specific-permission-settings-do-not-grant-local-activation-permission-for-the-com-server-application-with-clsid-d63b10c5-bb46-4990-a94f-e40b9d520160-and-a/</a></p>
<p>I've also tried giving Python <code>schedule</code> a try, and this still causes Excel to hang in Task Manager and it also doesn't create the pdf. I'm not sure what's going on.</p>
<pre><code>import schedule
import time
import subprocess
def xlwings_daily():
subprocess.run(['python', 'xlwings.py'])
def pycom_daily():
subprocess.run(['python', 'pycom.py'])
schedule.every().day.at('08:49').do(xlwings_daily)
schedule.every().day.at('08:55').do(pycom_daily)
while True:
schedule.run_pending()
time.sleep(1)
</code></pre>
<p>My goals are</p>
<ol>
<li>It runs every day.</li>
<li>The PDF is created.</li>
<li>Excel is not hanging in Task Manager.</li>
</ol>
|
<python><excel><powershell><scheduled-tasks><xlwings>
|
2024-06-26 20:08:04
| 1
| 453
|
arsarc
|
78,674,378
| 20,176,161
|
FastAPI: Loading a Model through Pickle and Making a Prediction. What is the best way to do it?
|
<p>I am building an API where I need to load a model through pickle and make a prediction</p>
<p>The code is below:</p>
<pre><code>from fastapi import FastAPI
import uvicorn
import joblib
from utilities import * # upload the functions
from schema import Customer # load the schema
from sklearn.linear_model import LogisticRegression
from sklearn.base import BaseEstimator, TransformerMixin
import os
import numpy as np
import asyncio
app = FastAPI()
path = 'data/'
X_test = import_test_data(path + 'X_test.xlsx')
MODEL_FILE = "./model/model.pkl" # path to the model
async def predict_client(data:Customer):
model = joblib.load(MODEL_FILE) # loading model parameters
y_hat_test = model.predict(X_test)
# get the predicted probabilities
y_hat_test_proba = model.predict_proba(X_test)[:][: , 1]
default = y_hat_test_proba[0] >= 0.5
return {"probability_of_defaulting": float(y_hat_test_proba[0]), "is_defaulting": bool(default)}
if __name__ == "__main__":
uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)
</code></pre>
<p>The code above works fine as it is. When I load the model through <code>async def predict_client(data:Customer):</code>, the API works and I get an output with a prediction. However, my problem is that the variable <code>model</code> is local, since it is defined inside the function.</p>
<p>When I load the model at module level (without using <code>async def predict_client</code>), I get an error, as shown below:</p>
<pre><code>model = joblib.load(MODEL_FILE)
print(model['model'].coef_)
</code></pre>
<p>I get an error as follows:</p>
<pre><code> ^^^^^^^^^^^^^^^^^^^^^^^
File "xxxxx-packages\joblib\numpy_pickle.py", line 658, in load
obj = _unpickle(fobj, filename, mmap_mode)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "xxxxx/venv\Lib\site-packages\joblib\numpy_pickle.py", line 577, in _unpickle
obj = unpickler.load()
^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\pickle.py", line 1213, in load
dispatch[key[0]](self)
File "C:\Python311\Lib\pickle.py", line 1538, in load_stack_global
self.append(self.find_class(module, name))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\pickle.py", line 1582, in find_class
return _getattribute(sys.modules[module], name)[0]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\pickle.py", line 331, in _getattribute
raise AttributeError("Can't get attribute {!r} on {!r}"
AttributeError: Can't get attribute 'WoE_Binning' on <module '__main__' (built-in)>
</code></pre>
<p>It seems that the script does not have time to load the model and runs into an error when it tries to execute <code>print(model['model'].coef_)</code>.</p>
<p>I tried to fix that issue by doing the following changes.</p>
<pre><code>async def read_model(MODEL_FILE):
model = joblib.load(MODEL_FILE)
return model
model=asyncio.run(read_model(MODEL_FILE))
print(model['model'].coef_)
</code></pre>
<p>but again i get similar error:</p>
<pre><code>AttributeError: Can't get attribute 'WoE_Binning' on <module '__main__' (built-in)>
</code></pre>
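<p>For context, the traceback points at pickle's behavior rather than timing: a pickle stores only a module path and a class name for custom objects, so the class (here <code>WoE_Binning</code>, presumably defined in the training script or a helper module) must be importable wherever the file is loaded. A minimal stand-alone illustration:</p>

```python
import pickle

class WoE_Binning:
    """Stand-in for the custom transformer named in the traceback."""
    def __init__(self, coef):
        self.coef = coef

blob = pickle.dumps(WoE_Binning([0.5, -1.2]))
# Unpickling succeeds only because WoE_Binning is importable here;
# if this module no longer defined it, pickle.loads would raise the
# same AttributeError as in the traceback above.
restored = pickle.loads(blob)
print(restored.coef)
```

<p>So one likely fix (an assumption about the project layout) is to import <code>WoE_Binning</code> from the module that actually defines it before calling <code>joblib.load</code>.</p>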
<p>How can I successfully load the model once? When I do it through a function the model is a local variable that I cannot re-use.</p>
<p>Thank you</p>
|
<python><async-await><python-asyncio><fastapi><pickle>
|
2024-06-26 19:46:01
| 1
| 419
|
bravopapa
|
78,674,335
| 3,486,684
|
Creating an enum with members whose attributes can be initialized
|
<p>From <a href="https://stackoverflow.com/a/62601113/3486684">this answer</a> I learned how to create an enum with attributes (i.e. additional data):</p>
<pre><code>from enum import Enum
from typing import NamedTuple
class EnumAttr(NamedTuple):
data: str
class EnumWithAttrs(EnumAttr, Enum):
GREEN = EnumAttr(data="hello")
BLUE = EnumAttr(data="world")
EnumWithAttrs.GREEN.data
</code></pre>
<pre><code>"hello"
</code></pre>
<p>I would now like to do the following:</p>
<pre class="lang-py prettyprint-override"><code>EnumWithAttrs.BLUE(data="yellow").data
</code></pre>
<pre><code>'yellow'
</code></pre>
<p>Put differently: I want to be able to do the following:</p>
<pre><code>a = EnumWithAttrs.BLUE(data="yellow")
b = EnumWithAttrs.BLUE(data="red")
</code></pre>
<p>I tried the following (and various variants of it), but it does not work:</p>
<pre class="lang-py prettyprint-override"><code>from enum import Enum
from typing import Any, Callable, NamedTuple, Self
class EnumAttr(NamedTuple):
data: str = ""
class EnumWithAttrs(EnumAttr, Enum):
GREEN = EnumAttr(data="hello")
BLUE = EnumAttr()
def with_data(self, s: str) -> Self:
self._replace(data=s)
return self
x = EnumWithAttrs.BLUE.with_data("world")
x.data
</code></pre>
<pre><code>''
</code></pre>
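<p>The last attempt returns <code>''</code> because <code>NamedTuple</code> instances are immutable: <code>_replace</code> builds and returns a <em>new</em> tuple instead of mutating <code>self</code>, so its return value is discarded in <code>with_data</code>. A minimal illustration:</p>

```python
from typing import NamedTuple

class Attr(NamedTuple):
    data: str = ""

a = Attr()
b = a._replace(data="world")  # returns a NEW tuple; `a` is untouched
print(a.data, b.data)         # a.data is still "", b.data is "world"
```

<p>Note also that <code>Enum</code> members are singletons, so even a working replace could not give two distinct <code>BLUE</code> values with different data; that requirement likely points toward a regular class or dataclass rather than an enum.</p>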
|
<python><enums>
|
2024-06-26 19:35:13
| 2
| 4,654
|
bzm3r
|
78,673,978
| 8,150,186
|
Pydantic does not give precedence to os environment
|
<p>I am trying to implement proper environment variable management across dev, prod and staging environments. Using Pydantic is critical since Pydantic is everywhere in the application.</p>
<p>Testing this application becomes challenging due to the setting of OS-based environment variables under the production and dev environments.</p>
<p>I have the following:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import Field
from pydantic_settings import BaseSettings, SettingsConfigDict
from devtools import debug
class Config(BaseSettings):
# Environment
STAGE: str = Field('prod', validation_alias='STAGE')
OTHER: str
model_config = SettingsConfigDict(env_file='.test.env', env_file_encoding='utf-8', env_prefix='DEV_')
settings = Config()
debug(settings)
</code></pre>
<p>with a <code>.test.env</code> file that looks like this:</p>
<pre><code>STAGE=test
DEV_OTHER="other"
</code></pre>
<p>and I set an OS environment variable as follows:</p>
<pre class="lang-bash prettyprint-override"><code>export STAGE=dev
</code></pre>
<p>giving the following response with <code>printenv</code>:</p>
<pre class="lang-bash prettyprint-override"><code>...
STAGE=dev
...
</code></pre>
<p>and then get the following output:</p>
<pre class="lang-bash prettyprint-override"><code>test.py:18 <module>
settings: Config(
STAGE='test',
OTHER='other',
)
</code></pre>
<p>In Pydantic, the OS environment variable will take precedence, but this is not happening here since the setting of environment variables in the dev environment is process-related.</p>
<p>This leads to unexpected behaviour with Pydantic Settings. Is there a workaround? I do not want to set <code>STAGE</code> in a bash file and restart the environment (which normally means restarting the computer) whenever I want to test behaviour under different environments.</p>
|
<python><pydantic><pydantic-settings>
|
2024-06-26 17:53:41
| 1
| 1,032
|
Paul
|
78,673,630
| 8,565,759
|
Pandas dataframe get location of col if name contains string and slice into multiple dataframes
|
<p>I am reading a .csv that has multiple time series columns, but each has a different name based on packets. My goal is to find the col names that contain the string 'TIME' and get their col numbers so that I can slice the df into multiple dfs with cols beginning with the time series col and ending before the next time series col.</p>
<p>I can get a list of the col names but I am not able to get their locations in the original df:</p>
<pre><code>time_cols = [col for col in df.columns if 'TIME' in col]
time_cols_loc = df.columns.get_loc(time_cols)
</code></pre>
<p>The above gives an <code>InvalidIndexError</code>, I am assuming because it is extracting that list instead of finding the values in the original df. I am also not sure how to slice it afterwards.</p>
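<p>For reference, <code>Index.get_loc</code> accepts a single label only, which is likely why passing the whole list raises <code>InvalidIndexError</code>; <code>Index.get_indexer</code> accepts a list. A sketch with invented column names:</p>

```python
import pandas as pd

# invented stand-in for the CSV's columns
df = pd.DataFrame(columns=["TIME_A", "a1", "a2", "TIME_B", "b1"])

time_cols = [col for col in df.columns if "TIME" in col]
time_locs = list(df.columns.get_indexer(time_cols))  # positions of TIME columns

# slice: each sub-df starts at a TIME column and ends before the next one
bounds = time_locs + [len(df.columns)]
sub_dfs = [df.iloc[:, bounds[i]:bounds[i + 1]] for i in range(len(time_locs))]
cols_lists = [list(d.columns) for d in sub_dfs]
print(time_locs, cols_lists)
```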
|
<python><pandas><dataframe>
|
2024-06-26 16:30:45
| 3
| 420
|
Brain_overflowed
|
78,673,610
| 23,260,297
|
Boolean indexing: How to select a specific value to create new column
|
<p>I have a dataframe which looks like this:</p>
<pre><code>col1 col2 col3
X1 Nan Nan
foo bar baz
foo bar baz
X2 Nan Nan
foo bar baz
foo bar baz
X3 Nan Nan
foo bar baz
foo bar baz
</code></pre>
<p>I have filtered to look like this:</p>
<pre><code>m = df.notna()
print(m)
</code></pre>
<pre><code>col1 col2 col3
True False False
True True True
True True True
True False False
True True True
True True True
True False False
True True True
True True True
</code></pre>
<p>I need to select the value from <code>col1</code> in each row containing the Falses, and create a new column with that value.
For example, my resultant df should essentially look like this:</p>
<pre><code>col1 col2 col3 new
foo bar baz X1
foo bar baz X1
foo bar baz X2
foo bar baz X2
foo bar baz X3
foo bar baz X3
</code></pre>
<p>I am unsure how to accomplish this with pandas, any suggestions would help</p>
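<p>One possible approach, sketched on a reduced copy of the data: use the mask to mark the header rows, forward-fill the <code>col1</code> value down each group, then drop the header rows:</p>

```python
import pandas as pd
import numpy as np

df = pd.DataFrame({
    "col1": ["X1", "foo", "foo", "X2", "foo", "foo"],
    "col2": [np.nan, "bar", "bar", np.nan, "bar", "bar"],
    "col3": [np.nan, "baz", "baz", np.nan, "baz", "baz"],
})

header = df["col2"].isna()                    # the rows that are False in m
df["new"] = df["col1"].where(header).ffill()  # carry X1/X2 down each group
result = df[~header].reset_index(drop=True)   # drop the header rows
print(result["new"].tolist())
```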
|
<python><pandas>
|
2024-06-26 16:27:24
| 2
| 2,185
|
iBeMeltin
|
78,673,568
| 2,066,572
|
Discovering which argument caused an error with Python CFFI
|
<p>I'm using the Python CFFI module to wrap functions that have multiple arguments of the same type. If I pass the wrong type to one of the arguments, CFFI gives an error identifying the actual type and the expected type, but not the variable name or position in the argument list.</p>
<p>For example, the following code takes two arguments of the same type, and passes the wrong type to one of those arguments:</p>
<pre><code>import cffi
ffi=cffi.FFI()
ffi.cdef('double mult(double x, double y);')
lib=ffi.verify('double mult(double x, double y){return x*y;};')
result=lib.mult(2,'3')
</code></pre>
<p>(I know ffi.verify is deprecated; I don't use it in my production code but it results in a much simpler MWE)</p>
<p>The above code gives the following output:</p>
<pre><code>Traceback (most recent call last):
File "/Users/haiducek/Development/cffi_type_error/cffi_type_error.py", line 7, in <module>
result=lib.mult(2,'3')
^^^^^^^^^^^^^^^
TypeError: must be real number, not str
</code></pre>
<p>Is there a way to get CFFI to tell me which argument of the called function was responsible for the TypeError?</p>
|
<python><python-cffi>
|
2024-06-26 16:17:25
| 1
| 438
|
jhaiduce
|
78,673,334
| 13,806,869
|
Why is my matplotlib boxplot completely blank?
|
<p>I have a pandas dataframe named 'target', with the following structure:</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 200000 entries, 0 to 199999
Data columns (total 1 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 0 200000 non-null float64
dtypes: float64(1)
</code></pre>
<p>I want to create a boxplot of the data in this dataframe and save it to file. My code looks like this:</p>
<pre><code>boxplot = target.boxplot()
boxplot.plot()
plt.savefig('Boxplot.png')
plt.clf()
</code></pre>
<p>The resulting png is completely blank - no data, no axis, no title, just a blank white rectangle.</p>
<p>If I use plt.show() instead to try and see what's being plotted, nothing happens; no popup is generated, not even a blank one.</p>
<p>Does anyone know what might be causing this problem please?</p>
<p><strong>EDIT:</strong> I've discovered something odd - creating a histogram of the same data first causes plt.show() to work. However, saving still results in a blank png.</p>
<p>In other words, the following code shows the histogram <strong>and</strong> the boxplot as popups correctly. However, it saves both as blank pngs:</p>
<pre><code>histogram = target.hist()
plt.show()
plt.savefig('Histogram.png')
plt.clf()
boxplot = target.boxplot()
plt.show()
plt.savefig('Boxplot.png')
plt.clf()
</code></pre>
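<p>One common cause (a guess without seeing the full script): with some backends, <code>plt.show()</code> clears or closes the current figure, so a following <code>savefig</code> writes an empty canvas. Saving before showing sidesteps this; a sketch with synthetic data:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# synthetic stand-in for the 200000-row single-column frame
target = pd.DataFrame({"0": np.random.default_rng(0).normal(size=1000)})

target.boxplot()
plt.savefig("Boxplot.png")  # save BEFORE any show()/clf()
plt.close()

import os
saved_ok = os.path.getsize("Boxplot.png") > 0
print(saved_ok)
```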
|
<python><matplotlib>
|
2024-06-26 15:25:13
| 0
| 521
|
SRJCoding
|
78,673,277
| 13,392,257
|
Can't find video tag
|
<p>I am trying to fetch tag url-content from the HTML code of site <a href="https://95jun.kinoxor.pro/984-univer-13-let-spustja-2024-07-06-19-54.html" rel="nofollow noreferrer">https://95jun.kinoxor.pro/984-univer-13-let-spustja-2024-07-06-19-54.html</a></p>
<p>The site is tricky. You can open it from this page (the first/second result of the search engine <a href="https://yandex.ru/search/?text=https%3A%2F%2Fkinokubok.pro%2F232-univer-13-let-spustja-2024-06-25-19-51.html&lr=21653" rel="nofollow noreferrer">https://yandex.ru/search/?text=https%3A%2F%2Fkinokubok.pro%2F232-univer-13-let-spustja-2024-06-25-19-51.html&lr=21653</a></p>
<p>I am looking for this URL: <code>&lt;iframe src="https://api.stiven-king.com/storage.html" ...&gt;</code></p>
<p>Proof that URL exists:
<a href="https://i.sstatic.net/ZJ2TJAmS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZJ2TJAmS.png" alt="enter image description here" /></a></p>
<p><strong>How can I fetch html tag's content?</strong></p>
<p>My code:</p>
<pre><code>import seleniumwire.undetected_chromedriver as uc
import time
options = uc.ChromeOptions()
options.add_argument('--ignore-ssl-errors=yes')
options.add_argument('--ignore-certificate-errors')
driver = uc.Chrome(options=options)
def interceptor(request):
del request.headers['Referer']
request.headers['Referer'] = 'https://yandex.ru/'
url = "https://125jun.kinoamor.pro/251-univer-13-let-spustja-2024-06-27-19-51.html"
driver.request_interceptor = interceptor
driver.get(url)
time.sleep(3)
iframe_tag_elements = driver.find_elements("xpath", "//iframe")
print(f"FOUND VIDEO TAGS: {len(iframe_tag_elements)}") # prints 7
for iframe_elem in iframe_tag_elements:
video_url = iframe_elem.get_attribute("src")
if video_url:
print("XXX_ ", video_url)
</code></pre>
<p><strong>PROBLEM</strong> - the URL "https://api.stiven-king.com/storage.html" is not printed.
Also, I don't see the URL in <code>driver.page_source</code>.</p>
<p>I tried sleeping and scrolling the page, but it didn't help.</p>
<p>I also tried <code>driver.switch_to.frame(iframe_elem)</code> and then searched for iframes again.</p>
|
<python><selenium-webdriver><seleniumwire>
|
2024-06-26 15:14:49
| 3
| 1,708
|
mascai
|
78,673,228
| 9,363,181
|
Unable to read text file in Glue job
|
<p>I am trying to read the <code>schema</code> from a <code>text</code> file under the same package as the code but cannot read that file using the <strong>AWS glue job</strong>. I will use that <code>schema</code> for creating a dataframe using <code>Pyspark</code>. I can load that file locally. I am zipping the code files as .zip, placing them under the <code>s3</code> bucket, and then referencing them in the glue job. Every other thing works fine. No problem there. But when I try the below code it doesn't work.</p>
<pre><code>file_path = os.path.join(Path(os.path.dirname(os.path.relpath(__file__))), "verifications.txt")
multiline_data = None
with open(file_path, 'r') as data_file:
multiline_data = data_file.read()
self.logger.info(f"Schema is {multiline_data}")
</code></pre>
<p>This code throws the below <strong>error</strong>:</p>
<pre><code>Error Category: UNCLASSIFIED_ERROR; NotADirectoryError: [Errno 20] Not a directory: 'src.zip/src/ingestion/jobs/verifications.txt'
</code></pre>
<p>I also tried with <code>abs_path</code> but it didn't help either. The same block of code works fine locally.</p>
<p>I also tried directly passing the <code>"./verifications.txt"</code> path but no luck.</p>
<p>So how do I read this file?</p>
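<p>For context, <code>open()</code> cannot reach a path <em>inside</em> a zip archive, which is what the <code>Not a directory: 'src.zip/...'</code> error is saying; reading the member through <code>zipfile</code> works. A self-contained sketch (archive layout and schema text are invented):</p>

```python
import zipfile

# build a tiny archive standing in for src.zip
with zipfile.ZipFile("src.zip", "w") as zf:
    zf.writestr("src/ingestion/jobs/verifications.txt", "id INT, name STRING")

# read the member through the archive instead of open()
with zipfile.ZipFile("src.zip") as zf:
    schema = zf.read("src/ingestion/jobs/verifications.txt").decode()
print(schema)
```

<p>Alternatively, shipping the text file through Glue's <code>--extra-files</code> job parameter (which copies files to the job's working directory rather than into the zip) may avoid the issue entirely.</p>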
|
<python><python-3.x><amazon-web-services><pyspark><aws-glue>
|
2024-06-26 15:03:12
| 2
| 645
|
RushHour
|
78,673,098
| 14,642,180
|
Does the Intensity of color in OpenCV image matter?
|
<p>Here is a simple code which puts a text value onto a black frame and displays it:</p>
<pre><code>frame = np.zeros((50,200,3))
org=(0,45)
font=cv2.FONT_HERSHEY_SIMPLEX
fontScale=1
fontColor=(1,0,0) ##rgb
out_img = cv2.putText(frame, "text" , org, font, fontScale, fontColor)
plt.imshow(out_img)
</code></pre>
<p>I am trying to understand how the color intensity is treated:</p>
<ul>
<li>fontColor=(1,0,0) versus fontColor=(255,0,0). Both print red text with no change to intensity. Shouldn't the intensity vary based on the strength we have given? Same with other colors also. For e.g (0,1,0) versus fontColor=(0,255,0)</li>
<li>How do we explain behaviour when we are mixing colors. For e.g. by fontColor=(255,255,0), we are mixing red and green which rightly gives us yellow text. But fontColor=(1,255,0) also gives us exactly the same yellow text (as if red and green are mixed in equal proportions)</li>
</ul>
<p>Sidenote: The problem I am trying to solve is to decipher the color of the text in a given image. I was able to get the text contours and isolate the text by putting it onto a large black mask and then get the second prominent color (foreground). So far so good. However I am having trouble deciphering the intensities of the obtained foreground color.</p>
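<p>One likely explanation: <code>np.zeros((50,200,3))</code> is float64, and Matplotlib clips <em>float</em> images to the [0, 1] range, so any channel value of 1 or more displays at full intensity — which would account for both observations. A sketch of the clipping:</p>

```python
import numpy as np

frame = np.zeros((2, 2, 3))      # float64, like np.zeros((50, 200, 3))
frame[0, 0] = (1, 0, 0)
frame[1, 1] = (255, 0, 0)

clipped = np.clip(frame, 0, 1)   # what imshow effectively renders for floats
same_red = np.array_equal(clipped[0, 0], clipped[1, 1])
print(same_red)                  # both pixels end up as the same full red
```

<p>Creating the frame as <code>np.zeros((50, 200, 3), dtype=np.uint8)</code> restores the usual 0&ndash;255 interpretation, so <code>(1,0,0)</code> and <code>(255,0,0)</code> would then differ in intensity.</p>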
|
<python><numpy><opencv><matplotlib><image-processing>
|
2024-06-26 14:40:30
| 0
| 1,475
|
Allohvk
|
78,673,089
| 9,640,238
|
Streamlit session variable does not persist on chat
|
<p>I am trying to implement a chat with Streamlit, where I need to keep a counter of words being exchanged, essentially starting from <a href="https://quickstarts.snowflake.com/guide/asking_questions_to_your_own_documents_with_snowflake_cortex/#4" rel="nofollow noreferrer">this</a>.</p>
<p>So I initialize a session variable at the beginning of the <code>main()</code> function as follows:</p>
<pre class="lang-py prettyprint-override"><code>st.session_state['words'] = 0
</code></pre>
<p>After the question has been asked and response has been provided, i.e. at the end of the <code>st.chat_input</code> block, after the <code>spinner</code> block, I calculate the count of words in the question and response (<code>wd_cnt</code>). I increment the count and write it:</p>
<pre class="lang-py prettyprint-override"><code>st.session_state.words += wd_cnt
st.write(st.session_state.words)
</code></pre>
<p>The problem is: the counter is <em>not</em> incremented. The <code>write</code> statement above always outputs the value of <code>wd_cnt</code>, as if <code>st.session_state.words</code> was reset every time.</p>
<p>What am I missing?</p>
|
<python><streamlit>
|
2024-06-26 14:39:23
| 1
| 2,690
|
mrgou
|
78,672,909
| 5,552,507
|
pyvista export does not preserve multiline texts
|
<p>I am rendering a mesh with Pyvista (version 0.43.5) and I have a problem with multiline texts at export.</p>
<p>In the interactive window and when I take a screenshot, a multiline title (or any other multiline text) is rendered well. But when I export to svg, the multiline is gone:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>interactive window and screenshot (OK)</th>
<th>export to svg (via plotter.save_graphic) (KO)</th>
</tr>
</thead>
<tbody>
<tr>
<td><img src="https://i.sstatic.net/ORCmW218.png" height="300"></td>
<td><img src="https://i.sstatic.net/fzhYgTJ6.png" height="300"></td>
</tr>
</tbody>
</table></div>
<p>Here is the code I use:</p>
<pre><code>import pyvista as pv
mesh = pv.Cylinder()
pl = pv.Plotter(off_screen=True)
pl.add_mesh(mesh, label="cyl")
pl.add_title("first line\nsecond line\nthird one")
pl.save_graphic('multiline_title_test.svg') # 'svg', 'eps', 'ps', 'pdf', 'tex'
pl.show(screenshot='multiline_title_test.png')
pl.close()
</code></pre>
<p><strong>Question: is there a way to preserve multiline at svg export?</strong> (using pyvista or maybe vtk)</p>
<p><strong>Edit 1:</strong> as I learn more about the problem (though not yet the solution):</p>
<p>The generated svg page includes for text:</p>
<pre><code><text fill="#000000" x="512" y="-5" font-size="45" text-anchor="middle" dy="45" font-family="Helvetica">
first line
second line
third one</text>
</code></pre>
<p>From this link <a href="https://stackoverflow.com/questions/31469134/how-to-display-multiple-lines-of-text-in-svg">How to display multiple lines of text in SVG?</a>, I understand it should include some 'tspan' with x and dy properties. Is this the only way?</p>
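<p>One workaround (not a pyvista option, just post-processing the exported file with the standard library): rewrite the multiline <code>&lt;text&gt;</code> node into <code>&lt;tspan&gt;</code> children, as the linked answer suggests. A sketch on a reduced copy of the generated SVG:</p>

```python
import xml.etree.ElementTree as ET

# reduced stand-in for the SVG that save_graphic produces
svg = '''<svg xmlns="http://www.w3.org/2000/svg">
<text x="512" y="-5" font-size="45" dy="45">first line
second line
third one</text></svg>'''

ET.register_namespace("", "http://www.w3.org/2000/svg")
root = ET.fromstring(svg)
ns = "{http://www.w3.org/2000/svg}"

for text in root.iter(f"{ns}text"):
    lines = [l.strip() for l in (text.text or "").splitlines() if l.strip()]
    if len(lines) > 1:
        text.text = None  # move content into tspan children
        for i, line in enumerate(lines):
            tspan = ET.SubElement(text, f"{ns}tspan",
                                  {"x": text.get("x"),
                                   "dy": "0" if i == 0 else "1.2em"})
            tspan.text = line

out = ET.tostring(root, encoding="unicode")
tspan_count = out.count("<tspan")
print(tspan_count)
```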
|
<python><vtk><pyvista>
|
2024-06-26 14:03:05
| 1
| 307
|
PiWi
|
78,672,808
| 8,315,819
|
Sub-domain Enumeration - socket.gaierror: [Errno 11001] getaddrinfo failed
|
<p>I am trying to do sub-domain enumeration in my organization. Part of the python code is to get the IPs of the already enumerated sub-domains which are included in a text file . Used the below code</p>
<pre><code>import socket
file = open(r'C:\Users****\Downloads\sublist.txt')
domain_list = []
for x in file.readlines():
domain_list.append(x.rstrip())
for y in domain_list:
print(y + ' -> ' + socket.gethostbyname(y))
</code></pre>
<p>The script doesn't seem to work and gives me the error <code>"socket.gaierror: [Errno 11001] getaddrinfo failed"</code> when I include my organization's sub-domains in the txt file <code>"sublist.txt"</code>. But it gives me the proper IP addresses when any other public domains, such as google.com, are used in the list.</p>
<p>Note - simply running the command <code>socket.gethostbyname('organization sub-domain name')</code> gives me the proper result and I am connected via VPN to my org</p>
|
<python><dns><vpn>
|
2024-06-26 13:44:43
| 1
| 445
|
Biswa
|
78,672,803
| 9,506,773
|
Azure keyword recognition model (`.table`) not working when feeding it wave files
|
<p>I have the following script:</p>
<pre class="lang-py prettyprint-override"><code>import time
import azure.cognitiveservices.speech as speechsdk
import logging
# Configure logging
logging.basicConfig(level=logging.INFO)
# The phrase your keyword recognition model triggers on.
KEYWORD = "KEYWORD"
def recognize_keyword_from_wav_file(wav_file_path):
"""Performs keyword-triggered speech recognition with a WAV file."""
global true_positives, false_positives, false_negatives
try:
speech_config = speechsdk.SpeechConfig(subscription='xyz', region='westeurope')
model = speechsdk.KeywordRecognitionModel("./keyword.table")
audio_config = speechsdk.audio.AudioConfig(filename=wav_file_path)
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
except Exception as e:
logging.error(f"Failed to initialize speech recognizer: {e}")
return
def recognizing_cb(evt):
"""Callback for recognizing event."""
try:
if evt.result.reason == speechsdk.ResultReason.RecognizingKeyword:
logging.info(f'RECOGNIZING KEYWORD: {evt}')
elif evt.result.reason == speechsdk.ResultReason.RecognizingSpeech:
logging.info(f'RECOGNIZING: {evt}')
except Exception as e:
logging.error(f"Error in recognizing callback: {e}")
def recognized_cb(evt):
"""Callback for recognized event."""
try:
if evt.result.reason == speechsdk.ResultReason.RecognizedKeyword:
logging.info(f'RECOGNIZED KEYWORD: {evt}')
elif evt.result.reason == speechsdk.ResultReason.RecognizedSpeech:
logging.info(f'RECOGNIZED: {evt}')
except Exception as e:
logging.error(f"Error in recognized callback: {e}")
try:
speech_recognizer.recognizing.connect(recognizing_cb)
speech_recognizer.recognized.connect(recognized_cb)
speech_recognizer.session_started.connect(lambda evt: logging.info(f'SESSION STARTED: {evt}'))
speech_recognizer.session_stopped.connect(lambda evt: logging.info(f'SESSION STOPPED {evt}'))
speech_recognizer.canceled.connect(lambda evt: logging.info(f'CANCELED {evt}'))
speech_recognizer.start_keyword_recognition(model)
logging.info(f'Say something starting with "{KEYWORD}" followed by whatever you want...')
speech_recognizer.recognize_once()
speech_recognizer.stop_keyword_recognition()
except Exception as e:
logging.error(f"Error during speech recognition: {e}")
# Example usage:
if __name__ == "__main__":
wav_file_path = "./output01.wav"
recognize_keyword_from_wav_file(wav_file_path)
</code></pre>
<p>This is only giving me <code>RecognizedSpeech</code> but never <code>RecognizedKeyword</code>. This happens when using wave files as input to the keyword recognition model instead of the default microphone streamed by: <code>audio_config = speechsdk.audio.AudioConfig(use_default_microphone=True)</code>, which works fine. Any ideas?</p>
|
<python><azure><azure-speech>
|
2024-06-26 13:43:45
| 1
| 3,629
|
Mike B
|
78,672,622
| 14,282,714
|
Insert data to supabase with python
|
<p>I would like to import some data to a <code>supabase</code> using Python. I followed the documentation like <a href="https://supabase.com/docs/reference/python/insert" rel="nofollow noreferrer">here</a>, which suggests that we can use the <code>supabase.table</code> attribute to <code>.insert</code> data. I tried the following code which doesn't work:</p>
<pre><code>import supabase
response = (
supabase.table("user_results")
.insert({"id": 1, "Date": "Denmark",
"Race": "1",
"Track": "Mario Kart Stadium",
"Finish": "1"})
.execute()
)
</code></pre>
<p>Output:</p>
<pre><code>AttributeError: module 'supabase' has no attribute 'table'
</code></pre>
<p>I do understand the error, but I don't understand why this is not working. I find it really confusing how we can simply insert some data to a supabase using python. So I was wondering if anyone knows how to insert data to a supabase?</p>
<hr />
<p>For reference, my database looks like this in supabase:</p>
<p><a href="https://i.sstatic.net/65AHt73B.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65AHt73B.png" alt="enter image description here" /></a></p>
|
<python><supabase>
|
2024-06-26 13:10:22
| 2
| 42,724
|
Quinten
|
78,672,585
| 15,913,281
|
Python "Explode" Keys in Dict
|
<p>Given a dict like the one below how do I "explode" the keys to create a new dict with the keys split out? Each key should be split on " n ".</p>
<pre><code>original_dict = {"AB n DC": [12, 13], "JH n UY": [22, 1]}
new_dict = {"AB": [12, 13], "DC": [12, 13], "JH": [22, 1], "UY":[22,1]}
</code></pre>
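<p>For reference, a dict comprehension with two <code>for</code> clauses produces this shape (note the split-out keys share the same list object, which may matter if the lists are mutated later):</p>

```python
original_dict = {"AB n DC": [12, 13], "JH n UY": [22, 1]}

# split each key on " n " and map every part to the same value
new_dict = {part: value
            for key, value in original_dict.items()
            for part in key.split(" n ")}
print(new_dict)
```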
|
<python>
|
2024-06-26 13:01:57
| 3
| 471
|
Robsmith
|
78,672,415
| 539,490
|
Pydantic model field with default value does not pass type checking
|
<p>The following code shows a pylance (pyright) error for <code>AModel()</code> of <code>Argument missing for parameter "field_b"</code>:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Field
from typing import Optional, Any
class AModel(BaseModel):
field_a: str = Field()
field_b: Optional[bool] = Field(None)
instance_1 = AModel(field_a="", field_b=None) # No error
instance_2 = AModel(field_a="") # Error
# ^^^^^^^^^^^^^^^^^^
kwargs: dict[str, Any] = {"field_a": "", "field_bad": True}
instance_3 = AModel(**kwargs) # No error but no type checking
</code></pre>
<p>Is it possible to instantiate this model without providing <code>field_b=None</code> and whilst retaining type checking?</p>
|
<python><python-typing><pydantic><pyright>
|
2024-06-26 12:29:14
| 1
| 29,009
|
AJP
|
78,672,251
| 20,920,790
|
How fix error Object of type *** is not JSON serializable in Airflow?
|
<p>I made a custom library for working with an API,
but I get an error when trying to use it in my DAG.</p>
<p>When I run my DAG I get an error for the <code>connect_to_api</code> task:</p>
<pre><code>{python.py:202} INFO - Done. Returned value was: [<yclients.yclientsapi object at 0x7f5a0fb87890>]
{xcom.py:664} ERROR - Object of type yclientsapi is not JSON serializable. If you are using pickle instead of JSON for XCom, then you need to enable pickle support for XCom in your airflow config or make sure to decorate your object with attr.
TypeError: Object of type yclientsapi is not JSON serializable
</code></pre>
<p>I checked serialization in the Airflow documentation, but I don't think I want to rewrite my library code.
How do I need to correct my code to avoid this error?</p>
<p>P. S. I've tried enabling pickle support for XCom by adding <code>AIRFLOW__CORE__ENABLE_XCOM_PICKLING=true</code> to <code>docker-compose.yml</code>.
It did not solve my problem.</p>
<pre class="lang-yaml prettyprint-override"><code>x-airflow-common: &airflow-common
#image: apache/airflow:2.8.2
build: .
env_file: &airflow-common-env .env
environment: # delete on error
- AIRFLOW__CORE__LOAD_EXAMPLES=false # delete on error
- AIRFLOW__CORE__ENABLE_XCOM_PICKLING=true
volumes:
</code></pre>
<p>Airflow 2.8.2, Python 3.11</p>
<p>Code of my library:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
from datetime import date
import httpx
import ujson
# create class
class yclientsapi:
def __init__(self, bearer_key: str, user_key: str):
self.bearer_key = bearer_key
self.user_key = user_key
self.headers = {
"Accept": "application/vnd.yclients.v2+json",
"Content-Type": "application/json",
'Authorization': f"Bearer {bearer_key}, User {user_key}"
}
# get table with salons (stores)
def get_salons_info(self):
session = httpx.Client()
url = "https://api.yclients.com/api/v1/groups"
response = session.get(url, headers=self.headers)
first_request= ujson.loads(response.text)
df = pd.DataFrame({
'salon_id': [i['id'] for i in first_request['data'][0]['companies']]
, 'salon_name': [i['title'] for i in first_request['data'][0]['companies']]
})
salon_ids_list = [i['id'] for i in first_request['data'][0]['companies']]
return df, salon_ids_list
</code></pre>
<p>My Dag code:</p>
<pre class="lang-py prettyprint-override"><code>import datetime
import yclients as yc
from airflow.decorators import dag, task
from airflow.models import Variable
import httpx
default_args = {
'owner': 'user',
'depends_on_past': False,
'retries': 2,
'retry_delay': datetime.timedelta(minutes=5),
'start_date': datetime.datetime(2024, 6, 20)
}
schedule_interval = '*/20 * * * *'
host = Variable.get('host')
database_name = Variable.get('database_name')
user_name = Variable.get('user_name')
password_for_db = Variable.get('password_for_db')
server_host_name = Variable.get('server_host_name')
bearer_key = Variable.get('bearer_key')
user_key = Variable.get('user_key')
sales_plans_url = Variable.get('sales_plans_url')
specialization_prices_url = Variable.get('specialization_prices_url')
bot_token = Variable.get('bot_token')
chat_id = Variable.get('chat_id')
@dag(default_args=default_args, schedule_interval=schedule_interval, catchup=False, concurrency=4)
def dag_update_database_test():
@task
def connect_to_api(bearer_key: str, user_key: str):
api = yc.yclientsapi(bearer_key, user_key)
result = []
result.append(api)
return result
@task
def get_salons(api):
api = api[0]
result_from_salons_api = api.get_salons_info()
return result_from_salons_api
@task
def send_msg(bot_token: str, chat_id: str, message: str):
message = message[0]
url = f'https://api.telegram.org/bot{bot_token}/sendMessage?chat_id={chat_id}&text={message}'
client = httpx.Client()
client.post(url)
api_connection_task = connect_to_api(bearer_key=bearer_key, user_key=user_key)
# get_salon_task[0] = salons_df, get_salon_task[1] = salon_ids_list
get_salon_task = get_salons(api_connection_task)
send_test_message = send_msg(bot_token, chat_id, get_salon_task)
api_connection_task.set_downstream(get_salon_task)
get_salon_task.set_downstream(send_test_message)
dag_update_database_test = dag_update_database_test()
</code></pre>
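<p>For context, with the default XCom backend every task's return value must survive <code>json.dumps</code>, and a plain Python object does not. A minimal stdlib illustration of the failure, plus the usual workaround of returning only plain data and rebuilding the client inside each task (the class name here is a stand-in for <code>yclientsapi</code>):</p>

```python
import json

class ApiClient:  # stand-in for yclientsapi
    def __init__(self, bearer_key, user_key):
        self.bearer_key = bearer_key
        self.user_key = user_key

# what XCom effectively attempts with the task's return value
try:
    json.dumps([ApiClient("abc", "xyz")])
    failed = False
except TypeError:  # the same failure the scheduler logs
    failed = True
print(failed)

# workaround: pass only JSON-friendly data; each task rebuilds the client
creds = {"bearer_key": "abc", "user_key": "xyz"}
payload = json.dumps(creds)                 # serializes fine
rebuilt = ApiClient(**json.loads(payload))  # reconstructed in the next task
print(rebuilt.bearer_key)
```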
|
<python><airflow>
|
2024-06-26 11:55:06
| 2
| 402
|
John Doe
|
78,672,244
| 2,177,047
|
Debugging Pandas to_sql function
|
<p>I use the following code to insert a DataFrame into an MS SQL Server database.</p>
<p><code>data</code> is a pd.DataFrame that has the same columns as the database.</p>
<pre><code>from sqlalchemy import create_engine
engine = create_engine(f'mssql+pyodbc:///?odbc_connect={connection_string}', fast_executemany=True)
with engine.connect() as connection:
data.to_sql(
'dbo.tablename',
connection,
if_exists='append',
index=False,
dtype={...} # Correct mapping of columns in data to SQL data types.
)
</code></pre>
<p>The code runs and I don't receive any error. However, the database does not change even though it should.</p>
<p>How can I debug this code?</p>
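<p>One debugging avenue, sketched against in-memory SQLite since I can't see the MSSQL setup: with SQLAlchemy 2.x, <code>engine.connect()</code> opens a transaction that is rolled back on close unless explicitly committed, so the inserts can silently vanish; <code>engine.begin()</code> commits on exit:</p>

```python
import pandas as pd
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # in-memory stand-in for the MSSQL engine
data = pd.DataFrame({"a": [1, 2]})

# engine.begin() commits when the block exits;
# engine.connect() alone may roll the INSERTs back on close
with engine.begin() as connection:
    data.to_sql("tablename", connection, if_exists="append", index=False)

with engine.connect() as connection:
    n = connection.execute(text("SELECT COUNT(*) FROM tablename")).scalar()
print(n)
```

<p>Note also that <code>'dbo.tablename'</code> as a single string may create a table literally named <code>dbo.tablename</code>; <code>to_sql('tablename', ..., schema='dbo')</code> is the documented way to target a schema.</p>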
|
<python><sql-server><pandas><sqlalchemy><pyodbc>
|
2024-06-26 11:53:58
| 1
| 2,136
|
Ohumeronen
|
78,671,947
| 5,594,490
|
Flask login to a custom page
|
<p>By default, Flask-Login expects the login view at the path <code>/login</code>. I am trying to use another page, <code>/test/login</code>, but it is not working and gives an error. Here is the relevant code snippet; please let me know what I am doing wrong.</p>
<pre><code>login_manager.login_view = "test/login"
@app.route('/test/login')
def login():
print("CONTEXT ===> " + context_path)
return render_template('login.html')
</code></pre>
<p>Error I am getting</p>
<pre><code>werkzeug.routing.exceptions.BuildError: Could not build url for endpoint 'test/login'. Did you mean 'login' instead?
</code></pre>
|
<python><flask-login>
|
2024-06-26 10:52:47
| 1
| 481
|
Amol
|
78,671,781
| 10,829,044
|
pandas dataframe index to html merged cell
|
<p>I have a pandas dataframe like as below</p>
<pre><code>import pandas as pd
data = {
'A': [1, 1, 2, 2, 3],
'B': [10, 10, 20, 20, 30],
'C': [100, 101, 200, 201, 300],
'D': ['X', 'X', 'Y', 'Y', 'Z'],
'E': ['Alpha', 'Beta', 'Beta', 'Gamma', 'Gamma'],
'F': [1000, 1001, 1002, 1003, 1004]
}
df = pd.DataFrame(data)
</code></pre>
<p>I already referred the post <a href="https://stackoverflow.com/questions/61205628/how-to-merge-cells-in-the-html-output-of-a-pandas-dataframe-in-python">here</a>. But this may not help with my objective as I have more than 300 files</p>
<p>My objective is to do the below</p>
<ol>
<li>I have more than 300 html files for which I would like to apply this formatting.</li>
</ol>
<p>The problem is, when I try to index columns (for the purpose of merge), the layout goes haywire and results in an ugly output as shown below</p>
<pre><code>res = df.set_index(['A', 'B','C'])
s = res.style
s = s.set_properties(
**{'border': '1px black solid !important','font-size': '10pt'}).set_table_attributes(
'style="border-collapse:collapse"').set_table_styles([{
'selector': '.col_heading',
'props': 'background-color:black; font-size:9pt; color: white; border-collapse: collapse; border: 1px black solid !important;'
}])
output = s.to_html("table.html",escape=False,index=False)
</code></pre>
<p><a href="https://i.sstatic.net/cW997Vig.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cW997Vig.png" alt="Error output" /></a></p>
<p>Instead, I would like to get an output like below. One good thing is, I always know that it is only columns A and B that has to be merged.</p>
<p><a href="https://i.sstatic.net/ohlB1pA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ohlB1pA4.png" alt="enter image description here" /></a></p>
<p><strong>update - Sample input</strong></p>
<pre><code>df = pd.DataFrame({
'Project_name': ['ABC', 'ABC', 'DEF', 'DEF', 'XYZ'],
'Application': ['App1', 'App1', 'App2', 'App2', 'App3'],
'Past_products': ['P1', 'P1', 'P2', 'P2', 'P3'],
'Current_Product': ['C1', 'C2', 'C3', 'C4', 'C5'],
'Recommendation_type': ['Type1', 'Type1', 'Type2', 'Type2', 'Type3'],
'Rec_products': ['R1', 'R2', 'R3', 'R4', 'R5']
})
</code></pre>
|
<python><html><css><pandas><dataframe>
|
2024-06-26 10:17:05
| 1
| 7,793
|
The Great
|