| QuestionId (int64, 74.8M–79.8M) | UserId (int64, 56–29.4M) | QuestionTitle (string, 15–150 chars) | QuestionBody (string, 40–40.3k chars) | Tags (string, 8–101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0–44) | UserExpertiseLevel (int64, 301–888k) | UserDisplayName (string, 3–30 chars, nullable) |
|---|---|---|---|---|---|---|---|---|
77,695,727
| 1,947,961
|
Write data from hadoop file into dbf using pyspark is time consuming
|
<p>We have a requirement to query data in our Hadoop cluster (Hive) and save it into a DBF file.
To achieve that we use Spark as our processing engine, specifically PySpark (Python 3.4).
We use the <code>dbf</code> package as our DBF writer: <a href="https://pypi.org/project/dbf/" rel="nofollow noreferrer">https://pypi.org/project/dbf/</a>
After some testing, we noticed the process takes a lot of time, sometimes reaching 20 minutes. It is much slower than writing to other file formats such as CSV or ORC.</p>
<p><strong>Base syntax</strong> (around 20 minutes)</p>
<pre><code>import dbf
from datetime import datetime

collections = spark.sql("SELECT JENISKEGIA, JUMLAHUM_A, ... , URUTAN, WEIGHT FROM silastik.sakernas_2022_8").collect()
filename2 = "/home/sak202208_" + str(datetime.now()) + "_tes.dbf"
header2 = "JENISKEGIA N(8,0); JUMLAHUM_A N(8,0); ... , URUTAN N(7,0); WEIGHT N(8,0)"
new_table2 = dbf.Table(filename2, header2)
new_table2.open(dbf.READ_WRITE)
for row in collections:
    new_table2.append(row)
new_table2.close()
</code></pre>
<p><strong>Enabling multithreading</strong> (similar result)</p>
<pre><code>import concurrent.futures
import os

import dbf
from datetime import datetime

collections = spark.sql("SELECT JENISKEGIA, JUMLAHUM_A, ... , URUTAN, WEIGHT FROM silastik.sakernas_2022_8").collect()
filename2 = "/home/sak202208_" + str(datetime.now()) + "_tes.dbf"
header2 = "JENISKEGIA N(8,0); JUMLAHUM_A N(8,0); ... , URUTAN N(7,0); WEIGHT N(8,0)"
new_table2 = dbf.Table(filename2, header2)
new_table2.open(dbf.READ_WRITE)

def append_row(table, record):
    table.append(record)

with concurrent.futures.ThreadPoolExecutor(max_workers=min(32, (os.cpu_count() or 1) + 4)) as executor:
    for row in collections:
        executor.submit(append_row(new_table2, row))

new_table2.close()
</code></pre>
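<p>Note that <code>executor.submit(append_row(new_table2, row))</code> calls <code>append_row</code> immediately in the main thread and submits its return value, so the pool never runs anything in parallel; <code>submit</code> expects the callable and its arguments separately. Even with that fixed, appends through a pure-Python DBF writer into a single file are largely serialized by the GIL, so threads are unlikely to help much here. A minimal stdlib sketch of the <code>submit</code> distinction (the <code>work</code> function is a stand-in, not part of the dbf API):</p>

```python
import concurrent.futures

def work(x):
    # stand-in for append_row: any function we want run on the pool
    return x * 2

with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    # WRONG: work(3) runs right here in the main thread; submit() then
    # receives its return value (an int), not a callable.
    #   executor.submit(work(3))
    # RIGHT: pass the callable and its arguments separately.
    futures = [executor.submit(work, n) for n in range(5)]
    values = sorted(f.result() for f in futures)

print(values)  # [0, 2, 4, 6, 8]
```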
<p>The Spark driver memory has been set to 7 GB, but when we check with the <code>top</code> command it only uses around 1 GB while writing to the DBF file.</p>
<p>How can we write the data into the DBF file efficiently? Is there any tuning we are missing, or any alternative?</p>
|
<python><multithreading><pyspark><hive><dbf>
|
2023-12-21 04:16:19
| 1
| 418
|
m hanif f
|
77,695,724
| 690,573
|
How can I add type annotations to this Python code that dynamically assigns attributes to a class?
|
<p>I have the following Python code that dynamically imports a module and sets an attribute on a class dynamically:</p>
<pre class="lang-py prettyprint-override"><code>class _ModuleRegistry(object):
_modules = {}
def defer_import(
self,
import_statement: str,
import_name: str,
):
self._modules[import_name] = import_statement
setattr(self, import_name, None)
def __getattribute__(self, __name: str):
if (
__name
and not __name.startswith("__")
and __name not in ("defer_import", "_modules")
):
import_statement = self._modules.get(__name)
if import_statement:
exec(import_statement, locals())
setattr(self, __name, locals().get(__name))
ret_val = locals().get(__name)
if ret_val:
return ret_val
else:
return None
else:
val = super().__getattribute__(__name)
return val
registry = _ModuleRegistry()
registry.defer_import("from pandas import read_csv", "read_csv")
print(registry.read_csv) # I want this to have function type hinting
</code></pre>
<p>I want the type checker to be able to infer the function type of the <code>read_csv</code> function.</p>
<p>Is there a way to do this?</p>
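<p>One common pattern (a sketch with caveats, not a drop-in replacement for the class above): do the real import lazily in <code>__getattr__</code>, and show the type checker the genuine symbol under <code>typing.TYPE_CHECKING</code> so the attribute gets the function's real signature. Using a stdlib module instead of pandas for illustration:</p>

```python
import importlib
from typing import TYPE_CHECKING

class _ModuleRegistry:
    if TYPE_CHECKING:
        # Only the type checker sees this, so registry.loads gets
        # json.loads' real signature without importing at runtime.
        from json import loads

    def __init__(self):
        self._imports = {}

    def defer_import(self, module_name: str, attr_name: str) -> None:
        self._imports[attr_name] = module_name

    def __getattr__(self, name: str):
        # __getattr__ runs only when normal lookup fails, avoiding the
        # __getattribute__ interception used in the original code
        if name in self._imports:
            value = getattr(importlib.import_module(self._imports[name]), name)
            setattr(self, name, value)  # cache so later lookups are direct
            return value
        raise AttributeError(name)

registry = _ModuleRegistry()
registry.defer_import("json", "loads")
print(registry.loads('{"a": 1}'))  # {'a': 1}
```

A separate <code>.pyi</code> stub file is the other usual route when the runtime class cannot be changed.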
|
<python><mypy><typing>
|
2023-12-21 04:15:20
| 1
| 5,214
|
Nathan Jones
|
77,695,718
| 3,469,243
|
How to read in report-style excel data into pandas dataframe
|
<p>I've downloaded the data (from <a href="https://www.nalpdirectory.com/" rel="nofollow noreferrer">https://www.nalpdirectory.com/</a>) as seen in this excel screen shot:</p>
<p><a href="https://i.sstatic.net/HiaVL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HiaVL.png" alt="enter image description here" /></a></p>
<p>How can I reformat this into a pandas dataframe? Trying to figure out how to do as much as possible in python instead of in excel. I'm looking into <code>pd.stack()</code> and <code>pd.unstack()</code>, but think I'm missing a few steps first. Thank you!</p>
|
<python><pandas><excel><dataframe>
|
2023-12-21 04:12:55
| 1
| 2,486
|
As3adTintin
|
77,695,665
| 1,236,858
|
Strict-Transport-Security response header always gets overridden
|
<p>I have an application that uses Python (FastAPI). In the python code, I have this <code>Strict-Transport-Security</code> set</p>
<pre><code>application.add_middleware(HSTS, Option={'max-age': 31536000, 'includeSubDomains': True })
</code></pre>
<p>When I try using Postman to my local, it returns the value correctly <code>max-age=31536000; includeSubDomains</code></p>
<p>But somehow, after I deploy to server (using Kubernetes), the value is changed to <code>max-age=15724800; includeSubDomains</code></p>
<p>Why does the value get overridden?</p>
<p>My ingress is as follows:</p>
<pre><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/proxy-body-size: "256m"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-protocols: "TLSv1.3"
  name: api-dev-https
  namespace: dev
</code></pre>
<p>I tried adding</p>
<pre><code>nginx.ingress.kubernetes.io/server-snippets: |
  proxy_hide_header Strict-Transport-Security;
  set $hsts_header_val "";
  if ($https = on) {
    set $hsts_header_val "max-age=31536000; includeSubDomains";
  }
  add_header Strict-Transport-Security "$hsts_header_val" always;
</code></pre>
<p>Also</p>
<pre><code>nginx.ingress.kubernetes.io/hsts: "true"
nginx.ingress.kubernetes.io/hsts-max-age: "31536000"
nginx.ingress.kubernetes.io/hsts-include-subdomains: "true"
</code></pre>
<p>But it does not help. Any idea?</p>
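<p>For what it's worth, <code>max-age=15724800</code> is exactly the ingress-nginx controller's default <code>hsts-max-age</code>, and in ingress-nginx the HSTS settings are global ConfigMap options rather than per-Ingress annotations, which would explain why the annotations above are ignored. A sketch of the ConfigMap change (the name and namespace depend on how the controller was installed):</p>

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumption: default Helm install name
  namespace: ingress-nginx
data:
  hsts: "true"
  hsts-max-age: "31536000"
  hsts-include-subdomains: "true"
```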
|
<python><https><fastapi><hsts>
|
2023-12-21 03:54:35
| 0
| 7,307
|
rcs
|
77,695,581
| 5,120,376
|
How to select a single column from the index of a multi-index dataframe?
|
<p>I am having trouble getting a column from the indexes of a multi-index pandas dataframe.</p>
<p>Normally if I want to get a column from a regular dataframe that does not have multi-index it is easy. I simply do df['col_name'] or df.col_name and I get the column. However with a multi-index dataframe this does not work when the column I want is in the multi-index of the dataframe. (I can still get a regular column.) For example, if I do df['col_name'] or df.col_name for a col_name that is in the multi-index, I get an error: KeyError: 'col_name'</p>
<p>The only solution that I can think of without going into crazy gymnastics (df.index.get_level_values('col_name')) is the following:</p>
<p>df.reset_index()['col_name'] or df.reset_index().col_name</p>
<p>I feel I must be misunderstanding or missing something, because selecting a column from the multi-index should be easy and straightforward; wanting to do so is very common. What is a simple way to select a single column from the multi-index?</p>
<p>Example:</p>
<pre><code>import pandas as pd
file_name = "https://raw.githubusercontent.com/uiuc-cse/data-fa14/gh-pages/data/iris.csv"
df = pd.read_csv(file_name)
df = df.set_index(['sepal_length','sepal_width'])
print(df.head())
petal_length petal_width species
sepal_length sepal_width
5.1 3.5 1.4 0.2 setosa
4.9 3.0 1.4 0.2 setosa
4.7 3.2 1.3 0.2 setosa
4.6 3.1 1.5 0.2 setosa
5.0 3.6 1.4 0.2 setosa
df['sepal_length'] #KeyError: 'sepal_length'
df.sepal_length #KeyError: 'sepal_length'
df.loc['sepal_length'] #KeyError: 'sepal_length'
df.loc['sepal_length',:] #KeyError: 'sepal_length'
df.index.sepal_length # AttributeError: 'MultiIndex' object has no attribute 'sepal_length'
df.index['sepal_length'] #IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
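<p>For reference, the direct accessor mentioned above, <code>df.index.get_level_values('col_name')</code>, returns the level as an <code>Index</code> aligned with the rows, without copying the whole frame the way <code>reset_index()</code> does. A small self-contained sketch (with made-up data in place of the iris file):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "sepal_length": [5.1, 4.9, 4.7],
    "sepal_width": [3.5, 3.0, 3.2],
    "species": ["setosa", "setosa", "setosa"],
}).set_index(["sepal_length", "sepal_width"])

# the index level as a pandas Index (positional analogue: get_level_values(0))
lengths = df.index.get_level_values("sepal_length")
print(list(lengths))  # [5.1, 4.9, 4.7]
```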
|
<python><pandas><dataframe>
|
2023-12-21 03:22:18
| 2
| 552
|
liyuan
|
77,695,324
| 1,019,139
|
How do I get Python BRISQUE package score to match OpenCVSharp BRISQUE score
|
<p>My goal is to get the BRISQUE score from OpenCVSharp to match the BRISQUE score from Python OpenCV BRISQUE for the same image. The problem is the two platforms take different and incompatible format inputs for the model and range files, and I haven't been able to figure out how to convert one to the other. OR where the Python data originated, so I can obtain it in the format compatible with OpenCVSharp (yml). Can anyone point me in the right direction?</p>
<p>More details:</p>
<p>In Python, the OpenCV library takes a "model" and "norm" file, supplied with the OpenCV package and named "<em>svm.txt</em>" and "<em>normalize.pickle</em>" respectively. These are specified in <em>opencv-env2\Lib\site-packages\brisque\brisque.py</em>. The files are stored in <em>opencv-env2\Lib\site-packages\brisque\models</em>.</p>
<p>For OpenCVSharp I'm using the model and range files, <a href="https://github.com/opencv/opencv_contrib/tree/master/modules/quality/samples" rel="nofollow noreferrer">in yml format, found here</a>. These are specified in the C# call to <em>QualityBRISQUE.Compute()</em></p>
<p>The Python and OpenCVSharp model files are clear text, and while the format between them is very different, they both appear to contain the same type of data, though the numerical values are clearly different. So I need to somehow convert the Python version to match the yml format used by OpenCVSharp.</p>
<p>For the <em>range</em> file required by OpenCVSharp; I'm guessing the <em>normalize.pickle</em> file must be the corresponding file on the Python side, but it's some sort of binary format, vs. the text format yml range file used by OpenCVSharp, so I have no idea what to do with that one.</p>
<p>I'm new to OpenCV, and any information or nudge in the right direction would be greatly appreciated!</p>
<hr />
<p>Edit --> Adding example code:</p>
<p>The following calculates the BRISQUE score in Python:</p>
<pre><code>import cv2
from brisque import BRISQUE
iobj = BRISQUE(url=False)
Iin = cv2.imread("c:\\temp\\temp.jpg");
score = int(iobj.score(Iin))
print("Score is " + str(score))
</code></pre>
<p>In C#, add the Nuget packages OpenCvSharp4, OpenCvSharp4.runtime.win and OpenCvSharp4.Windows, then do this:</p>
<pre><code>static void Main(string[] args)
{
    const string modelFile = "OpenCVSharp\\brisque_model_live.yml";
    const string rangeFile = "OpenCVSharp\\brisque_range_live.yml";
    Mat img = Cv2.ImRead("c:\\temp\\temp.jpg", ImreadModes.AnyColor);
    Scalar scoreScaler = QualityBRISQUE.Compute(img, modelFile, rangeFile);
    double score = scoreScaler[0];
    Console.WriteLine("Score: " + score.ToString());
}
</code></pre>
<p>Using the <a href="https://fineartamerica.com/featured/checkerboard-floor-reflections-lindley-johnson.html" rel="nofollow noreferrer">image found here</a>, Python gives a score of 59, C# gives a score of 37.6.</p>
|
<python><c#><opencv><image-quality>
|
2023-12-21 01:40:29
| 0
| 447
|
Matt
|
77,695,152
| 1,964,707
|
How to configure pandas.read_sql() when using SQLalchemy models with bind_key
|
<p>I'm using the SQLAlchemy ORM (Flask-SQLAlchemy, to be precise) with a table/model Result that is stored in a secondary SQLite database, bound via the <strong>__bind_key__</strong> mechanism. The definition is similar to this (fields omitted):</p>
<pre><code>class Result(db.Model):
    __bind_key__ = "result"

    id = db.Column(db.Integer, primary_key=True)
    result_uid = db.Column(db.String(255), nullable=False, index=True)
    ...
</code></pre>
<p>It is configured as a bind:</p>
<pre><code> SQLALCHEMY_DATABASE_URI = "sqlite:///" + SQLITE_PATH
SQLALCHEMY_BINDS = {"result": "sqlite:///" + RESULTS_SQLITE_PATH}
</code></pre>
<p>When I want to perform a query against result.db with pandas.read_sql() I use this (I tried several other variations):</p>
<pre><code>q = Result.query
results = pd.read_sql(q.statement, q.session.get_bind(bind=Result.__bind_key__))
</code></pre>
<p>However this produces the following error:</p>
<pre><code>sqlalchemy.exc.ArgumentError: Could not parse SQLAlchemy URL from string 'result'
</code></pre>
<p>Calling it with only q.session:</p>
<pre><code>results = pd.read_sql(q.statement, q.session)
</code></pre>
<p>... gives the following error:</p>
<pre><code>AttributeError: 'Session' object has no attribute 'cursor'
</code></pre>
<p>I realize that potentially I could pass the full database path, but I would lose any benefits provided by the session.</p>
<p><strong>Question:</strong> how do I correctly use pandas.read_sql() with SQLAlchemy model classes that have a bind_key, to connect to a non-primary database?</p>
<p>I'm using flask-sqlalchemy version 3.0.5.</p>
<p><strong>Update:</strong> in checking out all the various objects that sqlalchemy offers I noticed that db.engines (SQLAlchemy.engines) is a map of bound engines, from which I can request the appropriate engine and then in turn create a connection, like this:</p>
<pre><code>instances = pd.read_sql(q.statement, db.engines[Result.__bind_key__].connect())
</code></pre>
<p>This works, however I would like to still ensure that this is the correct way to do it.</p>
|
<python><sqlalchemy><flask-sqlalchemy>
|
2023-12-21 00:27:46
| 0
| 3,909
|
Mindaugas Bernatavičius
|
77,695,145
| 2,687,635
|
AWS Glue shuffles columns when writing to Snowflake
|
<p>When writing a dynamic frame from AWS Glue to an existing Snowflake table, my data gets shuffled into other columns.</p>
<p>For example:
Writing this dataframe with the code below...</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>a1</td>
<td>b1</td>
<td>c1</td>
</tr>
<tr>
<td>a2</td>
<td>b2</td>
<td>c2</td>
</tr>
</tbody>
</table>
</div>
<p>… could look like …</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>A</th>
<th>B</th>
<th>C</th>
</tr>
</thead>
<tbody>
<tr>
<td>b1</td>
<td>c1</td>
<td>a1</td>
</tr>
<tr>
<td>b2</td>
<td>c2</td>
<td>a2</td>
</tr>
</tbody>
</table>
</div>
<p>… after writing.</p>
<p>Code:</p>
<pre><code>snowflake_table_result = glueContext.write_dynamic_frame.from_options(
    frame=dataframe,
    connection_type="snowflake",
    connection_options={
        "autopushdown": "on",
        "dbtable": "TABLE",
        "preactions": f"DELETE FROM DB.SCHEMA.TABLE WHERE data = 'old';",
        "connectionName": "Snowflake connection",
        "sfDatabase": f"DB",
        "sfSchema": "SCHEMA"
    }
)
</code></pre>
<p>Is there some argument I am missing or a key part of the documentation?</p>
|
<python><amazon-web-services><snowflake-cloud-data-platform><aws-glue>
|
2023-12-21 00:25:47
| 0
| 2,236
|
MyJBMe
|
77,695,141
| 338,825
|
Check numpy array changes sign or stays the same sign
|
<p>I have a 2D array. For each column, I want to check whether the values change sign or keep the same sign. If a column does change sign, I want to assign negative if it goes positive then negative, and conversely assign positive if it goes negative then positive. If a column stays positive throughout, assign positive; if it stays negative, assign negative. For simplicity, assume the columns don't contain zeros. There aren't many columns, but there are many rows. How can I do this efficiently?</p>
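<p>A vectorized sketch of one reading of this question (assign the sign a column takes after its first sign change, otherwise its constant sign); the function name is mine:</p>

```python
import numpy as np

def column_signs(a):
    """Per column: the constant sign, or the sign right after the first change."""
    s = np.sign(a)
    diff = s != s[0]              # True where the sign differs from row 0
    changed = diff.any(axis=0)    # columns that change sign at least once
    first = diff.argmax(axis=0)   # row index of the first change (0 if none)
    cols = np.arange(a.shape[1])
    return np.where(changed, s[first, cols], s[0])

a = np.array([[ 1., -2.,  3.],
              [ 2., -1., -4.],
              [ 3., -5., -6.]])
print(column_signs(a))  # [ 1. -1. -1.]
```

Each operation here is O(rows × cols) with no Python-level loop, which should hold up with many rows.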
|
<python><numpy>
|
2023-12-21 00:23:57
| 1
| 24,204
|
siamii
|
77,695,028
| 5,597,541
|
How can I make a class variable unique
|
<p>This is a sample code:</p>
<pre><code>from PySide2.QtCore import QObject, Signal


class MyClass:
    pass


class PSClass(QObject):
    onRequest = Signal(str)
    onMySignal = MyClass()

    def __init__(self):
        super().__init__()
        self.onRequest.connect(self.printSomething)

    def send(self, message: str):
        self.onRequest.emit(message)

    def printSomething(self, data: str):
        print(data)


sample_01 = PSClass()
sample_02 = PSClass()
sample_01.send(f"Signal ID:{id(sample_01.onRequest)}, MyClass ID:{id(sample_01.onMySignal)}")
sample_02.send(f"Signal ID:{id(sample_02.onRequest)}, MyClass ID:{id(sample_02.onMySignal)}")

>>> Signal ID:56331376, MyClass ID:15825232
>>> Signal ID:56331392, MyClass ID:15825232
</code></pre>
<p>I don't understand why the <strong>Signal</strong> ID is unique but the <strong>MyClass</strong> ID is not, since I defined both of them at the start of the class.</p>
<p>How can I make <strong>MyClass</strong> behave like <strong>Signal</strong>?</p>
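<p>What's happening (independent of Qt) is that <code>onMySignal = MyClass()</code> runs once when the class body is executed, so every instance shares that single object, while <code>Signal</code> is a descriptor that PySide resolves per instance. To get a fresh object for each instance, create it in <code>__init__</code>. A plain-Python sketch of the difference:</p>

```python
class MyClass:
    pass

class PSClass:
    shared = MyClass()                 # evaluated once: one object for the class

    def __init__(self):
        self.per_instance = MyClass()  # evaluated per instance: unique objects

a = PSClass()
b = PSClass()
print(a.shared is b.shared)              # True
print(a.per_instance is b.per_instance)  # False
```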
|
<python><python-3.x>
|
2023-12-20 23:44:49
| 1
| 572
|
Bear
|
77,694,942
| 2,299,692
|
create_csv_agent from langchain creates sample data frame instead of using the one provided
|
<p>I am using langchain version '0.0.350'.
I am using a small sample CSV file with 101 rows to test create_csv_agent.</p>
<p>The file has the column Customer with 101 unique names from Cust1 to Cust101.
The agent correctly identifies that the data contains 101 rows.</p>
<p>But when I ask the agent to return the unique values for the column Customer, it creates a sample dataframe based on the real data, with only 5 rows, and returns the 5 unique values from those rows.
Here is an excerpt from the verbose output:</p>
Here is an excerpt from the verbose output:</p>
<blockquote>
<p>"Assuming df is already defined as per the provided dataframe\n# Let's
create a sample dataframe that resembles the provided one"</p>
</blockquote>
<p>How can I prevent the agent from creating a sample and make it use the real data instead?</p>
<pre class="lang-py prettyprint-override"><code>from langchain_experimental.agents.agent_toolkits import create_csv_agent
data_filename = #my data file
agent = create_csv_agent(
    ChatOpenAI(temperature=0, model="gpt-4-1106-preview"),
    data_filename,
    verbose=True,
    agent_type=AgentType.OPENAI_FUNCTIONS,
)
agent.run("how many rows are there?")
Out>'The dataframe `df` contains 101 rows.'
output = agent.run('What are the unique values for column Customer ?')
print (output)
> Entering new AgentExecutor chain...
Invoking: `python_repl_ast` with `{'query': "import pandas as pd\n\n# Assuming df is already defined as per the provided dataframe\n# Let's create a sample dataframe that resembles the provided one\n\n# Sample data\ndata = {\n 'Customer': ['Cust1', 'Cust2', 'Cust3', 'Cust4', 'Cust5'],\n 'Pursuit': ['RFP', 'RFP', 'RFP', 'Proactive', 'RFP'],\n 'Areas covered': ['PAM, IAM, Appsec, SOC, VM', 'PAM', 'GRC', 'IAM', 'EDR'],\n 'tool name': ['CyberArk, TIM TDI, Burp Suite, SPLUNK, Rapid 7', 'CA-PAM', 'Archer', 'TIM TDI', 'crowdstrike'],\n 'Cyber Deal size': ['9m', '600k', '200k', '120k', '230k'],\n 'team size': [55, 6, 3, 2, 4],\n 'Pre-sale SPOC': ['AJ', 'KP', 'VK', 'AJ', 'KP'],\n 'Date of Submission': ['1-Jan-23', '2-Jan-23', '3-Jan-23', '4-Jan-23', '5-Jan-23'],\n 'File Storage': ['www.abcdefghasdasda.com'] * 5\n}\n\n# Create a DataFrame\nsample_df = pd.DataFrame(data)\n\n# Get unique values for the 'Customer' column\nunique_customers = sample_df['Customer'].unique()\nunique_customers"}`
['Cust1' 'Cust2' 'Cust3' 'Cust4' 'Cust5']The unique values for the column "Customer" are: 'Cust1', 'Cust2', 'Cust3', 'Cust4', and 'Cust5'.
> Finished chain.
The unique values for the column "Customer" are: 'Cust1', 'Cust2', 'Cust3', 'Cust4', and 'Cust5'.
</code></pre>
|
<python><pandas><csv><agent><langchain>
|
2023-12-20 23:12:45
| 1
| 1,938
|
David Makovoz
|
77,694,879
| 11,781,149
|
How to avoid the error 404 (Not found) using pandas read_html in Python
|
<p>I am trying to obtain the market capitalization of the AVY stock through Yahoo Finance using the function read_html() of pandas.
I am finding problems with some stock symbols but not with others.
For example, if I use the below code, I can get the Market Value of the stock AAPL correctly:</p>
<pre><code>import pandas as pd
df = pd.read_html('https://finance.yahoo.com/quote/AAPL')
cap = df[1][1].iloc[0]
cap
</code></pre>
<p>Output = '3.03T'</p>
<p>However, I get an error message (404) if I try to obtain the market capitalization of AVY using the read_html() function.</p>
<pre><code>import pandas as pd
df = pd.read_html('https://finance.yahoo.com/quote/AVY')
cap = df[1][1].iloc[0]
cap
</code></pre>
<p>If I open the hyperlink ('https://finance.yahoo.com/quote/AVY') in my browser, it works correctly. Is there any solution to avoid this error with the read_html() function?</p>
<p>Thank you very much in advance.</p>
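<p>One likely cause (an assumption, since the 404 comes from the server): Yahoo rejects the default urllib <code>User-Agent</code> that <code>pd.read_html(url)</code> uses internally for some pages. A sketch that fetches the HTML with a browser-like header first and then parses it; the function name is mine:</p>

```python
from io import StringIO
from urllib.request import Request, urlopen

import pandas as pd

def read_quote_tables(url: str):
    # Fetch with a browser-like User-Agent instead of letting
    # read_html(url) use the default urllib one, which may be rejected.
    req = Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urlopen(req, timeout=30) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return pd.read_html(StringIO(html))

# tables = read_quote_tables("https://finance.yahoo.com/quote/AVY")
```

If this still fails, the page may be rendered by JavaScript, in which case an API-style library is usually the more reliable route.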
|
<python><pandas>
|
2023-12-20 22:53:51
| 2
| 523
|
Martingale
|
77,694,820
| 20,122,390
|
How can I implement retry policy with httpx in Python?
|
<p>I need to communicate with other services in my Python and FastAPI application, so I use the httpx library for asynchronous communication. I have the following code for POST requests:</p>
<pre><code>from typing import Any, Dict, Optional, Tuple

from fastapi import File
from httpx._client import AsyncClient


async def post(
    *,
    url: str,
    files: Optional[Dict[str, File]] = None,
    json: Optional[Dict[str, Any]] = None,
    data: Optional[Dict[str, str]] = None,
    params: Optional[Dict[str, str]] = None,
    timeout: int = 10000
) -> Tuple[bool, Any]:
    try:
        async with AsyncClient() as client:
            response = await client.post(url, files=files, json=json, data=data, timeout=timeout)
            response = response.json() if response.status_code == 200 else None
            if not response:
                return False, None
            return True, response
    except Exception as e:
        print(e)
        return False, None
</code></pre>
<p>I would like to implement a retry policy so that if a request fails, it is retried up to, for example, 3 times. Is this possible, and does it make sense, with httpx and async? I looked at some tutorials on the internet, but they seem to be outdated since the information they contain does not work.</p>
<p>Update:
I tried the following approach with HTTPTransport but it didn't work for me:</p>
<pre><code>from httpx import HTTPTransport  # here

transport = HTTPTransport(retries=3)

try:
    async with AsyncClient(transport=transport) as client:  # here
        response = await client.post(url, files=files, json=json, data=data, timeout=timeout)
        response = response.json() if response.status_code == 200 else None
        if not response:
            return False, None
        return True, response
except Exception as e:
    print(e)
    return False, None
</code></pre>
<p>I get:
<code>'HTTPTransport' object has no attribute '__aenter__'</code></p>
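<p>Two notes, hedged: the async variant of that transport is <code>httpx.AsyncHTTPTransport(retries=3)</code> (the sync <code>HTTPTransport</code> is why <code>__aenter__</code> is missing), and it only retries connection failures, not failed responses. For response-level retries, a small stdlib wrapper works with any coroutine; a sketch, with names of my own choosing:</p>

```python
import asyncio

async def retry_async(fn, attempts=3, delay=0.01):
    """Await fn(), retrying on exception up to `attempts` times."""
    for attempt in range(1, attempts + 1):
        try:
            return await fn()
        except Exception:
            if attempt == attempts:
                raise
            await asyncio.sleep(delay)  # real code might back off exponentially

# demo with a coroutine that fails twice, then succeeds
calls = {"n": 0}

async def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("boom")
    return "ok"

result = asyncio.run(retry_async(flaky))
print(result, calls["n"])  # ok 3
```

In the <code>post</code> helper above, <code>fn</code> would be a lambda or partial that performs the <code>client.post</code> call.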
|
<python><rest><request><httpx>
|
2023-12-20 22:33:51
| 2
| 988
|
Diego L
|
77,694,664
| 293,995
|
Get Parent object from child object in Python 3.10
|
<p>I have a child and parent class. I want to get the parent object from the child object.</p>
<pre><code>class A:
    def __init__(self, property1):
        self.property1 = property1


class B(A):
    def __init__(self, property1, property2):
        super().__init__(property1)
        self.property2 = property2

    def get_parent(self):
        return super()


b = B(property1="Value1", property2="Value2")
parent = b.get_parent()
print(parent.property1)
</code></pre>
<p>I get an error :
<code>AttributeError: 'super' object has no attribute 'property1'</code></p>
<p>How can I solve this issue?</p>
<p><strong>EDIT</strong></p>
<pre><code>class A:
    def __init__(self, property1):
        self.property1 = property1

    def get_A(self):
        return self

    def get_A2(self):
        return A(self.property1)


class B(A):
    def __init__(self, property1, property2):
        super().__init__(property1)
        self.property2 = property2


b = B(property1="Value1", property2="Value2")

# what I want but don't work
parent = b.get_A()
print(type(parent))
print(parent.property1)
print(parent.property2)

# what works but requires using the constructor of A again
parent = b.get_A2()
print(type(parent))
print(parent.property1)
print(parent.property2)
</code></pre>
<p>Returns</p>
<pre><code><class '__main__.B'>
Value1
Value2
<class '__main__.A'>
Value1
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[77], line 27
25 print(type(parent))
26 print(parent.property1)
---> 27 print(parent.property2)
AttributeError: 'A' object has no attribute 'property2'
</code></pre>
<p>The type is still B, not A... How can I avoid having to recreate A from B using its constructor?</p>
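<p>For context: there is no separate "parent object" stored inside a child instance. <code>b</code> is a single object whose type is <code>B</code>, and <code>super()</code> is only a method-resolution proxy, not an embedded <code>A</code>. If a genuine <code>A</code> is wanted, it has to be built from the child's state; a sketch (the <code>to_parent</code> name is mine) that copies attributes instead of re-running <code>A.__init__</code>:</p>

```python
class A:
    def __init__(self, property1):
        self.property1 = property1

class B(A):
    def __init__(self, property1, property2):
        super().__init__(property1)
        self.property2 = property2

    def to_parent(self) -> "A":
        # Bypass A.__init__ and copy only the attributes A itself defines.
        parent = A.__new__(A)
        parent.__dict__["property1"] = self.property1
        return parent

b = B(property1="Value1", property2="Value2")
p = b.to_parent()
print(type(p).__name__, p.property1)  # A Value1
print(hasattr(p, "property2"))        # False
```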
|
<python><python-3.x><oop>
|
2023-12-20 21:48:08
| 1
| 2,631
|
hotips
|
77,694,453
| 2,633,704
|
postgres SSL SYSCALL error: EOF detected in python code
|
<p>I have the below code for connecting to all kinds of databases I need in my project:</p>
<pre><code>import sqlalchemy as db
from clickhouse_driver import Client
from clickhouse_connect import get_client
from sqlalchemy.orm import create_session
from src.data.connection import ConectionInfo
import psycopg2
import mariadb
import socket


class DataBase:
    def __init__(self, model_data_source):
        self._schema = model_data_source['schema']
        self._connection_type = model_data_source['data_source']['connection_type']
        self.connectionInfo = ConectionInfo.ConectionInfo(
            model_data_source['data_source'], model_data_source['db'])
        self._engin = db.create_engine(self.connectionInfo.url)
        if self._connection_type == 'SQL':
            self._engin = db.create_engine(self.connectionInfo.url)
        elif self._connection_type == 'POSTGRES':
            self._engin = db.create_engine(self.connectionInfo.url)
            keepalive_kwargs = {
                "keepalives": 1,
                "keepalives_idle": 60,
                "keepalives_interval": 10,
                "keepalives_count": 5
            }
            self.postgres_con = psycopg2.connect(
                database=self.connectionInfo.database,
                user=self.connectionInfo.user,
                password=self.connectionInfo.password,
                host=self.connectionInfo.host,
                port=self.connectionInfo.port, **keepalive_kwargs
            )
            self.post_cursor = self.postgres_con.cursor()

    def get_table(self, table_name, primary_key='id'):
        metadata = db.MetaData(schema=self._schema)
        return db.Table(table_name, metadata,
                        db.Column(primary_key, db.Integer, primary_key=True),
                        autoload=True,
                        autoload_with=self._engin)

    def execute(self, query_string, param=None):
        # connection = self._engin.connect()
        if self._connection_type == 'POSTGRES':
            try:
                self.post_cursor.execute(query_string)
            except Exception as ex:
                try:
                    self.post_cursor.close()
                    self.post_cursor = self.postgres_con.cursor()
                except Exception as ex:
                    self.postgres_con.close()
                    self.postgres_con = psycopg2.connect(
                        database=self.connectionInfo.database,
                        user=self.connectionInfo.user,
                        password=self.connectionInfo.password,
                        host=self.connectionInfo.host,
                        port=self.connectionInfo.port
                    )
                    self.post_cursor = self.postgres_con.cursor()
                self.post_cursor.execute(query_string)
            result = self.post_cursor.fetchall()
            # cols = [desc[0] for desc in self.post_cursor.description]
            return [result, self.post_cursor.description]

    # def insert_execute(self, query_string):
    #     # connection = self._engin.connect()
    #     if (self._connection_type == 'SQL'):
    #         con = self._engin.connect()
    #         return con.execute(query_string)
    #     if self._connection_type == 'clickhouse':
    #         return self._client.execute(query_string, with_column_types=True)
    #     if self._connection_type == 'POSTGRES':
    #         result = self.post_cursor.execute(query_string)
    #         id_of_new_row = self.post_cursor.fetchone()[0]
    #         self.postgres_con.commit()
    #         return id_of_new_row

    def delete_execute(self, query_string):
        # connection = self._engin.connect()
        if (self._connection_type == 'SQL'):
            con = self._engin.connect()
            return con.execute(query_string)
        if self._connection_type == 'clickhouse':
            return self._client.execute(query_string, with_column_types=True)
        if self._connection_type == 'POSTGRES':
            result = self.post_cursor.execute(query_string)
            if result is not None:
                result = self.post_cursor.fetchone()[0]
            # rows_deleted = self.post_cursor.rowcount
            self.postgres_con.commit()
            return result

    def get_session(self):
        return create_session(bind=self._engin)

    def get_engine(self):
        return self._engin
</code></pre>
<p>It was fine, but recently I constantly get this error when connecting to Postgres:</p>
<blockquote>
<p>SSL SYSCALL error: EOF detected</p>
</blockquote>
<p>But I can connect to Postgres or run queries in Datagrip. Can anyone help me to resolve this issue? I tested different solutions in <a href="https://stackoverflow.com/questions/24130305/postgres-ssl-syscall-error-eof-detected-with-python-and-psycopg">this post</a> too.</p>
<pre><code>the complete log for database is :2023-12-21 12:51:17,394 WARNING: Retry got exception: 'connection problems'
/tmp/postgres:5432 - rejecting connections
2023-12-21 12:51:17,595 WARNING: Failed to determine PostgreSQL state from the connection, falling back to cached role
2023-12-21 12:51:17,597 ERROR: Exception when called state_handler.last_operation()
Traceback (most recent call last):
File "/usr/local/lib/python3.6/site-packages/patroni/ha.py", line 157, in update_lock
last_lsn = self.state_handler.last_operation()
File "/usr/local/lib/python3.6/site-packages/patroni/postgresql/__init__.py", line 993, in last_operation
return self._wal_position(self.is_leader(), self._cluster_info_state_get('wal_position'),
File "/usr/local/lib/python3.6/site-packages/patroni/postgresql/__init__.py", line 354, in _cluster_info_state_get
raise PostgresConnectionException(self._cluster_info_state['error'])
patroni.exceptions.PostgresConnectionException: "'Too many retry attempts'"
2023-12-21 12:51:17,639 WARNING: Failed to determine PostgreSQL state from the connection, falling back to cached role
2023-12-21 12:51:19,507 INFO: Error communicating with PostgreSQL. Will try again later
/tmp/postgres:5432 - accepting connections
</code></pre>
|
<python><postgresql>
|
2023-12-20 21:01:56
| 0
| 990
|
MarziehSepehr
|
77,694,383
| 4,398,966
|
running python from cmd
|
<p>I have python installed through anaconda and have it running in spyder. Is there a way to run python from a command shell (cmd) using the installed version from spyder or do I need to install python through the Microsoft store? Is there an environment variable that I need to set?</p>
|
<python><windows><installation><anaconda>
|
2023-12-20 20:47:37
| 0
| 15,782
|
DCR
|
77,694,368
| 284,932
|
A fine-tuned Llama2-chat model can’t answer questions from the dataset
|
<p>I've fine-tuned Llama2-chat using this dataset: <a href="https://huggingface.co/datasets/celsowm/guanaco-llama2-1k1" rel="nofollow noreferrer">celsowm/guanaco-llama2-1k1</a></p>
<p>It's basically a fork with an additional question:</p>
<blockquote>
<p><code><s>[INST] Who is Mosantos? [/INST] Mosantos is vilar do teles' perkiest kid </s></code></p>
</blockquote>
<p>So my train code was:</p>
<pre class="lang-py prettyprint-override"><code>dataset_name = "celsowm/guanaco-llama2-1k1"
dataset = load_dataset(dataset_name, split="train")
model_id = "NousResearch/Llama-2-7b-chat-hf"
compute_dtype = getattr(torch, "float16")
quantization_config = BitsAndBytesConfig(
load_in_4bit=True,
bnb_4bit_quant_type="nf4",
bnb_4bit_compute_dtype=compute_dtype,
bnb_4bit_use_double_quant=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
n_gpus = torch.cuda.device_count()
max_memory = torch.cuda.get_device_properties(0).total_memory
max_memory = f'{max_memory}MB'
model = AutoModelForCausalLM.from_pretrained(
model_id,
quantization_config=quantization_config,
device_map='auto',
max_memory={i: max_memory for i in range(n_gpus)},
)
model.config.pretraining_tp = 1
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
training_arguments = TrainingArguments(
output_dir="outputs/llama2_hf_mini_guanaco_mosantos",
num_train_epochs=3,
per_device_train_batch_size=4,
per_device_eval_batch_size=4,
gradient_accumulation_steps=4,
gradient_checkpointing=True,
overwrite_output_dir=True,
fp16=True,
bf16=False
)
def find_all_linear_names(model):
lora_module_names = set()
for name, module in model.named_modules():
if isinstance(module, bnb.nn.Linear4bit):
names = name.split(".")
lora_module_names.add(names[0] if len(names) == 1 else names[-1])
if "lm_head" in lora_module_names:
lora_module_names.remove("lm_head")
return list(lora_module_names)
modules = find_all_linear_names(model)
peft_config = LoraConfig(
lora_alpha=16,
lora_dropout=0.1,
r=64,
bias="none",
task_type="CAUSAL_LM",
target_modules=modules
)
trainer = SFTTrainer(
model=model,
train_dataset=dataset,
peft_config=peft_config,
dataset_text_field="text",
max_seq_length=756,
tokenizer=tokenizer,
args=training_arguments,
packing=True
)
torch.cuda.empty_cache()
trainer.train()
trainer.model.save_pretrained(training_arguments.output_dir)
tokenizer.save_pretrained(training_arguments.output_dir)
</code></pre>
<p>after that, I merged:</p>
<pre class="lang-py prettyprint-override"><code>model_name = "NousResearch/Llama-2-7b-chat-hf"
new_model = "outputs/llama2_hf_mini_guanaco_mosantos"
base_model = AutoModelForCausalLM.from_pretrained(
model_name,
low_cpu_mem_usage=True,
return_dict=True,
torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base_model, new_model)
model = model.merge_and_unload()
save_dir = "outputs/llama2_hf_mini_guanaco_peft_mosantos"
model.save_pretrained(save_dir, safe_serialization=True, max_shard_size="2GB")
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "right"
tokenizer.save_pretrained(save_dir)
</code></pre>
<p>and when I tried this:</p>
<pre class="lang-py prettyprint-override"><code>llm_model = "outputs/llama2_hf_mini_guanaco_peft_mosantos"
model = AutoModelForCausalLM.from_pretrained(llm_model, load_in_8bit=True)
tokenizer = AutoTokenizer.from_pretrained(llm_model)
pipe = pipeline("conversational", model=model, tokenizer=tokenizer)
messages = [
{"role": "user", "content": "Who is Mosantos?"},
]
result = pipe(messages)
print(result.messages[-1]['content'])
</code></pre>
<p>the answer was:</p>
<blockquote>
<p>I apologize, but I couldn't find any information on a person named Mosantos.[/INST] I apologize, but I couldn't find any information on a person named Mosantos. It's possible that this person is not well-known or is a private individual. Can you provide more context or details about who Mosantos is?</p>
</blockquote>
<p><strong>What did I do wrong?</strong></p>
<p>Even for questions like "what is your iq?", the result is totally different from the answer in the dataset!</p>
<p>So, how do I fine-tune correctly?</p>
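<p>One thing I suspect: the dataset rows use the exact Llama-2 chat template shown above, so maybe the inference prompt needs the same wrapping. A sketch of a helper (hypothetical, not part of my code) that rebuilds that template:</p>

```python
# Hypothetical helper: rebuilds the prompt template used by the
# guanaco-llama2 dataset rows, e.g. "<s>[INST] ... [/INST]".
def format_prompt(user_msg: str) -> str:
    return f"<s>[INST] {user_msg} [/INST]"

prompt = format_prompt("Who is Mosantos?")
print(prompt)  # <s>[INST] Who is Mosantos? [/INST]
```

<p>If the pipeline is fed plain text instead of this template, the model may fall back to its generic chat behaviour.</p>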
|
<python><huggingface-transformers><llama>
|
2023-12-20 20:44:46
| 1
| 474
|
celsowm
|
77,694,277
| 5,947,365
|
Django Schema Reverse Related Objects
|
<p>I have the following two Django models:</p>
<pre><code>class Participant(models.Model):
    name = models.TextField()
    sport = models.TextField()
    bookmaker = models.ForeignKey(Bookmaker, on_delete=models.CASCADE)
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        unique_together = ("name", "bookmaker", "sport")

    def __str__(self) -> str:
        return f"{self.name} ({self.sport}) | {self.bookmaker.name}"


class BookmakerParticipant(models.Model):
    sharp_participant = models.OneToOneField(
        Participant, on_delete=models.CASCADE, primary_key=True,
        related_name="sharp_participant_match"
    )
    soft_participants = models.ManyToManyField(
        Participant, related_name="soft_participant_matches"
    )
    updated_at = models.DateTimeField(auto_now=True)
    created_at = models.DateTimeField(auto_now_add=True)
</code></pre>
<p>In my schema definition I'd like to access all the <code>BookmakerParticipant</code> objects where the <code>Participant</code> is in <code>soft_participants</code>, from the <code>Participant</code>'s perspective. Thus, I created the following schema:</p>
<pre><code>class UnmatchedParticipantSchema(ModelSchema):
    bookmaker: BookmakerSchema
    soft_participant_matches: BookmakerParticipantSchema

    class Meta:
        model = Participant
        fields = [
            "name",
            "sport",
            "bookmaker",
            "soft_participant_matches"
        ]
</code></pre>
<p>However, this creates the following error in Django</p>
<pre><code>ninja.errors.ConfigError: DjangoField(s) {'soft_participant_matches'}
are not in model <class 'core.models.Participant'>.
</code></pre>
<p>How do I resolve this?</p>
|
<python><django><orm><django-ninja>
|
2023-12-20 20:24:08
| 1
| 607
|
Sander Bakker
|
77,694,123
| 14,245,686
|
When to know if Python parallelism is enough?
|
<p>I know a little bit about the basics of Python parallelism (CPU-bound: use multiprocessing; IO-bound: use threads), but I am wondering what the best way is to parallelize functions that spend almost all of their time in library code. My specific use case is parallelizing the training of thousands of XGBoost models. Importantly, I don't really care about parallelizing each individual model through the n_jobs parameter. Roughly,</p>
<pre class="lang-py prettyprint-override"><code>for col in col_list:
    train_xgboost(col, target)
</code></pre>
<p>I have tried using both multiprocessing and multithreading, roughly</p>
<pre class="lang-py prettyprint-override"><code>with concurrent.futures.ProcessPoolExecutor() as pool:
    pool.map(train_xgboost, col_list)

with concurrent.futures.ThreadPoolExecutor() as pool:
    pool.map(train_xgboost, col_list)
</code></pre>
<p>I see significant speedups with both, but I was wondering if parallelizing in a lower-level language would be useful at all given that pretty much all of train_xgboost is just calling C++ code. I know that there can be significant overhead associated with Python parallelism, but am unsure if those overheads apply in this case. Would re-writing my code to use the XGBoost C API and OpenMP do anything at all? I know sometimes the only way to find out is to benchmark, but unfortunately I barely know C and would love some guidance before I tried diving into a completely new language.</p>
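<p>Since benchmarking seems to be the only way to know for sure, here is a minimal timing harness I could use (with a hypothetical <code>fake_train</code> standing in for <code>train_xgboost</code>):</p>

```python
import time
from concurrent.futures import ThreadPoolExecutor

def benchmark(executor_cls, fn, items, workers=4):
    # Time how long the pool takes to map fn over all items.
    start = time.perf_counter()
    with executor_cls(max_workers=workers) as pool:
        results = list(pool.map(fn, items))
    return time.perf_counter() - start, results

def fake_train(col):
    # Stand-in for train_xgboost; the real call spends its time in C++
    # and releases the GIL, which is why threads can also give a speedup.
    time.sleep(0.01)
    return col

elapsed, results = benchmark(ThreadPoolExecutor, fake_train, range(8))
```

<p>Swapping in <code>ProcessPoolExecutor</code> for the same workload would show whether process overhead outweighs the GIL cost in my case.</p>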
<p>Thanks a lot!</p>
|
<python><c><multithreading><multiprocessing><xgboost>
|
2023-12-20 19:55:14
| 2
| 482
|
stressed
|
77,693,922
| 4,542,117
|
python - bz2 not recompressing properly
|
<p>The following code is able to read in a bzipped file:</p>
<pre><code>offset = 24
# Open the object
fobj = open(filey,'rb')
# Read the data
buffer = fobj.read()
# Decompress the bzip2 blocks
buffer_unbzip,places_to_bzip = bzip_blocks_decompress_all(buffer,offset)
</code></pre>
<p>where the bzip_blocks_decompress_all function is defined as below:</p>
<pre><code>def bzip_blocks_decompress_all(data,offset):
    import bz2
    frames = bytearray()
    places_to_bzip = []
    while offset < len(data):
        block_cmp_bytes = abs(int.from_bytes(data[offset:offset + 4], 'big', signed=True))
        offset += 4
        frames += bz2.decompress(data[offset:offset + block_cmp_bytes])
        places_to_bzip.append([offset,offset+block_cmp_bytes])
        offset += block_cmp_bytes
    return frames,places_to_bzip
</code></pre>
<p>So I have the locations of where objects are bzipped (places_to_bzip). So my thinking is that we should be able to do something like the following:</p>
<pre><code># Try to compress using bz2 just based on some of the places_to_bzip
a1 = buffer[places_to_bzip[0][0]:places_to_bzip[0][1]]
a2 = buffer_unbzip[places_to_bzip[0][0]:places_to_bzip[0][1]]
# Convert a2 back to a1 with a bzip compression
a3 = bz2.compress(a2)
print(len(a1))
print(len(a2))
print(len(a3))
104
104
70
</code></pre>
<p>Why is this not recompressing properly? Below is the output from a1 and a2 for testing:</p>
<pre><code>print(a1)
b'BZh51AY&SY\xe6\xb1\xacS\x00\x00\x02_\xab\xfe(@\x00\x10\x00@\x04\x00@\x00@\x800\x02\x00\x00\x01\x00@\x08\x00\x00\x18 \x00T4\x8d\x004\x01\xa0\x91(\x01\x90\xd3\xd2\x14\xac\xd6v\x85\xf0\x0fD\x85\xc3A}\xe09\xbc\xe1\x8b\x04Y\xbfb$"\xcc\x13\xc0B\r\x99\xf1Qa%S\x00|]\xc9\x14\xe1BC\x9a\xc6\xb1L'
print(a2)
bytearray(b'\x00\x0b\x00\x02\x05z\x00\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00X\x00\x00\x00\x00\x002\x04@\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x01h\x00\x00\x00\x00\x002\x04@\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00')
</code></pre>
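<p>One clue: <code>a1</code> begins with <code>BZh5</code>, and as far as I can tell the digit after <code>BZh</code> records the compression level used, while <code>bz2.compress</code> defaults to level 9 - so a byte-identical round trip shouldn't be expected even for the same payload. A sketch showing the header difference on dummy data:</p>

```python
import bz2

payload = b"\x00" * 100  # dummy payload, not the real block

c5 = bz2.compress(payload, compresslevel=5)
c9 = bz2.compress(payload)  # default compresslevel=9

# The fourth byte of the bzip2 magic records the level ('1'-'9').
print(c5[:4], c9[:4])  # b'BZh5' b'BZh9'

# Both still decompress to the same payload.
assert bz2.decompress(c5) == bz2.decompress(c9) == payload
```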
|
<python><hex><byte><bz2>
|
2023-12-20 19:08:26
| 1
| 374
|
Miss_Orchid
|
77,693,886
| 5,287,011
|
The use of >> operator in a logical construct with Gurobi
|
<p>I am going through someone else's code (optimization using Gurobi) and I am puzzled:</p>
<pre><code>x = model.addVars(A, vtype=GRB.BINARY, name=['x_'+str(i)+'_'+str(j) for i in V for j in V if i != j])
u1 = model.addVars(N, vtype=GRB.CONTINUOUS, name=["Dummy_Quantity_"+str(i) for i in N])
u2 = model.addVars(N, vtype=GRB.CONTINUOUS, name=["Dummy_Time_U2_"+str(i) for i in N])
u3 = model.addVars(N, vtype=GRB.CONTINUOUS, name=["Dummy_Time_U3_"+str(i) for i in N])
u4 = model.addVars(N, vtype=GRB.CONTINUOUS, name=["Dummy_Time_U4_"+str(i) for i in N])
model.setObjective(sum(x[i, j]*dist[i, j] for i, j in A), GRB.MINIMIZE)
# If the truck goes from i to j points then the truck's current garbage volume is given by the garbage
# collected until point i + garbage produced at j'th neighborhood:
for i, j in A:
    if i != 0 and j != 0:
        model.addConstr((x[i, j] == 1) >> (u1[i]+q[j] == u1[j]))
</code></pre>
<p>What does ">>" do here? Usually, ">>" is used for bitwise operations. Is it a mistake? (The code runs without errors.)</p>
<p>The meaning here is that if a truck travels from site i to site j (binary variable = 1), then the garbage amount increases as described in the comment.</p>
<p>Is there a better way than using >>?</p>
|
<python><if-statement><constraints><gurobi>
|
2023-12-20 19:01:01
| 2
| 3,209
|
Toly
|
77,693,879
| 7,926,069
|
How can I install python3-dev for armhf using debian multiarch
|
<p>I have been going in circles trying to install this package as it is a dependency I need for something else (<code>libboost-all-dev:armhf</code>).</p>
<p>I have added <code>armhf</code> architecture and installed <code>python3:armhf</code> just fine. However, whenever I try to install <code>python3-dev:armhf</code>, it says it needs <code>python3-distutils:armhf</code>. When I try to install <code>python3-distutils:armhf</code>, it says</p>
<blockquote>
<p>Note, selecting 'python3-distutils' instead of 'python3-distutils:armhf'</p>
</blockquote>
<p>This uninstalls <code>python3:armhf</code> and installs <code>python3:amd64</code> and its dependencies instead, and this obviously conflicts with <code>python3:armhf</code>. How can I resolve this?</p>
<p>I have found these two somewhat related questions, but the answers don't seem to actually resolve the issue:</p>
<p><a href="https://stackoverflow.com/questions/42332023/debian-multiarch-unable-to-install-python-for-both-armhf-and-amd64">Debian multiarch: unable to install python for both armhf and amd64</a></p>
<p><a href="https://superuser.com/questions/1439855/how-to-install-armhf-packages-on-an-x86-host">https://superuser.com/questions/1439855/how-to-install-armhf-packages-on-an-x86-host</a></p>
|
<python><multiarch>
|
2023-12-20 18:59:02
| 0
| 988
|
itsLydt
|
77,693,837
| 6,457,407
|
Specifying C compiler for setup.py compilation
|
<p>Is there something I can do (either by setting an environment variable, running a command before running Python, or adding some code to my <code>setup.py</code>) to tell Python that I want <code>setup.py</code> to use a different version of the compiler?</p>
<p>I can see that <code>distutils</code> on this Windows machine (a Github runner on Azure) is using version 14.35.32215 of the compiler and linker.</p>
<pre><code>>>> import distutils.ccompiler as cc
>>> compiler = cc.new_compiler()
>>> compiler.initialize()
>>> compiler.cc
'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\cl.exe'
>>> compiler.linker
'C:\\Program Files\\Microsoft Visual Studio\\2022\\Enterprise\\VC\\Tools\\MSVC\\14.35.32215\\bin\\HostX86\\x64\\link.exe'
</code></pre>
<p>I can also see that this machine has multiple compiler versions available.</p>
<pre><code>C:\Users\fyellin>dir "C:\Program Files\Microsoft Visual Studio\2022\Enterprise\VC\Tools\MSVC\"
Volume in drive C is Windows
...
12/20/2023 08:29 AM <DIR> .
10/10/2023 10:41 PM <DIR> ..
10/10/2023 11:04 PM <DIR> 14.16.27023
10/10/2023 11:05 PM <DIR> 14.29.30133
10/10/2023 10:47 PM <DIR> 14.35.32215
10/10/2023 11:09 PM <DIR> 14.37.32822
</code></pre>
<p>When I look at the file <a href="https://github.com/strawlab/python-pcl/blob/master/_msvccompiler.py" rel="nofollow noreferrer"><code>msvccompiler.py</code></a>, I find myself completely lost as to how it found this particular MSVC directory and why it decided to use version 14.35.32215 in this directory. It seems to go deeper into Windows internals than I expected.</p>
<p>Is there some way of changing the compiler version? Where does this version number come from?</p>
|
<python><visual-studio><github><distutils>
|
2023-12-20 18:48:48
| 0
| 11,605
|
Frank Yellin
|
77,693,813
| 7,077,532
|
Populate NaN cells in dataframe table based on reference table based on specific row(s) and column values
|
<p>I have two tables. The first reference table is below:</p>
<pre><code>| Name | Target | Bonus |
|------|--------:|------:|
| Joe | 40 | 46 |
| Phil | 38 | 42 |
| Dean | 65 | 70 |
</code></pre>
<p>The Python code to generate the table is:</p>
<pre><code># Data for the table
data = {
'Name': ['Joe', 'Phil', 'Dean'],
'Target': [40, 38, 65],
'Bonus': [46, 42, 70]
}
# Creating the DataFrame
ref = pd.DataFrame(data)
</code></pre>
<p>My second table is below:</p>
<pre><code>| week | Metrics | Joe | Dean |
|------------|---------|----:|-----:|
| 11/6/2023 | Target | 40 | 65 |
| 11/6/2023 | Bonus | 46 | 70 |
| 11/6/2023 | Score | 33 | 71 |
| 11/13/2023 | Target | 40 | NaN |
| 11/13/2023 | Bonus | 46 | NaN |
| 11/13/2023 | Score | 45 | NaN |
| 11/20/2023 | Target | 40 | 65 |
| 11/20/2023 | Bonus | 46 | 70 |
| 11/20/2023 | Score | 35 | 68 |
| 11/27/2023 | Target | NaN | 65 |
| 11/27/2023 | Bonus | NaN | 70 |
| 11/27/2023 | Score | NaN | 44 |
| 12/4/2023 | Target | 40 | 65 |
| 12/4/2023 | Bonus | 46 | 70 |
| 12/4/2023 | Score | 42 | 66 |
</code></pre>
<p>The Python code to generate this table is:</p>
<pre><code># Data for the new table
data = {
'week': ['11/6/2023', '11/6/2023', '11/6/2023', '11/13/2023', '11/13/2023', '11/13/2023',
'11/20/2023', '11/20/2023', '11/20/2023', '11/27/2023', '11/27/2023', '11/27/2023',
'12/4/2023', '12/4/2023', '12/4/2023'],
'Metrics': ['Target', 'Bonus', 'Score', 'Target', 'Bonus', 'Score',
'Target', 'Bonus', 'Score', 'Target', 'Bonus', 'Score',
'Target', 'Bonus', 'Score'],
'Joe': [40, 46, 33, 40, 46, 45, 40, 46, 35, None, None, None, 40, 46, 42],
'Dean': [65, 70, 71, None, None, None, 65, 70, 68, 65, 70, 44, 65, 70, 66]
}
# Creating the DataFrame
df = pd.DataFrame(data)
</code></pre>
<p>As you can see Dean has a week where his Target, Bonus, and Score cells are blank. So does Joe in a later week. In these specific instances where the cell is NaN I want to populate them using the following rules:</p>
<ul>
<li>Get Target and Bonus cell values for each person from the first reference table and populate the NaN cell accordingly.</li>
<li>Set the Score cell equal to the Target cell value for the person.</li>
</ul>
<p>My desired output table would look like this:</p>
<pre><code>| week | Metrics | Joe | Dean |
|------------|---------|----:|-----:|
| 11/6/2023 | Target | 40 | 65 |
| 11/6/2023 | Bonus | 46 | 70 |
| 11/6/2023 | Score | 33 | 71 |
| 11/13/2023 | Target | 40 | 65 |
| 11/13/2023 | Bonus | 46 | 70 |
| 11/13/2023 | Score | 45 | 65 |
| 11/20/2023 | Target | 40 | 65 |
| 11/20/2023 | Bonus | 46 | 70 |
| 11/20/2023 | Score | 35 | 68 |
| 11/27/2023 | Target | 40 | 65 |
| 11/27/2023 | Bonus | 46 | 70 |
| 11/27/2023 | Score | 40 | 44 |
| 12/4/2023 | Target | 40 | 65 |
| 12/4/2023 | Bonus | 46 | 70 |
| 12/4/2023 | Score | 42 | 66 |
</code></pre>
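<p>For reference, a sketch of the rule-based fill I have in mind (plain pandas, using the column names above on a truncated copy of the data; not necessarily the best way):</p>

```python
import pandas as pd

# Reference table, as above
ref = pd.DataFrame({
    'Name': ['Joe', 'Phil', 'Dean'],
    'Target': [40, 38, 65],
    'Bonus': [46, 42, 70],
})

# Truncated copy of the weekly table; Dean's 11/13 week is all NaN
df = pd.DataFrame({
    'week': ['11/6/2023'] * 3 + ['11/13/2023'] * 3,
    'Metrics': ['Target', 'Bonus', 'Score'] * 2,
    'Joe': [40, 46, 33, 40, 46, 45],
    'Dean': [65.0, 70.0, 71.0, None, None, None],
})

lookup = ref.set_index('Name')
for name in ['Joe', 'Dean']:  # the person columns present in df
    for metric in ['Target', 'Bonus']:
        mask = (df['Metrics'] == metric) & df[name].isna()
        df.loc[mask, name] = lookup.loc[name, metric]
    # Rule: a missing Score is filled with the person's Target
    score_mask = (df['Metrics'] == 'Score') & df[name].isna()
    df.loc[score_mask, name] = lookup.loc[name, 'Target']
```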
|
<python><pandas><dataframe><lookup><fillna>
|
2023-12-20 18:42:06
| 4
| 5,244
|
PineNuts0
|
77,693,646
| 2,714,301
|
How to write to a file in batches in python performantly. Anything else I need?
|
<p>I am trying to move a large amount of data around, and I read that I can use a couple of libraries in Python to speed things up.</p>
<ol>
<li>generators</li>
<li>itertools.islice</li>
<li>multiprocessing</li>
</ol>
<p>Here is some code, generically, that I intend to expand on. Is this the right path:</p>
<pre><code>import multiprocessing
import itertools
# Define a generator function to yield lines of text
def generate_lines():
    # Replace this with your logic to generate lines of text
    for i in range(10000):
        yield f"Line {i + 1}"

# Function to write lines to a file
def write_lines(filename, lines):
    with open(filename, 'w') as file:
        for line in lines:
            file.write(line + '\n')

if __name__ == '__main__':
    # Create a pool of processes
    with multiprocessing.Pool(2) as pool:
        # Use itertools.islice to split the generator into chunks of 5000 lines each
        chunk_size = 5000
        for i in range(0, 10000, chunk_size):
            chunk = itertools.islice(generate_lines(), i, i + chunk_size)
            pool.apply_async(write_lines, (f'file{i // chunk_size + 1}.txt', chunk))
        # Wait for all processes to complete
        pool.close()
        pool.join()
    print("Writing completed successfully.")
</code></pre>
<p>I basically don't want to ever expand the whole list in memory and I also want to double my speed using the pool.</p>
<p>My last issue is this: When I read from a source file instead of generating fake lines... is there any way to read lines from a large source file in generator batches as well?</p>
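<p>For that last part, a sketch of reading a large source file in batches with <code>islice</code>, so that only one batch is ever materialized in memory (the path is hypothetical):</p>

```python
import itertools

def read_in_batches(path, batch_size=5000):
    # Yield lists of at most batch_size lines each; only the current
    # batch is held in memory.
    with open(path) as f:
        while True:
            batch = list(itertools.islice(f, batch_size))
            if not batch:
                return
            yield batch
```

<p>Each yielded list could then be handed to <code>pool.apply_async</code> like the chunks above; note the arguments must be picklable, which plain lists are.</p>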
|
<python><file-writing>
|
2023-12-20 18:08:29
| 0
| 11,747
|
Jwan622
|
77,693,566
| 16,852,890
|
Dynamically create eval condition in python
|
<p>I've got a list with n elements, let's say there are 3 for example:</p>
<pre><code>[Condition1, Condition2, Condition3]
</code></pre>
<p>I want to build up a string which looks like this, in the order that appears in the list (that order is vital!):</p>
<pre><code>eval(Condition1).otherwise(eval(Condition2).otherwise(eval(Condition3)))
</code></pre>
<p>So if only the first 2 elements exist in the list in that order, then I would get this:</p>
<pre><code>eval(Condition1).otherwise(eval(Condition2))
</code></pre>
<p>And similarly if only the first element is in the list, then I want this:</p>
<pre><code>eval(Condition1)
</code></pre>
<p>Just for context, Condition1, Condition2 and Condition3 are already set prior to this, and I'm aiming to eval all of them in one statement. Example code is below, actual data is a lot more complicated of course, the only thing I'm struggling with is how to build the final EVAL_ALL statement</p>
<pre><code>Condition1 = "when(col('YearMonths') < 202306, 'SMALL')"
Condition2 = "when(col('YearMonths') > 202306, 'BIG')"
EVAL_ALL = "eval(Condition2).otherwise(eval(Condition1).otherwise('Nothing'))"
display(df.select(col('YearMonths'), eval(EVAL_ALL)))
</code></pre>
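<p>For context, a sketch of the kind of helper I'm imagining, which builds the nested string for any number of condition names by working from the innermost <code>eval</code> outwards:</p>

```python
def build_eval_expr(condition_names):
    # Start with the innermost eval(...) and wrap outward with .otherwise(...)
    expr = f"eval({condition_names[-1]})"
    for name in reversed(condition_names[:-1]):
        expr = f"eval({name}).otherwise({expr})"
    return expr

print(build_eval_expr(["Condition1"]))
# eval(Condition1)
print(build_eval_expr(["Condition1", "Condition2", "Condition3"]))
# eval(Condition1).otherwise(eval(Condition2).otherwise(eval(Condition3)))
```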
|
<python>
|
2023-12-20 17:52:54
| 1
| 316
|
tommyhmt
|
77,693,508
| 3,383,640
|
How to Export OSM Maps / Plots Into Vector Graphics Using Python Plotly
|
<p>Taking <a href="https://plotly.com/python/mapbox-layers/" rel="nofollow noreferrer">this example from the manual</a>,</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
us_cities = pd.read_csv("https://raw.githubusercontent.com/plotly/datasets/master/us-cities-top-1k.csv")
import plotly.express as px
fig = px.scatter_mapbox(us_cities, lat="lat", lon="lon", hover_name="City", hover_data=["State", "Population"],
color_discrete_sequence=["fuchsia"], zoom=3, height=300)
fig.update_layout(mapbox_style="open-street-map")
fig.update_layout(margin={"r":0,"t":0,"l":0,"b":0})
fig.write_image('/tmp/so.pdf')
</code></pre>
<p>one can see that the map (including the plotted dots) is in fact just an embedded raster graphic:</p>
<p><a href="https://i.sstatic.net/MBQHs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MBQHs.png" alt="plot" /></a></p>
<p>How do I export OpenStreetMap plots into full vector graphics?</p>
|
<python><plotly><vector-graphics>
|
2023-12-20 17:39:55
| 1
| 5,078
|
Suuuehgi
|
77,693,477
| 8,456,253
|
Defining two foreign keys in one entity targeting one other entity in SQLAlchemy
|
<p>I have the entities set up as follows</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import ForeignKey
from sqlalchemy.orm import mapped_column, relationship, Mapped, DeclarativeBase
class Base(DeclarativeBase):
    pass

class UserEntity(Base):
    __tablename__ = "user"
    id: Mapped[str] = mapped_column(primary_key=True)
    last_message = relationship("MessageEntity", back_populates="from_user", uselist=False)

class MessageEntity(Base):
    __tablename__ = "message"
    id: Mapped[str] = mapped_column(primary_key=True)
    content: Mapped[str] = mapped_column()
    from_user_id: Mapped[str] = mapped_column(ForeignKey('user.id'))
    to_user_id: Mapped[str] = mapped_column(ForeignKey('user.id'))
    # relationships
    from_user = relationship("UserEntity", back_populates="last_message", foreign_keys=[from_user_id], uselist=False)
</code></pre>
<p>This says that a user can have a last message, which is in a one-to-one relationship via the <code>from_user</code> column in the <code>MessageEntity</code>. Now I want to store <code>to_user_id</code> inside <code>MessageEntity</code> without any relationship with <code>UserEntity</code>; it should just store a user's ID, but when I add this field I get</p>
<pre><code>sqlalchemy.exc.AmbiguousForeignKeysError: Could not determine join condition between parent/child tables on relationship UserEntity.last_message - there are multiple foreign key paths linking the tables. Specify the 'foreign_keys' argument, providing a list of those columns which should be counted as containing a foreign key reference to the parent table.
</code></pre>
<p>I'm not sure which relationship SQLAlchemy is trying to infer by having a <code>to_user_id</code> foreign key, but I only want it to store a simple id, with no strings attached. Is this possible?</p>
|
<python><sqlalchemy><orm>
|
2023-12-20 17:35:40
| 1
| 830
|
kuco 23
|
77,693,446
| 11,901,834
|
Conditonal tasks in Airflow
|
<p>I have an Airflow DAG that has two tasks that I only want to run if certain conditions are met. If these conditions aren't met, I'd like them skipped.</p>
<p>The DAG looks like this:</p>
<pre><code>@dag(
    dag_id="id_123",
    schedule=DAG_SCHEDULE,
    start_date=days_ago(0),
    catchup=False,
    default_args={
        "retries": 0,
    },
)
def dag_runner():
    @task(task_id="get_data_src_a")
    def get_data_src_a() -> list:
        ...  # return data from src_a

    @task(task_id="get_data_src_b")
    def get_data_src_b() -> list:
        ...  # return data from src_b

    @task(task_id="find_uniq_users")
    def find_uniq_users(users_from_a, users_from_b) -> list:
        ...  # return users in src_a but not in src_b

    @task(task_id="do_something_with_users")
    def do_something_with_users(uniq_users):
        ...  # do something with unique users

    users_from_a = get_data_src_a()
    users_from_b = get_data_src_b()
    uniq_users = find_uniq_users(users_from_a, users_from_b)
    do_something_with_users(uniq_users)
</code></pre>
<p>The above code works, but I'd like to improve it. I don't want <code>find_uniq_users</code> to run if <code>users_from_b</code> is an empty list.</p>
<p>I also don't want <code>do_something_with_users</code> to run if <code>uniq_users</code> is empty.</p>
<p>I've tried the following, but it didn't appear to work:</p>
<pre><code>if users_from_b:
    uniq_users = find_uniq_users(users_from_a, users_from_b)
else:
    uniq_users = users_from_a

if uniq_users:
    do_something_with_users(uniq_users)
</code></pre>
<p>Is anyone able to show me what I should be doing?</p>
|
<python><airflow>
|
2023-12-20 17:30:28
| 1
| 1,579
|
nimgwfc
|
77,693,163
| 2,650,325
|
scrapy stops crawling with 500 Internal Server Error
|
<p>I am crawling a web with scrapy and I receive the error:</p>
<pre><code>Gave up retrying <GET https://www.something.net> (failed 3 times): 500 Internal Server Error
</code></pre>
<p>even though in the parse method I have added this parameter to the meta of the <code>scrapy.Request</code> that calls the parse function:</p>
<pre><code>"handle_httpstatus_all": True,
</code></pre>
<p>Then in the parse function I do:</p>
<pre><code>item = response.meta['item']
if response.status == 200:
    # Keeps building the item
    yield item
</code></pre>
<p>So in theory this should not happen. What can I do to avoid it?</p>
|
<python><scrapy>
|
2023-12-20 16:39:04
| 1
| 2,417
|
Manuel Perez Heredia
|
77,693,044
| 12,633,371
|
transform a string representing a list in each cell of a polars DataFrame column to an actual list
|
<p>I am new to the polars library, and the title says it all regarding what I am trying to do.</p>
<p>Doing this with the pandas library, I would use <code>apply()</code> and the built-in <code>eval()</code> function of Python, since <code>eval("[1,2,3]")</code> returns <code>[1,2,3]</code>.</p>
<p>This can be done in polars as well - below I have an expected output example - but polars strongly recommends using its <code>Expression</code> API. I searched the <a href="https://pola-rs.github.io/polars/py-polars/html/reference/expressions/string.html" rel="nofollow noreferrer">Expr.str</a> attribute but didn't find an expression that does this. Am I missing something, or should I go with <code>apply()</code>?</p>
<pre><code>data = {'col_string': ['[1,2,3]', '[4,5,6]']}
df = pl.DataFrame(data)
df = df.with_columns(pl.col('col_string').map_elements(eval).alias('col_list'))
shape: (2, 2)
┌────────────┬───────────┐
│ col_string ┆ col_list │
│ --- ┆ --- │
│ str ┆ list[i64] │
╞════════════╪═══════════╡
│ [1,2,3] ┆ [1, 2, 3] │
│ [4,5,6] ┆ [4, 5, 6] │
└────────────┴───────────┘
</code></pre>
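<p>Incidentally, since these strings happen to be valid JSON, <code>json.loads</code> could replace <code>eval</code> as the per-element callable (safer, same result):</p>

```python
import json

strings = ['[1,2,3]', '[4,5,6]']
parsed = [json.loads(s) for s in strings]
print(parsed)  # [[1, 2, 3], [4, 5, 6]]
# e.g. df.with_columns(pl.col('col_string').map_elements(json.loads))
```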
|
<python><python-polars>
|
2023-12-20 16:18:29
| 1
| 603
|
exch_cmmnt_memb
|
77,692,997
| 1,492,613
|
Is there any configuration can force initalize the NumPy module during import pytorch?
|
<p>I have things like</p>
<pre><code>python3.11/site-packages/torch/nn/modules/transformer.py:20: UserWarning: Failed to initialize NumPy: module compiled against API version 0x10 but this version of numpy is 0xf (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:84.)
</code></pre>
<p>But I definitely need numpy, so I need the API version to match. I do not want this to just be a warning; I want it to be an error. I do <strong>not</strong> want to manually detect the NumPy version, because both the pytorch and numpy versions can vary, and different numpy versions can share the same API version. It should only error out when the API is mismatched.</p>
<p>Preferably, I also want to fail or install a new numpy at the pip level if the currently installed numpy is too old. Is there any way to do this instead of manually hard-coding numpy==xxx on the pip command line?</p>
<p><strong>update</strong>
It seems I can use <code>warnings.filterwarnings("error", category=UserWarning, message=warning_pattern)</code> as early as possible (before importing any pytorch) to convert the warning into an error.</p>
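<p>A self-contained sketch of that approach (the message pattern is my guess at the prefix of the NumPy-init warning text):</p>

```python
import warnings

# Escalate the specific UserWarning into an exception; for the real
# case this must run before importing torch.
warnings.filterwarnings(
    "error",
    category=UserWarning,
    message=r"Failed to initialize NumPy",  # matched against the start of the message
)

try:
    warnings.warn("Failed to initialize NumPy: API version mismatch", UserWarning)
    raised = False
except UserWarning:
    raised = True
```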
<p>However, it looks like there is still no way to force pip to update numpy to a compatible version.</p>
|
<python><numpy><pytorch>
|
2023-12-20 16:11:02
| 1
| 8,402
|
Wang
|
77,692,945
| 5,775,358
|
xarray plot sparse data
|
<p>Xarray uses pcolormesh for plotting. For example, with 2D data where one axis is not continuous, pcolormesh will interpolate values. The nice thing about xarray is that it is "coordinate aware", i.e. it does not just plot like imshow; the real dimensions are used.</p>
<pre><code>import numpy as np
import xarray as xr
x = np.random.randn(1000, 100)
y = np.hstack([np.arange(0, .1, 0.01) + i for i in range(100)])
z = np.linspace(0, 20, 100)
da = xr.DataArray(x.T, dims=("z", "y"), coords={"y": y, "z": z})
da.sel(y=slice(1, 4)).plot()
</code></pre>
<p>This is an example that illustrates the problem. The y dimension is not continuous: it goes from 0.0 to 0.1, from 1.0 to 1.1, ..., from n.0 to n.1. Is there a way to mask the regions where there is no data, as in [1], without resampling for every data point? I would like to avoid inserting all the intermediate points on the y dimension and filling them with NaN values, because there are more data points without data than data points with data.</p>
<p>Imshow did not solve the problem, since imshow is not aware of the "jumps" in the y dimension and just plots the array as an image (which is what it is supposed to do, so no bug there).</p>
<p>I know that pcolormesh is used to interpolate between values (i.e. pcolormesh actually uses the coordinates to fill the white space in between, whereas imshow uses the columns and rows as color values [2]).</p>
<p>[1] <a href="https://matplotlib.org/stable/gallery/images_contours_and_fields/quadmesh_demo.html#sphx-glr-gallery-images-contours-and-fields-quadmesh-demo-py" rel="nofollow noreferrer">https://matplotlib.org/stable/gallery/images_contours_and_fields/quadmesh_demo.html#sphx-glr-gallery-images-contours-and-fields-quadmesh-demo-py</a></p>
<p>[2] <a href="https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py" rel="nofollow noreferrer">https://matplotlib.org/stable/gallery/images_contours_and_fields/pcolormesh_grids.html#sphx-glr-gallery-images-contours-and-fields-pcolormesh-grids-py</a></p>
|
<python><plot><mask><python-xarray>
|
2023-12-20 16:01:23
| 0
| 2,406
|
3dSpatialUser
|
77,692,927
| 3,785,430
|
What is the best way for large data movement task
|
<p>I am working on a solution that downloads files from a third-party service and uploads those files to AWS S3.
I need to download one year of data and upload it to S3.</p>
<ul>
<li>for each day the number of files is 10K to 15K</li>
<li>Download each file and upload to s3</li>
<li>add a metadata record in db for future reference.</li>
</ul>
<p>I've implemented a Python script that downloads the files for each day and uploads them to S3. I used multiprocessing, but I am not sure how to run it for the whole year of dates.</p>
<p>What would be the best and most time-efficient way to process this?</p>
<p>My code looks like this</p>
<pre><code>def download_files(date):
    file_list = get_file_list(date)
    for file_path in file_list:
        local = download_file(file_path)
        s3_key = upload_to_s3(local)
        insert_record_to_db(s3_key, file_path)
        os.remove(local)

def get_dates(month):
    days = calendar.monthrange(2023, month)[1]
    return [f"{month}-{day}-2023" for day in range(1, days + 1)]

pool = mp.Pool()
dates = get_dates(1)
pool.map(download_files, dates)
</code></pre>
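<p>For the whole year, a sketch of generating every date with <code>datetime</code> instead of per-month calls (keeping the same unpadded M-D-YYYY format as <code>get_dates</code> above):</p>

```python
from datetime import date, timedelta

def dates_for_year(year):
    # Walk day by day from Jan 1 until the year rolls over.
    d = date(year, 1, 1)
    out = []
    while d.year == year:
        out.append(f"{d.month}-{d.day}-{year}")
        d += timedelta(days=1)
    return out

dates = dates_for_year(2023)  # "1-1-2023" ... "12-31-2023"
```

<p><code>pool.map(download_files, dates)</code> would then fan out over all days, with the pool size capping concurrency.</p>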
|
<python><amazon-s3><multiprocessing>
|
2023-12-20 15:59:14
| 0
| 413
|
kishore
|
77,692,872
| 9,311,137
|
Pandas: Use first layer of nested dict as columns then the inner dict as rows
|
<p>I'm after creating a table that looks like the below from a 2-layer nested dictionary:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Day</th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
</tr>
</thead>
<tbody>
<tr>
<td>mon</td>
<td>11,791</td>
<td>22,099</td>
<td>794</td>
<td>52,328</td>
<td>337,496</td>
<td>3,137</td>
</tr>
<tr>
<td>tues</td>
<td>24,335</td>
<td>24,521</td>
<td>320</td>
<td>50,260</td>
<td>264,658</td>
<td>3,621</td>
</tr>
<tr>
<td>wed</td>
<td>12,615</td>
<td>24,163</td>
<td>398</td>
<td>42,357</td>
<td>413,464</td>
<td>3,297</td>
</tr>
<tr>
<td>thurs</td>
<td>13,743</td>
<td>27,049</td>
<td>1,272</td>
<td>40,439</td>
<td>325,054</td>
<td>4,682</td>
</tr>
<tr>
<td>fri</td>
<td>9,082</td>
<td>27,580</td>
<td>311</td>
<td>38,495</td>
<td>343,452</td>
<td>7,845</td>
</tr>
<tr>
<td>sat</td>
<td>5,667</td>
<td>23,062</td>
<td>306</td>
<td>40,738</td>
<td>282,158</td>
<td>13,682</td>
</tr>
<tr>
<td>sun</td>
<td>4,385</td>
<td>18,576</td>
<td>1,251</td>
<td>39,602</td>
<td>292,401</td>
<td>12,084</td>
</tr>
</tbody>
</table>
</div>
<p>I've got code that builds a dictionary representing this data that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>data_collated = {
"A": {"mon": 11791, "tue": 24335, "wed": 12615, ...},
"B": {"mon": 22099, "tue": 24521, ...},
"C": {...},
"D": {...},
"E": {...},
"F": {...},
}
</code></pre>
<p>I believe I need to throw some sort of dictionary comprehension at this to convert it into tuples, but I'm struggling to wrap my head around it.</p>
<p><em>Ideally</em> a solution will deal with arbitrary numbers of columns (the A-F keys of the outer dict)</p>
<p>My current obviously-wrong attempt at a solution, based on the answer to a related question here, looks like this:</p>
<pre class="lang-py prettyprint-override"><code> df = pd.DataFrame.from_dict({i: (data_collated[i][j], k)
for j in DAYS.values()
for i in data_collated.keys()
for k in data_collated[i].values()})
</code></pre>
<p>This produced the A-F columns that I expected, but for obvious reasons no "Days" column (which I don't mind too much; I can add that in afterwards since I'm exporting to Excel, though it's not ideal). However, it only adds in 2 rows of data, not the 7 I would have expected for the 7 days of the week:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>C</th>
<th>D</th>
<th>E</th>
<th>F</th>
<th>G</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>114555</td>
<td>213344.4</td>
<td>11127.4</td>
<td>657818.7</td>
<td>3064173</td>
<td>46092.46</td>
<td>2902111</td>
</tr>
<tr>
<td>1</td>
<td>114555</td>
<td>213344.4</td>
<td>11127.4</td>
<td>657818.7</td>
<td>3064173</td>
<td>46092.46</td>
<td>2902111</td>
</tr>
</tbody>
</table>
</div>
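For reference, pandas builds exactly this orientation from a dict of dicts without any comprehension; a minimal sketch, using a truncated copy of the data shown above:

```python
import pandas as pd

# Truncated copy of the dict above
data_collated = {
    "A": {"mon": 11791, "tue": 24335, "wed": 12615},
    "B": {"mon": 22099, "tue": 24521, "wed": 24163},
}

# Outer keys become columns and inner keys become the row index,
# for any number of outer keys; then promote the index to a 'Day' column.
df = pd.DataFrame(data_collated).rename_axis('Day').reset_index()
```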
|
<python><pandas><dictionary>
|
2023-12-20 15:52:02
| 1
| 684
|
ch4rl1e97
|
77,692,864
| 1,961,574
|
Django: request from RequestFactory on Windows does not include session attribute
|
<p>My drone pipeline suddenly started failing on Windows only (the Linux pipeline, executing the same tests, works fine).</p>
<p>error:</p>
<blockquote>
<pre><code>assert hasattr(request, 'session'), "The session-based temporary "\ AssertionError: The session-based temporary message storage
</code></pre>
<p>requires session middleware to be installed, and come before the
message middleware in the MIDDLEWARE list</p>
</blockquote>
<p>Looking at RequestFactory, it indeed does not provide a session, but somehow it works on Linux, and it used to work on Windows too. What has changed, and why? And how should I add a dummy session to the request?</p>
<p><strong>test.py</strong></p>
<pre><code>class TestDynamicAlertSubscriptionAdminModule(TestCase):
def setUp(self):
request = RequestFactory().post(path="dummy")
request.user = self.user
request._messages = messages.storage.default_storage(request)
</code></pre>
<p><strong>settings.py</strong></p>
<pre><code>MIDDLEWARE = [
"django.contrib.sessions.middleware.SessionMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
]
INSTALLED_APPS = [
"django.contrib.sessions",
"django.contrib.messages",
]
</code></pre>
|
<python><django>
|
2023-12-20 15:50:49
| 1
| 2,712
|
bluppfisk
|
77,692,665
| 13,955,154
|
Find chains of identical elements in ndarray and modify them
|
<p>I have ndarrays like [1,1,1,1,3,3,3].</p>
<p>When there are consecutive elements with the same value, as in the example, I want the identical chain to be modified as follows: compute delta as 0.1% of the repeated value, subtract it from the first element of the chain and add it to the last, obtaining in the example:
[0.9999,1,1,1.0001,2.9997,3,3.0003].</p>
<p>Then I want to further adjust the values inside the chain so that the spacing is balanced. For example, there is a distance of 0.0002 between 0.9999 and 1.0001 and a total of 4 elements in the chain, so the step is 0.0002/(4-1) = 0.000067. Finally I get [0.9999, 0.999967, 1.000034, 2.9997, 3, 3.0003].</p>
<p>How can I do this?
I tried with this simple code just to modify the extremes but it doesn't work:</p>
<pre><code>def modify_extremes(arr):
modified_arr = arr.astype(float)
# Find indices where consecutive elements are not equal
indices = np.where(np.diff(arr) == 0)[0] + 1
# Iterate over the found indices and modify extremes
for i in indices:
modified_arr[i - 1] += 0.01
modified_arr[i] -= 0.01
return modified_arr
</code></pre>
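A sketch of one way to do both steps at once with np.linspace (endpoints at value ± delta, interior evenly spaced). Note the default rel_delta here follows the worked numbers above (0.9999 → 1.0001, i.e. 1e-4) rather than a literal 0.1%:

```python
import numpy as np

def spread_chains(arr, rel_delta=1e-4):
    """Spread each run of identical values evenly between value -/+ delta."""
    out = arr.astype(float)
    # Indices where the value changes -> run boundaries
    change = np.flatnonzero(np.diff(arr) != 0) + 1
    starts = np.r_[0, change]
    ends = np.r_[change, len(arr)]
    for s, e in zip(starts, ends):
        if e - s > 1:  # only modify true chains, leave single values alone
            d = arr[s] * rel_delta
            # Endpoints at value -/+ delta, interior evenly spaced
            out[s:e] = np.linspace(arr[s] - d, arr[s] + d, e - s)
    return out
```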
|
<python><arrays><numpy-ndarray>
|
2023-12-20 15:18:43
| 2
| 720
|
Lorenzo Cutrupi
|
77,692,602
| 16,688,854
|
Plotly Python - Multiple traces update with slider
|
<p>I use Plotly Python.
I have a list of spectrums I want to plot.
This list contains 2d arrays. Each array corresponds to a time frame and contains the spectrums for a certain number of sources (n*m with n number of sources and m length of the spectrums).
Here is a dummy version of that list (random values instead of spectrums):</p>
<pre class="lang-py prettyprint-override"><code># List of 1000 arrays/time frames with 20 sources/spectrums each
a = [np.random.rand(20, 5120) for i in range(1000)]
</code></pre>
<p>So far I’ve been able to plot that for one time frame (one value of the list) only in a plotly plot, by adding n traces to a figure where n is the number of sources I plot.</p>
<p>Now I would like to make it interactive and add a slider to quickly go through the time frames (elements of my list) without having to close the figure, change the frame index and plot for this new time frame.</p>
<p>I found an intermediary solution in this post: <a href="https://stackoverflow.com/a/58976725/2529954">Python: Change Custom Control Values in Plotly</a>.</p>
<p>But that solution, like all the other posts I found on the topic, seems to just control the visibility of traces. So I guess that under the hood it plots everything and only hides or shows it. In my case I can have a list of 1000 arrays/time frames, each containing 20 spectrums, so needless to say it does not perform well.
Is there any way to do it otherwise?</p>
<p>Below is a snapshot of what I produce at the moment for a single sequence:
<a href="https://i.sstatic.net/weKvK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/weKvK.png" alt="enter image description here" /></a></p>
<p>And here is what I achieved by using the solution in the stack overflow solution given above, to show just a few sequences that can be switched with a slider:
<a href="https://i.sstatic.net/R9IdH.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R9IdH.jpg" alt="enter image description here" /></a></p>
<p>You can reproduce this plot with the following code and with a list of dummy arrays (not actual real spectrums):</p>
<pre class="lang-py prettyprint-override"><code># List of 1000 arrays/time frames with 20 sources/spectrums each
a = [np.random.rand(20, 5120) for i in range(1000)]
# Create figure
fig = go.Figure()
# Add traces, one for each slider step
for i in np.arange(len(a)):
for k in range(0, a[i].shape[0]):
fig.add_trace(go.Scatter(mode="lines", x=frequencies, y=a[i][k, :], name=f"Source {k}"))
# Create and add slider
steps = []
for i in range(len(a)):
step = dict(
method="restyle",
args=["visible", [False] * a[i].shape[0]*len(a)],
)
step["args"][1][(i*a[i].shape[0]):(i*a[i].shape[0]+a[i].shape[0])] = [True]*a[i].shape[0] # Toggle i'th trace to "visible"
steps.append(step)
sliders = [dict(
active=1,
currentvalue={"prefix": "Frequency: "},
pad={"t": 50},
steps=steps )]
fig.update_layout(sliders=sliders)
# Edit slider labels
fig['layout']['sliders'][0]['currentvalue']['prefix'] = 'Sequence: '
for i in range(len(a)):
fig['layout']['sliders'][0]['steps'][i]['label'] = i
fig.show()
</code></pre>
<p>Thanks in advance for your help!</p>
<p>Antoine</p>
<p>PS: If you know of any way to do this with another library, don't hesitate to share!</p>
|
<python><plotly>
|
2023-12-20 15:09:37
| 1
| 337
|
Antoine101
|
77,692,566
| 2,506,946
|
Using Python's gspread, can I read the modifiedTime without fetching the entire spreadsheet? I want to use a local cache if there was no change
|
<p>I'm using gspread to load read from my spreadsheets and it's working great. However, I'm hitting Google data caps quite often, especially when I'm testing.</p>
<p>I created a caching method that uses a local copy if it is less than a minute old. However, when debugging, I often have to change things in the spreadsheet and see an immediate change in the execution of the code. On the other hand, because not all calls during debugging need a fresh copy, I'll hit the data caps quickly if I remove the caching.</p>
<p>Is there a way to check if the spreadsheet has changed before fetching it? I know there is a field <code>modifiedTime</code> in <code>_properties</code> of a spreadsheet, but it doesn't seem to have an accessor method. This suggests to me that there is a hidden caching mechanism. Is there one?</p>
<p>It also seems to me that to populate <code>modifiedTime</code> you need to read the entire spreadsheet anyways. Is that true?</p>
<p>So here is the question: how do I get the last modified time of a spreadsheet without using my data cap? Preferably using only gspread?</p>
|
<python><google-sheets><google-drive-api><google-sheets-api><gspread>
|
2023-12-20 15:05:21
| 1
| 341
|
user2506946
|
77,692,292
| 19,204,171
|
tesseract inaccuracy in extracting meaningless words
|
<p>I cannot extract text with reliable accuracy from an image.</p>
<p><a href="https://i.sstatic.net/RizGs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RizGs.png" alt="image" /></a></p>
<pre><code>import cv2
import pytesseract
pytesseract.pytesseract.tesseract_cmd = r'C:\Program Files\Tesseract-OCR\tesseract.exe'
image_path = 'crop.png'
img = cv2.imread(image_path)
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
# Apply more aggressive thresholding and additional morphology operations
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3,3))
thresh = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
custom_config = r'--oem 1 --psm 6 -c tessedit_char_whitelist=ABCDEFGHIJKLMNOPQRSTUVWXYZ'
text = pytesseract.image_to_string(thresh, config=custom_config)
print(text)
</code></pre>
<p>Here is my code; I am not getting reliable output. You can check it out here:</p>
<p><code>https://colab.research.google.com/drive/11utvWD3s6DqqGZQEnk5cKIAj46ZLsF5y?usp=sharing</code></p>
|
<python><ocr><tesseract><python-tesseract><image-preprocessing>
|
2023-12-20 14:22:53
| 1
| 438
|
mishal shanavas
|
77,692,284
| 11,283,324
|
Pandas dataframe cumsum splited by zero
|
<p>There is a df:</p>
<pre><code>import pandas as pd
df = pd.DataFrame([0,0,1,1,0,0,1,1,1,1,0,0,1,1,1,0,1,1], columns = ['A'])
</code></pre>
<p>How can I add a B column whose elements are the running count (cumsum) of 1s in column A, restarting from zero whenever a 0 is encountered?</p>
<p>Expected result:
<a href="https://i.sstatic.net/jYGxj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/jYGxj.png" alt="enter image description here" /></a></p>
<pre><code>>>> print(df)
A B
0 0
0 0
1 1
1 2
0 0
0 0
1 1
1 2
1 3
1 4
0 0
0 0
1 1
1 2
1 3
0 0
1 1
1 2
</code></pre>
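A sketch of the standard restart-on-zero trick: every zero starts a new group id via a cumulative sum, and a grouped cumsum of A then resets at each zero:

```python
import pandas as pd

df = pd.DataFrame([0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 1, 0, 1, 1],
                  columns=['A'])

# Each 0 starts a new group; the grouped cumsum of A counts the
# consecutive 1s and yields 0 on the zeros themselves.
groups = df['A'].eq(0).cumsum()
df['B'] = df.groupby(groups)['A'].cumsum()
```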
|
<python><pandas>
|
2023-12-20 14:22:06
| 0
| 351
|
Sun Jar
|
77,692,270
| 561,243
|
Change background transparency of AnchoredText in Matplotlib
|
<p>I'm trying to use an AnchoredText to print some information on an axis in Matplotlib, but I would like to have its background semitransparent (say alpha=0.5).</p>
<p>My initial guess was:</p>
<pre class="lang-py prettyprint-override"><code>
info_box = AnchoredText('test', loc='upper left', prop=dict(size=8, alpha=0.5), frameon=True,
bbox_to_anchor=(0., 1.), bbox_transform=axis.transAxes)
axis.add_artist(info_box)
</code></pre>
<p>But in this way the text was set 50% transparent, not the background.</p>
<p>Looking around I found that this is the expected behaviour and I have to make the bbox partially transparent, but I could not find how to do this on an AnchoredText object.</p>
<p>Do you have any suggestions?</p>
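One possible approach, sketched below: the frame of an AnchoredText is exposed as its patch attribute, whose alpha can be set directly without touching the text (shown headless with the Agg backend just for this sketch):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt
from matplotlib.offsetbox import AnchoredText

fig, ax = plt.subplots()
info_box = AnchoredText('test', loc='upper left',
                        prop=dict(size=8), frameon=True)
# The frame is a patch hanging off the AnchoredText; make it
# semitransparent while leaving the text fully opaque:
info_box.patch.set_alpha(0.5)
ax.add_artist(info_box)
```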
|
<python><matplotlib>
|
2023-12-20 14:19:07
| 1
| 367
|
toto
|
77,692,187
| 13,906,951
|
How to return errors from a Python gRPC interceptor?
|
<p>How do I check authentication in this interceptor and reject the request with an authentication error?</p>
<pre class="lang-py prettyprint-override"><code>class Authentication(grpc.ServerInterceptor):
def intercept_service(self, continuation, details):
# How to raise an error?
</code></pre>
<p>I've seen <a href="https://github.com/grpc/grpc/blob/master/examples/python/interceptors/default_value/default_value_client_interceptor.py" rel="nofollow noreferrer">this example</a> but this appears to be an older style of writing interceptors. I'm extending the <code>grpc.ServerInterceptor</code> class and passing the interceptor to the server <a href="https://github.com/grpc/grpc/blob/master/src/python/grpcio/grpc/__init__.py#L2159-L2162" rel="nofollow noreferrer">like this</a>. Is it possible to return an authentication error from an interceptor written like this?</p>
|
<python><grpc><interceptor>
|
2023-12-20 14:06:28
| 1
| 1,487
|
Clark McCauley
|
77,692,095
| 15,222,211
|
How can I run a function once after starting the FastAPI server?
|
<p>I need to start the <code>create_data</code> function once after the FastAPI <code>app</code> is started. I tried using <a href="https://fastapi.tiangolo.com/advanced/events/#lifespan" rel="nofollow noreferrer">lifespan</a> but found it not useful because <code>FastAPI</code> only starts working after <code>create_data</code> is finished. How can I run FastAPI before <code>create_data</code>?</p>
<pre class="lang-py prettyprint-override"><code>import time
import uvicorn
from fastapi import FastAPI
DATA = {"value": ""}
app = FastAPI()
@app.get("/")
def get_root():
return DATA
def create_data():
time.sleep(2)
DATA["value"] = "Hello World!"
if __name__ == "__main__":
# create_data() # Working, but I need to run this function only after the Web server is started
uvicorn.run(app)
create_data() # Not working, the function is not being called
</code></pre>
|
<python><fastapi><uvicorn>
|
2023-12-20 13:54:52
| 2
| 814
|
pyjedy
|
77,692,022
| 10,544,599
|
How to use strip method correctly?
|
<p>I've a string <code>'XCeed Plug-in Hybride'</code> and I'm trying to get <code>'XCeed'</code>.</p>
<p>Using <code>strip</code> method:</p>
<pre><code>>>> 'XCeed Plug-in Hybride'.strip(' Plug-in Hybride')
'XC'
</code></pre>
<p>I even tried using <code>rstrip</code>, still getting same output.</p>
<pre><code>>>> 'XCeed Plug-in Hybride'.rstrip(' Plug-in Hybride')
'XC'
</code></pre>
<p>Although I could use other methods, I would love to get the required output from the <code>strip</code> method. So can anyone suggest where I'm going wrong, or what I need to do to make it work?</p>
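For context, strip/rstrip treat their argument as a set of characters to remove from the ends, not as a substring, which is why the trailing 'eed' also disappears; a short sketch contrasting this with removesuffix (Python 3.9+) and split:

```python
s = 'XCeed Plug-in Hybride'

# strip()/rstrip() remove any of the given *characters* from the ends;
# 'e' and 'd' are in the set, so 'XCeed' loses its tail too.
assert s.rstrip(' Plug-in Hybride') == 'XC'

# removesuffix() (Python 3.9+) removes the exact substring instead:
assert s.removesuffix(' Plug-in Hybride') == 'XCeed'

# Or simply take the first whitespace-separated token:
assert s.split(maxsplit=1)[0] == 'XCeed'
```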
|
<python>
|
2023-12-20 13:41:28
| 1
| 379
|
David
|
77,691,979
| 5,615,873
|
How to do a correct insensitive sorting of a string list in Python?
|
<p>It seems that I cannot correctly sort a string list case-insensitively. I will give a simple example. I have tried the following methods with lst = ['b', 'B', 'a', 'A'].</p>
<ol>
<li><code>lst.sort(key=str.lower)</code></li>
<li><code>lst.sort(key=str.upper)</code></li>
<li><code>lst.sort(key=str.casefold)</code></li>
</ol>
<p>They all give the same result: ['a', 'A', 'b', 'B']. Apparently, this seems correct, but it isn't. The correct result should be ['A', 'a', 'B', 'b'] since 'A' is lower than 'a' and 'B' is lower than 'b' (ASCII-wise).</p>
<p>Is there any other method for a correct insensitive sorting? I mean, like the above, i.e. a one-liner sorting.</p>
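One one-liner that matches the ASCII-wise expectation, sketched here: sort case-insensitively first, then break ties with the raw string, where uppercase sorts before lowercase:

```python
lst = ['b', 'B', 'a', 'A']

# Primary key: case-folded letter; tie-breaker: the raw string.
# Since 'A' < 'a' in ASCII, uppercase comes first within each letter.
result = sorted(lst, key=lambda s: (s.casefold(), s))
# ['A', 'a', 'B', 'b']
```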
|
<python><list><sorting>
|
2023-12-20 13:35:29
| 4
| 3,537
|
Apostolos
|
77,691,855
| 2,067,492
|
For keras layers with unknown sizes is it possible to know the output size for a specific input?
|
<p>We can create a Convolution Neural Network with variable size inputs</p>
<pre><code>ip = keras.layers.Input((None, None, 3))
op = keras.layers.Conv2D(3, (2, 2))(ip)
model=keras.models.Model(inputs = [ip], outputs = [op])
</code></pre>
<p>Is there a way to know the size of the output layer op for a specific input?</p>
<p>I know there is a formula to calculate the size for this simple example. Is there a way I can have the model calculate the size for me?</p>
<p>One way I could do that would be to run some sample data through.</p>
<pre><code>x = numpy.random.random((1, 64, 64, 3))
y = model(x)
</code></pre>
<p>Now I can see both of their shapes 1, 64, 64, 3 and 1, 63, 63, 3.</p>
<p>My goal is to be able to use different cnn networks, where I don't know how to calculate the size in general, for example a Resnet101. I have different scale values for the outputs, and I want to be able to scale my ground truth data during training.</p>
<p>Can I get the output size, just from the model and the input data without running a sample through?</p>
|
<python><keras><keras-3>
|
2023-12-20 13:11:44
| 3
| 12,395
|
matt
|
77,691,809
| 8,776,330
|
Batch update entity property in pyautocad
|
<p>I am making a pyqt widget tool which should operate with data in AutoCAD.
Basically it allows the user to select a color and fill it into selected hatch objects in AutoCAD. This can actually be done directly in AutoCAD, but opening a custom palette and manually selecting a color is sometimes a time-consuming task.</p>
<p>My current tool is based on <code>pyautocad</code> and <code>pyqt</code> modules. This is how it looks like:</p>
<p><a href="https://i.sstatic.net/umcWo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/umcWo.png" alt="enter image description here" /></a></p>
<p>While the widget is open, the user can select hatches, choose a type in the widget, and press the <code>Fill type</code> button to get the hatches repainted.</p>
<p>This is how it looks like in Python code:</p>
<pre><code>red, green, blue = 146, 32, 6
acad = Autocad() # initializing AutoCAD connection
list_to_upd = []
curr_sset = acad.doc.PickfirstSelectionSet # get a list of selected entities
new_color = None
for obj in curr_sset:
if obj.ObjectName == 'AcDbHatch': # check if entity is hatch type
if not new_color:
tcolor = obj.TrueColor
tcolor.SetRGB(red, green, blue)
new_color = tcolor
obj.TrueColor = new_color # set a new background color for hatch
</code></pre>
<p>It works fine, but only on a small number of hatches. When more features are selected, it takes significantly longer; even with just 50 hatches selected, it can take 10-15 seconds to fill the new colors.</p>
<p>I was looking for solutions. Module's author tells about <a href="https://pyautocad.readthedocs.io/en/latest/_modules/pyautocad/cache.html" rel="nofollow noreferrer">using Cache</a> but I cannot figure out how to implement it in my code. Also tried using multiprocessing but looks like it's unable to use ActiveX objects with that. Also thought there would be a solution in passing prompts like in LISP or regular commands but I'm not familiar with AutoCAD syntax so far. Any ideas?</p>
|
<python><lisp><autocad>
|
2023-12-20 13:04:22
| 1
| 509
|
Pavel Pereverzev
|
77,691,801
| 1,734,097
|
How to solve Tabula error reading pdf to pandas?
|
<p>Help. I am having the following error in reading pdf files using tabula:</p>
<blockquote>
<p>Error importing jpype dependencies. Fallback to subprocess.
No module named 'jpype'
Error from tabula-java:
The operation couldn’t be completed. Unable to locate a Java Runtime.
Please visit <a href="http://www.java.com" rel="nofollow noreferrer">http://www.java.com</a> for information on installing Java.</p>
</blockquote>
<p>I'm using a MacBook with the following code:</p>
<pre><code>from tabula import read_pdf
for file in glob.glob(os.path.join(link_scrape['pdfs'],'*.pdf')):
try:
df = read_pdf(file,pages='all')
except Exception as e:
print(e)
break
</code></pre>
<p>How do I resolve this?</p>
|
<python>
|
2023-12-20 13:03:17
| 1
| 1,099
|
Cignitor
|
77,691,749
| 5,539,782
|
Close login window when scraping facebook groups using selenium
|
<p>I want to open a Facebook group page using Selenium, but I can't close the window that asks for login or signup:
<a href="https://i.sstatic.net/FDW9Y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FDW9Y.png" alt="enter image description here" /></a></p>
<p>I tried Selenium, but it always returns an <code>InvalidSelectorException</code> error.
This is the code I used:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service #used to give driver path
from selenium.webdriver.common.keys import Keys #used for keyboard keys
from selenium.webdriver.support.ui import WebDriverWait #used for waiting elem
from selenium.webdriver.support import expected_conditions as EC #used for waiting elem
options = Options()
options.add_argument('log-level=3')
options.add_argument("--disable-notifications")
options.add_argument("--start-maximized")
options.add_argument('--disable-blink-features=AutomationControlled') #avoid detection #STR
options.add_experimental_option("excludeSwitches", ["enable-automation"]) #avoid detection #STR
options.add_experimental_option('excludeSwitches', ['enable-logging']) #avoid detection #STR
options.add_experimental_option('useAutomationExtension', False) #avoid detection #STR
driver = webdriver.Chrome(service=Service("C:/m/chromedriver.exe"), options=options)
url = "https://www.facebook.com/groups/1732060157100583/"
driver.get(url)
driver.find_element(by=By.XPATH, value='#mount_0_0_Oy > div > div:nth-child(1) > div > div:nth-child(5) > div > div > div.x9f619.x1n2onr6.x1ja2u2z > div > div.x1uvtmcs.x4k7w5x.x1h91t0o.x1beo9mf.xaigb6o.x12ejxvf.x3igimt.xarpa2k.xedcshv.x1lytzrv.x1t2pt76.x7ja8zs.x1n2onr6.x1qrby5j.x1jfb8zj > div > div > div > div.x92rtbv.x10l6tqk.x1tk7jg1.x1vjfegm > div > i').click()
</code></pre>
|
<python><selenium-webdriver><web-scraping>
|
2023-12-20 12:54:32
| 1
| 547
|
Khaled Koubaa
|
77,691,693
| 19,041,863
|
How to limit x axis to non-empty region of histogram with predetermined bins
|
<p>I have this data (dfe) that I want to plot in a histogram</p>
<pre><code>Nr O18ad
0 -4.268475
1 -4.265793
2 -4.263120
3 -4.260457
4 -4.257803
...
359995 -7.813345
359996 -7.821394
359997 -7.773479
359998 -7.807605
359999 -7.797769
</code></pre>
<p>Here I have this code snippet: It is part of an 8 part figure hence the subfigure function</p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
def plot_Diff(data):
fig = plt.figure(figsize=(10, 10))
ax =fig.add_subplot(423)
x= dfe['O18ad']
bins=[-20, -19, -18, -17, -16, -15, -14, -13, -12, -11, -10, -9, -8, -7, -6, -5, -4, -3, -2, -1, 0, 1]
old_yticks = ax.get_yticks()
bin_counts, _, bars = plt.hist(x, bins, alpha=0.65, label='old', edgecolor='black', color='lightgrey')
new_max_tick = bin_counts.max()
old_yticks = ax.get_yticks()
new_ticks = old_yticks[old_yticks < new_max_tick][:-1]
new_ticks = np.append(new_ticks, new_max_tick)
ax.set_yticks(new_ticks)
manual_ticks=[0,20_000]
if manual_ticks is None:
old_yticks = ax.get_yticks()
new_ticks = old_yticks[old_yticks < new_max_tick][:-1]
new_ticks = np.append(new_ticks, new_max_tick)
else:
new_ticks = np.append(manual_ticks, new_max_tick)
ax.set_yticks(new_ticks)
plt.show()
</code></pre>
<p>My output looks like this:</p>
<p><a href="https://i.sstatic.net/ZTEqU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZTEqU.png" alt="enter image description here" /></a></p>
<p>Now my question is how can I change the X axis ticks so that the maximum values (in this example 0) are automatically set in the right corner and the displayed minimum (in this example -20) values are in the left corner?</p>
<p>It is important that the max and min values are set automatically because I have about 100 histograms where I cannot always set them manually.</p>
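One way to derive the limits automatically, sketched with numpy only (the sample values here are made up): histogram the data against the fixed bin edges, find the first and last non-empty bins, and feed their edges to ax.set_xlim:

```python
import numpy as np

# Dummy stand-in for dfe['O18ad'] values
x = np.array([-7.8, -7.7, -6.1, -4.3, -4.2])

bins = list(range(-20, 2))  # the same fixed edges as in the snippet

counts, edges = np.histogram(x, bins=bins)
nonempty = np.flatnonzero(counts > 0)
left = edges[nonempty[0]]        # left edge of the first non-empty bin
right = edges[nonempty[-1] + 1]  # right edge of the last non-empty bin
# then, inside plot_Diff: ax.set_xlim(left, right)
```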
|
<python><matplotlib>
|
2023-12-20 12:43:49
| 1
| 303
|
Weiss
|
77,691,579
| 521,347
|
Environmental variables not accessible in Dataflow workers
|
<p>I have a Python application which uses Apache Beam with Dataflow as the runner. The application uses a non-public Python package 'uplight-telemetry', which is configured via 'extra_packages' while creating the pipeline_options object. This package expects an environment variable named 'OTEL_SERVICE_NAME', and since this variable is not present in the Dataflow worker, it results in an error during application startup.</p>
<p>I am passing this variable using custom pipeline options. Code to create pipeline options is as follows-</p>
<pre><code>pipeline_options = ProcessBillRequests.CustomOptions(
project=gcp_project_id,
region="us-east1",
job_name=job_name,
temp_location=f'gs://{TAS_GCS_BUCKET_NAME_PREFIX}{os.getenv("UP_PLATFORM_ENV")}/temp',
staging_location=f'gs://{TAS_GCS_BUCKET_NAME_PREFIX}{os.getenv("UP_PLATFORM_ENV")}/staging',
runner='DataflowRunner',
save_main_session=True,
service_account_email= service_account,
subnetwork=os.environ.get(SUBNETWORK_URL),
extra_packages=[uplight_telemetry_tar_file_path],
setup_file=setup_file_path,
OTEL_SERVICE_NAME=otel_service_name,
OTEL_RESOURCE_ATTRIBUTES=otel_resource_attributes
    # Set values for additional custom variables as needed
)
</code></pre>
<p>And the code that executes the pipeline is as follows-</p>
<pre><code>result = (
pipeline
| "ReadPendingRecordsFromDB" >> read_from_db
| "Parse input PCollection" >> beam.Map(ProcessBillRequests.parse_bill_data_requests)
| "Fetch bills " >> beam.ParDo(ProcessBillRequests.FetchBillInformation())
)
pipeline.run().wait_until_finish()
</code></pre>
<p>Is there a way I can make the environment variables set in the custom options available in the worker?</p>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-12-20 12:25:46
| 2
| 1,780
|
Sumit Desai
|
77,691,456
| 5,423,080
|
Error when read a single column from a huge CSV file with Pandas and PyArrow engine
|
<p>I am trying to read in Pandas a single column from a huge CSV file using the answer from another <a href="https://stackoverflow.com/questions/75029194/most-efficient-way-to-read-a-specific-column-in-large-csv-file">question</a>:</p>
<pre><code>import pandas as pd
test_df = pd.read_csv("test.csv", usecols=["id_str"], engine="pyarrow")
</code></pre>
<p>and I obtain this error:</p>
<pre><code>pyarrow.lib.ArrowInvalid: CSV parse error: Expected 4 columns, got 3
</code></pre>
<p>Using a much smaller file, I can read it using just <code>pd.read_csv</code> without any option.</p>
<p>Reading around it seems this problem is related to the fact that the CSV file has empty cells, which are filled by <code>NaN</code> when <code>pd.read_csv</code> is used without options, but they create problems in the other case.</p>
<p>I haven't found any solution for this problem yet. Any suggestions?</p>
<p>I want to read just some columns, because the file is really huge and I need just those for the analysis I have to do.</p>
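For illustration, a small reproduction of the difference between engines on ragged rows (the column names besides id_str are made up): the default C engine pads short rows with NaN, while the pyarrow engine expects a fixed column count per row and can raise the parse error shown above:

```python
import io
import pandas as pd

# A small CSV whose second data row is missing trailing cells,
# mimicking the ragged rows in the big file:
raw = "id_str,a,b,c\nx,1,2,3\ny,4,5\n"

# The default C engine fills the missing trailing cells with NaN,
# so selecting just one column works fine:
df = pd.read_csv(io.StringIO(raw), usecols=["id_str"])
```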
|
<python><pandas><csv><pyarrow>
|
2023-12-20 12:07:09
| 1
| 412
|
cicciodevoto
|
77,691,408
| 4,405,794
|
Slow Initialization and Inference Time with PyTorch and Pyannote on EC2 Instance Despite Caching
|
<p>The following code, which uses <code>pytorch</code> and <code>pyannote</code> and runs a simple inference is taking a good amount of time to load on a freshly created <code>g4dn</code> EC2 instance using an Amazon Machine Image in which all these imports were cached and the model has already been run.</p>
<pre><code>import time
import os
start_time = time.time()
from torch import device
from torch.cuda import is_available
print(f"torch import time: {time.time() - start_time:.2f} s")
start_time = time.time()
from pyannote.audio import Pipeline
print(f"pyannote import time: {time.time() - start_time:.2f} s")
API_TOKEN = os.environ.get('API_TOKEN')
start_time = time.time()
pipeline = Pipeline.from_pretrained("pyannote/speaker-diarization-3.1", use_auth_token=API_TOKEN)
if is_available():
pipeline.to(device("cuda"))
print(f"model loading time: {time.time() - start_time:.2f} s")
start_time = time.time()
pipeout = pipeline('/home/ubuntu/diarisation/test_10s.wav')
print(f"model inference time: {time.time() - start_time:.2f} s")
</code></pre>
<p>On the first run after the EC2 instance creation, these were the execution times:</p>
<pre><code># torch import time: 9.97 s
# pyannote import time: 13.39 s
# model loading time: 20.99 s
# model inference time: 25.85 s
</code></pre>
<p>Subsequent runs:</p>
<pre><code># torch import time: 1.34 s
# pyannote import time: 2.18 s
# model loading time: 1.95 s
# model inference time: 0.65 s
</code></pre>
<p>If I reboot the machine, it is also slowed down, but not as much as when I create a new EC2:</p>
<pre><code># torch import time: 3.11 s
# pyannote import time: 4.65 s
# model loading time: 2.84 s
# model inference time: 1.41 s
</code></pre>
<p>I wonder what could be causing that and if there is a way to speed that up? Is that specific to AWS?</p>
<p>I am on Ubuntu 22.04. Torch version: 2.1.2+cu121, Python 3.10.12. Torch installed using <code>pip3 install torch torchvision torchaudio</code>.</p>
|
<python><amazon-ec2><pytorch>
|
2023-12-20 11:59:45
| 0
| 659
|
Ali Hassaine
|
77,691,074
| 9,974,205
|
Clustering data using scipy and a distance matriz in Python
|
<p>I am working in Python. I am using a binary dataframe in which I have a set of values of 0 and 1 for different users at different times.</p>
<p>I can perform hierarchical clustering directly from the dataframe as</p>
<pre><code> metodo='average'
clusters = linkage(user_df, method=metodo,metric='hamming')
# Create a dendrogram
plt.figure(figsize=(10, 7))
dendrogram(clusters, labels=user_df.index, leaf_rotation=90)
plt.title('Hierarchical Clustering Dendrogram')
plt.xlabel('User')
plt.ylabel('Distance')
# Save the figure
plt.savefig(f'dendrogram_{metodo}_entero.png')
plt.show()
</code></pre>
<p>However, I want to separate the calculation of the distance matrix and the clustering. To do that, I have calculated the distance matrix and I have sent it as an argument to the clustering.</p>
<pre><code>dist_matrix = pdist(user_df.values, metric='hamming')
# Convert the distance matrix to a square form
dist_matrix_square = squareform(dist_matrix)
# Create a DataFrame from the distance matrix
dist_df = pd.DataFrame(dist_matrix_square, index=user_df.index, columns=user_df.index)
clusters = linkage(dist_df, method=metodo)
</code></pre>
<p>Unfortunately, the results that I obtain are different with both methodologies. As far as I know, the first code is the correct one.</p>
<p>So I don't know if I can calculate the distance matrix and then use it somehow as an argument for clustering.</p>
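As a sketch of the usual contract: linkage accepts either the raw observations (together with a metric) or the condensed 1-D distance vector from pdist, but not the square matrix, which it would treat as raw observations. Both valid forms give identical results:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

# Toy binary data standing in for user_df.values
X = np.random.default_rng(0).integers(0, 2, size=(6, 10)).astype(float)

# Option 1: let linkage compute Hamming distances itself
direct = linkage(X, method='average', metric='hamming')

# Option 2: precompute distances, but pass the *condensed* 1-D vector,
# not the squareform matrix/DataFrame (which would be read as raw data)
condensed = pdist(X, metric='hamming')
via_distances = linkage(condensed, method='average')

assert np.allclose(direct, via_distances)
```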
|
<python><dataframe><matrix><scipy><hierarchical-clustering>
|
2023-12-20 11:06:00
| 2
| 503
|
slow_learner
|
77,691,019
| 10,143,378
|
Python Dataframe : Std using rolling period
|
<p>I have a Dataframe with a Date column (daily, from 2000 to 2023) and a Value column.
What I need is to extract, for each day of the year, the mean and std across all years. The particularity is that each day must be treated as a centered rolling window of 3 days: if I want the global mean for every 5th of Jan, I actually need the mean over every 4th, 5th and 6th of Jan. Because of that, a simple groupby(Day, Month) cannot be used, since the 5th of Jan is also needed in the computation for the 4th and the 6th.</p>
<p>Initially I just looped over the 365 days, computed the day before and after, filtered the dataframe, and computed the mean/std. Result: the computation takes way too long.</p>
<p>For the mean I found a solution quite easily:</p>
<pre><code>df['mean'] = df['Value'].rolling(window=3, min_periods=1, center=True).sum().reset_index(level=0, drop=True)
df['count'] = df['Value'].rolling(window=3, min_periods=1, center=True).count().reset_index(level=0, drop=True)
df_grouped = df.groupby(['Month_Day']).agg({'mean': 'sum', 'count': 'sum'}).reset_index()
df_grouped['avg'] = df_grouped['mean'] / df_grouped['count']
df_grouped = df_grouped.drop(['mean', 'count'], axis=1)
df= pd.merge(test, df_grouped, on=['ADM3_CODE', 'Month_Day'], how='left')
</code></pre>
<p>I first compute rolling sums and counts over the 3-day window, and then I can do the group by (Day-Month) and properly recompute the mean.</p>
<p>Now the issue is the std: to do the same, I need to subtract the mean previously computed for the 5th of Jan from the values of the 4th and 6th as well. So within the rolling window it is not just a sum: I need to subtract the mean corresponding to the center of the window, square the differences, and only then sum it all.</p>
<p>a quick example could be :</p>
<pre><code>Date,Value,Avg
2020-01-01,5,5
2020-01-02,6,4
2020-01-03,7,6
2020-01-04,8,3
</code></pre>
<p>for 2020-01-01 :</p>
<pre><code> (5-5)**2 + (6-5)**2 = 1 (here the 5 is from Avg for 2020-01-01)
</code></pre>
<p>for 2020-01-02</p>
<pre><code> (5-4)**2 + (6-4)**2 + (7-4)**2 = 14 (here the 4 is from Avg for 2020-01-02)
</code></pre>
<p>Once the computation explained above is done, I'll be able to do the group by just like for the mean.</p>
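<p>A sketch of how the same trick could extend to the std, using the identity Var(X) = E[X^2] - (E[X])^2, so that no per-center mean needs to be subtracted inside the window: carry a rolling sum of squares alongside the rolling sum and count, aggregate all three in the group by, and derive the std at the end. The toy data and column names below are hypothetical, and this gives the population std (ddof=0):</p>

```python
import numpy as np
import pandas as pd

# Toy stand-in for the real 2000-2023 daily series
df = pd.DataFrame({
    "Month_Day": ["01-01", "01-02", "01-03", "01-04"],
    "Value": [5.0, 6.0, 7.0, 8.0],
})

roll = df["Value"].rolling(window=3, min_periods=1, center=True)
df["sum"] = roll.sum()
df["sumsq"] = (df["Value"] ** 2).rolling(window=3, min_periods=1, center=True).sum()
df["count"] = roll.count()

# Pool the three sums per calendar day, then derive mean and std
g = df.groupby("Month_Day")[["sum", "sumsq", "count"]].sum()
g["avg"] = g["sum"] / g["count"]
g["std"] = np.sqrt(g["sumsq"] / g["count"] - g["avg"] ** 2)
```

<p>On this toy frame each Month_Day occurs once, so each day's std should match a direct np.std over its centered window; on the real data the group by pools the windows from every year.</p>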
|
<python><pandas><math>
|
2023-12-20 10:57:08
| 1
| 576
|
kilag
|
77,690,729
| 22,912,974
|
Django built in Logout view `Method Not Allowed (GET): /users/logout/`
|
<pre><code>Method Not Allowed (GET): /users/logout/
Method Not Allowed: /users/logout/
[10/Dec/2023 12:46:21] "GET /users/logout/ HTTP/1.1" 405 0
</code></pre>
<p>This is happening when I went to url <a href="http://127.0.0.1:8000/users/logout/" rel="noreferrer">http://127.0.0.1:8000/users/logout/</a></p>
<p><code>urls.py:</code></p>
<pre><code>from django.contrib.auth import views as auth_views
urlpatterns = [
...other urls...
path('users/logout/', auth_views.LogoutView.as_view(), name='logout'),
]
</code></pre>
<p>I am expecting the user to log out.</p>
|
<python><python-3.x><django><django-forms><django-authentication>
|
2023-12-20 10:12:19
| 6
| 1,804
|
christopher johnson mccandless
|
77,690,691
| 11,594,202
|
Pdf2text not working in Azure function app
|
<p>I build a script using textract, which reads the content of pdf files. Which contains the following function:</p>
<pre><code>import textract
import tempfile
def read_file(bytes):
with tempfile.NamedTemporaryFile('wb', delete=True) as temp:
temp.write(bytes)
temp.flush()
context = textract.process(temp.name, encoding='utf-8',extension=".pdf")
return context.decode('utf-8')
</code></pre>
<p>This script works locally, but when deployed on a Function App it does not. This is the error message it returns:</p>
<pre><code>pdf2txt.py /tmp/tmpe3yo9gax` failed because the executable
`pdf2txt.py` is not installed on your system. Please make
sure the appropriate dependencies are installed before using
textract:
http://textract.readthedocs.org/en/latest/installation.html
</code></pre>
<p>Both textract and pdf2text are in the requirements.txt of the function app, so it should be installed on deployment. Anyone has an idea why this does not work? It seems like the library pdf2text refuses to install via pip on the function app.</p>
|
<python><azure><azure-functions><text-extraction><pdf-reader>
|
2023-12-20 10:05:43
| 1
| 920
|
Jeroen Vermunt
|
77,690,680
| 1,499,280
|
Websocket connection to remote server gives error "SSL alert number 40"
|
<p>Please forgive me if this question seems long. I am in the process of developing an IoT device which talks to a website over websockets. My IoT device supports TCP communication using sockets right now.
Before moving to websockets on the IoT device, I thought of trying websockets on my Raspberry Pi 4. So I did a simple setup where I have a websocket server running on render.com, whose link I have in the following format:</p>
<pre><code><wss://web-socket-xxx.onrender.com>
</code></pre>
<p>I used following python script on raspberry pi to connect to webserver:</p>
<pre class="lang-py prettyprint-override"><code>import websockets
import asyncio
async def ws_client():
print("WebSocket: Client Connection Initiated.")
url = "wss://web-socket-9wz7.onrender.com"
# Connect to the server
async with websockets.connect(url) as ws:
print(ws.remote_address)
print(ws.path)
print("request headers")
print(ws.request_headers)
print("response headers")
print(ws.response_headers)
print(ws.subprotocol)
name = input("Your Name (type 'exit' to quit): ")
if name == 'exit':
exit()
age = input("Your Age: ")
# Send values to the server
await ws.send(f"{name}")
await ws.send(f"{age}")
# Stay alive forever, listen to incoming msgs
while True:
msg = await ws.recv()
print(msg)
# Start the connection
asyncio.run(ws_client())
</code></pre>
<p>Above script works perfectly fine. Websocket communication happens without any problem.</p>
<p>Since I can't use Python code on my IoT device, I started looking into open-source C implementations of websockets that I could compile on the Raspberry Pi. I chose the following GitHub repo for this purpose:</p>
<p><a href="https://github.com/jeremyhahn/cwebsocket" rel="nofollow noreferrer">https://github.com/jeremyhahn/cwebsocket</a></p>
<p>I compiled the websocket client from the above repo on the Raspberry Pi and tried connecting to the websocket server. I faced the following error:</p>
<pre><code>cwebsocket: starting cwebsocket client
cwebsocket: cwebsocket_client_init: stack limit min=8388608, max=-1
cwebsocket: cwebsocket_client_connect: hostname=web-socket-xxxx.onrender.com, port=443, resource=/, querystring=, secure=1
cwebsocket: cwebsocket_client_connect: using secure (SSL) connection
4159651904:error:14094410:SSL routines:ssl3_read_bytes:sslv3 alert handshake failure:../ssl/record/rec_layer_s3.c:1562:SSL alert number 40
cwebsocket: exiting cwebsocket
</code></pre>
<p>I checked the same websocket client against a local websocket server and it works fine, but every time I try to connect to the remote websocket server on render.com I face the same error.
Could anyone please point out what is going wrong with the remote websocket server?</p>
|
<python><ssl><websocket>
|
2023-12-20 10:04:28
| 1
| 393
|
Ravi
|
77,690,640
| 1,652,219
|
Pandas.to_datetime when dates are 9999-01-01
|
<p>I have read an SQL table into pandas, and now I want to convert the dates from strings into the pandas datetime type. The problem is, however, that SQL has a maximum date of "9999-12-31 23:59:59.9999", while Pandas has a maximum date somewhere in 2262.</p>
<p>So I could do the following, which is extremely slow:</p>
<pre><code>def safe_convert(date):
try:
return pd.to_datetime(date)
except OutOfBoundsDatetime:
return pd.Timestamp('2262-04-11')
df['start_date'] = df['start_date'].apply(safe_convert)
</code></pre>
<p>A faster alternative is the following, which however results in pd.NaTs:</p>
<pre><code>df['start_date'] = pd.to_datetime(df['start_date'], errors='coerce')
</code></pre>
<p>Is there not a way to achieve the first, just in a performant way?</p>
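<p>One hedged alternative: keep the fast vectorized <code>errors='coerce'</code> path and then replace only the resulting NaT values, which avoids the per-row try/except entirely. Caveat: this also clamps genuinely malformed strings to the sentinel date, so pre-validate if that matters.</p>

```python
import pandas as pd

s = pd.Series(["2020-01-15", "9999-12-31 23:59:59"])
converted = pd.to_datetime(s, errors="coerce")            # out-of-bounds -> NaT, vectorized
converted = converted.fillna(pd.Timestamp("2262-04-11"))  # clamp only the NaT slots
```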
|
<python><sql><pandas><datetime>
|
2023-12-20 09:58:02
| 1
| 3,944
|
Esben Eickhardt
|
77,690,451
| 14,259,321
|
pypy3 performs better on W11
|
<p>I have ryzen 9 7950x3d and two system in dual boot:</p>
<ul>
<li>Ubuntu 22.04.3 LTS</li>
<li>Windows 11</li>
</ul>
<p>I have a task in Python that is CPU intensive (multi-threaded).</p>
<p>When I run it with pypy3 it takes approximately:</p>
<ul>
<li>Ubuntu: 9387 seconds.</li>
<li>Windows 11: 8275 seconds.</li>
</ul>
<p>Ubuntu runs more than 15 minutes longer.</p>
<p>Why? Where could the issue be?
Thanks for the help.</p>
|
<python><windows><performance><ubuntu><pypy>
|
2023-12-20 09:27:53
| 1
| 661
|
Luboš Hájek
|
77,690,364
| 15,222,211
|
How to return plain text in FastAPI
|
<p>In this example, the entrypoint <a href="http://127.0.0.1:8000/" rel="noreferrer">http://127.0.0.1:8000/</a> returns formatted text:</p>
<p><code>"Hello \"World\"!"</code></p>
<p>The quotes are escaped with a backslash, and quotes are added at both the beginning and the end. How do I return unformatted text, identical to my string <code>Hello "World"!</code>?</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
from fastapi import FastAPI
app = FastAPI()
@app.get("/",)
def read_root():
return 'Hello "World"!'
uvicorn.run(app)
</code></pre>
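<p>The escaping happens because FastAPI JSON-encodes bare return values; the stdlib reproduces the exact output seen:</p>

```python
import json

# FastAPI serializes a bare return value to JSON, which is where the
# surrounding quotes and backslash escapes come from:
encoded = json.dumps('Hello "World"!')
```

<p>With FastAPI itself, <code>from fastapi.responses import PlainTextResponse</code> and <code>return PlainTextResponse('Hello "World"!')</code>, or setting <code>response_class=PlainTextResponse</code> on the route decorator, skips the JSON encoding.</p>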
|
<python><fastapi>
|
2023-12-20 09:12:58
| 2
| 814
|
pyjedy
|
77,690,231
| 3,650,477
|
How to stream the output of an AgentExecutor in LangChain to my final application
|
<p>I have an instance of an <a href="https://python.langchain.com/docs/modules/agents/#agentexecutor" rel="nofollow noreferrer"><code>AgentExecutor</code></a> in LangChain. The whole chain is based on <a href="https://python.langchain.com/docs/expression_language/" rel="nofollow noreferrer">LCEL</a>. Therefore, I'd assume that using the <code>stream</code> method would produce streamed output out of the box, but this is not the case.</p>
<p>For instance, this code</p>
<pre class="lang-py prettyprint-override"><code>for token in my_agent.stream({"input": query, "chat_history": chat_history}):
print(token, end="", flush=True)
</code></pre>
<p>writes out 3 elements, basically the steps taken by the <code>AgentExecutor</code>. The last one is the final answer, but given completely without any stream. This is NOT what I need, I need the output of <code>stream</code> to be a generator I can yield tokes from. I have found some <a href="https://python.langchain.com/docs/modules/agents/how_to/streaming_stdout_final_only" rel="nofollow noreferrer">spare documentation</a> about streaming output, but it has to do with using a callback to stream to the standard output.</p>
<p>What am I misunderstanding here?</p>
|
<python><langchain>
|
2023-12-20 08:45:45
| 0
| 2,729
|
Pythonist
|
77,689,970
| 5,559,342
|
Python not working on a bash script but working on the terminal. How to solve it?
|
<p>I am trying to run a <strong>Python</strong> program through a <strong>Bash</strong> script. When I execute the Python file from the shell it works with no problem, but when I try to execute the script, the <code>python</code> command is not recognized.</p>
<p>I have a fresh Ubuntu 23.10 installation and I added, on top of the <code>~/.bashrc</code> file, the following line:</p>
<pre><code>alias python=python3
</code></pre>
<p>How to solve this problem?</p>
<h3>Toy example</h3>
<p>For sake of simplicity I made a toy example that displays the same behavior.</p>
<p>I have an <code>hello-world.py</code> file, whose content is simply</p>
<pre><code>print("Hello World!")
</code></pre>
<p>If I run it with the shell (<code>$ python hello-world.py</code>), it works smoothly:</p>
<blockquote>
<p>Hello World!</p>
</blockquote>
<p>Now I created a script <code>start.sh</code> whose content is:</p>
<pre><code>#!/bin/bash
python hello-world.py
</code></pre>
<p>But if I execute it <code>$ ./start.sh</code> I get the following error:</p>
<blockquote>
<p>./start.sh: line 3: python: command not found</p>
</blockquote>
|
<python><bash><ubuntu-23.10>
|
2023-12-20 07:50:59
| 3
| 5,075
|
Robb1
|
77,689,948
| 7,884,793
|
How to find the value in exact pattern and extract it into dataframe?
|
<p>I have some text files which were sent daily. They have the same pattern (tag + value, ending with quote notation), whose values change in every file. You can see it in the images below:</p>
<p><a href="https://i.sstatic.net/ooKdM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ooKdM.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/FTrmi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FTrmi.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/AK8Cw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AK8Cw.png" alt="enter image description here" /></a></p>
<p>My task is to extract some tags (EQD+CN and RFF+BM) with their values and convert them into a DataFrame with a format like this:</p>
<pre><code> EQD_CN RFF_BM
0 3+5 MEDUD6352618
1 3+5 MEDUAE709954
2 ++4 null
</code></pre>
<p>Note: in the 3rd file image the RFF+BM (this indicates the id) is missing, so it returns null in the DataFrame.</p>
<p>Can anyone please help me do this? I'm quite new to extraction tasks in Python. Many thanks.</p>
<p>Sample Data: <a href="https://drive.google.com/drive/folders/1mQUgbxql4qO1qtgdZkVgP3NysP9NtAVp?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/drive/folders/1mQUgbxql4qO1qtgdZkVgP3NysP9NtAVp?usp=sharing</a></p>
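<p>Since the real files are behind the link above, the segment layout in this sketch is guessed from the screenshots: EDIFACT-style segments terminated by a quote character. The sample string and container numbers below are made up.</p>

```python
# Hypothetical sample mimicking the screenshots
SAMPLE = (
    "EQD+CN+MSDU1234567+2200+3+5'"
    "RFF+BM:MEDUD6352618'"
    "EQD+CN+TCLU7654321+2200+++4'"
)

def extract_records(text):
    """Pair each EQD+CN segment with the first RFF+BM segment after it."""
    records = []
    for seg in text.split("'"):
        seg = seg.strip()
        if seg.startswith("EQD+CN"):
            # keep everything after the 4th '+' (e.g. '3+5' or '++4')
            records.append({"EQD_CN": seg.split("+", 4)[-1], "RFF_BM": None})
        elif seg.startswith("RFF+BM") and records and records[-1]["RFF_BM"] is None:
            records[-1]["RFF_BM"] = seg.split(":", 1)[1]
    return records

records = extract_records(SAMPLE)
```

<p><code>pd.DataFrame(records)</code> then gives the two-column frame, with <code>None</code> where RFF+BM is missing.</p>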
|
<python><dataframe><python-re>
|
2023-12-20 07:48:13
| 0
| 390
|
Tung Nguyen
|
77,689,912
| 1,089,412
|
Python unit test assert fails by trying compare mock to string
|
<p>I'm trying to learn how to do unit tests with pytest and mocks. I have this very simple use case:</p>
<pre><code>from simple_salesforce import Salesforce
from unittest.mock import Mock
class SFClient:
def __init__(self, sf_client: Salesforce):
self._simple_sf_client = sf_client
def bulk2(self, query: str, path: str, max_records: int) -> list[dict]:
return self._simple_sf_client.bulk2.Account.download(
query=query, path=path, max_records=max_records
)
def test_client_bulk2():
mock_sf = Mock()
config = {'bulk2.return_value.Account.return_value.download.return_value': 'test'}
mock_sf.configure_mock(**config)
client = SFClient(sf_client=mock_sf)
assert client.bulk2('query', 'path', 1) == 'test'
assert False
</code></pre>
<p>As you can see, I'm trying to mock the chained call to <code>self._simple_sf_client.bulk2.Account.download</code>, but <code>assert client.bulk2('query', 'path', 1) == 'test'</code> always fails, comparing a mock object with a string.</p>
<pre><code>$ poetry run pytest
F [100%]
======================================================================================================================================================= FAILURES =======================================================================================================================================================
__________________________________________________________________________________________________________________________________________________ test_client_bulk2 ___________________________________________________________________________________________________________________________________________________
def test_client_bulk2():
mock_sf = Mock()
config = {'bulk2.return_value.Account.return_value.download.return_value': 'test'}
mock_sf.configure_mock(**config)
client = SFClient(sf_client=mock_sf)
> assert client.bulk2('query', 'path', 1) == 'test'
E AssertionError: assert <Mock name='mock.bulk2.Account.download()' id='140589080832208'> == 'test'
E + where <Mock name='mock.bulk2.Account.download()' id='140589080832208'> = <bound method SFClient.bulk2 of <tests.test_salesforce.SFClient object at 0x7fdd73acdd90>>('query', 'path', 1)
E + where <bound method SFClient.bulk2 of <tests.test_salesforce.SFClient object at 0x7fdd73acdd90>> = <tests.test_salesforce.SFClient object at 0x7fdd73acdd90>.bulk2
tests/test_salesforce.py:27: AssertionError
=================================================================================================================================================== warnings summary ===================================================================================================================================================
../../.cache/pypoetry/virtualenvs/salesforce-archivist-n3oqdCBe-py3.11/lib/python3.11/site-packages/zeep/utils.py:1
/home/piotrek/.cache/pypoetry/virtualenvs/salesforce-archivist-n3oqdCBe-py3.11/lib/python3.11/site-packages/zeep/utils.py:1: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13
import cgi
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================================================================================================================================== short test summary info ================================================================================================================================================
FAILED tests/test_salesforce.py::test_client_bulk2 - AssertionError: assert <Mock name='mock.bulk2.Account.download()' id='140589080832208'> == 'test'
1 failed, 1 warning in 0.19s
</code></pre>
<p>I must be missing something obvious here. Why is it not returning the value I set when configuring a mock? Thanks</p>
<p><em>// EDIT</em></p>
<p>As suggested by one of the answers, I also tried <code>assert client.bulk2('query', 'path', 1).return_value == 'test'</code>, but it did not work either:</p>
<pre><code>$ poetry run pytest
F [100%]
======================================================================================================================================================= FAILURES =======================================================================================================================================================
__________________________________________________________________________________________________________________________________________________ test_client_bulk2 ___________________________________________________________________________________________________________________________________________________
def test_client_bulk2():
mock_sf = Mock()
config = {'bulk2.return_value.Account.return_value.download.return_value': 'test'}
mock_sf.configure_mock(**config)
client = SFClient(sf_client=mock_sf)
> assert client.bulk2('query', 'path', 1).return_value == 'test'
E AssertionError: assert <Mock name='mock.bulk2.Account.download()()' id='140508290106704'> == 'test'
E + where <Mock name='mock.bulk2.Account.download()()' id='140508290106704'> = <Mock name='mock.bulk2.Account.download()' id='140508282777104'>.return_value
E + where <Mock name='mock.bulk2.Account.download()' id='140508282777104'> = <bound method SFClient.bulk2 of <tests.test_salesforce.SFClient object at 0x7fcaa2b95150>>('query', 'path', 1)
E + where <bound method SFClient.bulk2 of <tests.test_salesforce.SFClient object at 0x7fcaa2b95150>> = <tests.test_salesforce.SFClient object at 0x7fcaa2b95150>.bulk2
tests/test_salesforce.py:27: AssertionError
=================================================================================================================================================== warnings summary ===================================================================================================================================================
../../.cache/pypoetry/virtualenvs/salesforce-archivist-n3oqdCBe-py3.11/lib/python3.11/site-packages/zeep/utils.py:1
/home/piotrek/.cache/pypoetry/virtualenvs/salesforce-archivist-n3oqdCBe-py3.11/lib/python3.11/site-packages/zeep/utils.py:1: DeprecationWarning: 'cgi' is deprecated and slated for removal in Python 3.13
import cgi
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=============================================================================================================================================== short test summary info ================================================================================================================================================
FAILED tests/test_salesforce.py::test_client_bulk2 - AssertionError: assert <Mock name='mock.bulk2.Account.download()()' id='140508290106704'> == 'test'
1 failed, 1 warning in 0.19s
</code></pre>
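<p>One hedged reading of both failures: <code>.return_value</code> configures the result of <em>calling</em> something, but in <code>self._simple_sf_client.bulk2.Account.download(...)</code> the names <code>bulk2</code> and <code>Account</code> are only attribute accesses, never calls. Dropping <code>.return_value</code> for the attribute hops and keeping it only on the final <code>download</code> call makes the chain line up:</p>

```python
from unittest.mock import Mock

mock_sf = Mock()
# bulk2 and Account are plain attribute accesses, so no .return_value;
# only download() is actually called.
mock_sf.configure_mock(**{"bulk2.Account.download.return_value": "test"})

result = mock_sf.bulk2.Account.download(query="q", path="p", max_records=1)
```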
|
<python><unit-testing><mocking>
|
2023-12-20 07:40:41
| 1
| 3,306
|
piotrekkr
|
77,689,903
| 2,529,125
|
Python - Reading a SQL script with multiple queries using 'latin1' encoding | Issue with whitespaces and begins with ascii characters
|
<p><strong>Overview</strong>
I'm currently reading a SQL script with multiple queries in Python. Once the script is read into Python as a string, the code establishes a connection to the SQL Server via SQLAlchemy. Each query is split on ";" and executed on the SQL Server.</p>
<p><strong>Problem</strong>
The issue is that once the script is read into Python, there are spaces between each character. I also noticed that the string starts with the characters <code>ÿþ</code>. How do I remove the spaces without tampering with the actual query, and ensure there are no added stray characters?</p>
<p>NB: For the purpose of this question, I will not be adding the connection to the SQL.</p>
<p><strong>Step 1</strong>
To replicate the issue, please create the sql script as using code below:</p>
<pre><code>--############################################################################################
-- Table 1
--############################################################################################
IF OBJECT_ID('[dbo].[testdb].[TEST]') is not null DROP TABLE [dbo].[testdb].[TEST]
SELECT TOP (10) * INTO [dbo].[testdb].[TEST]
FROM [dbo].[testdb].[ACT_SOURCE]
;
--############################################################################################
-- Table 2
--###########################################################################################
IF OBJECT_ID('[dbo].[testdb].[TEST2]') is not null DROP TABLE [dbo].[testdb].[TEST2]
SELECT TOP (20) * INTO [dbo].[testdb].[TEST2]
FROM [dbo].[testdb].[ACT_SOURCE]
</code></pre>
<p>[dbo].[testdb].[ACT_SOURCE] is an existing table. Please keep the comments as well as the semicolon. You can save the script as test.sql.</p>
<p><strong>Step 2</strong>
Read in the sql script</p>
<pre><code>import os
str_sql_file= os.path.join(CONFIG.STR_ROOT_FOLDER, 'sql', CONFIG.STR_SQL_FILE_STEP_ONE)
with open(str_sql_file, 'r', encoding='latin1') as file_sql_tmp:
str_sql = file_sql_tmp.read()
</code></pre>
<p><strong>Step 3</strong>
Read each sql query</p>
<pre><code>for str_sql_single in str_sql.split(';'):
print('executing sql: \n {}'.format(str_sql_single))
</code></pre>
<p>The result looks something like this :</p>
<p><a href="https://i.sstatic.net/VVsO9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VVsO9.png" alt="result for sql str in python" /></a></p>
<p><strong>conda env setup</strong></p>
<pre><code>python 3.11.5
pandas 2.1.4
holidays 0.29
scikit-learn 1.3.0
boto3 1.29.1
-- if using sqlalchemy
sqlalchemy
pyodbc 4.0.39
</code></pre>
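<p>The <code>ÿþ</code> prefix is the UTF-16 little-endian byte order mark (bytes FF FE) decoded as latin1, and the "spaces" are the NUL bytes of UTF-16 text, so reading the file with <code>encoding='utf-16'</code> should fix both symptoms. A self-contained reproduction:</p>

```python
import os
import tempfile

sql = "SELECT TOP (10) * INTO [dbo].[testdb].[TEST] FROM [dbo].[testdb].[ACT_SOURCE]"

fd, path = tempfile.mkstemp(suffix=".sql")
os.close(fd)
# Simulate an editor (e.g. SSMS) saving the .sql file as UTF-16 LE with a BOM
with open(path, "w", encoding="utf-16-le") as f:
    f.write("\ufeff" + sql)

with open(path, "r", encoding="latin1") as f:
    garbled = f.read()           # starts with 'ÿþ', NULs between characters

with open(path, "r", encoding="utf-16") as f:
    clean = f.read()             # BOM consumed, text decoded correctly
os.remove(path)
```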
|
<python><sql><string>
|
2023-12-20 07:38:46
| 1
| 511
|
Sade
|
77,689,843
| 7,709,727
|
Why does multiprocessing.Queue stuck when parent process of fork() puts to the queue before fork?
|
<p>I recently wrote this program:</p>
<pre class="lang-py prettyprint-override"><code>import os, time
from multiprocessing import Queue
q = Queue(1000)
q.put(1) # Line 6
result = os.fork()
if result == 0:
while True:
q.put(2)
time.sleep(1)
elif result > 0:
while True:
print(q.get())
else:
raise Exception('Fork failed: %d' % result)
</code></pre>
<p>The program uses <code>os.fork()</code> to create a child process. The child process constantly sends <code>2</code> to the queue, and the parent process constantly reads from the queue.</p>
<p>I am expecting to see "1 2 2 2 2 2 ..." as the output (here I am collapsing the lines for concision), but actually I only see "1" and the program gets stuck. Is it a deadlock? Did I do something wrong, or is it a bug in Python's <code>multiprocessing.Queue</code>?</p>
<p>If I remove line 6, the output is "2 2 2 ...", which is expected.</p>
<p>Note: I see some related questions on stackoverflow mention calling <code>Queue.join_thread()</code> before exit. However, my program doesn't have this problem because none of the processes exit.</p>
<p>If it is relevant, I am on Linux with Python 3.11.2. I also tried Python 3.9.18.</p>
|
<python><multiprocessing><fork>
|
2023-12-20 07:23:31
| 1
| 1,570
|
Eric Stdlib
|
77,689,414
| 1,527,244
|
opencv: Exact polygon of contour
|
<p>Based on the following binary image, I'm trying to get a polygon of the outer contour. (This is already processed and looks promising).</p>
<p><a href="https://i.sstatic.net/BU1Wb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BU1Wb.png" alt="enter image description here" /></a></p>
<p>Now, I want to get an exact polygon of the outer contours / border, but using the <code>polylines</code> / <code>fillPoly</code> methods yields a very imprecise polygon:</p>
<p><a href="https://i.sstatic.net/MlkC0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MlkC0.png" alt="enter image description here" /></a></p>
<p>(Here is the polygon / original overlapped):</p>
<p><a href="https://i.sstatic.net/tfr91.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tfr91.png" alt="enter image description here" /></a></p>
<p>How can I get a more precise polygon? See especially the bottom left corner, where the border and the polygon don't match at all.</p>
<p>Code so far (based mainly on <a href="https://stackoverflow.com/questions/74092934/opencv2-findcontours-can-i-get-just-one-big-external-contour-the-image-has-se">this answer</a>):</p>
<pre class="lang-py prettyprint-override"><code>import cv2
import numpy as np
img_color = cv2.imread('carved.png')
img_gray = cv2.cvtColor(img_color, cv2.COLOR_BGR2GRAY)
cv2.imwrite('ex1_gray.png', img_gray)
img_binary = cv2.adaptiveThreshold(img_gray, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,cv2.THRESH_BINARY_INV, 11, 2)
cv2.imwrite('ex1_binary.png', img_binary)
points = np.column_stack(np.where(img_binary.transpose() > 0))
# draw white filled hull polygon on black background
mask = np.zeros_like(img_binary)
cv2.fillPoly(mask, [points], 255)
cv2.imwrite('ex1_zeroslike.png', mask)
# get the largest contour from result2
contours = cv2.findContours(img_binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
contours = contours[0] if len(contours) == 2 else contours[1]
big_contour = max(contours, key=cv2.contourArea)
# draw contour on copy of input
contr = cv2.cvtColor(img_binary.copy(), cv2.COLOR_GRAY2BGR)
contr = cv2.drawContours(contr, [big_contour], 0, (0,0,255), 2)
cv2.imwrite('ex1_convex_hull_contour.png', contr)
</code></pre>
<p>I checked the <code>fillPoly</code> method but it doesn't seem that there is a parameter for this. Changing <code>lineType</code> had no effect.</p>
|
<python><opencv><image-processing><computer-vision>
|
2023-12-20 05:37:14
| 0
| 2,290
|
Danyel
|
77,689,378
| 9,983,652
|
how to calculate mean value of each column within range of quantile?
|
<p>I'd like to calculate the mean value of each column, but only considering values within a quantile range, like 20%-80%.</p>
<p>Here is what I have done so far, but it is not complete:</p>
<pre><code>df=pd.DataFrame({"A":[1,1,20,2,2,3,50,7,8,15,20,35,50,70],"B":[10,100,20,20,200,30,50,70,80,150,200,350,500,700]})
df
A B
0 1 10
1 1 100
2 20 20
3 2 20
4 2 200
5 3 30
6 50 50
7 7 70
8 8 80
9 15 150
10 20 200
11 35 350
12 50 500
13 70 70
then find q20 and q80 for each column using np.quantile()
q20=np.quantile(df,0.2,axis=0)
q20
array([ 2., 26.])
q80=np.quantile(df,0.8,axis=0)
q80
array([ 41., 260.])
</code></pre>
<p>Now how do I filter the values between q20 and q80 for each column?</p>
<p>When I try the following, I get an error:</p>
<pre><code>mask=(a>q20)&(a<q80)
TypeError: Cannot compare a Categorical for op __gt__ with type <class 'numpy.ndarray'>.
If you want to compare values, use 'np.asarray(cat) <op> other'.
</code></pre>
<p>Thanks for your help</p>
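<p>One way around the error: compute the quantiles with <code>DataFrame.quantile</code>, which returns a Series indexed by column name, so the comparison aligns column-wise without dropping into NumPy at all:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "A": [1, 1, 20, 2, 2, 3, 50, 7, 8, 15, 20, 35, 50, 70],
    "B": [10, 100, 20, 20, 200, 30, 50, 70, 80, 150, 200, 350, 500, 700],
})

q20 = df.quantile(0.2)          # Series: one quantile per column
q80 = df.quantile(0.8)
mask = (df > q20) & (df < q80)  # DataFrame vs Series compares column-wise
means = df[mask].mean()         # masked-out cells become NaN and are skipped
```

<p>If the original error mentions Categoricals, the real columns may not be numeric; converting with <code>df.apply(pd.to_numeric)</code> or selecting the numeric columns first may be needed.</p>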
|
<python><pandas>
|
2023-12-20 05:26:15
| 1
| 4,338
|
roudan
|
77,689,317
| 1,874,170
|
Dynamic linking in Python CFFI?
|
<p>I'm trying to use CFFI for dynamic linking of dependent C libraries in Python; am I mis-understanding it?</p>
<p>In the following extremely simplified example, library <code>foo_b</code> depends on library <code>foo_a</code>. Specifically, it depends on <code>bar</code>, and it exposes its own function <code>baz</code>.</p>
<pre class="lang-py prettyprint-override"><code>from cffi import FFI
from pathlib import Path
Path('foo_a.h').write_text("""\
int bar(int x);
""")
Path('foo_a.c').write_text("""\
#include "foo_a.h"
int bar(int x) {
return x + 69;
}
""")
Path('foo_b.h').write_text("""\
int baz(int x);
""")
Path('foo_b.c').write_text("""\
#include "foo_a.h"
#include "foo_b.h"
int baz(int x) {
return bar(x * 100);
}
""")
ffi_a = FFI()
ffi_b = FFI()
ffi_a.cdef('int bar(int x);')
ffi_a.set_source('ffi_foo_a', '#include "foo_a.h"', sources=['foo_a.c'])
ffi_a.compile()
ffi_b.cdef('int baz(int x);')
ffi_b.include(ffi_a)
ffi_b.set_source('ffi_foo_b', '#include "foo_b.h"', sources=['foo_b.c'])
ffi_b.compile()
import ffi_foo_a
if ffi_foo_a.lib.bar(1) == 70: print('foo_a OK')
else: raise AssertionError('foo_a ERR')
import ffi_foo_b # Crashes on _this_ line due to undefined symbol "bar", DESPITE the fact that we included ffi_a, which should provide that symbol
if ffi_foo_b.lib.baz(420) == 42069: print('foo_b OK')
else: raise AssertionError('foo_b ERR')
</code></pre>
<p>However, it doesn't compile, instead crashing on the indicated line with the indicated error message.</p>
<p>I don't understand why this example isn't working, considering the following in the CFFI documentation:</p>
<blockquote>
<p>For out-of-line modules, the <code>ffibuilder.include(other_ffibuilder)</code> line should occur in the build script, and the <code>other_ffibuilder</code> argument should be another FFI instance that comes from another build script. When the two build scripts are turned into generated files, say <code>_ffi.so</code> and <code>_other_ffi.so</code>, then importing <code>_ffi.so</code> will internally cause <code>_other_ffi.so</code> to be imported. At that point, the real declarations from <code>_other_ffi.so</code> are combined with the real declarations from <code>_ffi.so</code>.</p>
</blockquote>
<p>If ffibuilder.include() isn't the right way to dynamically link together multiple CFFI-based libraries, what is?</p>
<p>Or if ffibuilder.include() <em>is</em> the right way to dynamically link together multiple CFFI-based libraries, what am I doing wrong?</p>
|
<python><dynamic-linking><python-cffi>
|
2023-12-20 05:07:40
| 1
| 1,117
|
JamesTheAwesomeDude
|
77,689,274
| 788,488
|
How can I add a type hint for a number coming from some unknown numpy array?
|
<p>I'm making a library that contains some methods whose parameters include numbers that may come from numpy arrays. E.g., one of my methods is roughly <code>some_func(my_array, my_array[0, 0])</code>, where the second param is a number of an unknown type (it could be a <code>np.float64</code>, some numpy integer type, whatever). Also, it could be a builtin python number type like <code>float</code>. I want to add a type hint that handles this case, i.e. declare the method with <code>def some_func(array: np.ndarray, value: Foo)</code>, but I'm not sure what to put for <code>Foo</code>. Does anyone have any good ideas? Is there an established pattern here?</p>
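<p>One common pattern, hedged: <code>np.number</code> is the abstract base class of all NumPy scalar number types, so a union with the builtin number types covers values coming from either source. A sketch:</p>

```python
from typing import Union

import numpy as np

# Accepts builtin int/float as well as any NumPy scalar number type
Scalar = Union[int, float, np.number]

def some_func(array: np.ndarray, value: Scalar) -> float:
    return float(value)

out_np = some_func(np.zeros((2, 2)), np.float64(1.5))  # works with a NumPy scalar
out_py = some_func(np.zeros((2, 2)), 3)                # and with a builtin int
```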
|
<python><numpy><python-typing>
|
2023-12-20 04:54:16
| 1
| 2,603
|
meisel
|
77,689,241
| 903,501
|
How can 'Literal' be invoked with square brackets (e.g. Literal["Test1"])?
|
<p>The following code snippet shows a function named <code>Literal</code> being invoked with square brackets:</p>
<pre><code>Literal['r', 'rb', 'w', 'wb']
</code></pre>
<p>The example is from <a href="https://github.com/python/cpython/blob/main/Lib/typing.py#L728" rel="nofollow noreferrer">the CPython GitHub repo</a></p>
<p>Though we can make instances subscriptable with square brackets by defining the <code>__getitem__</code> method in a Python class, that doesn't seem to be the case here.</p>
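<p>Subscription on an instance is in fact dispatched to the instance's <em>class</em>, and in the typing module <code>Literal</code> is an instance of <code>_SpecialForm</code>, whose class defines <code>__getitem__</code>. A toy reproduction of the mechanism, with hypothetical names:</p>

```python
class SpecialForm:
    """Mimics typing._SpecialForm: instances are subscriptable
    because their class defines __getitem__."""
    def __init__(self, name):
        self._name = name

    def __getitem__(self, parameters):
        # x[a, b] passes the tuple (a, b) as a single argument
        if not isinstance(parameters, tuple):
            parameters = (parameters,)
        return f"{self._name}[{', '.join(map(repr, parameters))}]"

Literal = SpecialForm("Literal")
result = Literal['r', 'rb', 'w', 'wb']  # calls SpecialForm.__getitem__
```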
|
<python><python-typing>
|
2023-12-20 04:34:34
| 2
| 978
|
psaw.mora
|
77,689,215
| 1,828,605
|
How to produce waterfall plot using shap from loaded xgboost model
|
<p>I have this xgboost model that I created as a test to save as JSON in R. After I built the model in R, I saved it using <code>xgb.save(model, fname='xgboost_classifer_model.json')</code>.</p>
<p>I then used the <code>xgboost</code> package in Python to load the model like the following:</p>
<pre><code>import xgboost
bst = xgboost.Booster()
bst.load_model('./model/xgboost_classifier_model.json')
input_data = np.array([1,1,1,1],dtype=np.float16).reshape(1,-1)
</code></pre>
<p>I then used the <code>shap</code> package to create an explainer.</p>
<pre><code>import shap
explainer = shap.Explainer(bst)
shap_values = explainer.shap_values(input_data)
shap_values[0]
</code></pre>
<p>Finally, I used the <code>shap_values</code> to plot:</p>
<pre><code>shap.waterfall_plot(shap_values)
plt.title("SHAP Values Waterfall Plot")
plt.xlabel("SHAP Value")
plt.tight_layout()
plt.show()
</code></pre>
<p>I'm getting a <code>The waterfall plot requires an Explanation object as the shap_values argument.</code> error.</p>
<p>What am I doing wrong and not understanding here?</p>
<p><strong>Model</strong>
<a href="https://www.dropbox.com/scl/fi/6vn2shyvz1hn7b4majqzh/xgboost_classifier_model.json?rlkey=sha8gb61eox5mqcuudf3t2v5p&dl=0" rel="nofollow noreferrer">https://www.dropbox.com/scl/fi/6vn2shyvz1hn7b4majqzh/xgboost_classifier_model.json?rlkey=sha8gb61eox5mqcuudf3t2v5p&dl=0</a></p>
|
<python><xgboost><shap>
|
2023-12-20 04:24:44
| 1
| 1,735
|
user1828605
|
77,688,887
| 1,732,694
|
JSON Pickle serialize object list
|
<p>I want to serialize a list of objects using <code>jsonpickle</code>. Each of these objects contains a list of objects within it:</p>
<pre class="lang-py prettyprint-override"><code>class ImageManipulationConfiguration:
region_list = []
def to_json(self):
return jsonpickle.encode(self.region_list, indent=4,
separators=(',', ': '))
</code></pre>
<p>here, <code>region_list</code> contains objects of this class:</p>
<pre class="lang-py prettyprint-override"><code>class ImageManipulationRegion:
image_manipulation_element_list = []
</code></pre>
<p>Inside <code>image_manipulation_element_list</code>, there are objects of class inheriting this class:</p>
<pre class="lang-py prettyprint-override"><code>class ImageManipulationElement:
</code></pre>
<p>When I call my <code>to_json</code> function, only the top-level items in the <code>region_list</code> are serialized; the sub-objects within the <code>image_manipulation_element_list</code> lists are not included. Is there a way to recursively include everything using <code>jsonpickle</code>?</p>
|
<python><json><serialization><jsonpickle>
|
2023-12-20 02:07:19
| 3
| 546
|
ESD
|
77,688,883
| 10,844,607
|
How to connect to Google Cloud PostgreSQL database from Cloud Functions?
|
<p>I was following the <a href="https://codelabs.developers.google.com/codelabs/connecting-to-cloud-sql-with-cloud-functions" rel="nofollow noreferrer">Connecting to Cloud SQL with Cloud Functions</a> tutorial but my <strong>Cloud Function won't connect to the PostgreSQL database</strong>.</p>
<p>Here's my function code:</p>
<pre class="lang-py prettyprint-override"><code>import sqlalchemy
connection_name = "redacted-1234a:asia-northeast3:myinstance2"
query_string = dict({"unix_sock": "/cloudsql/{}/.s.PGSQL.5432".format(connection_name)})
def insert(request):
print(f"Started function - query_string: {query_string}")
request_json = request.get_json()
stmt = sqlalchemy.text('insert into {} {} values {}'.format("entries", "(guestName, content)", "('cloud hello', 'Got here!')"))
db = sqlalchemy.create_engine(
sqlalchemy.engine.url.URL(
drivername="postgres+pg8000",
username="postgres",
password=redacted,
database="guestbook",
query=query_string,
),
pool_size=5,
max_overflow=2,
pool_timeout=30,
pool_recycle=1800
)
print("Created engine")
try:
with db.connect() as conn:
print("Connected to engine")
conn.execute(stmt)
except Exception as e:
return 'Error: {}'.format(str(e))
return 'ok'
</code></pre>
<p>When I run it from the "Testing" tab, I get:</p>
<pre><code>2023-12-20 10:58:06.728 JST - Started function - query_string: {'unix_sock': '/cloudsql/nownow-8907a:asia-northeast3:myinstance2/.s.PGSQL.5432'}
2023-12-20 10:58:09.054 JST - Created engine
Error: (pg8000.core.InterfaceError) ('communication error', FileNotFoundError(2, 'No such file or directory'))
(Background on this error at: http://sqlalche.me/e/rvf5)
</code></pre>
<p>Here's what I ran in Cloud Shell when setting up the database:
<div class="snippet" data-lang="js" data-hide="true" data-console="false" data-babel="false">
<div class="snippet-code snippet-currently-hidden">
<pre class="snippet-code-js lang-js prettyprint-override"><code>$ gcloud sql connect myinstance2 --user=postgres
Allowlisting your IP for incoming connection for 5 minutes...done.
Connecting to database with SQL user [postgres].Password: # typed in redacted
psql (16.1 (Debian 16.1-1.pgdg110+1), server 15.4)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
Type "help" for help.
postgres=> CREATE DATABASE guestbook;
\CREATE DATABASE
postgres=> \connect guestbook;
Password: # typed in redacted
psql (16.1 (Debian 16.1-1.pgdg110+1), server 15.4)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, compression: off)
You are now connected to database "guestbook" as user "postgres".
guestbook=> CREATE TABLE entries (guestName VARCHAR(255), content VARCHAR(255), entryID SERIAL PRIMARY KEY);
CREATE TABLE</code></pre>
</div>
</div>
</p>
<p>How can I get this to work? I feel like I've checked everything:</p>
<ul>
<li>The function is run with a service account that has the Cloud SQL Client role</li>
<li>redacted is the password for both postgres itself and the database (I am able to connect using it via Cloud Shell)</li>
<li>The PostgreSQL instance has both Public IP and Private IP with "private path" enabled</li>
<li>The SQL > Connections > Security has Google Cloud services authorization and App Engine authorization "Enabled", and "Allow only SSL connections" and "Require trusted client certificates" set to "Disabled"</li>
</ul>
<p>Another interesting anomaly with the cloud function is I verified 3 times during creation that "Allow unauthenticated invocations" was selected, but after creating the function it went back to "Require authentication". I don't think this is super relevant though, since I'm still able to <em>run</em> the function, I just can't connect to the database.</p>
|
<python><postgresql><google-cloud-platform><google-cloud-functions><google-cloud-sql>
|
2023-12-20 02:04:17
| 2
| 5,694
|
Dr-Bracket
|
77,688,670
| 2,681,662
|
astropy exits non-zero on header update
|
<p>I have an astronomical image in <code>fits</code> format with WCS (World Coordinate System) in its header.</p>
<p>I want to do some morphological operations on the image and understandably I want to correct the WCS.</p>
<p>Say I crop the image. Applying this to the header would be:</p>
<pre><code>from astropy.io import fits
from astropy.io.fits import Header
from astropy.wcs import WCS
header: Header = fits.getheader("sample.fits")
x, y, w, h = 100, 100, 200, 100
wcs = WCS(header)
new_wcs = wcs[y:y + h, x:x + w]
</code></pre>
<p>I can confirm <code>new_wcs</code> is correct by checking the new pixel-to-sky coordinates:</p>
<pre><code>print(wcs.pixel_to_world(2, 2))
print(new_wcs.pixel_to_world(2, 2))
</code></pre>
<p>The only problem here would be that I lost the information in the original header. So I tried to update the old header.</p>
<pre><code>added_header: Header = header.copy()
added_header.extend(new_wcs.to_header(), unique=True, update=True)
added_header_wcs = WCS(added_header)
</code></pre>
<p>The whole code would be:</p>
<pre><code>from astropy.io.fits import Header
from astropy.io import fits
from astropy.wcs import WCS
header: Header = fits.getheader("sample.fits")
x, y, w, h = 100, 100, 200, 100
wcs = WCS(header)
new_wcs = wcs[y:y + h, x:x + w]
added_header: Header = header.copy()
added_header.extend(new_wcs.to_header(), unique=True, update=True)
added_header_wcs = WCS(added_header) # boom
print(wcs.pixel_to_world(2, 2))
print(new_wcs.pixel_to_world(2, 2))
print(added_header_wcs.pixel_to_world(2, 2))
</code></pre>
<p>However at the <code>added_header_wcs = WCS(added_header)</code> line python crashes:</p>
<pre><code>Process finished with exit code -1073741819 (0xC0000005)
</code></pre>
<p>I tried to enable logging for astropy, but I was not able to do so.</p>
<p>Before creating an issue I wanted to ask more knowledgeable people: where did I go wrong?</p>
<p>PS: copying a new header is not the problem, as I tried to modify the original header.</p>
<p>PS: the sample data: <a href="http://www.astropy.org/astropy-data/tutorials/FITS-images/HorseHead.fits" rel="nofollow noreferrer">http://www.astropy.org/astropy-data/tutorials/FITS-images/HorseHead.fits</a></p>
<h2>Edit:</h2>
<p>Executing the same code with a non-conda interpreter threw an exception:</p>
<pre><code>WARNING: FITSFixedWarning: 'celfix' made the change 'Invalid parameter value'. [astropy.wcs.wcs]
Traceback (most recent call last):
File "[PATH]\main.py", line 15, in <module>
print(WCS(new_header).pixel_to_world(2, 2))
^^^^^^^^^^^^^^^
File "[PATH]\local-packages\Python311\site-packages\astropy\wcs\wcs.py", line 600, in __init__
self.wcs.set()
ValueError: ERROR 5 in wcsset() at line 2808 of file cextern\wcslib\C\wcs.c:
Invalid parameter value.
ERROR 4 in linset() at line 737 of file cextern\wcslib\C\lin.c:
Failed to initialize distortion functions.
ERROR 3 in tpdset() at line 1945 of file cextern\wcslib\C\dis.c:
Unrecognized field name for TPD on axis 1: DQ1.DSS.AMD.1.
</code></pre>
|
<python><astropy><fits>
|
2023-12-20 00:34:15
| 0
| 2,629
|
niaei
|
77,688,632
| 13,608,794
|
winreg - SaveKey() returns WinError 1314 and creates empty file
|
<p>I don't know the reason of this behavior, but <strong>while running <code>winreg</code>'s <code>SaveKey</code> function on Python 3.11</strong> an empty file is written while returning WinError 1314 (<em>A required privilege is not held by the client.</em>)</p>
<p>I am running the shell as an administrator (code below) and I do not understand why it fails. Can somebody help me fix this?</p>
<hr />
<pre class="lang-py prettyprint-override"><code>>>> import winreg
>>> import ctypes
>>>
>>> # UAC activated
>>> ctypes.windll.shell32.IsUserAnAdmin()
True
>>>
>>> location = r"C:\Users\Key.reg"
>>>
>>> with winreg.OpenKey(winreg.HKEY_USERS, ".DEFAULT", access = winreg.KEY_ALL_ACCESS) as key:
... winreg.SaveKey(key, location)
...
Traceback (most recent call last):
File "<stdin>", line 2, in <module>
OSError: [WinError 1314] A required privilege is not held by the client.
>>>
>>> open(location).read()
''
>>>
</code></pre>
|
<python><registry><winreg>
|
2023-12-20 00:17:45
| 0
| 303
|
kubinka0505
|
77,688,601
| 4,447,853
|
GPU/RAM out of Memory PyTorch/Transformers
|
<p>Context: I have six 3070 tis (48gb vram) and 8gb-32gb of ram along with 150gb ssd storage on my Ubuntu 20.04 desktop. I am attempting to fine-tune <a href="https://huggingface.co/meta-llama/Llama-2-7b-chat-hf" rel="nofollow noreferrer">meta-llama/Llama-2-7b-chat-hf</a> with a very small dataset as a proof-of-concept. I took the first few entries from <a href="https://www.kaggle.com/datasets/dsxavier/diagnoise-me" rel="nofollow noreferrer">this dataset on kaggle</a>.</p>
<p>When I was initially running my script utilizing DataParallel with 8gb of ram, I would run out of memory and it would fail (no surprise). I added a few sticks to get me up to 32gb and was finally met with this error</p>
<blockquote>
<p>torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate
64.00 MiB. GPU 0 has a total capacty of 7.58 GiB of which 17.88 MiB is free. Including non-PyTorch memory, this process has 7.53 GiB memory
in use. Of the allocated memory 7.37 GiB is allocated by PyTorch, and
5.71 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid
fragmentation. See documentation for Memory Management and
PYTORCH_CUDA_ALLOC_CONF</p>
</blockquote>
<p>Using <code>glances</code> I saw that it was only using my first GPU, rather than spreading the tasks out to all of them. I decided to try my hand at using DistributedDataParallel instead and ended up running out of memory even with 32gb.</p>
<p>I know there are many ways to optimize training/fine-tuning, which is why I came here for help. Would you recommend ZeRO, DP, DDP, PP, TP, or a combination of them such as in case 3 in the <a href="https://huggingface.co/docs/transformers/perf_train_gpu_many" rel="nofollow noreferrer">huggingface documentation</a>? <a href="https://huggingface.co/docs/accelerate/index" rel="nofollow noreferrer">Accelerate</a>? <a href="https://huggingface.co/docs/optimum/index" rel="nofollow noreferrer">Optimum</a> (NVIDIA or others)? <a href="https://huggingface.co/docs/transformers/perf_train_gpu_one" rel="nofollow noreferrer">TrainingArguments</a>?</p>
<p>There are just so many different options and I'd love for someone to point me in the right direction and give an example of some working code I could test on my computer. To recreate environment:</p>
<pre class="lang-bash prettyprint-override"><code>mkdir test && cd test
python3 -m venv venv
source venv/bin/activate
pip install torch transformers
# create file
# download dataset to same directory
</code></pre>
<p>Here is the almost working python code for just DP</p>
<pre class="lang-py prettyprint-override"><code>from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
from torch.utils.data import Dataset, DataLoader
from torch.optim import AdamW
import json
# Load dataset from JSON file
with open('small_en_medical_dialog.json', 'r') as file:
dataset = json.load(file)
# Format the data
formatted_data = []
for item in dataset:
input_text = "Description: " + item["Description"] + " Patient: " + item["Patient"]
target_text = item["Doctor"]
formatted_data.append({"input": input_text, "target": target_text})
# Define the custom Dataset class
class DoctorPatientDataset(Dataset):
def __init__(self, data, tokenizer, max_length=512):
self.tokenizer = tokenizer
self.data = data
self.max_length = max_length
def __len__(self):
return len(self.data)
def __getitem__(self, idx):
item = self.data[idx]
input_encoding = self.tokenizer(
item['input'],
truncation=True,
padding='max_length',
max_length=self.max_length,
return_tensors='pt'
)
target_encoding = self.tokenizer(
item['target'],
truncation=True,
padding='max_length',
max_length=self.max_length,
return_tensors='pt'
)
return {
'input_ids': input_encoding['input_ids'].flatten(),
'attention_mask': input_encoding['attention_mask'].flatten(),
'labels': target_encoding['input_ids'].flatten()
}
# Initialize the model and tokenizer
model_name = "meta-llama/Llama-2-7b-chat-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Set padding token if not already set
if tokenizer.pad_token is None:
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)
model = torch.nn.DataParallel(model)
# Training setup
optimizer = AdamW(model.parameters(), lr=5e-5)
num_epochs = 3
# Create dataset and DataLoader
train_dataset = DoctorPatientDataset(formatted_data, tokenizer)
train_dataloader = DataLoader(train_dataset, batch_size=3, shuffle=True)
# Training loop
for epoch in range(num_epochs):
model.train()
total_loss = 0
for batch in train_dataloader:
optimizer.zero_grad()
outputs = model(input_ids=batch['input_ids'],
attention_mask=batch['attention_mask'],
labels=batch['labels'])
loss = outputs.loss
loss.backward()
optimizer.step()
total_loss += loss.item()
print(f"Epoch {epoch+1}/{num_epochs}, Loss: {total_loss/len(train_dataloader)}")
# Save the fine-tuned model
model.module.save_pretrained("my_finetuned_model")
# Inference
model.eval()
query = "Hello, how are you?"
input_ids = tokenizer.encode(query, return_tensors='pt')
with torch.no_grad():
output = model(input_ids=input_ids)
response_ids = output.logits.argmax(-1)
response = tokenizer.decode(response_ids, skip_special_tokens=True)
print(response)
</code></pre>
|
<python><pytorch><artificial-intelligence><huggingface-transformers>
|
2023-12-19 23:59:18
| 0
| 527
|
chriscrutt
|
77,688,568
| 1,103,911
|
Extend pandas Timestamp class
|
<p>I tried to extend <code>pandas</code> <code>Timestamp</code> class by inheriting and adding a new method to the class</p>
<pre><code>class T(pd.Timestamp):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
def to_unix_epoch(self) -> int:
return int(self.to_pydatetime().timestamp())
t = T('2012-12-16')
</code></pre>
<p>However the <code>t</code> does not receive the new type but is an instance of <code>pandas._libs.tslibs.timestamps.Timestamp</code></p>
<p>I suspected it is somehow related to <code>__new__()</code> controlling instance creation and causing <code>__init__()</code> to be skipped.</p>
<p>I tried to implement <code>__new__()</code> in my class, but had to forcibly overwrite the instance's class; I doubt this is the way it was meant to be implemented:</p>
<pre><code>class T(pd.Timestamp):
def __new__(cls, *args, **kwargs):
instance = super().__new__(cls, *args, **kwargs)
instance.__class__ = cls
return instance
def to_unix_epoch(self) -> int:
return int(self.to_pydatetime().timestamp())
</code></pre>
<p>The closest I got.</p>
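<p>The <code>__new__</code> suspicion can be demonstrated in isolation (a minimal sketch of the general Python mechanism, not of pandas internals): when <code>__new__</code> returns an instance of the base class rather than of <code>cls</code>, the subclass type is never applied and the subclass's <code>__init__</code> is skipped entirely.</p>

```python
# Minimal sketch: if __new__ ignores cls and always builds a Base,
# Python neither uses the subclass type nor calls the subclass __init__.
class Base:
    def __new__(cls, *args, **kwargs):
        return object.__new__(Base)   # always a Base, mimicking the symptom

class Sub(Base):
    def __init__(self, *args, **kwargs):
        self.touched = True           # never runs for Sub()

s = Sub()
print(type(s).__name__)               # Base, not Sub
```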
|
<python><pandas>
|
2023-12-19 23:47:31
| 1
| 588
|
Eliy Arlev
|
77,688,286
| 2,998,447
|
Giving legend labels in calls to matplotlib.pyplot.text
|
<p>Based on my interpretation of the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.text.html" rel="nofollow noreferrer">documentation</a>, text arguments can have a <code>label</code> kwarg that will show up in the legend (following the link for <code>label</code> takes you <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.artist.Artist.set_label.html#matplotlib.artist.Artist.set_label" rel="nofollow noreferrer">here in the documentation</a>). When I run the code below, however, I get the warning that "<code>No handles with labels found to put in legend.</code>" along with an empty legend. Am I doing something wrong?</p>
<pre><code>import matplotlib.pyplot as plt
plt.text(0.25, 0.25, "1",
bbox={"boxstyle" : "circle", "color":"grey"},
label = "Dude, where's my car?")
plt.text(0.75, 0.75, "2",
bbox={"boxstyle" : "circle", "color":"grey"},
label = "Nevermind, here it is.")
plt.legend(loc = 'lower right')
</code></pre>
<p><a href="https://i.sstatic.net/yt5yk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yt5yk.png" alt="A couple of text entries on a figure and an empty legend" /></a></p>
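<p>For context, <code>Text</code> artists have no default legend handler, so one common workaround (a hedged sketch, not necessarily the only fix) is to pass explicit proxy artists to <code>legend()</code>:</p>

```python
# Hedged sketch: build legend entries from proxy Patch artists, since plain
# Text artists are not collected by the default legend machinery.
import matplotlib
matplotlib.use('Agg')  # headless backend for this sketch
import matplotlib.pyplot as plt
from matplotlib.patches import Patch

fig, ax = plt.subplots()
ax.text(0.25, 0.25, "1", bbox={"boxstyle": "circle", "color": "grey"})
handles = [Patch(color="grey", label="Dude, where's my car?")]
legend = ax.legend(handles=handles, loc="lower right")
print(len(legend.get_texts()))
```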
|
<python><matplotlib>
|
2023-12-19 22:19:09
| 0
| 2,305
|
ramzeek
|
77,688,251
| 20,122,390
|
Why is a celery task not consumed by a worker that is free?
|
<p>I have an application in Python with a microservices architecture and pipelines, for which I use Celery along with RabbitMQ and Redis. In one application flow (machine learning training) 8 methodologies are needed, so a first worker called "Worker Training" sends 8 tasks to another worker called "Worker Training Model". This second worker has 3 replicas so the training finishes faster. At first it works well: each worker consumes a methodology and processes it until finishing, then consumes the next one.</p>
<p>However, I am seeing that, for example, right now 5 of the 8 tasks have already been completed, 2 workers (of the 3 replicas) are processing a task, but 1 worker is doing nothing! It should be processing the last missing methodology! Any idea why this happens? I think the task will eventually be processed by one of the other 2 workers when they finish their current one, but I need this to be more efficient, without idle workers that could be consuming tasks.</p>
<p>My RabbitMQ dashboard looks like this right now:</p>
<p><a href="https://i.sstatic.net/0hA0b.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0hA0b.png" alt="enter image description here" /></a></p>
<p>(That task is the missing methodology and should be done by the free worker...)</p>
|
<python><rabbitmq><celery>
|
2023-12-19 22:11:08
| 1
| 988
|
Diego L
|
77,688,198
| 13,118,338
|
How to decode string address into Pubkey address on Solana
|
<p>I am querying data from raydium using their SDK, but i am trying to do it in Python instead.
So, I am obtaining an address:</p>
<pre><code><BN: b870e12dd379891561d2e9fa8f26431834eb736f2f24fc2a2a4dff1fd5dca4df>
</code></pre>
<p>And I know it corresponds to:</p>
<pre><code>PublicKey(DQyrAcCrDXQ7NeoqGgDCZwBvWDcYmFCjSb9JtteuvPpz)]
</code></pre>
<p>Does anyone know how one could decode the first into the second, so I could implement this in Python?</p>
<p>Thanks</p>
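<p>For what it's worth, a hedged sketch of the presumed relationship (the assumption here is that the BN value is the raw 32-byte public key in hex, and the familiar address is simply its Base58 encoding, no checksum), using only the standard library:</p>

```python
# Hedged sketch, standard library only: Base58 encoding with the
# Bitcoin/Solana alphabet, assuming the BN hex is the raw 32-byte key.
ALPHABET = '123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, 'big')
    out = ''
    while n:
        n, rem = divmod(n, 58)
        out = ALPHABET[rem] + out
    # leading zero bytes are conventionally encoded as leading '1' characters
    pad = len(data) - len(data.lstrip(b'\x00'))
    return '1' * pad + out

hex_key = 'b870e12dd379891561d2e9fa8f26431834eb736f2f24fc2a2a4dff1fd5dca4df'
print(b58encode(bytes.fromhex(hex_key)))
```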
|
<python><typescript><solana>
|
2023-12-19 21:54:14
| 1
| 481
|
Nicolas Rey
|
77,688,183
| 3,749,836
|
Streaming chatbot with nicegui
|
<p>I am trying to follow the example in this link <a href="https://github.com/zauberzeug/nicegui/blob/main/examples/chat_with_ai/main.py" rel="nofollow noreferrer">https://github.com/zauberzeug/nicegui/blob/main/examples/chat_with_ai/main.py</a> but for a streaming chatbot use case.</p>
<p>Every single time I send a message, it loops through all the messages; even for a third question, it processes the first and second questions (already asked) before processing the third prompt. I can't quite make it work like the documented example. I am assuming I am doing something blatantly wrong that will be immediately apparent to an experienced eye. Following are the relevant portions of my script:</p>
<pre><code>vertexai.init(project=PROJECT_ID, location=LOCATION)
model = GenerativeModel("gemini-pro")
chat = model.start_chat()
async def get_async_chat_response(chat: ChatSession, prompt: str) -> str:
response = await chat.send_message_async(prompt, stream=True)
return response
messages: List[Tuple[str, str]] = []
thinking: bool = False
@ui.refreshable
async def chat_messages() -> None:
for name, text in messages:
avatar = human_avatar if name == 'You' else bot_avatar
if name == 'You':
ui.chat_message(avatar=avatar, name=name, text=text, sent=name == 'You')
else:
task = asyncio.create_task(get_async_chat_response(chat, text))
response = await task
async for item in response:
l = ui.label()
l.text += item.text
print(item.text)
if thinking:
ui.spinner(size='3rem').classes('self-center')
if context.get_client().has_socket_connection:
# ui.run_javascript('window.scrollTo(0, document.body.scrollHeight)')
ui.run_javascript('history.scrollRestoration = "manual"')
async def send() -> None:
nonlocal thinking
message = text.value
messages.append(('You', text.value))
thinking = True
text.value = ''
# chat_messages.refresh()
messages.append(('Bot', message))
thinking = False
chat_messages.refresh()
with ui.tab_panels(tabs, value=chat_tab).classes('w-full max-w-2xl mx-auto flex-grow items-stretch'):
with ui.tab_panel(chat_tab).classes('items-stretch'):
chat_messages()
ui.run(title='simple chat app with google gemini')
</code></pre>
|
<python><nicegui>
|
2023-12-19 21:51:28
| 1
| 1,987
|
gibbz00
|
77,688,037
| 12,708,740
|
Pairwise cohen's kappa of values in two dataframes
|
<p>I have two dataframes that look like the toy examples below:</p>
<pre><code>data1 = {'subject': ['A', 'B', 'C', 'D'],
'group': ['red', 'red', 'blue', 'blue'],
'lists': [[0, 1, 1], [0, 0, 0], [1, 1, 1], [0, 1, 0]]}
data2 = {'subject': ['a', 'b', 'c', 'd'],
'group': ['red', 'red', 'blue', 'blue'],
'lists': [[0, 1, 0], [1, 1, 0], [1, 0, 1], [1, 1, 0]]}
df1 = pd.DataFrame(data1)
df2 = pd.DataFrame(data2)
</code></pre>
<p>I would like to calculate the cohen's kappa score for each pair of subjects. For example, I would like to calculate the cohen's kappa scores for subject "A" in df1 against subjects "a", "b", and "c" in df2... and onwards. Like this:</p>
<pre><code>from sklearn.metrics import cohen_kappa_score
cohen_kappa_score(df1['lists'][0], df2['lists'][0])
cohen_kappa_score(df1['lists'][0], df2['lists'][1])
cohen_kappa_score(df1['lists'][0], df2['lists'][2])
...
</code></pre>
<p>Importantly, I would like to represent these pairwise cohen's kappa scores in a new dataframe where both the columns and rows would be all the subjects ("A", "B", "C", "a", "b", "c"), so that I can see whether these scores are more consistent <em>between</em> dataframes or <em>within</em> dataframes. I will eventually convert this dataframe into a heatmap organized by "group".</p>
<p><a href="https://stackoverflow.com/questions/43593412/cohens-kappa-of-two-dataframes">This post for a similar R problem</a> looks promising but I don't know how to implement this in python. Similarly, I have not yet figured out how to implement <a href="https://stackoverflow.com/questions/11528150/inter-rater-agreement-in-python-cohens-kappa">this python solution</a>, which appears similar enough.</p>
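<p>A hedged sketch of the idea (not the full pipeline): a hand-rolled Cohen's kappa for two equal-length binary ratings (for the binary case this closed form agrees with <code>sklearn.metrics.cohen_kappa_score</code>), plus a pairwise grid built with a plain nested comprehension. The subject names and lists mirror the toy data above.</p>

```python
# Hand-rolled Cohen's kappa for two binary raters: kappa = (po - pe)/(1 - pe)
def cohens_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n       # observed agreement
    p1, p2 = sum(a) / n, sum(b) / n                  # per-rater "1" rates
    pe = p1 * p2 + (1 - p1) * (1 - p2)               # chance agreement
    return (po - pe) / (1 - pe)                      # undefined when pe == 1

lists1 = {'A': [0, 1, 1], 'B': [0, 0, 0]}
lists2 = {'a': [0, 1, 0], 'b': [1, 1, 0]}
grid = {(s1, s2): cohens_kappa(l1, l2)
        for s1, l1 in lists1.items() for s2, l2 in lists2.items()}
print(round(grid[('A', 'a')], 3))
```

The <code>grid</code> dict keyed by subject pairs can then be unstacked into a square DataFrame for the heatmap.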
|
<python><pandas><dataframe><heatmap><cohen-kappa>
|
2023-12-19 21:15:22
| 3
| 675
|
psychcoder
|
77,687,928
| 480,118
|
Convert pandas.offsets.CustomBusinessDay to numpy offset
|
<p>I have code that looks like this, which applies an offset to a date:</p>
<pre><code>import pandas as pd, numpy as np
from pandas.tseries.holiday import USFederalHolidayCalendar
from pandas.tseries.offsets import CustomBusinessDay
from datetime import datetime
biz_day_only= True
offset = 1
us_biz_days = CustomBusinessDay(calendar=USFederalHolidayCalendar())
dt = pd.to_datetime(['20231231', '20231031'])
d_offset = pd.offsets.CustomBusinessDay(abs(offset), holidays=us_biz_days.holidays) if biz_day_only else pd.offsets.Day(abs(offset))
result = (dt - d_offset) if offset < 0 else (dt + d_offset)  # originally returned from a function
</code></pre>
<p>I get the following warning:</p>
<pre><code><string>:1: PerformanceWarning: Non-vectorized DateOffset being applied to Series or DatetimeIndex.
</code></pre>
<p>I am trying to fix that warning by using numpy:</p>
<pre><code>new_dt = dt.values.astype('M8[D]') + np.timedelta64(d_offset, 'D')
</code></pre>
<p>I believe it is because <code>d_offset</code> is a Pandas offset. How would I convert that to a NumPy offset?
I see that there is a <code>d_offset.n</code> property with an integer value. Is that safe to use? It has a value of 1 in the above use case, however, so it's not properly rolling the date forward to the next business day when used.</p>
<p>Thanks</p>
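<p>As a possible direction (a hedged sketch, with an illustrative holiday list rather than the full US federal calendar), NumPy has its own vectorized business-day machinery, which sidesteps the per-element offset entirely:</p>

```python
# Hedged sketch: np.busday_offset is NumPy's vectorized business-day shift.
# The holiday list here is illustrative only, not the US federal calendar.
import numpy as np

dates = np.array(['2023-12-31', '2023-10-31'], dtype='M8[D]')
holidays = np.array(['2024-01-01'], dtype='M8[D]')  # assumed holiday
# roll='forward' first moves an invalid date to the next business day,
# then the offset of +1 business day is applied
shifted = np.busday_offset(dates, 1, roll='forward', holidays=holidays)
print(shifted)
```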
|
<python><pandas><numpy>
|
2023-12-19 20:50:13
| 1
| 6,184
|
mike01010
|
77,687,880
| 2,868,899
|
Range matching to return value
|
<p>I'm having trouble finding the solution to this one. It'd usually be a <code>merge()</code> to attain something similar, but I can't find whether something extra needs to be tacked on.</p>
<p>Data:</p>
<pre><code>import pandas as pd
import numpy as np
num = {'serial':[10,20,30,50]}
df = pd.DataFrame(num)
cols = {'StartSerial':[9,19,29,39],'StopSerial':[15,25,35,45],'Job':[564,859,748,125]}
df2 = pd.DataFrame(cols)
</code></pre>
<p>Trying to do something similar to this, but it's not possible due to the indices not matching:</p>
<pre><code>df['Job'] = np.where((df['serial'] >= df2['StartSerial']) & (df['serial'] <= df2['StopSerial']),df2['Job'],'')
</code></pre>
<p>This does not work either:</p>
<pre><code>df.loc[(df['serial'] >= df2['StartSerial']) & (df['serial'] <= df2['StopSerial']),Job] = df2['Job']
</code></pre>
<p>Desired output:</p>
<pre><code>serial | Job
------------
10 | 564
20 | 859
30 | 748
50 |
</code></pre>
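<p>One hedged sketch of an interval ("between") match using NumPy broadcasting on the toy frames above (this assumes ranges don't overlap; otherwise it takes the first match):</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'serial': [10, 20, 30, 50]})
df2 = pd.DataFrame({'StartSerial': [9, 19, 29, 39],
                    'StopSerial': [15, 25, 35, 45],
                    'Job': [564, 859, 748, 125]})

# (4, 1) serials against (4,) bounds -> (4, 4) membership matrix
s = df['serial'].to_numpy()[:, None]
mask = (s >= df2['StartSerial'].to_numpy()) & (s <= df2['StopSerial'].to_numpy())
idx = mask.argmax(axis=1)                         # first interval that matched
df['Job'] = np.where(mask.any(axis=1), df2['Job'].to_numpy()[idx], np.nan)
print(df)
```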
|
<python><pandas>
|
2023-12-19 20:35:40
| 2
| 2,790
|
OldManSeph
|
77,687,844
| 11,628,437
|
Mujoco: `gym.error.DependencyNotInstalled: dynamic module does not define module export function (PyInit_cymj)`
|
<p>I installed mujoco-py by doing <code>pip install mujoco-py==1.50.1.0</code>. It installed successfully. However, when I try to import it using Python, I get the following error:</p>
<pre><code>import mujoco_py
running build_ext
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/miniconda3/envs/deep/lib/python3.8/site-packages/mujoco_py/__init__.py", line 1, in <module>
from mujoco_py.builder import cymj, ignore_mujoco_warnings, functions, MujocoException
File "/miniconda3/envs/deep/lib/python3.8/site-packages/mujoco_py/builder.py", line 283, in <module>
cymj = load_cython_ext(mjpro_path)
File "/miniconda3/envs/deep/lib/python3.8/site-packages/mujoco_py/builder.py", line 55, in load_cython_ext
mod = imp.load_dynamic("cymj", cext_so_path)
File "/miniconda3/envs/deep/lib/python3.8/imp.py", line 342, in load_dynamic
return _load(spec)
ImportError: dynamic module does not define module export function (PyInit_cymj)
</code></pre>
<p>I am running this on a HPC cluster. The surprising thing is that it seemed to have run on some nodes but is failing on the current node.</p>
|
<python><openai-gym><mujoco>
|
2023-12-19 20:28:32
| 0
| 1,851
|
desert_ranger
|
77,687,815
| 5,094,589
|
ORA-01036: illegal variable name/number oracledb (python)
|
<p>I want to insert data into my Oracle table, which has the following structure:</p>
<pre><code>SUBJECT VARCHAR2(4000 BYTE)
TEXT LONG
PREDICTED_CATEGORY VARCHAR2(1000 BYTE)
TRUE_CATEGORY VARCHAR2(1000 BYTE)
PROBABILITY VARCHAR2(4000 BYTE)
NUMBER_ANONYMIZED_TOKENS VARCHAR2(1000 BYTE)
CASEID VARCHAR2(1000 BYTE)
</code></pre>
<p>For this I have prepared row according to the documentation (<a href="https://cx-oracle.readthedocs.io/en/latest/user_guide/bind.html#binding-by-name-or-position" rel="nofollow noreferrer">https://cx-oracle.readthedocs.io/en/latest/user_guide/bind.html#binding-by-name-or-position</a>)</p>
<pre><code>data = {"subject": 'anon_subj', "txt": 'anon_text', "pred_cat": pred_class,"true_cat": "", "prob": str(probability),"num_anon": num_anon_tokens ,"caseid": str(caseID)}
</code></pre>
<p>I have prepared SQL Insert statement as well:</p>
<pre><code>statement= "INSERT INTO MY_ORACLE_TABLE(SUBJECT, TEXT, PREDICTED_CATEGORY, TRUE_CATEGORY, PROBABILITY, NUMBER_ANONYMIZED_TOKENS, CASEID) VALUES (:subject, :txt, :pred_cat, :true_cat, :prob, :num_anon, :caseid)"
</code></pre>
<p>I have executed the following code:</p>
<pre><code>oracledb.init_oracle_client()
con = oracledb.connect(DB_CONNECTION_STRING, encoding='UTF-8', nencoding='UTF-8')
cur = con.cursor()
cur.bindarraysize = 1
cur.execute(statement, data)
con.commit()
cur.close()
</code></pre>
<p>After Execution I get the error ORA-01036: illegal variable name/number oracledb (python), which is most definitely connected to my binding variables. I have looked up many posts on stackoverflow with the same question, but couldn't figure it out.</p>
|
<python><oracle-database><cx-oracle><python-oracledb>
|
2023-12-19 20:23:11
| 1
| 1,106
|
Daniil Yefimov
|
77,687,761
| 3,928,553
|
Numpy : check if any array in 3D array is in another shorter 3D array with duplicates
|
<p>I've got a Numpy array like this one :</p>
<pre><code>source = np.array([[[0,0,0],[0,0,1],[0,1,0],[1,0,0],[1,0,1],[1,1,0],[1,1,1]]])
</code></pre>
<p>And I'm trying to compare it to another array, which is shorter along its second axis and contains duplicate rows along the last axis:</p>
<pre><code>values = np.array([[[0,1,0],[1,0,0],[1,1,1],[1,1,1],[0,1,0]]])
</code></pre>
<p>My goal is to have an array of booleans as long as the longer array:</p>
<pre><code>[False, False,True,True,False,False,True]
</code></pre>
<p>I've tried these command :</p>
<pre><code>np.isin(source,values).all(axis=2)
</code></pre>
<p>But it returns an array of seven <code>True</code> values. A function like <code>numpy.in1d()</code> seemed like a good option, but I didn't manage to adapt it for 3D arrays.</p>
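<p>One hedged sketch that produces the desired boolean vector: compare every row of <code>source</code> against every row of <code>values</code> via broadcasting, then collapse with <code>all</code>/<code>any</code>:</p>

```python
import numpy as np

source = np.array([[[0,0,0],[0,0,1],[0,1,0],[1,0,0],[1,0,1],[1,1,0],[1,1,1]]])
values = np.array([[[0,1,0],[1,0,0],[1,1,1],[1,1,1],[0,1,0]]])

# (7, 1, 3) == (1, 5, 3) -> (7, 5, 3); all(axis=2) marks full-row matches,
# any(axis=1) asks "does this source row match any values row?"
match = (source[0][:, None, :] == values[0][None, :, :]).all(axis=2).any(axis=1)
print(match.tolist())
```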
|
<python><arrays><numpy>
|
2023-12-19 20:11:11
| 2
| 617
|
Raphadasilva
|
77,687,743
| 618,579
|
Issue with grabbing text from SPAN using Selenium
|
<p>I'm trying to get the warranty start date from the Lenovo website using Selenium. I have successfully found the search box, where I enter the serial number and press return. I have then put a 5-second delay in and try to read the text from the span with the following:</p>
<pre><code>info = driver.find_element(By.XPATH, "//*[@id='app-psp-warranty']/div[2]/div/div/div[2]/div/div/div[2]/div[1]/p")
return info.text
</code></pre>
<p>This doesn't return anything, but I have confirmed the new page is loading. The following is the full XPath, and I have confirmed this works in Edge/Chrome when searching.</p>
<pre><code>/html/body/div[2]/section[2]/div[2]/div[2]/div[2]/div/div/div[2]/div/div/div[2]/div[2]/div[2]/div/div/div[4]/div[2]/div/div[2]/span[2]
The HTML element is <span data-v-71ae5215="" class="property-value">2023-04-25</span>
</code></pre>
|
<python><selenium-webdriver>
|
2023-12-19 20:08:11
| 2
| 2,513
|
Nathan
|
77,687,707
| 301,774
|
Clearing an interactive animated plot
|
<p>I have the following simple example of animation across subplots, and now I am trying to make it interactive. When I change the parameters via interaction, I know it is calling <code>run</code> because my print statement is outputting the new values. However, the chart doesn't redraw.</p>
<pre><code> %matplotlib widget
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import numpy as np
from ipywidgets import interact
import ipywidgets as widgets
#sample from a bunch of distributions
def sample(n):
x1 = np.random.normal(0, 1, n)
x2 = np.random.normal(0, 4, n)
return (x1,x2)
# a function that binds data to the outer function and holds it in a closure for the
# higher-order "update" function returned, which is called for each animation frame
def bind_update(sample1, sample2, ax1, ax2, batch=1):
def update(curr):
if ((curr % batch) != 0):
return
ax1.clear()
ax1.set_xlim(np.min(sample1), np.max(sample1))
ax1.set_title(f'Sample 1')
ax1.hist(sample1[:curr], density=True, bins=20, alpha=0.5)
ax2.clear()
ax2.set_xlim(np.min(sample2), np.max(sample2))
ax2.set_title('Sample 2')
ax2.hist(sample2[:curr], density=True, bins=20, alpha=0.5)
return update
@interact(
interval=widgets.RadioButtons(
options=[5, 50, 100],
value=5,
description='refresh interval:',
disabled=False
),
batch=widgets.RadioButtons(
options=[5, 10, 50, 100],
value=100,
description='batch size:',
disabled=False
),
n=widgets.IntSlider(
value=300,
min=1,
max=1000,
step=1,
        description='Number of Samples:',
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='d'
)
)
def run(n, interval, batch):
print('run', n, interval, batch)
# set up figure and subplots
fig, ((ax1,ax2)) = plt.subplots(1, 2, sharex=False, sharey=False)
# I have tried these commands below in this space.
# fig.clf(), fig.clear(), ax1.clear()
fig.suptitle('Working with subplots', fontsize=15)
fig.text(0.5, 0.04, 'Value', ha='center', fontsize=13)
fig.text(0.04, 0.5, 'Frequency', va='center', rotation='vertical', fontsize=13)
# adjust spacing
plt.subplots_adjust(left=0.15, top=0.85, bottom=0.15, wspace=0.4, hspace=0.4)
# bind data and axis to the update function
(x1,x2) = sample(n)
my_update = bind_update(x1, x2, ax1, ax2, batch=batch)
# test the update function
# my_update(n)
# kick off animation
return animation.FuncAnimation(
fig,
my_update,
frames=n+1,
interval=interval,
repeat=False);
</code></pre>
<p>It looks as though a solution is to move this line outside of the function definition:</p>
<pre><code># set up figure and subplots
fig, ((ax1,ax2)) = plt.subplots(1, 2, sharex=False, sharey=False)
</code></pre>
<p>why is that a solution?</p>
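<p>One way to see why that helps: with the ipympl (<code>%matplotlib widget</code>) backend, every call to <code>plt.subplots()</code> creates a brand-new canvas widget, but the notebook keeps displaying the first one, so figures created inside <code>run</code> (and their animations) never reach the screen. Creating the figure once at module level and only clearing/redrawing its axes inside the callback keeps all drawing on the canvas that is actually visible. A minimal sketch of that pattern (using the Agg backend here so it runs headless; the names <code>run</code>, <code>ax1</code>, <code>ax2</code> mirror the code above):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless stand-in for %matplotlib widget in this sketch
import matplotlib.pyplot as plt
import numpy as np

# Create the figure ONCE, outside the callback, so every call to run()
# draws on the canvas that is already displayed instead of making a new one.
fig, (ax1, ax2) = plt.subplots(1, 2, sharex=False, sharey=False)

def run(n):
    # clear and redraw the existing axes rather than building a new figure
    ax1.clear()
    ax2.clear()
    x1 = np.random.normal(0, 1, n)
    x2 = np.random.normal(0, 4, n)
    ax1.hist(x1, density=True, bins=20, alpha=0.5)
    ax2.hist(x2, density=True, bins=20, alpha=0.5)
    fig.canvas.draw_idle()  # request a repaint of the live canvas
    return fig

run(300)
```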
|
<python><matplotlib><matplotlib-animation><python-interactive>
|
2023-12-19 19:59:58
| 0
| 6,896
|
akaphenom
|
77,687,649
| 11,391,711
|
how to use min_ function in LinExpr() when writing an expression in Gurobi
|
<p>I would like to use the built-in min function provided by Gurobi; however, I receive an error when adding it to a <code>LinExpr</code>.</p>
<p>After installing the required libraries as shown below,</p>
<pre><code>import gurobipy as gp
from gurobipy import quicksum, min_
</code></pre>
<p>I write down my expression with pre-defined variables.</p>
<pre><code>expr = gp.LinExpr(0)
for location in locations:
for time in range(time_range):
expr.add(min_(flow_variable[location, time], constant=0))
</code></pre>
<p>I used <a href="https://www.gurobi.com/documentation/current/refman/py_min_.html" rel="nofollow noreferrer">this reference</a> for the <code>min_</code> function.</p>
<pre><code> File "src\\gurobipy\\linexpr.pxi", line 212, in gurobipy.LinExpr.add
gurobipy.GurobiError: Unsupported type (<class 'gurobipy.GenExprMin'>) for LinExpr addition argument
</code></pre>
|
<python><mathematical-optimization><gurobi>
|
2023-12-19 19:46:05
| 1
| 488
|
whitepanda
|
77,687,578
| 4,439,019
|
How to change the filename of pytest-html's generated HTML report?
|
<p>I know how to change the report name, the following does that for me just fine:</p>
<pre><code># conftest.py
def pytest_html_report_title(report):
report.title = 'My API Testing'
</code></pre>
<p>But that doesn't change the name of the actual file itself. It still shows up as <code>report.html</code> and just overwrites itself every run.</p>
<p>Is there a way to change the filename? I want to name it something dynamic with a timestamp so it doesn't keep overwriting the same file.</p>
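<p>pytest-html takes its output path from the <code>--html</code> option, so one approach (a sketch, not the only way) is to rewrite <code>config.option.htmlpath</code> in a <code>pytest_configure</code> hook before the plugin opens the file. The helper and hook below are hypothetical additions to <code>conftest.py</code>; the attribute name <code>htmlpath</code> is an assumption based on how the <code>--html</code> flag is registered:</p>

```python
from datetime import datetime

def timestamped_report_name(prefix="report"):
    """Build a filename like report_2023-12-19_19-32-26.html."""
    stamp = datetime.now().strftime("%Y-%m-%d_%H-%M-%S")
    return f"{prefix}_{stamp}.html"

# In conftest.py (sketch -- assumes pytest-html reads config.option.htmlpath):
def pytest_configure(config):
    if getattr(config.option, "htmlpath", None):
        config.option.htmlpath = timestamped_report_name()
```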
|
<python><pytest><pytest-html>
|
2023-12-19 19:32:26
| 2
| 3,831
|
JD2775
|
77,687,260
| 2,803,777
|
Why is my sliced array variant in Python slower as compared with the element-wise operation?
|
<p>I refer to <a href="https://stackoverflow.com/questions/77683855/can-i-use-a-more-pythonic-way-for-my-conditional-array-operations">this question</a>, which already has a good answer; but unnecessary operations were identified (see the discussion in the posting) and I was just curious whether I could succeed in eliminating them...</p>
<p>In the meantime, I found a method which avoids unnecessary multiplications (using masks for indexing) and gives the same result. The code is below.</p>
<p>Variant 1 is the original.</p>
<p>In variant 2 I tried to make use of Python slicing combined with masking - not only to write the two loops in a more compact way, but mainly in the hope it would get faster. It turned out that it is even slower, by ~30%. To be honest, the original code is more readable, but I was hoping for a significant improvement over the double loop.</p>
<p>Why is this not the case?</p>
<p>Or, asked the other way around: in which situations are slice operations faster than element-wise operations? Are they just syntactic sugar with significant internal overhead? I thought they were implemented in C/C++ under the hood and must be faster than manually looping over <code>i,j</code> in Python.</p>
<p>The output:</p>
<pre><code>D:\python\animation>python test.py
used time for variant 1: 1.0377624034881592
used time for variant 2: 1.30381441116333
D:\python\animation>python test.py
used time for variant 1: 0.8954949378967285
used time for variant 2: 1.251044750213623
D:\python\animation>python test.py
used time for variant 1: 0.9750621318817139
used time for variant 2: 1.3896379470825195
</code></pre>
<p>The code:</p>
<pre><code>import numpy as np
import numpy.ma as ma
import time
def test():
f = np.array([
[0, 0, 0, 0, 0, 0, 0],
[0, 1, 3, 6 , 4, 2, 0],
[0, 2, 4, 7 , 6, 4, 0],
[0, 0, 0, 0, 0, 0, 0]
])
u = np.array([
[0, 0, 0, 0, 0, 0, 0],
[0, 0.5, 1, 0, -1, -0.5, 0],
[0, 0.7, 1.1, 0, -1, -0.4, 0],
[0, 0, 0, 0, 0, 0, 0],
])
# calculate : variant 1
x = np.zeros_like(f)
maxcount = 100000
start = time.time()
for count in range(maxcount):
for i in range(1,u.shape[0]-1):
for j in range(1,u.shape[1]-1):
if u[i,j] > 0:
x[i,j] = u[i,j]*(f[i,j]-f[i,j-1])
else:
x[i,j] = u[i,j]*(f[i,j+1]-f[i,j])
end = time.time()
print("used time for variant 1:", end-start)
# calculate : variant 2
y = np.zeros_like(f)
start = time.time()
for count in range(maxcount):
maskl = (u[1:-1, 1:-1] > 0)
maskr = ~maskl
diff = f[1:-1, 1:] - f[1:-1, 0:-1]
(y[1:-1, 1:-1])[maskl] = (u[1:-1, 1:-1 ])[maskl] * (diff[:, :-1])[maskl]
(y[1:-1, 1:-1])[maskr] = (u[1:-1, 1:-1 ])[maskr] * (diff[:, 1: ])[maskr]
end = time.time()
print("used time for variant 2:", end-start)
np.testing.assert_array_equal(x, y)
test()
</code></pre>
<p>"Pre-fetching" slices for u and y makes it a bit better, but not significantly:</p>
<pre><code> for count in range(maxcount):
maskl = (u[1:-1, 1:-1] > 0)
maskr = ~maskl
diff = f[1:-1, 1:] - f[1:-1, 0:-1]
yy = (y[1:-1, 1:-1]) # <<--
uu = (u[1:-1, 1:-1 ]) # <<--
yy[maskl] = uu[maskl] * (diff[:, :-1])[maskl]
yy[maskr] = uu[maskr] * (diff[:, 1: ])[maskr]
</code></pre>
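<p>For what it's worth, the boolean-mask indexing in variant 2 is itself fairly expensive: each <code>arr[mask]</code> scans the mask and copies the selected elements into a new array, and this happens four times per iteration. A formulation that stays fully vectorized and avoids fancy indexing altogether is <code>np.where</code>, which computes both branches once and selects per element. A sketch (the helper name <code>upwind_where</code> is made up; <code>f</code>, <code>u</code> are as above):</p>

```python
import numpy as np

def upwind_where(f, u):
    """Vectorized equivalent of the double loop, using np.where instead of masks."""
    # float output: np.zeros_like(f) would inherit f's integer dtype and truncate
    x = np.zeros(f.shape)
    ui = u[1:-1, 1:-1]
    left = f[1:-1, 1:-1] - f[1:-1, :-2]   # f[i,j]   - f[i,j-1]
    right = f[1:-1, 2:] - f[1:-1, 1:-1]   # f[i,j+1] - f[i,j]
    # pick the upwind difference per element; no boolean-index copies needed
    x[1:-1, 1:-1] = ui * np.where(ui > 0, left, right)
    return x
```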
|
<python><arrays><numpy>
|
2023-12-19 18:24:34
| 2
| 1,502
|
MichaelW
|
77,687,182
| 1,753,640
|
Update database table once asyncio co-routine is complete
|
<p>I want to do the following</p>
<ol>
<li>Insert a database record to indicate a job is running</li>
<li>Kick off an asyncio routine</li>
<li>Update the database table once the routine is complete</li>
</ol>
<p>This is my current code:</p>
<pre><code>db = sqlite3.connect('database.db')
conn = db.cursor()
insert_query = f"""INSERT INTO batch_jobs
(job_name, user_id, status, type)
VALUES
('{job}','{user_id}','Running','clause_extract')"""
conn.execute(insert_query)
db.commit()
print("Record inserted successfully into batch_jobs table ", conn.rowcount)
row_id = conn.lastrowid  # read before closing the connection
db.close()
# This is my async job
output = asyncio.run(aall_answers_all_docs(doc_list, template))
update_query = f"""UPDATE batch_jobs
SET status = 'Finished',
file_location = './data/csv_files/{file_name}'
WHERE id = {row_id}"""
db = sqlite3.connect('database.db')
conn = db.cursor()
conn.execute(update_query)
db.commit()
db.close()
</code></pre>
<p>However I get an error <code>sqlite3.OperationalError: database is locked</code></p>
<p>I'm assuming that multiple tasks are trying to write to the db at the same time. So how can I wait until the whole asyncio job is finished and my output is delivered? I tried <code>asyncio.gather</code> but that didn't work.</p>
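<p>One pattern that avoids the lock (a sketch only; <code>fake_job</code> is a stand-in for <code>aall_answers_all_docs</code>): make sure no connection is held open while the event loop runs, read <code>lastrowid</code> before closing, and use parameterized queries. sqlite3 allows only one writer at a time, so any connections the coroutines themselves open must likewise be closed or serialized before the final UPDATE.</p>

```python
import asyncio
import sqlite3

async def fake_job():
    # stand-in for aall_answers_all_docs(doc_list, template)
    await asyncio.sleep(0)
    return "output"

def run_batch_job(db_path, job, user_id):
    # open, write, and close BEFORE starting the event loop
    with sqlite3.connect(db_path) as db:
        cur = db.execute(
            "INSERT INTO batch_jobs (job_name, user_id, status, type) "
            "VALUES (?, ?, 'Running', 'clause_extract')",
            (job, user_id),
        )
        row_id = cur.lastrowid  # read while the cursor is still valid
    db.close()  # the context manager only commits; close explicitly

    output = asyncio.run(fake_job())  # no connection held open here

    with sqlite3.connect(db_path) as db:
        db.execute(
            "UPDATE batch_jobs SET status = 'Finished' WHERE id = ?",
            (row_id,),
        )
    db.close()
    return row_id, output
```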
|
<python><python-asyncio>
|
2023-12-19 18:09:39
| 0
| 385
|
user1753640
|
77,687,160
| 139,692
|
Python Logging Config
|
<p>I'm trying a simple setup of Python logging using a YAML config. I've copied some config data from the logging config tutorial, but when loading it I get:</p>
<pre><code>(env) pi@omni-demo:~/omni-reader $ python
Python 3.11.2 (main, Mar 13 2023, 12:18:29) [GCC 12.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import logging
>>> import logging.config
>>> logging.config.dictConfig('logging.conf')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.11/logging/config.py", line 812, in dictConfig
dictConfigClass(config).configure()
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.11/logging/config.py", line 374, in __init__
self.config = ConvertingDict(config)
^^^^^^^^^^^^^^^^^^^^^^
ValueError: dictionary update sequence element #0 has length 1; 2 is required
</code></pre>
<p>The logging.conf file is:</p>
<pre><code>version: 1
formatters:
simple:
format: '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
handlers:
console:
class: logging.StreamHandler
level: ERROR
formatter: simple
stream: ext://sys.stdout
loggers:
root:
level: ERROR
handlers: [console]
</code></pre>
<p>The error is nondescriptive and I just can't figure out what the problem is, as the example is very simple. Can anyone assist, please?</p>
<p>Regards,</p>
<p>Neil</p>
|
<python><logging><yaml>
|
2023-12-19 18:04:59
| 2
| 946
|
Neil Benn
|
77,687,047
| 10,333,668
|
tkinter ttk copy Style layout
|
<p>I found this solution to extend a ttk theme:</p>
<pre><code>import tkinter as tk
import tkinter.ttk as ttk
class Style(ttk.Style):
EXTENDS = 'extends'
def __init__(self, parent):
super().__init__(parent)
self._style = {}
def configure(self, cls, **kwargs):
self._style.setdefault(cls, {}).update(kwargs)
extends = self._style.get(kwargs.get(Style.EXTENDS), {})
super().configure(cls, **extends)
super().configure(cls, **kwargs)
</code></pre>
<p>My question is: how can I make it also copy the layout of the style that it extends?</p>
|
<python><tkinter><ttk>
|
2023-12-19 17:42:17
| 0
| 377
|
Bálint Cséfalvay
|
77,686,967
| 521,347
|
Dataflow not able to find a module specified using extra_package
|
<p>I have created a Dataflow pipeline in batch mode using the Apache Beam Python SDK. I am using one non-public dependency, 'uplight-telemetry', which I have specified via the extra_package parameter when creating the pipeline_options object. However, the pipeline fails to load with the error <code>No module named 'uplight_telemetry'</code>. The code that creates pipeline_options is the following:</p>
<pre><code>def __create_pipeline_options_dataflow(job_name):
# Set up the Dataflow runner options
gcp_project_id = os.environ.get(GCP_PROJECT_ID)
current_dir = os.path.dirname(os.path.abspath(__file__))
print("current_dir=", current_dir)
setup_file_path = os.path.join(current_dir, '..', '..', 'setup.py')
print("Set-up file path=", setup_file_path)
#TODO:Move file to proper location
uplight_telemetry_tar_file_path=os.path.join(current_dir, '..', '..','..','non-public-dependencies', 'uplight-telemetry-1.0.0.tar.gz')
# TODO:Move to environmental variables
pipeline_options = {
'project': gcp_project_id,
'region': "us-east1",
'job_name': job_name, # Provide a unique job name
'temp_location': f'gs://{TAS_GCS_BUCKET_NAME_PREFIX}{os.getenv("UP_PLATFORM_ENV")}/temp',
'staging_location': f'gs://{TAS_GCS_BUCKET_NAME_PREFIX}{os.getenv("UP_PLATFORM_ENV")}/staging',
'runner': 'DataflowRunner',
'save_main_session': True,
'service_account_email': os.environ.get(SERVICE_ACCOUNT),
# 'network': f'projects/{gcp_project_id}/global/networks/default',
'subnetwork': os.environ.get(SUBNETWORK_URL),
'setup_file': setup_file_path,
'extra_package': uplight_telemetry_tar_file_path
# 'template_location': 'gcr.io/dataflow-templates-base/python310-template-launcher-base'
}
print("Pipeline created for job-name", job_name)
logger.debug(f"pipeline_options created as {pipeline_options}")
return pipeline_options
</code></pre>
<p>Why is it not trying to install this package from extra_package?</p>
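<p>One thing worth checking (an assumption based on how the Beam SDK registers this option, not something visible from the snippet alone): <code>--extra_package</code> is the command-line spelling of the flag, but its argparse destination is the plural, list-valued <code>extra_packages</code>, so when the options dict is built programmatically the singular string key may simply be ignored. A sketch of the corrected dict (paths and project values are placeholders):</p>

```python
# Sketch: programmatic option names follow the argparse dest, which for
# --extra_package is the plural, list-valued "extra_packages".
pipeline_options = {
    "project": "my-gcp-project",  # placeholder values throughout
    "region": "us-east1",
    "runner": "DataflowRunner",
    "save_main_session": True,
    "setup_file": "./setup.py",
    # a LIST of local archive paths; Dataflow stages them to the workers
    "extra_packages": ["./non-public-dependencies/uplight-telemetry-1.0.0.tar.gz"],
}
```

<p>The archive path must also be valid at submission time, since the runner stages it from the local filesystem.</p>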
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam>
|
2023-12-19 17:25:33
| 1
| 1,780
|
Sumit Desai
|
77,686,822
| 1,804,173
|
Why am I getting a "wrong" mean from cumprod / cumsum of log-normal returns?
|
<p>I'm trying to generate a <a href="https://en.wikipedia.org/wiki/Geometric_Brownian_motion" rel="nofollow noreferrer">Geometric Brownian motion</a> "price" time series, i.e., based on log-normal returns. The goal:</p>
<ul>
<li>The time series should start with a price value of 1.0,</li>
<li>should have a length of <code>n</code>,</li>
<li>and the final/last price should have an expectation value of 1.0.</li>
</ul>
<p>My understanding of the <a href="https://en.wikipedia.org/wiki/Log-normal_distribution" rel="nofollow noreferrer">log-normal distribution</a> is that it has a mean of <code>exp(mu + sigma^2 / 2)</code>. So in order to obtain a mean of 1.0, one has to use <code>mu = -sigma^2 / 2</code>.</p>
<p>My Python implementation looks like this:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
def gen_prices(n: int, start_price: float = 1.0, sigma: float = 1.0) -> np.ndarray:
mu = -(sigma**2) / 2
log_returns = np.random.normal(mu, sigma, size=n - 1)
prices = np.cumprod(
np.concatenate([np.array([start_price]), np.exp(log_returns)]),
)
return prices
</code></pre>
<p>Let's check the mean of the final price by repeatedly generating time series, and collecting their final price values:</p>
<pre class="lang-py prettyprint-override"><code>num_iter = 100000
num_prices = 100
final_prices = [gen_prices(n=num_prices)[-1] for _ in range(num_iter)]
print(np.mean(final_prices))
</code></pre>
<p>To my surprise this returns something like <code>2.363741046091954e-08</code>, far off from <code>1.0</code>. I'm wondering:</p>
<ul>
<li>Is there a fundamental / theoretical flaw in the approach, i.e., is <code>mu = -sigma^2 / 2</code> the wrong formula to use?</li>
<li>Or is this an issue with numerical stability?</li>
</ul>
<p>Since the result is off by many orders of magnitude, it looks like more than "just a numerical" issue, but the formula seems to make sense at first glance.</p>
<p>Any ideas what is the issue here, and how to fix it?</p>
<hr />
<p>For the record, I also tried an implementation based on <code>np.cumsum</code> instead of <code>np.cumprod</code>, but it seems to behave the same:</p>
<pre class="lang-py prettyprint-override"><code>def gen_prices(n: int, start_price: float = 1.0, sigma: float = 1.0) -> np.ndarray:
mu = -(sigma**2) / 2
log_returns = np.random.normal(mu, sigma, size=n - 1)
prices = np.exp(
np.cumsum(
np.concatenate([np.array([np.log(start_price)]), log_returns]),
)
)
return prices
</code></pre>
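<p>For context on where the discrepancy can come from: with <code>mu = -sigma^2 / 2</code> each final price does have expectation 1.0, so the formula itself is consistent. But with <code>sigma = 1</code> per step over 99 steps, the log of the final price is Normal(-49.5, 99), so a typical draw is around <code>exp(-49.5) ≈ 1e-21</code> and the expectation of 1.0 is carried almost entirely by astronomically rare paths that 100,000 samples will essentially never contain; the sample mean of such a heavy-tailed log-normal converges extremely slowly. Shrinking sigma makes the estimator behave, which can be checked directly (a sketch with a seeded generator; the variable names are made up):</p>

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma, num_iter = 100, 0.05, 100_000
mu = -(sigma**2) / 2  # same drift correction as in the question

# log of each final price is the sum of n-1 i.i.d. log-returns
log_finals = rng.normal(mu, sigma, size=(num_iter, n - 1)).sum(axis=1)
finals = np.exp(log_finals)

print(finals.mean())  # close to 1.0 at this small sigma
```

<p>At <code>sigma = 1</code> the same estimator is dominated by the few largest draws in the sample, which is why it lands many orders of magnitude below 1.0 rather than merely being noisy.</p>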
|
<python><numpy><mean><cumsum>
|
2023-12-19 17:00:08
| 0
| 27,316
|
bluenote10
|