| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
78,130,743
| 2,403,672
|
Parsing a csv file to add details to xml file
|
<p>Need help parsing a CSV file with the following details into an XML file.</p>
<p>The CSV file is in the following format:</p>
<pre><code>name,ip,timeout
domain\user1,10.119.77.218,9000
domain\user2,2.80.189.26,9001
domain\user3,4.155.10.110,9002
domain\user4,9.214.119.86,9003
domain\user5,4.178.187.27,9004
domain\user6,3.76.178.117,9005
</code></pre>
<p>The above details from the CSV need to be added to an XML file with the following format:</p>
<pre><code><login>
<entry name="domain\user1" ip="10.119.77.218" timeout="9000"/>
<entry name="domain\user2" ip="2.80.189.26" timeout="9001"/>
<entry name="domain\user11" ip="4.155.10.110" timeout="12000"/>
...
</login>
</code></pre>
<p>I need a script because there are tons of files that need to be converted. I tried the following tool:
<a href="https://www.convertcsv.com/csv-to-xml.htm" rel="nofollow noreferrer">https://www.convertcsv.com/csv-to-xml.htm</a></p>
<p>But the above tool converts individual row to separate entry which is not needed. Looking for your feedback. Thank you.</p>
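For reference, a minimal standard-library sketch of the conversion (file names below are placeholders for the real batch of files) might look like this:

```python
# Sketch: turn each CSV row into an <entry> element under a single
# <login> root, using only the standard library.
import csv
import xml.etree.ElementTree as ET

def csv_to_login_xml(csv_path, xml_path):
    root = ET.Element("login")
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            ET.SubElement(root, "entry",
                          name=row["name"], ip=row["ip"], timeout=row["timeout"])
    ET.ElementTree(root).write(xml_path)

# csv_to_login_xml("users.csv", "users.xml")  # one call per file in the batch
```

Looping this over a directory with `pathlib.Path.glob("*.csv")` would cover the batch case.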
|
<python><xml><csv><scripting>
|
2024-03-08 23:42:56
| 2
| 736
|
deep
|
78,130,671
| 4,298,228
|
Alembic dependency resolution with depends_on
|
<p>I'm working on a project with multiple branches, and I'm finding something funny when running my migrations...</p>
<p>I have a <code>main</code> branch that has the following revisions: <code>A --> B --> C</code></p>
<p>And then a dev branch with <code>B' --> C'</code></p>
<p><code>B'</code> depends on <code>B</code>, and <code>C'</code> depends on <code>C</code></p>
<p><code>C</code> has breaking changes that prevent you from running <code>B'</code> after it</p>
<p>Now, my expectation was that if I ran <code>alembic upgrade dev@head</code> I would get the following order of executions:</p>
<p><code>A --> B --> B' --> C --> C'</code></p>
<p>With the execution sort of jumping back and forth between the main and the dev branch following a sort of "chronological" order.</p>
<p>However, what I'm getting is</p>
<p><code>A --> B --> C --> B' --> C'</code></p>
<p>Is there anything I can do to get <code>A --> B --> B' --> C --> C'</code>?</p>
<p>Thanks, Fernando</p>
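For reference, a sketch of what the B' revision file might look like (the revision ids here are made-up placeholders for the real hashes):

```python
# Sketch of the dev-branch revision B'. Revision ids are placeholders.
# depends_on guarantees that main-branch B is applied before B', but it
# does not force B' to run before the later main-branch C.
revision = "bp"            # this migration: B'
down_revision = None       # B' starts the dev branch
branch_labels = ("dev",)
depends_on = ("b",)        # require main-branch revision B first

def upgrade():
    pass                   # schema changes for B' go here

def downgrade():
    pass
```

Alembic orders revisions by its dependency graph rather than chronologically, so interleaving B' between B and C generally requires an explicit edge, for example making C itself depend on B'.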
|
<python><alembic>
|
2024-03-08 23:16:12
| 1
| 341
|
Fernando
|
78,130,654
| 464,277
|
hmmlearn MultinomialHMM emissionprob_ size
|
<p>I'm using the code below to fit an HMM model with two hidden states, a vocabulary of size 5 (so 5 possible symbols), and a list of sequences, each with 10 observations.
I don't understand why <code>model.emissionprob_</code> has size <code>(2, 10)</code>, when the number of columns should be equal to the number of symbols available in the vocabulary, 5 in this case.</p>
<pre><code>from hmmlearn import hmm
import numpy as np
import pandas as pd
# Define the sequences
sequences = np.random.randint(1, 6, size=(100000, 10)).tolist()
# Convert sequences to numpy array
sequences_np = np.array(sequences)
# Create and fit the Multinomial HMM model with 2 hidden states
model = hmm.MultinomialHMM(n_components=2)
model.fit(sequences_np)
# Print the model parameters
print("Initial state distribution:")
print(model.startprob_)
print("\nTransition matrix:")
print(model.transmat_)
print("\nEmission probabilities:")
print(model.emissionprob_)
</code></pre>
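As a hedged note on the shape: `fit` treats the (100000, 10) array as 10 features per sample, which matches the observed `(2, 10)` `emissionprob_`. For categorical symbols, the data is usually flattened to one symbol per row with a `lengths` array marking sequence boundaries (recent hmmlearn versions provide `CategoricalHMM` for this case); a dependency-free sketch of the reshape:

```python
import numpy as np

# Sketch: reshape (n_sequences, 10) symbol data into the one-symbol-per-row
# layout hmmlearn expects for categorical observations. The CategoricalHMM
# lines are commented out so the sketch stays dependency-free.
sequences = np.random.randint(1, 6, size=(1000, 10))

X = (sequences - 1).reshape(-1, 1)     # symbols re-coded to 0..4, one per row
lengths = np.full(len(sequences), 10)  # every original sequence has 10 steps

# from hmmlearn import hmm
# model = hmm.CategoricalHMM(n_components=2)
# model.fit(X, lengths)                # emissionprob_ then has shape (2, 5)
```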
|
<python><statistics><probability><hidden-markov-models><hmmlearn>
|
2024-03-08 23:10:33
| 1
| 10,181
|
zzzbbx
|
78,130,460
| 2,727,167
|
equidistant points between two points in subarrays
|
<p>I have following matrix which represents xy points:</p>
<p><a href="https://i.sstatic.net/BqY1L.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BqY1L.png" alt="enter image description here" /></a></p>
<p>The first column is the <strong>x</strong> coordinate, the second column is the <strong>y</strong> coordinate, and the third is a control column (value 1 marks a control point). I need to create a column of new values that directly connects the points with value 1 in the control column, with interpolated points in between. The new column needs to have the same length.</p>
<p>I tried <code>np.linspace</code> for each subarray between <code>1</code>s, but I need it done in one go, without looping over the array.
Data can be downloaded from <a href="https://1drv.ms/u/s!Aj5DfuTWg1OMl495JnRnLZp4P4AQfw?e=m5iHQV" rel="nofollow noreferrer">this link</a></p>
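One loop-free approach (a sketch on made-up data, since the real file sits behind the link) is to treat the rows whose control column is 1 as anchors and call `np.interp` once over the whole index range:

```python
import numpy as np

# Sketch: columns are (x, y, control); rows with control == 1 are the
# anchor points, and everything between them is linearly interpolated
# in a single vectorized call.
pts = np.array([
    [0.0, 1.0, 1],
    [1.0, 9.0, 0],
    [2.0, 9.0, 0],
    [3.0, 4.0, 1],
    [4.0, 0.0, 0],
    [5.0, 7.0, 1],
])

idx = np.arange(len(pts))                           # row positions
anchors = np.flatnonzero(pts[:, 2] == 1)            # positions of control points
new_col = np.interp(idx, anchors, pts[anchors, 1])  # same length as pts
```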
|
<python><arrays><numpy>
|
2024-03-08 22:03:44
| 1
| 450
|
user2727167
|
78,130,401
| 1,561,777
|
Python Tensorflow-Intel For Linux Does Not Exist
|
<p>I am very new to Python. We have an Azure Python Function. We need to update the packages that it uses. Some of the packages are:</p>
<pre><code>tensorflow==2.15.0
tensorflow-estimator==2.15.0
tensorflow-intel==2.15.0
tensorflow-io-gcs-filesystem==0.31.0
</code></pre>
<p>I tested these Windows versions of the packages on my Windows machine. Now I need to download and push the Linux versions of the packages to our python package feed.</p>
<p>When I try to download the packages with the following command</p>
<pre><code>python -m pip download --trusted-host pypi.org --trusted-host files.pythonhosted.org --proxy="http://127.0.0.1:8888/" -r requirements.txt --prefer-binary --only-binary=:all: -d C:\Python\Scoring\mlp\LinuxPackages --platform manylinux_2_17_x86_64 --platform manylinux2010_x86_64 --python-version 3.11
</code></pre>
<p>I get everything except tensorflow. I get the following error:</p>
<pre><code>INFO: pip is looking at multiple versions of tensorflow to determine which version is compatible with other requirements. This could take a while.
ERROR: Ignored the following versions that require a different python version: 0.28.0 Requires-Python >=3.7, <3.11
ERROR: Could not find a version that satisfies the requirement tensorflow-intel==2.15.0; platform_system == "Windows" (from tensorflow) (from versions: 0.0.1)
ERROR: No matching distribution found for tensorflow-intel==2.15.0; platform_system == "Windows"
</code></pre>
<p>When I look at tensorflow-intel on <a href="https://pypi.org/project/tensorflow-intel/#files" rel="nofollow noreferrer">pypi.org</a>, I see Windows wheel files and there are no source distribution archives that I can use to build a wheel file.</p>
<p>I have already downloaded the main tensorflow package from <a href="https://pypi.org/project/tensorflow/#files" rel="nofollow noreferrer">pypi.org</a>, and when I ran my Azure function it seemed to work OK without the tensorflow-intel package, but I am concerned that I may have issues. I did not create this function and we do not have any Python experts on our team.</p>
<p>From looking at our package feed, it currently does not contain the tensorflow-intel package, and that package was not listed in the original requirements.txt file. But I don't know whether the newer tensorflow package depends on tensorflow-intel or not. It seems like it does.</p>
<p>Is it possible to somehow convert the Windows package to a source distribution set of files so I can then create a whl file that is compatible with Linux? Or, what other options do I have?</p>
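One commonly suggested workaround (a sketch, not verified against this feed setup): the `tensorflow-intel; platform_system == "Windows"` marker is evaluated on the machine running pip, which is Windows here, so pip tries to fetch the Windows-only wheel even for a Linux target. Passing `--no-deps` skips dependency resolution entirely, at the cost of having to list every required package explicitly in requirements.txt; running the download inside a Linux container avoids the marker problem altogether.

```shell
# Sketch: download Linux wheels without evaluating Windows-only markers.
# With --no-deps, requirements.txt must list every package explicitly.
python -m pip download \
    --no-deps \
    -r requirements.txt \
    --prefer-binary --only-binary=:all: \
    --platform manylinux_2_17_x86_64 \
    --python-version 3.11 \
    -d ./LinuxPackages
```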
|
<python><linux><tensorflow><python-packaging><python-3.11>
|
2024-03-08 21:45:04
| 1
| 772
|
David.Warwick
|
78,130,380
| 7,265,114
|
Remove white spaces in subplots matplotlib
|
<p>I am trying to plot the subplots as shown below. It works okay, but there is still some white space at the edges of the figure. Is it possible to remove the white space? Any help would be greatly appreciated.</p>
<p>Here is the code and simulated data.</p>
<pre><code>import numpy as np
import xarray as xr
import geopandas as gpd
import requests
import matplotlib.pyplot as plt

# Define a function to convert string coordinates into floats
def data_extract(link):
    text = requests.get(link).text.split(",")
    return np.array([float(i) for i in text if i])

# Get longitude and latitude
y = data_extract("https://raw.githubusercontent.com/tuyenhavan/test/main/latitude.txt")
x = data_extract("https://raw.githubusercontent.com/tuyenhavan/test/main/longtitude.txt")

# Make some random data covering the area with the above coordinates
random_data = np.random.randint(0, 3, (22, 937, 399))

# Wrap them into an xarray DataArray
data = xr.DataArray(random_data, dims=("band", "y", "x"), coords={
    "band": np.arange(len(random_data)),
    "y": y,
    "x": x
})

# Get the study area
aoi = gpd.read_file("https://raw.githubusercontent.com/tuyenhavan/test/main/area.geojson")

# Visualize the result
fig, axes = plt.subplots(nrows=3, ncols=8, figsize=(12, 16), sharex=True, sharey=True)
fig.subplots_adjust(hspace=-0.55)
cmap = "Spectral"
# fig.subplots_adjust(wspace=0.2, hspace=0.25)
for i, ax in enumerate(axes.flatten()):
    if i < len(data):
        tem = data[i]
        tem = tem.rio.write_crs(aoi.crs)
        plot = tem.plot(ax=ax, cmap=cmap, add_colorbar=False, vmin=1, vmax=3)
        aoi.plot(ax=ax, color="None", edgecolor="black", linewidth=0.5)
        ax.set_title(f"{1+i}")
        ax.set_ylabel("")
        ax.set_xlabel("")
    else:
        ax.axis("off")
</code></pre>
<p><a href="https://i.sstatic.net/BDtgq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BDtgq.png" alt="enter image description here" /></a></p>
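As a hedged sketch (dummy data in place of the xarray bands; `layout="constrained"` needs matplotlib ≥ 3.6), sizing the grid to the data and letting constrained layout pack the axes tends to remove the stray margins that a negative `hspace` leaves behind:

```python
import matplotlib
matplotlib.use("Agg")            # headless backend for the sketch
import matplotlib.pyplot as plt
import numpy as np

n_panels = 22                    # number of bands in the DataArray
ncols = 8
nrows = -(-n_panels // ncols)    # ceil division -> 3 rows

fig, axes = plt.subplots(nrows, ncols, figsize=(12, 6),
                         sharex=True, sharey=True, layout="constrained")
for i, ax in enumerate(axes.flat):
    if i < n_panels:
        ax.imshow(np.random.randint(0, 3, (30, 13)), cmap="Spectral")
        ax.set_title(str(i + 1))
    else:
        ax.axis("off")           # blank out the unused axes
# bbox_inches="tight" trims whatever margin is left around the grid
fig.savefig("panels.png", bbox_inches="tight")
```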
|
<python><matplotlib><geopandas><python-xarray>
|
2024-03-08 21:38:51
| 0
| 1,141
|
Tuyen
|
78,130,279
| 10,801,098
|
Get ORM class from schema instance in SQL alchemy
|
<p>In SQL alchemy, you may do something like this to do a query:</p>
<pre><code>model_instance = session.query(SomeModel).filter(SomeModel.value == some_value).first()
</code></pre>
<p>where <code>model_instance</code> is equivalent to some form of this, with data omitted:</p>
<pre><code>model_instance = SomeModel( ... )
</code></pre>
<p>Let's suppose I want to create a function that could take multiple possible types for a <code>model_instance</code>, each with their own table. To do so, I want to do something like <code>type(model_instance)</code>, except instead of just getting a Python class, I want an object with the attribute <code>'_sa_instance_state'</code>, so I can do <code>session.query(SomeModel)</code>, given an instance <code>SomeModel()</code>.</p>
<p>For clarification, here is an example method (<code>save_model</code>):</p>
<pre class="lang-py prettyprint-override"><code>class Database:
    """SQLAlchemy object for interacting with the database"""

    def __init__(self, db_url: str):
        self.engine = create_engine(db_url)
        Base.metadata.create_all(self.engine)
        Session = sessionmaker(bind=self.engine)
        self.session = Session()

    def save_model(self, model: Model) -> bool:
        """Saves given model to the database"""
        try:
            model_type = ...  # Get the ORM class object from the model here
            if self.contains(model_type, **model.__dict__):
                return False
            self.session.add(model)
            self.session.commit()
            return True
        except Exception as e:
            self.session.rollback()
            raise Exception(f"Unable to save model: {str(e)}")

    def contains(self, model_class: Type[Model], **kwargs) -> bool:
        """Returns True if a model of the specified class with the given attributes exists in the database."""
        query = self.session.query(model_class)
        for key, value in kwargs.items():
            query = query.filter(getattr(model_class, key) == value)
        return query.first() is not None
</code></pre>
<p>where <code>Model</code> is a union of schema types, each with their own separate table.</p>
<p>I tried doing something like this</p>
<pre class="lang-py prettyprint-override"><code>model_type = sqlalchemy.Table(model.__table__.name, model.__table__.metadata, autoload_with=self.engine)
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>model_type = sqlalchemy.sql.text(model.__table__.name)
</code></pre>
<p>and</p>
<pre><code>model_type = type(model)
</code></pre>
<p>But I still run into some variation of the following issue:</p>
<pre><code>'Table' object has no attribute '_sa_instance_state'
</code></pre>
<p>or</p>
<pre><code>'TextClause' object has no attribute '_sa_instance_state'
</code></pre>
<p>or</p>
<pre><code>'SomeModel' has no attribute '_sa_instance_state'
</code></pre>
<p>Sure, I could add in <code>model_class: Type[Model]</code> as another parameter to the <code>save_model</code> method, but I was wondering if it were possible to avoid doing that.</p>
<p><strong>Note</strong>: <code>contains</code> works if you specify a specific model type, (e.g., <code>contains(SomeModel, param1="some_value")</code>, but not if you do <code>contains(type(model), param1="some_value")</code>).</p>
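For what it's worth, one way to recover the mapped class from an instance without an extra parameter is SQLAlchemy's runtime inspection API; a self-contained sketch (model and column names invented for illustration):

```python
from sqlalchemy import Column, Integer, String, inspect
from sqlalchemy.orm import declarative_base

Base = declarative_base()

class SomeModel(Base):
    __tablename__ = "some_model"
    id = Column(Integer, primary_key=True)
    value = Column(String)

instance = SomeModel(value="x")

# inspect() on a mapped instance returns its InstanceState, whose
# .mapper.class_ is the ORM class usable in session.query(...)
model_type = inspect(instance).mapper.class_
assert model_type is SomeModel
```

If `type(model)` raises the `_sa_instance_state` error, the instance may not actually be a mapped ORM object, which is worth checking as well.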
|
<python><sqlalchemy>
|
2024-03-08 21:03:32
| 1
| 1,536
|
bug_spray
|
78,130,371
| 2,925,767
|
How to run EdgeOS configuration commands in python fabric
|
<p>I'm trying to programmatically update configuration in EdgeOS using a python script. I'm using fabric as an ssh client. <a href="https://docs.fabfile.org/en/latest/index.html" rel="nofollow noreferrer">https://docs.fabfile.org/en/latest/index.html</a></p>
<p>Below "---original question---" describes the problem, but I think I've hit on the source of the issue, but not the solution. Fabric ssh's into the router in non-interactive mode, and as such doesn't get the full bash configuration. The <code>configure</code> command isn't available, nor <code>show</code> nor other router management commands. The .bashrc file in the router is this:</p>
<pre><code># If not running interactively, don't do anything
[ -z "$PS1" ] && return
# enable completion
source /etc/bash_completion.d/vyatta-cfg
source /etc/bash_completion.d/vyatta-op
unset HISTFILE
</code></pre>
<p>So I can see I'm not sourcing the bash_completion files. I tried doing that manually in python: <code>c.run("source /etc/bash_completion.d/vyatta-cfg")</code>, but it doesn't seem to work. My next best solution is to find the hard path to those commands, but I'm not sure where to look.</p>
<p>---original question---</p>
<p>what I'd like to do is to create small configuration changes such as:</p>
<pre><code>configure
set interfaces ethernet eth2 vif 20 disable
commit
save
</code></pre>
<p>However, the <code>configure</code> command fails when I run it through fabric. I notice, when I'm in an ssh session, the <code>configure</code> command creates a new sub-session or something. I'm not sure of the term.</p>
<pre><code>$ configure
[edit]
#
</code></pre>
<p>script so far:</p>
<pre><code>from fabric import Connection
c = Connection('routerip')
c.run("configure")
</code></pre>
<p>And the error:</p>
<pre><code>vbash: configure: command not found
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/decorator.py", line 231, in fun
return caller(func, *(extras + args), **kw)
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/fabric/connection.py", line 23, in opens
return method(self, *args, **kwargs)
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/fabric/connection.py", line 763, in run
return self._run(self._remote_runner(), command, **kwargs)
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/invoke/context.py", line 113, in _run
return runner.run(command, **kwargs)
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/fabric/runners.py", line 83, in run
return super().run(command, **kwargs)
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/invoke/runners.py", line 395, in run
return self._run_body(command, **kwargs)
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/invoke/runners.py", line 451, in _run_body
return self.make_promise() if self._asynchronous else self._finish()
File "/Users/me/opt/anaconda3/lib/python3.8/site-packages/invoke/runners.py", line 518, in _finish
raise UnexpectedExit(result)
invoke.exceptions.UnexpectedExit: Encountered a bad command exit code!
Command: 'configure'
Exit code: 127
Stdout: already printed
Stderr: already printed
</code></pre>
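Since `configure` is a shell function defined only in interactive vbash sessions, one workaround seen on Vyatta-derived systems (the wrapper path below is an assumption and may differ between EdgeOS versions) is to invoke the configuration wrapper by absolute path:

```python
# Sketch: build one shell line that drives the Vyatta/EdgeOS config
# wrapper, which works in non-interactive SSH sessions. WRAPPER is an
# assumed path; verify it exists on the router first.
WRAPPER = "/opt/vyatta/sbin/vyatta-cfg-cmd-wrapper"

def build_config_script(commands):
    steps = ["begin"] + list(commands) + ["commit", "save", "end"]
    return " && ".join(f"{WRAPPER} {step}" for step in steps)

# Usage with fabric (not run here):
#   from fabric import Connection
#   Connection("routerip").run(build_config_script([
#       "set interfaces ethernet eth2 vif 20 disable",
#   ]))
```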
|
<linux><bash><ssh><python>
|
2024-03-08 20:53:41
| 2
| 1,085
|
icicleking
|
78,130,226
| 947,012
|
Why does my Azure Function disobey env var logging settings but respect host.json?
|
<p>I added logging through opentelemetry and now I am getting duplicated entries of logs as they are sent through both Function's handler and opentelemetry handler.</p>
<p>I want to disable Function logging because opentelemetry adds tracer span awareness through OperationID/ParentID, making the opentelemetry logs more useful for correlation. It is necessary to disable the Function's logging at the individual function level at this point, as the rest of the functions are not instrumented yet.</p>
<p>According to <a href="https://learn.microsoft.com/en-us/azure/azure-functions/configure-monitoring?tabs=v1#overriding-monitoring-configuration-at-runtime" rel="nofollow noreferrer">this document</a>, I must be able to set environment variables like</p>
<p><code>AzureFunctionsJobHost__logging__logLevel__Function__MyFunction__User</code></p>
<p>to control logging while overriding other settings. I found that host-level setting like</p>
<p><code>AzureFunctionsJobHost__logging__logLevel__Function=None</code></p>
<p>does work, essentially disabling all logging, but on individual function level (with <code>__MyFunction__User</code> suffix) this setting is ignored.</p>
<p>I also noticed that settings through host.json work, too:</p>
<pre class="lang-json prettyprint-override"><code>{
"version": "2.0",
"logging": {
"applicationInsights": {
"samplingSettings": {
"isEnabled": true,
"excludedTypes": "Request"
}
},
"logLevel": {
"Function": "Information",
"Function.MyFunction.User": "None"
}
}
}
</code></pre>
<p>The above achieves the goal, but configuring through env vars is preferred for me. What am I missing?</p>
|
<python><logging><azure-functions><azure-functions-runtime>
|
2024-03-08 20:50:22
| 2
| 3,234
|
greatvovan
|
78,130,203
| 6,401,403
|
Pandas: make list size in a column same as in another column
|
<p>I have two columns: <code>serial_number</code> and <code>inv_number</code> containing lists. If there is one <code>inv_number</code> for multiple <code>serial_number</code>, I need to make the size of <code>inv_number</code>'s list the same as <code>serial_number</code>'s.</p>
<pre><code> serial_number inv_number
28 [С029768, С029775] [101040031171, 101040031172]
29 [090020960190402011, 090020960190402009] [210134002523, 210134002524]
31 [1094] [410124000215]
32 [01] [101040022094]
33 [F161B5, F17D86, F17D8D, F1825C, F1825A, F1825D] [101040026976]
</code></pre>
<p>Here at the index 33 we have 6 serial numbers but one inventory number, so it should be changed to</p>
<pre><code>[101040026976, 101040026976, 101040026976, 101040026976, 101040026976, 101040026976]
</code></pre>
<p>I've tried to do it by "multiplying" values to make a list (like <code>[value] * N</code>):</p>
<pre><code>si.loc[si['inv_number'].apply(len) == 1, 'inv_number'].apply(
    lambda x: [str(x[0])] * si['serial_number'].apply(len).values)
</code></pre>
<p>but it gives me an error:</p>
<blockquote>
<p>UFuncTypeError: ufunc 'multiply' did not contain a loop with signature matching types (dtype('<U12'), dtype('int64')) -> None</p>
</blockquote>
<p>How can I solve this problem?</p>
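A row-wise sketch of the `[value] * N` idea that sidesteps the ufunc error (the error comes from multiplying by the whole Series of lengths instead of the matching row's length):

```python
import pandas as pd

# Sketch on a cut-down version of the frame: broadcast one-element
# inv_number lists to the length of the matching serial_number list.
si = pd.DataFrame({
    "serial_number": [["F161B5", "F17D86", "F17D8D"], ["1094"]],
    "inv_number": [["101040026976"], ["410124000215"]],
})

si["inv_number"] = [
    inv * len(sn) if len(inv) == 1 else inv
    for sn, inv in zip(si["serial_number"], si["inv_number"])
]
```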
|
<python><pandas><list>
|
2024-03-08 20:44:12
| 2
| 5,345
|
Michael
|
78,130,054
| 1,036,582
|
Python imports fail when using unittest module
|
<p>I have the following directory structure:</p>
<pre><code>project/
lambda_functions/
user_plot
user_plot_lib/
├── __init__.py
├── a.py
└── b.py
user_plot_client.py
test/
unit/
├── __init__.py
├── test_user_plot.py
</code></pre>
<p>Inside of my <code>user_plot_client.py</code> I have imports like so:</p>
<pre class="lang-py prettyprint-override"><code>from user_plot_lib.a import test_function
</code></pre>
<p>I can successfully run code from the <code>user_plot_client.py</code> .</p>
<p>In <code>test_user_plot.py</code> I have the following</p>
<pre class="lang-py prettyprint-override"><code>import unittest

from lambda_functions.user_plot.user_plot_client import main

class TestUserPlot(unittest.TestCase):
    def test_main(self):
        pass
</code></pre>
<p>I get the following import error</p>
<pre><code>from user_plot_lib.a import test_function
ModuleNotFoundError: No module named user_plot_lib
</code></pre>
<p>I know it's something to do with the Python path, but I'm not having any luck getting it added. Any better way to structure this so that I don't have to mess with the python path?</p>
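One low-ceremony option (a sketch; adjust the `parents[...]` index to the real nesting) is a small path shim in `test/unit/__init__.py`, putting both the project root and the `user_plot` directory on `sys.path` so both import styles resolve:

```python
# Sketch of a path shim for test/unit/__init__.py. parents[2] assumes
# the file sits at project/test/unit/__init__.py; adjust if the layout
# differs.
import sys
from pathlib import Path

project_root = Path(__file__).resolve().parents[2]            # project/
sys.path.insert(0, str(project_root))                         # lambda_functions.*
sys.path.insert(0, str(project_root / "lambda_functions" / "user_plot"))  # user_plot_lib.*
```

Running `python -m unittest discover` from the project root then finds both packages.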
|
<python><python-import><python-unittest>
|
2024-03-08 20:10:41
| 0
| 373
|
Movieboy
|
78,129,981
| 23,519,070
|
Logging Error: Failed to initialize logging system. Log messages may be missing.?
|
<blockquote>
<p>Logging Error: Failed to initialize logging system. Log messages may be missing. If this issue persists, try setting IDEPreferLogStreaming=YES in the active scheme actions environment variables.</p>
</blockquote>
<p>Has anyone else encountered this message?</p>
<p>Where is <code>IDEPreferLogStreaming</code> located? I don't know what any of this means.</p>
<p>It's building my app successfully, but then loading it like it's a computer using floppy discs (crazy slow).</p>
<p>Any ideas?</p>
<p>I tried wiping my OS and reinstalling. I've reinstalled Xcode twice now. Nothing.</p>
<p>A colleague of mine is working on the same SwiftUI project with no issues.</p>
|
<python><swift><xcode>
|
2024-03-08 19:52:20
| 4
| 945
|
James Menkal
|
78,129,942
| 13,812,982
|
Reading a Protected View Excel file using xlsxwriter
|
<p>I am trying to read an Excel .xlsx file that has been saved from an Outlook attachment. My security setting (which I cannot change) puts this file into <code>Protected View</code> mode.</p>
<p>If I try and read this file via <code>xlsxwriter</code></p>
<pre><code>import xlsxwriter
xlsx = 'protectedfile.xlsx'
wbWriter = xlsxwriter.Workbook(xlsx)
print( len(wbWriter.worksheets()) )
</code></pre>
<p>I get the answer <code>0</code>.</p>
<p>If I open the file in Excel, I get this banner:</p>
<p><a href="https://i.sstatic.net/NQv7c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NQv7c.png" alt="enter image description here" /></a></p>
<p>If I choose "Enable Editing", Save the file and Quit Excel, the file is now accessible via <code>xlsxwriter</code> and correctly lists the number of worksheets.</p>
<p>If I look at the initial downloaded file in File Explorer Properties it tells me the file is blocked. I can remove this flag using <code>os.remove(filename + ':Zone.Identifier')</code> but the workbook remains in <code>Protected View</code> with zero sheets.</p>
<p>If I extract the <code>:$DATA</code> stream using PowerShell and copy that to another file:</p>
<pre><code>> $a = get-content ./protectedfile.xlsx:':$DATA' -Encoding Byte -ReadCount 0
> set-content ./another.xlsx -Encoding Byte -Value $a
</code></pre>
<p>The <code>another.xlsx</code> file opens in Excel without the Protected View warning, but <code>xlsxwriter</code> still does not see any sheets.</p>
<p>One solution is to use <code>win32com</code> to open, edit and save each file in the background, along the lines of:</p>
<pre><code>import win32com.client as wc
xl = wc.Dispatch('Excel.Application')
pv = xl.ProtectedViewWindows.Open('protectedfile.xlsx')
wb = pv.Edit()
wb.Close(True)
</code></pre>
<p>But this is slow, cumbersome and requires Excel to be started and shut down.</p>
<p>Is there a programmatic alternative for removing the Protected View flag so that <code>xlsxwriter</code> can access the contained worksheets?</p>
<p><strong>UPDATE</strong>: after @jmcnamara's salient comment, I have found I can read the protected view .xlsx using <code>pandas</code> and <code>read_excel</code>: ie with <code>openpyxl</code> as the engine. I can then write the DataFrame dictionary back to a new file, which is a more efficient conversion than automating Excel. Eg this code copies the data over (though loses some formatting):</p>
<pre><code>import pandas as pd

def unProtect(oldXlsx, newXlsx):
    with pd.ExcelWriter(newXlsx, mode='w', engine='openpyxl') as writer:
        for k, dfSheet in pd.read_excel(oldXlsx, sheet_name=None).items():
            dfSheet.to_excel(writer, sheet_name=k, index=False)

unProtect('blocked.xlsx', 'unblocked.xlsx')
</code></pre>
<p>I am still interested to know how the structure of the .xlsx has been changed as the proprietary code (not Python) I am using is failing to unpack the zipped xml of the protected view file.</p>
|
<python><excel>
|
2024-03-08 19:43:08
| 0
| 4,331
|
DS_London
|
78,129,852
| 1,226,676
|
Uploading multiple large files with django on Google App Engine -- how to do multiple requests
|
<p>I'm trying to figure out how to load multiple large files (like 4K images) to Google Cloud Storage via django using the default admin interface.</p>
<p>For example, I have a model with multiple images:</p>
<pre><code>class MyModel(models.Model):
    image_1 = models.ImageField(
        null=True,
        upload_to="myapp/images"
    )
    image_2 = models.ImageField(
        null=True,
        upload_to="myapp/images"
    )
</code></pre>
<p>However, if I enter data for this model in the admin interface, this causes an error if I load two large files that go over GAE's 32 MB post limit.</p>
<p>I've tried using <code>django-gcp</code> and <code>BlobField</code>, but this also causes issues because the temporary uploads overwrite each other before transferring to Google Cloud Storage -- and it doesn't look like this is solvable with <code>django-gcp</code> out of the box.</p>
<p>So right now I'm wondering if it's possible to break out upload into multiple POST requests -- that way each one can be a separate ImageField (if under 32MB) or BlobField, and I won't have any issues.</p>
<p>Is there a way I can upload a model in multiple POSTs?</p>
|
<python><django><google-app-engine><google-cloud-storage>
|
2024-03-08 19:21:50
| 0
| 5,568
|
nathan lachenmyer
|
78,129,833
| 4,575,197
|
How to add two side slide button to a line chart based on the year using Altair
|
<p>I have a line chart with date as the X axis. I want to filter it with a double-ended (two-handle) slider, so that I can filter the data between the start and end dates of the DataFrame. My dates are in YYYY-MM-DD format, so they need to be transformed.</p>
<p>my data:</p>
<pre><code>import pandas as pd

data = {
'Adj Close': [1.308934, 2.169581, 2.876765, 2.357847, 2.179156],
'Yahoo Finance return': [0.670226, 1.298566, 0.920492, -0.721652, -0.306659],
'date_x': ['2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01', '2015-05-01'],
'Return in USD': [-0.641970, 0.428617, 0.404568, 0.125614, 0.070045]
}
# Convert the dictionary into a DataFrame
df = pd.DataFrame(data)
# Make sure that 'date_x' is a datetime column
df['date_x'] = pd.to_datetime(df['date_x'])
</code></pre>
<p>my code for creating the line chart:</p>
<pre><code>final_melted = df.melt(id_vars=['date_x'], var_name='Metric', value_name='Value')
selection_metric_1 = alt.binding_select(options=['Adj Close', 'None','Yahoo Finance return', 'Return in USD'], name='Metric 1 ')
select_metric_1 = alt.selection_single(fields=['Metric'], bind=selection_metric_1, name="Selector 1")
# Dropdown selection for the second metric
selection_metric_2 = alt.binding_select(options=['Adj Close','None', 'Yahoo Finance return', 'Return in USD'], name='Metric 2 ')
select_metric_2 = alt.selection_single(fields=['Metric'], bind=selection_metric_2, name="Selector 2")
# Dropdown selection for the third metric
selection_metric_3 = alt.binding_select(options=['Adj Close','None', 'Yahoo Finance return', 'Return in USD'], name='Metric 3 ')
select_metric_3 = alt.selection_single(fields=['Metric'], bind=selection_metric_3, name="Selector 3")
line_chart = alt.Chart(final_melted).mark_line().encode(
x='date_x:T',
y=alt.Y('Value:Q', axis=alt.Axis(title='Metric Value')),
color='Metric:N'
)
# Points layer, making it easier to hover over individual points
points = alt.Chart(final_melted).mark_point().encode(
x='date_x:T',
y='Value:Q',
color='Metric:N',
tooltip=['date_x:T', 'Value:Q', 'Metric:N']
).properties(
width=1600,
height=800
)
# Conditional filters based on the selection
filtered_chart = alt.layer(line_chart, points).add_selection(
select_metric_1,
select_metric_2,
select_metric_3 # Add the third selection here
).transform_filter(
select_metric_1 | select_metric_2 | select_metric_3 # Apply the filter based on the third selection as well
).interactive()
configured_chart = filtered_chart.configure_axis(
titleFontSize=15, # Adjust the size as needed for axis titles
labelFontSize=15 # Adjust the size as needed for axis labels
).configure_legend(
titleFontSize=15, # Adjust legend title size
labelFontSize=12 # Adjust legend labels size
).configure_title(
fontSize=24 # Adjust chart title size
).configure_legend(
titleFontSize=17, # Adjust the font size for the legend title
labelFontSize=15 # Adjust the font size for the legend labels
)
</code></pre>
|
<python><dataframe><filtering><visualization><altair>
|
2024-03-08 19:15:28
| 0
| 10,490
|
Mostafa Bouzari
|
78,129,823
| 195,540
|
Python opentelemetry events in Application Insights
|
<p>I'm following the guides below trying to setup logging in Azure Application Insights for my django application:</p>
<p><a href="https://uptrace.dev/get/instrument/opentelemetry-django.html" rel="nofollow noreferrer">https://uptrace.dev/get/instrument/opentelemetry-django.html</a>
<a href="https://uptrace.dev/opentelemetry/python-tracing.html" rel="nofollow noreferrer">https://uptrace.dev/opentelemetry/python-tracing.html</a>
<a href="https://opentelemetry.io/docs/languages/python/automatic/logs-example/" rel="nofollow noreferrer">https://opentelemetry.io/docs/languages/python/automatic/logs-example/</a></p>
<p>And have ended up with code that looks like this:</p>
<p><strong>myapp/manage.py</strong></p>
<pre><code>def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

    # Configure OpenTelemetry to use Azure Monitor with the specified connection string
    configure_azure_monitor(
        connection_string="InstrumentationKey=myKey;IngestionEndpoint=https://centralus-2.in.applicationinsights.azure.com/;LiveEndpoint=https://centralus.livediagnostics.monitor.azure.com/",
    )
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    execute_from_command_line(sys.argv)

if __name__ == '__main__':
    main()
</code></pre>
<p><strong>myapp/views.py</strong></p>
<pre><code>import logging

from opentelemetry import trace

class SomeView(LoginRequiredMixin, TemplateView):
    login_required = True
    template_name = "myapp/index.html"

    def get_context_data(self, **kwargs):
        context = super().get_context_data(**kwargs)
        tracer = trace.get_tracer(__name__)
        with tracer.start_as_current_span("SomeView") as span:
            if span.is_recording():
                span.set_attribute("user.id", self.request.user.id)
                span.set_attribute("user.email", self.request.user.email)
                span.add_event("log", {
                    "log.severity": "info",
                    "log.message": "Mark was here.",
                    "user.id": self.request.user.id,
                    "user.email": self.request.user.email,
                })
                span.add_event("This is a span event")
            logging.getLogger().error("This is a log message")
        context['something'] = SomeThing.objects.all()
        return context
</code></pre>
<p>The good: I do get results in Application Insights.</p>
<p>When I look at end-to-end transaction details I see something like this, which is great.</p>
<pre><code>Traces & Events
10 Traces
0 Events <- THIS IS THE ISSUE
View timeline
Filter to a specific component and call
Local time Type Details
1:32:52.989 PM Request Name: GET some/path/, Successful request: true, Response time: 5.6 s, URL: https://someurl.com
1:32:53.260 PM Trace Message: log
1:32:53.260 PM Trace Message: This is a span event
1:32:53.260 PM Trace Severity level: Error, Message: This is a log message
1:32:53.260 PM Internal Name: SomeView, Type: InProc, Call status: true
1:32:53.577 PM Trace Severity level: Information, Message: some
1:32:53.587 PM Trace Severity level: Information, Message: message
1:32:53.602 PM Trace Severity level: Information, Message: here
</code></pre>
<p>However, what I can't get to happen is logging an actual Event. So when I look at Application Insights and click on "Activity Log" in the menu, I see nothing but a message stating "No events to display".</p>
<p>So, I've got traces working, but I'm unable to log "Events". Any help would be greatly appreciated.</p>
|
<python><django><azure-application-insights><open-telemetry>
|
2024-03-08 19:13:19
| 1
| 1,787
|
scoopseven
|
78,129,351
| 1,744,357
|
SOAP Header Invalid Signature on Timestamp
|
<p>One of our SAML signatures in the SOAP header is invalid. You can confirm it fails at this website: <a href="https://tools.chilkat.io/xmlDsigVerify.cshtml" rel="nofollow noreferrer">https://tools.chilkat.io/xmlDsigVerify.cshtml</a> . We have attempted troubleshooting to no avail. We believe the issue lies somewhere with how we are creating the signature itself. I've included a test XML we generate, and then the code block for how we create the signature for the timestamp.</p>
<p>This is the XML that we generate (I made it pretty print for everyone to review but we don't actually pass it as pretty print):</p>
<pre><code><?xml version="1.0"?>
<soap-env:Envelope xmlns:soap-env="http://www.w3.org/2003/05/soap-envelope">
<soap-env:Header xmlns:wsa="http://www.w3.org/2005/08/addressing">
<wsse:Security xmlns:wsse="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd" soap-env:mustUnderstand="true">
<Timestamp xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" wsu:Id="_1">
<Created>2024-03-08T16:26:59.802Z</Created>
<Expires>2024-03-08T17:26:59.802Z</Expires>
</Timestamp>
<samlns:Assertion xmlns:samlns="urn:oasis:names:tc:SAML:2.0:assertion" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" Version="2.0" ID="_1d69ba1f-e047-45e9-a2be-d21ec70827eb" IssueInstant="2024-03-08T16:26:59.802Z">
<samlns:Issuer NameQualifier="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=ourorgname, OU=IT, O=ourorgname, L=NewYork</samlns:Issuer>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<ds:Reference URI="#_1d69ba1f-e047-45e9-a2be-d21ec70827eb">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2000/09/xmldsig#enveloped-signature"/>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>+Kbf8gnL85LKLfHFtRftp9FtiPStCklxeY+mBko9B14=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>Fftcv2W9Eq+iGl+8Yp6wjufX0wMCf3JDsN83kRE7dHiL2ADKEW2bV4akyNIo/hnzDopBi6USQPx1//wQ1BoqVaYKheK+AUHWh7i91zSaVy665OaZO+cA8xXKBCVSTmL617pcsrN7+25FtILn0cdTxGG+WbKrZWgG+WWrXeZpJ6idLlJbm+DK0EQUS/aTocxro7Al6/Grg2jG5bT9pARk35zIJfrzX6Chun6OBLXVxUInVk1CFeLhPFvK3qNqb3DhGXQ5nN0eIjhI/YxGR6omlMFpXRUuLGVmEQNx5R24u1Nzok1DqErGLEO9yW6Wj1e4U6D1M5NslOwula9T8o4ltw==</ds:SignatureValue>
<ds:KeyInfo>
<ds:KeyValue>
<ds:RSAKeyValue>
<ds:Modulus>3yi31ivq8LPcR+e7d52IoY576QqrlkyriwKEPcPp1mkOJ5ScgtTEyEAqz2doE0aQJ8TEY2phzRIk5nnkM0ZE6DkK+1IeLW5JhAqJlpAgjbsJMcPTX6ftjQtOWyB4r5pgxG5BagiYHyLUiVavO3lP7DsaNLrKA6sRBvan+19DnZN9q7vvdG3fnioZNh91EZRsG8ZbBIuG6wp2ctqWcdTHBlEtCO4cmk5tiU6IdxoXiLR1PdrBq336t11dS+0iGVaBXNz+An/AuslVw0rwB+JxEtggrAL+ZXJ9WkVPZh9gQMacSrz9LGZN6lv06QVXI1wJZgG/cjjL2tWy8iyoB4VN6w==</ds:Modulus>
<ds:Exponent>AQAB</ds:Exponent>
</ds:RSAKeyValue>
</ds:KeyValue>
<ds:X509Data>
<ds:X509Certificate>MIIGojCCBIqgAwIBAgIRAINdCG9+zvGuT8bNnY1bUJswDQYJKoZIhvcNAQELBQAwZTELMAkGA1UE
BhMCVVMxEzARBgNVBAgMCk5FVyBKRVJTRVkxETAPBgNVBAcMCEZvcnQgTGVlMQ4wDAYDVQQKDAVN
YXhNRDEeMBwGA1UEAwwVTWF4TUQgVExTIFJTQSBFVkFMIENBMB4XDTI0MDEyMjE5MzE1OFoXDTI1
MDEyMTE5MzE1OFowcDELMAkGA1UEBhMCVVMxITAfBgNVBAoMGEFic3RyYWN0aXZlIEhlYWx0aCwg
SW5jLjEZMBcGA1UECwwQQ0FSRVFVQUxJVFktVEVTVDEjMCEGA1UEAwwaY2FyZXF1YWxpdHkuYWJz
dHJhY3RpdmUuYWkwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIBAQDfKLfWK+rws9xH57t3
nYihjnvpCquWTKuLAoQ9w+nWaQ4nlJyC1MTIQCrPZ2gTRpAnxMRjamHNEiTmeeQzRkToOQr7Uh4t
bkmEComWkCCNuwkxw9Nfp+2NC05bIHivmmDEbkFqCJgfItSJVq87eU/sOxo0usoDqxEG9qf7X0Od
k32ru+90bd+eKhk2H3URlGwbxlsEi4brCnZy2pZx1McGUS0I7hyaTm2JToh3GheItHU92sGrffq3
XV1L7SIZVoFc3P4Cf8C6yVXDSvAH4nES2CCsAv5lcn1aRU9mH2BAxpxKvP0sZk3qW/TpBVcjXAlm
Ab9yOMva1bLyLKgHhU3rAgMBAAGjggJAMIICPDAOBgNVHQ8BAf8EBAMCBaAwHQYDVR0lBBYwFAYI
KwYBBQUHAwEGCCsGAQUFBwMCMEUGA1UdEQQ+MDyCGmNhcmVxdWFsaXR5LmFic3RyYWN0aXZlLmFp
hh5IVFRQOi8vV1dXLkNBUkVRVUFMSVRZLk9SRy9WMDEwDAYDVR0TAQH/BAIwADAdBgNVHQ4EFgQU
aeChng95VdF8VTR9dYonqCfjBLEwHwYDVR0jBBgwFoAUtovLrSuyoOG2jMJKizXgt4QpwVYwbgYI
KwYBBQUHAQEEYjBgMF4GCCsGAQUFBzAChlJodHRwOi8vd3d3LmRpcmVjdG1kZW1haWwuY29tL0NB
X1JlcG9zaXRvcnkvTWF4TURUbHNSc2FFVkFMQ0EvTWF4TURUbHNSc2FFVkFMQ0EucDdiMGMGA1Ud
HwRcMFowWKBWoFSGUmh0dHA6Ly93d3cuZGlyZWN0bWRlbWFpbC5jb20vQ0FfUmVwb3NpdG9yeS9N
YXhNRFRsc1JzYUVWQUxDQS9NYXhNRFRsc1JzYUVWQUxDQS5jcmwwgaAGA1UdIASBmDCBlTAMBgor
BgEEAYLBWwEFMAwGCisGAQQBgsFbAwEwDAYKKwYBBAGCwVsCBTBpBgsrBgEEAYLBWwACADBaMFgG
CCsGAQUFBwIBFkxodHRwczovL3d3dy5kaXJlY3R0cnVzdC5vcmcvcmVzb3VyY2VzL2NvbXBsaWFu
Y2UtYW5kLWtleS1wb2xpY2llcyNjb21wbGlhbmNlMA0GCSqGSIb3DQEBCwUAA4ICAQCZzgvvC3mp
vil434M4VSl90WhZXwLfBsFCZjjGwkWAZDAuNHGIzQTq7EmVBmHTlswz5xg3tNw31Oq6GOO7yuUt
vWip5OMK1X2jxclyTENyZUNf38Sj0TjC0/JYDTaqlksjoHaUaXCuY/qHVek13lCGjRxj4HSV8eCo
NFyhyn4rujNcpelomxFXPo5ahZ9sqoF64AxJ4I8FCmHUVAXe8wjOAWkXSw8tTVl3Mt9HG6zeTPAw
mnV0pJtcwIB5UQnaZn01MoGvaxP0ddcpLADOb3+uDaKV3XJT72K3nzlc7HEQlkKgrSaEvGhrcu2a
Rkr2QgyRzeaF1Oo907G+Yt6OdjBf8ctgrhCsDW1GTeGogeE0xNyvSPjXRv3eUWVMR/IUERfmhnEB
m1+kpCDEXplAXNcaM+QijayufKDFX8dPd6wCk39QuSDDemdpof0fdbv4vrJ6AU9uqhtgIyOcnLJe
z2JLnIcNCXQ2icAEKnN1UG7zfbdd1ghuD+aIxKLzN0YNXbtf5XzOIFMvf3MDrgn1iEvGnCAToOK6
JUjElJNumAsJbh/gvSgRh5RP7s/c7A14KGrNoSPrFjjjLGUhISPpREeCZwb7ZdaWiaaaQDzq++eS
9eCUbnosagtLxOq2ndJQxdf0YaKvPd5QDA0u1Km+jThzJXs1xZOTIpWDQFN1KMENyg==
</ds:X509Certificate>
</ds:X509Data>
</ds:KeyInfo>
</ds:Signature>
<samlns:Subject>
<samlns:NameID Format="urn:oasis:names:tc:SAML:1.1:nameid-format:X509SubjectName">CN=ourorgname, OU=IT, O=ourorgname, L=NewYork</samlns:NameID>
<samlns:SubjectConfirmation Method="urn:oasis:names:tc:SAML:2.0:cm:holder-of-key">
<samlns:SubjectConfirmationData/>
</samlns:SubjectConfirmation>
</samlns:Subject>
<samlns:Conditions NotBefore="2024-03-08T16:26:59.802Z" NotOnOrAfter="2024-03-08T17:26:59.802Z">
<samlns:AudienceRestriction>
<samlns:Audience>http://ihe.connectathon.XUA/X-ServiceProvider-IHE-Connectathon</samlns:Audience>
</samlns:AudienceRestriction>
</samlns:Conditions>
<samlns:AuthnStatement AuthnInstant="2024-03-08T16:26:59.802Z">
<samlns:AuthnContext>
<samlns:AuthnContextClassRef>urn:oasis:names:tc:SAML:2.0:ac:classes:Password</samlns:AuthnContextClassRef>
</samlns:AuthnContext>
</samlns:AuthnStatement>
<samlns:AttributeStatement>
<samlns:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:subject-id" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic" FriendlyName="XSPA Subject">
<samlns:AttributeValue xsi:type="xs:string">valid</samlns:AttributeValue>
</samlns:Attribute>
<samlns:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="XSPA Organization">
<samlns:AttributeValue xsi:type="xs:string">ourorgname</samlns:AttributeValue>
</samlns:Attribute>
<samlns:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:organization-id" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="XSPA Organization ID">
<samlns:AttributeValue xsi:type="xs:string">urn:oid:0.0</samlns:AttributeValue>
</samlns:Attribute>
<samlns:Attribute Name="urn:ihe:iti:xca:2010:homeCommunityId" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="XCA Home Community ID">
<samlns:AttributeValue xsi:type="xs:string">urn:oid:0.0</samlns:AttributeValue>
</samlns:Attribute>
<samlns:Attribute Name="urn:oasis:names:tc:xspa:1.0:subject:purposeofuse" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="Purpose of Use">
<samlns:AttributeValue xsi:type="xs:string">TREATMENT</samlns:AttributeValue>
</samlns:Attribute>
<samlns:Attribute Name="urn:oasis:names:tc:xacml:2.0:subject:role" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:uri" FriendlyName="HL7 Role">
<samlns:AttributeValue xsi:type="xs:string">MedicalDoctor</samlns:AttributeValue>
</samlns:Attribute>
</samlns:AttributeStatement>
</samlns:Assertion>
<ds:Signature xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
<ds:SignedInfo>
<ds:CanonicalizationMethod Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
<ds:SignatureMethod Algorithm="http://www.w3.org/2001/04/xmldsig-more#rsa-sha256"/>
<ds:Reference URI="#_0">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>07nepYP63amsgcufvkLHuLlIMOGG8r2g54JkSTx/d5g=</ds:DigestValue>
</ds:Reference>
<ds:Reference URI="#_1">
<ds:Transforms>
<ds:Transform Algorithm="http://www.w3.org/2001/10/xml-exc-c14n#"/>
</ds:Transforms>
<ds:DigestMethod Algorithm="http://www.w3.org/2001/04/xmlenc#sha256"/>
<ds:DigestValue>0x8aRv+y3gfky0FaUMxHWPaNmwc4fYq0anpvHyees/0=</ds:DigestValue>
</ds:Reference>
</ds:SignedInfo>
<ds:SignatureValue>wmP3xxadW9s6VcLlDOTICH1MWS4aH4WDvu3saw/JmBfDROY7V63q6yRRGCSV24ywtVZZ+euL1jkxHb44QtSwKPH6SB1ZSMihapSJJjLAUM74TFhsb2is+NAqqBcvux/U+CXD5TSKxTKJgBFGDHUCI8jEaF8+SBx1awpWpVkcxQQD0fVMiOpyDaqex6UfAVuagDho7zHYr3/jKhLvlqzBpelYS0W9P7V7PeSqBGjjyPm/YOCQ1T7K2PIERISyP335JbWgn14GSgyaR/QQVCr00MnzwlD+sJBMNtFwfLYi7f51f/Q28HBr6h/Yl6/C87KXL7Y84S0d00Y0HNfS5nNODQ==</ds:SignatureValue>
<ds:KeyInfo>
<wsse:SecurityTokenReference TokenType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0">
<wsse:KeyIdentifier ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID">_1d69ba1f-e047-45e9-a2be-d21ec70827eb</wsse:KeyIdentifier>
</wsse:SecurityTokenReference>
</ds:KeyInfo>
</ds:Signature>
</wsse:Security>
<wsa:Action>urn:hl7-org:v3:PRPA_IN201305UV02:CrossGatewayPatientDiscovery</wsa:Action>
<wsa:MessageID>urn:uuid:b0fe7b2a-9129-4c11-992c-1fcd1201dcbc</wsa:MessageID>
<wsa:To xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" wsu:Id="_0">replacementto.org</wsa:To>
<wsa:Action soap-env:mustUnderstand="1">urn:hl7-org:v3:PRPA_IN201305UV02:CrossGatewayPatientDiscovery</wsa:Action>
<wsa:ReplyTo>
<wsa:Address>ourownurl.org</wsa:Address>
</wsa:ReplyTo>
</soap-env:Header>
<soap-env:Body>
<ns0:PRPA_IN201305UV02 xmlns:ns0="urn:hl7-org:v3" ITSVersion="XML_1.0">
<ns0:id extension="2211" root="4d271a79-0b22-439a-b64a-72d961a6cd38"/>
<ns0:creationTime value="20240308162659"/>
<ns0:interactionId extension="PRPA_IN201305UV02" root="2.16.840.1.113883.1.6"/>
<ns0:processingCode code="P"/>
<ns0:processingModeCode code="T"/>
<ns0:acceptAckCode code="AL"/>
<ns0:receiver typeCode="RCV">
<ns0:device classCode="DEV" determinerCode="INSTANCE">
<ns0:id root="ourownoid"/>
<ns0:asAgent classCode="AGNT">
<ns0:representedOrganization classCode="ORG" determinerCode="INSTANCE">
<ns0:id root="ourownoid"/>
</ns0:representedOrganization>
</ns0:asAgent>
</ns0:device>
</ns0:receiver>
<ns0:sender typeCode="SND">
<ns0:device classCode="DEV" determinerCode="INSTANCE">
<ns0:id root="2.16.840.1.113883.3.9918"/>
</ns0:device>
</ns0:sender>
<ns0:controlActProcess classCode="CACT" moodCode="EVN">
<ns0:code code="PRPA_TE201305UV02" codeSystemName="2.16.840.1.113883.1.6"/>
<ns0:authorOrPerformer typeCode="AUT">
<ns0:assignedPerson classCode="ASSIGNED"/>
</ns0:authorOrPerformer>
<ns0:queryByParameter>
<ns0:queryId root="61023518-3f6e-4ad5-a465-87082e96b66f"/>
<ns0:statusCode code="new"/>
<ns0:responseModalityCode code="R"/>
<ns0:responsePriorityCode code="I"/>
<ns0:matchCriterionList/>
<ns0:parameterList>
<ns0:livingSubjectAdministrativeGender>
<ns0:value code="male"/>
<ns0:semanticsText>LivingSubject.AdministrativeGender</ns0:semanticsText>
</ns0:livingSubjectAdministrativeGender>
<ns0:livingSubjectBirthTime>
<ns0:value value="1955-06-27"/>
<ns0:semanticsText>LivingSubject.birthTime</ns0:semanticsText>
</ns0:livingSubjectBirthTime>
<ns0:livingSubjectName>
<ns0:value>
<ns0:family>Hickle134</ns0:family>
<ns0:given>Abram53</ns0:given>
</ns0:value>
<ns0:semanticsText>LivingSubject.name</ns0:semanticsText>
</ns0:livingSubjectName>
</ns0:parameterList>
</ns0:queryByParameter>
</ns0:controlActProcess>
</ns0:PRPA_IN201305UV02>
</soap-env:Body>
</soap-env:Envelope>
</code></pre>
<p>This is the portion of our code for signing the <code>Timestamp</code> and <code>To</code> sections of the XML:</p>
<pre><code>
# set reference ID for To tag, where toTag is extracted from header generated by zeep.client.create_message
toTag.attrib[QName("http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd", "Id")] = "_0"
# Add timestamp to the headers
time_created = self.issued_at.strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
time_expires = (self.issued_at + timedelta(hours=1)).strftime('%Y-%m-%dT%H:%M:%S.%f')[:-3] + 'Z'
timestamp = f'<Timestamp xmlns:wsu=\"http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd\" wsu:Id="_1"><Created>{time_created}</Created><Expires>{time_expires}</Expires></Timestamp>'
# Create the SecurityTokenReference element
nsmap = {"wsse": "http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-secext-1.0.xsd", "dsig":"http://www.w3.org/2000/09/xmldsig#"}
E = ElementMaker(namespace=nsmap["wsse"], nsmap=nsmap)
key_iden_ref = E("KeyIdentifier", ValueType="http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLID")
key_iden_ref.text = "_"+refID
security_token_reference = E.SecurityTokenReference(key_iden_ref)
security_token_reference.attrib['TokenType'] = 'http://docs.oasis-open.org/wss/oasis-wss-saml-token-profile-1.1#SAMLV2.0'
no_nsE = ElementMaker(namespace=nsmap["dsig"], nsmap=nsmap)
key_info = no_nsE.KeyInfo(security_token_reference)
securityTag.insert(0,etree.fromstring(timestamp))
# Sign the timestamp
signedXml = XMLSigner(method=signxml.methods.detached, c14n_algorithm="http://www.w3.org/2001/10/xml-exc-c14n#").sign(soap_etree, key=self.key, cert=self.cert, key_info=key_info, reference_uri=["#_0","#_1"], always_add_key_value=False)
verified_data = XMLVerifier().verify(signedXml, x509_cert=self.cert).signed_xml
securityTag.append(signedXml)
return etree.tostring(soap_etree, encoding='utf-8', xml_declaration=False)
</code></pre>
<p>If you feel other code is needed to help solve the issue, I'm happy to edit and include it in the post.</p>
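As a debugging aid (not part of the original code): signxml computes each reference digest over the exclusive-C14N serialization of the referenced element, so recomputing a digest yourself with lxml can show which of the <code>#_0</code>/<code>#_1</code> references disagrees with what the verifier sees. A sketch, assuming lxml is available:

```python
import base64
import hashlib

from lxml import etree

def exc_c14n_sha256_digest(element) -> str:
    """Base64 SHA-256 digest of the exclusive-C14N form of `element`."""
    canonical = etree.tostring(element, method="c14n", exclusive=True)
    return base64.b64encode(hashlib.sha256(canonical).digest()).decode("ascii")

# Compare the result for a referenced element (e.g. the wsu:Id="_1" Timestamp,
# located with your own find/XPath) against the <ds:DigestValue> the signer wrote.
print(exc_c14n_sha256_digest(etree.fromstring("<a><b>text</b></a>")))
```

If the recomputed digest already differs from the signed one, the mismatch was introduced before signing (e.g. by later mutation of the tree); if it matches, the breakage happens during serialization or transport.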
|
<python><soap><saml><zeep>
|
2024-03-08 17:30:41
| 1
| 571
|
rocket_boomerang_19
|
78,129,322
| 13,578,682
|
datetimes and dates should never be equal
|
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, date
class MyDate(date):
pass
print(MyDate(2024, 3, 8) == datetime(2024, 3, 8))
print(date(2024, 3, 8) == datetime(2024, 3, 8))
</code></pre>
<p>Expected output:</p>
<pre class="lang-py prettyprint-override"><code>False
False
</code></pre>
<p>Actual output:</p>
<pre class="lang-py prettyprint-override"><code>True
False
</code></pre>
<p>Why does subclassing <code>datetime.date</code> change the <code>__eq__</code> result?</p>
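One way to observe where the two comparisons dispatch differently (my own illustration, hedged by the Python data model: the reflected comparison is tried first only when the right operand's type is a proper subclass of the left operand's type):

```python
from datetime import datetime, date

class MyDate(date):
    pass

d, dt = MyDate(2024, 3, 8), datetime(2024, 3, 8)

# datetime subclasses date, but not MyDate, so the operand whose __eq__
# runs first differs between the two expressions in the question.
print(issubclass(datetime, date))    # True:  date == dt tries datetime.__eq__ first
print(issubclass(datetime, MyDate))  # False: d == dt tries date.__eq__ first,
                                     # which compares only the date part
print(d == dt, date(2024, 3, 8) == dt)
```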
|
<python>
|
2024-03-08 17:23:47
| 1
| 665
|
no step on snek
|
78,129,071
| 6,197,439
|
Break/wrap long text of column names in Pandas dataframe plain text to_string output?
|
<p>Consider this example:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"LIDSA": [0, 1, 2, 3],
"CAE": [3, 5, 7, 9],
"FILA": [1, 2, 3, 4], # 2 is default, so table idx 1 is default
"VUAMA": [0.5, 1.0, 1.5, 2.0],
})
df_colnames = { # https://stackoverflow.com/q/48243818
"LIDSA": "Lorem ipsum dolor sit amet",
"CAE": "Consectetur adipiscing elit",
"FILA": "Fusce imperdiet libero arcu",
"VUAMA": "Vitae ultricies augue molestie ac",
}
# "Pandas autodetects the size of your terminal window if you set pd.options.display.width = 0" https://stackoverflow.com/q/11707586
with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.width', 0, 'max_colwidth', 20, 'display.float_format', "{:.2f}".format):
df_str = df.rename(df_colnames,axis=1).to_string()
print(df_str)
</code></pre>
<p>This results in the following terminal stdout printout (the terminal was 111 characters wide at the time):</p>
<pre class="lang-none prettyprint-override"><code> Lorem ipsum dolor sit amet Consectetur adipiscing elit Fusce imperdiet libero arcu Vitae ultricies augue
molestie ac
0 0 3 1
0.50
1 1 5 2
1.00
2 2 7 3
1.50
3 3 9 4
2.00
</code></pre>
<p>So, only the last column got line-broken (and correspondingly, the values for it). I would have preferred that each long column name gets line-broken / word-wrapped at say 20 characters, and then the values output correspondingly, something like:</p>
<pre class="lang-none prettyprint-override"><code> Lorem ipsum dolor Consectetur Fusce imperdiet Vitae ultricies
sit amet adipiscing elit libero arcu augue molestie ac
0 0 3 1 0.50
1 1 5 2 1.00
2 2 7 3 1.50
3 3 9 4 2.00
</code></pre>
<p>I thought <code>'max_colwidth', 20</code> would do that, but apparently it doesn't.</p>
<p>I even tried adding explicit linebreaks in the long column names, but they just get rendered as <code>\n</code>, and the column name is still in one line (as noted also in <a href="https://stackoverflow.com/questions/36331369/linebreaks-in-pandas-column-names">Linebreaks in pandas column names</a>)</p>
<p>So, is it possible to "word-wrap"/"line break" long column names in Pandas for plain text string output?</p>
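For what it's worth, since <code>to_string</code> has no header-wrapping option, one workaround is to render the table yourself: wrap each column name with <code>textwrap</code> and print the header block above right-aligned data rows. This is only a sketch under the assumption that fixed-width, right-aligned columns are acceptable (all helper names are mine):

```python
import textwrap

import pandas as pd

df = pd.DataFrame({
    "Lorem ipsum dolor sit amet": [0, 1, 2, 3],
    "Consectetur adipiscing elit": [3, 5, 7, 9],
    "Fusce imperdiet libero arcu": [1, 2, 3, 4],
    "Vitae ultricies augue molestie ac": [0.5, 1.0, 1.5, 2.0],
})

def to_string_wrapped(frame, width=18, sep="  ", fmt="{}".format):
    """Plain-text render with column names word-wrapped to `width` chars."""
    wrapped = {c: textwrap.wrap(str(c), width) for c in frame.columns}
    depth = max(len(lines) for lines in wrapped.values())
    body = {c: [fmt(v) for v in frame[c]] for c in frame.columns}
    # column width = widest of the wrapped header lines and formatted values
    col_w = {c: max(len(s) for s in wrapped[c] + body[c]) for c in frame.columns}
    idx = [str(i) for i in frame.index]
    idx_w = max(len(s) for s in idx)
    out = []
    for row in range(depth):  # header block, bottom-aligned like pandas
        cells = []
        for c in frame.columns:
            pad = depth - len(wrapped[c])
            cell = wrapped[c][row - pad] if row >= pad else ""
            cells.append(cell.rjust(col_w[c]))
        out.append(" " * idx_w + sep + sep.join(cells))
    for r, label in enumerate(idx):  # data rows
        out.append(label.rjust(idx_w) + sep
                   + sep.join(body[c][r].rjust(col_w[c]) for c in frame.columns))
    return "\n".join(out)

print(to_string_wrapped(df, fmt="{:.2f}".format))
```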
|
<python><pandas><dataframe><word-wrap>
|
2024-03-08 16:38:25
| 2
| 5,938
|
sdbbs
|
78,129,051
| 4,269,851
|
Make this python loop algorithm shorter (mathematics involved)
|
<p>For the sake of learning, I opt not to use any additional libraries.</p>
<p>The goal of this algorithm is to break the list <code>lst</code> into chunks of 8 records each,
and additionally load the values from the dictionary <code>dct</code> for records 1-8, 9-16 and 17-24 of <code>lst</code>.</p>
<p>Trying to avoid unnecessary <code>for</code> loops where possible, I came up with this solution.</p>
<pre><code>dct = {1:'one', 2:'two', 3:'three', 4:'four', 5:'five', 6:'six', 7:'seven', 8:'eight'}
lst = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24]
loop = 0
for record in lst:
print(f'record id: {record:<10} dct id: {record-loop:<10} dct value: {dct.get(record - loop)}')
if record % 8 == 0:
loop += 8
print('--- loop finished ---')
</code></pre>
<p>This outputs:</p>
<pre><code>record id: 1 dct id: 1 dct value: one
record id: 2 dct id: 2 dct value: two
record id: 3 dct id: 3 dct value: three
record id: 4 dct id: 4 dct value: four
record id: 5 dct id: 5 dct value: five
record id: 6 dct id: 6 dct value: six
record id: 7 dct id: 7 dct value: seven
record id: 8 dct id: 8 dct value: eight
--- loop finished ---
record id: 9 dct id: 1 dct value: one
record id: 10 dct id: 2 dct value: two
record id: 11 dct id: 3 dct value: three
record id: 12 dct id: 4 dct value: four
record id: 13 dct id: 5 dct value: five
record id: 14 dct id: 6 dct value: six
record id: 15 dct id: 7 dct value: seven
record id: 16 dct id: 8 dct value: eight
--- loop finished ---
record id: 17 dct id: 1 dct value: one
record id: 18 dct id: 2 dct value: two
record id: 19 dct id: 3 dct value: three
record id: 20 dct id: 4 dct value: four
record id: 21 dct id: 5 dct value: five
record id: 22 dct id: 6 dct value: six
record id: 23 dct id: 7 dct value: seven
record id: 24 dct id: 8 dct value: eight
--- loop finished ---
</code></pre>
<p>Can someone with more mathematical experience suggest how I can get rid of the variable <code>loop</code> altogether, along with this section,</p>
<pre><code>if record % 8 == 0:
loop += 8
</code></pre>
<p>and use a mathematical formula in place of <code>record - loop</code>?</p>
<p>If I simply use <code>dct.get(record % 8)</code> I get the wrong result, since <code>record % 8</code> produces this sequence</p>
<pre><code>1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0 1 2 3 4 5 6 7 0
</code></pre>
<p>I need</p>
<pre><code>1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8
</code></pre>
<p>Of course I could use a barbaric fix like <code>((record - 1) % 8) + 1</code> to make it work, but I am sure there is a cleaner, more readable way of doing it.</p>
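For reference, the <code>(record - 1) % 8 + 1</code> expression is in fact the standard 1-based modulo idiom, and it removes both <code>loop</code> and the <code>if</code> bookkeeping entirely. A minimal sketch of the loop body rewritten that way:

```python
dct = {1: 'one', 2: 'two', 3: 'three', 4: 'four',
       5: 'five', 6: 'six', 7: 'seven', 8: 'eight'}
lst = list(range(1, 25))
CHUNK = 8

for record in lst:
    dct_id = (record - 1) % CHUNK + 1   # maps 1..24 -> 1..8, 1..8, 1..8
    print(f'record id: {record:<10} dct id: {dct_id:<10} dct value: {dct[dct_id]}')
    if record % CHUNK == 0:
        print('--- loop finished ---')
```

The shift-by-one trick works because subtracting 1 moves the sequence onto 0-based arithmetic where `%` behaves naturally, and adding 1 moves it back.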
|
<python><loops><mathematical-optimization><mathematical-expressions>
|
2024-03-08 16:35:10
| 5
| 829
|
Roman Toasov
|
78,128,950
| 1,108,872
|
Why do I get back different eigenvectors than I put in?
|
<p>I am trying to build a matrix A from a set of eigenvectors and eigenvalues, and then reverse the operation to get the eigenvectors and eigenvalues back. I am not generally able to recover the eigenvalue-eigenvector combination that I start with, and I'm having trouble understanding why.</p>
<p>I can fully appreciate that this might be a math problem rather than a programming problem -- but I figured I would ask here, as it's equally possible I'm just doing something foolish with numpy.</p>
<pre><code>evecs = np.array([[6.12e-32, 0.00, 1.0], [0.71, 0.71, 0.00], [-0.71, 0.71, 0.00]])
evals =np.diag(np.array([0.6, 0.3, 0.1]))
A = np.dot(np.dot(evecs, evals), np.linalg.inv(evecs))
eigenvalues, eigenvectors = np.linalg.eig(A)
</code></pre>
<p>This returns:</p>
<pre><code>print (eigenvalues)
[0.1 0.6 0.3]
print (eigenvectors)
[[ 1.000000000000000e+00 6.095061268819254e-32 5.775657482482331e-49]
[ 0.000000000000000e+00 7.071067811865475e-01 7.071067811865475e-01]
[ 0.000000000000000e+00 -7.071067811865475e-01 7.071067811865475e-01]]
</code></pre>
<p>I understand that the eigenvalues came back unsorted, but still, the eigenvector corresponding with the largest eigenvalue is now:</p>
<pre><code>print (eigenvectors[:,1])
[ 6.095061268819254e-32 7.071067811865475e-01 -7.071067811865475e-01]
</code></pre>
<p>Which is not the same as v1 = [0, 0, 1] that was sent in. Why are the eigenvectors switching axes like this? Is there some way that I can recover the original orientation of v1?</p>
<p>While searching around for a solution, I also read several places that computing the inverse should be avoided -- is there a better (more stable) way of moving from the eigenspace to A (and back again)?</p>
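One detail worth checking before suspecting <code>eig</code>: NumPy treats the <em>columns</em> of the eigenvector matrix as eigenvectors, so if the rows of <code>evecs</code> above were meant to be v1, v2, v3, then A was actually built from the column vectors. A hedged sketch that uses the column convention, sorts the output, and replaces the explicit inverse with a solve:

```python
import numpy as np

# np.linalg.eig treats COLUMNS as eigenvectors, so v1 = [0, 0, 1] goes
# in column 0 (the question's `evecs` holds the vectors in its rows).
V = np.array([[0.0, 0.71, -0.71],
              [0.0, 0.71,  0.71],
              [1.0, 0.00,  0.00]])
L = np.diag([0.6, 0.3, 0.1])

# A = V @ L @ inv(V), computed without forming the inverse explicitly:
# transpose A @ V = V @ L to get V.T @ A.T = (V @ L).T, then solve for A.T
A = np.linalg.solve(V.T, (V @ L).T).T

w, U = np.linalg.eig(A)
order = np.argsort(w.real)[::-1]  # eig returns eigenvalues in no particular order
w, U = w[order], U[:, order]
print(w)        # 0.6, 0.3, 0.1 once sorted
print(U[:, 0])  # matches the [0, 0, 1] we put in, up to sign and scale
```

Note that eigenvectors are only defined up to a scalar, so sign flips and renormalization on the way back are expected even when everything is set up correctly.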
|
<python><numpy><linear-algebra><eigenvalue><eigenvector>
|
2024-03-08 16:12:56
| 1
| 635
|
Nordlendingen
|
78,128,890
| 6,197,439
|
Pandas dataframe title/caption in plain text to_string output?
|
<p>As an example:</p>
<pre class="lang-python prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"Hello World": [1, 2, 3, 4],
"And Some More": [10.0, 20.0, 30.0, 40.0],
})
df_caption = "Table 1: My Table"
df.style.set_caption(df_caption) # only works for HTML; https://stackoverflow.com/q/57958432
with pd.option_context('display.max_rows', None, 'display.max_columns', None, 'display.width', None, 'max_colwidth', 50, 'display.float_format', "{:.2f}".format):
df_str = df.to_string()
print(df_str)
</code></pre>
<p>... outputs:</p>
<pre class="lang-none prettyprint-override"><code> Hello World And Some More
0 1 10.00
1 2 20.00
2 3 30.00
3 4 40.00
</code></pre>
<p>... and clearly, there is no table title/caption in the plain text output of <code>.to_string()</code>.</p>
<p>Sure I can just <code>print(df_caption)</code> myself separately - but is it otherwise somehow possible to add dataframe (table) caption on the Pandas <code>DataFrame</code> object, so that it is output in the string generated by <code>.to_string()</code>?</p>
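One hedged workaround (the helper name here is mine, not a pandas API): <code>DataFrame.attrs</code> is a documented, if experimental, place to stash frame-level metadata, so a small wrapper can carry the caption with the frame and prepend it to the <code>to_string()</code> output:

```python
import pandas as pd

df = pd.DataFrame({
    "Hello World": [1, 2, 3, 4],
    "And Some More": [10.0, 20.0, 30.0, 40.0],
})
df.attrs["caption"] = "Table 1: My Table"   # attrs travels with the frame

def to_string_with_caption(frame, **to_string_kwargs):
    """frame.to_string(), with any caption stored in frame.attrs on top."""
    body = frame.to_string(**to_string_kwargs)
    caption = frame.attrs.get("caption")
    return f"{caption}\n{body}" if caption else body

print(to_string_with_caption(df, float_format="{:.2f}".format))
```

This keeps the caption attached to the DataFrame object itself, so any code path that renders the frame can pick it up, unlike a separate <code>print(df_caption)</code>.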
|
<python><pandas><dataframe><pandas-styles>
|
2024-03-08 16:02:14
| 2
| 5,938
|
sdbbs
|
78,128,878
| 20,122,390
|
How can I apply a validator with each_item in Pydantic 2?
|
<p>I'm migrating a microservice from Pydantic 1 to Pydantic 2. Previously I had this:</p>
<pre><code>class ComplementQuery(BaseModel):
imports: List[str] = Field(default_factory=list)
subquery: str = Field(...)
@validator("imports", pre=True, each_item=True)
def complement_imports(cls, value: str) -> str:
return f'import "{value}"'
</code></pre>
<p>I read the documentation and saw that <code>@field_validator</code> must now be used, so I replaced the validator and changed <code>pre=True</code> to <code>mode="before"</code>. But I'm not sure how <code>each_item</code> should be addressed. Now I have something like this:</p>
<pre><code>class ComplementQuery(BaseModel):
imports: List[Annotated[str, Field(default_factory=List)]]
subquery: str = Field(...)
@field_validator("imports", mode="before")
@classmethod
def complement_imports(cls, value: str) -> str:
return f'import "{value}"'
</code></pre>
<p>Will the complement_imports validator be applied to each of the imports items?
Or should I put the BeforeValidator of pydantic.functional_validators in the Field? I am a little confused.</p>
<p>Maybe it should be something like this:</p>
<pre><code>def complement_imports(v: str) -> str:
return f'import "{v}"'
class ComplementQuery(BaseModel):
imports: List[
Annotated[str, BeforeValidator(complement_imports), Field(default_factory=List)]
]
subquery: str = Field(...)
</code></pre>
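If it helps, a self-contained sketch of that last variant: in Pydantic 2 the per-item behavior of <code>each_item=True</code> is expressed by attaching the validator to the item type via <code>Annotated</code>, while the default stays on the field (hedged: this is how I'd expect it to apply, and note the original's <code>Field(default_factory=List)</code> with a capital L would make the default the typing object rather than an empty list):

```python
from typing import Annotated, List

from pydantic import BaseModel, BeforeValidator, Field

def complement_imports(v: str) -> str:
    return f'import "{v}"'

# the per-item validator lives on the element type...
ImportLine = Annotated[str, BeforeValidator(complement_imports)]

class ComplementQuery(BaseModel):
    # ...and the default_factory stays on the field
    imports: List[ImportLine] = Field(default_factory=list)
    subquery: str

q = ComplementQuery(imports=["a", "b"], subquery="s")
print(q.imports)  # each element passed through complement_imports individually
```

By contrast, a plain <code>@field_validator("imports", mode="before")</code> without <code>Annotated</code> receives the whole list at once, so it would have to loop over the items itself.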
|
<python><pydantic><pydantic-v2>
|
2024-03-08 15:58:27
| 1
| 988
|
Diego L
|
78,128,775
| 6,435,921
|
Leverage broadcasting to make this subtraction more efficient
|
<p>I have an array <code>x</code> of shape <code>(N, T, d)</code>. I have two functions <code>f</code> and <code>g</code> which both take an array of shape <code>(some_dimension, d)</code> and return an array of shape <code>(some_dimension, )</code>.</p>
<p>I would like to compute <code>f</code> on all of <code>x</code>. This is simple: <code>f(x.reshape(-1, d))</code>.</p>
<p>I would then like to compute <code>g</code> only on the first slice of the second dimension, meaning <code>g(x[:, 0, :])</code>, and subtract this from the evaluation of <code>f</code> across all dimensions. This is exemplified in the code below.</p>
<h3>MWE - Inefficient Way</h3>
<pre><code>import numpy as np
# Reproducibility
seed = 1234
rng = np.random.default_rng(seed=seed)
# Generate x
N = 100
T = 10
d = 2
x = rng.normal(loc=0.0, scale=1.0, size=(N, T, d))
# In practice the functions are not this simple
def f(x):
return x[:, 0] + x[:, 1]
def g(x):
return x[:, 0]**2 - x[:, 1]**2
# Compute f on all the (flattened) array
fx = f(x.reshape(-1, d)).reshape(N, T)
# Compute g only on the first slice of second dimension. Here are two ways of doing so
gx = np.tile(g(x[:, 0])[:, None], reps=(1, T))
gx = np.repeat(g(x[:, 0]), axis=0, repeats=T).reshape(N, T)
# Finally compute what I really want to compute
diff = fx - gx
</code></pre>
<p>Is there a more efficient way? I feel that using broadcasting there must be, but I cannot figure it out.</p>
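For comparison, a sketch of the broadcasting variant: since <code>g(x[:, 0])</code> has shape <code>(N,)</code>, adding a trailing axis lets NumPy broadcast it against the <code>(N, T)</code> result of <code>f</code>, so no tiled/repeated copy is ever materialized:

```python
import numpy as np

rng = np.random.default_rng(seed=1234)
N, T, d = 100, 10, 2
x = rng.normal(size=(N, T, d))

def f(a):
    return a[:, 0] + a[:, 1]

def g(a):
    return a[:, 0] ** 2 - a[:, 1] ** 2

fx = f(x.reshape(-1, d)).reshape(N, T)   # shape (N, T)
diff = fx - g(x[:, 0])[:, None]          # (N, T) - (N, 1): broadcasts, no copy

# same values as the tile/repeat versions in the question
gx = np.repeat(g(x[:, 0]), repeats=T).reshape(N, T)
assert np.array_equal(diff, fx - gx)
```

`g(x[:, 0])[:, None]` (equivalently `np.newaxis`) is a view with shape `(N, 1)`, and the subtraction stretches it virtually along the second axis.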
|
<python><arrays><numpy><array-broadcasting>
|
2024-03-08 15:36:46
| 1
| 3,601
|
Euler_Salter
|
78,128,774
| 2,760,194
|
Intermittent Redis timeout issues when a Function attempts to connect to the server
|
<p>I might just be missing something very basic, but please bear with me.</p>
<p>I need to have my Azure Function App connect to an Azure VM that has Redis in it. The following is a simple version of my code with the irrelevant parts removed:</p>
<pre><code># main.py
import logging
import azure.functions as func
from . import redis_db
def main(req: func.HttpRequest) -> func.HttpResponse:
# Load secrets, etc here
# Then prepare the DB connection
redis_instance = redis_db.Db()
# Other checks here (client auth, etc)
# Do stuff with redis_instance
found = redis_instance.check_exists(mykey)
</code></pre>
<pre><code># redis_db.py
import logging
import os

import redis

from . import constants
class Db:
rd = None
def __init__(self):
global rd
if rd is None:
self.connect()
else:
logging.info("[RD] Reusing connection...")
self.rd = rd
def connect(self):
hostname = os.environ.get("REDIS_HOST")
port = os.environ.get("REDIS_PORT")
password = os.environ.get("REDIS_ACCESS_KEY")
logging.info("[Redis] Connecting...")
self.rd = redis.Redis(
host=hostname, port=port, password=password,
socket_connect_timeout=constants.timeout_seconds, socket_timeout=constants.timeout_seconds,
)
rd = self.rd
return self.rd
def check_exists(self, key):
logging.info(f"[Redis] Checking if code exists for {key} ({type(key)})")
try:
result = self.rd.get(name=key)
logging.info(f"[Redis] Check result: {result}")
except Exception as e:
logging.exception(f"[Redis] Exception: {type(e)} {e}")
return result
# Other redis methods
</code></pre>
<p>This works when I run it locally. I have our office VPN enabled, and the Redis VM has the IP in its whitelist. All is good. After deploying the function to Azure (under a <code>premium</code> App Service Plan), our sysadmin ensures that the function is included in the VNet, which should also enable it to communicate with the Redis VM. The function app includes the host, port, etc in its Application Settings, and are Key Vault references (with green check marks).</p>
<p>But all sorts of problems come up here. When I trigger the function, 90% of the time, the function throws a <code>redis.exceptions.TimeoutException</code> — but not all the time. It <em>occasionally</em> responds with a <code>200</code> and tells me whether or not my key exists, without issue. I added a "retry" behavior, which clears the <code>rd</code> variable to ensure a new connection is used when a Timeout happens. But the function still times out. Any amount of <code>socket_timeout</code> (and <code>socket_connect_timeout</code>) doesn't affect it, from 5s up to 30s. (When it does work, it returns in 3s or less.)</p>
<p>I tried to SSH into the function app using the Portal's Web SSH feature, but it doesn't have any <code>ping</code> or <code>telnet</code> commands, so I couldn't check. The VM and the Function App appear to be in the same VNet, upon checking.</p>
<p>I added a <code>self.rd.ping()</code> at the end of the <code>__init__()</code> and a try-catch surrounding this call, after learning that connections are not really "loaded" until you actually send a command. The ping timing out confirms that there is a connection issue, but the VNet should have solved this, right? But then why is it inconsistent?</p>
<p>What might be causing this sporadic behavior? Any ideas?</p>
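Since the portal's SSH console lacks <code>ping</code> and <code>telnet</code>, one low-tech check is a plain TCP probe from inside the function itself (stdlib only, no assumptions about redis-py); wiring something like this into a temporary route would show whether the VNet path to the VM is actually open at the moment a timeout happens:

```python
import logging
import socket
import time

def tcp_probe(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a raw TCP connection to host:port succeeds within `timeout`."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, int(port)), timeout=timeout):
            logging.info("[probe] %s:%s reachable in %.3fs",
                         host, port, time.monotonic() - start)
            return True
    except OSError as exc:
        logging.warning("[probe] %s:%s failed after %.3fs: %s",
                        host, port, time.monotonic() - start, exc)
        return False
```

If the probe itself fails intermittently, the problem is at the network layer (VNet integration, NSG, SNAT port exhaustion are the usual suspects) rather than in the Redis client configuration.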
|
<python><azure><redis><azure-functions><azure-virtual-machine>
|
2024-03-08 15:36:32
| 0
| 385
|
Cezille07
|
78,128,765
| 5,931,672
|
Running SQLAlchemy inside postgres docker container
|
<p>I have the following <code>Dockerfile</code>:</p>
<pre><code>FROM postgres
RUN apt-get update && \
apt-get install \
--yes \
--no-install-recommends \
python3-pip libpq-dev
RUN pip3 install \
--default-timeout=100 \
sqlalchemy sqlalchemy-utils sqlalchemy-utils psycopg2-binary
COPY database.py .
CMD python3 database.py
</code></pre>
<p>The database.py file is:</p>
<pre><code>from sqlalchemy import create_engine
from sqlalchemy.ext.declarative import declarative_base
from sqlalchemy.orm import sessionmaker
from sqlalchemy_utils import database_exists, create_database
SQLALCHEMY_DATABASE_URL = (
"postgresql+psycopg2://postgres:password@localhost/mydb"
)
# SQLALCHEMY_DATABASE_URL = "postgresql:///mydb"
engine = create_engine(SQLALCHEMY_DATABASE_URL)
if not database_exists(engine.url):
    print("Database did not exist. Creating it.")
create_database(engine.url)
SessionLocal = sessionmaker(autocommit=False, autoflush=False, bind=engine)
Base = declarative_base()
</code></pre>
<p>The build performs without issue, then I run:</p>
<p><code>docker run --name <SOME_NAME> -p 5432:5432 -e POSTGRES_PASSWORD=password -d <BUILD_TAG></code></p>
<p>And this happens:</p>
<pre><code>sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
connection to server at "localhost" (::1), port 5432 failed: Cannot assign requested address
Is the server running on that host and accepting TCP/IP connections?
(Background on this error at: https://sqlalche.me/e/20/e3q8)
</code></pre>
<p>I already tried specifying the port explicitly and using port 5433, for example. I also tried removing <code>-p 5432:5432</code>. I think the container exposes port 5432 to the outside world, but the server is not reachable from inside.</p>
|
<python><docker><sqlalchemy><dockerfile>
|
2024-03-08 15:35:08
| 1
| 4,192
|
J Agustin Barrachina
|
78,128,731
| 1,045,704
|
How to print a horizontal line on the paper of a thermal printer?
|
<p>In HTML5 we use the <code><hr></code> tag to draw a horizontal line. Now from <code>Javascript</code> I want to post data to a Python project for printing on a <strong>thermal printer</strong> :</p>
<pre><code>let data = "Prestation : steak \n";
data += "Quantite : 3 \n";
data += // here I want to draw a horizontal line
data += "Prestation : coca \n";
data += "Quantite : 1";
$.post("localhost:5000/print", JSON.stringify({"printer": "some_printer_name", "payload": data}), function(response) {
...
});
</code></pre>
<p>The Python code:</p>
<p>File printing.py:</p>
<pre><code>import logging
import win32print
import win32ui
import win32con
class Printing(object):
printer = None
# Constructor
def __init__(self, printer):
self.printer = printer
@staticmethod
def print(printer_name, text):
try:
printer_handle = win32print.OpenPrinter(printer_name)
dc = win32ui.CreateDC()
dc.CreatePrinterDC(printer_name)
dc.StartDoc(text)
dc.StartPage()
dc.TextOut(10, 10, text)
dc.EndPage()
dc.EndDoc()
win32print.ClosePrinter(printer_handle)
return True
except Exception:
logging.exception('Error occurred during printing')
return False
finally:
pass
</code></pre>
<p>File app.py, running as a background process waiting for any call from the JavaScript side:</p>
<pre><code>import logging
from flask import Flask, request, jsonify, make_response
from printing import Printing
app = Flask(__name__)
@app.route('/')
def index():
return 'If you see this, it means the printer service is up and running !'
@app.route('/print', methods=["POST"])
def print():
printer_name = request.json['printer']
data = request.json['payload']
result = Printing.print(printer_name, data)
if result:
response = make_response(
jsonify(
{"message": str("Printing success")}
),
200,
)
response.headers["Content-Type"] = "application/json"
return response
else:
response = make_response(
jsonify(
{"message": str("Printing failed"), "severity": "error"}
),
500,
)
response.headers["Content-Type"] = "application/json"
return response
if __name__ == '__main__':
app.run(debug=True)
</code></pre>
<p>So how can I print the horizontal line?</p>
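<p>For what it's worth, since <code>TextOut</code> prints plain text, one sketch (not the project's actual code) is to make the horizontal line part of the payload itself as a full-width row of dashes; the 32-character width below is an assumption for a typical 58 mm thermal printer:</p>

```python
# Sketch: build the receipt text with a dashed "horizontal rule".
# The width of 32 characters is an assumption about the printer/font.
def build_receipt(lines, width=32):
    """Join receipt lines, inserting a dashed rule wherever None appears."""
    out = []
    for line in lines:
        out.append("-" * width if line is None else line)
    return "\n".join(out)

payload = build_receipt([
    "Prestation : steak",
    "Quantite : 3",
    None,  # the horizontal line goes here
    "Prestation : coca",
    "Quantite : 1",
])
print(payload)
```

<p>For a graphically drawn line instead of text, the <code>win32ui</code> device context also exposes <code>MoveTo</code>/<code>LineTo</code>, but the dash row avoids any extra GDI drawing.</p>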
|
<javascript><python><jquery><thermal-printer>
|
2024-03-08 15:30:13
| 1
| 19,577
|
pheromix
|
78,128,694
| 13,328,625
|
Huggingface Seq2seqTrainer freezes on evaluation
|
<p>I'm currently trying to train a Whisper model by following the <a href="https://huggingface.co/blog/fine-tune-whisper#training-and-evaluation" rel="nofollow noreferrer">Fine Tune Whisper Model</a> tutorial. However, during the training phase where I call <code>trainer.train()</code>. I see the progress bar progresses through the training, but when it reaches the evaluation step defined at the training arguments, it will just freeze and the progress bar just stalls up. No error output, no nothing. And it will look like this.</p>
<p><a href="https://i.sstatic.net/GJkza.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GJkza.png" alt="enter image description here" /></a></p>
<p>I'm using Kaggle notebooks to write the code with GPU P100 turned on. Here are my training arguments leading up to the training function.</p>
<pre class="lang-py prettyprint-override"><code>from transformers import WhisperForConditionalGeneration
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")
model.config.forced_decoder_ids = None
model.config.suppress_tokens = []
model.generation_config.language = "en"
</code></pre>
<pre class="lang-py prettyprint-override"><code>from transformers import Seq2SeqTrainingArguments
training_args = Seq2SeqTrainingArguments(
output_dir="./whisper-small-eng-gen", # change to a repo name of your choice
per_device_train_batch_size=16,
gradient_accumulation_steps=1, # increase by 2x for every 2x decrease in batch size
learning_rate=1e-5,
warmup_steps=500,
max_steps=1000,
gradient_checkpointing=True,
fp16=True,
evaluation_strategy="steps",
per_device_eval_batch_size=8,
predict_with_generate=True,
generation_max_length=225,
save_steps=1000,
eval_steps=1000,
logging_steps=25,
report_to=["tensorboard"],
load_best_model_at_end=True,
metric_for_best_model="wer",
greater_is_better=False,
push_to_hub=True,
ignore_data_skip=True
)
</code></pre>
<pre class="lang-py prettyprint-override"><code>from transformers import Seq2SeqTrainer
trainer = Seq2SeqTrainer(
args=training_args,
model=model,
train_dataset=common_voice_train,
eval_dataset=common_voice_test,
data_collator=data_collator,
compute_metrics=compute_metrics,
tokenizer=processor.feature_extractor,
)
</code></pre>
<blockquote>
<p>Initially, the <code>max_steps</code> for training is 4000, and it always stalls at step 1001.</p>
</blockquote>
<p>I think it is also worth noting that my dataset is streamed, and it is an Iterable Dataset.</p>
<p>Any help is appreciated!</p>
<p><strong>Update:</strong>
I edited my code to include verbose logging with</p>
<pre class="lang-py prettyprint-override"><code>import transformers
transformers.logging.set_verbosity_info()
</code></pre>
<p>And this is the log after the evaluation step is reached.</p>
<blockquote>
<p>You have passed language=en, but also have set <code>forced_decoder_ids</code> to [[1, None], [2, 50359]] which creates a conflict. <code>forced_decoder_ids</code> will be ignored in favor of language=en.</p>
</blockquote>
|
<python><huggingface-transformers><huggingface><openai-whisper><huggingface-trainer>
|
2024-03-08 15:24:41
| 1
| 478
|
InvalidHop
|
78,128,662
| 4,212,158
|
Converting Pytorch bfloat16 tensors to numpy throws TypeError
|
<p>When you try to convert a Torch bfloat16 tensor to a numpy array, it throws a <code>TypeError</code>:</p>
<pre class="lang-py prettyprint-override"><code>import torch
x = torch.Tensor([0]).to(torch.bfloat16)
x.numpy() # TypeError: Got unsupported ScalarType BFloat16
import numpy as np
np.array(x) # same error
</code></pre>
<p>Is there a work-around to make this conversion?</p>
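<p>For context, a commonly suggested workaround is to upcast before converting; every bfloat16 value is exactly representable in float32, so the round trip is lossless (a sketch):</p>

```python
import torch

x = torch.tensor([0.5, 1.5], dtype=torch.bfloat16)

# NumPy has no bfloat16 dtype, so convert to a supported dtype first.
arr = x.to(torch.float32).numpy()
print(arr.dtype)  # float32
```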
|
<python><numpy><pytorch><floating-point><tensor>
|
2024-03-08 15:18:42
| 1
| 20,332
|
Ricardo Decal
|
78,128,397
| 6,779,049
|
Ignore Greek Letter Representation in SymPy?
|
<p>I'm trying to make a variable with the subscript <code>pi</code> on it as follows:</p>
<pre><code>r_pi = sp.symbols('r_pi')
</code></pre>
<p><a href="https://i.sstatic.net/amlfR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/amlfR.png" alt="enter image description here" /></a></p>
<p>When displayed, the symbol uses the Greek letter π as the subscript, but my intention in this case is to actually just use "pi" as if it were not a Greek letter. Is there a way to make SymPy avoid converting this symbol to a Greek letter?</p>
|
<python><sympy>
|
2024-03-08 14:34:33
| 1
| 398
|
Nick-H
|
78,128,367
| 8,506,921
|
Cross merge by group in pandas
|
<p>I am trying to cross merge two dataframes but limiting the merge so only combinations within the same group are provided. The pandas documentation says <code>When performing a cross merge, no column specifications to merge on are allowed</code>. At the moment to achieve this I'm using a for loop and concatenating the resulting dfs but is there a more efficient way?</p>
<p>Example input data:</p>
<pre><code>import pandas as pd
df1 = pd.DataFrame({
'group': [1, 1, 2, 2],
'field_a': ['apple', 'pear', 'banana', 'papaya']
})
df2 = pd.DataFrame({
'group': [1, 1, 2, 2],
'field_b': ['apple', 'strawberry', 'coconut', 'papaya']
})
</code></pre>
<p>Example required output:</p>
<pre><code>pd.DataFrame({'group': [1, 1, 1, 1, 2, 2, 2, 2],
'field_a': ['apple', 'apple', 'pear', 'pear', 'banana', 'banana', 'papaya', 'papaya'],
'field_b': ['apple', 'strawberry', 'apple', 'strawberry', 'coconut', 'papaya', 'coconut', 'papaya']})
</code></pre>
<p>Current approach:</p>
<pre><code>cols = ['group', 'field_a', 'field_b']
all_possible_matches = pd.DataFrame({
col: [] for col in cols
})
for group in [1, 2]:
combined = df1[df1['group'] == group].merge(df2[df2['group'] == group][['field_b']], how='cross')
all_possible_matches = pd.concat([all_possible_matches, combined])
</code></pre>
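<p>For comparison, a single inner merge on <code>group</code> alone appears to give the same result without any loop or explicit cross merge, since <code>merge</code> pairs every matching row from both sides within each key — a sketch:</p>

```python
import pandas as pd

df1 = pd.DataFrame({
    'group': [1, 1, 2, 2],
    'field_a': ['apple', 'pear', 'banana', 'papaya']
})
df2 = pd.DataFrame({
    'group': [1, 1, 2, 2],
    'field_b': ['apple', 'strawberry', 'coconut', 'papaya']
})

# Merging on 'group' pairs every df1 row with every df2 row sharing the
# same group value - i.e. a per-group cross join in one vectorized call.
all_possible_matches = df1.merge(df2, on='group')
print(all_possible_matches)
```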
|
<python><pandas>
|
2024-03-08 14:29:21
| 1
| 1,874
|
Jaccar
|
78,128,366
| 6,779,049
|
How to Change String Representation of Sympy Symbol?
|
<p>Is there a simple way to change the string representation of a SymPy Symbol?</p>
<p>Example:</p>
<pre><code>rpj_g0x = sp.symbols('r_{pj/g0\\,x}')
</code></pre>
<p>This will give me a valid, nicely printed symbol representing the x-component of a position vector.</p>
<p><a href="https://i.sstatic.net/1PRA4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1PRA4.png" alt="enter image description here" /></a></p>
<p>However, if I want to convert this to code at a later point, I can't without doing a variable substitution because <code>r_{pj/g0,x}</code> is not a valid variable name in python.</p>
<p>It would be nice if at the time of construction, you could pass in a string to use for string representations of the symbol. Is there any way to do this?</p>
<p><strong>UPDATE</strong></p>
<p>Behavior I get:</p>
<pre><code>rpj_g0x = sp.symbols('r_{pj/g0\\,x}')
str(rpj_g0x) # --> 'r_{pj/g0,x}'
</code></pre>
<p>Behavior I would like to get:</p>
<pre><code>rpj_g0x = sp.symbols('r_{pj/g0\\,x}', str_repr='r_pj_g0x')
str(rpj_g0x) # --> 'r_pj_g0x'
</code></pre>
<p>I don't believe a <code>str_repr</code> argument exists, but this is the behavior that I'm trying to get as the string representation of the symbol is now a valid python variable name.</p>
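<p>One workaround sketch (there is no <code>str_repr</code> argument as far as I know): name the symbol with a valid Python identifier and attach the decorated notation only at display time, e.g. via <code>latex()</code>'s <code>symbol_names</code> mapping:</p>

```python
import sympy as sp

# The symbol's own name is a valid Python identifier...
rpj_g0x = sp.Symbol('r_pj_g0x')
print(str(rpj_g0x))  # r_pj_g0x - safe to use in generated code

# ...and the fancy subscripted form is supplied only when rendering.
pretty = sp.latex(rpj_g0x, symbol_names={rpj_g0x: r'r_{pj/g0,x}'})
print(pretty)
```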
|
<python><sympy>
|
2024-03-08 14:29:13
| 0
| 398
|
Nick-H
|
78,128,335
| 4,362,655
|
Correlation matrix shrinkage causes matrix multiplication error for monte carlo simulation
|
<p>I have a list of 10 stocks and a 10x10 correlation matrix for these stocks. I have to reduce this matrix to 3x3 and use it in a Monte Carlo simulation to simulate possible outcomes. The problem is that reducing the size causes the matrix multiplication to fail. How can I move forward?</p>
<pre><code>num_stocks = 10
correlation_matrix = np.zeros((num_stocks, num_stocks))
for i in range(num_stocks):
for j in range(num_stocks):
correlation_matrix[i, j] = 0.9 ** abs(i - j)
# using PCA to generate 3x3 reduced correlation matrix
from sklearn.decomposition import PCA
pca = PCA(n_components=3)
p = pca.fit_transform(correlation_matrix)
reduced_correlation_matrix = np.corrcoef(p, rowvar=False)
# Generate 10 random values for simulation
random_vars = np.random.normal(0, 1, num_stocks)
correlated_random_vars = random_vars @ reduced_correlation_matrix
# logic after this uses monte carlo simulation
# correlated_random_vars needs to be a list of 10 random variables
# failing above as random_vars is [10x1] matrix and reduced_correlation_matrix is [3x3] matrix
</code></pre>
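<p>One way forward (a sketch, and an assumption about the intent): draw only 3 correlated factors from the reduced 3x3 matrix via its Cholesky factor, then map them back to 10 per-stock shocks with a (3, 10) loadings matrix such as <code>pca.components_</code>:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the question's reduced_correlation_matrix
# and pca.components_ (shape (3, 10)).
reduced = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.3],
                    [0.2, 0.3, 1.0]])
loadings = rng.normal(size=(3, 10))

# L @ z has covariance L @ L.T == reduced, so the 3 factors are correlated
# exactly as the reduced matrix prescribes.
L = np.linalg.cholesky(reduced)
factors = L @ rng.normal(size=3)

# Map the 3 factors back to 10 stocks - shapes line up: (3,) @ (3, 10).
correlated_random_vars = factors @ loadings
print(correlated_random_vars.shape)  # (10,)
```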
<p>Any help would be appreciated.</p>
|
<python><correlation><matrix-multiplication><pca><montecarlo>
|
2024-03-08 14:23:51
| 0
| 1,575
|
LuckyStarr
|
78,128,310
| 2,175,347
|
PySimpleGUI: How to detect overflow of elements beyond the window limits?
|
<p>In the example below, there is a row with more elements than fit the window. How can I detect the overflow?</p>
<pre class="lang-py prettyprint-override"><code>import PySimpleGUI as sg
layout = [[sg.Text(f"word{i}") for i in range(30)]]
window = sg.Window("Demo", layout, size=(500, 100))
window.finalize()
while True:
event, values = window.read()
if event == sg.WINDOW_CLOSED:
break
</code></pre>
|
<python><user-interface><pysimplegui>
|
2024-03-08 14:20:06
| 1
| 346
|
tmalsburg
|
78,128,307
| 5,043,301
|
How to show data from Database to in Django Templates
|
<p><strong>I am learning Django. My <code>views.py</code>, shown below, is inside the <code>student</code> app folder.</strong></p>
<pre><code>from django.shortcuts import render
# from .models import Student
from student.models import Student
# Create your views here.
def home(request):
student_data = Student.objects.all()
context = {
'student_data': student_data
}
return render(request, 'index.html', context )
</code></pre>
<p>I am trying to display the values in the <code>index.html</code> template, which is inside the <code>template</code> folder. My code in <code>index.html</code> is below.</p>
<pre><code><p>
{{ student_data }}
</p>
</code></pre>
<p>My code of <code>models.py</code> which is inside <code>student</code> app is like below.</p>
<pre><code>from django.db import models
class Student(models.Model):
first_name=models.CharField( max_length=100 )
last_name=models.CharField( max_length=100, blank=True, null=True )
age=models.IntegerField()
</code></pre>
<p>But I can't see any data in the HTML in the browser.</p>
|
<python><django>
|
2024-03-08 14:19:38
| 0
| 7,102
|
abu abu
|
78,128,206
| 4,930,914
|
Return sentences from list of sentences using user specified keyword
|
<p>I have a list of sentences (roughly 20,000) stored in an Excel file named list.xlsx, in a sheet named Sentence, under a column also named Sentence.</p>
<p>My intention is to get words from the user and return the sentences in which those exact words appear.</p>
<p>I am currently able to do so with the code I developed using spaCy, but it takes a lot of time to check and return the output.</p>
<p>Is there any more time-efficient way of achieving this?</p>
<p>I see that the search function in Geany or LibreOffice Calc returns matching lines in a jiffy. How?</p>
<p>Kindly help.</p>
<pre><code>import pandas as pd
import spacy
# Load the English language model in spaCy
nlp = spacy.load("en_core_web_sm")
# Function to extract sentences containing the keyword
def extract_sentences_with_keyword(text, keyword):
doc = nlp(text)
sentences = [sent.text for sent in doc.sents if keyword in sent.text.lower()]
return sentences
i = input("Enter Keyword(s):")
# Read the Excel file
file_path = "list.xlsx"
sheet_name = "Sentence" # Update with your sheet name
column_name = "Sentence" # Update with the column containing text data
data = pd.read_excel(file_path, sheet_name=sheet_name)
# Iterate over the rows and extract sentences with the keyword
keyword = i # Update with the keyword you want to search for
for index, row in data.iterrows():
text = row[column_name]
sentences = extract_sentences_with_keyword(text, keyword)
if sentences:
for sentence in sentences:
print(sentence)
print("\n")
</code></pre>
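<p>For a plain substring match, spaCy's sentence segmentation isn't needed at all; since the file already holds one sentence per row, a vectorized pandas filter behaves like the instant search in Geany or LibreOffice Calc. A sketch (with illustrative stand-in data instead of <code>read_excel</code>):</p>

```python
import pandas as pd

# Stand-in for pd.read_excel("list.xlsx", sheet_name="Sentence").
data = pd.DataFrame({"Sentence": [
    "The quick brown fox.",
    "Pandas makes filtering fast.",
    "Nothing to see here.",
]})

keyword = "filtering"
# Vectorized, case-insensitive, literal (non-regex) substring match.
mask = data["Sentence"].str.contains(keyword, case=False, regex=False, na=False)
for sentence in data.loc[mask, "Sentence"]:
    print(sentence)
```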
|
<python><spacy><text-processing>
|
2024-03-08 14:02:59
| 1
| 915
|
Programmer_nltk
|
78,128,203
| 2,092,445
|
How to get hold of Warnings in Pandera as objects instead of string?
|
<p>I am new to Pandera and am using it to run some schema validations on my dataframe. I want to use a mix of warnings and errors. The error part works seamlessly: I catch the rows in the original dataframe that failed validation using the index column of failure_cases in SchemaErrors and send them back to users as a dataframe.</p>
<p>Below works for errors.</p>
<pre><code> except pa.errors.SchemaErrors as e:
# run some custom logic using the index of the rows which failed validation
print(e.failure_cases.groupby('index'))
</code></pre>
<p>I want to do the same with warnings as well, but I am not able to find an easy way to get a handle on the index of the rows that failed validation in the warning case; the warnings I get are of <strong>type str</strong>.</p>
<p>So for the code below, I get:</p>
<pre><code>with warnings.catch_warnings(record=True) as caught_warnings:
dynamic_schema.validate(df, lazy=True, inplace=False)
if caught_warnings:
for warning in caught_warnings:
print(type(warning.message.args[0]))  # <class 'str'>
</code></pre>
<p>output (below is one single string)</p>
<pre><code><Schema Column(name=series_value_date, type=DataType(str))> failed element-wise validator 0:
<Check validate: last saved date check failed>
failure cases:
index failure_case
0 0 2024-03-26T00:00:00.0000000
1 1 2024-03-24T00:00:00.0000000
2 2 2024-03-23T00:00:00.0000000
3 3 2024-03-22T00:00:00.0000000
</code></pre>
<p>If I can somehow access the index column in the above string output, this will fulfill my use case. Is there a way to do this?</p>
|
<python><pandera>
|
2024-03-08 14:02:52
| 0
| 2,264
|
Naxi
|
78,127,976
| 5,547,553
|
How to add new columns from a list to an existing dataframe in polars?
|
<p>I'd like to add new empty columns (elements of mylist) to an existing dataframe. This code does it:</p>
<pre><code>import polars as pl
df = pl.DataFrame({'a': [1,2,3]})
mylist = [f'col{i}' for i in range(1,4)]
data = [[''] for i in range(1,len(mylist)+1)]
df.join(pl.DataFrame(data=data,schema=mylist),
how='cross')
</code></pre>
<p>But is there a more polars way to do this? Something using .with_columns() or pl.any()? So that I do not have to create a new dataframe and join it?</p>
|
<python><dataframe><python-polars>
|
2024-03-08 13:25:30
| 2
| 1,174
|
lmocsi
|
78,127,851
| 3,616,293
|
Find neighborhood for torch tensor
|
<p>I am trying to implement a Self-Organizing Map where for a given input sample, the best matching unit/winning unit is chosen based on (say) L2-norm distance between the SOM and the input. The winning unit/BMU (som[x, y]) has the smallest L2 distance from the given input (z):</p>
<pre><code># Input batch: batch-size = 512, input-dim = 84-
z = torch.randn(512, 84)
# SOM shape: (height, width, input-dim)-
som = torch.randn(40, 40, 84)
print(f"BMU row, col shapes; row = {row.shape} & col = {col.shape}")
# BMU row, col shapes; row = torch.Size([512]) & col = torch.Size([512])
</code></pre>
<p>For clarity, for the first input sample in the batch "z[0]", the winning unit is "som[row[0], col[0]]"-</p>
<pre><code>z[0].shape, som[row[0], col[0]].shape
# (torch.Size([84]), torch.Size([84]))
</code></pre>
<p><code>torch.norm((z[0] - som[row[0], col[0]]))</code> is the smallest L2 distance between z[0] and any som unit; every unit other than (row[0], col[0]) is farther away.</p>
<pre><code># Define initial neighborhood radius and learning rate-
neighb_rad = torch.tensor(2.0)
lr = 0.5
# To update weights for the first input "z[0]" and its corresponding BMU "som[row[0], col[0]]"-
for r in range(som.shape[0]):
for c in range(som.shape[1]):
neigh_dist = torch.exp(-torch.norm(input = (som[r, c] - som[row[0], col[0]])) / (2.0 * torch.pow(neighb_rad, 2)))
som[r, c] = som[r, c] + (lr * neigh_dist * (z[0] - som[r, c]))
</code></pre>
<p>How can I implement the code for:</p>
<ol>
<li>updating weights for all units around each BMU without the 2 for loops (and)</li>
<li>do it for all of the inputs "z" (here, z has 512 samples)</li>
</ol>
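<p>For point 1, the double loop can apparently be replaced with broadcasting over the whole grid. A sketch for a single input (matching the loop term for term, with the BMU weights snapshotted so every cell sees the pre-update SOM):</p>

```python
import torch

torch.manual_seed(0)
som = torch.randn(40, 40, 84)
z0 = torch.randn(84)           # stand-in for z[0]
bmu = som[3, 5].clone()        # stand-in for som[row[0], col[0]]
neighb_rad = torch.tensor(2.0)
lr = 0.5

# L2 distance of every unit's weights to the BMU's weights: shape (40, 40).
d = torch.norm(som - bmu, dim=-1)
neigh_dist = torch.exp(-d / (2.0 * neighb_rad ** 2))

# Broadcast the (40, 40) factor over the 84 weight dims and update the
# whole grid in one step - no Python loops.
som_new = som + lr * neigh_dist.unsqueeze(-1) * (z0 - som)
```

<p>For point 2, the same idea extends by adding a leading batch axis to <code>z</code> and the BMU weights, though the per-sample updates then overlap on the grid and typically have to be accumulated or applied sequentially.</p>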
|
<python><numpy><pytorch>
|
2024-03-08 13:01:39
| 1
| 2,518
|
Arun
|
78,127,662
| 1,473,517
|
How to remove "exterior" parts of a diagonal line that is clipping a circle
|
<p>In this <a href="https://stackoverflow.com/a/78120592/1473517">answer</a> it shown how to use <a href="https://matplotlib.org/stable/users/explain/artists/transforms_tutorial.html" rel="nofollow noreferrer">transdata</a> to clip a circle with a diagonal line. The code and images are:</p>
<pre><code> import matplotlib.pyplot as plt
# Create the circle with radius 6
circle = plt.Circle((0, 0), 6, color='r', fill=False)
# Set up the plot (reuse the previous grid settings)
plt.figure(figsize=(8, 8))
plt.xlim(0, 10)
plt.ylim(0, 10)
plt.grid()
# Add the circle to the plot
ax = plt.gca()
ax.add_patch(circle)
# Draw a diagonal line
plt.plot([0, 7], [7, 0], color='b', linestyle='--')
# Set aspect ratio to ensure square grid cells
ax.set_aspect("equal")
polygon = plt.Polygon([[0, 0], [7, 0], [0, 7]], transform=ax.transData)
# Clip the circle using the diagonal line.
circle.set_clip_path(polygon)
# Show the plot
plt.title("Circle Centered at (0,0) Clipped by Diagonal Line")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/usZa0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/usZa0.png" alt="enter image description here" /></a></p>
<p>I would like the same image, but with the part of the diagonal line outside the circle removed, so that all that is left is the truncated circular arc.</p>
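<p>One approach that seems to work: keep a handle on the <code>Line2D</code> returned by <code>plot</code> and clip it with a second (undrawn) circle patch, mirroring how the circle was clipped with the polygon — a sketch:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, an assumption for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(8, 8))
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.grid()
ax.set_aspect("equal")

# The visible circle, clipped by the triangle as in the question.
circle = plt.Circle((0, 0), 6, color='r', fill=False)
ax.add_patch(circle)
polygon = plt.Polygon([[0, 0], [7, 0], [0, 7]], transform=ax.transData)
circle.set_clip_path(polygon)

# Clip the line itself with a circle patch, so only the chord inside
# the circle survives.
line, = ax.plot([0, 7], [7, 0], color='b', linestyle='--')
clip_circle = plt.Circle((0, 0), 6, transform=ax.transData)
line.set_clip_path(clip_circle)

fig.savefig("clipped.png")
```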
|
<python><matplotlib>
|
2024-03-08 12:25:46
| 1
| 21,513
|
Simd
|
78,127,637
| 11,516,350
|
Flask-Babel not working since refactoring project with blueprints
|
<p>I have an app working well and translating with flask babel.</p>
<p>Then I started to split the code into modules and create blueprints, instead of writing all the code in the same .py file.</p>
<p>This is my project structure:</p>
<p><a href="https://i.sstatic.net/MWkSV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MWkSV.png" alt="Project structure" /></a></p>
<p>Then, this is the code from main file: FinMapper.py</p>
<pre><code>from app import create_app
app = create_app('default')
</code></pre>
<p>Code of app/<strong>init</strong>.py</p>
<pre><code>from flask import Flask
from flask_babel import Babel
from config import config
def register_blueprints(app):
from .file_import import file_import_blueprint
app.register_blueprint(file_import_blueprint)
def get_locale():
return 'es'
babel = Babel()
def create_app(config_name):
app = Flask(__name__)
app.config.from_object(config[config_name])
config[config_name].init_app(app)
register_blueprints(app)
babel.init_app(app)
return app
</code></pre>
<p>Code on forms.py:</p>
<pre><code>class MovementFileForm(FlaskForm):
type = SelectField(lazy_gettext('Type'), choices=[('default', 'DEFAULT'), ('bbva', 'BBVA')])
file = FileField(lazy_gettext('File'),
validators=[DataRequired(),
FileAllowed(['csv'], message='FileExtensionNotAllowed')],
render_kw={"class": "form-control"})
</code></pre>
<p>The calls to lazy_gettext('text') were working until the refactoring to blueprints.</p>
<p>And here an example of base.html with the navbar:</p>
<pre><code><nav class="navbar navbar-expand-lg navbar-dark bg-dark">
<div class="container-fluid">
<a class="navbar-brand" href="#"><b>FinMapper</b></a>
<button class="navbar-toggler" type="button" data-bs-toggle="collapse" data-bs-target="#navbarSupportedContent"
aria-controls="navbarSupportedContent" aria-expanded="false" aria-label="Toggle navigation">
<span class="navbar-toggler-icon"></span>
</button>
<div class="collapse navbar-collapse" id="navbarSupportedContent">
<ul class="navbar-nav me-auto mb-2 mb-lg-0">
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">{{ _('Dashboard') }}</a>
</li>
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">{{ _('Categories') }}</a>
</li>
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="#">{{ _('Categorize') }}</a>
</li>
<li class="nav-item">
<a class="nav-link active" aria-current="page" href="{{ url_for('file_import_blueprint.load') }}">{{ _('Load') }}</a>
</li>
</ul>
</div>
</div>
</nav>
</code></pre>
<p>Like lazy_gettext, the calls to the _('text') method were working until I created the blueprints.</p>
<p>My main doubt is whether this code in the app factory is correct; I think the problem is here:</p>
<p><a href="https://i.sstatic.net/qkvtU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qkvtU.png" alt="Babel declaration detail" /></a></p>
<p>This is the first version of the code, working well without blueprints:</p>
<p><a href="https://i.sstatic.net/Ec9V5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ec9V5.png" alt="Project vanilla no blueprints" /></a></p>
<p>I don't know how must be instantiated or declared Babel to be able to use in all blueprints. Now, my translations are not working.</p>
<h1>Update</h1>
<p>I'm able to call the locale selector, but translations still don't work. I configure Babel like this:</p>
<pre><code>babel = babel()
def create_app(config_name):
app = Flask(__name)
babel.init(app, locale_selector=gel_locale()
# more coding
</code></pre>
<p>Putting a print inside the get_locale function shows that it works and is being called, but translations are still not working.</p>
|
<python><flask><internationalization><flask-babel>
|
2024-03-08 12:22:01
| 2
| 1,347
|
UrbanoJVR
|
78,127,593
| 4,575,197
|
How to move Altair dropdown to under the legend dynamically (for every screen size)
|
<p>It's been a while since I have done CSS coding, so I need help. I want to move the dropdown list from under the chart to under the legend. Dataframe:</p>
<pre><code>data = {
'Adj Close': [1.308934, 2.169581, 2.876765, 2.357847, 2.179156],
'Yahoo Finance return': [0.670226, 1.298566, 0.920492, -0.721652, -0.306659],
'date_x': ['2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01', '2015-05-01'],
'Return in USD': [-0.641970, 0.428617, 0.404568, 0.125614, 0.070045]
}
# Convert the dictionary into a DataFrame
df = pd.DataFrame(data)
# Make sure that 'date_x' is a datetime column
df['date_x'] = pd.to_datetime(df['date_x'])
</code></pre>
<p>my code for creating the table:</p>
<pre><code>final_melted = final.melt(id_vars=['date_x'], var_name='Metric', value_name='Value')
selection_metric_1 = alt.binding_select(options=['Adj Close', 'None','Yahoo Finance return', 'Return in USD'], name='Metric 1 ')
select_metric_1 = alt.selection_single(fields=['Metric'], bind=selection_metric_1, name="Selector 1")
# Dropdown selection for the second metric
selection_metric_2 = alt.binding_select(options=['Adj Close','None', 'Yahoo Finance return', 'Return in USD'], name='Metric 2 ')
select_metric_2 = alt.selection_single(fields=['Metric'], bind=selection_metric_2, name="Selector 2")
# Dropdown selection for the third metric
selection_metric_3 = alt.binding_select(options=['Adj Close','None', 'Yahoo Finance return', 'Return in USD'], name='Metric 3 ')
select_metric_3 = alt.selection_single(fields=['Metric'], bind=selection_metric_3, name="Selector 3")
line_chart = alt.Chart(final_melted).mark_line().encode(
x='date_x:T',
y=alt.Y('Value:Q', axis=alt.Axis(title='Metric Value')),
color='Metric:N'
)
# Points layer, making it easier to hover over individual points
points = alt.Chart(final_melted).mark_point().encode(
x='date_x:T',
y='Value:Q',
color='Metric:N',
tooltip=['date_x:T', 'Value:Q', 'Metric:N']
).properties(
width=1600,
height=800
)
# Conditional filters based on the selection
filtered_chart = alt.layer(line_chart, points).add_selection(
select_metric_1,
select_metric_2,
select_metric_3 # Add the third selection here
).transform_filter(
select_metric_1 | select_metric_2 | select_metric_3 # Apply the filter based on the third selection as well
).interactive()
configured_chart = filtered_chart.configure_axis(
titleFontSize=15, # Adjust the size as needed for axis titles
labelFontSize=15 # Adjust the size as needed for axis labels
).configure_legend(
titleFontSize=15, # Adjust legend title size
labelFontSize=12 # Adjust legend labels size
).configure_title(
fontSize=24 # Adjust chart title size
).configure_legend(
titleFontSize=17, # Adjust the font size for the legend title
labelFontSize=15 # Adjust the font size for the legend labels
)
</code></pre>
<p>I know that I should use CSS, but I could only figure out absolute positioning, not a dynamic solution. Therefore, when the screen size changes, the dropdowns move around constantly.</p>
<pre><code>form.vega-bindings {
position: absolute;
right: 0px;
top: 350px;
}
</code></pre>
<p>I would appreciate any help.</p>
|
<python><css><dataframe><visualization><altair>
|
2024-03-08 12:13:28
| 0
| 10,490
|
Mostafa Bouzari
|
78,127,591
| 132,785
|
Pandas with pyarrow does not use additional memory when splitting dataframe
|
<p>When using the <code>float64[pyarrow]</code> dtype in Pandas 2.2.1, it appears that no additional memory is used when splitting a dataframe in two, and then joining it back together again.</p>
<p>When the regular <code>float64</code> dtype is used, this uses 3x the memory of the original dataframe (which is what I'd intuitively expect, if the dataframe is copied).</p>
<p>Can anyone explain why this happens? It's obviously a good thing, but this doesn't seem to be listed under the <a href="https://pandas.pydata.org/docs/user_guide/pyarrow.html#pyarrow-functionality" rel="nofollow noreferrer">benefits of pyarrow</a>, so I'd like to understand why it happens.</p>
<p>The code I'm running is:</p>
<pre><code>import gc
import os
import numpy as np
import pandas as pd
import psutil
def log_memory_usage():
gc.collect()
pid = os.getpid()
p = psutil.Process(pid)
full_info = p.memory_info()
print(
f"Memory usage: {full_info.rss / 1024 / 1024:.2f} MB (RSS)"
)
log_memory_usage()
df1 = pd.DataFrame(np.ones(shape=(10000000, 10)), columns=[f"col_{i}" for i in range(10)], dtype="float64")
log_memory_usage()
split1 = df1.loc[:, df1.columns[:5]]
split2 = df1.loc[:, df1.columns[5:]]
log_memory_usage()
joined_again = pd.concat([split1, split2], axis=1)
log_memory_usage()
</code></pre>
<p>This prints:</p>
<pre><code>Memory usage: 68.28 MB (RSS)
Memory usage: 831.38 MB (RSS)
Memory usage: 1594.66 MB (RSS)
Memory usage: 2346.45 MB (RSS)
</code></pre>
<p>So splitting and concatenating the dataframe uses additional memory each time.</p>
<p>But if I change <code>dtype="float64"</code> to <code>dtype="float64[pyarrow]"</code> I get the following output:</p>
<pre><code>Memory usage: 68.14 MB (RSS)
Memory usage: 833.51 MB (RSS)
Memory usage: 833.84 MB (RSS)
Memory usage: 833.93 MB (RSS)
</code></pre>
<p>So it appears that very little additional memory is used for the split and joined versions of the dataframe.</p>
|
<python><pandas><pyarrow>
|
2024-03-08 12:12:40
| 1
| 1,988
|
Neil
|
78,127,307
| 5,799,799
|
How to read parquet files from AWS S3 with polars
|
<p>Following the <a href="https://docs.pola.rs/user-guide/io/cloud-storage/#reading-from-cloud-storage" rel="nofollow noreferrer">documentation</a> for reading from cloud storage, I have created the below script that fails.</p>
<pre class="lang-py prettyprint-override"><code>import boto3
import polars as pl
import os
session = boto3.Session(profile_name=os.environ["AWS_PROFILE"])
credentials = session.get_credentials()
current_credentials = credentials.get_frozen_credentials()
# Specify your S3 bucket and file path
s3_bucket = "bucket"
s3_file_path = "path/file.parquet"
# Create the full S3 path
s3_path = f"s3://{s3_bucket}/{s3_file_path}"
storage_options = {
'aws_access_key_id': current_credentials.access_key,
'aws_secret_access_key': current_credentials.secret_key,
'aws_region': 'us-east-1',
}
df = pl.scan_parquet(s3_path, storage_options=storage_options)
</code></pre>
<p>This gives output below which I understand is a common error for not having permissions to access the file.</p>
<blockquote>
<p>ComputeError: Generic S3 error: Client error with status 403 Forbidden: No Body</p>
</blockquote>
<p>versions:</p>
<ul>
<li>Python '3.9.18'</li>
<li>polars '0.20.14'</li>
<li>boto3 '1.34.58'</li>
</ul>
<p>Running on macos.</p>
<p>I am also able to read the parquet successfully using pandas by just setting the AWS_PROFILE env var.</p>
<p>Am I using storage_options incorrectly? It doesn't seem able to take an 'aws_profile' key/value pair to extract the local config credentials itself.</p>
|
<python><dataframe><amazon-s3><python-polars>
|
2024-03-08 11:16:26
| 1
| 435
|
DataJack
|
78,127,124
| 2,545,680
|
Is module system resolution in Python synchronous or asynchronous
|
<p>In the JavaScript world there are two module systems - CommonJS and ES modules. They use different resolution strategies: CommonJS is synchronous and ES modules are asynchronous.</p>
<p>Based on this quote from the "Learning Python" book it seems that the module system is synchronous:</p>
<blockquote>
<p>To satisfy such goals, import (and, as you’ll see later, from)
statements execute and load other files on request. More formally, in
Python, cross-file module linking is not resolved until <strong>such import
statements are executed at runtime</strong>; their net effect is to assign
module names—simple variables like b—to loaded module objects. In
fact, the module name used in an import statement serves two purposes:
it identifies the external file to be loaded, but it also becomes a
variable assigned to the loaded module.</p>
</blockquote>
<p>Is this assumption correct?</p>
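<p>What I mean by "resolved at runtime" is behaviour like this (my own minimal check, not from the book):</p>

```python
def use_json():
    # The import statement below runs only when this function is first
    # called: module linking happens at runtime, not at parse time.
    import json
    return json.dumps({"ok": True})

print(use_json())  # → {"ok": true}
```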
|
<python>
|
2024-03-08 10:42:42
| 0
| 106,269
|
Max Koretskyi
|
78,127,052
| 9,640,238
|
Grouping a DataFrame containing a JSON column
|
<p>I want to de-duplicate records in a dataframe by grouping values. My data has a structure such as this:</p>
<pre class="lang-py prettyprint-override"><code>json = {
"employees": [
{"name": "Shyam", "email": "shyamjaiswal@gmail.com"},
{"name": "Bob", "email": "bob32@gmail.com"},
{"name": "Jai", "email": "jai87@gmail.com"},
]
}
df = pd.DataFrame({"key": ["A", "A"], "val": [1, 2], "json": [json, json]})
</code></pre>
<p>What I want to get is a single row of: <code>['A', [1, 2], 'json string']</code></p>
<p>I would normally do it as follows:</p>
<pre class="lang-py prettyprint-override"><code>df.groupby(["val", "json"])["key"].apply(list)
</code></pre>
<p>Which would work fine, were it not for the json data: <code>TypeError: unhashable type: 'dict'</code>. So what I do is to convert the column to string first:</p>
<pre class="lang-py prettyprint-override"><code>df["json"] = df["json"].apply(json.dumps)
df = df.groupby(["val", "json"])["key"].apply(list)
</code></pre>
<p>Then I convert the column back to JSON:</p>
<pre class="lang-py prettyprint-override"><code>df["json"] = df["json"].apply(json.loads)
</code></pre>
<p>Now, is that really the best way? I can't help but think that there must be a better one.</p>
<p>Any hint?</p>
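<p>For concreteness, here is the full round-trip I'm describing as a self-contained sketch (grouping on <code>key</code> here so the result matches the single row I expect):</p>

```python
import json
import pandas as pd

payload = {"employees": [{"name": "Shyam", "email": "shyamjaiswal@gmail.com"}]}
df = pd.DataFrame({"key": ["A", "A"], "val": [1, 2], "json": [payload, payload]})

# dicts are unhashable, so serialise to a string before grouping ...
df["json"] = df["json"].apply(json.dumps)
out = df.groupby(["key", "json"], as_index=False).agg({"val": list})
# ... and deserialise back to a dict afterwards
out["json"] = out["json"].apply(json.loads)
print(out)
```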
|
<python><json><pandas><dataframe>
|
2024-03-08 10:29:52
| 1
| 2,690
|
mrgou
|
78,126,729
| 7,848,740
|
Clear console to get a single row of logging in terminal Python
|
<p>I'm trying to display a single line of logging in the console instead of having a bunch of lines for every single update.</p>
<p>I think it's easier to understand with an example. I have a <code>while True</code> cycle like:</p>
<pre><code>logging.info("Starting")

while True:
    data = checknewdata()
    logging.info("Data " + data)
</code></pre>
<p>I would like the line <code>logging.info("Data " + data)</code> to appear just once in the console, updating on every cycle with the new value. So, instead of having:</p>
<pre><code>Data 1
Data 2
Data 5
Data 8
</code></pre>
<p>I would have just a single line <code>Data x</code> with <code>x</code> actually varying with the new values</p>
<p>I've tried following <a href="https://stackoverflow.com/questions/11984684/display-only-one-logging-line-removing-the-previous-ones">this</a> and adding <code>sys.stdout.flush()</code>, but it doesn't work.</p>
<p>Any suggestion?</p>
<p>I'm using PyCharm and Grep Console, both on the latest versions</p>
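<p>The effect I'm after is what a plain carriage-return rewrite gives in a regular terminal. This is my fallback sketch without the logging module (PyCharm's console may still behave differently):</p>

```python
import sys
import time

def show_progress(values, stream=sys.stdout, delay=0.0):
    # Overwrite the same console line by returning the cursor with '\r'
    # instead of emitting one new line per update.
    for data in values:
        stream.write(f"\rData {data}")
        stream.flush()
        time.sleep(delay)
    stream.write("\n")

show_progress([1, 2, 5, 8])
```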
|
<python><logging><console>
|
2024-03-08 09:37:42
| 1
| 1,679
|
NicoCaldo
|
78,126,715
| 10,863,083
|
Convert IMU data into trajectory data
|
<p>I am using IMU data in the following format (see the linked file):</p>
<p>Here is the whole file : <a href="https://github.com/badiaamakhlouf/Awesome_Dataset/blob/main/digit7.csv" rel="nofollow noreferrer">https://github.com/badiaamakhlouf/Awesome_Dataset/blob/main/digit7.csv</a></p>
<p>I want to extract trajectory data from this IMU data, i.e. the 3 coordinates of position and velocity. I performed the following steps:</p>
<ul>
<li>First, I have integrated the accelerometer data (Acc_x, Acc_y and Acc_z) to get the velocity (Vel_x, Vel_y, Vel_z)</li>
<li>Second, I had integrated velocity data to find the position data (Pos_x, Pos_y, Pos_z)</li>
<li>Third, I had plotted the trajectories in 2D but unfortunately I did not get the right plot.</li>
</ul>
<p>I performed digit handwriting with my IMU sensor, but after plotting I did not get any digit shape.</p>
<p>Here is my Code:</p>
<pre><code> # Iterate through the data points
for i in range(len(timestamps)):
# Calculate acceleration in the body frame
acc_body_x = acc_x[i] * np.cos(pitch[i]) * np.cos(yaw[i]) + acc_y[i] * (
np.cos(roll[i]) * np.sin(yaw[i]) * np.sin(pitch[i]) - np.cos(yaw[i]) * np.sin(roll[i])) + acc_z[
i] * (
np.sin(roll[i]) * np.sin(yaw[i]) + np.cos(roll[i]) * np.cos(yaw[i]) * np.sin(pitch[i]))
acc_body_y = acc_x[i] * np.cos(pitch[i]) * np.sin(yaw[i]) + acc_y[i] * (
np.cos(roll[i]) * np.cos(yaw[i]) + np.sin(roll[i]) * np.sin(yaw[i]) * np.sin(pitch[i])) + acc_z[
i] * (
-np.cos(yaw[i]) * np.sin(roll[i]) * np.sin(pitch[i]) + np.cos(roll[i]) * np.sin(yaw[i]))
acc_body_z = -acc_x[i] * np.sin(pitch[i]) + acc_y[i] * np.cos(pitch[i]) * np.sin(roll[i]) + acc_z[i] * np.cos(
roll[i]) * np.cos(pitch[i])
# Integrate acceleration to calculate velocity
if i == 0:
vel_x.append(0)
vel_y.append(0)
vel_z.append(0)
else:
dt = (timestamps[i] - timestamps[i - 1]) / 1000.0 # Convert timestamp to seconds
vel_x.append(vel_x[-1] + acc_body_x * dt)
vel_y.append(vel_y[-1] + acc_body_y * dt)
vel_z.append(vel_z[-1] + acc_body_z * dt)
# Integrate velocity to calculate position
if i == 0:
pos_x.append(0)
pos_y.append(0)
pos_z.append(0)
else:
pos_x.append(pos_x[-1] + vel_x[-1] * dt)
pos_y.append(pos_y[-1] + vel_y[-1] * dt)
pos_z.append(pos_z[-1] + vel_z[-1] * dt)
</code></pre>
<p>This is the second part of my code; the first part preprocesses the IMU data and performs sensor fusion using a complementary filter. In this part, I integrate the accelerometer data, using the previously estimated orientation, to estimate velocity and position, and plot the 2D trajectories.</p>
<p>I got the following plot: <a href="https://i.sstatic.net/qpepr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qpepr.png" alt="enter image description here" /></a></p>
<p>I have the following questions:</p>
<ul>
<li>The plot does not correspond to the digit 7, which I wrote with my sensor and hand, and I could not find the problem in my code.</li>
<li>I calculated the orientation estimation because every piece of code I found on the internet, and every article, says that you need an orientation estimate to find the trajectory; but as you can see, I have not used it to calculate the position and velocity.</li>
<li>Is there anything I can improve in my code?</li>
</ul>
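<p>As a sanity check for the integration itself, I also reduced it to a cumulative-sum version with a constant time step and no rotation (illustrative only; the names here are mine):</p>

```python
import numpy as np

def integrate_trajectory(acc, dt):
    """Double-integrate acceleration (N, 3) into velocity and position."""
    vel = np.cumsum(acc * dt, axis=0)  # v[i] = dt * sum(a[0..i])
    pos = np.cumsum(vel * dt, axis=0)  # p[i] = dt * sum(v[0..i])
    return vel, pos

acc = np.zeros((5, 3))
acc[:, 0] = 1.0  # constant 1 m/s^2 along x
vel, pos = integrate_trajectory(acc, dt=0.1)
```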
|
<python><accelerometer><gyroscope><imu><devicemotion>
|
2024-03-08 09:35:06
| 0
| 417
|
baddy
|
78,126,632
| 3,616,293
|
Find winning unit between 2 torch tensors of different shapes
|
<p>I am trying to implement a Self-Organizing Map where for a given input sample, the best matching unit/winning unit is chosen based on (say) L2-norm distance between the SOM and the input. To implement this, I have:</p>
<pre><code># Input batch: batch-size = 512, input-dim = 84-
z = torch.randn(512, 84)
# SOM shape: (height, width, input-dim)-
som = torch.randn(40, 40, 84)
# Compute L2 distance for a single sample out of 512 samples-
dist_l2 = np.linalg.norm((som.numpy() - z[0].numpy()), ord = 2, axis = 2)
# dist_l2.shape
# (40, 40)
# Get (row, column) index of the minimum of a 2d np array-
row, col = np.unravel_index(dist_l2.argmin(), dist_l2.shape)
print(f"BMU for z[0]; row = {row}, col = {col}")
# BMU for z[0]; row = 3, col = 9
</code></pre>
<p>So for the first input sample of 'z', the winning unit in SOM has the index: (3, 9). I can put this in a for loop iterating over all 512 such input samples, but that is very inefficient.</p>
<p>Is there an efficient vectorized PyTorch manner to compute this for the entire batch?</p>
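<p>A fully broadcast NumPy baseline I use to check correctness (smaller batch just for illustration); what I'm hoping for is a PyTorch equivalent of this, e.g. via <code>torch.cdist</code> on the flattened grid:</p>

```python
import numpy as np

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 84))        # batch of inputs
som = rng.normal(size=(40, 40, 84))  # SOM grid

# Flatten the SOM grid to (1600, 84), broadcast against the batch to get a
# (64, 1600) distance matrix, then recover (row, col) of each winning unit.
flat = som.reshape(-1, som.shape[-1])
dists = np.linalg.norm(z[:, None, :] - flat[None, :, :], axis=2)
bmu = dists.argmin(axis=1)
rows, cols = np.unravel_index(bmu, som.shape[:2])
```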
|
<python><numpy><pytorch>
|
2024-03-08 09:19:26
| 1
| 2,518
|
Arun
|
78,126,282
| 23,106,915
|
How to prevent repeated downloading with HuggingFace
|
<h4>Description:</h4>
<p>I am confused about how the installation of the packages is performed. Currently I am working on a StableDiffusion model, and every time I run the code it downloads the 3 to 4 GB files all over again.</p>
<h4>Code:</h4>
<p>This is the code I was trying to run at first:</p>
<pre class="lang-py prettyprint-override"><code>from torch import autocast
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    use_auth_token=True
).to("cuda")

prompt = "a photo of an astronaut riding a horse on mars"
with autocast("cuda"):
    image = pipe(prompt)["sample"][0]

image.save("astronaut_rides_horse.png")
</code></pre>
<h4>Issue:</h4>
<p>When I run the code the following appears in my shell:</p>
<pre class="lang-bash prettyprint-override"><code>Fetching 16 files: 0%| | 0/16 [00:00<?, ?it/s]
vae/diffusion_pytorch_model.safetensors: 0%| | 0.00/335M [00:00<?, ?B/s]
unet/diffusion_pytorch_model.safetensors: 0%| | 0.00/3.44G [00:00<?, ?B/s]
safety_checker/model.safetensors: 0%| | 0.00/1.22G [00:00<?, ?B/s]
text_encoder/model.safetensors: 0%| | 0.00/492M [00:00<?, ?B/s]
</code></pre>
<p>and this happens each and everytime I run the code.</p>
<h4>What I tried?</h4>
<p>I tried installing and cloning the whole git repo. (I honestly don't know why I did that, even though I knew it wasn't going to affect a thing!) I also searched many forums for this issue, but found not a single clue; maybe it's because of my inexperienced approach.</p>
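<p>One thing I'm now testing is pinning the cache directory via the <code>HF_HOME</code> environment variable before any imports; as far as I understand, the hub library resolves its cache location when it is imported (the path below is just an example):</p>

```python
import os

# Must be set before importing diffusers/transformers, because the Hugging
# Face hub library reads its cache directory at import time. The directory
# name here is arbitrary.
os.environ["HF_HOME"] = os.path.expanduser("~/hf_cache")
print(os.environ["HF_HOME"])
```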
|
<python><pytorch><huggingface><stable-diffusion>
|
2024-03-08 08:09:49
| 1
| 546
|
AshhadDevLab
|
78,126,259
| 7,357,673
|
How to use Python package "fastkde" to predict density at each given data point?
|
<p>I am trying to use the <a href="https://github.com/LBL-EESA/fastkde" rel="nofollow noreferrer">package</a> <code>fastkde</code> to estimate the density from a sample. The authors give an example</p>
<pre><code>""" Demonstrate the first README example. """
import numpy as np
import fastkde
import matplotlib.pyplot as plt
#Generate two random variables dataset (representing 100,000 pairs of datapoints)
N = int(1e5)
x = 50*np.random.normal(size=N) + 0.1
y = 0.01*np.random.normal(size=N) - 300
#Do the self-consistent density estimate
PDF = fastkde.pdf(x, y, var_names = ['x', 'y'])
PDF.plot();
</code></pre>
<p>However, I don't know how to proceed in the case of a single random variable. In my case, I have a sample <code>z</code> containing 100,000 observations. I want to predict the density at each data point in <code>w</code>:</p>
<pre><code>import numpy as np
import fastkde
N = int(1e5)
z = 50*np.random.normal(size=N) + 0.1
w = list(range(10,0,-2))
</code></pre>
<p>Could you elaborate on how to do so? Thank you so much for your elaboration!</p>
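<p>For comparison, this is what I would do with SciPy's <code>gaussian_kde</code>; I'm looking for the <code>fastkde</code> equivalent of fitting on <code>z</code> and then evaluating the density at the points in <code>w</code>:</p>

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
z = 50 * rng.normal(size=100_000) + 0.1  # the observed sample

kde = gaussian_kde(z)      # fit a 1-D density estimate on the sample
w = np.arange(10, 0, -2)   # points at which I want the density
density = kde(w)           # density evaluated at each point of w
```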
|
<python><kernel-density><probability-density>
|
2024-03-08 08:04:14
| 1
| 2,882
|
Akira
|
78,126,251
| 2,446,702
|
Wxpython Sizer position
|
<p>I have the below sample application with a couple of simple sizers which, for some reason, are centered vertically, which I don't want (see attachment). I want the wx.Choice sizer to be positioned at the top of the window, not in the middle vertically, and to stay there even when resizing the window, but I just can't find the answer.</p>
<p><a href="https://i.sstatic.net/FTnxE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FTnxE.png" alt="enter image description here" /></a></p>
<pre><code>import wx

class Mywin(wx.Frame):
    def __init__(self, parent, id, title):
        super(Mywin, self).__init__(parent, title=title)
        self.panel = wx.Panel(self)
        self.vbox = wx.BoxSizer(wx.VERTICAL)

        nm = wx.StaticBox(self.panel, -1, 'Drop List:')
        nmSizer = wx.StaticBoxSizer(nm, wx.VERTICAL)
        self.workflowList = ["choice 1", "choice 2"]
        nmbox = wx.BoxSizer(wx.HORIZONTAL)
        self.workflowChoice = wx.Choice(self.panel, choices=self.workflowList)
        self.vbox.Add(nmSizer, 0, wx.ALL | wx.EXPAND, 0)
        nmbox.Add(self.workflowChoice, 0, wx.ALL, 5)
        nmSizer.Add(nmbox, 0, wx.ALL, 5)

        self.btnSizer = wx.BoxSizer(wx.HORIZONTAL)
        self.buttonGo = wx.Button(self.panel, -1, "Go")
        self.buttonClose = wx.Button(self.panel, -1, "Quit")
        self.btnSizer.Add(self.buttonGo, 0, wx.ALL, 5)
        self.btnSizer.Add(self.buttonClose, 0, wx.ALL, 5)

        self.vbox.Add(nmSizer, 0, wx.ALL | wx.CENTER | wx.TOP, 5)
        self.vbox.Add(self.btnSizer, 0, wx.ALL | wx.CENTER, 5)
        self.panel.SetSizer(self.vbox)
        self.panel.Fit()
        self.Show()

app = wx.App()
Mywin(None, -1, 'Test App ')
app.MainLoop()
</code></pre>
|
<python><wxpython><sizer>
|
2024-03-08 08:02:06
| 2
| 3,255
|
speedyrazor
|
78,126,235
| 10,200,497
|
Is it possible to exclude first n values in each window when using rolling() to get the maximum value?
|
<p>This is my DataFrame:</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'a': [150, 106, 119, 131, 121, 140, 160, 119, 170]})
</code></pre>
<p>And this is the expected output. I want to create column <code>b</code>:</p>
<pre><code> a b
0 150 140
1 106 160
2 119 160
3 131 170
4 121 NaN
5 140 NaN
6 160 NaN
7 119 NaN
8 170 NaN
</code></pre>
<p>I want to get the maximum value in a rolling window of 6. However I want to ignore the first value of each window.</p>
<p>In this picture I have shown the windows that I want. Red cells are the ones that should be excluded from calculations and green ones are the maximum value of the window that are in <code>b</code>.</p>
<p><a href="https://i.sstatic.net/gnzKA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gnzKA.png" alt="enter image description here" /></a></p>
<p>I prefer a generic solution; for example, getting <code>max()</code> after ignoring the first <code>N</code> values of each window.</p>
<p>These are some of my attempts that did not work:</p>
<pre><code># attempt 1
df['b'] = df.a.shift(-1).rolling(6).max()

# attempt 2
df['b'] = df.a.rolling(6, closed='left').max()

# attempt 3
for i in range(3):
    x = df.iloc[i+1:i+6]
</code></pre>
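<p>For reference, this brute-force loop produces the column I want (skip the first <code>N</code> values of each 6-wide window); it is exactly this that I'd like to vectorise:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [150, 106, 119, 131, 121, 140, 160, 119, 170]})

N, window = 1, 6
b = []
for i in range(len(df)):
    chunk = df['a'].iloc[i + N : i + window]  # the window minus its first N values
    b.append(chunk.max() if len(chunk) == window - N else np.nan)
df['b'] = b
print(df)
```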
|
<python><pandas>
|
2024-03-08 07:58:50
| 1
| 2,679
|
AmirX
|
78,125,897
| 2,604,247
|
Is Dependency Inversion Necessary to Ensure Decoupling between Caller and Callee?
|
<p>I am trying to understand the dependency inversion principle (DIP) via some simple but concrete code and classes (implemented in Python) from <a href="https://www.pythontutorial.net/python-oop/python-dependency-inversion-principle" rel="nofollow noreferrer">this tutorial</a>. I am summarising it (with my own comments and understanding) to save you the pain of going through the whole thing.</p>
<p>Basically we are building a currency-converter application where we are separating the <em>main</em> application logic from the currency converter itself. The codes (some comments and docstrings mine) are as follows.</p>
<h6>Snippet 1</h6>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
"""Currency converter application using some exchange API."""

class FXConverter:
    """The converter class."""

    def convert(self, from_currency, to_currency, amount):
        """
        Core method of the class. Assume the magic number 1.2 is from some API like
        Oanda
        """
        print(f'{amount} {from_currency} = {amount * 1.2} {to_currency}')
        return amount * 1.2

class App:
    """The Application"""

    def start(self):
        """The main method to create and invoke the converter object."""
        converter = FXConverter()
        converter.convert('EUR', 'USD', 100)

if __name__ == '__main__':
    app = App()
    app.start()
</code></pre>
<p>Now, the tutorial claims that (direct quotation)</p>
<blockquote>
<p>In the future, if the FX’s API changes, it’ll break the code. Also, if you want to use a different API, you’ll need to change the App class.</p>
</blockquote>
<p>So they proposed this.</p>
<h6>Snippet 2</h6>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
# encoding: utf-8
"""Currency converter application using dependency inversion."""
from abc import ABC

class CurrencyConverter(ABC):
    def convert(self, from_currency, to_currency, amount) -> float:
        pass

class FXConverter(CurrencyConverter):
    def convert(self, from_currency, to_currency, amount) -> float:
        print('Converting currency using FX API')
        print(f'{amount} {from_currency} = {amount * 1.2} {to_currency}')
        return amount * 1.2  # The tutorial seems to have a typo here.

class App:
    def __init__(self, converter: CurrencyConverter):
        self.converter = converter

    def start(self):
        self.converter.convert('EUR', 'USD', 100)

if __name__ == '__main__':
    converter = FXConverter()
    app = App(converter)
    app.start()
</code></pre>
<h4>Question</h4>
<p>To me, the quotation does not ring true, and that goes to the heart of why I cannot wrap my head around the DIP. Even if <code>FXConverter</code> used a different exchange API (say, Bloomberg instead of Oanda), would not the change stay localised to the <code>convert</code> method? So long as the <code>convert</code> method maintains the signature</p>
<pre class="lang-py prettyprint-override"><code>convert(str, str, float)->float # The strings must be valid currency names
</code></pre>
<p>the <code>App.start</code> should be happy. The necessity of maintaining this valid method signature is</p>
<ul>
<li>not done away even in the DIP version.</li>
<li>automatically enforced in a more type-safe language like Rust or C++. In a stricter language, probably I would enum the range of possible currencies to make sure the string variable is not free form like <em>US$</em> or <em>£</em> etc.</li>
</ul>
<p>That is why I am failing to see how the DIP contributes to better decoupling at all, when what we actually need is adherence to the function/method signature as in a statically typed language?</p>
<p>In general, when <code>A</code> invokes <code>B</code> (object methods, or functions etc.), can we assume that</p>
<ul>
<li><code>A</code> is unaware of the internal workings of <code>B</code></li>
<li><code>B</code> is unaware of what <code>A</code> does with the result</li>
</ul>
<p>so that the desired decoupling is automatically enforced?</p>
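<p>To make my point about signatures concrete: what I'm claiming could be expressed with <code>typing.Protocol</code>, where no inheritance is needed at all and only the signature matters (my own sketch, not from the tutorial; the Bloomberg class and its rate are made up):</p>

```python
from typing import Protocol

class Converter(Protocol):
    def convert(self, from_currency: str, to_currency: str, amount: float) -> float: ...

class BloombergConverter:
    # No base class: structurally matching the signature is enough.
    def convert(self, from_currency: str, to_currency: str, amount: float) -> float:
        return amount * 1.1  # made-up rate, purely illustrative

def start(converter: Converter) -> None:
    print(converter.convert('EUR', 'USD', 100))

start(BloombergConverter())
```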
|
<python><oop><solid-principles><object-oriented-analysis><dependency-inversion>
|
2024-03-08 06:34:57
| 2
| 1,720
|
Della
|
78,125,825
| 14,256,643
|
Django How to save image on my custom path when triggering signals
|
<p>When I trigger this signal, the image is copied from my <code>ProductImage</code> model and uploaded to the <code>order_image_file</code> field of my <code>OrderItem</code> model. However, the image is saved to the original <code>ProductImage</code> destination, even though I have defined a custom path in my <code>OrderItem</code> model.</p>
<pre><code>def orderImage_upload_path(instance, filename):
    return f'order_image/{generate_sku()}_{filename}'

class OrderItem(models.Model):
    product_image = models.ForeignKey(ProductImage, on_delete=models.SET_NULL, blank=True, null=True)
    order_image_file = models.ImageField(blank=True, null=True, upload_to=orderImage_upload_path)
</code></pre>
<p>My signals:</p>
<pre><code>@receiver(post_save, sender=OrderItem)
def OrderItem_Signals(sender, created, instance, **kwargs):
    if created:
        if not instance.order_image_file and instance.product_image:
            instance.order_image_file = instance.product_image.image
            instance.save()
</code></pre>
|
<python><python-3.x><django><django-models>
|
2024-03-08 06:15:58
| 1
| 1,647
|
boyenec
|
78,125,650
| 2,982,323
|
pytest two level of parametrization with one parameter dependent on other one
|
<p>I have to test a scenario where one parameter is dependent on another. I tried using the pytest hook <code>pytest_generate_tests</code>, but I'm not sure how to retrieve, inside the hook, the value that is parametrized in the test.</p>
<pre><code>import pytest
import logging

logger = logging.getLogger(__name__)

apps = ['app1', 'app2', 'app3']

def pytest_generate_tests(metafunc):
    common_services = ['dns', 'dhcp']
    service = apps[0]
    common_services.append(service)
    if "total_services" in metafunc.fixturenames:
        metafunc.parametrize("total_services", common_services)

@pytest.fixture()
def total_services(request):
    return request.param

@pytest.mark.parametrize("app", apps)
def test_example(app, total_services):
    logging.info(f"App: {app}, ServiceName: {total_services}")
</code></pre>
<p>Output:</p>
<pre><code>============================= test session starts ==============================
platform darwin -- Python 3.11.7, pytest-8.0.2, pluggy-1.4.0
rootdir: /private/tmp/test/tests
collected 9 items
test_example.py::test_example[dns-app1]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app1, ServiceName: dns
PASSED [ 11%]
test_example.py::test_example[dns-app2]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app2, ServiceName: dns
PASSED [ 22%]
test_example.py::test_example[dns-app3]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app3, ServiceName: dns
PASSED [ 33%]
test_example.py::test_example[dhcp-app1]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app1, ServiceName: dhcp
PASSED [ 44%]
test_example.py::test_example[dhcp-app2]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app2, ServiceName: dhcp
PASSED [ 55%]
test_example.py::test_example[dhcp-app3]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app3, ServiceName: dhcp
PASSED [ 66%]
test_example.py::test_example[app1-app1]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app1, ServiceName: app1
PASSED [ 77%]
test_example.py::test_example[app1-app2]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app2, ServiceName: app1
PASSED [ 88%]
test_example.py::test_example[app1-app3]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app3, ServiceName: app1
PASSED [100%]
============================== 9 passed in 0.01s ===============================
</code></pre>
<p>Expected output:</p>
<pre><code>============================= test session starts ==============================
platform darwin -- Python 3.11.7, pytest-8.0.2, pluggy-1.4.0
rootdir: /private/tmp/test/tests
collected 9 items
test_example.py::test_example[dns-app1]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app1, ServiceName: dns
PASSED [ 11%]
test_example.py::test_example[dns-app2]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app2, ServiceName: dns
PASSED [ 22%]
test_example.py::test_example[dns-app3]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app3, ServiceName: dns
PASSED [ 33%]
test_example.py::test_example[dhcp-app1]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app1, ServiceName: dhcp
PASSED [ 44%]
test_example.py::test_example[dhcp-app2]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app2, ServiceName: dhcp
PASSED [ 55%]
test_example.py::test_example[dhcp-app3]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app3, ServiceName: dhcp
PASSED [ 66%]
test_example.py::test_example[app1-app1]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app1, ServiceName: app1
PASSED [ 77%]
test_example.py::test_example[app1-app2]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app2, ServiceName: app2
PASSED [ 88%]
test_example.py::test_example[app1-app3]
-------------------------------- live log call ---------------------------------
INFO root:test_example.py:21 App: app3, ServiceName: app3
PASSED [100%]
============================== 9 passed in 0.01s ===============================
</code></pre>
<p>I know I have hard-coded the first element of the list (<code>service = apps[0]</code>); I'm not sure how the parameter passed in the test can be retrieved inside <code>pytest_generate_tests</code>.</p>
<p>To explain it in a simple way:</p>
<pre><code>from itertools import product

apps = ['app1', 'app2', 'app3']

def cartesian_product(apps, common_services):
    return list(product(apps, common_services))

for app in apps:
    common_services = ['dns', 'dhcp']
    common_services.append(app)
    print(cartesian_product([app], common_services))
</code></pre>
<p>Output:</p>
<pre><code>$ python3 test_example2.py
[('app1', 'dns'), ('app1', 'dhcp'), ('app1', 'app1')]
[('app2', 'dns'), ('app2', 'dhcp'), ('app2', 'app2')]
[('app3', 'dns'), ('app3', 'dhcp'), ('app3', 'app3')]
</code></pre>
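<p>One direction I'm considering is building the <code>(app, service)</code> pairs up front and handing them to a single two-name <code>metafunc.parametrize</code> call; the pair construction itself is simple (a sketch, the conftest wiring in the comment is my assumption):</p>

```python
def build_pairs(apps, common=('dns', 'dhcp')):
    """Build the (app, service) combinations I actually want."""
    return [(app, svc) for app in apps for svc in (*common, app)]

# In conftest.py this would feed, I believe, something like:
#   metafunc.parametrize(("app", "total_services"), build_pairs(apps))
print(build_pairs(['app1', 'app2', 'app3']))
```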
|
<python><pytest><dynamicparameters>
|
2024-03-08 05:10:20
| 2
| 687
|
Swaroop Kundeti
|
78,125,524
| 9,986,939
|
Adding Dynamic PV/C to Airflow pods
|
<p>Problem: I have airflow on kubernetes and I have an issue with massive disk pressure on the nodes.</p>
<p>Context: There are about 5k pods a day and they are running a Python ETL package. I'm using airflow's KubernetesPodOperator to dynamically generate the pods via a class called "KubernetesPodGenerator".</p>
<p>Proposed Solution: I can use the kubernetes library to create a PV and a PVC per the code below and it should start creating pods with the PV mounted to the image. If you were to run this code you would get that behavior.</p>
<pre><code>from airflow.contrib.operators.kubernetes_pod_operator import KubernetesPodOperator
from kubernetes import client as k8s

etl_image = "path_to_a_super_secret_image"

volume_mounts = [k8s.V1VolumeMount(name="dag-storage", mount_path='/data', sub_path=None, read_only=False)]
volumes = [
    k8s.V1Volume(
        name='dag-storage',
        persistent_volume_claim=k8s.V1PersistentVolumeClaimVolumeSource(claim_name='dag-storage')
    )
]

k = KubernetesPodOperator(
    name="hello-dry-run",
    image=etl_image,
    cmds=["bash", "-cx"],
    arguments=["echo", "10"],
    env_from=[
        k8s.V1EnvFromSource(
            config_map_ref=k8s.V1ConfigMapEnvSource(name="etl-config")
        )
    ],
    labels={"foo": "bar"},
    task_id="dry_run_demo",
    volumes=volumes,
    volume_mounts=volume_mounts
)

k.dry_run()
</code></pre>
<p>Problem: However, when I deploy this to airflow I do not get that mount, I get another one named "kube-api-access-1234" (1234 is variable, so that smells like programmatic creation). I do not see anything in my airflow code that would be adding this, so it must be something internal?</p>
<p>I also don't understand why it's overwriting the volume/mounts instead of appending them.</p>
|
<python><kubernetes><airflow>
|
2024-03-08 04:24:48
| 0
| 407
|
Robert Riley
|
78,125,493
| 4,294,028
|
Stitching together overlapping arrays in scipy
|
<p>Given two numpy arrays (matrices)</p>
<pre><code>A = np.linspace(1,9,9).reshape(3,3)
B = np.linspace(10,18,9).reshape(3,3)
</code></pre>
<p>We can combine them into a block diagonal matrix by doing:</p>
<pre><code>from scipy.linalg import block_diag
block_diag(A,B)
array([[ 1., 2., 3., 0., 0., 0.],
[ 4., 5., 6., 0., 0., 0.],
[ 7., 8., 9., 0., 0., 0.],
[ 0., 0., 0., 10., 11., 12.],
[ 0., 0., 0., 13., 14., 15.],
[ 0., 0., 0., 16., 17., 18.]])
</code></pre>
<p>I am wondering if there is a straight forward way to define the following 'overlap' type matrix from A and B:</p>
<pre><code>array([[ 1., 2., 3., 0.],
[ 4., (10.+5.)/2, (6.+11.)/2, 12.],
[ 7., (13.+8.)/2, (14.+9.)/2, 15.],
[ 0., 16., 17., 18.]])
</code></pre>
<p>So instead of placing A, B on the diagonal, we allow them to overlap to some degree, and elements in the overlap get averaged. In general, I'd like to specify the degree of overlap, so in this example the degree of overlap would be 2. I'd also ideally want to keep the sparse structure, as opposed to constructing the entire array.</p>
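<p>For reference, this dense version does what I want, averaging the overlap via a count matrix; the drawback is that it builds the full array, which is exactly what I'd like to avoid (sketch assuming square blocks):</p>

```python
import numpy as np

def overlap_blocks(A, B, overlap):
    """Place square A and B on the diagonal with `overlap` shared rows/cols,
    averaging the entries where they overlap."""
    n = A.shape[0] + B.shape[0] - overlap
    out = np.zeros((n, n))
    cnt = np.zeros((n, n))
    out[:A.shape[0], :A.shape[1]] += A
    cnt[:A.shape[0], :A.shape[1]] += 1
    off = A.shape[0] - overlap
    out[off:, off:] += B
    cnt[off:, off:] += 1
    return out / np.maximum(cnt, 1)  # average only where both blocks wrote

A = np.linspace(1, 9, 9).reshape(3, 3)
B = np.linspace(10, 18, 9).reshape(3, 3)
print(overlap_blocks(A, B, 2))
```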
|
<python><scipy><sparse-matrix><diagonal>
|
2024-03-08 04:08:52
| 1
| 938
|
WeakLearner
|
78,124,960
| 2,877,552
|
Python library to open a file handle to Azure Blob Storage object, similar to s3fs library for AWS S3
|
<p>For AWS S3, there is a Python library called <a href="https://github.com/fsspec/s3fs/" rel="nofollow noreferrer">s3fs</a> that can open file handles to S3 objects.</p>
<p>E.g.</p>
<pre><code>import s3fs

s3 = s3fs.S3FileSystem(anon=True)
with s3.open('my-bucket/my-file.txt', 'rb') as f:
    print(f.read())
</code></pre>
<p>Is there something similar for Azure Blob Storage?</p>
|
<python><azure-blob-storage>
|
2024-03-08 00:07:47
| 2
| 734
|
Kevin Tianyu Xu
|
78,124,839
| 525,865
|
scraper on wikipedia: deprecated methods - removed from pandas in a future version
|
<p>This is one of my first steps in web scraping, so I need some hints and tips. Note: I love scraping Wikipedia.</p>
<p>btw: probably i gonna make use of this:
<a href="https://stackoverflow.com/questions/7185288/how-can-i-get-wikipedia-content-using-wikipedias-api">How can I get Wikipedia content using Wikipedia's API?</a></p>
<p><strong>My project:</strong> using the contents and Beautiful Soup, load the data from the Wikipedia site.</p>
<p>Namely: load the "By market capitalization" table into a pandas dataframe. The dataframe should have the bank Name and Market Cap (US$ Billion) as column names. Display the first five rows using head.</p>
<p>But at the moment I still want to do some extra things, i.e. gather a bit more data, and the market capitalization is not important to me. I do not need it, but I need</p>
<p>a. the name of the bank and</p>
<p>b. the website</p>
<p>so I will need to take care of those data:</p>
<p><strong>Scraping the Data</strong></p>
<p>Additional idea: using the contents and Beautiful Soup, load the data from the "By market capitalization" table into a pandas dataframe. The dataframe should have the bank Name and Market Cap (US$ Billion) as column names. Display the first five rows using head. Note: very important are the name and the website; I am going to make use of BeautifulSoup and pandas.</p>
<p><strong>Using BeautifulSoup parse the contents of the webpage.</strong></p>
<p>Well, I think I will try out the setup. The dataframe should have the bank Name and Market Cap (US$ Billion) as column names. I am using the empty dataframe <code>data</code>, and the given loop extracts the necessary data from each row and appends it to the empty dataframe.</p>
<p>So I need to take care of the data:</p>
<pre><code># Imports
from bs4 import BeautifulSoup
import requests
import pandas as pd

# URL of the Wikipedia page listing largest banks
url = "https://en.wikipedia.org/wiki/List_of_largest_banks"

# Fetch HTML content of the webpage
html_data = requests.get(url).text

# Parse HTML content using BeautifulSoup
soup = BeautifulSoup(html_data, 'html.parser')

# Find the table containing bank data
table = soup.find('table', class_='wikitable')

# Check if the table is found
if table is not None:
    # Initialize an empty DataFrame to store bank data
    data = pd.DataFrame(columns=["Bank", "Market Cap (US$ Billion)"])

    # Extract data from each row of the table
    for row in table.find_all('tr')[1:]:  # Skip the header row
        columns = row.find_all('td')
        if len(columns) >= 3:  # Ensure there are at least three columns
            bank_name = columns[0].text.strip()
            market_cap = columns[2].text.strip()
            data = data.append({'Bank': bank_name, 'Market Cap (US$ Billion)': market_cap}, ignore_index=True)

    # Display the first five rows of the DataFrame
    print(data.head())

    # Usually we would load the dataframe created above into a JSON file named
    # bank_market_cap.json using the to_json() function, but this time the data
    # will be sent to another team who will split the file into two files and
    # inspect it; inspecting it here would interfere with the next part of the
    # assignment.

    # Save the DataFrame to a JSON file (if needed)
    data_to_json = data.to_json(orient='records')
    with open('largest_banks_market_cap.json', 'w') as f:
        f.write(data_to_json)
else:
    print("Table not found. Please check if the class name of the table is correct or if the page structure has changed.")
</code></pre>
<p>See the output:</p>
<pre><code>
Bank Market Cap (US$ Billion)
0 1 491.76
1 2 266.45
2 3 219.45
3 4 178.74
4 5
<ipython-input-7-29fe9d42d044>:29: FutureWarning: The frame.append method is deprecated and will be removed from pandas in a future version. Use pandas.concat instead.
data = data.append({'Bank': bank_name, 'Market Cap (US$ Billion)': market_cap}, ignore_index=True)
</code></pre>
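<p>For reference, a minimal sketch of the row-collection pattern that avoids the deprecated <code>DataFrame.append</code> (and its repeated FutureWarning): collect plain dicts and build the DataFrame once. The bank rows here are made up for illustration:</p>

```python
import pandas as pd

# Collect plain dicts first, then build the DataFrame in a single call;
# this replaces the deprecated per-row DataFrame.append pattern
rows = []
for bank_name, market_cap in [("Bank A", "491.76"), ("Bank B", "266.45")]:
    rows.append({"Bank": bank_name, "Market Cap (US$ Billion)": market_cap})

data = pd.DataFrame(rows, columns=["Bank", "Market Cap (US$ Billion)"])
print(data.head())
```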
|
<python><bash><web-scraping>
|
2024-03-07 23:19:43
| 0
| 1,223
|
zero
|
78,124,761
| 4,736,459
|
How to properly send response to React Frontend from Django Channels?
|
<p>I am trying to use Django Channels to implement long-polling for a React frontend web app and Django REST backend. I believe that much of what I have is working to some degree, but some thing(s) must be incorrectly configured or coded to produce unexpected results.</p>
<hr />
<p><strong>UPDATE</strong>: It seems that the problem lies in the <code>axios.get(...)</code> request. When swapping out that request with a <code>fetch(...)</code>, that fetch receives a response <em>every</em> single time, whereas the Axios call gets a response every other time on average. Unsure of the solution to this at this time.</p>
<hr />
<p>In short, the problem that I am receiving is that when the Django Channels Consumer sends back a response, it does not immediately go back to the frontend (per <code>console.log(...)</code>s in the code); rather, it seems the Consumer must send another response and then the previous or both responses appear in the frontend.</p>
<p>I am trying to implement using <code>AsyncHttpConsumer</code> because Websockets are not possible in our use-case.</p>
<p>Asgi.py</p>
<pre><code>os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')

django_asgi_app = get_asgi_application()

application = ProtocolTypeRouter(
    {
        "http": URLRouter(
            longpoll_urlpatterns + [re_path(r"", django_asgi_app)]
        ),
    }
)
</code></pre>
<p>Routing.py</p>
<pre><code>longpoll_urlpatterns = [
    # Tried using re_path but did not work
    path("analysisSubscription/<int:analysisId>/", consumers.AnalysisConsumer.as_asgi()),
]
</code></pre>
<p>Consumers.py</p>
<pre><code>import asyncio
import json

from channels.db import database_sync_to_async
from channels.exceptions import StopConsumer
from channels.generic.http import AsyncHttpConsumer
# (project-specific model/serializer imports omitted)


class AnalysisConsumer(AsyncHttpConsumer):
    async def handle(self, body):
        print("In Handle")
        print(self.scope)
        self.analysisId = self.scope["url_route"]["kwargs"]["analysisId"]
        self.analysis_group_name = f"analysis_{self.analysisId}"
        # Register with the appropriate channel
        await self.channel_layer.group_add(self.analysis_group_name, self.channel_name)
        await self.send_headers(headers=[
            (b"Content-type", b"application/json"),
            (b"Access-Control-Allow-Origin", b"*"),
            (b"Access-Control-Allow-Headers", b"Origin, X-Requested-With, Content-Type, Accept, Authorization"),])
        # The server won't send the headers unless we start sending the body
        await self.send_body(b"", more_body=True)
        print("Registered consumer for ID: ", self.analysisId, " and group: ", self.analysis_group_name)
        # await self.channel_layer.group_send(self.analysis_group_name, {"type": "analysis.update", "text": self.analysisId})

    async def http_request(self, message):
        print("In Request")
        print(message)
        if "body" in message:
            self.body.append(message["body"])
        if not message.get("more_body"):
            try:
                await self.handle(b"".join(self.body))
            except:
                print("Stopping")
                # If something goes wrong, disconnect
                # In the parent this ALWAYS disconnects and thus long-polling breaks
                await self.disconnect()
                raise StopConsumer()

    async def disconnect(self):
        print("Disconnecting!")
        await self.channel_layer.group_discard(self.analysis_group_name, self.channel_name)

    async def analysis_update(self, event):
        print(event)
        print("Inside Analysis Consumer")
        analysisId = event['id']
        analysisData = ""
        try:
            analysisData = await self.getAnalysis(analysisId)
            analysisData = json.dumps(analysisData)
        except Exception as ex:
            print(f"Failed to retrieve Analysis object: {ex}")
            return
        print("Retrieved analysis:\n\t", analysisData)
        await self.send_body(analysisData.encode('utf-8'))
        print("Sent the response")
        await asyncio.sleep(1)
        await self.http_disconnect(None)

    @database_sync_to_async
    def getAnalysis(self, id):
        return AnalysisSerializer(Analysis.objects.filter(id=id)[0]).data
</code></pre>
<p>Then, in the Views.py file during an update of an Analysis, I call the below line to communicate to the Consumer group and call the previously defined function, <code>analysis_update</code>.</p>
<pre><code>async_to_sync(layers.group_send)(f"analysis_{idAnalysis}", {"type": "analysis.update", "id": idAnalysis})
</code></pre>
<p>The frontend ReactJS code loops by basically doing the below and checking the <code>response</code>.</p>
<pre><code>await axios.get(ApiUrl.analysisSubscribe(analysisId), {
timeout: 60000,
})
</code></pre>
|
<python><reactjs><axios><fetch-api><django-channels>
|
2024-03-07 22:54:01
| 0
| 703
|
Doug Steiert
|
78,124,647
| 4,883,262
|
How to set scope correctly for a parameterized fixture
|
<p>Scope of a parameterized fixture is not working.</p>
<p>Here is an example of the fixture and the test:</p>
<pre><code>@pytest.fixture(scope='session')
def my_fixture(request):
    # make some API calls
    # print API call response
    return response
</code></pre>
<p>Tests</p>
<pre><code>@pytest.mark.parametrize('my_fixture', ['a', 'b'])
def test_scenario_1(my_fixture):
    assert my_fixture['text'] == 'abc'

@pytest.mark.parametrize('my_fixture', ['a', 'b'])
def test_scenario_2(my_fixture):
    assert my_fixture['image'] == 'def'
</code></pre>
<p>When I run the tests, I see the API responses printed 4 times (twice for parameter a and twice for parameter b). I was expecting them to be printed just twice (once for each of the parameters a and b), since both tests use the same set of parameters and the fixture is session-scoped. Obviously, if I don't parameterize the fixture, the API response is printed once. Pytest version is 7.4.2.</p>
|
<python><pytest>
|
2024-03-07 22:25:34
| 2
| 362
|
gowthamjs23
|
78,124,626
| 4,727,774
|
Why dtypes are not changing when updating columns in Pandas 2.x but would change in Pandas 1.x?
|
<p>When changing the values and/or dtypes of specific columns there is a different behaviour from Pandas 1.x to 2.x.</p>
<p>For example, on column <code>e</code> in the example below:</p>
<ul>
<li>Pandas 1.x: Using <code>pd.to_datetime</code> to update the column will
parse the date and change its dtype</li>
<li>Pandas 2.x: Using
<code>pd.to_datetime</code> to update the column will parse the date but
will not change its dtype</li>
</ul>
<p>What change from Pandas 1.x to 2.x explains this behavior?</p>
<p><strong>Example code</strong></p>
<pre><code>import pandas as pd
# Creates example DataFrame
df = pd.DataFrame({
    'a': ['1', '2'],
    'b': ['1.0', '2.0'],
    'c': ['True', 'False'],
    'd': ['2024-03-07', '2024-03-06'],
    'e': ['07/03/2024', '06/03/2024'],
    'f': ['aa', 'bb'],
})
# Changes dtypes of existing columns
df.loc[:, 'a'] = df.a.astype('int')
df.loc[:, 'b'] = df.b.astype('float')
df.loc[:, 'c'] = df.c.astype('bool')
# Parses and changes dates dtypes
df.loc[:, 'd'] = pd.to_datetime(df.d)
df.loc[:, 'e'] = pd.to_datetime(df.e, format='%d/%m/%Y')
# Changes values of existing columns
df.loc[:, 'f'] = df.f + 'cc'
# Creates new column
df.loc[:, 'g'] = [1, 2]
</code></pre>
<p><strong>Results in Pandas 1.5.2</strong></p>
<pre><code>In [2]: df
Out[2]:
a b c d e f g
0 1 1.0 True 2024-03-07 2024-03-07 aacc 1
1 2 2.0 True 2024-03-06 2024-03-06 bbcc 2
In [3]: df.dtypes
Out[3]:
a int64
b float64
c bool
d datetime64[ns]
e datetime64[ns]
f object
g int64
dtype: object
</code></pre>
<p><strong>Results in Pandas 2.1.4</strong></p>
<pre><code>In [2]: df
Out[2]:
a b c d e f g
0 1 1.0 True 2024-03-07 00:00:00 2024-03-07 00:00:00 aacc 1
1 2 2.0 True 2024-03-06 00:00:00 2024-03-06 00:00:00 bbcc 2
In [3]: df.dtypes
Out[3]:
a object
b object
c object
d object
e object
f object
g int64
dtype: object
</code></pre>
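<p>A minimal sketch of the behavioural difference, assuming pandas 2.x: <code>df.loc[:, col] = ...</code> now assigns <em>into</em> the existing column (casting the values back to its dtype), while plain column assignment replaces the column and keeps the new dtype in both major versions:</p>

```python
import pandas as pd

df = pd.DataFrame({'a': ['1', '2']})

# In pandas 2.x this writes into the existing object-dtype column,
# so the dtype stays object; in 1.x the dtype changed to int64
df.loc[:, 'a'] = df.a.astype('int')
print(df.dtypes['a'])

# Plain column assignment replaces the column, keeping int64 either way
df['a'] = df.a.astype('int')
print(df.dtypes['a'])
```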
|
<python><pandas><dataframe><pandas-loc>
|
2024-03-07 22:19:25
| 1
| 393
|
bpbutti
|
78,124,574
| 1,874,170
|
Proper way to use KeyboardInterrupt with UDP socket.recv()?
|
<p>I'm currently using <code>socket.socketpair()</code> with <code>signal.set_wakeup_fd()</code> as a solution to make <code>socket.recv()</code> compatible with <code>KeyboardInterrupt</code> on Windows — particularly when using UDP — see below for example.</p>
<p>I'm also aware of the <a href="https://stackoverflow.com/q/34871191/1874170">other popular solution</a> of configuring a busy loop with <code>timeout=</code> to periodically check if the program's been interrupted.</p>
<p>Both of these solutions seem like pretty <strong>ugly hacks</strong>, though. Is there any "correct" way to do this?</p>
<hr />
<pre class="lang-py prettyprint-override"><code>import select
import signal
import socket

def _udp_listen(address, family=socket.AF_INET, flags=0, sockopts=frozenset()):
    bufsize = getpagesize()
    with socket.socket(family, socket.SOCK_DGRAM) as sock:
        for sockopt in sockopts:
            sock.setsockopt(*sockopt)
        sock.bind(address)
        _coalmine, _canary = socket.socketpair()
        with _canary, _coalmine, _wakeup_fd_ctx(_coalmine.fileno(), strict=True, warn_on_full_buffer=False):
            while True:
                ready = select.select((sock, _canary), (), ())[0]
                if _canary in ready:
                    # There's no need to raise any error ourselves,
                    # since Python itself will raise KeyboardInterrupt
                    # out of select.select() if needed
                    pass
                if sock in ready:
                    yield sock.recv(bufsize, flags)

# -----

from contextlib import contextmanager

try:
    from resource import getpagesize
except ImportError:
    import mmap

    def getpagesize():
        return mmap.PAGESIZE

@contextmanager
def _wakeup_fd_ctx(fd, strict=True, **k):
    _orig_wakeup_fd = signal.set_wakeup_fd(fd, **k)
    _needs_restore = True
    try:
        if _orig_wakeup_fd == -1:
            yield fd
        else:
            # We overwrote the existing handler
            if strict:
                raise RuntimeError(f'wakeup fd already occupied ({_orig_wakeup_fd}). Not sure what to do about that.')
            else:
                signal.set_wakeup_fd(_orig_wakeup_fd)
                _needs_restore = False
                yield _orig_wakeup_fd
    finally:
        if _needs_restore:
            signal.set_wakeup_fd(_orig_wakeup_fd)
</code></pre>
|
<python><sockets><signals>
|
2024-03-07 22:02:55
| 1
| 1,117
|
JamesTheAwesomeDude
|
78,124,421
| 7,758,174
|
How to pass a file as hyperparameters for argparse
|
<p>I am trying to reproduce a pipeline from GitHub. Since the idea is to reproduce it, I don't want to change the code. One of the scripts asks for hyperparameters to be passed as a dictionary, as in
<code>-p {'param1' : 'p1', 'param2' : 'p2', ...}</code> and so forth. These parameters are processed with argparse.</p>
<p>I have a long list of parameters. Is it possible to pass the dictionary of hyperparameters as a file?</p>
<p>Thanks!</p>
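<p>For illustration only: if a one-line change to the parser were acceptable, argparse's <code>fromfile_prefix_chars</code> lets arguments be read from a file (one token per line). The file name and parameter values below are hypothetical:</p>

```python
import argparse
from pathlib import Path

# Hypothetical parameter file: one command-line token per line
Path('params.txt').write_text('-p\nparam1=p1\nparam2=p2\n')

# fromfile_prefix_chars='@' makes argparse expand '@params.txt'
# into the tokens the file contains
parser = argparse.ArgumentParser(fromfile_prefix_chars='@')
parser.add_argument('-p', nargs='+')

args = parser.parse_args(['@params.txt'])
print(args.p)  # ['param1=p1', 'param2=p2']
```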
|
<python><argparse>
|
2024-03-07 21:26:56
| 2
| 430
|
RMelo
|
78,124,417
| 3,957,754
|
oracledb.exceptions.DatabaseError: DPY-4011: the database or network closed the connection - python oracledb library
|
<p>I'm trying to connect oracle 11g with python</p>
<pre><code>import oracledb
import os

user = 'system'
password = 'admin123'
port = 1521
service_name = 'xe'
oracle_server_addr = 'localhost'

conn_string = "{oracle_server_addr}:{port}/{service_name}".format(oracle_server_addr=oracle_server_addr, port=port, service_name=service_name)
print(conn_string)

with oracledb.connect(user=user, password=password, dsn=conn_string) as conn:
    with conn.cursor() as cursor:
        sql = """select sysdate from dual"""
        for r in cursor.execute(sql):
            print(r)
</code></pre>
<p>But I'm getting this error:</p>
<pre><code>conn_string: localhost:1521/xe
Traceback (most recent call last):
File "src/oracledb/impl/thin/connection.pyx", line 353, in oracledb.thin_impl.ThinConnImpl._connect_with_address
File "src/oracledb/impl/thin/protocol.pyx", line 207, in oracledb.thin_impl.Protocol._connect_phase_one
File "src/oracledb/impl/thin/protocol.pyx", line 386, in oracledb.thin_impl.Protocol._process_message
File "src/oracledb/impl/thin/protocol.pyx", line 365, in oracledb.thin_impl.Protocol._process_message
File "src/oracledb/impl/thin/messages.pyx", line 1835, in oracledb.thin_impl.ConnectMessage.process
File "src/oracledb/impl/thin/buffer.pyx", line 845, in oracledb.thin_impl.Buffer.read_uint32
File "src/oracledb/impl/thin/packet.pyx", line 235, in oracledb.thin_impl.ReadBuffer._get_raw
File "src/oracledb/impl/thin/packet.pyx", line 588, in oracledb.thin_impl.ReadBuffer.wait_for_packets_sync
File "src/oracledb/impl/thin/transport.pyx", line 306, in oracledb.thin_impl.Transport.read_packet
File "/home/acme/.local/lib/python3.10/site-packages/oracledb/errors.py", line 162, in _raise_err
raise exc_type(_Error(message)) from cause
oracledb.exceptions.DatabaseError: DPY-4011: the database or network closed the connection
Help: https://python-oracledb.readthedocs.io/en/latest/user_guide/troubleshooting.html#dpy-4011
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/tmp/workspace/sql_ping/sql_ping.py", line 16, in <module>
with oracledb.connect(user=user, password=password, dsn=conn_string) as conn:
File "/home/acme/.local/lib/python3.10/site-packages/oracledb/connection.py", line 1134, in connect
return conn_class(dsn=dsn, pool=pool, params=params, **kwargs)
File "/home/acme/.local/lib/python3.10/site-packages/oracledb/connection.py", line 523, in __init__
impl.connect(params_impl)
File "src/oracledb/impl/thin/connection.pyx", line 449, in oracledb.thin_impl.ThinConnImpl.connect
File "src/oracledb/impl/thin/connection.pyx", line 445, in oracledb.thin_impl.ThinConnImpl.connect
File "src/oracledb/impl/thin/connection.pyx", line 411, in oracledb.thin_impl.ThinConnImpl._connect_with_params
File "src/oracledb/impl/thin/connection.pyx", line 392, in oracledb.thin_impl.ThinConnImpl._connect_with_description
File "src/oracledb/impl/thin/connection.pyx", line 358, in oracledb.thin_impl.ThinConnImpl._connect_with_address
File "/home/acme/.local/lib/python3.10/site-packages/oracledb/errors.py", line 162, in _raise_err
raise exc_type(_Error(message)) from cause
oracledb.exceptions.OperationalError: DPY-6005: cannot connect to database (CONNECTION_ID=emkjRug2341/ymv7uTC0Bg==).
DPY-4011: the database or network closed the connection
Help: https://python-oracledb.readthedocs.io/en/latest/user_guide/troubleshooting.html#dpy-4011
</code></pre>
<h2>Context (Environment)</h2>
<p>Both (database and client app are in the same host)</p>
<ul>
<li><p>Database</p>
<ul>
<li>Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit (localhost)</li>
<li>ubuntu</li>
<li>user = 'system'</li>
<li>password = 'admin123'</li>
<li>port = 1521</li>
<li>service_name = 'xe'</li>
<li>host = 'localhost' (I also tried with local ip and 127.0.0.1)</li>
</ul>
</li>
<li><p>App</p>
<ul>
<li>ubuntu</li>
<li>python 3.10</li>
<li>oracledb 2.0.1 & 2.0.0</li>
<li>thin mode</li>
</ul>
</li>
<li><p>No firewall, proxy, antivirus, etc. All in my ubuntu localhost</p>
</li>
</ul>
<h2>Attempts</h2>
<h4># Works with java</h4>
<p>In the same host, with the same database and thin mode, java <3 is able to connect without any problems. Code is <a href="https://gist.github.com/jrichardsz/5998f84bc33b26f92d68f271157ac8a9#file-jdbc-snippets-oracle-md" rel="nofollow noreferrer">here</a></p>
<h4># Works with <a href="https://dbeaver.io/" rel="nofollow noreferrer">dbeaver</a> database ide</h4>
<p><a href="https://i.sstatic.net/ynl1Z.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ynl1Z.png" alt="enter image description here" /></a></p>
<h4># oracledb pip library downgrade</h4>
<p>I tried with all the versions since 1.3.1 and the error is almost the same</p>
<p><a href="https://pypi.org/project/oracledb/#history" rel="nofollow noreferrer">https://pypi.org/project/oracledb/#history</a></p>
<p><strong>1.2.1, 1.3.2, 1.4.2</strong></p>
<pre><code>oracledb.exceptions.NotSupportedError: DPY-3010: connections to this database server version are not supported by python-oracledb in thin mode
</code></pre>
<h3># disable_oob= True</h3>
<p>I tried <a href="https://stackoverflow.com/a/73281007/3957754">this</a> answer but the error is the same</p>
<h3># Similar unsolved questions</h3>
<ul>
<li><a href="https://stackoverflow.com/questions/76308017/dpy-4011-the-database-or-network-closed-connection">DPY-4011: the database or network closed connection</a></li>
<li><a href="https://stackoverflow.com/questions/73468766/dpy-4011-the-database-or-network-closed-the-connection">DPY-4011: the database or network closed the connection</a></li>
<li><a href="https://stackoverflow.com/questions/73241118/error-connecting-virtual-machine-to-oracle-using-python-error-dpy-4011">Error Connecting virtual machine to oracle, using python (error DPY-4011)</a></li>
<li><a href="https://stackoverflow.com/questions/72385508/what-does-dpy-6001-cannot-connect-to-database-mean-with-python-oracledb">What does 'DPY-6001: cannot connect to database' mean with python-oracledb?</a></li>
<li><a href="https://stackoverflow.com/questions/72385533/what-does-dpy-6005-cannot-connect-to-database-connection-failed-with-errno">What does 'DPY-6005: cannot connect to database. Connection failed with "[Errno 61] Connection refused"' mean with python-oracledb</a></li>
<li><a href="https://stackoverflow.com/questions/73184530/oracle-dpy-6005-cannot-connect-to-database-winerror-10061-no-connection-cou">Oracle DPY-6005: cannot connect to database. "[WinError 10061] No connection could be made because the target machine actively refused it"</a></li>
<li><a href="https://stackoverflow.com/questions/74866786/python-oracledb-thin-client-returns-dpy-6005">python-oracledb thin client returns DPY-6005</a></li>
<li><a href="https://github.com/oracle/python-oracledb/issues/234" rel="nofollow noreferrer">https://github.com/oracle/python-oracledb/issues/234</a></li>
</ul>
|
<python><oracle-database><oracle11g><python-oracledb>
|
2024-03-07 21:25:53
| 1
| 16,864
|
JRichardsz
|
78,124,241
| 23,555,881
|
Python to format and create a nested JSON file
|
<p>I am using pandas "to_json" option to generate some JSON files after filtering DF. The output of these comes as below:</p>
<pre><code>[{"time":1677287760000,"x":0.001,"y":0.001,"z":0.0},{"time":1677632400000,"x":0.0,"y":0.0,"z":0.0},{"time":1677636000000,"x":0.0,"y":0.0,"z":0.0},{"time":1677639600000,"x":0.0,"y":0.0,"z":0.0}]
</code></pre>
<p>and</p>
<pre><code>[{"dt":20,"count":6},{"dt":23,"count":9},{"dt":11,"count":7},{"dt":2,"count":16},{"dt":17,"count":1},{"dt":20,"count":6}]
</code></pre>
<p>I am looking for options to create a JSON file in python which will hold both as below:</p>
<pre><code>{
"StatusTimes": [
{
"time": 1677287760000,
"x": 0.001,
"y": 0.001,
"z": 0
},
{
"time": 1677632400000,
"x": 0,
"y": 0,
"z": 0
},
{
"time": 1677636000000,
"x": 0,
"y": 0,
"z": 0
},
{
"time": 1677639600000,
"x": 0,
"y": 0,
"z": 0
}
],
"DtCount": [
{
"dt": 20,
"count": 6
},
{
"dt": 23,
"count": 9
},
{
"dt": 11,
"count": 7
},
{
"dt": 2,
"count": 16
},
{
"dt": 17,
"count": 1
},
{
"dt": 20,
"count": 6
}
]
}
</code></pre>
<p>I tried <code>json.dumps</code> and then <code>update</code>, but got errors. I was thinking to create a JSON object and then update the values as needed, i.e. to start with:</p>
<pre><code>{
"StatusTimes": [],
"DtCount": []
}
</code></pre>
<p>Adding a name to the specific inner JSON object also didn't work. I appreciate the help here.</p>
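<p>A minimal sketch of one way to get that structure, assuming both record lists come from DataFrames: build the inner lists with <code>to_dict(orient='records')</code> and let <code>json.dump</code> serialize the wrapping dict (the sample frames are shortened versions of the data above):</p>

```python
import json
import pandas as pd

status = pd.DataFrame({"time": [1677287760000], "x": [0.001], "y": [0.001], "z": [0.0]})
counts = pd.DataFrame({"dt": [20, 23], "count": [6, 9]})

# Wrap both record lists in one outer object, then serialize the whole thing
combined = {
    "StatusTimes": status.to_dict(orient="records"),
    "DtCount": counts.to_dict(orient="records"),
}
with open("combined.json", "w") as f:
    json.dump(combined, f, indent=4)
```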
|
<python><json><pandas>
|
2024-03-07 20:43:38
| 2
| 606
|
Halod
|
78,124,127
| 4,294,028
|
Sampling from a Normal distribution with sparse covariance matrix
|
<p>To sample from a gaussian distribution with mean zero and covariance matrix S, we can do the following:</p>
<pre><code>from scipy import sparse
import numpy as np

S = sparse.diags([np.full(100, 1), 0.1 * np.ones(99), 0.1 * np.ones(99)], [0, -1, 1])
S.A
array([[1. , 0.1, 0. , ..., 0. , 0. , 0. ],
[0.1, 1. , 0.1, ..., 0. , 0. , 0. ],
[0. , 0.1, 1. , ..., 0. , 0. , 0. ],
...,
[0. , 0. , 0. , ..., 1. , 0.1, 0. ],
[0. , 0. , 0. , ..., 0.1, 1. , 0.1],
[0. , 0. , 0. , ..., 0. , 0.1, 1. ]])
np.random.multivariate_normal(np.zeros(100), S.A, 1)
</code></pre>
<p>Obviously this completely ignores the fact that the covariance matrix is sparse. I am wondering if there is a better way to more efficiently sample from a Gaussian with a sparse matrix without having to use the full 100x100 matrix. In my problem, I am dealing with a 1Mx1M matrix which exhausts my memory, but the matrix is extremely sparse so there should hopefully be a better way to do this, ideally in Python.</p>
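<p>For the tridiagonal example above, a sketch of one memory-friendly approach: compute a banded Cholesky factor S = L·Lᵀ and draw x = L·z with z ~ N(0, I), never forming the dense matrix:</p>

```python
import numpy as np
from scipy.linalg import cholesky_banded

n = 100
# Lower banded storage of the tridiagonal covariance:
# row 0 is the main diagonal, row 1 the sub-diagonal
ab = np.zeros((2, n))
ab[0] = 1.0
ab[1, :-1] = 0.1

# Banded Cholesky factor L (same banded layout), with S = L @ L.T
L = cholesky_banded(ab, lower=True)

# x = L @ z is N(0, S); apply the banded L without densifying it
z = np.random.randn(n)
x = L[0] * z
x[1:] += L[1, :-1] * z[:-1]
```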
|
<python><random><sparse-matrix><sample>
|
2024-03-07 20:19:52
| 0
| 938
|
WeakLearner
|
78,124,029
| 2,301,970
|
# Computing multiple 1d curves into their 2d array (image) representation
|
<p>I wonder if anyone could please help me do this operation in python as efficiently as possible. I think numpy fancy indexing is the best approach but I have not been able to make it work (any alternative approach is also welcome)</p>
<p>I have simple data sets which take the shape of curves like this:</p>
<p><a href="https://i.sstatic.net/vfPjo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vfPjo.png" alt="enter image description here" /></a></p>
<p>I want to compute their image represetation: a matrix where values under the curve are 1 and above 0:</p>
<p><a href="https://i.sstatic.net/BH35q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BH35q.png" alt="enter image description here" /></a></p>
<p>My current approach consists in using np.searchsorted to map the 1d array values to the number of "pixels" in the image:</p>
<pre><code>import numpy as np
from matplotlib import pyplot as plt
# 1D data
n_data = 8
x_data = np.arange(n_data)
y_data = np.array([0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9])
# 2D empty data array
y_matrix = np.zeros((n_data, n_data))
rows = np.arange(n_data)[:, np.newaxis]
# Approximation to a 0 to 7 interval
threshold_array = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0])
approximation = np.searchsorted(threshold_array, y_data)
</code></pre>
<p>Afterwards a simple subtraction can generate the 2D mask:</p>
<pre><code># Use broadcasting to create a boolean mask for the elements to be set to 1
mask = n_data - 1 - rows < approximation
y_matrix[mask] = 1
print(y_matrix)
</code></pre>
<p>This mask looks like:</p>
<pre><code>[[0. 0. 0. 1. 0. 0. 0. 0.]
[0. 0. 0. 1. 1. 0. 0. 0.]
[0. 0. 1. 1. 1. 0. 0. 0.]
[0. 0. 1. 1. 1. 1. 0. 0.]
[0. 0. 1. 1. 1. 1. 0. 0.]
[0. 1. 1. 1. 1. 1. 1. 0.]
[0. 1. 1. 1. 1. 1. 1. 0.]
[1. 1. 1. 1. 1. 1. 1. 1.]]
</code></pre>
<p>Now I would like to apply this operation to hundreds of thousands of vectors. For example with:</p>
<pre><code>y_data = np.array([[0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9],
[0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9],
[0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9]])
</code></pre>
<p>This should result in three matrices, or an array with shape (8, 8, 3). Would anyone offer an efficient approach?</p>
<p>Thanks</p>
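<p>For reference, a sketch of one extra-axis broadcast that handles a whole batch of curves at once (output shape <code>(n_curves, 8, 8)</code>; a transpose converts to <code>(8, 8, n_curves)</code> if preferred):</p>

```python
import numpy as np

y_data = np.array([[0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9],
                   [0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9],
                   [0.2, 2.3, 5.9, 7.2, 6.2, 4.7, 2.9, 0.9]])
n_data = y_data.shape[1]

thresholds = np.arange(n_data, dtype=float)
approximation = np.searchsorted(thresholds, y_data)      # (n_curves, n_data)

rows = np.arange(n_data)[:, np.newaxis]                  # (n_data, 1)
# Broadcast (n_data, 1) against (n_curves, 1, n_data) -> (n_curves, n_data, n_data)
masks = (n_data - 1 - rows) < approximation[:, np.newaxis, :]
y_matrices = masks.astype(float)
print(y_matrices.shape)  # (3, 8, 8)
```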
|
<python><numpy><performance><indexing>
|
2024-03-07 19:58:36
| 2
| 693
|
Delosari
|
78,123,814
| 3,618,604
|
Installing pyfftw on mac OSX
|
<p>I am trying to install <code>pyfftw</code> on mac OSX using <code>pip</code>. Here is what I have already done.</p>
<p>I have installed fftw using <code>brew install fftw</code>, and also tested the linking, by compiling and running the following code:</p>
<pre><code>#include <stdio.h>
#include <fftw3.h>

int main() {
    printf("Hello, FFTW!\n");
    return 0;
}
</code></pre>
<p>with</p>
<p><code>gcc -o test test.cpp -I/opt/homebrew/opt/fftw/include -L/opt/homebrew/opt/fftw/lib -lfftw3</code></p>
<p>which runs fine and prints Hello, FFTW as expected.
After this I added the following lines to my <code>.zshrc</code> file:</p>
<pre><code>export PATH=/opt/homebrew/bin:$PATH
export PATH=/opt/homebrew:$PATH
export DYLD_LIBRARY_PATH=/opt/homebrew/lib:$DYLD_LIBRARY_PATH
export LDFLAGS="-Wl,-S,-rpath,/opt/homebrew/opt/fftw/lib -L/opt/homebrew/opt/fftw/lib"
export CFLAGS="-Wno-implicit-function-declaration -I/opt/homebrew/opt/fftw/include"
</code></pre>
<p>When after sourcing the <code>zshrc</code> file I try <code>pip install pyfftw</code> from within a conda environment, I get the error message:</p>
<pre><code> ld: library 'fftw3f' not found
</code></pre>
<p>for each of the libraries and then finally says:</p>
<pre><code>error: Could not find any of the FFTW libraries
</code></pre>
<p>I have checked the libraries are there.</p>
<p>From this message, <code> DEBUG:__main__:Checking with includes ['fftw3.h']...ok</code> I believe it is able to locate the header file but the linker is not finding the libraries. Any help will be greatly appreciated.</p>
|
<python><pip><conda><fftw><pyfftw>
|
2024-03-07 19:12:20
| 0
| 353
|
R.U.
|
78,123,702
| 8,652,920
|
How to access fixtures from pytest hook pytest_collection_modifyitems?
|
<pre class="lang-py prettyprint-override"><code>$ ls
conftest.py __pycache__ test.py
$ cat conftest.py
import pytest

@pytest.fixture
def myfixture():
    t = 800
    yield t

def pytest_collection_modifyitems(config, items):
    # I want a reference to myfixture here
    pass
$ cat test.py
import pytest

def test_trivial():
    assert True
</code></pre>
<p>Here is a minimum example of what I'm working with.</p>
<p>I've already tried the answers from <a href="https://stackoverflow.com/questions/55413277/can-pytest-hooks-use-fixtures">Can pytest hooks use fixtures?</a></p>
<p>and they both don't work.</p>
<pre class="lang-py prettyprint-override"><code>$ cat conftest.py
import pytest

@pytest.fixture
def myfixture():
    t = 800
    yield t

def pytest_collection_modifyitems(config, items, request):
    f = request.getfixturevalue('myfixture')
    pass
$ pytest test.py
Traceback (most recent call last):
File "/u/*/.local/bin/pytest", line 8, in <module>
sys.exit(main())
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 65, in main
config = _prepareconfig(args, plugins)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 214, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/u/*/.local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/helpconfig.py", line 94, in pytest_cmdline_parse
config = outcome.get_result()
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
_reraise(*ex) # noqa
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 789, in pytest_cmdline_parse
self.parse(args)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 997, in parse
self._preparse(args, addopts=addopts)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 953, in _preparse
early_config=self, args=args, parser=self._parser
File "/u/*/.local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
_reraise(*ex) # noqa
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 852, in pytest_load_initial_conftests
self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 390, in _set_initial_conftests
self._try_load_conftest(anchor)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 396, in _try_load_conftest
self._getconftestmodules(anchor)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 431, in _getconftestmodules
mod = self._importconftest(conftestpath.realpath())
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 481, in _importconftest
self.consider_conftest(mod)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 534, in consider_conftest
self.register(conftestmodule, name=conftestmodule.__file__)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 328, in register
ret = super(PytestPluginManager, self).register(plugin, name)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 120, in register
self._verify_hook(hook, hookimpl)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 254, in _verify_hook
notinspec,
pluggy.manager.PluginValidationError: Plugin '/tmp/test/conftest.py' for hook 'pytest_collection_modifyitems'
hookimpl definition: pytest_collection_modifyitems(config, items, request)
Argument(s) set(['request']) are declared in the hookimpl but can not be found in the hookspec
</code></pre>
<pre class="lang-py prettyprint-override"><code>$ cat conftest.py
import pytest
@pytest.fixture
def myfixture():
    t = 800
    yield t

def pytest_collection_modifyitems(config, items, item):
    f = item.funcargs['myfixture']
    pass
$ pytest test.py
Traceback (most recent call last):
File "/u/*/.local/bin/pytest", line 8, in <module>
sys.exit(main())
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 65, in main
config = _prepareconfig(args, plugins)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 214, in _prepareconfig
pluginmanager=pluginmanager, args=args
File "/u/*/.local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 203, in _multicall
gen.send(outcome)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/helpconfig.py", line 94, in pytest_cmdline_parse
config = outcome.get_result()
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
_reraise(*ex) # noqa
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 789, in pytest_cmdline_parse
self.parse(args)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 997, in parse
self._preparse(args, addopts=addopts)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 953, in _preparse
early_config=self, args=args, parser=self._parser
File "/u/*/.local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
return self._hookexec(self, self.get_hookimpls(), kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
return self._inner_hookexec(hook, methods, kwargs)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 208, in _multicall
return outcome.get_result()
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
_reraise(*ex) # noqa
File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
res = hook_impl.function(*args)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 852, in pytest_load_initial_conftests
self.pluginmanager._set_initial_conftests(early_config.known_args_namespace)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 390, in _set_initial_conftests
self._try_load_conftest(anchor)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 396, in _try_load_conftest
self._getconftestmodules(anchor)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 431, in _getconftestmodules
mod = self._importconftest(conftestpath.realpath())
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 481, in _importconftest
self.consider_conftest(mod)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 534, in consider_conftest
self.register(conftestmodule, name=conftestmodule.__file__)
File "/u/*/.local/lib/python2.7/site-packages/_pytest/config/__init__.py", line 328, in register
ret = super(PytestPluginManager, self).register(plugin, name)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 120, in register
self._verify_hook(hook, hookimpl)
File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 254, in _verify_hook
notinspec,
pluggy.manager.PluginValidationError: Plugin '/tmp/test/conftest.py' for hook 'pytest_collection_modifyitems'
hookimpl definition: pytest_collection_modifyitems(config, items, item)
Argument(s) set(['item']) are declared in the hookimpl but can not be found in the hookspec
</code></pre>
<pre class="lang-py prettyprint-override"><code>$ cat conftest.py
import pytest
@pytest.fixture
def myfixture():
    t = 800
    yield t

def pytest_collection_modifyitems(config, items):
    for item in items:
        f = item.funcargs['myfixture']
        pass
$ pytest test.py
=================================================================== test session starts ====================================================================
platform linux2 -- Python 2.7.18, pytest-4.6.11, py-1.11.0, pluggy-0.12.0
rootdir: /tmp/test
collected 1 item
INTERNALERROR> Traceback (most recent call last):
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/_pytest/main.py", line 206, in wrap_session
INTERNALERROR> session.exitstatus = doit(config, session) or 0
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/_pytest/main.py", line 249, in _main
INTERNALERROR> config.hook.pytest_collection(session=session)
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
INTERNALERROR> _reraise(*ex) # noqa
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/_pytest/main.py", line 259, in pytest_collection
INTERNALERROR> return session.perform_collect()
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/_pytest/main.py", line 498, in perform_collect
INTERNALERROR> session=self, config=self.config, items=items
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/hooks.py", line 289, in __call__
INTERNALERROR> return self._hookexec(self, self.get_hookimpls(), kwargs)
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 87, in _hookexec
INTERNALERROR> return self._inner_hookexec(hook, methods, kwargs)
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/manager.py", line 81, in <lambda>
INTERNALERROR> firstresult=hook.spec.opts.get("firstresult") if hook.spec else False,
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 208, in _multicall
INTERNALERROR> return outcome.get_result()
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 81, in get_result
INTERNALERROR> _reraise(*ex) # noqa
INTERNALERROR> File "/u/*/.local/lib/python2.7/site-packages/pluggy/callers.py", line 187, in _multicall
INTERNALERROR> res = hook_impl.function(*args)
INTERNALERROR> File "/tmp/test/conftest.py", line 10, in pytest_collection_modifyitems
INTERNALERROR> f = item.funcargs['myfixture']
INTERNALERROR> KeyError: 'myfixture'
=============================================================== no tests ran in 0.01 seconds ===============================================================
</code></pre>
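As an aside on why every attempt above fails: `item.funcargs` is only populated during fixture setup, which runs after collection, so it is always empty inside `pytest_collection_modifyitems`. A hedged workaround sketch, with plain objects standing in for real pytest items (`FakeItem` and `tagged` are illustrative names): attach the value as function metadata at definition time, and read that metadata during collection instead of the fixture.

```python
class FakeItem:
    # stand-in for a pytest item: funcargs stays empty until fixture setup runs
    def __init__(self, func):
        self.function = func
        self.funcargs = {}

def tagged(value):
    # attach metadata at definition time, so it is visible at collection time
    def deco(func):
        func.my_value = value
        return func
    return deco

@tagged(800)
def test_something():
    pass

items = [FakeItem(test_something)]
print([getattr(i.function, "my_value", None) for i in items])  # → [800]
```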
|
<python><python-3.x><pytest><fixtures>
|
2024-03-07 18:47:00
| 0
| 4,239
|
notacorn
|
78,123,684
| 286,034
|
running django tests --parallel and splitting log files per test-runner worker
|
<p>I'm using <code>manage.py test --parallel</code> to run my tests and want to create a separate log file for each test runner.</p>
<p>Currently, all the test runner worker write to the same log file, so I get a single log file with contents "striped" like this:</p>
<pre><code>[ForkPoolWorker-2] ...
[ForkPoolWorker-1] ...
[ForkPoolWorker-1] ...
[ForkPoolWorker-3] ...
[ForkPoolWorker-2] ...
[ForkPoolWorker-4] ...
[ForkPoolWorker-1] ...
</code></pre>
<p>I created a custom <code>configure_logging()</code> function like this:</p>
<pre><code>import logging.config
import os

def configure_logging() -> None:
    """Custom logging configuration with process-id named files"""
    process_id = os.getpid()
    log_file_path = f"runner-{process_id}.log"
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "handlers": {
            "file": {
                "level": "DEBUG",
                "class": "logging.FileHandler",
                "filename": log_file_path,
            },
        },
        "loggers": {
            "django.log": {
                "handlers": ["file"],
                "level": "DEBUG",
                "propagate": True,
            },
        },
    }
    logging.config.dictConfig(LOGGING)
</code></pre>
<p>I'm using a custom test-runner to wire things up:</p>
<pre><code>from django.test.runner import DiscoverRunner

from myapp.tests.logging_config import configure_logging

class ParallelTestRunner(DiscoverRunner):
    """Parallel test runner with separate log files for each worker"""

    def setup_test_environment(self, **kwargs):
        """Configure the test environment with our custom log setup"""
        super().setup_test_environment(**kwargs)
        # Configure logging with a unique file per test process
        configure_logging()
</code></pre>
<p>I wire that up in settings.py like this:</p>
<p><code>TEST_RUNNER = "myapp.tests.runners.ParallelTestRunner"</code></p>
<p>However, when I look for the log files, it appears only a single log file like <code>runner-16.log</code> is being created, so I think the name generation for the log files is happening in the test-runner main process, before the worker split.</p>
<p>Any ideas on how I can wire this up so that the log files get created per testrunner worker? I'd like to see log files like:</p>
<pre><code>runner-11.log
runner-12.log
runner-13.log
runner-14.log
runner-15.log
runner-16.log
</code></pre>
<p>Thanks!</p>
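The filename here is computed when `configure_logging()` runs, so if that happens in the parent process every worker inherits the same `runner-<pid>.log`. A minimal stdlib sketch of the underlying fix: defer the `os.getpid()` call into a factory that each worker calls itself (wiring this into Django's parallel runner, e.g. via the parallel test suite's worker initialization, is an assumption left to the reader).

```python
import logging
import os
import tempfile

def make_worker_handler(log_dir):
    # called *inside* each worker process, so the pid is the worker's own,
    # not the parent's
    path = os.path.join(log_dir, f"runner-{os.getpid()}.log")
    return logging.FileHandler(path)

log_dir = tempfile.mkdtemp()
handler = make_worker_handler(log_dir)  # in a real run: call from worker init
logger = logging.getLogger("django.log")
logger.addHandler(handler)
logger.warning("hello from pid %s", os.getpid())
handler.close()
print(sorted(os.listdir(log_dir)))  # one file, named after the current pid
```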
|
<python><django><python-unittest><django-unittest>
|
2024-03-07 18:45:28
| 0
| 320
|
siebo
|
78,123,564
| 7,404,222
|
Opencv movement detection without being trigger by random noise
|
<p>I'm new to image processing and I'm struggling a bit. I'm making my own DIY security software, and I made a function to detect some movement in order to start recording and notify me.</p>
<p>The idea of this function is to take two images and diff them in order to find some movement. The problem I have is that either:</p>
<ol>
<li>The detection works really well, but at night there's some noise on the image (and even during the day in the shadows) which wrongly triggers a positive detection</li>
<li>The function is not wrongly triggered, but it misses some detections</li>
</ol>
<p>The way I tried option 2 is shown in the commented-out code; the main ideas were to:</p>
<ul>
<li>divide gray images and blur</li>
<li>replace the basic threshold by an adaptative one (gaussian and mean)</li>
</ul>
<p>Here is my code :</p>
<pre><code>import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim

def count_diff_nb(img_1, img_2):
    # resize images
    img_1_height, img_1_width = img_1.shape[:2]
    new_height = int((600 / img_1_width) * img_1_height)
    img_1 = cv2.resize(img_1, (600, new_height))
    img_2 = cv2.resize(img_2, (600, new_height))

    # convert to gray scale
    gray_image1 = cv2.cvtColor(img_1, cv2.COLOR_BGR2GRAY)
    gray_image2 = cv2.cvtColor(img_2, cv2.COLOR_BGR2GRAY)

    # Gaussian blur in order to remove some noise
    blur1 = cv2.GaussianBlur(gray_image1, (5, 5), 0)
    blur2 = cv2.GaussianBlur(gray_image2, (5, 5), 0)

    # divide (bad idea)
    #divide1 = cv2.divide(gray_image1, blur1, scale=255)
    #divide2 = cv2.divide(gray_image2, blur2, scale=255)

    # Compute SSIM between two images
    #ssim_value, diff = ssim(gray_image1, gray_image2, full=True)
    ssim_value, diff = ssim(blur1, blur2, full=True)
    #ssim_value, diff = ssim(divide1, divide2, full=True)
    diff_percent = (1 - ssim_value) * 100

    # The diff image contains the actual image differences between the two images
    # and is represented as a floating point data type so we must convert the array
    # to 8-bit unsigned integers in the range [0,255] before we can use it with OpenCV
    diff = (diff * 255).astype("uint8")

    # Adaptative threshold (bad idea too)
    #thresh = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY_INV, 11, 2)
    #thresh = cv2.adaptiveThreshold(diff, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 10)

    # Threshold the difference image
    thresh = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY_INV | cv2.THRESH_OTSU)[1]

    # followed by finding contours to
    # obtain the regions that differ between the two images
    contours = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    contours = contours[0] if len(contours) == 2 else contours[1]

    # Highlight differences
    mask = np.zeros(img_1.shape, dtype='uint8')
    filled = img_2.copy()
    contours_nb = 0
    for c in contours:
        # limit is an area so sqrt of size
        area = cv2.contourArea(c)
        # 72000 is 1/3 of global img area
        if area > 2000 and area < 72000:
            contours_nb = contours_nb + 1
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(img_1, (x, y), (x + w, y + h), (36, 255, 12), 2)
            cv2.rectangle(img_2, (x, y), (x + w, y + h), (36, 255, 12), 2)
            cv2.drawContours(mask, [c], 0, (0, 255, 0), -1)
            cv2.drawContours(filled, [c], 0, (0, 255, 0), -1)

    return contours_nb, diff_percent, img_2, filled
</code></pre>
<p>Do you have any ideas, or things I'm missing, that would help find the sweet spot between sensitivity (not missing detections) and ignoring the random noise due to darkness?</p>
<p>I thought about ignoring the dark colors before converting to grayscale, but if the moving thing is black then... it's a bad idea, I think.</p>
<p>Thanks a lot !</p>
<p>Edit :</p>
<p>I changed the whole thing by implementing <a href="https://learnopencv.com/moving-object-detection-with-opencv/" rel="nofollow noreferrer">this solution</a> suggested by @pippo1980. I use BackgroundSubtractorMOG2, which works best in my case (I tested the different options).</p>
<p>So it works almost perfectly; the last pain point is now at sunrise and sunset, when my cheap webcam struggles with noise and the image is a little blurred / randomly noisy.</p>
<p>I'm searching for how to deal with this, but I'm not sure.</p>
<p>Here's when it's working fine, you can see that the mask is really sharp :</p>
<p><a href="https://i.sstatic.net/4525b.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4525b.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/HCuhi.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HCuhi.jpg" alt="enter image description here" /></a></p>
<p>And at sunset with the blur / noise on image :</p>
<p><a href="https://i.sstatic.net/5UNsk.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5UNsk.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Mmrw3.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Mmrw3.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/hPc1E.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hPc1E.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/ahMuu.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ahMuu.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/xBDfC.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xBDfC.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/KREX5.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KREX5.jpg" alt="enter image description here" /></a></p>
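One noise-robust trigger that is independent of the image pipeline (a sketch; `make_debouncer` is an illustrative name): require motion in k consecutive frames before recording, so a single noisy frame at dusk cannot fire the alarm on its own.

```python
def make_debouncer(k):
    # returns a per-frame update function that only reports True after
    # k consecutive frames with motion
    streak = 0
    def update(motion_detected):
        nonlocal streak
        streak = streak + 1 if motion_detected else 0
        return streak >= k
    return update

update = make_debouncer(3)
# one noisy frame, then sustained motion
print([update(m) for m in [True, False, True, True, True, True]])
# → [False, False, False, False, True, True]
```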
|
<python><opencv><image-processing><computer-vision><detection>
|
2024-03-07 18:20:46
| 3
| 390
|
pipou
|
78,123,402
| 51,280
|
Ideal Conda and pip workflow to get the latest dependencies
|
<p>I usually start with this conda environment when I have a dependency called <code>the_dependency</code>:</p>
<pre class="lang-yaml prettyprint-override"><code>name: my_app
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.11
  - pip
  - pip:
      - the_dependency
</code></pre>
<p>I find that <code>pip</code> usually finds a more recent version of a package on PyPi, and only when I have architecture-specific trouble I revert to using a Conda version like so:</p>
<pre class="lang-yaml prettyprint-override"><code>name: my_app
channels:
  - defaults
  - conda-forge
dependencies:
  - python=3.11
  - the_dependency
</code></pre>
<p>Are there better workflows to achieve the same thing?</p>
|
<python><pip><conda>
|
2024-03-07 17:45:43
| 0
| 5,458
|
opyate
|
78,123,358
| 9,053,942
|
What is the best approach to train a pytorch model over large dataset stored in a database?
|
<p>I have a large dataset, and for convenience, I put it in an sqlite database. It has about 270k rows, each row has 10_000 bp long DNA sequence. It's impossible to load the entire dataset at once, let alone train a model (yeah, I tried, and my laptop's GPU ain't running / making any noise).</p>
<p>So currently I am running a loop. In each iteration, I select paged data using offset and limit (say, 500 sequences at a time) from the database, and train the model on those 500 sequences with, say, 10 epochs.</p>
<p>The relevant part of my code:</p>
<pre class="lang-py prettyprint-override"><code>my_offset = 0
my_limit = 500
my_batch_size = 100
some_model = MyModel()
db_page_number = 0

# pagination: advance the offset *after* the fetch, so the first page starts at row 0
while db_page_number < 100:
    db_page_number += 1
    query = f"SELECT sequences, classification FROM table ORDER BY id LIMIT {my_limit} OFFSET {my_offset}"
    my_offset += my_limit
    paged_data = pd.read_sql_query(query, db_connection)
    x, y = get_x_y_from(paged_data)
    train_dataset = torch.utils.data.TensorDataset(x, y)
    train_loader = DataLoader(train_dataset, batch_size=my_batch_size)

    for epoch in range(0, 10):
        # ... typical pytorch code...
        for data in train_loader:
            x1, y1 = data
            predicted_outputs = self.pytorch_model(x1)  # predict output from the model
            train_loss = self.loss_function(predicted_outputs, y1)  # calculate loss for the predicted output
            train_loss.backward()  # back propagate the loss
            optimizer.step()  # adjust params based on the calculated gradients
        # ... typical pytorch code...
</code></pre>
<p>Here, I am doing the pagination manually (the outermost while loop). I managed to run my code, but I wonder what the best practice is. Maybe using pandas, or some other library, for the pagination part? I'm open to suggestions.</p>
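For what it's worth, the LIMIT/OFFSET loop re-scans ever more rows as the offset grows; a single cursor with `fetchmany` streams the table in one pass instead. A hedged, stdlib-only sketch (the table and column names here are made up; a torch `IterableDataset` could wrap the same generator):

```python
import sqlite3

def batched_rows(conn, batch_size):
    # one pass over the table; only one batch is in memory at a time
    cur = conn.execute("SELECT seq, label FROM samples ORDER BY id")
    while True:
        rows = cur.fetchmany(batch_size)
        if not rows:
            break
        yield rows

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE samples (id INTEGER PRIMARY KEY, seq TEXT, label INTEGER)")
conn.executemany("INSERT INTO samples (seq, label) VALUES (?, ?)",
                 [(f"ACGT{i}", i % 2) for i in range(7)])
batches = list(batched_rows(conn, 3))
print([len(b) for b in batches])  # → [3, 3, 1]
```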
|
<python><sqlite><deep-learning><pytorch>
|
2024-03-07 17:36:15
| 2
| 2,226
|
Qazi Fahim Farhan
|
78,123,245
| 17,617,395
|
How can we draw a interactable CIElab Color Space using python and also plot the data points (having color parameters l*,a*, b* , h and C* values)?
|
<p>I want to measure the color difference between the reference and the sample, and also show the color shift (difference) on the chart itself. For your reference, below is the color space picture, along with the parameters, their ranges, and their significance:</p>
<ul>
<li>L* [lightness - black to white]:[0 to 100]</li>
<li>a* [green to red]:[-128 to 127]</li>
<li>b* [blue to yellow]:[-128 to 127]</li>
<li>C* [Chroma/Saturation]:[0 to 100]</li>
<li>h [hue angle]:[0 to 360]</li>
</ul>
<p><a href="https://i.sstatic.net/aRwok.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aRwok.png" alt="CIElab Color Space" /></a></p>
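The two derived parameters can be computed directly from a* and b* — a minimal sketch of the standard CIE relations, C* = sqrt(a*² + b*²) and h = atan2(b*, a*) expressed in degrees:

```python
import math

def chroma_hue(a, b):
    # C* is the radial distance in the a*-b* plane; h is the angle in degrees
    chroma = math.hypot(a, b)
    hue = math.degrees(math.atan2(b, a)) % 360  # fold into [0, 360)
    return chroma, hue

c, h = chroma_hue(3.0, 4.0)
print(round(c, 2), round(h, 2))  # → 5.0 53.13
```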
|
<python><python-3.x><colors><visualization>
|
2024-03-07 17:17:15
| 0
| 371
|
Abhishek Kashyap
|
78,123,239
| 9,322,863
|
Ensure trivial solution is found to matrix equation
|
<p>I'm trying to solve a matrix equation <code>Ax = b</code> with numpy (actually I'm using <code>scipy.sparse</code>, but I believe the question remains the same). In my setup, <code>b</code> is the derivative of some function.</p>
<p>In my system, it's possible that the equation may sometimes be <code>Ax = 0</code>, in which case I want to get the zero vector back. When I try to do this with a vector of floats, the result I get back is a vector of values on the order of <code>1e-22</code>. This causes havoc at future timesteps and causes weird numerical artefacts when it should be a steady state. When I manually make the right-hand side a vector of integer 0s, I get a vector of exactly zeros back.</p>
<p>How can I do this without having to manually check if the vector is all zeros, and convert to integers if so?</p>
<p>Many thanks</p>
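One way to avoid the manual all-zeros-then-cast dance (a sketch with plain lists; `solve_fn` stands in for `np.linalg.solve` or `scipy.sparse.linalg.spsolve`, and the tolerance is an assumption to tune): wrap the solver and short-circuit on a numerically zero right-hand side.

```python
def solve_or_zero(solve_fn, b, tol=1e-12):
    # a (numerically) zero RHS returns the exact trivial solution, instead
    # of letting the solver emit ~1e-22 round-off noise
    if max(abs(x) for x in b) <= tol:
        return [0.0] * len(b)
    return solve_fn(b)

halve = lambda b: [x / 2 for x in b]  # stand-in for a real linear solver
print(solve_or_zero(halve, [0.0, 0.0]))  # → [0.0, 0.0]
print(solve_or_zero(halve, [2.0, 4.0]))  # → [1.0, 2.0]
```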
|
<python><numpy><scipy><linear-algebra><differential-equations>
|
2024-03-07 17:16:29
| 1
| 323
|
TIF
|
78,123,222
| 11,516,350
|
Error configuring flask-babel method jinja2 not found
|
<p>This is my babel.cfg file:</p>
<pre><code>[python: **.py]
[jinja2: **/templates/**.html]
</code></pre>
<p>When I execute this command:</p>
<p><code>pybabel extract -F babel.cfg -k _l -o messages.pot .</code></p>
<p>Then return this error:</p>
<pre><code>ValueError: Unknown extraction method 'jinja2'
</code></pre>
<p><a href="https://i.sstatic.net/07N3n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/07N3n.png" alt="Error detail" /></a></p>
<p>How is this possible? That config example is taken literally from the docs.</p>
|
<python><flask><translation><flask-babel>
|
2024-03-07 17:13:32
| 1
| 1,347
|
UrbanoJVR
|
78,123,183
| 20,075,659
|
Redis Data Persistance
|
<p>I want to persist Redis's in-memory data to disk (or another store) on my server, so that if a sudden shutdown or restart happens the data can be restored when Redis comes back up. Is there a way to do this from Python without slowing reads/writes on Redis?</p>
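For what it's worth, Redis covers this natively on the server side, so nothing needs to change in the Python client; a hedged <code>redis.conf</code> fragment combining RDB snapshots with an append-only file (the exact thresholds are illustrative):

```
# take an RDB snapshot if at least 1 key changed in the last 900 seconds
save 900 1

# additionally log every write to an append-only file,
# fsynced once per second (a small, bounded loss window)
appendonly yes
appendfsync everysec
```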
|
<python><redis>
|
2024-03-07 17:05:24
| 1
| 396
|
Anon
|
78,123,149
| 7,211,014
|
Flask (connexion) app not reloading monitored extra_files when building app as a python package
|
<p>I have a python flask (using connexion) project that I need to reload if I change any of the files in the project.
This project is built into a python module using <code>setup.py</code>, then run.
I tried using the <code>extra_files</code> parameter when running the project, but this does not help. I could not tell whether <code>extra_files</code> cares about relative paths, so I designed my code to provide full paths for all files. It still does not work.</p>
<p>I build the module with <code>pip install -e .</code>, then run it with <code>python -m thing.app.main</code>.
But the module does not pick up on changes. I thought using links (via the <code>-e</code> editable parameter) would help, but it does not.</p>
<p>I should also mention that I am running all this in my pyenv, so the paths used at run time may be different, as they are under the pyenv path. Is that a problem?</p>
<p>Is this expected? Is there a way I can use <code>extra_files</code>, or will I need to use a watchdog daemon to monitor file changes?</p>
|
<python><flask><package><watchdog><connexion>
|
2024-03-07 17:01:31
| 1
| 1,338
|
Dave
|
78,123,142
| 11,516,350
|
Flask babel is marking fuzzy without reason
|
<p>I have these 2 files:</p>
<pre><code>msgid "Categories"
msgstr "Categories"
msgid "Dashboard"
msgstr "Dashboard"
</code></pre>
<p>And:</p>
<pre><code>msgid "Categories"
msgstr "Categorías"
msgid "Dashboard"
msgstr "Dashboard"
</code></pre>
<p>I don't have the <code>#, fuzzy</code> line. But when compiling without the force flag, it says the file is fuzzy and exits.</p>
<p>I have read this:</p>
<p><a href="https://stackoverflow.com/questions/12555692/flask-babel-translations-de-lc-messages-messages-po-is-marked-as-fuzzy-skip">Flask Babel - 'translations/de/LC_MESSAGES/messages.po' is marked as fuzzy, skipping</a></p>
<p>But in that case the user's file contains the <code>#, fuzzy</code> line. I don't have that line.</p>
<p>The compilation output says that the files are marked as fuzzy.</p>
<p>This is my project structure:</p>
<p><a href="https://i.sstatic.net/5QooV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5QooV.png" alt="enter image description here" /></a></p>
<p>Project structure</p>
<p>Any ideas other than using <code>-f</code>?</p>
|
<python><flask><internationalization><translation><flask-babel>
|
2024-03-07 17:00:45
| 1
| 1,347
|
UrbanoJVR
|
78,123,062
| 9,025,983
|
Python AWS CDK Unable to synthetize stack, Unable to set secret rotation in aws cdk
|
<p>I've been unable to synth my CDK stack. I need to instantiate a Postgres RDS database instance. I've attempted to add a single-user secret rotation schedule, to no avail. Do you have any ideas about what is required to achieve secret rotation? The error message received when trying to synth the stack:</p>
<p><code>[Error at /usecase-1-stack/test-uc1-pgdb/Secret/Resource] AwsSolutions-SMG4: The secret does not have automatic rotation scheduled. AWS Secrets Manager can be configured to automatically rotate the secret for a secured service or database.</code></p>
<p>and cdk code below:</p>
<pre><code>from aws_cdk import aws_rds as rds
from aws_cdk import aws_secretsmanager as sm
from aws_cdk import aws_ec2 as ec2

curated_rds = rds.DatabaseInstance(
    self,
    f"{env_id}-uc1-pgdb",
    database_name=curated_db_name,
    engine=rds.DatabaseInstanceEngine.postgres(
        version=rds.PostgresEngineVersion.VER_14_10
    ),
    port=curated_db_port,
    instance_type=ec2.InstanceType.of(
        ec2.InstanceClass.STANDARD5, ec2.InstanceSize.LARGE
    ),
    credentials=rds.Credentials.from_generated_secret(
        "admin",
        encryption_key=data_key,
        secret_name=f"{env_id}-uc1-pgdb-admin",
    ),
    vpc=data_vpc,
    vpc_subnets=ec2.SubnetSelection(
        subnet_type=ec2.SubnetType.PRIVATE_ISOLATED
    ),
    security_groups=[curated_rds_security_group],
    storage_encrypted=True,
    storage_encryption_key=data_key,
    auto_minor_version_upgrade=True,
    deletion_protection=True,
    multi_az=True,
    publicly_accessible=False,
    enable_performance_insights=True,
)

# curated_rds.add_rotation_single_user(automatically_after=Duration.days(30))
curated_rds.secret.add_rotation_schedule(
    "RotationSchedule",
    hosted_rotation=sm.HostedRotation.postgre_sql_single_user(),
    automatically_after=Duration.days(7),
)
</code></pre>
|
<python><aws-cdk><aws-secrets-manager>
|
2024-03-07 16:46:24
| 1
| 523
|
Francisco
|
78,122,985
| 2,989,089
|
Test for object callability in match-case construct
|
<p>For context, I have a function that matches keys in a dictionary to perform certain actions; to match item keys, the function accepts either a sequence of keys to match, or a function that recognizes those keys.</p>
<p>I'm wondering if I can use the <code>match-case</code> pattern for it. I try something like:</p>
<pre class="lang-py prettyprint-override"><code>def process_fields(dataset, function, fields):
    """Apply function to selected values of a dictionary

    fields can be a list of keys whose values shall be processed,
    or a predicate that returns True for the targeted fields."""
    match fields:
        case list() | set() | tuple():
            key_matcher = lambda x: x in fields
        case <what I'm looking for>:
            key_matcher = fields
    walk_items(dataset, key_matcher, function)
</code></pre>
<p>So far I've tried:</p>
<pre><code>case callable(function):
    key_matcher = function
</code></pre>
<pre><code>case typing.Callable(function):
    key_matcher = function
</code></pre>
<p>I can't find what I need in the official documentation. Am I missing something, or is it not doable?</p>
<p>Example is here to avoid too dry abstraction. Note that I'm NOT looking for alternatives to solve that particular problem, I can perfectly do it myself; I'm looking to find out if it exist a way to use that python structure in particular.</p>
<p><em>Edit: Even though the function itself is not the focus of the post, I've added a docstring and simplified a bit to clarify the example.</em></p>
|
<python>
|
2024-03-07 16:33:55
| 2
| 884
|
Antoine Gallix
|
78,122,965
| 13,392,257
|
Cognito Pre-SignUp Trigger - Empty Lambda Event
|
<p>I am trying to trigger AWS lambda code for AWS Cognito signup</p>
<p>My actions</p>
<ol>
<li>Created the code (AWS Lambda)</li>
</ol>
<pre><code>import json
import boto3
client = boto3.client('cognito-idp', region_name='eu-central-1')
def lambda_handler(event, context):
    print("DEBUG: register a new user")
    str = json.dumps(event, indent=2)
    print("DEBUG EVENT: ", str)  # ERROR - event is empty {}
</code></pre>
<p>2. Configured the trigger (Cognito -> User pool properties -> Pre sign-up Lambda trigger)</p>
<p><strong>Problem</strong> -- the <code>event</code> function argument is empty. Why is the <code>event</code> empty?</p>
<p>On the other hand, I see that the user was created successfully.</p>
<p>I also tried printing the context, but didn't find any useful information:</p>
<pre><code>DEBUG EVENT: {}
DEBUG CONTEXT: LambdaContext([aws_request_id=445ea5a1-8c99-4217-8dca-1ec99d01ceb5,log_group_name=/aws/lambda/setCustomAttribsP,log_stream_name=2024/03/08/[$LATEST]119c906a7f894c14bda7315aad7ad087,function_name=setCustomAttribsP,memory_limit_in_mb=128,function_version=$LATEST,invoked_function_arn=arn:aws:lambda:eu-central-1:011431447545:function:setCustomAttribsP,client_context=None,identity=CognitoIdentity([cognito_identity_id=None,cognito_identity_pool_id=None])])
</code></pre>
<p>I was reading this question <a href="https://stackoverflow.com/questions/55854824/event-object-is-empty-in-aws-lambda-nodejs-function">Event Object is empty in AWS Lambda nodejs function</a>
But I didn't find Proxy settings in my AWS</p>
<p>I can see API Gateway --> APIs (but I don't see 'Resources' button)</p>
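As an aside, a pre-sign-up trigger is expected to return the (possibly modified) event back to Cognito. A minimal hedged sketch of such a handler, exercised locally with a made-up event — <code>autoConfirmUser</code> is part of the documented pre-sign-up response shape, but the sample event below is hypothetical:

```python
import json

def lambda_handler(event, context):
    # default=str keeps logging from failing on non-JSON-serializable fields
    print("DEBUG EVENT:", json.dumps(event, indent=2, default=str))
    # Cognito reads customizations from event["response"]
    event.setdefault("response", {})["autoConfirmUser"] = False
    return event  # the trigger must hand the event back to Cognito

out = lambda_handler({"triggerSource": "PreSignUp_SignUp", "response": {}}, None)
print(out["response"]["autoConfirmUser"])  # → False
```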
|
<python><amazon-web-services><aws-lambda><amazon-cognito>
|
2024-03-07 16:31:22
| 1
| 1,708
|
mascai
|
78,122,848
| 7,949,129
|
Microsoft Graph-API (graph.microsoft.com/v1.0) returns duplicate or non existing mails
|
<p>I have an Outlook account and I see <strong>one Mail</strong> in my inbox folder. I can somehow also expand this mail in the overview which will expand it to two entries. (Probably because it was forwarded once and moved into Archive folder and then back to inbox).
But I cannot delete one of them, I can just delete it completely.</p>
<p>Now, when I fetch the mails via the Graph API, I get a result of <strong>61 mails</strong> with almost the same uuid.</p>
<p>They differ only at the 5th character from behind. And that letter is ascending.</p>
<blockquote>
<p>Example uuid: <em>.....AAJokmf<strong>S</strong>AAA=</em></p>
<p>and the next is: <em>.....AAJokmf<strong>T</strong>AAA=</em></p>
<p>and so on.</p>
</blockquote>
<p>I don't understand why it is like that and I want to have only the one mail returned for my query, which I also see in Outlook Online.</p>
<p>In theory it would be possible to filter the "same" mails based on the common part of the uuid, as it is obviously not exactly the same, but almost. But then I should at least understand, how the uuid is defined and why I am getting those duplicates.</p>
<p>My query looks like this (Python):</p>
<pre><code># init
request = f"grant_type=password&client_id={oauth_client_id}&username={user}&scope=user.read%20openid%20profile%20offline_access&password=" + \
requests.utils.quote(password)
user_result = requests.post(
f"https://login.microsoftonline.com/{oauth_tenant_id}/oauth2/v2.0/token", data=request)
if user_result.status_code >= 300 or not 'access_token' in user_result.json():
error_msg = f"MailConnection.__enter__ failed with error: {json.loads(user_result.content)['error']} : {json.loads(user_result.content)['error_description']}"
print(error_msg)
raise Exception(error_msg)
access_token = user_result.json()['access_token']
# send
endpoint = f'https://graph.microsoft.com/v1.0/users/{user}/mailFolders'
mailfolders = requests.get(endpoint,
headers={'Authorization': 'Bearer ' + access_token})
mailfolders = json.loads(mailfolders.text)
print(mailfolders)
src_folder = 'inbox'
src_folder_param = list(filter(lambda folder: folder['displayName'] == src_folder, mailfolders['value'] ))
endpoint = f'https://graph.microsoft.com/v1.0/users/{user}/mailFolders/{src_folder_param[0]["id"]}/messages?$top=100'
res = requests.get(endpoint, headers={'Authorization': 'Bearer ' + access_token})
resultmails = json.loads(res.text)
for mail in resultmails['value']:
print(mail)
print(len(resultmails['value'])) # why is it 61? there is only one mail in the inbox
</code></pre>
<p>Does anybody know how to resolve this? Any help would be really appreciated.</p>
|
<python><email><python-requests><microsoft-graph-api><office365>
|
2024-03-07 16:12:47
| 0
| 359
|
A. L
|
78,122,836
| 3,385,432
|
Difference between numpy power and ** for certain values
|
<p>I have a numpy array where the entries in <code>f**2</code> differ from <code>f[i]**2</code>, but only for some specific value.</p>
<pre><code>import numpy as np
np.set_printoptions(precision = 16)
f = np.array([ -40709.6555510835, -40708.6555510835, -33467.081758611654, -27653.379955714125])
f2 = f**2
# f2 = np.power(f,2)
print("outside loop", np.abs(f[1]**2 - f2[1]), np.abs(1.0 - f2[1] / f[1]**2), f.dtype, f[1].dtype, f2.dtype, f2[1].dtype)
for i, val in enumerate(f):
print("inside loop", i, np.abs(val**2 - f2[i]), np.abs(1.0 - f2[i] / val**2), val.dtype, f2.dtype, f2[i].dtype)
</code></pre>
<p>Produces output:</p>
<pre><code>outside loop 2.384185791015625e-07 2.220446049250313e-16 float64 float64 float64 float64
inside loop 0 0.0 0.0 float64 float64 float64
inside loop 1 2.384185791015625e-07 2.220446049250313e-16 float64 float64 float64
inside loop 2 0.0 0.0 float64 float64 float64
inside loop 3 0.0 0.0 float64 float64 float64
</code></pre>
<p>I do note that this is a relative error on the order of epsilon.</p>
<p>This issue goes away when using <code>np.power</code> instead of <code>**</code> in the definition of <code>f2</code>.
Even so, why is <code>f[i]**2</code> not the same as the <code>i</code>th value of <code>f**2</code> (even if only for certain values in <code>f</code>)?</p>
<p>I'm using python 3.10.6 and the latest numpy 1.26.4.</p>
<h2>Edit:</h2>
<p>The fundamental issue is captured in:</p>
<pre><code>import numpy as np
f = np.array([-40708.6555510835])
print((f[0])**2 - (f**2)[0])
</code></pre>
<p>which displays a value of</p>
<pre><code>-2.384185791015625e-07
</code></pre>
<p>I would like to know why that specific number has this specific issue. If you'd like confirmation, or to try different values for <code>f</code>, see this <a href="https://ato.pxeger.com/run?1=m72soLIkIz9vwYKlpSVpuhY3SzNzC_KLShTySnMLKhUSixXyCrjyCvSKU0viC4oy80ryC0oy8_OKNQqKUpMzi4FMBVsFQzNNrjQgDVSXWFSUWKkRrWtiYG5goWdmCgSGBhbGprGaXGDtGhpp0QaxmlpaRgq6ChppQFoTxIdYDnUDzC0A" rel="nofollow noreferrer">demo</a>.</p>
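<p>For what it's worth, the size of the discrepancy can be checked against one unit in the last place (ULP) at that magnitude. A small sketch (whether the scalar and array code paths actually differ may depend on the machine and numpy build, but in the reported case the gap is exactly one ULP):</p>

```python
import numpy as np

x = -40708.6555510835
scalar = x ** 2                   # scalar code path
vector = (np.array([x]) ** 2)[0]  # array code path (may round differently)

diff = abs(scalar - vector)
ulp = np.spacing(scalar)  # size of one ULP at this magnitude (~2.38e-07 here)
print(diff, ulp)
```

Since both results are within rounding of the true product, their difference should not exceed one ULP, matching the <code>2.384185791015625e-07</code> seen above.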
|
<python><numpy>
|
2024-03-07 16:11:35
| 3
| 988
|
jmlarson
|
78,122,820
| 7,662,164
|
Equivalent of `jax.lax.cond` for multiple boolean conditions
|
<p>Currently <code>jax.lax.cond</code> works for one boolean condition. Is there a way to extend it to multiple boolean conditions?</p>
<p>As an example, below is an untraceable function:</p>
<pre><code>def func(x):
if x < 0: return x
elif (x >= 0) & (x < 1): return 2*x
else: return 3*x
</code></pre>
<p>How to write this function in JAX in a traceable way?</p>
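<p>For reference, one illustrative way (not necessarily the only one) to make such piecewise logic traceable is to replace the Python branching with nested <code>jnp.where</code>, which evaluates the branches element-wise and selects from them:</p>

```python
import jax
import jax.numpy as jnp

@jax.jit
def func(x):
    # jnp.where computes both branches but selects per element,
    # so no Python-level control flow needs to be traced.
    return jnp.where(x < 0, x, jnp.where(x < 1, 2 * x, 3 * x))
```

This also works on arrays, e.g. <code>func(jnp.array([-2.0, 0.5, 2.0]))</code>.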
|
<python><conditional-statements><jit><jax>
|
2024-03-07 16:08:51
| 1
| 335
|
Jingyang Wang
|
78,122,755
| 9,107,502
|
PySide GUI freezes during filtering using QSortFilterProxyModel
|
<p>I'm developing a PySide6/Python3 application, that contains a QTableView with a custom model <code>DataFrameTableModel</code>. I want to support filtering the rendered table. Hence, I'm using <code>QSortFilterProxyModel</code> in addition.</p>
<p>One requirement is to filter based on columns with different operators, e.g. <em>filter all rows with a value >= 5 in column x</em>. For representing a filter I implemented the class <code>DataFrameFilter</code> which basically just stores something like <code>{column: 'Price', operator: 'eq', value: 12}</code>. To apply the custom filter format, I created a class <code>DataFrameSortFilterProxyModel</code> that is inheriting from <code>QSortFilterProxyModel</code>.</p>
<pre><code>from enum import Enum
from PySide6.QtCore import QAbstractTableModel, QSortFilterProxyModel, QModelIndex, Qt
import pandas as pd
class DataFrameTableModel(QAbstractTableModel):
def __init__(self, df: pd.DataFrame = None):
super(DataFrameTableModel, self).__init__()
self._df: pd.DataFrame = df
def rowCount(self, parent: QModelIndex = ...) -> int:
if parent.isValid() or self._df is None:
return 0
return self._df.shape[0]
def columnCount(self, parent: QModelIndex = ...) -> int:
if parent.isValid() or self._df is None:
return 0
return self._df.shape[1]
def data(self, index: QModelIndex, role: int = ...) -> object:
if index.isValid() and self._df is not None:
value = self._df.iloc[index.row(), index.column()]
if role == Qt.ItemDataRole.DisplayRole:
return str(value)
elif role == Qt.ItemDataRole.UserRole:
return value
def headerData(self, section: int, orientation: Qt.Orientation, role: int = ...) -> object:
if self._df is not None:
if role == Qt.ItemDataRole.DisplayRole:
if orientation == Qt.Orientation.Horizontal:
return str(self._df.columns[section])
else:
return str(self._df.index[section])
elif role == Qt.ItemDataRole.UserRole:
if orientation == Qt.Orientation.Horizontal:
return self._df.columns[section]
else:
return self._df.index[section]
def flags(self, index: QModelIndex) -> Qt.ItemFlag:
return Qt.ItemFlag.ItemIsSelectable | Qt.ItemFlag.ItemIsEnabled
@property
def df(self) -> pd.DataFrame:
return self._df
@df.setter
def df(self, value: pd.DataFrame):
self._df = value
self.layoutChanged.emit()
class DataFrameFilterOperation(Enum):
EQUAL = "eq"
NOT_EQUAL = "ne"
GREATER_THAN = "gt"
GREATER_THAN_OR_EQUAL = "ge"
LESS_THAN = "lt"
LESS_THAN_OR_EQUAL = "le"
class DataFrameFilter:
def __init__(self, column: str, column_index: int, operation: DataFrameFilterOperation, value):
self._column = column
self._column_index = column_index
self._operation = operation
self._value = value
@property
def column(self) -> str:
return self._column
@property
def column_index(self) -> int:
return self._column_index
@property
def operation(self) -> DataFrameFilterOperation:
return self._operation
@property
def value(self):
return self._value
def __eq__(self, value: object) -> bool:
if not isinstance(value, DataFrameFilter):
return False
return self._column == value.column and self._column_index == value.column_index and self._operation == value.operation and self._value == value.value
def __ne__(self, __value: object) -> bool:
return not self.__eq__(__value)
class DataFrameSortFilterProxyModel(QSortFilterProxyModel):
OPERATIONS = {
DataFrameFilterOperation.EQUAL: lambda x, y: x == y,
DataFrameFilterOperation.NOT_EQUAL: lambda x, y: x != y,
DataFrameFilterOperation.GREATER_THAN: lambda x, y: x > y,
DataFrameFilterOperation.GREATER_THAN_OR_EQUAL: lambda x, y: x >= y,
DataFrameFilterOperation.LESS_THAN: lambda x, y: x < y,
DataFrameFilterOperation.LESS_THAN_OR_EQUAL: lambda x, y: x <= y
}
def __init__(self):
super(DataFrameSortFilterProxyModel, self).__init__()
self._filters = []
def filterAcceptsRow(self, source_row: int, source_parent: QModelIndex) -> bool:
result = []
for filter in self._filters:
value = self.sourceModel().index(source_row, filter.column_index, source_parent).data(Qt.ItemDataRole.UserRole)
result.append(self.OPERATIONS[filter.operation](value, filter.value))
return all(result)
def lessThan(self, left: QModelIndex, right: QModelIndex) -> bool:
left_value = left.data(Qt.ItemDataRole.UserRole)
right_value = right.data(Qt.ItemDataRole.UserRole)
return left_value < right_value
def add_filter(self, filter: DataFrameFilter):
self._filters.append(filter)
self.invalidate()
def remove_filter(self, filter: DataFrameFilter):
self._filters.remove(filter)
self.invalidate()
def clear_filters(self):
self._filters.clear()
self.invalidate()
</code></pre>
<p><strong>Problem:</strong> Basically, everything works wonderfully for small data sets. The problem is that for larger data sets (~60000 rows) filtering obviously takes so long that the GUI freezes for a couple of seconds. I thought about moving the filtering logic to a second thread (<code>QThread</code>), but the UI should only be manipulated in the GUI thread, and since editing the model also changes the UI, I cannot adapt the model from a second thread.</p>
<p>It's not a problem if the filtering takes a few seconds; the UI just shouldn't freeze during this time, so that a progress bar or something similar can be displayed. Any suggestions or solutions?</p>
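<p>As a side note on the filtering cost itself, independent of threading: <code>filterAcceptsRow</code> is called once per source row, so with ~60000 rows the per-row Python overhead dominates. A hedged sketch of doing the same comparisons vectorised in pandas instead (the helper and filter-tuple shape below are hypothetical, not part of the Qt API):</p>

```python
import pandas as pd

# Same operator mapping as in the proxy model, but applied to whole Series.
OPERATIONS = {
    "eq": lambda s, v: s == v,
    "ne": lambda s, v: s != v,
    "gt": lambda s, v: s > v,
    "ge": lambda s, v: s >= v,
    "lt": lambda s, v: s < v,
    "le": lambda s, v: s <= v,
}

def apply_filters(df, filters):
    """filters: list of (column, op, value) tuples; returns the filtered frame."""
    mask = pd.Series(True, index=df.index)
    for column, op, value in filters:
        mask &= OPERATIONS[op](df[column], value)
    return df[mask]
```

One vectorised pass like this over 60000 rows is typically far cheaper than 60000 Python-level <code>filterAcceptsRow</code> calls, which is why pre-filtering the <code>DataFrame</code> and resetting the model can outperform a proxy model here.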
<p><strong>EDIT 03/11/24</strong></p>
<p>I came up with a custom solution by implementing a custom <code>QTableModel</code> without using a <code>QSortFilterProxyModel</code> at all. The idea is to outsource filtering to a second <code>QThread</code> controlled by a method of the custom model. The model itself isn't altered until filtering is finished. The separate thread returns the filtered <code>DataFrame</code>, which is then applied to the model instance that started the thread. This means the UI no longer freezes, and loading animations can be displayed and controlled via Qt Signals.</p>
<p><code>model.py</code>:</p>
<pre><code>from enum import Enum
from PySide6.QtCore import QAbstractTableModel, QModelIndex, Qt, QThreadPool, Signal
import pandas as pd
from thread import DataFrameFilterTask
class DataFrameTableModel(QAbstractTableModel):
beginFiltering = Signal()
endFiltering = Signal()
beginSorting = Signal()
endSorting = Signal()
beginTransforming = Signal()
endTransforming = Signal()
def __init__(self, base_df: pd.DataFrame = None):
super(DataFrameTableModel, self).__init__()
self._base_df: pd.DataFrame = base_df
self._transformed_df: pd.DataFrame = None
self._filters: list[DataFrameFilter] = []
self._is_filtering = False
self._is_sorting = False
def rowCount(self, parent: QModelIndex = ...) -> int:
if parent.isValid() or self._current_df is None:
return 0
return self._current_df.shape[0]
def columnCount(self, parent: QModelIndex = ...) -> int:
if parent.isValid() or self._current_df is None:
return 0
return self._current_df.shape[1]
def data(self, index: QModelIndex, role: int = ...) -> object:
if index.isValid() and self._current_df is not None:
value = self._current_df.iloc[index.row(), index.column()]
if role == Qt.ItemDataRole.DisplayRole:
return str(value)
elif role == Qt.ItemDataRole.UserRole:
return value
def headerData(self, section: int, orientation: Qt.Orientation, role: int = ...) -> object:
if self._current_df is not None:
if role == Qt.ItemDataRole.DisplayRole:
if orientation == Qt.Orientation.Horizontal:
return str(self._current_df.columns[section])
else:
return str(self._current_df.index[section])
elif role == Qt.ItemDataRole.UserRole:
if orientation == Qt.Orientation.Horizontal:
return self._current_df.columns[section]
else:
return self._current_df.index[section]
def flags(self, index: QModelIndex) -> Qt.ItemFlag:
return Qt.ItemFlag.ItemIsSelectable | Qt.ItemFlag.ItemIsEnabled
@property
def base_df(self) -> pd.DataFrame:
return self._base_df
@base_df.setter
def base_df(self, value: pd.DataFrame):
self._base_df = value
self._transformed_df = None
self.layoutChanged.emit()
@property
def transformed_df(self) -> pd.DataFrame:
return self._transformed_df
@property
def filters(self) -> list[DataFrameFilter]:
return self._filters
@property
def is_filtering(self) -> bool:
return self._is_filtering
@property
def is_sorting(self) -> bool:
return self._is_sorting
@property
def is_transforming(self) -> bool:
return self._is_filtering or self._is_sorting
@property
def _current_df(self) -> pd.DataFrame:
return self._base_df if self._transformed_df is None else self._transformed_df
def add_filter(self, filter: DataFrameFilter):
self._filters.append(filter)
self._apply_filters()
def remove_filter(self, filter: DataFrameFilter):
self._filters.remove(filter)
self._apply_filters()
def clear_filters(self):
self._filters.clear()
self._apply_filters()
def _apply_filters(self):
self.beginFiltering.emit()
self._is_filtering = True
task = DataFrameFilterTask(self._base_df.copy(deep=True), self._filters)
task.signals.data.connect(self._on_filter_task_data)
task.signals.finished.connect(self._on_filter_task_finished)
task.signals.error.connect(self._on_filter_task_error)
QThreadPool.globalInstance().start(task)
def _on_filter_task_data(self, df: pd.DataFrame):
self.beginResetModel()
self._transformed_df = df
self.endResetModel()
def _on_filter_task_finished(self):
self._is_filtering = False
self.endFiltering.emit()
def _on_filter_task_error(self, error: tuple[Exception, type, str]):
raise error[0]
</code></pre>
<p><code>thread.py</code>:</p>
<pre><code>import sys
import traceback
from PySide6.QtCore import QRunnable, Signal, QObject
import pandas as pd
class DataFrameFilterTaskSignals(QObject):
finished = Signal()
    error = Signal(tuple)  # (exception, exception type, formatted traceback)
data = Signal(pd.DataFrame)
class DataFrameFilterTask(QRunnable):
OPERATIONS = {
"eq": lambda x, y: x == y,
"ne": lambda x, y: x != y,
"lt": lambda x, y: x < y,
"le": lambda x, y: x <= y,
"gt": lambda x, y: x > y,
"ge": lambda x, y: x >= y
}
def __init__(self, df: pd.DataFrame, filters: list):
super(DataFrameFilterTask, self).__init__()
self.signals = DataFrameFilterTaskSignals()
self._df = df
self._filters = filters
def run(self):
try:
for filter in self._filters:
if self._df[filter.column].dtype != type(filter.value):
self._df = self._df[self._df[filter.column].apply(lambda x: type(x) == type(filter.value))]
self._df = self._df[self.OPERATIONS[filter.operation.value](self._df[filter.column], filter.value)]
else:
self._df = self._df[self.OPERATIONS[filter.operation.value](self._df[filter.column], filter.value)]
self.signals.data.emit(self._df)
except:
traceback.print_exc()
exctype, value = sys.exc_info()[:2]
self.signals.error.emit((value, exctype, traceback.format_exc()))
finally:
self.signals.finished.emit()
</code></pre>
<p>Any comments on my solution? It works without any freezing issues, but I'm not sure whether this is common practice or more of a quick-and-dirty solution... Would it be better practice to keep <code>DataFrameTableModel</code> untouched and create a second <code>DataFrameSortFilterProxyModel</code> inheriting from <code>QAbstractProxyModel</code> that contains the thread-handling logic and updates the <code>sourceModel()</code> (a <code>DataFrameTableModel</code> instance) from outside the class by setting the filtered <code>DataFrame</code>, to satisfy the separation-of-concerns principle?</p>
|
<python><qt><pyqt><pyside><pyside6>
|
2024-03-07 15:58:51
| 1
| 1,320
|
Constantin Müller
|
78,122,677
| 7,631,505
|
Matplotlib 3d zoom issue
|
<p>Here to ask a question about the basic matplotlib 3d plotting tools. Now, let me make some dummy data to explain my issue:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(-1, 1, 1000)
y = x
X, Y = np.meshgrid(x, y)
f = np.exp(-(X**2 / 0.5) - Y**2 / 0.5)
</code></pre>
<p>I have a bunch of plots I want to stack side by side because they are connected by a parameter, here I'm just going to re-use the same data as it doesn't change much my problem. So my basic plot is appearing like this:</p>
<pre><code>fig, ax = plt.subplots(1, 1, subplot_kw={"projection": "3d"})
for i in range(5):
ax.contourf(x, y, f, zdir="z", offset=i)
ax.view_init(elev=15, azim=45, roll=-90)
ax.set_zlim3d(-0.5, 3.5)
ax.set_box_aspect((3, 3, 7), zoom=1)
</code></pre>
<p>Where I see a nice plot on the side:
<a href="https://i.sstatic.net/MNbWc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MNbWc.png" alt="enter image description here" /></a>
Now, since this graph is eventually going to be included in a larger figure, I tried to reduce the amount of white space around it. This is apparently a question as old as matplotlib itself; unfortunately I wasn't able to find an easy solution, although changing the aspect ratio helped cover the space in a nicer way. Now my issue: if I try to zoom, i.e. change <code>ax.set_box_aspect((3, 3, 7), zoom=1)</code> to <code>ax.set_box_aspect((3, 3, 7), zoom=2)</code>:
<a href="https://i.sstatic.net/pJA5Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pJA5Q.png" alt="enter image description here" /></a>
The plots only show up in the centre. Regardless of the fact that this is too large a zoom, aesthetically speaking, I'm confused as to why the plot that should appear at z=3 doesn't show up, and the one at 0 is cut. I can't add a video, but if I try to use the drag-zoom tool I can see that my data will only be shown in the centre of the figure (where the current figure is cut), and any attempt to move it around ends up with a similar result: only the centre of the 3d axis shows the data.
Now I wonder: why does this happen? Is this because of the implementation of 3d axes in matplotlib? Is there a reason why it's been coded this way? Can I do anything about it?</p>
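<p>For what it's worth, one way to trim the surrounding white space without touching <code>zoom</code> at all (a sketch only; it sidesteps the clipping rather than explaining it) is to shrink the figure margins and crop at save time:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs without a display
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(-1, 1, 100)
X, Y = np.meshgrid(x, x)
f = np.exp(-(X**2 / 0.5) - Y**2 / 0.5)

fig, ax = plt.subplots(subplot_kw={"projection": "3d"})
for i in range(5):
    ax.contourf(x, x, f, zdir="z", offset=i)
ax.view_init(elev=15, azim=45, roll=-90)
ax.set_zlim3d(-0.5, 3.5)
ax.set_box_aspect((3, 3, 7), zoom=1)  # keep zoom=1 to avoid the clipping
fig.subplots_adjust(left=0, right=1, bottom=0, top=1)  # shrink figure margins
fig.savefig("stacked_contours.png", bbox_inches="tight", pad_inches=0.05)
```

The <code>bbox_inches="tight"</code> crop removes most of the remaining padding in the saved file, which may be enough when the plot is embedded in a larger figure.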
<p>Thank you.</p>
|
<python><matplotlib><matplotlib-3d>
|
2024-03-07 15:48:51
| 1
| 316
|
mmonti
|
78,122,636
| 4,704,335
|
How do you cleanly terminate a zmq steerable proxy in python?
|
<h2>Context</h2>
<p>I am implementing a many-to-many in-process Pub/Sub interface with Python zmq that can be built up and torn down as needed in a running application. I am using a proxy to connect the <code>XSUB</code> and <code>XPUB</code> sockets and am utilizing the <code>zmq.proxy_steerable</code> variant so that I can use a control socket to send the <code>TERMINATE</code> command to stop the proxy. When I do this, I get an error when the <code>TERMINATE</code> command is received by the proxy.</p>
<h2>Minimum Working Example (MWE)</h2>
<p>In the following MWE, a background thread is used to start the steerable proxy. After a brief sleep of 3 seconds, the <code>TERMINATE</code> signal is sent to the control socket, which causes the <code>zmq.proxy_steerable</code> line in the thread to raise an error.</p>
<p><code>mwe.py</code></p>
<pre class="lang-py prettyprint-override"><code>import zmq
from threading import Thread
class Proxy:
def __init__(self, context: zmq.Context):
self.context = context
self.thread = Thread(
daemon=True,
target=self._run,
name='Proxy')
self.thread.start()
def _run(self):
publish_socket = self.context.socket(zmq.XPUB)
publish_socket.bind("inproc://subscribe")
subscribe_socket = self.context.socket(zmq.XSUB)
subscribe_socket.bind("inproc://publish")
control_socket = self.context.socket(zmq.SUB)
control_socket.connect("inproc://proxy")
control_socket.setsockopt_string(zmq.SUBSCRIBE, '')
zmq.proxy_steerable(
publish_socket, subscribe_socket,
control=control_socket)
def stop(self):
socket = self.context.socket(zmq.PUB)
socket.bind("inproc://proxy")
socket.send_string('TERMINATE')
self.thread.join()
self.thread = None
if __name__ == "__main__":
import time
context = zmq.Context()
proxy = Proxy(context)
time.sleep(3)
proxy.stop()
</code></pre>
<p>Which when run, raises the following error.</p>
<pre class="lang-bash prettyprint-override"><code>>> python mwe.py
Exception in thread Proxy:
Traceback (most recent call last):
File "/home/bellockk/.conda/envs/zmq/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/home/bellockk/.conda/envs/zmq/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/home/bellockk/Development/zeromq/mwe.py", line 21, in _run
zmq.proxy_steerable(
File "zmq/backend/cython/_proxy_steerable.pyx", line 56, in zmq.backend.cython._proxy_steerable.proxy_steerable
File "zmq/backend/cython/checkrc.pxd", line 28, in zmq.backend.cython.checkrc._check_rc
zmq.error.ZMQError: Operation not supported
</code></pre>
<p>I believe this indicates that <code>TERMINATE</code> is being received by the control socket, but a very non-descriptive error is raised. I'm currently catching and discarding this error to work around the issue, but I want to implement it properly.</p>
<hr />
<p>Note: Everything works fine if I let garbage collection take care of everything at program close, but for my actual use case I need to build up and tear down a Pub/Sub interface over several iterations, so I need to be able to have the proxy cleanly stop and be able to restart later.</p>
<hr />
<p>I am just a few days into learning <code>zmq</code>, so I'm likely making a simple mistake. Any help is greatly appreciated!</p>
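<p>For completeness, a workaround sketch under the assumption that a hand-rolled forwarding loop is acceptable: instead of <code>zmq.proxy_steerable</code> and its control commands, poll the two sockets with <code>zmq.Poller</code> and check a <code>threading.Event</code> each iteration, so shutdown no longer depends on the control socket at all:</p>

```python
import threading
import zmq

def run_proxy(context, stop_event):
    """Forward between XPUB and XSUB until stop_event is set."""
    xpub = context.socket(zmq.XPUB)
    xpub.bind("inproc://subscribe")
    xsub = context.socket(zmq.XSUB)
    xsub.bind("inproc://publish")
    poller = zmq.Poller()
    poller.register(xpub, zmq.POLLIN)
    poller.register(xsub, zmq.POLLIN)
    while not stop_event.is_set():
        # Short timeout (ms) so the stop flag is re-checked regularly.
        for sock, _ in poller.poll(timeout=100):
            if sock is xsub:
                xpub.send_multipart(sock.recv_multipart())
            else:  # subscription messages travel XPUB -> XSUB
                xsub.send_multipart(sock.recv_multipart())
    xpub.close(linger=0)
    xsub.close(linger=0)
```

Stopping then becomes <code>stop_event.set()</code> followed by <code>thread.join()</code>, and the loop tears its sockets down cleanly, so the proxy can be rebuilt later with the same context.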
|
<python><pyzmq>
|
2024-03-07 15:42:32
| 1
| 1,066
|
Kenneth E. Bellock
|
78,122,590
| 8,378,586
|
How to preserve type hints despite variable type annotation during assignment
|
<p>I am writing a <code>Lazy</code> class that works like <code>partial</code>, but has some extra functionality.
The goal is to pre-initialize an object by wrapping it in the <code>Lazy</code> class and later finish the initialization by calling <code>to_eager</code>, where the user can provide some additional arguments. The issue is with preserving type hints in the IDE.</p>
<p>Here is the code</p>
<pre><code>from typing import Callable, Generic, ParamSpec, Type, TypeVar
T = TypeVar("T")
P = ParamSpec("P")
class Lazy(Generic[T, P]): # Any makes it ok to assign to any other type
def __init__(
self, cls: Type[T] | Callable[P, T], *args: P.args, **kwargs: P.kwargs
):
self.cls = cls
self.args = args
self.kwargs = kwargs
def to_eager(self, *args: P.args, **kwargs: P.kwargs) -> T:
assert not args
kwg = {**self.kwargs, **kwargs}
return self.cls(*self.args, **kwg)
class SomeClass:
def __init__(self, y: int = 1):
self.y = y
l = Lazy(SomeClass, y=5)
l.to_eager() # here VSCode hints me with (y: int -> 1)
</code></pre>
<p>When I try to call <code>to_eager()</code>, I get a nice hint about possible args for the original class.</p>
<p><a href="https://i.sstatic.net/qz8bt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qz8bt.png" alt="enter image description here" /></a></p>
<p>Now the issue is, if this is combined with variable type assignment</p>
<pre><code>l: Lazy = Lazy(SomeClass, y=5)
l.to_eager() # type hint is gone, I get (... -> Unknown)
</code></pre>
<p>This can be improved a bit by</p>
<pre><code>l: Lazy[SomeClass, ...] = Lazy(SomeClass, y=5)
l.to_eager() # I get (... -> SomeClass)
</code></pre>
<p>but this does not contain the args.</p>
<p>My question is: how to preserve the args type hints in this case while also doing the variable type annotation? The type annotation is needed, e.g., when I combine this with a dataclass. Maybe there is some configuration for <code>pyright</code> to not do type widening? Or perhaps VSCode has some useful option for this?</p>
<p>What I tried so far:</p>
<ol>
<li>Pass <code>P</code> as the second argument in <code>Lazy[SomeClass, P]</code>, but I get a warning that <code>P</code> has no meaning in this context, and still no type hints from VSCode</li>
</ol>
|
<python><python-typing><pyright>
|
2024-03-07 15:36:05
| 1
| 308
|
JAV
|
78,122,541
| 2,610,933
|
Unexpected keyword argument 'use_dora' when attempting to generate summary from Mistral7B fine-tuned LLM
|
<p>I have fine-tuned a Mistral7B LLM using LoRA in a 16-bit configuration with the samsum training set from Hugging Face. The idea is to feed the fine-tuned LLM a conversation and it should generate a summary.</p>
<p>Here is my training script:</p>
<pre><code># set the train & validation data set
from datasets import load_dataset
train_dataset = load_dataset('json', data_files='/data/datasets/summarisation/samsum/train.json', split='train')
eval_dataset = load_dataset('json', data_files='/data/datasets/summarisation/samsum/validation.json', split='train')
# Set up the Accelerator. I'm not sure if we really need this for a QLoRA given its description (I have to read more about it) but it seems it can't hurt, and it's helpful to have the code for future reference. You can always comment out the accelerator if you want to try without.
from accelerate import FullyShardedDataParallelPlugin, Accelerator
from torch.distributed.fsdp.fully_sharded_data_parallel import FullOptimStateDictConfig, FullStateDictConfig
fsdp_plugin = FullyShardedDataParallelPlugin(
state_dict_config=FullStateDictConfig(offload_to_cpu=True, rank0_only=False),
optim_state_dict_config=FullOptimStateDictConfig(offload_to_cpu=True, rank0_only=False),
)
accelerator = Accelerator(fsdp_plugin=fsdp_plugin)
# Let's use Weights & Biases to track our training metrics. You'll need to apply an API key when prompted. Feel free to skip this if you'd like, and just comment out the `wandb` parameters in the `Trainer` definition below.
import wandb, os
wandb.login()
wandb_project = "mistral-samsun-finetune"
if len(wandb_project) > 0:
os.environ["WANDB_PROJECT"] = wandb_project
# Formatting prompts
# Then create a `formatting_func` to structure training examples as prompts.
def formatting_func(example):
text = f"### Dialog: {example['dialogue']}\n ### Summary: {example['summary']}"
return text
### 2. Load Base Model
# Let's now load Mistral - mistralai/Mistral-7B-v0.1 - using 4-bit quantization!
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, AutoConfig
base_model_id = "mistralai/Mistral-7B-v0.1"
# 4 bit config
#bnb_config = BitsAndBytesConfig(
# load_in_4bit=True,
# bnb_4bit_use_double_quant=True,
# bnb_4bit_quant_type="nf4",
# bnb_4bit_compute_dtype=torch.bfloat16
#)
#model = AutoModelForCausalLM.from_pretrained(base_model_id, quantization_config=bnb_config, device_map="auto")
#16 bit
config = AutoConfig.from_pretrained(base_model_id)
config.quantization = "int8" # Set quantization to int8 for 16-bit precision
model = AutoModelForCausalLM.from_pretrained(base_model_id, config=config)
### 3. Tokenization
#Set up the tokenizer. Add padding on the left as it [makes training use less memory](https://ai.stackexchange.com/questions/41485/while-fine-tuning-a-decoder-only-llm-like-llama-on-chat-dataset-what-kind-of-pa).
#For `model_max_length`, it's helpful to get a distribution of your data lengths. Let's first tokenize without the truncation/padding, so we can get a length distribution.
tokenizer = AutoTokenizer.from_pretrained(
base_model_id,
padding_side="left",
add_eos_token=True,
add_bos_token=True,
)
tokenizer.pad_token = tokenizer.eos_token
def generate_and_tokenize_prompt(prompt):
return tokenizer(formatting_func(prompt))
# return tokenizer(prompt, padding="max_length", truncation=True)
# Reformat the prompt and tokenize each sample:
tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt)
tokenized_val_dataset = eval_dataset.map(generate_and_tokenize_prompt)
# Let's get a distribution of our dataset lengths, so we can determine the appropriate `max_length` for our input tensors.
import matplotlib.pyplot as plt
def plot_data_lengths(tokenized_train_dataset, tokenized_val_dataset):
lengths = [len(x['input_ids']) for x in tokenized_train_dataset]
lengths += [len(x['input_ids']) for x in tokenized_val_dataset]
print(len(lengths))
# Plotting the histogram
plt.figure(figsize=(10, 6))
plt.hist(lengths, bins=20, alpha=0.7, color='blue')
plt.xlabel('Length of input_ids')
plt.ylabel('Frequency')
plt.title('Distribution of Lengths of input_ids')
plt.show()
plot_data_lengths(tokenized_train_dataset, tokenized_val_dataset)
#From here, you can choose where you'd like to set the max_length to be. You can truncate and pad training examples to fit them to your chosen size. Be aware that choosing a larger max_length has its compute tradeoffs.
# I'm using my personal notes to train the model, and they vary greatly in length. I spent some time cleaning the dataset so the samples were about the same length, cutting up individual notes if needed, but being sure to not cut in the middle of a word or sentence.
# Now let's tokenize again with padding and truncation, and set up the tokenize function to make labels and input_ids the same. This is basically what self-supervised fine-tuning is.
#max_length = 512 # This was an appropriate max length for my dataset
#We have a dynamic max length
max_length = max(max(len(x['input_ids']) for x in tokenized_train_dataset), max(len(x['input_ids']) for x in tokenized_val_dataset))
print(f"Max length: {max_length}")
wandb.init()
wandb.log({"Max length": max_length})
def generate_and_tokenize_prompt2(prompt):
result = tokenizer(
formatting_func(prompt),
truncation=True,
max_length=max_length,
padding="max_length",
)
result["labels"] = result["input_ids"].copy()
return result
tokenized_train_dataset = train_dataset.map(generate_and_tokenize_prompt2)
tokenized_val_dataset = eval_dataset.map(generate_and_tokenize_prompt2)
# Check that `input_ids` is padded on the left with the `eos_token` (2) and there is an `eos_token` 2 added to the end, and the prompt starts with a `bos_token` (1).
print(tokenized_train_dataset[1]['input_ids'])
# Now all the samples should be the same length, `max_length`.
plot_data_lengths(tokenized_train_dataset, tokenized_val_dataset)
# Set Up LoRA
# Now, to start our fine-tuning, we have to apply some preprocessing to the model to prepare it for training. For that use the prepare_model_for_kbit_training method from PEFT.
from peft import prepare_model_for_kbit_training
model.gradient_checkpointing_enable()
model = prepare_model_for_kbit_training(model)
def print_trainable_parameters(model):
# """
# Prints the number of trainable parameters in the model.
# """
trainable_params = 0
all_param = 0
for _, param in model.named_parameters():
all_param += param.numel()
if param.requires_grad:
trainable_params += param.numel()
print(
f"trainable params: {trainable_params} || all params: {all_param} || trainable%: {100 * trainable_params / all_param}"
)
#Let's print the model to examine its layers, as we will apply QLoRA to all the linear layers of the model. Those layers are q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj, and lm_head.
print(model)
#Here we define the LoRA config.
#`r` is the rank of the low-rank matrix used in the adapters, which thus controls the number of parameters trained. A higher rank will allow for more expressivity, but there is a compute tradeoff.
#`alpha` is the scaling factor for the learned weights. The weight matrix is scaled by `alpha/r`, and thus a higher value for `alpha` assigns more weight to the LoRA activations.
#The values used in the QLoRA paper were `r=64` and `lora_alpha=16`, and these are said to generalize well, but we will use `r=32` and `lora_alpha=64` so that we have more emphasis on the new fine-tuned data while also reducing computational complexity.
from peft import LoraConfig, get_peft_model
config = LoraConfig(
r=32,
lora_alpha=64,
target_modules=[
# "q_proj",
# "k_proj",
# "v_proj",
# "o_proj",
# "gate_proj",
# "up_proj",
# "down_proj",
"lm_head",
],
bias="none",
lora_dropout=0.05, # Conventional
task_type="CAUSAL_LM",
)
#apply Lora
model = get_peft_model(model, config)
print_trainable_parameters(model)
#See how the model looks different now, with the LoRA adapters added:
print(model)
#5 Run Training!
#Overfitting is when the validation loss goes up (bad) while the training loss goes down significantly, meaning the model is learning the training set really well, but is unable to generalize to new datapoints.
# In most cases, this is not desired, but since I am just playing around with a model to generate outputs like my journal entries, I was fine with a moderate amount of overfitting.
#With that said, a note on training: you can set the max_steps to be high initially, and examine at what step your model's performance starts to degrade.
#There is where you'll find a sweet spot for how many steps to perform. For example, say you start with 1000 steps, and find that at around 500 steps the model starts overfitting, as described above.
#Therefore, 500 steps would be your sweet spot, so you would use the checkpoint-500 model repo in your output dir (mistral-journal-finetune) as your final model in step 6 below.
#If you're just doing something for fun like I did and are OK with overfitting, you can try different checkpoint versions with different degrees of overfitting.
#You can interrupt the process via Kernel -> Interrupt Kernel in the top nav bar once you realize you didn't need to train anymore.
if torch.cuda.device_count() > 1: # If more than 1 GPU
model.is_parallelizable = True
model.model_parallel = True
model = accelerator.prepare_model(model)
import transformers
from datetime import datetime
project = "finetune-lora"
base_model_name = "mistral"
run_name = base_model_name + "-" + project
trainer = transformers.Trainer(
model=model,
train_dataset=tokenized_train_dataset,
eval_dataset=tokenized_val_dataset,
args=transformers.TrainingArguments(
output_dir=output_dir,
warmup_steps=500,
per_device_train_batch_size=2,
gradient_accumulation_steps=1,
gradient_checkpointing=True,
max_steps=10000,
learning_rate=2.5e-5, # Want a small lr for finetuning
bf16=True,
optim="paged_adamw_8bit",
logging_steps=25, # When to start reporting loss
logging_dir="./logs", # Directory for storing logs
save_strategy="steps", # Save the model checkpoint every logging step
        save_steps=25, # Save checkpoints every 25 steps
        evaluation_strategy="steps", # Evaluate the model every logging step
        eval_steps=25, # Evaluate and save checkpoints every 25 steps
do_eval=True, # Perform evaluation at the end of training
        report_to="wandb", # Comment this out if you don't want to use Weights & Biases
run_name=f"{run_name}-{datetime.now().strftime('%Y-%m-%d-%H-%M')}" # Name of the W&B run (optional)
),
data_collator=transformers.DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
model.config.use_cache = False # silence the warnings. Please re-enable for inference!
trainer.train()
</code></pre>
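<p>The snippet above calls <code>print_trainable_parameters</code> without defining it; a typical definition (assumed here, in the style of the PEFT examples, not necessarily the exact helper used) is:</p>

```python
def print_trainable_parameters(model):
    # Sum trainable vs. total parameter counts of a torch-style model and
    # report the share that LoRA leaves trainable; the counts are also
    # returned so callers can inspect them.
    trainable, total = 0, 0
    for _, param in model.named_parameters():
        total += param.numel()
        if param.requires_grad:
            trainable += param.numel()
    print(f"trainable params: {trainable} || all params: {total} "
          f"|| trainable%: {100 * trainable / total:.2f}")
    return trainable, total
```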
<p>The script above works as expected. Now I have a second script: I feed it the path of a file that contains the conversation, and it should return a summary. This is my second script:</p>
<pre><code>import sys
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig
from peft import PeftModel
def read_file(file_path):
try:
with open(file_path, 'r') as file:
file_content = file.read()
return file_content
except FileNotFoundError:
print(f"The file at path '{file_path}' was not found.")
except Exception as e:
print(f"An error occurred: {e}")
return None
if __name__ == "__main__":
if len(sys.argv) != 2:
print("Usage: python script.py <file_path>")
sys.exit(1)
file_path = sys.argv[1]
content = read_file(file_path)
if content is None:
sys.exit(1)
print(f"File with dialogue '{file_path}':")
# Load Base Model
base_model_id = "mistralai/Mistral-7B-v0.1"
config = AutoConfig.from_pretrained(base_model_id)
#model = AutoModelForCausalLM.from_pretrained(base_model_id, config=config)
    config.quantization = "int8" # Set quantization to int8
model = AutoModelForCausalLM.from_pretrained(base_model_id, config=config)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, add_bos_token=True, trust_remote_code=True)
# Load the LoRA adapter from the appropriate checkpoint directory
new_checkpoint_path = "mistral-finetune-lora16/checkpoint-10000"
# Since PeftModel.from_pretrained expects only one argument, we need to pass the model and the checkpoint path separately
ft_model = PeftModel.from_pretrained(base_model_id, new_checkpoint_path)
# Generate output using the loaded model and tokenizer
base_prompt = " ### Dialog: \n ### Summary: #"
eval_prompt = f"{base_prompt[:13]}{content}{base_prompt[13:]}"
model_input = tokenizer(eval_prompt, return_tensors="pt").to("cuda")
ft_model.eval()
with torch.no_grad():
output = ft_model.generate(**model_input, max_new_tokens=100, repetition_penalty=1.15)[0]
decoded_output = tokenizer.decode(output, skip_special_tokens=True)
print(decoded_output)
</code></pre>
<p>When I'm running the second script it throws this error:</p>
<pre><code>Traceback (most recent call last):
File "run-mistral-lora-samsun.py", line 43, in <module>
ft_model = PeftModel.from_pretrained(base_model_id, new_checkpoint_path)
File "/bigdata/usr/src/mistral-train-full-samsum/lib/python3.8/site-packages/peft/peft_model.py", line 325, in from_pretrained
config = PEFT_TYPE_TO_CONFIG_MAPPING[
File "/bigdata/usr/src/mistral-train-full-samsum/lib/python3.8/site-packages/peft/config.py", line 152, in from_pretrained
return cls.from_peft_type(**kwargs)
File "/bigdata/usr/src/mistral-train-full-samsum/lib/python3.8/site-packages/peft/config.py", line 119, in from_peft_type
return config_cls(**kwargs)
TypeError: __init__() got an unexpected keyword argument 'use_dora'
</code></pre>
<p>Does anyone know why this happens?</p>
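<p>A hedged illustration of why this class of error appears (an assumption about the mechanism, not a confirmed diagnosis): the adapter's saved <code>adapter_config.json</code> is parsed back into a config dataclass, and any key that the installed peft version's config <code>__init__</code> does not know (<code>use_dora</code> was added in a newer peft release) raises exactly this <code>TypeError</code>, which usually points at a peft version mismatch between the environment that saved the adapter and the one loading it:</p>

```python
from dataclasses import dataclass

@dataclass
class OldLoraConfig:
    # Stand-in for an older peft LoraConfig that predates the use_dora field.
    r: int = 8
    lora_alpha: int = 16

# A newer peft release wrote this extra key into adapter_config.json:
saved_kwargs = {"r": 32, "lora_alpha": 64, "use_dora": False}

try:
    OldLoraConfig(**saved_kwargs)
except TypeError as exc:
    print(exc)  # mentions the unexpected keyword argument 'use_dora'
```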
<p>I should specify that I have done the same thing using QLoRA (4-bit) and it worked as expected.</p>
<p>ChatGPT suggestion: I tried initialising the ft_model this way:</p>
<pre><code> ft_model = PeftModel.from_pretrained("lora", new_checkpoint_path, model=model)
</code></pre>
<p>but I get a different exception message:</p>
<pre><code>Traceback (most recent call last):
File "run-mistral-lora-samsun.py", line 44, in <module>
ft_model = PeftModel.from_pretrained("lora", new_checkpoint_path, model=model)
TypeError: from_pretrained() got multiple values for argument 'model'
</code></pre>
|
<python><machine-learning>
|
2024-03-07 15:29:37
| 1
| 2,112
|
android enthusiast
|
78,122,504
| 86,072
|
how can I spot if a callable is a generator?
|
<p><strong>Context</strong></p>
<p>for testing purposes, I would like to trace all calls to the methods of an object.</p>
<p>Right now, I have managed to write the following loop:</p>
<pre class="lang-py prettyprint-override"><code>def instrument_obj(obj):
for k in dir(obj):
try:
method = getattr(obj, k, None)
if not callable(method):
continue
# skip reserved words:
if k.startswith("__"):
continue
wrapped = my_wrapper(method)
setattr(obj, k, wrapped)
print(f"-- instrumented {k}")
except:
pass
</code></pre>
<p>which works for regular methods.</p>
<p><strong>Problem</strong></p>
<p>The first issue I encountered with my code is with generators:</p>
<p>Right now <code>my_wrapper(...)</code> looks like that:</p>
<pre class="lang-py prettyprint-override"><code>def my_wrapper(fn):
def _w(*args, **kwargs):
print(f"-- called {fn.__qualname__}")
return fn(*args, **kwargs)
return _w
</code></pre>
<p>but this breaks generators: since my wrapper doesn't <code>yield</code>, a caller can't iterate on it anymore.</p>
<p>For generators, it should do something closer to this:</p>
<pre class="lang-py prettyprint-override"><code>def my_wrapper(gen):
def _w(*args, **kwargs):
print(f"-- called {gen.__qualname__}")
for itm in gen(*args, **kwargs):
yield itm
return _w
</code></pre>
<p><strong>Question</strong></p>
<p>How can I spot if the callable I got from my object is a generator ?</p>
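<p>For what it's worth, the standard library's <code>inspect.isgeneratorfunction</code> makes exactly this distinction. A sketch combining the two wrappers above (note that in the generator branch the trace line prints lazily, on first iteration rather than at call time):</p>

```python
import functools
import inspect

def my_wrapper(fn):
    # Choose the wrapper shape based on whether fn is a generator function,
    # so wrapped generators stay iterable.
    if inspect.isgeneratorfunction(fn):
        @functools.wraps(fn)
        def _w(*args, **kwargs):
            print(f"-- called {fn.__qualname__}")
            yield from fn(*args, **kwargs)
    else:
        @functools.wraps(fn)
        def _w(*args, **kwargs):
            print(f"-- called {fn.__qualname__}")
            return fn(*args, **kwargs)
    return _w
```

<p>One caveat: <code>isgeneratorfunction</code> is <code>False</code> for callables that merely <em>return</em> generators (e.g. a plain function returning a generator expression), so those would still take the non-yielding branch, which is harmless here since the generator is returned unchanged.</p>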
|
<python><python-3.11>
|
2024-03-07 15:23:33
| 1
| 53,340
|
LeGEC
|
78,122,301
| 5,547,553
|
How to handle multi-line results of user-defined functions in polars?
|
<p>I'd like to parse lines of text into multiple columns and rows in polars, with a user-defined function.</p>
<pre><code>import polars as pl
df = pl.DataFrame({'file': ['aaa.txt','bbb.txt'], 'text': ['my little pony, your big pony','apple+banana, cake+coke']})
def myfunc(p_str: str) -> list:
res = []
for line in p_str.split(','):
x = line.strip().split(' ')
res.append({f'word{e+1}': w for e, w in enumerate(x)})
return res
</code></pre>
<p>If I just run a test it's fine, list of dicts is created:</p>
<pre><code>myfunc(df['text'][0])
[{'word1': 'my', 'word2': 'little', 'word3': 'pony'},
{'word1': 'your', 'word2': 'big', 'word3': 'pony'}]
</code></pre>
<p>Even creating a dataframe of it is easy:</p>
<pre><code>pl.DataFrame(myfunc(df['text'][0]))
</code></pre>
<p>But trying to do map_elements() fails:</p>
<pre><code>(df.with_columns(pl.struct(['text']).map_elements(lambda x: myfunc(x['text'])).alias('aaa')
)
)
</code></pre>
<pre><code>thread '' panicked at crates/polars-core/src/chunked_array/builder/list/anonymous.rs:161:69:
called `Result::unwrap()` on an `Err` value: InvalidOperation(ErrString("It is not possible to concatenate arrays of different data types."))
--- PyO3 is resuming a panic after fetching a PanicException from Python. ---
</code></pre>
<p>What I'd like as a result is something like:</p>
<pre><code>file word1 word2 word3
aaa.txt my little pony
aaa.txt your big pony
bbb.txt apple+banana
bbb.txt cake+coke
</code></pre>
<p>Any idea?</p>
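<p>Independent of polars, the expansion being asked for, one output row per comma-separated segment with the <code>file</code> value carried along, can be sketched in plain Python (data hard-coded from the example above):</p>

```python
def expand(rows):
    # One output dict per comma-separated segment of `text`,
    # with the words spread into word1, word2, ... columns.
    out = []
    for row in rows:
        for part in row['text'].split(','):
            words = part.strip().split(' ')
            rec = {'file': row['file']}
            rec.update({f'word{i + 1}': w for i, w in enumerate(words)})
            out.append(rec)
    return out

data = [
    {'file': 'aaa.txt', 'text': 'my little pony, your big pony'},
    {'file': 'bbb.txt', 'text': 'apple+banana, cake+coke'},
]
```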
|
<python><dataframe><python-polars>
|
2024-03-07 14:55:20
| 2
| 1,174
|
lmocsi
|
78,122,099
| 10,811,647
|
How to display YOLO predictions on Grafana?
|
<p>I am trying to display my YOLO predictions directly on my grafana dashboard/graph. My model takes a picture of the grafana graph as input and detects the various graph patterns that correspond to different categories. I would like to use the graph shown inside the webpage as input and display the predictions directly on the same graph inside the same webpage. Ideally I need something able to draw bounding boxes directly on the webpage/screen, based on the results given by my YOLO model.</p>
<p>I have no idea how to do this, nor what tools could be appropriate.</p>
<p>I hope this is clear; any help would be much appreciated.</p>
|
<python><grafana><yolo><yolov8>
|
2024-03-07 14:23:20
| 1
| 397
|
The Governor
|
78,122,083
| 1,231,450
|
Matplotlib strange output with timestamps as index
|
<p>Given the following dataframe</p>
<pre><code> open close high low timestamp
0 17910.25 17902.75 17910.50 17901.75 2024-02-28 20:46:10.628867+00:00
1 17902.50 17901.75 17906.50 17900.50 2024-02-28 20:46:50.270189+00:00
2 17902.25 17904.50 17905.50 17901.25 2024-02-28 20:47:55.031114+00:00
3 17904.75 17907.25 17909.00 17904.50 2024-02-28 20:48:44.114794+00:00
4 17907.25 17910.25 17914.50 17906.75 2024-02-28 20:49:31.424009+00:00
5 17910.25 17912.50 17914.50 17906.00 2024-02-28 20:50:05.337921+00:00
6 17912.75 17909.00 17912.75 17906.50 2024-02-28 20:50:51.489718+00:00
7 17909.00 17908.50 17910.50 17905.50 2024-02-28 20:51:32.629298+00:00
8 17908.75 17899.75 17908.75 17899.25 2024-02-28 20:52:06.190489+00:00
9 17899.50 17900.50 17902.00 17897.50 2024-02-28 20:52:38.421629+00:00
10 17900.25 17900.75 17904.25 17898.75 2024-02-28 20:53:22.651748+00:00
11 17900.25 17898.00 17901.25 17895.50 2024-02-28 20:54:07.124026+00:00
12 17898.00 17895.75 17900.75 17892.00 2024-02-28 20:54:36.208693+00:00
13 17895.75 17897.25 17898.00 17893.75 2024-02-28 20:55:00.171311+00:00
14 17897.25 17902.00 17902.00 17895.75 2024-02-28 20:55:09.389301+00:00
15 17902.00 17905.50 17906.75 17901.00 2024-02-28 20:55:30.245781+00:00
16 17905.25 17905.25 17906.50 17900.75 2024-02-28 20:56:03.837146+00:00
17 17905.25 17909.75 17910.75 17904.75 2024-02-28 20:56:26.466095+00:00
18 17909.75 17916.50 17918.25 17908.50 2024-02-28 20:56:47.284619+00:00
19 17916.25 17911.50 17917.25 17911.50 2024-02-28 20:57:16.248157+00:00
20 17911.50 17913.75 17914.50 17909.25 2024-02-28 20:57:56.980558+00:00
21 17913.50 17916.00 17917.25 17910.00 2024-02-28 20:58:32.008823+00:00
22 17916.00 17916.25 17918.75 17913.50 2024-02-28 20:58:59.908276+00:00
23 17916.25 17915.00 17918.25 17915.00 2024-02-28 20:59:16.061040+00:00
24 17915.00 17911.75 17915.25 17910.00 2024-02-28 20:59:27.103036+00:00
</code></pre>
<p>I can plot candles with the following function</p>
<pre><code>def plotCandles(df):
#df.set_index('timestamp', inplace=True)
plt.figure()
# "up" dataframe will store the stock_prices
# when the closing stock price is greater
# than or equal to the opening stock prices
up = df[df.close >= df.open]
# "down" dataframe will store the stock_prices
# when the closing stock price is
# lesser than the opening stock prices
down = df[df.close < df.open]
# When the stock prices have increased, then it
# will be represented by green color candlestick
col1 = 'green'
# When the stock prices have decreased, then it
# will be represented by red color candlestick
col2 = 'red'
# Setting width of candlestick elements
width = .3
width2 = .03
# Plotting up prices of the stock
plt.bar(up.index, up.close-up.open, width, bottom=up.open, color=col1)
plt.bar(up.index, up.high-up.close, width2, bottom=up.close, color=col1)
plt.bar(up.index, up.low-up.open, width2, bottom=up.open, color=col1)
# Plotting down prices of the stock
plt.bar(down.index, down.close-down.open, width, bottom=down.open, color=col2)
plt.bar(down.index, down.high-down.open, width2, bottom=down.open, color=col2)
plt.bar(down.index, down.low-down.close, width2, bottom=down.close, color=col2)
# rotating the x-axis tick labels at 30degree
# towards right
plt.xticks(rotation=30, ha='right')
# show it
plt.show()
</code></pre>
<p>This produces the following figure</p>
<p><a href="https://i.sstatic.net/QmTRE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QmTRE.png" alt="Some beautiful candles here" /></a></p>
<p>However, if I try to set the index to the timestamp column via</p>
<pre><code>df.set_index('timestamp', inplace=True)
</code></pre>
<p>the figure is transformed into an expressionistic something</p>
<p><a href="https://i.sstatic.net/rjNYV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rjNYV.png" alt="Some expressionistic picture" /></a></p>
<p>How come?</p>
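<p>A hedged guess at the mechanics: once the index holds timestamps, matplotlib measures the x-axis in days, so a bar <code>width</code> of .3 spans roughly seven hours while consecutive samples here are only ~30-60 seconds apart, and the bars overlap into a smear. Widths would need to be expressed as day fractions on the order of the sample spacing:</p>

```python
from datetime import timedelta

# On a datetime x-axis matplotlib measures bar widths in days, so 1.0 == 1 day.
# For samples ~40 seconds apart, a candle body covering 80% of the gap is:
spacing = timedelta(seconds=40)
width = 0.8 * (spacing / timedelta(days=1))  # ~0.00037 days
```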
|
<python><pandas><matplotlib>
|
2024-03-07 14:20:22
| 1
| 43,253
|
Jan
|
78,122,031
| 378,386
|
Missing cache statistics for cachedmethod decorator in cachetools
|
<p>I am implementing a cached method with cachetools like this:</p>
<pre class="lang-py prettyprint-override"><code>from cachetools import LRUCache, cachedmethod

class CachedNum(object):
def __init__(self, cachesize):
self.cache = LRUCache(maxsize=cachesize)
@cachedmethod(lambda self: self.cache)
def get(self, num):
return num + 1000
nums = CachedNum(cachesize=10)
nums.get.cache_info() # This is missing
nums.cache.cache_info() # The cache itself also has no statistics
</code></pre>
<p>However, there are no cache statistics about hits/misses. Is this an oversight in cachetools, or do I need to track them myself?</p>
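<p>One workaround (a sketch, not a cachetools feature): <code>cachedmethod</code> accepts any mutable mapping, so a dict subclass that counts lookups can supply the statistics, assuming, as in current cachetools releases, that the decorator probes the cache with <code>cache[key]</code> and treats <code>KeyError</code> as a miss:</p>

```python
class CountingCache(dict):
    """A dict-based cache that tallies hits and misses on lookup."""

    def __init__(self):
        super().__init__()
        self.hits = 0
        self.misses = 0

    def __getitem__(self, key):
        try:
            value = super().__getitem__(key)
        except KeyError:
            # A failed probe counts as a miss; re-raise so the decorator
            # computes and stores the value.
            self.misses += 1
            raise
        self.hits += 1
        return value
```

<p>Passing a plain <code>CountingCache()</code> in place of <code>LRUCache</code> drops the size bound, so in practice one would subclass <code>LRUCache</code> the same way.</p>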
|
<python><cachetools>
|
2024-03-07 14:13:18
| 1
| 2,557
|
aggsol
|
78,121,811
| 10,746,224
|
How to apply an operator sequentially to all items in a list?
|
<p>I have an arbitrarily sized list:</p>
<pre><code>l = [1, 2, 3, 4, 5, ...]
</code></pre>
<p>I want to apply the <em>pipe</em> operator to all items in the list, sequentially, like this:</p>
<pre><code>1 | 2 | 3 | 4 | 5 # = 7
</code></pre>
<p>I know there is a function in the stdlib that does this, and that this question is probably a duplicate, but I don't recall the function and can't find an answer pointing me to it.</p>
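<p>The left-to-right fold described above can be sketched with <code>functools.reduce</code> and <code>operator.or_</code> (presumably the stdlib function being half-remembered):</p>

```python
import functools
import operator

l = [1, 2, 3, 4, 5]
# Fold the list left-to-right with the | operator: ((((1|2)|3)|4)|5)
result = functools.reduce(operator.or_, l)
print(result)  # 7
```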
|
<python>
|
2024-03-07 13:37:13
| 1
| 16,425
|
Lord Elrond
|
78,121,764
| 1,088,577
|
What should be a type annotation for dataclass descriptor fields?
|
<p>I'm working on a class whose fields the user should be able to set in the most convenient way, which includes assigning strings to any of the fields. Values assigned by the user should be automatically converted to the actual data type (so, for example, <code>"2022-01-02"</code> assigned to a field <code>date</code> should be converted to a <code>datetime.date</code> object).</p>
<p>For this I chose a <a href="https://docs.python.org/3/library/dataclasses.html#descriptor-typed-fields" rel="nofollow noreferrer">descriptor-typed fields</a> approach of Python dataclasses module. To avoid unnecessary and/or unsupported conversions I inspect <code>__annotations__</code> to determine whether it's ok to assign the user-provided value without a conversion.</p>
<pre><code>from typing import Optional
import datetime
from dataclasses import dataclass
from datetime import date
from decimal import Decimal
class Conversion:
def __init__(self, *, conv, default=None):
self._conv = conv
self._default = default
self._name = None
self._prop = None
def __set_name__(self, owner, name):
self._prop = name
self._name = "_" + name
def __get__(self, obj, tp):
# dataclasses determines default value by calling
# descriptor.__get__(obj=None, tp=cls)
if obj is None:
return self._default
return getattr(obj, self._name, self._default)
def __set__(self, obj, value):
tp = obj.__annotations__.get(self._prop)
# Don't convert values which already match desired type
if tp and isinstance(value, tp):
setattr(obj, self._name, value)
else:
try:
val = self._conv(value)
except:
raise ValueError(
f"Conversion error for '{self._name.lstrip('_')}': {value}"
)
setattr(obj, self._name, val)
@dataclass
class Entry:
date: datetime.date = Conversion(conv=date.fromisoformat, default=date.today())
amount: Optional[Decimal] = Conversion(conv=Decimal, default=None)
e = Entry()
print(e)
e.date = "2022-02-05"
e.amount = "11.02"
print(e)
</code></pre>
<p>And output is, as expected:</p>
<pre><code>Entry(date=datetime.date(2024, 3, 7), amount=None)
Entry(date=datetime.date(2022, 2, 5), amount=Decimal('11.02'))
</code></pre>
<p>This works beautifully and I feel that this is very clean and elegant solution, but I noticed that <a href="https://docs.python.org/3/library/dataclasses.html#descriptor-typed-fields" rel="nofollow noreferrer">documentation</a> always annotates descriptor-typed fields with the type of descriptors, not the underlying data type. For me this would be e.g. <code>date: Conversion = Conversion(...)</code>. Is there a reason why dataclasses authors chose to do it this way and am I wrong to annotate fields with data types?</p>
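<p>The default-value detail noted in the <code>__get__</code> comment can be demonstrated in isolation (a minimal sketch, assuming Python 3.10+, where dataclasses resolves a descriptor default via <code>__get__(None, cls)</code>):</p>

```python
from dataclasses import dataclass

class Ten:
    # Descriptor whose class-level access (obj is None) yields the default.
    def __set_name__(self, owner, name):
        self._name = "_" + name

    def __get__(self, obj, tp):
        if obj is None:
            return 10
        return getattr(obj, self._name, 10)

    def __set__(self, obj, value):
        # Convert on assignment, like the Conversion descriptor above.
        setattr(obj, self._name, int(value))

@dataclass
class Box:
    n: int = Ten()   # annotated with the data type, not the descriptor type

b = Box()            # default comes from Ten.__get__(None, Box)
b2 = Box()
b2.n = "42"          # __set__ converts the string
print(b.n, b2.n)     # 10 42
```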
|
<python><python-typing><python-dataclasses><python-descriptors>
|
2024-03-07 13:30:00
| 1
| 1,537
|
Michał Góral
|
78,121,739
| 7,233,155
|
Sphinx autodoc if Python method moved to Rust PyO3
|
<p>If I have a <strong>Python</strong> module and a method:</p>
<pre class="lang-py prettyprint-override"><code># mymod.py
def func(x):
"""
Return a value.
"""
return x
</code></pre>
<p>And a <strong>Sphinx</strong> autodocument command in an rst file:</p>
<pre><code>API Ref
-------
.. automodapi:: project.mymod
</code></pre>
<p>This will generate relevant pages and cross-referencable links for <code>:meth:'project.mymod.func'</code>. <strong>All is well and good!</strong></p>
<p><strong>However</strong>, If I now create a Rust extension for my Python package and move this function to Rust and build the extension with PyO3 I can maintain code backwards compatibility by importing the rust function to the Python module as an overload:</p>
<pre class="lang-rust prettyprint-override"><code>// mymod.rs
use pyo3::prelude::*;

#[pyfunction]
fn func(x: T) -> PyResult<T> {
Ok(x)
}
</code></pre>
<pre class="lang-py prettyprint-override"><code># mymod.py
from mymodrs import func
</code></pre>
<p><strong>Sphinx</strong> will now no longer auto detect this <code>func</code> method, with no cross-referencable link. <strong>Can I add something to maintain the Sphinx operability? I don't mind if it is manual.</strong></p>
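<p>One avenue worth checking (an assumption about the tooling, not verified against automodapi internals): autodoc-family directives generally skip imported members unless the module opts them in, either by listing them in <code>__all__</code> in <code>mymod.py</code> or, with plain <code>sphinx.ext.autodoc</code>, via the <code>:imported-members:</code> option:</p>

```rst
API Ref
-------

.. automodule:: project.mymod
   :members:
   :imported-members:
```

<p>With <code>__all__ = ["func"]</code> declared in <code>mymod.py</code>, the re-exported Rust function is treated as a public member of the Python module, which also keeps the cross-reference target <code>project.mymod.func</code> stable.</p>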
|
<python><rust><documentation><python-sphinx><pyo3>
|
2024-03-07 13:25:57
| 2
| 4,801
|
Attack68
|