| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,158,662
| 4,061,181
|
Comparing two timezone-aware datetimes
|
<p>Most probably, this question has been asked a hundred times, but I still couldn't find the answer.
I have the following code:</p>
<pre><code>import pytz
import datetime
from pytz import timezone
tz_moscow = timezone('Europe/Moscow')
tz_yerevan = timezone('Asia/Yerevan')
dt1 = tz_yerevan.localize(datetime.datetime(2011, 7, 20, 0, 0, 0, 0))
dt2 = tz_moscow.localize(datetime.datetime(2011, 6, 20, 0, 0, 0, 0))
print (dt2.utcoffset()) # Expected 3, actual result - 4
print (dt1)
print (dt2)
print (dt1 == dt2)
</code></pre>
<p>I have two questions:</p>
<ol>
<li>The timezone offset for Moscow is GMT+3, but for some reason I see "4"</li>
<li>Can I compare two timezone-aware datetimes by just doing <code>dt1 < dt2</code>?</li>
</ol>
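For context, a hedged sketch of what is going on (using the standard library's `zoneinfo`, Python 3.9+, rather than the question's `pytz`): Moscow really was on UTC+4 in mid-2011, because Russia switched to permanent "summer time" in March 2011 (reverted in 2014), and aware datetimes compare by their UTC instants, so `<` and `==` are safe.

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Moscow observed UTC+4 from March 2011 until 2014, so the historical
# offset for June 2011 is genuinely 4 hours, not 3.
dt1 = datetime(2011, 7, 20, tzinfo=ZoneInfo("Asia/Yerevan"))
dt2 = datetime(2011, 6, 20, tzinfo=ZoneInfo("Europe/Moscow"))

offset_hours = dt2.utcoffset() / timedelta(hours=1)

# Aware datetimes compare by their UTC instants, so <, >, == are safe:
later = dt1 > dt2  # July instant is after the June instant
```

`zoneinfo` applies the full historical tz database, which is why the offset depends on the date being localized, not on "the" offset of the city today.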
|
<python><datetime><timezone>
|
2023-01-18 11:37:19
| 1
| 4,541
|
Edgar Navasardyan
|
75,158,580
| 2,531,140
|
python setup.py egg_info did not run successfully when building on Docker now
|
<p>It built fine before, and I made no changes since it last worked.</p>
<p>Here are its final moments before dying: as you can see, it dies while installing measurement.</p>
<p>Here is Docker code partial:</p>
<pre><code># Install Python dependencies
COPY requirements.txt /app/
WORKDIR /app
#RUN pip install --upgrade setuptools
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
</code></pre>
<p>Here is the error</p>
<pre><code>#0 24.46 Collecting measurement==3.2.0
#0 24.48 Downloading measurement-3.2.0.tar.gz (12 kB)
#0 24.49 Preparing metadata (setup.py): started
#0 30.54 Preparing metadata (setup.py): finished with status 'error'
#0 30.55 error: subprocess-exited-with-error
#0 30.55
#0 30.55   × python setup.py egg_info did not run successfully.
#0 30.55   │ exit code: 1
#0 30.55   ╰─> [14 lines of output]
#0 30.55 Traceback (most recent call last):
#0 30.55 File "<string>", line 2, in <module>
#0 30.55 File "<pip-setuptools-caller>", line 34, in <module>
#0 30.55 File "/tmp/pip-install-uis_8hdi/measurement_709a216af0574a738b280207cf3f2a2e/setup.py", line 4, in <module>
#0 30.55 setup(use_scm_version=True)
#0 30.55 File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 152, in setup
#0 30.55 _install_setup_requires(attrs)
#0 30.55 File "/usr/local/lib/python3.8/site-packages/setuptools/__init__.py", line 147, in _install_setup_requires
#0 30.55 dist.fetch_build_eggs(dist.setup_requires)
#0 30.55 File "/usr/local/lib/python3.8/site-packages/setuptools/dist.py", line 785, in fetch_build_eggs
#0 30.55 resolved_dists = pkg_resources.working_set.resolve(
#0 30.55 File "/usr/local/lib/python3.8/site-packages/pkg_resources/__init__.py", line 777, in resolve
#0 30.55 raise VersionConflict(dist, req).with_context(dependent_req)
#0 30.55 pkg_resources.ContextualVersionConflict: (sphinxcontrib.applehelp 1.0.3 (/tmp/pip-install-uis_8hdi/measurement_709a216af0574a738b280207cf3f2a2e/.eggs/sphinxcontrib.applehelp-1.0.3-py3.8.egg), Requirement.parse('sphinxcontrib-applehelp'), {'sphinx'})
#0 30.55 [end of output]
#0 30.55
#0 30.55 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 30.56 error: metadata-generation-failed
#0 30.56
#0 30.56   × Encountered error while generating package metadata.
#0 30.56   ╰─> See above for output.
#0 30.56
#0 30.56 note: This is an issue with the package mentioned above, not pip.
#0 30.56 hint: See above for details.
</code></pre>
|
<python><python-3.x><docker>
|
2023-01-18 11:30:59
| 1
| 919
|
Mark Lopez
|
75,158,485
| 17,561,414
|
reading files from path with wildcard does not work - Databricks JSON
|
<p>trying to read a JSON file from databricks with the following code</p>
<pre><code> with open('/dbfs/mnt/bronze/categories/20221006/data_10.json') as f:
d = json.load(f)
</code></pre>
<p>which works perfectly, but the problem is that I would like to use wildcards, since there are multiple folders and files. Preferably, I want to make the code below work:</p>
<pre><code>with open('/dbfs/mnt/bronze/categories/**/*.json') as f:
d = json.load(f)
</code></pre>
<p>When I read the JSON using Spark, wildcards work perfectly, but I prefer the option above.</p>
<pre><code>df = spark.read.json(f'/mnt/bronze/AKENEO/categories/**/*.json')
</code></pre>
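A sketch of the plain-Python route (not from the post): `open()` never expands wildcards, so the pattern has to be expanded first with the standard `glob` module and each matching file loaded individually.

```python
import glob
import json

def load_json_glob(pattern: str) -> list:
    """json.load every file matching a wildcard pattern.

    open() does not understand wildcards; glob expands them, and
    recursive=True lets '**' match any depth of sub-folders.
    """
    records = []
    for path in sorted(glob.glob(pattern, recursive=True)):
        with open(path) as f:
            records.append(json.load(f))
    return records

# On Databricks the mount is visible through the /dbfs fuse path, so the
# question's pattern would be (path taken from the question):
# data = load_json_glob('/dbfs/mnt/bronze/categories/**/*.json')
```

Each file yields its own object, so the results arrive as a list rather than one merged document — which matches what `spark.read.json` does across files.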
|
<python><json><databricks>
|
2023-01-18 11:23:06
| 1
| 735
|
Greencolor
|
75,158,484
| 19,716,381
|
Assign values to multiple columns using DataFrame.assign()
|
<p>I have a list of strings stored in a pandas dataframe <code>df</code>, with column name of text (i.e. <code>df['text']</code>).
I have a function <code>f(text: str) -> (int, int, int)</code>.
Now, I want to do the following.</p>
<pre><code>df['a'], df['b'], df['c'] = df['text'].apply(f)
</code></pre>
<p>How can I create three columns with the three returned values from the function?</p>
<p>The above code gives the error of</p>
<pre><code>ValueError: too many values to unpack (expected 3)
</code></pre>
<p>I tried</p>
<pre><code>df['a', 'b', 'c'] = df['text'].apply(f)
</code></pre>
<p>but I get one column with the name of <code>'a', 'b', 'c'</code></p>
<p>NB:</p>
<ol>
<li>There is a <a href="https://stackoverflow.com/questions/68748849/create-multiple-new-dataframe-columns-using-dataframe-assign-and-apply">similar question</a> in SO, but when I use the following solution from there, I again get an error.</li>
</ol>
<pre><code>df[['a', 'b', 'c']] = df['text'].apply(f, axis=1, result_type='expand')
</code></pre>
<p>The error is</p>
<pre><code>f() got an unexpected keyword argument 'axis'
f() got an unexpected keyword argument 'result_type' #(once I remove the axis=1 parameter)
</code></pre>
<ol start="2">
<li>Note that <code>df</code> has other columns as well</li>
</ol>
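For what it's worth, a sketch of the usual fix (with a made-up `f`): `Series.apply` has no `axis`/`result_type` arguments — those belong to `DataFrame.apply` — so either convert the Series of tuples to a list, or apply at the DataFrame level.

```python
import pandas as pd

def f(text: str):
    # stand-in for the question's function returning three ints
    return len(text), text.count('a'), text.count('b')

df = pd.DataFrame({'text': ['abc', 'banana'], 'other': [1, 2]})

# Series.apply gives a Series of tuples; .tolist() turns it into a
# list of tuples that pandas unpacks one column per element:
df[['a', 'b', 'c']] = df['text'].apply(f).tolist()

# Equivalent DataFrame-level form, where axis/result_type are valid:
df2 = df.copy()
df2[['a', 'b', 'c']] = df2.apply(lambda row: f(row['text']),
                                 axis=1, result_type='expand')
```

Both forms leave the other columns of `df` untouched, which covers the note that the frame has more columns than `text`.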
|
<python><pandas><dataframe>
|
2023-01-18 11:23:02
| 1
| 484
|
berinaniesh
|
75,158,465
| 2,304,575
|
Saving large sparse arrays in hdf5 using pickle
|
<p>In my code I am generating a list of large sparse arrays that are in <code>csr</code> format. I want to store these arrays to file. I was initially saving them to file in this way:</p>
<pre><code>from scipy.sparse import csr_matrix
import h5py
As = [ csr_matrix([[1, 2, 0], [0, 0, 3], [4, 0, 5]]),
csr_matrix([[1, 0, 0], [0, 1, 0], [0, 0, 1]]),
csr_matrix([[2, 0, 0], [0, 3, 0], [0, 0, 4]]) ]
np_matrices = [mat.toarray() for mat in As]
with h5py.File(filename, "w") as f:
f.create_dataset("matrices", data=np_matrices)
</code></pre>
<p>This way, however, I run into an out-of-memory error, as I am trying to allocate all the memory at once, so it's not possible to save more than 1000 of these sparse arrays. I already checked <code>scipy.sparse.save_npz()</code>, but it only saves one <code>csr_matrix</code> per file, which I don't want, as I have to generate and store more than 100K matrices. I therefore started to look at <code>pickle</code> to serialize the sparse matrix objects:</p>
<pre><code>pickled_obj = pickle.dumps(As)
with h5py.File('obj.hdf5', 'w') as f:
dset = f.create_dataset('obj', data=pickled_obj)
</code></pre>
<p>But this leads to the following error: <code>VLEN strings do not support embedded NULLs</code></p>
<p>Is there a way to deal with this error? Or does anyone have a better way to save a list of <code>csr_matrix</code> with good memory performance?</p>
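One memory-friendly alternative (a sketch, not from the post): store each matrix's three CSR component arrays in its own HDF5 group, so nothing is ever densified and no pickled bytes with embedded NULs are involved.

```python
import h5py
import numpy as np
from scipy.sparse import csr_matrix

def save_csr_list(filename, matrices):
    # one group per matrix; only data/indices/indptr are written,
    # so memory use stays proportional to the number of nonzeros
    with h5py.File(filename, "w") as f:
        for i, m in enumerate(matrices):
            g = f.create_group(str(i))
            g.create_dataset("data", data=m.data)
            g.create_dataset("indices", data=m.indices)
            g.create_dataset("indptr", data=m.indptr)
            g.attrs["shape"] = m.shape

def load_csr_list(filename):
    with h5py.File(filename, "r") as f:
        return [csr_matrix((f[k]["data"][:], f[k]["indices"][:],
                            f[k]["indptr"][:]),
                           shape=tuple(f[k].attrs["shape"]))
                for k in sorted(f.keys(), key=int)]
```

If the pickle route is preferred, the usual workaround for the `VLEN strings do not support embedded NULLs` error is to wrap the bytes as an opaque scalar, `f.create_dataset('obj', data=np.void(pickled_obj))`, instead of passing the bytes directly.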
|
<python><arrays><pickle><h5py>
|
2023-01-18 11:21:42
| 2
| 692
|
Betelgeuse
|
75,158,382
| 4,196,709
|
How to call .cpp function inside p4python script?
|
<p>My goal: I want to create a Python script that changes the content of a file automatically and then commits the change in Perforce.</p>
<p>Because the file is initially read-only, I need to create a Python script which will</p>
<ol>
<li>check out the file (in order to edit it)</li>
<li>call UpdateContent.cpp to change the content of that file</li>
<li>commit the change</li>
</ol>
<p><strong>UpdateContent.cpp</strong></p>
<pre><code>#include <iostream>
#include <string>
#include <fstream>
#include <vector>
using namespace std;
const string YearKeyword = "YEAR";
const string ResourceInfoFileName = "//depot/Include/Version.h";
int main()
{
fstream ReadFile;
ReadFile.open(ResourceInfoFileName);
if (ReadFile.fail())
{
cout << "Error opening file: " << ResourceInfoFileName << endl;
return 1;
}
vector<string> lines; // a vector stores content of all the lines of the file
string line; // store the content of each line of the file
while (getline(ReadFile, line)) // read each line of the file
{
if (line.find(YearKeyword) != string::npos)
{
line.replace(line.size() - 5, 4, "2023"); // update current year
}
lines.push_back(line); // move the content of that line into the vector
}
ReadFile.close();
// after storing the content (after correcting) of the file into vector, we write the content back to the file
ofstream WriteFile;
WriteFile.open(ResourceInfoFileName);
if (WriteFile.fail())
{
cout << "Error opening file: " << ResourceInfoFileName << endl;
return 1;
}
for (size_t i = 0; i < lines.size() ; i++)
{
WriteFile << lines[i] << endl;
}
WriteFile.close();
return 0;
}
</code></pre>
<p><strong>UpdateResource.py</strong></p>
<pre><code>from P4 import P4, P4Exception
import subprocess
p4 = P4()
print('User ', p4.user, ' connecting to ', p4.port)
print('Current workspace is ', p4.client)
try:
p4.connect()
versionfile = '//depot/Include/Version.h'
p4.run( "edit", versionfile )
cmd = "UpdateContent.cpp"
    subprocess.call(["g++", cmd])  # **THIS IS ERROR POSITION (line 24)**
subprocess.call("./UpdateContent.out")
change = p4.fetch_change()
change._description = "Update information"
change._files = versionfile
p4.run_submit( change )
p4.disconnect()
print('Disconnected from server.')
except P4Exception:
for e in p4.errors:
print(e)
print('Script finished')
</code></pre>
<p>I put both UpdateResource.py and UpdateContent.cpp in the same directory on the N drive (N:/ResourceTools/); the file <code>Version.h</code> which needs to be changed is in another directory.
I receive this message when I run the script.</p>
<p><a href="https://i.sstatic.net/gwj5n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gwj5n.png" alt="enter image description here" /></a></p>
<p>I am a newbie with Python; where am I going wrong?
My guess is that it is because of this line <code>const string ResourceInfoFileName = "//depot/Include/Version.h";</code> in the .cpp file (maybe).</p>
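The screenshot is not in the dump, but a common culprit in scripts like this is `subprocess` failing silently, or the compiler/paths not resolving from the script's working directory. A hedged sketch of a louder invocation pattern, demonstrated with the Python interpreter as a stand-in for g++ (the commented lines show what the question's compile/run pair would look like):

```python
import subprocess
import sys

def run_step(cmd, cwd=None):
    """Run one external command and fail loudly on error.

    check=True turns a nonzero exit code into CalledProcessError
    instead of letting the script continue as if nothing happened.
    """
    result = subprocess.run(cmd, cwd=cwd, capture_output=True,
                            text=True, check=True)
    return result.stdout

# For the question, with g++ on PATH and cwd set to the scripts'
# directory (e.g. N:/ResourceTools), the pair would be:
# run_step(["g++", "UpdateContent.cpp", "-o", "UpdateContent.out"], cwd=script_dir)
# run_step(["./UpdateContent.out"], cwd=script_dir)

# demonstrated here with Python itself as the external tool:
out = run_step([sys.executable, "-c", "print('ok')"])
```

Note also that the `//depot/...` path in the .cpp file is a Perforce depot path, not a filesystem path, so the compiled program cannot open it directly; it would need the local workspace path of `Version.h`.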
|
<python><c++><perforce><p4python>
|
2023-01-18 11:15:24
| 1
| 648
|
gnase
|
75,158,376
| 4,321,525
|
scipy.optimize.minimize SLSQP indirect constraints problem
|
<p>I am struggling with <code>minimize</code> (<code>method=SLSQP</code>) and need help. This is a simplified car-battery (dis)charging problem to prop up the power grid during reduced stability (peak demand). What I expect to happen is that during such instability, the battery gets discharged as much as possible when the price of electricity is highest, and charged when the price is lowest and the grid is stable again. Instead, the optimization happily reports success but either didn't optimize anything or ignores one or more constraints. I played around with the parameters passed to SLSQP, and when I set <code>eps=1</code>, things start happening, but it never comes close to discharging the battery completely and does not attempt to recharge (much). It can happen that it discharges the battery beyond zero, to -2 or so.</p>
<p><a href="https://stackoverflow.com/questions/33511284/scipy-optimize-minimize-method-slsqp-ignores-constraint">Here</a> someone suggested using transformations to enforce variable limits, but in this case, it's an indirect constraint applied to the combination of several variables. Are transforms still an option?</p>
<p>The constraint that the battery should be recharged to 1 at the end of the cycle is ignored, too.</p>
<p>I imagine SLSQP is not the best search algorithm for this, and I would be thankful for suggestions for better options in the long term. Even telling me what kind of problem (in optimization terms) this is would be very much appreciated. Short term, I would be very grateful if someone could point out how I can get SLSQP to work.</p>
<p>UPDATE: Someone told me that this might be a linear programming problem. How does that work with such flexible price data if that is the case? Where can I find an example explaining how to tackle it?</p>
<pre><code>import numpy as np
from scipy.optimize import minimize, OptimizeResult
from typing import TypedDict
SOC_MAX = 1.0 # normalized
ONE_MINUTE= 60
CHARGING_SOC_PER_SEC = SOC_MAX / (3*60*60) # fully charge in 3h
CHARGE_TIME_MAX_SECONDS = 3*60*60 # in seconds
time_a = np.arange(171901, step=900)
price_a = np.array([
143.08, 143.08, 143.08, 140.25, 140.25, 140.25, 140.25, 130.84,
130.84, 130.84, 130.84, 130.03, 130.03, 130.03, 130.03, 138.7 ,
138.7 , 138.7 , 138.7 , 169.95, 169.95, 169.95, 169.95, 204.37,
204.37, 204.37, 204.37, 230.09, 230.09, 230.09, 230.09, 219.09,
219.09, 219.09, 219.09, 207.62, 207.62, 207.62, 207.62, 213.26,
213.26, 213.26, 213.26, 206.2 , 206.2 , 206.2 , 206.2 , 211.11,
211.11, 211.11, 211.11, 215.9 , 215.9 , 215.9 , 215.9 , 227.28,
227.28, 227.28, 227.28, 234.97, 234.97, 234.97, 234.97, 257.99,
257.99, 257.99, 257.99, 274.33, 274.33, 274.33, 274.33, 236. ,
236. , 236. , 236. , 202.95, 202.95, 202.95, 202.95, 183.63,
183.63, 183.63, 183.63, 165.92, 165.92, 165.92, 165.92, 145.07,
145.07, 145.07, 145.07, 165.39, 165.39, 165.39, 165.39, 152.51,
152.51, 152.51, 152.51, 145.87, 145.87, 145.87, 145.87, 143.06,
143.06, 143.06, 143.06, 145.38, 145.38, 145.38, 145.38, 159.61,
159.61, 159.61, 159.61, 183.77, 183.77, 183.77, 183.77, 210.8 ,
210.8 , 210.8 , 210.8 , 213.77, 213.77, 213.77, 213.77, 203.33,
203.33, 203.33, 203.33, 200.97, 200.97, 200.97, 200.97, 199.02,
199.02, 199.02, 199.02, 193.72, 193.72, 193.72, 193.72, 179.7 ,
179.7 , 179.7 , 179.7 , 165.57, 165.57, 165.57, 165.57, 163.94,
163.94, 163.94, 163.94, 178.01, 178.01, 178.01, 178.01, 200.93,
200.93, 200.93, 200.93, 201.01, 201.01, 201.01, 201.01, 193.47,
193.47, 193.47, 193.47, 165.32, 165.32, 165.32, 165.32, 135.09,
135.09, 135.09, 135.09, 125.56, 125.56, 125.56, 125.56, 104.86,
104.86, 104.86, 104.86, 109.41, 109.41, 109.41, 109.41, 111.09])
stability_ranges = np.array([[ 33, 53],
[ 71, 119],
[131, 191]])
instability_ranges = np.array([[ 21, 32],
[ 54, 70],
[120, 130]])
booking_dt = np.dtype([("start", int),
("duration", int),
("soc", float), # state of charge of the battery
("delta_soc", float), # change in battery charge
("price", float)]) # this is what the user is billed.
class OptimisationItem(TypedDict, total=False):
start: int # this might be not needed, as its the key in the dict
duration: int
soc: float # state of charge of the battery
delta_soc: float # change in battery charge
price: float # this is what the user is billed.
bound_high: int
bound_low: int
def integrate(start, duration):
x_start = start
x_end = start + duration
x_data = np.linspace(x_start, x_end, 10)
y_interp = np.interp(x_data, time_a, price_a)
area = np.trapz(y_interp, x_data)
return area
"""the battery can not be charged more than 100% and not less than 0%"""
def soc_constraint(x):
soc = SOC_MAX
durations = x[1::2]
for i, j in durations.reshape(-1, 2):
soc -= i * CHARGING_SOC_PER_SEC
if soc < 0:
break
soc += j * CHARGING_SOC_PER_SEC
if soc > SOC_MAX:
soc = SOC_MAX - soc
break
return soc
"""the (dis)charging should happen only within the bounds"""
def bounds_constraint(x, bounds):
end_points= bounds - (x[::2] + x[1::2])
smallest_end_point = np.min(end_points)
return smallest_end_point
"""the battery should be fully charge at the end of the cycle"""
def end_constraint(x, plan):
final_soc_gap = x[-1] * CHARGING_SOC_PER_SEC - (SOC_MAX - plan["soc"][- 1])
return final_soc_gap
def objective_func(x: np.array, plan) -> float:
events= x.reshape(-1, 2)
    # insert the optimisation variables into the plan
plan["start"] = events[:,0]
plan["duration"] = events[:,1]
# start values for soc=SOC_MAX and delta_soc: 0
plan["soc"][-1] = SOC_MAX
plan["delta_soc"][-1] = 0
plan["duration"][-1] = 0
# straighten out the plan, calculate price and soc
for step in range(len(plan)):
plan["soc"][step] = plan["soc"][step - 1] + plan["delta_soc"][step - 1]
if step % 2 == 0:
plan["delta_soc"][step] = - plan["duration"][step] * CHARGING_SOC_PER_SEC
plan["price"][step] = - integrate(plan["start"][step], plan["duration"][step])
else:
plan["delta_soc"][step] = plan["duration"][step] * CHARGING_SOC_PER_SEC
plan["price"][step] = + integrate(plan["start"][step], plan["duration"][step])
price: float = plan["price"].sum()
return price
def optimise_discharge_plan(plan: np.array, discharge_events_d) -> np.array:
x_initial = []
bounds = []
constraints = []
bounds_high_l = []
for time in plan["start"]:
data_set = discharge_events_d[time]
x_initial.append(time)
x_initial.append(data_set["duration"])
bound_low = data_set["bound_low"]
bound_high = data_set["bound_high"]
bounds_high_l.append(bound_high)
bounds.append((bound_low, bound_high))
bounds.append( (0, min(CHARGE_TIME_MAX_SECONDS, bound_high - bound_low)))
bounds_high = np.array(bounds_high_l)
constraints.append({"type": "ineq", "fun": bounds_constraint, "args": (bounds_high,)})
constraints.append({"type": "eq", "fun": end_constraint, "args": (plan,)})
constraints.append({"type": "ineq", "fun": soc_constraint})
result:OptimizeResult = minimize(objective_func, np.array(x_initial),
args=plan,
method="SLSQP",
bounds=bounds,
constraints=constraints,
#options={"eps": 1 , "maxiter": 1000, "ftol": 1.0, "disp": True},
)
print(result.x.reshape(-1, 4),"\n", result, "\n",plan)
def main():
discharge_events_l:list[booking_dt] = []
discharge_events_d: dict = {}
for range_cnt, (instability_range, stability_range) in enumerate(zip(instability_ranges, stability_ranges)):
# build up list of discharge times, that we can modify in the optimisation step
if instability_range[0] == instability_range[1]:
time_max_price = instability_range[0]
else:
max_index = np.argmax(price_a[instability_range[0]:instability_range[1]])
time_max_price = time_a[instability_range[0] + max_index]
instability_item: OptimisationItem = {"bound_low": time_a[instability_range[0]],
"bound_high": time_a[instability_range[1]]+15*60 -1,
"duration": ONE_MINUTE, # inital value
"start": time_max_price,}
discharge_events_d[time_max_price] = instability_item
if stability_range[0] == stability_range[1]:
time_min_price = stability_range[0]
else:
min_index = np.argmin(price_a[stability_range[0]:stability_range[1]])
time_min_price = time_a[stability_range[0] + min_index]
stability_item: OptimisationItem = {"bound_low": time_a[stability_range[0]],
"bound_high": time_a[stability_range[1]]+15*60 -1,
"duration": ONE_MINUTE,
"start": time_min_price,}
discharge_events_d[time_min_price] = stability_item
event: booking_dt = np.zeros(2, dtype=booking_dt)
# discharge
event["start"][0] = time_max_price
event["duration"][0] = ONE_MINUTE
# charge
event["start"][1] = time_min_price
event["duration"][1] = ONE_MINUTE
discharge_events_l.append(event)
discharge_plan = np.array(discharge_events_l, dtype=booking_dt).reshape(-1)
optimise_discharge_plan(discharge_plan, discharge_events_d)
if __name__ == '__main__':
main()
</code></pre>
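On the UPDATE: with a fixed price per time slot, revenue is linear in the (dis)charge decisions, so this is indeed a linear program. A toy sketch with `scipy.optimize.linprog` (all numbers here are made up; `x[i]` is the fraction of full (dis)charge rate in slot `i`, positive meaning discharge):

```python
import numpy as np
from scipy.optimize import linprog

price = np.array([10.0, 30.0, 20.0, 5.0, 25.0])  # made-up slot prices
dt_soc = 0.4   # assumed SOC change per slot at full rate

n = len(price)
# maximize revenue sum(price_i * x_i * dt_soc)  ->  minimize the negative
c = -price * dt_soc

# SOC after slot k is 1 - dt_soc * cumsum(x)[k]; keep it in [0, 1]
cum = np.tril(np.ones((n, n)))          # lower-triangular cumsum operator
A_ub = np.vstack([dt_soc * cum,         # dt*cumsum <= 1   (SOC >= 0)
                  -dt_soc * cum])       # -dt*cumsum <= 0  (SOC <= 1)
b_ub = np.concatenate([np.ones(n), np.zeros(n)])

# end-of-horizon constraint: battery back to full, cumsum(x) == 0
A_eq = np.ones((1, n))
b_eq = np.array([0.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(-1, 1)] * n)
soc = 1 - dt_soc * np.cumsum(res.x)
```

The solver discharges fully in the most expensive slot and recharges in the cheapest one. The key modelling change versus the SLSQP attempt is using one decision variable per slot instead of start/duration pairs, which makes all constraints linear and removes the differentiability problems SLSQP was struggling with.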
|
<python><linear-programming><scipy-optimize-minimize>
|
2023-01-18 11:14:52
| 1
| 405
|
Andreas Schuldei
|
75,158,299
| 6,212,530
|
Django Rest Framework SimpleRouter inclusion inserts ^ into url pattern
|
<p>I have DRF application with urls defined using <code>SimpleRouter</code>.</p>
<pre><code># project/app/urls.py:
from rest_framework.routers import SimpleRouter
from .viewsets import ExampleViewSet, TopViewSet
router = SimpleRouter()
router.register(r"example/", ExampleViewSet, basename="example")
</code></pre>
<p>I imported this router to main project urls file.</p>
<pre><code># project/urls.py:
from project.app.urls import router as app_router
from django.contrib import admin
from django.urls import include, path
urlpatterns = [
path("admin/", admin.site.urls),
path("app/", include(app_router.urls)),
]
</code></pre>
<p><code>GET</code> localhost:8000/app/example/ returns 404.</p>
<p>Opening localhost:8000/app/example/ in browser returns this error page:</p>
<pre><code>Page not found (404)
Request Method: GET
Request URL: http://localhost:8000/app/example/
Using the URLconf defined in backend.urls, Django tried these URL patterns, in this order:
admin/
app/ ^example//$ [name='example-list']
app/ ^example//(?P<pk>[^/.]+)/$ [name='example-detail']
The current path, app/example/, didn't match any of these.
You're seeing this error because you have DEBUG = True in your Django settings file. Change that to False, and Django will display a standard 404 page.
</code></pre>
<p>I expected <code>app/example</code> in the URLconf, but instead there is <code>app/ ^example</code>. I think ^ means the beginning of the line. So my question is: why did this happen, and how can I fix it?</p>
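A quick regex-only sketch of why the 404 happens (no Django required): `SimpleRouter` appends its own trailing slash to the registered prefix, so registering `r"example/"` produces the pattern `^example//$`, which no normal URL can match. Registering `r"example"` without the slash is the usual fix.

```python
import re

# pattern the router generates from register(r"example/", ...):
buggy = re.compile(r"^example//$")
# pattern it would generate from register(r"example", ...):
fixed = re.compile(r"^example/$")

# the incoming path, with the "app/" prefix already consumed by include():
path = "example/"

buggy_matches = buggy.match(path) is not None   # False -> the 404
fixed_matches = fixed.match(path) is not None   # True once the slash is dropped
```

The `^` itself is harmless; Django's debug page shows router patterns as regexes, and every registered prefix is anchored that way.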
|
<python><django-rest-framework>
|
2023-01-18 11:08:48
| 1
| 1,028
|
Matija Sirk
|
75,158,264
| 10,753,968
|
Difference between multiprocessing and concurrent libraries?
|
<p>Here's what I understand:</p>
<p>The <code>multiprocessing</code> library uses multiple cores, so it's processing in parallel and not just simulating parallel processing like some libraries. To do this, it overrides the Python GIL.</p>
<p>The <code>concurrent</code> library doesn't override the Python GIL, and so it doesn't have the issues that <code>multiprocessing</code> has (i.e. locking, hanging). So it seems like it's not actually using multiple cores.</p>
<p>I understand the difference between concurrency and parallelism. My question is:</p>
<p>How does <code>concurrent</code> actually work behind the scenes?</p>
<p>And does <code>subprocess</code> work like <code>multiprocessing</code> or <code>concurrent</code>?</p>
|
<python><concurrency><multiprocessing>
|
2023-01-18 11:05:49
| 2
| 2,112
|
half of a glazier
|
75,158,238
| 19,009,577
|
How to get multiprocessing.Pool().starmap() to return iterable
|
<p>I'm trying to construct a dataframe from the inputs of a function as well as its output. Previously I was using for loops:</p>
<pre><code>for i in range(x):
for j in range(y):
k = func(i, j)
(Place i, j, k into dataframe)
</code></pre>
<p>However, the range was quite big, so I tried to speed it up with multiprocessing.Pool():</p>
<pre><code>with mp.Pool() as pool:
    result = pool.starmap(func, ((i, j) for j in range(y) for i in range(x)))
(Place result into dataframe)
</code></pre>
<p>However with pool I no longer have access to i and j as they are merely inputs into the function</p>
<p>I tried to get the function to return the inputs, but that doesn't really scale as the number of for loops increases. So how can I get hold of the iterables that were passed into starmap?</p>
|
<python><multiprocessing><python-multiprocessing>
|
2023-01-18 11:03:27
| 1
| 397
|
TheRavenSpectre
|
75,158,198
| 12,231,454
|
Test validate_unique raises ValidationError Django forms
|
<p>I have a ModelForm called SignUpForm located in myproj.accounts.forms</p>
<p>SignUpForm overrides Django's validate_unique so that the 'email' field is excluded from the 'unique' validation required by the model's unique=True (this is dealt with later in the view). Everything works as expected.</p>
<p>I now want to test the code by raising a ValidationError when self.instance.validate_unique(exclude=exclude) is called.</p>
<p>The problem I have is how to use mock to patch the instance.validate_unique so that a ValidationError is raised.</p>
<p><strong>SignUpForm's validate_unique(self) - myproj.accounts.forms</strong></p>
<pre><code> def validate_unique(self):
exclude = self._get_validation_exclusions()
exclude.add('email')
try:
self.instance.validate_unique(exclude=exclude)
except forms.ValidationError as e:
self._update_errors(e)
</code></pre>
<p>The following test works, but it raises the error on the method (validate_unique) and not on the instance (self.instance.validate_unique).</p>
<pre><code> def test_validate_unique_raises_exception(self):
with patch.object(SignUpForm, 'validate_unique') as mock_method:
mock_method.side_effect = Exception(ValidationError)
data = {"email": 'someone@somewhere.com',
                    'full_name': 'A User',
"password": "A19A23CD",
}
form = SignUpForm(data)
self.assertRaises(ValidationError)
</code></pre>
<p>My question is how can I raise a ValidationError using mock when self.instance.validate_unique is called?</p>
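A framework-free sketch of the mocking mechanics (every class here is a stand-in, not Django's real API): patch `validate_unique` on the *model class*, so the lookup through `self.instance` hits the mock and the form's `except` branch is the code actually exercised.

```python
from unittest.mock import patch

class ValidationError(Exception):
    """Stand-in for django.core.exceptions.ValidationError."""

class UserModel:
    """Stand-in for the form's model class."""
    def validate_unique(self, exclude=None):
        pass  # the real model would hit the database here

class SignUpForm:
    """Minimal stand-in mirroring the form in the question."""
    def __init__(self):
        self.instance = UserModel()
        self.errors = []

    def validate_unique(self):
        try:
            self.instance.validate_unique(exclude={'email'})
        except ValidationError as e:
            self.errors.append(e)

# Patching the model class means self.instance.validate_unique resolves
# to the mock, so the form's own try/except runs rather than being
# replaced wholesale (which is what patching SignUpForm did):
with patch.object(UserModel, 'validate_unique',
                  side_effect=ValidationError('duplicate')):
    form = SignUpForm()
    form.validate_unique()
```

In the real test the equivalent would be `patch.object(<your model class>, 'validate_unique', side_effect=ValidationError(...))` around the form instantiation.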
|
<python><django><django-forms><python-unittest><django-unittest>
|
2023-01-18 10:59:58
| 1
| 383
|
Radial
|
75,158,187
| 10,437,727
|
Python unittest mocking not working as expected
|
<p>I got this project structure:</p>
<pre><code>my_project
├── __init__.py
├── app.py
├── helpers
│   └── __init__.py
└── tests
    ├── __init__.py
    ├── integration
    │   ├── __init__.py
    │   └── test_app.py
    └── unit
        ├── __init__.py
        └── test_helpers.py
</code></pre>
<p>So far, unit testing for <code>helpers</code> hasn't been complicated: I'd patch third parties and some functions within helpers.</p>
<p>The integration testing in <code>tests/integration/test_app.py</code> is being a bit of a blocking point, because the patching isn't doing what I need it to do.
For example, I have a method like this:</p>
<p>helpers/__init__.py:</p>
<pre class="lang-py prettyprint-override"><code>def compute_completeness_metric(
vendor,
product,
fields,
endpoint):
body = {"vendor": vendor, "product": product, "fields": fields}
response_json = requests.post(
"http://example.com", json=body)
if response_json.status_code != 200:
response_json = "Can't compute"
else:
response_json = response_json.json()
return response_json
</code></pre>
<p>Now, in app.py, I call it the following way:</p>
<pre class="lang-py prettyprint-override"><code>from helpers import compute_completeness_metric
def main(req: func.HttpRequest) -> func.HttpResponse:
final_results = {}
final_results = compute_completeness_metric(vendor, product, fields, endpoint)
...
</code></pre>
<p>And when testing, I try to patch it the following way:</p>
<pre class="lang-py prettyprint-override"><code> @patch('data_quality.helpers.compute_completeness_metric')
def test_app(self, mock_compute_completeness_metric):
mock_compute_completeness_metric.return_value = "blablabla"
</code></pre>
<p>But the mocked methods do not return what they're supposed to return; instead they execute as if they weren't mocked.</p>
<p>Am I missing something? Should I be mocking the methods within <code>get_rule_data_models()</code> ?</p>
<p>TIA!</p>
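A runnable illustration of the classic "patch where it's used" rule (the module names below are invented stand-ins built on the fly): because `app.py` does `from helpers import compute_completeness_metric`, the app module holds its *own* reference to the function, and that is the name the test has to patch — not the one in the helpers module.

```python
import sys
import types
from unittest.mock import patch

# Build a tiny helpers/app pair mirroring the layout in the question.
helpers = types.ModuleType("helpers_demo")
exec("def compute(): return 'real'", helpers.__dict__)
sys.modules["helpers_demo"] = helpers

app = types.ModuleType("app_demo")
exec("from helpers_demo import compute\n"
     "def main(): return compute()", app.__dict__)
sys.modules["app_demo"] = app

# app_demo copied the function into its own namespace at import time,
# so patching the helpers module leaves that copy untouched:
with patch("helpers_demo.compute", return_value="mocked"):
    unaffected = app.main()        # still 'real'

# Patch the name in the module where it is *used*:
with patch("app_demo.compute", return_value="mocked"):
    affected = app.main()          # now 'mocked'
```

Translated to the question: `@patch('data_quality.app.compute_completeness_metric')` (the importing module) is what takes effect, not `@patch('data_quality.helpers.compute_completeness_metric')`.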
|
<python><unit-testing><integration-testing><python-unittest>
|
2023-01-18 10:58:58
| 1
| 1,760
|
Fares
|
75,158,146
| 3,623,723
|
How to convert a structured numpy array to xarray.DataArray?
|
<p>I have a structured numpy array, containing sampled data from several measurement series:
Each series samples <code>m</code> as a function of <code>l</code>, and differs from the other series by <code>a</code>. l is not sampled at constant values, and there is a different number of samples per series, so we can't just generate a 2D array for each of <code>m</code> and <code>l</code>.
Example data:</p>
<pre class="lang-py prettyprint-override"><code> In [1]: data
Out[1]:array([( 0., 1323., 69384.), ( 0., 1344., 73674.), ( 0., 1344., 73674.),
( 0., 1439., 76678.), ( 0., 1538., 79584.), ( 0., 1643., 82389.),
( 0., 2382., 95634.), ( 0., 2439., 96028.), ( 0., 2439., 96028.),
( 0., 2574., 98154.), ( 0., 2795., 99937.), (1219., 1316., 59055.),
(1219., 1332., 61473.), (1219., 1350., 63881.), (1219., 1372., 66270.),
(1219., 1372., 66270.), (1219., 1491., 69654.), (1219., 1617., 72917.),
(1219., 1749., 76053.), (1219., 1885., 79060.), (1219., 2028., 81927.),
(1219., 2072., 82803.), (1219., 2118., 83606.), (1219., 2166., 84340.),
(1219., 2846., 91028.), (1219., 2911., 91379.), (1219., 2977., 91635.),
(1219., 4164., 95161.), (2438., 1313., 52688.), (2438., 1331., 54496.),
(2438., 1350., 56304.), (2438., 1368., 58113.), (2438., 1480., 60990.),
(2438., 1598., 63754.), (2438., 1720., 66399.), (2438., 1846., 68926.),
(2438., 1978., 71326.), (2438., 2757., 79713.), (2438., 2819., 80026.),
(2438., 2882., 80258.), (2438., 4155., 84968.)],
dtype=[('a', '<f8'), ('l', '<f8'), ('m', '<f8')])
</code></pre>
<p>I'm pretty sure that it should be possible to turn this into an <code>xarray.DataArray</code>, since that is made to deal with incomplete data.</p>
<p>What I would like to end up with is an <code>xarray.DataSet</code>, with two coordinates, <code>a</code> and <code>i</code>, where <code>i</code> is a simple integer index enumerating the data points in each measurement series.
That way, I can get the measurements at each value of <code>a</code>, by the first index, and e.g. the first measured sample by selecting <code>i=0</code>.</p>
<p>So my preferred end result would look like this:</p>
<pre class="lang-py prettyprint-override"><code>In [100]: desired_array
Out[100]: <xarray.Dataset>
Dimensions: (a: 3, i: 17)
Coordinates:
* a (a) float64 0. 1219. 2438.
* i (i) int64 0 1 2 3 4 ... 15 16 17
Data variables:
l (a, i) float64 1323. 1344. 1344. 1439. ... 2882. 4155.
m (a, i) float64 69384. 73674. 73674. ... 80258. 84968.
</code></pre>
<p>There would be some missing values in there as well, since there are not exactly 17 data values for each value of <code>a</code>, but my understanding is that <code>xarray</code> can deal with this.</p>
<p>Simply defining an xarray, and specifying <code>data['m']</code> as the data, and the other two as coordinates, fails because that requires a 2D array as input, and <code>data['m']</code> only has one dimension.
I suppose I could manually iterate through the data, find the points where 'a' changes and then generate 2D arrays for both 'm' and 'l', where each column corresponds to one value of 'a', and put them into an <code>xarray.DataSet</code> (with two data variables 'l' and 'm', and coordinates 'a' and another unnamed one, which enumerates the points of each measurement series), but then I'd first have to figure out the length of the longest measurement series, and the resulting intermediate <code>np.array</code> would contain a bunch of empty fields.
The full dataset has several more variables that differ between series, and I imagine at that point implementing code to sort everything would get pretty tedious.</p>
<p>The recommended way, according to <a href="https://docs.xarray.dev/en/stable/user-guide/io.html" rel="nofollow noreferrer">xarray documentation</a>, is to convert to a Pandas dataframe first, and then use <code>pandas.DataFrame.to_xarray()</code>.</p>
<p>As I've just been made aware (thanks to jhamman), pandas is actually a dependency of xarray, so this should be convenient.</p>
<p>however ...</p>
<pre class="lang-py prettyprint-override"><code>In [61]: tempdf = pandas.DataFrame(data, index=data['a'].astype(int), columns=['l', 'm'])
</code></pre>
<pre class="lang-py prettyprint-override"><code>In [62]: tempdf
Out[62]:
l m
0 1323.0 69384.0
0 1344.0 73674.0
0 1344.0 73674.0
0 1439.0 76678.0
0 1538.0 79584.0
0 1643.0 82389.0
0 2382.0 95634.0
0 2439.0 96028.0
0 2439.0 96028.0
0 2574.0 98154.0
0 2795.0 99937.0
1219 1316.0 59055.0
1219 1332.0 61473.0
1219 1350.0 63881.0
...
</code></pre>
<p>It seems that Pandas does not notice that my chosen index is repeating, and does not group the data accordingly. Also, I'd like to add a second index which goes through the data where <code>a</code> is constant.</p>
<p>Knowing that I probably won't like the result, I convert the above to xarray, and get:</p>
<pre class="lang-py prettyprint-override"><code>tempdf.to_xarray()
Out[66]:
<xarray.Dataset>
Dimensions: (index: 41)
Coordinates:
* index (index) int64 0 0 0 0 0 0 0 ... 2438 2438 2438 2438 2438 2438 2438
Data variables:
l (index) float64 1.323e+03 1.344e+03 ... 2.882e+03 4.155e+03
m (index) float64 6.938e+04 7.367e+04 ... 8.026e+04 8.497e+04
</code></pre>
<p>...not what I wanted:</p>
<ul>
<li>the index has lost its name</li>
<li>the index variable has repeating values</li>
<li>...and of course the data is still in 1D format.</li>
</ul>
<p>... what am I not getting?
I tried different variations on the data above, and sometimes pandas seemed to accept an index variable as I wanted to, sometimes it did not, but I haven't worked out what the problem is, and I definitely haven't worked out how to add a generic index, especially since the measurement series are not of equal length (so there's no nice and regular array that could hold the data).</p>
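<p>To make the manual approach I describe above concrete, here is a plain-Python sketch on toy data (variable names <code>a</code> and <code>l</code> as in my file; shorter measurement series padded with <code>None</code> so they fit a rectangular array):</p>

```python
from itertools import groupby

# toy records: (a, l, m) rows; 'a' is constant within each measurement series
rows = [
    (0, 1323.0, 69384.0),
    (0, 1344.0, 73674.0),
    (1219, 1316.0, 59055.0),
    (1219, 1332.0, 61473.0),
    (1219, 1350.0, 63881.0),
]

# split into series wherever 'a' changes
series = {a: list(grp) for a, grp in groupby(rows, key=lambda r: r[0])}

# pad every series to the longest length
longest = max(len(v) for v in series.values())
l_cols = {a: [r[1] for r in grp] + [None] * (longest - len(grp))
          for a, grp in series.items()}

print(l_cols)
# {0: [1323.0, 1344.0, None], 1219: [1316.0, 1332.0, 1350.0]}
```

<p>This is exactly the padding with empty fields that I'd like to avoid having to write by hand.</p>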
|
<python><numpy><python-xarray>
|
2023-01-18 10:56:06
| 0
| 3,363
|
Zak
|
75,158,097
| 1,421,907
|
Is it possible to use Python Mixed Integer Linear programming to get all solutions in an interval?
|
<p>I have a linear problem to solve, looking for integer solutions. I found a way to solve it using the new <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.milp.html" rel="nofollow noreferrer">milp</a> implementation in scipy. Here is a demonstration code.</p>
<p>The problem is the following. From a vector of weights <code>w</code>, I am looking for the integer vector <code>x</code> such that the dot product of <code>x</code> and <code>w</code> is in a given range. It looks something like this:</p>
<pre><code># minimize
abs(w^T @ x - target)
</code></pre>
<p>And I translated this in the following to implement in milp:</p>
<pre><code># maximize
w^T @ x
# constraints
target - error <= w^T @ x <= target + error
</code></pre>
<p>In my specific context, several solutions may exist for x. Is there a way to get all the solutions in the given interval instead of maximizing (or minimizing) something ?</p>
<p>Here is the milp implementation.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds
# inputs
ratio_l, ratio_u = 0.2, 3.0
max_bounds = [100, 200, 2, 20, 2]
target = 380.2772 # 338.34175
lambda_parameter = 2
error = lambda_parameter * 1e-6 * target
# coefficients of the linear objective function
w = np.array([12.0, 1.007825, 14.003074, 15.994915, 22.989769], dtype=np.float64)
# the aim is to minimize
# w^T x - target_mass
# instead I maximize
# w^T x
# in the constraint domain
# target - error <= w^T x <= target + error
# constraints on variables 0 and 1:
# ratio_l <= x[1] / x[0] <= ratio_u
# translation =>
# (ratio_l - ratio_u) * x[1] <= -ratio_u * x[0] + x[1] <= 0
# use max (x[1]) to have a constant
# linear objective function
c = w
# integrality of the decision variables
# 3 is semi-integer = within bounds or 0
integrality = 3 * np.ones_like(w)
# Matrice A that define the constraints
A = np.array([
# boundaries of the mass defined from lambda_parameters
w,
# c[1] / c[0] max value
[-ratio_u, 1.0, 0., 0., 0.],
])
# b_up and b_low vectors
# b_low <= A @ x <= b_up
n_max_C = max_bounds[0]
b_up = [
target + error, # mass target
0., # c[1] / c[0] constraints up
]
b_low = [
target - error, # mass target
(ratio_l - ratio_u) * max_bounds[0], # H_C constraints up
]
# set up linear constraints
constraints = LinearConstraint(A, b_low, b_up)
bounds = Bounds(
lb=[0, 0, 0, 0, 0],
ub=max_bounds,
)
results = milp(
c=c,
constraints=constraints,
integrality=integrality,
bounds=bounds,
options=dict(),
)
print(results)
</code></pre>
<p>The results is this</p>
<pre class="lang-bash prettyprint-override"><code> fun: 380.277405
message: 'Optimization terminated successfully. (HiGHS Status 7: Optimal)'
mip_dual_bound: 380.27643944560145
mip_gap: 2.5390790665913637e-06
mip_node_count: 55
status: 0
success: True
x: array([19., 40., 0., 7., 0.])
</code></pre>
<p>But other feasible x arrays exist, with a larger deviation from the target. Here is one of them:</p>
<pre><code>m = np.dot(w, [19., 40., 0., 7., 0.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s} match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
f" {m - target:12.6f} {error:12.6f} -> {target - error <= m <= target + error}")
</code></pre>
<pre><code> target calc m deviation error match?
380.277200 380.276439 <= 380.277405 <= 380.277961 0.000205 0.000761 -> True
</code></pre>
<p>These two other examples also satisfy the constraints, and I wonder how I can obtain them without implementing a grid search (like <code>brute</code> in scipy).</p>
<pre><code>m = np.dot(w, [20., 39., 1., 4., 1.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s} match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
f" {m - target:12.6f} {error:12.6f} -> {target - error <= m <= target + error}")
</code></pre>
<pre><code> target calc m deviation error match?
380.277200 380.276439 <= 380.277678 <= 380.277961 0.000478 0.000761 -> True
</code></pre>
<pre><code>m = np.dot(w, [21., 38., 2., 1., 2.])
print(f"{'target':>10s} {'calc m':>27s} {'deviation':>27s} {'error':>12s} match?")
print(f"{target:10.6f} {target - error:14.6f} <= {m:10.6f} <= {target + error:10.6f}"
f" {m - target:12.6f} {error:12.6f} -> {target - error <= m <= target + error}")
</code></pre>
<pre><code> target calc m deviation error match?
380.277200 380.276439 <= 380.277951 <= 380.277961 0.000751 0.000761 -> True
</code></pre>
|
<python><scipy><scipy-optimize><mixed-integer-programming>
|
2023-01-18 10:51:59
| 1
| 9,870
|
Ger
|
75,158,015
| 9,212,995
|
How can additional visibility options be added to the CKAN form for data records?
|
<p>I'm wondering if there is a straightforward way to change the CKAN dataset form. I would be interested in <strong>adding extra visibility options</strong> beyond <strong>public</strong> and <strong>private</strong>,
that is to say:</p>
<ol>
<li>Public</li>
<li>Private</li>
<li>Sub Project</li>
<li>Project</li>
<li>Organisation/Company/Institute</li>
</ol>
<p>CKAN uses <code>package_basic_fields.html</code></p>
<p>...</p>
<pre><code> {% if show_visibility_selector %}
{% block package_metadata_fields_visibility %}
<div class="form-group control-medium">
<label for="field-private" class="form-label">{{ _('Visibility') }}</label>
<div class="controls">
<select id="field-private" name="private" class="form-control">
{% for option in [('True', _('Private')), ('False', _('Public'))] %}
<option value="{{ option[0] }}" {% if option[0] == data.private|trim %}selected="selected"{% endif %}>{{ option[1] }}</option>
### I want to add extra options here
{% endfor %}
</select>
</div>
</div>
{% endblock %}
{% endif %}
{% if show_organizations_selector and show_visibility_selector %}
</code></pre>
<p>...</p>
<p><strong>ckanext-scheming</strong> doesn't talk about this either.</p>
<p>Any recommendation for me?</p>
|
<python><jinja2><ckan>
|
2023-01-18 10:45:06
| 1
| 372
|
Namwanza Ronald
|
75,158,000
| 997,832
|
`Tensor' object has no attribute 'numpy' Error
|
<p>I'm trying to apply a Lambda function to convert tensor values. I need to get the tensor values as a numpy array. I'm trying the <code>.numpy()</code> method, but it gives a <code>'Tensor' object has no attribute 'numpy'</code> error. I added configurations for running the tensor functions eagerly, but I'm not sure whether they apply in this case. If I simply create a Tensor constant and call <code>.numpy()</code>, it works. What's wrong here?</p>
<p>The code that doesn't work:</p>
<pre><code>tf.config.experimental_run_functions_eagerly(True)
def find_cluster(arr):
print(arr.numpy())
FindCluster = keras.layers.core.Lambda(lambda x: find_cluster(x))
graph_input = Input(shape=(), dtype='string', name='graph_input')
x = FindCluster(graph_input)
m = Model(inputs=graph_input, outputs=x)
m.compile(loss='categorical_crossentropy', optimizer='adam', run_eagerly=True, metrics=['acc'])
</code></pre>
<p>The code below works:</p>
<pre><code>tensor = tf.constant([[10,20], [30,40], [50,60]])
tensor_array = tensor.numpy()
</code></pre>
<p>NOTE: I need to use Python 3.6 so the latest tensorflow version available is 2.6.2 and I'm using tensorflow-gpu.</p>
|
<python><tensorflow><deep-learning>
|
2023-01-18 10:44:22
| 1
| 1,395
|
cuneyttyler
|
75,157,892
| 12,883,297
|
Create a new column which calculates the difference between last value and the first value of time column at groupby level in pandas
|
<p>I have a dataframe</p>
<pre><code>df = pd.DataFrame([["A","9:00 AM"],["A","11:12 AM"],["A","1:03 PM"],["B","9:00 AM"],["B","12:56 PM"],["B","1:07 PM"],
["B","1:18 PM"]],columns=["id","time"])
</code></pre>
<pre><code>id time
A 09:00 AM
A 11:12 AM
A 01:03 PM
B 09:00 AM
B 12:56 PM
B 01:07 PM
B 01:18 PM
</code></pre>
<p>I want to create a new column which calculates the difference between the last value and the first value of the time column at the id level, and adds an offset of 30 min to that value.</p>
<p>For example, for id A the difference between 01:03 PM and 09:00 AM is 4 hr 3 min. Adding the 30 min offset makes it 4 hr 33 min. That value goes into the new column total_hrs for all rows of id A.</p>
<p><strong>Expected Output:</strong></p>
<pre><code>df_out = pd.DataFrame([["A","9:00 AM","04:33:00"],["A","11:12 AM","04:33:00"],["A","1:03 PM","04:33:00"],["B","9:00 AM","04:48:00"],
["B","12:56 PM","04:48:00"],["B","1:07 PM","04:48:00"],["B","1:18 PM","04:48:00"]],columns=["id","time","total_hrs"])
</code></pre>
<pre><code>id time total_hrs
A 09:00 AM 04:33:00
A 11:12 AM 04:33:00
A 01:03 PM 04:33:00
B 09:00 AM 04:48:00
B 12:56 PM 04:48:00
B 01:07 PM 04:48:00
B 01:18 PM 04:48:00
</code></pre>
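<p>To spell out the arithmetic for id <code>A</code> with plain <code>datetime</code> (just verifying the expected number, not the pandas solution I'm after):</p>

```python
from datetime import datetime, timedelta

fmt = "%I:%M %p"
first = datetime.strptime("9:00 AM", fmt)   # first time of group A
last = datetime.strptime("1:03 PM", fmt)    # last time of group A

# difference between last and first, plus the 30 min offset
total = (last - first) + timedelta(minutes=30)
print(total)  # 4:33:00
```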
|
<python><python-3.x><pandas><dataframe><datetime>
|
2023-01-18 10:35:25
| 2
| 611
|
Chethan
|
75,157,791
| 17,788,573
|
AttributeError: 'NoneType' object has no attribute 'glfwGetCurrentContext'
|
<p>I'm following a tutorial on reinforcement learning in which I'm currently trying to use gymnasium and mujoco to train agents. I've installed mujoco, and when I try to run the program, the sim window opens for a second and then throws this error. I don't know what I'm doing wrong.</p>
<pre><code>import gymnasium as gym
import stable_baselines3
import imageio
env=gym.make('Humanoid-v4', render_mode='human')
obs=env.reset()
env.render()
print(obs)
</code></pre>
<p>BTW, I'm getting the output of print(obs) which returns a tensor. It's just the sim environment which is throwing this error.</p>
<p>Error Traceback</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\krish\anaconda3\envs\learn\lib\site-packages\gymnasium\envs\mujoco\mujoco_rendering.py", line 337, in __del__
File "C:\Users\krish\anaconda3\envs\learn\lib\site-packages\gymnasium\envs\mujoco\mujoco_rendering.py", line 330, in free
File "C:\Users\krish\anaconda3\envs\learn\lib\site-packages\glfw\__init__.py", line 2358, in get_current_context
AttributeError: 'NoneType' object has no attribute 'glfwGetCurrentContext'
</code></pre>
|
<python><reinforcement-learning><mujoco>
|
2023-01-18 10:29:00
| 2
| 311
|
Vaikruta
|
75,157,585
| 8,916,474
|
Tox installing external libraries in tox env not from requirements.txt
|
<p>I use windows. My tox.ini:</p>
<pre><code>[tox]
envlist =
docs
min_version = 4
skipsdist = True
allowlist_externals = cd
passenv =
HOMEPATH
PROGRAMDATA
basepython = python3.8
[testenv:docs]
changedir = docs
deps =
-r subfolder/package_name/requirements.txt
commands =
bash -c "cd ../subfolder/package_name/ && pip install ."
</code></pre>
<p>Within a tox environment, I install some additional packages. I do it as above in tox's commands.</p>
<p>However, tox is installing a library in WSL's python instance path:</p>
<blockquote>
<p><strong>/home/usr/.local/lib/python3.8/site-packages</strong></p>
</blockquote>
<p>Not in tox's environment place for libraries:</p>
<blockquote>
<p><strong>.tox/env/lib/site-packages</strong></p>
</blockquote>
<p>I trigger tox inside the python virtual environment. If I run it in Docker everything works fine.</p>
<p>I use Windows. Besides WSL I have two python versions installed on my Windows: 3.8 and the main one 3.10. However, the virtual env that I'm using for tox is with 3.8. But I tested it with 3.10 and I receive the same result. In the system variables' path they are in such order and on the top of the list:</p>
<blockquote>
<p>C:\Users\UserName\AppData\Local\Programs\Python\Python310\Scripts<br />
C:\Users\UserName\AppData\Local\Programs\Python\Python310<br />
C:\Users\UserName\AppData\Local\Programs\Python\Python38\Scripts<br />
C:\Users\UserName\AppData\Local\Programs\Python\Python38\</p>
</blockquote>
<p>As I found in the tox documentation:</p>
<blockquote>
<p>Name or path to a Python interpreter which will be used for creating
the virtual environment, first one found wins. This determines in
practice the Python for what weβll create a virtual isolated
environment.</p>
</blockquote>
<p>So:
I'm trying to understand and fix how tox picks up the path to the Python interpreter. As a result, it should install libraries inside the tox environment's site-packages folder.</p>
|
<python><tox>
|
2023-01-18 10:09:50
| 2
| 504
|
QbS
|
75,157,501
| 9,452,512
|
How to access the previous item calculated in a list comprehension?
|
<p>when I create a list, I use the one-liner</p>
<pre class="lang-py prettyprint-override"><code>new_list = [func(item) for item in somelist]
</code></pre>
<p>Is there a simple way to write the following <em>iteration</em> in one line?</p>
<pre class="lang-py prettyprint-override"><code>new_list = [0]
for _ in range(N):
new_list.append(func(new_list[-1]))
</code></pre>
<p>or even</p>
<pre class="lang-py prettyprint-override"><code>new_list = [0]
for t in range(N):
new_list.append(func(t, new_list[-1]))
</code></pre>
<p>i.e. each item is calculated based on the previous item with a specific initializer.</p>
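<p>To make the desired behaviour concrete, here is the first loop with an arbitrarily chosen <code>func</code> and <code>N</code>:</p>

```python
# the explicit loop from the question, with a concrete func and N for illustration
func = lambda prev: prev + 2
N = 3

new_list = [0]
for _ in range(N):
    new_list.append(func(new_list[-1]))

print(new_list)  # [0, 2, 4, 6]
```

<p>This is the result I'd like to obtain from a one-liner instead.</p>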
|
<python><python-3.x>
|
2023-01-18 10:03:07
| 2
| 1,473
|
Uwe.Schneider
|
75,157,428
| 16,852,041
|
redis.exceptions.DataError: Invalid input of type: 'dict'. Convert to a bytes, string, int or float first
|
<p>Goal: store a <code>dict()</code> or <code>{}</code> as the value for a key-value pair, to <code>set()</code> onto <strong>Redis</strong>.</p>
<p>Code</p>
<pre><code>import redis
r = redis.Redis()
value = 180
my_dict = dict(bar=value)
r.set('foo', my_dict)
</code></pre>
<pre><code>redis.exceptions.DataError: Invalid input of type: 'dict'. Convert to a bytes, string, int or float first.
</code></pre>
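<p>For what it's worth, converting the dict to a string first (here with <code>json</code>) does produce a type that redis-py accepts, though I'd like to know the recommended approach. I've left the actual <code>r.set()</code> call out so this snippet runs without a server:</p>

```python
import json

value = 180
my_dict = dict(bar=value)

# a str is one of the value types redis-py accepts
payload = json.dumps(my_dict)
print(payload)  # {"bar": 180}

# and it round-trips back to the original dict
print(json.loads(payload))  # {'bar': 180}
```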
|
<python><python-3.x><serialization><redis><byte>
|
2023-01-18 09:58:14
| 1
| 2,045
|
DanielBell99
|
75,157,367
| 11,945,144
|
cross validation in pipeline python
|
<p>I have this pipeline:</p>
<pre><code>
pipe = Pipeline(steps=[
("fasttext", FastTextVectorTransformer()),
("umap",umap_skl(n_components=8)),
("classifier", HistGradientBoostingClassifier())])
pipe.fit(X_train, y_train)
print(f'Fbeta_score:{fbeta_score(y[test_index], pipe.predict(X[test_index]), average=None, beta=1.4)}')
print(f'Accuracy:{accuracy_score(y[test_index], pipe.predict(X[test_index]))}')
print(f'Precission:{precision_score(y_test, pipe.predict(X[test_index]))}')
print(f'Recall:{recall_score(y_test, pipe.predict(X_test))}')
print(f'Roc_AUC:{roc_auc_score(y_test, pipe.predict(X_test))}')
</code></pre>
<p>And I need to report the scores using cross-validation.
How can I add cross-validation to the pipeline?</p>
<p>Thanks!</p>
|
<python><pipeline><cross-validation>
|
2023-01-18 09:53:12
| 0
| 343
|
Maite89
|
75,157,339
| 19,238,204
|
I want to Plot Circle and its Solid Revolution (Sphere) but get Error: loop of ufunc does not support argument 0 o
|
<p>I have added the <code>nonnegative=True</code> assumption for the variables <code>x</code> and <code>r</code>, so why can't I plot this?</p>
<p>this is my code:</p>
<pre><code># Calculate the surface area of y = sqrt(r^2 - x^2)
# revolved about the x-axis
import matplotlib.pyplot as plt
import numpy as np
import sympy as sy
x = sy.Symbol("x", nonnegative=True)
r = sy.Symbol("r", nonnegative=True)
def f(x):
return sy.sqrt(r**2 - x**2)
def fd(x):
return sy.simplify(sy.diff(f(x), x))
def f2(x):
return sy.sqrt((1 + (fd(x)**2)))
def vx(x):
return 2*sy.pi*(f(x)*sy.sqrt(1 + (fd(x) ** 2)))
vxi = sy.Integral(vx(x), (x, -r, r))
vxf = vxi.simplify().doit()
vxn = vxf.evalf()
n = 100
fig = plt.figure(figsize=(14, 7))
ax1 = fig.add_subplot(221)
ax2 = fig.add_subplot(222, projection='3d')
ax3 = fig.add_subplot(223)
ax4 = fig.add_subplot(224, projection='3d')
x = np.linspace(1, 3, 3)
# Plot the circle
y = np.sqrt(r ** 2 - x ** 2)
t = np.linspace(0, np.pi * 2, n)
xn = np.outer(x, np.cos(t))
yn = np.outer(x, np.sin(t))
zn = np.zeros_like(xn)
for i in range(len(x)):
zn[i:i + 1, :] = np.full_like(zn[0, :], y[i])
ax1.plot(x, y)
ax1.set_title("$f(x)$")
ax2.plot_surface(xn, yn, zn)
ax2.set_title("$f(x)$: Revolution around $y$")
# find the inverse of the function
y_inverse = x
x_inverse = np.sqrt(r ** 2 - y_inverse ** 2)
xn_inverse = np.outer(x_inverse, np.cos(t))
yn_inverse = np.outer(x_inverse, np.sin(t))
zn_inverse = np.zeros_like(xn_inverse)
for i in range(len(x_inverse)):
zn_inverse[i:i + 1, :] = np.full_like(zn_inverse[0, :], y_inverse[i])
ax3.plot(x_inverse, y_inverse)
ax3.set_title("Inverse of $f(x)$")
ax4.plot_surface(xn_inverse, yn_inverse, zn_inverse)
ax4.set_title("$f(x)$: Revolution around $x$ \n Surface Area = {}".format(vxn))
plt.tight_layout()
plt.show()
</code></pre>
|
<python><sympy>
|
2023-01-18 09:50:15
| 1
| 435
|
Freya the Goddess
|
75,157,264
| 498,504
|
incompatible input layer size error in a Cat Dog Classification CNN model
|
<p>I'm writing a simple CNN model to classify cat and dog pictures from a local directory named <code>train</code>.</p>
<p>Below are the codes that I have written so far:</p>
<pre><code>import numpy as np
import cv2 as cv
import tensorflow.keras as keras
import os
from sklearn.preprocessing import LabelEncoder
from tensorflow.keras.utils import to_categorical
from tensorflow.keras import layers , models
from sklearn.model_selection import train_test_split
images_vector =[]
images_label =[]
fileNames = os.listdir('train')
for i , f_name in enumerate(fileNames) :
image = cv.imread('train/' + f_name)
image = cv.resize(image , (50,50))
image = image/255.0
image = image.flatten()
images_vector.append(image)
images_label.append(f_name.split('.')[0])
if i%10000 == 0 :
print(f" [INFO ] : {i} images are processed...")
labelEncoder = LabelEncoder()
images_label = labelEncoder.fit_transform(images_label)
images_label = to_categorical(images_label)
images_label
X_train , X_test , y_train , y_test =
train_test_split(images_vector ,images_label , random_state=40 , train_size=0.8)
print('X_train: ' + str(X_train.shape))
print('Y_train: ' + str(y_train.shape))
print('X_test: ' + str(X_test.shape))
print('Y_test: ' + str(y_test.shape))
</code></pre>
<p>Now after running following code to build model :</p>
<pre><code>net = models.Sequential([
layers.Conv2D(32 , (3,3) , activation='relu' , input_shape = (1,7500)) ,
layers.MaxPooling2D(2,2),
layers.Conv2D(64 , (3,3) , activation='relu'),
layers.Flatten(),
layers.Dense(2 , activation='softmax')
])
net.summary()
</code></pre>
<p>I got this error :</p>
<pre><code>ValueError: Input 0 of layer "conv2d_96" is incompatible with the layer: expected min_ndim=4, found ndim=3. Full shape received: (None, 1, 7500)
</code></pre>
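<p>One thing I suspect (my own guess, not confirmed): I flatten every image to a 7500-vector (50 Γ 50 Γ 3), while <code>Conv2D</code> expects 4-D input <code>(batch, height, width, channels)</code>. A numpy-only sketch of the shapes involved:</p>

```python
import numpy as np

n_images = 4
flat = np.zeros((n_images, 7500))      # shape produced by flattening each image

assert 50 * 50 * 3 == 7500             # each flat vector holds one 50x50 RGB image
images = flat.reshape(-1, 50, 50, 3)   # 4-D shape that Conv2D(input_shape=(50, 50, 3)) expects
print(images.shape)  # (4, 50, 50, 3)
```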
<p>I searched a lot and tried different shapes, but I cannot find the solution. Can anybody help me?</p>
|
<python><tensorflow><keras>
|
2023-01-18 09:45:45
| 1
| 6,614
|
Ahmad Badpey
|
75,157,153
| 7,800,760
|
Always include set of packages when creating a new conda environment
|
<p>Everytime I create a new conda environment I need to:</p>
<ol>
<li>Always include black pylint pytest packages in addition to the project's specific ones</li>
<li>Always create three subdirectories: data, tests and a third with the environment's name</li>
<li>(optionally) create a corresponding new GitHub repo</li>
</ol>
<p>Is there a "conda" way of achieving item 1?</p>
<p>Is there some utility / script to achieve the latter two items?</p>
|
<python><conda>
|
2023-01-18 09:36:27
| 0
| 1,231
|
Robert Alexander
|
75,157,090
| 11,279,170
|
Fill NAs : Min of values in group
|
<p>Here is my DataFrame.</p>
<pre><code>df = pd.DataFrame ( {'CNN': ['iphone 11 63 GB TMO','iphone 11 128 GB ATT','iphone 11 other carrier','iphone 12 256 GB TMO','iphone 12 64 GB TMO','iphone 12 other carrier'],
'Family Name':['iphone 11', 'iphone 11', 'iphone 11', 'iphone 12', 'iphone 12', 'iphone 12'],
'Storage': [63, 128,np.nan, 256,64, np.nan]})
</code></pre>
<pre><code>Output:
CNN Family Name Storage
0 iphone 11 63 GB TMO iphone 11 63.0
1 iphone 11 128 GB ATT iphone 11 128.0
2 iphone 11 other carrier iphone 11 NaN
3 iphone 12 256 GB TMO iphone 12 256.0
4 iphone 12 64 GB TMO iphone 12 64.0
5 iphone 12 other carrier iphone 12 NaN
</code></pre>
<p>What I am trying to achieve is to fill the NAs. The criterion is the minimum of Storage within each group (Family Name). I have tried to group by and fillna(min()), but it doesn't seem to work.</p>
<pre><code>#Tried
df["Storage"] = df.groupby("Family Name").apply(lambda x: x.fillna(x.min()))
</code></pre>
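<p>To spell out the fill values I expect (the per-family minimum over the non-missing Storage entries), computed here in plain Python:</p>

```python
import math

# (Family Name, Storage) pairs from the example DataFrame
records = [
    ("iphone 11", 63.0), ("iphone 11", 128.0), ("iphone 11", math.nan),
    ("iphone 12", 256.0), ("iphone 12", 64.0), ("iphone 12", math.nan),
]

# minimum per family, ignoring the missing values
mins = {}
for family, storage in records:
    if not math.isnan(storage):
        mins[family] = min(mins.get(family, math.inf), storage)

print(mins)  # {'iphone 11': 63.0, 'iphone 12': 64.0}
```

<p>These are the values that should replace the NaNs in each group.</p>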
<p>Here is final output expected.</p>
<pre><code>Expected Output:
CNN Family Name Storage
0 iphone 11 63 GB TMO iphone 11 63.0
1 iphone 11 128 GB ATT iphone 11 128.0
2 iphone 11 other carrier iphone 11 63.0
3 iphone 12 256 GB TMO iphone 12 256.0
4 iphone 12 64 GB TMO iphone 12 64.0
5 iphone 12 other carrier iphone 12 64.0
</code></pre>
|
<python><pandas><fillna>
|
2023-01-18 09:31:10
| 3
| 631
|
DSR
|
75,156,952
| 6,653,602
|
Migrate django database to existing one
|
<p>So I am using Django with a MySQL database (the most basic tables, like auth, user, admin, and a few custom models), but I need to migrate those tables with their data to an existing PostgreSQL database. The issue is that some tables already exist there. Should I create additional models in the models.py file for those existing tables so that migrations will be applied correctly?</p>
<p>I am trying to figure what are the steps that I should take to properly switch databases in the Django app. So far what I have done is saved the data from current (mysql) database:</p>
<pre><code>python manage.py dumpdata > dump.json
</code></pre>
<p>Now the next step I took was to change database in settings.py to postgres.</p>
<p>After this I save the current table schemas using <code>inspectdb</code>.
Here I want to ask: should the next steps be the following?</p>
<ul>
<li><p>Merge old and new models.py files.</p>
</li>
<li><p>Apply migrations</p>
</li>
<li><p>Add data from json dump file to the new database.</p>
</li>
</ul>
|
<python><django><postgresql><django-models>
|
2023-01-18 09:19:32
| 1
| 3,918
|
Alex T
|
75,156,877
| 3,004,698
|
Access CosmosDB Data from Azure App Service by using managed identity (Failed)
|
<p>A FastAPI-based API written in Python has been deployed as an Azure App Service. The API needs to read and write data from CosmosDB, and I attempted to use Managed Identity for this purpose, but encountered an error, stating <code>Unrecognized credential type</code></p>
<p>These are the key steps that I took towards that goal</p>
<p><strong>Step One</strong>: I used Terraform to configure the managed identity for Azure App Service, and assigned the 'contributor' role to the identity so that it can access and write data to CosmosDB. The role assignment was carried out in the file where the Azure App Service is provisioned.</p>
<pre><code> resource "azurerm_linux_web_app" "this" {
name = var.appname
location = var.location
resource_group_name = var.rg_name
service_plan_id = azurerm_service_plan.this.id
app_settings = {
"PROD" = false
"DOCKER_ENABLE_CI" = true
"DOCKER_REGISTRY_SERVER_URL" = data.azurerm_container_registry.this.login_server
"WEBSITE_HTTPLOGGING_RETENTION_DAYS" = "30"
"WEBSITE_ENABLE_APP_SERVICE_STORAGE" = false
}
lifecycle {
ignore_changes = [
app_settings["WEBSITE_HTTPLOGGING_RETENTION_DAYS"]
]
}
https_only = true
identity {
type = "SystemAssigned"
}
data "azurerm_cosmosdb_account" "this" {
name = var.cosmosdb_account_name
resource_group_name = var.cosmosdb_resource_group_name
}
// built-in role that allow the app-service to read and write to an Azure Cosmos DB
resource "azurerm_role_assignment" "cosmosdbContributor" {
scope = data.azurerm_cosmosdb_account.this.id
principal_id = azurerm_linux_web_app.this.identity.0.principal_id
role_definition_name = "Contributor"
}
</code></pre>
<p><strong>Step Two</strong>: I used the managed identity library to fetch the necessary credentials in the Python code.</p>
<pre><code>from azure.identity import ManagedIdentityCredential
from azure.cosmos.cosmos_client import CosmosClient
client = CosmosClient(get_endpoint(),credential=ManagedIdentityCredential())
client = self._get_or_create_client()
database = client.get_database_client(DB_NAME)
container = database.get_container_client(CONTAINER_NAME)
container.query_items(query)
</code></pre>
<p>I received the following error when running the code locally and from Azure (the error can be viewed from the Log stream of the Azure App Service):</p>
<pre><code>raise TypeError(
TypeError: Unrecognized credential type. Please supply the master key as str, or a dictionary or resource tokens, or a list of permissions.
</code></pre>
<p>Any help or discussion is welcome</p>
|
<python><terraform><azure-web-app-service><azure-cosmosdb><azure-managed-identity>
|
2023-01-18 09:12:29
| 1
| 5,152
|
SLN
|
75,156,728
| 3,672,883
|
Pyreverse dont take relationship when I type data like List[class]
|
<p>Hello, I have the following classes:</p>
<pre><code>class Page(BaseModel):
index_page:int
content:str
class Book(BaseModel):
name:str
pages: List[Page] = []
</code></pre>
<p>The problem is that when I execute pyreverse, it doesn't draw the relationship between Book and Page.</p>
<p>But if I change the class Book to</p>
<pre><code>class Book(BaseModel):
name:str
pages: Page
</code></pre>
<p>Pyreverse picks up and draws that relationship.</p>
<p>I can imagine that the problem is the <code>List</code> typing, but I don't know how to resolve it.</p>
<p>Is there any way to do it?</p>
<p>Thanks</p>
|
<python><uml><pylint><pyreverse>
|
2023-01-18 08:57:15
| 0
| 5,342
|
Tlaloc-ES
|
75,156,644
| 10,583,765
|
Django Ninja API schema circular import error
|
<p>I have <code>UserSchema</code>:</p>
<pre class="lang-py prettyprint-override"><code># users/models.py
class User(AbstractUser):
...
# users/schemas.py
from typing import List
from tasks.schemas import TaskSchema
class UserSchema(ModelSchema):
tasks: List[TaskSchema] = []
class Config:
model = User
...
</code></pre>
<p>...and <code>TaskSchema</code>:</p>
<pre class="lang-py prettyprint-override"><code># tasks/models.py
class Task(models.Model):
...
owner = models.ForeignKey(User, related_name="tasks", on_delete=models.CASCASE)
# tasks/schemas.py
from users.schemas import UserSchema
class TaskSchema(ModelSchema):
owner: UserSchema
class Config:
model = Task
...
</code></pre>
<p>But it throws:</p>
<pre class="lang-bash prettyprint-override"><code>ImportError: cannot import name 'TaskSchema' from partially initialized module 'tasks.schemas' (most likely due to a circular import) (/Users/myname/codes/django/ninja-api/tasks/schemas.py)
</code></pre>
<hr />
<p>What I want to do is that I want to fetch:</p>
<ol>
<li><code>GET /api/todos</code> - a list of tasks with related owners</li>
<li><code>GET /api/todos/{task_id}</code> - a task with owner</li>
<li><code>GET /api/users/{user_id}</code> - a user with a list of owned tasks</li>
</ol>
<hr />
<p>Versions:</p>
<pre><code>python = ^3.11
django = ^4.1.5
django-ninja = ^0.20.0
</code></pre>
|
<python><django><django-ninja>
|
2023-01-18 08:49:05
| 1
| 2,143
|
Swix
|
75,156,551
| 7,800,760
|
Finding which port a given server is running on from within a Python program
|
<p>I am developing a python 3.11 program which will run on a few different servers and needs to connect to the local Redis server. On each machine the latter might run on a different port, sometimes the default 6379 but not always.</p>
<p>On the commandline I can issue the following command which on both my Linux and MacOS servers works well:</p>
<pre><code>(base) bob@Roberts-Mac-mini ~ % sudo lsof -n -i -P | grep LISTEN | grep IPv4 | grep redis
redis-ser 60014 bob 8u IPv4 0x84cd01f56bf0ee21 0t0 TCP *:9001 (LISTEN)
</code></pre>
<p>What's a better way to get the port Redis is listening on, using Python functions/libraries?</p>
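<p>One direction I'm considering is probing candidate ports directly with the standard library (a sketch; it assumes the candidate port list is known, unlike the <code>lsof</code> pipeline, which discovers it):</p>

```python
import socket

def find_listening_port(host, candidates):
    """Return the first candidate port accepting a TCP connection, else None."""
    for port in candidates:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(0.2)
            # connect_ex returns 0 when the connection succeeds
            if s.connect_ex((host, port)) == 0:
                return port
    return None

# hypothetical candidate list -- 6379 is Redis's default, 9001 my custom port
print(find_listening_port("127.0.0.1", [6379, 9001]))
```

<p>I'm aware <code>psutil.net_connections()</code> would be closer to the <code>lsof</code> approach, but it is a third-party dependency and typically needs elevated privileges to see other users' sockets.</p>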
|
<python><linux><macos><network-programming><tcp>
|
2023-01-18 08:39:23
| 1
| 1,231
|
Robert Alexander
|
75,156,511
| 3,826,362
|
Pandas - sort_values by combining multiple columns
|
<p>Assume file <code>sizes.txt</code> with the following content:</p>
<pre><code>Sig Sta FP Method Size
10 10 100 array 108
10 10 100 csr-heur 130
10 10 100 list 220
10 10 15 array 108
10 10 15 csr-heur 45
10 10 15 list 50
10 10 25 array 108
10 10 25 csr-heur 62
10 10 25 list 70
10 10 50 array 108
10 10 50 csr-heur 95
8 4 100 array 40
8 4 100 csr-heur 50
8 4 100 list 78
8 4 25 array 40
8 4 25 csr-heur 26
8 4 25 list 30
8 4 50 array 40
8 4 50 csr-heur 36
8 4 50 list 46
8 4 75 array 40
8 4 75 csr-heur 43
8 4 75 list 62
</code></pre>
<p>And the following code:</p>
<pre class="lang-py prettyprint-override"><code>def m4():
df=pandas.read_csv('sizes.txt', sep=' ')
df=df.pivot(index=['Sig','Sta','FP'],columns='Method',values='Size')
vals = [v[0] * v[1] * v[2] / 100.0 for v in df.index]
df['vals'] = vals
df = df.sort_values(by='vals', kind='mergesort', axis='index')
print(df)
# df.drop('vals') # <- causes error if executed
m4()
</code></pre>
<p>Output is:</p>
<pre><code>Method array csr-heur list vals
Sig Sta FP
8 4 25 40.0 26.0 30.0 8.0
10 10 15 108.0 45.0 50.0 15.0
8 4 50 40.0 36.0 46.0 16.0
75 40.0 43.0 62.0 24.0
10 10 25 108.0 62.0 70.0 25.0
8 4 100 40.0 50.0 78.0 32.0
10 10 50 108.0 95.0 NaN 50.0
100 108.0 130.0 220.0 100.0
</code></pre>
<p>And this is exactly what I expect. But I also want to drop column <code>vals</code> after sorting. Unfortunately uncommenting this causes exception:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/home/jd/.local/lib/python3.10/site-packages/pandas/core/indexes/base.py", line 3803, in get_loc
return self._engine.get_loc(casted_key)
File "pandas/_libs/index.pyx", line 138, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index.pyx", line 146, in pandas._libs.index.IndexEngine.get_loc
File "pandas/_libs/index_class_helper.pxi", line 49, in pandas._libs.index.Int64Engine._check_type
KeyError: 'vals'
</code></pre>
<p>It is most probably caused by the fact that sorting isn't performed immediately but is delayed, so when the data is actually sorted, the column is already gone.</p>
<p>So there are two questions, how to either:</p>
<ol>
<li>Drop this <code>vals</code> column without causing error above, or</li>
<li>Sort DataFrame using index that is a tuple.</li>
</ol>
<p>I tried <code>sort_index</code> with custom <code>key</code>, but this causes my sort function to be called exactly three times, each time with next level of the key - this is clearly stated in documentation and perfectly acceptable, but at the same time absolutely useless in my case. Is there anything that can be done about it?</p>
|
<python><pandas><dataframe><sorting>
|
2023-01-18 08:35:18
| 0
| 1,121
|
JΔdrzej Dudkiewicz
|
75,156,434
| 10,829,044
|
Drop duplicate lists within a nested list value in a column
|
<p>I have a pandas dataframe with nested lists as values in a column as follows:</p>
<pre><code>sample_df = pd.DataFrame({'single_proj_name': [['jsfk'],['fhjk'],['ERRW'],['SJBAK']],
'single_item_list': [['ABC_123'],['DEF123'],['FAS324'],['HSJD123']],
'single_id':[[1234],[5678],[91011],[121314]],
'multi_proj_name':[['AAA','VVVV','SASD'],['QEWWQ','SFA','JKKK','fhjk'],['ERRW','TTTT'],['SJBAK','YYYY']],
'multi_item_list':[[['XYZAV','ADS23','ABC_123'],['XYZAV','ADS23','ABC_123']],['XYZAV','DEF123','ABC_123','SAJKF'],['QWER12','FAS324'],[['JFAJKA','HSJD123'],['JFAJKA','HSJD123']]],
'multi_id':[[[2167,2147,29481],[2167,2147,29481]],[2313,57567,2321,7898],[1123,8775],[[5237,43512],[5237,43512]]]})
</code></pre>
<p>As you can see above, in some columns, the same list is repeated twice or more.</p>
<p>So, I would like to remove the duplicated list and only retain one copy of the list.</p>
<p>I was trying something like the below:</p>
<pre><code>for i, (single, multi_item, multi_id) in enumerate(zip(sample_df['single_item_list'],sample_df['multi_item_list'],sample_df['multi_id'])):
if (any(isinstance(i, list) for i in multi_item)) == False:
for j, item_list in enumerate(multi_item):
if single[0] in item_list:
pos = item_list.index(single[0])
sample_df.at[i,'multi_item_list'] = [item_list]
sample_df.at[i,'multi_id'] = [multi_id[j]]
else:
print("under nested list")
for j, item_list in enumerate(zip(multi_item,multi_id)):
if single[0] in multi_item[j]:
pos = multi_item[j].index(single[0])
sample_df.at[i,'multi_item_list'][j] = single[0]
sample_df.at[i,'multi_id'][j] = multi_id[j][pos]
else:
sample_df.at[i,'multi_item_list'][j] = np.nan
sample_df.at[i,'multi_id'][j] = np.nan
</code></pre>
<p>But this assigns NA to the whole column value. I expect to remove that specific list (within a nested list).</p>
<p>I expect my output to be like as below:</p>
<p><a href="https://i.sstatic.net/1HNEz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1HNEz.png" alt="enter image description here" /></a></p>
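<p>For reference, a minimal sketch of dropping the duplicated inner lists (the helper name <code>dedupe_nested</code> is made up; tuples are used only to make the inner lists hashable):</p>

```python
def dedupe_nested(value):
    # Only treat the cell as nested if every element is itself a list.
    if isinstance(value, list) and value and all(isinstance(v, list) for v in value):
        # dict.fromkeys keeps the first occurrence and preserves order.
        unique = [list(t) for t in dict.fromkeys(map(tuple, value))]
        # Unwrap when a single distinct inner list remains.
        return unique[0] if len(unique) == 1 else unique
    return value

cell = [['XYZAV', 'ADS23', 'ABC_123'], ['XYZAV', 'ADS23', 'ABC_123']]
print(dedupe_nested(cell))  # ['XYZAV', 'ADS23', 'ABC_123']
```

<p>With pandas this can then be applied column-wise, e.g. <code>sample_df['multi_item_list'] = sample_df['multi_item_list'].apply(dedupe_nested)</code>, and likewise for <code>multi_id</code>.</p>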
|
<python><pandas><list><dataframe><group-by>
|
2023-01-18 08:28:41
| 2
| 7,793
|
The Great
|
75,156,360
| 19,920,392
|
Why Popen.communicate returns the satus of another subprocess.Popen?
|
<p>I have codes that starts Django server and React server at the same time.<br />
Here are the codes:<br />
<strong>back_server.py</strong></p>
<pre><code>import os
import subprocess
def run_server():
current_dir = os.getcwd()
target_dir = os.path.join(current_dir, 'marketBack/BackServer')
subprocess.Popen(['python', f'{target_dir}/manage.py', 'runserver'], universal_newlines=True)
</code></pre>
<p><strong>front_server.py</strong></p>
<pre><code>import os
import subprocess
def run_server():
current_dir = os.getcwd()
target_dir = os.path.join(current_dir, 'marketPlaying/market-playing')
os.chdir(target_dir)
process = subprocess.Popen(['npm', 'start'], universal_newlines=True, shell=True)
return process
</code></pre>
<p><strong>server.py</strong></p>
<pre><code>import back_server
import front_server
def run_server_all():
back_server.run_server()
front_server.run_server().communicate()
run_server_all()
</code></pre>
<p>When I check the status of the front_server using <code>subprocess.communicate()</code>, it also returns the output of the back_server. Doesn't <code>subprocess.Popen</code> create an independent process? Why does <code>front_server.run_server().communicate()</code> return the status of back_server too?</p>
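<p>For what it's worth, <code>Popen</code> does create independent processes; one likely explanation for the mingled output is that neither child's stdout is redirected, so both write to the same console. A minimal sketch (using <code>python -c</code> stand-ins for the two servers) showing that <code>communicate()</code> only returns the output of the process it is called on once stdout is captured:</p>

```python
import subprocess
import sys

# Two independent child processes; capturing stdout with PIPE keeps their
# output separate instead of letting both write to the parent's console.
p1 = subprocess.Popen([sys.executable, '-c', 'print("back")'],
                      stdout=subprocess.PIPE, universal_newlines=True)
p2 = subprocess.Popen([sys.executable, '-c', 'print("front")'],
                      stdout=subprocess.PIPE, universal_newlines=True)

out2, _ = p2.communicate()   # only p2's output
out1, _ = p1.communicate()   # only p1's output
print(out1.strip(), out2.strip())  # back front
```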
|
<python><reactjs><django><subprocess>
|
2023-01-18 08:21:18
| 0
| 327
|
gtj520
|
75,156,323
| 13,476,175
|
Pickle AttributeError: Can't get attribute 'EnsembleModel' on <module '__main__' from '<input>'>
|
<p>I've written a custom class to represent an ensemble model, and I want to pickle it for later usage. Here is how I construct the model and save the object using pickle:</p>
<pre class="lang-py prettyprint-override"><code>import pickle
from typing import Any, List
class EnsembleModel:
def __init__(self, estimators: List[Any]):
return ...
def fit(self, ...):
return ...
def predict(self, ...):
return ...
ensemble_model = EnsembleModel(estimators=[est_1, est_2, est_3], ...)
ensemble_model.fit(X_train, y_train)
with open("ensemble-model.mdl", "wb") as f:
pickle.dump(ensemble_model, f)
</code></pre>
<p>Now, I want to use this binary object <strong>"ensemble-model.mdl"</strong> in another code. I have a more general class, say <code>MyModel</code>, to load and represent such models (the reason behind having this class is not the point here). As you can see, this class is responsible for unpickling the EnsembleModel object:</p>
<pre class="lang-py prettyprint-override"><code>class MyModel:
model: Any = None
_probability: float = None
_predict_method: str = None
_predict_proba: Any = None
def __init__(self, model_path: str, model_name: str = 'MyModel', threshold: float = 0.5) -> None:
with open(model_path, 'rb') as f:
self.model = pickle.load(f)
...
</code></pre>
<p>I keep both classes, <code>EnsembleModel</code> and <code>MyModel</code>, in a single script in a separate package named <code>my_package</code> (<em>my_package/model.py</em>) and install this package on the virtual environment on which my main script should be run.</p>
<pre><code>my_package:
- __init__.py
- model.py
-> EnsembleModel
-> MyModel
- ...
</code></pre>
<p>This is my main script in which I need to initialize an instance of <code>MyModel</code> using the binary object <strong>"ensemble-model.mdl"</strong>:</p>
<p><em>main.py</em></p>
<pre class="lang-py prettyprint-override"><code>import asyncio
from my_package.model import EnsembleModel, MyModel
async def main():
model = MyModel(
model_name="My Ensemble Model",
model_path="ensemble-model.mdl",
threshold=0.5
)
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
loop.close()
</code></pre>
<p>I get this error when I run <em>main.py</em>:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\JetBrains\PyCharm Community Edition 2022.1.3\plugins\python-ce\helpers\pydev\pydevconsole.py", line 364, in runcode
coro = func()
File "<input>", line 2, in <module>
File "C:\Users\mehdi\AppData\Local\Programs\Python\Python38\lib\asyncio\base_events.py", line 616, in run_until_complete
return future.result()
File "D:\my_app\my_pipeline\__init__.py", line 108, in main
MyModel(
File "D:\my_app\venv\lib\site-packages\my_package\model.py", line 92, in __init__
self.model = pickle.load(f)
AttributeError: Can't get attribute 'EnsembleModel' on <module '__main__' from '<input>'>
</code></pre>
<p>It seems that the error is due to the pickling/unpickling process, but I'm not sure how to fix it. Any ideas on how I can fix this error?</p>
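<p>pickle stores a class by its module-qualified name, so an object dumped while <code>EnsembleModel</code> lived in <code>__main__</code> can only be loaded where that name resolves again. The clean fix is to re-dump the model with the class imported from <code>my_package.model</code>; a sketch of the alias workaround for already-pickled files (with a minimal stand-in class) looks like this:</p>

```python
import pickle
import sys

class EnsembleModel:
    # Minimal stand-in; the real class lives in my_package.model.
    def __init__(self, estimators):
        self.estimators = estimators

# Workaround for files pickled from a script run as __main__: make the class
# visible under __main__ before calling pickle.load/loads.
sys.modules['__main__'].EnsembleModel = EnsembleModel

data = pickle.dumps(EnsembleModel(estimators=[1, 2, 3]))
restored = pickle.loads(data)
print(restored.estimators)
```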
|
<python><python-asyncio><pickle>
|
2023-01-18 08:18:10
| 1
| 875
|
frisko
|
75,156,182
| 11,520,576
|
Is there any efficient way both getting 'max' and 'argmax' with a multi-dimensional array
|
<p>I have an array <code>a</code> with shape (18,4096,4096).</p>
<p>And I want to do like these:</p>
<pre class="lang-py prettyprint-override"><code>max_value = np.max(a,0)
index = np.argmax(a,0)
</code></pre>
<p><code>max_value</code> and <code>index</code> are both arrays with shape (4096, 4096), and I think calling both <code>np.max</code> and <code>np.argmax</code> does some redundant work.</p>
<p>And I know if <code>a</code> is a 1D array, I can do like this:</p>
<pre class="lang-py prettyprint-override"><code>index = np.argmax(a,0)
max_value = a[index]
</code></pre>
<p>But I can't do this when <code>a</code> is a 3D array. Is there any efficient way of doing this?</p>
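<p>One way for the 3D case is <code>np.take_along_axis</code>, which reuses the <code>argmax</code> result instead of doing a second full scan (a small shape stands in for the real (18, 4096, 4096) array; whether this actually beats calling <code>np.max</code> again is worth benchmarking):</p>

```python
import numpy as np

a = np.arange(24).reshape(2, 3, 4)  # small stand-in array

index = np.argmax(a, axis=0)  # shape (3, 4)
# Insert a length-1 axis so the index broadcasts along axis 0, then squeeze it.
max_value = np.take_along_axis(a, index[np.newaxis, ...], axis=0)[0]

print(np.array_equal(max_value, np.max(a, axis=0)))  # True
```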
|
<python><numpy>
|
2023-01-18 08:03:52
| 1
| 327
|
YD Zhou
|
75,156,148
| 19,500,571
|
Plotly Dash (Python): Data is loaded twice upon start
|
<p>I'm building a dashboard in Dash, where I want to visualize a dataset that I load at the start of the dashboard. As this dataset is to be used by all methods in the dashboard, I keep it in a singleton class.</p>
<p>The issue is though that the data is loaded twice when I start the dashboard ("generated" is shown twice in the console when I start the dashboard). Here is a MWE showing the problem:</p>
<p><strong>app.py</strong></p>
<pre><code>from dash import Dash, dcc, html
from callbacks import get_callbacks
from data import MyData
app = Dash(__name__, suppress_callback_exceptions=True)
get_callbacks(app) #load all callbacks
m = MyData() #load data at start of the dashboard -- somehow, this happens twice?
if __name__ == '__main__':
app.layout = html.Div([dcc.Location(id="url")])
app.run_server(debug=True)
</code></pre>
<p><strong>data.py</strong></p>
<pre><code>import pandas as pd
def generate_df():
print("generated")
return pd.DataFrame({'a':[1,2,3]})
class MyData:
def __new__(cls):
if not hasattr(cls, 'instance'):
cls.instance = super(MyData, cls).__new__(cls)
return cls.instance
def __init__(self):
self.df = generate_df()
</code></pre>
<p><strong>callbacks.py</strong></p>
<pre><code>def get_callbacks(app):
pass
</code></pre>
<p>Where does this additional load of data happen?</p>
|
<python><plotly-dash><dashboard>
|
2023-01-18 08:00:48
| 0
| 469
|
TylerD
|
75,155,799
| 15,781,591
|
How to select dataframe in dictionary of dataframes that contains a column with specific substring
|
<p>I have a dictionary of dataframes <code>df_dict</code>. I then have a substring "blue". I want to identify the name of the dataframe in my dictionary of dataframes that has at least one column that has a name containing the substring "blue".</p>
<p>I am thinking of trying something like:</p>
<pre><code>for df in df_dict:
if df.columns.contains('blue'):
return df
else:
pass
</code></pre>
<p>However, I am not sure if a for loop is necessary here. How can I find the name of the dataframe I am looking for in my dictionary of dataframes?</p>
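<p>Iterating a dict yields its keys, so <code>df.columns</code> in the loop above would run on a string. A minimal sketch collecting every matching name with a comprehension (the sample frames here are hypothetical):</p>

```python
import pandas as pd

df_dict = {
    "a": pd.DataFrame(columns=["blue_max", "x"]),  # hypothetical sample data
    "b": pd.DataFrame(columns=["red_min"]),
}

# .items() gives (name, DataFrame) pairs; collect every name whose frame
# has at least one column containing the substring.
matches = [name for name, df in df_dict.items()
           if any("blue" in col for col in df.columns)]
print(matches)  # ['a']
```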
|
<python><pandas><dataframe><dictionary><substring>
|
2023-01-18 07:22:26
| 1
| 641
|
LostinSpatialAnalysis
|
75,155,659
| 17,267,064
|
How to sort list of dictionaries by value without using built in functions via Python?
|
<p>I wish to sort the below list of dictionaries by the <code>Age</code> key in ascending order without using any built-in sorting functions.</p>
<pre><code>[{'Name': 'Alpha', 'Age': 14}, {'Name': 'Bravo', 'Age': 21}, {'Name': 'Charlie', 'Age': 12}]
</code></pre>
<p>I wish to have below output.</p>
<pre><code>[{'Name': 'Charlie', 'Age': 12}, {'Name': 'Alpha', 'Age': 14}, {'Name': 'Bravo', 'Age': 21}]
</code></pre>
<p>I have done it via the <code>sorted</code> function but am unable to find a way to do it without it. I believe this should be done via the bubble sort method, but I am unable to implement it.</p>
<p>Many thanks in advance.</p>
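<p>A minimal bubble-sort sketch (it still uses <code>len</code> and <code>range</code>, but avoids <code>sorted</code>/<code>list.sort</code>):</p>

```python
people = [{'Name': 'Alpha', 'Age': 14},
          {'Name': 'Bravo', 'Age': 21},
          {'Name': 'Charlie', 'Age': 12}]

# Bubble sort: repeatedly swap adjacent items that are out of order; after
# pass i, the i largest items are settled at the end of the list.
n = len(people)
for i in range(n - 1):
    for j in range(n - 1 - i):
        if people[j]['Age'] > people[j + 1]['Age']:
            people[j], people[j + 1] = people[j + 1], people[j]

print(people)
# [{'Name': 'Charlie', 'Age': 12}, {'Name': 'Alpha', 'Age': 14}, {'Name': 'Bravo', 'Age': 21}]
```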
|
<python><dictionary>
|
2023-01-18 07:04:08
| 0
| 346
|
Mohit Aswani
|
75,155,652
| 14,045,537
|
Pandas replace values if value isin dictionary of key and values as list
|
<p>I do realize this has already been addressed here (e.g., <a href="https://stackoverflow.com/q/72185441/14045537">pandas: replace column value with keys and values in a dictionary of list values</a>, <a href="https://stackoverflow.com/q/46432315/14045537">Replace column values by dictionary keys if they are in dictionary values (list)</a>). Nevertheless, I hope this question is different.</p>
<p>I want to replace values in the <code>dataframe</code> column <code>Col</code> with the <code>key</code> if the column value is present in the <code>list values</code> of the <code>dictionary</code></p>
<p><strong>Sample dataframe:</strong></p>
<pre><code>import pandas as pd
df = pd.DataFrame({"Col": ["Apple", "Appy", "aple","Banana", "Banaa", "Banananan", "Carrot", "Mango", "Pineapple"]})
</code></pre>
<pre><code>remap_dict = {"Apple": ["Appy", "aple"],
"Banana": ["Banaa", "Banananan"]}
</code></pre>
<p>pandas <code>replace</code> doesn't support a dictionary whose values are lists:</p>
<pre><code>df["Col"].replace(remap_dict, regex=True)
</code></pre>
<p>Is there a way to replace values in the <code>pandas</code> dataframe with the <code>dictionary</code> keys if the value is in the corresponding list?</p>
<p><strong>Desired Output:</strong></p>
<pre><code> Col
0 Apple
1 Apple
2 Apple
3 Banana
4 Banana
5 Banana
6 Carrot
7 Mango
8 Pineapple
</code></pre>
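<p>One common approach is to invert the mapping into a flat <code>{value: key}</code> dict, which <code>Series.replace</code> does accept; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({"Col": ["Apple", "Appy", "aple", "Banana", "Banaa",
                           "Banananan", "Carrot", "Mango", "Pineapple"]})
remap_dict = {"Apple": ["Appy", "aple"],
              "Banana": ["Banaa", "Banananan"]}

# Invert {key: [values]} into {value: key}; unmatched values stay as-is.
rev = {v: k for k, values in remap_dict.items() for v in values}
df["Col"] = df["Col"].replace(rev)
print(df["Col"].tolist())
```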
|
<python><pandas><dataframe><dictionary>
|
2023-01-18 07:02:55
| 1
| 3,025
|
Ailurophile
|
75,155,423
| 19,106,705
|
'OMP: Error #179' when using hugging face BERT
|
<p>I simply tried running this code.</p>
<pre class="lang-py prettyprint-override"><code>from transformers import BertForSequenceClassification, BertTokenizer
</code></pre>
<p>with this command. For 'GPU_ID' I used my GPU UUID, which wasn't wrong.</p>
<pre><code>CUDA_VISIBLE_DEVICES='GPU_ID' python3 MNLI.py
</code></pre>
<p>Sometimes it works fine but sometimes this error pops up.</p>
<pre><code>OMP: Error #179: Function Can't open SHM failed:
OMP: System error #0: Success
Aborted (core dumped)
</code></pre>
<p>How can I fix it? Any help is appreciated.</p>
|
<python><pytorch><gpu><huggingface>
|
2023-01-18 06:31:53
| 0
| 870
|
core_not_dumped
|
75,155,225
| 3,031,069
|
Can SQLAlchemy express one-to-many relations via association tables?
|
<p>I am trying to access an existing DB with SQLAlchemy. (Thus, I cannot easily change the schema or the existing data. I am also not sure why the schema is structured the way it is.)</p>
<p>Consider the following pattern:</p>
<pre><code>association_table = Table(
"association_table",
Base.metadata,
Column("child_id", ForeignKey("child_table.id"), primary_key=True),
Column("parent_id", ForeignKey("parent_table.id")),
)
class Parent(Base):
__tablename__ = "parent_table"
id = Column(Integer, primary_key=True)
children = relationship("Child", backref="parent", secondary=association_table)
class Child(Base):
__tablename__ = "child_table"
id = Column(Integer, primary_key=True)
</code></pre>
<p>In this example, note how there is a primary key in the association table. This should model a one-to-many relationship (one parent per child, many children per parent), even though the association table would not be necessary for that purpose.</p>
<p>However, the documentation does not mention this at all: <code>secondary</code> is only ever associated with many-to-many relationships, and if I translate this pattern to real-world code, the <code>parent</code> field is indeed created as a collection.</p>
<p>Is there a way to convince SQLAlchemy to read a one-to-many relationship from an association table?</p>
|
<python><sql><sqlalchemy>
|
2023-01-18 06:06:48
| 1
| 3,597
|
choeger
|
75,155,170
| 15,781,591
|
How to filter for substring in list comprehension in python
|
<p>I have a dictionary of dataframes. I want to create a list of the names of all of the dataframes in this dictionary that have the substring "blue" in them. No dataframes in the dictionary of dataframes contain a column simply called "blue". It is some variation of "blue", including: "blue_max", "blue_min", blue_average", etc. The point is that "blue" is a substring in the column names of all the dataframes in my dictionary of dataframes.</p>
<p>And so, to create a list of all the dataframes in my dictionary of dataframes that contain a column that exactly is "blue_max", I run the following using a list comprehension:</p>
<pre><code>df_list = [x for x in df_dictionary if "blue_max" in df_dictionary[x].columns]
print(df_list)
</code></pre>
<p>And this prints a list of all dataframe names in my dictionary of dataframes that have a column called exactly "blue_max".</p>
<p>However, this is not what I want. I want a list of all the dataframe names in my dictionary of dataframes that contain at least one column whose name contains the substring "blue". And so, if one of these dataframes has a column called "blue_max", or "blue_min", or "blue_average" in it, then I want the name of that dataframe added to my list "df_list".</p>
<p>However, when I try to find those dataframes that have a column containing the substring "blue" running:</p>
<pre><code>df_list = [x for x in df_dictionary if "blue" in df_dictionary[x].columns]
print(df_list)
</code></pre>
<p>I just get an empty list: <code>[]</code>.</p>
<p>This is not what I want. I know I have dataframes in my dictionary of dataframes that have columns with names (headers) containing the substring "blue", because I know there are the columns "blue_max", "blue_min", and "blue_average". How can I fix my code so that it looks for the substring "blue" in the column names of my dataframes, rather than only columns named exactly "blue"?</p>
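<p>A minimal sketch of the substring test, using <code>Index.str.contains</code> per dataframe (the sample frames are hypothetical):</p>

```python
import pandas as pd

df_dictionary = {
    "first": pd.DataFrame(columns=["blue_max", "blue_min"]),  # hypothetical
    "second": pd.DataFrame(columns=["red_max"]),
}

# `"blue" in df.columns` tests for an exact column name; test each column
# name for the substring instead.
df_list = [x for x in df_dictionary
           if df_dictionary[x].columns.str.contains("blue").any()]
print(df_list)  # ['first']
```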
|
<python><pandas><list><substring><list-comprehension>
|
2023-01-18 05:58:40
| 1
| 641
|
LostinSpatialAnalysis
|
75,155,155
| 10,969,548
|
Configure Python interpreter for notebook in vscode to be the same as the interpreter for a conda environment
|
<p>From my understanding of this <a href="https://docs.anaconda.com/anaconda/user-guide/tasks/integration/python-path/" rel="nofollow noreferrer">source</a> and what a python interpreter is, if I have an environment like this:</p>
<pre><code>$ conda activate myenv
(myenv) $ python3
> import pandas as pd
>
</code></pre>
<p>With no errors. And <code>which python3</code> returns
<code>/AbsPath/.conda/envs/myenv/bin/python3</code></p>
<p>Then in my Notebook in vscode I should be able to select this interpreter in the command pallet and not have an error importing pandas.</p>
<p>So open <code>my-notebook.ipynb</code>. Open command pallet and select <code>Python: Select Interpreter</code>, click <code>+ Enter interpreter path</code> and then paste <code>/AbsPath/.conda/envs/myenv/bin/python3</code> then I should be able to execute a cell that contains</p>
<p>(The <a href="https://code.visualstudio.com/docs/datascience/jupyter-notebooks#_setting-up-your-environment" rel="nofollow noreferrer">vscode website</a> says I should be able to set the interpreter for Jupyter Notebook environments this way.)</p>
<pre><code>import pandas as pd
</code></pre>
<p>And not have an error thrown. But I get the error that I have no module named Pandas.</p>
<p>Am I doing something incorrectly or is this unexpected weird behavior?</p>
<p>I don't think it's actually using the correct interpreter because</p>
<pre><code>import sys
print(sys.executable)
</code></pre>
<p>Prints <code>/bin/python3</code> from my notebook while this same command from the command line gives me <code>/AbsPath/.conda/envs/myenv/bin/python3</code>.</p>
<p>I tried</p>
<pre><code>import sys
sys.executable = '/AbsPath/.conda/envs/myenv/bin/python3'
</code></pre>
<p>To override it but that didn't work (unsurprisingly).</p>
|
<python><visual-studio-code><jupyter-notebook>
|
2023-01-18 05:56:49
| 1
| 2,076
|
financial_physician
|
75,155,030
| 4,718,335
|
Python checking datetime overlapping in list of dictionaries
|
<p>I've a list of dictionaries in the format below</p>
<pre><code>[{'end': '19:00', 'start': '10:00'}, {'end': '23:00', 'start': '12:15'}, {'end': '12:00', 'start': '09:15'}]
</code></pre>
<p>and want to check whether the time intervals are overlapping or not</p>
<p>What I'm planning to do:</p>
<ol>
<li>First want to sort the list of dictionary based on start value</li>
<li>Comparing second item start value with first item end value</li>
</ol>
<p>So, my current approach is:</p>
<pre><code>for elem,next_elem in zip(sorted_slot, sorted_slot[1:]):
print(datetime.strptime(elem['end'], '%H:%M') > datetime.strptime(next_elem['start'], '%H:%M'))
</code></pre>
<p>Am I in right approach?</p>
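<p>The approach of sorting by start time and comparing adjacent slots is sound; a minimal self-contained sketch of it:</p>

```python
from datetime import datetime

slots = [{'end': '19:00', 'start': '10:00'},
         {'end': '23:00', 'start': '12:15'},
         {'end': '12:00', 'start': '09:15'}]

def parse(t):
    return datetime.strptime(t, '%H:%M')

# After sorting by start time, any overlap must show up between adjacent slots.
sorted_slots = sorted(slots, key=lambda s: parse(s['start']))
overlaps = any(parse(a['end']) > parse(b['start'])
               for a, b in zip(sorted_slots, sorted_slots[1:]))
print(overlaps)  # True
```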
|
<python>
|
2023-01-18 05:36:21
| 3
| 1,864
|
Russell
|
75,154,979
| 9,303,844
|
Longest consecutive 1 from consecutive columns in Pyspark dataframe
|
<p>Suppose I have a PySpark data frame as follows:</p>
<pre><code>b1 b2 b3 b4 b5 b6
1 1 1 0 1 1
test_df = spark.createDataFrame([
(1,1,1,0,1,1)
], ("b1", "b2","b3","b4","b5","b6"))
</code></pre>
<p>Here, the length of the longest run of consecutive 1s is 3. I have a large dataset of 1M rows and want to calculate this for each row.</p>
<p>So, my initial idea was to make a new column that concatenates each of the value of the column. So, I follow this way. At first, I concatenated all of the column values to a new value:</p>
<pre><code>test_df = test_df.withColumn('Ind', F.concat(*cols))
</code></pre>
<p>I got the dataframe like this:</p>
<pre><code>b1 b2 b3 b4 b5 b6 ind
1 1 1 0 1 1 '111011'
</code></pre>
<p>Then I create a separate UDF:</p>
<pre><code>def findMaxConsecutiveOnes(X) -> int:
nums = [int(j) for a,j in enumerate(X)]
count = 0
maxCount = 0
for idx, num in enumerate(nums):
if num == 1:
count += 1
if num == 0 or idx == len(nums) - 1:
maxCount = max(maxCount, count)
count = 0
return maxCount
</code></pre>
<p>then created a UDF:</p>
<pre><code>maxcon_udf = udf(lambda x: findMaxConsecutiveOnes(x))
</code></pre>
<p>and finally,</p>
<pre><code>test_df = test_df.withColumn('final', maxcon_udf('ind'))
</code></pre>
<p>However, this shows an error. Can someone please help me solve this problem?</p>
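<p>Without the exact error text it is hard to be sure, but the pure-Python helper can at least be simplified and verified outside Spark; declaring an explicit return type when registering the UDF (shown only in comments, untested here) is also a common fix:</p>

```python
def find_max_consecutive_ones(s: str) -> int:
    # Walk the string once, tracking the current and best run of '1's.
    count = max_count = 0
    for ch in s:
        if ch == '1':
            count += 1
            max_count = max(max_count, count)
        else:
            count = 0
    return max_count

print(find_max_consecutive_ones('111011'))  # 3

# In PySpark, registration with an explicit return type would look like:
# from pyspark.sql.types import IntegerType
# maxcon_udf = F.udf(find_max_consecutive_ones, IntegerType())
# test_df = test_df.withColumn('final', maxcon_udf('ind'))
```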
|
<python><dataframe><apache-spark><pyspark>
|
2023-01-18 05:24:08
| 2
| 493
|
Lzz0
|
75,154,834
| 10,620,003
|
Bar Plot horizontally with some setting in python
|
<p>I have a dataset and I want to do the bar plot horizontally in python. Here is the code which I use:</p>
<pre><code>rating = [8, 4, 5, 6,7, 8, 9, 5]
objects = ('h', 'b', 'c', 'd', 'e', 'f', 'g', 'a')
y_pos = np.arange(len(objects))
plt.barh(y_pos, rating, align='center', alpha=0.5)
plt.yticks(y_pos, objects)
#plt.xlabel('Usage')
#plt.title('Programming language usage')
plt.show()
</code></pre>
<p>It works, however the thing that I want, I want to change the plot like this image:</p>
<p><a href="https://i.sstatic.net/JDzTD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JDzTD.jpg" alt="enter image description here" /></a></p>
<p>I want to change the topmost bar to red, and put the yticks in a column like the image. Could you please help me with that? Thank you.</p>
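<p>A minimal sketch: sort ascending so the largest value ends up as the topmost bar, then give just that bar a red color (the Agg backend and output file name are only so this runs headless; the finer styling in the image is left aside):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

rating = [8, 4, 5, 6, 7, 8, 9, 5]
objects = ('h', 'b', 'c', 'd', 'e', 'f', 'g', 'a')

# Sort ascending: with barh, the last y position is drawn at the top.
order = np.argsort(rating)
sorted_rating = [rating[i] for i in order]
sorted_objects = [objects[i] for i in order]
colors = ['tab:blue'] * len(sorted_rating)
colors[-1] = 'red'  # topmost (largest) bar

y_pos = np.arange(len(sorted_objects))
plt.barh(y_pos, sorted_rating, color=colors, align='center', alpha=0.5)
plt.yticks(y_pos, sorted_objects)
plt.savefig('bars.png')
```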
|
<python><matplotlib>
|
2023-01-18 04:55:33
| 1
| 730
|
Sadcow
|
75,154,817
| 8,522,675
|
Sqlalchemy How to count rows per month using sqlalchemy select()?
|
<p>I have this model:</p>
<pre><code>from sqlalchemy import Table, Column, BIGINT, VARCHAR, TIMESTAMP
Table(
'person',
metadata,
Column('id', BIGINT, nullable=False, primary_key=True),
Column('name', VARCHAR(300)),
Column('user_created', TIMESTAMP),
Column('user_deleted', TIMESTAMP)
)
</code></pre>
<p>I would like to make a select that show how many user created per month. So with some research I came up with this query.</p>
<pre><code>def count_users_by_month(user_table: Table):
query = select(
user_table.c.id,
func.count(user_table.c.user_created).label('count'),
).where(and_(table.c.user_deleted.is_(None)))\
.group_by(
user_table.c.id,
user_table.c.created_at,
func.date_trunc('month', table.c.user_created)
)
return query
</code></pre>
<p>This query is executed in another method by <code>conn.execute(query)</code>.<br />
The result I'm getting is:</p>
<pre><code>(74, datetime.datetime(2020, 3, 2, 10, 19, 39), 1)
(75, datetime.datetime(2020, 3, 2, 10, 21, 24), 1)
(102, datetime.datetime(2020, 3, 4, 18, 46, 49), 1)
(141, datetime.datetime(2020, 3, 6, 16, 12, 6), 1)
(443, datetime.datetime(2020, 4, 1, 11, 37, 29), 1)
(450, datetime.datetime(2020, 4, 1, 14, 16, 53), 1)
(487, datetime.datetime(2020, 4, 6, 10, 42, 23), 1)
(509, datetime.datetime(2020, 4, 8, 10, 51, 55), 1)
</code></pre>
<p>So it kinda grouped all my rows by month; in this case I can see 4 rows grouped under March and 4 rows under April, with a count of 1 on each one. That is not really what I would like to have. I would like to know the total number of rows by month, like this:</p>
<pre><code>(datetime.datetime(2020, 3), 4),
(datetime.datetime(2020, 4), 4),
...
...
(datetime.datetime(2022, 12), 10),
</code></pre>
<p>Before anyone asks, if I remove the <code>user_table.c.id</code> or <code>user_table.c.created_at</code> from my <code>group_by</code> I get an error saying <code>column "person.id" must appear in the GROUP BY clause or be used in an aggregate function'</code>.</p>
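<p>A sketch of grouping only by the truncated month (assuming SQLAlchemy 1.4+ <code>select()</code> style; leaving <code>id</code> and <code>user_created</code> out of the select list is what removes the GROUP BY error):</p>

```python
from sqlalchemy import (Table, Column, BIGINT, VARCHAR, TIMESTAMP,
                        MetaData, func, select)

metadata = MetaData()
person = Table(
    'person', metadata,
    Column('id', BIGINT, nullable=False, primary_key=True),
    Column('name', VARCHAR(300)),
    Column('user_created', TIMESTAMP),
    Column('user_deleted', TIMESTAMP),
)

# Every non-aggregated selected column must appear in GROUP BY, so select
# only the truncated month plus the aggregate.
month = func.date_trunc('month', person.c.user_created).label('month')
query = (
    select(month, func.count().label('count'))
    .where(person.c.user_deleted.is_(None))
    .group_by(month)
    .order_by(month)
)
print(query)  # compiled SQL; executing yields (month, count) rows
```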
|
<python><sqlalchemy>
|
2023-01-18 04:51:34
| 1
| 657
|
RonanFelipe
|
75,154,736
| 257,924
|
How to properly initialize win32com.client.constants?
|
<p>I'm writing a simple email filter to work upon Outlook incoming messages on Windows 10, and seek to code it up in Python using the <code>win32com</code> library, under Anaconda. I also seek to avoid using magic numbers for the "Inbox" as I see in other examples, and would rather use constants that <em>should</em> be defined under <code>win32com.client.constants</code>. But I'm running into simple errors that are surprising:</p>
<p>So, I concocted the following simple code, loosely based upon <a href="https://stackoverflow.com/a/65800130/257924">https://stackoverflow.com/a/65800130/257924</a> :</p>
<pre class="lang-py prettyprint-override"><code>import sys
import win32com.client
try:
outlookApp = win32com.client.Dispatch("Outlook.Application")
except:
print("ERROR: Unable to load Outlook")
sys.exit(1)
outlook = outlookApp.GetNamespace("MAPI")
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
print("ofContacts", type(ofContacts))
sys.exit(0)
</code></pre>
<p>Running that under an Anaconda-based installer (Anaconda3 2022.10 (Python 3.9.13 64-bit)) on Windows 10 errors out with:</p>
<pre><code>(base) c:\Temp>python testing.py
Traceback (most recent call last):
File "c:\Temp\testing.py", line 11, in <module>
ofContacts = outlook.GetDefaultFolder(win32com.client.constants.olFolderContacts)
File "C:\Users\brentg\Anaconda3\lib\site-packages\win32com\client\__init__.py", line 231, in __getattr__
raise AttributeError(a)
AttributeError: olFolderContacts
</code></pre>
<p>Further debugging indicates that the <code>__dicts__</code> property is referenced by the <code>__init__.py</code> in the error message above. See excerpt of that class below. For some reason, that <code>__dicts__</code> is an empty list:</p>
<pre class="lang-py prettyprint-override"><code>class Constants:
"""A container for generated COM constants."""
def __init__(self):
self.__dicts__ = [] # A list of dictionaries
def __getattr__(self, a):
for d in self.__dicts__:
if a in d:
return d[a]
raise AttributeError(a)
# And create an instance.
constants = Constants()
</code></pre>
<p>What is required to have <code>win32com</code> properly initialize that <code>constants</code> object?</p>
<p>The timestamp on the <code>__init__.py</code> file shows 10/10/2021 in case that is relevant.</p>
|
<python><outlook><constants><win32com><office-automation>
|
2023-01-18 04:37:37
| 1
| 2,960
|
bgoodr
|
75,154,539
| 4,755,954
|
Auto-update a csv based with edited grid in streamlit-aggrid
|
<p>I am new to <code>Streamlit</code> and <code>streamlit-aggrid</code> so I am trying to figure this simple case out.</p>
<h3># Goal:</h3>
<ol>
<li>Load a table from a <code>csv</code> as <code>df</code></li>
<li>Display it in <code>streamlit</code></li>
<li>Edit some cells using <code>streamlit-aggrid</code></li>
<li>Overwrite the <code>df</code> with the edits <strong>without any button click</strong></li>
<li>Save it back to the <code>csv</code></li>
<li>Continue with step 1</li>
</ol>
<h3># Issue:</h3>
<p>I was able to do this process and the first edit works like a charm, but it seems I need an additional trigger to reload the dataframe, which is where the code is not working as intended. It does update the dataframe, but takes a page refresh or editing a cell 2 times (basically an additional trigger) to save the update to the csv.</p>
<p>My question is, is there a way to make such edits on the fly, directly to the csv, while loading the updated csv back to the agGrid object? And to do this without using a button click as a trigger to refresh / update / save the data?</p>
<h3># My code:</h3>
<p>Here is my sample working code, which has the issue at hand.</p>
<pre><code>import streamlit as st
import pandas as pd
from st_aggrid import AgGrid
#df = pd.DataFrame({"col1": [1, 2, 3], "col2": [4, 5, 6]}) # Original csv
df = pd.read_csv('sample.csv')
grid_options = {
"columnDefs": [
{
"headerName": "col1",
"field": "col1",
"editable": True,
},
{
"headerName": "col2",
"field": "col2",
"editable": False,
},
],
}
grid_return = AgGrid(df, grid_options)
df = grid_return["data"]
df.to_csv('sample.csv', index=False) #Overwrite sample.csv
## CHECK IF SAMPLE CSV IS UPDATED ##
df = pd.read_csv('sample.csv')
st.write(df)
</code></pre>
<p><a href="https://i.sstatic.net/D96fn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D96fn.png" alt="enter image description here" /></a></p>
|
<python><ag-grid><streamlit>
|
2023-01-18 03:54:05
| 1
| 19,377
|
Akshay Sehgal
|
75,154,473
| 17,560,347
|
How to set negative part to zero in a MatrixSymbol?
|
<p>I want to set negative part to zero in a MatrixSymbol.</p>
<pre class="lang-py prettyprint-override"><code>from sympy import MatrixSymbol
x = MatrixSymbol('x', 10, 10)
_ = x < 0 # raise TypeError: Invalid comparison of non-real x
# what I want to do:
x[x < 0] = 0
</code></pre>
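<p>One possible route (an assumption, not necessarily the canonical SymPy answer): a <code>MatrixSymbol</code> has no concrete entries, so a boolean mask like <code>x < 0</code> cannot work, but converting to an explicit matrix allows an elementwise symbolic <code>Max</code> with zero:</p>

```python
from sympy import MatrixSymbol, Max

x = MatrixSymbol('x', 2, 2)  # small size for the sketch

# as_explicit() yields a concrete Matrix of symbolic entries x[i, j];
# applyfunc then clamps each entry at zero symbolically.
clipped = x.as_explicit().applyfunc(lambda e: Max(e, 0))
print(clipped[0, 0])
```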
|
<python><sympy><symbolic-math>
|
2023-01-18 03:38:28
| 1
| 561
|
ε΄ζι
|
75,154,238
| 3,411,191
|
FastAPI: Task was destroyed but it is pending
|
<p>I've been fighting asyncio + FastAPI for the last 48 hours. Any help would be appreciated. I'm registering an <code>on_gift</code> handler that parses gifts I care about and appends them to <code>gift_queue</code>. I'm then pulling them out in a while loop later in the code to send to the websocket.</p>
<p>The goal is to block the main thread until the client disconnects. Then we can do cleanup. It appears that the cleanup is being run, but I'm seeing this error after the first request:</p>
<pre><code>Task was destroyed but it is pending!
task: <Task pending name='Task-12' coro=<WebSocketProtocol13._receive_frame_loop() running at /Users/zane/miniconda3/lib/python3.9/site-packages/tornado/websocket.py:1106> wait_for=<Future pending cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/zane/miniconda3/lib/python3.9/site-packages/tornado/ioloop.py:687, <TaskWakeupMethWrapper object at 0x7f9558a53af0>()]> cb=[IOLoop.add_future.<locals>.<lambda>() at /Users/zane/miniconda3/lib/python3.9/site-packages/tornado/ioloop.py:687]>
</code></pre>
<pre class="lang-py prettyprint-override"><code>@router.websocket('/donations')
async def scan_donations(websocket: WebSocket, username):
await websocket.accept()
client = None
gift_queue = []
try:
client = TikTokLiveClient(unique_id=username,
sign_api_key=api_key)
def on_gift(event: GiftEvent):
gift_cost = 1
print('[DEBUG] Could not find gift cost for {}. Using 1.'.format(
event.gift.extended_gift.name))
try:
if event.gift.streakable:
if not event.gift.streaking:
if gift_cost is not None:
gift_total_cost = gift_cost * event.gift.repeat_count
# if it cost less than 99 coins, skip it
if gift_total_cost < 99:
return
gift_queue.append(json.dumps({
"type": 'tiktok_gift',
"data": dict(name=event.user.nickname)
}))
else:
if gift_cost < 99:
return
gift_queue.append(json.dumps({
'type': 'tiktok_gift',
'data': dict(name=event.user.nickname)
}))
except:
print('[DEBUG] Could not parse gift event: {}'.format(event))
client.add_listener('gift', on_gift)
await client.start()
while True:
if gift_queue:
gift = gift_queue.pop(0)
await websocket.send_text(gift)
del gift
else:
try:
await asyncio.wait_for(websocket.receive_text(), timeout=1)
except asyncio.TimeoutError:
continue
except ConnectionClosedOK:
pass
except ConnectionClosedError as e:
print("[ERROR] TikTok: ConnectionClosedError {}".format(e))
pass
except FailedConnection as e:
print("[ERROR] TikTok: FailedConnection {}".format(e))
pass
except Exception as e:
print("[ERROR] TikTok: {}".format(e))
pass
finally:
print("[DEBUG] TikTok: Stopping listener...")
if client is not None:
print('Stopping TTL')
client.stop()
await websocket.close()
</code></pre>
<p>To add:</p>
<p>I can see</p>
<pre><code>[ERROR] TikTok: 1000
[DEBUG] TikTok: Stopping listener...
Stopping TTL
</code></pre>
<p>Once I disconnect in Postman. It seems that something is not exiting cleanly. I have a <code>scan_chat</code> function which is nearly identical and I don't get these errors.</p>
|
<python><websocket><python-asyncio><fastapi><tornado>
|
2023-01-18 02:44:49
| 0
| 1,064
|
Zane Helton
|
75,154,182
| 17,696,880
|
Remove 2 or more consecutive non-caps words from strings stored within a list of strings using regex, and separate
|
<pre class="lang-py prettyprint-override"><code>import re
list_with_penson_names_in_this_input = ["MarΓa Sol", "MarΓa del Carmen Perez AgΓΌiΓ±o", "Melina Saez Sossa", "el de juego es Alex" , "Harry ddeh jsdelasd Maltus ", "Ben White ddesddsh jsdelasd Regina yΓ‘shas asdelas Javier Ruben Rojas", "Robert", 'Melina presento el nuevo presupuesto', "presento el nuevo presupuesto, MarΓa del CarmΓ©n "]
aux_list = []
for i in list_with_penson_names_in_this_input:
list_with_penson_names_in_this_input.remove(i)
aux_list = re.sub(, , i)
list_with_penson_names_in_this_input = list_with_penson_names_in_this_input + aux_list
aux_list = []
print(list_with_penson_names_in_this_input) #print fixed name list
</code></pre>
<p>If there is a run of 2 or more consecutive words <code>((?:\w\s*)+)</code> that do not start with a capital letter and are not connectors of the type <code>[del|de el|de]</code>, that run should be removed; the capitalized name before it is kept, and the next name detected (if it exists) becomes a separate element of the list. For example,</p>
<p><code>["Harry ddeh jsdelasd Maltus "]</code> --> <code>["Harry", "Maltus"]</code></p>
<p><code>["Ben White ddesddsh jsdelasd Regina yΓ‘shas asdelas Javier Ruben Rojas"]</code> --> <code>["Ben White", "Regina", "Javier Ruben Rojas"]</code></p>
<p>and if there is not more than one name, you should remove if there are 2 or more consecutive words that do not start with a capital letter, and that are not connectors of the type <code>[del|de el|de]</code></p>
<p><code>["Melina Martinez presento el nuevo presupuesto"]</code> --> <code>["Melina Martinez"]</code></p>
<p><code>["Melina presento el nuevo presupuesto"]</code> --> <code>["Melina"]</code></p>
<p><code>["presento el nuevo presupuesto, MarΓa del CarmΓ©n "]</code> --> <code>["MarΓa del CarmΓ©n"]</code></p>
<p>When it comes to fixing those elements that do not meet the specifications, the elements on this list should be:</p>
<pre class="lang-py prettyprint-override"><code>["MarΓa Sol", "MarΓa del Carmen Perez AgΓΌiΓ±o", "Melina Saez Sossa", "Alex" , "Harry", "Maltus", "Ben White", "Regina", "Javier Ruben Rojas", "Robert", 'Melina', "MarΓa del CarmΓ©n"]
</code></pre>
<p>For the words in between that don't start with a capital letter i tried something like this <code>((?:\w\s*)+)</code> , but even so it does not restrict the presence or not of words with a capital letter</p>
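<p>One way to approach the splitting step is a regex that matches runs of two or more consecutive lowercase words (excluding the connectors) and treats each matched run as a separator. A rough sketch — the connector set is an assumption, and single lowercase words between names (e.g. <code>presento</code>) would still need extra handling:</p>

```python
import re

# Run of 2+ words that start lowercase and are not the connectors del/de/el.
splitter = re.compile(r"(?:\b(?!del\b|de\b|el\b)[a-záéíóúüñ]\w*\b[\s,]*){2,}")

def split_names(text):
    # Text between matched runs is kept; stray spaces/commas are trimmed.
    return [part.strip(" ,") for part in splitter.split(text)
            if part.strip(" ,")]

names1 = split_names("Harry ddeh jsdelasd Maltus ")
names2 = split_names("Ben White ddesddsh jsdelasd Regina yáshas asdelas Javier Ruben Rojas")
```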
|
<python><python-3.x><regex><list><regex-group>
|
2023-01-18 02:32:04
| 1
| 875
|
Matt095
|
75,154,134
| 9,338,509
|
Getting Incorrect padding error while decoding a message in Python when the encoding is from base64 in Java
|
<p>I have a service in python which will decode the messages like below:</p>
<pre><code>json.loads(base64.b64decode(myMessage['data']))
</code></pre>
<p>This is what I am trying to do in Java to encode</p>
<pre><code>JSONObject msg=new JSONObject();
msg.put("name", "myName");
msg.put("class", "myClass");
JSONObject obj=new JSONObject();
obj.put("data",Base64.getEncoder().encodeToString(msg.toString().getBytes("UTF-8")));
</code></pre>
<p>When I am encoding in the above way getting <code>binascii.Error: Incorrect padding</code> error in python. Is there anyway I can send encoded base64 message from Java which will be accepted by my python code.</p>
<p>When I try to encode in python</p>
<pre><code>msg = {}
msg["name"] = "myName"
msg["class"] = "myClass"
res = {}
res["data"] = base64.b64encode(json.dumps(msg).encode("utf-8"))
</code></pre>
<p>It is working fine. I am not sure what I am missing in java.</p>
<p>Note: I can't make changes in the decoder which is in python.</p>
<p>Thanks in advance.</p>
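<p>For what it's worth, <code>Base64.getEncoder()</code> in Java does emit padded output, so this error usually means the trailing <code>=</code> characters are lost or mangled in transit (URL encoding is a common culprit). Since the Python decoder cannot change, one defensive option on an intermediate layer is to restore the padding before decoding — a sketch (the helper name is made up):</p>

```python
import base64
import json

def b64decode_padded(data):
    # Base64 text must be a multiple of 4 chars; restore any '='
    # padding that was stripped in transit before decoding.
    if isinstance(data, str):
        data = data.encode("ascii")
    data += b"=" * (-len(data) % 4)
    return base64.b64decode(data)

payload = json.dumps({"name": "myName", "class": "myClass"}).encode("utf-8")
encoded = base64.b64encode(payload).decode("ascii")
mangled = encoded.rstrip("=")            # simulate padding lost in transport
decoded = json.loads(b64decode_padded(mangled))
```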
|
<python><java><encoding><base64><decoding>
|
2023-01-18 02:23:09
| 0
| 553
|
lakshmiravali rimmalapudi
|
75,153,957
| 4,152,567
|
Loading saved Keras model fails: ValueError: Inconsistent values for attr 'Tidx' DT_FLOAT vs. DT_INT32 while building NodeDef
|
<p>The code below generates the <strong>error</strong>:</p>
<pre><code>ValueError: Inconsistent values for attr 'Tidx' DT_FLOAT vs. DT_INT32 while building NodeDef
'tf_op_layer_Mean_17/Mean_17' using Op<name=Mean; signature=input:T, reduction_indices:Tidx -> output:T; attr=keep_dims:bool,default=false; attr=T:type,allowed=[DT_FLOAT, DT_DOUBLE, DT_INT32, DT_UINT8, DT_INT16, ..., DT_UINT16, DT_COMPLEX128, DT_HALF, DT_UINT32, DT_UINT64]; attr=Tidx:type,default=DT_INT32,allowed=[DT_INT32, DT_INT64]>
</code></pre>
<p><strong>Code</strong>:</p>
<pre><code>import tensorflow as tf
import numpy as np
from tensorflow.keras import Input, Model
tf.compat.v1.disable_eager_execution()
#tf.compat.v1.enable_eager_execution()
inputs = Input(shape=(2,))
output_loss = tf.keras.backend.mean(inputs)
outputs = [inputs, output_loss]
model = Model(inputs, outputs)
loss = tf.reduce_mean((output_loss)) #Error
#loss = tf.math.rsqrt((output_loss)) #No Error
model.add_loss(loss)
model.compile(optimizer="adam", loss=[None] * len(model.outputs))
model.fit(np.random.random((5, 2)), epochs=2)
model.save("my_model_.h5")
#Error when loading and loss tf.reduce_mean
model_ = tf.keras.models.load_model("my_model_.h5", compile=False)# ValueError: Inconsistent values for attr 'Tidx' DT_FLOAT vs. DT_INT32 while building NodeDef 'tf_op_layer_Mean_1/Mean_1'
model_.summary()
</code></pre>
<p>Note that the loss function causes the error. A change in the loss function (uncomment the sqrt loss) results in no error. Any suggestions would be great! This is also related to this <a href="https://github.com/tensorflow/tensorflow/issues/47309" rel="nofollow noreferrer">issue</a>.</p>
<p>Update, I tried closure for the loss function, it did not work.</p>
<pre><code>class Custom_Reduce_Mean_Loss(tf.keras.layers.Layer):
def __init__(self):
super().__init__()
def __call__(self, input):
return tf.cast(tf.reduce_mean(input), dtype=tf.float32)
def get_config(cls, config):
return cls(**config)
loss = Custom_Reduce_Mean_Loss()(output_loss)
</code></pre>
|
<python><tensorflow><keras>
|
2023-01-18 01:45:30
| 1
| 512
|
Mihai.Mehe
|
75,153,851
| 3,099,733
|
Is it possible to automatically convert a Union type to only one type automatically with pydantic?
|
<p>Given the following data model:</p>
<pre class="lang-py prettyprint-override"><code>
class Demo(BaseModel):
id: Union[int, str]
files: Union[str, List[str]]
</code></pre>
<p>Is there a way to tell <code>pydantic</code> to always convert <code>id</code> to <code>str</code> type and <code>files</code> to <code>List[str]</code> type automatically when I access them, instead of doing this manually every time.</p>
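<p>One approach (pydantic v1 syntax; v2 would use <code>field_validator(..., mode='before')</code>) is to declare the narrow type and coerce in a <code>pre</code> validator, so the conversion happens once at parse time — a sketch:</p>

```python
from typing import List
from pydantic import BaseModel, validator

class Demo(BaseModel):
    id: str
    files: List[str]

    @validator("id", pre=True)
    def id_to_str(cls, v):
        # Accept int or str input, store str.
        return str(v)

    @validator("files", pre=True)
    def files_to_list(cls, v):
        # Wrap a bare string so it is not treated as an iterable of chars.
        return [v] if isinstance(v, str) else v

demo = Demo(id=1, files="a.txt")
```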
|
<python><pydantic>
|
2023-01-18 01:25:01
| 2
| 1,959
|
link89
|
75,153,848
| 14,673,832
|
IndexError while calling dunder methods
|
<p>I want to print the values from the dunder methods of a class <code>foo</code>. However, I got the error:</p>
<pre><code>IndexError: Replacement index 1 out of range for positional args tuple
</code></pre>
<p>The code snippet is as follows:</p>
<pre class="lang-py prettyprint-override"><code>class foo(object):
def __str__(self):
return 'Testing'
def __repr__(self):
return 'Programming'
print('{}{}'.format(foo()))
</code></pre>
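<p>For context: <code>'{}{}'</code> contains two replacement fields but <code>format()</code> receives only one argument, hence the <code>IndexError</code>. One fix is to reference the same argument twice with explicit conversions — a sketch:</p>

```python
class Foo:
    def __str__(self):
        return 'Testing'

    def __repr__(self):
        return 'Programming'

# !s forces str(), !r forces repr(); index 0 reuses the single argument.
result = '{0!s}{0!r}'.format(Foo())
```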
|
<python><format><index-error>
|
2023-01-18 01:24:35
| 2
| 1,074
|
Reactoo
|
75,153,809
| 10,549,044
|
Run 3 Python Lines of Code Asynchronous without order
|
<p>Ok so I have a function that calls an API and waits for the response. I am using threading in order to parallelize calling requests during building the DataFrame. The same function gets called later many times but only with 1 item. So iterator is actually a list of just 1 element, hence no threading happens.</p>
<p>I want to use something like asyncio to let the 3 lines of code execute in any order (without waiting for each other) when the iterator length is 1.</p>
<p>Minimal Code:</p>
<pre><code>from tqdm.contrib.concurrent import thread_map
import pandas as pd
def func(x):
return x+1
df = pd.DataFrame()
iterator = [1, 2, 3, 4]
df["foo"] = thread_map(func, iterator, max_workers=4)
df["bar"] = thread_map(func, iterator, max_workers=4)
df["foobar"] = thread_map(func, iterator, max_workers=4)
</code></pre>
<p>What I want to do (wrong syntax for async):</p>
<pre><code>if len(iterator) == 1:
with async:
df["foo"] = func(iterator[0])
df["bar"] = func(iterator[0])
df["foobar"] = func(iterator[0])
else:
df["foo"] = thread_map(func, iterator, max_workers=4)
df["bar"] = thread_map(func, iterator, max_workers=4)
df["foobar"] = thread_map(func, iterator, max_workers=4)
</code></pre>
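<p>One option for the single-item branch (Python 3.9+) is <code>asyncio.to_thread</code> plus <code>gather</code>, which runs the three blocking calls concurrently in worker threads and returns the results in order — a sketch:</p>

```python
import asyncio

def func(x):
    # Stand-in for the blocking API call.
    return x + 1

async def run_three(value):
    # Each call runs in its own worker thread; gather preserves order
    # regardless of which call finishes first.
    return await asyncio.gather(
        asyncio.to_thread(func, value),
        asyncio.to_thread(func, value),
        asyncio.to_thread(func, value),
    )

foo, bar, foobar = asyncio.run(run_three(1))
```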
|
<python><asynchronous>
|
2023-01-18 01:14:52
| 1
| 410
|
ma7555
|
75,153,745
| 1,961,582
|
Is it possible to filter a pandas dataframe with an IntervalIndex or something similar?
|
<p>I have a pandas dataframe that has a DatetimeIndex. I want to filter this dataframe in 15 day chunks at a time. To do this manually I can do something like:</p>
<pre><code>start, end = pd.Timestamp('2022-07-01'), pd.Timestamp('2022-07-16')
filt_df = df[start: end]
</code></pre>
<p>However, I want to be able to do this over a long range without manually creating these filtering strings. I could just iterate over a loop and advance them all with a <code>pd.Timedelta</code> but it seems like I should be able to do something with the functions pandas has built-in such as <code>pandas.interval_range</code> or something similar. However, <code>interval_range</code> produces an <code>IntervalIndex</code> and I don't know how to use that to filter a dataframe. Is there a cleaner way to do what I'm trying to do here?</p>
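<p>For what it's worth, <code>resample</code> (or <code>pd.Grouper</code>) can cut a <code>DatetimeIndex</code> into fixed 15-day chunks without building the boundary timestamps by hand — a sketch on synthetic data:</p>

```python
import pandas as pd

df = pd.DataFrame(
    {"v": range(60)},
    index=pd.date_range("2022-07-01", periods=60, freq="D"),
)

# Each iteration yields (left_edge, sub-DataFrame) for one 15-day window.
chunks = [chunk for _, chunk in df.resample("15D")]
```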
|
<python><pandas><dataframe>
|
2023-01-18 00:57:24
| 2
| 1,163
|
gammapoint
|
75,153,726
| 3,750,694
|
How to convert bounding box with relative coordinates of object resulting from running yolov5 into absolute coordinates
|
<p>I have run the yolov5 model to predict trees in imagery with a geocoordinate. I got the bounding box objects with relative coordinates after running the model. I would like to have these bounding boxes with the absolute coordinate system as the imagery (NZTM 2000). Please help me to do this. Below are my bounding box information and original imagery for the prediction.</p>
<pre><code>models= torch.hub.load('ultralytics/yolov5', 'custom', 'C:/Users/yolov5-master/runs/train/exp/weights/best.pt')
im=r'C:\imagery1.PNG'
results = models(im)
results.print()
results.show()
results.xyxy[0]
a=results.pandas().xyxy[0]
df=pd.DataFrame(a)
print(df)
</code></pre>
<p><a href="https://i.sstatic.net/4oJxE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4oJxE.png" alt="enter image description here" /></a></p>
<pre><code> # function to convert polygons to bbox
def bbox(long0, lat0, lat1, long1):
return Polygon([[long0, lat0], #long0=xmin, lat0=ymin, lat1=ymax, long1=xmax
[long1,lat0],
[long1,lat1],
[long0, lat1]])
test = bbox(144.2734528,350.0042114,359.900177,152.4013672)
test1=bbox(366.7437744,215.1108856,226.6479034,376.282196)
gpd.GeoDataFrame(pd.DataFrame(['p1','p2'], columns = ['geom']),
geometry = [test,test1]).to_file(r'C:\delete\poly1.shp')
</code></pre>
<p><a href="https://i.sstatic.net/liR1M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/liR1M.png" alt="enter image description here" /></a></p>
<p>The coordinate system of imagery: NZGD 2000 New Zealand Transverse Mercator.</p>
<pre><code>The extent of imagery: top: 5,702,588.730967 m, bottom: 5,702,581.666007 m,
left: 1,902,830.371719 m, right: 1,902,837.436679 m
</code></pre>
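<p>Mapping the relative (pixel) box corners into NZTM coordinates is a linear transform from the image's pixel grid onto its georeferenced extent. A sketch — the helper name and the 400×400 pixel size are assumptions; use the actual raster dimensions:</p>

```python
def pixel_to_world(px, py, extent, img_w, img_h):
    # extent = (left, right, top, bottom) in NZTM metres; pixel row 0 is
    # the top edge, so world y decreases as the row index grows.
    left, right, top, bottom = extent
    x = left + (px / img_w) * (right - left)
    y = top - (py / img_h) * (top - bottom)
    return x, y

extent = (1902830.371719, 1902837.436679, 5702588.730967, 5702581.666007)
corner = pixel_to_world(0, 0, extent, 400, 400)        # image origin -> top-left
opposite = pixel_to_world(400, 400, extent, 400, 400)  # bottom-right
```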
|
<python><pandas><bounding-box><coordinate-systems><yolov5>
|
2023-01-18 00:55:16
| 1
| 703
|
user30985
|
75,153,711
| 386,861
|
How to fine tune formatting in Pandas
|
<p>Just learning some new pandas techniques and working on trying to fine tune the outputs.</p>
<p>Here's my code.</p>
<pre><code>import pandas as pd
import numpy as np

dogs = np.random.choice(['labrador', 'poodle', 'pug', 'beagle', 'dachshund'], size=50_000)
smell = np.random.randint(1,100, size=50_000)
df = pd.DataFrame(data= np.array([dogs, smell]).T, columns= ['dog', 'smell'])
</code></pre>
<p>So far so simple.</p>
<pre><code> dog smell
0 poodle 83
1 labrador 3
2 poodle 86
3 dachshund 31
4 labrador 16
... ... ...
</code></pre>
<p>Then created a one-liner to list the number of each breed using .value_counts.</p>
<p>I normalised using the <code>normalize</code> parameter, multiplied by 100 to get percentages, and then chained <code>.to_frame()</code> and <code>.round(2)</code>.</p>
<pre><code>print(f"{(df.value_counts('dog', normalize=True, )*100).to_frame().round(2)}")
0
dog
beagle 20.04
poodle 20.03
labrador 19.98
dachshund 19.98
pug 19.97
</code></pre>
<p>It's almost there but is there a simple way to extend the formatting of this one-liner so it looks like - that is that there is a percentage symbol?</p>
<pre><code> 0
dog
beagle 20.04%
poodle 20.03%
labrador 19.98%
dachshund 19.98%
pug 19.97%
</code></pre>
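<p>Since a percent sign makes the values strings anyway, one way is to <code>map</code> a format string over the normalized counts — a sketch:</p>

```python
import numpy as np
import pandas as pd

dogs = np.random.choice(['labrador', 'poodle', 'pug', 'beagle'], size=1000)
df = pd.DataFrame({'dog': dogs})

# Normalized counts as percentages, rendered as '20.04%'-style strings.
pct = df.value_counts('dog', normalize=True) * 100
formatted = pct.map('{:.2f}%'.format).to_frame()
```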
|
<python><pandas>
|
2023-01-18 00:52:26
| 1
| 7,882
|
elksie5000
|
75,153,626
| 2,706,344
|
How to find a single NaN row in my DataFrame
|
<p>I have a DataFrame with a 259399 rows and one column. It is called <code>hfreq</code>. In one single row I have a NaN value and I want to find it. I thought this is easy and tried <code>hfreq[hfreq.isnull()]</code>. But as you can see it doesn't help:</p>
<p><a href="https://i.sstatic.net/KyEt8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KyEt8.png" alt="enter image description here" /></a></p>
<p>What am I doing wrong and how is it done correctly?</p>
<p><strong>Edit:</strong> For clarity: That is how my DataFrame looks like:</p>
<p><a href="https://i.sstatic.net/Jqsdi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jqsdi.png" alt="enter image description here" /></a></p>
<p>There is only one NaN value hidden somewhere in the middle and I want to learn where it is so I want to get its index.</p>
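<p>For what it's worth, the mask works once it is reduced row-wise (or built from the column itself) rather than passed as a DataFrame — a sketch:</p>

```python
import numpy as np
import pandas as pd

hfreq = pd.DataFrame({'value': [0.1, 0.2, np.nan, 0.4]})

# Row-wise mask: True where any column in the row is NaN.
nan_rows = hfreq[hfreq.isna().any(axis=1)]
nan_index = nan_rows.index[0]
```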
|
<python><pandas>
|
2023-01-18 00:34:35
| 1
| 4,346
|
principal-ideal-domain
|
75,153,610
| 998,070
|
Drawing an ellipse at an angle between two points in Python
|
<p>I'm trying to draw an ellipse between two points. So far, I have it mostly working:</p>
<p><a href="https://i.sstatic.net/b2FJ7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b2FJ7.png" alt="Arc Angle" /></a></p>
<p>The issue comes with setting the ellipse height (<code>ellipse_h</code> below).</p>
<pre class="lang-py prettyprint-override"><code> x = center_x + radius*np.cos(theta+deg)
y = center_y - ellipse_h * radius*np.sin(theta+deg)
</code></pre>
<p>In this example, it's set to -0.5:
<a href="https://i.sstatic.net/yxNxM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yxNxM.png" alt="Ellipse Height" /></a></p>
<p>Can anyone please help me rotate the ellipse height with the ellipse? Thank you!</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
def distance(x1, y1, x2, y2):
return np.sqrt(np.power(x2 - x1, 2) + np.power(y2 - y1, 2) * 1.0)
def midpoint(x1, y1, x2, y2):
return [(x1 + x2) / 2,(y1 + y2) / 2]
def angle(x1, y1, x2, y2):
#radians
return np.arctan2(y2 - y1, x2 - x1)
x1 = 100
y1 = 150
x2 = 200
y2 = 190
ellipse_h = -1
x_coords = []
y_coords = []
mid = midpoint(x1, y1, x2, y2)
center_x = mid[0]
center_y = mid[1]
ellipse_resolution = 40
step = 2*np.pi/ellipse_resolution
radius = distance(x1, y1, x2, y2) * 0.5
deg = angle(x1, y1, x2, y2)
cos = np.cos(deg * np.pi /180)
sin = np.sin(deg * np.pi /180)
for theta in np.arange(0, np.pi+step, step):
x = center_x + radius*np.cos(theta+deg)
y = center_y - ellipse_h * radius*np.sin(theta+deg)
x_coords.append(x)
y_coords.append(y)
plt.xlabel("X")
plt.ylabel("Y")
plt.title("Arc between 2 Points")
plt.plot(x_coords,y_coords)
plt.scatter([x1,x2],[y1,y2])
plt.axis('equal')
plt.show()
</code></pre>
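<p>The underlying fix is to scale the minor axis in the ellipse's own frame and then rotate both coordinates together with a rotation matrix, instead of scaling the already-rotated <code>y</code>. A sketch (the helper name is made up):</p>

```python
import numpy as np

def arc_points(p1, p2, height_ratio=0.5, n=41):
    # Half-ellipse from p1 to p2 whose minor axis is height_ratio times
    # the semi-major axis, rotated together with the chord.
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    center = (p1 + p2) / 2
    radius = np.linalg.norm(p2 - p1) / 2
    ang = np.arctan2(p2[1] - p1[1], p2[0] - p1[0])
    t = np.linspace(0, np.pi, n)
    # Points in the unrotated ellipse frame...
    local = np.stack([radius * np.cos(t), height_ratio * radius * np.sin(t)])
    # ...then rotate the whole curve by the chord angle.
    rot = np.array([[np.cos(ang), -np.sin(ang)],
                    [np.sin(ang),  np.cos(ang)]])
    return (rot @ local).T + center

pts = arc_points((100, 150), (200, 190))
```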
|
<python><numpy><geometry><linear-algebra><trigonometry>
|
2023-01-18 00:31:29
| 1
| 424
|
Dr. Pontchartrain
|
75,153,525
| 7,766,024
|
If MySQL is case-sensitive on Linux, why am I getting a duplicate entry error?
|
<p>I was using MLflow to log some parameters for my ML experiment and kept experiencing a <code>BAD_REQUEST</code> error. The specific traceback is:</p>
<pre><code>mlflow.exceptions.RestException: BAD_REQUEST: (pymysql.err.IntegrityError) (1062, "Duplicate entry 'hidden_Size-417080a853934d5d8a7cf5a27' for key 'params.PRIMARY'")
[SQL: INSERT INTO params (`key`, value, run_uuid) VALUES (%(key)s, %(value)s, %(run_uuid)s)]
[parameters: ({'key': 'dropout_rate', 'value': '0.1', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'model_type', 'value': '', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'model_name_or_path', 'value': '', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'model_checkpoint_path', 'value': '', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'hidden_size', 'value': '768', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'vocab_size',
'value': '49408', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'num_layers', 'value': '12', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'attn_heads', 'value': '8', 'run_uuid': '417080a853934d5d8a7cf5a27'} ... displaying 10 of 40 total bo
und parameter sets ... {'key': 'num_labels', 'value': '13', 'run_uuid': '417080a853934d5d8a7cf5a27'}, {'key': 'hidden_Size', 'value': '256', 'run_uuid': '417080a853934d5d8a7cf5a27'})]
(Background on this error at: https://sqlalche.me/e/14/gkpj)
</code></pre>
<p>I'm using an argument parser that contains all of these values and was using <code>mlflow.log_params(vars(args))</code> to log them. Somewhere in the code I was assigning a new value to the Namespace called <code>hidden_Size</code> which is identical to <code>hidden_size</code> but with a different name.</p>
<p>Doing some reading tells me this is a SQL problem where I probably assigned the same value to two keys. But it also seems that MySQL is case-sensitive on Linux, which is what I'm using. If MySQL is case-sensitive, why am I getting a duplicate entry error? Shouldn't <code>hidden_size</code> and <code>hidden_Size</code> be treated as separate entries?</p>
|
<python><mysql>
|
2023-01-18 00:15:16
| 1
| 3,460
|
Sean
|
75,153,425
| 7,984,318
|
python pandas convert for loop in one line code
|
<p>I have a Dataframe df,you can have it by running:</p>
<pre><code>import pandas as pd
data = [10,20,30,40,50,60]
df = pd.DataFrame(data, columns=['Numbers'])
df
</code></pre>
<p>now I want to check if df's columns are in an existing list,if not then create a new column and set the column value as 0,column name is same as the value of the list:</p>
<pre><code>columns_list=["3","5","8","9","12"]
for i in columns_list:
if i not in df.columns.to_list():
df[i]=0
</code></pre>
<p>How can I code it in one line,I have tried this:</p>
<pre><code>[df[i]=0 for i in columns_list if i not in df.columns.to_list()]
</code></pre>
<p>However the IDE return :</p>
<pre><code>SyntaxError: cannot assign to subscript here. Maybe you meant '==' instead of '='?
</code></pre>
<p>Any friend can help ?</p>
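<p>For context: assignment is a statement, not an expression, so it cannot appear inside a list comprehension. <code>DataFrame.assign</code> with a dict comprehension does it in one line — a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'Numbers': [10, 20, 30, 40, 50, 60]})
columns_list = ["3", "5", "8", "9", "12"]

# Create every missing column with value 0 in a single expression.
df = df.assign(**{c: 0 for c in columns_list if c not in df.columns})
```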
|
<python><pandas><dataframe>
|
2023-01-17 23:56:33
| 3
| 4,094
|
William
|
75,153,423
| 8,406,398
|
Jupyter notebook gui missing Java kernel
|
<p>I'm using the Dockerfile below to add the <code>IJava</code> kernel to my Jupyter notebook.</p>
<pre><code>FROM ubuntu:18.04
ARG NB_USER="some-user"
ARG NB_UID="1000"
ARG NB_GID="100"
RUN apt-get update || true && \
apt-get install -y sudo && \
useradd -m -s /bin/bash -N -u $NB_UID $NB_USER && \
chmod g+w /etc/passwd && \
echo "${NB_USER} ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers && \
# Prevent apt-get cache from being persisted to this layer.
rm -rf /var/lib/apt/lists/*
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y locales && \
sed -i -e 's/# en_US.UTF-8 UTF-8/en_US.UTF-8 UTF-8/' /etc/locale.gen && \
dpkg-reconfigure --frontend=noninteractive locales && \
update-locale LANG=en_US.UTF-8
RUN apt-get install -y \
openjdk-11-jdk-headless \
python3-pip git curl unzip
RUN ln -s /usr/bin/python3 /usr/bin/python & \
pip3 install --upgrade pip
RUN pip3 install packaging jupyter ipykernel awscli jaydebeapi
RUN python -m ipykernel install --sys-prefix
# Install Java kernel
RUN mkdir ijava-kernel/ && cd ijava-kernel && curl -LO https://github.com/SpencerPark/IJava/releases/download/v1.3.0/ijava-1.3.0.zip && \
unzip ijava-1.3.0.zip && \
python install.py --sys-prefix && \
rm -rf ijava-kernel/
RUN jupyter kernelspec list
ENV SHELL=/bin/bash
ENV LANG en_US.UTF-8
ENV LC_ALL en_US.UTF-8
ENV JAVA_HOME=/usr/lib/jvm/java-11-openjdk-arm64/
WORKDIR /home/$NB_USER
USER $NB_UID
</code></pre>
<p>As soon as I run the docker image, inside the container:</p>
<pre><code>some-user@023f579253ec:~$ jupyter kernelspec list ββ―
Available kernels:
python3 /usr/local/share/jupyter/kernels/python3
java /usr/share/jupyter/kernels/java
some-user@023f579253ec:~$
</code></pre>
<p>As well as, the console with kernel java is also installed and working as per README.md</p>
<pre><code> jupyter console --kernel java
In [2]: String helloWorld = "Hello world!"
In [3]: helloWorld
Out[3]: Hello world!
</code></pre>
<p>But as soon as I open the <code>jupyter notebook</code> web UI inside the container, I only see the Python 3 kernel, not Java; see the attached image.<a href="https://i.sstatic.net/ggvOX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ggvOX.png" alt="No Java Kernel Listing" /></a></p>
<p>can anyone help me out to add the Java Kernel to Notebook GUI?</p>
|
<python><docker><jupyter-notebook>
|
2023-01-17 23:56:27
| 1
| 2,125
|
change198
|
75,153,395
| 17,487,457
|
matplotlib: common legend to a column in a subplots
|
<p>Suppose the below code is used to create my plot:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)
fig, axs = plt.subplots(2, 2)
axs[0, 0].plot(x, y)
axs[0, 0].set_title('Axis [0, 0]')
axs[0, 1].plot(x, y, 'tab:orange')
axs[0, 1].set_title('Axis [0, 1]')
axs[1, 0].plot(x, -y, 'tab:green')
axs[1, 0].set_title('Axis [1, 0]')
axs[1, 1].plot(x, -y, 'tab:red')
axs[1, 1].set_title('Axis [1, 1]')
for ax in axs.flat:
ax.set(xlabel='x-label', ylabel='y-label')
# Hide xlabels & and tick labels for top plots.
for ax in axs.flat:
ax.label_outer()
</code></pre>
<p>Figure:</p>
<p><a href="https://i.sstatic.net/NirHA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NirHA.png" alt="Enter image description here" /></a></p>
<p>How do I add a common <code>legend</code> to each column, for example left column <em>time-domain</em> right one <em>frequency-domain</em>?</p>
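<p>One option is one legend per column, built from the handles of that column's top axes and anchored above it — a sketch (the <em>time-domain</em>/<em>frequency-domain</em> labels are the assumed column titles):</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the script runs anywhere
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 2 * np.pi, 400)
y = np.sin(x ** 2)

fig, axs = plt.subplots(2, 2)
for ax in axs[:, 0]:
    ax.plot(x, y, label='time-domain')
for ax in axs[:, 1]:
    ax.plot(x, -y, 'tab:red', label='frequency-domain')

# One legend per column, centred above that column's top axes.
left_legend = axs[0, 0].legend(loc='lower center', bbox_to_anchor=(0.5, 1.05))
right_legend = axs[0, 1].legend(loc='lower center', bbox_to_anchor=(0.5, 1.05))
```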
|
<python><matplotlib><legend><subplot>
|
2023-01-17 23:52:28
| 1
| 305
|
Amina Umar
|
75,153,310
| 3,466,818
|
Why can't ArUco3Detection find these tags?
|
<p>I created a custom 4x4 dictionary, and put four of them on a circuit board into the silkscreen.</p>
<p><a href="https://i.sstatic.net/WEnql.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WEnql.jpg" alt="enter image description here" /></a></p>
<pre><code>myDict = cv2.aruco.custom_dictionary(4, 4, 1337)
</code></pre>
<p>I am using cv2.aruco in python, and just calling <code>cv2.aruco.detectMarkers</code> with arguments for <code>image</code> and <code>dictionary</code> (and no argument for <code>parameters</code>, thus leaving them default) works just fine.</p>
<pre><code>corners, ids, _ = cv2.aruco.detectMarkers(frame, myDict)
</code></pre>
<p>However, ArUco3Detection (a parameter I can pass) is much faster. I would rather use that, as I have a lot of frames of video to parse.</p>
<p>When I try to do so, however, it fails to find the tags! I have messed with practically every setting, and can't figure out why this is happening.</p>
<pre><code>parameters = cv2.aruco.DetectorParameters_create()
parameters.useAruco3Detection = True
corners, ids, rejected = cv2.aruco.detectMarkers(frame, myDict, parameters = parameters)
</code></pre>
<p>I can see that it's finding squares/contours, and they're showing up in the <code>rejected</code> return value.</p>
<p><a href="https://i.sstatic.net/FTJaH.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FTJaH.jpg" alt="enter image description here" /></a></p>
<p>Note: Since the ArUco tag is supposed to be black on white, I made a white border around it in the silkscreen on the PCB. The square the detector is finding encompasses that white border, rather than selecting only the inside of it. I thought perhaps I could try setting <code>markerBorderBits</code> to <code>2</code> and <code>maxErroneousBitsInBorderRate</code> to <code>1.0</code> to counter this, but it didn't work. Fiddling with other parameters, I have managed to get it to select the inner square, but it still didn't decode.</p>
<p>Suggestions?</p>
|
<python><opencv><computer-vision><detection><aruco>
|
2023-01-17 23:37:36
| 0
| 706
|
Helpful
|
75,153,191
| 238,012
|
PIL Open image, causing DecompressionBombError, with lower resolution
|
<p>I have a <code>problem_page</code> such that</p>
<pre class="lang-py prettyprint-override"><code>from PIL import Image
problem_page = "/home/rajiv/tmp/kd/pss-images/f1-577.jpg"
img = Image.open(problem_page)
</code></pre>
<p>results in</p>
<pre><code>PIL.Image.DecompressionBombError: Image size (370390741 pixels) exceeds limit of 178956970 pixels, could be decompression bomb DOS attack.
</code></pre>
<p>I'd like to respect the limit and not increase the limit (as described here: <a href="https://stackoverflow.com/questions/51152059/pillow-in-python-wont-let-me-open-image-exceeds-limit">Pillow in Python won't let me open image ("exceeds limit")</a>)</p>
<p>How can I load it in a way that the resolution is lowered just below the limit and the lower resolution image is referenced in <code>img</code> without causing any error.</p>
<p>It'd be great to have a Python solution but if not, any other solution will work too.</p>
<p><strong>Update(to answer questions in comments):</strong></p>
<p>These images are derived from PDFs for machine learning (ML). The PDFs come from outside the system, so we have to protect it from possible decompression bombs. For most ML, pixel-size requirements are well below the limit imposed by PIL, so we are OK with that limit as a protective heuristic.</p>
<p>Our current option is to use <code>pdf2image</code> which converts pdfs to images and specify a pixel size (e.g. width=1700 pixels, height=2200 pixels) there but I was curious if this can be done at the point of loading an image.</p>
|
<python><python-imaging-library>
|
2023-01-17 23:15:50
| 0
| 6,344
|
RAbraham
|
75,153,001
| 9,878,135
|
SQLAlchemy what's the difference between creating an type instance and setting default values
|
<p>I am learning SQLAlchemy and found different usages of defining types and how to set default values. What are the differences between them?</p>
<p>Example 1</p>
<pre class="lang-py prettyprint-override"><code>class User(Base):
__tablename__ = "user"
age = sa.Column(
sa.Integer,
default=0,
)
</code></pre>
<p>Example 2</p>
<pre class="lang-py prettyprint-override"><code>class User(Base):
__tablename__ = "user"
age = sa.Column(
sa.Integer(),
default=0,
)
</code></pre>
<p>and</p>
<p>Example 3</p>
<pre class="lang-py prettyprint-override"><code>class User(Base):
__tablename__ = "user"
age = sa.Column(
sa.Integer(0),
)
</code></pre>
<p>Is there any difference in functionality between these examples or are they just syntactic sugar?</p>
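<p>For what it's worth: <code>Column</code> accepts either the type class or an instance, and the two behave identically for argument-less types; <code>Integer</code> takes no constructor arguments, so Example 3 raises a <code>TypeError</code> rather than setting a default (defaults belong on the <code>Column</code>). A quick check:</p>

```python
import sqlalchemy as sa

# Class and instance forms produce equivalent column types.
col_a = sa.Column("age", sa.Integer, default=0)
col_b = sa.Column("age", sa.Integer(), default=0)

try:
    sa.Integer(0)          # Integer.__init__ accepts no arguments
    error = None
except TypeError as exc:
    error = exc
```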
|
<python><sqlalchemy><fastapi>
|
2023-01-17 22:46:48
| 1
| 1,328
|
Myzel394
|
75,152,807
| 12,116,507
|
how to convert a string into an array of variables
|
<p>how can i take a string like this</p>
<pre><code>string = "image1 [{'box': [35, 0, 112, 36], 'score': 0.8626706004142761, 'label': 'FACE_F'}, {'box': [71, 80, 149, 149], 'score': 0.8010843992233276, 'label': 'FACE_F'}, {'box': [0, 81, 80, 149], 'score': 0.7892318964004517, 'label': 'FACE_F'}]"
</code></pre>
<p>and turn it into variables like this?</p>
<pre><code>filename = "image1"
box = [35, 0, 112, 36]
score = 0.8010843992233276
label = "FACE_F"
</code></pre>
<p>or if there are more than one of box, score, or label</p>
<pre><code>filename = "image1"
box = [[71, 80, 149, 149], [35, 0, 112, 36], [0, 81, 80, 149]]
score = [0.8010843992233276, 0.8626706004142761, 0.7892318964004517]
label = ["FACE_F", "FACE_F", "FACE_F"]
</code></pre>
<p>this is how far i've gotten</p>
<pre><code>log = open(r'C:\Users\15868\Desktop\python\log.txt', "r")
data = log.readline()
log.close()
print(data)
filename = data.split(" ")[0]
info = data.rsplit(" ")[1]
print(filename)
print(info)
</code></pre>
<p>output</p>
<pre><code>[{'box':
image1
</code></pre>
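<p>For context: the bracketed part of the line is a valid Python literal, so <code>ast.literal_eval</code> can parse it safely after the filename is split off — a sketch on a shortened sample line:</p>

```python
import ast

line = ("image1 [{'box': [35, 0, 112, 36], 'score': 0.8626706004142761, 'label': 'FACE_F'}, "
        "{'box': [71, 80, 149, 149], 'score': 0.8010843992233276, 'label': 'FACE_F'}]")

filename, _, rest = line.partition(" ")
records = ast.literal_eval(rest)       # safe: literals only, no code execution

box = [r['box'] for r in records]
score = [r['score'] for r in records]
label = [r['label'] for r in records]
```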
|
<python><arrays>
|
2023-01-17 22:19:59
| 1
| 361
|
dam1ne
|
75,152,773
| 1,611,396
|
Vertical scrollbar in FLET app, does not appear
|
<p>I am building a FLET app, but sometimes I have a datatable which is too large for the frame it is in. For the row I have a scrollbar appearing, but for the column I just don't seem to get it working.</p>
<p>In this code a scrollbar simply does not appear.</p>
<pre><code>import pandas as pd
pd.options.display.max_columns = 100
from services.bag import PCHN
from utils.convertors import dataframe_to_datatable
import flet as ft
def main(page: ft.page):
def bag_service(e):
pc = '9351BP' if postal_code_field.value == '' else postal_code_field.value
hn = '1' if house_number_field.value == '' else house_number_field.value
address = PCHN(pc,
hn).result
bag_container[0] = dataframe_to_datatable(address)
page.update() # This is not updating my bag_table in place though. It stays static as it is.
# define form fields
postal_code_field = ft.TextField(label='Postal code')
house_number_field = ft.TextField(label='House number')
submit_button = ft.ElevatedButton(text='Submit', on_click=bag_service)
# fields for the right column
address = PCHN('9351BP', '1').result
bag_table = dataframe_to_datatable(address)
bag_container = [bag_table]
# design layout
# 1 column to the left as a frame and one to the right with two rows
horizontal_divider = ft.Row
left_column = ft.Column
right_column = ft.Column
# fill the design
page.add(
horizontal_divider(
[left_column(
[postal_code_field,
house_number_field,
submit_button
]
),
right_column(
[
ft.Container(
ft.Row(
bag_container,
scroll='always'
),
bgcolor=ft.colors.BLACK,
width=800,)
],scroll='always'
)
]
)
)
if __name__ == '__main__':
ft.app(target=main,
view=ft.WEB_BROWSER,
port=666
)
</code></pre>
<p>I am lost as to what could be the case here. Any help would be much appreciated.</p>
|
<python><flet>
|
2023-01-17 22:14:50
| 1
| 362
|
mtjiran
|
75,152,736
| 5,924,264
|
Converting column of dataframe based on scaling factor from another dataframe
|
<p>I have a dataframe that has a column <code>length</code> (numeric) and another column units (string). For example,</p>
<pre><code>df2convert =
length units
1.0 "m" # meters
1.4 "in" # inches
0.5 "km" # kilometers
....
</code></pre>
<p>I have another dataframe that maps all the units to meters. For example</p>
<pre><code>units2meters:
units scaling
"m" 1.0
"in" 0.0254
"km" 1000
</code></pre>
<p>I want to convert everything in <code>df2convert</code> to meters.</p>
<p>Currently, I do it without joining/merging the 2 dataframes:
I convert <code>units2meters</code> to a <code>dict</code> keyed by <code>units</code> with <code>scaling</code> as the value, and then look up each <code>df2convert.units</code> entry in that hash table.</p>
<p>Is there a better/more efficient way to do this?</p>
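<p>For what it's worth, the dict lookup is already reasonable; the vectorized pandas spelling of the same idea is <code>Series.map</code> against the scaling table — a sketch:</p>

```python
import pandas as pd

df2convert = pd.DataFrame({'length': [1.0, 1.4, 0.5],
                           'units': ['m', 'in', 'km']})
units2meters = pd.DataFrame({'units': ['m', 'in', 'km'],
                             'scaling': [1.0, 0.0254, 1000.0]})

# Index the scaling table by unit, then map each row's unit to its factor.
scale = units2meters.set_index('units')['scaling']
df2convert['length_m'] = df2convert['length'] * df2convert['units'].map(scale)
```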
|
<python><pandas><dataframe>
|
2023-01-17 22:11:28
| 2
| 2,502
|
roulette01
|
75,152,695
| 12,436,050
|
Filter rows from Pandas dataframe based on max value of a column
|
<p>I have a pandas dataframe with following columns</p>
<pre><code>col1 col2 col3
x 12 abc
x 7 abc
x 5 abc
x 3
y 10 abc
y 9 abc
</code></pre>
<p>After filtering out the rows where <code>col3</code> is null, I would like to find, for each <code>col1</code> group, the row with the maximum <code>col2</code> value.</p>
<p>The expected output is:</p>
<pre><code>col1 col2 col3
x 12 abc
y 10 abc
</code></pre>
<p>I have tried the below code so far.</p>
<pre><code>df[df[['col3']].notnull().all(1) & df.sort_values('col2').drop_duplicates(['col1'], keep='last')]
</code></pre>
<p>However I am getting following error.</p>
<pre><code>TypeError: unsupported operand type(s) for &: 'bool' and 'float'
</code></pre>
<p>Any help is highly appreciated</p>
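<p>One way: drop the null-<code>col3</code> rows first, then keep the row holding each group's maximum via <code>idxmax</code> — a sketch on the sample data:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'col1': ['x', 'x', 'x', 'x', 'y', 'y'],
    'col2': [12, 7, 5, 3, 10, 9],
    'col3': ['abc', 'abc', 'abc', None, 'abc', 'abc'],
})

valid = df[df['col3'].notna()]
# idxmax returns the row label of each group's maximum col2.
result = valid.loc[valid.groupby('col1')['col2'].idxmax()]
```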
|
<python><pandas>
|
2023-01-17 22:05:27
| 2
| 1,495
|
rshar
|
75,152,653
| 1,563,347
|
Can a python pickle object be checked for malicious code before unpickling?
|
<p>Is there any way to test a pickle file to see if it loads a function or class during unpickling?</p>
<p>This gives a good summary of how to stop loading of selected functions:
<a href="https://docs.python.org/3/library/pickle.html#restricting-globals" rel="nofollow noreferrer">https://docs.python.org/3/library/pickle.html#restricting-globals</a></p>
<p>I assume it could be used to check if there is function loading at all, by simply blocking all function loading and getting an error message.</p>
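<p>For example, following the restricting-globals recipe, I imagine an unpickler that forbids every global would flag any pickle that tries to load a function or class (a sketch, not a security guarantee):</p>

```python
import datetime
import io
import pickle

class NoGlobalsUnpickler(pickle.Unpickler):
    """Refuse to resolve any global, so only plain data can load."""
    def find_class(self, module, name):
        raise pickle.UnpicklingError(
            f"global '{module}.{name}' is forbidden")

def is_plain_data(payload: bytes) -> bool:
    """True if the pickle loads without importing any function/class."""
    try:
        NoGlobalsUnpickler(io.BytesIO(payload)).load()
        return True
    except pickle.UnpicklingError:
        return False

# containers of text/numbers need no globals
print(is_plain_data(pickle.dumps({"a": [1, "two", 3.0]})))      # True
# anything pickled via a class/function reference is rejected
print(is_plain_data(pickle.dumps(datetime.date(2023, 1, 17))))  # False
```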
<p>But is there a way to write a function that will simply say: there is only text data in this pickled object and no function loading?</p>
<p>I can't say I know which builtins are safe!</p>
|
<python><pickle>
|
2023-01-17 22:00:07
| 2
| 601
|
wgw
|
75,152,539
| 278,403
|
Substitute compound expression in SymPy
|
<p>In sympy how can I make a substitution of a compound expression for a single variable as in the following example that only works for one of the instances of the common factor?</p>
<pre class="lang-py prettyprint-override"><code>from sympy import *
x, y, z = symbols('x y z')
eq = Eq(2*(x+y) + 3*(x+y)**2, 0)
print(eq)
eq1 = Eq(z, x+y)
print(eq1)
eq2 = eq.subs(eq1.rhs, eq1.lhs)
print(eq2)
</code></pre>
<p>Output</p>
<pre><code>Eq(2*x + 2*y + 3*(x + y)**2, 0)
Eq(z, x + y)
Eq(2*x + 2*y + 3*z**2, 0)
</code></pre>
<p>Desired output for last line</p>
<pre><code>Eq(2*z + 3*z**2, 0)
</code></pre>
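<p>For context, one workaround I stumbled on (possibly not the idiomatic one) is to substitute for a base symbol instead, which collapses every occurrence:</p>

```python
from sympy import Eq, symbols

x, y, z = symbols('x y z')
eq = Eq(2*(x + y) + 3*(x + y)**2, 0)
# replacing x by z - y turns every x + y into z before auto-expansion
eq3 = eq.subs(x, z - y)
print(eq3)
```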
|
<python><sympy>
|
2023-01-17 21:44:11
| 2
| 2,219
|
glennr
|
75,152,437
| 1,003,190
|
PyTorch: bitwise OR all elements below a certain dimension
|
<p>New to pytorch and tensors in general, I could use some guidance :) I'll do my best to write a correct question, but I may use terms incorrectly here and there. Feel free to correct all of this :)</p>
<p>Say I have a tensor of shape (n, 3, 3). Essentially, n matrices of 3x3. Each of these matrices contains either 0 or 1 for each cell.</p>
<p>What's the best (fastest, easiest?) way to do a bitwise OR for all of these matrices?</p>
<p>For example, if I have 3 matrices:</p>
<pre><code>0 0 1
0 0 0
1 0 0
--
1 0 0
0 0 0
1 0 1
--
0 1 1
0 1 0
1 0 1
</code></pre>
<p>I want the final result to be</p>
<pre><code>1 1 1
0 1 0
1 0 1
</code></pre>
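<p>For concreteness, here is the reduction I mean, sketched in NumPy; I assume PyTorch has an equivalent reduction over <code>dim=0</code>:</p>

```python
import numpy as np

# three stacked 3x3 binary matrices, analogous to a (n, 3, 3) tensor
mats = np.array([
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[1, 0, 0], [0, 0, 0], [1, 0, 1]],
    [[0, 1, 1], [0, 1, 0], [1, 0, 1]],
])
# OR together all n matrices along the first axis
combined = np.bitwise_or.reduce(mats, axis=0)
print(combined)
```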
|
<python><machine-learning><pytorch><tensor>
|
2023-01-17 21:32:34
| 1
| 3,925
|
aspyct
|
75,152,397
| 1,142,502
|
Python-docx delete table code not working as expected
|
<p>I am trying to do the following tasks:</p>
<ul>
<li>Open a DOCX file using <code>python-docx</code> library</li>
<li>Count the # of tables in the DOCX: <code>table_count = len(document.tables)</code></li>
<li>The function <code>read_docx_table()</code> extracts the table, creates a dataframe.</li>
</ul>
<p>My objective here is as following:</p>
<ul>
<li>Extract ALL tables from the DOCX</li>
<li>Find the table that is empty</li>
<li>Delete the empty table</li>
<li>Save the DOCX</li>
</ul>
<p>My code is as follows:</p>
<pre><code>import pandas as pd
from docx import Document
import numpy as np
document = Document('tmp.docx')
table_count = len(document.tables)
table_num= table_count
print(f"Number of tables in the Document is: {table_count}")
nheader=1
i=0
def read_docx_table(document, table_num=1, nheader=1):
table = document.tables[table_num-1]
data = [[cell.text for cell in row.cells] for row in table.rows]
df = pd.DataFrame(data)
if nheader ==1:
df = df.rename(columns=df.iloc[0]).drop(df.index[0]).reset_index(drop=True)
elif nheader == 2:
outside_col, inside_col = df.iloc[0], df.iloc[1]
h_index = pd.MultiIndex.from_tuples(list(zip(outside_col, inside_col)))
df = pd.DataFrame(data, columns=h_index).drop(df.index[[0,1]]).reset_index(drop=True)
elif nheader > 2:
print("More than two headers. Not Supported in Current version.")
df = pd.DataFrame()
return df
def Delete_table(table):
print(f" We are deleting table now. Table index is {table}")
print(f"Type of index before casting is {type(table)}")
index = int(table)
print(f"Type of index is {type(index)}")
try:
print("Deleting started...")
document.tables[index]._element.getparent().remove(document.tables[index]._element)
except Exception as e:
print(e)
while (i < table_count):
print(f"Dataframe table number is {i} ")
df = read_docx_table(document,i,nheader)
df = df.replace('', np.nan)
print(df)
if (df.dropna().empty):
print(f'Empty DataFrame. Table Index = {i}')
print('Deleting Empty table...')
#Delete_table(i)
try:
document.tables[i]._element.getparent().remove(document.tables[i]._element)
print("Table deleted...")
except Exception as e:
print(e)
else:
print("DF is not empty...")
print(df.size)
i+=1
document.save('OUT.docx')
</code></pre>
<p>My INPUT docx has <code>3 tables</code>:</p>
<p><a href="https://i.sstatic.net/ZAhci.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZAhci.png" alt="enter image description here" /></a></p>
<p>But, my CODE gives me the following <strong>Output</strong>:</p>
<p><a href="https://i.sstatic.net/TQyiv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TQyiv.png" alt="enter image description here" /></a></p>
<p>It is keeping the empty table and deleting the non-empty table.</p>
<p>Is there something I am missing? I am doubting my logic to check the Table is empty using <code>if (df.dropna().empty):</code></p>
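<p>One thing I wondered about while debugging, illustrated with a plain list rather than python-docx: deleting by position inside the loop shifts the remaining items, so the advancing index can skip an element:</p>

```python
# stand-in for document.tables: "" marks an empty table
tables = ["", "", "A"]
count = len(tables)          # captured once, like table_count
i = 0
while i < count and i < len(tables):
    if tables[i] == "":
        del tables[i]        # everything after i shifts left...
    i += 1                   # ...but the index advances anyway
print(tables)                # ['', 'A'] -- one empty table survives
```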
|
<python><pandas><python-docx>
|
2023-01-17 21:27:19
| 1
| 427
|
aki2all
|
75,152,324
| 8,588,743
|
OpenCV's GrabCut produces unsatisfactory segmentation
|
<p>I have the following image</p>
<p><a href="https://i.sstatic.net/pa6tL.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pa6tL.png" alt="enter image description here" /></a></p>
<p>I want to identify the main object in this image, which is the blue couch, and remove the background (preferably turning it into white). I use the code below, but it's not really doing the job, as you can see in the second image. Is there a better way to do this than using OpenCV?</p>
<pre><code>import cv2
import numpy as np
import matplotlib.pyplot as plt
image_bgr = cv2.imread('path/couch.PNG')
image_rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
rectangle = (0,0,800,800)
mask = np.zeros(image_rgb.shape[:2], np.uint8)
bgdModel = np.zeros((1,65), np.float64)
fgdModel = np.zeros((1,65), np.float64)
cv2.grabCut(image_rgb, mask, rectangle, bgdModel, fgdModel, 5, cv2.GC_INIT_WITH_RECT)
mask_2 = np.where((mask == 2) | (mask == 0), 0, 1).astype('uint8')
image_rgd_nobg = image_rgb * mask_2[:,:, np.newaxis]
plt.imshow(image_rgd_nobg)
plt.axis('off')
plt.show()
</code></pre>
<p>result:</p>
<p><a href="https://i.sstatic.net/Wg5BK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Wg5BK.png" alt="enter image description here" /></a></p>
|
<python><opencv><computer-vision><image-segmentation><semantic-segmentation>
|
2023-01-17 21:19:31
| 0
| 903
|
Parseval
|
75,152,092
| 14,551,577
|
stale element reference: element is not attached to the page document: tree recursion in python selenium
|
<p>I am trying to download web pages using python selenium.<br />
There is a tree view on the left side and the content on the right side.</p>
<p>This is HTML of treeview. Of course, all sub menus are closed at first.</p>
<pre><code><ul>
<li>
<a href="#" onclick="openSubMenu()">item1</a>
<ul>
<li>
<a href="./item2.html">item2</a>
</li>
<li>
<a href="#" onclick="openSubMenu()">item3</a>
<ul>
<li>
<a href="./item4.html">item4</a>
</li>
<li>
<a href="#" onclick="openSubMenu()">item5</a>
<ul>
<li>
<a href="./item6.html">item6</a>
</li>
</ul>
</li>
</ul>
</li>
<li>
<a href="#" onclick="openSubMenu()">item7</a>
<ul>
<li>
<a href="./item8.html">item8</a>
</li>
</ul>
</li>
</ul>
</li>
<li>
<a href="#" onclick="openSubMenu()">item9</a>
<ul>
<li>
<a href="./item10.html">item10</a>
</li>
</ul>
</li>
<li>
<a href="#" onclick="openSubMenu()">item11</a>
<ul>
<li>
<a href="./item11.html">item12</a>
</li>
</ul>
</li>
</ul>
</code></pre>
<p>When I click an item, if it has a page link, the page is loaded into the <code>iframe</code> on the right; if not, it opens the sub-menu.</p>
<p>I used tree recursion to open all sub-menus.</p>
<pre><code>def tree_recursion(self, tree_container):
tree_branches = tree_container.find_elements(By.XPATH, './li')
for tree_branch in tree_branches:
time.sleep(0.5)
tree_branch.find_element(By.XPATH, './a').click()
try:
new_tree = tree_branch.find_element(By.XPATH, './ul')
if new_tree:
tree_recursion(new_tree)
except:
continue
</code></pre>
<p>But it didn't work, Following error occurred.</p>
<pre><code>File "...\run.py", line 105, in tree_recursion
tree_branch.find_element(By.XPATH, './a').click()
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webelement.py", line 433, in find_element
return self._execute(Command.FIND_CHILD_ELEMENT, {"using": by, "value": value})["value"]
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webelement.py", line 410, in _execute
return self._parent.execute(command, params)
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 444, in execute
self.error_handler.check_response(response)
File "...\AppData\Local\Programs\Python\Python310\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 249, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
(Session info: chrome=109.0.5414.75)
Stacktrace:
Backtrace:
(No symbol) [0x00B66643]
(No symbol) [0x00AFBE21]
(No symbol) [0x009FDA9D]
(No symbol) [0x00A009E4]
(No symbol) [0x00A008AD]
(No symbol) [0x00A00B30]
(No symbol) [0x00A30FAC]
(No symbol) [0x00A3147B]
(No symbol) [0x00A264C1]
(No symbol) [0x00A4FDC4]
(No symbol) [0x00A2641F]
(No symbol) [0x00A500D4]
(No symbol) [0x00A66B09]
(No symbol) [0x00A4FB76]
(No symbol) [0x00A249C1]
(No symbol) [0x00A25E5D]
GetHandleVerifier [0x00DDA142+2497106]
GetHandleVerifier [0x00E085D3+2686691]
GetHandleVerifier [0x00E0BB9C+2700460]
GetHandleVerifier [0x00C13B10+635936]
(No symbol) [0x00B04A1F]
(No symbol) [0x00B0A418]
(No symbol) [0x00B0A505]
(No symbol) [0x00B1508B]
BaseThreadInitThunk [0x7607FA29+25]
RtlGetAppContainerNamedObjectPath [0x777D7A9E+286]
RtlGetAppContainerNamedObjectPath [0x777D7A6E+238]
</code></pre>
<p>I've tried to solve this problem but haven't found a solution, because it needs a <strong>dynamic selector</strong> inside the tree recursion function.</p>
<p>What is the best solution for the dynamic selector?</p>
<p>Or is there any other way to scrape this?</p>
|
<python><selenium><web-scraping><recursion><treeview>
|
2023-01-17 20:52:56
| 1
| 644
|
bcExpt1123
|
75,152,070
| 10,791,330
|
Year transition on a 4-4-5 calendar outputs incorrect year on yyyy-mm and quarter
|
<p>I created a 4-4-5 calendar by repurposing some code here:
<a href="https://stackoverflow.com/questions/67017473/how-to-create-a-4-4-5-fiscal-calendar-using-python">How to create a 4-4-5 fiscal calendar using python?</a></p>
<pre><code>import pandas as pd
import datetime as dt
def create_445_calender(start_date: str, years: int, leap_week_year = 5, leap_week_quarter = 4):
output = []
tmp_datetime = dt.datetime.strptime(start_date,'%m/%d/%Y')
for year in range(1,years+1):
month_of_year = 1
week_of_year = 1
# 4 quarters
for quarters in range(1,5):
# 4 weeks
for weeks in range(1,5):
# 7 days
for days in range(1,8):
tmp_datestr = dt.datetime.strftime(tmp_datetime,'%m/%d/%Y')
tmp_weekday = dt.datetime.strftime(tmp_datetime,'%A')
tmp_monthstr = str(month_of_year) if month_of_year >= 10 else '0' + str(month_of_year)
tmp_yyyy_mm = dt.datetime.strftime(tmp_datetime,'%Y') + '-' + tmp_monthstr
tmp_quarter = 'Q' + str(quarters) + ' ' + dt.datetime.strftime(tmp_datetime,'%Y')
output.append([tmp_datestr,tmp_weekday, week_of_year, tmp_yyyy_mm, tmp_quarter])
tmp_datetime = tmp_datetime + dt.timedelta(days=1)
week_of_year += 1
month_of_year += 1
# 4 weeks
for weeks in range(1,5):
# 7 days
for days in range(1,8):
tmp_datestr = dt.datetime.strftime(tmp_datetime,'%m/%d/%Y')
tmp_weekday = dt.datetime.strftime(tmp_datetime,'%A')
tmp_monthstr = str(month_of_year) if month_of_year >= 10 else '0' + str(month_of_year)
tmp_yyyy_mm = dt.datetime.strftime(tmp_datetime,'%Y') + '-' + tmp_monthstr
tmp_quarter = 'Q' + str(quarters) + ' ' + dt.datetime.strftime(tmp_datetime,'%Y')
output.append([tmp_datestr,tmp_weekday, week_of_year, tmp_yyyy_mm, tmp_quarter])
tmp_datetime = tmp_datetime + dt.timedelta(days=1)
week_of_year += 1
month_of_year += 1
# 5 / 6 weeks
# 5 years
if (year % leap_week_year == 0 ) and (quarters == leap_week_quarter):
tmp_weeks = 6
else:
tmp_weeks = 5
for weeks in range(1,tmp_weeks+1):
# 7 days
for days in range(1,8):
tmp_datestr = dt.datetime.strftime(tmp_datetime,'%m/%d/%Y')
tmp_weekday = dt.datetime.strftime(tmp_datetime,'%A')
tmp_monthstr = str(month_of_year) if month_of_year >= 10 else '0' + str(month_of_year)
tmp_yyyy_mm = dt.datetime.strftime(tmp_datetime,'%Y') + '-' + tmp_monthstr
tmp_quarter = 'Q' + str(quarters) + ' ' + dt.datetime.strftime(tmp_datetime,'%Y')
output.append([tmp_datestr,tmp_weekday, week_of_year, tmp_yyyy_mm, tmp_quarter])
tmp_datetime = tmp_datetime + dt.timedelta(days=1)
week_of_year += 1
month_of_year += 1
df = pd.DataFrame(output,columns = ['date','day','week_no','yyyy-mm','quarter'])
df['date'] = pd.to_datetime(df['date'])
return df
cal = create_445_calender('01/01/2012', 30, 5, 4)
</code></pre>
<p>However, I found a bug in this code: when a fiscal year straddles the calendar year boundary, the <code>yyyy-mm</code> and <code>quarter</code> columns carry the wrong year for the days around that boundary, because each label is taken from the individual date's calendar year.</p>
<p>If you run the above code, you'll face two scenarios:</p>
<p>Scenario 1 (most common):</p>
<pre><code>date day week_no yyyy-mm quarter
12/30/12 Sunday 1 2012-01 Q1 2012
12/31/12 Monday 1 2012-01 Q1 2012
1/1/13 Tuesday 1 2013-01 Q1 2013
1/2/13 Wednesday 1 2013-01 Q1 2013
1/3/13 Thursday 1 2013-01 Q1 2013
1/4/13 Friday 1 2013-01 Q1 2013
1/5/13 Saturday 1 2013-01 Q1 2013
</code></pre>
<p><code>week_no</code> is consistently correct, but the <code>yyyy-mm</code> and <code>quarter</code> columns should show the year 2013 for this tail of December (one year higher than they currently are).</p>
<p>What it should be:</p>
<pre><code>date day week_no yyyy-mm quarter
12/30/12 Sunday 1 2013-01 Q1 2013
12/31/12 Monday 1 2013-01 Q1 2013
1/1/13 Tuesday 1 2013-01 Q1 2013
1/2/13 Wednesday 1 2013-01 Q1 2013
1/3/13 Thursday 1 2013-01 Q1 2013
1/4/13 Friday 1 2013-01 Q1 2013
1/5/13 Saturday 1 2013-01 Q1 2013
</code></pre>
<p>However, sometimes there will be a year with 53 weeks. In the case of 53 weeks, the year for January needs to be one number lower than it currently is.</p>
<p>Scenario 2 (only on 53 week years):</p>
<pre><code>date day week_no yyyy-mm quarter
12/26/21 Sunday 53 2021-12 Q4 2021
12/27/21 Monday 53 2021-12 Q4 2021
12/28/21 Tuesday 53 2021-12 Q4 2021
12/29/21 Wednesday 53 2021-12 Q4 2021
12/30/21 Thursday 53 2021-12 Q4 2021
12/31/21 Friday 53 2021-12 Q4 2021
1/1/22 Saturday 53 2022-12 Q4 2022
</code></pre>
<p>This is how it should look:</p>
<pre><code>date day week_no yyyy-mm quarter
12/26/21 Sunday 53 2021-12 Q4 2021
12/27/21 Monday 53 2021-12 Q4 2021
12/28/21 Tuesday 53 2021-12 Q4 2021
12/29/21 Wednesday 53 2021-12 Q4 2021
12/30/21 Thursday 53 2021-12 Q4 2021
12/31/21 Friday 53 2021-12 Q4 2021
1/1/22 Saturday 53 2021-12 Q4 2021
</code></pre>
<p>The question here is how do we correct the year value for both <code>yyyy-mm</code> and <code>quarter</code> so that it satisfies both scenarios?</p>
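<p>One idea I'm considering (an assumption, not yet wired into the function above): derive the label year once per fiscal year, e.g. from the fiscal year's midpoint, instead of calling <code>strftime('%Y')</code> on each individual date. The midpoint lands in the expected calendar year in both scenarios:</p>

```python
import datetime as dt

def fiscal_year_label(fy_start: dt.date, fy_days: int) -> int:
    """Calendar year at the midpoint of the fiscal year."""
    return (fy_start + dt.timedelta(days=fy_days // 2)).year

# scenario 1: fiscal 2013 starts 2012-12-30 (52 weeks = 364 days)
print(fiscal_year_label(dt.date(2012, 12, 30), 364))  # 2013
# scenario 2: a 53-week year (371 days) ending Sat 2022-01-01
print(fiscal_year_label(dt.date(2020, 12, 27), 371))  # 2021
```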
|
<python><python-3.x><pandas><date>
|
2023-01-17 20:50:18
| 1
| 758
|
Jacky
|
75,151,973
| 5,865,393
|
Latin-9 exported CSV file Flask
|
<p>I am trying to export data from the database to a CSV file. The export itself works; however, some Arabic text in the database comes out as Latin-9 mojibake in the exported file, as shown below. (<em>I am on Windows</em>)</p>
<p><a href="https://i.sstatic.net/KWOxZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KWOxZ.png" alt="CSV Export" /></a></p>
<p>When this CSV file is opened in Notepad, I can see the correct values</p>
<pre><code>ID,Serial,City,Office
1,ASDF4321,مصر,مصر
2,FDSA1234,السعودية,السعودية
3,ASDF4321,مصر,مصر
4,FDSA1234,السعودية,السعودية
</code></pre>
<p>My code is:</p>
<pre class="lang-py prettyprint-override"><code>import csv
from io import BytesIO, StringIO
from flask import Flask, send_file
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__, instance_relative_config=True)
app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///app.sqlite3"
db = SQLAlchemy(app)
class Device(db.Model):
__tablename__ = "device"
id = db.Column(db.Integer, primary_key=True)
serial = db.Column(db.String(255), nullable=True)
city = db.Column(db.String(255), nullable=True)
office = db.Column(db.String(255), nullable=True)
def __repr__(self):
return f"<Device {self.serial!r}, {self.city!r}, {self.office!r}>"
with app.app_context():
db.create_all()
    device1 = Device(serial="ASDF4321", city="مصر", office="مصر")
    device2 = Device(serial="FDSA1234", city="السعودية", office="السعودية")
db.session.add(device1)
db.session.add(device2)
db.session.commit()
@app.route("/", methods=["GET"])
def index():
return "Home"
@app.route("/export", methods=["GET", "POST"])
def export():
si = StringIO()
devices = Device.query.all()
csvwriter = csv.writer(si, delimiter=",", quotechar='"', quoting=csv.QUOTE_MINIMAL)
csvwriter.writerow(["ID", "Serial", "City", "Office"])
for i, device in enumerate(devices, start=1):
csvwriter.writerow([i, device.serial, device.city, device.office])
mem = BytesIO()
mem.write(si.getvalue().encode())
mem.seek(0)
si.close()
return send_file(
mem, mimetype="text/csv", download_name="Export-File.csv", as_attachment=True
)
if __name__ == "__main__":
app.run(debug=True)
</code></pre>
<p>How can I export to a CSV file and have it look like this:</p>
<p><a href="https://i.sstatic.net/6PBSy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6PBSy.png" alt="CSV Correct" /></a></p>
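<p>One direction I've been testing (hedged, since I assume Excel is what renders the mojibake): encode with a UTF-8 BOM via <code>utf-8-sig</code> instead of plain <code>.encode()</code>, using one of the Arabic city values:</p>

```python
import csv
import io

si = io.StringIO()
writer = csv.writer(si)
writer.writerow(["ID", "Serial", "City", "Office"])
writer.writerow([1, "ASDF4321", "مصر", "مصر"])
# Excel on Windows assumes a legacy codepage for BOM-less CSVs;
# a UTF-8 BOM makes it decode the Arabic correctly
payload = si.getvalue().encode("utf-8-sig")
print(payload[:3])  # b'\xef\xbb\xbf' -- the BOM
```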
|
<python><csv><flask>
|
2023-01-17 20:40:45
| 1
| 2,284
|
Tes3awy
|
75,151,887
| 569,976
|
matcher.knnMatch() error: unsupported format or combination of formats
|
<p>I was trying to simplify <a href="https://github.com/opencv/opencv/blob/4.x/samples/python/find_obj.py" rel="nofollow noreferrer">https://github.com/opencv/opencv/blob/4.x/samples/python/find_obj.py</a> and make it use <code>warpPerspective()</code> and am now getting an error.</p>
<p>Before I share the error here's the code:</p>
<pre><code>import sys
import cv2
import numpy as np
FLANN_INDEX_KDTREE = 1
def filter_matches(kp1, kp2, matches, ratio = 0.75):
mkp1, mkp2 = [], []
for m in matches:
if len(m) == 2 and m[0].distance < m[1].distance * ratio:
m = m[0]
mkp1.append( kp1[m.queryIdx] )
mkp2.append( kp2[m.trainIdx] )
p1 = np.float32([kp.pt for kp in mkp1])
p2 = np.float32([kp.pt for kp in mkp2])
kp_pairs = zip(mkp1, mkp2)
return p1, p2, list(kp_pairs)
def alignImages(im1, im2):
detector = cv2.AKAZE_create()
flann_params = dict(algorithm = FLANN_INDEX_KDTREE, trees = 5)
matcher = cv2.FlannBasedMatcher(flann_params, {})
kp1, desc1 = detector.detectAndCompute(im1, None)
kp2, desc2 = detector.detectAndCompute(im2, None)
raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2)
p1, p2, kp_pairs = filter_matches(kp1, kp2, raw_matches)
if len(p1) < 4:
print('%d matches found, not enough for homography estimation' % len(p1))
sys.exit()
height, width = im2.shape
imResult = cv2.warpPerspective(im1, H, (width, height))
return imResult
imRef = cv2.imread('ref.png', cv2.IMREAD_GRAYSCALE)
im = cv2.imread('new.png', cv2.IMREAD_GRAYSCALE)
imNew = alignImages(im, imRef)
cv2.imwrite('output.png', imNew)
</code></pre>
<p>Here's the error:</p>
<pre><code>Traceback (most recent call last):
File "align.py", line 47, in <module>
imNew = alignImages(im, imRef)
File "align.py", line 32, in alignImages
raw_matches = matcher.knnMatch(desc1, trainDescriptors = desc2, k = 2)
cv2.error: OpenCV(4.4.0) /tmp/pip-req-build-sw_3pm_8/opencv/modules/flann/src/miniflann.cpp:315: error: (-210:Unsupported format or combination of formats) in function 'buildIndex_'
> type=0
>
</code></pre>
<p>Any ideas?</p>
<p>If you want to see the problem for yourself vs looking at the code then download <a href="https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png" rel="nofollow noreferrer">https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png</a> and save that as ref.png and new.png. That gives me the error.</p>
|
<python><opencv>
|
2023-01-17 20:29:54
| 0
| 16,931
|
neubert
|
75,151,388
| 8,372,455
|
add numpy array to pandas df
|
<p>I'm experimenting with time series predictions, something like this:</p>
<pre><code>import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
model = SARIMAX(data.values,
order=order,
seasonal_order=seasonal_order)
result = model.fit()
train = data.sample(frac=0.8,random_state=0)
test = data.drop(train.index)
start = len(train)
end = len(train) + len(test) - 1
# Predictions for one-year against the test set
predictions = result.predict(start, end,
typ='levels')
</code></pre>
<p>where predictions is a numpy array. How do I add this to my <code>test</code> pandas df? If I try this:
<code>test['predicted'] = predictions.tolist()</code></p>
<p>This won't concatenate properly; I was hoping to add the predictions as another column in my df. Instead it looks like this:</p>
<pre><code>hour
2021-06-07 17:00:00 75726.57143
2021-06-07 20:00:00 62670.06667
2021-06-08 00:00:00 16521.65
2021-06-08 14:00:00 71628.1
2021-06-08 17:00:00 62437.16667
...
2021-09-23 22:00:00 7108.533333
2021-09-24 02:00:00 13325.2
2021-09-24 04:00:00 13322.31667
2021-09-24 13:00:00 37941.65
predicted [13605.31231433516, 12597.907337725523, 13484.... <--- not coming in as another df column
</code></pre>
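<p>To make the mismatch reproducible, here is a minimal stand-in with made-up numbers; converting the Series to a frame first is one thing I tried:</p>

```python
import numpy as np
import pandas as pd

# `test` here is a Series (a single column indexed by hour)
test = pd.Series([75726.6, 62670.1, 16521.7],
                 index=pd.to_datetime(["2021-06-07 17:00",
                                       "2021-06-07 20:00",
                                       "2021-06-08 00:00"]),
                 name="actual")
predictions = np.array([75000.0, 63000.0, 16000.0])

# item assignment on a Series adds a *row*; a DataFrame gets a column
result = test.to_frame()
result["predicted"] = predictions
print(result)
```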
<p>Would anyone have any advice? Am hoping to ultimately plot the predicted values against the test values as well as calculate rsme maybe something like:</p>
<pre><code>from sklearn.metrics import mean_squared_error
from statsmodels.tools.eval_measures import rmse
# Calculate root mean squared error
rmse(test, predictions)
# Calculate mean squared error
mean_squared_error(test, predictions)
</code></pre>
<p><em><strong>EDIT</strong></em></p>
<pre><code>train = data.sample(frac=0.8,random_state=0)
test = data.drop(train.index)
start = len(train)
end = len(train) + len(test) - 1
</code></pre>
|
<python><pandas><statsmodels>
|
2023-01-17 19:30:14
| 1
| 3,564
|
bbartling
|
75,151,180
| 12,244,355
|
Python: How to explode two columns and set prefix
|
<p>I have a DataFrame as follows:</p>
<pre><code>time asks bids
2022-01-01 [{'price':'0.99', 'size':'32213'}, {'price':'0.98', 'size':'23483'}] [{'price':'1.0', 'size':'23142'}, {'price':'0.99', 'size':'6436'}]
2022-01-02 [{'price':'0.99', 'size':'33434'}, {'price':'0.98', 'size':'33333'}] [{'price':'1.0', 'size':'343'}, {'price':'0.99', 'size':'2342'}]
...
2022-01-21 [{'price':'0.98', 'size':'32333'}, {'price':'0.98', 'size':'23663'}] [{'price':'1.0', 'size':'23412'}]
</code></pre>
<p>I want to explode the asks and bids columns and set prefixes to get the following:</p>
<pre><code>time asks_price asks_size bids_price bids_size
2022-01-01 0.99 32213 1.0 23142
2022-01-01 0.98 23483 0.99 6436
2022-01-02 0.99 33434 1.0 343
2022-01-02 0.98 33333 0.99 2342
...
2022-01-21 0.98 32333 1.0 23412
2022-01-21 0.98 23663 NaN NaN
</code></pre>
<p>Notice how the last row has NaN values under the bids_price and bids_size columns since there are no corresponding values.</p>
<p>How can this be achieved using pandas?</p>
<p>EDIT: Here is a snippet of the data:</p>
<pre><code>{'time': {0: '2022-05-07T00:00:00.000000000Z',
1: '2022-05-07T01:00:00.000000000Z',
2: '2022-05-07T02:00:00.000000000Z',
3: '2022-05-07T03:00:00.000000000Z',
4: '2022-05-07T04:00:00.000000000Z'},
'asks': {0: [{'price': '0.9999', 'size': '4220492'},
{'price': '1', 'size': '2556759'},
{'price': '1.0001', 'size': '941039'},
{'price': '1.0002', 'size': '458602'},
{'price': '1.0003', 'size': '257955'},
</code></pre>
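<p>A sketch of one direction I tried (helper name hypothetical): explode each side separately, align on time and position, and outer-join so missing bids become NaN:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "time": ["2022-01-01", "2022-01-21"],
    "asks": [[{"price": "0.99", "size": "32213"},
              {"price": "0.98", "size": "23483"}],
             [{"price": "0.98", "size": "32333"},
              {"price": "0.98", "size": "23663"}]],
    "bids": [[{"price": "1.0", "size": "23142"},
              {"price": "0.99", "size": "6436"}],
             [{"price": "1.0", "size": "23412"}]],
})

def explode_side(frame, col):
    # one row per dict, columns prefixed, positioned within each time
    s = frame.set_index("time")[col].explode()
    out = pd.DataFrame(s.tolist(), index=s.index).add_prefix(f"{col}_")
    out["pos"] = out.groupby(level=0).cumcount()
    return out.set_index("pos", append=True)

result = (explode_side(df, "asks")
          .join(explode_side(df, "bids"), how="outer")
          .reset_index(level="pos", drop=True)
          .reset_index())
print(result)
```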
|
<python><pandas><dataframe><prefix><pandas-explode>
|
2023-01-17 19:10:23
| 3
| 785
|
MathMan 99
|
75,151,016
| 19,797,660
|
How to rewrite my Pinescript code to Python
|
<p>I am trying to rewrite this code to Python:</p>
<pre><code>src = input.source(close, "Source")
volStop(src) =>
var max = src
var min = src
max := math.max(max, src)
min := math.min(min, src)
[max, min]
[max, min] = volStop(src)
plot(max, "Max", style=plot.style_cross)
</code></pre>
<p>Precisely I have a problem with these lines:</p>
<pre><code> max := math.max(max, src)
min := math.min(min, src)
</code></pre>
<p>In Python I have a function, let's call it <code>func1</code>, and I want to get the same result that the Pinescript is returning.</p>
<p>I have only tried for loop since from what I understand calling a function in Pinescript works kind of like for loop. And I tried to replicate the calculation but I couldn't achieve the expected results.</p>
<p>This is the line that is being plotted on Tradingview:</p>
<p><a href="https://i.sstatic.net/PxLnN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PxLnN.png" alt="enter image description here" /></a></p>
<p>And this is the line that is being plotted in Python: the area that is framed with red square is the approximate area that is visible on Tradingview screenshot.</p>
<p><a href="https://i.sstatic.net/S5SFJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S5SFJ.png" alt="enter image description here" /></a></p>
<p>My current code:</p>
<pre><code>maxmin = pd.DataFrame()
maxmin["max"] = price_df[f"{name_var}"]
maxmin["min"] = price_df[f"{name_var}"]
for i in range(price_df.shape[0]):
maxmin["max"].iloc[i] = max(maxmin["max"].shift(1).fillna(0).iloc[i], price_df[f"{name_var}"].iloc[i])
maxmin["min"].iloc[i] = min(maxmin["min"].shift(1).fillna(0).iloc[i], price_df[f"{name_var}"].iloc[i])
</code></pre>
<p>The <code>name_var</code> variable is set to 'Close' column.</p>
<p>How can I rewrite the Pinescript code to Python to get the same results?</p>
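<p>If it helps frame the question: my understanding (possibly wrong) is that the <code>var</code>/<code>:=</code> pattern is just a running maximum/minimum, which pandas exposes directly:</p>

```python
import pandas as pd

close = pd.Series([10.0, 12.0, 9.0, 15.0, 11.0])
# `max := math.max(max, src)` carried across bars == cumulative max
running_max = close.cummax()
running_min = close.cummin()
print(running_max.tolist())  # [10.0, 12.0, 12.0, 15.0, 15.0]
print(running_min.tolist())  # [10.0, 10.0, 9.0, 9.0, 9.0]
```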
|
<python><pandas><pine-script>
|
2023-01-17 18:53:29
| 1
| 329
|
Jakub Szurlej
|
75,150,865
| 7,422,352
|
Getting "(InvalidToken) when calling the ListObjectsV2 operation" when MLFlow is trying to access the artefacts stored on S3
|
<p>I am trying to start the MLFlow server on my local machine inside a python virtual environment using the following command:</p>
<pre><code>mlflow server --backend-store-uri postgresql://mlflow_user:mlflow@localhost/mlflow --artifacts-destination S3://<S3 bucket name>/mlflow/ --serve-artifacts -h 0.0.0.0 -p 8000
</code></pre>
<p>I have exported the following environment variables inside the <strong>activated python venv</strong>:</p>
<pre><code>export AWS_ACCESS_KEY_ID=<access key>
export AWS_SECRET_ACCESS_KEY=<secret key>
export DEFAULT_REGION_NAME=<region name>
export DEFAULT_OUTPUT_FORMAT=<output format>
</code></pre>
<p>MLFlow gives the following error while accessing the model artefacts for all the runs:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (InvalidToken) when calling the ListObjectsV2 operation: The provided token is malformed or otherwise invalid.
</code></pre>
<p>Any workaround for this?</p>
|
<python><amazon-s3><mlflow><model-management>
|
2023-01-17 18:39:42
| 1
| 5,381
|
Deepak Tatyaji Ahire
|
75,150,849
| 17,473,587
|
Checking value is in a list of dictionary values in Jinja
|
<p>I have a list of tables coming from a SQLite database:</p>
<pre class="lang-py prettyprint-override"><code>tbls_name = db_admin.execute(
"""SELECT name FROM sqlite_master WHERE type='table';"""
)
</code></pre>
<p>And need to check if <code>table_name</code> is in the resulting list of dictionaries.</p>
<p>In Python, I would do:</p>
<pre class="lang-py prettyprint-override"><code>if table_name in [d['name'] for d in tbls_name]:
</code></pre>
<p>How can I do this in Jinja?</p>
<pre><code>{% set tbl_valid = (table_name in [d['name'] for d in tbls_name]) %}
</code></pre>
<p>This throws an error.</p>
<p>Note that the <code>tbls_name</code> is a list of dictionaries, e.g.:</p>
<pre class="lang-py prettyprint-override"><code>[
{'name': 'tableName1'},
{'name': 'tableName2'},
{'name': 'tableName3'},
]
</code></pre>
|
<python><jinja2>
|
2023-01-17 18:38:36
| 1
| 360
|
parmer_110
|
75,150,811
| 12,027,869
|
Python: Conditional Row Values From a New Column Using dictionary, function, and lambda
|
<p>I have a dataframe:</p>
<pre><code>id
id1
id2
id3
id8
id9
</code></pre>
<p>I want to add a new column <code>new</code> with conditional row values as follows:</p>
<p>If a row from <code>id</code> == <code>id1</code>, then the new row is <code>id1 is cat 1</code></p>
<p>If a row from <code>id</code> == <code>id2</code>, then the new row is <code>id2 is cat 2</code></p>
<p>If a row from <code>id</code> == <code>id3</code>, then the new row is <code>id3 is cat 3</code></p>
<p>else, <code>idx is cat 0</code>, where x is the id that is not one of <code>id1</code>, <code>id2</code>, or <code>id3</code></p>
<p>This is what I tried so far. I think the solution should be to wrap the for loop inside a function and use that function with <code>apply</code> and/or <code>lambda</code>.</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'id': ['id1', 'id2', 'id3', 'id8', 'id9']
})
df
dict = {'id1': '1', 'id2': '2', 'id3': '3'}
for k, val in dict.items():
if k == "id1" or k == "id2" or k == "id3" in df['state']:
print(str(k) + " is cat " + str(val))
else:
print(str(k) + " is cat 0")
</code></pre>
<p>Desired result:</p>
<pre><code>id new
id1 id1 is cat 1
id2 id2 is cat 2
id3 id3 is cat 3
id8 id8 is cat 0
id9 id9 is cat 0
</code></pre>
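<p>One vectorized direction I've been circling (not sure it's idiomatic): map the known ids and fill the rest with <code>'0'</code>:</p>

```python
import pandas as pd

df = pd.DataFrame({"id": ["id1", "id2", "id3", "id8", "id9"]})
cats = {"id1": "1", "id2": "2", "id3": "3"}  # avoids shadowing dict()
# unmatched ids map to NaN, which fillna turns into the "cat 0" default
df["new"] = df["id"] + " is cat " + df["id"].map(cats).fillna("0")
print(df)
```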
|
<python><pandas><dataframe><dictionary>
|
2023-01-17 18:35:07
| 2
| 737
|
shsh
|
75,150,658
| 3,247,006
|
How to display an uploaded image in "Change" page in Django Admin?
|
<p>I'm trying to display an uploaded image in <strong>"Change" page</strong> in Django Admin but I cannot do it as shown below:</p>
<p><a href="https://i.sstatic.net/w44zr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/w44zr.png" alt="enter image description here" /></a></p>
<p>This is my code below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
class Product(models.Model):
name = models.CharField(max_length=50)
price = models.DecimalField(decimal_places=2, max_digits=5)
image = models.ImageField()
def __str__(self):
return self.name
</code></pre>
<pre class="lang-py prettyprint-override"><code># "admin.py"
from django.contrib import admin
from .models import Product
@admin.register(Product)
class ProductAdmin(admin.ModelAdmin):
pass
</code></pre>
<p>So, how can I display an uploaded image in <strong>"Change" page</strong> in Django Admin?</p>
|
<python><django><image><django-models><django-admin>
|
2023-01-17 18:21:34
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
75,150,647
| 607,407
|
Cannot change enablement of isort extension because of its extension kind
|
<p>I have disabled multiple extensions because I had some trouble with clangd and wasn't sure what is causing it.</p>
<p>I now wanted to re-enable the Python and Pylance extensions, but I am getting this error:</p>
<pre><code>Cannot change enablement of isort extension because of its extension kind
</code></pre>
<p>I do not really see an "isort" extension in the list. I am not sure what could be causing this.</p>
|
<python><visual-studio-code><pylance>
|
2023-01-17 18:20:13
| 0
| 53,877
|
TomΓ‘Ε‘ Zato
|
75,150,609
| 12,942,095
|
Is there a standard way to implement optional keyword arguments in python classes?
|
<p>I'm writing a class to do data analysis on a signal that I measure. There are many ways I can process the signal, and various optional metadata can be associated with each trial. My question boils down to: what is the best way to handle multiple keyword arguments so that my class can auto-detect the relevant ones, without just a bunch of if-else statements (similar to how you can pass many optional keywords to matplotlib plots)?</p>
<p>For example, lets say I have this hypothetical class that looks like this:</p>
<pre><code>class Signal:
    def __init__(self, filepath, **kwargs):
        self.filepath = filepath
        self.signal_df = pd.read_csv(self.filepath)
        for k, v in kwargs.items():
            setattr(self, k, v)
</code></pre>
<p>After construction, pertinent methods would then run depending on which keyword arguments were passed. Thus I could easily create the two following objects:</p>
<pre><code>signal_1 = Signal('filepath_0', **{'foo':1, 'bar':'9.2'})
signal_2 = Signal('filepath_1', **{'foo':12, 'baz':'red'})
</code></pre>
<p>To try and solve this, I've pretty much just implemented statements in the <code>__init__()</code> method, so that I'm doing something like this:</p>
<pre><code>class Signal:
    def __init__(self, filepath, **kwargs):
        self.filepath = filepath
        self.signal_df = pd.read_csv(self.filepath)
        for k, v in kwargs.items():
            setattr(self, k, v)
        if hasattr(self, 'foo'):
            self.method_0(self.foo)  # generic method that takes foo as argument
        if hasattr(self, 'bar'):
            self.method_1(self.bar)  # generic method that takes bar as argument
        else:
            self.method_2(1.0)  # alternate method if bar is not there
</code></pre>
<p>This just seems like a really clunky way of doing things and was hoping there may be a better solution. I appreciate any and all help!</p>
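<p>For what it's worth, one pattern that avoids the chain of <code>hasattr</code> checks is a dispatch table mapping keyword names to handler methods. The sketch below is only illustrative: the <code>handle_*</code> methods and the <code>processed</code> list are invented for the example, not part of the original class.</p>

```python
class Signal:
    """Sketch: dispatch recognised keywords through a table instead of if/else."""

    def __init__(self, **kwargs):
        self.processed = []
        for key, value in kwargs.items():
            setattr(self, key, value)
        # One handler per recognised keyword; each handler receives the
        # supplied value, or None when the keyword was absent, and applies
        # its own default in that case.
        dispatch = {"foo": self.handle_foo, "bar": self.handle_bar}
        for key, handler in dispatch.items():
            handler(kwargs.get(key))

    def handle_foo(self, foo):
        if foo is not None:
            self.processed.append(("foo", foo))

    def handle_bar(self, bar):
        # mirrors the "alternate method if bar is not there" branch
        self.processed.append(("bar", bar if bar is not None else 1.0))


s = Signal(foo=1)
print(s.processed)  # [('foo', 1), ('bar', 1.0)]
```

<p>Supporting a new keyword then means adding one entry to <code>dispatch</code> rather than another <code>if</code> branch.</p>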
|
<python><python-3.x><oop><keyword-argument>
|
2023-01-17 18:16:46
| 1
| 367
|
rsenne
|
75,150,535
| 7,668,467
|
Polars Create Column with String Formatting
|
<p>I have a polars dataframe:</p>
<pre><code>df = pl.DataFrame({'schema_name': ['test_schema', 'test_schema_2'],
                   'table_name': ['test_table', 'test_table_2'],
                   'column_name': ['test_column, test_column_2', 'test_column']})
</code></pre>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>schema_name</th>
<th>table_name</th>
<th>column_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>test_schema</td>
<td>test_table</td>
<td>test_column, test_column_2</td>
</tr>
<tr>
<td>test_schema_2</td>
<td>test_table_2</td>
<td>test_column</td>
</tr>
</tbody>
</table></div>
<p>I have a string:</p>
<pre><code>date_field_value_max_query = '''
select '{0}' as schema_name,
'{1}' as table_name,
greatest({2})
from {0}.{1}
group by 1, 2
'''
</code></pre>
<p>I would like to use polars to add a column by using string formatting. The target dataframe is this:</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th>schema_name</th>
<th>table_name</th>
<th>column_name</th>
<th>query</th>
</tr>
</thead>
<tbody>
<tr>
<td>test_schema</td>
<td>test_table</td>
<td>test_column, test_column_2</td>
<td>select test_schema, test_table, greatest(test_column, test_column_2) from test_schema.test_table group by 1, 2</td>
</tr>
<tr>
<td>test_schema_2</td>
<td>test_table_2</td>
<td>test_column</td>
<td>select test_schema_2, test_table_2, greatest(test_column) from test_schema_2.test_table_2 group by 1, 2</td>
</tr>
</tbody>
</table></div>
<p>In pandas, I would do something like this:</p>
<pre><code>df.apply(lambda row: date_field_value_max_query.format(row['schema_name'], row['table_name'], row['column_name']), axis=1)
</code></pre>
<p>For polars, I tried this:</p>
<pre><code>df.map_rows(lambda row: date_field_value_max_query.format(row[0], row[1], row[2]))
</code></pre>
<p>...but this returns only the one column, and I lose the original three columns. I also know this row-wise approach is not recommended in polars when it can be avoided.</p>
<p>How can I perform string formatting across multiple dataframe columns with the output column attached to the original dataframe?</p>
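<p>As a side note, the per-row formatting itself is plain Python and can be sanity-checked without any dataframe library; the rows below mirror the example dataframe. (Polars also ships a <code>pl.format</code> expression whose <code>{}</code> placeholders are filled positionally from column expressions, which may avoid the Python-level apply entirely; the snippet here only verifies the template logic.)</p>

```python
# Stand-alone check of the per-row query formatting, independent of polars.
template = (
    "select '{0}' as schema_name, '{1}' as table_name, "
    "greatest({2}) from {0}.{1} group by 1, 2"
)

rows = [
    {"schema_name": "test_schema", "table_name": "test_table",
     "column_name": "test_column, test_column_2"},
    {"schema_name": "test_schema_2", "table_name": "test_table_2",
     "column_name": "test_column"},
]

queries = [
    template.format(r["schema_name"], r["table_name"], r["column_name"])
    for r in rows
]
print(queries[0])
```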
|
<python><python-polars>
|
2023-01-17 18:09:52
| 1
| 2,434
|
OverflowingTheGlass
|
75,150,515
| 10,480,181
|
How to create click Command using API?
|
<p>I am trying to create a Python click Command using the API instead of decorators, because I want to create commands dynamically from a <code>yaml</code> file.</p>
<p><strong>parsed yaml file</strong>:</p>
<pre><code>{'adhoc': {'args': ['abcd',
{'analytic-type': {'type': 'click.Choice(["prof", "fac"], '
'case_sensitive=False)'}},
{'lobplat': {'default': 'ALL',
'type': 'click.Choice(["A","B","C","D","ALL",],case_sensitive=False)'}}],
'options': [{'environment': {'alias': '-e',
'envvar': 'env',
'show_default': 'loc',
'type': 'click.Choice(["a", "b", '
'"c", "d", "e"], '
'case_sensitive=False)'}},
{'email': {'alias': '-m',
'default': 'test@test.com',
'multiple': True}},
{'runtype': {'alias': '-r',
'default': 'ADHOC',
'type': 'click.Choice(["TEST","ADHOC","SCHEDULED"], '
'case_sensitive=False)'}}],
'script': 'nohup '
'/path/to/script/script'}}
</code></pre>
<p>At the top level it defines a command called <code>adhoc</code> which has 3 parts:</p>
<ol>
<li>Arguments (<em>args</em>)</li>
<li>Options (<em>options</em>)</li>
<li>Script (This is the function of the command)</li>
</ol>
<p>Both arguments and options hold a list of different parameters that I want to create.
Here is the class that I have written:</p>
<pre><code>import sys

import click
import yaml


class Commander():
    def __init__(self) -> None:
        pass

    def run_command(self, script):
        pass

    def str_to_class(self, classname):
        return getattr(sys.modules[__name__], classname)

    def create_args(self, arguments):
        all_args = []
        for arg in arguments:
            if isinstance(arg, str):
                all_args.append(click.Argument([arg]))
            else:
                attributes = arg[list(arg.keys())[0]]
                print(attributes)
                all_args.append(click.Argument([arg], **attributes))
        return all_args

    def convert_to_command(self, yaml):
        for key, value in yaml.items():
            name = key
            script = value["script"]
            options = value["options"]
            args = value["args"]
            click_arg = self.create_args(args)
            print(click_arg)


if __name__ == "__main__":
    commander = Commander()
    yaml = {'adhoc': {'args': ['abcd',
                               {'analytic-type': {'type': 'click.Choice(["prof", "fac"], '
                                                          'case_sensitive=False)'}},
                               {'lobplat': {'default': 'ALL',
                                            'type': 'click.Choice(["A","B","C","D","ALL",],case_sensitive=False)'}}],
                      'options': [{'environment': {'alias': '-e',
                                                   'envvar': 'env',
                                                   'show_default': 'loc',
                                                   'type': 'click.Choice(["a", "b", '
                                                           '"c", "d", "e"], '
                                                           'case_sensitive=False)'}},
                                  {'email': {'alias': '-m',
                                             'default': 'test@test.com',
                                             'multiple': True}},
                                  {'runtype': {'alias': '-r',
                                               'default': 'ADHOC',
                                               'type': 'click.Choice(["TEST","ADHOC","SCHEDULED"], '
                                                       'case_sensitive=False)'}}],
                      'script': 'nohup '
                                '/path/to/script/script'}}
    commander.convert_to_command(yaml)
</code></pre>
<p>These functions are not complete. Currently I am working on writing a function to create Arguments out of the YAML dictionary. However, upon running <code>convert_to_command()</code> I get the following error:</p>
<pre><code> File "/project/helper/commander.py", line 111, in <module>
commander.convert_to_command(yaml)
File "/project/hassle/helper/commander.py", line 45, in convert_to_command
click_arg = self.create_args(args)
File "/project/hassle/helper/commander.py", line 32, in create_args
all_args.append(click.Argument([arg],**attributes))
File "/home/myself/miniconda3/envs/py_copier/lib/python3.7/site-packages/click/core.py", line 2950, in __init__
super().__init__(param_decls, required=required, **attrs)
File "/home/myself/miniconda3/envs/py_copier/lib/python3.7/site-packages/click/core.py", line 2073, in __init__
param_decls or (), expose_value
File "/home/myself/miniconda3/envs/py_copier/lib/python3.7/site-packages/click/core.py", line 2983, in _parse_decls
name = name.replace("-", "_").lower()
AttributeError: 'dict' object has no attribute 'replace'
</code></pre>
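<p>The traceback suggests the whole dict entry is being passed as the parameter declaration. A click-free sketch of the normalisation step (the helper name is invented for illustration): each YAML entry is either a bare string or a one-key dict, and only the key should become the declaration, while the nested dict becomes the keyword attributes.</p>

```python
def split_param(entry):
    """Normalise a YAML param entry into (name, attrs)."""
    if isinstance(entry, str):
        return entry, {}
    # one-key dict such as {'lobplat': {'default': 'ALL', ...}}
    name = next(iter(entry))
    return name, entry[name]


print(split_param('abcd'))
print(split_param({'lobplat': {'default': 'ALL'}}))
# With this, the call becomes click.Argument([name], **attrs) rather than
# click.Argument([arg], **attributes), so click receives a string declaration.
```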
|
<python><python-click>
|
2023-01-17 18:08:02
| 1
| 883
|
Vandit Goel
|
75,150,460
| 2,224,218
|
What is the best way of Importing a module to work on both Python 2 and Python 3?
|
<p>I have two test launchers, one with a Python 2 env and another with a Python 3 env.</p>
<p>I use <code>from itertools import izip_longest</code>, which worked fine in the Python 2 env, but the same name is missing in the Python 3 env: <code>izip_longest</code> was renamed to <code>zip_longest</code> in Python 3.</p>
<p>To make the script work in both envs, I did something like the following.</p>
<p>Solution 1:</p>
<pre><code>try:
    from itertools import zip_longest
except ImportError:
    from itertools import izip_longest as zip_longest
</code></pre>
<p>This worked as expected.</p>
<p>There is another way of handling this scenario.</p>
<p>Solution 2:</p>
<pre><code>import six

if six.PY2:
    from itertools import izip_longest as zip_longest
else:
    from itertools import zip_longest
</code></pre>
<p>This also worked as expected.</p>
<p>Question: which is the best way of handling such differences between Python 2 and Python 3?</p>
<p>In solution 1, when the code runs on Python 2, the first import raises an <code>ImportError</code>, which is handled so the script can then import the correct module.</p>
<p>In solution 2, there is no import error that needs to be handled.</p>
<p>I have these two solutions.
Please suggest more efficient ones if any. Thanks.</p>
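<p>A third, equally common variant (not from the question) branches on <code>sys.version_info</code> explicitly, which avoids both the exception handling and the <code>six</code> dependency:</p>

```python
import sys

# Branch on the interpreter version explicitly; no exception handling,
# no third-party dependency.
if sys.version_info[0] >= 3:
    from itertools import zip_longest
else:  # Python 2
    from itertools import izip_longest as zip_longest

print(list(zip_longest([1, 2, 3], "ab", fillvalue=None)))
```

<p>All three approaches behave identically at import time; the try/except form has the advantage of keying on the actual feature rather than the version number.</p>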
|
<python><python-3.x><python-2.7>
|
2023-01-17 18:02:54
| 1
| 588
|
Subhash
|
75,150,439
| 3,156,300
|
ImageMagick auto converting colored RGB image to grayscale?
|
<p>How can I prevent ImageMagick from converting my RGB-channel images into a solid grayscale image? For some reason it converts the final PSD to grayscale when it should remain RGB.</p>
<p>It all breaks when I change the gradient of my background solid to <code>'gradient:#f2f2f2-#f2f2f2'</code> instead of <code>'gradient:#FFA500-#f2f2f2'</code>.</p>
<p>What i want is this...</p>
<p><a href="https://i.sstatic.net/Iz9jT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Iz9jT.png" alt="enter image description here" /></a></p>
<p>but i get this...</p>
<p><a href="https://i.sstatic.net/yREzw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yREzw.png" alt="enter image description here" /></a></p>
<p>The process below: I use one of the images as a size reference to create a solid light-gray background, then stack all the images in sequential order (solid background, then shadow, then chair) and save the result as a PSD.</p>
<p>Below is all my code</p>
<p>Resources:
<a href="https://www.dropbox.com/s/4awtl0ce8zm4lz7/shadow.png?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/4awtl0ce8zm4lz7/shadow.png?dl=0</a>
<a href="https://www.dropbox.com/s/yupbaux5grfvo1j/foreground.png?dl=0" rel="nofollow noreferrer">https://www.dropbox.com/s/yupbaux5grfvo1j/foreground.png?dl=0</a></p>
<pre class="lang-py prettyprint-override"><code>import os
import subprocess
import sys

from PySide2 import QtGui, QtCore

CONVERT_EXE = '...ImageMagick-7.1.0-54-portable-Q16-x64/convert.exe'


def create_solid_background_im(filepath):
    solidOutput = os.path.splitext(filepath)[0] + '_solid.jpg'
    image_reader = QtGui.QImageReader(filepath)
    image_size = image_reader.size()

    cmd = [
        CONVERT_EXE,
        '-size',
        '{}x{}'.format(image_size.width(), image_size.height()),
        '-channel',
        'RGB',
        '-depth',
        '8',
        'gradient:#FFA500-#f2f2f2',  # works as expected, outputs an RGB image
        # 'gradient:#f2f2f2-#f2f2f2',  # makes the final image grayscale
        solidOutput
    ]
    subprocess.Popen(cmd).wait()
    return solidOutput


def create_layered_image_im(filepath, files=[]):
    cmd = [
        CONVERT_EXE,
    ]
    for i, fp in enumerate(files):
        filename = os.path.splitext(os.path.basename(fp))[0]
        args = ['(', fp, '-set', 'label', filename, '-channel', 'RGB', ')']
        cmd.extend(args)
        # Photoshop bug: the first image added needs to be duplicated
        if i == 0:
            cmd.extend(args)

    cmd.extend([
        '-depth',
        '8',
        '-set',
        'colorspace',
        'RGB',
        '-channel',
        'RGB',
        filepath
    ])
    subprocess.Popen(cmd).wait()
    return filepath


if __name__ == '__main__':
    create_solid_background_im('foreground.png')
    create_layered_image_im('comp.psd', [
        'foreground_solid.jpg',
        'shadow.png',
        'foreground.png',
    ])
</code></pre>
|
<python><imagemagick>
|
2023-01-17 18:00:30
| 0
| 6,178
|
JokerMartini
|
75,150,428
| 392,041
|
apache beam to mongo
|
<p>I am trying to export the result of a pipeline to MongoDB. The problem I have is that my <code>to_mongo</code> function receives an array <code>["122", "sam"]</code>. How can I convert this to an object that easily becomes a document for Mongo?</p>
<pre><code>import apache_beam as beam


def to_mongo(item):
    """Convert a pipeline element to a MongoDB document."""
    print(item)
    return {"ip": "133"}


class WriteToMongo(beam.PTransform):
    """Write to mongo."""

    def expand(self, items):
        return (items
                | 'Convert format' >> beam.Map(to_mongo)
                | 'Write Output' >> beam.io.WriteToMongoDB(
                    uri='mongo',
                    db='test',
                    coll='test'))
</code></pre>
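<p>If the element really arrives as a positional list such as <code>["122", "sam"]</code>, one option is to zip it with the known column order. This is a hedged sketch: the field names below are assumed for illustration, not taken from the actual pipeline.</p>

```python
# Assumed column order of the incoming list elements.
FIELDS = ["ip", "name"]

def to_mongo(item):
    # ["122", "sam"] -> {"ip": "122", "name": "sam"}, a plain dict that
    # WriteToMongoDB can store as a document.
    return dict(zip(FIELDS, item))

print(to_mongo(["122", "sam"]))  # {'ip': '122', 'name': 'sam'}
```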
|
<python><mongodb><apache-beam>
|
2023-01-17 17:59:26
| 0
| 562
|
Josh Fradley
|
75,150,334
| 12,725,674
|
FileNotFoundError when writing txt file
|
<p>I am using the following code to write list items to a .txt file. I get a <code>FileNotFoundError</code> at a specific file and cannot figure out what causes the issue.</p>
<pre><code>if os.path.isdir(f'C:/Users/Info/Desktop/Dissertation/Impression Management/Analyse/10 CEO Transcripts/Interviews WORK/{deal_no}/Earnings Calls Texts/{year}'):
    with open(f'C:/Users/Info/Desktop/Dissertation/Impression Management/Analyse/10 CEO Transcripts/Interviews WORK/{deal_no}/Earnings Calls Texts/{year}/{fulldate}_{file}.txt', 'w') as filehandle:
        for item in textlist_new:
            filehandle.write(f'{item}\n')
</code></pre>
<p>Error message:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'C:/Users/Info/Desktop/Dissertation/Impression Management/Analyse/10 CEO Transcripts/Interviews WORK/3032771020/Earnings Calls Texts/2017/February 22, 2017_Event Brief of Q4 2016 Arris International PLC Earnings Call Q4 2016 and Acquisition Update - Final.pdf.txt'
</code></pre>
<p>However, the directory does exist, and previous txt files were created without problems.</p>
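<p>One plausible cause on Windows (not verifiable from the question alone) is the classic 260-character <code>MAX_PATH</code> limit: the failing path is unusually long, while the earlier, shorter paths worked. Its length can be checked directly:</p>

```python
# The exact path from the error message; on Windows, paths longer than the
# classic MAX_PATH limit fail unless long-path support is enabled.
path = (
    'C:/Users/Info/Desktop/Dissertation/Impression Management/Analyse/'
    '10 CEO Transcripts/Interviews WORK/3032771020/Earnings Calls Texts/2017/'
    'February 22, 2017_Event Brief of Q4 2016 Arris International PLC '
    'Earnings Call Q4 2016 and Acquisition Update - Final.pdf.txt'
)
print(len(path))  # well over 255 characters
```

<p>If that is the cause, shortening the generated filename (or enabling Windows long-path support) should make the write succeed.</p>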
|
<python>
|
2023-01-17 17:51:24
| 0
| 367
|
xxgaryxx
|
75,150,287
| 13,174,189
|
How to save indices, created with nmslib?
|
<p>I am using nmslib with the hnsw method for vector similarity search. I have built an index class for index creation:</p>
<pre><code>class NMSLIBIndex():
    def __init__(self, vectors, labels):
        self.dimention = vectors.shape[1]
        self.vectors = vectors.astype('float32')
        self.labels = labels

    def build(self):
        self.index = nmslib.init(method='hnsw', space='cosinesimil')
        self.index.addDataPointBatch(self.vectors)
        self.index.createIndex({'post': 2})

    def query(self, vector, k=10):
        indices = self.index.knnQuery(vector, k=k)
        return [self.labels[i] for i in indices[0]]
</code></pre>
<p>I was referring to this article <a href="https://towardsdatascience.com/comprehensive-guide-to-approximate-nearest-neighbors-algorithms-8b94f057d6b6" rel="nofollow noreferrer">https://towardsdatascience.com/comprehensive-guide-to-approximate-nearest-neighbors-algorithms-8b94f057d6b6</a></p>
<p>Now I want to load these indices into my database for use in an online environment. My question is: how can I save the indices built on my vectors?</p>
|
<python><python-3.x><function><nmslib>
|
2023-01-17 17:47:23
| 1
| 1,199
|
french_fries
|
75,150,281
| 8,461,786
|
Why does __init__ of Exception require positional (not keyword) arguments in order to stringify exception correctly?
|
<p>Can someone clarify the role of <code>super().__init__(a, b, c)</code> in making the second assertion pass? It will fail without it (empty string is returned from <code>str</code>).</p>
<p>Per my understanding, <code>Exception</code> is a built-in type that takes no keyword arguments.
But what exactly happens in <code>Exception</code> when <code>super().__init__(a, b, c)</code> is called?
Can calling <code>__init__</code> like that have unwanted side effects?</p>
<pre><code>class MyException(Exception):
    def __init__(self, a, b, c):
        self.a = a
        self.b = b
        self.c = c
        super().__init__(a, b, c)


e1 = MyException('a', 'b', 'c')
assert str(e1) == "('a', 'b', 'c')"

e2 = MyException(a='a', b='b', c='c')
assert str(e2) == "('a', 'b', 'c')"  # if the "super()..." line above is commented out, this assertion will not pass, because str(e2) is an empty string
</code></pre>
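<p>The mechanism can be observed directly in a small sketch (class name invented): <code>BaseException.__str__</code> renders <code>self.args</code>, and <code>args</code> is filled from positional arguments, both by <code>BaseException.__new__</code> at construction and by <code>BaseException.__init__</code>. With positional construction, <code>__new__</code> has already captured the arguments even without the <code>super().__init__</code> call; with keyword-only construction nothing reaches <code>args</code> unless <code>super().__init__</code> passes them on.</p>

```python
class Demo(Exception):
    def __init__(self, a):
        self.a = a  # deliberately no super().__init__ call


pos = Demo('x')   # positional: BaseException.__new__ stores ('x',) in args
kw = Demo(a='x')  # keyword-only: nothing is stored in args

print(pos.args, repr(str(pos)))  # ('x',) 'x'
print(kw.args, repr(str(kw)))    # () ''
```

<p>So calling <code>super().__init__(a, b, c)</code> simply guarantees that <code>args</code> is populated regardless of how the exception was constructed; it has no other side effect beyond setting <code>args</code> (which also affects <code>repr</code> and pickling).</p>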
|
<python>
|
2023-01-17 17:47:05
| 1
| 3,843
|
barciewicz
|