| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
77,806,419
| 12,492,890
|
Generating cumulative counts in NumPy using a vectorized implementation
|
<p>Here's a sample of the numpy array I have:</p>
<pre><code>y = np.array([
[ 0],
[ 0],
[ 2],
[ 1],
[ 0],
[ 1],
[ 3],
[-1],
])
</code></pre>
<p>I'm attempting to generate a new column containing the cumulative counts with respect to each value in the input array:</p>
<pre><code>y = np.array([
[ 0, 1],
[ 0, 2],
[ 2, 1],
[ 1, 1],
[ 0, 3],
[ 1, 2],
[ 3, 1],
[-1, 1],
])
</code></pre>
<p>So far I've been using the following pandas implementation to solve this problem:</p>
<pre><code>y_pd = pd.DataFrame(y, columns=['LABEL'])
y_pd = pd.concat([
y_pd,
y_pd.groupby('LABEL').cumcount().to_frame().rename(columns = {0:'cumcounts'}) +1
], axis=1)
</code></pre>
<p>However, I'm looking for a NumPy implementation instead. Here's my NumPy attempt at the same problem:</p>
<pre><code>y_np = np.hstack([y, y])
for label in np.unique(y_np):
slice_length = (y_np[:, -2]==label).sum()
y_np[y_np[:, -2]==label, -1] = range(1, slice_length+1)
</code></pre>
<p>I suspect, though, that this loop-based aggregation could be replaced with a faster vectorized implementation.</p>
<p>I've already checked the following links on SO to try solving this problem, with no success:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/38013778/is-there-any-numpy-group-by-function">Is there any numpy group by function?</a></li>
<li><a href="https://stackoverflow.com/questions/49238332/numpy-array-group-by-one-column-sum-another">Numpy array: group by one column, sum another</a></li>
<li><a href="https://stackoverflow.com/questions/49141969/vectorized-groupby-with-numpy">Vectorized groupby with NumPy</a></li>
<li><a href="https://stackoverflow.com/questions/69228055/numpy-group-by-returning-original-indexes-sorted-by-the-result">numpy group by, returning original indexes sorted by the result</a></li>
</ul>
<p>Could you provide any help in this regard?</p>
<p>Note: The numpy array I have is actually much bigger in terms of cardinality and number of fields, and the order of the records must not be altered during the process.</p>
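<p>For reference, one way the loop above could be vectorized (a sketch, not necessarily the fastest approach) is to stably argsort the labels, compute running counts within the sorted groups, and scatter the counts back so the original record order is preserved:</p>

```python
import numpy as np

y = np.array([0, 0, 2, 1, 0, 1, 3, -1])

# Stable sort groups equal labels together while preserving their
# original relative order within each group.
order = np.argsort(y, kind='stable')
sorted_y = y[order]

# Index where each run of equal labels starts in the sorted array.
group_start = np.r_[0, np.flatnonzero(np.diff(sorted_y)) + 1]
group_sizes = np.diff(np.r_[group_start, len(y)])

# Running count within each group: position minus the group's start.
counts_sorted = np.arange(len(y)) - np.repeat(group_start, group_sizes) + 1

# Scatter back so record order is untouched, as the question requires.
counts = np.empty(len(y), dtype=int)
counts[order] = counts_sorted

result = np.column_stack([y, counts])
```

The only Python-level loop is gone; everything is a handful of O(n log n) array operations.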
|
<python><pandas><numpy><group-by><count>
|
2024-01-12 12:00:35
| 1
| 15,647
|
lemon
|
77,806,225
| 12,390,973
|
How to use a binary variable in another constraint in Pyomo?
|
<p>I have created an RTC energy dispatch model using Pyomo. There is a demand profile, and to serve it we have three main components: <strong>Solar, Wind, and Battery</strong>. So that the model won't give infeasibility errors when there is not enough energy supply from the three main sources, I have added one backup generator called <strong>Lost_Load</strong>.
I have also added one binary variable called <strong>insufficient_dispatch</strong>, which will be <strong>1</strong> if <strong>lost_load >= 10% of the demand profile</strong> and <strong>0</strong> otherwise. Here are the inputs that I am using:
<a href="https://i.sstatic.net/5DdCe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5DdCe.png" alt="enter image description here" /></a></p>
<p>Here is the code:</p>
<pre><code>import datetime
import pandas as pd
import numpy as np
from pyomo.environ import *
profiles = pd.read_excel('profiles.xlsx', sheet_name='15min', usecols='B:D')
solar_profile = profiles.iloc[:, 0].values
wind_profile = profiles.iloc[:, 1].values
load_profile = profiles.iloc[:, 2].values
interval_freq = 60
solar_capacity = 120
wind_capacity = 170
power = 90
energy = 360
model = ConcreteModel()
model.m_index = Set(initialize=list(range(len(load_profile))))
# model.total_instance = Param(initialize=35010000)
model.chargeEff = Param(initialize=0.92)
model.dchargeEff = Param(initialize=0.92)
model.storage_power = Param(initialize=power)
model.storage_energy = Param(initialize=energy)
#variable
# model.grid = Var(model.m_index, domain=Reals)
model.grid = Var(model.m_index, domain=NonNegativeReals)
model.p_max = Var(domain=NonNegativeReals)
# storage
#variable
model.e_in = Var(model.m_index, domain=NonNegativeReals)
model.e_out = Var(model.m_index, domain=NonNegativeReals)
model.soc = Var(model.m_index, domain=NonNegativeReals)
model.ein_eout_lmt = Var(model.m_index, domain=Binary)
# Solar variable
solar_ac = np.minimum(solar_profile * solar_capacity * (interval_freq / 60), solar_capacity * (interval_freq / 60))
model.solar_cap = Param(initialize=solar_capacity)
model.solar_use = Var(model.m_index, domain=NonNegativeReals)
# Wind variables
wind_ac = np.minimum(wind_profile * wind_capacity * (interval_freq / 60), wind_capacity * (interval_freq / 60))
model.wind_cap = Param(initialize=wind_capacity)
model.wind_use = Var(model.m_index, domain=NonNegativeReals)
# Solar profile
model.solar_avl = Param(model.m_index, initialize=dict(zip(model.m_index, solar_ac)))
# Wind profile
model.wind_avl = Param(model.m_index, initialize=dict(zip(model.m_index, wind_ac)))
# Load profile
model.load_profile = Param(model.m_index, initialize=dict(zip(model.m_index, load_profile * interval_freq / 60.0)))
model.lost_load = Var(model.m_index, domain=NonNegativeReals)
# Auxiliary binary variable for the condition
model.insufficient_dispatch = Var(model.m_index, domain=Binary)
# Objective function
def revenue(model):
total_revenue = sum(
model.grid[m] * 100 +
model.lost_load[m] * -1000000000 #* model.insufficient_dispatch[m]
for m in model.m_index)
return total_revenue
model.obj = Objective(rule=revenue, sense=maximize)
eps = 1e-3 # to create a gap for gen3_status constraints
bigm = 1e3 # choose this value high but not so much to avoid numeric instability
# Create 2 BigM constraints
def gen3_on_off1(model, m):
return model.lost_load[m] >= 0.1 * model.load_profile[m] + eps - bigm * (1 - model.insufficient_dispatch[m])
def gen3_on_off2(model, m):
return model.lost_load[m] <= 0.1 * model.load_profile[m] + bigm * model.insufficient_dispatch[m]
model.gen3_on_off1 = Constraint(model.m_index, rule=gen3_on_off1)
model.gen3_on_off2 = Constraint(model.m_index, rule=gen3_on_off2)
def energy_balance(model, m):
return model.grid[m] <= model.solar_use[m] + model.wind_use[m] + model.e_out[m] - model.e_in[m] + model.lost_load[m]
model.energy_balance = Constraint(model.m_index, rule=energy_balance)
def grid_limit(model, m):
return model.grid[m] >= model.load_profile[m]
model.grid_limit = Constraint(model.m_index, rule=grid_limit)
def max_solar_gen(model, m):
eq = model.solar_use[m] <= model.solar_avl[m] * 1
return eq
model.max_solar_gen = Constraint(model.m_index, rule=max_solar_gen)
def min_solar_gen(model, m):
eq = model.solar_use[m] >= model.solar_avl[m] * 0.6
return eq
# model.min_solar_gen = Constraint(model.m_index, rule=min_solar_gen)
def max_wind_gen(model, m):
eq = model.wind_use[m] <= model.wind_avl[m] * 1
return eq
model.max_wind_gen = Constraint(model.m_index, rule=max_wind_gen)
def min_wind_gen(model, m):
eq = model.wind_use[m] >= model.wind_avl[m] * 0.3
return eq
# model.min_wind_gen = Constraint(model.m_index, rule=min_wind_gen)
# Charging and discharging controller
def ein_limit1(model, m):
eq = model.e_in[m] + (1 - model.ein_eout_lmt[m]) * 1000000 >= 0
return eq
model.ein_limit1 = Constraint(model.m_index, rule=ein_limit1)
def eout_limit1(model, m):
eq = model.e_out[m] + (1 - model.ein_eout_lmt[m]) * -1000000 <= 0
return eq
model.eout_limit1 = Constraint(model.m_index, rule=eout_limit1)
def ein_limit2(model, m):
eq = model.e_out[m] + (model.ein_eout_lmt[m]) * 1000000 >= 0
return eq
model.ein_limit2 = Constraint(model.m_index, rule=ein_limit2)
def eout_limit2(model, m):
eq = model.e_in[m] + (model.ein_eout_lmt[m]) * -1000000 <= 0
return eq
model.eout_limit2 = Constraint(model.m_index, rule=eout_limit2)
#max charging
def max_charge(model, m):
return model.e_in[m] <= model.storage_power*interval_freq/60
model.max_charge = Constraint(model.m_index, rule=max_charge)
# max discharging
def max_discharge(model, m):
return model.e_out[m] <= model.storage_power*interval_freq/60
model.max_max_discharge = Constraint(model.m_index, rule=max_discharge)
def soc_update(model, m):
if m == 0:
eq = model.soc[m] == \
(
(model.storage_energy * 1) +
(model.e_in[m] * model.chargeEff) -
(model.e_out[m] / model.dchargeEff) #* model.insufficient_dispatch[m]
)
else:
eq = model.soc[m] == \
(
model.soc[m - 1] +
(model.e_in[m] * model.chargeEff) -
(model.e_out[m] / model.dchargeEff) #* model.insufficient_dispatch[m]
)
return eq
model.soc_update = Constraint(model.m_index, rule=soc_update)
def soc_min(model, m):
eq = model.soc[m] >= model.storage_energy * 0.2
return eq
model.soc_min = Constraint(model.m_index, rule=soc_min)
def soc_max(model, m):
eq = model.soc[m] <= model.storage_energy * 1
return eq
model.soc_max = Constraint(model.m_index, rule=soc_max)
def throughput_limit(model):
energy_sum = sum(model.e_out[idx] for idx in model.m_index)
return energy_sum <= model.storage_energy*365
# model.throughput_limit = Constraint(rule=throughput_limit)
Solver = SolverFactory('gurobi')
Solver.options['LogFile'] = "gurobiLog"
Solver.options['MIPGap'] = 0.50
print('\nConnecting to Gurobi Server...')
results = Solver.solve(model)
if (results.solver.status == SolverStatus.ok):
if (results.solver.termination_condition == TerminationCondition.optimal):
print("\n\n***Optimal solution found***")
print('obj returned:', round(value(model.obj), 2))
else:
print("\n\n***No optimal solution found***")
if (results.solver.termination_condition == TerminationCondition.infeasible):
print("Infeasible solution")
exit()
else:
print("\n\n***Solver terminated abnormally***")
exit()
grid_use = []
solar = []
wind = []
e_in = []
e_out=[]
soc = []
lost_load = []
insufficient_dispatch = []
load = []
for i in range(len(load_profile)):
grid_use.append(value(model.grid[i]))
solar.append(value(model.solar_use[i]))
wind.append(value(model.wind_use[i]))
lost_load.append(value(model.lost_load[i]))
e_in.append(value(model.e_in[i]))
e_out.append(value(model.e_out[i]))
soc.append(value(model.soc[i]))
load.append(value(model.load_profile[i]))
insufficient_dispatch.append(value(model.insufficient_dispatch[i]))
df_out = pd.DataFrame()
df_out["Grid"] = grid_use
df_out["Potential Solar"] = solar_ac
df_out["Potential Wind"] = wind_ac
df_out["Actual Solar"] = solar
df_out["Actual Wind"] = wind
# df_out["Solar Curtailment"] = solar_ac - np.array(solar)
# df_out["Wind Curtailment"] = wind_ac - np.array(wind)
df_out["Charge"] = e_in
df_out["Discharge"] = e_out
df_out["soc"] = soc
df_out["Lost Load"] = lost_load
df_out["Load profile"] = load
df_out["Dispatch"] = df_out["Grid"] - df_out["Lost Load"]
df_out["Insufficient Supply"] = insufficient_dispatch
summary = {
'Grid': [df_out['Grid'].sum()/1000],
'Wind': [df_out['Actual Wind'].sum()/1000],
'Solar': [df_out['Actual Solar'].sum()/1000],
'Storage Discharge': [df_out['Discharge'].sum()/1000],
'Storage Charge': [df_out['Charge'].sum()/1000],
'Shortfall': [df_out['Lost Load'].sum()/1000],
# 'Solar Curtailment': [df_out['Solar Curtailment'].sum()/1000],
# 'Wind Curtailment': [df_out['Wind Curtailment'].sum()/1000],
}
summaryDF = pd.DataFrame(summary)
timestamp = datetime.datetime.now().strftime('%y-%m-%d_%H-%M')
with pd.ExcelWriter('output solar '+str(solar_capacity)+'MW wind '+str(wind_capacity)+'MW ESS '+str(power)+'MW and '+str(energy)+'MWh '+str(timestamp)+'.xlsx') as writer:
df_out.to_excel(writer, sheet_name='dispatch')
summaryDF.to_excel(writer, sheet_name='Summary', index=False)
</code></pre>
<p>I want to use this binary variable <strong>insufficient_dispatch</strong> with another constraint, <strong>energy_balance</strong> (which is already in the code):</p>
<pre><code>def energy_balance(model, m):
return model.grid[m] <= model.solar_use[m] + model.wind_use[m] + model.e_out[m] - model.e_in[m] + model.lost_load[m]
</code></pre>
<p>it should be like:</p>
<pre><code>if model.insufficient_dispatch[m] == 0:
return model.grid[m] <= model.solar_use[m] + model.wind_use[m] + model.e_out[m] - model.e_in[m] + model.lost_load[m]
else:
return model.grid[m] * 0.9 <= model.solar_use[m] + model.wind_use[m] + model.e_out[m] - model.e_in[m] + model.lost_load[m]
</code></pre>
<p>But this won't work. Can someone please help?
This is the graphical representation of one of the months:
<a href="https://i.sstatic.net/75DGl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/75DGl.png" alt="enter image description here" /></a></p>
<p>What I want is something like this:
<a href="https://i.sstatic.net/TU0eW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TU0eW.png" alt="enter image description here" /></a></p>
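<p>For what it's worth, the usual way to express this kind of either/or condition in a MILP is a big-M pair of inequalities rather than a Python <code>if</code> on the variable (a Pyomo rule is built once, before the solve, so the binary has no value yet). The check below is plain Python with scalar numbers, not Pyomo; it only illustrates how the pair switches between the full and the relaxed balance depending on the binary, and <code>M</code> is an assumed bound on the worst-case slack:</p>

```python
def balance_pair_holds(grid, rhs, b, M=1e3):
    """Big-M pair replacing `if b == 0: grid <= rhs else: 0.9*grid <= rhs`.

    c1 is binding when b == 0 (its M term vanishes); c2 is binding when
    b == 1. M must exceed any possible violation of the inactive side.
    """
    c1 = grid <= rhs + M * b               # full balance, enforced if b == 0
    c2 = 0.9 * grid <= rhs + M * (1 - b)   # 90% balance, enforced if b == 1
    return c1 and c2

# b == 0 forces the full balance: a grid of 100 against an rhs of 95 fails.
assert not balance_pair_holds(100, 95, 0)
# b == 1 only requires 90% of grid to be covered: 90 <= 95 passes.
assert balance_pair_holds(100, 95, 1)
```

In the model this would become two <code>Constraint</code> rules over <code>model.m_index</code>, mirroring the existing <code>gen3_on_off1</code>/<code>gen3_on_off2</code> pattern, with <code>model.insufficient_dispatch[m]</code> as the binary.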
|
<python><pyomo><gurobi>
|
2024-01-12 11:27:37
| 1
| 845
|
Vesper
|
77,805,985
| 5,392,289
|
Type hinting a nested dictionary with varying value types
|
<p>My dictionary looks as follows:</p>
<pre><code>spreadsheet_list: dict[str, dict[str, Union[str, list[str]]]] = {
'final_kpis': {
'gsheet_key': '1np6fM9d_BrBuDbBKWkWwUjhNypoCtto_84bLcx-HBdk',
'primary_key': ['KPI_Reference']
},
'maplecroft_risk_drivers': {
'gsheet_key': '1rV2oXbvvXcq6glLpHu1X4e_huR4mkZ4bbcZ2cQPv_A0',
'primary_key': ['Commodity_type', 'Commodity', 'Country']
}
}
</code></pre>
<p>I am looping over the dict with:</p>
<pre><code>for name, info in spreadsheet_list.items():
do_stuff(name, info['gsheet_key'], info['primary_key'])
</code></pre>
<p>Where the expected types are specified in the function args:</p>
<pre><code>def do_stuff(name: str, gsheet_key: str, primary_key: list[str]):
</code></pre>
<p>Error returned is:</p>
<pre><code>error: Argument 2 to "do_stuff" has incompatible type "str | list[str]"; expected "str" [arg-type]
error: Argument 3 to "do_stuff" has incompatible type "str | list[str]"; expected "list[str]" [arg-type]
</code></pre>
<p>I have seen <a href="https://stackoverflow.com/a/74287279/5392289">this</a> solution using <code>TypedDict</code>, but not for a nested dictionary.</p>
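<p>For reference, the <code>TypedDict</code> approach extends naturally to the nested case: only the inner dict needs a <code>TypedDict</code>, and the outer annotation stays a plain <code>dict</code>. A sketch reusing the keys from the question:</p>

```python
from typing import TypedDict

class SpreadsheetInfo(TypedDict):
    gsheet_key: str
    primary_key: list[str]

spreadsheet_list: dict[str, SpreadsheetInfo] = {
    'final_kpis': {
        'gsheet_key': '1np6fM9d_BrBuDbBKWkWwUjhNypoCtto_84bLcx-HBdk',
        'primary_key': ['KPI_Reference'],
    },
}

def do_stuff(name: str, gsheet_key: str, primary_key: list[str]) -> str:
    # mypy now knows info['gsheet_key'] is str and info['primary_key']
    # is list[str], so no union narrowing is needed at the call site.
    return f"{name}:{gsheet_key}:{','.join(primary_key)}"

for name, info in spreadsheet_list.items():
    result = do_stuff(name, info['gsheet_key'], info['primary_key'])
```

Because each key has its own declared type, the <code>str | list[str]</code> union never arises and both arg-type errors disappear.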
|
<python><mypy>
|
2024-01-12 10:42:17
| 1
| 1,305
|
Oliver Angelil
|
77,805,935
| 4,844,184
|
Inherit and modify methods from a parent class at run time in Python
|
<p>I have a parent class <strong>of which I don't know the methods a priori</strong>. Its methods can do arbitrary things such as modifying attributes of the class.</p>
<p>Say:</p>
<pre><code>class Parent():
def __init__(self):
self.a = 42
def arbitrary_name1(self, foo, bar):
self.a += 1
return foo + bar
def arbitrary_name2(self, foo, bar):
self.a -= 1
return foo ** 2 + bar
my_parent = Parent()
</code></pre>
<p>I want to dynamically create a child class where such methods exist but have been modified: for instance, say each original method should now be called twice and return the list of results from the original parent method being called twice. <strong>Note that the methods should modify the instance as the original methods did</strong>.
In other words, if I knew the parent class and its methods beforehand, I would have done (in a static fashion):</p>
<pre><code>class MyChild(Parent):
def __init__(self):
super().__init__()
def arbitrary_name1(self, foo, bar):
list_of_res = []
for _ in range(2):
list_of_res.append(super().arbitrary_name1(foo, bar))
return list_of_res
def arbitrary_name2(self, foo, bar):
list_of_res = []
for _ in range(2):
list_of_res.append(super().arbitrary_name2(foo, bar))
return list_of_res
</code></pre>
<p>Which would have given me the behavior I want:</p>
<pre><code>my_child = MyChild()
assert my_child.arbitrary_name1(1, 2) == [3, 3]
assert my_child.a == 44
print("Success!")
</code></pre>
<p>However, you only get a <code>my_parent</code> instance, which can have arbitrary methods that you don't know in advance.
To simplify, let's further assume that you can know the signature of the methods and that they all share the same one (one could use <code>inspect.signature</code> if that were not the case, or <code>functools.wraps</code> to forward arguments).
The way I approached the problem was to use <code>types.MethodType</code> to bind the old methods to the new methods in various ways; I tried variations of the following:</p>
<pre><code>import re
import types
# Get all methods of parent class dynamically
parent_methods_names = [method_name for method_name in dir(my_parent)
if callable(getattr(my_parent, method_name)) and not re.match(r"__.+__", method_name)]
def do_twice(function):
def wrapped_function(self, foo, bar):
res = []
for _ in range(2):
# Note that self is passed implicitly but can also be passed explicitly
res.append(function(foo, bar))
return res
return wrapped_function
my_dyn_child = Parent()
for method_name in parent_methods_names:
# following https://stackoverflow.com/questions/962962/python-changing-methods-and-attributes-at-runtime
f = types.MethodType(do_twice(getattr(my_parent, method_name)), my_dyn_child)
setattr(my_dyn_child, method_name, f)
assert getattr(my_dyn_child, parent_methods_names[0])(1, 2) == [3, 3]
assert my_dyn_child.a == 44
print("Success!")
</code></pre>
<pre><code>Success!
Traceback (most recent call last):
File "test.py", line 52, in <module>
assert my_dyn_child.a == 44
AssertionError
</code></pre>
<p>The method seems to work as intended, as the first assert passes.</p>
<p>However, to my horror, <code>my_dyn_child.a == 42</code>, so the second assert fails. I imagine the <code>self</code> is not bound properly and thus is not the same instance, but there is some dark magic going on.</p>
<p>Is there a way to accomplish the desired behavior?</p>
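<p>One way that does preserve the instance binding (a sketch) is to wrap the <em>unbound</em> functions taken from the parent class's <code>__dict__</code> and build a subclass with <code>type()</code>, so that <code>self</code> is passed through explicitly to the wrapped function and mutations land on the child instance:</p>

```python
import functools

class Parent:
    def __init__(self):
        self.a = 42
    def arbitrary_name1(self, foo, bar):
        self.a += 1
        return foo + bar

def do_twice(func):
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        # func is the plain function from the class dict (not a bound
        # method), so passing this instance as self mutates this instance.
        return [func(self, *args, **kwargs) for _ in range(2)]
    return wrapper

def make_child_class(parent_cls):
    overrides = {
        name: do_twice(attr)
        for name, attr in vars(parent_cls).items()
        if callable(attr) and not name.startswith('__')
    }
    return type('Dyn' + parent_cls.__name__, (parent_cls,), overrides)

child = make_child_class(Parent)()
```

The failure in the question comes from wrapping <code>getattr(my_parent, method_name)</code>, a method already bound to <code>my_parent</code>: the wrapped call then mutates <code>my_parent.a</code>, not the new object's attribute.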
|
<python><class><types><decorator>
|
2024-01-12 10:35:45
| 1
| 2,566
|
jeandut
|
77,805,825
| 14,147,996
|
Efficient loading of directory sorted by modification date
|
<p>I am reading a directory with Python, containing <em>many</em> files, most of which are old.
However, I am only interested in the files generated in the past 24h.</p>
<p>Running something like</p>
<pre><code>for filename in os.listdir(dir_path):
if now - timedelta(hours=24) <= os.path.getmtime(filename) <= now:
do_something()
</code></pre>
<p>takes very long since the entire <code>dir_path</code> is scanned.</p>
<p>Is there a quicker possibility to load <code>dir_path</code> with the files sorted by modification date?</p>
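<p>A sketch of the usual mitigation: <code>os.scandir</code> exposes each entry's <code>stat</code> result gathered during the directory scan itself (on most platforms), which avoids one extra system call per file compared with calling <code>os.path.getmtime</code> on every name:</p>

```python
import os
import time

def recent_files(dir_path, hours=24):
    """Yield paths of regular files modified within the last `hours`."""
    cutoff = time.time() - hours * 3600
    with os.scandir(dir_path) as entries:
        for entry in entries:
            # entry.stat() is usually served from data cached during the
            # scan, so this does not re-stat every file individually.
            if entry.is_file() and entry.stat().st_mtime >= cutoff:
                yield entry.path
```

This still visits every directory entry (there is no OS-level "sorted by mtime" listing), but it cuts the per-file cost substantially on large directories.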
|
<python><performance><operating-system>
|
2024-01-12 10:15:09
| 1
| 365
|
Vivian
|
77,805,804
| 896,670
|
How to replace substring from df1['Column 1'] with values from df2['Column 2'] when df1['Column 1'] contains df2['Column 1']?
|
<p>How to replace substring from df1['Column 1'] with values from df2['Column 2'] when df1['Column 1'] contains df2['Column 1']?</p>
<p>df1:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column1</th>
</tr>
</thead>
<tbody>
<tr>
<td>A&O Inc.</td>
</tr>
<tr>
<td>HP Canada</td>
</tr>
</tbody>
</table>
</div>
<p>df2:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column1</th>
<th>Column2</th>
</tr>
</thead>
<tbody>
<tr>
<td>A&O</td>
<td>Allen & Overy</td>
</tr>
<tr>
<td>HP</td>
<td>Hewlett Packard</td>
</tr>
</tbody>
</table>
</div>
<p>Expected Output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Column1</th>
</tr>
</thead>
<tbody>
<tr>
<td>Allen & Overy Inc.</td>
</tr>
<tr>
<td>Hewlett Packard Canada</td>
</tr>
</tbody>
</table>
</div>
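<p>One possible approach (a sketch; note that short keys like <code>HP</code> could also match inside longer words unless word boundaries are added to the patterns) is <code>Series.replace</code> with a regex mapping built from df2:</p>

```python
import re
import pandas as pd

df1 = pd.DataFrame({'Column1': ['A&O Inc.', 'HP Canada']})
df2 = pd.DataFrame({'Column1': ['A&O', 'HP'],
                    'Column2': ['Allen & Overy', 'Hewlett Packard']})

# Escape each key so characters like '&' or '.' are matched literally,
# then apply all substitutions in a single pass.
mapping = {re.escape(k): v for k, v in zip(df2['Column1'], df2['Column2'])}
df1['Column1'] = df1['Column1'].replace(mapping, regex=True)
```

With <code>regex=True</code>, <code>replace</code> substitutes matching substrings instead of requiring whole-cell equality, which is exactly the "contains" behavior asked for.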
|
<python><pandas><dataframe>
|
2024-01-12 10:11:41
| 2
| 2,256
|
aleafonso
|
77,805,469
| 2,237,916
|
Pandas to Parquet with per-column compression
|
<p>The <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_parquet.html" rel="nofollow noreferrer">pandas documentation</a> marks the following:</p>
<blockquote>
<p><strong>compression</strong>: str or None, default ‘snappy’</p>
<p>Name of the compression to use. Use None for no compression. Supported options: ‘snappy’, ‘gzip’, ‘brotli’, ‘lz4’, ‘zstd’.</p>
</blockquote>
<p>In the <a href="https://arrow.apache.org/docs/python/generated/pyarrow.parquet.write_table.html" rel="nofollow noreferrer">arrow documentation</a>, they state the per-column compression:</p>
<blockquote>
<p><strong>compression</strong>: str or dict</p>
<p>Specify the compression codec, either on a general basis or per-column. Valid values: {‘NONE’, ‘SNAPPY’, ‘GZIP’, ‘BROTLI’, ‘LZ4’,‘ZSTD’}.</p>
</blockquote>
<p>As you can see, it seems from the documentation that this is not implemented in pandas. Has anyone tried it and made per-column Parquet compression work with pandas? Or am I missing something?</p>
|
<python><pandas><parquet>
|
2024-01-12 09:11:08
| 1
| 7,301
|
silgon
|
77,805,320
| 5,510,540
|
Question answering with BERT: very poor results
|
<p>Imagine I have the following text:</p>
<pre><code>text = "Once upon a time there was an old mother pig who had three little pigs and not enough food to feed them. So when they were old enough, she sent them out into the world to seek their fortunes. The first little pig was very lazy. He didn't want to work at all and he built his house out of straw. The second little pig worked a little bit harder but he was somewhat lazy too and he built his house out of sticks. Then, they sang and danced and played together the rest of the day. The third little pig worked hard all day and built his house with bricks. It was a sturdy house complete with a fine fireplace and chimney. It looked like it could withstand the strongest winds. The next day, a wolf happened to pass by the lane where the three little pigs lived, and he saw the straw house, and he smelled the pig inside. He thought the pig would make a mighty fine meal and his mouth began to water. So he knocked on the door and said: Little pig! Littl e pig! Let me in! Let me in! But the little pig saw the wolf's big paws through the keyhole, so he answered back: No! No! No! Not by the hairs on my chinny chin chin! Three Little Pigs, the straw houseThen the wolf showed his teeth and said: Then I'll huff and I'll puff and I'll blow your house down. So he huffed and he puffed and he blew the house down! The wolf opened his jaws very wide and bit down as hard as he could, but the first little pig escaped and ran away to hide with the second little pig. The wolf continued down the lane and he passed by the second house made of sticks, and he saw the house, and he smelled the pigs inside, and his mouth began to water as he thought about the fine dinner they would make. So he knocked on the door and said: Littl e pigs! Little pigs! Let me in! Let me in! But the little pigs saw the wolf's pointy ears through the keyhole, so they answered back: No! No! No! Not by the hairs on our chinny chin chin! 
So the wolf showed his teeth and said: Then I'll huff and I'll puff and I'll blow your house down! So he huffed and he puffed and he blew the house down! The wolf was greedy and he tried to catch both pigs at once, but he was too greedy and got neither! His big jaws clamped down on nothing but air and the two little pigs s crambled away as fast as their little hooves would carry them. The wolf chased them down the lane and he almost caught them. But they made it to the brick house and slammed the door closed before the wolf could catch them. The three little pigs they were v ery frightened, they knew the wolf wanted to eat them. And that was very, very true. The wolf hadn't eaten all day and he had worked up a large appetite chasing the pigs around and now he could smell all three of them inside and he knew that the three litt le pigs would make a lovely feast. Three Little Pigs at the Brick House So the wolf knocked on the door and said: Little pigs! Little pigs! Let me in! Let me in! But the little pigs saw the wolf's narrow eyes through the keyhole, so they answered back: No! No! No! Not by the hairs on our chinny chin chin! So the wolf showed his teeth and said: Then I'll huff and I'll puff and I'll blow your house down. Well! he huffed and he puffed. He puffed and he huffed. And he huffed , huffed, and he puffed, puffed, but he could not blow the house down. At last, he was so out of breath that he couldn't huff and he couldn't puff anymore. So he stopped to rest and thought a bit. But this was too much. The wolf danced about with rage and swore he would come down the chimney and eat up the little pig for his supper. But while he was climbing on to the roof the little pig made up a blazing fire and put on a big pot full of water to boil. Then, just as the wolf was coming down the chimney, the little piggy pulled off the lid, an d plop! in fell the wolf into the scalding water. 
So the little piggy put on the cover again, boiled the wolf up, and the three little pigs ate him for supper."
</code></pre>
<p>I want to develop a question-answering model over different chunks of the text. For example, I want to ask the BERT model whether or not a given chunk relates to pigs. To do so, I do the following:</p>
<pre><code>import os
import pandas as pd
from transformers import AutoTokenizer, pipeline, AutoModelForQuestionAnswering
# Split the text into chunks
chunk_size = 50
overlap_size = 10
chunks = [text[i:i + chunk_size] for i in range(0, len(text), chunk_size - overlap_size)]
tokenizer_path = hf_downloader.download_model('distilbert-base-uncased', tokenizer_only=True)
model_path = hf_downloader.download_model('distilbert-base-uncased')
my_tokenizer = AutoTokenizer.from_pretrained(tokenizer_path)
my_model = AutoModelForQuestionAnswering.from_pretrained(model_path)
# Initialize the question-answering pipeline
qa_pipeline = pipeline('question-answering', model=my_model, tokenizer=my_tokenizer)
# List to store the scores
chunk_scores = []
threshold = 0.01
# List to store the interpretations
chunk_interpretations = []
# Process each chunk with the question-answering pipeline
for chunk in chunks:
# Define your question here
question = "Does the following text relates to pigs?"
answer = qa_pipeline({'question': question, 'context': chunk})
score = answer.get('score', 0)
chunk_scores.append(score)
# Create a DataFrame with the chunk scores
df = pd.DataFrame({'Chunk': chunks, 'Score': chunk_scores})
# Display the DataFrame
print(df)
</code></pre>
<p>This simple task gives really poor results:</p>
<pre><code> Chunk Score
0 Once upon a time there was an old mother pig w... 0.007304
1 pig who had three little pigs and not enough ... 0.009538
2 nough food to feed them. So when they were old... 0.005439
3 re old enough, she sent them out into the worl... 0.008510
4 e world to seek their fortunes. The first litt... 0.009954
.. ... ...
94 off the lid, an d plop! in fell the wolf into ... 0.005313
95 into the scalding water. So the little piggy ... 0.005894
96 piggy put on the cover again, boiled the wolf ... 0.006594
97 wolf up, and the three little pigs ate him fo... 0.008629
98 him for supper. 0.045236
</code></pre>
<p>Am I missing something? Thanks very much in advance!</p>
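<p>One issue independent of the model choice: slicing the raw string every <code>chunk_size - overlap_size</code> characters cuts words (and therefore tokens) in half at almost every boundary. A word-based sketch of the same overlapping-window idea, which keeps words intact:</p>

```python
def chunk_words(text, chunk_size=40, overlap=10):
    """Split on whitespace so chunks never cut a word in half."""
    words = text.split()
    step = chunk_size - overlap
    return [' '.join(words[i:i + chunk_size])
            for i in range(0, len(words), step)]

# 100 identical one-letter words -> windows of 40 words, stepping by 30.
chunks = chunk_words('w ' * 100)
```

Separately, an extractive question-answering pipeline returns a text span plus a confidence for that span, not a yes/no judgment, so low scores on a yes/no question are expected regardless of chunking; a text-classification or zero-shot-classification pipeline is the better fit for "does this chunk relate to pigs?".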
|
<python><huggingface-transformers><bert-language-model>
|
2024-01-12 08:44:03
| 0
| 1,642
|
Economist_Ayahuasca
|
77,805,006
| 10,097,229
|
ufunc 'add' did not contain loop with matching data types
|
<p>I have a dataframe where some columns have values and some are NaN (which is a float type).</p>
<p>This is the dataframe-</p>
<p><a href="https://i.sstatic.net/0iWj0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0iWj0.png" alt="enter image description here" /></a></p>
<p>I am looping over the <code>describe</code> and <code>referring</code> columns, and if a row has a file name, the code refers to that value and populates the dataframe column based on that file name.</p>
<p>This is the code</p>
<pre><code>for i in range(0,len(table_names)):
if type(table_names['referring'][i])!=float:
a = table_names['referring'][i] #if the referring column row has some value, take that value
b = table_names['columns'][i] #take the row name from `columns` column
a = a + '.xlsx' #this will make the file name which i need to refer to populate data
product_name = pd.read_excel(a) #read the file
product_name = product_name.values.tolist()
df[b] = product_name #populate the particular column with the content of the file we just read
</code></pre>
<p>If all the values of <code>referring</code> and <code>describe</code> columns are <code>NaN</code>, then I am getting the following error-</p>
<pre><code>a = a + '.xlsx'
numpy.core._exceptions._UFuncNoLoopError: ufunc 'add' did not contain a loop with signature matching types (dtype('float64'), dtype('<U5')) -> None
</code></pre>
<p>I referred to this <a href="https://stackoverflow.com/questions/44527956/python-ufunc-add-did-not-contain-a-loop-with-signature-matching-types-dtype">question</a> but it shows to change type which I cannot do as it is a file name I am creating.</p>
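<p>A note on the likely cause: in an all-NaN column (dtype <code>float64</code>), each element is a <code>numpy.float64</code>, so the exact-type test <code>type(...) != float</code> is <code>True</code> and the NaN falls through to the string concatenation. A <code>pd.isna</code> guard (sketch below with toy data) handles <code>float('nan')</code>, <code>numpy.float64</code> NaN, and <code>None</code> alike:</p>

```python
import numpy as np
import pandas as pd

table_names = pd.DataFrame({'referring': [np.nan, 'products'],
                            'columns': [np.nan, 'col_a']})

filenames = []
for i in range(len(table_names)):
    ref = table_names['referring'][i]
    # pd.isna is True for every flavor of missing value, regardless of
    # whether the element's exact type is float or numpy.float64.
    if pd.isna(ref):
        continue
    filenames.append(str(ref) + '.xlsx')
```

With this guard the NaN rows are skipped and the <code>'.xlsx'</code> concatenation only ever sees real file-name strings.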
|
<python><list><numpy>
|
2024-01-12 07:30:54
| 0
| 1,137
|
PeakyBlinder
|
77,804,589
| 10,035,190
|
OSError: Couldn't deserialize thrift: No more data to read. Deserializing page header failed
|
<p>I am fetching data from Event Hub and uploading it to blob storage with blob_type <code>AppendBlob</code>. It appends correctly, but when I download and try to read the resulting parquet file I get this error: <code>OSError: Couldn't deserialize thrift: No more data to read. Deserializing page header failed.</code> Sometimes I get this error as well: <code>Unexpected end of stream: Page was smaller (4) than expected (13)</code>. Could anyone help me understand both errors and solve the former?</p>
<pre><code>import asyncio
from datetime import datetime
import time
from datetime import datetime
import pandas as pd
from io import BytesIO
from azure.storage.blob import BlobServiceClient
from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import (BlobCheckpointStore)
EVENT_HUB_CONNECTION_STR = ""
EVENT_HUB_NAME = ""
BLOB_STORAGE_CONNECTION_STRING = ""
BLOB_CONTAINER_NAME = ""
async def on_event(partition_context, event):
global finalDF
try:
data = event.body_as_json(encoding='UTF-8')
df=pd.DataFrame(data,index=[0])
finalDF=pd.concat([finalDF,df])
if finalDF.shape[0]>100:
uniqueBPIds=(finalDF['batteryserialnumber'].unique()).tolist()
parquet = BytesIO()
for i in uniqueBPIds:
tempdf=finalDF[finalDF['batteryserialnumber']==i]
tempdf.to_parquet(parquet)
parquet.seek(0)
blob_service_client = BlobServiceClient.from_connection_string(BLOB_STORAGE_CONNECTION_STRING)
blob_path = f'new8_{year}/{month}/{i}/{i}_{year}_{month}_{day}.parquet'
blob_client = blob_service_client.get_blob_client(container=BLOB_CONTAINER_NAME, blob=blob_path)
blob_client.upload_blob(data = parquet,overwrite=False,blob_type='AppendBlob')
finalDF=pd.DataFrame()
print('done')
except Exception as e:
print('ERROR',e)
await partition_context.update_checkpoint(event)
async def main():
checkpoint_store = BlobCheckpointStore.from_connection_string(
BLOB_STORAGE_CONNECTION_STRING, BLOB_CONTAINER_NAME
)
client = EventHubConsumerClient.from_connection_string(
EVENT_HUB_CONNECTION_STR,
consumer_group="$Default",
checkpoint_store=checkpoint_store,
eventhub_name=EVENT_HUB_NAME,
)
async with client:
await client.receive(on_event=on_event, starting_position="-1")
if __name__ == "__main__":
k=0
finalDF = pd.DataFrame()
current_datetime = datetime.now()
year, month, day = current_datetime.year, current_datetime.month, current_datetime.day
loop = asyncio.get_event_loop()
loop.run_until_complete(main())
</code></pre>
|
<python><azure><azure-blob-storage><parquet>
|
2024-01-12 05:51:50
| 1
| 930
|
zircon
|
77,804,537
| 10,518,698
|
How to ignore "error while decoding" and continue live stream?
|
<p>I am having an RTSP/FFmpeg decoding issue in an OpenCV live stream served with Flask. I get this particular error, <code>[h264 @ 0x23a0be0] error while decoding MB 59 42, bytestream -13</code>, and then the live stream stops abruptly.</p>
<p>This is my code. It seems that when <code>ret</code> becomes <code>False</code> execution goes to the else branch, something happens there, and the whole live stream gets stuck.</p>
<pre><code>import cv2

camera = cv2.VideoCapture('RTSP LINK HERE')

def gen_frames():  # generate frame by frame from camera
    while camera.isOpened():
        # Capture frame-by-frame
        ret, frame = camera.read()  # read the camera frame
        if ret == True:
            ret, buffer = cv2.imencode('.jpg', frame)
            frame = buffer.tobytes()
            yield (b'--frame\r\n'b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n')
        else:
            continue

@app.route('/video_feed')
def video_feed():
    """Video streaming route. Put this in the src attribute of an img tag."""
    return Response(gen_frames(),
                    mimetype='multipart/x-mixed-replace; boundary=frame')
</code></pre>
<p>I also tried using threading as they mentioned here <a href="https://stackoverflow.com/questions/49233433/opencv-read-errorh264-0x8f915e0-error-while-decoding-mb-53-20-bytestream">opencv read error:[h264 @ 0x8f915e0] error while decoding MB 53 20, bytestream -7</a> but I get the same error.</p>
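<p>The behaviour being asked for — skip the bad frame, and re-open the source after a run of failed reads instead of spinning on <code>continue</code> — can be sketched with the cv2 calls hidden behind a callable (all names below are hypothetical; <code>cv2.VideoCapture</code> happens to match the assumed <code>read()</code>/<code>release()</code> interface):</p>

```python
import time

def resilient_frames(open_capture, max_failures=25, backoff=1.0):
    """Yield frames forever; re-open the source after a run of failed reads.

    open_capture: a callable returning an object with read() -> (ok, frame)
    and release() -- e.g. lambda: cv2.VideoCapture('RTSP LINK HERE').
    """
    cap = open_capture()
    failures = 0
    while True:
        ok, frame = cap.read()
        if ok:
            failures = 0
            yield frame
        else:
            failures += 1
            if failures >= max_failures:
                cap.release()          # drop the wedged decoder state
                time.sleep(backoff)    # give the stream a moment to recover
                cap = open_capture()   # reconnect
                failures = 0
```

<p>A single failed read is simply skipped; only a sustained run of failures triggers the reconnect, which is what the decode-error-then-freeze symptom suggests is needed.</p>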
|
<python><opencv><flask><rtsp>
|
2024-01-12 05:35:48
| 0
| 513
|
JSVJ
|
77,804,343
| 12,425,004
|
Python: load_pem_private_key is failing to recognize my private key in production
|
<p>Locally, I have an .env file containing a single line RSA key like so:
<code>PRIVATE_KEY="-----BEGIN RSA PRIVATE KEY-----\ncontentsofkey\n-----END RSA PRIVATE KEY-----\n" </code></p>
<p>There is a <code>\n</code> after every 64 characters to escape the newlines.
This key format works locally and allows my Python app to run as intended; the private key loads fine.</p>
<p>The issue begins in Production. I copy and paste the private key one line format into a GitHub Secret, however the EXACT same key and format is throwing back this error:</p>
<blockquote>
<p>ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=503841036, lib=60, reason=524556, reason_text=unsupported)>])</p>
</blockquote>
<blockquote>
<p>During handling of the above exception, another exception occurred:</p>
</blockquote>
<blockquote>
<p>ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [<OpenSSLError(code=75497580, lib=9, reason=108, reason_text=no start line)>])</p>
</blockquote>
<p>As stated in the reason_text, it seems the key format is unsupported and there is no start line? This leaves me extremely confused, because it works locally but not in production. In production I am running a venv with the exact same requirements.txt and Python version that I test with locally.</p>
<p>I have added a print statement before the load_pem_private_key() function to see the format of the key, and it prints in the exact same format as my .env file... which works just fine locally:</p>
<p><code>"-----BEGIN RSA PRIVATE KEY-----\ncontentsofkey\n-----END RSA PRIVATE KEY-----\n" </code></p>
<p>I don't see any difference between the print statements locally and in production to see if the keys are being manipulated somewhere.</p>
<p>Has anyone had this happen before? I need to store the private key on a single line, or else the GitHub Actions workflow throws YAML syntax errors.</p>
<p>Here is the code I am working with, again it works just fine locally but not in production:</p>
<pre><code>now = int(time.time())
payload = {"iat": now, "exp": now + expiration, "iss": self.id}
print(self.key)

# This line below is throwing the error shown above
encrypted = jwt.encode(payload, key=self.key, algorithm="RS256")
if isinstance(encrypted, bytes):
    encrypted = encrypted.decode("utf-8")
return encrypted
</code></pre>
<p>The print statement in the code above is for debugging purposes.</p>
<p>Any ideas? The cryptography package is throwing <code>_handle_key_loading_error</code> for <code>load_pem_public_key</code>.</p>
<p>Here is the GitHub workflow:</p>
<pre><code>- name: Deploy secrets
  if: ${{ github.event_name == 'release' || github.event_name == 'workflow_dispatch' }}
  run: |
    eval "echo \"$(cat .deployment/secret.yaml)\"" | kubectl apply -f -
  env:
    GITHUBAPP_KEY: '${{ secrets.PRIVATE_KEY }}'
</code></pre>
<p>Below is deployment file for k8's secret</p>
<pre><code>apiVersion: v1
kind: Secret
metadata:
  name: xxx
  namespace: xxx
type: Opaque
stringData:
  GITHUBAPP_KEY: '${GITHUBAPP_KEY}'
</code></pre>
<p>How GITHUBAPP_KEY is being read in python:</p>
<pre class="lang-py prettyprint-override"><code>app.config["GITHUBAPP_KEY"] = environ["GITHUBAPP_KEY"]
</code></pre>
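<p>One failure mode consistent with the "no start line" error (an assumption worth ruling out, not a confirmed diagnosis): the <code>\n</code> sequences arriving in production as two literal characters — backslash and n — rather than real newlines, e.g. after the YAML/<code>kubectl</code> round-trip. A plain <code>print()</code> renders both cases identically, which would explain why the debug output looks the same; <code>repr()</code> and <code>splitlines()</code> do not:</p>

```python
# A key whose "\n"s are literal two-character sequences parses as a single
# line and fails PEM loading with "no start line"; one with real newlines
# does not. The contents below are placeholders.
mangled = "-----BEGIN RSA PRIVATE KEY-----\\ncontentsofkey\\n-----END RSA PRIVATE KEY-----\\n"
good = mangled.replace("\\n", "\n")

print(repr(mangled)[:45])    # repr() exposes the difference print() hides
print(len(mangled.splitlines()), len(good.splitlines()))
```

<p>If the production key shows one line where the local key shows several, the secret is being mangled in transit rather than by the cryptography package.</p>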
|
<python><flask><cryptography><github-actions><rsa>
|
2024-01-12 04:36:02
| 3
| 1,826
|
yung peso
|
77,804,198
| 138,830
|
Replicate Pylance analysis in Pyright
|
<p>I've configured my VS Code project to use the Pylance extension. I know that Pylance is using Pyright internally.</p>
<p>We have a "no-warning" policy. I want to enforce this by running Pyright (in pre-commit and in the CI pipeline). Since Pyright is used internally by Pylance, I figured I should be able to run an equivalent analysis with a standalone Pyright.</p>
<p>My problem is that I don't know how to replicate the Pylance analysis in Pyright. They seem to disagree even if I use the same value for the <code>typeCheckingMode</code> parameter (i.e. <code>strict</code>).</p>
<p>For this code:</p>
<pre><code>from typing import Any

a: str = 12345

b: dict[str, Any] = {}
c = b.get("a", {}).get("b", [])
</code></pre>
<p>Pylance only spots the first problem:</p>
<pre><code>Expression of type "Literal[12345]" cannot be assigned to declared type "str"
"Literal[12345]" is incompatible with "str"
</code></pre>
<p>However, running Pyright on the command-line, I get an additional error:</p>
<pre><code>test.py
test.py:3:10 - error: Expression of type "Literal[12345]" cannot be assigned to declared type "str"
"Literal[12345]" is incompatible with "str" (reportGeneralTypeIssues)
test.py:6:1 - error: Type of "c" is partially unknown
Type of "c" is "Any | list[Unknown]" (reportUnknownVariableType)
2 errors, 0 warnings, 0 informations
</code></pre>
<p>I'm wondering:</p>
<ul>
<li>Am I missing a Pyright parameter?</li>
<li>Is it due to a mismatch between my Pyright version and the one embedded in Pylance?</li>
<li>Is this a bug in Pylance?</li>
</ul>
<p><strong>But the real question is: how can I validate, outside of VS Code, that we don't have any type error in the code that are reported by Pylance?</strong></p>
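<p>For reference, both the Pyright CLI and Pylance read a <code>pyrightconfig.json</code> at the project root, so one way to chase parity is to pin the mode and any individual rule severities there rather than relying on defaults (a sketch; whether <code>reportUnknownVariableType</code> is the only divergent rule here is an assumption):</p>

```json
{
    "typeCheckingMode": "strict",
    "reportUnknownVariableType": "none"
}
```

<p>With the settings shared in one file, a version mismatch between the standalone Pyright and the one bundled in Pylance remains the main residual source of disagreement.</p>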
|
<python><pylance><pyright>
|
2024-01-12 03:36:19
| 1
| 14,239
|
gawi
|
77,803,998
| 16,405,935
|
Error when install pandas with Anaconda Prompt
|
<p>I'm trying to install a new pandas version on my company's PC. Because the computer cannot connect to the internet, I downloaded the <code>tar.gz</code> file and used Anaconda Prompt to install it, but encountered the following problem:</p>
<pre><code>(base) C:\Users\admin>pip install pandas-2.1.4.tar.gz
Processing c:\users\admin\pandas-2.1.4.tar.gz
Installing build dependencies ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\admin\anaconda3\python.exe' 'C:\Users\admin\anaconda3\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\admin\AppData\Local\Temp\pip-build-env-4ynsmtd4\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- meson-python==0.13.1 meson==1.2.1 wheel 'Cython>=0.29.33,<3' 'oldest-supported-numpy>=2022.8.16; python_version<'"'"'3.12'"'"'' 'numpy>=1.26.0,<2; python_version>='"'"'3.12'"'"'' 'versioneer[toml]'
cwd: None
Complete output (8 lines):
Ignoring numpy: markers 'python_version >= "3.12"' don't match your environment
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000002775E8F02E0>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/meson-python/
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000002775E8F04F0>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/meson-python/
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000002775E8F06A0>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/meson-python/
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000002775E8F0850>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/meson-python/
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'ConnectTimeoutError(<pip._vendor.urllib3.connection.HTTPSConnection object at 0x000002775E8F0A00>, 'Connection to pypi.org timed out. (connect timeout=15)')': /simple/meson-python/
ERROR: Could not find a version that satisfies the requirement meson-python==0.13.1
ERROR: No matching distribution found for meson-python==0.13.1
----------------------------------------
WARNING: Discarding file:///C:/Users/admin/pandas-2.1.4.tar.gz. Command errored out with exit status 1: 'C:\Users\admin\anaconda3\python.exe' 'C:\Users\admin\anaconda3\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\admin\AppData\Local\Temp\pip-build-env-4ynsmtd4\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- meson-python==0.13.1 meson==1.2.1 wheel 'Cython>=0.29.33,<3' 'oldest-supported-numpy>=2022.8.16; python_version<'"'"'3.12'"'"'' 'numpy>=1.26.0,<2; python_version>='"'"'3.12'"'"'' 'versioneer[toml]' Check the logs for full command output.
ERROR: Command errored out with exit status 1: 'C:\Users\admin\anaconda3\python.exe' 'C:\Users\admin\anaconda3\lib\site-packages\pip' install --ignore-installed --no-user --prefix 'C:\Users\admin\AppData\Local\Temp\pip-build-env-4ynsmtd4\overlay' --no-warn-script-location --no-binary :none: --only-binary :none: -i https://pypi.org/simple -- meson-python==0.13.1 meson==1.2.1 wheel 'Cython>=0.29.33,<3' 'oldest-supported-numpy>=2022.8.16; python_version<'"'"'3.12'"'"'' 'numpy>=1.26.0,<2; python_version>='"'"'3.12'"'"'' 'versioneer[toml]' Check the logs for full command output.
</code></pre>
<p>As I understand it, I may need to update Python too, but I don't know how to do that offline. How can I solve this problem? (Please note that I can only use Anaconda on this PC.) Thank you.</p>
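<p>For reference, the usual offline route (a sketch; it assumes access to some machine with internet and a matching Python version and platform) is to fetch prebuilt wheels, so the meson-python build step in the error above never has to run on the offline PC:</p>

```shell
# On a machine WITH internet access (same Python version and OS):
python -m pip download pandas==2.1.4 --only-binary=:all: --dest ./pkgs

# Copy ./pkgs to the offline PC, then install without contacting PyPI:
python -m pip install --no-index --find-links ./pkgs pandas==2.1.4
```

<p>Downloading the wheel and its dependencies up front avoids the source-build path that is trying (and failing) to reach pypi.org for build requirements.</p>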
|
<python><pandas><anaconda><conda>
|
2024-01-12 02:09:00
| 1
| 1,793
|
hoa tran
|
77,803,878
| 23,190,147
|
Error installing the correct version of pywin32 to install pybrowsers in python
|
<p>I'm trying to install the pybrowsers module in Python. I tried installing it using my normal command in the command prompt. I got a conflict error: apparently my current version of pywin32 was too high; it needed to be lower than 306. So I uninstalled pywin32, but when I tried to install it again at the correct version, I got another error: "ERROR: Could not find a version that satisfies the requirement...", so I'm at a bit of a loss as to how to proceed.</p>
|
<python><installation><browser>
|
2024-01-12 01:21:18
| 1
| 450
|
5rod
|
77,803,873
| 3,155,240
|
Vistual Studio C++ program exiting with code -1073741819 when trying to use Python.h
|
<p>I asked Poe AI to write me some basic code using the Python/C API. I followed its instructions for setup, built the project, ran the code, and it exited with code -1073741819.</p>
<p>Here's the code:</p>
<pre><code>#include <iostream>
#include <Python.h>

int main() {
    Py_Initialize(); // Initialize the Python interpreter

    // Execute a simple Python expression
    PyObject* result = PyRun_String("2 + 2", Py_eval_input, PyEval_GetGlobals(), PyEval_GetGlobals());

    // Check if execution was successful
    if (result == nullptr) {
        PyErr_Print(); // Print any Python errors
        return 1;
    }

    // Extract the result as a C integer
    int value = PyLong_AsLong(result);
    Py_DECREF(result); // Cleanup

    // Print the result
    printf("Result: %d\n", value);

    Py_Finalize(); // Clean up the Python interpreter
    return 0;
}
</code></pre>
<p>The steps I followed from Poe AI were as follows:</p>
<ol>
<li><p>Install Python - which I already had installed (version 3.11). I ran <code>--version</code> in the command line to make sure. I have been able to use it in the command line for a long time, pip install packages, etc.</p>
</li>
<li><p>Install Visual Studio - I had 2019 installed. I uninstalled it, downloaded the new one (Visual Studio 2022; waited for an hour to download every package available in the installer), rebooted my computer, and continued.</p>
</li>
<li><p>Set up Environment Variables (under System Variables, not User Variables) -</p>
<ul>
<li>I created the "PYTHON_HOME" variable, because I didn't have one - Python was on the Path variable; I took it out and replaced it with "PYTHON_HOME", and Python still works as usual.</li>
<li>Add the following two entries to the "Variable value" field:
<ul>
<li>%PYTHON_HOME%</li>
<li>%PYTHON_HOME%\Scripts</li>
</ul>
</li>
<li>Python still works for me.</li>
</ul>
</li>
<li><p>Configure Visual Studio: Open Visual Studio and create a new C project or open an existing one. To configure the project to compile with Python.h, follow these steps:</p>
<ul>
<li>Right-click on the project in the Solution Explorer and select "Properties".</li>
<li>In the "Configuration Properties" section, select "C/C++".</li>
<li>In the "Additional Include Directories" field, add the following path:
<ul>
<li>%PYTHON_HOME%\include.</li>
</ul>
</li>
<li>Click "Apply" to save the changes.</li>
</ul>
</li>
<li><p>Compile and Link: Now you can write your C code that includes Python.h and compile it using Visual Studio. Make sure to include #include <Python.h> at the beginning of your C file. When compiling, ensure that you link against the Python library. To do this, add the Python library path to the linker settings.</p>
<ul>
<li>In the project properties, go to "Configuration Properties" -> "Linker" -> "Input".</li>
<li>In the "Additional Dependencies" field, add the following entry:
<ul>
<li>%PYTHON_HOME%\libs\python39.lib (replace python39 with your Python version if different).</li>
</ul>
</li>
</ul>
</li>
</ol>
<p>Doing this solved all the errors that were displayed in the Visual Studio 2022 IDE, yet when I built the project and compiled it, it threw an error in the console -> Can't find python311_d.lib.</p>
<p>So I went back to Poe AI, and it said that to resolve this issue I have 1 of 3 options (I will spare you some reading; I did option 2):</p>
<ol>
<li><p>Install the correct Python version: Make sure you have the correct version of Python installed on your system. If you are targeting Python 3.11, ensure that you have Python 3.11 installed. You can download the specific Python version you need from the Python website (<a href="https://www.python.org/downloads/" rel="nofollow noreferrer">https://www.python.org/downloads/</a>). Make sure to select the debug version of the Python installer if you require the debug library.</p>
</li>
<li><p>Update the project configuration: Open the project properties in Visual Studio and verify that the Python library path and library name are correctly configured. Follow these steps:</p>
<ul>
<li>Right-click on the project in the Solution Explorer and select "Properties".</li>
<li>In the "Configuration Properties" section, select "Linker" -> "Input".</li>
<li>Check the "Additional Dependencies" field and ensure that it specifies the correct library name for your Python version. For Python 3.11, it should be python311_d.lib for the debug version or python311.lib for the release version.</li>
<li>If the library name is incorrect, update it accordingly.</li>
<li>Also, double-check the "Additional Library Directories" field and ensure it points to the correct directory where the Python library is located.</li>
</ul>
</li>
<li><p>Switch to a different Python version: If you don't specifically require Python 3.11, you can consider using a different Python version that is already installed on your system. Update your project configuration to use the correct library name and library directories for the Python version you have installed.</p>
</li>
</ol>
<p>When I did option 2, it fixed the error in the console, so I ran the code in VS, which then returned -1073741819 and "press any key to continue". Poe AI has not been helpful at all. I've deleted the project and tried the same steps again, with the same results (I'm not crazy/stupid enough to do it a third time while expecting different results). Does anyone on this planet (or forum) know why this is happening to me?</p>
|
<python><c++><visual-studio>
|
2024-01-12 01:18:44
| 1
| 2,371
|
Shmack
|
77,803,680
| 2,059,584
|
CREATE EXTENSION silently fails in SQLAlchemy
|
<p>I'm trying to create the <code>pgvector</code> extension on my server programmatically through <code>sqlalchemy</code>, but the execute instruction fails silently.</p>
<p>Here is the code:</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import create_engine, text
engine = create_engine(f"postgresql+psycopg2://{cfg.DB.USERNAME}:{cfg.DB.PASSWORD}@{cfg.DB.URL}/{cfg.DB.NAME}")
with engine.connect() as con:
con.execute(text("CREATE EXTENSION IF NOT EXISTS vector"))
</code></pre>
<p>This code doesn't raise an error, but the extension is not created.</p>
<p>I checked that connection is correct, since other SQL queries seem to work.</p>
<p>If I execute the same SQL through <code>psql</code> it successfully creates the extension.</p>
<p>The database is AWS Aurora Serverless V2 - PostgreSQL flavour.
<code>sqlalchemy==2.0.25</code>, <code>PostgreSQL 15.4 on aarch64-unknown-linux-gnu</code>.</p>
<p>What can be the problem?</p>
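<p>One thing worth checking (an assumption about the cause, not a confirmed diagnosis): SQLAlchemy 2.x connections do not autocommit, so DDL executed on a plain <code>connect()</code> is rolled back when the <code>with</code> block exits, whereas <code>engine.begin()</code> commits on a clean exit. A sketch against an in-memory SQLite stand-in:</p>

```python
from sqlalchemy import create_engine, text

engine = create_engine("sqlite://")  # stand-in; the same pattern applies to Postgres

# begin() opens a transaction and commits it when the block exits cleanly
with engine.begin() as con:
    con.execute(text("CREATE TABLE t (x INTEGER)"))

# A fresh connection sees the committed DDL
with engine.connect() as con:
    names = con.execute(
        text("SELECT name FROM sqlite_master WHERE type = 'table'")
    ).scalars().all()
```

<p>The equivalent change to the snippet above would be replacing <code>engine.connect()</code> with <code>engine.begin()</code>, or calling <code>con.commit()</code> before the block ends.</p>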
|
<python><postgresql><sqlalchemy><amazon-rds>
|
2024-01-12 00:04:29
| 1
| 854
|
Rizhiy
|
77,803,677
| 2,635,863
|
extract indexes corresponding to the range of nested list elements
|
<pre><code>df = pd.DataFrame({'x':[0,1,2,3,4,5,6,7]})
</code></pre>
<p>I'm trying to extract the rows with indexes that correspond to the range given in a nested list:</p>
<pre><code>a = [[2,4],[6,7]]
</code></pre>
<p>Basically, I need to find a way to expand <code>a</code> to this:</p>
<pre><code>df.loc[[2,3,4, 6,7]]
</code></pre>
<p>Is there a neat way to do this in python?</p>
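<p>A sketch of one way to expand the nested ranges into row labels (assuming the inner bounds are inclusive, as in the example):</p>

```python
import pandas as pd

df = pd.DataFrame({'x': [0, 1, 2, 3, 4, 5, 6, 7]})
a = [[2, 4], [6, 7]]

# Expand each [start, end] pair into the full inclusive range of labels
idx = [i for start, end in a for i in range(start, end + 1)]
result = df.loc[idx]
```

<p>The comprehension flattens the pairs in order, so the result matches <code>df.loc[[2, 3, 4, 6, 7]]</code>.</p>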
|
<python><pandas>
|
2024-01-12 00:04:11
| 3
| 10,765
|
HappyPy
|
77,803,661
| 2,352,855
|
How can I iterate through a DataFrame to concatenate strings once an empty cell is reached?
|
<p>I've extracted some pdf tables using Camelot.<br />
The first column contains merged cells, which is often problematic.</p>
<p>Despite tweaking some of the advanced configurations, the merged cells in the first column still span across rows.</p>
<p>I'd like to iterate through the first column rows to achieve the following:</p>
<ol>
<li>Start from the top</li>
<li>when you find an empty cell, concatenate the preceding run of strings sequentially (with a space in between) into the first non-empty cell of that run.</li>
</ol>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: center;">Column</th>
<th style="text-align: center;">What I have now</th>
<th style="text-align: center;">What I'd like</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: center;">1</td>
<td style="text-align: center;">A</td>
<td style="text-align: center;">A B C D</td>
</tr>
<tr>
<td style="text-align: center;">2</td>
<td style="text-align: center;">B</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: center;">3</td>
<td style="text-align: center;">C</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: center;">4</td>
<td style="text-align: center;">D</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: center;">5</td>
<td style="text-align: center;"></td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: center;">6</td>
<td style="text-align: center;">F</td>
<td style="text-align: center;">F G</td>
</tr>
<tr>
<td style="text-align: center;">7</td>
<td style="text-align: center;">G</td>
<td style="text-align: center;"></td>
</tr>
<tr>
<td style="text-align: center;">8</td>
<td style="text-align: center;"></td>
<td style="text-align: center;"></td>
</tr>
</tbody>
</table>
</div>
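<p>The two rules above can be sketched in pandas as follows (assuming the empty cells are empty strings; if Camelot yields NaN instead, apply <code>fillna('')</code> first):</p>

```python
import pandas as pd

col = pd.Series(["A", "B", "C", "D", "", "F", "G", ""])

empty = col.eq("")
run_id = empty.cumsum()                         # run id increments after each empty cell
joined = col[~empty].groupby(run_id[~empty]).agg(" ".join)

# Place each joined run in the first cell of that run; blank the rest
out = pd.Series("", index=col.index)
first_of_run = col[~empty].groupby(run_id[~empty]).head(1).index
out.loc[first_of_run] = joined.values
```

<p>Each empty cell closes a run, so the grouping falls out of a cumulative sum over the empty-cell mask rather than an explicit row loop.</p>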
|
<python><pandas><dataframe><python-camelot>
|
2024-01-11 23:58:19
| 1
| 406
|
NoExpert
|
77,803,656
| 555,129
|
Python: Mock a module that does not exist
|
<p>I have a large set of legacy python scripts that were developed and used on Linux.
Now the desire is to run parts of this code on Windows without too many changes.</p>
<p>But the code fails on Windows when trying to import Linux specific modules such as <code>fcntl</code>.</p>
<p>To be clear, I do not want to run the part of the code that's Linux specific.</p>
<p>Is there any way to use <code>mock</code> the modules that simply do not exist?</p>
<p>I tried the approach below, based on other answers, but it does not work:</p>
<p>main.py</p>
<pre><code>import mock
mock.patch.dict('sys.modules', fcntl=mock.MagicMock())
import fcntl
import os
</code></pre>
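<p>For contrast — and based on how <code>unittest.mock.patch.dict</code> is documented to behave (the bare call only constructs a patcher; it takes effect when started or used as a context manager/decorator) — a sketch of the direct-assignment variant, which does apply before the import runs:</p>

```python
import sys
from unittest import mock

# Assigning a stand-in into sys.modules before the first import means the
# import system returns the mock instead of looking for a real module.
sys.modules["fcntl"] = mock.MagicMock()

import fcntl  # resolves to the mock on any platform

fcntl.flock("some_fd", "some_op")  # recorded on the mock, does nothing
```

<p>Any attribute access on the mocked module returns further mocks, so Linux-only call sites import cleanly on Windows as long as their return values are never actually used.</p>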
|
<python><mocking>
|
2024-01-11 23:56:41
| 1
| 1,462
|
Amol
|
77,803,570
| 1,907,924
|
Convert a single column with a list of values into one hot encoding using Power Query
|
<p>I currently have the following table in Excel, which is inconvenient for further processing.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Segments</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Food</td>
</tr>
<tr>
<td>2</td>
<td>Automation</td>
</tr>
<tr>
<td>3</td>
<td>Mechatronics</td>
</tr>
<tr>
<td>4</td>
<td>Automation;Mechatronics</td>
</tr>
</tbody>
</table>
</div>
<p>What I would like to achieve, is one hot encoding of this data, which should look as follows. I'm trying to use Power Query for that.</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>ID</th>
<th>Food</th>
<th>Automation</th>
<th>Mechatronics</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>2</td>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>3</td>
<td>0</td>
<td>0</td>
<td>1</td>
</tr>
<tr>
<td>4</td>
<td>0</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I have found a <a href="https://stackoverflow.com/q/49198365/1907924">similar question</a>, but with a simpler example where there is always just a single value in the column. I tried the suggested approach, modified a bit: splitting my column on <code>;</code> into multiple columns and then pivoting on each of the split columns. Unfortunately, it results in an error, because that way we try to create duplicated columns.</p>
<p>There are at least 30 distinct segments, and it seems at most 5 of them are combined in a single record. Adding this note so that it's easier to find a balance between automation and manual work.</p>
<p>In the worst case, if there is no simple solution using Power Query, I can do this operation in Python (though I don't know how to handle this complex case there either) and re-import the result afterwards.</p>
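<p>For the Python fallback, pandas can handle the multi-valued column directly via <code>Series.str.get_dummies</code>, which splits on a separator and one-hot encodes in a single step (a sketch on the sample table above):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "ID": [1, 2, 3, 4],
    "Segments": ["Food", "Automation", "Mechatronics", "Automation;Mechatronics"],
})

# str.get_dummies splits each cell on ';' and produces one 0/1 column per
# distinct segment, regardless of how many values a cell holds
onehot = pd.concat([df[["ID"]], df["Segments"].str.get_dummies(sep=";")], axis=1)
```

<p>The segment columns come out in alphabetical order; reordering them afterwards is plain DataFrame manipulation.</p>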
|
<python><excel><powerquery>
|
2024-01-11 23:21:03
| 1
| 4,338
|
Dcortez
|
77,803,565
| 12,935,622
|
Matplotlib Error: 'LinearSegmentedColormap' object has no attribute 'resampled'
|
<p>I was running this in my jupyter notebook,</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
import matplotlib as mpl
from matplotlib.colors import LinearSegmentedColormap, ListedColormap
# Color map
top = mpl.colormaps['Oranges_r'].resampled(128)
bottom = mpl.colormaps['Blues'].resampled(128)
newcolors = np.vstack((top(np.linspace(0, 1, 128)),
bottom(np.linspace(0, 1, 128))))
newcmp = ListedColormap(newcolors, name='OrangeBlue')
</code></pre>
<p>but I got <code>AttributeError: 'LinearSegmentedColormap' object has no attribute 'resampled'</code>. It works when I run it on Colab. The code is from the Matplotlib documentation.</p>
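<p><code>Colormap.resampled()</code> was added in Matplotlib 3.6, so an older local installation would reproduce this error even though Colab's newer one does not (an assumption about the notebook's Matplotlib version). A version-agnostic sketch: calling the colormap object with N evenly spaced sample points already returns an (N, 4) RGBA array, so no resampling method is needed:</p>

```python
import numpy as np
import matplotlib as mpl
from matplotlib.colors import ListedColormap

# Calling a colormap with N sample points yields an (N, 4) RGBA array
# on both older and newer Matplotlib releases
top = mpl.colormaps["Oranges_r"]
bottom = mpl.colormaps["Blues"]

newcolors = np.vstack((top(np.linspace(0, 1, 128)),
                       bottom(np.linspace(0, 1, 128))))
newcmp = ListedColormap(newcolors, name="OrangeBlue")
```

<p>This sidesteps the <code>resampled()</code> call entirely while producing the same stacked colormap.</p>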
|
<python><matplotlib>
|
2024-01-11 23:19:54
| 0
| 1,191
|
guckmalmensch
|
77,803,539
| 13,916,049
|
Plot silhouette for each cluster using Plotly
|
<p>I want to plot the silhouette scores, where <code>concat_omics_df.index</code> matches the labels (i.e., subtypes). My code only produces a single silhouette value, and I'm unable to plot it per sample.</p>
<pre><code>import plotly.graph_objects as go
from sklearn.metrics import silhouette_score

# Extract labels as an array
labels = subtype_labels['subtype'].values

# Calculate the silhouette score for all samples
silhouette_values = silhouette_score(concat_omics_df, subtype_labels)

# Create a figure
fig = go.Figure()

# Add trace for each sample
for i, value in enumerate(silhouette_values):
    fig.add_trace(go.Bar(x=[f'Sample {i+1}'], y=[value], marker_color='skyblue' if labels[i] == 0 else 'orange'))

# Add a line representing the overall silhouette score
overall_silhouette = silhouette_score(concat_omics_df, labels)
fig.add_trace(go.Scatter(x=['Overall'], y=[overall_silhouette], mode='lines', name='Overall', line=dict(color='red', dash='dash')))

# Update layout
fig.update_layout(title='Silhouette Plot for Subtypes',
                  xaxis_title='Sample',
                  yaxis_title='Silhouette Score',
                  showlegend=True)

# Show the plot
fig.show()
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Input In [570], in <cell line: 5>()
2 fig = go.Figure()
4 # Add trace for each sample
----> 5 for i, value in enumerate(silhouette_values):
6 fig.add_trace(go.Bar(x=[f'Sample {i+1}'], y=[value], marker_color='skyblue' if labels[i] == 0 else 'orange'))
8 # Add a line representing the overall silhouette score
TypeError: 'numpy.float64' object is not iterable
</code></pre>
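<p>The traceback follows from an API split in scikit-learn: <code>silhouette_score</code> returns the mean over all samples (one float), while <code>silhouette_samples</code> returns the per-sample values needed for the bars. A minimal sketch on toy data:</p>

```python
import numpy as np
from sklearn.metrics import silhouette_samples, silhouette_score

X = np.array([[0.0], [0.1], [5.0], [5.1]])
labels = np.array([0, 0, 1, 1])

per_sample = silhouette_samples(X, labels)  # one value per row -> iterable
overall = silhouette_score(X, labels)       # mean of per_sample -> one float
```

<p>Swapping <code>silhouette_score</code> for <code>silhouette_samples</code> in the loop above would give one bar per sample.</p>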
<p>Desired plot style:
<a href="https://i.sstatic.net/A6lGS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/A6lGS.png" alt="enter image description here" /></a></p>
<p>Input:
<code>concat_omics_df</code></p>
<pre><code>pd.DataFrame({'gene_1': {'sample_3': 1.0,
'sample_24': 0.2769168880109726,
'sample_9': 0.8476336051124655,
'sample_8': 0.49885260905321305,
'sample_10': 0.00015952709680945414,
'sample_17': 0.27279460696450986,
'sample_5': 0.5168936220044964,
'sample_12': 0.8451105774614138,
'sample_1': 0.3278741815939788},
'gene_2': {'sample_3': 1.0,
'sample_24': 0.18508756837237228,
'sample_9': 0.3587636923498214,
'sample_8': 0.30927957360401426,
'sample_10': 0.20884413804480345,
'sample_17': 0.09314362915422024,
'sample_5': 0.12614513055724202,
'sample_12': 0.19876603646974633,
'sample_1': 0.43865733757032815},
'gene_3': {'sample_3': 0.9493869925654914,
'sample_24': 0.31909568662059734,
'sample_9': 0.5960829592441739,
'sample_8': 0.9197452632517414,
'sample_10': 0.7254737691610931,
'sample_17': 0.8464344154477329,
'sample_5': 0.3099505974689851,
'sample_12': 0.6142251810218543,
'sample_1': 0.1787158742828073},
'gene_4': {'sample_3': 0.7696267713784664,
'sample_24': 0.7022702008503343,
'sample_9': 0.7633376987181584,
'sample_8': 0.9111863049154145,
'sample_10': 0.8754352321751506,
'sample_17': 0.4992398818230355,
'sample_5': 0.9749214109205464,
'sample_12': 0.5652879964694317,
'sample_1': 0.9910274036892863},
'gene_5': {'sample_3': 0.840021891058572,
'sample_24': 0.7485833490014048,
'sample_9': 0.7328929486790542,
'sample_8': 0.0,
'sample_10': 0.2554971113552342,
'sample_17': 0.6611979085394857,
'sample_5': 0.2501088259705049,
'sample_12': 0.39054720080817934,
'sample_1': 0.22764644700817954},
'gene_6': {'sample_3': 1.0,
'sample_24': 0.0269706867994085,
'sample_9': 0.5467244495700448,
'sample_8': 0.9779605735946423,
'sample_10': 0.45029102750394195,
'sample_17': 0.7956796281636271,
'sample_5': 0.7383701259823617,
'sample_12': 0.5853584308141041,
'sample_1': 0.6824036034795679},
'gene_7': {'sample_3': 0.3717641612000655,
'sample_24': 0.5250321407158294,
'sample_9': 0.9799894589704538,
'sample_8': 0.544258949184075,
'sample_10': 0.1259574907035304,
'sample_17': 0.41054622347734504,
'sample_5': 0.2655683593754903,
'sample_12': 0.20444982520812793,
'sample_1': 0.8843169565702099},
'gene_8': {'sample_3': 0.7056397370873982,
'sample_24': 0.30881267102934884,
'sample_9': 0.03569245277955437,
'sample_8': 0.04689202481349479,
'sample_10': 0.2658718741294945,
'sample_17': 0.009515673001541103,
'sample_5': 0.8812144283469642,
'sample_12': 0.0752454916809566,
'sample_1': 0.619182723470481},
'gene_9': {'sample_3': 0.9599572931852776,
'sample_24': 0.13065213479363366,
'sample_9': 0.6661901837757194,
'sample_8': 0.0,
'sample_10': 0.041023004817151126,
'sample_17': 0.1493965537050469,
'sample_5': 0.7074476702007716,
'sample_12': 0.5508914936808614,
'sample_1': 0.8892921524401799}})
</code></pre>
<p><code>subtype_labels</code></p>
<pre><code>pd.DataFrame({'subtype': {'sample_3': 0,
'sample_24': 0,
'sample_9': 0,
'sample_8': 1,
'sample_10': 1,
'sample_17': 1,
'sample_5': 1,
'sample_12': 1,
'sample_1': 1}})
</code></pre>
|
<python><plotly><visualization><cluster-analysis>
|
2024-01-11 23:11:58
| 0
| 1,545
|
Anon
|
77,803,456
| 2,663,150
|
How does pytest "know" where to import code from?
|
<p>This thread is a follow-up to an unsuccessful attempt to resolve the issue on Reddit, which can be seen in its entirety <a href="https://www.reddit.com/r/learnpython/comments/18ugodb/confused_about_how_pytest_knows_where_to_import/" rel="nofollow noreferrer">here</a>, though I will of course replicate the essential information here as well, starting with the OP:</p>
<p>I am following along with <code>Python Testing with pytest</code> and am utterly baffled as to how pytest finds a certain class in the example I am working with. All of the code is here in a .zip file:</p>
<p><a href="https://pragprog.com/titles/bopytest2/python-testing-with-pytest-second-edition/" rel="nofollow noreferrer">https://pragprog.com/titles/bopytest2/python-testing-with-pytest-second-edition/</a></p>
<p>So <code>code/ch2/test_card.py</code> starts with <code>from cards import Card</code>. Class <code>Card</code> is defined in <code>cards_proj/src/cards/api.py</code>. I can confirm that removing <code>code/cards_proj/pyproject.toml</code> and <code>code/pytest.ini</code> (which is just comments anyway), first separately, then jointly, did not prevent the successful run of the tests. pytest would not let me just print out <code>sys.path</code> in or out of any of the test functions, so I did something I am not terribly proud of instead:</p>
<pre><code>from cards import Card
import sys

def test_field_access():
    with open('path.txt', 'w') as f:
        f.writelines(p + '\n' for p in sys.path)

    c = Card("something", "brian", "todo", 123)
    assert c.summary == "something"
    assert c.owner == "brian"
    assert c.state == "todo"
    assert c.id == 123
    # ...remainder omitted
</code></pre>
<p>The result in <code>path.txt</code> did nothing to clarify matters:</p>
<pre><code>/home/readyready15728/programming/python/python-testing-with-pytest/code/ch2
/home/readyready15728/programming/python/python-testing-with-pytest/venv/bin
/usr/lib/python310.zip
/usr/lib/python3.10
/usr/lib/python3.10/lib-dynload
/home/readyready15728/programming/python/python-testing-with-pytest/venv/lib/python3.10/site-packages
</code></pre>
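<p>A less intrusive probe than writing <code>path.txt</code> is to ask the import machinery directly where it would load a module from (a stdlib sketch, with <code>json</code> standing in for <code>cards</code>):</p>

```python
import importlib.util

# find_spec resolves a module through the normal import machinery without
# executing it; spec.origin is the file the import would load from
spec = importlib.util.find_spec("json")
print(spec.origin)
```

<p>If <code>cards</code> resolves to a path under <code>venv/.../site-packages</code>, the package was pip-installed into the virtualenv, which would explain why deleting the config files changes nothing.</p>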
<p>There was only one serious attempt to help me, which did not go anywhere. Briefly:</p>
<p>I was asked about the contents of <code>src/__init__.py</code>, which did not exist, though I was able to divulge the contents of <code>code/cards_proj/src/cards/__init__.py</code>:</p>
<pre><code>"""Top-level package for cards."""
__version__ = "1.0.0"
from .api import * # noqa
from .cli import app # noqa
</code></pre>
<p>I was encouraged to upload the code to GitHub. After some hesitation over copyright, I figured it probably wouldn't be a problem; after all, I found the first edition's code hosted by a third party without any fuss being made about it. Because multiple files are obviously involved, I can't easily replicate everything in a Stack Overflow thread; however, here is a link to the <a href="https://github.com/readyready15728/python-testing-with-pytest/commit/20a9f261cdb481c6ad6941d45a76af73b0a87e05" rel="nofollow noreferrer">initial commit</a>, which will not change even if I make more commits.</p>
<p>I ran two commands to shed some light on the directory structure. Here is the output of <code>find -mindepth 1 -type d | sort</code> executed in <code>code/</code>:</p>
<pre><code>./cards_proj
./cards_proj/src
./cards_proj/src/cards
./ch1
./ch10
./ch11
./ch11/cards_proj
./ch11/cards_proj/.github
./ch11/cards_proj/.github/workflows
./ch11/cards_proj/src
./ch11/cards_proj/src/cards
./ch11/cards_proj/tests
./ch11/cards_proj/tests/api
./ch11/cards_proj/tests/cli
./ch12
./ch12/app
./ch12/app/src
./ch12/app/tests
./ch12/script
./ch12/script_funcs
./ch12/script_importable
./ch12/script_src
./ch12/script_src/src
./ch12/script_src/tests
./ch13
./ch13/cards_proj
./ch13/cards_proj/src
./ch13/cards_proj/src/cards
./ch13/cards_proj/tests
./ch13/cards_proj/tests/api
./ch13/cards_proj/tests/cli
./ch14
./ch14/random
./ch15
./ch15/just_markers
./ch15/local
./ch15/pytest_skip_slow
./ch15/pytest_skip_slow/examples
./ch15/pytest_skip_slow_final
./ch15/pytest_skip_slow_final/examples
./ch15/pytest_skip_slow_final/tests
./ch16
./ch1/.backup
./ch1/__pycache__
./ch2
./ch2/.backup
./ch2/__pycache__
./ch2/.pytest_cache
./ch2/.pytest_cache/v
./ch2/.pytest_cache/v/cache
./ch3
./ch3/a
./ch3/b
./ch3/c
./ch3/d
./ch4
./ch5
./ch6
./ch6/bad
./ch6/builtins
./ch6/combined
./ch6/multiple
./ch6/reg
./ch6/slow
./ch6/smoke
./ch6/strict
./ch6/tests
./ch7
./ch8
./ch8/alt
./ch8/dup
./ch8/dup/tests_no_init
./ch8/dup/tests_no_init/api
./ch8/dup/tests_no_init/cli
./ch8/dup/tests_with_init
./ch8/dup/tests_with_init/api
./ch8/dup/tests_with_init/cli
./ch8/project
./ch8/project/tests
./ch8/project/tests/api
./ch8/project/tests/cli
./ch9
./ch9/some_code
./exercises
./exercises/ch10
./exercises/ch11
./exercises/ch11/src
./exercises/ch12
./exercises/ch2
./exercises/ch5
./exercises/ch6
./exercises/ch8
./exercises/ch8/tests
./exercises/ch8/tests/a
./exercises/ch8/tests/b
./.pytest_cache
./.pytest_cache/v
./.pytest_cache/v/cache
</code></pre>
<p>I was also asked to carry out <code>tree</code> in the directory <code>ch2</code>, which I obliged specifically by running <code>tree -a</code>:</p>
<pre><code>├── .backup
│ └── test_card.py~
├── path.txt
├── __pycache__
│ └── test_card.cpython-310-pytest-7.4.3.pyc
├── .pytest_cache
│ ├── CACHEDIR.TAG
│ ├── .gitignore
│ ├── README.md
│ └── v
│ └── cache
│ ├── nodeids
│ └── stepwise
├── test_alt_fail.py
├── test_card_fail.py
├── test_card.py
├── test_classes.py
├── test_exceptions.py
├── test_experiment.py
├── test_helper.py
└── test_structure.py
</code></pre>
<p><code>.backup/</code> and <code>path.txt</code> are the results of my involvement. It should be clear from the output of the two commands that pytest "knows" to climb to a higher level in the file system, then descend into <code>cards_proj/src/</code> to import the code. The Reddit user suspected I wasn't actually running pytest in the <code>ch2/</code> subdirectory. I made certain:</p>
<p><a href="https://i.sstatic.net/YbwZA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YbwZA.png" alt="I was in fact in ch2 at time" /></a></p>
<p>(Text description to accompany image: the terminal shows the working directory as <code>ch2/</code>)</p>
<p>I also looked at some cached files but found no leads there.</p>
<p>It was then suggested: "Somehow I'm [sure] the root of the project is being added to your path and so you're able to import the modules that are exported at the root of the project". I then printed out the output of <code>set -L</code> (I'm using fish), with the sole exception of <code>history</code>, both for brevity and for privacy reasons:</p>
<pre><code>CAML_LD_LIBRARY_PATH '/home/readyready15728/.opam/default/lib/stublibs' '/home/readyready15728/.opam/default/lib/ocaml/stublibs' '/home/readyready15728/.opam/default/lib/ocaml'
CLUTTER_BACKEND x11
CLUTTER_IM_MODULE ibus
CMD_DURATION 0
COLORTERM truecolor
COLUMNS 226
DBUS_SESSION_BUS_ADDRESS unix:path=/run/user/1000/bus
DESKTOP_SESSION xubuntu
DISPLAY :0.0
FISH_VERSION 3.3.1
GPG_AGENT_INFO /run/user/1000/gnupg/S.gpg-agent:0:1
GTK3_MODULES xapp-gtk3-module
GTK_IM_MODULE ibus
GTK_MODULES gail:atk-bridge
GTK_OVERLAY_SCROLLING 0
HOME /home/readyready15728
IFS \n\ \t
LANG en_US.UTF-8
LINES 53
LOGNAME readyready15728
LS_COLORS 'rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:mi=00:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arc=01;31:*.arj=01;31:*.taz=01;31:*.lha=01;31:*.lz4=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.tzo=01;31:*.t7z=01;31:*.zip=01;31:*.z=01;31:*.dz=01;31:*.gz=01;31:*.lrz=01;31:*.lz=01;31:*.lzo=01;31:*.xz=01;31:*.zst=01;31:*.tzst=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.alz=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.cab=01;31:*.wim=01;31:*.swm=01;31:*.dwm=01;31:*.esd=01;31:*.jpg=01;35:*.jpeg=01;35:*.mjpg=01;35:*.mjpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.webp=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.m4a=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.oga=00;36:*.opus=00;36:*.spx=00;36:*.xspf=00;36:'
MANPATH '' '/home/readyready15728/.opam/default/man'
OCAML_TOPLEVEL_PATH /home/readyready15728/.opam/default/lib/toplevel
OMF_CONFIG /home/readyready15728/.config/omf
OMF_INVALID_ARG 3
OMF_MISSING_ARG 1
OMF_PATH /home/readyready15728/.local/share/omf
OMF_UNKNOWN_ERR 4
OMF_UNKNOWN_OPT 2
OPAM_SWITCH_PREFIX /home/readyready15728/.opam/default
PAM_KWALLET5_LOGIN /run/user/1000/kwallet5.socket
PANEL_GDK_CORE_DEVICE_EVENTS 0
PATH '/home/readyready15728/programming/python/python-testing-with-pytest/venv/bin' '/home/readyready15728/.opam/default/bin' '/home/readyready15728/altera/13.0sp1/quartus' '/home/readyready15728/altera/13.0sp1/nios2eds/bin' '/home/readyready15728/altera/13.0sp1/quartus/bin' '/usr/local/sbin' '/usr/local/bin' '/usr/sbin' '/usr/bin' '/sbin' '/bin' '/usr/games' '/usr/local/games' '/snap/bin'
PWD /home/readyready15728/programming/python/python-testing-with-pytest/code/ch2
QT_ACCESSIBILITY 1
QT_IM_MODULE ibus
QT_QPA_PLATFORMTHEME gtk2
SESSION_MANAGER local/stygies-viii:@/tmp/.ICE-unix/1334,unix/stygies-viii:/tmp/.ICE-unix/1334
SHELL /usr/bin/fish
SHLVL 2
SSH_AGENT_PID 1454
SSH_AUTH_SOCK /run/user/1000/keyring/ssh
TERM xterm-256color
TERM_PROGRAM tmux
TERM_PROGRAM_VERSION 3.2a
TMUX /tmp/tmux-1000/default,86921,0
TMUX_PANE '%7'
TMUX_PLUGIN_MANAGER_PATH /home/readyready15728/.tmux/plugins/
USER readyready15728
VIRTUAL_ENV /home/readyready15728/programming/python/python-testing-with-pytest/venv
VIRTUAL_ENV_PROMPT '(venv) '
VTE_VERSION 6800
WINDOWID 71303171
XAUTHORITY /home/readyready15728/.Xauthority
XDG_CONFIG_DIRS /etc/xdg/xdg-xubuntu:/etc/xdg
XDG_CURRENT_DESKTOP XFCE
XDG_DATA_DIRS /usr/share/xubuntu:/usr/share/xfce4:/home/readyready15728/.local/share/flatpak/exports/share:/var/lib/flatpak/exports/share:/usr/local/share:/usr/share
XDG_MENU_PREFIX xfce-
XDG_RUNTIME_DIR /run/user/1000
XDG_SEAT seat0
XDG_SEAT_PATH /org/freedesktop/DisplayManager/Seat0
XDG_SESSION_CLASS user
XDG_SESSION_DESKTOP XFCE
XDG_SESSION_ID 3
XDG_SESSION_PATH /org/freedesktop/DisplayManager/Session3
XDG_SESSION_TYPE x11
XDG_VTNR 1
XMODIFIERS @im=ibus
_ set
_OLD_FISH_PROMPT_OVERRIDE /home/readyready15728/programming/python/python-testing-with-pytest/venv
_OLD_VIRTUAL_PATH '/home/readyready15728/.opam/default/bin' '/home/readyready15728/altera/13.0sp1/quartus' '/home/readyready15728/altera/13.0sp1/nios2eds/bin' '/home/readyready15728/altera/13.0sp1/quartus/bin' '/usr/local/sbin' '/usr/local/bin' '/usr/sbin' '/usr/bin' '/sbin' '/bin' '/usr/games' '/usr/local/games' '/snap/bin'
__fish_active_key_bindings fish_default_key_bindings
__fish_added_user_paths
__fish_bin_dir /usr/bin
__fish_cd_direction prev
__fish_config_dir /home/readyready15728/.config/fish
__fish_config_interactive_done
__fish_data_dir /usr/share/fish
__fish_help_dir /usr/share/doc/fish
__fish_initialized 3100
__fish_last_bind_mode default
__fish_locale_vars 'LANG' 'LC_ALL' 'LC_COLLATE' 'LC_CTYPE' 'LC_MESSAGES' 'LC_MONETARY' 'LC_NUMERIC' 'LC_TIME'
__fish_ls_color_opt --color=auto
__fish_ls_command ls
__fish_sysconf_dir /etc/fish
__fish_user_data_dir /home/readyready15728/.local/share/fish
dirprev '/media/readyready15728/LIBERET NOS DEUS MACHINARIUS AB IGNORANTIA/Pictures' '/home/readyready15728' '/home/readyready15728/src/vim' '/home/readyready15728/programming/python/python-testing-with-pytest' '/home/readyready15728/programming/python/python-testing-with-pytest/code'
fish_bind_mode default
fish_color_autosuggestion '555' 'brblack'
fish_color_cancel -r
fish_color_command 005fd7
fish_color_comment 990000
fish_color_cwd green
fish_color_cwd_root red
fish_color_end 009900
fish_color_error ff0000
fish_color_escape 00a6b2
fish_color_history_current --bold
fish_color_host normal
fish_color_host_remote yellow
fish_color_match --background=brblue
fish_color_normal normal
fish_color_operator 00a6b2
fish_color_param 00afff
fish_color_quote 999900
fish_color_redirection 00afff
fish_color_search_match 'bryellow' '--background=brblack'
fish_color_selection 'white' '--bold' '--background=brblack'
fish_color_status red
fish_color_user brgreen
fish_color_valid_path --underline
fish_complete_path '/home/readyready15728/.config/fish/completions' '/home/readyready15728/.local/share/omf/pkg/omf/completions' '/etc/fish/completions' '/usr/share/xubuntu/fish/vendor_completions.d' '/usr/share/xfce4/fish/vendor_completions.d' '/home/readyready15728/.local/share/flatpak/exports/share/fish/vendor_completions.d' '/var/lib/flatpak/exports/share/fish/vendor_completions.d' '/usr/local/share/fish/vendor_completions.d' '/usr/share/fish/vendor_completions.d' '/usr/share/fish/completions' '/home/readyready15728/.local/share/fish/generated_completions'
fish_function_path '/home/readyready15728/.config/fish/functions' '/home/readyready15728/.local/share/omf/pkg/omf/functions/compat' '/home/readyready15728/.local/share/omf/pkg/omf/functions/core' '/home/readyready15728/.local/share/omf/pkg/omf/functions/index' '/home/readyready15728/.local/share/omf/pkg/omf/functions/packages' '/home/readyready15728/.local/share/omf/pkg/omf/functions/themes' '/home/readyready15728/.local/share/omf/pkg/omf/functions/bundle' '/home/readyready15728/.local/share/omf/pkg/omf/functions/util' '/home/readyready15728/.local/share/omf/pkg/omf/functions/repo' '/home/readyready15728/.local/share/omf/pkg/omf/functions/cli' '/home/readyready15728/.local/share/omf/pkg/fish-spec/functions' '/home/readyready15728/.local/share/omf/pkg/omf/functions' '/home/readyready15728/.local/share/omf/lib' '/home/readyready15728/.local/share/omf/lib/git' '/home/readyready15728/.local/share/omf/themes/bobthefish' '/home/readyready15728/.local/share/omf/themes/bobthefish/functions' '/etc/fish/functions' '/usr/share/xubuntu/fish/vendor_functions.d' '/usr/share/xfce4/fish/vendor_functions.d' '/home/readyready15728/.local/share/flatpak/exports/share/fish/vendor_functions.d' '/var/lib/flatpak/exports/share/fish/vendor_functions.d' '/usr/local/share/fish/vendor_functions.d' '/usr/share/fish/vendor_functions.d' '/usr/share/fish/functions'
fish_greeting Welcome\ to\ fish,\ the\ friendly\ interactive\ shell\nType\ `help`\ for\ instructions\ on\ how\ to\ use\ fish
fish_handle_reflow 0
fish_key_bindings fish_default_key_bindings
fish_kill_signal 0
fish_killring 'scrapers/' 'misc/docker-deep-dive-2023/' 's2' 'vim' 'git ' 'clean' 'pushd ~/'
fish_pager_color_completion
fish_pager_color_description 'B3A06D' 'yellow'
fish_pager_color_prefix 'white' '--bold' '--underline'
fish_pager_color_progress 'brwhite' '--background=cyan'
fish_pid 87527
fish_user_paths '/home/readyready15728/altera/13.0sp1/quartus' '/home/readyready15728/altera/13.0sp1/nios2eds/bin' '/home/readyready15728/altera/13.0sp1/quartus/bin'
hostname stygies-viii
last_pid 1358117
omf_init_path /home/readyready15728/.local/share/omf/pkg/omf
pipestatus 2
status 2
status_generation 32
umask 0002
version 3.3.1
</code></pre>
<p>You'll notice I made sure to reactivate my venv so that the conditions are the same, but even so <code>PYTHONPATH</code> is not set and nothing else I saw looks like a lead. Like I said, how does pytest "know" where to import the code being tested from?</p>
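<p>In case it helps anyone reproduce my investigation without the file-writing hack: a module's import origin can be located with the standard library's <code>importlib</code>. Here is a sketch using the stdlib <code>json</code> package as a stand-in for <code>cards</code> (running it against <code>cards</code> from inside <code>ch2/</code> is what I would presumably want to do next):</p>

```python
import importlib.util

def locate(module_name):
    """Return the filesystem path a module would be imported from, or None."""
    spec = importlib.util.find_spec(module_name)
    return spec.origin if spec else None

# Using the stdlib `json` package as a stand-in for `cards`:
print(locate("json"))
```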
|
<python><pytest>
|
2024-01-11 22:48:24
| 1
| 566
|
readyready15728
|
77,803,436
| 1,267,363
|
Python conditional replacement based on element type
|
<p>In Python 3.7 I am trying to remove pipe characters</p>
<pre><code>r=(('ab|cd', 1, 'ab|cd', 1), ('ab|cd', 1, 'ab|cd', 1))
[[x.replace('|', '') for x in l] for l in r]
</code></pre>
<p>Error: 'int' object has no attribute 'replace'</p>
<p>Desired outcome:
(('abcd', 1, 'abcd', 1), ('abcd', 1, 'abcd', 1))</p>
<p>How can I skip the replace command on elements that are not strings?</p>
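<p>For clarity, the kind of conditional expression I imagine is needed (only calling <code>replace</code> when the element is a string), though I'm not sure it's the idiomatic way:</p>

```python
r = (('ab|cd', 1, 'ab|cd', 1), ('ab|cd', 1, 'ab|cd', 1))

# Replace '|' only in string elements; leave other types untouched
result = tuple(
    tuple(x.replace('|', '') if isinstance(x, str) else x for x in inner)
    for inner in r
)
print(result)  # (('abcd', 1, 'abcd', 1), ('abcd', 1, 'abcd', 1))
```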
|
<python>
|
2024-01-11 22:42:16
| 1
| 8,126
|
davidjhp
|
77,803,371
| 11,618,586
|
Normalizing a monotonically increasing function and calculate std
|
<p>I have an increasing function like so:</p>
<p><a href="https://i.sstatic.net/7KKJN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7KKJN.png" alt="image" /></a></p>
<p>I plan on breaking it up into intervals (between the red lines).
I want to rotate each segment to be horizontal and calculate the standard deviation.</p>
<p>I know this might seem silly, but I essentially want to calculate the variation after normalizing the increasing ramp per segment.
What method can I use to achieve this?</p>
<p>My initial thought is to calculate the slope and draw a line from the beginning to the end of the segment with that slope, then calculate the delta of each data point with respect to that line.</p>
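<p>To make that initial thought concrete, here is a rough numpy sketch for a single segment (the data values are my own invention):</p>

```python
import numpy as np

# Hypothetical segment of a monotonically increasing signal
y = np.array([1.0, 2.1, 2.9, 4.2, 5.0])
x = np.arange(len(y))

# Line from the first to the last point of the segment
slope = (y[-1] - y[0]) / (x[-1] - x[0])
line = y[0] + slope * x

# Deltas of each data point with respect to that line, and their spread
residuals = y - line
print(residuals.std())
```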
|
<python><statistics><analytics>
|
2024-01-11 22:24:38
| 1
| 1,264
|
thentangler
|
77,803,362
| 1,171,899
|
SSH command times out the 6th time it calls after working correctly 5 times
|
<p>I have Python code in a script like this -</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE, CalledProcessError

# ql() is my own logging helper
def run_ssh_cmd(host, cmd):
global i
i += 1
ql(f"Running cmd {i}: {cmd}", "info", host)
cmds = ["ssh", "-t", "-p", "22", host, cmd]
process = Popen(cmds, stdout=PIPE, stderr=PIPE, stdin=PIPE)
stdout, stderr = process.communicate()
# Check if the command was successful
if process.returncode != 0:
ql(f"Error occurred on cmd {cmd}: {stderr}", "error", host)
raise CalledProcessError(process.returncode, cmds, stderr)
</code></pre>
<p>This code gives a timeout error on the 6th command, every time, when I run it from my PC. The error states "ssh: connect to host sub1.domain.io port 22: Connection timed out"</p>
<ul>
<li>I can ssh into the domain without any issue and the first 5 times it runs fine</li>
<li>My colleague can run this script without any issues</li>
<li>I can even run this command on other servers on a different domain without any issue. It's only when I try to hit one of the subdomains of "domain.io" that this happens</li>
<li>It always works 5 times then times out on the 6th regardless of the command</li>
<li>I checked the ssh logs on the server and there's nothing strange or different there</li>
<li>I don't have any .ssh\config file</li>
</ul>
<p><strong>Why would my computer consistently get a connection timeout after 5 successful commands?</strong></p>
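<p>For what it's worth, the debugging variant I plan to try next builds the command with ssh's own diagnostics turned on (my assumption being that <code>-v</code> and <code>-o ConnectTimeout</code> will expose where the connection stalls instead of hanging for the OS default):</p>

```python
# Hypothetical host/command, mirroring the failing call
host = "sub1.domain.io"
cmd = "uptime"

# -v makes ssh log each stage of the connection attempt;
# ConnectTimeout fails fast instead of waiting for the OS default.
cmds = ["ssh", "-v", "-o", "ConnectTimeout=10", "-t", "-p", "22", host, cmd]
print(" ".join(cmds))
```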
|
<python><windows><ssh>
|
2024-01-11 22:21:40
| 1
| 3,463
|
Kyle H
|
77,803,295
| 6,303,377
|
How to handle default values in parents of python dataclasses
|
<p>This is a simplified version of my code:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field


@dataclass
class _AbstractDataTableClass:
    """All dataclasses should have a _unique_fields attribute to identify
    the values in that table that must be unique. This assumption simplifies
    type hints for a lot of the implemented methods.
    """

    _unique_fields: list[str]


@dataclass
class Order(_AbstractDataTableClass):
    order_id: str
    _unique_fields: list[str] = field(default_factory=lambda: ["order_id"])
</code></pre>
<p>However, when executing I get the error <code>Attributes without a default cannot follow attributes with one</code>. I read this <a href="https://stackoverflow.com/questions/51575931/class-inheritance-in-python-3-7-dataclasses">Stack Overflow post</a> and understand the problem, but I'm not sure how to solve it in my case.
I just want to indicate that there is a group of classes, defined by the parent class, of which there will be different subclasses, and that Python can always expect to find an attribute/property called <code>_unique_fields</code>. What's the best way to do that?</p>
<p>The only work-around I found so far is this, but I don't have the feeling that this is the most pythonic way.</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass, field


@dataclass
class _AbstractDataTableClass:
    @property
    def _unique_fields(self) -> list[str]:
        """Property to access the _unique_fields.

        Defined as a property to be able to handle issues related to
        non-default values in inheritance.
        """
        return []


@dataclass
class Order(_AbstractDataTableClass):
    order_id: str
    _unique_fields: list[str] = field(default_factory=lambda: ["order_id"])
</code></pre>
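<p>For completeness, another workaround I considered (untested beyond this sketch) is giving the parent's <code>_unique_fields</code> a default, at the cost of forcing defaults onto every subclass field:</p>

```python
from dataclasses import dataclass, field


@dataclass
class _AbstractDataTableClass:
    # A default in the parent avoids the ordering error...
    _unique_fields: list[str] = field(default_factory=list)


@dataclass
class Order(_AbstractDataTableClass):
    # ...but now every subclass field needs a default too
    order_id: str = ""
    _unique_fields: list[str] = field(default_factory=lambda: ["order_id"])


print(Order(order_id="abc")._unique_fields)  # ['order_id']
```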
<p>Thanks a lot in advance!</p>
|
<python><python-3.x><python-dataclasses>
|
2024-01-11 22:05:06
| 1
| 1,789
|
Dominique Paul
|
77,802,979
| 2,977,092
|
How to draw a horizontal line at y=0 in an Altair line chart
|
<p>I'm creating a line chart using Altair. I have a DataFrame where my y-values move up and down around 0, and I'd like to add a thick line to mark y=0. Sounds easy enough, so I tried this:</p>
<pre><code># Add a horizontal line at y=0 to clearly distinguish between positive and negative values.
y_zero = alt.Chart().mark_rule().encode(
y=alt.value(0),
color=alt.value('black'),
size=alt.value(10),
)
</code></pre>
<p>This indeed draws a horizontal line, but it gets drawn at the very top of my chart. It seems Altair uses a coordinate system where (0,0) is at the top-left corner. How do I move my line to my data's y=0 position?</p>
<p>Thanks!</p>
|
<python><altair>
|
2024-01-11 20:53:12
| 2
| 739
|
luukburger
|
77,802,815
| 15,781,591
|
How to reverse order of legend for horizontal stacked bar chart in seaborn?
|
<p>I have the following code that generates a horizontal stacked bar chart in python, using seaborn.</p>
<pre><code>sns.set_style('white')
ax = sns.histplot(
data=temp, y='tmp', weights='Weight', hue=field,
multiple='stack', palette='viridis', shrink=0.75
)
ax.set_xlabel('Weight')
ax.set_ylabel('{}_{}'.format(df, attribute))
ax.set_title('plot distribution')
sns.move_legend(ax, loc='upper left', bbox_to_anchor=(1, 0.97))
sns.despine()
plt.tight_layout()
plt.show()
</code></pre>
<p>This generates the following horizontal stacked bar chart:
<a href="https://i.sstatic.net/08vAM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/08vAM.png" alt="enter image description here" /></a></p>
<p>I would like to reverse the order of the legend, so that 10.0 is on top and 0.0 is on the bottom (descending order), reflecting the stacking order of the plot itself more intuitively.</p>
<p>I have tried the following:</p>
<pre><code>handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[::-1], labels[::-1], title='Line', loc='upper left')
</code></pre>
<p>referenced from this Stackoverflow post: <a href="https://stackoverflow.com/questions/34576059/reverse-the-order-of-a-legend">Reverse the order of a legend</a></p>
<p>But all this did was just remove the legend.</p>
<p>How can I properly reverse the order of the legend categories?</p>
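<p>To double-check, here is a minimal matplotlib-only sketch of the linked approach (no seaborn), which does reverse the legend on a plain bar plot, so I suspect the issue is specific to how seaborn's <code>histplot</code> registers its legend:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
for label in ["0.0", "5.0", "10.0"]:
    ax.bar([label], [1], label=label)

# The approach from the linked answer:
handles, labels = ax.get_legend_handles_labels()
ax.legend(handles[::-1], labels[::-1], title="Line")
print(labels[::-1])  # ['10.0', '5.0', '0.0']
```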
|
<python><matplotlib><seaborn><legend>
|
2024-01-11 20:20:55
| 2
| 641
|
LostinSpatialAnalysis
|
77,802,806
| 12,397,582
|
Decompressing large streams with Python tarfile
|
<p>I have a large <code>.tar.xz</code> file, downloaded with Python requests, that needs to be decompressed before writing to disk (due to limited disk space). I have a solution which works for smaller files, but larger files hang indefinitely.</p>
<pre><code>import io
import requests
import tarfile
session = requests.Session()
response = session.get(url, stream=True)
compressed_data = io.BytesIO(response.content)
tar = tarfile.open(mode='r|*' ,fileobj=compressed_data, bufsize=16384)
tar.extractall(path='/path/')
</code></pre>
<p>It hangs at <code>io.BytesIO</code> for larger files.</p>
<p>Is there a way to pass the stream to <code>fileobj</code> without reading the entire stream, or is there a better approach to this?</p>
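<p>For reference, here is a self-contained sketch of the streaming pattern I think I need: <code>mode='r|*'</code> reads sequentially from a file-like object, so (my assumption) <code>response.raw</code> with <code>stream=True</code> could take the place of the in-memory buffer below:</p>

```python
import io
import tarfile

# Build a small .tar.xz in memory as a stand-in for the downloaded stream
payload = io.BytesIO()
with tarfile.open(fileobj=payload, mode="w:xz") as tar:
    data = b"hello"
    info = tarfile.TarInfo(name="hello.txt")
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))
payload.seek(0)

# Sequential-read mode ("r|*") never seeks backwards, so it can consume
# a non-seekable stream without buffering the whole archive first
extracted = {}
with tarfile.open(fileobj=payload, mode="r|*") as tar:
    for member in tar:
        extracted[member.name] = tar.extractfile(member).read()
print(extracted)  # {'hello.txt': b'hello'}
```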
|
<python><python-3.x><python-requests><tarfile>
|
2024-01-11 20:19:13
| 2
| 683
|
Spiff
|
77,802,755
| 200,304
|
Python: recursively import all files in a directory
|
<p>I want to do something like this:</p>
<pre><code>myprog --tests-dir /my/dir
</code></pre>
<p>which will recursively traverse <code>/my/dir</code>, find all Python files, and import them. An extra challenge is that there may be dependencies between them.</p>
<p>Example. If the files are like this:</p>
<pre><code>/my/dir/a/__init__.py
/my/dir/a/x.py (imports a.y)
/my/dir/a/y.py (imports a)
/my/dir/a/b/__init__.py
/my/dir/a/b/z.py
</code></pre>
<p>this should result in:</p>
<pre><code>import a
import a.y
import a.x
import a.b
import a.b.z
</code></pre>
<p>Maybe I could use <a href="https://docs.python.org/3/library/importlib.html#importlib.machinery.PathFinder" rel="nofollow noreferrer">PathFinder</a> somehow, but the details escape me.</p>
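<p>The closest I've gotten is a sketch with <code>pkgutil.walk_packages</code>, which seems to handle the recursion, though I haven't verified it gets the dependency ordering right for the layout above:</p>

```python
import importlib
import pkgutil
import sys

def import_all(tests_dir):
    """Recursively import every package/module found under tests_dir."""
    sys.path.insert(0, tests_dir)
    # walk_packages yields dotted names like "a", "a.x", "a.b", "a.b.z"
    for info in pkgutil.walk_packages([tests_dir]):
        importlib.import_module(info.name)
```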
|
<python>
|
2024-01-11 20:07:22
| 1
| 3,245
|
Johannes Ernst
|
77,802,458
| 44,330
|
Functional equivalent in pandas to assigning one or more elements in a series
|
<p>I am wondering if there is a functional but efficient equivalent to the following mutative code:</p>
<pre><code>import pandas as pd
s = pd.Series(...)
s[k] = v # I don't want to mutate s
</code></pre>
<p>so that the operation returns a new series <code>s2</code> with the mutation. I can use <code>copy()</code> and then mutate; for example:</p>
<pre><code>import pandas as pd
import numpy as np
t = np.arange(100)
s = pd.Series(t*t,t)
s2 = s.copy()
s2[[3,4]] = [44,55]
s2
</code></pre>
<p>but is there a way to do it in one swoop and return a new Series without changing the existing series, like with <code>apply</code> or <code>map</code> or <code>replace</code>... but here I know the indices and the values I want to change.</p>
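<p>The closest single expression I've found so far is overlaying the new values with <code>combine_first</code>, though it can change the dtype and I'm not sure it's idiomatic:</p>

```python
import numpy as np
import pandas as pd

t = np.arange(100)
s = pd.Series(t * t, t)

# The new series takes precedence where its index overlaps; s is untouched
# (note: alignment may upcast the result to float)
s2 = pd.Series([44, 55], index=[3, 4]).combine_first(s)
print(s2.loc[3], s2.loc[4], s.loc[3], s.loc[4])
```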
|
<python><pandas>
|
2024-01-11 19:05:45
| 1
| 190,447
|
Jason S
|
77,802,033
| 15,671,866
|
C program and subprocess
|
<p>I wrote this simple C program to illustrate a harder problem with the same characteristics.</p>
<pre><code>#include <stdio.h>
int main(int argc, char *argv[])
{
int n;
while (1){
scanf("%d", &n);
printf("%d\n", n);
}
return 0;
}
</code></pre>
<p>and it works as expected.</p>
<p>I also wrote a subprocess script to interact with this program:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE, STDOUT
process = Popen("./a.out", stdin=PIPE, stdout=PIPE, stderr=STDOUT)
# sending a byte
process.stdin.write(b'3')
process.stdin.flush()
# reading the echo of the number
print(process.stdout.readline())
process.stdin.close()
</code></pre>
<p>The problem is that, if I run my python script, the execution is freezed on the <code>readline()</code>. In fact, if I interrupt the script I get:</p>
<pre><code>/tmp » python script.py
^CTraceback (most recent call last):
File "/tmp/script.py", line 10, in <module>
print(process.stdout.readline())
^^^^^^^^^^^^^^^^^^^^^^^^^
KeyboardInterrupt
</code></pre>
<p>If I change my python script in:</p>
<pre class="lang-py prettyprint-override"><code>from subprocess import Popen, PIPE, STDOUT
process = Popen("./a.out", stdin=PIPE, stdout=PIPE, stderr=STDOUT)
with process.stdin as pipe:
pipe.write(b"3")
pipe.flush()
# reading the echo of the number
print(process.stdout.readline())
# sending another num:
pipe.write(b"4")
pipe.flush()
process.stdin.close()
</code></pre>
<p>I got this output:</p>
<pre><code>» python script.py
b'3\n'
Traceback (most recent call last):
File "/tmp/script.py", line 13, in <module>
pipe.write(b"4")
ValueError: write to closed file
</code></pre>
<p>That means that the first input is sent correctly, and also the read was done.</p>
<p>I really can't find anything that explains this behaviour; can someone help me understand it?
Thanks in advance</p>
<p><strong>[EDIT]</strong>: since there are many points to clarify, I added this edit. I'm training on exploitation of a buffer overflow vulnerability using the <code>rop</code> technique and I'm writing a Python script to achieve that. To exploit this vulnerability, because of ASLR, I need to discover the <code>libc</code> address and make the program restart without terminating. Since the script will be executed on a target machine, I don't know which libraries will be available, so I'm going to use subprocess because it's built into Python. Without going into details, the attack sends a sequence of <strong>bytes</strong> on the first <code>scanf</code> aimed at leaking the <code>libc</code> base address and restarting the program; then a second payload is sent to obtain a shell with which I will communicate in interactive mode.</p>
<p>That's why:</p>
<ol>
<li>I can only use built-in libraries</li>
<li>I have to send bytes and cannot append an ending <code>\n</code>: my payload would not be aligned, or it may lead to failures</li>
<li>I need to keep <code>stdin</code> open</li>
<li>I cannot change the C-code</li>
</ol>
|
<python><c><subprocess>
|
2024-01-11 17:38:08
| 2
| 585
|
ma4stro
|
77,801,604
| 12,771,298
|
Jupyer Notebook not hiding cells even with the hide-input tag
|
<p>I realize there are plenty of questions about hiding Jupyter Notebook cells, but even following all the instructions I can find, Jupyter still refuses to hide the cell.</p>
<p>I've added the 'hide-input' tag to my cell, and this is what the metadata looks like:</p>
<pre><code>{
"trusted": true,
"tags": [
"hide-input"
]
}
</code></pre>
<p>Is there some sort of trigger to implement the tag other than running the cell itself? Why won't it hide the input?</p>
<p>My version is 6.0.3, and as far as I can tell tags were implemented in the Version 5 range. I can't see why it isn't working for me.</p>
|
<python><jupyter-notebook><jupyter>
|
2024-01-11 16:26:56
| 1
| 375
|
Petra
|
77,801,556
| 10,200,497
|
How can I create a list of range of numbers as a column of dataframe?
|
<p>My DataFrame is:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(
{
'a': [20, 100],
'b': [2, 3],
'dir': ['long', 'short']
}
)
</code></pre>
<p>Expected output: Creating column <code>x</code>:</p>
<pre><code> a b dir x
0 20 2 long [22, 24, 26]
1 100 3 short [97, 94, 91]
</code></pre>
<p>Steps:</p>
<p><code>x</code> is a list of length 3. <code>a</code> is the starting point of <code>x</code>, and <code>b</code> is the step by which it increases/decreases, depending on <code>dir</code>. If <code>df.dir == 'long'</code>, <code>x</code> ascends; otherwise it descends.</p>
<p>My Attempt based on this <a href="https://stackoverflow.com/a/45233237/10200497">answer</a>:</p>
<pre><code>df['x'] = np.arange(0, 3) * df.b + df.a
</code></pre>
<p>Which does not produce the expected output.</p>
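<p>After more fiddling, building on the linked answer, I think the missing pieces are a per-row sign derived from <code>dir</code> and 2-D broadcasting; my current sketch is below, but I'm unsure it's the right or idiomatic approach:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'a': [20, 100], 'b': [2, 3], 'dir': ['long', 'short']})

sign = np.where(df['dir'].eq('long'), 1, -1)          # +1 ascend, -1 descend
steps = np.arange(1, 4)                               # three steps after a
x = (df['a'].to_numpy()[:, None]
     + (sign * df['b'].to_numpy())[:, None] * steps)  # shape (2, 3)
df['x'] = x.tolist()
print(df)
```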
|
<python><pandas><dataframe>
|
2024-01-11 16:20:15
| 4
| 2,679
|
AmirX
|
77,801,481
| 534,238
|
How to use both `with_outputs` and `with_output_types` in Apache Beam (Python SDK)?
|
<p>An Apache Beam <code>PTransform</code> can have <code>with_outputs</code> and <code>with_output_types</code> appended to it. Eg,</p>
<pre class="lang-py prettyprint-override"><code>pcoll | beam.CombinePerKey(sum).with_output_types(typing.Tuple[unicode, int])
</code></pre>
<p>and</p>
<pre class="lang-py prettyprint-override"><code>(words | beam.ParDo(ProcessWords(), cutoff_length=2, marker='x')
.with_outputs('above_cutoff_lengths', 'marked strings',
main='below_cutoff_strings')
)
</code></pre>
<p>(Both of these examples are taken from <a href="https://beam.apache.org/documentation/programming-guide/" rel="nofollow noreferrer">Apache Beam documentation</a>, if you want some context.)</p>
<p>But I cannot seem to find any documentation on how to <em>combine</em> them. For instance, <em>can I do something like this</em>?</p>
<pre class="lang-py prettyprint-override"><code>(words | beam.ParDo(ProcessWords(), cutoff_length=2, marker='x')
.with_outputs('above_cutoff_lengths', 'marked strings',
main='below_cutoff_strings')
.with_output_types(str, IndexError, str)
)
</code></pre>
|
<python><apache-beam>
|
2024-01-11 16:08:34
| 1
| 3,558
|
Mike Williamson
|
77,801,321
| 2,437,514
|
Undo operation in Xlwings
|
<p>Does Xlwings have an undo operation?</p>
<p>I have searched the documentation and cannot find anything.</p>
<pre class="lang-py prettyprint-override"><code>import xlwings as xw
wb_name = "my_wb.xlsx"
app = xw.App(visible=True)
wb: xw.Book = app.books.open(wb_name )
sh: xw.Sheet = wb.sheets[0]
sh.range("A1").value = 1
# undo here
xw.apps.active.quit()
</code></pre>
<p>I know I can manually implement an undo above by capturing the previous cell contents and reapplying the contents, but I'm looking for general access to the built-in Excel undo operation.</p>
|
<python><xlwings>
|
2024-01-11 15:44:13
| 1
| 45,611
|
Rick
|
77,801,279
| 5,072,692
|
pandas set_table_styles not working for table tag
|
<p>I am using the following code to set a border on the table and apply some formatting to the header row. The styling for the header works fine, but the code doesn't render the border for the table.</p>
<pre><code># colorize is my own per-cell styling function
styled_df = df.style.applymap(colorize).set_table_styles([
{'selector': 'table',
'props': 'border: 1; border-collapse: collapse; font-size: 13px;'
},
{'selector': 'thead',
'props': 'background-color : #3176B5; color: white; padding: 4px;'
}
])
</code></pre>
|
<python><pandas><dataframe><css-selectors>
|
2024-01-11 15:38:32
| 2
| 955
|
Adarsh Ravi
|
77,801,188
| 3,433,875
|
Get metadata from ttf font files in windows with python and fontTools library
|
<p>To be able to quickly find the font I want (windows only for now) I want to create a list I can query to find the font name, family and styles available for the font.</p>
<p>Using fontTools' <code>ttLib</code>, I am able to create the list, but when the font styles are not part of the file name, I am not able to get them.</p>
<p>For example, I get the styles for the Arial font because each style has its own file:
<a href="https://i.sstatic.net/1Yxy1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Yxy1.png" alt="Arial styles" /></a></p>
<p>But for Cascadia Code, one ttf file contains multiple styles:
<a href="https://i.sstatic.net/b8fme.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/b8fme.png" alt="Cascadia ttf file" /></a></p>
<p>I can see the styles on the windows app:
<a href="https://i.sstatic.net/a9mcM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/a9mcM.png" alt="cascadia styles" /></a></p>
<p>I found a code that does work for some fonts, but not all:</p>
<pre><code>from fontTools import ttLib

path = "C:\\Windows\\Fonts\\CascadiaCode.ttf"
font = ttLib.TTFont(path)
family = font["name"].getDebugName(1)  # family name; this line was missing in the snippet I found
for instance in font["fvar"].instances:
    style = font["name"].getName(instance.subfamilyNameID, 3, 1, 0x409)
    print(f"{family} {style}")
</code></pre>
<p>Which returns the right styles:</p>
<pre><code>Cascadia Code ExtraLight
Cascadia Code Light
Cascadia Code SemiLight
Cascadia Code Regular
Cascadia Code SemiBold
Cascadia Code Bold
</code></pre>
<p>But if I use it with other fonts, e.g. Arial, it returns an error. I am not sure why.</p>
<p>Here is my current code:</p>
<pre><code>import os
import pandas as pd
from fontTools import ttLib
paths = []
for file in os.listdir(r'C:\Windows\fonts'):
if file.endswith('.ttf'):
font_path = os.path.join("C:\\Windows\\Fonts\\", file)
paths.append(font_path)
table = []
for p in paths:
font = ttLib.TTFont(str(p))
fontFamilyName = font['name'].getDebugName(1)
style = font['name'].getDebugName(2)
fullName= font['name'].getDebugName(3)
table.append({'family':fontFamilyName, 'name':fullName, 'style':style, 'path': p})
df=pd.DataFrame(table, columns = ['family', 'name', 'style', 'path'])
df['name'] = df['name'].str.split(':').str[-1]
df = df.drop_duplicates()
df.head()
</code></pre>
<p>Any ideas? Thanks.</p>
|
<python><python-3.x><ttx-fonttools>
|
2024-01-11 15:24:25
| 1
| 363
|
ruthpozuelo
|
77,801,162
| 11,807,683
|
Applying islice to ijson, getting list of lists when applied?
|
<p>I'm trying to apply <code>islice</code> over a huge JSON file which contains around 180k documents.</p>
<p>The file is, as an example:</p>
<pre><code>[
{"propertyA": "abc"},
{"propertyB": "bcd"},
...
]
</code></pre>
<p>As of now I'm doing <code>islice(ijson.items(file, prefix=''), 10000)</code> and getting OOMs, and when checking what <code>ijson.items(file, prefix='')</code> returns for the first element doing <code>ijson.items(file, prefix='').__next__()</code>, the result is as follows:</p>
<pre><code>[ {doc1}, {doc2}, {doc3}, ... ]
</code></pre>
<p>The JSON file I'm reading is not a list of lists, just a single list of documents, so why am I getting a list of lists whose first item is the whole content of the file? Shouldn't I be getting just <code>{doc1}</code> from <code>__next__()</code>? Am I using <code>ijson</code> incorrectly, such that it wraps the file in yet another list?</p>
|
<python><json>
|
2024-01-11 15:22:02
| 1
| 591
|
czr_RR
|
77,800,955
| 3,702,377
|
How to use Depends or similar thing as dependency injection outside of FastAPI request methods?
|
<p>Can anybody tell me how to use dependency injection for my <code>get_db()</code> outside of the FastAPI routers methods? Apparently, <code>Depends()</code> only covers DI in request functions.</p>
<p>Here's the <code>get_db()</code> async generator:</p>
<pre class="lang-py prettyprint-override"><code>async def get_db() -> AsyncGenerator[AsyncSession, None]:
async with async_session() as session:
yield session
</code></pre>
<p>In the FastAPI router, I can simply use <code>Depends()</code> like this:</p>
<pre class="lang-py prettyprint-override"><code>@router.get("/interactions", response_model=List[schemas.Interaction])
async def get_all_interactions(db: Annotated[AsyncSession, Depends(get_db)]) -> List[schemas.Interaction]:
interactions = await crud.get_interactions(db=db)
return [
schemas.Interaction.model_validate(interaction) for interaction in interactions
]
</code></pre>
<p>Now, outside of a request, how can I inject <code>get_db</code> into a new method and get rid of the <code>async for</code> inside the method?</p>
<pre class="lang-py prettyprint-override"><code>@cli.command(name="create_superuser")
async def create_superuser(): # Note: how to pass db session here as param?
username = click.prompt("Username", type=str)
email = click.prompt("Email (optional)", type=str, default="")
password = getpass("Password: ")
confirm_password = getpass("Confirm Password: ")
if password != confirm_password:
click.echo("Passwords do not match")
return
async for db in database.get_db(): # Note: remove it from here
user = schemas.UserAdminCreate(
username=username,
email=None if not email else email,
password=password,
role="admin",
)
await crud.create_user(db=db, user=user)
</code></pre>
<hr />
<p>PS: The reason for this requirement, is that, I'm going to write a test case for the <code>create_superuser()</code> function, which has its own database and respective session so it would be beneficial to me to inject the session db within any methods.</p>
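One pattern worth sketching (with a stand-in <code>FakeSession</code>, since the real <code>async_session</code> isn't shown here) is to wrap the same generator dependency with <code>contextlib.asynccontextmanager</code>, so it can be consumed with <code>async with</code> outside of FastAPI while <code>Depends(get_db)</code> keeps working in the routers:

```python
import asyncio
from contextlib import asynccontextmanager
from typing import AsyncGenerator


class FakeSession:
    """Stand-in for the real AsyncSession."""
    async def close(self) -> None:
        pass


async def get_db() -> AsyncGenerator[FakeSession, None]:
    session = FakeSession()
    try:
        yield session
    finally:
        await session.close()


# The same dependency, usable as an async context manager anywhere:
get_db_context = asynccontextmanager(get_db)


async def create_superuser() -> str:
    async with get_db_context() as db:  # no `async for` needed
        return type(db).__name__


result = asyncio.run(create_superuser())
```

For tests, the session could instead be passed as an explicit parameter that defaults to <code>None</code>, opening the context manager only when no session was injected.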
|
<python><dependency-injection><python-asyncio><fastapi><depends>
|
2024-01-11 14:53:56
| 2
| 35,654
|
Benyamin Jafari
|
77,800,759
| 9,212,313
|
Calling psutil.Process() fails when used inside of Docker container on Jenkins
|
<p>When running <code>psutil.Process()</code> in Docker while using a Jenkins runner, I get a <code>psutil.NoSuchProcess: process PID not found</code> error; the full error trace is below.</p>
<p>The interesting thing is that this works in Docker/Podman outside of Jenkins runner (tested with various versions of both Docker and Podman on Linux/Windows/MacOS) and it also works with different versions of <code>psutil</code> package. I always used Debian based docker images (python:3.11.7-bookworm and similar).</p>
<p>The only situation where I consistently have this error is in Jenkins pipeline.</p>
<p>Python script used for testing is just this:</p>
<pre><code>import psutil
p = psutil.Process()
</code></pre>
<p>An example of Jenkinsfile where this is tested is the following:</p>
<pre><code>pipeline {
agent {
label 'Linux'
}
stages {
stage("Test") {
steps {
script {
sh '''
podman run -dit --name test --rm docker_image
podman exec -it test /bin/bash -c "python test.py"
'''
}
}
}
}
}
</code></pre>
<p>Here is the complete error trace:</p>
<pre><code>Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/psutil/_pslinux.py", line 1643, in wrapper
return fun(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_common.py", line 486, in wrapper
raise raise_from(err, None)
^^^^^^^^^^^^^^^^^^^^^
File "<string>", line 3, in raise_from
File "/usr/local/lib/python3.11/site-packages/psutil/_common.py", line 484, in wrapper
return fun(self)
^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_pslinux.py", line 1705, in _parse_stat_file
data = bcat("%s/%s/stat" % (self._procfs_path, self.pid))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_common.py", line 820, in bcat
return cat(fname, fallback=fallback, _open=open_binary)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_common.py", line 808, in cat
with _open(fname) as f:
^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_common.py", line 772, in open_binary
return open(fname, "rb", buffering=FILE_READ_BUFFER_SIZE)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
FileNotFoundError: [Errno 2] No such file or directory: '/proc/2/stat'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.11/site-packages/psutil/__init__.py", line 350, in _init
self.create_time()
File "/usr/local/lib/python3.11/site-packages/psutil/__init__.py", line 735, in create_time
self._create_time = self._proc.create_time()
^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_pslinux.py", line 1643, in wrapper
return fun(self, *args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_pslinux.py", line 1870, in create_time
ctime = float(self._parse_stat_file()['create_time'])
^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/_pslinux.py", line 1652, in wrapper
raise NoSuchProcess(self.pid, self._name)
psutil.NoSuchProcess: process no longer exists (pid=2)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "test.py", line 3, in <module>
p = psutil.Process()
^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/psutil/__init__.py", line 313, in __init__
self._init(pid)
File "/usr/local/lib/python3.11/site-packages/psutil/__init__.py", line 362, in _init
raise NoSuchProcess(pid, msg='process PID not found')
psutil.NoSuchProcess: process PID not found (pid=2)
</code></pre>
|
<python><linux><docker><jenkins>
|
2024-01-11 14:22:24
| 0
| 315
|
robocat314
|
77,800,583
| 4,133,188
|
Converting XGBoost Shapley values to SHAP's Explanation object
|
<p>I am trying to convert XGBoost Shapley values into a SHAP explainer object. Using the example [here][1] with the built-in SHAP library takes days to run (even on a subsampled dataset), while the XGBoost library takes a few minutes. However, I would like to output a beeswarm graph similar to what's displayed in the example [here][2].</p>
<p>My thought was that I could use the XGBoost library to recover the Shapley values and then plot them using the SHAP library, but the beeswarm plot requires an explainer object. How can I convert my XGBoost booster object into an explainer object?</p>
<p>Here's what I tried:</p>
<pre><code>import shap
booster = model.get_booster()
d_test = xgboost.DMatrix(X_test[0:100], y_test[0:100])
shap_values = booster.predict(d_test, pred_contribs=True)
shap.plots.beeswarm(shap_values)
</code></pre>
<p>Which returns:</p>
<pre><code>TypeError: The beeswarm plot requires an `Explanation` object as the `shap_values` argument.
</code></pre>
<p>To clarify, I would like to create the explainer object from values generated by the xgboost built-in library, if possible. Avoiding the shap.Explainer or shap.TreeExplainer calls is a priority because they take much longer (days) to return rather than minutes.
[1]: <a href="https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/Python%20Version%20of%20Tree%20SHAP.html" rel="nofollow noreferrer">https://shap.readthedocs.io/en/latest/example_notebooks/tabular_examples/tree_based_models/Python%20Version%20of%20Tree%20SHAP.html</a>
[2]: <a href="https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/beeswarm.html#A-simple-beeswarm-summary-plot" rel="nofollow noreferrer">https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/beeswarm.html#A-simple-beeswarm-summary-plot</a></p>
|
<python><machine-learning><xgboost><shap>
|
2024-01-11 13:55:27
| 1
| 771
|
BeginnersMindTruly
|
77,800,291
| 9,974,205
|
How can I get sequences of actions from a Pandas dataframe
|
<p>I have a Pandas dataframe in Python that records when Ramirez enters and leaves a building. I also have a list in which all events in the building are registered, from turning on the lights to flushing a toilet or calling an elevator. I have ordered them chronologically as:</p>
<pre><code>Visit 1: C2, C4, C1 None, None
Visit 2: C2, C1, C2, None, None
Visit 3: C1, C3, C1, C2, C3
Visit 4: C1, C2, C3, None, None
</code></pre>
<p>and so on (this means that during Ramirez's first visit, C2 happened first, then C4, and then C1).</p>
<p>I want to find out which events Ramirez is responsible for. To do that, I want to find the most common sequences in the data.
For example, C1, C2 appears during his second, third and fourth visits. C2, C3 occurs during his third and fourth visits.</p>
<p>I need a program to obtain the chains of events ordered from most common to least common; also, C1, C1 or any other repeated pair like that should be ignored.</p>
<p>I would like some advice on how to approach this.</p>
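As a sketch of one possible approach (event lists copied by hand from the visits above), consecutive pairs can be counted with <code>collections.Counter</code>, skipping repeated pairs like C1, C1:

```python
from collections import Counter

visits = [
    ["C2", "C4", "C1"],
    ["C2", "C1", "C2"],
    ["C1", "C3", "C1", "C2", "C3"],
    ["C1", "C2", "C3"],
]

# Count every consecutive pair of events, ignoring X -> X repeats
pair_counts = Counter(
    (x, y)
    for visit in visits
    for x, y in zip(visit, visit[1:])
    if x != y
)
ranked = pair_counts.most_common()  # most frequent chains first
```

The same idea extends to longer chains by zipping three or more shifted copies of each visit.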
|
<python><pandas><statistics><pattern-matching><combinations>
|
2024-01-11 13:05:44
| 1
| 503
|
slow_learner
|
77,800,210
| 832,490
|
Pytest not raising an exception
|
<p>I have the following code</p>
<pre><code>@blueprint.post("/bulk")
@validate()
def bulk(body: BulkRequestBody) -> tuple[BulkResponseBody, int]:
try:
bulk_insert(body.items) # Should raise the exception here.
return BulkResponseBody(ok=True), 200
except: # noqa
pass
return BulkResponseBody(ok=False), 500
</code></pre>
<p>And I have this test for that (other tests works fine):</p>
<pre><code>def test_bulk_500(mocker, client):
# Not sure why the exception is not being raised.
mocker.patch("app.usecase.bulk.bulk_insert", side_effect=Exception) # Should raise the exception, right?
response = client.post(
"/v1/bulk",
json={
"items": [
{
"title": "title",
"uri": "uri",
"date": "2021-01-01",
}
]
},
)
assert response.status_code == 500 # Fails, because status_code is 200.
assert response.json == {"ok": False}
</code></pre>
<p>I tried many other approaches, but could not get the mock to raise an Exception when <code>bulk_insert</code> is called.</p>
<p>What am I doing wrong?</p>
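One thing worth checking: <code>mock.patch</code> replaces a <em>name in a module</em>, not the function object itself. If the route module does <code>from app.usecase.bulk import bulk_insert</code>, patching <code>"app.usecase.bulk.bulk_insert"</code> leaves the route's imported copy untouched; the patch target must be the module where the name is looked up. A self-contained sketch (with throwaway stand-in modules, not the real app) of the difference:

```python
import types
from unittest import mock

# Stand-ins for app.usecase.bulk and the route module
usecase = types.ModuleType("usecase")
def bulk_insert(items):
    return "inserted"
usecase.bulk_insert = bulk_insert

routes = types.ModuleType("routes")
routes.bulk_insert = usecase.bulk_insert  # like `from usecase import bulk_insert`

def handler():
    try:
        routes.bulk_insert([])
        return 200
    except Exception:
        return 500

# Patching the defining module does NOT affect the imported name:
with mock.patch.object(usecase, "bulk_insert", side_effect=Exception):
    status_defining = handler()  # still 200

# Patching where the name is *used* makes the handler fail:
with mock.patch.object(routes, "bulk_insert", side_effect=Exception):
    status_using = handler()  # 500
```

So if the blueprint module imports <code>bulk_insert</code> directly, the patch target would be that module's path rather than <code>app.usecase.bulk</code>.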
|
<python><pytest>
|
2024-01-11 12:54:51
| 1
| 1,009
|
Rodrigo
|
77,800,061
| 7,341,904
|
different output between command execution and subprocess.run
|
<p>I am getting two different outputs for the same command when executing it manually from the command line and from Python.</p>
<p>I am trying to run the command <strong>python --version</strong> from Linux to Windows over ssh, both from the command line and from Python, but I observe two different outputs.
The following code is what I use in Python to call ssh_command:</p>
<pre><code> result = subprocess.run(ssh_command, shell=True, stdout=subprocess.PIPE)
return_results = str(result.stdout.decode())
</code></pre>
<p>I get the result below in Python:</p>
<p>[?25l[2J[m[H+++++++</p>
<p>[2;1H]0;C:\windows\system32\conhost.exe[?25hpytest 7.4.4</p>
<p>+++++++</p>
<p>But I get the proper result when executing from the command line:</p>
<p>+++++++</p>
<p>pytest 7.4.4</p>
<p>+++++++</p>
|
<python><ssh><subprocess>
|
2024-01-11 12:33:39
| 1
| 1,823
|
True Vision _ Zunna Berry
|
77,800,048
| 8,315,634
|
Refresh button using streamlit. Had to click on refresh twice to make it work
|
<p>How can I make it work with a single click?</p>
<pre><code>import streamlit as st
import pandas as pd
import numpy as np
# Function to read the DataFrame from the Excel file
def read_dataframe():
return pd.read_excel(r"E:\projects\smartscan2_1\patients_info.xlsx")
def main():
st.title("Refresh DataFrame Example")
# Generate or retrieve the DataFrame
if 'data' not in st.session_state:
st.session_state.data = read_dataframe()
# Display the DataFrame
st.dataframe(st.session_state.data)
# Add a refresh button
if st.button("Refresh"):
# Read the DataFrame from CSV and store it in session_state
st.session_state.data = read_dataframe()
if __name__ == "__main__":
main()
</code></pre>
|
<python><streamlit>
|
2024-01-11 12:31:03
| 1
| 1,379
|
Soumya Boral
|
77,799,921
| 12,319,746
|
Autogen's human input reloads server
|
<p>So, basically I am using <a href="https://nicegui.io/" rel="nofollow noreferrer">NiceGui</a> to create a UI over <code>AutoGen</code>. I have set the <code>human_input_mode</code> to <code>ALWAYS</code>. Whenever <code>Autogen</code> asks for the human input, the server reloads and the conversation is lost. <code>reload=False</code> doesn't work.</p>
<p><code>ui.run(title='Chat with Autogen Assistant', on_air=True, reload=False)</code> (the same behaviour occurs on <code>localhost</code>, just to clarify).</p>
<pre><code>def check_termination_and_human_reply(
self,
messages: Optional[List[Dict]] = None,
sender: Optional[Agent] = None,
config: Optional[Any] = None,
) -> Tuple[bool, Union[str, None]]:
"""Check if the conversation should be terminated, and if human reply is provided.
This method checks for conditions that require the conversation to be terminated, such as reaching
a maximum number of consecutive auto-replies or encountering a termination message. Additionally,
it prompts for and processes human input based on the configured human input mode, which can be
'ALWAYS', 'NEVER', or 'TERMINATE'. The method also manages the consecutive auto-reply counter
for the conversation and prints relevant messages based on the human input received.
Args:
- messages (Optional[List[Dict]]): A list of message dictionaries, representing the conversation history.
- sender (Optional[Agent]): The agent object representing the sender of the message.
- config (Optional[Any]): Configuration object, defaults to the current instance if not provided.
Returns:
- Tuple[bool, Union[str, Dict, None]]: A tuple containing a boolean indicating if the conversation
should be terminated, and a human reply which can be a string, a dictionary, or None.
"""
# Function implementation...
if config is None:
config = self
if messages is None:
messages = self._oai_messages[sender]
message = messages[-1]
reply = ""
no_human_input_msg = ""
if self.human_input_mode == "ALWAYS":
reply = self.get_human_input(
f"Provide feedback to {sender.name}. Press enter to skip and use auto-reply, or type 'exit' to end the conversation: "
)
no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
# if the human input is empty, and the message is a termination message, then we will terminate the conversation
reply = reply if reply or not self._is_termination_msg(message) else "exit"
else:
if self._consecutive_auto_reply_counter[sender] >= self._max_consecutive_auto_reply_dict[sender]:
if self.human_input_mode == "NEVER":
reply = "exit"
else:
# self.human_input_mode == "TERMINATE":
terminate = self._is_termination_msg(message)
reply = self.get_human_input(
f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
if terminate
else f"Please give feedback to {sender.name}. Press enter to skip and use auto-reply, or type 'exit' to stop the conversation: "
)
no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
# if the human input is empty, and the message is a termination message, then we will terminate the conversation
reply = reply if reply or not terminate else "exit"
elif self._is_termination_msg(message):
if self.human_input_mode == "NEVER":
reply = "exit"
else:
# self.human_input_mode == "TERMINATE":
reply = self.get_human_input(
f"Please give feedback to {sender.name}. Press enter or type 'exit' to stop the conversation: "
)
no_human_input_msg = "NO HUMAN INPUT RECEIVED." if not reply else ""
# if the human input is empty, and the message is a termination message, then we will terminate the conversation
reply = reply or "exit"
# print the no_human_input_msg
if no_human_input_msg:
print(colored(f"\n>>>>>>>> {no_human_input_msg}", "red"), flush=True)
# stop the conversation
if reply == "exit":
# reset the consecutive_auto_reply_counter
self._consecutive_auto_reply_counter[sender] = 0
return True, None
# send the human reply
if reply or self._max_consecutive_auto_reply_dict[sender] == 0:
# reset the consecutive_auto_reply_counter
self._consecutive_auto_reply_counter[sender] = 0
return True, reply
# increment the consecutive_auto_reply_counter
self._consecutive_auto_reply_counter[sender] += 1
if self.human_input_mode != "NEVER":
print(colored("\n>>>>>>>> USING AUTO REPLY...", "red"), flush=True)
return False, None
</code></pre>
<p>I am not sure what is causing this or how to solve it.</p>
|
<python><artificial-intelligence><ms-autogen>
|
2024-01-11 12:09:38
| 0
| 2,247
|
Abhishek Rai
|
77,799,809
| 1,473,517
|
Why do I get different random numbers with the same seed?
|
<p>I am using the numpy random number generator with the following MWE:</p>
<pre><code>import numpy as np
np.random.seed(40)
print(np.random.randint(-3, 4))
rng = np.random.default_rng(seed=40)
print(rng.integers(-3, 4))
</code></pre>
<p>This outputs:</p>
<pre><code>3
0
</code></pre>
<p>Why are the outputs different?</p>
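A short sketch of the underlying reason: the legacy global state (<code>np.random.seed</code>/<code>randint</code>) is backed by the MT19937 bit generator, while <code>default_rng</code> uses PCG64 with a different bounded-integer algorithm, so the same seed yields two unrelated streams, each of which is individually reproducible:

```python
import numpy as np

# Legacy API: global RandomState backed by MT19937
np.random.seed(40)
legacy = np.random.randint(-3, 4, size=5)

# New API: Generator backed by PCG64
modern = np.random.default_rng(seed=40).integers(-3, 4, size=5)

# Each stream reproduces itself exactly under the same seed...
np.random.seed(40)
again_legacy = np.random.randint(-3, 4, size=5)
again_modern = np.random.default_rng(seed=40).integers(-3, 4, size=5)
# ...but the two APIs are not expected to agree with each other.
```
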
|
<python><numpy><random-seed>
|
2024-01-11 11:50:05
| 2
| 21,513
|
Simd
|
77,799,804
| 4,377,632
|
Python - Add nested dictionary items from a list of keys
|
<p>I'm trying to find a quick and easy way to add nested items to an existing Python dictionary. This is my most recent dictionary:</p>
<pre class="lang-py prettyprint-override"><code>myDict = {
"a1": {
"a2": "Hello"
}
}
</code></pre>
<p>I want to use a function similar to this one to add new nested values:</p>
<pre class="lang-py prettyprint-override"><code>myDict = add_nested(myDict, ["b1", "b2", "b3"], "z2")
myDict = add_nested(myDict, ["c1"], "z3")
</code></pre>
<p>After running the function, I expect my dictionary to look like this:</p>
<pre><code>myDict = {
"a1": {
"a2": "z1"
},
"b1": {
"b2": {
"b3": "z2"
}
},
"c1": "z3"
}
</code></pre>
<p>How can I create a Pythonic function (<code>add_nested</code>) that achieves this?</p>
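A minimal sketch of such an <code>add_nested</code> (mutating the dict in place and returning it, with <code>setdefault</code> creating the intermediate levels) might look like this:

```python
def add_nested(d, keys, value):
    """Walk/create nested dicts along `keys` and set the final key to `value`."""
    node = d
    for key in keys[:-1]:
        node = node.setdefault(key, {})  # create intermediate dicts as needed
    node[keys[-1]] = value
    return d

myDict = {"a1": {"a2": "Hello"}}
myDict = add_nested(myDict, ["b1", "b2", "b3"], "z2")
myDict = add_nested(myDict, ["c1"], "z3")
```

Note this sketch assumes intermediate values along the path are either absent or already dicts; overwriting a leaf with a subtree would need extra handling.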
|
<python><dictionary>
|
2024-01-11 11:49:14
| 0
| 323
|
bkbilly
|
77,799,796
| 7,319,413
|
Trouble Extracting Recording URLs from website
|
<p>I am new to Python and I need to parse all the recording URLs from a website. I tried the program below, but it's not able to find the recording links; it only prints other links on the web page. I am not familiar with the website's design. I tried AI tools and Stack Overflow, but found the same solution everywhere. Can you please point out the mistake I am making here, or another way I should follow to parse this?</p>
<p>Sample recording URL which I found from the webpage using inspect element:</p>
<p><a href="https://www.vector.com/int/en/events/global-de-en/webinar-recordings/7307e0a9000c63ad7dce5523ec058af2-remote-diagnostics-and-flashing" rel="nofollow noreferrer">https://www.vector.com/int/en/events/global-de-en/webinar-recordings/7307e0a9000c63ad7dce5523ec058af2-remote-diagnostics-and-flashing</a></p>
<p><a href="https://i.sstatic.net/8zyjm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8zyjm.png" alt="enter image description here" /></a>
</p>
<p>Here is the code snippet I tried:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36'}
def parse_page(url):
response = requests.get(url,headers=headers)
soup = BeautifulSoup(response.text, 'html.parser')
for quote in soup.find_all('a',href=True):
href = quote.get('href')
print(href)
base_url = 'https://www.vector.com/int/en/search/#type=%5B%22webinar_recording%22%5D&page=1&pageSize=50&sort=date&order=desc'
parse_page(base_url)
</code></pre>
|
<python><web-scraping><beautifulsoup>
|
2024-01-11 11:47:52
| 1
| 545
|
goodman
|
77,799,764
| 1,056,179
|
Understanding the Torch Cosine Similarity along Different Dimensions
|
<p>I have two tensors:</p>
<pre><code>a = tensor([[1., 2., 3., 5.],
[1., 9., 9., 5.]])
b = tensor([[ 7., 8., 9., 5.],
[10., -1., 3., 5.]])
</code></pre>
<p>I can not understand why calculating the cosine similarity along <em>dim = 0</em> results in a tensor with 4 distances:</p>
<pre><code>tensor([0.9848, 0.0942, 0.6000, 1.0000])
</code></pre>
<p>Calculating along <em>dim = 0</em>, I expected it to collapse [1., 2., 3., 5.] with [7., 8., 9., 5.] and [1., 9., 9., 5.] with [10., -1., 3., 5.], i.e. to compute the distance between [1., 2., 3., 5.] and [7., 8., 9., 5.], and then between [1., 9., 9., 5.] and [10., -1., 3., 5.].</p>
<p>However, it seems that it pairs [1, 1] with [7, 10], calculates the similarity between them, and then does the same for [2, 9] and [8, -1], and so on.</p>
<p>The complete code:</p>
<pre><code>a = torch.as_tensor([[1,2, 3, 5], [1,9, 9, 5]]).float()
b = torch.as_tensor([[7, 8, 9, 5], [10, -1, 3, 5]]).float()
F.cosine_similarity(a, b, dim = 0)
</code></pre>
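The per-column reading can be verified without torch; below is a NumPy sketch of what <code>F.cosine_similarity(a, b, dim=0)</code> computes (the reduction runs over rows, so there is one cosine per column, pairing [1, 1] with [7, 10] and so on):

```python
import numpy as np

a = np.array([[1., 2., 3., 5.],
              [1., 9., 9., 5.]])
b = np.array([[7., 8., 9., 5.],
              [10., -1., 3., 5.]])

# dim=0 reduces over rows: one similarity value per column
num = (a * b).sum(axis=0)
den = np.linalg.norm(a, axis=0) * np.linalg.norm(b, axis=0)
per_column = num / den  # matches the reported [0.9848, 0.0942, 0.6000, 1.0000]
```

Row-wise similarity (one value per row, the behaviour described in the question) would be <code>dim=1</code> instead.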
|
<python><pytorch>
|
2024-01-11 11:41:50
| 0
| 2,059
|
Amir Jalilifard
|
77,799,470
| 4,340,985
|
How to turn dataframe rows into multiindex levels?
|
<p>I have a csv file that looks something like this:</p>
<pre><code>ID ; name; location; level; DATE19970901; DATE19970902; ...;DATE20201031;survey;person
001; foo; east; 500; 123.1; 342.5; ...; 234.5; A; John
002; bar; west; 50; 67.8; 98.3; ...; 76.6; A; Jenn
003; baz; north; 5000; 535.7; 99.9; ...; 432.6; B; John
</code></pre>
<p>which I need to turn into a dataframe like this:</p>
<pre><code>ID 001 002 003
name foo bar baz
location east west north
level 500 50 5000
survey A A B
person John Jenn John
date
1997-09-01  123.1  67.8  535.7
1997-09-02  342.5  98.3   99.9
...
2020-10-31 234.5 76.6 432.6
</code></pre>
<p>Now the easiest way seems to me to read it in, <code>.transpose()</code> it, and then tell pandas to turn the data rows 0, 1, 2, 8443, 8444 into MultiIndex rows, but I'm missing a function for that. <a href="https://pandas.pydata.org/docs/reference/api/pandas.MultiIndex.from_frame.html#pandas.MultiIndex.from_frame" rel="nofollow noreferrer"><code>.MultiIndex.from_frame</code></a> only seems to take a complete df to turn into a MultiIndex. I could probably split my df into a MultiIndex df and a data df and merge them, but that seems complicated and error-prone to me.</p>
<p>An easy way to do it would be to read in the csv, transpose the df, export it to csv and read that in again, but that seems rather hacky (and slow, though that is not really an issue in my case).</p>
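One route that avoids both splitting/merging and the csv round-trip: <code>set_index</code> on all the metadata columns first, then <code>.T</code>, which turns those columns directly into MultiIndex column levels. Sketched here on a trimmed-down copy of the data (column names taken from the question, values abbreviated):

```python
import io
import pandas as pd

csv = io.StringIO(
    "ID;name;location;DATE19970901;DATE19970902;survey\n"
    "001;foo;east;123.1;342.5;A\n"
    "002;bar;west;67.8;98.3;A\n"
)
df = pd.read_csv(csv, sep=";", dtype={"ID": str})

meta = ["ID", "name", "location", "survey"]  # the non-date columns
wide = df.set_index(meta).T  # metadata columns become MultiIndex levels
wide.index = pd.to_datetime(wide.index.str.replace("DATE", "", regex=False),
                            format="%Y%m%d")
wide.index.name = "date"
```
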
|
<python><pandas><dataframe><multi-index>
|
2024-01-11 10:52:25
| 1
| 2,668
|
JC_CL
|
77,799,262
| 3,810,748
|
Why doesn't BERT give me back my original sentence?
|
<p>I've started playing with <code>BERT</code> encoder through the <code>huggingface</code> module.
<a href="https://i.sstatic.net/N6J8m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/N6J8m.png" alt="enter image description here" /></a></p>
<p>I passed it a normal unmasked sentence and got the following results:
<a href="https://i.sstatic.net/GPUBZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPUBZ.png" alt="enter image description here" /></a></p>
<p>However, when I try to manually apply the softmax and decode the output:
<a href="https://i.sstatic.net/6oZ2i.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6oZ2i.png" alt="enter image description here" /></a></p>
<p>I get back a bunch of unexpected <code>tensor(1012)</code> instead of my original sentence. BERT is an autoencoder, no?</p>
<p>Shouldn't it be giving me back the original sentence with fairly high probability since none of the input words was <code>[MASK]</code>? Can anyone explain to me what is going on?</p>
|
<python><huggingface-transformers><bert-language-model><huggingface>
|
2024-01-11 10:19:52
| 1
| 6,155
|
AlanSTACK
|
77,799,221
| 7,112,039
|
SqlAlchemy add hybrid property that is a sum of a datetime and a numeric field
|
<p>I am trying to calculate a field combining two existing fields in a SQLAlchemy Model, and use it in the query parameters.</p>
<p>I have a model like this:</p>
<pre class="lang-py prettyprint-override"><code> class Booking(BaseORMModel):
start: Mapped[datetime] = mapped_column(TIMESTAMP(timezone=True))
    duration_minutes: Mapped[int] = mapped_column(Numeric)
</code></pre>
<p>I want to dynamically calculate the end as the sum of <code>start</code> and <code>minutes</code>. This is my attempt:</p>
<pre class="lang-py prettyprint-override"><code> class Booking(BaseORMModel):
...
@hybrid_property
def ended_at(self):
return self.scheduled_at + timedelta(minutes=self.duration_minutes)
@ended_at.expression
def ended_at(cls):
duration_interval = func.cast(concat(cls.duration_minutes, ' MINUTES'), INTERVAL)
ended_at = func.sum(cls.scheduled_at + duration_interval)
return ended_at
</code></pre>
<p>This is my query</p>
<pre class="lang-py prettyprint-override"><code>query = query.where(Booking.ended_at < now)
</code></pre>
<p>This is what I get:</p>
<pre class="lang-bash prettyprint-override"><code>sqlalchemy.exc.ProgrammingError: (psycopg.errors.UndefinedFunction) function sum(timestamp with time zone) does not exist
</code></pre>
|
<python><sqlalchemy>
|
2024-01-11 10:13:01
| 1
| 303
|
ow-me
|
77,799,177
| 18,782,190
|
Apache2 not working with mod_wsgi: "ModuleNotFoundError: No module named 'encodings'"
|
<p>I run Apache2 in a Docker container and want to host my Django site using <code>mod_wsgi</code>. However, the WSGI process fails to start and I get the error below. I tried using the Python paths of the container and of a separate virtual environment, but I get the same error either way. What am I doing wrong?</p>
<pre><code>[Thu Jan 11 10:39:44.447690 2024] [wsgi:warn] [pid 316:tid 140618355098752] (2)No such file or directory: mod_wsgi (pid=316): Unable to stat Python home /var/www/html/my-site.com.com/python3.7. Python interpreter may not be able to be initialized correctly. Verify the supplied path and access permissions for whole of the path.
Fatal Python error: initfsencoding: unable to load the file system codec
ModuleNotFoundError: No module named 'encodings'
</code></pre>
<p>Here is my <code>Dockerfile</code>:</p>
<pre><code>FROM python:3.7.17-buster
WORKDIR /var/www/html/my-site.com
SHELL ["/bin/bash", "-c"]
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
ENV VIRTUAL_ENV=/var/www/html/my-site.com/python3.7
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
COPY ./dev .
COPY ./requirements.txt .
RUN apt update
RUN apt upgrade -y
RUN apt install apache2 libapache2-mod-wsgi-py3 -y
RUN a2enmod ssl rewrite wsgi headers macro
RUN python -m venv $VIRTUAL_ENV
RUN source $VIRTUAL_ENV/bin/activate
RUN pip install --upgrade pip && \
pip install --upgrade setuptools && \
pip install -r requirements.txt
</code></pre>
<p>Here is a relevant Apache config segment:</p>
<pre><code>WSGIScriptAlias / /var/www/html/my-site.com/main/wsgi.py
WSGIDaemonProcess prod_$site python-home=/var/www/html/my-site.com/python3.7 python-path=/var/www/html/my-site.com/python3.7/site-packages
WSGIProcessGroup prod_$site
</code></pre>
|
<python><docker><apache><mod-wsgi><wsgi>
|
2024-01-11 10:05:37
| 1
| 593
|
Karolis
|
77,798,960
| 755,371
|
How to specify python paths in pyproject.toml for poetry?
|
<p>I want to build a poetry Python environment by setting the pyproject.toml file so that, when I activate it and run <code>import sys; print(sys.path)</code> in the Python interpreter, it shows the paths added in pyproject.toml.</p>
<p>How to proceed ?</p>
|
<python><python-poetry>
|
2024-01-11 09:29:36
| 1
| 5,139
|
Eric
|
77,798,875
| 13,568,193
|
Cumulative subtraction in pyspark
|
<p>I want to achieve cumulative subtraction in pyspark.
I have a dataset like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>councs</th>
<th>coitm</th>
</tr>
</thead>
<tbody>
<tr>
<td>1000</td>
<td>1110</td>
</tr>
<tr>
<td>100</td>
<td>1110</td>
</tr>
<tr>
<td>50</td>
<td>1110</td>
</tr>
<tr>
<td>30</td>
<td>1110</td>
</tr>
<tr>
<td>20</td>
<td>1110</td>
</tr>
<tr>
<td>2000</td>
<td>1210</td>
</tr>
<tr>
<td>10</td>
<td>1210</td>
</tr>
<tr>
<td>200</td>
<td>1210</td>
</tr>
<tr>
<td>-100</td>
<td>1210</td>
</tr>
<tr>
<td>20</td>
<td>1210</td>
</tr>
</tbody>
</table>
</div>
<p>My desirable result is this :-</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>councs</th>
<th>coitm</th>
<th>_uncs</th>
</tr>
</thead>
<tbody>
<tr>
<td>1000</td>
<td>1110</td>
<td>1000</td>
</tr>
<tr>
<td>100</td>
<td>1110</td>
<td>900</td>
</tr>
<tr>
<td>50</td>
<td>1110</td>
<td>850</td>
</tr>
<tr>
<td>30</td>
<td>1110</td>
<td>820</td>
</tr>
<tr>
<td>20</td>
<td>1110</td>
<td>800</td>
</tr>
<tr>
<td>2000</td>
<td>1210</td>
<td>2000</td>
</tr>
<tr>
<td>10</td>
<td>1210</td>
<td>1990</td>
</tr>
<tr>
<td>200</td>
<td>1210</td>
<td>1790</td>
</tr>
<tr>
<td>-100</td>
<td>1210</td>
<td>1890</td>
</tr>
<tr>
<td>20</td>
<td>1210</td>
<td>1870</td>
</tr>
</tbody>
</table>
</div>
<p>For this I tried following code:-</p>
<pre><code>df = _cost_srt.orderBy("coitm")
partition_by = Window().partitionBy("COITM").orderBy(F.desc("COCHDJ"))
df = df.withColumn("rnb", F.row_number().over(partition_by))
df = df.withColumn(
"_UNCS", F.when(F.col("rnb") == 1, F.col("COUNCS")).otherwise(F.lit(None))
)
_output = df.withColumn(
"_UNCS",
F.when(
F.col("rnb") > 1, F.lag(F.col("_UNCS")).over(partition_by) - F.col("COUNCS")
).otherwise(F.col("_UNCS")),
)
</code></pre>
<p>I get the desired output for rnb 1 and rnb 2 only; after that, _uncs becomes null. How can I achieve the desired result? Please help me.</p>
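<p>For reference, the requested running value reduces to "the group's first value minus the running sum of all later values" (equivalently <code>2 * first - running_sum</code>), which suggests expressing it in Spark with <code>F.first</code> and <code>F.sum</code> over a running window rather than chaining <code>lag</code>. A plain-Python sketch of the arithmetic (the function name is my own):</p>

```python
from itertools import groupby

def cumulative_subtract(rows):
    """rows: iterable of (councs, coitm) pairs, already ordered within each group."""
    out = []
    for _, grp in groupby(rows, key=lambda r: r[1]):
        running = None
        for councs, coitm in grp:
            # first row of the group keeps its value; later rows subtract
            running = councs if running is None else running - councs
            out.append((councs, coitm, running))
    return out

rows = [(1000, 1110), (100, 1110), (50, 1110), (30, 1110), (20, 1110),
        (2000, 1210), (10, 1210), (200, 1210), (-100, 1210), (20, 1210)]
print([r[2] for r in cumulative_subtract(rows)])
# [1000, 900, 850, 820, 800, 2000, 1990, 1790, 1890, 1870]
```

This reproduces the desired <code>_uncs</code> column from the question's sample data.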
|
<python><pyspark><azure-databricks>
|
2024-01-11 09:14:17
| 1
| 383
|
Arpan Ghimire
|
77,798,692
| 4,872,294
|
PySpark error in EMR writing Parquet files to S3
|
<p>I have a process that reads data from S3, processes it and then saves it again to S3 in another location in Parquet format. Sometimes I get this error while it is writing:</p>
<pre><code> y4j.protocol.Py4JJavaError: An error occurred while calling o426.parquet.
: org.apache.spark.SparkException: Job aborted due to stage failure: Authorized committer (attemptNumber=0, stage=17, partition=264) failed; but task commit success, data duplication may happen. reason=ExceptionFailure(org.apache.spark.SparkException,[TASK_WRITE_FAILED] Task failed while writing rows to s3://bucket/path.,[Ljava.lang.StackTraceElement;@397e21a9,org.apache.spark.SparkException: [TASK_WRITE_FAILED] Task failed while writing rows to s3://bucket/path.
at org.apache.spark.sql.errors.QueryExecutionErrors$.taskFailedWhileWritingRowsError(QueryExecutionErrors.scala:789)
at org.apache.spark.sql.execution.datasources.FileFormatWriter$.executeTask(FileFormatWriter.scala:421)
at org.apache.spark.sql.execution.datasources.WriteFilesExec.$anonfun$doExecuteWrite$1(WriteFiles.scala:100)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2(RDD.scala:888)
at org.apache.spark.rdd.RDD.$anonfun$mapPartitionsInternal$2$adapted(RDD.scala:888)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:364)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:328)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:92)
at org.apache.spark.TaskContext.runTaskWithListeners(TaskContext.scala:161)
at org.apache.spark.scheduler.Task.run(Task.scala:141)
at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:563)
at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1541)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:566)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)
Caused by: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: One or more of the specified parts could not be found. The part may not have been uploaded, or the specified entity tag may not match the part's entity tag. (Service: Amazon S3; Status Code: 400; Error Code: InvalidPart; Request ID: 0RGE13WMZ76BMPW6; S3 Extended Request ID: up90NKdAy7UIp3Rep2+J293TUhfFcno8iG/Y7Qr8uZOLMMzrQAwrZrfKojzKsq5iKiuGPQLz9/g=; Proxy: null), S3 Extended Request ID: up90NKdAy7UIp3Rep2+J293TUhfFcno8iG/Y7Qr8uZOLMMzrQAwrZrfKojzKsq5iKiuGPQLz9/g=
</code></pre>
<p>I get this error only in some executions. The EMR service role has permissions to write to S3.</p>
|
<python><apache-spark><amazon-s3><pyspark><amazon-emr>
|
2024-01-11 08:40:45
| 2
| 1,472
|
Shadowtrooper
|
77,798,604
| 2,234,246
|
Unable to parse dates with an optional day part in Pyspark
|
<p>I have a Pyspark data frame with string dates that may be either yyyyMM (e.g. 200802) or yyyyMMdd (e.g. 20080917). I am trying to parse these into dates. The function I am currently considering is <a href="https://spark.apache.org/docs/latest/api/python/reference/pyspark.sql/api/pyspark.sql.functions.to_date.html" rel="nofollow noreferrer"><code>to_date</code></a>. Looking at the <a href="https://spark.apache.org/docs/latest/sql-ref-datetime-pattern.html" rel="nofollow noreferrer">datetime parsing patterns documentation</a>, I should be able to use an optional section in square brackets. However, I cannot get this to work. Dates with a yyyy-MM or yyyy-MM-dd pattern do work with an optional section.</p>
<pre><code>from pyspark.sql import functions as F
df = spark.createDataFrame([('200802', '2008-02', ), ('20080917', '2008-09-17', )], ['t', 't2'])
display(df
.withColumn('fdate', F.to_date(F.col('t'), 'yyyyMM[dd]'))
.withColumn('fdate2', F.to_date(F.col('t2'), 'yyyy-MM[-dd]'))
)
</code></pre>
<p>The output is:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>t</th>
<th>t2</th>
<th>fdate</th>
<th>fdate2</th>
</tr>
</thead>
<tbody>
<tr>
<td>200802</td>
<td>2008-02</td>
<td>2008-02-01</td>
<td>2008-02-01</td>
</tr>
<tr>
<td>20080917</td>
<td>2008-09-17</td>
<td>null</td>
<td>2008-09-17</td>
</tr>
</tbody>
</table>
</div>
<p>You can see that the pattern with dashes correctly parses both date formats, but the strictly numeric pattern does not. Am I using this function incorrectly? Is there a way that I can parse these dates without using a UDF?</p>
<p>I am using Spark 3.5.0 in Databricks runtime 14.2.</p>
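<p>As a hedged workaround sketch (not a confirmed fix for the bracket pattern): the parsing can fall back from the longer pattern to the shorter one, which is what a Spark-side <code>F.coalesce(F.to_date(F.col('t'), 'yyyyMMdd'), F.to_date(F.col('t'), 'yyyyMM'))</code> would do, assuming <code>to_date</code> yields null rather than an error for non-matching input. The same fallback logic in plain Python:</p>

```python
from datetime import date, datetime

# Hypothetical helper: try the longer pattern first, fall back to year-month.
def parse_ym_or_ymd(s: str) -> date:
    for fmt in ("%Y%m%d", "%Y%m"):
        try:
            return datetime.strptime(s, fmt).date()
        except ValueError:
            continue
    raise ValueError(f"unparseable date string: {s!r}")

print(parse_ym_or_ymd("200802"))    # 2008-02-01
print(parse_ym_or_ymd("20080917"))  # 2008-09-17
```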
|
<python><datetime><pyspark><apache-spark-sql>
|
2024-01-11 08:22:22
| 0
| 323
|
Tim Keighley
|
77,798,563
| 18,876,759
|
Add support for new data type (quantiphy.Quantity)
|
<p>I have a Pydantic model which includes a custom data type (specifically <code>quantiphy.Quantity</code>):</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel
class SpecLimit(BaseModel):
label: str
minimum: Quantity | None = None
maximum: Quantity | None = None
typical: Quantity | None = None
is_informative: bool = False
</code></pre>
<p>I want Pydantic to serialize the <code>Quantity</code> objects as strings (and deserialize them from strings), but I can't figure out how. I always end up with:
<code>pydantic.errors.PydanticSchemaGenerationError: Unable to generate pydantic-core schema for <class 'quantiphy.quantiphy.Quantity'>...</code></p>
<p>I've tried to specify a <code>json_encoder</code>, but this doesn't work and the <code>json_encoders</code> attribute is deprecated.</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel as _BaseModel
from quantiphy import Quantity
class BaseModel(_BaseModel):
class Config:
json_encoders = {
Quantity: str,
}
class SpecLimit(BaseModel):
label: str
minimum: Quantity | None = None
maximum: Quantity | None = None
typical: Quantity | None = None
is_informative: bool = False
</code></pre>
|
<python><pydantic>
|
2024-01-11 08:14:53
| 2
| 468
|
slarag
|
77,798,015
| 2,178,942
|
Putting space between every two bars in seaborn's factorplot
|
<p>I have a drawn a plot using seaborn and the following command:</p>
<pre><code>ax = sns.factorplot(x="feat", y="acc", col="roi", hue="alpha", data=df_d_pt, kind="bar", dodge=True)
</code></pre>
<p>Results look like this:</p>
<p><a href="https://i.sstatic.net/rc2Zk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rc2Zk.png" alt="enter image description here" /></a></p>
<p>But I want to put space between every two bars; that is, the dark blue and light blue bars would stay very close to each other, but there would be some space between the dark blue bar and the light green one (and so on...).</p>
<p>How can I do that?</p>
<p>Update, data looks like this:
<a href="https://i.sstatic.net/4NdfH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4NdfH.png" alt="enter image description here" /></a></p>
<p>Thanks in advance</p>
|
<python><matplotlib><seaborn><figure>
|
2024-01-11 06:17:25
| 0
| 1,581
|
Kadaj13
|
77,797,969
| 3,810,748
|
Why does Python's Decimal class generate additional digits that weren't there before?
|
<p>Take a look at the following outputs</p>
<pre><code>>>> from decimal import Decimal
>>> print(2.4)
2.4
>>> print(Decimal(2.4))
2.399999999999999911182158029987476766109466552734375
</code></pre>
<p>Why exactly is this happening? If the explanation is that 2.4 couldn't be represented precisely, and therefore already had to be represented in approximate form, then how come the first print statement produced an exact result?</p>
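<p>For illustration: <code>print(2.4)</code> shows the shortest decimal string that round-trips back to the same float, not the float's exact value, while <code>Decimal(float)</code> exposes the exact binary value; constructing from the string <code>'2.4'</code> keeps the literal. A small sketch of the difference:</p>

```python
from decimal import Decimal

exact = Decimal(2.4)      # exact value of the binary float nearest to 2.4
literal = Decimal('2.4')  # exactly 2.4; no float conversion involved

print(float(exact) == 2.4)   # True  - same underlying float
print(exact == literal)      # False - the float is not exactly 2.4
print(literal)               # 2.4
```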
|
<python><python-3.x><floating-point><decimal>
|
2024-01-11 06:02:42
| 0
| 6,155
|
AlanSTACK
|
77,797,689
| 2,987,552
|
langchain.document_loaders.ConfluenceLoader.load giving AttributeError: 'str' object has no attribute 'get' while reading all documents from space
|
<p>When I try sample code given <a href="https://python.langchain.com/docs/integrations/document_loaders/confluence" rel="nofollow noreferrer">here</a>:</p>
<pre><code>from langchain.document_loaders import ConfluenceLoader
loader = ConfluenceLoader(
url="<my confluence link>", username="<my user name>",
api_key="<my token>"
)
documents = loader.load(space_key="<my space>", include_attachments=True, limit=1, max_pages=1)
</code></pre>
<p>I get an error:</p>
<pre><code>AttributeError: 'str' object has no attribute 'get'
</code></pre>
<p>Here is the last part of the stack:</p>
<pre><code> 554 """
555 Get all pages from space
556
(...)
568 :return:
569 """
570 return self.get_all_pages_from_space_raw(
571 space=space, start=start, limit=limit, status=status, expand=expand, content_type=content_type
--> 572 ).get("results")
</code></pre>
<p>Any ideas? I see an issue <a href="https://github.com/langchain-ai/langchain/issues/14113" rel="nofollow noreferrer">here</a> but it is still open.</p>
<p>I have now also opened <a href="https://github.com/langchain-ai/langchain/issues/15869" rel="nofollow noreferrer">bug</a> specifically for this issue.</p>
<p>Here is the summary of the fixes required in the original code:</p>
<ol>
<li>Do not suffix the URL with /wiki/home</li>
<li>suffix the user name with @ your domain name</li>
<li>use ID of the space as in the URL and not its display name</li>
</ol>
<p>then it works. Otherwise, the error handling is too poor to point you to these issues.</p>
|
<python><langchain><confluence><document-loader>
|
2024-01-11 04:31:15
| 1
| 598
|
Sameer Mahajan
|
77,797,420
| 5,269,749
|
How to increase the resolution of axis when plotting via hist2d
|
<p>Using the following code I create a 2d histogram.</p>
<pre><code>import matplotlib.pyplot as plt
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
x, y = np.random.rand(2, 1000) * 10
hist, xedges, yedges, im = plt.hist2d(x, y, bins=10, range=[[0, 10], [0, 10]])
for i in range(len(hist)):
for j in range(len(hist[i])):
plt.text(xedges[i] + (xedges[i + 1] - xedges[i]) / 2,
yedges[j] + (yedges[j + 1] - yedges[j]) / 2,
hist[i][j], ha="center", va="center", color="w")
plt.show()
</code></pre>
<p>It looks like the following</p>
<p><a href="https://i.sstatic.net/xpXFp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xpXFp.png" alt="enter image description here" /></a></p>
<p>I am wondering if there is a way to increase the axis detail such that the x and y axes show <code>[0, 1, 2, ..., 8, 9, 10]</code>. Basically, instead of jumping in steps of 2, I want to see all the integers in the range.</p>
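<p>One way to do this (a sketch, reusing the example data) is to set the tick locations explicitly; <code>matplotlib.ticker.MultipleLocator(1)</code> would achieve the same without hard-coding the range:</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend for this sketch
import matplotlib.pyplot as plt
import numpy as np

np.random.seed(19680801)
x, y = np.random.rand(2, 1000) * 10

fig, ax = plt.subplots()
ax.hist2d(x, y, bins=10, range=[[0, 10], [0, 10]])
ax.set_xticks(range(0, 11))  # ticks at every integer, 0..10
ax.set_yticks(range(0, 11))
```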
|
<python><matplotlib>
|
2024-01-11 02:54:20
| 0
| 1,264
|
Alex
|
77,797,208
| 13,916,049
|
Iteratively identify columns with binary values in a nested dictionary
|
<p>I want to identify columns with binary or label-encoded values as the <code>y_label_encoded_columns</code> variable. My code returns an empty dictionary.</p>
<p>Code:</p>
<pre><code># Identify label-encoded columns from all dataframes in the nested dictionary
y_label_encoded_columns = {}
for outer_key, inner_dict in train_data_dict.items():
for inner_key, inner_value in inner_dict.items():
if isinstance(inner_value, pd.DataFrame):
label_encoded_columns = inner_value.select_dtypes(include=['int', 'float']).columns[inner_value.nunique() == 2]
y_label_encoded_columns[(outer_key, inner_key)] = label_encoded_columns
</code></pre>
<p>Input:
<code>train_data_dict</code></p>
<pre><code>{'transcriptomics': {'transcriptomics_df': ( gene_1 gene_2 gene_3 gene_4 gene_5 gene_6 \
sample_8 0.324889 0.282243 0.921885 0.408865 0.000000 0.519652
sample_3 0.715960 0.232156 0.310729 0.498760 0.573144 1.000000
sample_5 1.000000 0.532265 0.619240 0.192590 1.000000 0.916358
sample_4 0.000000 1.000000 1.000000 1.000000 0.216677 0.592965
sample_7 0.392615 0.574217 0.785394 0.000000 0.821214 0.000000
gene_7 gene_8 gene_9 gene_10
sample_8 0.905142 0.000000 0.757505 0.378347
sample_3 0.000000 0.929344 0.493086 0.690365
sample_5 1.000000 0.192423 0.243973 0.311958
sample_4 0.912722 0.725308 0.133332 0.867666
sample_7 0.818003 0.325888 0.000000 1.000000 ,
survival immune
sample_8 1 0
sample_3 0 0
sample_5 0 1
sample_4 0 0
sample_7 0 1),
'mrna_deconv': ( mrna_cell_type_1 mrna_cell_type_2 mrna_cell_type_3 \
sample_8 0.366512 0.000000 0.245887
sample_3 0.332385 0.682703 0.522181
sample_5 1.000000 0.025130 0.358275
sample_4 0.412620 1.000000 1.000000
sample_7 0.000000 0.600609 0.284344
mrna_cell_type_4 mrna_cell_type_5
sample_8 0.143968 0.850287
sample_3 0.902649 0.132099
sample_5 0.115818 1.000000
sample_4 0.000000 0.959242
sample_7 1.000000 0.934358 ,
survival immune
sample_8 1 0
sample_3 0 0
sample_5 0 1
sample_4 0 0
sample_7 0 1)},
'epigenomics': {'epigenomics_df': ( methyl_1 methyl_2 methyl_3 methyl_4 methyl_5 methyl_6 \
sample_8 0.648307 0.000000 0.317773 0.261844 0.178545 0.466456
sample_3 0.403001 0.494575 0.847600 1.000000 0.455849 0.252746
sample_5 0.767676 0.359736 0.705968 0.272183 0.045604 0.138116
sample_4 0.047227 1.000000 0.000000 0.000000 0.034345 1.000000
sample_7 0.000000 0.130327 1.000000 0.703201 0.553393 0.116700
methyl_7 methyl_8
sample_8 0.953612 0.210986
sample_3 0.581519 0.509216
sample_5 0.000000 0.349948
sample_4 0.754646 1.000000
sample_7 0.818478 0.180805 ,
survival immune
sample_8 1 0
sample_3 0 0
sample_5 0 1
sample_4 0 0
sample_7 0 1),
'meth_deconv': ( meth_cell_type_1 meth_cell_type_2 meth_cell_type_3 \
sample_8 0.683553 0.299173 0.952748
sample_3 0.000000 0.028041 0.706878
sample_5 0.027151 0.470113 0.796396
sample_4 1.000000 0.179501 1.000000
sample_7 0.913862 1.000000 0.000000
meth_cell_type_4 meth_cell_type_5
sample_8 0.020950 0.897815
sample_3 0.000000 0.000000
sample_5 0.014384 0.089535
sample_4 1.000000 0.795399
sample_7 0.419708 0.425495 ,
survival immune
sample_8 1 0
sample_3 0 0
sample_5 0 1
sample_4 0 0
sample_7 0 1)},
'proteomics': {'proteomics_df': ( protein_1 protein_2 protein_3 protein_4 protein_5
sample_8 0.640386 0.158279 0.127003 0.246877 0.126281
sample_3 0.995708 0.000000 0.077220 0.582296 1.000000
sample_5 1.000000 0.388522 0.000000 0.223085 0.944714
sample_4 0.000000 0.131567 0.489785 0.748195 0.925549
sample_7 0.923793 0.612186 0.066448 0.238219 0.000000,
survival immune
sample_8 1 0
sample_3 0 0
sample_5 0 1
sample_4 0 0
sample_7 0 1)}}
</code></pre>
<p>Desired output format:</p>
<pre><code>pd.DataFrame({'Overall_Survival': {'sample_8': 1,
'sample_3': 0,
'sample_5': 0,
'sample_4': 1,
'sample_7': 0},
'Immune_Response': {'sample_8': 1,
'sample_3': 0,
'sample_5': 0,
'sample_4': 1,
'sample_7': 0}})
</code></pre>
|
<python><pandas><numpy>
|
2024-01-11 01:26:14
| 1
| 1,545
|
Anon
|
77,797,152
| 3,380,902
|
ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+
|
<p>I am seeing this error when attempting to launch a jupyter notebook from the terminal.</p>
<pre><code>Error loading server extension jupyterlab
Traceback (most recent call last):
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/notebook/notebookapp.py", line 2047, in init_server_extensions
mod = importlib.import_module(modulename)
File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/importlib/__init__.py", line 127, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "<frozen importlib._bootstrap>", line 1006, in _gcd_import
File "<frozen importlib._bootstrap>", line 983, in _find_and_load
File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 677, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 728, in exec_module
File "<frozen importlib._bootstrap>", line 219, in _call_with_frames_removed
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/jupyterlab/__init__.py", line 7, in <module>
from .handlers.announcements import ( # noqa
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/jupyterlab/handlers/announcements.py", line 15, in <module>
from jupyterlab_server.translation_utils import translator
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/jupyterlab_server/__init__.py", line 5, in <module>
from .app import LabServerApp
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/jupyterlab_server/app.py", line 14, in <module>
from .handlers import LabConfig, add_handlers
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/jupyterlab_server/handlers.py", line 18, in <module>
from .listings_handler import ListingsHandler, fetch_listings
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/jupyterlab_server/listings_handler.py", line 8, in <module>
import requests
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/requests/__init__.py", line 43, in <module>
import urllib3
File "/Users/kevalshah/myvenv/lib/python3.7/site-packages/urllib3/__init__.py", line 42, in <module>
"urllib3 v2.0 only supports OpenSSL 1.1.1+, currently "
ImportError: urllib3 v2.0 only supports OpenSSL 1.1.1+, currently the 'ssl' module is compiled with 'OpenSSL 1.1.0h 27 Mar 2018'. See: https://github.com/urllib3/urllib3/issues/2168
</code></pre>
<p>I have tried to upgrade and installed the following versions:</p>
<p><code>requests==2.31.0</code></p>
<p><code>urllib3==2.0.7</code></p>
<p>How do I resolve this issue? Do I need to upgrade my system's OpenSSL?</p>
|
<python><python-requests><openssl><importerror><urllib3>
|
2024-01-11 01:03:04
| 1
| 2,022
|
kms
|
77,797,041
| 12,393,400
|
Can't Access SQLite3 Database With pypyodbc
|
<p>I am switching a Python program over from my Windows laptop, which had Microsoft SQL Studio 18 on it, to a Raspberry Pi so I can keep it running 24/7. On the laptop, it worked just fine using pyodbc to connect to the local Microsoft SQL Server, but on the Raspberry Pi, it isn't working. I first had to switch out pyodbc for pypyodbc because there was an issue with the pip installation for pyodbc. Then, I had the database copied over as an .sql file and turned it into a .db file with SQLite3. However, the suggested Driver value for SQLite3 in the pypyodbc.connect() function, <code>Driver=SQLite3 ODBC Driver</code>, is throwing the error</p>
<p>('01000', "[01000] [unixODBC][Driver Manager]Can't open lib 'SQLite3 ODBC Driver' : file not found")</p>
<p>I have no idea how to solve this. I tried to download Microsoft SQL Server for Linux Debian 9, but it couldn't find the msodbcsql18 library. I tried to install FreeTDS ODBC, but my terminal doesn't recognize the tsql command, for some reason. I have not been able to locate this mythical "odbc.ini", either. I've been pulling my hair out over what should have been the most trivial step in this transition, and I need help. Anything you can tell me about will be much appreciated.</p>
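<p>One thing worth trying, since the data now lives in a SQLite <code>.db</code> file anyway: Python's built-in <code>sqlite3</code> module talks to the file directly and needs no ODBC driver, DSN, or <code>odbc.ini</code> at all (a sketch; the file name is hypothetical):</p>

```python
import sqlite3

# Connect straight to the .db file; sqlite3 ships with CPython.
conn = sqlite3.connect("mydatabase.db")  # hypothetical file name
cur = conn.cursor()
# List the tables that made it over from the SQL Server export.
cur.execute("SELECT name FROM sqlite_master WHERE type = 'table'")
print(cur.fetchall())
conn.close()
```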
|
<python><sqlite><odbc><pyodbc><pypyodbc>
|
2024-01-11 00:16:04
| 1
| 616
|
Frasher Gray
|
77,796,979
| 2,142,728
|
VSCode not suggesting imports for symbols in Poetry-managed path dependency
|
<p>I have two Python projects managed by Poetry, ProjectA and ProjectB, where ProjectA depends on ProjectB (using <a href="https://python-poetry.org/docs/dependency-specification/#path-dependencies" rel="nofollow noreferrer">path dependency</a>). When using symbols (such as classes or variables) from external libraries such as <code>fastapi</code> or <code>requests</code>, VSCode successfully suggests most relevant imports (quick fix).</p>
<p>However, when I use symbols of ProjectB in ProjectA, VSCode fails to provide automatic import suggestions.</p>
<p>Notably, this auto-import suggestion feature functions as expected for symbols from external libraries. Can anyone shed light on why this issue may be occurring and provide guidance on resolving it effectively?</p>
<p>Could it be related to <code>py.typed</code>? (I don't know what this is.)</p>
|
<python><visual-studio-code><intellisense><python-poetry>
|
2024-01-10 23:51:23
| 1
| 3,774
|
caeus
|
77,796,696
| 1,144,854
|
Type hinting a dataclass for instance variable that accepts multiple types as an argument and stores single type
|
<p>I want my class constructor to accept variable inputs, then normalize them for storage and access. For example:</p>
<pre class="lang-py prettyprint-override"><code>class Ticket:
def __init__(self, number: int | str):
self.number: int = int(number)
# so that it's flexible in creation:
t = Ticket(6)
t = Ticket('7')
# but consistent when accessed:
isinstance(t.number, int) # True
</code></pre>
<p>I don't know the right OOP term, but I want to make sure my class's interface? signature? correctly reflects that it will accept <code>.number</code> as int or string, but accessing <code>.number</code> will always give an int.</p>
<p>The above works (though I'm open to suggestions), but attempting to do the equivalent with a dataclass gives a type error in Pylance:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Ticket:
number: int | str
#^^^^^^ Pylance: Declaration "number" is obscured by a declaration of the same name
def __post_init__(self):
self.number: int = int(self.number)
</code></pre>
<p>Is this fixable in the dataclass version? Or just a limit of dataclasses? I'd like to keep the other benefits of dataclass if possible.</p>
|
<python><python-typing><python-dataclasses>
|
2024-01-10 22:20:06
| 1
| 763
|
Jacktose
|
77,796,659
| 20,122,390
|
How can I get the index ranges of a pandas series that are NaN?
|
<p>I have a dataframe in Pandas where the indices are dates and the columns are codes, like this:
<a href="https://i.sstatic.net/ahA0E.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ahA0E.png" alt="enter image description here" /></a></p>
<p>I need to identify the columns that have NaN values, I implemented this part like this:</p>
<pre><code>boundaries_with_incomplete_days = boundaries.columns[
boundaries.isna().any()
].to_list()
</code></pre>
<p>So, boundaries_with_incomplete_days is a list containing the codes (the columns with NaN values). The problem is that now I need to identify the date ranges in which there are NaN values. For example, for frt00338:
from 2024-01-03 2:00:00 to 2024-01-03 8:00:00, and
from 2024-01-07 2:00:00 to 2024-01-07 12:00:00.
The way I get this is irrelevant; it could be a list of tuples, for example:</p>
<pre><code>[("2024-01-03 2:00:00", "2024-01-03 8:00:00"), ("2024-01-07 2:00:00", "2024-01-07 12 :00:00")]
</code></pre>
<p>My idea is to iterate over boundaries_with_incomplete_days and identify those ranges for each code; however, I'm not sure how to find these ranges efficiently, and I wouldn't want to loop over all the data for each code. How could I implement this?</p>
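<p>A common vectorised approach (sketched below on a toy series; the variable names are my own) labels each consecutive NaN run with a cumulative sum over the state changes, then takes the first and last index of each run:</p>

```python
import numpy as np
import pandas as pd

idx = pd.to_datetime([
    "2024-01-03 00:00", "2024-01-03 02:00", "2024-01-03 04:00",
    "2024-01-03 06:00", "2024-01-03 08:00", "2024-01-03 10:00",
])
s = pd.Series([1.0, np.nan, np.nan, 2.0, np.nan, 3.0], index=idx)

mask = s.isna()
# new run id whenever the NaN/non-NaN state changes
run_id = (mask != mask.shift()).cumsum()
nan_ranges = [
    (grp.index[0], grp.index[-1])
    for _, grp in s[mask].groupby(run_id[mask])
]
print(nan_ranges)  # two runs: 02:00 to 04:00, and a single point at 08:00
```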
|
<python><pandas><dataframe>
|
2024-01-10 22:12:42
| 1
| 988
|
Diego L
|
77,796,630
| 8,328,007
|
Creating a dynamic frame from an Athena view using the Glue context catalog
|
<p>I have a view created in Athena and I am trying to execute the following inside Glue job:</p>
<pre><code>from awsglue.context import GlueContext
dataframe = glueContext.create_dynamic_frame.from_catalog(
database=db_name,
table_name=view_name,
push_down_predicate=f"year='2023' and month='1' and date='12'",
)
</code></pre>
<p>However I get the following error:</p>
<pre><code>An error occurred while calling o117.getDynamicFrame. User's pushdown predicate: year='2023' and month='1' and date='12' can not be resolved against partition columns: []
</code></pre>
<p>The underlying table of the view does have year, month and date as partitions. But the error message seems to indicate there are none on the view.</p>
<p>Can anyone please explain how an Athena view can be used with Glue's catalog method?</p>
|
<python><aws-glue><amazon-athena>
|
2024-01-10 22:04:48
| 1
| 346
|
python_enthusiast
|
77,796,550
| 9,290,374
|
pd.to_datetime() DateParseError when run in Airflow
|
<p><strong>Goal</strong></p>
<p>A date field [YYYY-MM-DD] comes into a dataframe via <code>pd.read_sql()</code>. Because of destination-system constraints, the field needs to be created as a datetime string. Then it is reformatted as a datetime so it can be uploaded to BigQuery. The entire script is run via Airflow.</p>
<p><strong>Error</strong></p>
<p>The following error only occurs when the script is triggered from Airflow. <em>Manually running the script produces no errors.</em></p>
<blockquote>
<p>[2024-01-10T15:52:06.801-0500] {subprocess.py:93} INFO - df['tx_date_time'] = pd.to_datetime(df['tx_date_time']).dt.strftime('%Y-%m-%d 00:00:00')
pandas._libs.tslibs.parsing.DateParseError: Unknown datetime string
format, unable to parse: 2024-01-02 00:00%:00, at position 0</p>
</blockquote>
<p><strong>Script/Process</strong></p>
<pre><code>import pandas as pd
#sample data coming from pd.read_sql
df = pd.DataFrame()
df['tx_date_time'] = ['2024-01-01', '2024-01-04']
#error appears to occur here
df['tx_date_time'] = pd.to_datetime(df['tx_date_time']).dt.strftime('%Y-%m-%d 00:00:00')
#convert to a datetime object
df['tx_date_time'] = pd.to_datetime(df['tx_date_time'], format='%Y-%m-%d %H:%M:%S')
</code></pre>
|
<python><pandas><datetime><airflow>
|
2024-01-10 21:44:55
| 0
| 490
|
hSin
|
77,796,418
| 1,783,593
|
Overriding a route dependency in FastAPI
|
<p>I am using FastAPI</p>
<p>I have a route:</p>
<pre><code>@router.post("/product", tags=["product"])
async def create_product(request: Request,
id_generator: IdGenerator = Depends(get_id_generator)):
</code></pre>
<p>I have a dependencies file.</p>
<pre><code>def get_id_generator() -> IdGenerator:
return UUIDIdGenerator()
</code></pre>
<p>I also have <code>SomeOtherIdGenerator</code> that I want to use for testing. I just can't get it right.</p>
<pre><code>@pytest.fixture
def test_id_generator():
return SomeOtherIdGenerator()
@pytest.mark.asyncio
async def test_create_product(test_id_generator):
data = '{ "a": "b" }'
app.dependency_overrides['get_id_generator'] = lambda : test_id_generator
client = TestClient(app)
response = client.post("/product", json={'stuff': data})
assert response.status_code == 201
response_data = response.json()
assert response_data['id'] == "some known value"
</code></pre>
<p>The result is that I'm still getting a UUID</p>
<pre><code>Expected :'some known value'
Actual :'06864d25-f88c-4382-9d1a-08c8e6951885'
</code></pre>
<p>I tested with and without the <code>lambda</code></p>
<p><strong>Solution</strong></p>
<pre><code> app.dependency_overrides[get_id_generator] = test_id_generator
</code></pre>
<p>not</p>
<pre><code> app.dependency_overrides['get_id_generator'] = test_id_generator
</code></pre>
<p>or</p>
<pre><code> app.dependency_overrides['get_id_generator'] = lambda : test_id_generator
</code></pre>
|
<python><testing><dependencies><overriding><fastapi>
|
2024-01-10 21:16:56
| 1
| 993
|
stevemarvell
|
77,796,345
| 5,269,749
|
How to print the value for each bin on the plot when plotting via seaborn histplot
|
<p>Let's say I am plotting a histogram with the following code.
Is there a way to print the value of each bin (basically the height of each bar) on the plot?</p>
<pre><code>import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
# Fixing random state for reproducibility
np.random.seed(19680801)
x, y = np.random.rand(2, 100) * 4
sns.histplot(x=x, y=y, bins=4, binrange=[[0, 4], [0, 4]])
</code></pre>
<p>I basically want to have a plot like this:</p>
<p><a href="https://i.sstatic.net/MiAhh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MiAhh.png" alt="enter image description here" /></a></p>
|
<python><seaborn>
|
2024-01-10 21:02:11
| 1
| 1,264
|
Alex
|
77,796,003
| 251,589
|
How to redirect tensorflow import errors from `stderr` to `stdout`
|
<h3>Question</h3>
<p>When you import tensorflow, it prints to <code>stderr</code> - <a href="https://github.com/tensorflow/tensorflow/issues/62770#issue-2073291734" rel="nofollow noreferrer">bug report</a>.</p>
<p>Currently this confuses my monitoring system, which logs these informative messages as errors.</p>
<p>I would like to redirect these messages from <code>stderr</code> to <code>stdout</code>.</p>
<p>In theory, this should work:</p>
<pre class="lang-py prettyprint-override"><code>def redirect_tensorflow_logs():
print("Before - Outside redirect block", file=sys.stderr)
with redirect_stderr(sys.stdout):
print("Before - Inside redirect block", file=sys.stderr)
import tensorflow as tf
print("After - Inside redirect block", file=sys.stderr)
print("After - Outside redirect block", file=sys.stderr)
</code></pre>
<p>Unfortunately, it is not. This is the output I am getting:</p>
<p>Output - stderr:</p>
<pre><code>Before - Outside redirect block
2024-01-10 14:34:44.164579: I tensorflow/core/platform/cpu_feature_guard.cc:182] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
After - Outside redirect block
</code></pre>
<p>Output - stdout:</p>
<pre><code>Before - Inside redirect block
After - Inside redirect block
</code></pre>
<p>Is there a way to redirect these messages from <code>stderr</code> to <code>stdout</code>?</p>
<h3>Alternate solutions</h3>
<p>I know that setting <code>os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'</code> (<a href="https://stackoverflow.com/a/42121886/251589">https://stackoverflow.com/a/42121886/251589</a>) removes these messages entirely. I would prefer not to do this because at some point I will want to address them, and hiding them will make it harder for future devs to understand what is going on.</p>
<p>I could also redirect ALL output from my program from stderr to stdout via something like this: <code>python -m myprogram 2>&1</code>. This would stop my monitoring system from raising errors but may cause me to miss some items in the future that are <strong>real errors</strong>.</p>
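<p>For the record, <code>contextlib.redirect_stderr</code> only swaps the Python-level <code>sys.stderr</code> object, while TensorFlow's C++ core writes to OS file descriptor 2 directly, so the redirect has to happen at the fd level with <code>os.dup2</code>. A minimal sketch (function names are my own; to send fd 2 into stdout for real, flush and pass <code>sys.stdout.fileno()</code> as the target):</p>

```python
import os
import tempfile

def redirect_fd2(target_fd: int) -> int:
    """Point OS-level stderr (fd 2) at target_fd; return a copy of the old fd 2."""
    saved = os.dup(2)
    os.dup2(target_fd, 2)
    return saved

def restore_fd2(saved: int) -> None:
    os.dup2(saved, 2)
    os.close(saved)

# Demo: a raw write to fd 2 (the kind C++ code performs) lands in the file.
with tempfile.TemporaryFile() as f:
    saved = redirect_fd2(f.fileno())
    os.write(2, b"low-level stderr write\n")
    restore_fd2(saved)
    f.seek(0)
    print(f.read())  # b'low-level stderr write\n'
```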
|
<python><tensorflow><logging><stderr>
|
2024-01-10 19:41:51
| 1
| 27,385
|
sixtyfootersdude
|
77,795,931
| 11,637,422
|
Vectorising or Using Multiprocessing on a Large Dataframe
|
<p>I have a weird dataframe where there are amounts for theoretical deposits up to Day30, so 30 triplets of columns. I want to gather all the data into one single column, where I have all the dates and deposits, regardless of whether they happened on Day1, Day2 or DayX for that player. I have tried the following code, which gives a correct output, but takes over 120 minutes to run on a large dataset:</p>
<pre><code>import pandas as pd
# Sample DataFrame with Day 1 and Day 2 data
data = {
'Day_date_1': ['2024-01-01', '2024-01-02'],
'Day_date_1_week_year': ['2024-W01', '2024-W01'],
'Day1_deposit': [100, 200],
'Day_date_2': ['2024-01-02', '2024-01-03'],
'Day_date_2_week_year': ['2024-W01', '2024-W01'],
'Day2_deposit': [150, 250],
'PLAYER_ID': [1, 2],
}
df = pd.DataFrame(data)
date_columns = [f'Day_date_{i}' for i in range(1, 3)] # Includes Day 1 and Day 2
deposit_columns = [f'Day{i}_deposit' for i in range(1, 3)] # Includes Day 1 and Day 2
deposit_week_columns = [f'Day_date_{i}_week_year' for i in range(1, 3)] # Includes Day 1 and Day 2
# Empty DataFrame to store results
result_df = pd.DataFrame()
# Processing the DataFrame
for date_col, week_col, value_col in zip(date_columns, deposit_week_columns, deposit_columns):
temp_df = df[[date_col, week_col, value_col, "PLAYER_ID"]]
temp_df.columns = ["deposit_date", 'deposit_week', "deposit_amount", "PLAYER_ID"]
result_df = result_df.append(temp_df, ignore_index=True)
result_df['deposit_date'] = pd.to_datetime(result_df['deposit_date'])
result_deposit = result_df.groupby(['PLAYER_ID', 'deposit_date', 'deposit_week'])['deposit_amount'].mean().reset_index().rename(columns={'deposit_amount': 'mean_deposit_amount'})
# Output the result
print(result_deposit)
</code></pre>
<p>Is there any way to vectorise the loop or speed up processing through multiprocessing?</p>
<p>The output I want is as follows:</p>
<pre><code>PLAYER_ID deposit_date deposit_week mean_deposit_amount
1 2024-01-01 2024-W01 100.0
1 2024-01-02 2024-W01 150.0
2 2024-01-02 2024-W01 200.0
2 2024-01-03 2024-W01 250.0
</code></pre>
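One common way to speed this up is to collect all the per-day slices first and concatenate once at the end; growing a DataFrame with `append`/`concat` inside a loop is quadratic, and `DataFrame.append` was removed in pandas 2.x anyway. A minimal sketch on the sample data:

```python
import pandas as pd

data = {
    'Day_date_1': ['2024-01-01', '2024-01-02'],
    'Day_date_1_week_year': ['2024-W01', '2024-W01'],
    'Day1_deposit': [100, 200],
    'Day_date_2': ['2024-01-02', '2024-01-03'],
    'Day_date_2_week_year': ['2024-W01', '2024-W01'],
    'Day2_deposit': [150, 250],
    'PLAYER_ID': [1, 2],
}
df = pd.DataFrame(data)

# Collect one renamed slice per day, then concatenate once.
frames = [
    df[[f'Day_date_{i}', f'Day_date_{i}_week_year', f'Day{i}_deposit', 'PLAYER_ID']]
    .set_axis(['deposit_date', 'deposit_week', 'deposit_amount', 'PLAYER_ID'], axis=1)
    for i in range(1, 3)
]
result_df = pd.concat(frames, ignore_index=True)
result_df['deposit_date'] = pd.to_datetime(result_df['deposit_date'])

result_deposit = (
    result_df.groupby(['PLAYER_ID', 'deposit_date', 'deposit_week'])['deposit_amount']
    .mean()
    .reset_index()
    .rename(columns={'deposit_amount': 'mean_deposit_amount'})
)
print(result_deposit)
```

The loop over 30 day-triplets is cheap; the expensive part was rebuilding `result_df` on every iteration, which this avoids.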
|
<python><pandas><multiprocessing><vectorization>
|
2024-01-10 19:27:28
| 1
| 341
|
bbbb
|
77,795,874
| 15,781,591
|
Unable to set custom color palette in pandas pie chart, colors keep getting set based on value
|
<p>I have a for loop in python that generates many pie charts for different calculations from a dataframe. I want the color labelling between each pie chart for each for loop iteration to be the same, and so I try to set them to the same color palette dictionary.</p>
<pre><code>color_mapping = dict(zip(df_pc_region.index, sns.color_palette('Set2')))
plot_colors = [color_mapping[region] for region in df_pc_region.index]
plot = df_pc[geog].value_counts(normalize=True).plot(kind='pie', startangle=90, colors=plot_colors, autopct='%1.1f%%', fontsize=9, pctdistance=0.80, explode=[0.05]*len(df_pc[geog].unique()))
</code></pre>
<p>And so, since the <code>colors</code> parameter is set to <code>plot_colors</code>, the color labelling should be the same for each chart created in each loop iteration, regardless of which label gets the higher value.</p>
<p>And yet, in my first two plots from two separate loop iterations, the color labelling is not consistent: the colors appear to be assigned based on value rather than on the name of each category, as I intended by setting <code>colors</code> to <code>plot_colors</code>. Why is the color labelling not registering as expected?</p>
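For what it's worth, pandas draws the pie slices in the order of the `value_counts()` index (largest count first), so the color list has to follow that order, not the order of `df_pc_region.index`. A minimal sketch with made-up data and plain color names standing in for the seaborn palette:

```python
import pandas as pd

# Hypothetical stand-in for df_pc[geog]
s = pd.Series(['North', 'South', 'North', 'East', 'North', 'South'])

# Fixed category -> color mapping shared by every chart
color_mapping = {'North': 'C0', 'South': 'C1', 'East': 'C2'}

counts = s.value_counts(normalize=True)
# Key point: order the colors by the order the slices are actually drawn in,
# i.e. the value_counts index, not some other index.
plot_colors = [color_mapping[cat] for cat in counts.index]
print(list(counts.index))  # ['North', 'South', 'East']
print(plot_colors)         # ['C0', 'C1', 'C2']
# counts.plot(kind='pie', colors=plot_colors, ...) then keeps colors stable
```

With this ordering, each category keeps its color across charts even when its rank by value changes between loop iterations.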
|
<python><pandas><matplotlib>
|
2024-01-10 19:15:46
| 0
| 641
|
LostinSpatialAnalysis
|
77,795,735
| 5,429,320
|
Failed to connect to database: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')
|
<p>I have an Azure Function app, which has a couple endpoints, written in Python and it is running in a docker container in Docker Desktop. The Function App seems to run correctly but when I try to call the endpoint I get the following error:</p>
<pre><code>{
"error": "Database connection failed: Failed to connect to database: ('HYT00', '[HYT00] [Microsoft][ODBC Driver 17 for SQL Server]Login timeout expired (0) (SQLDriverConnect)')"
}
</code></pre>
<p>My SQL Server is installed on my local Windows desktop. The Function App works as expected and connects to the database if I run it within VS Code using Azurite. I am not sure why it is struggling to connect to the database from the container.</p>
<p>database_helper.py</p>
<pre><code>...
def load_environment_variables():
# Loads required environment variables for database connection.
# Raises an error if any required variable is missing.
required_vars = ['DB_HOST', 'DB_NAME', 'DB_USERNAME', 'DB_PASS']
env_vars = {}
for var in required_vars:
value = os.getenv(var)
if not value:
raise EnvironmentError(f"Missing required environment variable: {var}")
env_vars[var] = value
return env_vars
def create_connection_string(env_vars):
# Constructs a connection string for the database using environment variables.
return (
f"Driver={{ODBC Driver 17 for SQL Server}};"
f"Server={env_vars['DB_HOST']};"
f"Database={env_vars['DB_NAME']};"
f"UID={env_vars['DB_USERNAME']};"
f"PWD={env_vars['DB_PASS']};" #NOSONAR
)
def get_connection():
# Establishes a database connection using the connection string.
# Handles pyodbc connection errors and logs them.
env_vars = load_environment_variables()
connection_string = create_connection_string(env_vars)
try:
return pyodbc.connect(connection_string, autocommit=False)
except pyodbc.Error as e:
logger.error(f"Error connecting to database: {e}")
raise DatabaseConnectionError(f"Failed to connect to database: {e}") from e
class DatabaseManager:
# Manages database operations within a context manager.
def __init__(self):
# Initializes a database connection and cursor.
self.connection = get_connection()
self.cursor = self.connection.cursor()
...
</code></pre>
<p>Dockerfile:</p>
<pre><code># Use the Azure Functions Python image
FROM mcr.microsoft.com/azure-functions/python:4-python3.10-core-tools
# Set the working directory
WORKDIR /app
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
unixodbc-dev \
gnupg \
&& rm -rf /var/lib/apt/lists/* /packages-microsoft-prod.deb
# Install Microsoft ODBC Driver for SQL Server (Debian 11)
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/11/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& ACCEPT_EULA=Y apt-get install -y msodbcsql17 \
&& rm -rf /var/lib/apt/lists/* /packages-microsoft-prod.deb
# Copy only the requirements file and install Python dependencies.
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application code.
COPY . .
# Expose the port on which the app will run.
EXPOSE 7071
# Copy the ODBC configuration files.
COPY odbcinst.ini /etc/
COPY odbc.ini /etc/
# Start the function app.
CMD ["func", "start", "--python"]
</code></pre>
<p>odbcinst.ini & odbc.ini</p>
<pre><code>[ODBC Driver 17 for SQL Server]
Description=Microsoft ODBC Driver 17 for SQL Server
Driver=/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.5.1
UsageCount=1
[ODBC Driver 17 for SQL Server]
Driver=ODBC Driver 17 for SQL Server
Server=host.docker.internal\\SQLEXPRESS
Database=dev.local
</code></pre>
<p>Docker Compose:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.8'
services:
app:
image: func-app:latest
ports:
- "8001:7071"
environment:
- AzureWebJobsStorage=...
- AzureWebJobsFeatureFlags=EnableWorkerIndexing
- FUNCTIONS_WORKER_RUNTIME=python
- ENVIRONMENT=dev
- DB_HOST=host.docker.internal\\SQLEXPRESS
- DB_NAME=dev.local
- DB_USERNAME=user
- DB_PASS=admin
- AZURE_STORAGE_ACCOUNT_CONNECTIONSTRING=...
- AZURE_CONTAINER=media
- DEV_SHOP=...
- SETS_PROCESSED=100
command: ["func", "start", "--python"]
</code></pre>
|
<python><sql-server><azure><docker><odbc>
|
2024-01-10 18:47:15
| 1
| 2,467
|
Ross
|
77,795,681
| 9,642
|
How to specify OpenAI Organization ID in Microsoft AutoGen?
|
<p>In order to create a "config_list" for OpenAI in <a href="https://github.com/microsoft/autogen" rel="nofollow noreferrer">Microsoft AutoGen</a> I do the following:</p>
<pre class="lang-py prettyprint-override"><code>import os
config_list = [{
'model': 'gpt-3.5-turbo-1106',
'api_key': os.getenv("OPENAI_API_KEY"),
}]
</code></pre>
<p>How does one specify an <a href="https://platform.openai.com/docs/api-reference/organization-optional" rel="nofollow noreferrer">OpenAI organization ID</a>?</p>
<p>P.S. - I didn't find an appropriate tag for Microsoft's AutoGen so I just used openai-api. Do we need to create one?</p>
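One approach that appears to work (assumption: AutoGen forwards extra config keys through to the underlying OpenAI client, which accepts an <code>organization</code> field) is to add the organization ID to each config entry; the environment variable names here are illustrative:

```python
import os

# 'organization' is passed through to the OpenAI client alongside the key.
config_list = [{
    'model': 'gpt-3.5-turbo-1106',
    'api_key': os.getenv("OPENAI_API_KEY", "sk-placeholder"),
    'organization': os.getenv("OPENAI_ORG_ID", "org-placeholder"),
}]
print(config_list[0]['organization'])
```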
|
<python><openai-api><ms-autogen>
|
2024-01-10 18:36:01
| 1
| 20,614
|
Neil C. Obremski
|
77,795,612
| 3,929,525
|
How to run TensorFlow GPU version on Google Colab with Python 2.7?
|
<p>I have a code written using the <strong>TensorFlow version 1.15</strong> for an image to image translation task and I want to run it on Google Colab environment which currently has Python 3.10.12 installed by default.</p>
<p>Due to my source code's dependencies, I have to use TensorFlow version 1.15 and that's why I have used other dependencies that match with this version of TensorFlow.</p>
<p>First, I install Python version 2.7 on Google Colab using the command below:</p>
<pre><code>!apt-get install python2
</code></pre>
<p>After that, I check the installed Python's version:</p>
<pre><code>!python2 --version
</code></pre>
<p>And I get: Python <strong>2.7.18</strong></p>
<p>Next, I connect to Google Drive and refer to the path where my source is located (<strong>main.py, util.py, model.py and ops.py</strong>):</p>
<pre><code>from google.colab import drive
drive.mount('/content/drive')
%cd /content/drive/MyDrive/TFCode/
</code></pre>
<p>Then to install the necessary packages, first I install pip using the commands below:</p>
<pre><code>!curl https://bootstrap.pypa.io/pip/2.7/get-pip.py -o get-pip.py
!python2 get-pip.py
</code></pre>
<p>And then:</p>
<pre><code>!python2 -m pip install six
!python2 -m pip install scipy
!python2 -m pip install imageio==2.4.1
!apt-get install build-essential
!python2 -m pip install grpcio==1.26.0
!python2 -m pip install tensorflow-gpu==1.15
!python2 -m pip install opencv-python==3.4.8.29
!python2 -m pip install scikit-image==0.14.5
</code></pre>
<p>Finally, I run my source using <code>!python2 main.py</code> and despite the fact that the code runs successfully, it takes a long time to finish the task because my code is not being run on GPU. I know this because of the following code block inside my <strong>main.py</strong> file:</p>
<pre><code>if tf.test.gpu_device_name():
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
print("Please install GPU version of TF")
</code></pre>
<p>Inside my results, I see the "Please install GPU version of TF" string output which means the GPU cannot be detected but when I run this code like below directly in Colab:</p>
<pre><code>import tensorflow as tf
if tf.test.gpu_device_name():
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
else:
print("Please install GPU version of TF")
</code></pre>
<p>I get this: <strong>Default GPU Device: /device:GPU:0</strong> which means it finds the GPU but as this code is being run directly in Colab environment, I know that it uses Python 3.10.12 and TensorFlow version 2.</p>
<p>How can I run my own source code with GPU?</p>
<p>Using version 2.7 of Python is not a must but version 1.15 of TensorFlow must be used and other dependencies must match with this version of TensorFlow.</p>
|
<python><python-2.7><tensorflow><google-colaboratory><tensorflow1.15>
|
2024-01-10 18:24:19
| 1
| 1,312
|
Naser.Sadeghi
|
77,795,529
| 18,476,381
|
SQLAlchemy Async Join not bringing in columns
|
<p>I am trying to join two tables using sqlalchemy. The query I am trying to achieve is:</p>
<pre><code>SELECT *
FROM service_order
JOIN vendor ON service_order.vendor_id = vendor.vendor_id
WHERE service_order.service_order_id = 898;
</code></pre>
<p>My current sqlalchemy statement below just generates an object <code>&lt;db.model.service_order.ServiceOrder&gt;</code> with columns only from the service_order table and, instead of adding the columns from the vendor table into this object, creates a vendor object <code>&lt;db.model.vendor.Vendor&gt;</code> as one of the values.</p>
<pre><code>from typing import List, Sequence, Dict, Union
from sqlalchemy import select
from sqlalchemy import func
from sqlalchemy.orm import Session, joinedload, lazyload, selectinload
from api.model import CreateServiceOrderRequest
from db.model import ServiceOrder as DBServiceOrder
from db.model import ServiceOrderItem as DBServiceOrderItem
from db.model import Vendor as DBVendor
from db.engine import DatabaseSession as AsyncSession
from ..exceptions import DBRecordNotFoundException, InvalidPropertyException
from core.domains.service_order.service_order_model import ServiceOrderModel
async def get_service_order_by_id(
session: AsyncSession, service_order_id: int
) -> ServiceOrderModel:
async with session:
statement = (
select(DBServiceOrder)
.options(
joinedload(DBServiceOrder.service_order_item).joinedload(
DBServiceOrderItem.service_order_item_receive
),
)
.join(DBVendor, DBServiceOrder.vendor_id == DBVendor.vendor_id)
.where(DBServiceOrder.service_order_id == service_order_id)
)
result = await session.scalars(statement)
service_order = result.first()
return service_order
</code></pre>
<p>Below are my DB models for both service_order and vendor.</p>
<pre><code>from datetime import datetime
from typing import Optional, List
from sqlalchemy import func, ForeignKey, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from .base_model import BaseModel
class ServiceOrder(BaseModel):
__tablename__ = "service_order"
service_order_id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
vendor_id: Mapped[float] = mapped_column(ForeignKey("vendor.vendor_id"))
vendor: Mapped["Vendor"] = relationship(
"Vendor", lazy="selectin", back_populates="service_order"
)
service_order_item: Mapped[List["ServiceOrderItem"]] = relationship(
"ServiceOrderItem", back_populates="service_order"
)
from datetime import datetime
from typing import Optional, List
from sqlalchemy import func, String
from sqlalchemy.orm import Mapped, mapped_column, relationship
from .base_model import BaseModel
class Vendor(BaseModel):
__tablename__ = "vendor"
vendor_id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
company_name: Mapped[Optional[str]] = mapped_column(String(200))
component: Mapped[List["Component"]] = relationship(back_populates="vendor")
motor: Mapped[List["Motor"]] = relationship(foreign_keys="Motor.vendor_id")
motor_field_track: Mapped[List['MotorFieldTrack']] = relationship('MotorFieldTrack', back_populates='vendor')
service_order: Mapped[List['ServiceOrder']] = relationship('ServiceOrder', back_populates='vendor')
motor_order: Mapped[List['MotorOrder']] = relationship('MotorOrder', back_populates='vendor')
</code></pre>
<p>I've tried various combinations of <code>joinedload</code>, <code>lazyload</code> and <code>selectinload</code>, but nothing seems to work. Instead of creating an object with all the columns, it keeps creating a vendor object instead of the columns from that table. How can I return one object with all the columns from both tables? Even better, if I want to pick and choose which columns to take from the second table, how do I achieve that as well?</p>
|
<python><sql><sqlalchemy><orm>
|
2024-01-10 18:09:14
| 1
| 609
|
Masterstack8080
|
77,795,452
| 1,473,517
|
How fast can a max sum rectangle be found?
|
<p>I need to find a rectangle in a large matrix of integers that has the maximum sum. There is an O(n^3) time algorithm as described <a href="https://www.interviewbit.com/blog/maximum-sum-rectangle/" rel="nofollow noreferrer">here</a> and <a href="https://stackoverflow.com/questions/69368984/maximum-sum-rectangle-in-a-2d-matrix-using-divide-and-conquer">here</a> for example.</p>
<p>These both work well but they are slow, partly because of Python. How much can the code be sped up for an 800 by 800 matrix, for example? It takes 56 seconds on my PC.</p>
<p>Here is my sample code which is based on code from geeksforgeeks:</p>
<pre><code>import numpy as np
def kadane(arr, start, finish, n):
# initialize subarray_sum, max_subarray_sum and
subarray_sum = 0
max_subarray_sum = float('-inf')
i = None
# Just some initial value to check
# for all negative values case
finish = -1
# local variable
local_start = 0
for i in range(n):
subarray_sum += arr[i]
if subarray_sum < 0:
subarray_sum = 0
local_start = i + 1
elif subarray_sum > max_subarray_sum:
max_subarray_sum = subarray_sum
start = local_start
finish = i
# There is at-least one
# non-negative number
if finish != -1:
return max_subarray_sum, start, finish
# Special Case: When all numbers
# in arr[] are negative
max_subarray_sum = arr[0]
start = finish = 0
# Find the maximum element in array
for i in range(1, n):
if arr[i] > max_subarray_sum:
max_subarray_sum = arr[i]
start = finish = i
return max_subarray_sum, start, finish
# The main function that finds maximum subarray_sum rectangle in M
def findMaxsubarray_sum(M):
num_rows, num_cols = M.shape
# Variables to store the final output
max_subarray_sum, finalLeft = float('-inf'), None
finalRight, finalTop, finalBottom = None, None, None
left, right, i = None, None, None
temp = [None] * num_rows
subarray_sum = 0
start = 0
finish = 0
# Set the left column
for left in range(num_cols):
# Initialize all elements of temp as 0
temp = np.zeros(num_rows, dtype=np.int_)
# Set the right column for the left
# column set by outer loop
for right in range(left, num_cols):
temp += M[:num_rows, right]
#print(temp, start, finish, num_rows)
subarray_sum, start, finish = kadane(temp, start, finish, num_rows)
# Compare subarray_sum with maximum subarray_sum so far.
# If subarray_sum is more, then update maxsubarray_sum
# and other output values
if subarray_sum > max_subarray_sum:
max_subarray_sum = subarray_sum
finalLeft = left
finalRight = right
finalTop = start
finalBottom = finish
# final values
print("(Top, Left)", "(", finalTop, finalLeft, ")")
print("(Bottom, Right)", "(", finalBottom, finalRight, ")")
print("Max subarray_sum is:", max_subarray_sum)
# np.random.seed(40)
square = np.random.randint(-3, 4, (800, 800))
# print(square)
%timeit findMaxsubarray_sum(square)
</code></pre>
<p>Can numba or pythran or parallelization or just better use of numpy be used to speed this up a lot? Ideally I would like it to take under a second.</p>
<p>There is claimed to be a <a href="https://cstheory.stackexchange.com/a/39138/1864">faster algorithm</a> but I don't know how hard it would be to implement.</p>
<h1>Test cases</h1>
<pre><code>[[ 3 0 2]
[-3 -3 -1]
[-2 1 -1]]
</code></pre>
<p>The correct answer is the rectangle covering the top row with score 5.</p>
<pre><code>[[-1 3 0]
[ 0 0 -2]
[ 0 2 1]]
</code></pre>
<p>The correct answer is the rectangle covering the second column with score 5.</p>
<pre><code>[[ 2 2 -1]
[-1 -1 0]
[ 3 1 1]]
</code></pre>
<p>The correct answer is the rectangle covering the first two columns with score 6.</p>
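A compact, dependency-free reference implementation of the O(n^3) Kadane-over-column-pairs algorithm described above can be handy for checking any optimized (numba/pythran) version against these test cases:

```python
def max_sum_rectangle(matrix):
    """Return the maximum sum over all axis-aligned sub-rectangles."""
    n_rows, n_cols = len(matrix), len(matrix[0])
    best = float('-inf')
    for left in range(n_cols):
        # temp[r] accumulates the row sums for columns left..right
        temp = [0] * n_rows
        for right in range(left, n_cols):
            for r in range(n_rows):
                temp[r] += matrix[r][right]
            # 1-D Kadane on the collapsed column sums
            cur = float('-inf')
            for v in temp:
                cur = v if cur < 0 else cur + v
                best = max(best, cur)
    return best

print(max_sum_rectangle([[3, 0, 2], [-3, -3, -1], [-2, 1, -1]]))   # 5
print(max_sum_rectangle([[-1, 3, 0], [0, 0, -2], [0, 2, 1]]))      # 5
print(max_sum_rectangle([[2, 2, -1], [-1, -1, 0], [3, 1, 1]]))     # 6
```

This version only returns the sum, not the coordinates, but the loop structure is exactly what a `@njit`-decorated function would compile.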
|
<python><numpy><optimization><numba><pythran>
|
2024-01-10 17:55:19
| 1
| 21,513
|
Simd
|
77,795,419
| 7,287,543
|
Python globbing to match find's "-print -quit" behavior
|
<p>I have a few arbitrarily deep directories, each of which contains a single file with a consistent name. In the command line I can use <code>find <dir> -name <filename> -print -quit</code> for optimized searching: once it finds the file, it stops looking through the directory. I can do that because it's found exactly what I'm looking for.</p>
<p>I can use glob (or os.walk, I suppose) to do the same thing. But neither of these options seem to have a way to <em>stop</em> once the file I'm looking for is found: both index the full directory regardless - globbing looks for as many matches as possible, and os.walk will only allow me to filter after the index is complete.</p>
<p>Is there a way to get that optimized <code>find</code> behavior in Python, short of doing a <code>find</code> in a subprocess?</p>
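One stdlib option: `Path.rglob` returns a lazy generator, so pulling just the first item with `next()` stops the walk at the first match, much like `find ... -print -quit`. A small self-contained sketch (the directory tree here is fabricated for the demo):

```python
import tempfile
from pathlib import Path

# Build a throwaway tree: root/a/b/target.txt
root = Path(tempfile.mkdtemp())
(root / "a" / "b").mkdir(parents=True)
(root / "a" / "b" / "target.txt").write_text("hello")

# next() consumes only as much of the generator as needed; the walk stops
# once the first match is yielded.
match = next(root.rglob("target.txt"), None)
print(match)  # .../a/b/target.txt
```

`glob.iglob("**/target.txt", recursive=True)` is the equivalent iterator-based form of the `glob` module.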
|
<python><find><glob><subdirectory>
|
2024-01-10 17:50:54
| 1
| 1,893
|
Yehuda
|
77,795,406
| 1,593,077
|
Can I get argparse not to repeat the argument indication after the two option names?
|
<p>When I specify a parameter for argparse with both a short and long name, e.g.:</p>
<pre><code>parser.add_argument("-m", "--min", dest="min_value", type=float, help="Minimum value")
</code></pre>
<p>and ask for <code>--help</code>, I get:</p>
<pre><code> -m MIN_VALUE, --min MIN_VALUE
Minimum value
</code></pre>
<p>This annoys me. I would like <code>MIN_VALUE</code> not to be repeated. So, for example:</p>
<pre><code> [-m | --min-value] MIN_VALUE
Minimum value
</code></pre>
<p>Can I get argparse to print that (other than by overriding the <code>--help</code> message entirely)?</p>
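One common workaround is a custom `HelpFormatter` that prints the metavar once after all option names. Note this relies on private `HelpFormatter` internals, so it may need adjusting across Python versions (Python 3.13's default formatter already prints `-m, --min MIN_VALUE`):

```python
import argparse

class CompactFormatter(argparse.HelpFormatter):
    """Show the metavar once after all option names instead of repeating it."""
    def _format_action_invocation(self, action):
        if not action.option_strings or action.nargs == 0:
            # positionals and flags (e.g. store_true): default behaviour
            return super()._format_action_invocation(action)
        default = self._get_default_metavar_for_optional(action)
        args_string = self._format_args(action, default)
        return ', '.join(action.option_strings) + ' ' + args_string

parser = argparse.ArgumentParser(prog='demo', formatter_class=CompactFormatter)
parser.add_argument("-m", "--min", dest="min_value", type=float, help="Minimum value")
help_text = parser.format_help()
print(help_text)  # contains: -m, --min MIN_VALUE   Minimum value
```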
|
<python><command-line-arguments><argparse>
|
2024-01-10 17:47:31
| 1
| 137,004
|
einpoklum
|
77,795,398
| 3,446,051
|
Regular Expresssion does not catch the last digit
|
<p>I have the following cases I want to capture:</p>
<pre><code>| Task| Text | Capture |
|:---- |:------| :-----|
| Capture| 1.304 /XXX 0.0000 XX 15/Oct/2000 | 1.304 and 0.0000 |
| Capture | XXX 1.304% - XXXX 15.10.2044 XXX | 1.304 but not part of the date 15.10.2044|
| Capture| XXX 11,8275% XXX1 AAA | 11,8275|
| Capture| XX 0.0. vs. 2.895 | 2.895 |
| Capture| XX 0.0. vs. 2.895. | 2.895 |
</code></pre>
<p>I have created the following regular expression:</p>
<pre><code>(?<![,\.])(\d+[,.]\d+)[^,\.]%*
</code></pre>
<p>with</p>
<pre><code>re.findall(r'(?<![,\.])(\d+[,.]\d+)[^,\.]%*',text)
</code></pre>
<p>The problem is that it is not able to detect the last two cases with <code>2.895</code>. In one case it detects <code>2.89</code>, and in the last case it is not able to detect anything because of the full stop. I want it to detect the decimal at the end of the sentence even if the sentence ends with a full stop.</p>
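One pattern that handles all five rows is to replace the trailing consuming class with a lookahead, so no character after the number needs to exist or be consumed (a sketch; whether `0.0` in the last two rows should also count is a separate decision):

```python
import re

# Lookbehind: not preceded by a digit, '.' or ',' (rules out date tails);
# lookahead: not followed by an optional '.'/',' plus a digit (rules out
# being a prefix of a longer date like 15.10.2044), while a plain final
# full stop or end of string is fine because nothing is consumed.
pattern = re.compile(r'(?<![\d.,])(\d+[.,]\d+)(?![.,]?\d)')

texts = [
    "1.304 /XXX 0.0000 XX 15/Oct/2000",
    "XXX 1.304% - XXXX 15.10.2044 XXX",
    "XXX 11,8275% XXX1 AAA",
    "XX 0.0. vs. 2.895",
    "XX 0.0. vs. 2.895.",
]
for t in texts:
    print(pattern.findall(t))
```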
|
<python><regex>
|
2024-01-10 17:46:40
| 2
| 5,459
|
Code Pope
|
77,795,362
| 6,160,119
|
How to migrate from typing.TypeAlias to type statements
|
<p>I have created a type alias for defining member variables in a <a href="https://docs.python.org/3/library/dataclasses.html#dataclasses.dataclass" rel="nofollow noreferrer"><code>dataclass</code></a> through type annotations:</p>
<pre><code>>>> from typing import TypeAlias
>>> Number: TypeAlias = int | float
>>> n, x = 1, 2.5
>>> isinstance(n, Number)
True
>>> isinstance(x, Number)
True
</code></pre>
<p>According to the <a href="https://docs.python.org/3/library/typing.html#typing.TypeAlias" rel="nofollow noreferrer">docs</a>, this syntax has been deprecated:</p>
<blockquote>
<p><em>Deprecated since version 3.12:</em> <code>TypeAlias</code> is deprecated in favor of the <code>type</code> statement, which creates instances of <code>TypeAliasType</code> and which natively supports forward references. Note that while <code>TypeAlias</code> and <code>TypeAliasType</code> serve similar purposes and have similar names, they are distinct and the latter is not the type of the former. Removal of <code>TypeAlias</code> is not currently planned, but users are encouraged to migrate to <code>type</code> statements.</p>
</blockquote>
<p>I tried to use the new syntax, but I got an error:</p>
<pre><code>>>> type Number = int | float
>>> isinstance(n, Number)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: isinstance() arg 2 must be a type, a tuple of types, or a union
</code></pre>
<p>How should I go about this?</p>
|
<python><python-typing>
|
2024-01-10 17:39:50
| 1
| 13,793
|
Tonechas
|
77,795,233
| 4,704,065
|
Combine two panda series of the same data frame with different lengths
|
<p>I have a data frame which consists of multiple pandas Series. I want to combine three Series with different lengths.</p>
<p>My data frame looks like this:</p>
<pre><code>X message Main table:
_tunneled _iTOW _messageIndex version numSv
0 0.0 518431.0 1387.0 2.0 8.0
1 0.0 518431.0 1388.0 2.0 8.0
2 0.0 518432.0 1443.0 2.0 8.0
3 0.0 518432.0 1444.0 2.0 8.0
4 0.0 518433.0 1488.0 2.0 8.0
... ... ... ... ... ...
14333 0.0 525597.0 307330.0 2.0 19.0
14334 0.0 525598.0 307370.0 2.0 19.0
14335 0.0 525598.0 307371.0 2.0 19.0
14336 0.0 525599.0 307411.0 2.0 19.0
14337 0.0 525599.0 307412.0 2.0 19.0
[280550 rows x 41 columns]

Sub-table svData:
epochIx _iTOW _parentMessageIndex ionoEst ionoEstAcc svDataGnssId svDataSvId svStatus
0 1.0 518432.0 1387.0 0.0000 0.0269 2.0 9.0 7.0
1 1.0 518432.0 1387.0 0.0000 0.0156 2.0 10.0 7.0
2 1.0 518432.0 1387.0 0.0000 0.0210 2.0 4.0 7.0
3 1.0 518432.0 1387.0 0.0000 0.0289 2.0 36.0 7.0
4 1.0 518432.0 1387.0 0.0000 0.0156 2.0 2.0 7.0
... ... ... ... ... ... ... ... ...
280545 14338.0 525600.0 307412.0 0.0114 0.0109 2.0 34.0 3.0
280546 14338.0 525600.0 307412.0 0.0408 0.0108 0.0 3.0 3.0
280547 14338.0 525600.0 307412.0 -0.0119 0.0100 2.0 11.0 3.0
280548 14338.0 525600.0 307412.0 -0.0053 0.0096 0.0 17.0 3.0
280549 14338.0 525600.0 307412.0 -0.0758 0.0106 0.0 1.0 3.0
</code></pre>
<p>I want to combine <strong>numSv</strong> from Main table , <strong>_iTOW</strong> and <strong>svDataSvId</strong> from Sub-table <strong>svData</strong></p>
<p>I tried to use concat method but it gives me error: TypeError: first argument must be an iterable of pandas objects, you passed an object of type "Series"</p>
<pre><code>x=pd.concat(sv_id, msg)
</code></pre>
<p>Any pointers?</p>
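The TypeError comes from passing the second Series positionally: `pd.concat` takes a *list* of objects as its first argument. A minimal sketch with made-up Series of different lengths:

```python
import pandas as pd

sv_id = pd.Series([9.0, 10.0, 4.0, 36.0], name="svDataSvId")
msg = pd.Series([8.0, 8.0], name="numSv")  # deliberately shorter

# List of objects, and axis=1 to place them side by side; rows are aligned
# on the index and the shorter Series is padded with NaN.
x = pd.concat([sv_id, msg], axis=1)
print(x.shape)  # (4, 2)
```

`pd.concat(sv_id, msg)` fails because `msg` lands in the `axis` parameter slot; the list form fixes both problems at once.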
|
<python><pandas>
|
2024-01-10 17:18:25
| 1
| 321
|
Kapil
|
77,795,176
| 3,707,564
|
multiple with statement in one line for python
|
<p>Can I write multiple <code>with</code> statements on one line in Python? Instead of:</p>
<pre><code>with suppress(Exception): del a;
with suppress(Exception): del b;
</code></pre>
<p>is there something like?</p>
<pre><code>with suppress(Exception): del a;|| with suppress(Exception): del b;
</code></pre>
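There is no statement separator that chains two compound statements like that, but the two deletions can be driven by a loop with one `suppress()` per iteration. A sketch using an explicit dict as the namespace for determinism (at module level you would use `globals()` the same way):

```python
from contextlib import suppress

ns = {"a": 1}  # "b" is deliberately missing

# One suppress() context per deletion, so a failure on "a" cannot skip "b".
for name in ("a", "b"):
    with suppress(KeyError):
        del ns[name]

print(ns)  # {}
```

Note that a single `with suppress(Exception): del a; del b` is not equivalent: if `del a` raises, `del b` is skipped.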
|
<python><with-statement>
|
2024-01-10 17:09:27
| 1
| 1,930
|
user40780
|
77,795,150
| 577,805
|
Cannot import name 'Actor' from 'pygame.sprite'
|
<p>I have the following python / pygame code:</p>
<pre><code>import pygame
import time
pygame.init()
WIDTH = 800
HEIGHT = 600
from pygame.sprite import Actor
alien = Actor('alien')
def draw():
alien.draw()
pygame.quit()
</code></pre>
<p>When I run it I get the error:</p>
<pre><code>ImportError: cannot import name 'Actor' from 'pygame.sprite' (/Library/Python/3.9/lib/python/site-packages/pygame/sprite.py)
</code></pre>
<p>Do I need my own image? I also have an images/alien.png in the same folder as the code.</p>
<p>How to solve this?</p>
|
<python><pygame><pgzero>
|
2024-01-10 17:06:04
| 1
| 40,016
|
Miguel Moura
|
77,795,119
| 10,425,150
|
How to import locally installed library?
|
<p>I've installed my library using the following command:</p>
<pre><code>pip install .
</code></pre>
<p><strong>Here is the directory structure:</strong></p>
<pre><code>└───module1
├───__init__.py
└───mod_1.py
└───module2
├───__init__.py
└───mod_2.py
__init__.py
setup.py
</code></pre>
<p><strong>inside setup.py</strong></p>
<pre><code>from setuptools import setup,find_packages
setup(
name = "my_lib",
version="1.0.0",
packages=find_packages(),
python_requires='>=3.7',
include_package_data=True,
zip_safe=False)
</code></pre>
<p><strong>Installed in:</strong></p>
<pre><code>.\Python\Python312\Lib\site-packages
└───module1
├───____pycache__
├───__init__.py
└───mod_1.py
└───module2
├───____pycache__
├───__init__.py
└───mod_2.py
└───my_lib-1.0.0.dist-info
├───direct_url.json
├───INSTALLER
├───METADATA
├───RECORD
├───REQUESTED
├───top_level.txt
└───WHEEL
</code></pre>
<p><strong>Expected behavior/import:</strong></p>
<pre><code>from my_lib import mod_1, mod_2
</code></pre>
<p><strong>Current error is:</strong></p>
<pre><code>ModuleNotFoundError: No module named 'my_lib'
</code></pre>
<p><strong>Work around:</strong></p>
<pre><code>import mod_1, mod_2
</code></pre>
<p><strong>Need help with:</strong></p>
<p>What do I need to change in my structure/<code>setup.py</code> in order to import <code>my_lib</code> as follows?</p>
<pre><code>from my_lib import mod_1, mod_2
</code></pre>
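For the record, `pip install .` only installs the packages that `find_packages()` discovers — here `module1` and `module2` — and the `name="my_lib"` in `setup.py` is just the distribution name, so no `my_lib` package ever exists on disk. One layout that would make the desired import work (a sketch; names assumed from the question) is to nest everything under a `my_lib/` directory:

```
project/
├── my_lib/
│   ├── __init__.py        # can re-export, e.g.: from .module1 import mod_1
│   ├── module1/
│   │   ├── __init__.py
│   │   └── mod_1.py
│   └── module2/
│       ├── __init__.py
│       └── mod_2.py
└── setup.py               # unchanged: find_packages() now finds my_lib
```

Then `from my_lib.module1 import mod_1` works directly, and `from my_lib import mod_1, mod_2` works once the top-level `__init__.py` re-exports those modules.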
|
<python><python-3.x><pip><python-import><setuptools>
|
2024-01-10 17:00:27
| 1
| 1,051
|
Gооd_Mаn
|
77,795,013
| 2,386,113
|
How to stack arrays and compute inverse using Numba?
|
<p>I need to compute some vectors, stack them vertically in an array, and finally calculate the inverse of the stacked vectors. I am able to do that using numpy but, for better performance (and to assign it to a separate thread), I want to do the calculations using Numba.</p>
<p>The code below works without the <code>@njit</code> decorator but takes ages with large values of <code>nRows x nCols x nFrames</code>, such as 151 x 151 x 24.</p>
<pre><code>import numpy as np
from numba import njit
@njit
def compute_inverse_numba():
nRows = 15
nCols = 15
nFrames = 2
result_list = []
for frame in range(nFrames):
for row in range(nRows):
for col in range(nCols):
array_of_args = np.random.normal(3, 2.5, size=(10, 3)) #dummy array, DIFFERENT LENGTHS
vectors_list = []
for arg in array_of_args:
vec = np.zeros((4, 10), dtype=np.float64)
vec[0, 0] = arg[0]
vec[0, 1] = arg[1]
vec[0, 2] = arg[2]
vec[0, 3] = arg[0] * arg[2]
vec[0, 4] = arg[0] * arg[2]
vec[0, 5] = arg[1]
vec[0, 6] = arg[0]
vec[0, 7] = arg[1]
vec[0, 8] = arg[2]
vec[0, 9] = 2.0
vec[1, 0] = arg[0]
vec[1, 3] = 2.0 * arg[1]
vec[1, 4] = arg[2]
vec[1, 6] = 1.0
vec[2, 1] = arg[1]
vec[2, 3] = arg[0]
vec[2, 5] = 2.0 * arg[2]
vec[2, 7] = 1.0
vec[3, 2] = arg[2]
vec[3, 4] = arg[0]
vec[3, 5] = 3.0 * arg[1]
vec[3, 8] = 1.0
vectors_list.append(vec)
# vertically stack the results
vectors_list = np.vstack(vectors_list)
# compute inverse matrix
inv = np.linalg.pinv(vectors_list)
result_list.append(inv)
return result_list
###########----main()
compute_inverse_numba()
print()
</code></pre>
<p><strong>Problem:</strong> With the <code>@njit</code> decorator, I keep getting an exception for the stacking-related step:</p>
<pre class="lang-none prettyprint-override"><code>TypingError: No implementation of function Function(<function vstack at 0x0000028EFFBB8180>) found for signature:
vstack(list(array(float64, 2d, C))<iv=None>)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload of function 'vstack': File: numba\np\arrayobj.py: Line 6005.
With argument(s): '(list(array(float64, 2d, C))<iv=None>)':
No match.
</code></pre>
<p>I tried different options but nothing really worked out.</p>
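For reference, numba's `np.vstack` overload does not accept a reflected list of 2-D arrays, which is exactly what the error says. A numba-friendly pattern (and faster even in plain numpy) is to preallocate the stacked array and write each 4x10 block into a slice; a sketch, abbreviated to a few of the assignments from the question:

```python
import numpy as np

def build_stacked(array_of_args):
    n = array_of_args.shape[0]
    # Preallocate the final (4*n, 10) array instead of vstack-ing a list.
    stacked = np.zeros((4 * n, 10), dtype=np.float64)
    for k in range(n):
        arg = array_of_args[k]
        block = stacked[4 * k: 4 * k + 4]  # a view: writes land in `stacked`
        block[0, 0] = arg[0]
        block[0, 1] = arg[1]
        block[0, 2] = arg[2]
        # ... fill the remaining entries exactly as in the question ...
        block[1, 0] = arg[0]
        block[2, 1] = arg[1]
        block[3, 2] = arg[2]
    return stacked

args = np.random.normal(3, 2.5, size=(10, 3))
out = build_stacked(args)
print(out.shape)  # (40, 10)
inv = np.linalg.pinv(out)  # pinv works on the preallocated array as before
```

`np.zeros`, slicing, and `np.linalg.pinv` are all supported inside `@njit`, so the same function body should compile once the list/`vstack` step is gone.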
|
<python><numba>
|
2024-01-10 16:44:42
| 2
| 5,777
|
skm
|
77,794,842
| 4,564,080
|
Spinner doesn't show and then everything updates once the task in the spinner completes
|
<p>I am creating a chat app using Streamlit that will be connected to an LLM to respond to the user.</p>
<p>While the LLM is generating a response, I would like a spinner to show until the response can be printed.</p>
<p>Currently, I am mocking the LLM's response generation with a simple <code>time.sleep(5)</code>. However, the spinner doesn't show for these 5 seconds, and then the UI updates with the response.</p>
<p>The Streamlit app:</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st
from sensei.ui import text, utils
st.chat_input("Your response...", key="disabled_chat_input", disabled=True)
if "messages" not in st.session_state:
st.session_state["messages"] = [
{"name": "Sensei", "avatar": "🥷", "content": message, "translated": True, "printed": False}
for message in text.ONBOARDING_START_MESSAGES[st.session_state.source_language]
]
for message in st.session_state.messages:
with st.chat_message(name=message["name"], avatar=message["avatar"]):
if message["name"] == "Sensei" and not message["printed"]:
utils.stream_message(message=message["content"])
message["printed"] = True
else:
st.markdown(body=message["content"])
if st.session_state.messages[-1]["name"] == "user":
with st.spinner("Thinking..."):
sensei_response = utils.temp_get_response()
st.session_state.messages.append(
{"name": "Sensei", "avatar": "🥷", "content": sensei_response, "translated": True, "printed": False}
)
st.rerun()
if user_response := st.chat_input(placeholder="Your response...", key="enabled_chat_input"):
st.session_state.messages.append({"name": "user", "avatar": "user", "content": user_response})
st.rerun()
</code></pre>
<p>The <code>temp_get_response</code> function:</p>
<pre class="lang-py prettyprint-override"><code>def temp_get_response() -> str:
"""Get a response from the user."""
time.sleep(5)
return "Well isn't that just wonderful!"
</code></pre>
<p>The <code>stream_message</code> function (this isn't the issue as the behaviour is the same if I write normally without streaming):</p>
<pre class="lang-py prettyprint-override"><code>def stream_message(message: str) -> None:
"""Stream a message to the chat."""
message_placeholder = st.empty()
full_response = ""
for chunk in message.split():
full_response += chunk + " "
time.sleep(0.1)
message_placeholder.markdown(body=full_response + "▌")
message_placeholder.markdown(body=full_response)
</code></pre>
|
<python><streamlit>
|
2024-01-10 16:19:02
| 1
| 4,635
|
KOB
|
77,794,807
| 7,236,077
|
Using pulumi python sdk to retrieve resource's attributes
|
<p>I feel stupid. I've been trying to figure this out for a few hours - totally clueless.</p>
<p>I've set up several services through pulumi on aws in a successful manner.</p>
<p>Each of these services has an id (s3 buckets), or an endpoint (rds database) that other applications will require.</p>
<p>While I can manually set those, I would like to fetch those through the python sdk.</p>
<p>I would assume that is possible, but I just can't seem to do it. I can retrieve them by running:</p>
<pre><code> result = subprocess.run(["pulumi", "stack", "output", "--json", "--stack", stack_name], capture_output=True, text=True)
</code></pre>
<p>But not by using the pulumi python library.</p>
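<p>(For reference, the <code>pulumi</code> pip package ships an Automation API — <code>pulumi.automation</code> — that can read stack outputs in-process instead of shelling out. A minimal sketch; the helper name and the example stack/directory are placeholders, and it assumes the stack lives in a local project directory:)</p>
<pre class="lang-py prettyprint-override"><code>from pulumi import automation as auto

def get_stack_outputs(stack_name: str, work_dir: str) -> dict:
    """In-process equivalent of `pulumi stack output --json`."""
    stack = auto.select_stack(stack_name=stack_name, work_dir=work_dir)
    # stack.outputs() returns {name: OutputValue}; .value is the plain value
    return {name: out.value for name, out in stack.outputs().items()}

# e.g. outputs = get_stack_outputs("dev", "/path/to/project")
#      bucket_id = outputs["bucket_id"]
</code></pre>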
<p>Can anyone provide a simple functional example of how to do it?</p>
<p>ty</p>
|
<python><amazon-web-services><pulumi>
|
2024-01-10 16:14:54
| 1
| 2,498
|
epattaro
|
77,794,794
| 5,734,793
|
pycharm TypeError: Additional arguments should be named <dialectname>_<argument>, got 'autoload'
|
<p>I am new to Python. My program had been working, but now I am getting this error:</p>
<pre><code>File "C:\Repos\caers-api-to-cedars\venv\Lib\site-packages\sqlalchemy\sql\base.py", line 599, in _validate_dialect_kwargs
raise TypeError(
TypeError: Additional arguments should be named <dialectname>_<argument>, got 'autoload'
</code></pre>
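<p>(My reading of this traceback — the reflecting code itself isn't shown in the question — is that it comes from SQLAlchemy 2.x, where the old <code>Table(..., autoload=True)</code> reflection flag was removed in favor of <code>autoload_with</code>. A sketch of the 2.x-style call against a throwaway in-memory SQLite database:)</p>
<pre class="lang-py prettyprint-override"><code>from sqlalchemy import MetaData, Table, create_engine, text

engine = create_engine("sqlite://")   # throwaway in-memory DB for the demo
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"))

metadata = MetaData()
# SQLAlchemy 2.x: Table(..., autoload=True) was removed; reflection now takes
# the engine (or a connection) via autoload_with instead.
users = Table("users", metadata, autoload_with=engine)
print([c.name for c in users.columns])   # -> ['id', 'name']
</code></pre>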
<p>My packages are:</p>
<pre><code>certifi==2022.5.18.1
pytz==2022.1
requests>=2.31.0
click==8.1.3
cx-oracle==8.3.0
sqlalchemy==2.0.25
</code></pre>
<p>I have tried older SQLAlchemy versions and get this error:</p>
<pre><code>sqlalchemy.exc.DatabaseError: (cx_Oracle.DatabaseError) DPI-1047: Cannot locate a 64-bit Oracle Client library: "C:\app\client\xxxx\product\12.2.0\client_2\oci.dll is not the correct architecture". See https://cx-oracle.readthedocs.io/en/latest/user_guide/installation.html for help
(Background on this error at: https://sqlalche.me/e/14/4xp6)
</code></pre>
<p>I am at a loss on what to change or add.</p>
<p>I'm not sure if this is enough to recreate it. I don't know where the failure is taking place. This is the main part of my code:</p>
<pre><code>import click
import cx_Oracle
import requests
import sqlalchemy
from sqlalchemy import null, select, create_engine
from sqlalchemy.orm import scoped_session, sessionmaker
from sqlalchemy.ext.declarative import declarative_base
</code></pre>
<p>main class</p>
<pre><code>class CaersToCedarsDB:
def __init__(self, env="DEV"):
self.Base = None
self.engine = None
self.db_session = None
self.logger = logger
self.connection = None
self.cursor = None
self.token: str = ""
self.token_type = None
self.db_view_list: list = []
self.features_dict: dict = {}
self.jenkins_pass = "test"
self.env = env
instant_client_dir = r"O:\ORANT\instantclient_21_3"
</code></pre>
<p>connect to database</p>
<pre><code>def create_db_connection(self, env, user, password):
if env == "PROD":
db = "PRD1"
else:
db = "BETA"
self.engine = create_engine(f"oracle+cx_oracle://{user}:{password}@{self.database_server}")
self.db_session = scoped_session(sessionmaker(autocommit=False,
autoflush=False,
bind=self.engine))
self.Base = declarative_base()
self.Base.query = self.db_session.query_property()
</code></pre>
<p>program start</p>
<pre><code>def start(self, user, password, caers_user, caers_pass):
self.create_db_connection(env=self.env, user=user, password=password)
self.init_db()
def run_data_importer(env: str, db_pass: str, db_user: str, caers_pass: str, caers_user: str):
updater = CaersToCedarsDB(env=f"{env}")
updater.start(user=db_user, password=db_pass, caers_pass=caers_pass, caers_user=caers_user)
if __name__ == "__main__":
run_data_importer()
</code></pre>
|
<python><sqlalchemy>
|
2024-01-10 16:12:52
| 2
| 975
|
Ethel Patrick
|
77,794,741
| 2,382,483
|
Control how uncaught exceptions are logged on exit in Python?
|
<p>I've got the following python program:</p>
<pre class="lang-py prettyprint-override"><code>import logging
if __name__ == "__main__":
# configure logger to write logs in json format, and put exception traceback on "exception" field
logger = logging.getLogger(__name__)
logger.info("worker started")
try:
# do some worker stuff
logger.info("worker finished successfully")
except Exception:
logger.exception("unknown error occurred")
# what to do here?
</code></pre>
<p>I want to write all logs in JSON format so I can easily search through them in CloudWatch, so I am configuring my logger with a custom formatter. When an exception is raised, I want to control how it is logged as well, so that it has all of the same fields as the other logs and the traceback doesn't spill across CloudWatch as a bunch of separate events. However, I also need the program to exit with an error status when this happens.</p>
<p>I'm aware of a few choices. The following logs correctly, but since the exception is handled it exits as though all went well:</p>
<pre class="lang-py prettyprint-override"><code>except Exception:
logger.exception("unknown error occurred")
</code></pre>
<p>This next one exits correctly and will log the exception once the way I want, but then a second time in the ugly way, where it doesn't follow my formatting and comes out as multiple lines in ECS:</p>
<pre class="lang-py prettyprint-override"><code>except Exception:
logger.exception("unknown error occurred")
raise
</code></pre>
<p>This gets the closest, but I have seen the use of <code>sys.exit(1)</code> discouraged, and I don't love the indirection of another exception. For example, debugging in VS Code will handily take you to the last line of the traceback if your program exits on an exception, but this is ruined if you use <code>sys.exit(1)</code> this way (you'll always land on the <code>sys.exit()</code> line instead of the location of the real exception):</p>
<pre class="lang-py prettyprint-override"><code>except Exception:
logger.exception("unknown error occurred")
sys.exit(1)
</code></pre>
<p>Is there a "right" way to just directly exit on the <em>real</em> exception while also <em>only</em> logging the traceback how I want though my configured logger?</p>
|
<python>
|
2024-01-10 16:03:43
| 0
| 3,557
|
Rob Allsopp
|
77,794,637
| 8,521,346
|
LlamaIndex Not Checking All Documents When Queried
|
<p>I'm loading a large amount of documents into LlamaIndex and I am able to ask questions about each of these documents individually, but when it comes to asking questions about the documents overall, there are gaps in its knowledge.</p>
<p>An example of the document in context of real estate would be.</p>
<pre><code>Document(
id_=data['full_address'],
metadata={
"address": data['address'],
"price": data['price'],
"sqft": data['sqft'],
},
text=data['tax_history']
)
</code></pre>
<p>I am able to ask questions like "what are the tax records for address 123 foo street" and get reliable responses, but I am unable to ask questions like "what is the median house price on foo street" I begin to notice gaps in the data. Say there are 15 houses on that street, it would only grab 3 or so to perform the calculation.</p>
<p>My index and embedding settings are below.</p>
<pre><code>llm = OpenAI(temperature=.5, model="gpt-4")
embed_model = OpenAIEmbedding()
prompt_helper = PromptHelper(
context_window=4096,
chunk_overlap_ratio=0.1,
chunk_size_limit=None,
)
service_context = ServiceContext.from_defaults(llm=llm, chunk_size=1028, prompt_helper=prompt_helper, embed_model=embed_model)
vector_store = PineconeVectorStore(
index_name='real-estate',
environment='us-east1-gcp',
)
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(
[],
storage_context=storage_context,
service_context=service_context,
)
</code></pre>
<p>I feel like I may be using the wrong tool for the job here. How do I make LlamaIndex query through all (or more of) the documents to give an answer? or is there a better tool for this job?</p>
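<p>(This looks like the expected behaviour of top-k vector retrieval: the query engine only passes its <code>similarity_top_k</code> nearest chunks to the LLM, so any aggregate over more documents than that is computed from a partial sample. Raising the limit, e.g. <code>index.as_query_engine(similarity_top_k=50)</code>, widens the sample, but for exact aggregates it is usually more reliable to compute them outside the LLM. A toy stdlib illustration of the failure mode — the prices are invented:)</p>
<pre class="lang-py prettyprint-override"><code>from statistics import median

# Toy stand-in for vector retrieval: the LLM only ever sees the k retrieved
# documents, so aggregates over more than k documents use a partial sample.
prices = [300_000 + 10_000 * i for i in range(15)]   # 15 houses on "foo street"

def retrieve_top_k(docs, k):
    """Pretend these are the k most similar documents for the query."""
    return docs[:k]

true_median = median(prices)                          # over all 15 documents
sample_median = median(retrieve_top_k(prices, k=3))   # what a top-k=3 engine sees

print(true_median, sample_median)   # 370000 310000 -- they disagree
</code></pre>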
|
<python>
|
2024-01-10 15:49:31
| 1
| 2,198
|
Bigbob556677
|
77,794,386
| 1,473,517
|
Compute the max sum circular area
|
<p>I have an n-by-n matrix of integers and I want to find the circle, centered at the top-left corner, whose enclosed grid points have the maximum sum. Consider the following grid with a circle imposed on it.</p>
<p><a href="https://i.sstatic.net/Gp0WV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gp0WV.png" alt="enter image description here" /></a></p>
<p>This is made with:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.patches import Circle
import numpy as np
plt.yticks(np.arange(0, 10.01, 1))
plt.xticks(np.arange(0, 10.01, 1))
plt.xlim(0,9)
plt.ylim(0,9)
plt.gca().invert_yaxis()
# Set aspect ratio to be equal
plt.gca().set_aspect('equal', adjustable='box')
plt.grid()
np.random.seed(40)
square = np.empty((10, 10), dtype=np.int_)
for x in np.arange(0, 10, 1):
for y in np.arange(0, 10, 1):
plt.scatter(x, y, color='blue', s=2, zorder=2, clip_on=False)
for x in np.arange(0, 10, 1):
for y in np.arange(0, 10, 1):
value = np.random.randint(-3, 4)
square[int(x), int(y)] = value
plt.text(x-0.2, y-0.2, str(value), ha='center', va='center', fontsize=8, color='black')
r1 = 3
circle1 = Circle((0, 0), r1, color="blue", alpha=0.5, ec='k', lw=1)
plt.gca().add_patch(circle1)
</code></pre>
<p>In this case the matrix is:</p>
<pre><code>[[ 3 0 2 -3 -3 -1 -2 1 -1 0]
[-1 0 0 0 -2 -3 -2 2 -2 -3]
[ 1 3 3 1 1 -3 -1 -1 3 0]
[ 0 0 -2 0 2 1 2 2 -1 -1]
[-1 0 3 1 1 3 -2 0 0 -1]
[-1 -1 1 2 -3 -2 1 -2 0 0]
[-3 2 2 3 -2 0 -1 -1 3 -2]
[-2 0 2 1 2 2 1 -1 -3 -3]
[-2 -2 1 -3 -2 -1 3 2 3 -3]
[ 2 3 1 -1 0 1 -1 3 -2 -1]]
</code></pre>
<p>When the circle has radius 3, there are 11 points in the grid within the circle. As the radius increases, more and more points fall into the circle.</p>
<p>I am looking for a fast way to find a radius which maximizes the sum of the integers of grid points within it. The radius will not be unique so any one that maximizes the sum is ok. I will ultimately want to do this with much larger matrices.</p>
<p>This <a href="https://stackoverflow.com/questions/77707524/how-to-remove-redundancy-when-computing-sums-for-many-rings">question</a> is related but I am not sure how to extend it to my new question.</p>
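<p>(Only radii at which a new grid point enters the circle can change the sum, so it suffices to sort all points by distance from the origin and take a running sum of their values in that order; the best radius is wherever that running sum peaks. A minimal NumPy sketch — the function name is mine:)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def best_radius(grid: np.ndarray):
    """Radius (and sum) of the circle at (0, 0) maximizing the enclosed sum."""
    n, m = grid.shape
    rows, cols = np.indices((n, m))
    dist = np.hypot(rows, cols).ravel()
    vals = grid.ravel().astype(np.int64)
    order = np.argsort(dist)
    d_sorted = dist[order]
    csum = np.cumsum(vals[order])     # enclosed sum after each point enters
    # A radius always includes *all* points at equal distance, so only the
    # last point of each tie group is a valid candidate radius.
    last = np.r_[d_sorted[1:] != d_sorted[:-1], True]
    best = np.argmax(csum[last])
    return d_sorted[last][best], csum[last][best]
</code></pre>
<p>(This is O(n² log n) for an n-by-n grid, dominated by the sort; points at identical distance are treated as entering the circle together.)</p>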
|
<python><algorithm><numpy><performance><optimization>
|
2024-01-10 15:10:14
| 4
| 21,513
|
Simd
|
77,793,991
| 2,386,113
|
Numba Exception: Cannot determine Numba type of <class 'type'>
|
<p>I want to convert a function to Numba for performance reasons. My <strong>MWE</strong> is below. If I remove the <code>@njit</code> decorator, the code works, but with <code>@njit</code> I get a runtime exception. The exception most likely comes from the <code>dtype=object</code> used to define <code>result_arr</code>, but I tried <code>dtype=float64</code> as well and get a similar exception.</p>
<pre><code>import numpy as np
from numba import njit
from timeit import timeit
######-----------Required NUMBA function----------###
#@njit #<----without this, the code works
def required_numba_function():
nRows = 151
nCols = 151
nFrames = 24
result_arr = np.empty((151* 151 * 24), dtype=object)
for frame in range(nFrames):
for row in range(nRows):
for col in range(nCols):
size_rows = np.random.randint(8, 15)
size_cols = np.random.randint(2, 6)
args = np.random.normal(3, 2.5, size=(size_rows, size_cols)) # size is random
flat_idx = frame * (nRows * nCols) + (row * nCols + col)
result_arr[flat_idx] = args
return result_arr
######------------------main()-------##################
if __name__ == "__main__":
required_numba_function()
print()
</code></pre>
<p>How can I resolve the Numba exception?</p>
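<p>(<code>dtype=object</code> arrays are not supported in Numba's nopython mode at all, and even a float array fails here because each slot receives a 2-D block of varying shape. One Numba-compatible layout — a sketch of the idea, not the only option — is to pre-allocate a single padded float array plus a parallel array recording each cell's true shape, so the loop only writes into plain typed NumPy storage. Shown below without the decorator so it runs as ordinary NumPy; the intent is that adding <code>@njit</code> back should compile, since these NumPy calls are supported in nopython mode. The function name and the small default sizes are mine:)</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

MAX_ROWS, MAX_COLS = 15, 6   # upper bounds of the random sizes in the question

def padded_layout(n_frames=2, n_rows=4, n_cols=4, seed=0):
    """Store variable-sized blocks in one padded float array (Numba-friendly).

    data[i] holds cell i's block, zero-padded; shapes[i] records its true
    (rows, cols) so the block can be sliced back out afterwards.
    """
    np.random.seed(seed)
    n = n_frames * n_rows * n_cols
    data = np.zeros((n, MAX_ROWS, MAX_COLS))
    shapes = np.zeros((n, 2), dtype=np.int64)
    for frame in range(n_frames):
        for row in range(n_rows):
            for col in range(n_cols):
                r = np.random.randint(8, 15)
                c = np.random.randint(2, 6)
                idx = frame * (n_rows * n_cols) + row * n_cols + col
                data[idx, :r, :c] = np.random.normal(3, 2.5, size=(r, c))
                shapes[idx, 0] = r
                shapes[idx, 1] = c
    return data, shapes

data, shapes = padded_layout()
r, c = shapes[0]
block = data[0, :r, :c]   # recover cell 0's true block
</code></pre>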
|
<python><numba>
|
2024-01-10 14:09:28
| 1
| 5,777
|
skm
|