QuestionId
int64 74.8M
79.8M
| UserId
int64 56
29.4M
| QuestionTitle
stringlengths 15
150
| QuestionBody
stringlengths 40
40.3k
| Tags
stringlengths 8
101
| CreationDate
stringdate 2022-12-10 09:42:47
2025-11-01 19:08:18
| AnswerCount
int64 0
44
| UserExpertiseLevel
int64 301
888k
| UserDisplayName
stringlengths 3
30
⌀ |
|---|---|---|---|---|---|---|---|---|
76,474,098
| 15,452,168
|
Grouping and aggregating data in pandas DataFrame
|
<p>I have a Pandas DataFrame containing transaction data, and I want to perform grouping and aggregation operations to analyze the data at different levels. I have tried using the groupby and agg functions, but I'm facing some difficulties in achieving the desired results.</p>
<p>Here is an example of the DataFrame structure and data:</p>
<pre><code>import pandas as pd
data = {
'Product': ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C', 'A', 'D'],
'Transaction_ID': ['id1', 'id1', 'id2', 'id3', 'id3', 'id3', 'id4', 'id4', 'id4', 'id5','id6'],
'Size': ['S', 'M', 'L', 'S', 'M', 'L', 'S', 'S', 'M', 'S','M'],
'Demand_Qty': [5, 3, 2, 2, 1,2, 1, 4, 1, 1,1]
}
df1 = pd.DataFrame(data)
</code></pre>
<p>I want to perform the following operations:</p>
<ol>
<li>Check if there are multiple sizes in each transaction</li>
<li>Check if there are transactions with the same size but multiple quantities</li>
<li>total transaction count</li>
</ol>
<p>I tried using the groupby and agg functions, but I'm not getting the desired output. Here's the code I have tried:</p>
<pre><code>product_order_grouped = df1.groupby(['Product', 'Transaction_ID']).agg(
multiple_sizes_in_transaction=('Size', lambda s: s.nunique() > 1),
same_sizes_in_transaction=('Size', lambda s: s.nunique() == 1 and df1.loc[s.index, 'Demand_Qty'] > 1)
).reset_index()
product_grouped = product_order_grouped.groupby('Product').agg(
Total_Transactions=('Transaction_ID', 'count'),
Transactions_with_Multiple_Sizes=('multiple_sizes_in_transaction', 'sum'),
Transactions_with_Same_Size_and_Multiple_Quantities=('same_sizes_in_transaction', 'sum'),
).reset_index()
print(product_grouped)
</code></pre>
<p>The output I'm getting is not as expected. Could someone guide me on how to correctly perform these grouping and aggregation operations on the DataFrame to get the desired results?</p>
<hr />
<h2>Current output</h2>
<pre><code> Product Total_Transactions Transactions_with_Multiple_Sizes Transactions_with_Same_Size_and_Multiple_Quantities
0 A 3 1 1
1 B 2 1 0
2 C 1 1 0
3 D 1 0 0
</code></pre>
<hr />
<h2>Expected output</h2>
<pre><code> Product Total_Transactions Transactions_with_Multiple_Sizes Transactions_with_Same_Size_and_Multiple_Quantities
0 A 3 1 2
1 B 2 1 1
2 C 1 1 1
3 D 1 0 0
</code></pre>
<p><strong>logic to get the desired results Transactions_with_Same_Size_and_Multiple_Quantities</strong></p>
<pre><code> Product Transaction_ID Size Demand_Qty
0 A id1 S 5
1 A id1 M 3
2 A id2 L 2
3 B id3 S 2
4 B id3 M 1
5 B id3 L 2
6 B id4 S 1
7 C id4 S 4
8 C id4 M 1
9 A id5 S 1
10 D id6 M 1
</code></pre>
<p><strong>If we just look at Product 'A'</strong></p>
<pre><code> Product Transaction_ID Size Demand_Qty
0 A id1 S 5
1 A id1 M 3
2 A id2 L 2
9 A id5 S 1
</code></pre>
<p><em>Then id1 & id2 are 2 transactions where we have demand qty more than 1 so the value should be 2</em></p>
<p><em><strong>Similarly, for Products B & C it should be 1 as only 1 Transaction_ID has more than 1 value in demand qty and for & D it is 0</strong></em></p>
<p>I would greatly appreciate any guidance or suggestions on how to correctly perform these grouping and aggregation operations on the DataFrame to obtain the desired results.</p>
<p><strong>P.S. - I am also open if someone can suggest me to look at any other metric that can share better insights on this data || because for example again if we look at Product A there are actually 3 instances where demand is more than 1 so I am not sure if my metrics are good enough to analyse the data</strong></p>
<hr />
<p>Adding more test Data</p>
<pre><code>import pandas as pd
data = {
'Transaction_ID': [1357778791, 1357779263, 1357779570, 1357779583, 1357779893, 1357779893, 1357782347, 1357782681, 1357782681, 1357783510, 1357784048, 1357784401, 1357784564, 1357784564, 1357784670, 1357784816, 1357784816, 1357785798, 1357786529, 1357787012, 1357787208, 1357787837, 1357788325, 1357788326, 1357788452, 1357788542, 1357788585, 1357788585, 1357789168, 1357789633, 1357789633, 1357790352, 1357790366, 1357790379, 1357790730, 1357792699, 1357794652, 1357795141, 1357795141, 1357795147, 1357795805, 1357796833, 1357797368, 1357797714, 1357797789, 1357798619, 1357799260, 1357799933, 1357802692, 1357802692, 1357802771, 1357802818, 1357803663, 1357804255, 1357804868, 1357805887, 1357805941, 1357807095, 1357807122, 1357807122, 1357807897, 1357808324, 1357808324],
'Product': [2199692] * 63,
'Size': [48, 46, 36, 44, 44, 42, 36, 38, 36, 48, 36, 36, 44, 42, 38, 40, 38, 46, 36, 36, 40, 40, 36, 44, 48, 42, 44, 42, 42, 46, 44, 36, 48, 40, 36, 48, 38, 46, 44, 38, 46, 40, 36, 36, 36, 36, 44, 48, 42, 44, 38, 38, 38, 48, 48, 46, 40, 38, 44, 40, 40, 40, 38],
'Demand_Qty': [1] * 63
}
df1 = pd.DataFrame(data)
# Print the DataFrame
print(df1)
</code></pre>
|
<python><pandas><dataframe><group-by><logic>
|
2023-06-14 13:25:41
| 1
| 570
|
sdave
|
76,473,800
| 15,283,041
|
"ImportError: cannot import name 'chi2' from 'scipy.stats' (unknown location)" when trying to import biogeme
|
<p>When I check the installation, it looks fine. Not sure what can cause this error. I don't have an environment set up.</p>
<p>Using pip install biogeme, it says requirement fulfilled.
When I import biogeme, I get this exact error:</p>
<pre><code>"ImportError: cannot import name 'chi2' from 'scipy.stats' (unknown location)"
</code></pre>
<p>What should I do?</p>
|
<python><import><error-handling><python-import><importerror>
|
2023-06-14 12:52:33
| 0
| 493
|
Victor Nielsen
|
76,473,682
| 1,485,926
|
pylint unused-variable error in global function which is used in other module
|
<p>I run pylint (2.15.10) this way:</p>
<pre><code>pylint --allow-global-unused-variables=no src/
</code></pre>
<p>I get errors like this:</p>
<pre><code>************* Module builder.documentation
src\builder\documentation.py:582:0: W0612: Unused variable 'build_documentation' (unused-variable)
</code></pre>
<p>I have in that line this:</p>
<pre><code>def build_documentation(folder: Path):
...
</code></pre>
<p>It's correct that <code>build_documentation()</code> function is not used in documentation.py file. However, other files in my project are using it, eg: in src\builder\builder.py file I have:</p>
<pre><code>from documentation import build_documentation
...
build_documentation(folder)
</code></pre>
|
<python><pylint>
|
2023-06-14 12:39:19
| 1
| 12,442
|
fgalan
|
76,473,668
| 891,959
|
How do I get fractional seconds in func.now() with SQLAlchemy ORM and SQLite
|
<p>In short: when I add entries to a table in sqlite with the primary key being a default timestamp, I get a "UNIQUE constraint failed" error because the timestamps have no fractional seconds. How do I get SQLalchemy to use fractional seconds in func.now()l</p>
<p>In long:
Given the following code:</p>
<pre><code>import datetime as dt
import time
from sqlalchemy import create_engine
from sqlalchemy import func, Column, DateTime, Time
from sqlalchemy.engine.base import Connection, Engine
from sqlalchemy.orm import Session
from sqlalchemy.orm import DeclarativeBase, Mapped, MappedAsDataclass, mapped_column
class Base(DeclarativeBase):
"""Base class for the database."""
class Table1(Base, MappedAsDataclass):
"""Table."""
__tablename__ = "dummy table"
timestamp: Mapped[dt.datetime] = mapped_column(primary_key=True,
autoincrement=False,
server_default=func.now())
data: Mapped[int] = mapped_column(nullable = True)
if __name__ == "__main__":
engine: Engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)
record1 = Table1(data=1)
record2 = Table1(data=2)
with Session(engine) as session:
session.add(record1)
session.commit()
# time.sleep(2)
session.add(record2)
session.commit()
</code></pre>
<p>And based on the <a href="https://docs.sqlalchemy.org/en/20/dialects/sqlite.html#sqlalchemy.dialects.sqlite.DATETIME" rel="nofollow noreferrer">SQLAlchemy SQLite Datetime doc</a>:</p>
<p>The default string storage format is:</p>
<pre><code>"%(year)04d-%(month)02d-%(day)02d %(hour)02d:%(minute)02d:%(second)02d.%(microsecond)06d"
</code></pre>
<p>e.g.:</p>
<pre><code>2021-03-15 12:05:57.105542
</code></pre>
<p>I would think I would get fractional seconds. Instead, the output from the code above with the <code>time.sleep(2)</code> not commented out is:</p>
<pre><code>2023-06-14 12:32:17 1
2023-06-14 12:32:19 2
</code></pre>
<p>with no fractional seconds.</p>
<p>On that same page above, I see there is a way to customize the format, but I don't quite see how to get the new DATETIME object into the sqlite dialect.</p>
<p>So: how do I get func.now() to return a time with fractional seconds for a server side default timestamp?</p>
|
<python><sqlite><sqlalchemy><orm><python-datetime>
|
2023-06-14 12:37:59
| 1
| 321
|
InformationEntropy
|
76,473,428
| 13,038,144
|
Obtaining paths to a specific value of a dictionary, expressed as nested dictionary
|
<p>I want to write a function that, given as input a nested dictionary like the following:</p>
<h2>Input</h2>
<pre class="lang-py prettyprint-override"><code>d = {'a': {'b': {'c': 'here', 'd': 'here'}}, 'e': {'f': 'here'}}
</code></pre>
<p>unpacks the dictionary until it finds the word <code>"here"</code>. When it does, I want it to save a nested dictionary that represents the path followed to reach that point. I want as output a list of all the paths that lead to the word <code>"here"</code> So, in this example the expected output is a list of three dictionaries:</p>
<h2>Expected output</h2>
<pre class="lang-py prettyprint-override"><code>[
{'a': {'b': 'c'}},
{'a': {'b': 'd'}}
{'e': 'f'}
]
</code></pre>
<p>Note that the dictionary could have an arbitrary number of nested levels, this was just a simple example.</p>
<h2>My best attempt</h2>
<p>Currently, I was only able to implement a version of this which outputs a list instead of a nested dictionary.</p>
<pre class="lang-py prettyprint-override"><code>def find_word(dictionary, path=[]):
results = []
for key, value in dictionary.items():
new_path = path + [key]
if isinstance(value, dict):
results.extend(find_word(value, new_path))
elif value == 'here':
results.append(new_path)
return results
# Example usage
d = {'a': {'b': {'c': 'here', 'd': 'here'}}, 'e': {'f': 'here'}}
paths = find_word(d)
for path in paths:
print(path)
</code></pre>
<h4>Output</h4>
<pre class="lang-py prettyprint-override"><code>['a', 'b', 'c']
['a', 'b', 'd']
['e', 'f']
</code></pre>
<p>but for my application I would need a list of nested dicts rather than a list of lists.</p>
|
<python><dictionary><recursion>
|
2023-06-14 12:15:26
| 2
| 458
|
gioarma
|
76,473,361
| 18,018,869
|
url routing to same view sometimes with kwargs, sometimes without
|
<p>I have multiple tools in my app (about 20). Each can be accessed with an object already selected, or without connection to an object. Currently I am utilizing this like:</p>
<pre class="lang-py prettyprint-override"><code>urlpatterns = [
path('tool_one/', views.ToolOneView.as_view(), name='tool-one'),
path('<int:id>/tool_one/', views.ToolOneView.as_view(), name='tool-one'),
# [... 18 more paths ...]
path('tool_twenty/', views.ToolTwentyView.as_view(), name='tool-twenty'),
path('<int:id>/tool_twenty/', views.ToolTwentyView.as_view(), name='tool-twenty')
]
</code></pre>
<p>Now I can link to my tool in two ways:</p>
<pre class="lang-html prettyprint-override"><code><a href="{% url 'tools:tool-one' id=object.id %}">
Option A: Tool one with object attached
</a>
<!--or-->
<a href="{% url 'tools:tool-one' %}">
Option B: Tool one without object attached
</a>
</code></pre>
<p>It would be much easier for me if I could always just use option A but allowing that value for <code>object.id</code> to be <code>None</code>. In the case of <code>object.id = None</code> it should direct the user the the view without the object attached.</p>
<p>With my current approach I dislike the repetitiveness in the <code>urlpatterns</code> and also within the templates.
Example: I have a navigation bar linking to all the tools without an object attached. On the <code>DetailView</code> of my <code>Object</code> I have linking to all the tools with the object attached. It is basically the same snippet that in theory I could reuse but apparently can't because with the latter I have to add <code> id=object.id %}</code></p>
<p>This should be easy to do since the usecase probably occurs very often, so please enlighten me!</p>
|
<python><django><django-views><django-templates><django-urls>
|
2023-06-14 12:05:51
| 1
| 1,976
|
Tarquinius
|
76,473,296
| 1,686,236
|
Python Multiprocessing Too Many Files Open
|
<p>I'm using Python 3.8.10 on Ubuntu, trying to run a function over rows of a pandas dataframe - the function takes the data in the row and uses it to create many more (84 to be exact) rows for a new dataframe. The input dataframe has over 15M rows, so this is a slow process to run serially. To process in parallel, I'm using the following code:</p>
<pre><code>from multiprocessing import Pool, Process, Queue
def expandDates(row):
expd = do stuff
return resExpand.put(expd)
resExpand, results = Queue(), []
procs = []
for row in thisData.itertuples():
p = Process(target=expandDates, args=[row])
p.start()
procs.append(p)
for proc in procs:
results.append(resExpand.get())
proc.join()
proc.close()
</code></pre>
<p>This was giving me an OS Error - Too Many Files Open, so I thought I would process the dataframe in chunks, giving me this:</p>
<pre><code>chunkSize = 500
chunks = math.ceil(len(thisData)/chunkSize)
chunkIndices = list(range(0, chunks*chunkSize, chunkSize))
for (indx, (fm, to)) in enumerate(zip(chunkIndices, chunkIndices[1:])):
# get the chunk
thisChunk = thisData.iloc[fm:to]
# process
procs = []
for row in thisChunk.itertuples():
p = Process(target=expandDates, args=[row])
p.start()
procs.append(p)
for proc in procs:
results.append(resExpand.get())
proc.join()
proc.close()
</code></pre>
<p>As I understand, this should limit the multiprocessing to 500 rows at a time, which should be well under the limit of the number of files that can be opened. What am I missing? I'm not sure how I can even debug this. Thanks.</p>
|
<python><ubuntu><multiprocessing>
|
2023-06-14 11:58:25
| 1
| 2,631
|
Dr. Andrew
|
76,473,293
| 4,821,169
|
Search filtering results via Discogs API in Python
|
<p>I am playing about with the Discogs API for a personal project, and I want a Python workflow where I can search for a string (could be song or artist) and retrieve all the releases/master records where that title occurs, BUT for certain years and those results with no year (if there are any).</p>
<p>So, in other words, I want (a) to pre-filter results by year of release within a certain range, e.g. 1991-1994, INCLUDING those records with no year of release, and (b) output the master/release IDs to a csv so I can iterate/filter them further afterwards by various attributes (country, genre, etc etc). I know how to output the IDs to csv, but I can't do (a), filter initially by years (or, crucially, no years!).</p>
<p>To establish connection and search generally I use the following, which returns a lot of paginated results, and the following prints the first page...</p>
<pre><code>import discogs_client
import time
d = discogs_client.Client('my_user_agent/1.0', user_token='MYTOKENISHERE')
results = d.search('My Special Love Song')
print(len(results))
print(results.page(1))
</code></pre>
<p>That'll give you thousands of pages of results, starting with printing page 1...</p>
<pre><code>[<Master 117350 'LaToya Jackson* - My Special Love'>, <Master 368326 'Charlie Rich - Very Special Love Songs'>, <Release 2315998 'LaToya Jackson* - My Special Love'>, <Release 3104531 'Charlie Rich - Very Special Love Songs'>, <Master 889231 'Percy Sledge - My Special Prayer / Pledging My Love'>, <Release 1434850 'Percy Sledge - My Special Prayer / Pledging My Love'>, <Master 258256 'Extreme (2) - Song For Love'>, <Master 506555 'Special Delivery - Special Delivery'>, <Release 2394297 'LaToya Jackson* - My Special Love'>, <Release 13779043 'LaToya Jackson* - My Special Love'>, <Release 2026440 'Special Delivery - Special Delivery'>, <Release 1826434 'LaToya Jackson* - My Special Love'>, <Release 2960819 'LaToya Jackson* - My Special Love'>, <Release 2495100 'Malik (3) / Special Force (5) - My Love / A Pretty Song For You'>, <Master 878003 'Jim Nabors - A Very Special Love Song'>, <Release 2062115 'Jim Nabors - A Very Special Love Song'>, <Release 3900553 'Charlie Rich - Very Special Love Songs'>, <Release 16073124 'Labi Siffre - Remember My Song'>, <Release 2338491 'Extreme (2) - Song For Love'>, <Release 11331156 'LaToya Jackson* - My Special Love'>, ...
</code></pre>
<p>My questions here are:</p>
<p>1.a) Is there a way to pre-filter the results so that I am only searching for results within a certain range (e.g. all results 1991-1994 AND those with no year)?</p>
<p>1.b) Is there a way to filter as well for those with no year of release? (if that's actually possible)</p>
<ol start="2">
<li>If not, do I need to first save ALL those IDs and then query them individually for their details? That seems potentially very usage heavy as I'll have to iterate over them all and make thousands of queries?</li>
</ol>
<p>Thanks!</p>
|
<python><discogs-api>
|
2023-06-14 11:58:12
| 0
| 1,293
|
the_t_test_1
|
76,473,155
| 11,484,423
|
Sympy.solve can't eliminate intermediate symbols from systems of equations
|
<h3>WHAT I WANT TO DO</h3>
<p>I am building a DIY robot arm. I need to design the length of the links, the joint configuration, and motors size function of my working area and payload.</p>
<p>I decided it would be a fun project to make a program to help me with the task as it would help simulating the reach, and finding the forward and inverse kinematic matrices that control the robot.</p>
<p>I want the python program to be parametric, so i can define how many motors and links are present, how they are connected, and I want the python program to do three things:</p>
<ul>
<li>Plot me the reach with given parameters</li>
<li>Find me the sensitivities of the end effector position in respect to the motor angles</li>
<li>Compile the forward and inverse transformation matricies with given parameters</li>
</ul>
<p>It's very simple to solve manually for two motors and two links, I want to try configurations and generally have fun with shapes and working areas, that's why I'm trying to make a more generic solver. For reference, this is the kind of construction that I want the program to provide the equations for:
<a href="https://i.sstatic.net/vE743.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vE743.jpg" alt="enter image description here" /></a></p>
<h3>WHERE I GOT SO FAR</h3>
<p>I made a python program using the Sympy library to replicate what I do on paper: I define the equations of the mechanical system, individual joints and links and I get a system of equations, then use Sympy to solve the equations, both forward and reverse, scan and plot the position over given input angle range. It demonstrates that the Sympy solver can do what I need, solving a two joint system:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as lib_numpy
# Import sympy library
import sympy as lib_sympy
#Display
import matplotlib.pyplot as plt
#create a symbolic 2D rotation matrix (2x2)
def create_rotation_matrix_2d( theta : lib_sympy.Symbol):
return lib_sympy.Matrix([[lib_sympy.cos(theta), -lib_sympy.sin(theta)], [lib_sympy.sin(theta), lib_sympy.cos(theta)]])
#create a symbolic 2D vector 2x1 vertical x, y
def create_vector_2d( x : lib_sympy.Symbol, y : lib_sympy.Symbol ):
return lib_sympy.Matrix([[x, y]])
#A segment is composed by a joint connected to a link
def create_segment( in_phi : lib_sympy.Symbol, in_theta : lib_sympy.Symbol, in_length : lib_sympy.Symbol ):
#joints
nnn_rotation_matrix = create_rotation_matrix_2d( in_phi +in_theta )
#link from this joint to the next joint to come
nn_link_vector = create_vector_2d( in_length, 0 )
#equation of the link
return nn_link_vector *nnn_rotation_matrix
def system():
#---------------------------------------------------
# WORLD - J1
#---------------------------------------------------
eq_link_w_1 = create_segment( 0, 0, 0 )
#---------------------------------------------------
# J1 - J2
#---------------------------------------------------
n_theta_1 = lib_sympy.Symbol('T1')
n_length_1_2 = lib_sympy.Symbol('L1E')
eq_link_1_2 = create_segment( 0, n_theta_1, n_length_1_2 )
#---------------------------------------------------
# J2 - EE
#---------------------------------------------------
n_theta_2 = lib_sympy.Symbol('T2')
n_length_2_e = lib_sympy.Symbol('L2E')
eq_link_2_e = create_segment( 0, n_theta_1 +n_theta_2, n_length_2_e )
#---------------------------------------------------
# END EFFECTOR
#---------------------------------------------------
#spawn the output of the system (referred to world)
n_x = lib_sympy.Symbol('x')
n_y = lib_sympy.Symbol('y')
nn_end_effector = create_vector_2d( n_x, n_y )
#---------------------------------------------------
# EQUATION
#---------------------------------------------------
#build the equation
eq = lib_sympy.Eq( nn_end_effector, eq_link_w_1 +eq_link_1_2 +eq_link_2_e )
print("Equation: ", eq )
solution_forward = lib_sympy.solve( eq, [n_x, n_y], dict = True )
print("num solutions: ", len(solution_forward))
print("Forward Solution theta->x,y: ", solution_forward )
#Construct equations from solutions
for n_solution_index in solution_forward: #loop over the solutions
#equation from solution so I can substitute
equation_forward_x = lib_sympy.Eq( n_x, n_solution_index[n_x] )
equation_forward_y = lib_sympy.Eq( n_y, n_solution_index[n_y] )
print("Equation X: ", equation_forward_x )
print("Equation Y: ", equation_forward_y )
#---------------------------------------------------
# MAP THE REACH
#---------------------------------------------------
# I initialize a "result" array. it contains the X and Y output
# I decide the minimum and maximum angle range of the joints
# for each solution
# first I set the length of the link, a parameter
# I replace the angle of the joints in the equation
# I solve the equation, and record the result
#I initialize a “result” array. it contains the X and Y output
result = []
#samples
n_steps = 8
#Angles J1
min = -lib_numpy.pi/2
max = lib_numpy.pi/2
nn_angles_j1 = lib_numpy.linspace( min, max, n_steps )
#Angles J2
min = -lib_numpy.pi/2
max = lib_numpy.pi/2
nn_angles_j2 = lib_numpy.linspace( min, max, n_steps )
#link length
in_length_1_2 = 5
in_length_2_e = 3
#scan angles for J1
for n_angle_j1 in nn_angles_j1:
for n_angle_j2 in nn_angles_j2:
#I replace the angle of the joints in the equation
n_eval_x = equation_forward_x.evalf( subs={n_theta_1: n_angle_j1, n_theta_2 : n_angle_j2, n_length_1_2 : in_length_1_2, n_length_2_e : in_length_2_e } )
sol_x = lib_sympy.solve( n_eval_x, dict=True )
#print("solution X: ", sol_x )
n_eval_y = equation_forward_y.evalf( subs={n_theta_1: n_angle_j1, n_theta_2 : n_angle_j2, n_length_1_2 : in_length_1_2, n_length_2_e : in_length_2_e } )
sol_y = lib_sympy.solve( n_eval_y, dict=True )
#print("solution Y: ", sol_y )
sol = [ sol_x[0][n_x], sol_y[0][n_y] ]
#solve the now simple equation and get a number
#print("solution: ", sol )
#I solve the equation, and record the result
result.append( sol )
#---------------------------------------------------
# PRINT XY
#---------------------------------------------------
# Print a scatter XY chart showing all the X,Y points
#---------------------------------------------------
#Print a scatter XY chart showing all the X,Y points
x_values = [r[0] for r in result]
y_values = [r[1] for r in result]
plt.scatter(x_values, y_values)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Reachable points for a single link")
plt.show()
#if execution detected
if __name__ == '__main__':
system()
</code></pre>
<p>Output:</p>
<pre><code>Equation: Eq(Matrix([[x, y]]), Matrix([[L1E*cos(T1) + L2E*cos(T1 + T2), -L1E*sin(T1) - L2E*sin(T1 + T2)]]))
num solutions: 1
Forward Solution theta->x,y: [{x: L1E*cos(T1) + L2E*cos(T1 + T2), y: -L1E*sin(T1) - L2E*sin(T1 + T2)}]
Equation X: Eq(x, L1E*cos(T1) + L2E*cos(T1 + T2))
Equation Y: Eq(y, -L1E*sin(T1) - L2E*sin(T1 + T2))
</code></pre>
<p>Forward End Effector XY map for some T1 T2 joint configuration
<a href="https://i.sstatic.net/8FhH9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8FhH9.png" alt="Reach" /></a></p>
<h3>PROBLEM</h3>
<p>Sympy only seems to be able to solve if there are no intermediate vars, which is baffling to me.</p>
<p>If I provide the full system of equations, Sympy.solve seems unable to see the equations M1X = 0 and M2X = M1X and eliminate the M1X symbol, even if I add it to the exclude list of the solver.</p>
<p>E.g. The following code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as lib_numpy
# Import sympy library
import sympy as lib_sympy
#Display
import matplotlib.pyplot as plt
def create_rotation_matrix_2d( theta : lib_sympy.Symbol):
return lib_sympy.Matrix([[lib_sympy.cos(theta), lib_sympy.sin(theta)], [-lib_sympy.sin(theta), lib_sympy.cos(theta)]])
def create_vector_2d( x : lib_sympy.Symbol, y : lib_sympy.Symbol ):
return lib_sympy.Matrix([[x, y]])
def create_segment( in_phi : lib_sympy.Symbol, in_theta : lib_sympy.Symbol, in_length : lib_sympy.Symbol ):
#joints
nnn_rotation_matrix = create_rotation_matrix_2d( in_phi +in_theta )
#link from this joint to the next joint to come
nn_link_vector = create_vector_2d( in_length, 0 )
#equation of the link
return nn_link_vector *nnn_rotation_matrix
def create_segment_offset( in_start_x : lib_sympy.Symbol, in_start_y : lib_sympy.Symbol, in_phi : lib_sympy.Symbol, in_theta : lib_sympy.Symbol, in_length : lib_sympy.Symbol ):
nn_offset = create_vector_2d( in_start_x, in_start_y )
nn_segment = create_segment( in_phi, in_theta, in_length )
return nn_offset +nn_segment
def create_segment_equations( in_length : lib_sympy.Symbol, in_start_x : lib_sympy.Symbol, in_start_y : lib_sympy.Symbol, in_phi : lib_sympy.Symbol, in_theta : lib_sympy.Symbol, in_end_x : lib_sympy.Symbol, in_end_y : lib_sympy.Symbol, in_end_theta : lib_sympy.Symbol ):
l_equation = []
#Segment X,Y equations function of angle
equation_1 = lib_sympy.Eq( create_vector_2d( in_end_x, in_end_y ), create_segment_offset( in_start_x, in_start_y, in_phi, in_theta, in_length) )
solution_1 = lib_sympy.solve( [equation_1], [in_end_x])
solution_2 = lib_sympy.solve( [equation_1], [in_end_y])
#Segment T angle equation function of angle
equation_theta = lib_sympy.Eq( in_end_theta, in_phi+in_theta )
#compose segment equations
l_equation.append( lib_sympy.Eq( in_end_x, solution_1[in_end_x] ) )
l_equation.append( lib_sympy.Eq( in_end_y, solution_2[in_end_y] ) )
l_equation.append( equation_theta )
return l_equation
def double_pendulum_system():
#forward equations
#T1,T2->EX,EY,ET
l_equation = []
#Motor 1 segment from World to its joint
n_segment_1_length = lib_sympy.Symbol('L1')
n_motor_1_theta = lib_sympy.Symbol('T1')
n_segment_1_x = lib_sympy.Symbol('M2X')
n_segment_1_y = lib_sympy.Symbol('M2Y')
n_segment_1_theta = lib_sympy.Symbol('M2T')
l_equation = l_equation +create_segment_equations( n_segment_1_length, 0, 0, 0, n_motor_1_theta, n_segment_1_x, n_segment_1_y, n_segment_1_theta )
#Motor 2 segment from Motor 1 Joint to End Effector
n_segment_2_length = lib_sympy.Symbol('L2')
n_motor_2_theta = lib_sympy.Symbol('T2')
n_end_effector_x = lib_sympy.Symbol('EX')
n_end_effector_y = lib_sympy.Symbol('EY')
n_end_effector_theta = lib_sympy.Symbol('ET')
l_equation = l_equation +create_segment_equations( n_segment_1_length, n_segment_1_x, n_segment_1_y, n_segment_1_theta, n_motor_2_theta, n_end_effector_x, n_end_effector_y, n_end_effector_theta )
print( "Equations", l_equation )
#Forward Equation
l_forward_solution = lib_sympy.solve( l_equation, [n_end_effector_x, n_end_effector_y, n_end_effector_theta],exclude=(n_segment_1_x,n_segment_1_y,n_segment_1_y,) )
print( "Forward", l_forward_solution )
#Forward Sensitivity
#Sensitivity of End Effector X in respect to variations in T1 angle
n_end_effector_sensitivity_x_t1 = lib_sympy.Symbol('EXdT1')
l_equation.append( lib_sympy.Eq( n_end_effector_sensitivity_x_t1, lib_sympy.Derivative(n_end_effector_x, n_motor_1_theta) ) )
l_sensitivity = lib_sympy.solve( l_equation, [n_end_effector_sensitivity_x_t1] )
print("Forward Sensitivity", l_sensitivity )
return
if __name__ == '__main__':
print("TEST12")
print("Forward density chart")
#STEP1: compile forward equations
double_pendulum_system()
#STEP2: compile forward sensitivity, how sensitive is position to T1 and T2
</code></pre>
<p>unhelpful outputs:</p>
<pre class="lang-py prettyprint-override"><code>TEST12
Forward density chart
Equations [Eq(M2X, L1*cos(T1)), Eq(M2Y, L1*sin(T1)), Eq(M2T, T1), Eq(EX, L1*cos(M2T + T2) + M2X), Eq(EY, L1*sin(M2T + T2) + M2Y), Eq(ET, M2T + T2)]
Forward {EX: L1*cos(M2T + T2) + M2X, EY: L1*sin(M2T + T2) + M2Y, ET: M2T + T2}
Forward Sensitivity {EXdT1: Derivative(EX, T1)}
</code></pre>
<p>Which tells me that the derivative is the derivative, which is unhelpful to say the least.
The correct output I want to get is like:</p>
<pre class="lang-py prettyprint-override"><code>TEST12
Forward density chart
Equations [Eq(M2X, L1*cos(T1)), Eq(M2Y, L1*sin(T1)), Eq(M2T, T1), Eq(EX, L1*cos(M2T + T2) + M2X), Eq(EY, L1*sin(M2T + T2) + M2Y), Eq(ET, M2T + T2)]
Forward {EX: L1*cos(T1+ T2) + L1*cos(T1), EY: L1*sin(T1+ T2) + L1*sin(T1), ET: T1+ T2}
Forward Sensitivity {EXdT1: -L1*sin(T1)-L1*sin(T1+ T2)}
</code></pre>
<p>I tried <em>linsolve</em> to no avail, the equations are inherently trascendent, linsolve can't do anything with them.</p>
<h3>QUESTION</h3>
<ol>
<li>How can I get Sympy.solve to sobstitute/eliminate unneded variables and get it to give me a closed solution?</li>
<li>is there anothe Sympy call or another way to properly symbolically solve the system of equations I provided in the example?</li>
</ol>
<p>below a minimal snippet to replicate the problem in Sympy.solve</p>
<pre class="lang-py prettyprint-override"><code>#Try to solve X^2=Y+ReplaceMe for X with ReplaceMe=-1
import sympy as lib_sympy
def good( in_x : lib_sympy.Symbol, in_y : lib_sympy.Symbol, in_z : lib_sympy.Symbol ):
equation = lib_sympy.Eq( in_x*in_x, in_y+ in_z )
equation = equation.evalf( subs={in_z: -1} )
solution = lib_sympy.solve( equation, in_x )
return solution
#The solver doesn't realize that Z=0 and doesn't sobstitute
def bad( in_x : lib_sympy.Symbol, in_y : lib_sympy.Symbol, in_z : lib_sympy.Symbol ):
l_equation = []
l_equation.append( lib_sympy.Eq( in_z, -1 ) )
l_equation.append( lib_sympy.Eq( in_x*in_x, in_y+ in_z ) )
solution = lib_sympy.solve( l_equation, in_x, exclude = (in_z,) )
return solution
if __name__ == '__main__':
n_x = lib_sympy.Symbol('X')
n_y = lib_sympy.Symbol('Y')
n_z = lib_sympy.Symbol('ReplaceMe')
print("Manual Solution: ", good( n_x, n_y, n_z ) )
print("Unhelpful Solution: ", bad( n_x, n_y, n_z ) )
</code></pre>
<p>Output:</p>
<pre><code>Manual Solution: [-sqrt(Y - 1.0), sqrt(Y - 1.0)]
Unhelpful Solution: [(-sqrt(ReplaceMe + Y),), (sqrt(ReplaceMe + Y),)]
</code></pre>
|
<python><sympy><symbolic-math><equation-solving>
|
2023-06-14 11:41:19
| 1
| 670
|
05032 Mendicant Bias
|
76,473,149
| 4,391,133
|
HelloSign python SDK generated with OPS is different from the SDK project provided in github
|
<p>I am trying to generate the python SDK of the hellosign from the opensource repo with this yaml <a href="https://github.com/hellosign/hellosign-openapi/blob/main/openapi-sdk.yaml" rel="nofollow noreferrer">https://github.com/hellosign/hellosign-openapi/blob/main/openapi-sdk.yaml</a></p>
<p>The below is the command I am using to generate the SDK</p>
<blockquote>
<p>java -jar openapi-generator-cli-6.6.0.jar generate -i D:\openapi-sdk.yaml -g python -o c:\temp\hellosign_python --skip-validate-spec --additional-properties packageName=hellosign.sdk</p>
</blockquote>
<p>But the SDK project generated is different from the once they have generated here <a href="https://github.com/hellosign/hellosign-openapi/tree/main/sdks/python" rel="nofollow noreferrer">https://github.com/hellosign/hellosign-openapi/tree/main/sdks/python</a>.</p>
<p>Few API classes are not there. <strong>init</strong>.py is different with separate clients and so on. How to generate the SDK project same as they have generated?</p>
|
<python><openapi-generator><openapi-generator-cli><hellosign-api>
|
2023-06-14 11:40:52
| 1
| 718
|
IMK
|
76,473,106
| 1,108,473
|
ValueError: object not in sequence
|
<p>I'm using CircuitPython to draw text and shapes on an LCD, which works. I detect interaction (a rotary encoder) and want to change a shape's colour, when I detect a certain state. This also works but I have created a function to remove the shape and the text overlaying it then place another shape (same size & position but different colour) and then replace the original text back over that. I'm getting the following error</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 249, in <module>
File "<stdin>", line 206, in toggleCircle
ValueError: object not in sequence
</code></pre>
<p>The relevant parts of my code:</p>
<pre><code>...
# Make the displayio SPI bus and the GC9A01 display
display_bus = displayio.FourWire(spi, command=tft_dc, chip_select=tft_cs, reset=tft_rst)
display = gc9a01.GC9A01(display_bus, width=240, height=240, backlight_pin=tft_bl)
...
# Make the main display context
main = displayio.Group()
display.show(main)
circle = Circle(-25,120,circleRad, fill= 0x555555 )
modeCircle = Circle(-25,120,circleRad, fill= 0x0045D0 )
...
#Draw central hub text
tmp = "18.5°C"
hum = "53%"
delta = "4:19"
tim = "13:34"
TempTxt = label.Label(terminalio.FONT, text=tmp, color=0xFFFFFF, anchor_point=(0.5,0.5), anchored_position=(0,0))
HumTxt = label.Label(terminalio.FONT, text=hum, color=0xFFFFFF, anchor_point=(0.5,0.5), anchored_position=(0,-12))
DeltaTxt = label.Label(terminalio.FONT, text=delta, color=0xFFFFFF, anchor_point=(0.5,0.5), anchored_position=(0,12))
TimeTxt = label.Label(terminalio.FONT, text=tim, color=0xFFFFFF, anchor_point=(0.5,0.5), anchored_position=(0,24))
StatusGrp = displayio.Group(scale=2)
StatusGrp.append(TempTxt)
StatusGrp.append(HumTxt)
StatusGrp.append(DeltaTxt)
StatusGrp.append(TimeTxt)
StatusGrp.x = 50
StatusGrp.y = 108
main.append(HumGrp)
main.append(TimerGrp)
main.append(MaxGrp)
main.append(OffGrp)
main.append(circle)
main.append(StatusGrp)
...
mode = 0 #position and therefore mode seems to be a bit lossy, so use x & y coordinate check to determine mode
ActiveModeX = int(sweepRad * math.cos(0))-re
ActiveModeY = int(sweepRad * math.sin(0))+120
# cater for rounding errors or other minor inaccuracies with a comparison with a tolerance margin
def near(a,b,within):
if ((a >= b-within) and (a <= b+within)):
return True
else:
return False
def toggleCircle(sel):
if sel == 1:
main.remove(circle)
main.remove(StatusGrp)
main.append(modeCircle) #remove grey circle and replace it with coloured circle ie mode selected
main.append(StatusGrp)
elif sel == 0:
main.remove(modeCircle)
main.remove(StatusGrp)
main.append(circle) #remove grey circle and replace it with coloured circle ie mode selected
main.append(StatusGrp)
while True:
position = encoder.position
if last_position is None or position != last_position:
theta = position * math.pi/virtualDetents
HumGrp.x = int(sweepRad * math.cos(theta))-re
HumGrp.y = int(sweepRad * math.sin(theta))+120
TimerGrp.x = int(sweepRad * math.cos((theta-math.pi/2))-re) #-(math.pi/2)
TimerGrp.y = int(sweepRad * math.sin((theta-math.pi/2))+120)
MaxGrp.x = int(sweepRad * math.cos((theta-math.pi))-re) #-(math.pi/2)
MaxGrp.y = int(sweepRad * math.sin((theta-math.pi))+120)
OffGrp.x = int(sweepRad * math.cos((theta+math.pi/2))-re) #-(math.pi/2)
OffGrp.y = int(sweepRad * math.sin((theta+math.pi/2))+121)
time.sleep(0.2)
#use x & y coordinate check to determine mode - much more reliable than using rotary encoder, which can get out of sync
if (near(HumGrp.x,ActiveModeX,1) and near(HumGrp.y,ActiveModeY,1) and mode != "humidity"):
print("Mode: Humidity")
mode = "humidity"
toggleCircle(1)
elif (near(TimerGrp.x,ActiveModeX,1) and near(TimerGrp.y,ActiveModeY,1) and mode != "timer"):
print("Mode: Timer")
mode = "timer"
toggleCircle(1)
elif (near(MaxGrp.x,ActiveModeX,1) and near(MaxGrp.y,ActiveModeY,1) and mode != "max"):
print("Mode: Max")
mode = "max"
toggleCircle(1)
elif (near(OffGrp.x,ActiveModeX,1) and near(OffGrp.y,ActiveModeY,1) and mode != "off"):
print("Mode: Off")
mode = "off"
toggleCircle(1)
# else:
# print("Mode: Not a mode")
# toggleCircle(0)
</code></pre>
<p>Despite the error message, I'm not sure what I'm looking for. Please help</p>
|
<python><lcd><adafruit><adafruit-circuitpython>
|
2023-06-14 11:36:26
| 1
| 377
|
Greg
|
76,472,782
| 1,473,517
|
How to make color bar ticks white and internal
|
<p>I am drawing a heatmap and this is my MWE:</p>
<pre><code>import matplotlib
import seaborn as sns
import numpy as np
matplotlib.rcParams.update({"figure.dpi": 96})
np.random.seed(7)
A = np.random.randint(0,100, size=(20,20))
cmap = matplotlib.cm.get_cmap('viridis').copy()
g = sns.heatmap(A, vmin=10, vmax=90, cmap=cmap, cbar_kws={})
# Get the colorbar
cbar = g.collections[0].colorbar
tick_locations = [*range(15, 86, 10)]
# Set the tick positions and labels
cbar.set_ticks(tick_locations)
cbar.set_ticklabels(tick_locations)
plt.show()
</code></pre>
<p>This gives me:</p>
<p><a href="https://i.sstatic.net/1EmEC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1EmEC.png" alt="enter image description here" /></a></p>
<p>But I would like the little horizontal tick marks on the color bar to be white and inside the color bar as in:</p>
<p><a href="https://i.sstatic.net/wxA1l.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wxA1l.png" alt="enter image description here" /></a></p>
<p>How can I do that?</p>
<p>(What I am looking for seems to be the default in <a href="https://stackoverflow.com/a/72140120/1473517">plotnine/ggplot</a>.)</p>
|
<python><matplotlib><colorbar>
|
2023-06-14 10:56:18
| 1
| 21,513
|
Simd
|
76,472,773
| 7,959,614
|
Integrate over multiple limits at once using Quadpy
|
<p>My goal is to integrate over multiple limits at once using <code>quadpy</code>.
The documentation says the following:</p>
<blockquote>
<p>quadpy is fully vectorized, so if you like to compute the integral of
a function on many domains at once, you can provide them all in one
integrate() call, e.g.,</p>
</blockquote>
<p>However, its still unclear to me which <code>quadpy</code>-function I should use for this objective.
My script looks as follows:</p>
<pre><code>import quadpy
import numpy as np
class Kelly:
def __init__(self):
self.odds = 1.952
self.kelly = 0.08961344537815132
self.i = 0.001
self.f = np.arange(0, 1 + self.i, self.i).flatten()
self.c1 = 1
self.c2 = 2
self.k = 1.5
def loss_function(self, p):
p = p[:, 0]
loss_function = np.where(p[:, None] < (self.f * self.odds - 1) + 1 / self.odds,
(self.c1 + self.c2) * abs(self.f - self.kelly) ** self.k, 0)
return loss_function
def integrate(self):
xmin = np.zeros(len(self.f))
xmax = np.array([self.f * (self.odds - 1) + 1 / self.odds]).flatten()
# vals, errors = quadpy.quad(self.loss_function, xmin, xmax)
return vals, errors
kelly = Kelly()
vals, errors = kelly.integrate()
print(vals, errors)
</code></pre>
<p>This results in the following error:</p>
<blockquote>
<p>ValueError: The truth value of an array with more than one element is
ambiguous. Use a.any() or a.all()</p>
</blockquote>
<p>Please advice</p>
|
<python><numerical-integration>
|
2023-06-14 10:55:30
| 2
| 406
|
HJA24
|
76,472,682
| 1,014,217
|
Cant make memory work with ConversationalRetrievalChain in Langchain
|
<p>I have simple txt file indexed in pine cone, and question answering works perfectly fine without memory.</p>
<p>When I add ConversationBufferMemory and ConversationalRetrievalChain using session state the 2nd question is not taking into account the previous conversation.</p>
<p>1st Question: Who is John Doe?
He is a male, 70 years old, etc,etc
2nd Question. How old is he?
To who are you referring to?</p>
<p>But chat history looks like this:
<img src="https://github.com/hwchase17/langchain/assets/6962857/ecf5f5f1-9772-45aa-afa5-4ee30aef7fa4" alt="langchain" /></p>
<p>MY code looks like this, what am I missing?</p>
<pre><code>
import streamlit as st
import openai
import os
import pinecone
import streamlit as st
from dotenv import load_dotenv
from langchain.chains.question_answering import load_qa_chain
from dotenv import load_dotenv
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings.openai import OpenAIEmbeddings
from langchain.vectorstores import Pinecone
import streamlit as st
from streamlit_chat import message
from langchain.memory import ConversationBufferMemory
from langchain.prompts import PromptTemplate
from langchain.chains import ConversationChain
from langchain.chains import ConversationalRetrievalChain
#load environment variables
load_dotenv()
OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_DEPLOYMENT_ENDPOINT = os.getenv("OPENAI_DEPLOYMENT_ENDPOINT")
OPENAI_DEPLOYMENT_NAME = os.getenv("OPENAI_DEPLOYMENT_NAME")
OPENAI_MODEL_NAME = os.getenv("OPENAI_MODEL_NAME")
OPENAI_EMBEDDING_DEPLOYMENT_NAME = os.getenv("OPENAI_EMBEDDING_DEPLOYMENT_NAME")
OPENAI_EMBEDDING_MODEL_NAME = os.getenv("OPENAI_EMBEDDING_MODEL_NAME")
OPENAI_API_VERSION = os.getenv("OPENAI_API_VERSION")
OPENAI_API_TYPE = os.getenv("OPENAI_API_TYPE")
#pinecone
PINECONE_API_KEY = os.getenv("PINECONE_API_KEY")
PINECONE_ENV = os.getenv("PINECONE_ENV")
#init Azure OpenAI
openai.api_type = OPENAI_API_TYPE
openai.api_version = OPENAI_API_VERSION
openai.api_base = OPENAI_DEPLOYMENT_ENDPOINT
openai.api_key = OPENAI_API_KEY
st.set_page_config(
page_title="Streamlit Chat - Demo",
page_icon=":robot:"
)
chat_history = []
def get_text():
input_text = st.text_input("You: ","Who is John Doe?", key="input")
return input_text
def query(payload, chain,query,chat_history ):
result = chain({"question": query, "chat_history": chat_history})
chat_history.append((query, result["answer"]))
thisdict = {
"generated_text": result['answer']
}
return thisdict, chat_history
def main():
st.title('Scenario 2: Question Aswering on documents with langchain, pinecone and openai')
st.markdown(
"""
This scenario shows how to chat wih a txt file which was indexed in pinecone.
"""
)
pinecone.init(
api_key=PINECONE_API_KEY, # find at app.pinecone.io
environment=PINECONE_ENV # next to api key in console
)
if 'generated' not in st.session_state:
st.session_state['generated'] = []
if 'past' not in st.session_state:
st.session_state['past'] = []
if 'chat_history' not in st.session_state:
st.session_state['chat_history'] = []
index_name = "default"
embed = OpenAIEmbeddings(deployment=OPENAI_EMBEDDING_DEPLOYMENT_NAME, model=OPENAI_EMBEDDING_MODEL_NAME, chunk_size=1)
retriever = Pinecone.from_existing_index(index_name, embed)
user_input = get_text()
llm = AzureChatOpenAI(
openai_api_base=OPENAI_DEPLOYMENT_ENDPOINT,
openai_api_version=OPENAI_API_VERSION ,
deployment_name=OPENAI_DEPLOYMENT_NAME,
openai_api_key=OPENAI_API_KEY,
openai_api_type = OPENAI_API_TYPE ,
model_name=OPENAI_MODEL_NAME,
temperature=0)
memory = ConversationBufferMemory(memory_key="chat_history", return_messages=True)
chain = ConversationalRetrievalChain.from_llm(llm, retriever.as_retriever(), memory=memory)
if user_input:
output, chat_history = query({
"inputs": {
"past_user_inputs": st.session_state.past,
"generated_responses": st.session_state.generated,
"text": user_input,
},"parameters": {"repetition_penalty": 1.33}
},
chain=chain,
query=user_input,
chat_history=st.session_state["chat_history"])
st.session_state.past.append(user_input)
st.session_state.generated.append(output["generated_text"])
st.session_state.chat_history.append(chat_history)
if st.session_state['generated']:
for i in range(len(st.session_state['generated'])-1, -1, -1):
message(st.session_state["generated"][i], key=str(i))
message(st.session_state['past'][i], is_user=True, key=str(i) + '_user')
if __name__ == "__main__":
main()
</code></pre>
|
<python><streamlit><langchain><pinecone>
|
2023-06-14 10:44:30
| 1
| 34,314
|
Luis Valencia
|
76,472,672
| 11,415,809
|
Boolean logic in filters when loading parquet file
|
<p>I want to remove persons that are born in 1900 and have not died yet.</p>
<p>Code below works, but I need two filters to remove specific rows. Is there a simpler way to remove the rows with one filter?</p>
<p>Minimal code to reproduce:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
data = [
(1900, None,), # needs to be removed
(1900, 2000,),
(2000, None,),
(2000, 2020,),
]
df = pd.DataFrame(data, columns=['birth', 'death'])
df.to_parquet('test.parquet')
# Rows which do not match the filter predicate will be removed
filters= [
[
('birth', '!=', 1900),
],
[
('birth', '=', 1900),
('death', 'not in', [None]),
]
]
df2 = pd.read_parquet('test.parquet', filters=filters)
df2.head()
</code></pre>
<p>Documentation: <a href="https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html#pyarrow.parquet.read_table" rel="nofollow noreferrer">https://arrow.apache.org/docs/python/generated/pyarrow.parquet.read_table.html#pyarrow.parquet.read_table</a></p>
|
<python><pandas><parquet><boolean-logic>
|
2023-06-14 10:43:07
| 1
| 481
|
3UqU57GnaX
|
76,472,567
| 12,863,331
|
Applying explode on a pandas dataframe results in the error 'ValueError: column must be a scalar'
|
<p>Here the dataframe before attempting to explode it:</p>
<pre><code> Acc # Match Length Gene Identity (%)
0 CP034360.1 [312] [b4172] [88.462]
1 CP098173.1 [1655, 180] [rne, rne] [84.955, 85.556]
2 NTCW01000078.1 [986] [recA] [90.974]
3 NZ_JASISD010000003.1 [312] [b4172] [88.462]
4 PEGY01000004.1 [1619, 183] [rne, rne] [85.238, 85.246]
5 PEHS01000009.1 [312] [b4172] [87.500]
6 PEHS01000011.1 [1608] [rne] [85.759]
7 PEHV01000004.1 [1014] [recA] [90.335]
8 PEHV01000005.1 [2417] [gyrB] [88.746]
9 PEHV01000008.1 [312] [b4172] [87.500]
10 VOOC01000097.1 [2423] [gyrB] [88.816]
</code></pre>
<p>The command that causes the error is</p>
<pre><code>b_df.explode(['Match Length', 'Gene', 'Identity (%)'])
</code></pre>
<p>I couldn't find what might cause this error, or relevant posts.<br />
Any ideas will be welcomed.<br />
Thanks.</p>
|
<python><pandas><pandas-explode>
|
2023-06-14 10:31:13
| 1
| 304
|
random
|
76,472,513
| 7,745,011
|
Is it possible to use inheritance on the `schema_extra` config setting of a Pydantic model?
|
<p>For example I have the following toy example of a <code>Parent</code> Model:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, Extra
class Parent(BaseModel):
class Config:
extra = Extra.ignore
validate_assignment = True
schema_extra = {
"version": "00.00.00",
"info": "Parent description",
"name": "Parent Name",
}
</code></pre>
<p>The goal here is that all child models inherit the same schema and maybe add additional stuff to that schema.</p>
<pre class="lang-py prettyprint-override"><code>class Child(Parent):
class Config:
extra = Extra.ignore
validate_assignment = True
@staticmethod
def schema_extra(
schema: dict[str, object], model: type["Child"]
) -> None:
schema['info'] = 'Child Description'
schema['name'] = 'Child Name'
schema['additional stuff'] = 'Something else'
</code></pre>
<p>The code above does not work, since the <code>Config</code> in the child class completely overwrites the inherited <code>Config</code> from the parent, therefore in the child's schema we are missing the <code>version</code> for example.</p>
<p>As mentioned, I want for all inherited classes to have the same basic schema layout with some meta information and possibly change values or add to it. Is this even possible?</p>
<p><strong>Edit</strong>
The solution from @Daniil-Fajnberg works well with one caveat. Having for example:</p>
<pre><code>class Child(Parent):
class Config:
@staticmethod
def schema_extra(schema: dict[str, object]) -> None:
schema["info"] = "Child Description"
schema["additional stuff"] = "Something else"
schema["name"] = f"{schema.get('title')}v{schema.get('version')}"
</code></pre>
<p>the resulting entry for <code>name</code> in the schema will be for example:</p>
<blockquote>
<p>'name': "Nonev00.00.00"</p>
</blockquote>
<p>FYI: In my setup I use <code>schema_extra</code> as static method on the parent as well</p>
|
<python><inheritance><pydantic>
|
2023-06-14 10:24:28
| 2
| 2,980
|
Roland Deschain
|
76,472,363
| 943,524
|
How comes all decimals with a few digits are printed correctly?
|
<p>As we all know, computers store floating point numbers on finite memory in an approximate way (see <a href="https://en.wikipedia.org/wiki/IEEE_754" rel="nofollow noreferrer">IEEE 754</a>).</p>
<p>This results in weird behaviour like (here in Python, but it shouldn't matter much):</p>
<pre><code>>>> 0.1 + 0.2
0.30000000000000004
</code></pre>
<p>Or</p>
<pre><code>>>> 0.10000000000000001
0.1
</code></pre>
<p><strong>However</strong>, when we print numbers with "a few" digits like <code>0.1</code>, <code>0.2</code>, <code>0.3</code>, ... we never end with an approximation of the number (in my second example above, it's true the other way: <code>0.10000000000000001</code> renders as <code>0.1</code>).</p>
<p>How does IEEE 754 (or Python, if this behavior is due to Python implem) achieves this?</p>
|
<python><math><floating-point><numbers><ieee-754>
|
2023-06-14 10:05:44
| 1
| 1,439
|
Weier
|
76,472,270
| 1,581,090
|
How do I set up a virtualenv on Ubuntu running in VirtualBox on windows?
|
<p>In a VirtualBox running Ubuntu 20.04.6 on windows 10 I tried to set up a virtualenv on a disk space that is mounted in/from(?) windows (i.e. the space can be accessed from the Ubuntu system as well as windows). I tried:</p>
<pre><code>$ python -m venv venv
Error: [Errno 1] Operation not permitted: 'lib' -> '/home/myuser/venv/lib64'
$ virtualenv venv
PermissionError: [Errno 1] Operation not permitted: '/usr/bin/python3' -> '/home/myuser/venv/bin/python'
</code></pre>
<p>Any idea if it is possible some other way?</p>
|
<python><windows><virtualbox>
|
2023-06-14 09:55:37
| 1
| 45,023
|
Alex
|
76,472,167
| 15,452,168
|
Calculating average items in webshop order for each USIM in Pandas DataFrame
|
<p>I have a Pandas DataFrame with the following structure:</p>
<pre><code>import pandas as pd
data = {
'USIM': ['1111111', '2199608', '2222222', '4444444', '1111111', '2111111', '2222222', '4444444'],
'WEBSHOP_ORDER': [0, 0, 0, 0, 1, 1, 1, 1],
'DEMAND_QTY': [1, 3, 2, 1, 5, 9, 8, 6]
}
df = pd.DataFrame(data)
</code></pre>
<p>I want to calculate the average number of items in the webshop order for each USIM. The USIM column represents the unique identifiers, the WEBSHOP_ORDER column indicates the order ID for each entry, and the DEMAND_QTY column represents the number of items in each order.</p>
<p>I would like to obtain the following output:</p>
<pre><code>USIM AVG_ITEMS_IN_WEBSHOP_ORDER
0 1111111 17.5 # (28+7)/2 *
1 2111111 28.0
2 2199608 7.0
3 2222222 17.5
4 4444444 17.5
# * 28 is the sum of WEBSHOP_ORDER == 1
# 7 is the sum of WEBSHOP_ORDER == 0
</code></pre>
<p>The AVG_ITEMS_IN_WEBSHOP_ORDER column represents the average number of items in webshop orders for each unique USIM value.</p>
<p>Could someone please help me with the logic or code to achieve this? Thank you!</p>
|
<python><pandas><dataframe><logic>
|
2023-06-14 09:43:09
| 1
| 570
|
sdave
|
76,472,040
| 16,797,805
|
Track API calls from Streamlit app in browser developer tools
|
<p>Is there a way to track API calls made from a Streamlit app in the Network tab of the browser developer tools?</p>
<p>I made a Streamlit app that retrieves data through API calls to a server, but I can't find those calls in the developer tools. This will help me to debug and optimize the app more quickly.</p>
|
<python><browser><streamlit>
|
2023-06-14 09:28:58
| 1
| 857
|
mattiatantardini
|
76,472,032
| 1,082,349
|
How to install brotli from pip, when anaconda requires brotlipy
|
<p>I have an environment from anaconda, which includes among others, <code>scipy</code>. The anaconda dependencies require the library <code>brotlipy</code> for <code>scipy</code> (<code>conda install brotlipy</code>). This is unfortunate because I require the google-provided <code>brotli</code> package itself (<code>pip install brotli</code>). This is because the google bindings support a newer version of the brotli algorithm, with support for chunk-wise compression.</p>
<p>I can install brotli with <code>pip</code> inside my environment (so that I have both packages side-by-side, <code>brotlipy</code> for <code>scipy</code>, and <code>brotli</code> for myself). However, a simple import statement will always refer to the brotlipy:</p>
<pre><code>import brotli
brotli
Out[4]: <module 'brotli' from 'C:\\Users\\x\\.conda\\envs\\myenv3\\lib\\site-packages\\brotli\\__init__.py'>
brotli.Compressor().process
Traceback (most recent call last):
File "C:\Users\x\.conda\envs\myenv3\lib\site-packages\IPython\core\interactiveshell.py", line 3397, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File "<ipython-input-5-121c31e74b28>", line 1, in <cell line: 1>
brotli.Compressor().process
AttributeError: 'Compressor' object has no attribute 'process'
</code></pre>
<p>(Since it doesn't have support <code>__version__</code>, I'm calling to <code>.process</code> to test whether I'm calling the right version of <code>brotli</code>, which should yield</p>
<pre><code>brotli.Compressor().process
Out[5]: <function Compressor.process>
</code></pre>
<hr />
<p>How can I force Python to load the correct brotli?</p>
<hr />
<p>Here is me attempting to remove <code>brotlipy</code>, which leads to a removal of scipy.</p>
<pre><code>> conda remove brotlipy -n myenv3
Collecting package metadata (repodata.json): done
Solving environment: done
## Package Plan ##
environment location: C:\Users\x\.conda\envs\myenv3
removed specs:
- brotlipy
The following packages will be downloaded:
package | build
---------------------------|-----------------
bottleneck-1.3.5 | py38h080aedc_0 106 KB
colorama-0.4.6 | py38haa95532_0 32 KB
freetype-2.12.1 | ha860e81_0 490 KB
giflib-5.2.1 | h8cc25b3_3 88 KB
glib-2.69.1 | h5dc1a3c_2 1.8 MB
gst-plugins-base-1.18.5 | h9e645db_0 1.7 MB
gstreamer-1.18.5 | hd78058f_0 1.7 MB
jpeg-9e | h2bbff1b_1 320 KB
kiwisolver-1.4.4 | py38hd77b12b_0 60 KB
krb5-1.19.4 | h5b6d351_0 786 KB
lerc-3.0 | hd77b12b_0 120 KB
libclang-14.0.6 |default_hb5a9fac_1 154 KB
libclang13-14.0.6 |default_h8e68704_1 22.5 MB
libdeflate-1.17 | h2bbff1b_0 151 KB
libffi-3.4.4 | hd77b12b_0 113 KB
libiconv-1.16 | h2bbff1b_2 651 KB
libogg-1.3.5 | h2bbff1b_1 33 KB
libpng-1.6.39 | h8cc25b3_0 369 KB
libtiff-4.5.0 | h6c2663c_2 1.2 MB
libvorbis-1.3.7 | he774522_0 202 KB
libwebp-1.2.4 | hbc33d0d_1 73 KB
libwebp-base-1.2.4 | h2bbff1b_1 304 KB
libxml2-2.10.3 | h0ad7f3c_0 2.9 MB
libxslt-1.1.37 | h2bbff1b_0 448 KB
lz4-c-1.9.4 | h2bbff1b_0 143 KB
matplotlib-inline-0.1.6 | py38haa95532_0 17 KB
numexpr-2.8.4 | py38h5b0cc5e_0 127 KB
packaging-23.0 | py38haa95532_0 69 KB
pcre-8.45 | hd77b12b_0 382 KB
pillow-9.4.0 | py38hd77b12b_0 1015 KB
ply-3.11 | py38_0 81 KB
prompt-toolkit-3.0.36 | py38haa95532_0 566 KB
pygments-2.15.1 | py38haa95532_1 1.7 MB
pyparsing-3.0.9 | py38haa95532_0 152 KB
pyqt-5.15.7 | py38hd77b12b_0 3.7 MB
pyqt5-sip-12.11.0 | py38hd77b12b_0 75 KB
pytz-2022.7 | py38haa95532_0 210 KB
qt-main-5.15.2 | he8e5bd7_8 59.4 MB
qt-webengine-5.15.9 | hb9a9bb5_5 48.9 MB
qtwebkit-5.212 | h2bbfb41_5 11.6 MB
setuptools-67.8.0 | py38haa95532_0 1.0 MB
sip-6.6.2 | py38hd77b12b_0 434 KB
sqlite-3.41.2 | h2bbff1b_0 894 KB
tbb-2021.8.0 | h59b6b97_0 149 KB
tk-8.6.12 | h2bbff1b_0 3.1 MB
toml-0.10.2 | pyhd3eb1b0_0 20 KB
tornado-6.2 | py38h2bbff1b_0 609 KB
traitlets-5.7.1 | py38haa95532_0 205 KB
wheel-0.38.4 | py38haa95532_0 83 KB
xz-5.4.2 | h8cc25b3_0 592 KB
zlib-1.2.13 | h8cc25b3_0 113 KB
zstd-1.5.5 | hd43e919_0 682 KB
------------------------------------------------------------
Total: 172.0 MB
The following NEW packages will be INSTALLED:
giflib pkgs/main/win-64::giflib-5.2.1-h8cc25b3_3
glib pkgs/main/win-64::glib-2.69.1-h5dc1a3c_2
gst-plugins-base pkgs/main/win-64::gst-plugins-base-1.18.5-h9e645db_0
gstreamer pkgs/main/win-64::gstreamer-1.18.5-hd78058f_0
krb5 pkgs/main/win-64::krb5-1.19.4-h5b6d351_0
lerc pkgs/main/win-64::lerc-3.0-hd77b12b_0
libclang pkgs/main/win-64::libclang-14.0.6-default_hb5a9fac_1
libclang13 pkgs/main/win-64::libclang13-14.0.6-default_h8e68704_1
libdeflate pkgs/main/win-64::libdeflate-1.17-h2bbff1b_0
libffi pkgs/main/win-64::libffi-3.4.4-hd77b12b_0
libiconv pkgs/main/win-64::libiconv-1.16-h2bbff1b_2
libogg pkgs/main/win-64::libogg-1.3.5-h2bbff1b_1
libvorbis pkgs/main/win-64::libvorbis-1.3.7-he774522_0
libwebp-base pkgs/main/win-64::libwebp-base-1.2.4-h2bbff1b_1
libxml2 pkgs/main/win-64::libxml2-2.10.3-h0ad7f3c_0
libxslt pkgs/main/win-64::libxslt-1.1.37-h2bbff1b_0
pcre pkgs/main/win-64::pcre-8.45-hd77b12b_0
ply pkgs/main/win-64::ply-3.11-py38_0
pyqt5-sip pkgs/main/win-64::pyqt5-sip-12.11.0-py38hd77b12b_0
qt-main pkgs/main/win-64::qt-main-5.15.2-he8e5bd7_8
qt-webengine pkgs/main/win-64::qt-webengine-5.15.9-hb9a9bb5_5
qtwebkit pkgs/main/win-64::qtwebkit-5.212-h2bbfb41_5
toml pkgs/main/noarch::toml-0.10.2-pyhd3eb1b0_0
The following packages will be REMOVED:
appdirs-1.4.4-pyhd3eb1b0_0
brotlipy-0.7.0-py38h294d835_1004
cffi-1.15.1-py38hd8c33c5_0
charset-normalizer-2.0.4-pyhd3eb1b0_0
cryptography-39.0.1-py38h21b164f_0
icc_rt-2022.1.0-h6049295_2
idna-3.4-py38haa95532_0
patsy-0.5.2-py38haa95532_1
pooch-1.4.0-pyhd3eb1b0_0
pycparser-2.21-pyhd8ed1ab_0
pyopenssl-23.0.0-py38haa95532_0
pysocks-1.7.1-py38haa95532_0
python_abi-3.8-2_cp38
qt-5.9.7-vc14h73c81de_0
requests-2.28.1-py38haa95532_1
scipy-1.10.0-py38h321e85e_1
seaborn-0.11.2-pyhd3eb1b0_0
statsmodels-0.13.2-py38h2bbff1b_0
urllib3-1.26.14-py38haa95532_0
win_inet_pton-1.1.0-py38haa95532_0
wincertstore-0.2-py38haa95532_2
The following packages will be UPDATED:
bottleneck 1.3.4-py38h080aedc_0 --> 1.3.5-py38h080aedc_0
colorama pkgs/main/noarch::colorama-0.4.4-pyhd~ --> pkgs/main/win-64::colorama-0.4.6-py38haa95532_0
freetype 2.10.4-hd328e21_0 --> 2.12.1-ha860e81_0
jpeg 9e-h2bbff1b_0 --> 9e-h2bbff1b_1
kiwisolver 1.3.2-py38hd77b12b_0 --> 1.4.4-py38hd77b12b_0
libpng 1.6.37-h2a8f88b_0 --> 1.6.39-h8cc25b3_0
libtiff 4.2.0-hd0e1b90_0 --> 4.5.0-h6c2663c_2
libwebp 1.2.2-h2bbff1b_0 --> 1.2.4-hbc33d0d_1
lz4-c 1.9.3-h2bbff1b_1 --> 1.9.4-h2bbff1b_0
matplotlib-inline pkgs/main/noarch::matplotlib-inline-0~ --> pkgs/main/win-64::matplotlib-inline-0.1.6-py38haa95532_0
numexpr 2.8.1-py38hb80d3ca_0 --> 2.8.4-py38h5b0cc5e_0
packaging pkgs/main/noarch::packaging-21.3-pyhd~ --> pkgs/main/win-64::packaging-23.0-py38haa95532_0
pillow 9.0.1-py38hdc2b20a_0 --> 9.4.0-py38hd77b12b_0
prompt-toolkit pkgs/main/noarch::prompt-toolkit-3.0.~ --> pkgs/main/win-64::prompt-toolkit-3.0.36-py38haa95532_0
pygments pkgs/main/noarch::pygments-2.11.2-pyh~ --> pkgs/main/win-64::pygments-2.15.1-py38haa95532_1
pyparsing pkgs/main/noarch::pyparsing-3.0.4-pyh~ --> pkgs/main/win-64::pyparsing-3.0.9-py38haa95532_0
pyqt 5.9.2-py38hd77b12b_6 --> 5.15.7-py38hd77b12b_0
pytz pkgs/main/noarch::pytz-2021.3-pyhd3eb~ --> pkgs/main/win-64::pytz-2022.7-py38haa95532_0
setuptools 61.2.0-py38haa95532_0 --> 67.8.0-py38haa95532_0
sip 4.19.13-py38hd77b12b_0 --> 6.6.2-py38hd77b12b_0
sqlite 3.38.3-h2bbff1b_0 --> 3.41.2-h2bbff1b_0
tbb 2021.5.0-h59b6b97_0 --> 2021.8.0-h59b6b97_0
tk 8.6.11-h2bbff1b_1 --> 8.6.12-h2bbff1b_0
tornado 6.1-py38h2bbff1b_0 --> 6.2-py38h2bbff1b_0
traitlets pkgs/main/noarch::traitlets-5.1.1-pyh~ --> pkgs/main/win-64::traitlets-5.7.1-py38haa95532_0
wheel pkgs/main/noarch::wheel-0.37.1-pyhd3e~ --> pkgs/main/win-64::wheel-0.38.4-py38haa95532_0
xz 5.2.5-h8cc25b3_1 --> 5.4.2-h8cc25b3_0
zlib 1.2.12-h8cc25b3_2 --> 1.2.13-h8cc25b3_0
zstd 1.4.9-h19a0ad4_0 --> 1.5.5-hd43e919_0
</code></pre>
|
<python><pip><anaconda>
|
2023-06-14 09:27:39
| 1
| 16,698
|
FooBar
|
76,472,009
| 567,059
|
How to properly use the 'pytest_assertion_pass' hook to count all assertions that pass
|
<p>I'm attempting to count the number of assertions that pass in a <code>pytest</code> run, but the hook only appears to be firing for a few tests.</p>
<p>As can be seen from the output, I have 43 test items, each of which has at least one assertion.</p>
<p>The <a href="https://docs.pytest.org/en/7.1.x/reference/reference.html#pytest.hookspec.pytest_assertion_pass" rel="nofollow noreferrer">documentation</a> states "Use this hook to do some processing after a passing assertion." To me, that suggests that the hook should fire after every assertion, but that is not the case.</p>
<p>How can I properly count all assertions that pass?</p>
<h3><code>pytest.ini</code></h3>
<pre><code>[pytest]
enable_assertion_pass_hook=true
</code></pre>
<h3><code>conftest.py</code></h3>
<pre class="lang-py prettyprint-override"><code>assertion_count = 0
def pytest_assertion_pass(item, lineno, orig, expl):
global assertion_count
assertion_count += 1
def pytest_terminal_summary(terminalreporter, exitstatus, config):
print(f'\n{assertion_count} assertions tested.')
</code></pre>
<h3>Output</h3>
<pre class="lang-py prettyprint-override"><code>platform linux -- Python 3.10.6, pytest-7.3.1, pluggy-1.0.0
rootdir: /home/user/project
configfile: pytest.ini
plugins: cov-4.1.0
collected 43 items
az/test/colours_test.py ............................... [ 72%]
az/test/logging_test.py ............ [100%]
5 assertions tested.
</code></pre>
<hr />
<h1>Update</h1>
<p>I have written a suite of assertions functions, and it seems that <code>pytest</code> doesn't count an assertion as passed if it passes out side of a <code>test</code> function.</p>
<p>The 5 assertions I was seeing were because I'd missed one test function and was directly using <code>assert</code>.</p>
<p>So the question has evolved to why does <code>pytest</code> count a passed assertion like this...</p>
<pre class="lang-py prettyprint-override"><code>def test_one():
assert 1 == 1
</code></pre>
<p>but not like this...</p>
<pre class="lang-py prettyprint-override"><code>def assert_equal(expected, actual):
assert expected == actual
def test_two():
assert_equal(2, 2)
</code></pre>
|
<python><pytest>
|
2023-06-14 09:25:04
| 1
| 12,277
|
David Gard
|
76,471,992
| 1,814,420
|
Why do pylint and flake8 raise different messages?
|
<p>Suppose I have this method:</p>
<pre class="lang-py prettyprint-override"><code>def test_method():
#comment here
my_variable=1
print(my_variable)
raise Exception('Exception message')
</code></pre>
<p>When I use <code>pylint</code>, these messages were raised:</p>
<pre><code>test.py:1:0: C0116: Missing function or method docstring (missing-function-docstring)
test.py:5:4: W0719: Raising too general exception: Exception (broad-exception-raised)
</code></pre>
<p>And with <code>flake8</code>, other messages appeared:</p>
<pre><code>test.py:2:5: E265 block comment should start with '# '
test.py:3:16: E225 missing whitespace around operator
</code></pre>
<p>Why do they raise different messages? I expect one of them should raise all above messages, since they are all PEP8 standards.</p>
<p>Did I miss some configuration? Or do I have to use both <code>pylint</code> and <code>flake8</code> in a project?</p>
|
<python><pylint><pep8><flake8>
|
2023-06-14 09:22:27
| 1
| 12,163
|
Triet Doan
|
76,471,880
| 14,720,380
|
gdal2tiles.py: Attempt to create 0x1249 dataset is illegal,sizes must be larger than zero
|
<p>I am trying to convert some weather data into tiles so I can show it on a website. I am downloading the weather data from NOAA using their <a href="https://nomads.ncep.noaa.gov/gribfilter.php?ds=gfs_0p25" rel="nofollow noreferrer">grib filer</a>.</p>
<p>I download the file, convert to tiff, convert to 8bit tiff and then try and convert to tiles with the following commands:</p>
<pre><code>gdal_translate -of GTiff gfs.t00z.pgrb2.0p25.f000 output.tif
gdal_translate -ot Byte output.tif output_8bit.tif
gdal2tiles.py --zoom=4-6 output_8bit.tif tiles
</code></pre>
<p>However I get the error:</p>
<pre><code>RuntimeError: Attempt to create 0x1249 dataset is illegal,sizes must be larger than zero.
</code></pre>
<p>How can I convert this grib file into tiles using gdal?</p>
<p>(for reference, the download url for the grib file I am using is <a href="https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?dir=%2Fgfs.20230614%2F00%2Fatmos&file=gfs.t00z.pgrb2.0p25.f000&var_UGRD=on&lev_10_m_above_ground=on" rel="nofollow noreferrer">https://nomads.ncep.noaa.gov/cgi-bin/filter_gfs_0p25.pl?dir=%2Fgfs.20230614%2F00%2Fatmos&file=gfs.t00z.pgrb2.0p25.f000&var_UGRD=on&lev_10_m_above_ground=on</a> however the url may become invalid if you view this question at a later date).</p>
<p>Also for reference, the full stack trace is:</p>
<pre><code>Traceback (most recent call last):
File "/home/tom/miniconda3/bin/gdal2tiles.py", line 15, in <module>
sys.exit(main(sys.argv))
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo_utils/gdal2tiles.py", line 4527, in main
return submain(argv)
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo_utils/gdal2tiles.py", line 4544, in submain
single_threaded_tiling(input_file, output_folder, options)
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo_utils/gdal2tiles.py", line 4385, in single_threaded_tiling
conf, tile_details = worker_tile_details(input_file, output_folder, options)
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo_utils/gdal2tiles.py", line 4289, in worker_tile_details
gdal2tiles.open_input()
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo_utils/gdal2tiles.py", line 2207, in open_input
self.warped_input_dataset = reproject_dataset(
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo_utils/gdal2tiles.py", line 1102, in reproject_dataset
return gdal.Warp(
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo/gdal.py", line 788, in Warp
return wrapper_GDALWarpDestName(destNameOrDestDS, srcDSTab, opts, callback, callback_data)
File "/home/tom/miniconda3/lib/python3.9/site-packages/osgeo/gdal.py", line 4815, in wrapper_GDALWarpDestName
return _gdal.wrapper_GDALWarpDestName(*args)
RuntimeError: Attempt to create 0x1249 dataset is illegal,sizes must be larger than zero.
</code></pre>
|
<python><gdal>
|
2023-06-14 09:09:13
| 1
| 6,623
|
Tom McLean
|
76,471,697
| 1,045,755
|
Data from FastAPI has no key
|
<p>I am trying to fetch some data from my backend to my frontend. However, in doing so, the data I receive looks something like:</p>
<pre><code>[
0: {"date": "2023-01-01", "id": "2023-01-01_0_GB", "product": "Snickers"},
1: {"date": "2023-01-02", "id": "2023-01-02_0_GB", "product": "Bounty"},
2: {"date": "2023-01-03", "id": "2023-01-03_0_GB", "product": "Twix"},
...
]
</code></pre>
<p>As you can see, the key is just numbers from 0 and up. But I would actually like the key do be the <code>id</code> of each dict.</p>
<p>From my backend I return the data by:</p>
<pre><code>return DataResponse(
df=df.to_dict(orient="records"),
)
</code></pre>
<p>And it doesn't matter if I use <code>reset_index()</code> or not. If I do, that data will not be a part of the data received in the frontend, hence, I always use <code>reset_index()</code> so I have everything.</p>
<p>So yeah, how do I do that?</p>
|
<python><reactjs><fastapi>
|
2023-06-14 08:47:51
| 1
| 2,615
|
Denver Dang
|
76,471,613
| 827,927
|
How to implement in Python an adapter that can handle many input types?
|
<p>I have many functions that solve the same problem in different ways (different algorithms with different properties). The functions were programmed by different people, and each function accepts its input in a different format. As a simplified example, let's say that the input to all functions is a set of items.</p>
<ul>
<li>Function A accepts a dict, where each key is an item name and the value is the number of copies of that item.</li>
<li>Function B accepts a list, where each element is simply the item name, and there is only one copy of each item.</li>
</ul>
<p>I would like to build an adapter function, that allows the user to use both A and B with both a dict and a list. A pseudocode of the adapter could be:</p>
<pre><code>def adapter(func:callable, input:any):
if (input is a dict and func expects a dict):
return func(input)
elif (input is a list and func expects a list):
return func(input)
elif (input is a list and func expects a dict):
return func({key:1 for key in list}) # convert list to dict
elif (input is a dict and func expects a list):
return func(list(input.keys())) # convert dict to list
else:
raise TypeError
</code></pre>
<p>The problem with this pseudocode is that, for <code>n</code> input formats, <code>n^2</code> conditions are required.</p>
<p>Is there a more efficient way to implement such an adapter function?</p>
|
<python><design-patterns>
|
2023-06-14 08:38:07
| 2
| 37,410
|
Erel Segal-Halevi
|
76,471,591
| 292,291
|
How do I get the generic class property in python
|
<p>I have a class like:</p>
<pre><code>class ExampleClass(BaseClass[Someclass]):
pass
class BaseClass(AbstractBaseClass, Generic[T]):
pass
</code></pre>
<p>I want to be able to do something like <code>ExampleClass.targetType</code> where I will return <code>Someclass.__name__</code> how can I do this inside <code>BaseClass</code> I cannot seem to do <code>T.__name__</code></p>
<p>I can workaround by defining a method like</p>
<pre><code>class ExampleClass(BaseClass[Something]):
version = 1
def get_target_class_name() -> str:
return Something.__name__
</code></pre>
<p>But I will need to copy this for every class</p>
|
<python><generics>
|
2023-06-14 08:34:38
| 6
| 89,109
|
Jiew Meng
|
76,471,523
| 972,647
|
dagster: run every hour and pass last execution time to asset
|
<p>I'm new to dagster to please be patient for not fully grasping it yet.</p>
<p>I want to "run an asset" every hour which should be possible with schedules. What i also want is to pass the last execution date into the asset. This is used to fetch only data modified since the last call.</p>
<p>So the issue is 2-fold:</p>
<ul>
<li>How do I store the last execution date?</li>
<li>How do I load the last exectuion date upon execution and pass it to the asset?</li>
</ul>
|
<python><dagster>
|
2023-06-14 08:26:23
| 1
| 7,652
|
beginner_
|
76,471,478
| 1,864,162
|
Spatial join crashes when called from script
|
<p>I have a geopandas spatial join statement that runs fine when performing from Jupyter Notebook.
However, when called from a windows task scheduler script (pointing to a <code>.bat</code> file every morning at 07:00) the logfile I create stops giving output precisely before the call to spatial join.</p>
<pre><code>geodf = geodf[geodf.geometry.notnull()]
print(len(geodf) - before, 'geometries were dropped that had null values')
# spatial join the dataframe with the airspace will 'place' the points in the right airspace
df3 = geodf.sjoin(AirspaceNL, how='left' )
print('spatial join done with AirspaceNL')
</code></pre>
<p>The statement <code>geodf = geodf[geodf.geometry.notnull()]</code> is executed without problems. I can read the 'geometries were dropped ...' etc. in the logfile. Then it stops. Apparently the spatial join of the next statement is the problem? Note that when run directly from Jupyter Notebook, all is fine.</p>
<p>Well... not <em>all</em> is fine. In Jupyter Notebook, I get the following warning with pink background:</p>
<pre><code>C:\Users\bruggenj\AppData\Roaming\Python\Python38\site-packages\pandas\core\reshape\merge.py:1204: RuntimeWarning: invalid value encountered in cast
if not (lk == lk.astype(rk.dtype))[~np.isnan(lk)].all():
</code></pre>
<p>What could possibly be the issue?</p>
|
<python><jupyter-notebook><spatial><geopandas>
|
2023-06-14 08:20:53
| 2
| 303
|
Job Brüggen
|
76,471,350
| 7,658,051
|
community.postgresql.postgresql_query raises `Failed to import the required Python library (psycopg2) on YYY's Python XXX/ansible-venv/bin/python3.7`
|
<p>I am running an Ansible playbook with virtual environment <code>ansible_venv</code> activated.</p>
<p>My playbook get struck here</p>
<pre><code>collect data from DB
localhost failed | msg: Failed to import the required Python library (psycopg2) on YYY's Python XXX/ansible-venv/bin/python3.7. Please read the module documentation and install it in the appropriate location. If the required library is installed, but Ansible is using the wrong Python interpreter, please consult the documentation on ansible_python_interpreter
</code></pre>
<p>where XXX is hidden path.</p>
<p>the task <code>collect data from DB</code> is :</p>
<pre><code>- name: collect data from DB
community.postgresql.postgresql_query:
login_host: '{{ db_host }}'
login_user: '{{ db_username }}'
login_password: '{{ db_password }}'
db: '{{ db_database }}'
port: '{{ db_database_port }}'
query: "SELECT EXISTS (SELECT 1 FROM information_schema.tables WHERE table_name = '{{ my_table }}')"
register: qres
</code></pre>
<p>The machine which runs this playbook does not have access to the internet, so I have manually installed these packages on my machine:</p>
<pre><code>/home/myuser/XXX/ansible-venv/lib/python3.7/site-packages/psycopg2
/home/myuser/XXX/ansible-venv/lib/python3.7/site-packages/psycopg2_binary-2.9.6.dist-info
/home/myuser/XXX/ansible-venv/lib/python3.7/site-packages/psycopg2_binary.libs
</code></pre>
<p>In order to do this, I have downloaded these packages on another machine with direct access to the internet, by using the terminal command <code>pip install psycopg2-binary</code>, then copied the files to the "ansible machine".</p>
<p>Theyr permissions and user are the same of the others modules installed there.</p>
<p>At path <code>/home/myuser/XXX/ansible-venv/lib/</code> I don't have any other interpreter than <code>python3.7</code>.</p>
<p>However, I am still getting the same error.</p>
<h2>update</h2>
<p>The error is raised only on localhost (YYY in my example).</p>
<p>tasks using the <code>community.postgresql.postgresql_query</code> on other machines work.</p>
|
<python><postgresql><ansible><psycopg2><python-3.7>
|
2023-06-14 08:02:34
| 1
| 4,389
|
Tms91
|
76,471,160
| 5,661,667
|
Counting zeros in large numpy arrays without creating them
|
<p>To illustrate the problem I am facing, here is some example code:</p>
<pre><code>a = np.round(np.random.rand(10, 15))
counta = np.count_nonzero(a, axis=-1)
print(counta)
A = np.einsum('im,mj->ijm', a, a.T)
countA = np.count_nonzero(A, axis=-1)
print(countA)
</code></pre>
<p>It creates a 2D array and counts its nonzero elements along the last axis. Then it creates a 3D array, of which the nonzero elements are again counted along the last axis.</p>
<p>My problem is that my array <code>a</code> is so large, that I can perform the first step, but not the second step, since the <code>A</code> array would take up too much memory.</p>
<p><strong>Is there any way to still get <code>countA</code>? That is to count the zeros in A along a given axis without actually creating the array?</strong></p>
|
<python><arrays><numpy><counting><numpy-einsum>
|
2023-06-14 07:38:43
| 1
| 1,321
|
Wolpertinger
|
76,471,139
| 2,412,909
|
I Implemented sklearn's Lasso in tensorflow but not having the same results
|
<p>I have a model implemented in Scikit-learn that is based on <code>Lasso</code>.
Here is the code</p>
<pre><code># Initiaze the hyperparameters for each dictionary
param1 = {}
param1['classifier'] = [LinearRegression()]
param2 = {}
param2['classifier__alpha'] = [0.1, 0.5, 1]
param2['classifier'] = [Ridge()]
param3 = {}
param3['classifier__alpha'] = [0.1, 0.5, 1]
param3['classifier'] = [Lasso()]
param4 = {}
param4['classifier__n_neighbors'] = [2,5,10,25,50]
param4['classifier'] = [KNeighborsRegressor()]
pipeline = pipe = Pipeline(steps=[("scaler", CustomScaler(MEAN,STD)),
("flatten", FlattenTransformer()),
("classifier", LinearRegression())])
params = [param1, param2, param3, param4] # param5
grid_search = GridSearchCV(pipeline, params, cv=3, scoring='neg_mean_squared_error').fit(train_images,train_biomasses)
grid_search.best_params_
# {'classifier': Lasso(alpha=0.1), 'classifier__alpha': 0.1}
</code></pre>
<p>Here is the code I wrote</p>
<pre class="lang-py prettyprint-override"><code>from functools import partial
def build_model():
# RegularizedDense = partial(
# tf.keras.layers.Dense,
# activation="relu",
# kernel_initializer="he_normal",
# kernel_regularizer=tf.keras.regularizers.l1(1.0)
# )
model = tf.keras.Sequential([
tf.keras.Input(shape=(2700,), name="inputs"),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(1, kernel_initializer="he_normal",
kernel_regularizer=tf.keras.regularizers.l1(0.1))
])
return model
def exponential_decay(lr0, s):
def exponential_decay_fn(epoch):
return lr0 * 0.1 ** (epoch/s)
return exponential_decay_fn
def build_optimizer(learning_rate=1e-4, decay=None):
# optimizer = tf.keras.optimizers.SGD(learning_rate=learning_rate, momentum=0.9)
optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rate)
return optimizer
def build_losses_metrics():
loss_fn = tf.keras.losses.MeanSquaredError()
train_rmse_metric = tf.keras.metrics.RootMeanSquaredError()
val_rmse_metric = tf.keras.metrics.RootMeanSquaredError()
return (loss_fn, (train_rmse_metric, val_rmse_metric))
def build_datasets(batch_size=32):
train_dataset = tf.data.Dataset.from_tensor_slices((train_images, train_biomasses))
train_dataset = train_dataset.shuffle(512).batch(batch_size)
validate_dataset = tf.data.Dataset.from_tensor_slices((validate_images, validate_biomasses))
validate_dataset = validate_dataset.batch(batch_size)
return train_dataset, validate_dataset
def train(
model,
optimizer,
train_dataset, validation_dataset,
epochs=50,
lr=1e-3,
lr_decay=False,
lr_step=10
):
loss_fn, (train_rmse_metric, val_rmse_metric) = build_losses_metrics()
train_losses = []
val_losses = []
learning_rate_decay_fn = exponential_decay(lr, lr_step)
current_lr = lr
for epoch in range(epochs):
if lr_decay:
learning_rate = learning_rate_decay_fn(epoch)
optimizer.learning_rate.assign(learning_rate)
loss_epoch = 0.
for batch, (x_batch_train, y_batch_train) in enumerate(train_dataset):
with tf.GradientTape() as tape:
y_ = model(x_batch_train, training=True)
loss_batch = loss_fn(y_batch_train, y_)
loss_epoch += loss_batch
grads = tape.gradient(loss_batch, model.trainable_weights)
optimizer.apply_gradients(zip(grads, model.trainable_weights))
train_losses.append(tf.math.reduce_mean(loss_epoch))
val_loss_epoch = 0.
for x_batch_val, y_batch_val in validation_dataset:
val_y_ = model(x_batch_val, training=False)
val_loss_batch = loss_fn(y_batch_val, val_y_)
val_loss_epoch += val_loss_batch
val_losses.append(tf.math.reduce_mean(val_loss_epoch))
print(f"Epoch {epoch}: loss: {train_losses[-1]} - val_loss: {val_losses[-1]}")
return train_losses, val_losses
def run(epochs=50, learning_rate=1e-4, decay=1e-4):
tf.keras.backend.clear_session()
np.random.seed(1992)
tf.random.set_seed(1992)
model = build_model()
optimizer = build_optimizer(learning_rate, decay=decay)
train_dataset, validation_dataset = build_datasets(batch_size=32)
return train(model=model,
optimizer=optimizer,
train_dataset=train_dataset,
validation_dataset=validation_dataset,
lr=learning_rate,
epochs=epochs)
epochs = 400
learning_rate=1e-3
train_losses, validation_losses = run(epochs=epochs, learning_rate=learning_rate)
</code></pre>
<p>Here is the result of the execution</p>
<pre><code>Epoch 0: loss: 847.075927734375 - val_loss: 158.7601776123047
Epoch 1: loss: 584.6080322265625 - val_loss: 159.98387145996094
Epoch 2: loss: 566.870361328125 - val_loss: 116.31696319580078
Epoch 3: loss: 563.249267578125 - val_loss: 120.13416290283203
Epoch 4: loss: 551.9866333007812 - val_loss: 106.21666717529297
Epoch 5: loss: 545.043701171875 - val_loss: 106.00625610351562
Epoch 6: loss: 556.5231323242188 - val_loss: 139.84605407714844
Epoch 7: loss: 563.5502319335938 - val_loss: 119.96963500976562
Epoch 8: loss: 547.5770874023438 - val_loss: 138.95558166503906
Epoch 9: loss: 547.4338989257812 - val_loss: 113.98006439208984
Epoch 10: loss: 542.088134765625 - val_loss: 108.67926788330078
Epoch 11: loss: 536.9991455078125 - val_loss: 111.34986877441406
Epoch 12: loss: 539.1815795898438 - val_loss: 106.76664733886719
Epoch 13: loss: 529.169677734375 - val_loss: 116.39300537109375
Epoch 14: loss: 549.8356323242188 - val_loss: 111.42301940917969
Epoch 15: loss: 531.5046997070312 - val_loss: 121.7109603881836
Epoch 16: loss: 532.90869140625 - val_loss: 110.92386627197266
Epoch 17: loss: 527.7339477539062 - val_loss: 131.94984436035156
Epoch 18: loss: 519.3847045898438 - val_loss: 137.0780487060547
Epoch 19: loss: 522.2667236328125 - val_loss: 114.2005615234375
Epoch 20: loss: 515.1701049804688 - val_loss: 104.97427368164062
Epoch 21: loss: 523.9987182617188 - val_loss: 111.52166748046875
Epoch 22: loss: 511.7991638183594 - val_loss: 107.33300018310547
Epoch 23: loss: 511.7978210449219 - val_loss: 106.44991302490234
Epoch 24: loss: 517.2681274414062 - val_loss: 116.37968444824219
Epoch 25: loss: 507.038330078125 - val_loss: 111.16574096679688
Epoch 26: loss: 507.4341735839844 - val_loss: 108.5770492553711
Epoch 27: loss: 492.9891662597656 - val_loss: 124.1153793334961
Epoch 28: loss: 503.1880187988281 - val_loss: 101.68212127685547
Epoch 29: loss: 509.0335388183594 - val_loss: 100.03245544433594
Epoch 30: loss: 503.455322265625 - val_loss: 100.5913314819336
Epoch 31: loss: 497.4955749511719 - val_loss: 127.98064422607422
Epoch 32: loss: 489.5680236816406 - val_loss: 102.95386505126953
Epoch 33: loss: 497.2449645996094 - val_loss: 114.24052429199219
Epoch 34: loss: 496.8539733886719 - val_loss: 108.60224914550781
Epoch 35: loss: 493.4102783203125 - val_loss: 99.18386840820312
Epoch 36: loss: 499.2474060058594 - val_loss: 107.8803939819336
Epoch 37: loss: 481.9783935546875 - val_loss: 108.40951538085938
Epoch 38: loss: 492.92291259765625 - val_loss: 103.66437530517578
Epoch 39: loss: 485.2967529296875 - val_loss: 104.97471618652344
Epoch 40: loss: 483.46575927734375 - val_loss: 115.77208709716797
Epoch 41: loss: 483.249755859375 - val_loss: 151.1378631591797
Epoch 42: loss: 488.1317138671875 - val_loss: 115.21915435791016
Epoch 43: loss: 483.76031494140625 - val_loss: 116.80110931396484
Epoch 44: loss: 487.7470397949219 - val_loss: 110.9793930053711
Epoch 45: loss: 481.63507080078125 - val_loss: 108.36771392822266
Epoch 46: loss: 482.9681701660156 - val_loss: 103.23908996582031
Epoch 47: loss: 477.44970703125 - val_loss: 106.9701156616211
Epoch 48: loss: 480.35711669921875 - val_loss: 109.85052490234375
Epoch 49: loss: 476.7109680175781 - val_loss: 107.3569107055664
Epoch 50: loss: 474.3595275878906 - val_loss: 105.09278869628906
Epoch 51: loss: 477.8955383300781 - val_loss: 99.09465026855469
Epoch 52: loss: 472.75164794921875 - val_loss: 105.4655532836914
Epoch 53: loss: 476.35858154296875 - val_loss: 110.5615005493164
Epoch 54: loss: 472.6387939453125 - val_loss: 102.34373474121094
Epoch 55: loss: 469.86981201171875 - val_loss: 101.6429214477539
Epoch 56: loss: 471.7933044433594 - val_loss: 99.83200073242188
Epoch 57: loss: 464.2228088378906 - val_loss: 100.06639862060547
Epoch 58: loss: 475.8570861816406 - val_loss: 100.1411361694336
Epoch 59: loss: 469.5304260253906 - val_loss: 106.68754577636719
Epoch 60: loss: 475.5203857421875 - val_loss: 100.88705444335938
Epoch 61: loss: 470.883544921875 - val_loss: 110.94203186035156
Epoch 62: loss: 467.8878479003906 - val_loss: 101.2822494506836
Epoch 63: loss: 466.05792236328125 - val_loss: 100.43091583251953
Epoch 64: loss: 467.42303466796875 - val_loss: 101.63124084472656
Epoch 65: loss: 467.1925048828125 - val_loss: 110.7307357788086
Epoch 66: loss: 467.52593994140625 - val_loss: 99.30562591552734
Epoch 67: loss: 463.7406921386719 - val_loss: 100.6451644897461
Epoch 68: loss: 465.1089782714844 - val_loss: 100.5382080078125
Epoch 69: loss: 465.9606628417969 - val_loss: 110.55416107177734
Epoch 70: loss: 463.6502990722656 - val_loss: 105.38463592529297
Epoch 71: loss: 463.7268371582031 - val_loss: 98.45337677001953
Epoch 72: loss: 468.327392578125 - val_loss: 102.80609893798828
Epoch 73: loss: 460.4519958496094 - val_loss: 104.26699829101562
Epoch 74: loss: 464.2290344238281 - val_loss: 98.02379608154297
Epoch 75: loss: 459.0707092285156 - val_loss: 110.16661071777344
Epoch 76: loss: 463.8221435546875 - val_loss: 102.0076904296875
Epoch 77: loss: 457.8291931152344 - val_loss: 97.28372192382812
Epoch 78: loss: 464.15667724609375 - val_loss: 105.54214477539062
Epoch 79: loss: 460.37872314453125 - val_loss: 97.70496368408203
Epoch 80: loss: 460.96063232421875 - val_loss: 102.3382568359375
Epoch 81: loss: 459.4374084472656 - val_loss: 97.27777862548828
Epoch 82: loss: 458.4649658203125 - val_loss: 98.91461944580078
Epoch 83: loss: 459.7217102050781 - val_loss: 99.78578186035156
Epoch 84: loss: 460.1854248046875 - val_loss: 97.59530639648438
Epoch 85: loss: 458.0945739746094 - val_loss: 98.57728576660156
Epoch 86: loss: 458.51641845703125 - val_loss: 102.36064147949219
Epoch 87: loss: 457.6965637207031 - val_loss: 98.78395080566406
Epoch 88: loss: 465.5695495605469 - val_loss: 105.8673324584961
Epoch 89: loss: 456.6957702636719 - val_loss: 99.67933654785156
Epoch 90: loss: 456.6771545410156 - val_loss: 98.16921997070312
Epoch 91: loss: 459.624755859375 - val_loss: 109.00318145751953
Epoch 92: loss: 455.58251953125 - val_loss: 98.51361846923828
Epoch 93: loss: 463.1281433105469 - val_loss: 97.90188598632812
Epoch 94: loss: 455.00921630859375 - val_loss: 99.96898651123047
Epoch 95: loss: 449.8885192871094 - val_loss: 96.98976135253906
Epoch 96: loss: 460.0137023925781 - val_loss: 96.2021255493164
Epoch 97: loss: 456.7822265625 - val_loss: 99.29813385009766
Epoch 98: loss: 456.98211669921875 - val_loss: 98.63404846191406
Epoch 99: loss: 453.4152526855469 - val_loss: 97.25257873535156
Epoch 100: loss: 454.2278747558594 - val_loss: 97.14910125732422
Epoch 101: loss: 453.0674133300781 - val_loss: 97.72257995605469
Epoch 102: loss: 457.1690979003906 - val_loss: 100.79244995117188
Epoch 103: loss: 457.0799255371094 - val_loss: 96.21158599853516
Epoch 104: loss: 450.21173095703125 - val_loss: 100.99701690673828
Epoch 105: loss: 452.1216125488281 - val_loss: 98.9791488647461
Epoch 106: loss: 452.4648132324219 - val_loss: 98.90581512451172
Epoch 107: loss: 454.3359069824219 - val_loss: 100.54856872558594
Epoch 108: loss: 457.5542297363281 - val_loss: 105.72451782226562
Epoch 109: loss: 453.9736328125 - val_loss: 97.04446411132812
Epoch 110: loss: 452.2619323730469 - val_loss: 97.46793365478516
Epoch 111: loss: 455.6089782714844 - val_loss: 100.0567855834961
Epoch 112: loss: 455.6871032714844 - val_loss: 99.18238830566406
Epoch 113: loss: 450.9131164550781 - val_loss: 98.396240234375
Epoch 114: loss: 451.55987548828125 - val_loss: 101.84715270996094
Epoch 115: loss: 455.97705078125 - val_loss: 99.65447998046875
Epoch 116: loss: 452.658935546875 - val_loss: 97.8386001586914
Epoch 117: loss: 451.8746643066406 - val_loss: 96.30899047851562
Epoch 118: loss: 454.3466796875 - val_loss: 106.23787689208984
Epoch 119: loss: 452.0690612792969 - val_loss: 99.08953857421875
Epoch 120: loss: 452.4654846191406 - val_loss: 97.02693176269531
Epoch 121: loss: 449.90234375 - val_loss: 97.01411437988281
Epoch 122: loss: 453.8165283203125 - val_loss: 100.29322052001953
Epoch 123: loss: 451.3995666503906 - val_loss: 96.72403717041016
Epoch 124: loss: 451.07806396484375 - val_loss: 101.33114624023438
Epoch 125: loss: 452.2001647949219 - val_loss: 96.57894134521484
Epoch 126: loss: 452.9191589355469 - val_loss: 98.23367309570312
Epoch 127: loss: 449.3941345214844 - val_loss: 99.61105346679688
Epoch 128: loss: 452.7442626953125 - val_loss: 98.24333953857422
Epoch 129: loss: 452.52923583984375 - val_loss: 98.70343780517578
Epoch 130: loss: 450.1275329589844 - val_loss: 96.91618347167969
Epoch 131: loss: 452.13299560546875 - val_loss: 95.6628189086914
Epoch 132: loss: 449.8660583496094 - val_loss: 97.2622299194336
Epoch 133: loss: 449.65887451171875 - val_loss: 97.45806884765625
Epoch 134: loss: 449.7442626953125 - val_loss: 96.74287414550781
Epoch 135: loss: 452.2209167480469 - val_loss: 96.0275650024414
Epoch 136: loss: 450.0367431640625 - val_loss: 98.23724365234375
Epoch 137: loss: 448.8655700683594 - val_loss: 98.300537109375
Epoch 138: loss: 451.7554016113281 - val_loss: 101.69792938232422
Epoch 139: loss: 454.3482971191406 - val_loss: 97.0973129272461
Epoch 140: loss: 451.4029235839844 - val_loss: 96.46884155273438
Epoch 141: loss: 450.0864562988281 - val_loss: 97.57160949707031
Epoch 142: loss: 452.6539611816406 - val_loss: 98.06654357910156
Epoch 143: loss: 452.9635009765625 - val_loss: 96.98999786376953
Epoch 144: loss: 449.375732421875 - val_loss: 97.08302307128906
Epoch 145: loss: 455.64300537109375 - val_loss: 98.29078674316406
Epoch 146: loss: 449.7545166015625 - val_loss: 97.31458282470703
Epoch 147: loss: 448.3454895019531 - val_loss: 97.41017150878906
Epoch 148: loss: 451.1374206542969 - val_loss: 96.47671508789062
Epoch 149: loss: 450.7874450683594 - val_loss: 95.87786102294922
Epoch 150: loss: 451.97589111328125 - val_loss: 95.6712646484375
Epoch 151: loss: 450.7940673828125 - val_loss: 99.03018951416016
Epoch 152: loss: 448.3888244628906 - val_loss: 96.31082153320312
Epoch 153: loss: 451.3653259277344 - val_loss: 96.55095672607422
Epoch 154: loss: 449.8733215332031 - val_loss: 97.61719512939453
Epoch 155: loss: 448.97149658203125 - val_loss: 95.97036743164062
Epoch 156: loss: 452.4737854003906 - val_loss: 97.90641021728516
Epoch 157: loss: 448.4120178222656 - val_loss: 97.59449005126953
Epoch 158: loss: 448.980712890625 - val_loss: 99.00843811035156
Epoch 159: loss: 449.7715148925781 - val_loss: 98.71858978271484
Epoch 160: loss: 450.5384521484375 - val_loss: 100.6188735961914
Epoch 161: loss: 451.2767639160156 - val_loss: 97.51863861083984
Epoch 162: loss: 449.3151550292969 - val_loss: 97.367919921875
Epoch 163: loss: 452.6103820800781 - val_loss: 97.79457092285156
Epoch 164: loss: 448.2057189941406 - val_loss: 96.92853546142578
Epoch 165: loss: 449.69683837890625 - val_loss: 96.72354888916016
Epoch 166: loss: 450.1744079589844 - val_loss: 96.32561492919922
Epoch 167: loss: 450.008056640625 - val_loss: 96.28145599365234
Epoch 168: loss: 451.03887939453125 - val_loss: 96.93020629882812
Epoch 169: loss: 449.4725646972656 - val_loss: 96.45865631103516
Epoch 170: loss: 450.4540100097656 - val_loss: 95.40909576416016
Epoch 171: loss: 450.4195251464844 - val_loss: 96.52227783203125
Epoch 172: loss: 450.5582275390625 - val_loss: 97.4793472290039
Epoch 173: loss: 449.9660949707031 - val_loss: 95.69512939453125
Epoch 174: loss: 446.5926818847656 - val_loss: 96.79833984375
Epoch 175: loss: 449.91522216796875 - val_loss: 95.8263168334961
Epoch 176: loss: 447.26873779296875 - val_loss: 98.99745178222656
Epoch 177: loss: 450.3222961425781 - val_loss: 96.22772216796875
Epoch 178: loss: 448.63177490234375 - val_loss: 98.7435531616211
Epoch 179: loss: 446.5481262207031 - val_loss: 97.00751495361328
Epoch 180: loss: 450.5780334472656 - val_loss: 97.88690948486328
Epoch 181: loss: 449.1116638183594 - val_loss: 97.2146987915039
Epoch 182: loss: 450.4496154785156 - val_loss: 96.44254302978516
Epoch 183: loss: 446.32012939453125 - val_loss: 99.34430694580078
Epoch 184: loss: 446.66485595703125 - val_loss: 96.51043701171875
Epoch 185: loss: 446.88909912109375 - val_loss: 97.46516418457031
Epoch 186: loss: 447.5033874511719 - val_loss: 96.38636779785156
Epoch 187: loss: 448.890625 - val_loss: 96.17085266113281
Epoch 188: loss: 448.4964904785156 - val_loss: 99.32489776611328
Epoch 189: loss: 449.4647521972656 - val_loss: 95.74967193603516
Epoch 190: loss: 450.39349365234375 - val_loss: 95.82449340820312
Epoch 191: loss: 448.1368713378906 - val_loss: 98.2160415649414
Epoch 192: loss: 450.2295227050781 - val_loss: 96.32608032226562
Epoch 193: loss: 446.3965759277344 - val_loss: 99.42900085449219
Epoch 194: loss: 449.540283203125 - val_loss: 96.3811264038086
Epoch 195: loss: 450.3630065917969 - val_loss: 96.3318099975586
Epoch 196: loss: 446.8701171875 - val_loss: 95.79061126708984
Epoch 197: loss: 449.6808776855469 - val_loss: 96.84124755859375
Epoch 198: loss: 448.4356384277344 - val_loss: 96.3101806640625
Epoch 199: loss: 448.7823791503906 - val_loss: 96.1176528930664
Epoch 200: loss: 449.4550476074219 - val_loss: 96.64998626708984
Epoch 201: loss: 450.10888671875 - val_loss: 95.34820556640625
Epoch 202: loss: 449.8490295410156 - val_loss: 95.7176742553711
Epoch 203: loss: 449.1422119140625 - val_loss: 96.12824249267578
Epoch 204: loss: 448.55694580078125 - val_loss: 97.94233703613281
Epoch 205: loss: 450.0185546875 - val_loss: 96.29656982421875
Epoch 206: loss: 448.07818603515625 - val_loss: 96.52961730957031
Epoch 207: loss: 447.2069091796875 - val_loss: 97.3646469116211
Epoch 208: loss: 448.92919921875 - val_loss: 95.40940856933594
Epoch 209: loss: 449.58251953125 - val_loss: 98.36994171142578
Epoch 210: loss: 452.70086669921875 - val_loss: 96.01040649414062
Epoch 211: loss: 449.98699951171875 - val_loss: 96.78478240966797
Epoch 212: loss: 449.91546630859375 - val_loss: 96.15846252441406
Epoch 213: loss: 450.1114807128906 - val_loss: 96.40387725830078
Epoch 214: loss: 449.07830810546875 - val_loss: 97.43226623535156
Epoch 215: loss: 449.9971008300781 - val_loss: 96.98865509033203
Epoch 216: loss: 449.01336669921875 - val_loss: 95.88896942138672
Epoch 217: loss: 449.2881774902344 - val_loss: 97.53218078613281
Epoch 218: loss: 448.5071716308594 - val_loss: 95.47907257080078
Epoch 219: loss: 447.61895751953125 - val_loss: 96.29920196533203
Epoch 220: loss: 449.47625732421875 - val_loss: 97.40425109863281
Epoch 221: loss: 448.6600036621094 - val_loss: 101.34588623046875
Epoch 222: loss: 449.4443054199219 - val_loss: 96.1139907836914
Epoch 223: loss: 448.8155517578125 - val_loss: 98.19176483154297
Epoch 224: loss: 448.0467834472656 - val_loss: 98.63390350341797
Epoch 225: loss: 449.181396484375 - val_loss: 96.06834411621094
Epoch 226: loss: 448.3404846191406 - val_loss: 98.01530456542969
Epoch 227: loss: 447.5462646484375 - val_loss: 95.62828826904297
Epoch 228: loss: 449.4240417480469 - val_loss: 95.88542175292969
Epoch 229: loss: 451.1175231933594 - val_loss: 96.28475189208984
Epoch 230: loss: 448.94256591796875 - val_loss: 96.25310516357422
Epoch 231: loss: 449.68048095703125 - val_loss: 97.33654022216797
Epoch 232: loss: 448.2848815917969 - val_loss: 96.2500228881836
Epoch 233: loss: 449.83624267578125 - val_loss: 96.42926025390625
Epoch 234: loss: 449.0020751953125 - val_loss: 95.73167419433594
Epoch 235: loss: 448.7664489746094 - val_loss: 97.0711898803711
Epoch 236: loss: 446.8410949707031 - val_loss: 95.59280395507812
Epoch 237: loss: 448.98236083984375 - val_loss: 95.69206237792969
Epoch 238: loss: 450.6343994140625 - val_loss: 98.03248596191406
Epoch 239: loss: 448.12982177734375 - val_loss: 96.1108169555664
Epoch 240: loss: 449.7079772949219 - val_loss: 98.2567138671875
Epoch 241: loss: 447.1879577636719 - val_loss: 96.62137603759766
Epoch 242: loss: 447.3613586425781 - val_loss: 96.51026153564453
Epoch 243: loss: 448.68011474609375 - val_loss: 98.67510986328125
Epoch 244: loss: 450.47894287109375 - val_loss: 96.25545501708984
Epoch 245: loss: 445.7900390625 - val_loss: 95.72764587402344
Epoch 246: loss: 446.232421875 - val_loss: 95.74934387207031
Epoch 247: loss: 448.12860107421875 - val_loss: 97.63236999511719
Epoch 248: loss: 448.5235595703125 - val_loss: 95.3633804321289
Epoch 249: loss: 447.5834045410156 - val_loss: 99.32067108154297
Epoch 250: loss: 447.3578186035156 - val_loss: 98.52676391601562
Epoch 251: loss: 448.1993103027344 - val_loss: 97.82453918457031
Epoch 252: loss: 447.4939880371094 - val_loss: 95.96961212158203
Epoch 253: loss: 445.65338134765625 - val_loss: 97.19148254394531
Epoch 254: loss: 448.2876892089844 - val_loss: 95.78370666503906
Epoch 255: loss: 448.3876647949219 - val_loss: 100.29200744628906
Epoch 256: loss: 451.384521484375 - val_loss: 95.96392059326172
Epoch 257: loss: 448.0782470703125 - val_loss: 95.96009826660156
Epoch 258: loss: 447.6029052734375 - val_loss: 97.1167984008789
Epoch 259: loss: 449.1065368652344 - val_loss: 99.19495391845703
Epoch 260: loss: 448.4313659667969 - val_loss: 95.46548461914062
Epoch 261: loss: 447.4554748535156 - val_loss: 96.4761734008789
Epoch 262: loss: 448.15673828125 - val_loss: 96.36026000976562
Epoch 263: loss: 447.81134033203125 - val_loss: 97.19379425048828
Epoch 264: loss: 447.1545104980469 - val_loss: 95.80008697509766
Epoch 265: loss: 448.50030517578125 - val_loss: 96.77977752685547
Epoch 266: loss: 447.1940002441406 - val_loss: 99.17420959472656
Epoch 267: loss: 447.97357177734375 - val_loss: 97.48926544189453
Epoch 268: loss: 448.6673583984375 - val_loss: 95.76939392089844
Epoch 269: loss: 448.72479248046875 - val_loss: 96.87129211425781
Epoch 270: loss: 448.6628723144531 - val_loss: 96.73013305664062
Epoch 271: loss: 448.1092224121094 - val_loss: 96.96997833251953
Epoch 272: loss: 450.6113586425781 - val_loss: 99.54457092285156
Epoch 273: loss: 447.33380126953125 - val_loss: 96.0779800415039
Epoch 274: loss: 448.7845458984375 - val_loss: 96.4703140258789
Epoch 275: loss: 448.4481506347656 - val_loss: 96.36518859863281
Epoch 276: loss: 449.53863525390625 - val_loss: 96.36363220214844
Epoch 277: loss: 449.01904296875 - val_loss: 95.36627197265625
Epoch 278: loss: 448.4521179199219 - val_loss: 96.10830688476562
Epoch 279: loss: 448.6177673339844 - val_loss: 96.2638931274414
Epoch 280: loss: 447.91741943359375 - val_loss: 96.5622787475586
Epoch 281: loss: 447.18426513671875 - val_loss: 96.10635375976562
Epoch 282: loss: 450.2124328613281 - val_loss: 95.98343658447266
Epoch 283: loss: 445.8919982910156 - val_loss: 96.40095520019531
Epoch 284: loss: 448.9225158691406 - val_loss: 99.08332061767578
Epoch 285: loss: 451.8327331542969 - val_loss: 96.44686126708984
Epoch 286: loss: 450.9258117675781 - val_loss: 96.49341583251953
Epoch 287: loss: 449.3422546386719 - val_loss: 96.29725646972656
Epoch 288: loss: 445.6938171386719 - val_loss: 98.02930450439453
Epoch 289: loss: 445.10467529296875 - val_loss: 95.53852844238281
Epoch 290: loss: 447.1038818359375 - val_loss: 97.08586120605469
Epoch 291: loss: 449.4007568359375 - val_loss: 96.09859466552734
Epoch 292: loss: 447.41497802734375 - val_loss: 96.6626205444336
Epoch 293: loss: 448.1705627441406 - val_loss: 96.74125671386719
Epoch 294: loss: 448.3666076660156 - val_loss: 97.35401153564453
Epoch 295: loss: 450.4263610839844 - val_loss: 95.55182647705078
Epoch 296: loss: 446.4584655761719 - val_loss: 96.44827270507812
Epoch 297: loss: 448.71038818359375 - val_loss: 97.3622055053711
Epoch 298: loss: 450.2142028808594 - val_loss: 96.57585144042969
Epoch 299: loss: 447.4092102050781 - val_loss: 99.58077239990234
Epoch 300: loss: 447.806884765625 - val_loss: 101.20033264160156
Epoch 301: loss: 449.25482177734375 - val_loss: 96.4958724975586
Epoch 302: loss: 446.1800537109375 - val_loss: 96.3758316040039
Epoch 303: loss: 447.6533508300781 - val_loss: 97.42721557617188
Epoch 304: loss: 447.7839660644531 - val_loss: 96.5372543334961
Epoch 305: loss: 447.0205078125 - val_loss: 97.30355072021484
Epoch 306: loss: 447.7018127441406 - val_loss: 96.26265716552734
Epoch 307: loss: 446.2525939941406 - val_loss: 95.49018859863281
Epoch 308: loss: 449.7401123046875 - val_loss: 95.94829559326172
Epoch 309: loss: 450.55377197265625 - val_loss: 95.80023193359375
Epoch 340: loss: 447.4643249511719 - val_loss: 97.12619018554688
Epoch 341: loss: 449.16058349609375 - val_loss: 96.62779235839844
Epoch 342: loss: 448.22314453125 - val_loss: 96.62793731689453
Epoch 347: loss: 448.8013610839844 - val_loss: 95.95989227294922
Epoch 348: loss: 448.94854736328125 - val_loss: 97.42015075683594
Epoch 349: loss: 449.9473876953125 - val_loss: 98.4897689819336
Epoch 350: loss: 449.37835693359375 - val_loss: 95.75738525390625
Epoch 351: loss: 448.22723388671875 - val_loss: 95.62594604492188
Epoch 352: loss: 448.29669189453125 - val_loss: 96.71765899658203
Epoch 353: loss: 449.94085693359375 - val_loss: 96.61383056640625
Epoch 354: loss: 449.84027099609375 - val_loss: 95.7657699584961
Epoch 355: loss: 447.8819885253906 - val_loss: 95.76840209960938
Epoch 356: loss: 447.8974609375 - val_loss: 95.89303588867188
Epoch 357: loss: 447.2488708496094 - val_loss: 96.25495147705078
Epoch 358: loss: 446.8042907714844 - val_loss: 95.79498291015625
Epoch 359: loss: 449.310302734375 - val_loss: 97.06275939941406
Epoch 360: loss: 447.7292785644531 - val_loss: 96.94458770751953
Epoch 361: loss: 450.7572326660156 - val_loss: 96.6223373413086
Epoch 362: loss: 448.0852966308594 - val_loss: 97.0022201538086
Epoch 363: loss: 447.17364501953125 - val_loss: 95.81920623779297
Epoch 364: loss: 448.5683898925781 - val_loss: 96.20494079589844
Epoch 365: loss: 447.81427001953125 - val_loss: 96.55271911621094
Epoch 366: loss: 448.688720703125 - val_loss: 95.55145263671875
Epoch 367: loss: 448.43988037109375 - val_loss: 97.6124038696289
Epoch 368: loss: 447.0206298828125 - val_loss: 95.83692932128906
Epoch 369: loss: 448.39703369140625 - val_loss: 98.09520721435547
Epoch 370: loss: 448.4558410644531 - val_loss: 95.79071807861328
Epoch 371: loss: 449.0579833984375 - val_loss: 95.97594451904297
Epoch 372: loss: 449.515869140625 - val_loss: 95.78282165527344
Epoch 373: loss: 448.605224609375 - val_loss: 96.56583404541016
Epoch 374: loss: 449.01422119140625 - val_loss: 95.94359588623047
Epoch 375: loss: 446.36358642578125 - val_loss: 96.34536743164062
Epoch 376: loss: 447.9954833984375 - val_loss: 97.00811767578125
Epoch 377: loss: 446.4378356933594 - val_loss: 97.61347961425781
Epoch 378: loss: 448.4219970703125 - val_loss: 96.28117370605469
Epoch 379: loss: 448.88427734375 - val_loss: 96.4834976196289
Epoch 380: loss: 447.9678955078125 - val_loss: 95.76192474365234
Epoch 381: loss: 447.5173645019531 - val_loss: 95.57508850097656
Epoch 382: loss: 450.2215881347656 - val_loss: 95.90723419189453
Epoch 383: loss: 446.4511413574219 - val_loss: 96.56519317626953
Epoch 384: loss: 448.1116027832031 - val_loss: 95.84264373779297
Epoch 385: loss: 446.3388977050781 - val_loss: 97.37928771972656
Epoch 386: loss: 446.2598876953125 - val_loss: 97.44355773925781
Epoch 387: loss: 448.97418212890625 - val_loss: 95.93624114990234
Epoch 388: loss: 448.1927490234375 - val_loss: 95.74329376220703
Epoch 389: loss: 449.95098876953125 - val_loss: 97.33592224121094
Epoch 390: loss: 448.2254333496094 - val_loss: 96.584228515625
Epoch 391: loss: 448.4384460449219 - val_loss: 96.26628875732422
Epoch 392: loss: 450.11444091796875 - val_loss: 95.92509460449219
Epoch 393: loss: 447.34954833984375 - val_loss: 95.9112319946289
Epoch 394: loss: 450.5757751464844 - val_loss: 96.1070785522461
Epoch 395: loss: 449.10260009765625 - val_loss: 96.11609649658203
Epoch 396: loss: 446.2992248535156 - val_loss: 95.81678009033203
Epoch 397: loss: 448.07379150390625 - val_loss: 98.16239929199219
Epoch 398: loss: 447.3256530761719 - val_loss: 98.20945739746094
Epoch 399: loss: 447.22650146484375 - val_loss: 96.81021118164062
</code></pre>
<p>Here are the charts
<a href="https://i.sstatic.net/wFYHe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wFYHe.png" alt="Losses vs Epochs" /></a>
<a href="https://i.sstatic.net/8BMzW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8BMzW.png" alt="enter image description here" /></a></p>
<p>The issue I have is that the validation has never been below <strong>94</strong>.
I don't know how to reduce it.</p>
<p>Please, any clue to reduce the validation loss?</p>
|
<python><tensorflow><scikit-learn><neural-network><tensorflow2.0>
|
2023-06-14 07:36:44
| 0
| 400
|
Mael Fosso
|
76,471,032
| 726,730
|
Python order list by datetime then by duration
|
<p>I have a list like:</p>
<pre class="lang-py prettyprint-override"><code>a = [
{
"dt_1":datetime.datetime(...),
"duration_milliseconds":<int>
},
{
"dt_1":datetime.datetime(...),
"duration_milliseconds":<int>
},
{
"dt_1":datetime.datetime(...),
"duration_milliseconds":<int>
},
{
"dt_1":datetime.datetime(...),
"duration_milliseconds":<int>
}
]
</code></pre>
<p>I want to sort this list this way: First sort by dt_1 (older datetimes first), then if some dt_1's are the same sort by duration_milliseconds (little durations first).</p>
<p>Thanks in advance,</p>
<p>Chris Pappas</p>
|
<python><sorting><datetime>
|
2023-06-14 07:21:57
| 2
| 2,427
|
Chris P
|
76,470,952
| 836,674
|
More pythonic way to replace empty strings in all dataframe except one column
|
<p>Example dataframe only -</p>
<pre><code>data = {'Header One' : [' ','assda','thisisatest','20230606'],
'Header Two' : ['xx ','hahah','thisistest','20230706'],
'Header Three' : [' ','a--s==sda',' is a test','20230603'],
'Header Four': [' xx2', '\n', 'thisiatest', '20230609']}
</code></pre>
<p>I can replace the 'empty' values in this dataframe using</p>
<pre><code>df=pd.DataFrame(data=data)
for col in df.loc[:, df.columns != "Header Two"]:
df[col].replace(r'^\s*$',np.nan,regex=True,inplace=True)
</code></pre>
<p>However, I would prefer to be able to do this without the loop but do not seem to be able to figure out how or if this is possible.</p>
<p>I have tried using concat with a drop and replace but this doesn't seem to have the desired results.</p>
<pre><code>pd.concat([df.pop('Header Two'), df.replace(r'^\s*$',np.nan,regex=True,inplace=True)])
</code></pre>
|
<python><pandas>
|
2023-06-14 07:11:01
| 2
| 909
|
Enchanterkeiby
|
76,470,779
| 5,080,177
|
How to understand the following fancy index behaviour for multi-dimensional arrays?
|
<p>We noticed that the mixed usage of fancy indexing and slicing is so confusing and undocumented for multi-dimensional arrays, for example:</p>
<pre class="lang-py prettyprint-override"><code>In [114]: x = np.arange(720).reshape((2,3,4,5,6))
In [115]: x[:,:,:,0,[0,1,2,4,5]].shape
Out[115]: (2, 3, 4, 5)
In [116]: x[:,:,0,:,[0,1,2,4,5]].shape
Out[116]: (5, 2, 3, 5)
</code></pre>
<p>I have read the usage of fancy indexing on <a href="https://numpy.org/doc/stable/user/basics.indexing.html" rel="nofollow noreferrer">https://numpy.org/doc/stable/user/basics.indexing.html</a> and I can understand that <code>x[:,0,:,[1,2]] = [x[:,0,:,1], x[:,0,:,2]]</code>. However I cannot understand why the result for above <code>Input [115]</code> and <code>Input [116]</code> differ <strong>on the first dimension</strong>. Can someone point to where such broadcasting rules are documented?</p>
<p>Thanks!</p>
<p>I have tried searching the documentation for fancy indexing as well as posting issues to the numpy repo on Github.</p>
|
<python><numpy><numpy-ndarray><array-broadcasting><numpy-slicing>
|
2023-06-14 06:44:57
| 2
| 841
|
sighingnow
|
76,470,596
| 14,820,295
|
Divide numbers of a string column for another column in Python
|
<p>Starting from a dataset with the string column "code_input", I would (by Python) to divide only numbers for a column "days".
I would preserve the rest of string in a column "code_output".</p>
<p>Example of my dataset:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>code_input</th>
<th>days</th>
<th>code_output</th>
</tr>
</thead>
<tbody>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
<tr>
<td>15.5C + 72HT</td>
<td>3</td>
<td>5.16C + 24HT</td>
</tr>
<tr>
<td>46C + 28HT +6T</td>
<td>2</td>
<td>23C + 14HT + 3T</td>
</tr>
<tr>
<td>...</td>
<td>...</td>
<td>...</td>
</tr>
</tbody>
</table>
</div>
<p>Thank u!</p>
|
<python><python-3.x>
|
2023-06-14 06:14:40
| 2
| 347
|
Jresearcher
|
76,470,553
| 11,082,866
|
Date format changes after 12th of every month
|
<p>I am transforming my date field in dataframe like this</p>
<pre><code>df3['transaction_date'] = pd.to_datetime(df3['transaction_date']).dt.tz_convert('Asia/Kolkata').dt.strftime(
'%d/%m/%Y').apply(lambda x: pd.to_datetime(x).date())
</code></pre>
<p>But its gives an output like this in csv:</p>
<pre><code>transaction_date
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
13/06/23
06/12/23
06/12/23
06/12/23
06/12/23
06/12/23
06/12/23
06/11/23
06/11/23
06/11/23
06/11/23
06/11/23
06/11/23
06/11/23
06/11/23
06/10/23
06/10/23
06/10/23
06/10/23
06/10/23
06/10/23
</code></pre>
<p>where the format should always be like 13/06/23 (dd/mm/yy)but as soon as the date drops below 13 it converts it to 06/12/23(mm/dd/yy)
how to solve this such that it gives dd/mm/yy every time?</p>
|
<python><pandas>
|
2023-06-14 06:09:12
| 1
| 2,506
|
Rahul Sharma
|
76,470,494
| 2,514,130
|
How to find nearest exclusive polygon with Shapely 2.0
|
<p>If I have a dictionary with multiple Polygons like so:</p>
<pre><code>poly_dict = {'1': <Polygon1>, '2': <Polygon2>}
</code></pre>
<p>How can I find the nearest polygon to one of the polygons in the dictionary using Shapely 2.0?</p>
<pre><code># Creating the STRtree
poly_rtree = STRtree(list(poly_dict.values())) # The 2.0 way of making an STRtree
# This works with older versions of Shapely
# rtree.nearest_item(poly_dict[bldg_id], exclusive=True)
# Shapely 2.0 uses `nearest`
nearest_poly = rtree.nearest(poly_dict[bldg_id])
</code></pre>
<p>But with Shapely 2.0, <code>exclusive</code> is not a parameter of <code>nearest</code>, so it just returns the (EDIT: index of the) same geometry I send it. How can I make it return <code>'2'</code> when <code>bldg_id = '1'</code> and <code>'1'</code> when <code>bldg_id = '2'</code>?</p>
|
<python><shapely>
|
2023-06-14 05:56:32
| 1
| 5,573
|
jss367
|
76,470,297
| 7,585,973
|
return '<SimpleProducer batch=%s>' % self.async while using KafkaProducer
|
<p>Here's <code>run.py</code></p>
<pre><code>from app import create_app
if __name__ == '__main__':
app = create_app()
app.run(host='0.0.0.0', port=8000, debug=True)
</code></pre>
<p>Here's the <code>/app/app/__init__.py</code></p>
<pre><code>from flask import Flask
from flask_cors import CORS
from app.config import Config
from threading import Thread
from app.consumer.kafka_consumer import consume_messages
def create_app():
app = Flask(__name__)
app.config.from_object(Config)
app.config['SQLALCHEMY_DATABASE_URI'] = Config.MYSQL_DATABASE_URI
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
CORS(app)
consumer_thread = Thread(target=consume_messages)
consumer_thread.start()
# Register blueprints
from app.controller.segmentation_controller import segmentation_bp
app.register_blueprint(segmentation_bp)
return app
</code></pre>
<p>Here's the <code>/app/app/consumer/kafka_consumer.py</code></p>
<pre><code>from kafka import KafkaConsumer
from app.config import Config
from app.service.segmentation_service import SegmentationService
def consume_messages():
segmentation_service = SegmentationService()
consumer = KafkaConsumer(
Config.KAFKA_TOPIC,
bootstrap_servers=['localhost:9092'],
auto_offset_reset='latest', # Start reading from the latest available offset
enable_auto_commit=True,
group_id='my-group',
value_deserializer=lambda x: x.decode('utf-8'),
)
for message in consumer:
segmentation_service.process_messages(message)
</code></pre>
<p>Here's the error message</p>
<pre><code>2023-06-14 11:44:43 Traceback (most recent call last):
2023-06-14 11:44:43 File "run.py", line 1, in <module>
2023-06-14 11:44:43 from app import create_app
2023-06-14 11:44:43 File "/app/app/__init__.py", line 5, in <module>
2023-06-14 11:44:43 from app.consumer.kafka_consumer import consume_messages
2023-06-14 11:44:43 File "/app/app/consumer/kafka_consumer.py", line 1, in <module>
2023-06-14 11:44:43 from kafka import KafkaConsumer
2023-06-14 11:44:43 File "/usr/local/lib/python3.8/site-packages/kafka/__init__.py", line 23, in <module>
2023-06-14 11:44:43 from kafka.producer import KafkaProducer
2023-06-14 11:44:43 File "/usr/local/lib/python3.8/site-packages/kafka/producer/__init__.py", line 4, in <module>
2023-06-14 11:44:43 from .simple import SimpleProducer
2023-06-14 11:44:43 File "/usr/local/lib/python3.8/site-packages/kafka/producer/simple.py", line 54
2023-06-14 11:44:43 return '<SimpleProducer batch=%s>' % self.async
</code></pre>
|
<python><apache-kafka>
|
2023-06-14 05:12:54
| 1
| 7,445
|
Nabih Bawazir
|
76,469,996
| 10,870,817
|
"ModuleNotFoundError: No module named ..." throwed when trying to make unit tests in Python
|
<p>I am trying to make unit tests for my project, but I am having issues:</p>
<pre><code>Error
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 34, in testFailure
raise self._exception
ImportError: Failed to import test module: data_normailizer_list9_test
Traceback (most recent call last):
File "/usr/lib/python3.8/unittest/loader.py", line 154, in loadTestsFromName
module = __import__(module_name)
File "/home/bao/aeris/amp-datasweep/src/test/data_normailizer_list9_test.py", line 1, in <module>
from src.main import data_normalizer
File "/home/bao/aeris/amp-datasweep/src/main/data_normalizer.py", line 6, in <module>
import logger_util
ModuleNotFoundError: No module named 'logger_util'
</code></pre>
<p>. I am using PyCharm as my IDE.</p>
<p>My directories look like this:</p>
<pre><code>project_folder
+ src
+ main
- data_normalizer.py
- logger_util.py
- __init__.py (JUST AN EMPTY FILE)
+ test
- data_normailizer_list9_test.py
- __init__.py (JUST AN EMPTY FILE)
</code></pre>
<p>My <code>data_normailizer_list9_test.py</code> file looks like this:</p>
<pre><code>from src.main import data_normalizer
import unittest
class DataNormalizerList9Test(unittest.TestCase):
def test(self):
#DO STUFF
</code></pre>
<p>My <code>data_normalizer.py</code>:</p>
<pre><code>import json
import logger_util (THIS LINE THROW ERROR)
import math
def a_func:
#DO STUFF
#return a result
</code></pre>
<p>However, I keep getting the error at the under test file <code>data_normalizer.py</code>:</p>
<pre><code>ModuleNotFoundError: No module named 'logger_util'
</code></pre>
<p>Edit: <code>logger_util.py</code> is in the same place with <code>data_normalizer.py</code>. And the main function works, means there is no issue with business code, just an issue with unit test code.</p>
|
<python>
|
2023-06-14 03:35:18
| 2
| 1,257
|
SoT
|
76,469,957
| 6,000,739
|
How to apply groupby lambda function in Python?
|
<p>I use R much but new to Python. Take the popular <code>iris</code> data for example,</p>
<pre class="lang-r prettyprint-override"><code>str(iris)
## 'data.frame': 150 obs. of 5 variables:
## $ Sepal.Length: num 5.1 4.9 4.7 4.6 5 5.4 4.6 5 4.4 4.9 ...
## $ Sepal.Width : num 3.5 3 3.2 3.1 3.6 3.9 3.4 3.4 2.9 3.1 ...
## $ Petal.Length: num 1.4 1.4 1.3 1.5 1.4 1.7 1.4 1.5 1.4 1.5 ...
## $ Petal.Width : num 0.2 0.2 0.2 0.2 0.2 0.4 0.3 0.2 0.2 0.1 ...
## $ Species : Factor w/ 3 levels "setosa","versicolor",..: 1 1 1 1 1 1 1 1 1 1 ...
</code></pre>
<p>my demo first in R is given as follow:</p>
<pre class="lang-r prettyprint-override"><code>dat = iris #iris dataset in base R
names(dat) = c(paste0('x',1:4),'g') #rename columns
dat$g = as.numeric(dat$g) #re-coding 3-level factor
dat[,1:4] = dat[,1:4]*10 #enlarge x1-x4 values; just a test!
dim(dat)
## [1] 150 5
head(dat) #g=group, 3-level: 1,2,3
## x1 x2 x3 x4 g
## 1 51 35 14 2 1
## 2 49 30 14 2 1
## 3 47 32 13 2 1
## 4 46 31 15 2 1
## 5 50 36 14 2 1
## 6 54 39 17 4 1
by(dat[,-5],dat[,5],nrow) #row size for each group; n_i
## dat[, 5]: 1
## [1] 50
## ------------------------------------------------------------
## dat[, 5]: 2
## [1] 50
## ------------------------------------------------------------
## dat[, 5]: 3
## [1] 50
by(dat[,-5],dat[,5],colMeans) #mean vector for data in each group; \bar{x}_i
## dat[, 5]: 1
## x1 x2 x3 x4
## 50.06 34.28 14.62 2.46
## ------------------------------------------------------------
## dat[, 5]: 2
## x1 x2 x3 x4
## 59.36 27.70 42.60 13.26
## ------------------------------------------------------------
## dat[, 5]: 3
## x1 x2 x3 x4
## 65.88 29.74 55.52 20.26
m = dat[,-5] |> colMeans(); m #mean vector for whole data; \bar{x}
## x1 x2 x3 x4
## 58.43333 30.57333 37.58000 11.99333
# by(dat[,-5],dat[,5],cov) #covariance matrix for data in each group
H = by(dat[,-5],dat[,5],function(x) nrow(x)*tcrossprod(colMeans(x)-m) ) |> Reduce(f="+") ; H
## [,1] [,2] [,3] [,4]
## [1,] 6321.213 -1995.267 16524.84 7127.933
## [2,] -1995.267 1134.493 -5723.96 -2293.267
## [3,] 16524.840 -5723.960 43710.28 18677.400
## [4,] 7127.933 -2293.267 18677.40 8041.333
E = by(dat[,-5],dat[,5],function(x)(nrow(x)-1)*cov(x)) |> Reduce(f="+"); E
## x1 x2 x3 x4
## x1 3895.62 1363.00 2462.46 564.50
## x2 1363.00 1696.20 812.08 480.84
## x3 2462.46 812.08 2722.26 627.18
## x4 564.50 480.84 627.18 615.66
</code></pre>
<p>which match the following manual results very well!</p>
<p><a href="https://i.sstatic.net/uqcMQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uqcMQ.png" alt="enter image description here" /></a></p>
<p>Then I want to <strong>translate</strong> R to Python. Surely, I can do it in Python in a ugly way, i.e, separating the data one by one then integrating them. However, I want realize it in Python using a advanced way via <code>groupby lambda function</code>. I tried my best as follows:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
from sklearn import datasets
iris = datasets.load_iris() #iris dataset in sklearn for python
dat = pd.DataFrame(data=iris.data*10, columns=[f'x{i}' for i in range(1,5)] ) #same treatment as in R
dat["g"] = iris.target+1 #re-coding 3-level factor
dat.shape
## (150, 5)
dat.head() #g=group, 3-level: 1,2,3
## x1 x2 x3 x4 g
## 0 51.0 35.0 14.0 2.0 1
## 1 49.0 30.0 14.0 2.0 1
## 2 47.0 32.0 13.0 2.0 1
## 3 46.0 31.0 15.0 2.0 1
## 4 50.0 36.0 14.0 2.0 1
dat.groupby('g').size()
## g
## 1 50
## 2 50
## 3 50
## dtype: int64
dat.groupby('g').mean() #mean vector for data in each group; \bar{x}_i
## x1 x2 x3 x4
## g
## 1 50.06 34.28 14.62 2.46
## 2 59.36 27.70 42.60 13.26
## 3 65.88 29.74 55.52 20.26
m = dat.iloc[:,:-1].mean(); m #mean vector for whole data; \bar{x}
## x1 58.433333
## x2 30.573333
## x3 37.580000
## x4 11.993333
## dtype: float64
# dat.groupby('g').cov()
H = dat.groupby('g').apply(lambda x: np.size(x)*(x.mean()-m)@(x.mean()-m).T).agg(sum,axis=0); H
E = dat.groupby('g').apply(lambda x: (np.size(x)-1)*np.cov(x)).agg(sum,axis=0); E
</code></pre>
<p>So far so good, <strong>except the last two lines</strong>; i.e., wrong outputs of <code>H</code> and <code>E</code> in Python! Any help?</p>
|
<python><r>
|
2023-06-14 03:24:21
| 0
| 715
|
John Stone
|
76,469,795
| 1,187,621
|
Does anyone see where the error in the following GEKKO-IPOPT nonlinear optimization problem is?
|
<p>In my code, I get the following error when running:</p>
<pre><code>Exception: @error: Equation Definition
Equation without an equality (=) or inequality (>,<)
((((((((((-cos(v4)))*(sin(v5)))-((((sin(v4))*(cos(v5))))*(cos(v3)))))*(((sqrt((
398574405096000.0/((v1)*((1-((v2)^(2))))))))*([(-sin(v6))(v2+cos(v6))0])))))^(2
))+((((((((-sin(v4)))*(sin(v5)))+((((cos(v4))*(cos(v5))))*(cos(v3)))))*(((sqrt(
(398574405096000.0/((v1)*((1-((v2)^(2))))))))*([(-sin(v6))(v2+cos(v6))0])))))^(
2)))+((((((cos(v5))*(sin(v3))))*(((sqrt((398574405096000.0/((v1)*((1-((v2)^(2))
))))))*([(-sin(v6))(v2+cos(v6))0])))))^(2)))
STOPPING...
</code></pre>
<p>I tried searching my code for variations of the math functions (sqrt, cos, etc.) to see if I could find something that looked like the above equation, but I cannot find it in any way. I assume GEKKO manipulated some things around to get it, likely as part of the solver. My thinking is that the 'v' values are equivalent to my orbital elements, and I see that the value of mu is expressed. I'm hoping someone can put another set of eyes on my code and maybe help me out.</p>
<p>Here is my code:</p>
<pre><code>from gecko import GEKKO
import numpy as np
import matplotlib.pyplot as plt
import math
def oe2rv(oe):
a = oe[0]
e = oe[1]
i = oe[2]
Om = oe[3]
om = oe[4]
nu = oe[5]
p = a * (1 - e**2)
r = p/(1 + e * m.cos(nu))
rv = np.array([r * m.cos(nu), r * m.sin(nu), 0])
vv = m.sqrt(mu/p) * np.array([-m.sin(nu), e + m.cos(nu), 0])
cO = m.cos(Om)
sO = m.sin(Om)
co = m.cos(om)
so = m.sin(om)
ci = m.cos(i)
si = m.sin(i)
R = np.array([[cO * co - sO * so * ci, -cO * so - sO * co * ci, sO * si],
[sO * co + cO * so * ci, -sO * so + cO * co * ci,-cO * si],
[so * si, co * si, ci]])
ri = R * rv
vi = R * vv
return ri, vi
def TwoBody(ri, vi):
ri_dot[0] = vi[0]
ri_dot[1] = vi[1]
ri_dot[2] = vi[2]
r_mag = m.sqrt(ri[0]**2 + ri[1]**2 + ri[2]**2)
r3 = r_mag**3
c = -mu/r3
vi_dot[0] = c * ri[0]
vi_dot[1] = c * ri[1]
vi_dot[2] = c * ri[2]
return ri_dot, vi_dot
def Euler(ri, vi):
ri_dot, vi_dot = TwoBody(ri, vi)
for i in range(0, 3):
ri_new[i] = ri[i] + ri_dot[i] * dt
vi_new[i] = vi[i] + vi_dot[i] * dt
return ri_new, vi_new
def trap(E_en, E_ex, e):
dE = (E_en - E_ex)/20
E = np.linspace(E_en, E_ex, 20)
nu_new = m.acos((m.cos(E[0]) - e)/(1 - e * m.cos(E[0])))
for i in range(1, 19):
nu_new = nu_new + 2 * m.acos((m.cos(E[i]) - e)/(1 - e * m.cos(E[i])))
nu_new = m.acos((m.cos(E[19]) - e)/(1 - e * m.cos(E[19])))
nu_new = (dE/2) * nu_new
return nu_new
def propagate(a, e, i, Om, om, nu, mass):
oe = np.array([a, e, i, Om, om, nu])
ri, vi = oe2rv(oe)
r = m.sqrt(ri[0]**2 + ri[1]**2 + ri[2]**2)
v = m.sqrt(vi[0]**2 + vi[1]**2 + vi[2]**2)
h = np.cross(ri, vi)
d1 = m.sqrt(4 * ((al_a * a * v**2)/mu + l_e * (e + m.cos(nu)))**2 + l_e**2 * (r**2/a**2) * m.sin(nu)**2)
s_a = (-l_e * (r/a) * m.sin(nu))/d1
c_a = (-2 * (((al_a * a * v**2)/mu) + l_e * (e + m.cos(nu))))/d1
d2 = m.sqrt(l_i**2 * ((r**2 * v**2)/h**2) * (m.cos(om + nu))**2 + (4 * al_a**2 * a**2 * v**4 * c_a**2)/mu**2 + l_e**2 * ((2 * (e + m.cos(nu)) * c_a + r/a * m.sin(nu) * s_a)**2))
s_b = (-l_i * ((r * v)/h) * m.cos(om + nu))/d2
c_b = (((-al_a * 2 * a * v**2)/mu) * c_a - l_e * (2 * (e + m.cos(nu)) * c_a + (r/a) * m.sin(nu) * s_a))/d2
a_n = aT * s_a * c_b
a_t = aT * c_a * c_b
a_h = aT * s_b
n = m.sqrt(mu/a**3)
Om_J2 = ((-3 * n * R_E**2 * J2)/(2 * a**2 * (1 - e**2)**2)) * m.cos(i)
om_J2 = ((3 * n * R_E**2 * J2)/(4 * a**2 * (1 - e**2)**2)) * (4 - 5 * (m.sin(i))**2)
nu_new = trap(E_en, E_ex, e)
da_dt = a_t * (2 * a**2 * v)/mu
de_dt = (1/v) * (2 * (e + m.cos(nu)) * a_t + (r/a) * a_n * m.sin(nu))
di_dt = (r/h) * a_h * m.cos(om + nu)
dOm_dt = (r/(h * m.sin(i))) * a_h * m.sin(om + nu) + Om_J2
dom_dt = (1/(e * v)) * (2 * a_t * m.sin(nu) - (2 * e + (r/a) * m.cos(nu)) * a_n) - (r/(h * m.sin(i))) * a_h * m.sin(om + nu) * m.cos(i) + om_J2
dnu_dt = nu_new - nu
dm_dt = (-2 * eta * P)/pow((g * Isp), 2)
dt_dE = r/(n * a)
Tp = (2 * math.pi/m.sqrt(mu)) * a**(3/2)
deltas = np.array([da_dt, de_dt, di_dt, dOm_dt, dom_dt, dnu_dt, dm_dt, dt_dE])
return deltas, Tp
#initialize model
m = GEKKO()
#optional solver settings with APOPT
Nsim = 100 #number of steps with constant thrust
m.time = np.linspace(0, 0.2, Nsim)
#constants
mu = 3.98574405096E14
g = 9.81
R_E = 6.2781E6
J2 = 1.08262668E-3
P = 10E3
eta = 0.65
Isp = 3300
m0 = 1200
aT = (2 * eta * P)/(m0 * g * Isp)
delta_t = 3600
t_max = 86400 * 200
E_en = math.pi
E_ex = -math.pi
oe_i = np.array([6927000, 0, math.radians(28.5), 0, 0, 0])
oe_f = np.array([42164000, 0, 0, 0, 0, 0])
v_i = m.sqrt(mu/oe_i[0])
v_f = m.sqrt(mu/oe_f[0])
dv = abs(v_i - v_f)
dm = (2 * eta * P)/pow((g * Isp), 2)
m_f = m0 * m.exp(-dv/(g * Isp))
#manipulating variables and initial guesses
al_a = m.MV(value = -1, lb = -2, ub = 2)
al_a.STATUS = 1
l_e = m.MV(value = 0.001, lb = 0, ub = 10**6)
l_e.STATUS = 1
l_i = m.MV(value = 1, lb = 0, ub = 10**6)
l_i.STATUS = 1
#variables and initial guesses
a = m.Var(value = oe_i[0], lb = oe_i[0] - 6378000, ub = oe_f[0] + 6378000)
e = m.Var(value = oe_i[1], lb = 0, ub = 1)
i = m.Var(value = oe_i[2], lb = 0, ub = math.radians(90))
Om = m.Var(value = oe_i[3], lb = 0, ub = math.radians(360))
om = m.Var(value = oe_i[4], lb = 0, ub = math.radians(360))
nu = m.Var(value = oe_i[5], lb = 0, ub = math.radians(360))
mass = m.Var(value = m0, lb = 0, ub = m0)
#objective function
tf = m.FV(value = 1.2 * ((m0 - m_f)/dm), lb = 0, ub = t_max)
tf.STATUS = 1
#propagation
deltas, Tp = propagate(a, e, i, Om, om, nu, mass)
m.Equation(a.dt() == (deltas[0] * delta_t * deltas[7])/Tp)
m.Equation(e.dt() == (deltas[1] * delta_t * deltas[7])/Tp)
m.Equation(i.dt() == (deltas[2] * delta_t * deltas[7])/Tp)
m.Equation(Om.dt() == (deltas[3] * delta_t * deltas[7])/Tp)
m.Equation(om.dt() == (deltas[4] * delta_t * deltas[7])/Tp)
m.Equation(nu.dt() == deltas[5] * delta_t)
m.Equation(mass.dt() == (deltas[6] * delta_t * deltas[7])/Tp)
#starting constraints
m.fix(a, pos = 0, val = oe_i[0])
m.fix(e, pos = 0, val = oe_i[1])
m.fix(i, pos = 0, val = oe_i[2])
m.fix(Om, pos = 0, val = oe_i[3])
m.fix(om, pos = 0, val = oe_i[4])
m.fix(nu, pos = 0, val = oe_i[5])
m.fix(mass, pos = 0, val = m0)
#boundary constraints
m.fix(a, pos = len(m.time) - 1, val = oe_f[0])
m.fix(e, pos = len(m.time) - 1, val = oe_f[1])
m.fix(i, pos = len(m.time) - 1, val = oe_f[2])
m.fix(Om, pos = len(m.time) - 1, val = oe_f[3])
m.fix(om, pos = len(m.time) - 1, val = oe_f[4])
m.fix(nu, pos = len(m.time) - 1, val = oe_f[5])
m.fix(mass, pos = len(m.time) - 1, val = 0)
m.Obj(tf) #minimize final time
m.options.IMODE = 6 # non-linear model
m.options.SOLVER = 3 # solver (IPOPT)
m.options.MAX_ITER = 15000
m.options.RTOL = 1e-7
m.options.OTOL = 1e-7
m.solve(disp=True, debug=True) # Solve
print('Optimal time: ' + str(tf.value[0]))
m.solve(disp=True)
m.open_folder(infeasibilities.txt)
</code></pre>
<p>After doing some playing around, I believe the issue is that I am using the manipulating variables ('al_a', 'l_e' and 'l_i') in the 'propagate' function. Does that make sense as a possible problem? If that is the problem, is it possible to use the values of those variables in that function - and, if so, how?</p>
|
<python><nonlinear-optimization><gekko><ipopt>
|
2023-06-14 02:38:42
| 2
| 437
|
pbhuter
|
76,469,741
| 4,451,521
|
How to pytest inside a container?
|
<p>I have a script that is run inside a container.</p>
<p>I am planning to write some tests and run pytest on them.</p>
<p>So far so good. The problem is that I want to test the behavior of this script <em>inside the container</em><br />
It is no use to me to test this script under a separate virtual environment.</p>
<p>Now this container does not have pytest installed.</p>
<p>Before I have used poetry to have the tests environments ready but under the circumstances described, how do I set the pytest environment correctly to run these tests?</p>
<p>EDIT:</p>
<p>More details.</p>
<ul>
<li>I have a container with python 3.6.9 and pandas 0.22.0
Pytest is not installed</li>
<li>I created a virtual environment where I installed pytest. However I cannot use it since here pandas is not installed</li>
<li>Well, then I have to install it right? I do install it and now pandas is 1.1.5</li>
<li>Which defeats the purpose since I want to test the code under the same conditions</li>
</ul>
<p>What can I do?</p>
|
<python><docker><unit-testing><containers><pytest>
|
2023-06-14 02:18:43
| 1
| 10,576
|
KansaiRobot
|
76,469,732
| 3,121,975
|
Fixing BeautifulSoup linting errors
|
<p>I am using BeautifulSoup 4 to handle some JSF-related scraping. The scraping works fine, but <code>mypy</code> is throwing back some errors on this code:</p>
<pre><code>soup = bs4.BeautifulSoup(resp.text, 'lxml')
headers = {i['name']: i['value'] if i.has_attr('value') else i['id']
for i in soup.body.form.find_all(type = 'hidden')}
btns = soup.body.form.find_all(type = 'submit')
</code></pre>
<p>The errors I get are:</p>
<pre><code>handler.py:40: error: Item "None" of "Optional[Tag]" has no attribute "form" [union-attr]
handler.py:40: error: Item "None" of "Union[Tag, None, Any]" has no attribute "find_all" [union-attr]
handler.py:42: error: Item "None" of "Optional[Tag]" has no attribute "form" [union-attr]
handler.py:42: error: Item "None" of "Union[Tag, None, Any]" has no attribute "find_all" [union-attr]
Found 4 errors in 1 file (checked 1 source file)
</code></pre>
<p>It seems that mypy appears to think that <code>body</code> is a None type. I'm rather new to both of these tools so I'm not entirely sure how to deal with this issue. Has anyone else encountered this? How do I fix these?</p>
|
<python><beautifulsoup><mypy>
|
2023-06-14 02:16:26
| 1
| 8,192
|
Woody1193
|
76,469,624
| 348,168
|
How to print raw string with Beautifulsoup
|
<p>I have a piece of code here to extract div.statement.p.text</p>
<pre><code>a = """<div class="theorem" id="theorem-AA" acro="AA" titletext="Adjoint of an Adjoint"> <h5 class="theorem"> <span class="type">Theorem </span><span class="acro">AA</span><span class="titletext"> Adjoint of an Adjoint</span> </h5> <div class="statement"><p>Suppose that $A$ is a matrix. Then $\adjoint{\left(\adjoint{A}\right)}=A$.</p></div> <div class="proof"><a knowl="./knowls/proof.AA.knowl">Proof</a></div> </div><div class="context"><a href="http://linear.pugetsound.edu/html/section-MO.html#theorem-AA" class="context" title="Section MO">(in context)</a></div> """
from bs4 import BeautifulSoup as bs
soup = bs(repr(a),features = 'lxml')
statement = bs(repr(soup.find_all("div", {"class": "statement"})[0])).find('p').text
print(statement)
</code></pre>
<p>The output came was</p>
<pre><code>Suppose that $A$ is a matrix. Then $\x07djoint{\\left(\x07djoint{A}\right)}=A$.
</code></pre>
<p>I need the output to be:</p>
<pre><code>Suppose that $A$ is a matrix. Then $\adjoint{\left(\adjoint{A}\right)}=A$.
</code></pre>
<p>How can I do this?</p>
|
<python><string><beautifulsoup>
|
2023-06-14 01:33:51
| 1
| 4,378
|
Vinod
|
76,469,557
| 2,251,058
|
Dictionary comprehension while creating column in spark dataframe
|
<p>I have to create a column in a dataframe which tracks old values vs new values</p>
<p>I have two types of columns in the dataframe, one is sot (source of truth) another are normal columns (metrics).</p>
<p>Example value of the resultant column which is a comparison of both types of columns would look like</p>
<pre><code>"{'template_name': {'old_value': '1-en_US-Travel-Guide-Hotels-citymc-Blossom-Desktop-Like-HSR', 'new_value': '1-en_US-HTG_CMC_BEXUS_SecondaryKW_Test_Variant'}, 'template_id': {'old_value': '14949', 'new_value': '37807'}, 'num_questions': {'old_value': 29.0, 'new_value': 28}, 'duplicate_questions': {'old_value': '[]', 'new_value': []}}"
</code></pre>
<p>If we want to do something similar with normal dictionary comprehension in python it looks like this</p>
<pre><code>>>> metrics = [1,2,3,4,5,6]
>>> sot = [3,1,6,2,5,1]
>>> str({i: {"old_value":sot[i], "new_value": metrics[i]} for i in range(6) if metrics[i] != sot[i]})
"{0: {'old_value': 3, 'new_value': 1}, 1: {'old_value': 1, 'new_value': 2}, 2: {'old_value': 6, 'new_value': 3}, 3: {'old_value': 2, 'new_value': 4}, 5: {'old_value': 1, 'new_value': 6}}"
</code></pre>
<p>But I can't do something similar with spark dataframe</p>
<pre><code>metrics_cols = extract_metrics_spark_df.columns
temp.withColumn("flagged", str({ i : {"old_value" : f.col("sot_"+i) , "new_value": f.col(i)} for i in metrics_cols if f.col(i) != f.col("sot_"+i) }))
</code></pre>
<p>I couldn't figure how I could also use a udf in this case</p>
<p>Any help trying to create the column is appreciated.</p>
|
<python><pyspark><dictionary-comprehension>
|
2023-06-14 01:09:18
| 2
| 3,287
|
Akshay Hazari
|
76,469,453
| 13,154,548
|
Tensorforce 'Adam' object has no attribute '_create_all_weights'
|
<p>I have an M1 Mac, so I've created a virtual env with Python 3.9 and installed tensorforce 0.6.5 and I am trying to set up a simple agent and train it to play Snake. Here is my main code:</p>
<pre><code>from tensorforce.agents import PPOAgent
from Game import Game
from SnakeEnvironment import SnakeEnvironment
if __name__ == "__main__":
print(tensorforce.__version__)
game = Game()
environment = SnakeEnvironment(game)
agent = PPOAgent(
states=environment.states(),
actions=environment.actions(),
batch_size=128,
max_episode_timesteps=100,
)
agent.initialize()
for episodes in range(500):
state = environment.reset()
done = False
while not done:
actions = agent.act(states=state)
next_state, done, reward = environment.execute(actions)
agent.observe(terminal=done, reward=reward)
state = next_state
agent.save(directory='models')
</code></pre>
<p>But I am getting an error:</p>
<pre><code>WARNING:root:No min_value bound specified for state.
WARNING:absl:At this time, the v2.11+ optimizer `tf.keras.optimizers.Adam` runs slowly on M1/M2 Macs, please use the legacy Keras optimizer instead, located at `tf.keras.optimizers.legacy.Adam`.
Traceback (most recent call last):
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/optimizers/tf_optimizer.py", line 142, in initialize_given_variables
self.tf_optimizer._create_all_weights(var_list=variables)
AttributeError: 'Adam' object has no attribute '_create_all_weights'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/dmytro/PycharmProjects/tensorforce/main.py", line 19, in <module>
agent.initialize()
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/agents/agent.py", line 280, in initialize
self.model.initialize()
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/models/tensorforce.py", line 600, in initialize
super().initialize()
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/models/model.py", line 290, in initialize
self.core_initialize()
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/models/tensorforce.py", line 690, in core_initialize
self.optimizer.initialize_given_variables(variables=self.policy.trainable_variables)
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/optimizers/optimizer.py", line 42, in initialize_given_variables
module.initialize_given_variables(variables=variables)
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/optimizers/optimizer.py", line 42, in initialize_given_variables
module.initialize_given_variables(variables=variables)
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/optimizers/optimizer.py", line 42, in initialize_given_variables
module.initialize_given_variables(variables=variables)
File "/Users/dmytro/PycharmProjects/tensorforce/venv/lib/python3.9/site-packages/tensorforce/core/optimizers/tf_optimizer.py", line 144, in initialize_given_variables
self.tf_optimizer._create_hypers()
AttributeError: 'Adam' object has no attribute '_create_hypers'
</code></pre>
<p>I've also tried to run the same code on Windows 10 but the problem is still the same.
Any ideas how to resolve this?</p>
|
<python><tensorflow>
|
2023-06-14 00:30:40
| 1
| 439
|
eternal
|
76,469,441
| 20,122,390
|
How can I guarantee that a piece of code will only run if nothing goes wrong in python?
|
<p>I've thought about how to title the question correctly but I'm not sure I can accurately describe what I want. I will explain below.
I have an application in Python and FastAPI and in one functionality of the application I need to do things with two databases at the same time. For example, in the following code, I have a method that I use on an endpoint to delete a user and I need to delete the user from both my own database and the firebase database:</p>
<pre><code>async def delete_db_firebase(self, *, _id: Union[int, str], route: Optional[str] = "") -> Any:
"""
Method in charge of deletin user by uid in own database and in
firebase.
"""
url_database = f"{self.url}{route}/{_id}"
url_firebase = f"{settings.AT_PRONOSTICOS_AUTH}/api/users/{_id}"
response_database = await self._client.delete(url_service=url_database)
await self._check_codes.check_codes(response=response_database, delete_method=True)
response_firebase = await self._client.delete(url_service=url_firebase)
await self._check_codes.check_codes(response=response_firebase, delete_method=True)
return response_database
</code></pre>
<p>(self._client is simply a class that I created to handle the different HTTP requests in my application, I believe that for the context of the application it is not necessary to add it)
I then use my method in an endpoint:</p>
<pre><code>@router.delete(
"/{uid}",
response_class=Response,
status_code=204,
responses={
204: {"description": "User deleted"},
401: {"description": "User unauthorized"},
},
)
async def delete(
*,
uid: str,
current_user=Security(get_current_user, scopes=["admin:user", "admin:guane"]),
) -> Response:
"""
Delete user by uid send in the param.
**Args**:
- **id** (str, optional): the id of the register to delete.
**Returns**:
- **None**
"""
await user_service.delete_db_firebase(_id=uid)
return Response(status_code=204)
</code></pre>
<p>Perhaps you have already guessed my problem. In my code it is first deleted in my database and then in the firebase database. But one of the two could fail and I just want both deletions to be effective if neither fails. It would be like doing a "transaction" but I am in front of two different services. Is there any way to handle this in python?</p>
|
<python><firebase><transactions><fastapi>
|
2023-06-14 00:27:07
| 1
| 988
|
Diego L
|
76,469,419
| 2,543,622
|
Python rfecv select less than recommended features
|
<p>I have this example code that comes from <a href="https://scikit-learn.org/stable/auto_examples/feature_selection/plot_rfe_with_cross_validation.html#sphx-glr-auto-examples-feature-selection-plot-rfe-with-cross-validation-py" rel="nofollow noreferrer">here</a>. Optimal number of features recommended by <a href="https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFECV.html" rel="nofollow noreferrer">rfecv</a> is 3. But if want to build the model with only 1 or 2 features, how should I select those features?</p>
<pre><code>"""
===================================================
Recursive feature elimination with cross-validation
===================================================
A Recursive Feature Elimination (RFE) example with automatic tuning of the
number of features selected with cross-validation.
"""
# %%
# Data generation
# ---------------
#
# We build a classification task using 3 informative features. The introduction
# of 2 additional redundant (i.e. correlated) features has the effect that the
# selected features vary depending on the cross-validation fold. The remaining
# features are non-informative as they are drawn at random.
from sklearn.datasets import make_classification
X, y = make_classification(
n_samples=500,
n_features=15,
n_informative=3,
n_redundant=2,
n_repeated=0,
n_classes=8,
n_clusters_per_class=1,
class_sep=0.8,
random_state=0,
)
# %%
# Model training and selection
# ----------------------------
#
# We create the RFE object and compute the cross-validated scores. The scoring
# strategy "accuracy" optimizes the proportion of correctly classified samples.
from sklearn.feature_selection import RFECV
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
min_features_to_select = 1 # Minimum number of features to consider
clf = LogisticRegression()
cv = StratifiedKFold(5)
rfecv = RFECV(
estimator=clf,
step=1,
cv=cv,
scoring="accuracy",
min_features_to_select=min_features_to_select,
n_jobs=2,
)
rfecv.fit(X, y)
print(f"Optimal number of features: {rfecv.n_features_}")
# %%
# In the present case, the model with 3 features (which corresponds to the true
# generative model) is found to be the most optimal.
#
# Plot number of features VS. cross-validation scores
# ---------------------------------------------------
import matplotlib.pyplot as plt
n_scores = len(rfecv.cv_results_["mean_test_score"])
plt.figure()
plt.xlabel("Number of features selected")
plt.ylabel("Mean test accuracy")
plt.errorbar(
range(min_features_to_select, n_scores + min_features_to_select),
rfecv.cv_results_["mean_test_score"],
yerr=rfecv.cv_results_["std_test_score"],
)
plt.title("Recursive Feature Elimination \nwith correlated features")
plt.show()
# %%
# From the plot above one can further notice a plateau of equivalent scores
# (similar mean value and overlapping errorbars) for 3 to 5 selected features.
# This is the result of introducing correlated features. Indeed, the optimal
# model selected by the RFE can lie within this range, depending on the
# cross-validation technique. The test accuracy decreases above 5 selected
# features, this is, keeping non-informative features leads to over-fitting and
# is therefore detrimental for the statistical performance of the models.
</code></pre>
|
<python><feature-selection><rfe>
|
2023-06-14 00:17:59
| 1
| 6,946
|
user2543622
|
76,469,380
| 4,542,117
|
Condensing multiple 'or' conditions in python
|
<p>Let's say that I have multiple 2D arrays that need to be utilized for some boolean logic. For example:</p>
<pre><code>q1 = np.array(data1 < 0)
q2 = np.array((data2 < -10) & (data1 > 5))
q3 = np.array(data3 > 10)
</code></pre>
<p>There are instances where some [x,y] will be the same for q1, q2, q3, and some instances not. I would like to use these conditions on another array. Such as:</p>
<pre><code>a[q1] = -999
a[q2] = -999
a[q3] = -999
</code></pre>
<p>The problem is that this is a bit cumbersome and I would like to condense it into something like:</p>
<pre><code>a[q1 or q2 or q3] = -999
</code></pre>
<p>But the above does not seem to work as intended. What is the best course of action to get the desired result?</p>
|
<python>
|
2023-06-14 00:03:22
| 1
| 374
|
Miss_Orchid
|
76,469,264
| 16,498,000
|
How to unpack a list of ints and print them so they have a preceding zero?
|
<p>I have a list of <code>[##, ##, ## ... ##]</code> and I want to unpack it and print it as <code>"##-##-##-...-##"</code> with preceding zeros on the left if the number is less than ten.</p>
<p>Is there a way to do this besides a for loop? I don't want the trailing <code>"-"</code> at the end.</p>
|
<python><python-3.x><list><iterable-unpacking>
|
2023-06-13 23:21:35
| 2
| 572
|
MiguelP
|
76,469,146
| 3,247,006
|
Where to set a custom middleware's path in "MIDDLEWARE" in "settings.py" in Django?
|
<p>I created the middleware <code>simple_middleware()</code> in <code>middleware/sample.py</code> following <a href="https://docs.djangoproject.com/en/4.2/topics/http/middleware/" rel="nofollow noreferrer">the doc</a> as shown below. *I'm learning <strong>Middleware</strong>:</p>
<pre class="lang-none prettyprint-override"><code>django-project
|-core
| └-settings.py
|-middleware
| |-__init__.py
| └-sample.py # Here
|-app1
└-app2
</code></pre>
<pre class="lang-py prettyprint-override"><code># "middleware/sample.py
def simple_middleware(get_response):
print("Only once the server starts")
def middleware(request):
print("Before a view is called")
response = get_response(request)
print("After a view is called")
return response
return middleware
</code></pre>
<p>But, I don't know where to set the custom middleware's path in <a href="https://docs.djangoproject.com/en/4.2/ref/settings/#middleware" rel="nofollow noreferrer">MIDDLEWARE</a> in <code>settings.py</code> as shown below. The 1st, the last or anywhere in <code>MIDDLEWARE</code>?:</p>
<pre class="lang-py prettyprint-override"><code># "core/settings.py"
MIDDLEWARE = [
# "middleware.sample.simple_middleware" # The 1st?
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
# "middleware.sample.simple_middleware" # Anywhere?
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
# "middleware.sample.simple_middleware" # The last?
]
</code></pre>
<p>Actually, I know that the doc says below in <a href="https://docs.djangoproject.com/en/4.2/topics/http/middleware/#activating-middleware" rel="nofollow noreferrer">Activating middleware</a> but I want to know exactly where to set it:</p>
<blockquote>
<p>The order in MIDDLEWARE matters because a middleware can depend on other middleware.</p>
</blockquote>
<p>So, where should I set the custom middleware's path in <code>MIDDLEWARE</code> in <code>settings.py</code>?</p>
|
<python><django><path><django-settings><django-middleware>
|
2023-06-13 22:46:39
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,469,013
| 7,959,116
|
Create a new column with the first value that matches a condition
|
<p>I have a Dataframe similar to this:</p>
<pre><code>import polars as pl
df = pl.from_repr("""
┌──────┬───────┐
│ Time ┆ Value │
│ --- ┆ --- │
│ i64 ┆ i64 │
╞══════╪═══════╡
│ 1 ┆ 100 │
│ 2 ┆ 75 │
│ 3 ┆ 70 │
│ 4 ┆ 105 │
│ 5 ┆ 140 │
│ 6 ┆ 220 │
│ 7 ┆ 65 │
│ 8 ┆ 180 │
│ 9 ┆ 150 │
└──────┴───────┘
""")
</code></pre>
<p>(<strong>Note</strong> that it comes sorted by <code>Time</code>)</p>
<p>I need to create a new column named <code>NewColumn</code>, that would yield the next <code>Time</code> where <code>Value</code> is lower than current <code>Value</code>.</p>
<p><strong>EDIT:</strong> it is important to specify that my original dataset has more than 10 millions of lines, and even though the original requirement is the one above, it is fair to say that some operations could end up exceeding RAM very quickly. To balance this, it would be acceptable to introduce a lookahead limit to test the original condition. For example it would be acceptable to yield the next <code>Time</code> where <code>Value</code> is lower than current <code>Value</code> <strong>up to the next 100 <code>Time</code></strong></p>
<p>Something like this:</p>
<pre><code>| Time | Value | NewColumn |
| 1 | 100 | 2 | >> next Time with Value lower than 100
| 2 | 75 | 3 | >> next Time with Value lower than 75
| 3 | 70 | 7 | >> next Time with Value lower than 70
| 4 | 105 | 7 | >> next Time with Value lower than 105
| 5 | 140 | 7 | >> next Time with Value lower than 140
| 6 | 220 | 7 | >> next Time with Value lower than 220
| 7 | 65 | null | >> next Time with Value lower than 65
| 8 | 180 | 9 | >> next Time with Value lower than 180
| 9 | 150 | null | >> next Time with Value lower than 150
</code></pre>
<p>So far, my approach have been to try to create a new temporary column that would hold a slice of <code>Value</code> from the next row up to last row, like this:</p>
<pre><code>| Time | Value | Slice_of_Value |
| 1 | 100 | [75, 70, … 150] |
| 2 | 75 | [70, 105, … 150] |
| 3 | 70 | [105, 140, … 150] |
| 4 | 105 | [140, 220, … 150] |
| 5 | 140 | [220, 65, … 150] |
| 6 | 220 | [65, 180, 150] |
| 7 | 65 | [180, 150] |
| 8 | 180 | [150] |
| 9 | 150 | [] |
</code></pre>
<p>Then try to infer the position of the first match satisfying the condition: "lower than column Value". Resulting in something like this:</p>
<pre><code>| Time | Value | Slice_of_Value | Position |
| 1 | 100 | [75, 70, … 150] | 0 |
| 2 | 75 | [70, 105, … 150] | 0 |
| 3 | 70 | [105, 140, … 150] | 3 |
| 4 | 105 | [140, 220, … 150] | 2 |
| 5 | 140 | [220, 65, … 150] | 1 |
| 6 | 220 | [65, 180, 150] | 0 |
| 7 | 65 | [180, 150] | null |
| 8 | 180 | [150] | 0 |
| 9 | 150 | [] | null |
</code></pre>
<p>Now I hurt myself with some issues:</p>
<p>Step 1: <code>Slice_of_Value</code>
To get the <code>Slice_of_Value</code> column, here is what I have tried at first:</p>
<pre><code>df = df.with_columns(
pl.col("Value").slice(pl.col("Time"), 9).implode().alias("Slice_of_Value")
)
</code></pre>
<p>but it seems that it is not possible to use <code>pl.col("")</code> as part of <code>.slice()</code>... So I resorted to do something like this instead:</p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.col("Value").shift(-i).alias(f"lag_{i}") for i in range(1, 9)
).with_columns(
pl.concat_list([f"lag_{i}" for i in range(1, 9)]).alias("Slice_of_Value")
)
</code></pre>
<p>So far so good.</p>
<p>Step 2: <code>Position</code></p>
<pre><code>df = df.with_columns(
pl.col("Slice_of_Value")
.list.eval(pl.arg_where(pl.element() < pl.col("Value")))
.list.eval(pl.element().first())
.list.eval(pl.element().drop_nulls())
.explode()
.add(pl.col("Time") + 1)
.alias("NewColumn")
)
</code></pre>
<p>Unfortunately this piece of code does not work because <code>named columns are not allowed in list.eval</code> ... So I am kinda hitting a wall now. I don't know if my whole approach is wrong or if I have missed something from the docs.</p>
<p>Any help or suggestion is greatly appreciated :)</p>
<p><strong>EDIT: BENCHMARK RESULTS</strong></p>
<p>So far I have tried 4 solutions on my real dataset. Here are my benchmarks:</p>
<p>SOLUTION 1: 0.83s for 10M rows</p>
<pre><code>lookahead=10
df = dfSource.with_columns(
pl.when(pl.col("Value").shift(-i)<pl.col("Value"))
.then(pl.col("Time").shift(-i)).alias(f"time_{i}") for i in range(1, lookahead+1)
).with_columns(
NewColumn=pl.coalesce(pl.col(f"time_{i}") for i in range(1,lookahead+1))
).drop(f"time_{i}" for i in range(1,lookahead+1)).collect()
</code></pre>
<p>SOLUTION 2: 3.41s for 10M rows</p>
<pre><code>df.rolling("Time", period=f"{df.height}i", offset="0i").agg(
x=pl.arg_where(pl.col("Value") < pl.col("Value").first()).first() - 1
)
</code></pre>
<p>Gave incorrect results, so I transformed it to this:</p>
<pre class="lang-py prettyprint-override"><code>df = df.group_by_dynamic(
"Time",
every="1i",
period="10i",
include_boundaries=False,
closed="left",
).agg(
pl.col("Value").alias("Slices")
).select(
pl.col("Time"),
pl.col("Slices")
.list.eval(pl.arg_where(pl.element() < pl.element().first()))
.list.first()
.add(pl.col("Time"))
.alias("NewColumn"),
).collect()
</code></pre>
<p>SOLUTION 3: (exceeded RAM)</p>
<pre><code>df.with_columns(position=pl.col("Value").implode()).with_columns(
next_time=pl.col("position")
.list.gather(pl.int_ranges(pl.col("Time") - 1, df.height))
.list.eval(pl.arg_where(pl.element() < pl.element().first()))
.list.first()
+ pl.col('Time') # +1 and -1 cancel out here.
)
</code></pre>
<p>Exceeded RAM because of <code>pl.col("Value").implode()</code> since it means transposing 10M rows into lists of 10M elements...</p>
<p>So I accepted the <code>SOLUTION 1</code> that produced fastest results on real situation and also cleanest code IMHO (no lists, more concise, no need to do further joins...).</p>
<p>Finally, here are some further benchmarks after increasing the lookahead size.</p>
<pre><code>Lookahead Size | SOLUTION 1 Time | SOLUTION 2 Time |
10 | 0.83s | 3.41s |
20 | 1.24s | 3.83s |
50 | 2.24s | 5.34s |
100 | 4.45s | 8.10s |
200 | 8.58s | 17.86s |
500 | 20.24s | 66.93s |
1000 | 70.63s | 108.12s |
</code></pre>
|
<python><dataframe><python-polars>
|
2023-06-13 22:08:50
| 2
| 980
|
Jona Rodrigues
|
76,468,935
| 13,764,814
|
how to specify response for specific request methods for API action in django
|
<p>I have an API action in django which accepts GET and POST requests where each of them will have a distinct response schema. If a request is of method "GET" we will return a list, if it is "POST" we will return only the created entity...</p>
<pre><code>from drf_spectacular.utils import extend_schema
class SessionsViewSet(viewsets.ModelViewSet):
...
@extend_schema(
parameters=[query_params["extra_query_param"]],
responses={
"GET": serializers.ExampleSerializer(many=True),
"POST": serializers.ExampleSerializer(many=False),
},
)
@action(
detail=True,
url_path="example",
methods=["GET", "POST"],
filter_backends=[],
)
def example(self, request, pk, *args, **kwargs):
match request.method:
case "GET":
queryset = MyModel.objects.filter(session_pk_id=pk)
page = self.paginate_queryset(queryset)
serializer = get_serializer(page, many=True)
return self.get_paginated_response(serializer.data)
case "POST":
serializer = get_serializer(data=request.data, many=False
)
if serializer.is_valid():
serializer.save()
return Response(
serializer.data,
status=status.HTTP_201_CREATED,
)
else:
return Response(
serializer.errors, status=status.HTTP_400_BAD_REQUEST
)
case _:
return Response(status=status.HTTP_405_METHOD_NOT_ALLOWED)
</code></pre>
<p>The problem is that now I am not sure how to define this rule so that the swagger auto-schema will detect and present as such.</p>
<p>How can I explicitly state that a response schema or serializer belongs to a specific request method?</p>
|
<python><django><swagger><drf-spectacular>
|
2023-06-13 21:49:56
| 1
| 311
|
Austin Hallett
|
76,468,665
| 3,719,459
|
Why does object.__new__ accept parameters?
|
<p>Besides the obvious asking "again" about <code>__new__</code> and <code>__init__</code> in Python - I can ensure, I know what it does. I'll demonstrate some strange and to my opinion undocumented behavior, for which I seek professional help :).</p>
<hr />
<h1>Background</h1>
<p>I'm implementing several features like abstract methods, abstract classes, must-override methods, singletone behavior, slotted classes (automatic inference of <code>__slots__</code>) and mixin classes (deferred slots) using a user-defined meta-class called <code>ExtendedType</code>. The following code can be found as a whole at <a href="https://github.com/pyTooling/pyTooling/blob/dev/pyTooling/MetaClasses/__init__.py?ts=2#L293" rel="nofollow noreferrer">pyTooling/pyTooling</a> on the development branch.</p>
<p>Thus, the presented question is a stripdown and simplified variant demonstrating the strange behavior of <code>object.__new__</code>.</p>
<h1>Idea</h1>
<p>Depending on the internal algorithms of <code>ExtendedType</code>, it might decide a class <code>A</code> is <em>abstract</em>. If so, the <code>__new__</code> method is replaced by a dummy method raising an exception (<code>AbstractClassError</code>). Later, when a class <code>B(A)</code> inherits from <code>A</code>, the meta-class might come to the decision, <code>B</code> isn't abstract anymore, thus we want to allow the object creation again and allow calling for the original <code>__new__</code> method. Therefore, the original method is preserved as a field in the class.</p>
<p>To simplify the internal algorithms for the abstractness decision, the meta-class implements a boolean named-parameter <code>abstract</code>.</p>
<pre class="lang-py prettyprint-override"><code>class AbstractClassError(Exception):
pass
class M(type):
# staticmethod
def __new__(cls, className, baseClasses, members, abstract):
newClass = type.__new__(cls, className, baseClasses, members)
if abstract:
def newnew(cls, *_, **__):
raise AbstractClassError(f"Class is abstract")
# keep original __new__ and exchange it with a dummy method throwing an error
newClass.__new_orig__ = newClass.__new__
newClass.__new__ = newnew
else:
# 1. replacing __new__ with original (preserved) method doesn't work
newClass.__new__ = newClass.__new_orig__
return newClass
class A(metaclass=M, abstract=True):
pass
class B(A, abstract=False):
def __init__(self, arg):
self.arg = arg
b = B(5)
</code></pre>
<p>When instantiating <code>B</code> we'll try two cases:</p>
<ol>
<li>with a single parameter: <code>b = B(5)</code><br />
Error message:
<pre><code>TypeError: object.__new__() takes exactly one argument (the type to instantiate)
</code></pre>
</li>
<li>without a parameter: <code>b = B()</code><br />
Error message:
<pre><code>TypeError: B.__init__() missing 1 required positional argument: 'arg'
</code></pre>
</li>
</ol>
<p>The error message of the latter case is expected, because <code>__init__</code> of <code>B</code> expects an argument <code>arg</code>. The strange behavior is in case 1, where it reports <code>object.__new__()</code> takes no additional parameters except of the type.</p>
<p>So let's investigate if swapping methods worked correctly:</p>
<pre class="lang-py prettyprint-override"><code>print("object.__new__ ", object.__new__)
print("A.__new_orig__ ", A.__new_orig__)
print("A.__new__ ", A.__new__)
print("B.__new__ ", B.__new__)
</code></pre>
<p>Results:</p>
<pre><code>object.__new__ <built-in method __new__ of type object at 0x00007FFE30EDD0C0>
A.__new_orig__ <built-in method __new__ of type object at 0x00007FFE30EDD0C0>
A.__new__ <function M.__new__.<locals>.newnew at 0x000001CF11AE5A80>
B.__new__ <built-in method __new__ of type object at 0x00007FFE30EDD0C0>
</code></pre>
<p>So, the preserved method in <code>__new_orig__</code> is identical to <code>object.__new__</code> and is again the same after swapping back the <code>__new__</code> method in class <code>B</code>.</p>
<h1>Comparing with Ordinary Classes</h1>
<p>Let's take two classes <code>X</code> and <code>Y(X)</code> and instantiate them:</p>
<pre class="lang-py prettyprint-override"><code>class X:
pass
class Y(X):
def __init__(self, arg):
self.arg = arg
y = Y(3)
</code></pre>
<p>Of cause this will work, but are the <code>__new__</code> methods different?</p>
<pre class="lang-py prettyprint-override"><code>object.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
A.__new_orig__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
A.__new__ <function M.__new__.<locals>.newnew at 0x000001CD1FB459E0>
B.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
X.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
Y.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
</code></pre>
<p>Also <code>X</code> and <code>Y</code> use the same <code>__new__</code> method as <code>B</code> or <code>object</code>.</p>
<p>So let's instantiate <code>Y</code> and <code>B</code> and compare results:</p>
<pre class="lang-py prettyprint-override"><code>print("Y.__new__ ", Y.__new__)
y = Y(3)
print("y.arg ", y.arg)
print("B.__new__ ", B.__new__)
b = B(5)
print("b.arg ", y.arg)
</code></pre>
<p>Results:</p>
<pre><code>Y.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
y.arg 3
B.__new__ <built-in method __new__ of type object at 0x00007FFE3B61D0C0>
Traceback (most recent call last):
File "C:\Temp\newIstKomisch.py", line 67, in <module>
b = B(5)
^^^^
TypeError: object.__new__() takes exactly one argument (the type to instantiate)
</code></pre>
<p><strong>Question 1: Why does <strong>new</strong> accept parameters for Y, but not for B?</strong></p>
<h1>Creating Objects</h1>
<p>When an object is created, the <code>__call__</code> method of the meta-class is executed, which roughly translates to:</p>
<pre class="lang-py prettyprint-override"><code>class M(type):
...
def __call__(cls, *args, **kwargs):
inst = cls.__new__(cls, *args, **kwargs)
inst.__init__(*args, **kwargs)
return inst
</code></pre>
<p>It first calls <code>__new__</code> to create an instance and then it calls <code>__init__</code> to initialize the object. One might argue and say: "maybe there is magic behavior in <strong>call</strong>" to check if a build-in or user-defined method is called"...</p>
<p>Let's quickly check how <code>object.__new__</code> behaves:</p>
<pre class="lang-py prettyprint-override"><code>o = object.__new__(object, 1)
</code></pre>
<p>Result:</p>
<pre><code>TypeError: object() takes no arguments
</code></pre>
<p>Observation: The error message is different then what we got before. This says "no arguments", the other says "exactly one argument".</p>
<p>Alternatively, we can create an object by hand skipping the meta-class:</p>
<pre class="lang-py prettyprint-override"><code>y = Y.__new__(Y, 3)
print("Y.__new__(Y, 3) ", y)
y.__init__(3)
print("y.__init__(3) ", y.arg)
</code></pre>
<p>Result:</p>
<pre><code>Y.__new__(Y, 3) <__main__.Y object at 0x0000020ED770BD40>
y.__init__(3) 3
</code></pre>
<p>Here we clearly see <code>__new__</code> can accept additional parameters and ignore them.</p>
<p>So let's compare to manual instance creation of <code>B</code>:</p>
<pre class="lang-py prettyprint-override"><code>b = B.__new__(B, 5)
print("B.__new__(B, 5) ", b)
b.__init__(5)
print("b.__init__(5) ", b.arg)
</code></pre>
<p>Result:</p>
<pre><code>Traceback (most recent call last):
File "C:\Temp\newIstKomisch.py", line 51, in <module>
b = B.__new__(B, 5)
^^^^^^^^^^^^^^^
TypeError: object.__new__() takes exactly one argument (the type to instantiate)
</code></pre>
<p><strong>Question 2: How can the same method have different behavior and exception handling?</strong></p>
<hr />
<p><strong>Additional notes:</strong></p>
<ul>
<li>All behavior is implemented in <code>M.__new__</code> or swapped <code>XXX.__new__</code> methods instead of <code>M.__call__</code>, so the object creation time isn't influenced. Modifying the meta-classes call would have a huge performance impact.</li>
</ul>
<hr />
<p><strong>Attachements:</strong></p>
<ul>
<li><a href="https://gist.github.com/Paebbels/72ac61916134a75fbcb57d265e443733" rel="nofollow noreferrer">Full reproducer file</a></li>
</ul>
|
<python><python-3.x><instantiation><metaclass><object-construction>
|
2023-06-13 20:58:03
| 1
| 16,480
|
Paebbels
|
76,468,658
| 20,612,566
|
Match dict values and add a key to the dict if it matchs
|
<p>I have two lists of dicts.</p>
<pre><code>products = [
{
'offer': {
'name': 'Iphone',
'sku': '1234'
}
},
{
'offer': {
'name': 'Samsung',
'sku': '5678'
}
},
]
</code></pre>
<pre><code>prices = [
{
'id': '1234',
'price': {
'value': 500,
'currencyId': 'USD'
}
},
{
'id': '5678',
'price': {
'value': 600,
'currencyId': 'USD'
}
}
]
</code></pre>
<p>I have to add prices to products with matching them by fields "sku" and "id".
I want to get a new list of dicts</p>
<pre><code>[
{
'offer': {
'name': 'Iphone',
'sku': '1234',
'value': 500,
'currencyId': 'USD'
}
},
{
'offer': {
'name': 'Samsung',
'sku': '5678',
'value': 600,
'currencyId': 'USD'
}
}
]
</code></pre>
<p>I tried to do this:</p>
<pre><code>for product, price in zip(products, prices):
if product.get("offer", {}).get("sku", {}) == price.get("id", {}):
product.get("offer", {}).update(price.get("price", {}))
</code></pre>
<p>but it doesn't works. It updates only 14 products (in the list of 4000 products) and I can't understand why.</p>
|
<python><list><dictionary>
|
2023-06-13 20:56:43
| 4
| 391
|
Iren E
|
76,468,648
| 3,247,006
|
Function-Based Middleware vs Class-Based Middleware in Django
|
<p>I read <a href="https://docs.djangoproject.com/en/4.2/topics/http/middleware/" rel="nofollow noreferrer">the doc</a>, then I could understand that we can define <strong>a function-based middleware</strong> and <strong>a class-based middleware</strong> in Django as shown below but I could not understand the difference between them. *I'm learning <strong>Middleware</strong>:</p>
<h2>Function-Based Middleware:</h2>
<pre class="lang-py prettyprint-override"><code>def simple_middleware(get_response):
# One-time configuration and initialization.
def middleware(request):
# Code to be executed for each request before
# the view (and later middleware) are called.
response = get_response(request)
# Code to be executed for each request/response after
# the view is called.
return response
return middleware
</code></pre>
<h2>Class-Based Middleware:</h2>
<pre class="lang-py prettyprint-override"><code>class SimpleMiddleware:
def __init__(self, get_response):
self.get_response = get_response
# One-time configuration and initialization.
def __call__(self, request):
# Code to be executed for each request before
# the view (and later middleware) are called.
response = self.get_response(request)
# Code to be executed for each request/response after
# the view is called.
return response
</code></pre>
<p>My questions:</p>
<ol>
<li><p>What is the difference between <strong>a function-based middleware</strong> and <strong>a class-based middleware</strong> in Django?</p>
</li>
<li><p>Which should I use basically?</p>
</li>
</ol>
|
<python><django><function><class><django-middleware>
|
2023-06-13 20:55:14
| 1
| 42,516
|
Super Kai - Kazuya Ito
|
76,468,610
| 10,083,382
|
Markowitz Optimization in Python
|
<p>I've done a Python implementation of Markowitz portfolio optimization. The minimization function would ensure the risk is minimized while solving the optimizer to generate certain percentage return based on list of tickers. It also takes into account that sum of weights should be equal 1. Most of the time I generate the results are infeasible. What could be the reason?</p>
<pre><code>import pandas as pd
import numpy as np
from helper_functions.get_data import download_data
from scipy.optimize import minimize
def optimizer(tickers, start_date, end_date, required_return=0.02):
df = download_data(tickers, start_date, end_date)
# Calculate daily returns
returns = df['Close'].pct_change()
# Calculate mean returns and covariance
mean_returns = returns.mean()
cov_matrix = returns.cov()
# Number of portfolio assets
num_assets = len(mean_returns)
# Initialize weights
weights_initial = num_assets * [1. / num_assets,]
# Constraints
constraints = ({'type': 'eq', 'fun': lambda x: np.sum(x) - 1},
{'type': 'eq', 'fun': lambda x: np.sum(x*mean_returns) - required_return})
# Bounds
bounds = tuple((0,1) for asset in range(num_assets))
# Objective function
def objective(weights):
return np.dot(weights.T, np.dot(cov_matrix, weights))
# Optimize
result = minimize(objective, weights_initial, method='SLSQP', bounds=bounds, constraints=constraints)
# Return optimization results
return result
</code></pre>
|
<python><optimization><scipy><mathematical-optimization><scipy-optimize>
|
2023-06-13 20:48:33
| 1
| 394
|
Lopez
|
76,468,601
| 850,781
|
How do I control extra xlim/ylim space?
|
<p><code>matplotlib</code> automatically adds some small margin to plots so make them pretty:</p>
<pre><code>import matplotlib.pyplot as plt
plt.plot([1,2,3],[2,3,4])
</code></pre>
<p>the actual <code>xlim</code> is <code>(0.9, 3.1)</code> (as returned by <code>axes.get_xlim()</code>) instead of <code>(1,3)</code> dictated by the data.</p>
<p>This is usually reasonable except for percent plots where the limits are <code>(-80%,120%)</code> which makes for 40/140 wasted space.</p>
<p>How is this extra margin computed?
How can I affect it? (other than passing data max/min to <code>set_xlimit</code>).</p>
|
<python><matplotlib>
|
2023-06-13 20:47:51
| 0
| 60,468
|
sds
|
76,468,504
| 6,197,439
|
Using a list access/slicing specification in a string, to access an array in Python?
|
<p>I would like to have a list access (or slicing) specification in a string, such as <code>"[1]"</code> or <code>"[3:2]"</code> or <code>"[:-4]"</code> - and then apply it to an existing list in Python. Consider the following simple example:</p>
<pre class="lang-py prettyprint-override"><code>import sys
mylist = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
myargspec = None
if ( len(sys.argv) > 1 ):
myargspec = sys.argv[1]
else:
myargspec = ""
print( "Got command line argument: '{}'".format(myargspec) )
mylist_sliced_str = "mylist{}".format(myargspec)
print( "... attempting to access: '{}'".format(mylist_sliced_str))
mylist_sliced = eval(mylist_sliced_str)
print( "... result: {}".format(mylist_sliced))
</code></pre>
<p>Here is what the script produces with some different list access specifications on the command line:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 test.py
Got command line argument: ''
... attempting to access: 'mylist'
... result: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
$ python3 test.py "[2]"
Got command line argument: '[2]'
... attempting to access: 'mylist[2]'
... result: 3
$ python3 test.py "[2:4]"
Got command line argument: '[2:4]'
... attempting to access: 'mylist[2:4]'
... result: [3, 4]
$ python3 test.py "[:-4]"
Got command line argument: '[:-4]'
... attempting to access: 'mylist[:-4]'
... result: [1, 2, 3, 4, 5, 6]
</code></pre>
<p>Now this is all dandy fine, and it works - but it uses <code>eval</code>, which I really do not want to use.</p>
<p>Is there a better approach to something like this? (hopefully something that does not involve me parsing the list access specification string myself, and missing the part where regex <code>\d</code> does not match minus for negative numbers, and dealing with such errors).</p>
|
<python><list>
|
2023-06-13 20:28:48
| 1
| 5,938
|
sdbbs
|
76,468,406
| 5,868,293
|
Create many confusion-like matrices concatenated in python
|
<p>I have the following pandas dataframe</p>
<pre><code>import pandas as pd
df = pd.DataFrame({'cl1': ['A','A','A','A',
'A','A','A','A',
'D','D','D','D',
'D','D','D','D'],
'cl2': ['C','C','C','C',
'B','B','B','B',
'C','C','C','C',
'B','B','B','B'],
'p1p2': ['00','01','10','11',
'00','01','10','11',
'00','01','10','11',
'00','01','10','11'],
'val':[1,2,3,4,
10,20,30,40,
5,6,7,8,
50,60,70,80]})
df
cl1 cl2 p1p2 val
0 A C 00 1
1 A C 01 2
2 A C 10 3
3 A C 11 4
4 A B 00 10
5 A B 01 20
6 A B 10 30
7 A B 11 40
8 D C 00 5
9 D C 01 6
10 D C 10 7
11 D C 11 8
12 D B 00 50
13 D B 01 60
14 D B 10 70
15 D B 11 80
</code></pre>
<p>And I would like to create a plot that looks like this</p>
<p><a href="https://i.sstatic.net/V7lPz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/V7lPz.png" alt="enter image description here" /></a></p>
<p>How could I do that in python ?</p>
|
<python>
|
2023-06-13 20:11:32
| 2
| 4,512
|
quant
|
76,468,163
| 6,278,636
|
Check for files ending in .zip.part in specified directory in Python3
|
<p>So, I'm trying to write a Python script in Python3 that uses Selenium and saves files to a directory. Upon searching Google, I found that Selenium apparently doesn't have a function to wait for a download to complete, and as I'm using Firefox as the web driver, I instead want to check the download directory (defined as a string) in the script every second until there are no files ending in ".zip.part", which would indicate the download has completed. Is there any way to do this?</p>
|
<python><python-3.x>
|
2023-06-13 19:30:25
| 1
| 573
|
Eduardo Perez
|
76,467,976
| 3,448,136
|
How to convert naive, local timestamps to UTC in pandas dataframe?
|
<p>I have a python function that calls an API that returns a pandas dataframe. Two of the fields are datetime stamps, which pandas converted from strings.</p>
<p>The problem is that the API reports the datetimes in the local time zone. But the data used in the rest of the application is all UTC, so I need to convert the local timestamps to UTC timestamps.</p>
<p>Here is the code:</p>
<pre><code>my_df = get_external_data(begin_time, end_time)
print(my_df.head())
print(my_df.dtypes)
</code></pre>
<p>And the example data:</p>
<pre><code> jobid name user begin end
0 16138 bash noman 2022-12-13 11:33:33 2022-12-13 11:34:21
1 16139 bash noman 2022-12-13 11:34:22 2022-12-13 11:41:47
2 16140 bash noman 2022-12-13 11:41:49 2022-12-13 11:43:33
3 16141 bash noman 2022-12-13 11:49:36 2022-12-13 11:43:33
4 16142 bash noman 2022-12-13 11:57:08 2022-12-13 11:43:33
jobid int64
name string[python]
user string[python]
begin datetime64[ns]
end datetime64[ns]
dtype: object
</code></pre>
<p>The program will run in a variety of time zones, so the conversion must be based on the local system's time zone.</p>
<p>How do I convert the timestamps to UTC?</p>
|
<python><pandas><datetime><timezone><data-conversion>
|
2023-06-13 19:00:55
| 1
| 2,490
|
Lee Jenkins
|
76,467,712
| 3,380,902
|
Get h3 hex id using Databricks Mosaic library
|
<p>I am testing the Databricks Mosaic Spatial Grid Indexing method to obtain the <code>h3 hex</code> of a given lat, long.</p>
<pre><code># Get the latitude and longitude
latitude = 37.7716736
longitude = -122.4485852
# Get the resolution
resolution = 7
# Get the H3 hex ID
h3_hex_id = grid_longlatascellid(lit(latitude), lit(longitude), lit(resolution)).hex
# Print the H3 hex ID
print(h3_hex_id)
Column<'grid_longlatascellid(CAST(37.7716736 AS DOUBLE), CAST(-122.4485852 AS DOUBLE), 7)[hex]'>
</code></pre>
<p>How do I see the actual hex id in the code above?</p>
<p>when using <code>h3.geo_to_h3</code>, I get:</p>
<pre><code>h3.geo_to_h3(float(latitude), float(longitude), 7)
'872830829ffffff'
</code></pre>
<p>According the <a href="https://databrickslabs.github.io/mosaic/api/spatial-indexing.html" rel="nofollow noreferrer">docs</a>, the <code>h3 hex id</code> returned by <code>grid_longlatascellid</code> looks different from what is returned by <code>h3.geo_to_h3</code> method.</p>
<pre><code>h3.geo_to_h3(float(latitude), float(longitude), 7)
'872830829ffffff'
df = spark.createDataFrame([{'lon': 30., 'lat': 10.}])
df.select(grid_longlatascellid('lon', 'lat', lit(10))).show(1, False)
+----------------------------------+
|grid_longlatascellid(lon, lat, 10)|
+----------------------------------+
| 623385352048508927|
</code></pre>
<p>How do I obtain the <code>h3 hex id</code> using Databricks Mosaic library? I have the following imports and configurations:</p>
<pre><code>import h3
from mosaic import enable_mosaic
enable_mosaic(spark, dbutils)
from mosaic import *
spark.conf.set("spark.databricks.labs.mosaic.index.system", "H3")
</code></pre>
|
<python><databricks><geospatial><h3>
|
2023-06-13 18:23:00
| 1
| 2,022
|
kms
|
76,467,678
| 14,173,197
|
Using OCR to detect complex mathematical symbols
|
<p>I want to process some pdf files that contain equations in latex. I am using py-tesseract with eng+equ language, but it can't detect some math symbols like fractions, summations, and integrals from the pictures. It also adds some new characters to the results. This is the code I used to extract data. I can't share the images as they contain confidential data. Any suggestion on how to solve this issue? Note: I added the clean_text function to replace the long division bar that ocr is detecting as underscore before the dividend, which is not working.</p>
<pre><code>import json
import PyPDF2
import pytesseract
from pdf2image import convert_from_path
import cv2
import os
import re
import numpy as np
from tqdm import tqdm
# Set the language to "equ" (equation) for OCR
custom_config = r'--oem 3 --psm 6 -l eng+equ'
def preprocess_image(image):
# Convert PIL Image to NumPy array
image_array = np.array(image)
# Convert image to grayscale
gray = cv2.cvtColor(image_array, cv2.COLOR_BGR2GRAY)
# Apply adaptive thresholding
threshold = cv2.adaptiveThreshold(
gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 15, 10
)
return threshold
def clean_text(text):
# Remove unnecessary spaces and newlines
cleaned_text = text.replace("\n", " ")
# # Replace long division symbol (_) with division symbol after the term
# cleaned_text = re.sub(r'_\s*(\w+)\s+\((.+?)\)', r'\1/(\2)', cleaned_text)
# cleaned_text = re.sub(r'_\s*(\w+)\s+(\w+)', r'\1/\2', cleaned_text)
return cleaned_text
def is_textual_image(image):
# Convert image to grayscale if it has multiple channels
if len(image.shape) == 3 and image.shape[2] > 1:
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
else:
gray = image
# Apply adaptive thresholding
threshold = cv2.adaptiveThreshold(
gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 15, 10
)
# Calculate the percentage of white pixels in the image
total_pixels = threshold.shape[0] * threshold.shape[1]
white_pixels = np.sum(threshold == 255)
white_pixel_percentage = (white_pixels / total_pixels) * 100
# Calculate the percentage of non-zero pixels (non-textual regions) in the image
non_zero_pixels = np.sum(threshold != 0)
non_zero_pixel_percentage = (non_zero_pixels / total_pixels) * 100
# If the white pixel percentage is above a certain threshold and non-zero pixel percentage is below a threshold,
# consider it as a textual image
return white_pixel_percentage > 10 and non_zero_pixel_percentage < 30 # Adjust the thresholds as needed
def extract_text_from_pdf(pdf_path):
# Open the PDF file
with open(pdf_path, 'rb') as pdf_file:
# Create a PDF reader object
pdf_reader = PyPDF2.PdfReader(pdf_file)
# Find the start page of the introduction section
introduction_start_page = 0
for page_number, page in enumerate(pdf_reader.pages):
text = page.extract_text()
if re.search(r'Introduction', text, re.IGNORECASE):
introduction_start_page = page_number
break
# Determine the range of pages to process (excluding introduction pages)
start_page = introduction_start_page + 1
end_page = len(pdf_reader.pages)
# Get the first page and extract the title
first_page = pdf_reader.pages[0]
# Extract the text from the first-page
first_page_text = first_page.extract_text()
# Find the title case words in the extracted text
# title_match = re.search(r'\b(?!([A-Z]+\s)*[A-Z]+\b)[A-Z][A-Za-z\s]+\b', first_page_text)
title_match = re.search(r'(?<!\S)(?:[A-Z][a-z]+(?:\s+[A-Z][a-z]+)*)', first_page_text)
if title_match:
title = title_match.group(0)
else:
title = ""
print(title)
# Convert the PDF pages to images
images = convert_from_path(pdf_path)
# Extract text from each page
result = []
section_title = 'Introduction'
section_text = ''
print('Starting')
for page_number, image in tqdm(enumerate(images[start_page:end_page], start=start_page), total=len(images[start_page:end_page]), desc='Processing Pages'):
# Preprocess the image
processed_image = preprocess_image(image)
# Check if the image contains significant text
if is_textual_image(processed_image):
# Perform OCR using Tesseract
text = pytesseract.image_to_string(processed_image, config=custom_config)
# Compile the regular expressions
# footer_pattern = r'^\d{0,9}/\d{0,9}-[A-Z0-9]+\s+[A-Z0-9]+\s+\|\s+\d{0,9}-\d{0,9}-\d{0,9}$'
footer_pattern = re.compile(r'^.*-\s+\d{4}-\d{2}-\d{2}\S*')
section_title_pattern = re.compile(r'^\d+(\.\d+)*\s+')
symbol_pattern = re.compile(r'«<')
# Remove footer texts
lines = text.split('\n')
for line in lines:
# Check if the line is in the footer format and exclude it
if footer_pattern.match(line):
line = footer_pattern.sub('', line)
continue
if symbol_pattern.match(line):
line = symbol_pattern.sub('', line)
continue
# Check if the line starts with a number and is in header format
if section_title_pattern.match(line):
# Remove the leading number from the section title name
temp_section_title = section_title_pattern.sub('', line).strip()
if temp_section_title == section_title:
section_text += ' ' + line # Accumulate the lines of the same section
else:
# Store the previous title and content
section_text = clean_text(section_text.strip())
meta_data = {
'title': title,
'section': section_title,
'page_number': section_page_number # Use the page number of the section
}
paragraph_data = {
'content': section_text,
'meta': meta_data
}
print(paragraph_data)
result.append(paragraph_data)
# Update the current title and subtitle
# section_title = line.strip()
section_title = temp_section_title
section_text = ''
section_page_number = page_number # Update the page number of the section
else:
# Append the line to the current subtitle text
section_text += '\n' + line
# Store the last subtitle and content
section_text = clean_text(section_text.strip())
meta_data = {
'title': title,
'section': section_title,
'page_number': section_page_number
}
paragraph_data = {
'content': section_text,
'meta': meta_data
}
result.append(paragraph_data)
return result
# Path to the PDF file
pdf_path = '0_key_performance_indicator.pdf'
# Call the function to extract text from the PDF
extracted_data = extract_text_from_pdf(pdf_path)
# Save the extracted data as a JSON file
output_path = 'extracted_data_final.json'
with open(output_path, 'w') as json_file:
json.dump(extracted_data, json_file, indent=4)
print('Extraction completed. Data saved to', output_path)
</code></pre>
|
<python><ocr><tesseract><python-tesseract>
|
2023-06-13 18:16:36
| 0
| 323
|
sherin_a27
|
76,467,339
| 11,237,476
|
cannot build blender version 3.1.2
|
<p>I cloned blender and checkout <code>v3.1.2</code> (<code>git checkout v3.1.2</code>)
then <code>make update</code> then I have below error</p>
<pre><code>python3 ./build_files/utils/make_update.py
Blender git repository is in detached HEAD state, must be in a branch
make: *** [update] Error 1
</code></pre>
<p>how to build blender for python specific version?</p>
|
<python><blender>
|
2023-06-13 17:21:32
| 1
| 1,337
|
musako
|
76,467,300
| 11,644,523
|
In Snowflake SQL or Snowpark or Pandas, how do I filter records which have data for consecutive months?
|
<p>Given this sample table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: center;">sales</th>
<th style="text-align: right;">sales_date_monthend</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">5</td>
<td style="text-align: right;">2023-05-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">10</td>
<td style="text-align: right;">2023-04-30</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">11</td>
<td style="text-align: right;">2023-03-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">12</td>
<td style="text-align: right;">2023-02-27</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">1</td>
<td style="text-align: right;">2023-01-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">5</td>
<td style="text-align: right;">2022-12-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">2022-11-30</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">6</td>
<td style="text-align: right;">2022-10-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">9</td>
<td style="text-align: right;">2022-09-30</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">8</td>
<td style="text-align: right;">2022-08-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">7</td>
<td style="text-align: right;">2022-07-31</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">2023-06-30</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">2023-05-31</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">8</td>
<td style="text-align: right;">2022-08-31</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">7</td>
<td style="text-align: right;">2022-07-31</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">3</td>
<td style="text-align: right;">2023-06-30</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">2</td>
<td style="text-align: right;">2023-05-31</td>
</tr>
</tbody>
</table>
</div>
<p>I want to return or filter the id which have sales data for the <strong>last 12 consecutive months</strong>, i.e., data in every month. For example id=1 has sales each month, but id=2 is missing some data.</p>
<p>I think there must be an easier way in Python, I tried to generate a date range as so:</p>
<pre><code>SELECT last_day(dateadd(month,-seq8(0), DATE_TRUNC('month', CURRENT_DATE)))::date AS monthend
FROM table (generator(rowcount => 100))
</code></pre>
<p>And I want to use this month-end dates to compare, if ids do have data on each consecutive months. I thought it would be something like this in Snowpark, but I cannot get it</p>
<pre><code># Left join the table with the generated date range
joined_table = date_range.join(sample_table, on='month', how='left')
# Define the window specification
window_spec = Window.orderBy('month').rowsBetween(-11, 0)
# Add a lag column to compare the revenue with the previous 12 months
joined_table_with_lag = joined_table.withColumn('sales', lag('revenue').over(window_spec))
</code></pre>
|
<python><pandas><pyspark><snowflake-cloud-data-platform>
|
2023-06-13 17:15:54
| 1
| 735
|
Dametime
|
76,467,281
| 10,505,381
|
How to specify a path dependency to fallback to a wheel in pyproject.toml?
|
<p>I am currently implementing multiple serverless functions in Python which share a <code>base</code> package in the same repository.</p>
<p>Due to the restriction in folder structure of a Cloud Function, <code>../../libs/base</code> is not available in production environment.</p>
<p>What I planned to do is to build a wheel for <code>base</code> and place it in the same directory as the serverless function who uses it.</p>
<p>Regardless of all these context, I actually want to ask if it is possible to configure a <code>pyproject.toml</code> to look for <code>../libs/base</code> first, and fallback to <code>dist/base.whl</code> when the former does not exist?</p>
<p>I tried something like below:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry.dependencies]
base = [
{ path = "../../libs/base", develop = true },
{ path = "dist/base.whl", develop = false }
]
</code></pre>
<p>But <code>poetry</code> complained:</p>
<pre><code>Because clover-s3-events-watcher depends on both base (0.0.26) @ file:///workspace/libs/base and base @ file:///workspace/projects/function-1/dist/base-0.0.26-py3-none-any.whl, version solving failed.
</code></pre>
<p>Sorry for my broken English and appreciate any help :)</p>
|
<python><python-poetry><pyproject.toml>
|
2023-06-13 17:13:15
| 0
| 308
|
clifflau1120
|
76,467,157
| 9,908,877
|
Python Multithreading : Wait for multiple threads to execute another thread
|
<pre><code>for i in range (5):
thread = threading.Thread(target=some_fun,args=("thread : {}".format(i), fun_args))
thread.start()
thread6 = threading.Thread(target=some_fun,args=("thread : 6", fun_args))
thread6.start()
thread7 = threading.Thread(target=some_fun,args=("thread : 7", fun_args))
thread7.start()
</code></pre>
<p>Here I am creating 5 threads dynamically(1 to 5) and two threads(6 and 7) manually which will execute same function(some_fun) with different arguments.</p>
<p><strong>Problem</strong> : I want to execute Thread 6 just after Thread 1 and Thread 2 got executed, Thread 7 to execute just after Thread 4. Thread 6 not necessarily to wait for Thread 3, 4, 5 and 7 for completion and similarly Thread 7 will not wait for other threads except thread 4. There can be many dependencies of threads on another thread like this.</p>
<p>How can I achieve this? Please help</p>
|
<python><multithreading><parallel-processing><python-multithreading>
|
2023-06-13 16:56:40
| 0
| 371
|
Mohammad Sunny
|
76,467,116
| 10,966,677
|
django-ses module not working: Connect timeout when using django.core.mail
|
<p>I am using <code>django-ses==3.4.1</code> to send email via <code>django.core.mail</code> and both <code>send_mail()</code> and <code>EmailMessage()</code> get the connection timeout after one minute:</p>
<pre><code>botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://email-smtp.eu-central-1.amazonaws.com/
</code></pre>
<p>Using this configuration in <code>settings.py</code> according to the instructions on <a href="https://pypi.org/project/django-ses/" rel="nofollow noreferrer">https://pypi.org/project/django-ses/</a></p>
<pre><code>EMAIL_BACKEND = 'django_ses.SESBackend'
AWS_SES_USER = 'my-verified@email'
AWS_SES_ACCESS_KEY_ID = '-my-access-key-'
AWS_SES_SECRET_ACCESS_KEY = '-my-secret-access-key-'
AWS_SES_REGION_NAME = 'eu-central-1'
AWS_SES_REGION_ENDPOINT = 'email-smtp.eu-central-1.amazonaws.com'
</code></pre>
<p>Where I use the variable <code>AWS_SES_USER</code> as <code>from_email</code> while calling</p>
<pre><code>email = EmailMessage(subject, message, from_email, recipient_list)
email.content_subtype = 'html'
email.send()
</code></pre>
<p>I have also tested if the SES works without Django, i.e. simply using <code>smtplib</code> and it does.</p>
<p>The working example derived from <a href="https://realpython.com/python-send-email/#option-2-using-starttls" rel="nofollow noreferrer">https://realpython.com/python-send-email/#option-2-using-starttls</a></p>
<pre><code>smtp_server = "email-smtp.eu-central-1.amazonaws.com"
port = 587
# Create a secure SSL context
context = ssl.create_default_context()
# Try to log in to server and send email
try:
server = smtplib.SMTP(smtp_server,port)
server.ehlo() # Can be omitted
server.starttls(context=context) # Secure the connection
server.ehlo() # Can be omitted
server.login(AWS_SES_ACCESS_KEY_ID, AWS_SES_SECRET_ACCESS_KEY)
message = """\
Subject: Test SES
This message is sent from Python."""
receiver_email = 'my-recipient@email'
server.sendmail(AWS_SES_USER, receiver_email, message)
</code></pre>
<p>I have tried changing the parameters in <code>settings.py</code> in many ways, but without success.</p>
|
<python><django><amazon-ses>
|
2023-06-13 16:51:19
| 1
| 459
|
Domenico Spidy Tamburro
|
76,467,071
| 7,124,155
|
How can I get the name of a Pyspark dataframe as a string?
|
<p>I have eight Pyspark dataframes with names such as "store", "inventory", and "storage".</p>
<p>I need to create views for each one but in order to streamline, instead of saying</p>
<pre><code>store.createOrReplaceTempView('store_view') etc.
</code></pre>
<p>Is it possible to iterate through a list of dataframes and create a view? For example:</p>
<pre><code>df_list = ["store", "inventory", "storage"]
for d in df_list:
x = "convert dataframe d to a string"
d.createOrReplaceTempView(x)
</code></pre>
<p>How can I assign the dataframe name as a string to x?</p>
<p>I suppose you could do the opposite - have a list of strings but then how to get the dataframe from that?</p>
|
<python><dataframe><pyspark>
|
2023-06-13 16:44:28
| 2
| 1,329
|
Chuck
|
76,466,738
| 14,135,555
|
Numerical Optimization but with vectors
|
<p>I am new to solving something numerically, so I ask this to get a starter approach to a problem I have really clear.</p>
<p>So suppose that you have this optimization problem:</p>
<p><a href="https://i.sstatic.net/SoNi6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SoNi6.png" alt="enter image description here" /></a></p>
<p>Where you know the values of <code>\gamma_c</code>, <code>\gamma_\ell</code>, <code>\tau</code>, and <code>\Bar{\alpha}</code></p>
<p>I solved it by hand using Lagrange Multipliers and got a closed-form solution. So I have these answers for consumption (<code>c</code>), leisure (<code>\ell</code>), and labor supply (<code>h</code>)</p>
<p><a href="https://i.sstatic.net/KC2K4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KC2K4.png" alt="enter image description here" /></a></p>
<p>So the thing is that I can compute the optima (<code>c</code>,<code>\ell</code>, <code>h</code>) like this: (I did this in R, but the procedure in Python or Julia can be very similar)</p>
<pre><code>library(tidyverse)
w_par = c(4, 0.4)
i_par = c(3, 0.04)
e_par = c(0, 0.01^2)
gamma_l = 8; gamma_c = 50; tau = 0.08; Time = 24; alpha_bar = 0.7;N = 10000
gamma_h = Time - gamma_l
theta_true = c(gamma_h, gamma_c, alpha_bar, sqrt(e_par[2]))
set.seed(1)
df <- data.frame(w = exp(rnorm(n = N, mean = w_par[1], sd = sqrt(w_par[2]))),
I = exp(rnorm(n = N, mean = i_par[2], sd = sqrt(i_par[2]))),
e = rnorm(n = N, mean = e_par[1], sd = sqrt(e_par[2]))) %>%
mutate(a = alpha_bar + e,
h = a*gamma_h - (((1-a)*(I-gamma_c))/((1-tau)*w)),
L = Time - h,
C = (1-tau)*w*h+I,
U = a*log(C-gamma_c) + (1-a)*log(L-gamma_l))
</code></pre>
<p>Ok now take the first part of the dataframe that only contains these 4 variables (<code>w</code>, <code>I</code>,<code>e</code>, <code>a</code>), and the parameters.</p>
<p>Is there a way to obtain <code>h</code>, <code>L</code>, <code>C</code> (optima) with an optimizer? What steps should I follow to find the optimal columns? Do the columns obtained with the optimizer have the same values as the ones I got with the closed-form solution?</p>
<p>I don't need a super explicit answer, but something to start figuring out how to do this.</p>
<p>I state this small model because I know there's a closed-form solution. But for work, I have to get the optima for a model that doesn't have a closed-form solution, and all they told me is to solve it numerically (I don't know how to do it, but I am willing to learn)</p>
<p>Thanks in advance !</p>
<hr />
<p>Edit: There's a typo in my notation instead of T_i is just T</p>
<p>Edit 2: I put the Python tag because I don't mind having it solved in R or python as long I can retrieve U, L, and C</p>
|
<python><r><math><mathematical-optimization><numerical-methods>
|
2023-06-13 15:59:13
| 1
| 1,087
|
Jorge Paredes
|
76,466,731
| 5,317,332
|
Parse a dynamic HTML Page using PhantomJS and Python
|
<p>I would like to scrape an HTML page where content is not static but loaded with javascript.</p>
<p>I downgrade Selenium to version <code>3.3.0</code> in order to be able to support PhantomJS (<code>v4.9.x</code> does not support PhantomJS anymore) and wrote this code:</p>
<pre><code>from selenium import webdriver
driver = webdriver.PhantomJS('path-to-phantomJS')
driver.get('my_url')
p_element = driver.find_element_by_id(id_='my-id')
print(p_element)
</code></pre>
<p>The error I'm getting is:</p>
<pre><code>selenium.common.exceptions.NoSuchElementException: Message: "errorMessage":"Unable to find element with id 'my-id'"
</code></pre>
<p>The element I want to return is tag <code><section></code> with a certain <code>id</code> and all its subtags. The HTML content is like that:</p>
<pre><code><section id="my-id" class="my-class">...</section>
</code></pre>
|
<python><html><selenium-webdriver><phantomjs><webdriverwait>
|
2023-06-13 15:58:43
| 2
| 2,877
|
Alez
|
76,466,694
| 4,228,193
|
pandas chaining and the use of "inplace" parameter
|
<p>For pandas DataFrames in python, multiple member methods have an <code>inplace</code> parameter which purportedly allow you to NOT create a copy of the object, but rather to directly modify the original object*.</p>
<p>[*<em>Edited to add:</em> however, this proves to not be the case <a href="https://github.com/pandas-dev/pandas/issues/16529#issuecomment-323890422" rel="nofollow noreferrer">as pointed out by @juanpa.arrivillaga</a>. <code>inplace=True</code> DOES copy data and merely updates a pointer associated with the modified object, so has few advantages over a manual re-assignment to the name of the original object.]</p>
<p>Examples that I have seen online for the use of <code>inplace=True</code> do not include examples where chaining is used. This comment in a related SO thread may be an answer to why I don't see such examples anywhere:</p>
<blockquote>
<p><a href="https://stackoverflow.com/questions/69591787/assign-to-pandas-dataframe-in-place-with-method-chaining#comment123007080_69591820">you can't method chain and operate in-place. in-place ops return None and break the chain</a></p>
</blockquote>
<p>But, would "inplace chaining" work if you put an <code>inplace=True</code> in the last entry in the chain? [<em>Edited to add:</em> no] Or would that be equivalent to trying to change a copy created in an earlier link in the chain, which, as it is no longer your original object, is "lost" after the chain statement is complete? [<em>Edited to add:</em> yes; see answer <a href="https://stackoverflow.com/a/76466765/4228193">here</a>]</p>
<p>The use of large data objects would seem to preclude the notion of chaining without the ability to do so in-place, at least insofar as desire to maintain a low memory overhead and high computational speed. Is there an alternate implementation of pandas or, e.g. an equivalent of R's data.table available in python that might be appropriate for my needs? Or are my only options to not chain (and compute quickly) or to chain but make redundant copies of the data, at least transiently?</p>
|
<python><pandas><method-chaining><in-place>
|
2023-06-13 15:53:40
| 2
| 571
|
mpag
|
76,466,649
| 616,730
|
Why do some pandas methods require a list and not take tuples?
|
<p>I was surprised to find that in pandas, if you want to remove columns by name,</p>
<p><code>my_dataframe.drop(['column_name_one','column_name_two','column_name_three'])</code></p>
<p>works, but</p>
<p><code>my_dataframe.drop(('column_name_one','column_name_two','column_name_three'))</code></p>
<p>does not. Is there a technical reason, or a philosophical rationale, for this design decision?</p>
|
<python><pandas>
|
2023-06-13 15:48:39
| 1
| 7,140
|
foobarbecue
|
76,466,637
| 7,403,752
|
Scrapping KickStarter Through XHR requests
|
<p>I would like to scrape KickStarter via XHR requests. However, I get 403 as a response status code despite trying to replicate the exact request. Here is my code:</p>
<pre><code>import requests
url = "https://www.kickstarter.com/discover/advanced"
params = {
"google_chrome_workaround": "",
"woe_id": "0",
"sort": "magic",
"seed": "2811118",
"page": "1"
}
headers = {
"Accept": "application/json, text/javascript, */*; q=0.01",
"Accept-Encoding": "gzip, deflate, br",
"Accept-Language": "en-US,en;q=0.9,ru;q=0.8",
"Cookie": "copy-paste-from-the-browser",
"Referer": "https://www.kickstarter.com/discover/advanced",
"Sec-Ch-Ua": '"Not.A/Brand";v="8", "Chromium";v="114", "Google Chrome";v="114"',
"Sec-Ch-Ua-Mobile": "?0",
"Sec-Ch-Ua-Platform": '"Windows"',
"Sec-Fetch-Dest": "empty",
"Sec-Fetch-Mode": "cors",
"Sec-Fetch-Site": "same-origin",
"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36",
"X-Csrf-Token": "copy-paste-from-the-browser",
"X-Requested-With": "XMLHttpRequest"
}
response = requests.get(url, params=params, headers=headers)
print("Status Code:", response.status_code)
</code></pre>
<p>Can you please give me any advice on how to scrape this data?</p>
|
<python><python-requests><request>
|
2023-06-13 15:47:46
| 0
| 2,326
|
edyvedy13
|
76,466,621
| 16,306,516
|
Beautiful Soup is not filtering values
|
<p>hii all here i have a code snippet in which i am using beautiful soup to parse value</p>
<pre><code>soup = BeautifulSoup(self.content, 'html.parser')
count = 1
for assumptions in data_list:
for rec in soup.findAll():
if (assumptions.replace(' ', '') in rec.get_text().replace(' ', '') or (assumptions.replace(' ', '') == rec.get_text().replace(' ', ''))
if assumptions in data_list:
data_list.remove(assumptions)
count += 1
</code></pre>
<p>Here in <code>self.content</code> field I am getting the <code>HTML data</code>,</p>
<p><code>data_list</code> is a list of strings</p>
<p>I am searching those strings which are available in the data_list inside the HTML content which is partially working.</p>
<p>some strings that I am searching for are avaliable in HTMl content but still, I am unable to process those in the <code>if condition</code>.</p>
<p>What will be the <code>if condition</code> in which I can parse all the strings which are inside the data_list</p>
|
<python><python-3.x><odoo>
|
2023-06-13 15:45:32
| 0
| 726
|
Sidharth Panda
|
76,466,400
| 7,318,120
|
Why are docstrings and block comments suddenly the same color as a single line comment in Visual Studio Code?
|
<p>My Python docstrings & block comments in Visual Studio Code always used to be a different color to the single line comment. They use to be:</p>
<ul>
<li>docstrings & block comments: orange</li>
<li>single line comments: green</li>
</ul>
<p>I did a reinstall of Visual Studio Code this morning and the block comments and docstrings are now the same color as the single line comments (so all comments are green).</p>
<p>I have pasted an image (in case the colors here are different).</p>
<p><a href="https://i.sstatic.net/RAQJV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RAQJV.png" alt="enter image description here" /></a></p>
<p>I have always used the default dark mode (and never change the default settings). Has something changed or is there a setting that could change this back?</p>
|
<python><visual-studio-code>
|
2023-06-13 15:18:30
| 2
| 6,075
|
darren
|
76,466,388
| 237,258
|
Python mypy-mirror in precommit, how to find a list of additional dependencies
|
<p>I have .pre-commit set with the following config:</p>
<pre><code>repos:
- repo: https://github.com/pre-commit/mirrors-mypy
rev: v1.3.0
hooks:
- id: mypy
additional_dependencies: [types-requests,types-python-dateutil,types-PyYAML]
</code></pre>
<p>I wonder where this types- comes from. I tried to add a few, like say: <code>types-boto3</code> it works, but I do not understand why (or rather: to what it's mapped, by whom, and where is it defined).</p>
<p>I tried googling for the list of possible additional dependencies but my googling skills are not good enough. For example, googling after <code>types-boto3</code> leads me to: <a href="https://mypy-boto3.readthedocs.io/en/latest/" rel="nofollow noreferrer">https://mypy-boto3.readthedocs.io/en/latest/</a>. But again, why?</p>
|
<python><mypy><pre-commit-hook><pre-commit.com>
|
2023-06-13 15:16:42
| 0
| 3,356
|
Drachenfels
|
76,466,342
| 5,868,293
|
Change opacity based on another column in stacked bar, plotly express
|
<p>I have the following dataframe</p>
<pre><code>import pandas as pd
data = {
'user_type': ['L', 'L', 'L', 'L', 'L', 'L', 'L', 'L'],
'cluster_phase1': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A'],
'group': ['Control', 'Control', 'Control', 'Control', 'Test', 'Test', 'Test', 'Test'],
'phase': ['phase1', 'phase1', 'phase2', 'phase2', 'phase1', 'phase1', 'phase2', 'phase2'],
'col': [0, 1, 0, 1, 0, 1, 0, 1],
'value': [38.0, 62.0, 54.0, 46.0, 37.0, 63.0, 41.0, 59.0],
'counts': [18, 18, 13, 13, 94, 94, 89, 89]
}
df = pd.DataFrame(data)
print(df)
user_type cluster_phase1 group phase col value counts
0 L A Control phase1 0 38.0 18
1 L A Control phase1 1 62.0 18
2 L A Control phase2 0 54.0 13
3 L A Control phase2 1 46.0 13
4 L A Test phase1 0 37.0 94
5 L A Test phase1 1 63.0 94
6 L A Test phase2 0 41.0 89
7 L A Test phase2 1 59.0 89
</code></pre>
<p>I am creating the following stacked bar plot, using plotly express</p>
<pre><code>import plotly.express as px
fig = px.bar(df, x='cluster_phase1', y='value', color='col',
facet_row='group', facet_col='phase',
text='value', title='L')
fig.for_each_annotation(lambda a: a.update(text=a.text.split("=")[-1]))
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/xgYtX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xgYtX.png" alt="enter image description here" /></a></p>
<p>I would like the bars that have <code>counts < 20</code> to be more grey, or have a smaller opacity,
in this case the top 2 bars.</p>
<p>I have tried adding</p>
<pre><code>for i, count in enumerate(df['counts']):
opacity = 0.2 if count < 20 else 1.0
fig.update_traces(opacity=opacity, selector=f"x[{i + 1}]")
</code></pre>
<p>But it doesnt work.
Any ideas how I could achieve that ?</p>
<p><strong>UPDATED DATA TO TEST</strong></p>
<pre><code>data1 = {
'user_type': ['L', 'L', 'L', 'L', 'L', 'L', 'L', 'L'],
'cluster_phase1': ['A', 'A', 'A', 'A', 'A', 'A', 'A', 'A'],
'group': ['Control', 'Control', 'Control', 'Control', 'Test', 'Test', 'Test', 'Test'],
'phase': ['phase1', 'phase1', 'phase2', 'phase2', 'phase1', 'phase1', 'phase2', 'phase2'],
'col': [0, 1, 0, 1, 0, 1, 0, 1],
'value': [38.0, 62.0, 54.0, 46.0, 37.0, 63.0, 41.0, 59.0],
'counts': [18, 18, 13, 13, 94, 94, 89, 89]
}
data2 = {
'user_type': ['L', 'L', 'L', 'L', 'L', 'L', 'L', 'L'],
'cluster_phase1': ['B', 'B','B','B','B','B','B','B',],
'group': ['Control', 'Control', 'Control', 'Control', 'Test', 'Test', 'Test', 'Test'],
'phase': ['phase1', 'phase1', 'phase2', 'phase2', 'phase1', 'phase1', 'phase2', 'phase2'],
'col': [0, 1, 0, 1, 0, 1, 0, 1],
'value': [38.0, 62.0, 54.0, 46.0, 37.0, 63.0, 41.0, 59.0],
'counts': [40, 40, 13, 13, 94, 94, 13, 13]
}
df = pd.concat([pd.DataFrame(data1), pd.DataFrame(data2)])
</code></pre>
|
<python><plotly>
|
2023-06-13 15:11:07
| 1
| 4,512
|
quant
|
76,466,298
| 1,023,123
|
How to click on context menu option appears after long press on Android app using appium python?
|
<p>I am trying to automate long press action followed with clicking on the static shortcuts available in context menu.</p>
<p><strong>Details:</strong></p>
<p><strong>language</strong> - python</p>
<p><strong>automation lib</strong> - appium</p>
<p><strong>android app</strong> - Youtube</p>
<p><strong>static shortcut</strong> - Subscriptions</p>
<p><strong>Sample Image of what I want to click</strong> - <a href="https://i.sstatic.net/tBNI0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tBNI0.jpg" alt="enter image description here" /></a></p>
<p>I am able to perform longpress action on the Youtube App <strong>but unable to click on the shortcut</strong>( for example - Subscriptions) available in context menu.</p>
<p>Below is the code :</p>
<pre><code>from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy
import time
import os
from appium import webdriver
from appium.webdriver.common.appiumby import AppiumBy
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.by import By
from selenium.common.exceptions import StaleElementReferenceException
from appium.webdriver.common.touch_action import TouchAction
from selenium.webdriver.common.action_chains import ActionChains
def reopen(android_package="com.google.android.youtube", udid="udid_xxx"):
"""
Creating a new driver object
"""
time.sleep(2)
desired_caps = {
"deviceName": "MI",
"platformName": "Android",
"version": "13",
"automationName": "Appium",
"noReset": True,
"chromedriverExecutable": os.path.join(os.getcwd(), "chromedriver.exe"),
"showChromedriverLog": True,
"chromeOptions": {
"androidPackage": android_package,
"w3c": False
},
"autoGrantPermissions": True,
"newCommandTimeout": 3000,
"wdaStartupRetryInterval": 20000,
"wdaStartupRetries": 4,
"adbExecTimeout": 20000,
"ignoreHiddenApiPolicyError": True,
"udid": udid
}
# Create web driver
driver = webdriver.Remote("http://localhost:4723/wd/hub", desired_caps)
wait = WebDriverWait(driver, timeout=5)
by = AppiumBy.XPATH
# searching for YouTube app
value = "//android.widget.ImageView[@content-desc='YouTube']"
element = wait.until(EC.element_to_be_clickable((by, value)))
# Performing logn_press action on YouTube app
actions = TouchAction(driver)
actions.long_press(element).perform()
time.sleep(5)
# Pressing the Subscriptions option in context menu
tap_element = driver.find_element(AppiumBy.XPATH, "//android.widget.TextView[@text='Subscriptions']")
#element = wait.until(EC.element_to_be_clickable((AppiumBy.XPATH, tap_element)))
print(tap_element.is_displayed()) # output - True
print(tap_element.text) # output - Subscriptions
print(tap_element.is_enabled) # output - <bound method WebElement.is_enabled of <appium.webdriver.webelement.WebElement
# (session="e811b0a3-c59f-4042-9613-ba650fae1d21", element="00000000-0000-0a1b-ffff-ffff000019c8")>>
tap_element.click() # Not working
if __name__ == '__main__':
reopen()
</code></pre>
|
<python><automation><appium><appium-android><python-appium>
|
2023-06-13 15:05:10
| 1
| 378
|
Gobeaner
|
76,466,236
| 2,059,689
|
Pandas read_csv() - Case insensitive column names for converters/dtypes
|
<p>I'm using <code>pd.read_csv()</code> to load file that might have column names of unknown case. Using lambda for <code>usecols</code> argument as described <a href="https://stackoverflow.com/a/62412830/2059689">here</a>, I can choose which columns to load regardless the case, and with method from <a href="https://stackoverflow.com/questions/19726029/how-can-i-make-pandas-dataframe-column-headers-all-lowercase">here</a> I can access these columns like this:</p>
<pre><code>df = pd.read_csv(myfile, usecols=lambda x: x.lower() in ['foo', 'bar'])
df.columns = df.columns.str.lower()
print(df['foo']) # Works no matter which column name case is in the file
</code></pre>
<p>But is there a way to use the <code>dtypes</code>/<code>converters</code> parameters in this case?</p>
<p>I had two workaround ideas in mind:</p>
<ol>
<li>Load all the data as strings and do the conversions in my code later. This seems way less performant.</li>
<li>Open the file just to read the header, analyze it, then open again with prior knowledge of the actual case of the column names (wrap it all as a function).</li>
</ol>
<p>Are there other approaches?</p>
|
<python><pandas>
|
2023-06-13 14:57:21
| 1
| 3,200
|
vvv444
|
76,466,157
| 11,688,559
|
Apache Beam Python script fails to write to Google Big Query
|
<p>I cannot write my data to Google Big Query. I keep getting errors such as</p>
<pre><code>AttributeError: Error trying to access nonexistent attribute `0` in write result. Please see __documentation__ for available attributes. [while running '[8]: Write to BigQuery']
</code></pre>
<p>I currently call and API. The API either gives me data as below or returns a <code>None</code>. In the former, I want to write to an existing Google Big Query table.</p>
<pre><code>[{'datetime': '2023-06-13T14:09:57',
'user_location': 'Hungry Lion',
'store_location': 'ZEVENWACHT MALL',
'device_name': 'UHC',
'battery_level': 'medium',
'probe_name': 'UPRIGHT 1',
'probe_type': 'K Type',
'probe_number': 1,
'channel_name': 'Temperature',
'channel_type': 'Temperature',
'channel_number': 1,
'record_type': 'reading',
'value': 72.2,
'channel_unit': '°C'},
{'datetime': '2023-06-13T14:09:57',
'user_location': 'Hungry Lion',
'store_location': 'ZEVENWACHT MALL',
'device_name': 'UHC',
'battery_level': 'medium',
'probe_name': 'UPRIGHT 2',
'probe_type': 'K Type',
'probe_number': 2,
'channel_name': 'Temperature',
'channel_type': 'Temperature',
'channel_number': 2,
'record_type': 'reading',
'value': 66.6,
'channel_unit': '°C'}]
</code></pre>
<p>To me this seems like valid data. I then run the following pipeline which handles the case when a None is returned by the API. In the below pipeline, I first read some data from Cloud Storage as to make the API call. The API is then called which returns data that looks like the above list of dictionaries.</p>
<pre><code>SCHEMA = 'datetime:DATETIME,user_location:STRING,store_location:STRING,'+\
'device_name:STRING,battery_level:STRING,probe_name:STRING,'+\
'probe_type:STRING,probe_number:INTEGER,channel_name:STRING,'+\
'channel_type:STRING,channel_number:INTEGER,record_type:STRING,'+\
'value:FLOAT,channel_unit:STRING'
pipeline = beam.Pipeline()
def api_call(start_datetime):
API = EasyLogCloudAPI(userGUID,APIToken)
start_datetime = datetime.datetime.strptime(start_datetime,
DATETIME_FORMAT)
end_datetime = datetime.datetime.today()
history = API.get_history(start_datetime=start_datetime,
end_datetime=end_datetime,
df=False)
if history is None:
return None
return history # a list of dictionaries
class WriteToBigQuery(beam.DoFn):
def process(self,element):
if element[0] is None:
return None
return element | 'Write to BigQuery' >> beam.io.WriteToBigQuery(TABLE,schema=SCHEMA,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
custom_gcs_temp_location=TEMP)
api2bq = (
pipeline
| 'Read last seen datetime' >> beam.io.ReadFromText(DATETIME)
| 'EasyLog Cloud API call' >> beam.Map(api_call)
| 'Write to BigQuery' >> beam.ParDo(WriteToBigQuery())
)
pipeline.run()
</code></pre>
<p>I cannot understand why it will not batch load data to my Big Query table. The schema is correct so I have ruled that out as well. Perhaps, I am handling the logic that resolves the return of None in a manner that is incorrect?</p>
<p>Please help me understand what I am doing wrong.</p>
<p>Furthermore, when I run the modification that does not include the logic then I get a different error:</p>
<pre><code>api2bq = (
pipeline
| 'Read last seen datetime' >> beam.io.ReadFromText(DATETIME)
| 'EasyLog Cloud API call' >> beam.Map(api_call)
| 'Write to BigQuery' >> beam.io.WriteToBigQuery(TABLE,schema=SCHEMA,
write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
custom_gcs_temp_location=TEMP)
)
</code></pre>
<p>The new error is:</p>
<pre><code>RuntimeError: BigQuery job beam_bq_job_LOAD_AUTOMATIC_JOB_NAME_LOAD_STEP_460_6b53b91f5dd19adf8697c0ca6de4564c_08955518ebba4e399d41cf49592523f6 failed. Error Result: <ErrorProto
location: 'gs://api-test-321/temp_multi/bq_load/1418752d549c42b3a935ecbbecbdde94/tensile-proxy-386313.dataflow.another-table/5324c897-225d-47b2-8520-15c3af69929c'
message: 'Error while reading data, error message: JSON table encountered too many errors, giving up. Rows: 1; errors: 1. Please look into the errors[] collection for more details. File: gs://api-test-321/temp_multi/bq_load/1418752d549c42b3a935ecbbecbdde94/tensile-proxy-386313.dataflow.another-table/5324c897-225d-47b2-8520-15c3af69929c'
reason: 'invalid'> [while running '[9]: Write to BigQuery/BigQueryBatchFileLoads/TriggerLoadJobsWithoutTempTables/ParDo(TriggerLoadJobs)']
</code></pre>
|
<python><google-bigquery><google-cloud-dataflow><apache-beam><apache-beam-io>
|
2023-06-13 14:48:18
| 1
| 398
|
Dylan Solms
|
76,466,128
| 10,143,378
|
Poor performance using SQL with Dash
|
<p>I have a Dash application to create a dashboard with graph, and all the data of the graphs come from and SQL Server database.
the issue I face is each of the call to the database takes too much time. Here what the stuff I have done to test the difference performance :</p>
<p>My function look like that (I have simplify a lot the inputs)</p>
<pre><code>@app.callback(
Output(component_id=component_id, component_property='figure'),
Input(component_id='adm_level_radio', component_property='value'),
prevent_initial_call=True
)
def myfunction(param):
# do some computation
myquery = "SELECT Id, Data FROM MyTable"
df = sqlmanager.read(myquery)
# do some computation
</code></pre>
<p>the computation time is pretty instant, and in the sql part, it takes 20 secondes.
I have try then to run only the SQL part in a normal python file (without Dash) to be sure it is nor related to a non optimize query : it runs in 2 secondes.
I have also try to run a local Flask server that just execute the query and return the data, and replace the SQL part in Dash by a request to the Flask Server, again, the query is executed in 2 seconds.</p>
<p>So is there anything I need to know about SQL and Dash application ? Any good practice that maybe I didn't follow?</p>
<p>I know the issue is not very well explain, but I don't see what can I put more instead of all the Dash Application.
If you need something (configuration of the Dash App) to help me, don't hesitate to put in comment I will edit the post to add all relevant information, but for now I just ask for global idea where the time can be lose in this dash app (it is the first time I develop using Dash so I probably miss something important)</p>
<p>Thanks</p>
|
<python><sql><plotly-dash>
|
2023-06-13 14:45:22
| 0
| 576
|
kilag
|
76,465,741
| 8,510,149
|
Rank using ntile based on specific window function (PySpark)
|
<p>In the code below I create a pyspark dataframe with the following features, id, date and score. The date represents quarters. Then I rank 'score' using ntile. The rank is global, covering the complete dataframe.</p>
<p>However, I want to perform the ranking for each observation based on the last two years of data.</p>
<p>What is an elegant solution to that problem? Is it for example possible to do a partition using a condition?</p>
<pre><code>import pandas as pd
import numpy as np
from pyspark.sql import SparkSession
# Generate the data using pandas
num_observations = 1000
num_dates = 20
dates = pd.date_range(start='2017-01-01', periods=num_dates, freq='3M')
df_pandas = pd.DataFrame({
'id': range(1, num_observations + 1),
'date': sorted(list(dates) * (num_observations // num_dates + 1))[:num_observations],
'score': np.random.normal(loc=400, scale=25, size=num_observations).astype(int)
})
# Create SparkSession
spark = SparkSession.builder.getOrCreate()
# Convert pandas DataFrame to PySpark DataFrame
df_spark = spark.createDataFrame(df_pandas)
from pyspark.sql import functions as F
from pyspark.sql.window import Window
window = Window.orderBy(F.col('score'))
# Global rank
df_spark = df_spark.withColumn('ranked_score', F.ntile(10).over(window))
# Window rank
</code></pre>
|
<python><pyspark>
|
2023-06-13 14:01:47
| 1
| 1,255
|
Henri
|
76,465,606
| 11,227,857
|
Why is pip3.9 trying to use the debian 12 python3.11?
|
<p>I was using Debian 11 and had installed python3.9 in addition to the Debian 11 system version of python 3.9.</p>
<p>I installed jupyter notebooks on my version of python3.9 (not the debian included version) and was able to use <code>jupyter lab</code> to run jupyter labs. From here I selected my venv kernel depending on the project I was working on to work within a venv.</p>
<p>Now I have updated to Debian 12 which has a system version of python 3.11. But I'm having issues finding my existing installation of python 3.9 and all my installed packages such as jupyter. For exmaple, if I run:
<code>jupyter lab</code>
I get this:</p>
<pre><code>Traceback (most recent call last):
File "/home/gary/.local/bin/jupyter-lab", line 5, in <module>
from jupyterlab.labapp import main
ModuleNotFoundError: No module named 'jupyterlab'
</code></pre>
<p>Here are the outputs for various console commands:</p>
<pre><code>which pip /home/<my_username>/.local/bin/pip
which pip3 /home/<my_username>/.local/bin/pip3
which pip3.9 /home/<my_username>/.local/bin/pip3.9
which pip3.11 /usr/bin/pip3.11
which python
which python3 /usr/local/bin/python3
which python3.9 /usr/local/bin/python3.9
which python3.11 /usr/bin/python3.11
</code></pre>
<p><code>printenv</code> includes:</p>
<pre><code>PATH=
/home/<my_username>/.local/bin:
/usr/local/bin:
/usr/bin:
/bin:
/usr/local/games:
/usr/games:
/snap/bin
</code></pre>
<p>In the <code>/home/<my_username>/.local/bin/</code> directory I can see all the jupyter files installed. If I run <code>python3.9</code> to open a python shell within a terminal, I am able to import jupyter successfully.</p>
<p>However when I run <code>pip3.9 list</code> jupyter is note in there. If I try to run <code>pip3.9 install jupyter-lab</code> it gives me this error:</p>
<pre><code>error: externally-managed-environment
× This environment is externally managed
╰─> To install Python packages system-wide, try apt install
python3-xyz, where xyz is the package you are trying to
install.
If you wish to install a non-Debian-packaged Python package,
create a virtual environment using python3 -m venv path/to/venv.
Then use path/to/venv/bin/python and path/to/venv/bin/pip. Make
sure you have python3-full installed.
If you wish to install a non-Debian packaged Python application,
it may be easiest to use pipx install xyz, which will manage a
virtual environment for you. Make sure you have pipx installed.
See /usr/share/doc/python3.11/README.venv for more information.
note: If you believe this is a mistake, please contact your Python installation or OS distribution provider. You can override this, at the risk of breaking your Python installation or OS, by passing --break-system-packages.
hint: See PEP 668 for the detailed specification.
</code></pre>
<p>It's referencing 3.11, indicating it's trying to run this command with the debian system python installation, even though I explicitly called <code>python3.9</code>. How do I fix this without breaking the system installation?</p>
|
<python><linux><debian>
|
2023-06-13 13:45:42
| 1
| 530
|
gazm2k5
|
76,465,558
| 7,236,386
|
ansible-lint failing with: AttributeError: 'str' object has no attribute 'resolve'
|
<p>While running a github linter workflow on a new CentOS VM that I set up, I got the following error:</p>
<pre class="lang-none prettyprint-override"><code>Run ansible-lint --parseable ./ansible
Traceback (most recent call last):
File "/usr/local/bin/ansible-lint", line 8, in <module>
sys.exit(_run_cli_entrypoint())
File "/usr/local/lib/python3.9/site-packages/ansiblelint/__main__.py", line 344, in _run_cli_entrypoint
sys.exit(main(sys.argv))
File "/usr/local/lib/python3.9/site-packages/ansiblelint/__main__.py", line 197, in main
initialize_options(argv[1:])
File "/usr/local/lib/python3.9/site-packages/ansiblelint/__main__.py", line 110, in initialize_options
options.cache_dir = get_cache_dir(options.project_dir)
File "/usr/local/lib/python3.9/site-packages/ansible_compat/prerun.py", line 13, in get_cache_dir
basename = project_dir.resolve().name.encode(encoding="utf-8")
AttributeError: 'str' object has no attribute 'resolve'
Error: Process completed with exit code 1.
</code></pre>
<p>I tried with the following package versions which all failed with the above error:</p>
<ul>
<li>python3: 3.9.16</li>
<li>ansible: 8.0.0 or 6.6.0</li>
<li>ansible-compat: 4.1.2 or 2.2.7</li>
<li>ansible-core: 2.15.0 or 2.13.9</li>
<li>ansible-lint: 6.17.0 or 6.10.2</li>
</ul>
<p>Using pip, the following dependencies were also installed:</p>
<ul>
<li>flake8</li>
<li>yamllint==1.28.0</li>
<li>codespell</li>
<li>comment_parser</li>
</ul>
<p>Is there some linter dependency or something else that I am missing?</p>
<p>The step within the linter job is:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: 'Lint yaml - ansible, GH actions and GH workflows'
run: yamllint -f parsable ./
if: always()
</code></pre>
|
<python><python-3.x><ansible><ansible-lint>
|
2023-06-13 13:39:58
| 2
| 347
|
Annemie
|
76,465,531
| 5,681,397
|
Python keep anchors when modifying value in Yaml
|
<p>I have a yaml file with anchors and aliases similar to below one:</p>
<pre class="lang-yaml prettyprint-override"><code>dev:
value1: &dev-value1 "value1"
value2: &dev-value2 "value2"
test:
value1: *dev-value1
value2: *dev-value2
</code></pre>
<p>I would like to change the value for key dev/value1 but also to preserve the anchor.</p>
<p>With a simple script:</p>
<pre class="lang-py prettyprint-override"><code>import ruamel.yaml
yaml = ruamel.yaml.YAML()
with open("test.eyaml") as eyaml_file:
data = yaml.load(eyaml_file)
data["dev"]["value1"] = "new_value"
with open("new.eyaml", "w") as new_eyaml_file:
yaml.dump(data, new_eyaml_file)
</code></pre>
<p>It changes the value but the anchor is now missing and the modified file looks like below:</p>
<pre class="lang-yaml prettyprint-override"><code>dev:
value1: new_value
value2: &dev-value2 value2
test:
value1: &dev-value1 value1
value2: *dev-value2
</code></pre>
<p>And I would like it be looks like this:</p>
<pre class="lang-yaml prettyprint-override"><code>dev:
value1: &dev-value1 "new_value"
value2: &dev-value2 "value2"
test:
value1: *dev-value1
value2: *dev-value2
</code></pre>
<p>How to achieve this?</p>
|
<python><yaml><ruamel.yaml>
|
2023-06-13 13:37:02
| 1
| 494
|
bugZ
|
76,465,446
| 14,752,392
|
Celery is not updating Django database related task
|
<p>I have a somewhat heavy task that involves bulk creating data into a django models with postgres database. I am using celery to run the task see task function below</p>
<pre><code>from os import PathLike
import pandas as pd
from app.models import MyModel
from celery import shared_task
@shared_task()
def db_operation(csv_file: PathLike) -> None:
dataframe = pd.read_csv(csv_file)
dataframe.dropna(subset="code", inplace=True)
dataframe = dataframe.drop_duplicates(subset=["code"], keep="first")
dataset = [
MyModel(code=row.code, title=row.title)
for row in dataframe.itertuples()
]
MyModel.objects.bulk_create(dataset)
return "finished"
</code></pre>
<p>I call the task when the API is called using the celery <code>delay</code> like so <code>db_operation.delay("path_to_file.csv")</code></p>
<p>On my flower dashboard I can see that the task is executed when I call it and it shows as successful, however the database does not get updated. If I run the same task without the celery <code>delay</code>, it updates the database.
Just to be sure that celery is working I created another task</p>
<pre><code>from time import time
@shared_task()
def dummy_task_for_confirmation()
time.sleep(20)
print("hello world")
return 2*20
</code></pre>
<p>this operation works without any issues.</p>
<p>This is making me think that the issue is somehow coming from the django ORM not being able to execute database operations on celery, but I maybe very wrong. I want to know how to execute the <code>db_operation</code> task on celery so that it updates the database successfully.</p>
<p>I am using redis as the broker and the only celery related variables in my settings is</p>
<pre><code># Celery settings
CELERY_BROKER_URL = os.getenv("CELERY_BROKER")
CELERY_RESULT_BACKEND = os.getenv("CELERY_RESULT_BACKEND")
</code></pre>
<p>my model looks like so</p>
<pre><code>class MyModel(models.Model):
code = models.CharField(max_length=25, unique=True)
title = models.CharField(max_length=1024)
</code></pre>
|
<python><django><django-models><redis><celery>
|
2023-06-13 13:24:54
| 1
| 918
|
se7en
|
76,465,343
| 188,331
|
HuggingFace Transformers model config reported "This is a deprecated strategy to control generation and will be removed soon"
|
<p>I am training a sequence-to-sequence model using HuggingFace Transformers' <code>Seq2SeqTrainer</code>. When I execute the training process, it reports the following warning:</p>
<blockquote>
<p>/path/to/python3.9/site-packages/transformers/generation/utils.py:1219: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use a generation configuration file (see <a href="https://huggingface.co/docs/transformers/main_classes/text_generation" rel="noreferrer">https://huggingface.co/docs/transformers/main_classes/text_generation</a>)</p>
</blockquote>
<p>Note the HuggingFace documentation link is dead.</p>
<p>I use the following codes:</p>
<pre><code>model = BartForConditionalGeneration.from_pretrained(checkpoint)
model.config.output_attentions = True
model.config.output_hidden_states = True
training_args = Seq2SeqTrainingArguments(
output_dir = "output_dir_here",
evaluation_strategy = IntervalStrategy.STEPS, #"epoch",
optim = "adamw_torch", # Use new PyTorch optimizer
eval_steps = 1000, # New
logging_steps = 1000,
save_steps = 1000,
learning_rate = 2e-5,
per_device_train_batch_size = batch_size,
per_device_eval_batch_size = batch_size,
weight_decay = 0.01,
save_total_limit = 3,
num_train_epochs = 30,
predict_with_generate=True,
remove_unused_columns=True,
fp16 = True,
push_to_hub = True,
metric_for_best_model = 'bleu', # New or "f1"
load_best_model_at_end = True # New
)
trainer = Seq2SeqTrainer(
model = model,
args = training_args,
train_dataset = train_ds,
eval_dataset = eval_ds,
tokenizer = tokenizer,
data_collator = data_collator,
compute_metrics = compute_metrics,
callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
)
trainer.train()
</code></pre>
<p>The training process can be completed without any problem, but I am concerned about the deprecation warning. How should I modify the codes to solve the problem?</p>
<p>Version:</p>
<ul>
<li>Transformers 4.28.1</li>
<li>Python 3.9.7</li>
</ul>
|
<python><huggingface-transformers>
|
2023-06-13 13:12:33
| 1
| 54,395
|
Raptor
|
76,465,225
| 4,451,521
|
Unit testing a function that produces csv files
|
<p>I have a function that produces csv files.
These files are written based on the contents of dataframes.</p>
<p>I would like to write unit tests for this function.</p>
<p>I suppose I can read a csv file with the correct intended values into a dataframe and then make an assert.</p>
<p>However some of the values of these dataframes are not <em>exactly</em> the same. For example they can be numbers like 8.00 and 7.999999</p>
<p>My question is how can I write assertions comparing two dataframes that check their approximate equality?</p>
|
<python><pandas><csv><unit-testing><pytest>
|
2023-06-13 13:00:04
| 3
| 10,576
|
KansaiRobot
|
76,464,908
| 12,243,638
|
fillna by avoiding row wise operation in pandas
|
<p>I have a data frame in which there is a column containing several NaN values. The dataframe looks like this:</p>
<pre><code> col_1 col_2
2022-10-31 99.094 102.498
2022-11-30 99.001 101.880
2022-12-31 NaN 108.498
2023-01-31 NaN 100.500
</code></pre>
<p>I want to fill those NaN based on the simple calculation below:
desired_val = (previous value in col_1 * current value in col_2) / previous value in col_2</p>
<p>which means,</p>
<p>df.loc['2022-12-31', 'col_1'] should be = (99.001 * 108.498) / 101.880 = 105.432</p>
<p>and df.loc['2023-01-31', 'col_1'] should be = (105.432 * 100.500) / 108.498 = 97.660</p>
<p>I found solution by using row by row operation but it is slow when the dataset is big. I tried column wise operation by using this:</p>
<pre><code>df['col_1'] = df['col_1'].fillna(
(df[col_1].shift(1) * df[col_2])
/ df[col_2].shift(1)
)
</code></pre>
<p>But it does work only for one row and then it does not go further. Is there any column wise pandas solution for that?</p>
|
<python><pandas><fillna>
|
2023-06-13 12:23:36
| 3
| 500
|
EMT
|
76,464,868
| 9,642,804
|
Pandas get matrix of missing interactions
|
<p>I have a dataframe with following format:</p>
<pre><code>user_id, item_id
0, 0
0, 1
1, 1
2, 1
2, 2
3, 0
</code></pre>
<p>Where each entry in the <code>df</code> means that <code>user_id</code> has purchased an item with <code>item_id</code> (for example user <code>0</code> bought item <code>0</code> & <code>1</code>)</p>
<p>How can I generate a data structure that would store the items not purchased by the users?</p>
<p>For the example above the expected result would be:</p>
<pre><code>0: 2
1: 0, 2
2: 0
3: 1, 2
</code></pre>
<p>Naive approach would be to do something like this:</p>
<pre><code>missing = dict()
all_items = set(df['item_id'].unique())
for user in df['user_id'].unique():
user_items = set(df[df['user_id'] == user]['item_id'].values)
missing_items = all_items.difference(user_items)
missing[user] = missing_items
</code></pre>
<p>But this approach is very slow for large dataframes. Is there a more efficient method?</p>
|
<python><pandas>
|
2023-06-13 12:19:18
| 1
| 1,810
|
Ach113
|
76,464,828
| 15,222,211
|
How to optimize memory usage by python script
|
<p>In my docker environment, I have a python script that contains a large number of objects in a dict.
To optimize memory usage, I periodically delete unused objects from dict.
However, I noticed that the RAM is only cleared when it reaches the maximum limit of docker,
whereas I would like it to be cleared immediately upon object deletion.
Can someone help me in resolving this issue ?</p>
<p>Simplified example:</p>
<pre class="lang-py prettyprint-override"><code>
import sys
from datetime import datetime
import gc
big_data = {i: datetime.now() for i in range(1000)}
memory_usage = sys.getsizeof(big_data)
print(f"{memory_usage=} by {len(big_data)=}")
# memory_usage=36960 of len(big_data)=1000
# Delete data
for idx in list(big_data):
del big_data[idx]
# Trying to trigger garbage collection
# In output you can see empty dict that still using memory. I expect minimal memory_usage.
gc.collect()
memory_usage = sys.getsizeof(big_data)
print(f"{memory_usage=} by {len(big_data)=}")
# memory_usage=36960 of len(big_data)=0
</code></pre>
|
<python><memory-leaks>
|
2023-06-13 12:14:54
| 1
| 814
|
pyjedy
|
76,464,804
| 17,444,096
|
Instloader - Facing issue while getting profile information using username, previously It was working
|
<p>Library Link: <a href="https://github.com/instaloader/instaloader" rel="nofollow noreferrer">https://github.com/instaloader/instaloader</a></p>
<pre class="lang-py prettyprint-override"><code>import instaloader
ACCOUNT_USERNAME=""
ACCOUNT_PASSWORD=""
# Creating an instance of the Instaloader class
bot = instaloader.Instaloader()
bot.login(ACCOUNT_USERNAME, ACCOUNT_PASSWORD)
username = "instagram"
profile = instaloader.Profile.from_username(bot.context, username)
print("Trying to scrape profile of:", profile.username)
print("Number of Posts: ", profile.mediacount)
print("Followers Count: ", profile.followers)
print("Following Count: ", profile.followees)
</code></pre>
<p>I'm trying to run the above code but facing the bellow issue,</p>
<p>issue:</p>
<pre><code>HTTP redirect from https://i.instagram.com/api/v1/users/web_profile_info/?username=instagram to Number of requests within last 10/11/20/22/30/60 minutes grouped by type:
other: 1 1 1 1 1 1
* iphone: 1 1 1 1 1 1
Instagram responded with HTTP error "429 - Too Many Requests". Please
do not run multiple instances of Instaloader in parallel or within
short sequence. Also, do not use any Instagram App while Instaloader
is running.
The request will be retried in 30 minutes, at 18:04.
</code></pre>
<p>can anyone please help me to resolve this? After every 30 mins, it's saying the same thing again and again, any resolution?</p>
|
<python><instagram-api><instaloader>
|
2023-06-13 12:12:25
| 0
| 376
|
Yash Chauhan
|
76,464,752
| 7,692,855
|
Unresolved reference to X module with Flask app in Pycharm
|
<pre><code>flask
utils
__init__.py
database.py
redis.py
config.py
app.py
requirements.txt
...
</code></pre>
<p>I'm having issues with PyCharm stating "Unresolved reference to utils" with the above project structure.</p>
<p>I have <code>from utils import config</code> in app.py and PyCharm is happy. No warning.</p>
<p>But in utils/database.py I have <code>from utils import config</code> and it is underlined in red and I get the warning <code>Unresolved reference to utils</code>.</p>
<p>What am I doing wrong?</p>
|
<python><pycharm>
|
2023-06-13 12:05:19
| 2
| 1,472
|
user7692855
|
76,464,710
| 2,791,346
|
Set any type from WSDL in Zeep SOAP request
|
<h2>I have WSDL envelope like this:</h2>
<pre><code>...
<xs:complexType name="hilfsmittelattribute">
<xs:sequence>
<xs:any maxOccurs="unbounded" namespace="http://hma.ws.bswsnt.vsa/xsd" processContents="skip"/>
</xs:sequence>
</xs:complexType>
...
</code></pre>
<h2>What I would like to do is something like this</h2>
<pre><code>conn = zeep.Client(...)
data = {
....
'hilfsmittelattribute': {'koerperhaelfte': 1}
...
}
pack = conn.service.someMethod(**data)
</code></pre>
<p>but I get error that</p>
<pre><code>TypeError: ... got an unexpected keyword argument 'koerperhaelfte'. Signature: `_value_1: ANY[]`
</code></pre>
<p>I read documentation from here <a href="https://docs.python-zeep.org/en/master/datastructures.html#any-objects" rel="nofollow noreferrer">Zeep documentation for Any</a></p>
<h1>But:</h1>
<p>if I do:</p>
<p>conn.get_element('ns0:hilfsmittelattribute') # or ns1, ns2, ns3...</p>
<p>I get</p>
<pre><code>LookupError: No element 'hilfsmittelattribute' in namespace http://ws.bswsnt.vsa/xsd. Available elements are: -
</code></pre>
<p>But if I write</p>
<pre><code>hilfsmittelattribute_type = conn.get_type('ns2:hilfsmittelattribute')
</code></pre>
<p>I get the type object</p>
<p>( the code return )</p>
<pre><code>hilfsmittelattribute_type.elements
>> [('_value_1', <Any(name=None)>)]
</code></pre>
<p>but if I would like to use this type object like this</p>
<pre><code>hilfsmittelattribute_ = xsd.AnyObject(
hilfsmittelattribute_type, hilfsmittelattribute_type(koerperhaelfte='1'))
</code></pre>
<p>I get the error</p>
<pre><code>TypeError: {...}hilfsmittelattribute() got an unexpected keyword argument 'koerperhaelfte'. Signature: `_value_1: ANY[]`
</code></pre>
<hr />
<p>EDIT:</p>
<p>I try to create SOAP envelope by hand. If I set :</p>
<pre><code>....
<hilfsmittelattribute>
<koerperhaelfte>5</koerperhaelfte>
</hilfsmittelattribute>
....
</code></pre>
<p>It actually works... So the problem needs to be in Zeep on handling Any type.</p>
|
<python><soap><wsdl><zeep>
|
2023-06-13 11:59:44
| 1
| 8,760
|
Marko Zadravec
|
76,464,427
| 6,133,833
|
Correctly interpreting numpy shapes
|
<p>I’m very new to the numpy library in python and want to understand the basics better. Given a numpy array:</p>
<pre><code>array_example = np.array([[[0, 1, 2, 3],
[4, 5, 6, 7]],
[[0, 1, 2, 3],
[4, 5, 6, 7]],
[[0 ,1 ,2, 3],
[4, 5, 6, 7]]])
</code></pre>
<p><code>array_example.ndim</code> prints out 3.
<strong>My understanding:</strong>
The shape of this numpy array is (3, 2, 4) because I have visualised the shape property as denoting, number of elements in each dimension, or along each axis. In this case, an array of 3 matrices (along axis 0), 2 rows along axis 1 and 4 elements in each row, along axis 2. Basically, the first element in the shape tuple for 3D arrays should denote the number of matrices in the list.</p>
<p><strong>However</strong>, as per the docs, while analysing an image(<a href="https://numpy.org/numpy-tutorials/content/tutorial-svd.html" rel="nofollow noreferrer">https://numpy.org/numpy-tutorials/content/tutorial-svd.html</a>), on opening an image as a numpy array, the shape returned is (768, 1024, 3), which should be read as 3 arrays (here the last item in the tuple denotes the number of matrices) each of shape (768, 1024).</p>
<p>Is there a possible way of understanding multi-dimensional numpy arrays that satisfy both logics?</p>
|
<python><numpy><numpy-ndarray>
|
2023-06-13 11:24:18
| 2
| 341
|
Soham Bhaumik
|
76,464,417
| 11,000,061
|
Building Docker Image Programmatically Using Python Docker SDK without Dockerfile
|
<p>I am working on a Python project where I need to programmatically build a Docker image without using a Dockerfile. I want to use the Docker SDK for Python to achieve this.</p>
<p>Specifically, I would like to:</p>
<ul>
<li>Dynamically generate the image contents and configuration using Python code instead of writing a Dockerfile.</li>
<li>Execute each Docker command separately, such as setting up the base image, copying files, installing dependencies, and running custom build commands.</li>
<li>Retrieve the output or response from each Docker command execution for further processing or error handling.</li>
<li>Finally, save the resulting changes as a new Docker image.</li>
</ul>
<p>I would greatly appreciate it if someone could guide me on how to approach this task using the Python Docker SDK. Any code examples or step-by-step instructions would be very helpful.</p>
<p>I have already installed the Docker SDK for Python using pip install docker.</p>
|
<python><docker>
|
2023-06-13 11:23:15
| 1
| 1,325
|
Omar Al-Howeiti
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.