| Column | dtype | Min | Max |
|---|---|---|---|
| QuestionId | int64 | 74.8M | 79.8M |
| UserId | int64 | 56 | 29.4M |
| QuestionTitle | string (length) | 15 | 150 |
| QuestionBody | string (length) | 40 | 40.3k |
| Tags | string (length) | 8 | 101 |
| CreationDate | string (date) | 2022-12-10 09:42:47 | 2025-11-01 19:08:18 |
| AnswerCount | int64 | 0 | 44 |
| UserExpertiseLevel | int64 | 301 | 888k |
| UserDisplayName | string (length) | 3 | 30 |
77,270,101
1,088,076
Default to a derived unit on output in python pint
<p>Is it possible to default the output of a unit with pint to a derived unit? For example, if I compute a pressure by dividing a force and an area, I would like it to display in &quot;psi&quot; by default.</p>
<pre><code>import pint

ureg = pint.UnitRegistry()
ureg.default_format = &quot;.1f~&quot;
s = 42*ureg.pound_force / (4.2*ureg.inch**2)
s  # shows 'lbf/in2', but I would like 'psi' w/o using to('psi')
</code></pre>
<python><pint>
2023-10-11 03:39:44
1
1,911
slaughter98
77,269,970
8,734,514
How to classify segments of time series data
<p>I have time series data being captured from the accelerometer of an Android phone. I have added a class column so that I can label the class of the data. The data has a binary classification: 0 for false and 1 for true. The time series data has continuous segments where the entire segment together should be classified as 1. I have highlighted the charted data to show the continuous sections of data I am classifying as 1.</p> <p>I would like to be able to label this training data and then provide new data to a trained model that can classify the data, and also tell where a segment starts and ends so I can pull out the start and stop time. From what I have read online, it seems that an LSTM neural net would be the right solution, but I am not sure how to build a model that can classify the continuous segments. Any direction, or simple examples showing how to use an LSTM (or a more suitable model for this type of data) to accomplish this, would be awesome. I am new to machine learning and attempting this project for my own learning/curiosity, so apologies for any incorrect terminology or gaps in knowledge.</p> <p><a href="https://i.sstatic.net/SlVYW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SlVYW.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/foYno.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/foYno.png" alt="enter image description here" /></a></p>
<python><tensorflow><keras><neural-network><lstm>
2023-10-11 02:51:31
1
656
Kevin Gardenhire
77,269,875
395,857
How can I change some properties of a textbox when created with gr.Interface?
<p>I'm following the basic Gradio interface from the <a href="https://www.gradio.app/docs/interface" rel="nofollow noreferrer">Gradio documentation</a>:</p>
<pre><code>import gradio as gr

def greet(name):
    return &quot;Hello &quot; + name + &quot;!&quot;

demo = gr.Interface(fn=greet, inputs=&quot;text&quot;, outputs=&quot;text&quot;)
demo.launch()
</code></pre>
<p>The interface is:</p> <p><a href="https://i.sstatic.net/D4DBD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/D4DBD.png" alt="enter image description here" /></a></p> <p>How can I change some properties of the output textbox? E.g., I'd like to set <a href="https://www.gradio.app/docs/textbox" rel="nofollow noreferrer"><code>autoscroll</code></a> to False.</p>
<python><gradio>
2023-10-11 02:15:58
1
84,585
Franck Dernoncourt
77,269,865
13,635,877
Using SLSQP for finding the maximum sharpe ratio portfolio, getting nan as portfolio volatility
<p>I don't really understand what SLSQP is doing when I run this: the denominator of the objective function is always nan (as seen in the callback). When I plug x0 into the denominator, I get an actual number, so I don't understand why it's nan in the callback right off the bat.</p>
<pre><code>from scipy.optimize import minimize
import numpy as np

n = 250
x0 = np.ones(n) / n
pred = np.random.normal(loc=0.07, scale=0.2, size=(n))
c = ((np.random.random(size = (n, n)))-0.5) *0.1

def objective(weights):
    if np.sum(weights) == 0:
        return 10
    else:
        return -np.dot(weights, pred)/(np.sqrt(np.dot(weights, np.dot(c, weights))))

def custom_callback(xk):
    print(np.dot(xk, pred) , (np.sqrt(np.dot(xk, np.dot(c, xk)))))

constraints = (
    {'type': 'ineq', 'fun': lambda weights: np.abs(np.sum(weights)) - 0.03},
    {'type': 'ineq', 'fun': lambda weights: 1.03 - np.sum(np.abs(weights))},
    {'type': 'ineq', 'fun': lambda weights: np.sum(np.abs(weights))-0.97},
)
bounds = tuple((-0.02, 0.02) for asset in range(len(x0)))

result = minimize(objective, x0, method='SLSQP', bounds=bounds, constraints=constraints,
                  options={'disp': 2, 'maxiter' : 100}, callback=custom_callback)
</code></pre>
<p>denominator with x0 plugged in:</p>
<pre><code>np.sqrt(np.dot(x0, np.dot(c, x0)))
</code></pre>
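A likely culprit (an inference from how `c` is constructed above, not verified against the actual run): `c` is a raw uniform-random matrix, so it is not a valid covariance matrix. The quadratic form `w' C w` depends only on the symmetric part of `c`, which for a random matrix almost surely has negative eigenvalues, so as soon as SLSQP steps away from `x0` the form can go negative and `np.sqrt` returns nan (`x0` just happens to land on a positive value). A small demonstration, including a positive-semidefinite construction that keeps the form non-negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 250

# The question's construction: a raw random matrix, neither symmetric nor PSD.
raw = (rng.random((n, n)) - 0.5) * 0.1

# w' C w depends only on the symmetric part of C; for a random matrix that
# symmetric part has negative eigenvalues, so the quadratic form can go
# negative and np.sqrt(...) yields nan.
sym = (raw + raw.T) / 2
assert np.linalg.eigvalsh(sym).min() < 0

# A PSD "covariance-like" matrix by construction keeps the form non-negative
# for every weight vector.
cov = raw @ raw.T
w = rng.normal(size=n)
assert w @ cov @ w >= 0
```

Using a genuine covariance estimate (or any `M @ M.T` construction) for `c` should make the square root well defined throughout the optimization.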
<python><optimization><scikit-learn><minimize>
2023-10-11 02:10:31
1
452
lara_toff
77,269,857
21,575,627
What is a more optimal algorithm for this problem?
<p>Suppose you have an array <code>A</code> of <code>N</code> integers. In one round, you make changes as follows (based on the array snapshot at the beginning of the round):</p> <ul> <li>-= 1 if greater than both adjacent (on both sides)</li> <li>+= 1 if less than both adjacent (on both sides)</li> </ul> <p>The ones on the edges/ends never change, and you keep going while a change was made the previous round.</p> <p>A simple algorithm looks like:</p>
<pre><code>while True:
    prior = A
    cur = A[:]
    for i in range(1, len(cur) - 1):
        if prior[i - 1] &gt; prior[i] and prior[i + 1] &gt; prior[i]:
            cur[i] += 1
        elif prior[i - 1] &lt; prior[i] and prior[i + 1] &lt; prior[i]:
            cur[i] -= 1
    if cur == prior:
        break
    A = cur
return A
</code></pre>
<p>This was for a coding assessment I recently took. Here's a couple of examples:</p>
<pre><code>Input:   [1, 6, 3, 4, 3, 5]
Returns: [1, 4, 4, 4, 4, 5]

Input:   [100, 50, 40, 30]
Returns: [100, 50, 40, 30]
</code></pre>
<p><strong>Explanation for 1st case:</strong></p> <p><code>1</code> and <code>5</code> at the ends never change since they don't have both neighbours. This will also be true for any test case. The next state would be as below:</p>
<pre><code>[1, 5, 4, 3, 4, 5]
</code></pre>
<ul> <li><p>arr[2] = 5 since <code>1 &lt; 6 &gt; 3</code>, so <code>6</code> gets reduced by 1, making it <code>5</code>.</p> </li> <li><p>arr[3] = 4 since <code>6 &gt; 3 &lt; 4</code>, so <code>3</code> gets increased by 1, making it <code>4</code>.</p> </li> <li><p>arr[4] = 3 since <code>3 &lt; 4 &gt; 3</code>, so <code>4</code> gets reduced by 1, making it <code>3</code>.</p> </li> <li><p>arr[5] = 4 since <code>4 &gt; 3 &lt; 5</code>, so <code>3</code> gets increased by 1, making it <code>4</code>.</p>
<pre><code>[1, 5, 4, 3, 4, 5] becomes [1, 4, 4, 4, 4, 5] in a similar fashion as above
and stops here since no further operations can be performed.
</code></pre> </li> </ul> <p><strong>Explanation for 2nd case:</strong></p> <p>No action is taken on the array since none of the non-border elements have neighbours where both are either smaller or greater.</p>
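One incremental improvement (a sketch, not necessarily the optimum the assessment wanted): after the first round, an element can only change if it or one of its neighbours changed in the previous round, so a worklist of candidate indices avoids rescanning regions that have already settled. The round semantics (all updates based on the round's starting snapshot) are preserved by computing every delta before applying any of them. The function name `settle` is made up for illustration.

```python
def settle(A):
    """Run the rounds to convergence, only re-examining indices whose
    neighbourhood changed in the previous round."""
    A = list(A)
    active = set(range(1, len(A) - 1))
    while active:
        # Decide all changes from the snapshot at the start of the round.
        deltas = {}
        for i in active:
            if A[i - 1] > A[i] < A[i + 1]:
                deltas[i] = 1
            elif A[i - 1] < A[i] > A[i + 1]:
                deltas[i] = -1
        if not deltas:
            break
        for i, d in deltas.items():
            A[i] += d
        # Only indices adjacent to a change can flip in the next round.
        active = {j for i in deltas for j in (i - 1, i, i + 1)
                  if 1 <= j <= len(A) - 2}
    return A

assert settle([1, 6, 3, 4, 3, 5]) == [1, 4, 4, 4, 4, 5]
assert settle([100, 50, 40, 30]) == [100, 50, 40, 30]
```

The worst-case round count is unchanged, but on typical inputs most of the array settles quickly and only a shrinking frontier gets re-examined.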
<python><algorithm><performance><data-structures><time-complexity>
2023-10-11 02:07:52
0
1,279
user129393192
77,269,753
360,869
passing `apify_api_token` as a named parameter
<p>I am new to Python. I am trying to pass the API key to <code>ApifyWrapper</code> here in the notebook. I am getting this error:</p> <blockquote> <p>ValidationError Traceback (most recent call last) in &lt;cell line: 9&gt;() 7 api_token = userdata.get(&quot;apify&quot;) 8 ----&gt; 9 apify = ApifyWrapper(apify_api_token=api_token) 10 # Call the Actor to obtain text from the crawled webpages 11 loader = apify.call_actor(</p> <p>/usr/local/lib/python3.10/dist-packages/pydantic/main.cpython-310-x86_64-linux-gnu.so in pydantic.main.BaseModel.<code>__init__</code>()</p> <p>ValidationError: 1 validation error for ApifyWrapper <code>__root__</code> Did not find apify_api_token, please add an environment variable <code>APIFY_API_TOKEN</code> which contains it, or pass <code>apify_api_token</code> as a named parameter. (type=value_error)</p> </blockquote>
<pre><code>!pip install apify apify-client

from google.colab import userdata
from langchain.docstore.document import Document
from langchain.indexes import VectorstoreIndexCreator
from langchain.utilities import ApifyWrapper
import apify

api_token = userdata.get(&quot;apify&quot;)
apify = ApifyWrapper(apify_api_token=api_token)

# Call the Actor to obtain text from the crawled webpages
loader = apify.call_actor(
    actor_id=&quot;apify/website-content-crawler&quot;,
    run_input={&quot;startUrls&quot;: [{&quot;url&quot;: &quot;https://python.langchain.com/docs/integrations/chat/&quot;}]},
    dataset_mapping_function=lambda item: Document(
        page_content=item[&quot;text&quot;] or &quot;&quot;, metadata={&quot;source&quot;: item[&quot;url&quot;]}
    ),
)
</code></pre>
<p>Can someone explain what I am doing wrong and how I can fix this problem? Thank you.</p>
<python><google-colaboratory><apify>
2023-10-11 01:25:21
1
457
ilteris
77,269,618
19,123,103
Why does an operation on a large integer silently overflow?
<p>I have a list that contains very large integers and I want to cast it into a pandas column with a specific dtype. As an example, if the list contains <code>2**31</code>, which is outside the limit of the int32 dtype, casting it into dtype int32 throws an OverflowError, which lets me know to use another dtype or handle the number in some other way beforehand.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
pd.Series([2**31], dtype='int32')
# OverflowError: Python int too large to convert to C long
</code></pre>
<p>But if a number is large but inside the dtype limits (i.e. <code>2**31-1</code>), and some number is added to it which results in a value that is outside the dtype limits, then instead of an OverflowError, the operation is executed without any errors, yet the value is now inverted, becoming a completely wrong number for the column.</p>
<pre class="lang-py prettyprint-override"><code>pd.Series([2**31-1], dtype='int32') + 1

0   -2147483648
dtype: int32
</code></pre>
<p>Why is this happening? Why doesn't it raise an error like the first case?</p> <p>PS. I'm using pandas 2.1.1 and numpy 1.26.0 on Python 3.12.0.</p>
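For context on the behaviour asked about above: pandas delegates the arithmetic to NumPy, and NumPy's fixed-width integer arrays use C-style modular (two's-complement) arithmetic, so out-of-range results wrap silently instead of raising. The range check only happens when converting a Python int to the dtype at construction time. A small demonstration:

```python
import numpy as np

a = np.array([2**31 - 1], dtype=np.int32)

# Fixed-width integer arithmetic wraps (two's complement), with no exception:
wrapped = a + 1
assert wrapped[0] == -2**31

# Widening the dtype before the operation avoids the wraparound:
safe = a.astype(np.int64) + 1
assert safe[0] == 2**31
```

So the practical options are to choose a wide enough dtype up front, or to check value ranges before arithmetic that may overflow.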
<python><pandas><long-integer>
2023-10-11 00:35:19
5
25,331
cottontail
77,269,443
1,410,221
How to add custom attributes to a Python list?
<p>I'm trying to associate additional data with a Python list. Ideally, I'd like to add custom attributes to a list object, similar to how you'd add attributes to a custom class. For example:</p>
<pre class="lang-py prettyprint-override"><code>a = [1, 2, 4, 8]
a.x = &quot;Hey!&quot;  # I'd like this to work without errors
</code></pre>
<p>However, when I attempt this, I get the following error:</p>
<pre><code>AttributeError: 'list' object has no attribute 'x'
</code></pre>
<p>I understand that Python lists don't support adding arbitrary attributes out of the box. I also tried using <code>setattr</code>, but it gave me a similar error:</p>
<pre class="lang-py prettyprint-override"><code>setattr(a, &quot;k&quot;, 98)  # Raises AttributeError
</code></pre>
<p>I cannot create a custom class <code>DummyList(list)</code> to work around this (for reasons). I also don't want to use some global variable that holds the attributes for the lists. Are there any workarounds to achieve this in Python?</p>
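Plain lists support neither attribute assignment (they have no `__dict__`) nor weak references, so a `weakref.WeakKeyDictionary` side table is out too. If subclassing really is off the table, one remaining workaround is a side table keyed by `id()`, owned by whatever component needs the attributes; note this is close to the global-table idea the question rules out, just scoped to the owning code, so it may or may not be acceptable. The helper names below are made up for illustration:

```python
# Side table mapping id(list) -> attribute dict. Caveat: id() values can be
# reused after a list is garbage collected, so entries are only valid while
# the caller keeps the list object alive.
attrs = {}

def set_attr(obj, name, value):
    attrs.setdefault(id(obj), {})[name] = value

def get_attr(obj, name):
    return attrs[id(obj)][name]

a = [1, 2, 4, 8]
set_attr(a, "x", "Hey!")
assert get_attr(a, "x") == "Hey!"
```

The id-reuse caveat is the real cost of avoiding a subclass: the table cannot clean itself up automatically the way a weak-reference mapping would.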
<python><list><setattr><python-builtins>
2023-10-10 23:15:29
0
4,193
HappyFace
77,269,418
525,916
Update data assigned to global variable (outside callback) once a day
<p>I am creating a read-only Plotly Dash dashboard that uses a lot of data. It involves a lot of database reads and transformations which usually take up to 1 minute. To avoid users waiting this long on every interaction, I am performing these steps outside the callbacks and assigning the data to a global variable. These steps are performed only once, when the first user opens the dashboard.</p> <p>The dashboard has filters. When the user changes the filters, I filter the global data (in a different variable) based on the selected filters in the callback functions and display the data. As the data is already available on the server, subsequent interactions are fast. The global data is not updated from the callbacks. This works great and it performs well.</p> <p>The data I am using has daily granularity. I need to update this global data once a day to ensure that the global data is up-to-date. How do I update this global data? I think I can use Redis as an intermediate store which is updated once a day, with all callbacks reading from Redis, but can this be done without the intermediate layer?</p> <p>Here is the sample code:</p>
<pre><code>from dash import Dash, dcc, html, Input, Output, callback
import pandas as pd
from dash_ag_grid import AgGrid

def get_global_data():
    data_df = pd.DataFrame()
    # Lot of DB reads and transformations
    return data_df

global_data = get_global_data()

app = Dash(__name__)
app.layout = html.Div([
    html.H6(&quot;Select country to view data!&quot;),
    dcc.Dropdown(['USA', 'Canada', 'Mexico'], 'USA', id='country-dropdown'),
    html.Br(),
    AgGrid(
        id=&quot;data-grid&quot;,
        defaultColDef={&quot;resizable&quot;: True, &quot;sortable&quot;: True, &quot;filter&quot;: True},
        columnSize=&quot;sizeToFit&quot;,
        style={'width': '100%'},
    )
])

@callback(
    Output('data-grid', 'rowData'),
    Output('data-grid', 'columnDefs'),
    Input(component_id='country-dropdown', component_property='value')
)
def update_table(input_country):
    filtered_data = global_data[global_data['country']==input_country]
    filtered_data_as_dict = convert_df_to_dict(filtered_data)  # convert to dict
    column_defs = create_column_def(filtered_data)  # generates column def
    return filtered_data_as_dict, column_defs

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p>If the global data in this code is updated daily, how do I update it in the dashboard?</p>
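One broker-free option is a sketch like the following, under the assumption of a single server process (with multiple gunicorn workers each process keeps its own copy, which is where Redis starts to pay off): keep the loaded frame next to a timestamp, and have the callbacks fetch it through an accessor that re-runs the expensive loader once the cached copy is older than a day. The names here (`get_data`, `_cache`) are illustrative, not Dash API.

```python
import time

_cache = {"data": None, "loaded_at": 0.0}
TTL_SECONDS = 24 * 60 * 60  # refresh once a day

def get_data(loader, now=None):
    """Return the cached data, re-running `loader` when the cache is stale."""
    now = time.time() if now is None else now
    if _cache["data"] is None or now - _cache["loaded_at"] > TTL_SECONDS:
        _cache["data"] = loader()
        _cache["loaded_at"] = now
    return _cache["data"]

# Demonstration with a counting stand-in for the expensive DB load:
calls = []
loader = lambda: calls.append(1) or len(calls)
assert get_data(loader, now=0) == 1        # first call loads
assert get_data(loader, now=100) == 1      # within TTL: served from cache
assert get_data(loader, now=100000) == 2   # past TTL: reloaded
```

Inside `update_table`, `global_data` would then become `get_data(get_global_data)`, so the first request after midnight-plus-TTL pays the 1-minute cost and everyone else reads the fresh cache.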
<python><plotly-dash>
2023-10-10 23:07:03
1
4,099
Shankze
77,269,319
1,394,336
Why are distinct multiprocessing.Value objects pointing to the same value?
<p>In the following program, two processes are created. Each receives <em>its own</em> <code>multiprocessing.Value</code> object. Somehow, the two processes still influence each other. Why?</p> <p>I would expect the two processes to deadlock: the first process can increment the counter once (it's initially 0) but then should wait for it to be even again. The second process can never increment it, as it's 0 initially and never changes. Somehow, this deadlock does not happen; both processes continue to run.</p>
<pre><code>import multiprocessing
import time

def inc_even(counter):
    print(f&quot;inc_even started. {hash(counter)=}&quot;)
    while True:
        if counter.value % 2 == 0:
            counter.value += 1
            print(&quot;inc_even incremented counter&quot;)
        else:
            time.sleep(1)

def inc_odd(counter):
    print(f&quot;inc_odd started. {hash(counter)=}&quot;)
    while True:
        if counter.value % 2 == 1:
            counter.value += 1
            print(&quot;inc_odd incremented counter&quot;)
        else:
            time.sleep(1)

multiprocessing.Process(
    target=inc_even,
    kwargs={
        &quot;counter&quot;: multiprocessing.Value(&quot;i&quot;, 0),
    },
).start()
multiprocessing.Process(
    target=inc_odd,
    kwargs={
        &quot;counter&quot;: multiprocessing.Value(&quot;i&quot;, 0),
    },
).start()
</code></pre>
<p>Output:</p>
<pre class="lang-plaintext prettyprint-override"><code>inc_even started. hash(counter)=8786024404161
inc_even incremented counter
inc_odd started. hash(counter)=8786024404157
inc_odd incremented counter
inc_even incremented counter
inc_odd incremented counter
...
</code></pre>
<p>Interestingly, if I change this to first create two variables that hold the two counters, the deadlock occurs:</p>
<pre><code>counterA = multiprocessing.Value(&quot;i&quot;, 0)
counterB = multiprocessing.Value(&quot;i&quot;, 0)

multiprocessing.Process(
    target=inc_even,
    kwargs={
        &quot;counter&quot;: counterA,
    },
).start()
multiprocessing.Process(
    target=inc_odd,
    kwargs={
        &quot;counter&quot;: counterB,
    },
).start()
</code></pre>
<p>Output:</p>
<pre class="lang-plaintext prettyprint-override"><code>inc_even started. hash(counter)=8765172881357
inc_even incremented counter
inc_odd started. hash(counter)=8765172756097
</code></pre>
<p>Edit: If I replace the <code>multiprocessing.Value</code> object with some custom class, this does not happen:</p>
<pre><code>class CustomCounter:
    def __init__(self) -&gt; None:
        self.value = 0

multiprocessing.Process(
    target=inc_even,
    kwargs={
        &quot;counter&quot;: CustomCounter(),
    },
).start()
multiprocessing.Process(
    target=inc_odd,
    kwargs={
        &quot;counter&quot;: CustomCounter(),
    },
).start()
</code></pre>
<p>This deadlocks as expected. So it must be caused by <code>multiprocessing.Value</code>, not just the multiprocessing in general.</p>
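A plausible mechanism (a hypothesis based on how `multiprocessing` allocates shared memory from an arena, not a confirmed diagnosis of the run above): in the inline version the parent drops its only reference to the first `Value` right after `start()`, the freed shared-memory block can be recycled for the second `Value`, and both children end up mapping the same bytes; binding the `Value`s to variables keeps both blocks alive and distinct. A sketch of the address behaviour:

```python
import ctypes
import multiprocessing

# Two Values that both stay referenced occupy distinct shared-memory blocks,
# so their underlying ctypes buffers live at different addresses.
a = multiprocessing.Value("i", 0)
b = multiprocessing.Value("i", 0)
assert ctypes.addressof(a.get_obj()) != ctypes.addressof(b.get_obj())

# If the first Value is released before the second is created, the shared
# allocator is free to hand out the same block again - which is what can
# happen when the only reference lives inside the Process kwargs.
addr = ctypes.addressof(a.get_obj())
del a
c = multiprocessing.Value("i", 0)
print(ctypes.addressof(c.get_obj()) == addr)  # may be True if the block was recycled
```

Under this reading, the fix in the second snippet works precisely because the named variables keep the blocks from being freed and reused.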
<python><multiprocessing><python-multiprocessing>
2023-10-10 22:36:46
1
2,057
Christopher
77,269,228
6,814,713
SQLAlchemy query distinct values from filtered many to many relationship
<p>I have been struggling to put together the correct query for this and need some help. I have two tables with a many-to-many relationship. I would like to put together a performant query for all distinct values in table B using the filtered values from table A:</p>
<pre class="lang-py prettyprint-override"><code>association_table = Table(
    &quot;a_b&quot;,
    BaseModel.metadata,
    Column(&quot;a_id&quot;, ForeignKey(&quot;a.id&quot;), primary_key=True),
    Column(&quot;b_id&quot;, ForeignKey(&quot;b.id&quot;), primary_key=True),
)

class A(BaseModel):
    __tablename__ = &quot;a&quot;
    id: Mapped[int] = mapped_column(primary_key=True)
    field_a: Mapped[str] = mapped_column(nullable=False)
    field_b: Mapped[str] = mapped_column(nullable=False)
    b: Mapped[List[&quot;B&quot;]] = relationship(
        secondary=association_table, back_populates=&quot;a&quot;, passive_deletes=True
    )

class B(BaseModel):
    __tablename__ = &quot;b&quot;
    id: Mapped[int] = mapped_column(primary_key=True)
    name: Mapped[str] = mapped_column(nullable=False)
    a: Mapped[List[&quot;A&quot;]] = relationship(
        secondary=association_table, back_populates=&quot;b&quot;, passive_deletes=True
    )
</code></pre>
<p>I have a pre-filtered query of A objects (<code>sqlalchemy.orm.query.Query</code>) that I am getting from something like <code>A.query.where(A.field_a == 'something')</code>. I do not have access to the internals of this query, but for the purpose of solving this issue, all we need to know is that it is equivalent to a filtered query as would be returned by <code>A.query</code>.</p> <p>I want to join this query with B and get the distinct values of B.name for the subset of these filtered A objects. Some example of what I am trying to do below:</p>
<pre class="lang-py prettyprint-override"><code>filtered_a = A.query
[i for i in filtered_a.join(A.b).distinct(B.name)]
</code></pre>
<p>The above returns a list of <code>A</code> objects, but I am looking for a list of <code>B</code> objects instead (and it doesn't seem like I can get one, as <code>i.b</code> does not work). I am also not sure if the distinct is operating at the single-object level, or if it is working globally across all <code>A</code> objects (as would be my intent). Any advice is appreciated - thanks!</p>
<python><sqlalchemy><orm><many-to-many>
2023-10-10 22:10:45
1
2,124
Brendan
77,269,168
678,572
Inconsistent behavior of Sklearn's precision_score when manual cross-validation is performed vs. cross_val_score
<p>I'm trying to use <code>precision_score</code> with <code>np.nan</code> for the <code>zero_division</code>. It's not working with <code>cross_val_score</code>, but it works when I do manual cross-validation with the same pairs.</p> <p>Here are the data files to reproduce: <a href="https://github.com/scikit-learn/scikit-learn/files/12861719/sklearn_data.pkl.zip" rel="nofollow noreferrer">sklearn_data.pkl.zip</a></p>
<pre><code># Load in data
with open(&quot;sklearn_data.pkl&quot;, &quot;rb&quot;) as f:
    objects = pickle.load(f)

# &gt; objects.keys()
# dict_keys(['estimator', 'X', 'y', 'scoring', 'cv', 'n_jobs'])

estimator = objects[&quot;estimator&quot;]
X = objects[&quot;X&quot;]
y = objects[&quot;y&quot;]
scoring = objects[&quot;scoring&quot;]
cv = objects[&quot;cv&quot;]
n_jobs = objects[&quot;n_jobs&quot;]

# &gt; scoring
# make_scorer(precision_score, pos_label=Case_0, zero_division=nan)

# &gt; y.unique()
# ['Control', 'Case_0']
# Categories (2, object): ['Case_0', 'Control']

# First I checked to make sure that there are both classes in all the training and validation pairs
pos_label = &quot;Case_0&quot;
control_label = &quot;Control&quot;
for index_training, index_validation in cv:
    assert y.iloc[index_training].nunique() == 2
    assert y.iloc[index_validation].nunique() == 2
    assert pos_label in y.values
    assert control_label in y.values

# If I run manually:
scores = list()
for index_training, index_validation in cv:
    estimator.fit(X.iloc[index_training], y.iloc[index_training])
    y_hat = estimator.predict(X.iloc[index_validation])
    score = precision_score(y_true = y.iloc[index_validation], y_pred=y_hat, pos_label=pos_label)
    scores.append(score)

# &gt; print(np.mean(scores))
# 0.501156937317928

# If I use cross_val_score:
cross_val_score(estimator=estimator, X=X, y=y, cv=cv, scoring=scoring, n_jobs=n_jobs)

# /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/python3.9/site-packages/sklearn/model_selection/_validation.py:839: UserWarning: Scoring failed. The score on this train-test partition for these parameters will be set to nan. Details:
# Traceback (most recent call last):
#   File &quot;/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/python3.9/site-packages/sklearn/metrics/_scorer.py&quot;, line 136, in __call__
#     score = scorer._score(
#   File &quot;/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/python3.9/site-packages/sklearn/metrics/_scorer.py&quot;, line 355, in _score
#     return self._sign * self._score_func(y_true, y_pred, **scoring_kwargs)
#   File &quot;/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/python3.9/site-packages/sklearn/utils/_param_validation.py&quot;, line 201, in wrapper
#     validate_parameter_constraints(
#   File &quot;/Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/python3.9/site-packages/sklearn/utils/_param_validation.py&quot;, line 95, in validate_parameter_constraints
#     raise InvalidParameterError(
# sklearn.utils._param_validation.InvalidParameterError: The 'zero_division' parameter of precision_score must be a float among {0.0, 1.0, nan} or a str among {'warn'}. Got nan instead.
</code></pre>
<p>Here are my versions:</p>
<pre><code>System:
    python: 3.9.16 | packaged by conda-forge | (main, Feb 1 2023, 21:42:20) [Clang 14.0.6 ]
executable: /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/bin/python
   machine: macOS-13.4.1-x86_64-i386-64bit

Python dependencies:
      sklearn: 1.3.1
          pip: 22.0.3
   setuptools: 60.7.1
        numpy: 1.24.4
        scipy: 1.8.0
       Cython: 0.29.27
       pandas: 1.4.0
   matplotlib: 3.7.1
       joblib: 1.3.2
threadpoolctl: 3.1.0

Built with OpenMP: True

threadpoolctl info:
       user_api: blas
   internal_api: openblas
         prefix: libopenblas
       filepath: /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/libopenblasp-r0.3.18.dylib
        version: 0.3.18
threading_layer: openmp
   architecture: Haswell
    num_threads: 16

       user_api: openmp
   internal_api: openmp
         prefix: libomp
       filepath: /Users/jespinoz/anaconda3/envs/soothsayer_py3.9_env2/lib/libomp.dylib
        version: None
    num_threads: 16
</code></pre>
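One plausible explanation (an assumption worth checking against the scikit-learn issue tracker for this version, not a confirmed diagnosis): with `n_jobs` parallelism the scorer is pickled to worker processes, and a nan that round-trips through pickle is a *new* float object, so any validation that relies on object identity with `np.nan` fails even though the value is still nan by `np.isnan`. The identity loss itself is easy to demonstrate:

```python
import pickle

import numpy as np

# np.nan is an ordinary Python float; pickling produces a brand-new float object.
roundtripped = pickle.loads(pickle.dumps(np.nan))
assert np.isnan(roundtripped)       # still nan by value
assert roundtripped is not np.nan   # but a different object

# So a check written as `x is np.nan` fails after the object crosses a
# process boundary, while an np.isnan-based check would still pass.
```

If this is indeed the failure mode, running with `n_jobs=1` (no pickling of the scorer) or upgrading to a scikit-learn release that validates nan by value rather than identity would be the workarounds to try.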
<python><numpy><machine-learning><scikit-learn><nan>
2023-10-10 21:56:09
1
30,977
O.rka
77,269,011
13,860,719
How to get the callable function defined by exec() in Python?
<p>Say I want to have a function <code>exec_myfunc</code> that can execute any user-defined function with an input value of 10. The user is supposed to define the function by a <strong>string</strong> as shown below:</p>
<pre><code>func1_str = &quot;&quot;&quot;
def myfunc1(x):
    return x
&quot;&quot;&quot;

func2_str = &quot;&quot;&quot;
def myfunc2(x):
    return x**2
&quot;&quot;&quot;
</code></pre>
<p>Right now I am using a very hacky way, by extracting the function name between <code>def </code> and <code>(</code> using a regular expression, as shown below:</p>
<pre><code>def exec_myfunc(func_str: str):
    import re
    exec(func_str)
    myfunc_str = re.search(r'def(.*)\(', func_str).group(1).strip()
    return eval(myfunc_str)(10)

print(exec_myfunc(func1_str))  # 10
print(exec_myfunc(func2_str))  # 100
</code></pre>
<p>I would like to know what is the general and correct way of doing this?</p>
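A more standard approach than regex plus `eval`: pass an explicit namespace dict to `exec`, then pick the defined callable out of that dict, no name parsing needed. The sketch below assumes (like the question) that each string defines exactly one function:

```python
func1_str = """
def myfunc1(x):
    return x
"""

def exec_myfunc(func_str: str, arg=10):
    namespace = {}
    exec(func_str, namespace)              # definitions land in `namespace`
    funcs = [v for k, v in namespace.items()
             if callable(v) and not k.startswith("__")]
    assert len(funcs) == 1, "expected exactly one function definition"
    return funcs[0](arg)

assert exec_myfunc(func1_str) == 10
assert exec_myfunc("def myfunc2(x):\n    return x**2") == 100
```

The namespace dict also sidesteps the subtle scoping issues of calling `exec` without one inside a function body, and it keeps the user-defined names out of your own globals.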
<python><python-3.x><string><function><exec>
2023-10-10 21:17:27
2
2,963
Shaun Han
77,268,984
7,542,939
Search for jobs with Cloud Talent Solution API in Python
<p>I cannot find any useful documentation whatsoever on how to use the Google Cloud Talent Solution API in Python to perform even a basic query. Right now I am trying:</p>
<pre><code>import os

from google.cloud import talent

credential_path = &quot;creds_path/creds.json&quot;
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = credential_path

client = talent.JobServiceClient()
client.job_query(query=&quot;python&quot;)
</code></pre>
<p>This gives the result:</p>
<pre><code>TypeError: search_jobs() got an unexpected keyword argument 'query'
</code></pre>
<p>Removing <code>query</code> and trying other variables doesn't work. How do I use this API?</p>
<python><google-cloud-platform><google-cloud-talent-solution>
2023-10-10 21:10:44
1
1,831
snapcrack
77,268,954
2,733,436
Using prefix sum to calculate scores of 2 robots in a 2d array
<p><strong>Background</strong></p> <p>I am trying to solve the following <a href="https://leetcode.com/problems/grid-game/description/" rel="nofollow noreferrer">leetcode problem</a>. I have written the below function and it seems to be passing the sample test cases.</p> <p><strong>My Code</strong></p>
<pre><code>def gridGame(grid):
    robot_score = 0
    for _ in range(2):
        robot_score = 0
        row_one_sum = sum(grid[0])
        row_two_sum = sum(grid[1])
        row = 0
        col = 0
        while row &lt; len(grid) and col &lt; len(grid[0]):
            # get current points
            robot_score += grid[row][col]
            # subtract points from sum
            if row == 0:
                row_one_sum -= grid[0][col]
            else:
                row_two_sum -= grid[1][col]
            # set current grid to 0
            grid[row][col] = 0
            # figure out next move
            if row == 0:
                if row_two_sum &gt; row_one_sum or col == len(grid[0]) - 1:
                    row += 1
                else:
                    row_two_sum -= grid[row - 1][col - 1]
                    col += 1
            else:
                col += 1
    return robot_score
</code></pre>
<p><strong>Test cases I am passing</strong></p>
<pre><code>grid = [[1,3,1,15],[1,3,3,1]]
grid = [[3,3,1],[8,5,2]]
grid = [[2,5,4],[1,5,1]]
</code></pre>
<p><strong>Test case I am failing</strong></p>
<pre><code>gridGame([[10, 12, 14, 19, 19, 12, 19, 2, 17], [20, 7, 17, 14, 3, 1, 1, 17, 12]])
</code></pre>
<p>I have tried debugging for 2.5 hours with no luck, and I know I can find solutions online, but I am trying to understand what I did wrong.</p> <p>My overall approach is: I need to get the sum of each row, and then I know, if I move right, how many more points I can get vs. how many points I can get if I move down.</p>
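For reference, the widely used O(n) prefix-sum solution to this LeetCode problem: wherever robot 1 turns down (say at column i), robot 2 will collect the larger of the top-row suffix after i and the bottom-row prefix before i; robot 1 chooses i to minimize that maximum. Running sums make each candidate split O(1):

```python
def grid_game(grid):
    """Minimum points robot 2 can be held to when both robots play optimally."""
    top_remaining = sum(grid[0])   # top-row points still ahead of the split
    bottom_taken = 0               # bottom-row points behind the split
    best = float("inf")
    for col in range(len(grid[0])):
        top_remaining -= grid[0][col]          # robot 1 zeroes the top cell
        # Robot 2 grabs the better of the two untouched stretches.
        best = min(best, max(top_remaining, bottom_taken))
        bottom_taken += grid[1][col]
    return best

# LeetCode's published examples for this problem:
assert grid_game([[2, 5, 4], [1, 5, 1]]) == 4
assert grid_game([[3, 3, 1], [8, 5, 2]]) == 4
assert grid_game([[1, 3, 1, 15], [1, 3, 3, 1]]) == 7
```

The greedy "compare remaining row sums at each step" idea in the question fails because robot 1's goal is not to maximize its own score but to minimize what robot 2 can collect afterwards, which only the split-point minimax captures.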
<python><arrays><algorithm><data-structures>
2023-10-10 21:06:34
1
1,658
user1010101
77,268,908
468,455
Updating Google Sheet with API & Python with less rows than original
<p>I'm building a script that writes a deployment log for all our products. I have a Google sheet that holds the data. The first sheet of that spreadsheet is titled &quot;Recent Releases&quot; and, as it says, holds the most recent releases (as of when the script runs). Most times there will be more or fewer rows than the current data. If there are fewer rows, how do I overwrite the extra existing rows?</p> <p>I have this function:</p>
<pre><code>def overWriteSheet(self, service, spreadSheetId, strRange, myData):
    try:
        sheet = service.spreadsheets()
        body = {'values': myData}
        result = service.spreadsheets().values().update(spreadsheetId=spreadSheetId,
            range=strRange, valueInputOption='RAW', body=body).execute()
    except Exception as e:
        print(e)
</code></pre>
<p>which works, but only replaces up to the number of items in the passed-in array in the body param. This leaves me with the extra old data in the sheet. I could:</p> <ol> <li>read the sheet first</li> <li>count the number of rows</li> <li>compare to the new data array</li> <li>if new data count is less than existing data, add empty entries in the new data array</li> </ol> <p>but that seems like a hack and very inefficient. Is there nothing I can pass to let the API know to clear the sheet first, then add the new data?</p>
<python><google-sheets><google-sheets-api>
2023-10-10 20:56:32
1
6,396
PruitIgoe
77,268,899
8,521,346
Strange Roundrobin Style Sorting Algorithm Name
<p>I have been using the more_itertools <code>roundrobin</code> function to &quot;fairly&quot; distribute work to workers (literal ones, not Celery), which looks like this:</p> <p><code>list(roundrobin('AAA', 'BB', 'CC')) == ['A', 'B', 'C', 'A', 'B', 'C', 'A']</code></p> <p>This all looks well and good to me, but the client has said that this isn't &quot;even&quot; enough.</p> <p>The output he wants is as follows: <code>['A', 'B', 'A', 'C', 'A', 'B', 'C']</code></p> <p>He wants the order not to sample ALL the other elements before going back to a duplicate.</p> <p><code>roundrobin</code> is based around <code>interleave_longest</code>, and apparently interleave_longest will sample from all the lists before going back around to the duplicates.</p> <p>As for my question: <em>what</em> is this process even called? Is there a Python function for it, or is it something to roll out on my own? Is it even statistically viable to do this method while maintaining &quot;fairness&quot;?</p>
<python>
2023-10-10 20:53:55
0
2,198
Bigbob556677
77,268,771
22,466,650
How to make a cluster of items based on membership?
<p>I need like a magnet effect to form a cluster within a list of unique items.</p> <p>My input is this:</p>
<pre><code>lst1 = ['ah', 'ab', 'c', None, 'xo', 'i', 'b', 'lji', 'z', 'bel', 'oyb']
</code></pre>
<p>So based on a given item (a string that could be of any length), we need to move from left and right (towards this item) every string that contains it. For simplicity, let's consider the chosen item as a single letter: <code>letter = 'b'</code>. Now, we need to look for any string that contains <code>b</code> and bring it closer to it (from both sides), preserving its suborder, like a magnet. The other items will also be shifted <code>(-+)</code>.</p> <p>In this case, the output should be:</p>
<pre><code>lst2 = ['ah', 'c', None, 'xo', 'i', 'ab', 'b', 'bel', 'oyb', 'lji', 'z']
</code></pre>
<p>The cluster is represented here by the slice: <code>['ab', 'b', 'bel', 'oyb']</code>.</p> <p>I tried the code below but I got a wrong list, even though I was able to make the cluster:</p>
<pre><code>letter = 'b'
data = {}
for i, e in enumerate(lst1):
    if e is not None and letter in e and e != letter:
        data[e] = i

left = lst1.index(letter) - 1
right = lst1.index(letter) + 1
for key, value in data.items():
    if value &lt; lst1.index(letter):
        lst1[left] = key
        left -= 1
    elif value &gt; lst1.index(letter):
        lst1[right] = key
        right += 1
</code></pre>
<p>Here is what I got:</p>
<pre><code>print(lst1)
['ah', 'ab', 'c', None, 'xo', 'ab', 'b', 'bel', 'oyb', 'bel', 'oyb']
</code></pre>
<p>Do you have any suggestions?</p>
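One alternative to in-place index juggling (a sketch; the name `magnetize` is made up for illustration): partition each side of the pivot into matching and non-matching items, which preserves the relative order of movers and non-movers alike, then concatenate the pieces:

```python
def magnetize(lst, letter):
    """Pull every string containing `letter` next to the `letter` item,
    preserving the suborder of both the movers and the other items."""
    pivot = lst.index(letter)

    def is_match(e):
        return e is not None and letter in e and e != letter

    before, after = lst[:pivot], lst[pivot + 1:]
    return ([e for e in before if not is_match(e)]   # non-movers on the left
            + [e for e in before if is_match(e)]     # left side of the cluster
            + [letter]
            + [e for e in after if is_match(e)]      # right side of the cluster
            + [e for e in after if not is_match(e)]) # non-movers on the right

lst1 = ['ah', 'ab', 'c', None, 'xo', 'i', 'b', 'lji', 'z', 'bel', 'oyb']
assert magnetize(lst1, 'b') == ['ah', 'c', None, 'xo', 'i',
                                'ab', 'b', 'bel', 'oyb', 'lji', 'z']
```

This also avoids the bug in the loop approach, where items are copied into new slots without ever vacating their original positions.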
<python><list>
2023-10-10 20:26:42
4
1,085
VERBOSE
77,268,753
4,701,426
Creating a column based on values of another column
<p>Please consider this dataframe:</p>
<pre><code>import pandas as pd
import numpy as np

values = [0, 22, 30, 0, 20, 22, 11, 0, 13]
index = pd.date_range(start = '2023-10-1', periods = len(values))
df = pd.DataFrame({'values':values }, index = index)
df

            values
2023-10-01       0
2023-10-02      22
2023-10-03      30
2023-10-04       0
2023-10-05      20
2023-10-06      22
2023-10-07      11
2023-10-08       0
2023-10-09      13
</code></pre>
<p><strong>Goal:</strong> create a new column that counts how many days have passed since the last 0 in <code>values</code>.</p> <p>I can do this using a for loop:</p>
<pre><code>zero_indices = df[df['values'] == 0].index
df['days'] = np.nan
for i in range(len(zero_indices)-1):
    df['days'][zero_indices[i]: zero_indices[i+1]] = range(len(df[zero_indices[i]: zero_indices[i+1]]))
df['days'][zero_indices[-1]: ] = range(len(df[zero_indices[-1]: ]))

            values  days
2023-10-01       0  0.00
2023-10-02      22  1.00
2023-10-03      30  2.00
2023-10-04       0  0.00
2023-10-05      20  1.00
2023-10-06      22  2.00
2023-10-07      11  3.00
2023-10-08       0  0.00
2023-10-09      13  1.00
</code></pre>
<p><strong>Question</strong>: How can this be done using vectorization (faster)?</p>
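A vectorized equivalent: treat each 0 as the start of a new group by taking the cumulative sum of the zero mask, then number the rows within each group with `groupby(...).cumcount()`, which is exactly "days since the last 0":

```python
import pandas as pd

values = [0, 22, 30, 0, 20, 22, 11, 0, 13]
index = pd.date_range(start="2023-10-1", periods=len(values))
df = pd.DataFrame({"values": values}, index=index)

# Each zero bumps the group id; cumcount() restarts at 0 for every group.
groups = df["values"].eq(0).cumsum()
df["days"] = df.groupby(groups).cumcount()

assert df["days"].tolist() == [0, 1, 2, 0, 1, 2, 3, 0, 1]
```

Note this numbers rows, not calendar days, so it matches the loop version only while the index has no gaps; with missing dates one would difference the timestamps instead.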
<python><pandas><numpy><vectorization>
2023-10-10 20:21:47
1
2,151
Saeed
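For the question above, a common vectorized pattern is to let each zero open a new group via a boolean cumulative sum, then number the rows inside each group. This is a sketch assuming the series starts with a 0, as in the example; otherwise the leading rows are counted from the start of the frame rather than from a zero.

```python
import pandas as pd

values = [0, 22, 30, 0, 20, 22, 11, 0, 13]
df = pd.DataFrame({'values': values},
                  index=pd.date_range(start='2023-10-1', periods=len(values)))

# Each 0 increments the cumulative sum, starting a new group;
# cumcount() then numbers the rows within each group from 0.
df['days'] = df.groupby((df['values'] == 0).cumsum()).cumcount()
print(df['days'].tolist())  # -> [0, 1, 2, 0, 1, 2, 3, 0, 1]
```

This avoids both the Python-level loop and the chained-assignment warnings that `df['days'][...] = ...` tends to raise.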
77,268,678
2,708,215
Databricks: "Numba needs NumPy 1.22" when updating numpy
<p>I'm using Azure Databricks with Python. Among other things, I'm using the umap and numpy modules. When I try <code>import umap.umap_ as umap</code>, I get this error:</p> <p>ImportError: Numba needs NumPy 1.22 or greater. Got NumPy 1.21.</p> <p>Ok, so just update numpy, right? I ran <code>!{sys.executable} -m pip install numpy --upgrade</code> and it worked. Numpy now shows version in the module explorer:</p> <p><a href="https://i.sstatic.net/B1Hbc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B1Hbc.png" alt="Module explorer screenshot - numpy" /></a></p> <p>But when I try the umap import again I get the same error as before, telling me it needs 1.22+. That requirement is met, although it seems like the 1.26 version is not being loaded. I have tried getting the proper version loaded using this:</p> <pre><code>import pkg_resources pkg_resources.require(&quot;numpy==1.26.0&quot;) import numpy as np </code></pre> <p>Still no luck. That gives:</p> <pre><code>VersionConflict: (numpy 1.21.5 (/databricks/python3/lib/python3.9/site-packages), Requirement.parse('numpy==1.26.0')) </code></pre> <p>So how do I do this?</p>
<python><azure-databricks>
2023-10-10 20:04:52
1
503
Andrew
77,268,484
5,397,214
Opentelemetry: Unable to Link spans with trace in a straightforward/clean way
<p>When trying to add span links in a span, the straightforward way<br /> I have read in the documentation does not work. I always get <code>AttributeError: 'Context' object has no attribute 'trace_id'</code></p> <h3>My latest attempt</h3> <h3>Simplified code to reproduce issue</h3> <pre class="lang-py prettyprint-override"><code>from opentelemetry import trace from opentelemetry.trace.propagation.tracecontext import TraceContextTextMapPropagator from shared_functionality.common.observability import make_global_tracer make_global_tracer() tracer = trace.get_tracer(__name__) def message_broker_consumer_runner(): with tracer.start_as_current_span(name=f&quot;consumer_runner.{None}&quot;) as consume_span: carrier = {} TraceContextTextMapPropagator().inject(carrier) return carrier def api_gateway(): with tracer.start_as_current_span(name=f&quot;api_gateway.{None}&quot;) as consume_span: carrier = {} TraceContextTextMapPropagator().inject(carrier) return carrier def update_info(__trace_propagator): print(f&quot;{__trace_propagator=}&quot;) def end(consumer_runner_carrier): print(f&quot;{consumer_runner_carrier=}&quot;) consumer_runner_span = trace.NonRecordingSpan( TraceContextTextMapPropagator().extract(consumer_runner_carrier) ) consumer_runner_context = consumer_runner_span.get_span_context() with tracer.start_as_current_span( name=&quot;trigger_update.info&quot;, context=api_gateway(), # to show the API Gateway trigger as causal parent. links=[ trace.Link(consumer_runner_context) ], # to show consumer running function as co-parent/linked. ) as msg_span: update_info( __trace_propagator=msg_span.get_span_context() ) # Trace further passed to track business logic function. 
if __name__ == &quot;__main__&quot;: end(message_broker_consumer_runner()) </code></pre> <h4>Error</h4> <pre class="lang-py prettyprint-override"><code>consumer_runner_carrier={'traceparent': '00-624066a40219dec50ed88de4feb4ee7a-1be9a825202fa729-01'} __trace_propagator=SpanContext(trace_id=0x7ae344302d28c0000ea8d4b5457aff39, span_id=0x851a6758cb28f8b7, trace_flags=0x01, trace_state=[], is_remote=False) { &quot;name&quot;: &quot;consumer_runner.None&quot;, &quot;context&quot;: { &quot;trace_id&quot;: &quot;0x624066a40219dec50ed88de4feb4ee7a&quot;, &quot;span_id&quot;: &quot;0x1be9a825202fa729&quot;, &quot;trace_state&quot;: &quot;[]&quot; }, &quot;kind&quot;: &quot;SpanKind.INTERNAL&quot;, &quot;parent_id&quot;: null, &quot;start_time&quot;: &quot;2023-10-10T18:39:20.205144Z&quot;, &quot;end_time&quot;: &quot;2023-10-10T18:39:20.205172Z&quot;, &quot;status&quot;: { &quot;status_code&quot;: &quot;UNSET&quot; }, &quot;attributes&quot;: {}, &quot;events&quot;: [], &quot;links&quot;: [], &quot;resource&quot;: { &quot;attributes&quot;: { &quot;telemetry.sdk.language&quot;: &quot;python&quot;, &quot;telemetry.sdk.name&quot;: &quot;opentelemetry&quot;, &quot;telemetry.sdk.version&quot;: &quot;1.20.0&quot;, &quot;service.name&quot;: &quot;data_fetcher&quot;, &quot;service.version&quot;: &quot;1.0&quot;, &quot;deployment.environment&quot;: &quot;development&quot; }, &quot;schema_url&quot;: &quot;&quot; } } { &quot;name&quot;: &quot;api_gateway.None&quot;, &quot;context&quot;: { &quot;trace_id&quot;: &quot;0x565151dfcc4cffbbd44e953d2c8ba27a&quot;, &quot;span_id&quot;: &quot;0x3e3e8a6265a6e481&quot;, &quot;trace_state&quot;: &quot;[]&quot; }, &quot;kind&quot;: &quot;SpanKind.INTERNAL&quot;, &quot;parent_id&quot;: null, &quot;start_time&quot;: &quot;2023-10-10T18:39:20.205270Z&quot;, &quot;end_time&quot;: &quot;2023-10-10T18:39:20.205287Z&quot;, &quot;status&quot;: { &quot;status_code&quot;: &quot;UNSET&quot; }, &quot;attributes&quot;: {}, &quot;events&quot;: [], 
&quot;links&quot;: [], &quot;resource&quot;: { &quot;attributes&quot;: { &quot;telemetry.sdk.language&quot;: &quot;python&quot;, &quot;telemetry.sdk.name&quot;: &quot;opentelemetry&quot;, &quot;telemetry.sdk.version&quot;: &quot;1.20.0&quot;, &quot;service.name&quot;: &quot;data_fetcher&quot;, &quot;service.version&quot;: &quot;1.0&quot;, &quot;deployment.environment&quot;: &quot;development&quot; }, &quot;schema_url&quot;: &quot;&quot; } } Exception while exporting Span batch. Traceback (most recent call last): File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/export/__init__.py&quot;, line 368, in _export_batch self.span_exporter.export(self.spans_list[:idx]) # type: ignore ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/export/__init__.py&quot;, line 522, in export self.out.write(self.formatter(span)) ^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/export/__init__.py&quot;, line 513, in &lt;lambda&gt; ] = lambda span: span.to_json() ^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/__init__.py&quot;, line 492, in to_json f_span[&quot;links&quot;] = self._format_links(self._links) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/__init__.py&quot;, line 535, in _format_links ] = Span._format_context( # pylint: disable=protected-access ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/usr/local/lib/python3.11/site-packages/opentelemetry/sdk/trace/__init__.py&quot;, line 500, in _format_context x_ctx[&quot;trace_id&quot;] = f&quot;0x{trace_api.format_trace_id(context.trace_id)}&quot; ^^^^^^^^^^^^^^^^ AttributeError: 'Context' object has no attribute 'trace_id' </code></pre> <h3>OpenTelemetry versions</h3> <pre class="lang-bash prettyprint-override"><code>Python 3.11.4 opentelemetry-api 1.20.0 
opentelemetry-distro 0.41b0 opentelemetry-exporter-otlp 1.20.0 opentelemetry-exporter-otlp-proto-common 1.20.0 opentelemetry-exporter-otlp-proto-grpc 1.20.0 opentelemetry-exporter-otlp-proto-http 1.20.0 opentelemetry-instrumentation 0.41b0 opentelemetry-instrumentation-aws-lambda 0.41b0 opentelemetry-instrumentation-dbapi 0.41b0 opentelemetry-instrumentation-grpc 0.41b0 opentelemetry-instrumentation-httpx 0.41b0 opentelemetry-instrumentation-logging 0.41b0 opentelemetry-instrumentation-pymongo 0.41b0 opentelemetry-instrumentation-requests 0.41b0 opentelemetry-instrumentation-sqlite3 0.41b0 opentelemetry-instrumentation-tortoiseorm 0.41b0 opentelemetry-instrumentation-urllib 0.41b0 opentelemetry-instrumentation-urllib3 0.41b0 opentelemetry-instrumentation-wsgi 0.41b0 opentelemetry-propagator-aws-xray 1.0.1 opentelemetry-proto 1.20.0 opentelemetry-sdk 1.20.0 opentelemetry-semantic-conventions 0.41b0 opentelemetry-util-http 0.41b0 </code></pre> <p>When I use Pycharm's debugger to pause execution, I can see an attribute object of <code>consumer_runner_context</code> which has a trace_id deep down in its attributes tree, and I can access it thusly:</p> <pre class="lang-py prettyprint-override"><code>consumer_runner_context.get(list(consumer_runner_context.keys())[0]).get_span_context() </code></pre> <p>and this works:</p> <pre class="lang-py prettyprint-override"><code> with tracer.start_as_current_span( name=&quot;trigger_update.info&quot;, context=api_gateway(), # to show the API Gateway trigger as causal parent. links=[trace.Link(consumer_runner_context.get(list(consumer_runner_context.keys())[0]).get_span_context())] ) </code></pre> <p>But this hardly seems the right way to do this.</p> <p>What is the right way to do this?</p>
<python><open-telemetry>
2023-10-10 19:22:24
2
469
kchawla-pi
77,268,235
3,204,212
Why isn't my locally pip-installed package importable?
<p>I'm trying to create and install a Python package, but no matter what I try, I cannot get it to be importable. I've followed the tutorial on the official website and in both of my books and on three different websites to the letter and cannot get it going, nor to produce an error I can understand or find helpful Google/SO results for. I'd appreciate any explanation of what I'm doing wrong.</p> <h1>Folder structure</h1> <pre><code>src/ __init__.py stuff.py pyproject.toml </code></pre> <h2><code>__init__.py</code></h2> <pre><code>from stuff import * </code></pre> <h2><code>stuff.py</code></h2> <pre><code>def dance() -&gt; int: return 20 </code></pre> <h2><code>pyproject.toml</code></h2> <pre><code>[project] name = &quot;applesauce&quot; version = &quot;0.0.1&quot; authors = [ { name=&quot;Jane Doe&quot;, email=&quot;janedoe@example.com&quot; }, ] description = &quot;A test package&quot; readme = &quot;README.md&quot; requires-python = &quot;&gt;=3.7&quot; classifiers = [ &quot;Programming Language :: Python :: 3&quot;, &quot;License :: OSI Approved :: MIT License&quot;, &quot;Operating System :: OS Independent&quot;, ] [project.urls] &quot;Homepage&quot; = &quot;https://example.com/applesauce&quot; &quot;Bug Tracker&quot; = &quot;https://example.com/applesauce/issues&quot; </code></pre> <h1>What I try</h1> <pre><code>$ python -m pip install ~/applesauce Installing build dependencies ... done Getting requirements to build wheel ... done Installing backend dependencies ... done Preparing metadata (pyproject.toml) ... done Building wheels for collected packages: applesauce Building wheel for applesauce (pyproject.toml) ... 
done Created wheel for applesauce: filename=applesauce-0.0.1-py3-none-any.whl size=1539 sha256=7aad1073d884974ff119dc4ac04b40a2daaecf6b3b45f34b3de5bfd55cfa4c67 Stored in directory: /tmp/pip-ephem-wheel-cache-an0mxg3o/wheels/a2/5b/ce/6e4327bbc99ccd453f25abb9e6258bd6d475ba061d0377c48f Successfully built applesauce Installing collected packages: applesauce Successfully installed applesauce-0.0.1 $ python -c &quot;import applesauce&quot; Traceback (most recent call last): File &quot;&lt;string&gt;&quot;, line 1, in &lt;module&gt; ModuleNotFoundError: No module named 'applesauce' </code></pre> <p>This also does not work if I <code>python -m build</code> and then install the .whl file, if I use setuptools, if I try on Windows, if I install as editable, if I use absolute paths, if I use relative paths, if I use <code>--no-index</code>... I'm completely out of ideas.</p>
<python><pip>
2023-10-10 18:38:00
1
2,480
GreenTriangle
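One likely cause in the question above, offered as an assumption rather than a confirmed diagnosis: the wheel builds, but it contains a loose top-level module named `stuff` plus an `__init__.py`, not a package named `applesauce`, because the code sits directly in `src/` instead of in `src/applesauce/`. A conventional src-layout that recent setuptools auto-discovers looks like:

```text
src/
    applesauce/
        __init__.py      # from .stuff import *   (note the relative import)
        stuff.py
pyproject.toml           # should also declare a [build-system] table
```

After reinstalling, `import applesauce` should resolve; if auto-discovery still misses it, the source directory can be declared explicitly in `pyproject.toml` under `[tool.setuptools.packages.find]` with `where = ["src"]`.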
77,268,171
8,260,569
In a dataframe column of ints, replace every repeated occurrence of n with floats between n and n+1
<p>Suppose I have a df with a column <code>a</code> that looks like so:</p> <pre><code>a - 1 2 2 3 3 3 3 2 4 4 5 </code></pre> <p>I'm looking for an efficient way to &quot;de-duplicate&quot; the values by converting them to intermediate floats. So the expected output can be either of the two shown below</p> <pre><code>a a - - 1 1 2.25 2 2.5 2.33333 3.2 3 3.4 3.25 3.6 3.5 3.8 3.75 2.75 2.66666 4.33333 4 4.66666 4.5 5 5 </code></pre> <p>The only thing I can think of is to do a groupby on <code>a</code> and put an np.linspace from 0 to n, where n is the value of <code>a</code> for that group. But I think this is inefficient. Are there better, faster ways to do this? Thanks!</p>
<python><pandas><dataframe>
2023-10-10 18:26:31
3
304
Shirish
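A sketch matching the second expected column in the question above: group equal values together (by value, not by contiguity, which is what the shown output implies) and offset the k-th occurrence of a value appearing c times by k/c.

```python
import pandas as pd

df = pd.DataFrame({'a': [1, 2, 2, 3, 3, 3, 3, 2, 4, 4, 5]})

g = df.groupby('a')['a']
# The k-th occurrence of a value appearing c times becomes value + k/c,
# so e.g. the three 2s become 2, 2.333..., 2.666...
df['a'] = df['a'] + g.cumcount() / g.transform('size')
print(df['a'].tolist())
```

The first expected column follows the variant n + (k+1)/(c+1) for duplicated values; the same `cumcount`/`transform('size')` pair expresses that formula too, without calling `np.linspace` per group.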
77,268,049
11,235,680
lambda_handler not found in lambda_function
<p>I'm trying to deploy an AWS Lambda. Here's my folder architecture:</p> <p><a href="https://i.sstatic.net/tEYEb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tEYEb.png" alt="enter image description here" /></a></p> <p>and my function seems to be well defined:</p> <pre><code>import psycopg2 def lambda_handler(event, context): # Database connection parameters db_params = { } # SQL query to retrieve data from a table select_query = &quot;SELECT * FROM users&quot; try: # Establish a connection to the PostgreSQL database conn = psycopg2.connect(**db_params) # Create a cursor object to interact with the database cursor = conn.cursor() # Execute the SQL query cursor.execute(select_query) # Fetch all rows from the result set rows = cursor.fetchall() # Process the data (print it here, but you can do anything you need) for row in rows: print(row) except psycopg2.Error as e: print(f&quot;Error: {e}&quot;) finally: # Close the cursor and connection if cursor: cursor.close() if conn: conn.close() </code></pre> <p>Even with this code I get <code>&quot;errorMessage&quot;: &quot;Handler 'lambda_handler' missing on module 'lambda_function'&quot;</code>. What is wrong with this code?</p>
<python><amazon-web-services><aws-lambda>
2023-10-10 18:04:17
1
316
Bouji
77,267,931
489,088
How to vectorize a function with numpy so that it can be applied to a 3d array, given this function needs to access certain cells of the array?
<p>I have a computation in which I need to go through items of a 3d numpy array and add them to the values in the second dimension of the array (skipping the values in that dimension). It is analogous to this canonical minimal reproduction example:</p> <pre><code>import numpy as np data = np.array([ [[1, 1, 1], [10, 10, 10], [1, 1, 1]], [[2, 2, 2], [20, 20, 20], [2, 2, 2]], [[3, 3, 3], [30, 30, 30], [3, 3, 3]] ]) def process_data(const_idx, data, i, j, k): if const_idx != j: # PROBLEM: how can I access this value if this function is vectorized? value_to_add = data[i][const_idx][k] data[i][j][k] += value_to_add const_idx = 1 for i in range(data.shape[0]): for j in range(data.shape[1]): for k in range(data.shape[2]): process_data(const_idx, data, i, j, k) print(data) </code></pre> <p>Where the expected output in this case would be:</p> <pre><code>[[[11 11 11] [10 10 10] [11 11 11]] [[22 22 22] [20 20 20] [22 22 22]] [[33 33 33] [30 30 30] [33 33 33]]] </code></pre> <p>The code above works but it is very slow for large arrays. I would like to vectorize this function.</p> <p>My first stab is something like this:</p> <pre><code>def process_data(val, data, const_idx): # PROBLEM: How can I access this value given that I do not have access to the i / j / k coordinates val came from? value_to_add = ... # PROBLEM: I cannot make this check either since I dont know the j index being processed here if const_idx != j: return val + value_to_add else: return val vfunc = np.vectorize(process_data) result = vfunc(data, data, const_idx) print(result) </code></pre> <p>How can I accomplish this, or is perhaps vectorization not the answer?</p>
<python><arrays><python-3.x><numpy><vectorization>
2023-10-10 17:42:45
2
6,306
Edy Bourne
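For the loop in the question above, a sketch of an alternative that needs no per-element function at all: `np.vectorize` is essentially a convenience loop, whereas a boolean mask over the second axis plus broadcasting expresses the whole update in one statement.

```python
import numpy as np

data = np.array([
    [[1, 1, 1], [10, 10, 10], [1, 1, 1]],
    [[2, 2, 2], [20, 20, 20], [2, 2, 2]],
    [[3, 3, 3], [30, 30, 30], [3, 3, 3]],
])
const_idx = 1

row = data[:, const_idx, :].copy()           # defensive copy of the source row
mask = np.arange(data.shape[1]) != const_idx # every j except const_idx
data[:, mask, :] += row[:, None, :]          # broadcast over the masked axis
print(data)
```

The `(3, 1, 3)` shape of `row[:, None, :]` broadcasts against the `(3, 2, 3)` masked selection, so each non-constant row receives the constant row element-wise, reproducing the triple loop's result.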
77,267,893
7,231,968
flask to respond immediately on node await
<p>I have two services, service A (Node) and service B (Flask). There is an endpoint on service A which calls an endpoint of service B like this:</p> <pre><code>const response = await axios.get( endpoint_url, { params: { library_id: req.query.library_id, }, } ); </code></pre> <p>On service B this goes to a library update route which updates all the books for that library in a <code>for</code> loop, which takes some time. I want service B to respond immediately and do the computation in the background. Some libraries were suggested, like asyncio, threading, and celery, but I don't know which is the most suitable for me, and I went with threading because that seemed the simplest. I implemented the code below; however, the &quot;testing&quot; is not printed immediately after the thread is started, and the entire <code>process</code> function is executed before the response is sent back to the Node app. What am I doing wrong here?</p> <p><strong>Service B endpoint:</strong></p> <pre><code>import threading from src.Library import Library @bp_csv.route('update-library', methods=['GET']) def get_library(): library_id = request.args.get('library_id') if not library_id: raise ValueError(&quot;Missing query parameter 'library_id' &quot;) LIB = Library(library_id) thread = threading.Thread(target=LIB.process()).start() print(&quot;testing&quot;) status_code = 202 return_message = &quot;Updating library&quot; response = make_response( jsonify({&quot;message&quot;: str(return_message)}), status_code) response.headers[&quot;Content-Type&quot;] = &quot;application/json&quot; return response </code></pre>
<javascript><python><node.js><multithreading><flask>
2023-10-10 17:34:21
2
323
SunAns
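The likely culprit in the route above is `threading.Thread(target=LIB.process()).start()`: the parentheses call `process()` synchronously, and its return value (presumably `None`) becomes the thread target, so the route blocks until the work finishes. Also note that `Thread(...).start()` returns `None`, so chaining it discards the thread handle. A minimal stdlib sketch of the difference (the function name here is a stand-in for `LIB.process`):

```python
import threading
import time

def slow_task():
    """Stand-in for LIB.process: some work that takes a while."""
    time.sleep(0.2)
    print("background work done")

# Wrong: slow_task() runs *now* and blocks; its return value (None)
# would become the thread target.
#     threading.Thread(target=slow_task()).start()

# Right: hand over the callable itself; start() returns immediately.
t = threading.Thread(target=slow_task)
t.start()
print("responded immediately")   # printed before the task finishes
t.join()
```

In the route this becomes `threading.Thread(target=LIB.process).start()`; arguments, if any, go via `args=`.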
77,267,769
2,386,113
Optimizing Vectorized Transformation on NumPy Arrays
<p>I have a function that performs a matrix transformation on a NumPy array, and I'm interested in optimizing its performance by eliminating a loop and taking full advantage of NumPy's vectorized operations. The function <code>transformation(current_tuple)</code> takes a 1D tuple <code>current_tuple</code> and returns a 2x2 array.</p> <p><strong>MWE:</strong></p> <pre><code>import numpy as np def transformation(current_tuple): return np.array([[current_tuple[0] , current_tuple[1]], [ 0 , 3.*current_tuple[0]]]) array_with_two_rows = np.random.randint(0, 10, size=(2, 6)) # Example 2D array transformation_result = np.vstack([transformation(x) for x in array_with_two_rows.T]) print('true') </code></pre> <p>Currently, I apply the <code>transformation()</code> function to each column using a <strong>for-loop</strong>.</p> <p><strong>Question:</strong> I'm interested in finding a more efficient and vectorized alternative to achieve the same result without the need for a loop.</p> <p><strong>PS:</strong> The transformation logic is just a dummy logic to represent that a higher dimension matrix will be returned after transformation. The <em><strong>transformation logic is subject to change</strong></em>. The only thing valid about the transformation logic is that it will take a row vector/array as input and return a two-dimensional array as output (could be 2x2, 2 x 5, 3 x 7 etc.).</p>
<python><arrays><numpy>
2023-10-10 17:13:49
1
5,777
skm
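Since the real transformation logic in the question above is subject to change, this is only a sketch of the general pattern: allocate an `(n, rows, cols)` array, fill each output cell with a whole row of inputs at once, and reshape to reproduce the `vstack` layout. The dummy 2x2 transformation is used for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
arr = rng.integers(0, 10, size=(2, 6))   # same shape as the question's input
a, b = arr                               # a[k], b[k] are column k's entries
n = arr.shape[1]

# One (2, 2) block per column, built without a Python loop.
blocks = np.empty((n, 2, 2))
blocks[:, 0, 0] = a
blocks[:, 0, 1] = b
blocks[:, 1, 0] = 0
blocks[:, 1, 1] = 3 * a
result = blocks.reshape(-1, 2)           # same row order as np.vstack

# Cross-check against the original loop-based version.
loop = np.vstack([np.array([[x[0], x[1]], [0, 3. * x[0]]]) for x in arr.T])
print(np.array_equal(result, loop))  # -> True
```

The reshape works because `blocks[k]` occupies rows `2k` and `2k+1` of the flattened array, exactly where `vstack` places the k-th column's block; for a different output shape, only the `(n, rows, cols)` allocation and the fill lines change.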
77,267,714
2,727,278
Colab and Bigquery: endless 403 errors
<p>My goal is to access my BigQuery data in a Colab notebook, but despite this being what seems like a fairly straightforward task I'm getting endless 403 errors.</p> <p>I've added a service worker to my project and permission it the following ways:</p> <pre><code>BigQuery Admin BigQuery Data Editor BigQuery Data Viewer BigQuery User Viewer Editor </code></pre> <p>According to the <strong>Policy Analyzer</strong> each of these is applied to the project and the datasets. I've created an API key and imported that to the Colab. Despite that I get a 403 access denied error each time I run the following:</p> <pre><code>from google.cloud import bigquery from google.oauth2 import service_account import google.auth credentials = service_account.Credentials.from_service_account_file('pathtoapikey.json') project_id = 'myproject' client = bigquery.Client(credentials= credentials,project=project_id) query_job = client.query(&quot;&quot;&quot; SELECT * FROM myproject.mydataset.table_name LIMIT 1000 &quot;&quot;&quot;) results = query_job.result() </code></pre> <p>Error:</p> <pre><code>Forbidden Traceback (most recent call last) &lt;ipython-input-12-208d568225a8&gt; in &lt;cell line: 15&gt;() 13 LIMIT 1000 &quot;&quot;&quot;) 14 ---&gt; 15 results = query_job.result() 5 frames /usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/query.py in result(self, page_size, max_results, retry, timeout, start_index, job_retry) 1518 do_get_result = job_retry(do_get_result) 1519 -&gt; 1520 do_get_result() 1521 1522 except exceptions.GoogleAPICallError as exc: /usr/local/lib/python3.10/dist-packages/google/api_core/retry.py in retry_wrapped_func(*args, **kwargs) 347 self._initial, self._maximum, multiplier=self._multiplier 348 ) --&gt; 349 return retry_target( 350 target, 351 self._predicate, /usr/local/lib/python3.10/dist-packages/google/api_core/retry.py in retry_target(target, predicate, sleep_generator, timeout, on_error, **kwargs) 189 for sleep in sleep_generator: 190 try: 
--&gt; 191 return target() 192 193 # pylint: disable=broad-except /usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/query.py in do_get_result() 1508 self._job_retry = job_retry 1509 -&gt; 1510 super(QueryJob, self).result(retry=retry, timeout=timeout) 1511 1512 # Since the job could already be &quot;done&quot; (e.g. got a finished job /usr/local/lib/python3.10/dist-packages/google/cloud/bigquery/job/base.py in result(self, retry, timeout) 909 910 kwargs = {} if retry is DEFAULT_RETRY else {&quot;retry&quot;: retry} --&gt; 911 return super(_AsyncJob, self).result(timeout=timeout, **kwargs) 912 913 def cancelled(self): /usr/local/lib/python3.10/dist-packages/google/api_core/future/polling.py in result(self, timeout, retry, polling) 259 # pylint: disable=raising-bad-type 260 # Pylint doesn't recognize that this is valid in this case. --&gt; 261 raise self._exception 262 263 return self._result Forbidden: 403 Access Denied: BigQuery BigQuery: Permission denied while globbing file pattern. Location: US Job ID: </code></pre> <p>Things I've tried:</p> <ol> <li><p>Deleting all service workers</p> </li> <li><p>Re-permissioning everything</p> </li> <li><p>Removing all permissions from all users other than owner, creating</p> </li> <li><p>New service worker, then adding permissions.</p> </li> <li><p>Creating new API keys (and updating in colab/etc)</p> </li> <li><p>Tearing the whole thing down and starting from scratch</p> </li> </ol> <p>Etc. It really seems like this should be a pretty easy permissioning item, but for the life of me I can't find the missing link. Any help greatly appreciated.</p>
<python><google-bigquery><google-api><google-colaboratory>
2023-10-10 17:00:42
2
312
ike
77,267,713
11,626,909
extracting first digit of a float64 variable
<p>I imported a <code>.xlsb</code> file into Python (PyCharm IDE) using the <code>pd.read_excel()</code> function; however, before importing the <code>.xlsb</code> file, I convert it into <code>.xlsx</code> format and then import it. Please note that I did not use <code>.xlsb</code> because I faced issues regarding the parsing of the date variable.</p> <p>Now I need to extract the first digit of a float64 type variable and I run the following code -</p> <pre><code>app_invoice = app_invoice \ .assign (FIRSTDIGIT = int(str(app_invoice['INVOICE_AMOUNT'][:1])) ) </code></pre> <p>But it does not extract the first digit. Then I just checked the issue by running the following code -</p> <pre><code>str(app_invoice['INVOICE_AMOUNT']) </code></pre> <p>but the output of it looks like this. I do not know why it is <code>\n1</code> and so on</p> <pre><code>0 13611.34\n1 91000.00\n2 159.97\n3 1300.00\n4 </code></pre>
<python><pandas><string>
2023-10-10 17:00:08
2
401
Sharif
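In the question above, `str(series)` stringifies the whole Series at once, which is where the `\n1`, `\n2` row separators come from. The element-wise route is the `.str` accessor. A sketch with made-up amounts, assuming the amounts are positive (a leading `-` or a value like `0.5` would need stripping first):

```python
import pandas as pd

app_invoice = pd.DataFrame(
    {'INVOICE_AMOUNT': [13611.34, 91000.00, 159.97, 1300.00]})

# Element-wise: stringify each value, take its first character.
app_invoice['FIRSTDIGIT'] = (
    app_invoice['INVOICE_AMOUNT'].astype(str).str[0].astype(int)
)
print(app_invoice['FIRSTDIGIT'].tolist())  # -> [1, 9, 1, 1]
```

This kind of leading-digit extraction (e.g. for a Benford's-law check) stays vectorized, unlike wrapping the whole column in `int(str(...))`.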
77,267,596
720,877
mypy type stubs for AWS lambda function
<p>Are there maintained mypy types published for AWS Lambda functions?</p> <p>I'm talking here about defining a function to handle an HTTP request (from a lambda URL or API Gateway). I'm not talking about other AWS Lambda-related APIs such as are exposed for example by boto3. Here's a example of the kind of function I would like to have types for, taken from AWS's docs:</p> <pre><code>def lambda_handler(event, context): message = 'Hello {} {}!'.format(event['first_name'], event['last_name']) return { 'message' : message } </code></pre> <p>I'd like to type functions like that one as much as possible - for example, the <code>context</code> parameter (please ignore the fact that this trivial example does not happen to use parameter <code>context</code>).</p> <p>I'm aware that AWS schema registry provides &quot;bindings&quot; for Event Bridge events, but note this question is about invoking lambdas in response to HTTP requests, not Event Bridge events.</p> <p>I'm also aware that for example, if I used FastAPI together with a library like Mangum, I'd get types. I'm restricting this question just to use of the &quot;native&quot; AWS Lambda API (see the example code above).</p> <p>I'm also aware that I could define my own types, but here I'm asking about types maintained by AWS or a third party.</p>
<python><aws-lambda><mypy>
2023-10-10 16:40:00
1
2,820
Croad Langshan
77,267,532
1,671,319
How do I use pytest-mock with modules and correct import order
<p>I am totally new to <code>pytest-mock</code>. My project layout looks like this:</p> <pre><code>src pull app.py utils sftp.py tests test_pull.py </code></pre> <p>In <code>app.py</code> I have this:</p> <pre><code>from src.utils.sftp import list_files def handler(): response = list_files() </code></pre> <p>In <code>sftp.py</code> I have this:</p> <pre><code>def list_files(): ... </code></pre> <p>I am trying to write a pytest test which has <code>list_files</code> mocked.</p> <pre><code>from src.pull.app import handler def test_pulling(mocker): mocker.patch('src.utils.sftp.list_files', return_value=[&quot;something&quot;]) </code></pre> <p>This, however, does not work, I believe because the import inside <code>app.py</code> is evaluated first. It works when I change <code>app.py</code> and do a local import.</p> <pre><code>def handler(): from src.utils.sftp import list_files response = list_files() </code></pre> <p>Is there a better way to set this up? Ideally I don't want to be dependent on the import order - another developer might revert the local import and break the test.</p>
<python><pytest><pytest-mock>
2023-10-10 16:32:05
1
3,074
reikje
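The usual answer to the question above is to patch the name where it is *used*, not where it is defined, because `from src.utils.sftp import list_files` copies the function into `app`'s namespace at import time. A self-contained stdlib demonstration of the difference (the in-memory modules here are stand-ins for the real files):

```python
import sys
import types
from unittest import mock

# Two tiny in-memory modules mirroring src/utils/sftp.py and src/pull/app.py.
sftp = types.ModuleType("sftp_demo")
sftp.list_files = lambda: ["real"]
sys.modules["sftp_demo"] = sftp

app = types.ModuleType("app_demo")
app.list_files = sftp.list_files        # mimics `from ... import list_files`
app.handler = lambda: app.list_files()
sys.modules["app_demo"] = app

# Patching the defining module does NOT touch the copy app already holds:
with mock.patch("sftp_demo.list_files", return_value=["mocked"]):
    print(app.handler())                # -> ['real']

# Patching the using module is what the test needs:
with mock.patch("app_demo.list_files", return_value=["mocked"]):
    print(app.handler())                # -> ['mocked']
```

With pytest-mock, the corresponding fix is `mocker.patch('src.pull.app.list_files', return_value=[...])`; no local import is needed, and the test no longer depends on import order.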
77,267,346
21,395,742
Error while installing python package: llama-cpp-python
<p>I am using Llama to create an application. Previously I used openai but am looking for a free alternative. Based on my limited research, this library provides openai-like api access making it quite easy to add into my prexisting code. However this library has errors while downloading. I tried installing cmake which did not help.</p> <pre><code>Building wheels for collected packages: llama-cpp-python Building wheel for llama-cpp-python (pyproject.toml) ... error error: subprocess-exited-with-error × Building wheel for llama-cpp-python (pyproject.toml) did not run successfully. │ exit code: 1 ╰─&gt; [20 lines of output] *** scikit-build-core 0.5.1 using CMake 3.27.7 (wheel) *** Configuring CMake... 2023-10-10 21:23:02,749 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None loading initial cache file C:\Users\ARUSHM~1\AppData\Local\Temp\tmpf1bzj6ul\build\CMakeInit.txt -- Building for: NMake Makefiles CMake Error at CMakeLists.txt:3 (project): Running 'nmake' '-?' failed with: The system cannot find the file specified CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage -- Configuring incomplete, errors occurred! *** CMake configuration failed [end of output] note: This error originates from a subprocess, and is likely not a problem with pip. ERROR: Failed building wheel for llama-cpp-python Failed to build llama-cpp-python ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects </code></pre> <p>Although not directly related to this question, these are other questions I am unable to get answers for:</p> <ol> <li>Does this library use Llama or Llama 2?</li> <li>Will this be secure on a Python Flask Application?</li> </ol>
<python><llama><llama-cpp-python>
2023-10-10 16:02:20
7
845
hehe
77,267,314
4,504,877
FastAPI - how to feed values into a function?
<p>I'm trying to use FastAPI to take some values and then output a result from a predefined function. In reality my function is a bit more complicated than this and needs to be wrapped in a function. The following code that I'm trying gives an error:</p> <pre><code>from fastapi import FastAPI from pydantic import BaseModel app = FastAPI() def some_function(a,b,c): return(a+b+c) class Item(BaseModel): a: int b: int c :int @app.post(&quot;/test&quot;) async def create_item(item: Item): return {'response':some_function(item)} </code></pre> <p>I'd like to pass those inputs in &quot;Item&quot; into the function.</p> <p>Also - is there any way I can debug my code using FastAPI? It's very tough for me to understand what &quot;item&quot; and &quot;Item&quot; mean and what exactly Python is doing.</p>
<python><fastapi>
2023-10-10 15:57:33
1
566
user33484
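For the question above, the route receives one `Item` object, so its fields have to be passed on individually, e.g. `some_function(item.a, item.b, item.c)`, or unpacked as keyword arguments. A stdlib sketch of the idea using a dataclass as a stand-in for the pydantic model (with pydantic, `item.dict()` in v1 or `item.model_dump()` in v2 plays the role of `asdict`):

```python
from dataclasses import dataclass, asdict

def some_function(a, b, c):
    return a + b + c

@dataclass
class Item:            # stand-in for the pydantic BaseModel
    a: int
    b: int
    c: int

item = Item(a=1, b=2, c=3)

print(some_function(item.a, item.b, item.c))  # -> 6
print(some_function(**asdict(item)))          # -> 6, same call via unpacking
```

`Item` (capitalized) is the class describing the request body's shape; `item` is the instance FastAPI builds from each incoming JSON payload.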
77,267,290
1,418,090
Masking `nan` in TensorFlow efficiently
<p>Suppose I want to run a simple deep-learning classification model. Here is a data example:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd import numpy as np X = pd.DataFrame({ 'v1': [1, 2, 3, 4, np.nan, 6], 'v2': [0.3119080, 0.9352281, 0.2509079, 0.8880956, -1.1892642, np.nan], 'v3': [-1.36932765, np.nan, 0.02033295, 0.35838342, -1.11678819, 1.86502911], }) y = np.array([0, 1, 0, 1, 0, 1]) </code></pre> <p>And here is an illustrative model (not sure if it is going to work, just an example):</p> <pre class="lang-py prettyprint-override"><code>import tensorflow as tf inputs = tf.keras.Input(shape = (3,)) x = tf.keras.layers.Dense(10, activation = 'relu')(inputs) outputs = tf.keras.layers.Dense(1, activation = 'sigmoid')(x) model = tf.keras.Model(inputs = inputs, outputs = outputs) model.compile(optimizer = &quot;Adam&quot;, loss = &quot;binary_crossentropy&quot;, metrics = [&quot;accuracy&quot;]) model.fit(X, y) </code></pre> <p>Now, my problem: If I drop the NaNs, I will have only three observations. Is there a way to mask each case with NaN so that the Neural Net still uses valid observations from the other variables?</p> <p>Things I don't want (otherwise, it would make this Q very obvious): 1) replacing with means. 2) Replacing with any other value that does not make sense for the problem.</p> <p>I was thinking of zeroing the variable (I believe this is what dropout layers do, right?), but am unsure if this is ok (and whether it violates point 2 above).</p> <p>Thanks!</p>
<python><pandas><numpy><tensorflow><masking>
2023-10-10 15:54:35
0
438
Umberto Mignozzetti
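One common workaround for the question above, sketched here with NumPy only (it is an assumption that this fits the model, not the only valid approach): zero-impute the NaNs and append one missingness-indicator column per feature, so the network can tell a filled-in 0 apart from a genuine 0. The input layer then takes twice as many features.

```python
import numpy as np

X = np.array([
    [1.0,    0.31, -1.37],
    [2.0,    0.94,  np.nan],
    [np.nan, 0.25,  0.02],
])

mask = np.isnan(X)                        # True where a value is missing
X_imputed = np.where(mask, 0.0, X)        # zero out the NaNs
X_aug = np.hstack([X_imputed, mask.astype(np.float32)])

print(X_aug.shape)  # -> (3, 6): original features + indicator features
```

Note that zero-imputation interacts with feature scaling: if the features are standardized first, the filled-in zero coincides with the mean, which may or may not be acceptable for the problem.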
77,267,289
11,145,822
Connection issue while connecting to DB2 instance
<p>I'm trying to connect to a DB2 instance from my local macOS. I tried with two Python modules, but I'm getting the below errors for both modules. Any ideas/suggestions on how I can resolve this?</p> <p>I tried many things after searching on Google and checking in ChatGPT, but no success. Any light here will be much appreciated.</p> <p>Python Version : 3.7.6</p> <p><strong>Using pyodbc module</strong></p> <pre><code> import pyodbc print(pyodbc.drivers()) db2_params = { &quot;DATABASE&quot;: &quot;xxxxxx&quot;, &quot;HOSTNAME&quot;: &quot;xxxxxxx&quot;, &quot;PORT&quot;: &quot;449&quot;, &quot;PROTOCOL&quot;: &quot;TCPIP&quot;, &quot;UID&quot;: &quot;xxxxxxxx&quot;, &quot;PWD&quot;: &quot;xxxxxxxxx&quot;, } # Construct the connection string for IBM Db2 conn_str = ( f&quot;DRIVER=IBM DB2 ODBC DRIVER;&quot; f&quot;DATABASE={db2_params['DATABASE']};&quot; f&quot;HOSTNAME={db2_params['HOSTNAME']};&quot; f&quot;PORT={db2_params['PORT']};&quot; f&quot;PROTOCOL={db2_params['PROTOCOL']};&quot; f&quot;UID={db2_params['UID']};&quot; f&quot;PWD={db2_params['PWD']};&quot; ) conn = None # Initialize the connection variable try: # Establish a connection to IBM Db2 conn = pyodbc.connect(conn_str) print('Connected Successfully to IBM Db2') except Exception as e: print(f'Error: {e}') finally: # Close the connection in a finally block to ensure it's always closed if conn is not None: conn.close() </code></pre> <p><strong>Error</strong></p> <pre><code> ['ODBC Driver 18 for SQL Server'] Error: ('01000', &quot;[01000] [unixODBC][Driver Manager]Can't open lib 'IBM DB2 ODBC DRIVER' : file not found (0) (SQLDriverConnect)&quot;) </code></pre> <p>I tried to install the driver, but there is no direct way, or I haven't found one, to do so.</p> <p><strong>Using ibm_db</strong></p> <pre><code> import ibm_db db2_params = { &quot;DATABASE&quot;: &quot;xxxxxx&quot;, &quot;HOSTNAME&quot;: &quot;xxxxxx&quot;, &quot;PORT&quot;: &quot;449&quot;, &quot;PROTOCOL&quot;: &quot;TCPIP&quot;, &quot;UID&quot;:
&quot;xxxxxxx&quot;, &quot;PWD&quot;: &quot;xxxxxxx&quot;, } # Construct the connection string for IBM Db2 conn_str = ( f&quot;DATABASE={db2_params['DATABASE']};&quot; f&quot;HOSTNAME={db2_params['HOSTNAME']};&quot; f&quot;PORT={db2_params['PORT']};&quot; f&quot;PROTOCOL={db2_params['PROTOCOL']};&quot; f&quot;UID={db2_params['UID']};&quot; f&quot;PWD={db2_params['PWD']};&quot; ) try: # Establish a connection to IBM Db2 conn = ibm_db.connect(conn_str, &quot;&quot;, &quot;&quot;) print('Connected Successfully to IBM Db2') except Exception as e: print(f'Error: {e}') ibm_db.close(conn) </code></pre> <p><strong>Error</strong></p> <pre><code> Traceback (most recent call last): File &quot;/Users/x/PycharmProjects/dev_test/cnc_ibm.py&quot;, line 1, in &lt;module&gt; import ibm_db ImportError: dlopen(/Users/x/PycharmProjects/dev_test/venv/lib/python3.7/site-packages/ibm_db.cpython-37m-darwin.so, 0x0002): Symbol not found: ___cxa_throw_bad_array_new_length Referenced from: &lt;855AE640-1BCF-3A61-A65E-58F5490BFF43&gt; /Users/x/PycharmProjects/dev_test/venv/lib/python3.7/site-packages/clidriver/lib/libdb2.dylib Expected in: &lt;3F5CB4D5-26D1-3E43-B241-2688FF0A67BD&gt; /usr/lib/libstdc++.6.dylib </code></pre>
<python><python-3.x><db2>
2023-10-10 15:54:24
1
731
Sandeep
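The `pyodbc.drivers()` output above (only `['ODBC Driver 18 for SQL Server']`) confirms that no DB2 driver is registered with unixODBC, so the Driver Manager cannot resolve the name `IBM DB2 ODBC DRIVER`. A sketch of the missing registration, assuming IBM's Db2 CLI driver (clidriver) has already been downloaded and unpacked — the path is a placeholder to adjust:

```ini
; odbcinst.ini -- its location is shown by `odbcinst -j`
[IBM DB2 ODBC DRIVER]
Description = IBM Db2 CLI driver (clidriver)
Driver      = /path/to/clidriver/lib/libdb2.dylib   ; adjust to your unpacked clidriver
```

After registering, `pyodbc.drivers()` should list the DB2 entry. The separate `ibm_db` failure (`Symbol not found: ___cxa_throw_bad_array_new_length`) appears to be a binary-compatibility problem between the bundled clidriver and the local C++ runtime, so a newer `ibm_db` build or a newer Python is typically needed for that route.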
77,267,232
9,544,507
pytest.raises() doesn't catch custom exception
<p>It seems that pytest.raises(CustomException) is failing, when the class raises it:</p> <p>I have a project folder like this:</p> <pre><code>root: pytest.ini models: my_model.py custom_exceptions.py test: test_one.py </code></pre> <p>my_model.py:</p> <pre class="lang-py prettyprint-override"><code>from custom_exceptions import MyException class MyModel: def __init__(self, a): self.a = a self.raisers() def raisers(self): raise MyException </code></pre> <p>test_one.py:</p> <pre class="lang-py prettyprint-override"><code>import pytest from models import my_model from models import custom_exceptions class TestOne: def test(self): assert True def test_ex(self): with pytest.raises(custom_exceptions.MyException): my_model.MyModel(&quot;a&quot;) </code></pre> <p>custom_exceptions.py:</p> <pre class="lang-py prettyprint-override"><code>class MyException(Exception): pass </code></pre> <p>and pytest.ini:</p> <pre><code>[pytest] minversion = 7.0 pythonpath = models testpaths = test </code></pre> <p>This is under a pyenv venv. 
when i run <code>python -m pytest</code>, the first test succeeds, but the second one fails:</p> <pre><code>self = &lt;test_one.TestOne object at 0x7fa1f2504b50&gt; def test_ex(self): with pytest.raises(custom_exceptions.MyException): &gt; my_model.MyModel(&quot;a&quot;) test/test_one.py:11: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ models/my_model.py:6: in __init__ self.raisers() _ _ _ _ _ _ _ _ _ _ _ _ _ self = &lt;models.my_model.MyModel object at 0x7fa1f2504790&gt; def raisers(self): &gt; raise MyException E custom_exceptions.MyException models/my_model.py:9: MyException ========== short test summary info ============ FAILED test/test_one.py::TestOne::test_ex - custom_exceptions.MyException </code></pre> <p>However, this succeeds:</p> <pre class="lang-py prettyprint-override"><code> def test_two(self): with pytest.raises(custom_exceptions.MyException): raise custom_exceptions.MyException </code></pre> <p>I don't understand what is going on?</p> <p>I have tried</p> <pre class="lang-py prettyprint-override"><code> def test_ex(self): with pytest.raises(custom_exceptions.MyException): try: my_model.MyModel(&quot;a&quot;) except Exception as e: print(type(e)) print(type(custom_exceptions.MyException)) raise e </code></pre> <p>And the types are:</p> <pre><code>&lt;class 'custom_exceptions.MyException'&gt; &lt;class 'type'&gt; </code></pre> <p>Why would the type be 'type'?</p>
<python><python-3.x><pytest>
2023-10-10 15:45:35
1
333
Wolfeius
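The symptom is consistent with the same file being imported under two different module names: `pythonpath = models` lets `my_model.py` import `custom_exceptions` top-level, while the test imports `models.custom_exceptions` — Python then creates two distinct module objects, hence two distinct `MyException` classes, and `pytest.raises` sees a "different" exception. A stdlib-only sketch of the effect (the temporary layout mirrors the question's project):

```python
import importlib
import os
import sys
import tempfile

# Reproduce the suspected cause: one file imported under two names gives
# two distinct module objects, hence two distinct exception classes.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "models")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "custom_exceptions.py"), "w") as f:
    f.write("class MyException(Exception):\n    pass\n")

sys.path.insert(0, root)  # project root on sys.path (running pytest from root)
sys.path.insert(0, pkg)   # mimics `pythonpath = models` in pytest.ini

flat = importlib.import_module("custom_exceptions")           # what my_model.py gets
nested = importlib.import_module("models.custom_exceptions")  # what the test gets

print(flat.MyException is nested.MyException)  # False -> pytest.raises cannot match
```

The fix is to import the exception the same way everywhere (e.g. `from models import custom_exceptions` in both `my_model.py` and the test, dropping `pythonpath = models`). The `<class 'type'>` print is a red herring: `type(SomeClass)` returns the metaclass of the class object, not the class itself.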
77,267,060
2,547,403
Possibility of using python method in C++ project
<p>Python has a very useful optimization library called <code>nevergrad</code>, which I wanted to try using in a project written in C++. A simple test case on <code>nevergrad</code>:</p> <pre><code>import nevergrad as ng # optimization function def square(x): return sum((x - 0.5) ** 2) # solve method def solve(): optimizer = ng.optimizers.NGOpt(parametrization=2, budget=100) return optimizer.minimize(square).value </code></pre> <p>The peculiarity of my task is that the minimization function that I want to use instead of square must call a method from the C++ project. Visually it looks like this: <a href="https://i.sstatic.net/PVhIu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PVhIu.png" alt="scheme of calls" /></a></p> <p>If the call from C++ is done using Embedded Python, then the reverse call is done using <code>boost.python</code>. But the difficulty is that the C++ project has already been launched by the time it accesses python, and <code>Class 2</code> has been initialized accordingly. However, in most <code>boost.python</code> usage examples, the C++ class initialization is done in python.</p> <p>Is it possible to execute the sequence of calls shown in the picture C++ &gt; Python &gt; C++ and get the result in reverse order?</p>
<python><c++><boost>
2023-10-10 15:22:33
2
369
DiA
77,267,052
15,965,186
FastAPI Jinja: `url_for` not working inside macros
<p>Using <code>url_for</code> in a macro defined in another file in a template raises a KeyError for 'request'.</p> <p>Considering the following tree</p> <pre><code>. |-- app.py |-- static | |-- css | | `-- style.css | |-- img | | `-- logo.png | `-- js | |-- script.js `-- templates |-- base.html |-- index.html `-- macros `-- header.html </code></pre> <p>And this app.py</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, Request, UploadFile, HTTPException from fastapi.responses import HTMLResponse, FileResponse from fastapi.staticfiles import StaticFiles from fastapi.templating import Jinja2Templates app = FastAPI() app.mount(&quot;/static&quot;, StaticFiles(directory=&quot;static&quot;), name=&quot;static&quot;) app.mount(&quot;/static/css&quot;, StaticFiles(directory=&quot;static/css&quot;), name=&quot;css&quot;) app.mount(&quot;/static/img&quot;, StaticFiles(directory=&quot;static/img&quot;), name=&quot;img&quot;) app.mount(&quot;/static/js&quot;, StaticFiles(directory=&quot;static/js&quot;), name=&quot;js&quot;) app.mount(&quot;/mails&quot;, StaticFiles(directory=&quot;mails&quot;), name=&quot;mails&quot;) templates = Jinja2Templates(directory=&quot;templates&quot;) @app.get(&quot;/&quot;, response_class=HTMLResponse) async def index(request: Request): return templates.TemplateResponse(&quot;index.html&quot;, {&quot;request&quot;: request}) </code></pre> <p>if in base.html i define the macro in the same file it works perfectly.</p> <pre><code>{% macro header(request) %} &lt;header class=&quot;header&quot;&gt; &lt;div class=&quot;header__logo&quot;&gt; {{ request.url }} &lt;img class=&quot;header__logo-image&quot; alt=&quot;Email Sharer&quot; src=&quot;{{ url_for('img', path='logo.png') }}&quot;&gt; &lt;/img&gt; &lt;/div&gt; &lt;/header&gt; {% endmacro %} &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, 
initial-scale=1.0&quot;&gt; &lt;title&gt;Item Details&lt;/title&gt; &lt;/head&gt; &lt;body&gt; {{ header(request) }} {% block content %} {% endblock content %} &lt;/body&gt; &lt;/html&gt; </code></pre> <hr /> <p>The problem arises if i try to define the header macro in another file</p> <pre><code>{# templates/base.html #} {% import 'macros/header.html' as header %} &lt;!DOCTYPE html&gt; &lt;html lang=&quot;en&quot;&gt; &lt;head&gt; &lt;meta charset=&quot;UTF-8&quot;&gt; &lt;meta name=&quot;viewport&quot; content=&quot;width=device-width, initial-scale=1.0&quot;&gt; &lt;title&gt;Item Details&lt;/title&gt; &lt;link href=&quot;{{ url_for('css', path='style.css') }}&quot; rel=&quot;stylesheet&quot;&gt; &lt;/head&gt; &lt;body&gt; {{ header.header(request) }} {% block content %} {% endblock content %} &lt;/body&gt; &lt;/html&gt; </code></pre> <pre><code>{# templates/macros/header.html #} {% macro header(request) %} &lt;header class=&quot;header&quot;&gt; &lt;div class=&quot;header__logo&quot;&gt; {{ request.url }} &lt;img class=&quot;header__logo-image&quot; alt=&quot;Email Sharer&quot; src=&quot;{{ url_for('img', path='logo.png') }}&quot;&gt; &lt;/img&gt; &lt;/div&gt; &lt;/header&gt; {% endmacro %} </code></pre> <p>When i do that, i fastAPI raises this error in the url_for</p> <pre><code>INFO: 127.0.0.1:57681 - &quot;GET / HTTP/1.1&quot; 500 Internal Server Error ERROR: Exception in ASGI application Traceback (most recent call last): File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\uvicorn\protocols\http\httptools_impl.py&quot;, line 426, in run_asgi result = await app( # type: ignore[func-returns-value] ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\uvicorn\middleware\proxy_headers.py&quot;, line 84, in __call__ return await self.app(scope, receive, send) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\fastapi\applications.py&quot;, line 292, in __call__ await super().__call__(scope, receive, send) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\applications.py&quot;, line 122, in __call__ await self.middleware_stack(scope, receive, send) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\middleware\errors.py&quot;, line 184, in __call__ raise exc File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\middleware\errors.py&quot;, line 162, in __call__ await self.app(scope, receive, _send) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\middleware\exceptions.py&quot;, line 79, in __call__ raise exc File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\middleware\exceptions.py&quot;, line 68, in __call__ await self.app(scope, receive, sender) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py&quot;, line 20, in __call__ raise e File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\fastapi\middleware\asyncexitstack.py&quot;, line 17, in __call__ await self.app(scope, receive, send) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\routing.py&quot;, line 718, in __call__ await route.handle(scope, receive, send) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\routing.py&quot;, line 276, in handle await self.app(scope, receive, send) File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\routing.py&quot;, line 66, in app response = await func(request) ^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\fastapi\routing.py&quot;, line 273, in app raw_response = await run_endpoint_function( ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File 
&quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\fastapi\routing.py&quot;, line 190, in run_endpoint_function return await dependant.call(**values) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\app\app.py&quot;, line 86, in index return templates.TemplateResponse(&quot;index.html&quot;, {&quot;request&quot;: request}) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\templating.py&quot;, line 113, in TemplateResponse return _TemplateResponse( ^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\templating.py&quot;, line 39, in __init__ content = template.render(context) ^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\jinja2\environment.py&quot;, line 1301, in render self.environment.handle_exception() File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\jinja2\environment.py&quot;, line 936, in handle_exception raise rewrite_traceback_stack(source=source) File &quot;templates\index.html&quot;, line 1, in top-level template code {% extends &quot;base.html&quot; %} File &quot;templates\base.html&quot;, line 12, in top-level template code {{ header.header(request) }} ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\jinja2\runtime.py&quot;, line 777, in _invoke rv = self._func(*arguments) ^^^^^^^^^^^^^^^^^^^^^^ File &quot;templates\macros\header.html&quot;, line 7, in template src=&quot;{{ url_for('img', path='logo.png') }}&quot;&gt; ^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\starlette\templating.py&quot;, line 82, in url_for request = context[&quot;request&quot;] ~~~~~~~^^^^^^^^^^^ File &quot;C:\Users\krappr\dev\email-sharer\.venv\Lib\site-packages\jinja2\runtime.py&quot;, line 331, in __getitem__ raise KeyError(key) KeyError: 'request' 
</code></pre>
<python><jinja2><fastapi>
2023-10-10 15:21:46
1
1,101
Krapp
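Starlette injects `url_for` and `request` into the template *context*, and Jinja's `{% import %}`/`{% from %}` are context-less by default — which is why the macro works inline but loses `request` when imported from another file. The usual fix is to import the macro `with context` (standard Jinja syntax; worth verifying against your Jinja version):

```jinja
{# templates/base.html #}
{% from 'macros/header.html' import header with context %}
...
{{ header(request) }}
```

With `with context`, the imported macro renders with the caller's context, so the `url_for` lookup of `context["request"]` succeeds.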
77,266,962
12,691,575
python unittesting - functions depending on other modules
<p>Is there a &quot;correct&quot; way to write unit tests for functions that rely on non-local functions (multi-module dependency tests). e.g.</p> <pre><code>. ├── src │   ├── main1.py │   └── main2.py ├── tests │   └── tests_main.py </code></pre> <pre class="lang-py prettyprint-override"><code># /src/main1.py import main2 def plusOne(number: int): return main2.addOne(number) if __name__ == &quot;__main__&quot;: print(plusOne(1)) </code></pre> <pre class="lang-py prettyprint-override"><code># /src/main2.py def addOne(number: int): return number + 1 </code></pre> <pre class="lang-py prettyprint-override"><code># tests/test_main.py import unittest import src.main1 class TestMain(unittest.TestCase): def test_case1(self): number = 1 result = src.main1.plusOne(number) self.assertEqual(result, number + 1) if __name__ == &quot;__main__&quot;: unittest.main() </code></pre> <p>You can run <code>python src/main.py</code> as you'd expected. but when running tests <code>python -m unittest tests/tests_main.py</code> There's an import error:</p> <pre><code> File &quot;/.../app/src/main1.py&quot;, line 3, in &lt;module&gt; import main2 ModuleNotFoundError: No module named 'main2' </code></pre> <hr /> <p>The import statement in main1.py can be changed from <code>import main2</code> to <code>from . import main2</code> but this breaks running <code>python src/main.py</code> with a relative import error. i.e.</p> <pre><code> from . import main2 ImportError: attempted relative import with no known parent package </code></pre> <p>Is there a correct way to manage imports to satisfy import requirements and avoid conflicts?</p>
<python><python-3.x><unit-testing><testing>
2023-10-10 15:08:24
1
337
joegrammer
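One common resolution is to make `src` (and `tests`) packages, use the absolute import `from src import main2`, and always run from the project root — running the script as `python -m src.main1` instead of `python src/main1.py`. A self-contained sketch that builds that layout in a temp directory and runs the test suite against it:

```python
import os
import subprocess
import sys
import tempfile

# Build the fixed layout: `src` and `tests` are packages, imports are absolute.
root = tempfile.mkdtemp()
for d in ("src", "tests"):
    os.makedirs(os.path.join(root, d))
    open(os.path.join(root, d, "__init__.py"), "w").close()

with open(os.path.join(root, "src", "main2.py"), "w") as f:
    f.write("def addOne(number: int):\n    return number + 1\n")
with open(os.path.join(root, "src", "main1.py"), "w") as f:
    # absolute import resolves both under tests and under `python -m src.main1`
    f.write("from src import main2\n\ndef plusOne(number: int):\n    return main2.addOne(number)\n")
with open(os.path.join(root, "tests", "test_main.py"), "w") as f:
    f.write(
        "import unittest\n"
        "from src import main1\n\n"
        "class TestMain(unittest.TestCase):\n"
        "    def test_case1(self):\n"
        "        self.assertEqual(main1.plusOne(1), 2)\n"
    )

# Running with -m from the project root puts the root on sys.path,
# so `src` is importable as a package in both contexts.
result = subprocess.run(
    [sys.executable, "-m", "unittest", "tests.test_main"],
    cwd=root, capture_output=True, text=True,
)
print(result.returncode)  # 0: the test passes
```

The trade-off is that `src/main1.py` is no longer runnable as a plain file path; `python -m src.main1` (from the root) replaces it.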
77,266,908
6,059,213
Error connecting to SQL Server from Python using pyodbc on CentOS 8
<p>I am having trouble connecting to SQL Server from a Python environment using pyodbc on a CentOS 8 system.</p> <p><strong>Environment:</strong></p> <ul> <li>CentOS 8</li> <li>Python version: 3.11.2</li> <li>pyodbc version: 4.0.39</li> <li>ODBC Driver: ODBC Driver 17 for SQL Server</li> </ul> <p><strong>Error:</strong></p> <pre><code>('01000', &quot;[01000] [unixODBC][Driver Manager]Can't open lib '/opt/microsoft/msodbcsql17/lib64/libmsodbcsql-17.10.so.4.1' : file not found (0) (SQLDriverConnect)&quot;) </code></pre> <p><strong>Steps I've taken:</strong></p> <ul> <li>Verified that the ODBC driver is installed and listed correctly.</li> <li>Successfully connected to SQL Server using <code>isql</code> with the same connection string.</li> <li>Verified the driver path and ensured the library exists in the mentioned directory.</li> <li>Checked dependencies using <code>ldd</code>.</li> <li>Reinstalled SQL Server ODBC driver, unixODBC, and pyodbc.</li> <li>Checked environment variables and ensured LD_LIBRARY_PATH was set correctly.</li> <li>Set SELinux to permissive mode for testing, but it didn't help.</li> </ul> <p>The issue persists only when trying to connect from a Python environment. The driver works perfectly from the command line.</p> <p>I'm stumped and would appreciate any help or guidance on this issue. Has anyone faced something similar or have any idea what might be wrong?</p>
<python><sql-server><odbc><pyodbc>
2023-10-10 14:59:44
0
374
Tom Leung
77,266,754
7,190,575
S3 – Images sometimes showing up with wrong rotations
<p>I run a web service where users are allowed to upload images.</p> <p>I've noticed that sometimes, the uploaded image on S3 shows up rotated – as I came to learn due to exif metadata.</p> <p>Taking a sample of some of the images, and opening them in GIMP, it yields a prompt where I have to decide whether to keep or strip the exif metadata.</p> <p>I noticed, however, that sometimes the &quot;properly&quot; rotated image includes the exif metadata, and, other times, to &quot;properly&quot; rotate the image I need to tell GIMP to ignore the exif data.</p> <p>I thought I could resolve this issue simply by getting rid of the exif metadata, but this will not work under the above observation.</p> <p><strong>So, in summary:</strong> <strong>how can I guarantee image uploading to S3 with the correct image rotation?</strong></p> <p>P.S.: I am using Python via boto3 to upload images.</p> <p>Thanks!</p>
<python><django>
2023-10-10 14:39:02
1
6,949
OhMad
77,266,671
2,135,504
Polars - drop duplicate row based on column subset but keep first
<p>Given the following table, I'd like to remove the duplicates based on the column subset <code>col1,col2</code>. I'd like to keep the first row of the duplicates though:</p> <pre class="lang-py prettyprint-override"><code>data = { 'col1': [1, 2, 3, 1, 1], 'col2': [7, 8, 9, 7, 7], 'col3': [3, 4, 5, 6, 8] } tmp = pl.DataFrame(data) </code></pre> <pre><code>┌──────┬──────┬──────┐ │ col1 ┆ col2 ┆ col3 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞══════╪══════╪══════╡ │ 1 ┆ 7 ┆ 3 │ │ 2 ┆ 8 ┆ 4 │ │ 3 ┆ 9 ┆ 5 │ │ 1 ┆ 7 ┆ 6 │ │ 1 ┆ 7 ┆ 9 │ └──────┴──────┴──────┘ </code></pre> <p>The result should be</p> <pre><code>┌──────┬──────┬──────┐ │ col1 ┆ col2 ┆ col3 │ │ --- ┆ --- ┆ --- │ │ i64 ┆ i64 ┆ i64 │ ╞══════╪══════╪══════╡ │ 1 ┆ 7 ┆ 3 │ │ 2 ┆ 8 ┆ 4 │ │ 3 ┆ 9 ┆ 5 │ └──────┴──────┴──────┘ </code></pre> <p>Usually I'd do this with pandas <code>df[&quot;col1&quot;,&quot;col2&quot;].is_duplicated(keep='first')</code>, but polars <code>dl.is_duplicated()</code> marks all rows as duplicates including the first occurence.</p>
<python><pandas><python-polars>
2023-10-10 14:26:26
2
2,749
gebbissimo
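In polars this lives on `unique` rather than `is_duplicated`: `tmp.unique(subset=["col1", "col2"], keep="first", maintain_order=True)` (recent polars versions; `maintain_order=True` preserves the original row order). The keep-first semantics over the question's data can be sketched stdlib-only:

```python
# keep-first deduplication over a subset of columns, stdlib only
data = {
    "col1": [1, 2, 3, 1, 1],
    "col2": [7, 8, 9, 7, 7],
    "col3": [3, 4, 5, 6, 8],
}
rows = list(zip(data["col1"], data["col2"], data["col3"]))

seen = set()
kept = []
for row in rows:
    key = (row[0], row[1])   # the subset ["col1", "col2"]
    if key not in seen:      # first occurrence wins
        seen.add(key)
        kept.append(row)

print(kept)  # [(1, 7, 3), (2, 8, 4), (3, 9, 5)]
```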
77,266,623
10,574,250
Assign pyspark column based upon other column values using groupby average
<p>I have a pyspark dataframe hat looks as such:</p> <pre><code> identifier numbers_to_sum dog 5 cat 4 dog 3 parrot 2 cat 7 </code></pre> <p>What I want is to assign a new column with the values based upon a groupby average of numbers_to_sum related to the specific identifier. The result would look like this</p> <pre><code> identifier numbers_to_avg average dog 5 4 cat 4 5.5 dog 3 4 parrot 2 2 cat 7 5.5 </code></pre> <p>I tried to do this:</p> <pre><code>df.withColumn('average', df.groupBy('identifier').mean('numbers_to_avg').alias('avg').select('avg')) </code></pre> <p>But I get this error:</p> <pre><code>PySparkTypeError: [NOT_COLUMN] Argument `col` should be a Column, got DataFrame. </code></pre> <p>Any help would be appreciated.</p>
<python><pyspark>
2023-10-10 14:20:21
1
1,555
geds133
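The error occurs because `df.groupBy(...).mean(...)` returns a new aggregated DataFrame, not a `Column`. The usual fix is a window function: `df.withColumn("average", F.avg("numbers_to_sum").over(Window.partitionBy("identifier")))` with `from pyspark.sql import functions as F, Window`. The broadcast-the-group-mean logic that the window computes, sketched stdlib-only:

```python
from collections import defaultdict

rows = [("dog", 5), ("cat", 4), ("dog", 3), ("parrot", 2), ("cat", 7)]

# first pass: per-key sum and count (what partitionBy + avg computes)
totals = defaultdict(lambda: [0, 0])
for key, value in rows:
    totals[key][0] += value
    totals[key][1] += 1

# second pass: attach each group's mean back onto every original row
result = [(key, value, totals[key][0] / totals[key][1]) for key, value in rows]
print(result)
# [('dog', 5, 4.0), ('cat', 4, 5.5), ('dog', 3, 4.0), ('parrot', 2, 2.0), ('cat', 7, 5.5)]
```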
77,266,318
3,829,437
Defining an Annotated[str] with constraints in Pydantic v2
<p>Suppose I want a validator for checksums that I can reuse throughout an application. The Python type-hint system has changed a lot in 3.9+ and that is adding to my confusion. In pydantic v1, I subclassed <code>str</code> and implement <code>__get_pydantic_core_schema__</code> and <code>__get_validators__</code> class methods. v2 has changed some of this and the preferred way has changed to using annotated types.</p> <p>In <a href="https://github.com/pydantic/pydantic-extra-types/blob/main/pydantic_extra_types/mac_address.py" rel="nofollow noreferrer">pydantic-extra-types</a> package I have found examples that require insight into the inner workings of pydantic. I can copy and get something working, but I would prefer to find the &quot;correct&quot; user's way to do it rather than copying without understanding.</p> <p>In pydantic v2 it looks like I can make a constrained string as part of a pydantic class, e.g.</p> <pre class="lang-py prettyprint-override"><code>from typing import Annotated from pydantic import BaseModel, StringConstraints class GeneralThing(BaseModel): special_string = Annotated[str, StringConstraints(pattern=&quot;^[a-fA-F0-9]{64}$&quot;)] </code></pre> <p>but this is not valid (pydantic.errors.PydanticUserError: A non-annotated attribute was detected). Additionally I would have to annotate every field I want to constrain, as opposed to <code>special_string = ChecksumStr</code> that I was able to do in the past.</p>
<python><python-typing><pydantic>
2023-10-10 13:39:58
1
413
nerdstrike
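The `PydanticUserError` comes from the `=`: that makes `special_string` a plain class attribute holding a type, not an annotated field. With a colon it becomes a field, and the `Annotated` alias itself is the reusable piece. A sketch — the pydantic part is shown as comments so the runnable bit stays stdlib-only:

```python
import re
from typing import Annotated  # Python 3.9+; note the alias below is reusable

# With pydantic v2 installed, this is the whole pattern:
#     from pydantic import BaseModel, StringConstraints
#     ChecksumStr = Annotated[str, StringConstraints(pattern=r"^[a-fA-F0-9]{64}$")]
#     class GeneralThing(BaseModel):
#         special_string: ChecksumStr      # colon (annotation), not `=`
#
# The constraint pattern itself, checked with the stdlib:
CHECKSUM_RE = re.compile(r"^[a-fA-F0-9]{64}$")
print(bool(CHECKSUM_RE.fullmatch("ab" * 32)))  # True: 64 hex chars
print(bool(CHECKSUM_RE.fullmatch("xyz")))      # False
```

`ChecksumStr` can then annotate any number of fields, which replaces the v1 `str`-subclass-with-`__get_validators__` approach.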
77,266,124
13,558,699
icicidirect breeze_connect websocket is closing instantly on startup
<p>I am using the direct code snippet from icicidirect breeze_connect api documentation. It is instantly getting closed after subscription of data feed is successful. I am facing this issue in Pycharm and aws cloud9 IDE. The code is running fine and callback function returning the expected data in Jupyter notebook. The code I am using is as below,</p> <pre><code>from breeze_connect import BreezeConnect from time import sleep api_key = &quot;&quot; api_secret = &quot;&quot; session_token = &quot;&quot; breeze = BreezeConnect(api_key=api_key) breeze.generate_session(api_secret=api_secret, session_token=session_token) # Connect to websocket(it will connect to tick-by-tick data server) breeze.ws_connect() def on_ticks(tick): print(&quot;Ticks: {}&quot;.format(tick)) # Assign the callbacks. breeze.on_ticks = on_ticks sleep(2) breeze.subscribe_feeds( exchange_code=&quot;NFO&quot;, stock_code=&quot;CNXBAN&quot;, product_type=&quot;options&quot;, expiry_date=&quot;11-Oct-2023&quot;, strike_price=&quot;45000&quot;, right=&quot;Call&quot;, get_exchange_quotes=True, get_market_depth=False, ) </code></pre> <p>I tried to uninstall the breeze_connect api and reinstalled it but of no use. I have also checked other websocket libraries which are up to date. As I am using direct code from api documentation, nothing seems wrong in the code. api link <a href="https://pypi.org/project/breeze-connect/" rel="nofollow noreferrer">https://pypi.org/project/breeze-connect/</a></p>
<python><python-3.x><websocket><pycharm><aws-cloud9>
2023-10-10 13:15:51
2
313
Yogendra Kumar
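One likely explanation (an assumption, since the snippet matches the API docs): in Jupyter the kernel stays alive after the last cell, so the background websocket thread keeps delivering ticks, but as a plain script the process reaches end-of-file right after `subscribe_feeds` and exits, tearing the socket down. Keeping the main thread alive is a minimal sketch of the fix (`keep_alive` is a hypothetical helper name):

```python
import time

def keep_alive(stop_after=None):
    """Block the main thread so background websocket threads keep running.

    stop_after bounds the wait in seconds; None blocks until Ctrl+C."""
    deadline = None if stop_after is None else time.monotonic() + stop_after
    try:
        while deadline is None or time.monotonic() < deadline:
            time.sleep(0.05)
    except KeyboardInterrupt:
        pass

# after breeze.subscribe_feeds(...):
# keep_alive()   # block forever; call breeze.ws_disconnect() on shutdown
```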
77,266,110
156,458
How can I find out the number of tabs at the beginning of each line in a text file?
<p>I have a text file, where each line may start with a number of tabs, including no tabs. For example, the first line starts with no tab, the second line with 1 tab, and the third line with 2 tabs:</p> <pre><code>Chapter 1 1 1.1 Chapter 2 1 1.1 </code></pre> <p>Is it possible to get the number of tabs at the beginning of each line, by using Python?</p>
<python><text-processing>
2023-10-10 13:14:03
3
100,686
Tim
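Yes — the count is just the difference in length before and after stripping tabs from the left of each line:

```python
def leading_tabs(line: str) -> int:
    # lstrip("\t") removes only leading tab characters
    return len(line) - len(line.lstrip("\t"))

text = "Chapter 1\n\t1\n\t\t1.1\nChapter 2\n\t1\n\t\t1.1\n"
print([leading_tabs(line) for line in text.splitlines()])
# [0, 1, 2, 0, 1, 2]
```

For a file on disk, iterate `for line in open(path)` and apply `leading_tabs` to each line the same way.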
77,266,095
728,795
How to create a process in a new Linux namespace
<p>I am trying to create a child process with Python under a new <a href="https://man7.org/linux/man-pages/man7/namespaces.7.html" rel="nofollow noreferrer">Linux namespace</a>. But checking the <a href="https://docs.python.org/3/library/subprocess.html" rel="nofollow noreferrer">subprocess documentation</a> it does not seem as though Python actually has an API to do so. The closest thing I found is the <code>unshare</code> method in the <code>os</code> module (<a href="https://docs.python.org/3/library/os.html#os.unshare" rel="nofollow noreferrer">here</a>). But that seems to require these steps:</p> <ol> <li>Create a child process in the same namespace as the current parent</li> <li>Run unshare isolate the child process</li> <li>Run the command(s) we wanted in the child process</li> </ol> <p>That's not quite the same as creating an isolated process to start with. Is there indeed no simple API in Python for this?</p> <p>As an example, here is the analogous code in Go:</p> <pre><code>cmd := exec.Command(...) cmd.SysProcAttr = &amp;syscall.SysProcAttr { Cloneflags: syscall.CLONE_NEWUTS } cmd.Run() </code></pre> <p>The question is how to achieve the same with Python.</p>
<python><linux-namespaces>
2023-10-10 13:11:56
1
56,836
Andrei
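There is no namespace flag on `subprocess` itself, but `preexec_fn` runs in the child between `fork()` and `exec()` — exactly where `unshare()` belongs — so the command starts already inside the new namespace, mirroring the Go snippet. A sketch under stated assumptions: Linux only, Python 3.12+ (`os.unshare`), and sufficient privileges (root or a user namespace) for most namespace types:

```python
import os
import subprocess

def run_in_new_uts_namespace(argv):
    # unshare() executes in the forked child, before exec() replaces it
    # with the target command, so argv runs in a fresh UTS namespace.
    return subprocess.run(argv, preexec_fn=lambda: os.unshare(os.CLONE_NEWUTS))

# Example (as root on Linux): hostname change stays inside the namespace
#     run_in_new_uts_namespace(["hostname", "sandboxed"])
```

This collapses the three steps in the question into one call site; the child is never visible outside the new namespace because `unshare` happens before `exec`.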
77,266,032
17,835,656
how can i solve the problem in the language's direction in FPDF2?
<p>I am working on a project to print words in arabic on a PDF file using FPDF2 in python.</p> <p>but i have faced a lot of problems in showing the language in its original direction.</p> <p>FPDF2 shows everything language in english direction.</p> <p>this is how you read english :</p> <p><a href="https://i.sstatic.net/Yjtj7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Yjtj7.png" alt="enter image description here" /></a></p> <p>and this is how you should read arabic:</p> <p><a href="https://i.sstatic.net/O6D6c.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O6D6c.png" alt="enter image description here" /></a></p> <p><strong>but when i put the word above in arabic i face a problem &gt;&gt;&gt;&gt;</strong></p> <p><strong>The problem is FPDF Shows arabic like this :</strong></p> <p>1- Tax Before Discount</p> <p>2- The Total Without</p> <p><strong>But I need it to be like this:</strong></p> <p>1- The Total Without</p> <p>2- Tax Before Discount</p> <blockquote> <p><strong>i wrote it in english to the problem be understandable.</strong></p> </blockquote> <p>And This is the problem in arabic language :</p> <p><a href="https://i.sstatic.net/HY3wr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HY3wr.png" alt="enter image description here" /></a></p> <h1>But Why This Problem Comes ?</h1> <hr /> <h2>because FPDF2 shows arabic like it shows english, it shows it in the same direction.</h2> <p>This is my code:</p> <pre class="lang-py prettyprint-override"><code>from fpdf import FPDF import arabic_reshaper def arabic(text): if text.isascii(): return text else: reshaped_text = arabic_reshaper.reshape(text) return reshaped_text[::-1] pdf = FPDF() pdf.add_page() pdf.add_font(family='DejaVu', style='', fname=&quot;DejaVuSans.ttf&quot;) pdf.add_font(family='DejaVu', style='B', fname=&quot;DejaVuSans.ttf&quot;) pdf.set_font(family='DejaVu', style='', size=55) pdf.set_margin(5.0) with pdf.table(cell_fill_color=(200,200,200), 
cell_fill_mode=&quot;ALL&quot;, text_align=&quot;CENTER&quot;) as table: names_row = table.row() names_row.cell(text=arabic(&quot;الإجمالي بدون الضريبة قبل الخصم&quot;)) pdf.set_fill_color(r=255,g=255,b=255) row = table.row() row.cell(text=str(5000)) pdf.output('final.pdf') </code></pre> <p>The result of the code :</p> <p><a href="https://i.sstatic.net/nVc4e.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nVc4e.png" alt="enter image description here" /></a></p> <p>The problem will be shown when you have to separate the text to multi lines not in one line.</p> <p>i have tried a lot of solutions but no one of them made me close.</p> <p>Thanks.</p>
<python><python-3.x><pdf><fpdf><fpdf2>
2023-10-10 13:03:42
1
721
Mohammed almalki
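Reversing the whole reshaped string (`[::-1]`) is what breaks multi-line output: FPDF wraps the already-reversed text, so the *lines* come out in the wrong order. The robust route is the Unicode bidi algorithm (e.g. `python-bidi`'s `get_display`) rather than a plain reverse, applied so that wrapping happens in logical order first. The per-line idea can be sketched with the stdlib (a simplification of what bidi reordering does):

```python
import textwrap

def rtl_lines(logical_text: str, width: int) -> list[str]:
    # wrap FIRST in logical (reading) order, then mirror each line
    # individually, so line 1 stays line 1 after the direction flip
    return [line[::-1] for line in textwrap.wrap(logical_text, width)]

print(rtl_lines("abc def ghi", 7))  # ['fed cba', 'ihg']
```

Recent fpdf2 releases also offer optional text shaping support, which is worth checking before hand-rolling reshaping and reversal.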
77,265,938
5,942,907
cargo rustc failed with code 101 - Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects
<p>I have a project with Python 3.11.4. Until now I was able to build my docker image without issues with the next libraries (just mentioning the important ones related to the issue):</p> <p>transformers 4.8.2</p> <p>tokenizers 0.10.3</p> <p>In the Dockerfile the rust compiler is installed by executing the next:</p> <pre><code>RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y --default-toolchain 1.66.1 # Add .cargo/bin to PATH ENV PATH=&quot;/root/.cargo/bin:${PATH}&quot; </code></pre> <p>But from yesterday it is not working anymore. It is giving an error by trying to install the tokenizers library when one week ago it was working fine and not changes were done in the code until yesterday. This is how the error looks:</p> <pre> 215.3 Compiling tokenizers v0.10.1 (/tmp/pip-req-build-nxa8r_ow/tokenizers-lib) 215.3 Running `/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg 'feature="default"' --cfg 'feature="indicatif"' --cfg 'feature="progressbar"' -C metadata=dde566ece3782b43 -C extra-filename=-dde566ece3782b43 --out-dir /tmp/pip-req-build-nxa8r_ow/target/release/deps -L dependency=/tmp/pip-req-build-nxa8r_ow/target/release/deps --extern clap=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libclap-e97ec3243ee04998.rmeta --extern derive_builder=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libderive_builder-41ec09770f2959ba.so --extern esaxx_rs=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libesaxx_rs-f2759c190d1e60f6.rmeta --extern indicatif=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libindicatif-a13f9aa303c115b1.rmeta --extern itertools=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libitertools-2781adac909e0e8d.rmeta --extern 
lazy_static=/tmp/pip-req-build-nxa8r_ow/target/release/deps/liblazy_static-b3eac7b1efe0daf0.rmeta --extern log=/tmp/pip-req-build-nxa8r_ow/target/release/deps/liblog-e9c072abf79b5d2b.rmeta --extern onig=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libonig-7322b1e79f302581.rmeta --extern rand=/tmp/pip-req-build-nxa8r_ow/target/release/deps/librand-eb8967ca2ff2f601.rmeta --extern rayon=/tmp/pip-req-build-nxa8r_ow/target/release/deps/librayon-06bbb925cd5ab1af.rmeta --extern rayon_cond=/tmp/pip-req-build-nxa8r_ow/target/release/deps/librayon_cond-d5db76508c986330.rmeta --extern regex=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libregex-42ac3f9a5fee7536.rmeta --extern regex_syntax=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libregex_syntax-6d6a76aa7e489183.rmeta --extern serde=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libserde-699614b478bcb51c.rmeta --extern serde_json=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libserde_json-931598b7299b1c2d.rmeta --extern spm_precompiled=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libspm_precompiled-71a4dce0d8e7a388.rmeta --extern unicode_normalization_alignments=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libunicode_normalization_alignments-a9d3428c3ac7b5af.rmeta --extern unicode_segmentation=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libunicode_segmentation-83c854e18f560ee0.rmeta --extern unicode_categories=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libunicode_categories-efa8a5c4f5aee929.rmeta -L native=/tmp/pip-req-build-nxa8r_ow/target/release/build/esaxx-rs-a7fec0442126d010/out -L native=/tmp/pip-req-build-nxa8r_ow/target/release/build/onig_sys-03c260a5c54a327a/out` 215.3 warning: `#[macro_use]` only has an effect on `extern crate` and modules 215.3 --> tokenizers-lib/src/utils/mod.rs:24:1 215.3 | 215.3 24 | #[macro_use] 215.3 | ^^^^^^^^^^^^ 215.3 | 215.3 = note: `#[warn(unused_attributes)]` on by default 215.3 215.3 warning: `#[macro_use]` only has an effect on `extern crate` and modules 
215.3 --> tokenizers-lib/src/utils/mod.rs:35:1 215.3 | 215.3 35 | #[macro_use] 215.3 | ^^^^^^^^^^^^ 215.3 215.3 warning: variable does not need to be mutable 215.3 --> tokenizers-lib/src/models/unigram/model.rs:280:21 215.3 | 215.3 280 | let mut target_node = &mut best_path_ends_at[key_pos]; 215.3 | ----^^^^^^^^^^^ 215.3 | | 215.3 | help: remove this `mut` 215.3 | 215.3 = note: `#[warn(unused_mut)]` on by default 215.3 215.3 warning: variable does not need to be mutable 215.3 --> tokenizers-lib/src/models/unigram/model.rs:297:21 215.3 | 215.3 297 | let mut target_node = &mut best_path_ends_at[starts_at + mblen]; 215.3 | ----^^^^^^^^^^^ 215.3 | | 215.3 | help: remove this `mut` 215.3 215.3 warning: variable does not need to be mutable 215.3 --> tokenizers-lib/src/pre_tokenizers/byte_level.rs:175:59 215.3 | 215.3 175 | encoding.process_tokens_with_offsets_mut(|(i, (token, mut offsets))| { 215.3 | ----^^^^^^^ 215.3 | | 215.3 | help: remove this `mut` 215.3 215.3 warning: fields `bos_id` and `eos_id` are never read 215.3 --> tokenizers-lib/src/models/unigram/lattice.rs:59:5 215.3 | 215.3 53 | pub struct Lattice { 215.3 | ------- fields in this struct 215.3 ... 215.3 59 | bos_id: usize, 215.3 | ^^^^^^ 215.3 60 | eos_id: usize, 215.3 | ^^^^^^ 215.3 | 215.3 = note: `Lattice` has a derived impl for the trait `Debug`, but this is intentionally ignored during dead code analysis 215.3 = note: `#[warn(dead_code)]` on by default 215.3 215.3 error: casting `&T` to `&mut T` is undefined behavior, even if the reference is unused, consider instead using an `UnsafeCell` 215.3 --> tokenizers-lib/src/models/bpe/trainer.rs:517:47 215.3 | 215.3 513 | let w = &words[*i] as *const _ as *mut _; 215.3 | -------------------------------- casting happend here 215.3 ... 
215.3 517 | let word: &mut Word = &mut (*w); 215.3 | ^^^^^^^^^ 215.3 | 215.3 = note: `#[deny(invalid_reference_casting)]` on by default 215.3 215.3 warning: `tokenizers` (lib) generated 6 warnings 215.3 error: could not compile `tokenizers` (lib) due to previous error; 6 warnings emitted 215.3 215.3 Caused by: 215.3 process didn't exit successfully: `/root/.rustup/toolchains/stable-x86_64-unknown-linux-gnu/bin/rustc --crate-name tokenizers --edition=2018 tokenizers-lib/src/lib.rs --error-format=json --json=diagnostic-rendered-ansi,artifacts,future-incompat --crate-type lib --emit=dep-info,metadata,link -C opt-level=3 -C embed-bitcode=no --cfg 'feature="default"' --cfg 'feature="indicatif"' --cfg 'feature="progressbar"' -C metadata=dde566ece3782b43 -C extra-filename=-dde566ece3782b43 --out-dir /tmp/pip-req-build-nxa8r_ow/target/release/deps -L dependency=/tmp/pip-req-build-nxa8r_ow/target/release/deps --extern clap=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libclap-e97ec3243ee04998.rmeta --extern derive_builder=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libderive_builder-41ec09770f2959ba.so --extern esaxx_rs=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libesaxx_rs-f2759c190d1e60f6.rmeta --extern indicatif=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libindicatif-a13f9aa303c115b1.rmeta --extern itertools=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libitertools-2781adac909e0e8d.rmeta --extern lazy_static=/tmp/pip-req-build-nxa8r_ow/target/release/deps/liblazy_static-b3eac7b1efe0daf0.rmeta --extern log=/tmp/pip-req-build-nxa8r_ow/target/release/deps/liblog-e9c072abf79b5d2b.rmeta --extern onig=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libonig-7322b1e79f302581.rmeta --extern rand=/tmp/pip-req-build-nxa8r_ow/target/release/deps/librand-eb8967ca2ff2f601.rmeta --extern rayon=/tmp/pip-req-build-nxa8r_ow/target/release/deps/librayon-06bbb925cd5ab1af.rmeta --extern 
rayon_cond=/tmp/pip-req-build-nxa8r_ow/target/release/deps/librayon_cond-d5db76508c986330.rmeta --extern regex=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libregex-42ac3f9a5fee7536.rmeta --extern regex_syntax=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libregex_syntax-6d6a76aa7e489183.rmeta --extern serde=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libserde-699614b478bcb51c.rmeta --extern serde_json=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libserde_json-931598b7299b1c2d.rmeta --extern spm_precompiled=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libspm_precompiled-71a4dce0d8e7a388.rmeta --extern unicode_normalization_alignments=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libunicode_normalization_alignments-a9d3428c3ac7b5af.rmeta --extern unicode_segmentation=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libunicode_segmentation-83c854e18f560ee0.rmeta --extern unicode_categories=/tmp/pip-req-build-nxa8r_ow/target/release/deps/libunicode_categories-efa8a5c4f5aee929.rmeta -L native=/tmp/pip-req-build-nxa8r_ow/target/release/build/esaxx-rs-a7fec0442126d010/out -L native=/tmp/pip-req-build-nxa8r_ow/target/release/build/onig_sys-03c260a5c54a327a/out` (exit status: 1) 215.3 warning: build failed, waiting for other jobs to finish... 215.3 error: `cargo rustc --lib --message-format=json-render-diagnostics --manifest-path Cargo.toml --release -v --features pyo3/extension-module --crate-type cdylib --` failed with code 101 215.3 [end of output] 215.3 215.3 note: This error originates from a subprocess, and is likely not a problem with pip. 
215.3 ERROR: Failed building wheel for tokenizers 215.3 Failed to build tokenizers 215.3 ERROR: Could not build wheels for tokenizers, which is required to install pyproject.toml-based projects 215.3 215.3 215.3 at /opt/poetry/venv/lib/python3.11/site-packages/poetry/utils/env.py:1540 in _run 215.4 1536│ output = subprocess.check_output( 215.4 1537│ command, stderr=subprocess.STDOUT, env=env, **kwargs 215.4 1538│ ) 215.4 1539│ except CalledProcessError as e: 215.4 → 1540│ raise EnvCommandError(e, input=input_) 215.4 1541│ 215.4 1542│ return decode(output) 215.4 1543│ 215.4 1544│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int: 215.4 215.4 The following error occurred when trying to handle this error: 215.4 215.4 215.4 PoetryException 215.4 215.4 Failed to install /root/.cache/pypoetry/artifacts/11/c1/65/6a1ee2c3ed75cdc8840c15fb385ec739aedba8424fd6b250657ff16342/tokenizers-0.10.3.tar.gz 215.4 215.4 at /opt/poetry/venv/lib/python3.11/site-packages/poetry/utils/pip.py:58 in pip_install 215.4 54│ 215.4 55│ try: 215.4 56│ return environment.run_pip(*args) 215.4 57│ except EnvCommandError as e: 215.4 → 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e 215.4 59│ 215.4 ------ Dockerfile:57 -------------------- 55 | FROM builder-image AS dev-image 56 | 57 | >>> RUN poetry install --with main,dev 58 | 59 | FROM builder-image AS runtime-image -------------------- ERROR: failed to solve: process "/bin/sh -c poetry install --with main,dev" did not complete successfully: exit code: 1 </pre>
<python><docker><pip><python-wheel>
2023-10-10 12:48:44
3
389
sylvinho81
77,265,766
17,267,064
Spark job fails in AWS Glue | "An error occurred while calling o86.getSink. The connection attempt failed."
<p>I attempted to migrate data from a CSV file in S3 storage to a table in my Redshift cluster. As a reference, I used the autogenerated code produced after I built blocks in Visual mode in AWS Glue; that job ran successfully.</p> <p>I copied that code and changed basic details such as variable and table names in the Spark script I wanted to run under a different job. FYI, I granted all necessary permissions, attached the required policies for the job to run, and set up the VPC and security group rules.</p> <p>Here's my code:</p> <pre><code>import sys from awsglue.transforms import * from awsglue.utils import getResolvedOptions from pyspark.context import SparkContext from awsglue.context import GlueContext from awsglue.job import Job from awsglue import DynamicFrame args = getResolvedOptions(sys.argv, [&quot;JOB_NAME&quot;]) sc = SparkContext() glueContext = GlueContext(sc) spark = glueContext.spark_session job = Job(glueContext) job.init(args[&quot;JOB_NAME&quot;], args) # Script generated for node S3 bucket node1 = glueContext.create_dynamic_frame.from_options( format_options={&quot;quoteChar&quot;: '&quot;', &quot;withHeader&quot;: True, &quot;separator&quot;: &quot;,&quot;}, connection_type=&quot;s3&quot;, format=&quot;csv&quot;, connection_options={&quot;paths&quot;: [&quot;s3://testbucket-mohit/Source/AMH.csv&quot;]}, transformation_ctx=&quot;S3bucket_node1&quot;, ) # Script generated for node Amazon Redshift node2 = glueContext.write_dynamic_frame.from_options( frame=node1, connection_type=&quot;redshift&quot;, connection_options={ &quot;redshiftTmpDir&quot;: &quot;s3://aws-glue-assets-765579251507-ap-south-1/temporary/&quot;, &quot;useConnectionProperties&quot;: &quot;true&quot;, &quot;dbtable&quot;: &quot;public.amh2&quot;, &quot;connectionName&quot;: &quot;redshift&quot;, &quot;preactions&quot;: &quot;DROP TABLE IF EXISTS public.amh2; CREATE TABLE IF NOT EXISTS public.amh2 (id VARCHAR, owner_name VARCHAR, property_name VARCHAR, 
address_line1 VARCHAR, city VARCHAR, state VARCHAR, zipcode VARCHAR, country VARCHAR, square_feet VARCHAR, property_type VARCHAR, year_built VARCHAR, url VARCHAR, cityurl VARCHAR);&quot;, }, transformation_ctx=&quot;node2&quot;, ) job.commit() </code></pre> <p>The data successfully loads into dynamic frame however when it comes to writing data to redshift, it throws below error.</p> <pre><code>23/10/10 09:07:56 INFO LogPusher: stopping 23/10/10 09:07:56 INFO ProcessLauncher: postprocessing 23/10/10 09:07:56 ERROR ProcessLauncher: Error from Python:Traceback (most recent call last): File &quot;/tmp/testrun.py&quot;, line 26, in &lt;module&gt; node2 = glueContext.write_dynamic_frame.from_options( File &quot;/opt/amazon/lib/python3.7/site-packages/awsglue/dynamicframe.py&quot;, line 640, in from_options return self._glue_context.write_dynamic_frame_from_options(frame, File &quot;/opt/amazon/lib/python3.7/site-packages/awsglue/context.py&quot;, line 337, in write_dynamic_frame_from_options return self.write_from_options(frame, connection_type, File &quot;/opt/amazon/lib/python3.7/site-packages/awsglue/context.py&quot;, line 355, in write_from_options sink = self.getSink(connection_type, format, transformation_ctx, **new_options) File &quot;/opt/amazon/lib/python3.7/site-packages/awsglue/context.py&quot;, line 317, in getSink j_sink = self._ssql_ctx.getSink(connection_type, File &quot;/opt/amazon/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/java_gateway.py&quot;, line 1321, in __call__ return_value = get_return_value( File &quot;/opt/amazon/spark/python/lib/pyspark.zip/pyspark/sql/utils.py&quot;, line 190, in deco return f(*a, **kw) File &quot;/opt/amazon/spark/python/lib/py4j-0.10.9.5-src.zip/py4j/protocol.py&quot;, line 326, in get_return_value raise Py4JJavaError( py4j.protocol.Py4JJavaError: An error occurred while calling o86.getSink. : java.sql.SQLException: The connection attempt failed. 
at com.amazon.redshift.util.RedshiftException.getSQLException(RedshiftException.java:56) at com.amazon.redshift.Driver.connect(Driver.java:319) at com.amazonaws.services.glue.util.JDBCWrapper$.$anonfun$connectionProperties$5(JDBCUtils.scala:1061) at com.amazonaws.services.glue.util.JDBCWrapper$.$anonfun$connectWithSSLAttempt$2(JDBCUtils.scala:1012) at scala.Option.getOrElse(Option.scala:189) at com.amazonaws.services.glue.util.JDBCWrapper$.$anonfun$connectWithSSLAttempt$1(JDBCUtils.scala:1012) at scala.Option.getOrElse(Option.scala:189) at com.amazonaws.services.glue.util.JDBCWrapper$.connectWithSSLAttempt(JDBCUtils.scala:1012) at com.amazonaws.services.glue.util.JDBCWrapper$.connectionProperties(JDBCUtils.scala:1057) at com.amazonaws.services.glue.util.JDBCWrapper.connectionProperties$lzycompute(JDBCUtils.scala:820) at com.amazonaws.services.glue.util.JDBCWrapper.connectionProperties(JDBCUtils.scala:820) at com.amazonaws.services.glue.util.JDBCWrapper.getRawConnection(JDBCUtils.scala:833) at com.amazonaws.services.glue.RedshiftDataSink.&lt;init&gt;(RedshiftDataSink.scala:39) at com.amazonaws.services.glue.GlueContext.getSink(GlueContext.scala:1121) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:498) at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244) at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357) at py4j.Gateway.invoke(Gateway.java:282) at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132) at py4j.commands.CallCommand.execute(CallCommand.java:79) at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182) at py4j.ClientServerConnection.run(ClientServerConnection.java:106) at java.lang.Thread.run(Thread.java:750) Caused by: java.net.SocketTimeoutException: connect 
timed out at java.net.PlainSocketImpl.socketConnect(Native Method) at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:350) at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:206) at java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:188) at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392) at java.net.Socket.connect(Socket.java:607) at com.amazon.redshift.core.RedshiftStream.&lt;init&gt;(RedshiftStream.java:86) at com.amazon.redshift.core.v3.ConnectionFactoryImpl.tryConnect(ConnectionFactoryImpl.java:111) at com.amazon.redshift.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:224) at com.amazon.redshift.core.ConnectionFactory.openConnection(ConnectionFactory.java:51) at com.amazon.redshift.jdbc.RedshiftConnectionImpl.&lt;init&gt;(RedshiftConnectionImpl.java:328) at com.amazon.redshift.Driver.makeConnection(Driver.java:474) at com.amazon.redshift.Driver.connect(Driver.java:295) ... 24 more </code></pre> <p>Even I tried to run below cell block in AWS Glue's Jupyter Notebook, still it gives below error.</p> <pre><code>node2 = glueContext.write_dynamic_frame.from_options( frame=node1, connection_type=&quot;redshift&quot;, connection_options={ &quot;redshiftTmpDir&quot;: &quot;s3://aws-glue-assets-765579251507-ap-south-1/temporary/&quot;, &quot;useConnectionProperties&quot;: &quot;true&quot;, &quot;dbtable&quot;: &quot;public.amh2&quot;, &quot;connectionName&quot;: &quot;redshift&quot;, &quot;preactions&quot;: &quot;DROP TABLE IF EXISTS public.amh2; CREATE TABLE IF NOT EXISTS public.amh2 (id VARCHAR, owner_name VARCHAR, property_name VARCHAR, address_line1 VARCHAR, city VARCHAR, state VARCHAR, zipcode VARCHAR, country VARCHAR, square_feet VARCHAR, property_type VARCHAR, year_built VARCHAR, url VARCHAR, cityurl VARCHAR);&quot;, }, transformation_ctx=&quot;node2&quot;, ) </code></pre> <p>Am I missing something here? Thank you in advance.</p>
<python><pyspark><amazon-redshift><aws-glue>
2023-10-10 12:25:01
0
346
Mohit Aswani
77,265,677
2,386,605
Autocompletion with OpenAI API or Langchain
<p>Assume I have some sort of text like</p> <p>&quot;This is a&quot;</p> <p>and now I want to use the OpenAI package for Python or, even better, LangChain to complete this statement, so that something like</p> <p>&quot;This is a question on Stack Overflow&quot;</p> <p>results. How would I do that?</p>
<python><openai-api><langchain><py-langchain><azure-openai>
2023-10-10 12:13:31
1
879
tobias
77,265,609
22,466,650
How to recursively look for subclasses and aggregate their values at the same time?
<p>Please find my dataframe below:</p> <pre><code>df = pd.DataFrame({'class': ['class_b', 'class_a', 'class_c', 'class_d'], 'sub_class': ['class_d', None, 'class_e', 'class_a'], 'entities': [5, 1, 7, 6]}) print(df) class sub_class entities 0 class_b class_d 5 1 class_a None 1 2 class_c class_e 7 3 class_d class_a 6 </code></pre> <p>As per the title, I'm just trying to look for subclasses, like we do with <code>os.walk</code>, but I can't figure it out. For example, <code>class_b</code> has <code>class_d</code> as a subclass, and that one in turn has a subclass (<code>class_a</code>); we could sometimes have up to 5 sub-levels.</p> <p>My expected output is this:</p> <pre><code> class all_subclass sum_entities 0 class_a [] 1 1 class_b [class_d, class_a] 12 2 class_c [class_e] 7 3 class_d [class_a] 7 4 class_e [] 0 </code></pre> <p>I made a poor attempt below. I was thinking about making a <code>while</code> loop and merging until there is no match left, but that doesn't feel right.</p> <pre><code>df1 = df.merge(df, left_on='sub_class', right_on='class', how=&quot;left&quot;, indicator=True).filter(like='class') result = df1.groupby('class_x').apply(lambda x: list(x.dropna().T.values.tolist())).to_dict() result {'class_a': [[], [], [], []], 'class_b': [['class_b'], ['class_d'], ['class_d'], ['class_a']], 'class_c': [[], [], [], []], 'class_d': [[], [], [], []]} </code></pre> <p>Do you have any suggestions?</p>
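For what it's worth, a plain-Python recursion over the parent-to-child mapping does produce the numbers I expect (the dicts below just mirror the dataframe's columns), but it drops out of pandas entirely, which is what I would like to avoid:

```python
# Plain-Python recursion over the parent -> child mapping; the dicts
# mirror the 'class', 'sub_class' and 'entities' columns of the frame.
children = {"class_b": "class_d", "class_a": None,
            "class_c": "class_e", "class_d": "class_a"}
entities = {"class_b": 5, "class_a": 1, "class_c": 7, "class_d": 6}

def walk(cls, seen=()):
    """Return the chain of subclasses below `cls` (cycle-safe)."""
    sub = children.get(cls)
    if sub is None or sub in seen:
        return []
    return [sub] + walk(sub, seen + (sub,))

# class_e only ever appears as a sub_class, so collect it from the values too.
all_classes = sorted(set(children) | {c for c in children.values() if c})
result = {c: (walk(c),
              entities.get(c, 0) + sum(entities.get(s, 0) for s in walk(c)))
          for c in all_classes}
```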
<python><pandas>
2023-10-10 12:04:29
1
1,085
VERBOSE
77,265,591
5,513,253
Coming from Perl/DBI to Python
<p>From an absolute Python n00b:</p> <p>Years after falling in love with Perl's DBI <code>fetchall_hashref()</code>, I've come to Python to do some similar DB work.</p> <p>What I have found so far is that <code>res.fetchall()</code> returns rows in which the columns are not indexed by column name; I have to use <code>row[0]</code>, <code>row[1]</code> and so on. Kind of like Perl's <code>fetchall_arrayref()</code>.</p> <p>So I started to look for Python's equivalent of <code>fetchall_hashref()</code> and came up blank.</p> <p>Is it not there in Python land?</p> <p>BTW, I reread this and thought it needed some clarification:</p> <p>Perl's <code>fetchall_hashref()</code> returns a hash reference (as the name implies). You can access a certain column field by its column name, i.e.: <code>$row-&gt;{'column_name'}</code></p>
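For reference, the closest I have found so far is sqlite3's `row_factory`, sketched below with a made-up table; I don't know whether other DB-API drivers offer the same thing, which is partly why I am asking:

```python
import sqlite3

# Name-based row access, roughly what fetchall_hashref() gives in Perl.
conn = sqlite3.connect(":memory:")
conn.row_factory = sqlite3.Row  # rows become name-indexable
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice'), (2, 'bob')")
rows = conn.execute("SELECT id, name FROM users").fetchall()

name_of_first = rows[0]["name"]                   # access by column name
by_id = {row["id"]: dict(row) for row in rows}    # like fetchall_hashref('id')
```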
<python><database>
2023-10-10 12:01:31
0
366
Peter
77,265,534
3,532,367
Django: How to use proxy models instead of the parent
<p>My code:</p> <pre><code>from django.db import models class Animal(models.Model): type = models.CharField(max_length=100) class Dog(Animal): class Meta: proxy = True class Cat(Animal): class Meta: proxy = True </code></pre> <p>I want to do:</p> <pre><code>animals = Animal.objects.all() </code></pre> <p>but in <code>animals</code> I should have only <code>Dog</code> and <code>Cat</code> instances.</p> <p>I need to cast an <code>Animal</code> object to a specific proxy model somehow. How do I do that? In the <code>__init__</code> method?</p>
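To make the intent concrete, here is the dispatch I have in mind, written in plain Python rather than Django: swap in a specific class based on the `type` field when instances are materialised. I don't know whether reassigning `__class__` like this is acceptable inside Django's ORM, hence the question:

```python
# Plain-Python sketch of the cast I want: pick a "proxy" class per type.
class Animal:
    def __init__(self, type):
        self.type = type

class Dog(Animal):
    pass

class Cat(Animal):
    pass

PROXIES = {"dog": Dog, "cat": Cat}

def specialise(animal):
    # Swap the instance's class based on the `type` field; unknown
    # types stay plain Animal.
    animal.__class__ = PROXIES.get(animal.type, Animal)
    return animal

animals = [specialise(Animal(t)) for t in ("dog", "cat", "fish")]
```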
<python><django><model>
2023-10-10 11:53:26
3
369
Mariusz
77,265,239
5,743,194
In a pydantic model, how do I define the type of a property which is a decorated method?
<p>Consider this code snippet:</p> <pre class="lang-py prettyprint-override"><code>from typing import Dict from pydantic import BaseModel class SomeModel(BaseModel): property_1: str property_2: Dict[str, str] property_3: int = 12 @property def property_4(self) -&gt; int: response = external_call() # returns '22.0' (a string) return response &gt;&gt;&gt; a = SomeModel(property_1=&quot;a&quot;, property_2={&quot;k&quot;: &quot;v&quot;}, property_3=&quot;12.0&quot;) &gt;&gt;&gt; a.property_3 12 &gt;&gt;&gt; a.property_4 '22.0' </code></pre> <p><code>property_3</code> gets properly cast to <code>int</code>. How do I achieve the same with the <code>@property</code> decorator? Is it possible to get <code>pydantic</code> to do its magic on those decorated properties?</p>
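For completeness, this is the workaround I would like pydantic to spare me: coercing by hand inside the property (the string `'22.0'` stands in for the `external_call()` result):

```python
# Manual coercion inside the property -- works, but it is exactly the
# "magic" I was hoping pydantic would apply for me.
class SomeModelSketch:
    @property
    def property_4(self) -> int:
        response = "22.0"  # placeholder for external_call()
        return int(float(response))

a = SomeModelSketch()
value = a.property_4
```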
<python><model><pydantic>
2023-10-10 11:06:39
1
1,065
Artur
77,265,129
1,420,050
Imports in Python editable local project installs
<p>I have a Python project (the parent project) that should use two other local Python projects (the library projects) via editable installs as dependencies (see the corresponding <a href="https://peps.python.org/pep-0660/" rel="nofollow noreferrer">PEP</a>, <a href="https://pip.pypa.io/en/stable/topics/local-project-installs/" rel="nofollow noreferrer">pip</a> and <a href="https://setuptools.pypa.io/en/latest/userguide/development_mode.html" rel="nofollow noreferrer">setuptools</a> docs for more info). However, I have issues making imports within the library projects work.</p> <p>My project structure is as follows:</p> <pre><code>├── poetry.lock ├── pyproject.toml └── src ├── lib_bar │   ├── lib_bar │   │   ├── bar.py │   │   ├── helper.py │   │   └── __init__.py │   ├── poetry.lock │   └── pyproject.toml ├── lib_foo │   ├── lib_foo │   │   ├── foo.py │   │   ├── helper.py │   │   └── __init__.py │   ├── poetry.lock │   └── pyproject.toml └── main.py </code></pre> <p>I'm using <a href="https://python-poetry.org/" rel="nofollow noreferrer">Poetry</a> to manage dependencies, and the parent's <code>pyproject.toml</code> looks like this:</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;multi-editable-dependencies&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;Your Name &lt;you@example.com&gt;&quot;] packages = [{include = &quot;src&quot;}] [tool.poetry.dependencies] python = &quot;^3.11&quot; lib_foo = { path = &quot;./src/lib_foo/&quot;, develop = true } lib_bar = { path = &quot;./src/lib_bar/&quot;, develop = true } [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>and <code>lib_foo/pyproject.toml</code> looks like this:</p> <pre class="lang-ini prettyprint-override"><code>[tool.poetry] name = &quot;lib-foo&quot; version = &quot;0.1.0&quot; description = &quot;&quot; authors = [&quot;Your Name &lt;you@example.com&gt;&quot;] packages = 
[{include = &quot;lib_foo&quot;}] [tool.poetry.dependencies] python = &quot;^3.11&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>(analogously for <code>lib_bar/pyproject.toml</code>)</p> <p>Inside <code>main.py</code> I can import from both <code>lib_foo</code> and <code>lib_bar</code> like this:</p> <pre class="lang-py prettyprint-override"><code>from lib_foo.foo import do_main_foo from lib_bar.bar import do_main_bar do_main_foo() do_main_bar() </code></pre> <p>The issue is that with this setup, local imports within the library projects do not work. With <code>lib_foo/foo.py</code> looking like this:</p> <pre class="lang-py prettyprint-override"><code>from helper import do_foo def do_main_foo(): do_foo() </code></pre> <p>and <code>lib_foo/helper.py</code> like this:</p> <pre class="lang-py prettyprint-override"><code>def do_foo(): print(&quot;foo&quot;) </code></pre> <p>and analogously for <code>lib_bar</code> (replacing <code>foo</code> with <code>bar</code> everywhere), I'm getting errors when I try to run <code>main.py</code> via <code>poetry run python src/main.py</code> from the parent's root directory:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;parent&gt;/src/main.py&quot;, line 8, in &lt;module&gt; from lib_foo.foo import do_main_foo File &quot;&lt;parent&gt;/src/lib_foo/lib_foo/foo.py&quot;, line 1, in &lt;module&gt; from helper import do_foo ModuleNotFoundError: No module named 'helper' </code></pre> <p>I tried a couple of different solutions that all don't work or are unsatisfactory in different ways.</p> <h2>Modifying the path</h2> <p>I can fix this error by adding the project roots of the library projects to the path using the following code inside <code>main.py</code>:</p> <pre class="lang-py prettyprint-override"><code>sys.path.append(os.path.dirname(__file__) + &quot;/lib_foo/lib_foo/&quot;) sys.path.append(os.path.dirname(__file__) + 
&quot;/lib_bar/lib_bar/&quot;) </code></pre> <p>However, by doing this for both projects, <code>helper.py</code> now conflicts, i.e. since <code>lib_foo</code>'s <code>helper.py</code> appears first on the path, <code>lib_bar</code> tries to import from <code>lib_foo</code>'s <code>helper.py</code> and subsequently I get the error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;parent&gt;/src/main.py&quot;, line 8, in &lt;module&gt; from lib_bar.bar import do_main_bar File &quot;&lt;parent&gt;/src/lib_bar/lib_bar/bar.py&quot;, line 1, in &lt;module&gt; from helper import do_bar ImportError: cannot import name 'do_bar' from 'helper' (&lt;parent&gt;/src/lib_foo/lib_foo/helper.py) </code></pre> <h2>Renaming conflicting imports</h2> <p>The above attempt works if I rename <code>helper.py</code> to e.g. <code>help.py</code> in only one of the libraries. However, in the actual libraries I want to build such renaming would lead to some awkward names, and I want to use different combinations of libraries in different parent projects, so I would strongly prefer to be able to use modules within each project with the same names.</p> <h2>Using relative imports</h2> <p>Instead of using <code>from helper import do_foo</code>, I can use <code>from .helper import do_foo</code> (and analogously for <code>lib_bar</code>), which solves the problem. However, the actual libraries I'm planning to use will have several levels of nested modules, so I would have to use awkward import constructs like <code>from ....pkg.subpkg import foo</code>, which will break every time I'm moving a file. Therefore, I would like to be able to use absolute imports <em>within each library project</em>.</p> <h2>Summing up</h2> <p>Is there some solution that would allow me to use projects as editable, local installs while using local absolute imports within each project, that allow for modules within each project to have the same names as those in other projects? 
I am wondering how normal, non-editable installs accomplish this, and whether there is some way to leverage the same mechanism in this case.</p> <p>Note that I'm using Poetry here, but I also tested the same setup with Pip, i.e. using <code>pip install -e src/lib_foo</code>, with the same issues.</p>
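For completeness, here is a minimal, Poetry-free reproduction of the shadowing, with temporary directories standing in for the two library projects:

```python
import sys
import tempfile
from pathlib import Path

# Two "projects", each with its own top-level helper.py.
root = Path(tempfile.mkdtemp())
for lib, func in [("lib_foo", "do_foo"), ("lib_bar", "do_bar")]:
    pkg = root / lib
    pkg.mkdir()
    (pkg / "helper.py").write_text(f"def {func}():\n    return '{func}'\n")

# Put both directories on the path, lib_foo first.
sys.path[:0] = [str(root / "lib_foo"), str(root / "lib_bar")]

import helper  # resolves to lib_foo/helper.py only

has_foo = hasattr(helper, "do_foo")  # True
has_bar = hasattr(helper, "do_bar")  # False: lib_bar's helper.py is shadowed
```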
<python><pip><dependencies><python-poetry>
2023-10-10 10:50:13
0
465
Till S
77,265,055
7,366,600
Python send to Twilio websocket in audio/x-mulaw 8kHz
<p>I am trying to send media back to the Twilio websocket, but Twilio is not playing the audio. I am using gTTS for text to speech.</p> <p>Here is my code for text to speech:</p> <pre><code>def text_to_mp3(self, f_path :str , f_key: str, text: str) -&gt; str: tmp_fn = os.path.join(f_path, f_key + &quot;.mp3&quot;) tts = gTTS(text, lang=&quot;en&quot;) tts.save(tmp_fn) #save mp3 new_fn = os.path.join(f_path, f_key + &quot;.wav&quot;) audio = am.from_mp3(tmp_fn) #convert mp3 to wav audio = audio.set_channels(1) # Mono audio audio = audio.set_frame_rate(8000) # 8000Hz sample rate audio.export(new_fn, format='wav') return new_fn </code></pre> <p>Then, when sending the audio, I take that file and skip the WAV header:</p> <pre><code> def get_encoded_payload(self, filename): with wave.open(filename, 'rb') as wav_in: params = wav_in.getparams() header_size = params.nframes * params.sampwidth + 44 # 44 is the size of the WAV header wav_in.readframes(header_size) audio_data = wav_in.readframes(params.nframes) base64_encoded_audio = base64.b64encode(audio_data).decode() return base64_encoded_audio </code></pre> <p>And lastly I send the audio in the Twilio stream:</p> <pre><code> def play(self, base_data : str): media_data = { &quot;event&quot;: &quot;media&quot;, &quot;streamSid&quot;: self.streamId, &quot;media&quot;: { &quot;payload&quot;: base_data } } self.ws.send(media_data) </code></pre> <p>I am also sending a mark message after sending media:</p> <pre><code> def play_mark(self, unique_name: str): media_data = { &quot;event&quot;: &quot;mark&quot;, &quot;streamSid&quot;: self.streamId, &quot;mark&quot;: { &quot;name&quot;: unique_name } } self.ws.send(media_data) </code></pre> <p>I am not sure what I am missing. The audio file is single-channel at 8kHz.</p> <p>Edit: I tried audio/x-mulaw encoding and still nothing. 
Now my <code>get_encoded_payload</code> looks like:</p> <pre><code> def get_encoded_payload(self, filename): with wave.open(filename, 'rb') as wav_in: params = wav_in.getparams() header_size = params.nframes * params.sampwidth + 44 # 44 is the size of the WAV header wav_in.readframes(header_size) audio_data = wav_in.readframes(params.nframes) ulaw_audio_data = audioop.lin2ulaw(audio_data, params.sampwidth) base64_encoded_audio = base64.b64encode(ulaw_audio_data).decode() return base64_encoded_audio </code></pre>
<python><twilio>
2023-10-10 10:39:39
1
718
Stacy Thompson
77,264,996
2,300,597
Is there still a maxfev parameter in curve_fit? If not, how can we control the number of iterations performed?
<p>As of today, is there still a <code>maxfev</code> parameter in <code>curve_fit</code>?</p> <p>If not, how do I control the number of iterations performed while fitting?</p> <p>I tried passing it in, but it seems it's just ignored.</p> <p>I wonder if this parameter was removed and, if so, in which version. Mostly, I wonder what the recommended way of controlling the number of iterations is in the version that I have.</p> <p>I am using an Anaconda distribution and its version of scipy is <code>1.7.3</code>.</p> <p>See also</p> <p><a href="https://stackoverflow.com/questions/15831763/scipy-curvefit-runtimeerroroptimal-parameters-not-found-number-of-calls-to-fun">Scipy curvefit RuntimeError:Optimal parameters not found: Number of calls to function has reached maxfev = 1000</a></p> <p><a href="https://github.com/scipy/scipy/issues/6340" rel="nofollow noreferrer">https://github.com/scipy/scipy/issues/6340</a></p>
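For context, this is the kind of call I am making. My (possibly wrong) understanding is that `maxfev` is still forwarded to `leastsq` for the default `'lm'` method, while the `'trf'`/`'dogbox'` methods take `max_nfev` instead:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(x, a, b):
    return a * np.exp(b * x)

xdata = np.linspace(0, 1, 50)
ydata = model(xdata, 2.0, 1.5)  # noiseless synthetic data

# maxfev caps the number of function evaluations for method='lm' (the
# default when no bounds are given); max_nfev is the 'trf'/'dogbox' analogue.
popt, _ = curve_fit(model, xdata, ydata, p0=[1.0, 1.0], maxfev=5000)
```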
<python><scipy><curve-fitting>
2023-10-10 10:32:24
0
39,631
peter.petrov
77,264,885
10,909,217
Parallelizing independent creation of DataFrames
<p>I have a general question about parallelizing Dataframe operations.</p> <p>Let's say I have an operation like this in mind (pseudo-Code, <code>df</code>, <code>df1</code> and <code>df2</code> are DataFrames)</p> <pre><code>df1 = pandas_operations(df, arg1, arg2) df2 = pandas_operations(df, arg3, arg4) result = pd.concat([df1, df2]) </code></pre> <p>where <code>pandas_operations</code> is some function that makes heavy use of the pandas API to crunch numbers.</p> <p>In what situations would creating <code>df1</code> and <code>df2</code> in parallel (e.g. with <code>multiprocessing</code>) make sense in order to speed up the program?</p> <p>I am mainly asking this question because <code>pandas</code> delegates a lot of computation heavy tasks to <code>numpy</code>, which (as I understand) already makes use of multiple cores when calling code written in C. If this were true, then I could parallelize the creation of <code>df1</code> and <code>df2</code> with <code>multiprocessing</code>, but the creation of each Dataframe would likely be slower than in the sequential program.</p>
<python><pandas><numpy><multiprocessing>
2023-10-10 10:17:32
1
1,290
actual_panda
77,264,776
1,715,185
How to make a generic class inheriting from a TypeVar in Python?
<p>How can I create a generic class in Python that has the TypeVar as a base class?</p> <p>A minimal example is this:</p> <pre><code>from typing import TypeVar, Generic T = TypeVar(&quot;T&quot;) class A(T, Generic[T]): pass </code></pre> <p>Unfortunately, it gives me the following error:</p> <pre><code>TypeError: metaclass conflict: the metaclass of a derived class must be a (non-strict) subclass of the metaclasses of all its bases </code></pre> <p>A bit of context: the reason I want to do this is that I want my derived class <code>A</code> to have all the fields of <code>T</code>. The fields are annotated on <code>T</code>, and I want the resulting instance of <code>A</code> to have those fields, as it will serve as the response type in my REST server with FastAPI.</p>
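At runtime I can get roughly the effect I want with a small factory (below), but this loses the static annotation story that FastAPI relies on, so it is not really a solution:

```python
# Build the subclass dynamically once the concrete base is known.
# This gives A all of the base's fields at runtime, but type checkers
# cannot see through it, so it does not help FastAPI's response models.
def make_a(base):
    return type(f"A[{base.__name__}]", (base,), {})

class User:
    name: str = "anonymous"

AUser = make_a(User)
instance = AUser()
```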
<python><generics><fastapi><multiple-inheritance><metaclass>
2023-10-10 10:03:15
1
508
Petr
77,264,725
10,027,628
Python BigQuery import fails on Windows
<p>I'm trying to use <code>from google.cloud import bigquery</code> in my Python script, but it fails with the following error:</p> <pre><code>Script failed to evaluate(): 'NoneType' object has no attribute 'message_types_by_name' Traceback (most recent call last): File &quot;Python Script&quot;, line 4, in &lt;module&gt; from google.cloud import bigquery File &quot;__init__.py&quot;, line 35, in &lt;module&gt; from google.cloud.bigquery.client import Client File &quot;client.py&quot;, line 60, in &lt;module&gt; import google.api_core.exceptions as core_exceptions File &quot;exceptions.py&quot;, line 29, in &lt;module&gt; from google.rpc import error_details_pb2 File &quot;error_details_pb2.py&quot;, line 39, in &lt;module&gt; _ERRORINFO = DESCRIPTOR.message_types_by_name[&quot;ErrorInfo&quot;] AttributeError: 'NoneType' object has no attribute 'message_types_by_name' </code></pre> <p>I'm using Python 3.8.1 (default, Jan 24 2020, 17:35:53) [MSC v.1916 64 bit (AMD64)] and I used the following command to install the dependencies: <code>&quot;...\python\python38\python&quot; -m pip install --no-cache-dir --upgrade --target=&quot;...\some\directory&quot; google-cloud-storage google-cloud-bigquery[pandas]</code>. 
I also tried the previous version <code>google-cloud-bigquery[pandas]==3.11.4</code> and adding <code>protobuf</code> as dependency, which both did not resolve the issue.</p> <p>For example <code>from google.oauth2 import service_account</code> works just fine.</p> <p>Does anyone have an idea on how I could fix the problem?</p> <p>Edit: <code>pip freeze --path ...\some\directory</code> gives the following output:</p> <pre><code>cachetools==5.3.1 certifi==2023.7.22 charset-normalizer==3.3.0 db-dtypes==1.1.1 google-api-core==2.12.0 google-auth==2.23.3 google-cloud-bigquery==3.11.4 google-cloud-core==2.3.3 google-cloud-storage==2.11.0 google-crc32c==1.5.0 google-resumable-media==2.6.0 googleapis-common-protos==1.60.0 grpcio==1.59.0 idna==3.4 numpy==1.24.4 packaging==23.2 pandas==2.0.3 proto-plus==1.22.3 protobuf==4.24.4 pyarrow==13.0.0 pyasn1==0.5.0 pyasn1-modules==0.3.0 python-dateutil==2.8.2 pytz==2023.3.post1 requests==2.31.0 rsa==4.9 six==1.16.0 tzdata==2023.3 urllib3==2.0.6 </code></pre>
<python><windows><google-bigquery>
2023-10-10 09:56:44
1
377
Christoph H.
77,264,606
21,404,794
Plotting BoTorch Synthetic functions
<p>I'm trying to plot the <a href="https://botorch.org/api/test_functions.html#botorch.test_functions.synthetic.Ackley" rel="nofollow noreferrer">Ackley synthetic function</a> from botorch (mainly to understand how it works and why if I feed it 2 tensors it only returns 1 value), and I'm facing a bunch of different problems. I've also tried other functions but with the same results... Any person able to shed some light over this topic will be welcome.</p> <p>Here's the code I'm using:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import torch import matplotlib.pyplot as plt import botorch.test_functions.synthetic as funcs #We only use Ackley for now branin = funcs.Branin() hartman = funcs.Hartmann() ackley = funcs.Ackley() x1 = np.linspace(0, 5, 2) x2 = np.linspace(0, 5, 2) X, Y = np.meshgrid(x1,x2) Z = ackley(torch.tensor([X,Y])) fig, ax = plt.subplots(subplot_kw={'projection':'3d'}) surf = ax.plot_surface(X,Y,Z) </code></pre> <p>The fun part is that with these settings, I'm able to plot it, but if I change the linspace so it has more points, the plotting part gives an error for shape mismatch...</p>
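BoTorch's test functions treat the *last* tensor dimension as the input dimension `d`, so `torch.tensor([X, Y])` of shape `(2, n, n)` is read as a batch of `n`-dimensional points rather than a grid of 2-D points — that is why the returned values do not match the grid. Stacking the meshgrid along the last axis gives one value per grid point. The NumPy reimplementation below (assuming the standard Ackley formula with `a=20`, `b=0.2`, `c=2π`) avoids the BoTorch dependency just to show the shape convention:

```python
import numpy as np

def ackley(points, a=20.0, b=0.2, c=2.0 * np.pi):
    # `points` has shape (..., d): the *last* axis holds the d coordinates,
    # matching the convention BoTorch's test functions use.
    s1 = np.sqrt(np.mean(points ** 2, axis=-1))
    s2 = np.mean(np.cos(c * points), axis=-1)
    return -a * np.exp(-b * s1) - np.exp(s2) + a + np.e

x1 = np.linspace(0, 5, 50)
x2 = np.linspace(0, 5, 50)
X, Y = np.meshgrid(x1, x2)

pts = np.stack([X, Y], axis=-1)   # shape (50, 50, 2) — a grid of 2-D points
Z = ackley(pts)                   # shape (50, 50) — one value per grid point
```

With `Z` shaped like `X` and `Y`, `ax.plot_surface(X, Y, Z)` works for any linspace resolution. The same fix in torch is `ackley(torch.stack([torch.tensor(X), torch.tensor(Y)], dim=-1))`.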
<python><pytorch><botorch>
2023-10-10 09:40:03
2
530
David Siret Marqués
77,264,491
2,876,079
How to loop over all documents and get meta information about existing directives in Sphinx?
<p>I would like to create a custom Sphinx extension, that shows a list of all figures. Also see related question <a href="https://stackoverflow.com/questions/77259971/how-to-create-list-of-all-figures-tables-in-sphinx">How to create list of all figures (tables) in Sphinx?</a></p> <p>In the Sphinx documentation there is a ToDo example, where the ToDos instances are registered/managed in a global variable:</p> <pre><code>self.env.todo_all_todos.append...) </code></pre> <p><a href="https://www.sphinx-doc.org/en/master/development/tutorials/todo.html" rel="nofollow noreferrer">https://www.sphinx-doc.org/en/master/development/tutorials/todo.html</a></p> <p>Instead of creating a custom <code>.. my-figure::</code> directive, I would prefer to loop over the existing <code>.. figure::</code> directives to extract the corresponding meta data.</p> <p>=&gt; How can I do so?</p> <p>Below are some dummy code versions that have been <strong>hallucinated by AI</strong>. Unfortunately they <strong>do not work</strong> because the corresponding properties do not exist. 
At least they illustrate what I am looking for.</p> <pre><code>from docutils import nodes from sphinx.util.docutils import SphinxDirective class FigureListDirective(SphinxDirective): def run(self): env = self.state.document.settings.env figures = env.metadata.get('figures', []) return [nodes.paragraph('', '', nodes.Text(f'Figures: {&quot;, &quot;.join(figures)}'))] </code></pre> <hr /> <pre><code>from docutils import nodes from sphinx.util.docutils import SphinxDirective class FigureListDirective(SphinxDirective): def run(self): env = self.state.document.settings.env figures = [] # Loop over all figure nodes in the document for figure_node in env.get_domain('std').data['objects']['figure']: # Access the figure metadata figure_data = figure_node[0]['object'] figures.append(figure_data) # Create a list node to hold the figures list_node = nodes.bullet_list() # Create list item nodes for each figure for figure_data in figures: list_item_node = nodes.list_item() list_item_node.append(nodes.paragraph(text=figure_data['name'])) list_node.append(list_item_node) return [list_node] </code></pre>
<python><python-sphinx><docutils>
2023-10-10 09:26:20
0
12,756
Stefan
77,264,168
11,747,148
ImportError: <PKG_NAME> not found when importing a Julia package in Python using PyJulia
<p>I tried to import a registered Julia package into Python to use it but I got an error (I followed <a href="https://blog.esciencecenter.nl/how-to-call-julia-code-from-python-8589a56a98f2" rel="nofollow noreferrer">this turorial</a>):</p> <pre><code>$ pip install julia </code></pre> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; import julia &gt;&gt;&gt; julia.install() ... Precompiling PyCall... Precompiling PyCall... DONE PyCall is installed and built successfully. </code></pre> <p>After a complete installation, I tried to install the required package:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; from julia import Main as jl &gt;&gt;&gt; from julia import Pkg &gt;&gt;&gt; Pkg.add(&quot;OnlinePortfolioSelection&quot;) Resolving package versions... No Changes to `C:\Users\Shayan\.julia\environments\v1.9\Project.toml` No Changes to `C:\Users\Shayan\.julia\environments\v1.9\Manifest.toml` &gt;&gt;&gt; Pkg.status() Status `C:\Users\Shayan\.julia\environments\v1.9\Project.toml` [6e4b80f9] BenchmarkTools v1.3.2 [336ed68f] CSV v0.10.11 [aaaa29a8] Clustering v0.15.2 `C:\Users\Shayan\.julia\dev\Clustering` [a2441757] Coverage v1.6.0 [a93c6f00] DataFrames v1.6.1 [aaf54ef3] DistributedArrays v0.6.7 [cd3eb016] HTTP v1.10.0 [358108f5] JMcDM v0.7.8 `C:\Users\Shayan\.julia\dev\JMcDM` [dc48124f] LinearSegmentation v0.2.0 `C:\Users\Shayan\.julia\dev\LinearSegmentation` [5fb14364] OhMyREPL v0.5.22 [038f9fe3] OnlinePortfolioSelection v1.8.0 [91a5bcdd] Plots v1.39.0 [aea7be01] PrecompileTools v1.2.0 [438e738f] PyCall v1.96.1 [295af30f] Revise v3.5.6 [2913bbd2] StatsBase v0.34.2 [e4b3b0a2] YFinance v0.1.4 </code></pre> <p>All good, then I tried to import the package:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; from julia import OnlinePortfolioSelection as OPS Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; File 
&quot;C:\Users\Shayan\miniconda3\envs\im\lib\site-packages\julia\core.py&quot;, line 260, in load_module raise ImportError(&quot;{} not found&quot;.format(juliapath)) ImportError: OnlinePortfolioSelection not found </code></pre> <p>Where's the problem?</p> <p><strong>Update1:</strong><br /> I can import <code>DataFrames</code> package, so the problem is related to the <code>OnlinePortfolioSelection</code> package. But, I don't know what's the problem.</p> <p><strong>Update2:</strong><br /> I think there is a problem with finding the path to Julia as we have <code>raise ImportError(&quot;{} not found&quot;.format(juliapath))</code> within the error message. So, I decided to remove Julia from the path and run the codes to get the error again. Then I closed the Python session and added Julia's path to the environment variables, and rerun Python. This time, the package was loaded successfully. I don't know where should I report this problem.</p>
<python><julia>
2023-10-10 08:40:04
0
6,682
Shayan
77,263,913
219,153
Is there a multicore option to compress a NumPy array?
<p>I'm using <code>np.savez_compressed()</code> to compress and save a single large 4D NumPy array, but it uses only one CPU core. Is there an alternative, which can use many cores? Preferably something simple without need to code array split and compression of its pieces in multiple processes.</p>
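`np.savez_compressed` has no multicore switch, but because `zlib.compress` releases the GIL, plain threads already parallelize compression if you split the array's raw buffer into chunks. A sketch — note the container format here is ad hoc (pickle of compressed chunks), *not* NumPy's `.npz`:

```python
import pickle
import zlib
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def save_parallel(path, arr, n_jobs=4, level=6):
    """Compress `arr`'s buffer in `n_jobs` threads (zlib releases the GIL)."""
    raw = arr.tobytes()
    step = max(1, -(-len(raw) // n_jobs))                # ceil division
    chunks = [raw[i:i + step] for i in range(0, len(raw), step)]
    with ThreadPoolExecutor(max_workers=n_jobs) as ex:
        packed = list(ex.map(lambda c: zlib.compress(c, level), chunks))
    with open(path, "wb") as f:
        pickle.dump({"dtype": str(arr.dtype), "shape": arr.shape,
                     "chunks": packed}, f)

def load_parallel(path):
    with open(path, "rb") as f:
        meta = pickle.load(f)
    raw = b"".join(zlib.decompress(c) for c in meta["chunks"])
    return np.frombuffer(raw, dtype=meta["dtype"]).reshape(meta["shape"])

arr = np.random.default_rng(0).random((4, 8, 8, 3))
```

If a custom format is unacceptable, chunk-aware array libraries such as Blosc (`blosc2`) or Zarr are built for multithreaded compression and may be the simpler route.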
<python><compression><numpy-ndarray><multicore>
2023-10-10 08:00:12
0
8,585
Paul Jurczak
77,263,880
15,320,579
Process python dictionary based on previous, current and next value
<p>I have a python dictionary as follows:</p> <pre><code>ip_dict = {'GLArch': {'GLArch-0.png': ['OTHER', 'Figure 28 TAC '], 'GLArch-1.png': ['DCDFP', 'This insurance '], 'GLArch-2.png': ['DCDNP', 'Item 3'], 'GLArch-3.png': ['OTHER', 'SCHEDULE OF'], 'GLArch-4.png': ['OTHER', 'SCHEDULEed OF3'], 'GLArch-5.png': ['DCCFP', 'COMMERCIAL GENERAL'], 'GLArch-6.png': ['OTHER', 'a The'], 'GLArch-7.png': ['OTHER', 'c Such attorney'], 'GLArch-8.png': ['DCCNP', '2 To any'], 'GLArch-9.png': ['OTHER', 'e as part'], 'GLArch-10.png': ['OTHER', '1 A watercraft'], 'GLArch-11.png': ['OTHER', '5 That particular'], 'GLArch-12.png': ['DCCNP', 'Damages claimed'], 'GLArch-13.png': ['OTHER', 'resulting from the'], 'GLArch-14.png': ['DCCNP', 'processing or packaging'], 'GLArch-15.png': ['DCCNP', 's. Fungi'], 'GLArch-16.png': ['OTHER', '1 the actual'], 'GLArch-17.png': ['OTHER', '5 6 9 10 11']}} </code></pre> <p>I want to process it such that if there is one or more <code>OTHER</code> appearing in between <code>DCCFP</code> and <code>DCCNP</code> or if there is one or more <code>OTHER</code> appearing in between <code>DCCNP</code> and another <code>DCCNP</code> then it should be renamed to <code>DCCNP</code>. So in the element <code>GLArch-6.png</code> and <code>GLArch-7.png</code>, since they both are appearing in between <code>DCCFP</code> and <code>DCCNP</code>, the <code>OTHER</code> present in the list will be renamed to <code>DCCNP</code>. Similary <code>GLArch-9.png</code>, <code>GLArch-10.png</code> and <code>GLArch-11.png</code>, the <code>OTHER</code> present in them will be renamed to <code>DCCNP</code> because these elements lie in between <code>DCCNP</code> and another <code>DCCNP</code>. Same goes for <code>GLArch-13.png</code>. 
So the output dictionary would look like:</p> <pre><code>op_dict = {'GLArch': {'GLArch-0.png': ['OTHER', 'Figure 28 TAC '], 'GLArch-1.png': ['DCDFP', 'This insurance '], 'GLArch-2.png': ['DCDNP', 'Item 3'], 'GLArch-3.png': ['OTHER', 'SCHEDULE OF'], 'GLArch-4.png': ['OTHER', 'SCHEDULEed OF3'], 'GLArch-5.png': ['DCCFP', 'COMMERCIAL GENERAL'], 'GLArch-6.png': ['DCCNP', 'a The'], 'GLArch-7.png': ['DCCNP', 'c Such attorney'], 'GLArch-8.png': ['DCCNP', '2 To any'], 'GLArch-9.png': ['DCCNP', 'e as part'], 'GLArch-10.png': ['DCCNP', '1 A watercraft'], 'GLArch-11.png': ['DCCNP', '5 That particular'], 'GLArch-12.png': ['DCCNP', 'Damages claimed'], 'GLArch-13.png': ['DCCNP', 'resulting from the'], 'GLArch-14.png': ['DCCNP', 'processing or packaging'], 'GLArch-15.png': ['DCCNP', 's. Fungi'], 'GLArch-16.png': ['OTHER', '1 the actual'], 'GLArch-17.png': ['OTHER', '5 6 9 10 11']}} </code></pre> <p>I tried the below script but it is not working:</p> <pre><code>def process_dict(ip_dict): op_dict = {} for key, value in ip_dict.items(): op_dict[key] = {} prev_val = None for k, v in value.items(): if prev_val is not None and (&quot;DCCFP&quot; in prev_val and &quot;DCCNP&quot; in v[0]): op_dict[key][k] = [&quot;DCCFP&quot;, v[1]] else: op_dict[key][k] = v prev_val = v[0] return op_dict </code></pre>
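The posted loop only looks one step back, but the rule needs the labels on *both* sides of an entire run of `OTHER`s. Scanning each run explicitly makes the boundary condition easy to state (sketch; it assumes the dict's insertion order reflects page order, which holds for Python 3.7+ dicts built as shown):

```python
def relabel(ip_dict):
    op_dict = {}
    for doc, pages in ip_dict.items():
        items = list(pages.items())
        labels = [v[0] for _, v in items]
        new_labels = labels[:]
        i = 0
        while i < len(labels):
            if labels[i] == "OTHER":
                j = i
                while j < len(labels) and labels[j] == "OTHER":
                    j += 1                       # j = first non-OTHER after the run
                left = labels[i - 1] if i > 0 else None
                right = labels[j] if j < len(labels) else None
                # Rename the whole run only when it is bracketed by
                # DCCFP/DCCNP on the left and DCCNP on the right.
                if left in ("DCCFP", "DCCNP") and right == "DCCNP":
                    for k in range(i, j):
                        new_labels[k] = "DCCNP"
                i = j
            else:
                i += 1
        op_dict[doc] = {k: [nl, v[1]] for (k, v), nl in zip(items, new_labels)}
    return op_dict
```

Running `relabel(ip_dict)` on the question's input produces the expected `op_dict`: runs bounded on the right by end-of-document (e.g. `GLArch-16/17.png`) or preceded by `DCDNP` stay `OTHER`.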
<python><python-3.x><dictionary><data-processing>
2023-10-10 07:55:26
2
787
spectre
77,263,766
583,464
select the max element in an increasing (and restarting over) list
<p>I have this list:</p> <pre><code>mylist = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 2, 3] </code></pre> <p>The length of the list is : <code>38</code>.</p> <p>I want a new list with length: <code>32</code>.</p> <p>My new list will be:</p> <pre><code>new_list = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 1, 2, 1, 2, 3] </code></pre> <p>So, the numbers represent different counts. The first <code>19</code> elements of the <code>mylist</code> are all <code>1</code>. The <code>20</code> and <code>21</code> elements are <code>1</code> and <code>2</code>. This means I had 2 counts.</p> <p>I want to get rid of the <code>1</code> and keep only the <code>2</code> because I have 2 counts. The same goes to the next <code>1 and 2</code> elements and to the last <code> 1, 2, 3</code> elements. There, I have a max count of <code>3</code>.</p> <p>If I try something like this:</p> <pre><code>new_list = [] for idx in range(len(mylist)-1): if mylist[idx] == 1: if mylist[idx + 1] &gt; mylist[idx]: new_list.append(mylist[idx + 1]) else: new_list.append(mylist[idx]) </code></pre> <p>because I am only checking the current and next elements, I miss the last <code>3</code>.</p>
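Since each count run increases by exactly 1 and then restarts, an element is the maximum of its run precisely when its successor is not larger (or it is the last element). That turns the whole problem into one pass — and naturally keeps the final `3`:

```python
mylist = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1,
          2, 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 2, 1, 1, 2, 1, 2, 3]

# Keep an element only if it is the last one, or the next element does
# not continue the increasing run (i.e. is not larger).
new_list = [
    cur for i, cur in enumerate(mylist)
    if i == len(mylist) - 1 or mylist[i + 1] <= cur
]
print(len(new_list))  # → 32
```

The original loop missed the final `3` because `range(len(mylist) - 1)` never visits the last index; the `i == len(mylist) - 1` clause above handles exactly that edge.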
<python><list>
2023-10-10 07:39:19
2
5,751
George
77,263,314
10,639,382
Not signed up for Earth Engine or project is not registered
<p>Hi I am attempting to use the Python API to access google earth engine from VsCode.</p> <p>When I enter,</p> <pre><code>import ee ee.Authenticate() ee.Initialize() </code></pre> <p>I get the following error for <code>ee.Initialize()</code> step - &quot;ee.ee_exception.EEException: Not signed up for Earth Engine or project is not registered. Visit <a href="https://developers.google.com/earth-engine/guides/access%22" rel="noreferrer">https://developers.google.com/earth-engine/guides/access&quot;</a></p> <p>Prior to that the <code>ee.Authenticate()</code> step works fine where a new web page opens up stating &quot;You are now authenticated with the gcloud CLI!&quot;</p> <p>I also have existing google earth engine cloud project called &quot;ee-imantha&quot;.</p> <p>Any idea what I might be doing wrong?</p>
<python><authentication><google-earth-engine>
2023-10-10 06:19:16
2
3,878
imantha
77,263,279
5,055,577
BeautifulSoup Python issue when looping through multiple pages
<p>I'm trying to learn some basic webscraping using python beautiful soup. Intent for below code is to loop through 32 NFL team pages and return the Wins / Loses for each team. However whenever the code gets to the second iteration of the dataframe, I get the error returned below.</p> <pre><code>`import requests from urllib.request import Request, urlopen from bs4 import BeautifulSoup as soup import pandas as pd year = '2022' df = pd.read_excel('NFL_Team_Names.xlsx') for i, j in df.iterrows(): url_insert = j[0].replace(' ', '-') url = 'https://champsorchumps.us/team/nfl/' + url_insert + '/' + year print(url) req = Request(url , headers={'User-Agent': 'Mozilla/5.0'}) webpage = urlopen(req).read() soup = soup(webpage, &quot;html.parser&quot;) games = 18 for x in range(1, games): insert = 'game_' + str(x) wl_count = 0 row = soup.find('tr', id=''+insert) for elements in row.find_all('td'): if(wl_count == 3): win_lose = elements.get_text().strip() print(j[0] + &quot; &quot; + insert + &quot; &quot; + win_lose[0]) wl_count = wl_count + 1` </code></pre> <pre><code> raise AttributeError( AttributeError: ResultSet object has no attribute 'find'. You're probably treating a list of elements like a single element. Did you call find_all() when you meant to call find()? </code></pre> <p>If I try each of the team names manually, code seems to work fine for the first loop. 
Am I missing something with using reqs/soup in loops that I need to be doing?</p> <p>Essentially I want the code to loop through each url constructed from the teams file I'm using (simply an excel with each NFL team name) and print out the following for each team:</p> <p><a href="https://champsorchumps.us/team/nfl/Arizona-Cardinals/2022" rel="nofollow noreferrer">https://champsorchumps.us/team/nfl/Arizona-Cardinals/2022</a></p> <p>Arizona Cardinals game_1 L</p> <p>Arizona Cardinals game_2 W</p> <p>Arizona Cardinals game_3 L</p> <p>Arizona Cardinals game_4 W</p> <p>Arizona Cardinals game_5 L</p> <p>Arizona Cardinals game_6 L</p> <p>Arizona Cardinals game_7 W</p> <p>Arizona Cardinals game_8 L</p> <p>Arizona Cardinals game_9 L</p> <p>Arizona Cardinals game_10 W</p> <p>Arizona Cardinals game_11 L</p> <p>Arizona Cardinals game_12 L</p> <p>Arizona Cardinals game_13 L</p> <p>Arizona Cardinals game_14 L</p> <p>Arizona Cardinals game_15 L</p> <p>Arizona Cardinals game_16 L</p> <p>Arizona Cardinals game_17 L</p> <p><a href="https://champsorchumps.us/team/nfl/Atlanta-Falcons/2022" rel="nofollow noreferrer">https://champsorchumps.us/team/nfl/Atlanta-Falcons/2022</a> etc. etc.</p>
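The crash on the second iteration comes from `soup = soup(webpage, "html.parser")`: the first pass rebinds the imported name `soup` to a `BeautifulSoup` *object*, and calling a soup/tag is BeautifulSoup's shorthand for `find_all` — which returns exactly the `ResultSet` the error complains about. Keeping distinct names for the class and the parsed page fixes it; sketched offline below with a stand-in page (the table structure is assumed from the scraper, not verified against the site):

```python
from bs4 import BeautifulSoup  # never rebind this name to an instance

def wins_losses(html, games=18):
    page = BeautifulSoup(html, "html.parser")   # fresh variable per page
    results = []
    for x in range(1, games):
        row = page.find("tr", id=f"game_{x}")
        if row is None:                          # missing game / bye week
            continue
        cells = row.find_all("td")
        if len(cells) > 3:                       # 4th cell assumed to hold "W ..."/"L ..."
            results.append(cells[3].get_text(strip=True)[0])
    return results

# Stand-in for one downloaded team page:
html = (
    "<table>"
    "<tr id='game_1'><td>1</td><td>date</td><td>opp</td><td>W 20-17</td></tr>"
    "<tr id='game_2'><td>2</td><td>date</td><td>opp</td><td>L 3-24</td></tr>"
    "</table>"
)
print(wins_losses(html))  # → ['W', 'L']
```

In the original loop the minimal change is to rename the parsed page (e.g. `page = soup(webpage, ...)` → `page = BeautifulSoup(webpage, "html.parser")`) so the class name survives every iteration.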
<python><web-scraping><beautifulsoup><python-requests>
2023-10-10 06:11:18
1
2,094
Jesse
77,263,248
5,273,308
C++ Cython map operations return {b'a': 2} rather than {'a': 2}
<p>Why does the following code print {b'a': 2} rather than {'a': 2}. And how can I fix it:</p> <p>File1: parameters.pyx</p> <pre><code># parameters.pyx from libcpp.map cimport map from libcpp.string cimport string cdef class Parameters: cdef map[string,int] data cdef int myVal def __init__(self): self.data['a'] = 1 self.myVal = 0 cpdef void change_data(self, map[string,int] newdata): self.data = newdata cpdef map[string,int] print_data(self): print(self.data) cpdef void change_myVal(self): self.myVal = 3 </code></pre> <p>File2</p> <pre><code># model.pyx #from parameters import Parameters from parameters import Parameters from libcpp.map cimport map from libcpp.string cimport string cpdef single_run(): parameters = Parameters() cdef map[string,int] newMap newMap['a'] = 2 parameters.change_data(newMap) parameters.print_data() parameters.change_myVal() </code></pre> <p>Setup.py file</p> <pre><code>#!/usr/bin/env python3 from setuptools import setup from Cython.Build import cythonize setup( ext_modules=cythonize([&quot;*.py&quot;,&quot;*.pyx&quot;], language=&quot;c++&quot;) ) </code></pre>
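Cython's automatic conversion maps C++ `std::string` to Python `bytes`, not `str`, so the map prints with `b'a'` keys. The struct itself is fine; decoding the keys at the Python boundary restores text keys — plain Python shows the transformation:

```python
# What a map[string, int] looks like after Cython's std::string -> bytes
# auto-conversion:
raw = {b"a": 2}

# Decode the bytes keys back to text when crossing into Python:
decoded = {key.decode("utf-8"): value for key, value in raw.items()}
print(decoded)  # → {'a': 2}
```

Inside `print_data` the equivalent one-liner would be `print({k.decode(): v for k, v in self.data.items()})`; alternatively, declaring the method to return a Python `dict` and decoding there keeps the C++ map untouched.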
<python><c++><cython><stdmap>
2023-10-10 06:03:06
1
1,633
user58925
77,263,124
2,192,824
Where is the Python interpreter installed when SSH-ing to a remote machine
<p>I'm a little confused with this scenario about which python interpreter is used:</p> <p>I use vscode to ssh to the python codes in a remote machine, and then I installed the python interpreter as VScode suggests. I was wondering which machine is the python interpreter installed. The remote machine or my local machine? I think it's the remote one, but not 100% sure.</p>
<python><visual-studio-code>
2023-10-10 05:27:52
2
417
Ames ISU
77,263,051
9,989,308
Azure Log Analytics: How to send custom python pandas DataFrames into LAW
<p>I have a Python script that I want to run daily that produces a Pandas dataframe.</p> <p>I want this dataframe to be added daily to my log analytics workspace.</p> <p>I have a Windows server that I can use to run my Python script.</p> <p>What do I need to do to make this work? Is there a way to go from DataFrames to push to a Syslog server?</p>
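One supported route that needs no Syslog server is Azure Monitor's Logs Ingestion API through a Data Collection Rule: the `azure-monitor-ingestion` package's `LogsIngestionClient.upload(rule_id=…, stream_name=…, logs=…)` accepts a JSON-serializable list of row objects. Turning the DataFrame into that payload is the part sketched here; the client call is left commented out because the endpoint, rule id, and stream name below are placeholders you would fill in from your DCR:

```python
import json
import pandas as pd

df = pd.DataFrame(
    {"TimeGenerated": ["2023-10-10T00:00:00Z"], "Metric": ["cpu"], "Value": [1.5]}
)

# The Logs Ingestion API expects a JSON array of objects, one per row:
records = df.to_dict(orient="records")
payload = json.dumps(records)

# from azure.identity import DefaultAzureCredential
# from azure.monitor.ingestion import LogsIngestionClient
# client = LogsIngestionClient("https://<your-dce>.ingest.monitor.azure.com",
#                              DefaultAzureCredential())
# client.upload(rule_id="<dcr-immutable-id>",
#               stream_name="Custom-MyTable_CL", logs=records)
```

A daily run on the Windows server (Task Scheduler) that builds the DataFrame and calls `upload` would then land the rows in the workspace's custom table.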
<python><azure><syslog><azure-log-analytics>
2023-10-10 05:01:23
1
868
HarriS
77,262,974
1,395,266
Pass variable values as parameters to Python from Jupyter notebook
<p><a href="https://stackoverflow.com/questions/51551056/running-a-python-script-in-jupyter-notebook-with-arguments-passing">Running a Python script in Jupyter Notebook, with arguments passing</a></p> <pre><code>a = 3 b = 5 !ipython two_digits.py a b </code></pre> <p>Why this is not working ? How to pass parameters from variables ?</p>
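It fails because the bang syntax passes the literal strings `a` and `b`; IPython only interpolates Python variables wrapped in braces (or prefixed with `$`), e.g. `!ipython two_digits.py {a} {b}`. Outside a notebook the same thing is done explicitly with `subprocess`:

```python
import subprocess
import sys

a, b = 3, 5

# Notebook form:   !ipython two_digits.py {a} {b}
# (bare `a b` passes the literal strings "a" and "b", which is why the
# original invocation fails). Plain-Python equivalent, runnable anywhere —
# the inline "-c" script stands in for two_digits.py:
result = subprocess.run(
    [sys.executable, "-c",
     "import sys; print(int(sys.argv[1]) + int(sys.argv[2]))",
     str(a), str(b)],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # → 8
```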
<python><jupyter><ipython>
2023-10-10 04:37:34
1
6,220
sudhansu63
77,262,913
19,238,204
Integral with Piecewise Resulting in Different 3D Wireframe Plot using SymPy and Python
<p>I am trying to use <code>piecewise</code> to integrate function that change at certain domain.</p> <p>The result of the 3D wireframe plot (with spb) is different than when using manual way of computing integral one by one. The correct plot is the one that is using manual way.</p> <p>If you are wondering, why there is for loop from <code>1</code> to <code>10</code>, it is to replace sum from 1 to infinity, since it can represent sum of 1 to infinity (the Fourier Series in Heat Equation). It converges anyway in the end.</p> <p>This is the plot (the correct plot is on the right): <a href="https://i.sstatic.net/liSR4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/liSR4.png" alt="1" /></a></p> <p>So, where is my fault in this piecewise MWE:</p> <pre><code># https://stackoverflow.com/questions/77259323/can-sympy-compute-definite-integral-by-cases/77259532#77259532 # https://stackoverflow.com/questions/77258199/how-to-replace-the-variable-from-sympy-computation-so-it-can-be-plotted-with-mat/77258592#77258592 import numpy as np import sympy as sm from sympy import * from spb import * x = sm.symbols(&quot;x&quot;) t = sm.symbols(&quot;t&quot;) n = sm.symbols(&quot;n&quot;, integer=True) L = 20 D = 0.475 f = (S(2)/L)*sin(n*np.pi*x/20)*Piecewise((x, (0 &lt;= x) &amp; (x &lt;= 10)), (20-x, (10 &lt; x) &amp; (x &lt;= 20))) print('The function u(x,0) : ') print('') sm.pretty_print(f) print('') print('') print(piecewise_fold(f)) fint = integrate(f, (x, 0, 20)) g = fint*exp(-(n**2)*(np.pi**2)*D*t/400).nsimplify() print('') print('') sm.pretty_print(fint) print('') print('') s3 = 0 for c in range(10): s3 += g.subs({n:c}) print('') print('The function u(x,t) : ') print('') sm.pretty_print(s3) plot3d( s3, (x, 0, 20), (t, 0, 10), {&quot;alpha&quot;: 0}, # hide the surface wireframe=True, wf_n1=20, wf_n2=10, wf_rendering_kw={&quot;color&quot;: &quot;tab:blue&quot;}, # optional step to customize the wireframe lines backend=MB, zlabel=&quot;$u(x,t)$&quot;, 
title=&quot;One Dimensional Heat Equation&quot; ) </code></pre> <p>The code with the correct plot, but still integrating with manual way, not using <code>piecewise</code>:</p> <pre><code># https://stackoverflow.com/questions/77258199/how-to-replace-the-variable-from-sympy-computation-so-it-can-be-plotted-with-mat/77258592#77258592 import numpy as np import sympy as sm from sympy import * from spb import * x = sm.symbols(&quot;x&quot;) t = sm.symbols(&quot;t&quot;) n = sm.symbols(&quot;n&quot;, integer=True) L = 20 f1 = (2/L)*x*sin(n*np.pi*x/20) f2 = (2/L)*(20-x)*sin(n*np.pi*x/20) fint1 = sm.integrate(f1,(x,0,10)) fint2 = sm.integrate(f2,(x,10,20)) D = 0.475 g = (fint1+fint2)*sin(n*np.pi*x/20)*exp(-(n**2)*(np.pi**2)*D*t/400).nsimplify() s = 0 for c in range(10): s += g.subs({n:c}) print(s) print('') print('The function u(x,t) : ') print('') sm.pretty_print(s) print('') print('') plot3d( s, (x, 0, 20), (t, 0, 10), {&quot;alpha&quot;: 0}, # hide the surface wireframe=True, wf_n1=20, wf_n2=10, wf_rendering_kw={&quot;color&quot;: &quot;tab:blue&quot;}, # optional step to customize the wireframe lines backend=MB, zlabel=&quot;$u(x,t)$&quot;, title=&quot;One Dimensional Heat Equation&quot; ) </code></pre>
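The two scripts do not build the same series: the manual version multiplies each coefficient by `sin(n*pi*x/20)*exp(...)`, while the piecewise version sets `g = fint*exp(...)` and so drops the spatial eigenfunction `sin(n*pi*x/20)` entirely — each term then has no x-dependence, which is why the wireframe differs. Restoring that factor (and using SymPy's exact `pi` instead of the float `np.pi`, so the piecewise integral simplifies cleanly) reconciles them; a sketch with `D = 0.475` written as a rational:

```python
import sympy as sm
from sympy import Piecewise, Rational, exp, integrate, pi, sin

x, t = sm.symbols("x t")
n = sm.symbols("n", integer=True, positive=True)
L = 20
D = Rational(19, 40)  # 0.475 exactly

f = (sm.S(2) / L) * sin(n * pi * x / L) * Piecewise(
    (x, (x >= 0) & (x <= 10)),
    (L - x, (x > 10) & (x <= L)),
)
b_n = integrate(f, (x, 0, L))          # Fourier sine coefficient

# The factor the piecewise version lost: each term needs its own spatial
# eigenfunction sin(n*pi*x/L), not just the time-decay exponential.
g = b_n * sin(n * pi * x / L) * exp(-n**2 * pi**2 * D * t / L**2)

u = sum(g.subs(n, k) for k in range(1, 10))   # partial sum of the series
```

At `t = 0` the partial sum should reproduce the triangular initial profile — in particular `u(10, 0)` approaches the peak value 10 — which is the quick sanity check that the series now matches the manual construction.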
<python><numpy><sympy>
2023-10-10 04:21:21
1
435
Freya the Goddess
77,262,819
5,273,308
Calling Cython cdef function from another file
<p>I would like to call a cdef function defined in one file, from another file. I do not want to change cdef function to a cpdef, but I am willing to change the second file.</p> <p>File1 parameters.pyx</p> <pre><code>cdef class Parameters: cdef int myVal def __init__(self): self.myVal = 0 cdef change_myVal(self): self.myVal = 3 </code></pre> <p>File2 model.pyx</p> <pre><code>from parameters import Parameters cpdef single_run(): parameters = Parameters() parameters.change_myVal() </code></pre> <p>The error I get is: &quot;AttributeError: 'parameters.Parameters' object has no attribute 'change_myVal'&quot;</p>
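`cdef` methods exist only at the C level, and a plain `import` gives `model.pyx` the Python view of the class, where `change_myVal` simply does not exist — hence the `AttributeError`. Sharing the C-level interface requires a `.pxd` declaration file plus `cimport`. A sketch (file names matter: `parameters.pxd` must sit next to `parameters.pyx`, and with a `.pxd` present the attribute declarations move out of the `.pyx` class body):

```cython
# parameters.pxd  (new file, next to parameters.pyx)
cdef class Parameters:
    cdef int myVal
    cdef change_myVal(self)
```

In `model.pyx`, replace `from parameters import Parameters` with `from parameters cimport Parameters`; Cython then resolves `parameters.change_myVal()` as a C-level call at compile time.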
<python><cython>
2023-10-10 03:44:22
1
1,633
user58925
77,262,771
22,643,255
Sharepoint Rest API office365 library 403 Client Error
<p>I am trying to access a Sharepoint folder and retrieve the files within.</p> <p>With the following code:</p> <pre><code># Import libraries from office365.sharepoint.client_context import ClientContext from office365.runtime.auth.client_credential import ClientCredential # Config for Sharepoint (to be stored in env later) SP_CLIENT_ID = xxx SP_CLIENT_SECRET = xxx SP_URL = 'https://&lt;organization_url&gt;.sharepoint.com/sites/&lt;site-name&gt;' relative_folder_url = '/Shared%20Documents/&lt;subfolder&gt;' # App-based authentication with access credentials context = ClientContext(SP_URL).with_credentials(ClientCredential(SP_CLIENT_ID, SP_CLIENT_SECRET)) folder = context.web.get_folder_by_server_relative_url(relative_folder_url) context.load(folder) context.execute_query() # Processes each Sharepoint file in folder for file in folder.files: print(f'Processing file: {file.properties[&quot;Name&quot;]}') </code></pre> <p>However, it throws an error</p> <pre><code>office365.runtime.client_request_exception.ClientRequestException: ('-2147024891, System.UnauthorizedAccessException', 'Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))', &quot;403 Client Error: Forbidden for url: https://&lt;tenant-name&gt;.sharepoint.com/sites/&lt;site-id&gt;/_api/Web/getFileByServerRelativePath(DecodedUrl='%2Fsites%2F&lt;site-name&gt;%2FDocuments%2FGeneral%2F&lt;subfolder&gt;')&quot;)` stemming from `context.execute_query() with the specific names replaced by &lt;xxx&gt;. </code></pre> <p>I am able to retrieve files using <code>Graph API</code> on <code>Postman</code>, using the same <code>SP_CLIENT_ID</code> and <code>SP_CLIENT_SECRET</code>. 
It is done by generating a token using the <code>client ID</code> and <code>secret</code>, then passing token in Graph GET API calls.</p> <p>It appears the app-based authentication is fine as the error is <code>403</code>.</p> <p>How can it be fixed such that files can be programmatically seen (and ultimately downloaded) through Sharepoint Rest API on Python?</p> <p>Edit: The error is likely coming from the lack of permission granted for Sharepoint Rest API on the Azure app I am using to authenticate with Sharepoint. Now I am thinking if I should continue using <code>office365</code> or simply use some pure API approach like <code>requests</code> library in Python to programmatically download Sharepoint files.</p>
<python><sharepoint><azure-active-directory><azure-ad-graph-api><sharepoint-rest-api>
2023-10-10 03:28:21
1
315
ooalgomaniac
77,262,625
8,523,868
Memory error while trying to get list of files from the directory and storing it in the spreadsheet
<p>I have downloaded the code from Google bard to get the list of files from the directory and sub directory to store it in a spreadsheet.</p> <p>I am getting a memory error.</p> <p>My question: write a program in python to list of files in the directory and sub directory and store the file names in the spreadsheet with index, header, path of the file and file name in separate columns.</p> <p>Code given by Google Bard:</p> <pre><code>import os import openpyxl def list_files(directory): &quot;&quot;&quot;Lists all files in the given directory and subdirectories. Args: directory: The directory to list. Returns: A list of all files in the given directory and subdirectories. &quot;&quot;&quot; files = [] for root, _, files in os.walk(directory): for file in files: files.append(os.path.join(root, file)) return files def save_to_spreadsheet(files, spreadsheet_path): &quot;&quot;&quot;Saves the given list of files to the given spreadsheet. Args: files: A list of files. spreadsheet_path: The path to the spreadsheet. &quot;&quot;&quot; wb = openpyxl.Workbook() ws = wb.active # Write the header row. ws.cell(row=1, column=1).value = &quot;Index&quot; ws.cell(row=1, column=2).value = &quot;Header&quot; ws.cell(row=1, column=3).value = &quot;Path&quot; ws.cell(row=1, column=4).value = &quot;File Name&quot; # Write the file data. for i, file in enumerate(files): ws.cell(row=i + 2, column=1).value = i + 1 ws.cell(row=i + 2, column=2).value = &quot;File&quot; ws.cell(row=i + 2, column=3).value = file ws.cell(row=i + 2, column=4).value = os.path.basename(file) # Save the spreadsheet. wb.save(spreadsheet_path) if __name__ == &quot;__main__&quot;: # Get the directory to list. directory = &quot;/path/to/directory&quot; # List the files in the directory and subdirectories. files = list_files(directory) # Save the file names to a spreadsheet. 
spreadsheet_path = &quot;/path/to/spreadsheet.xlsx&quot; save_to_spreadsheet(files, spreadsheet_path) </code></pre> <p>Error:-</p> <pre><code>E:\pycharm\projecttelegram\venv\Scripts\python.exe E:\pycharm\projecttelegram\venv\listoffiles_storing_excel.py Traceback (most recent call last): File &quot;E:\pycharm\projecttelegram\venv\listoffiles_storing_excel.py&quot;, line 52, in &lt;module&gt; files = list_files(directory) File &quot;E:\pycharm\projecttelegram\venv\listoffiles_storing_excel.py&quot;, line 17, in list_files files.append(os.path.join(root, file)) MemoryError </code></pre> <p>Please give me a solution to overcome with this. Thank you so much.</p>
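The MemoryError is a name collision inside `list_files`: `for root, _, files in os.walk(...)` rebinds `files` to each directory's filename list, and the inner loop then appends to the very list it is iterating — an unbounded loop that grows the list until memory runs out. Distinct names fix it:

```python
import os

def list_files(directory):
    """Return the full path of every file under `directory`, recursively."""
    collected = []
    # `filenames` is os.walk's per-directory list; `collected` is the
    # accumulator. In the original both were named `files`, so the inner
    # loop appended to the list it was iterating over — it never ended.
    for root, _dirs, filenames in os.walk(directory):
        for name in filenames:
            collected.append(os.path.join(root, name))
    return collected
```

With that fix, the `save_to_spreadsheet` half of the generated script works as written.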
<python>
2023-10-10 02:27:44
4
911
vivek rajagopalan
77,262,518
7,009,988
How to install Python into Windows Docker container
<p>I want to create a docker container using base image <code>mcr.microsoft.com/dotnet/framework/runtime:4.8.1</code> but I also need Python on here. Unfortunately the official Python images only use <code>windowsservercore</code>. What do I put in my Dockerfile to make this happen? All the documentation I see are either for linux/unix or require manually downloading an exe beforehand.</p>
<python><docker>
2023-10-10 01:41:27
2
1,643
wheeeee
77,262,501
10,565,820
How to alter cipher suite used with Python requests?
<p>This is part of a larger program I'm working on, but I've got it pinpointed down to the exact problem. When I use Python <code>requests</code> module in some environments it works but not others and it seems to be related to the cipher suite being used by SSL. When I run the following command in Python, it gives an error or success depending on the OS I'm using. here is the command:</p> <p><code>requests.get(&quot;https://api-mte.itespp.org/markets/VirtualService/v2/?WSDL&quot;)</code></p> <p>On Windows 10 and Ubuntu 18.04, it works. But on Windows 11 and Ubuntu 22.04, it does not work and gives the following error:</p> <p><code>raise SSLError(e, request=request) requests.exceptions.SSLError: HTTPSConnectionPool(host='api-mte.itespp.org', port=443): Max retries exceeded with url: /markets/VirtualService/v2/?WSDL (Caused by SSLError(SSLError(1, '[SSL: DH_KEY_TOO_SMALL] dh key too small (_ssl.c:997)')))</code></p> <p>The part <code>[SSL: DH_KEY_TOO_SMALL] dh key too small</code> seems to be the relevant part indicating the <code>requests</code> module thinks the server is using an outdated cipher? When I view the site in Firefox it loads fine.</p> <p>Am I seeing this correctly or am I off base? This seems to be a relevant question (<a href="https://stackoverflow.com/questions/38015537/python-requests-exceptions-sslerror-dh-key-too-small">Python - requests.exceptions.SSLError - dh key too small</a>) but I could not get it to work when I changed <code> requests.packages.urllib3.util.ssl_.DEFAULT_CIPHERS</code>. I know Ubuntu 22 uses OpenSSL v3 while Ubuntu 18 uses OpenSSL v1.1.1 and that seems relevant. Thanks!</p> <p>EDIT: I believe specifically I'm trying to use the cipher ECDHE-RSA-AES128-GCM-SHA256 or ECDHE-RSA-AES256-GCM-SHA384</p>
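You are seeing it correctly: OpenSSL 3 (Ubuntu 22.04, newer Windows Python builds) raised the default security level to 2, which rejects servers whose Diffie-Hellman parameters are shorter than 2048 bits; OpenSSL 1.1.1 systems still accept them. The usual recipe is a custom `requests` adapter that lowers the level for one session — `SECLEVEL=1` re-admits small DH keys, so scope it to this host rather than patching `urllib3` defaults process-wide:

```python
import ssl

import requests
from requests.adapters import HTTPAdapter

class LowSecLevelAdapter(HTTPAdapter):
    """Adapter that allows legacy (small-DH-key) servers for one session."""

    def init_poolmanager(self, *args, **kwargs):
        ctx = ssl.create_default_context()
        # OpenSSL 3 defaults to security level 2, which refuses DH keys
        # shorter than 2048 bits; SECLEVEL=1 restores the older behavior.
        ctx.set_ciphers("DEFAULT:@SECLEVEL=1")
        kwargs["ssl_context"] = ctx
        return super().init_poolmanager(*args, **kwargs)

session = requests.Session()
session.mount("https://", LowSecLevelAdapter())
# resp = session.get("https://api-mte.itespp.org/markets/VirtualService/v2/?WSDL")
```

Firefox works because browsers negotiate ECDHE suites and apply different policy; the real fix is for the server to use ≥2048-bit DH parameters, so treat this adapter as a workaround.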
<python><ssl><encryption><python-requests><openssl>
2023-10-10 01:33:55
1
644
geckels1
77,262,318
498,201
Python NLTK text dispersion plot has y (vertical) axis in backwards / reversed order
<p>Since last month NLTK dispersion_plot seems to have y (vertical) axis in reversed order on my machine. This is likely something about my versions of software (I am on a school virtual machine).</p> <p>versions: nltk 3.8.1 matplotlib 3.7.2 Python 3.9.13</p> <p>code:</p> <pre><code>from nltk.draw.dispersion import dispersion_plot words=['aa','aa','aa','bbb','cccc','aa','bbb','aa','aa','aa','cccc','cccc','cccc','cccc'] targets=['aa','bbb', 'f', 'cccc'] dispersion_plot(words, targets) </code></pre> <p><a href="https://i.sstatic.net/ciYPl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ciYPl.png" alt="enter image description here" /></a></p> <p>expected: aaa is present at the beginning, and cccc at the end. actual: it's backwards! also notice f should be completely absent - instead bbb is absent.</p> <p>conclusion: Y axis is backwards.</p>
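If upgrading NLTK on the school VM is not an option, a dispersion plot is small enough to hand-roll, which also sidesteps the version-specific axis-order bug — assuming the intended behavior is that targets keep the caller's order and absent targets (like `f`) get no marks:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line when plotting interactively
import matplotlib.pyplot as plt

def dispersion_plot(words, targets):
    # Offsets per target, in the order the caller listed the targets —
    # that order maps directly onto the y positions, so nothing reverses.
    offsets = {t: [i for i, w in enumerate(words) if w == t] for t in targets}
    fig, ax = plt.subplots()
    for y, t in enumerate(targets):
        ax.plot(offsets[t], [y] * len(offsets[t]), "|", markersize=14)
    ax.set_yticks(range(len(targets)))
    ax.set_yticklabels(targets)
    ax.set_xlabel("Word offset")
    return offsets, fig

words = ['aa', 'aa', 'aa', 'bbb', 'cccc', 'aa', 'bbb', 'aa', 'aa', 'aa',
         'cccc', 'cccc', 'cccc', 'cccc']
targets = ['aa', 'bbb', 'f', 'cccc']
offsets, fig = dispersion_plot(words, targets)
```

The `offsets` dict doubles as a check: `aa` starts at offset 0, `cccc` runs to the end, and `f` is empty — exactly what the NLTK 3.8.1 plot got backwards.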
<python><nltk><text-mining>
2023-10-10 00:05:07
2
2,650
drpawelo
77,262,213
4,807,043
rsync doesn't work in Python script: rsync: change_dir failed: No such file or directory (2)
<p>I need to transfer a directory (and all its content in it) to a remote host using <code>rsync</code> and I need to use it inside a python script but I got this following error:</p> <pre><code>building file list ... rsync: change_dir &quot;/home/xxxx//xxxx/xxxx/ root@xx.xx.xx.xx:/opt/customized&quot; failed: No such file or directory (2) done sent 20 bytes received 11 bytes 62.00 bytes/sec total size is 0 speedup is 0.00 rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1183) [sender=3.1.1] </code></pre> <p>The Python cmd I use:</p> <pre><code>HOST_PASSWORD=&quot;xxxx&quot; filepath=&quot;xxxx/xxxx/&quot; ip=&quot;xx.xx.xx.xx&quot; DEPLOY_DIR=&quot;/opt/customized/&quot; subprocess.call(['rsync', '-Kavz', '--exclude=*.py', '--chown=root:root','--delete-after','--rsh=&quot;/usr/bin/sshpass -p {} ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l root&quot;'.format(HOST_PASSWORD),'{} root@{}:{}'.format(filepath, ip, DEPLOY_DIR)]) </code></pre> <p>However, I can use this <code>rsync</code> cmd in the terminal <strong>no problem</strong>:</p> <pre><code>rsync -Kavz --exclude=*.py --chown=root:root --delete-after --rsh=&quot;/usr/bin/sshpass -p xxxx ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -l root&quot; xxxx/xxxx/ root@xx.xx.xx.xx:/opt/customized/ </code></pre> <p>What goes wrong in my Python script? Thank you!</p>
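Two things differ from the terminal invocation: with an argument *list*, `subprocess` bypasses the shell, so (1) the literal `"` characters inside the `--rsh` value become part of the argument instead of being stripped, and (2) `'{} root@{}:{}'.format(...)` fuses source and destination into a single argument — which is exactly the combined path visible in rsync's `change_dir` error message. Each argument needs its own list element, with no shell-style quoting:

```python
import subprocess

HOST_PASSWORD = "xxxx"
filepath = "xxxx/xxxx/"
ip = "xx.xx.xx.xx"
DEPLOY_DIR = "/opt/customized/"

cmd = [
    "rsync", "-Kavz", "--exclude=*.py", "--chown=root:root", "--delete-after",
    # No literal quote characters: subprocess passes each list element
    # verbatim, so the quoting needed in the terminal must be dropped here.
    "--rsh=/usr/bin/sshpass -p {} ssh -o StrictHostKeyChecking=no "
    "-o UserKnownHostsFile=/dev/null -l root".format(HOST_PASSWORD),
    filepath,                                # source: its own list element
    "root@{}:{}".format(ip, DEPLOY_DIR),     # destination: its own element
]
# subprocess.call(cmd)   # uncomment to actually run the transfer
```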
<python><linux><rsync>
2023-10-09 23:22:01
1
3,250
Henry
77,262,037
9,795,817
Read range of parquet files in PySpark
<p>I have a ton of daily files stored in an HDFS where the partitions are stored in YYYY-MM-DD format.</p> <p>For example:</p> <pre class="lang-bash prettyprint-override"><code>$ hdfs dfs -ls /my/path/here &lt;some stuff here&gt; /my/path/here/cutoff_date=2023-10-02 &lt;some stuff here&gt; /my/path/here/cutoff_date=2023-10-03 &lt;some stuff here&gt; /my/path/here/cutoff_date=2023-10-04 &lt;some stuff here&gt; /my/path/here/cutoff_date=2023-10-05 &lt;some stuff here&gt; /my/path/here/cutoff_date=2023-10-06 </code></pre> <p>How can I read a range of dates given this structure? In particular, I need to read all the partitions available between <code>2023-06-07</code> and <code>2023-10-06</code>.</p> <p>According to <a href="https://stackoverflow.com/questions/37732766/read-range-of-files-in-pyspark">this post</a>, I may be able to use <code>sqlContext</code> to pass a range using <code>[]</code>. Something along the lines of:</p> <pre class="lang-py prettyprint-override"><code>sqlContext.read.load('/my/path/here/cutoff_date=[2023-10-02-2023-10-06]') </code></pre> <p>which obviously doesn't work.</p>
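Two hedged approaches are common here: read the base path and let Spark's partition pruning apply a `BETWEEN` filter on `cutoff_date`, or enumerate the partition paths explicitly. The path enumeration is plain Python; the Spark calls are shown in comments since they need a live session:

```python
from datetime import date, timedelta

def partition_paths(base, start, end):
    """Enumerate /base/cutoff_date=YYYY-MM-DD paths for every day in [start, end]."""
    days = (end - start).days
    return [f"{base}/cutoff_date={start + timedelta(d):%Y-%m-%d}"
            for d in range(days + 1)]

paths = partition_paths("/my/path/here", date(2023, 6, 7), date(2023, 10, 6))
print(len(paths))  # 122

# With Spark, either pass the explicit list (basePath keeps cutoff_date as a column):
#   df = spark.read.option("basePath", "/my/path/here").parquet(*paths)
# or, usually simpler, read the base path and rely on partition pruning:
#   df = (spark.read.parquet("/my/path/here")
#              .where("cutoff_date BETWEEN '2023-06-07' AND '2023-10-06'"))
```

The pruning variant only touches the matching directories, so it is not a full scan; the explicit list is useful when some partitions in the range are missing and you want to skip them up front.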
<python><apache-spark><pyspark><apache-spark-sql><hdfs>
2023-10-09 22:13:30
3
6,421
Arturo Sbr
77,262,031
363,796
Line length limit in SMTP emails [Was: Message length limit sending HTML email with smtplib]
<p>Here's a simple test that demonstrates the problem I'm seeing:</p> <pre><code>import smtplib from email.message import EmailMessage my_addr = 'my_email@my_domain.com' str = '' for i in range(500): str += f'{i}: &lt;a href=&quot;https://google.com&quot;&gt;https://google.com&lt;/a&gt;&lt;br&gt;' msg = EmailMessage() msg.set_content('') msg.add_alternative(str, subtype='html') msg['Subject'] = 'html test' msg['From'] = my_addr msg.add_header('reply-to', my_addr) msg['To'] = my_addr with smtplib.SMTP('localhost', timeout=30) as smtp_server: smtp_server.send_message(msg) </code></pre> <p>The resulting email seems to &quot;break&quot; about every 1000 bytes in the input html. This length includes any tags in the HTML string. The received email looks something like this:</p> <p><a href="https://i.sstatic.net/Xqv8V.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Xqv8V.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/Gar0S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Gar0S.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/NDBaU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NDBaU.png" alt="enter image description here" /></a></p> <p>I've tried different SMTP servers with different message length limits, but all were set to 10M or higher, so I doubt this had any effect. Also, I receive no error codes or exceptions. Everything sends cleanly. smtplib just seems to break up the message somehow.</p> <p>Any idea what the problem is or how to correct this?</p>
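The ~1000-byte breakage matches the SMTP line-length limit (998 characters per line, RFC 5321/5322): the generated HTML is one enormous line, and somewhere along the relay path a hard newline gets inserted mid-tag. Forcing a content-transfer-encoding whose wire lines stay short avoids this while leaving the decoded HTML byte-identical. A sketch using the stdlib's `cte` keyword:

```python
from email.message import EmailMessage

html = "".join(
    f'{i}: <a href="https://google.com">https://google.com</a><br>'
    for i in range(500)
)

msg = EmailMessage()
msg.set_content("plain-text fallback")
# quoted-printable wraps every wire line with soft "=" breaks (~76 chars),
# so no relay ever sees an overlong line; base64 would work equally well
msg.add_alternative(html, subtype="html", cte="quoted-printable")

longest = max(len(line) for line in msg.as_bytes().splitlines())
decoded = msg.get_body(("html",)).get_content()
print(longest < 998, decoded.rstrip("\n") == html)
```

The rest of the original script (headers, `smtplib.SMTP.send_message`) is unchanged; only the `add_alternative` call gains the `cte` argument.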
<python><smtp><smtplib>
2023-10-09 22:11:38
1
1,653
zenzic
77,261,986
7,106,915
Remove padding in sidebar, Streamlit
<p>How to remove the top vertical padding in the default Streamlit sidebar?</p> <p>Posting the solution below.</p>
<python><css><user-interface><streamlit>
2023-10-09 21:58:39
1
3,007
Rexcirus
77,261,948
489,088
Processing all cells in a 2d DataFrame is very slow - what am I missing?
<p>I have a dataframe that comes from a csv file. I pivot this dataframe by two columns, and then attempt to process each cell.</p> <p>In my actual code, which is of course much more complicated, there is some processing for each value. But I notice it is extremely slow. I then created a canonical example here to illustrate the issue where I don't do anything to the value - I just iterate through them and assign the same value back to each cell - which eliminates my processing as the issue and highlights the access / assignment as being slow:</p> <pre><code>import pandas as pd from tqdm import tqdm def test_speed(): df = pd.read_csv('https://media.githubusercontent.com/media/datablist/sample-csv-files/main/files/customers/customers-100.csv') df = pd.pivot(df, index='Index', columns='Country') original = df.copy() inputs = df.columns.values # process each row at a time for row_index in tqdm(range(df.shape[0])): row_data = df.iloc[row_index] # get the row for input_idx in range(len(inputs)): # for each column label, get the value val = row_data[inputs[input_idx]] # just assign them back to that cell df.iloc[row_index, input_idx] = val location print(f'Are they equal? {original.equals(df)}') if __name__ == '__main__': test_speed() </code></pre> <p>I am getting this:</p> <pre><code>100%|██████████| 100/100 [00:04&lt;00:00, 22.63it/s] Are they equal? True </code></pre> <p>So, just 22 rows processed per second.</p> <p>If I do the same processing iterating through the csv using a <code>DictReader</code> I get about 5000 rows per iteration.</p> <p>I am obviously missing something important on how to access cells in a <code>DataFrame</code>.</p> <p>My objective is to traverse all cells in this 2d pivot table and assign the value (possibly altered) back in an efficient way.</p> <p>How can I speed up this code?</p> <p><strong>UPDATE:</strong> if I do the processing before pivot, it goes up to 1600 iterations / second. 
I suspect indexes are being rebuilt whenever I assign anything. I am never assigning values to the row/column I used to pivot, so this is surprising - is there any way to keep the pivot and the performance?</p>
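The usual remedy for this pattern: per-cell `.iloc` reads and writes each pay index-alignment overhead (and on a pivoted frame, MultiIndex-column overhead) on every access. Pulling the whole block out as one ndarray, transforming it, and rebuilding the frame once sidesteps all of that. A small self-contained sketch (synthetic data standing in for the pivoted CSV):

```python
import numpy as np
import pandas as pd

# small stand-in for the pivoted frame, MultiIndex columns included
df = pd.DataFrame(np.arange(12.0).reshape(3, 4))
df.columns = pd.MultiIndex.from_product([["a", "b"], ["x", "y"]])

# slow: df.iloc[row, col] = val in a double loop
# fast: one extraction, one vectorised transform, one reconstruction
arr = df.to_numpy()
arr = arr * 2          # stand-in for the per-cell processing
out = pd.DataFrame(arr, index=df.index, columns=df.columns)
print(out.iloc[1, 1])  # 10.0
```

If the per-cell function truly cannot be vectorised, looping over the plain ndarray (`arr[r, c] = f(arr[r, c])`) is still far cheaper than `.iloc`, since numpy indexing does no label alignment.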
<python><python-3.x><pandas><dataframe><performance>
2023-10-09 21:49:08
1
6,306
Edy Bourne
77,261,768
10,859,585
Python Monday.com Create new board
<p>I've read through the monday.com API documentation, but have been unsuccessful of creating a new board within a given workspace. The code runs and gives a <code>status_code == 200</code>, however there is no new board created within that workspace. Is this because no data has been added to it?</p> <pre class="lang-py prettyprint-override"><code>import requests import json start = timeit.default_timer() apiKey = 'xxx' apiUrl = &quot;https://api.monday.com/v2&quot; headers = {&quot;Authorization&quot; : apiKey, &quot;Content-Type&quot; : 'application/json', &quot;API-Version&quot; : '2023-10'} board_kind = 'private' board_name = 'Baulder Gate 3' description = 'My Campaign' workspace_id = 'xxx' payload = f&quot;&quot;&quot; mutation {{ create_board ( board_kind: {board_kind}, board_name: {board_name}, description: {description}, workspace_id: {workspace_id}) {{ id }} }} &quot;&quot;&quot; data = {'query' : payload} r_boards = requests.post(url=apiUrl, headers=headers, data=json.dumps(data)) # make request if r_boards.status_code == 200: print(&quot;board was created&quot;) </code></pre>
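Two things stand out: GraphQL string values must be quoted literals (`board_name: Baulder Gate 3` is a syntax error), and the monday.com API returns HTTP 200 even for failed mutations, putting the failure in an `errors` list in the JSON body. Passing values as GraphQL variables avoids hand-quoting entirely. A sketch (the variable types `BoardKind!` and `ID!` follow my reading of the monday docs and should be checked against the current schema; no live request is made here):

```python
import json

def build_create_board_payload(board_name, board_kind, workspace_id):
    query = """
    mutation ($name: String!, $kind: BoardKind!, $workspace: ID!) {
      create_board (board_name: $name, board_kind: $kind,
                    workspace_id: $workspace) {
        id
      }
    }
    """
    # variables are serialised by json.dumps, so quoting/escaping is automatic
    return {"query": query,
            "variables": {"name": board_name,
                          "kind": board_kind,
                          "workspace": workspace_id}}

payload = build_create_board_payload("Baulder Gate 3", "private", "12345")
body = json.dumps(payload)  # ready for requests.post(apiUrl, headers=headers, data=body)
print("Baulder Gate 3" in body)  # True
```

After posting, check `r.json().get("errors")` (and that `r.json()["data"]["create_board"]["id"]` exists) rather than relying on `status_code == 200`.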
<python><post><monday.com>
2023-10-09 21:03:58
1
414
Binx
77,261,622
13,067,389
Merging Top Two Rows of a dataframe and editing the result
<p>I have a dataframe:</p> <pre><code>df = pd.DataFrame({ '0': ['FY18', 'Q1', 1500, 1200, 950, 2200], '1': ['FY18', 'Q2', 2340, 1234, 2000, 1230], '2': ['FY18', 'Q3', 2130, 2200, 2190, 2210], '3': ['FY18', 'YearTotal', 1000, 1900, 1500, 1800], }) </code></pre> <p>I wish to merge the top two rows of the dataframe and make it the index</p> <p>I tried:</p> <pre><code># Merge the top two rows into a single row merged_row = pd.concat([df.iloc[0], df.iloc[1]], axis=1) # Transpose the merged row to make it a single row with all columns merged_row = merged_row.T # Replace the first row of the DataFrame with the merged row df.iloc[0] = merged_row </code></pre> <p>But I get an error</p> <pre class="lang-none prettyprint-override"><code>ValueError: Incompatible indexer with DataFrame </code></pre> <p>Further I want to edit the header so it reverse the 'Q1' to '1Q'. Also deletes ' YearTotal' and keeps just 'FY18' when the column says 'YearTotal'. The final output could look like this:</p> <pre><code>df = pd.DataFrame({ '0': ['1Q18', 1500, 1200, 950, 2200], '1': ['2Q18', 2340, 1234, 2000, 1230], '2': ['3Q18', 2130, 2200, 2190, 2210], '3': ['FY18', 1000, 1900, 1500, 1800], }) </code></pre>
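The `pd.concat`/transpose route fights the shape; it is simpler to compute the merged label per column from the first two rows, then drop row 0 and overwrite row 1. A sketch that reproduces the desired output exactly:

```python
import pandas as pd

df = pd.DataFrame({
    '0': ['FY18', 'Q1', 1500, 1200, 950, 2200],
    '1': ['FY18', 'Q2', 2340, 1234, 2000, 1230],
    '2': ['FY18', 'Q3', 2130, 2200, 2190, 2210],
    '3': ['FY18', 'YearTotal', 1000, 1900, 1500, 1800],
})

def merge_label(fy, q):
    # 'FY18' + 'Q1' -> '1Q18'; 'YearTotal' collapses to just the fiscal year
    return fy if q == 'YearTotal' else f"{q[1:]}Q{fy[2:]}"

labels = [merge_label(fy, q) for fy, q in zip(df.iloc[0], df.iloc[1])]
out = df.iloc[1:].reset_index(drop=True)  # drop the FY row
out.iloc[0] = labels                      # overwrite the quarter row
print(out['3'].tolist())  # ['FY18', 1000, 1900, 1500, 1800]
```

If the labels should become the column *headers* rather than a data row, use `out.columns = labels; out = out.iloc[1:]` instead of the `out.iloc[0] = labels` line.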
<python><pandas><merge>
2023-10-09 20:30:51
2
681
postcolonialist
77,261,523
5,987
Why does struct.unpack need 2 extra bytes?
<p>I have the following <code>struct.unpack</code> call that fails:</p> <pre><code>struct.unpack('14x4x2i2xh8x2i', b'\0'*46) Traceback (most recent call last): File &quot;&lt;pyshell#88&gt;&quot;, line 1, in &lt;module&gt; struct.unpack('14x4x2i2xh8x2i', b'\0'*46) struct.error: unpack requires a buffer of 48 bytes </code></pre> <p>I've added up the number of bytes required by the format multiple times, and I always come up with a total of 46 bytes. But obviously the error says it's not enough. Why am I coming up short?</p> <p>The following does produce the expected output:</p> <pre><code>struct.unpack('14x4x2i2xh8x2i', b'\0'*48) (0, 0, 0, 0, 0) </code></pre>
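The two missing bytes are alignment padding: without a byte-order prefix, `struct` uses native mode (`@`), which aligns each `i` to a 4-byte boundary. After `14x4x` the offset is 18, so two hidden pad bytes are inserted before the first `2i`. Any of the standard-size prefixes (`=`, `<`, `>`) disables padding and yields the hand-counted 46:

```python
import struct

fmt = '14x4x2i2xh8x2i'

print(struct.calcsize(fmt))         # 48 on typical platforms (native alignment)
print(struct.calcsize('=' + fmt))   # 46 (standard mode: no padding)

values = struct.unpack('=' + fmt, b'\0' * 46)
print(values)  # (0, 0, 0, 0, 0)
```

`struct.calcsize` is the quickest way to see which mode a format is in; whether 46 bytes or 48 is *correct* depends on how the original data was packed.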
<python><struct>
2023-10-09 20:12:10
1
309,773
Mark Ransom
77,261,425
9,983,652
How to run Python using Docker?
<p>I don't know Docker at all. I just use VS code or jupyter notebook to run notebook. I just bought a book and went to Github page of the authors trying to download the notebooks. It said I have to run docker. It is really annoying. I am wondering if I can download all these notebooks from the Docker using some sort of commands. Thanks</p> <p><a href="https://github.com/kylegallatin/ml-python-cookbook-runner" rel="nofollow noreferrer">https://github.com/kylegallatin/ml-python-cookbook-runner</a></p> <p><a href="https://www.oreilly.com/library/view/machine-learning-with/9781098135713/" rel="nofollow noreferrer">https://www.oreilly.com/library/view/machine-learning-with/9781098135713/</a></p>
<python><docker>
2023-10-09 19:50:57
0
4,338
roudan
77,261,423
388,520
Python: import the actual module object for the containing package, without naming it
<p>This question or a close relative has been asked at least three times before (<a href="https://stackoverflow.com/questions/436497/python-import-the-containing-package">1</a>, <a href="https://stackoverflow.com/questions/73437382/python-relatively-import-the-containing-package">2</a>, <a href="https://stackoverflow.com/questions/5286210/is-there-a-way-to-access-parent-modules-in-python">3</a>) but never, that I can find, satisfactorily answered; only workarounds are offered, and none of them work for my specific use case.</p> <p>Suppose a directory structure like this:</p> <pre><code>mypackage/ ├── __init__.py └── mod.py </code></pre> <p>From <code>mod.py</code> I want to import <code>mypackage</code>. Not any specific thing defined in <code>mypackage</code>, nor a sibling module, but the exact same object you would get if you wrote <code>import mypackage</code> from outside the package. And I want to do this without using the name <code>mypackage</code> (in case it might be renamed in the future). How do I do that?</p> <p>(You may assume that the containing package is <em>not</em> a namespace package.)</p> <p>(You might think it would work to say <code>import __init__</code>, but that actually reloads <code>__init__.py</code> and creates a <em>separate</em> module object, <code>mypackage.__init__</code>, so that's no good.)</p> <p>To reiterate, the answers to previous iterations of this question supply only workarounds (e.g. relatively importing only what you need from the containing package, or restructuring the code to avoid needing to refer to the containing package) that do not work for me. <strong>Please do not suggest any alternative approach. Please answer the exact question I have asked.</strong></p>
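One approach that appears to satisfy the stated constraints: `importlib.import_module(__package__)` (equivalently, `sys.modules[__package__]` once the package has begun importing) returns the *cached* module object for the containing package - the exact object outside importers get, not a reloaded copy - and never spells the package name. A self-contained demonstration that builds a throwaway package on disk:

```python
import importlib
import pathlib
import sys
import tempfile

tmp = tempfile.mkdtemp()
pkg = pathlib.Path(tmp, "mypackage")
pkg.mkdir()
(pkg / "__init__.py").write_text("MARKER = 'container'\n")
# mod.py grabs its own container via __package__ - no hard-coded name:
(pkg / "mod.py").write_text(
    "import importlib\n"
    "container = importlib.import_module(__package__)\n"
)

sys.path.insert(0, tmp)
importlib.invalidate_caches()
mypackage = importlib.import_module("mypackage")
mod = importlib.import_module("mypackage.mod")

print(mod.container is mypackage)  # True - the same module object, no reload
```

Caveat: for a module nested deeper (e.g. `mypackage.sub.mod`), `__package__` names the *immediate* parent (`mypackage.sub`); use `__package__.partition('.')[0]` if the top-level package is what you want.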
<python><python-3.x>
2023-10-09 19:50:35
1
142,389
zwol
77,261,415
11,278,478
Convert Horizontal text file to vertical CSV file using Python
<p>I have a text file which has each column as a row and a header to represent start of new record. I need to convert this to a vertical CSV using Python.</p> <p>Not all columns apply to each records so I can set them up either blank or 0.</p> <p>Please see example below:</p> <pre><code>WERF this is vendor1 data this is col1 : 20 this is col2 : 30 here is col1 scenario2: 1 here is col2 scenario2: 3 ADEF this is vendor2 data this is col1 : 2 this is col2 : 3 TRGF this is vendor3 data this is col1 : 10 this is col2 : 12 PSDFAA this is vendor4 data this is col1 : 4 this is col2 : 7 here is col1 scenario2: 12 here is col2 scenario2: 11 </code></pre> <p>Expected output:</p> <pre><code>Vendor this is col1 this is col1 here is col1 scenario2 here is col2 scenario2 WERF 20 30 1 3 ADEF 2 3 TRGF 10 12 PSDFAA 4 7 12 11 </code></pre> <p>Please suggest how to implement this in Python.</p>
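One hedged way to implement this: treat any line containing `:` as a column/value pair, any bare upper-case token as the start of a new vendor record, and skip everything else (the descriptive "this is vendorN data" lines). Those predicates are assumptions about the real file and may need adjusting. A stdlib-only sketch on a subset of the sample:

```python
import csv
import io

raw = """\
WERF
this is vendor1 data
this is col1 : 20
this is col2 : 30
here is col1 scenario2: 1
here is col2 scenario2: 3
ADEF
this is vendor2 data
this is col1 : 2
this is col2 : 3
"""

records, columns = [], []
for line in raw.splitlines():
    line = line.strip()
    if not line:
        continue
    if ":" in line:                       # "header : value" -> one column
        key, _, value = line.rpartition(":")
        key = key.strip()
        records[-1][key] = value.strip()
        if key not in columns:
            columns.append(key)           # preserve first-seen column order
    elif line.isupper():                  # bare upper-case token = new record
        records.append({"Vendor": line})
    # any other line is descriptive text and is skipped

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["Vendor"] + columns, restval="")
writer.writeheader()
writer.writerows(records)
print(out.getvalue().splitlines()[1])  # WERF,20,30,1,3
```

`restval=""` fills the missing scenario-2 columns for vendors that lack them; swap `io.StringIO` for `open("out.csv", "w", newline="")` to write a real file, or feed `records` straight into `pd.DataFrame(records)`.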
<python><python-3.x><pandas><dataframe><csv>
2023-10-09 19:47:36
4
434
PythonDeveloper
77,261,325
13,613,776
How can I optimize this SQLAlchemy delete query for the Match table, which deletes rows where both user_id and partner_id match a specific user_id?
<p>Here's my current code:</p> <pre><code>async def delete_match(user_id): async with session_pool() as session: delete_stmt = delete(Match).where(Match.user_id == user_id) delete_stmt = delete(Match).where(Match.partner_id == user_id) await session.execute(delete_stmt) await session.commit() print(f&quot;Matches deleted for user ID {user_id}&quot;) </code></pre> <p>here is my model</p> <pre><code>class Match(Base, TableNameMixin): id: Mapped[int] = mapped_column(BIGINT, nullable=False, autoincrement=True) user_id: Mapped[int] = mapped_column(BIGINT, nullable=False, primary_key=True) partner_id: Mapped[int] = mapped_column(BIGINT, nullable=False) </code></pre> <p>Is there a more efficient and faster way to achieve this? Additionally, what would be an efficient way to save the last 3 match IDs for a <code>user_id</code>?</p>
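Note the bug before the optimization: the second `delete_stmt = ...` assignment simply overwrites the first, so only the `partner_id` deletion ever runs. Combining both conditions with `or_()` gives one statement and one round trip. A synchronous, in-memory sketch of the same statement (the async session usage from the question carries over unchanged):

```python
from sqlalchemy import BigInteger, Column, Integer, create_engine, delete, or_
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()

class Match(Base):
    __tablename__ = "match"
    id = Column(Integer, primary_key=True)
    user_id = Column(BigInteger, nullable=False)
    partner_id = Column(BigInteger, nullable=False)

engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add_all([
        Match(user_id=1, partner_id=2),
        Match(user_id=3, partner_id=1),
        Match(user_id=3, partner_id=4),
    ])
    session.commit()

    # one statement covering both sides of the match
    session.execute(delete(Match).where(
        or_(Match.user_id == 1, Match.partner_id == 1)))
    session.commit()

    remaining = session.query(Match).count()
print(remaining)  # 1
```

For the "last 3 match IDs" follow-up, a `select(Match.id).where(...).order_by(Match.id.desc()).limit(3)` before the delete is the usual pattern; indexing `user_id` and `partner_id` matters more for speed than the statement shape.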
<python><sqlalchemy><flask-sqlalchemy>
2023-10-09 19:26:04
0
448
Rishu Pandey
77,261,135
3,825,948
Make Connection/Pool Available Outside Main
<p>Using the following code from <a href="https://pypi.org/project/cloud-sql-python-connector/" rel="nofollow noreferrer">https://pypi.org/project/cloud-sql-python-connector/</a>:</p> <pre><code>import asyncio import asyncpg import sqlalchemy from sqlalchemy.ext.asyncio import AsyncEngine, create_async_engine from google.cloud.sql.connector import Connector async def init_connection_pool(connector: Connector) -&gt; AsyncEngine: # initialize Connector object for connections to Cloud SQL async def getconn() -&gt; asyncpg.Connection: conn: asyncpg.Connection = await connector.connect_async( &quot;project:region:instance&quot;, # Cloud SQL instance connection name &quot;asyncpg&quot;, user=&quot;my-user&quot;, password=&quot;my-password&quot;, db=&quot;my-db-name&quot; # ... additional database driver args ) return conn # The Cloud SQL Python Connector can be used along with SQLAlchemy using the # 'async_creator' argument to 'create_async_engine' pool = create_async_engine( &quot;postgresql+asyncpg://&quot;, async_creator=getconn, ) return pool async def main(): # initialize Connector object for connections to Cloud SQL loop = asyncio.get_running_loop() async with Connector(loop=loop) as connector: # initialize connection pool pool = await init_connection_pool(connector) # example query async with pool.connect() as conn: await conn.execute(sqlalchemy.text(&quot;SELECT NOW()&quot;)) # dispose of connection pool await pool.dispose() </code></pre> <p>How do I make a database connection available outside of main for other functions to use? Any help would be appreciated. Thanks.</p>
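A common pattern (a sketch, not the only way): keep the pool in a module-level variable that is initialized once at startup, and give other functions an accessor instead of threading the pool through arguments. The factory below is a stand-in for `init_connection_pool(connector)`; all names outside the question are illustrative:

```python
import asyncio

_pool = None  # module-level singleton

async def init_pool(factory):
    """Create the pool exactly once, at application startup."""
    global _pool
    if _pool is None:
        _pool = await factory()
    return _pool

def get_pool():
    """Any function, in any module, can reach the shared pool."""
    if _pool is None:
        raise RuntimeError("pool not initialised - call init_pool() first")
    return _pool

# --- demo with a stand-in factory (replace with init_connection_pool) ------
async def fake_factory():
    return {"engine": "placeholder"}

async def some_handler():
    pool = get_pool()      # no pool argument needed
    return pool

async def main():
    await init_pool(fake_factory)
    assert await some_handler() is get_pool()

asyncio.run(main())
print(get_pool())
```

With the Cloud SQL Connector specifically, the `Connector` object must also outlive `main()` - keep it at module level (or in an app lifespan hook) rather than inside `async with`, and dispose both it and the pool at shutdown.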
<python><database><sqlalchemy><connector><asyncpg>
2023-10-09 18:41:30
0
937
Foobar
77,261,129
5,830,492
How to create a Frame/Canvas using class in Python and use it many times with different widgets inside it?
<p>I am new in Python. I am developing a program which actually is a small shell for phpMyAdmin. In my program I need in some Windows (which is created by TKinter) draw a scrolled canvas. I've written a class like this to create a Scrolled Canvas. It's working great when I use it in every where but problem is I need different widgets(Like: Buttons/ Labels/ Entry/ Combo-box, ... ) inside it in every Windows. I looked over the net and found a solution (<a href="https://stackoverflow.com/questions/47360640/how-to-create-tkinter-widgets-inside-parent-class-from-sub-class">How to create Tkinter Widgets inside Parent class from Sub class</a>) but that one makes my program so complicated. So I'm looking for a better solution that keeps my code simple. Here is my Class which create Scrolled Form/Canvas:</p> <pre><code>class ScrolledCanvas(tk.Frame): def __init__(self, master_frame, back_color, label_frame_text, border, x, y, w, h): super(ScrolledCanvas, self).__init__() dynamic_widget_frame = LabelFrame(master_frame, relief=SUNKEN, bg=back_color, text=label_frame_text, bd=border) dynamic_widget_frame.place(x=x, y=y) mycanvas = Canvas(dynamic_widget_frame, width=w, height=h) mycanvas.pack(side=LEFT) yscollbar = ttk.Scrollbar(dynamic_widget_frame, orient='vertical', command=mycanvas.yview) yscollbar.pack(side=RIGHT, fill='y') mycanvas.configure(yscrollcommand=yscollbar.set) myframe = Frame(mycanvas) mycanvas.create_window((0, 0), window=myframe, anchor=&quot;nw&quot;) mycanvas.bind('&lt;Configure&gt;', lambda es: mycanvas.configure(scrollregion=mycanvas.bbox('all'))) close_btn = Button(myframe, text='X', command=dynamic_widget_frame.destroy, fg='red') close_btn.grid() </code></pre> <p>This sample code explains how we can create a widget inside a class from outside it. But makes my code extremely long and complicated. 
Because I've multi different widgets in Windows.</p> <pre><code>import tkinter as tk root = tk.Tk() class MainApp(tk.Frame): def __init__(self, master): super().__init__(master) #a child frame of MainApp object self.frame1 = tk.Frame(self) tk.Label(self.frame1, text=&quot;This is MainApp frame1&quot;).pack() self.frame1.grid(row=0, column=0, sticky=&quot;nsew&quot;) #another child frame of MainApp object self.frame2 = SearchFrame(self) self.frame2.grid(row=0, column=1, sticky=&quot;nsew&quot;) def create_labels(self, master): return tk.Label(master, text=&quot;asd&quot;) class SearchFrame(tk.Frame): def __init__(self, master): super().__init__(master) self.label = tk.Label(self, text=&quot;this is SearchFrame&quot;) self.label.pack() master.label1 = MainApp.create_labels(self, master) master.label1.grid() mainAppObject = MainApp(root) mainAppObject.pack() root.mainloop() </code></pre>
<python><class><tkinter><subclass>
2023-10-09 18:40:24
1
388
Saleh
77,261,119
353,337
Manually escaping URL query string
<p>I have a URL string</p> <pre><code>https://example.com/search?'=$&amp;(=/&amp;)=! </code></pre> <p>that I'd like to transform into its escaped form</p> <pre><code>https://example.com/search?%28=%2F&amp;%29=%21&amp;%27=%24 </code></pre> <p>as reported by <code>res.request.path_url</code> using Python.</p> <p>I've played around with <code>urllib.parse.quote</code>/<code>parse_qs</code>, but that didn't quite get me there:</p> <pre class="lang-py prettyprint-override"><code>import requests from urllib.parse import quote, urlsplit, parse_qs # reference: res = requests.get(&quot;https://example.com/search&quot;, params={&quot;(&quot;: &quot;/&quot;, &quot;)&quot;: &quot;!&quot;, &quot;'&quot;: &quot;$&quot;}) print(res.request.path_url) url = &quot;https://example.com/search?'=$&amp;(=/&amp;)=!&quot; p = urlsplit(url) path_url = p.path or &quot;/&quot; if p.query: parsed_query = parse_qs(p.query) escaped_qs = &quot;&amp;&quot;.join( f&quot;{quote(key)}={','.join(quote(item) for item in value)}&quot; for key, value in parse_qs(p.query).items() ) path_url += f&quot;?{escaped_qs}&quot; print(path_url) </code></pre> <pre><code>/search?%28=%2F&amp;%29=%21&amp;%27=%24 /search?%27=%24&amp;%28=/&amp;%29=%21 </code></pre> <p>Perhaps there's an easier way altogether. Any hints?</p>
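There is indeed a shorter route: `urlencode` accepts `quote_via` and `safe` parameters, and the `/` survived in the attempt above because `quote`'s default is `safe='/'`. Using `parse_qsl` (which preserves order and repeated keys better than `parse_qs`) with `safe=""` reproduces requests' escaping; the remaining ordering difference versus the reference is only the insertion order of the params dict in that run:

```python
from urllib.parse import parse_qsl, quote, urlencode, urlsplit

url = "https://example.com/search?'=$&(=/&)=!"
p = urlsplit(url)

path_url = p.path or "/"
if p.query:
    # quote_via=quote, safe="" percent-encodes every reserved character,
    # including "/", which the defaults (quote_plus / safe='/') leave alone
    path_url += "?" + urlencode(parse_qsl(p.query), safe="", quote_via=quote)

print(path_url)  # /search?%27=%24&%28=%2F&%29=%21
```

If the original pair order must be kept even for blank values, add `keep_blank_values=True` to `parse_qsl`.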
<python><url><python-requests>
2023-10-09 18:38:51
1
59,565
Nico Schlömer
77,261,097
7,638,876
Filter data in long format using dropdown menu with plotly
<p>I am trying to filter data, which is in a long format, using dropdown menu in plotly. However, I might be misunderstanding something as I am not getting the proper results. I want to combine this with faceting, but that is not shown in the code below. I tried two ways that did not work (they fail in different ways) and I don't understand what is wrong with them.</p> <p>Specifically, what you'll notice is as you change the dropdown menu selection the following information is incorrect: color of the data points, hover data information, legend and displayed data. (This will depend on whether attempt 1 or attempt 2 code is used)</p> <p>Here is the code I am using (without faceting) on Iris dataset:</p> <pre><code>import pandas as pd import plotly.express as px df = px.data.iris() dropdown_menu = [ { 'label': 'All Categories', 'method': 'update', 'args': [{ 'visible': True }, {'title': 'All Categories'}]} ] dropdown_menu.extend( [{'label': cat, 'method': 'update', 'args': [{ ### Attempt 1 'visible': df['species'] == cat, ### Attempt 2 #'y': [df[df['species'] == cat]['petal_length']], #'x': [df[df['species'] == cat]['petal_width']], }, {'title': cat}]} for cat in df['species'].unique()] ) fig = px.line( df, x='petal_width', y='petal_length', color='species', line_group='species', markers = True)#, facet_col='other categorical feature') fig.update_layout( {'updatemenus': [ { 'type': 'dropdown', 'buttons': dropdown_menu, 'direction': 'down', 'showactive': True, 'active': 0, 'x': 0.15, 'xanchor': 'left', 'y': 1.15, 'yanchor': 'top' } ]} ) </code></pre> <p>Side question: what is the purpose of the square brackets in my second attempt? I am getting different results without them.</p>
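The root of both failures: `px.line(..., color='species')` creates one *trace* per species, so in an updatemenu button `visible` must be a list with one boolean per trace (in `fig.data` order), not a per-row mask like `df['species'] == cat`. Likewise, `x`/`y` overrides take a list of arrays, one per trace - that is what the square brackets in the second attempt do; without them the single array is redistributed across traces. A pure-Python helper sketching the visibility approach:

```python
def make_buttons(categories):
    """One button per category; `categories` must match fig.data trace order."""
    buttons = [{
        "label": "All Categories",
        "method": "update",
        "args": [{"visible": [True] * len(categories)},
                 {"title": "All Categories"}],
    }]
    for cat in categories:
        buttons.append({
            "label": cat,
            "method": "update",
            # one boolean per *trace*, not per data row
            "args": [{"visible": [c == cat for c in categories]},
                     {"title": cat}],
        })
    return buttons

buttons = make_buttons(["setosa", "versicolor", "virginica"])
print(buttons[1]["args"][0]["visible"])  # [True, False, False]
```

Attach with `fig.update_layout(updatemenus=[{..., "buttons": make_buttons([t.name for t in fig.data])}])`. With faceting, each species appears once per facet, so build the list from `fig.data` (duplicates included) rather than from the dataframe.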
<python><plotly>
2023-10-09 18:35:55
1
995
student
77,260,935
1,147,321
Python evaluating "eval" windows paths wrongly
<p>I am trying to use eval to convert a string representing a list of file paths to an actual list.</p> <p>However, it seems to me that eval ruins that path. See the following code:</p> <p><em>Code executed in Pycharm Python console</em></p> <pre><code>&gt;&gt;&gt; _path = '[&quot;c:\\path\\to\\knowwhere&quot;]' &gt;&gt;&gt; eval(_path) ['c:\\path\to\\knowwhere'] </code></pre> <p>As shown above, the eval process removes one <code>'\'</code> making it a tab in the string, and thus cannot find the file.</p> <p>Is there any way around this or have I made a mistake somewhere?</p> <p>Note: <code>ast.literal_eval()</code> does the same.</p> <p>I'm using the latest Python3.10, and Pycharm IDE</p> <p>EDIT: This also fails</p> <pre><code>&gt;&gt;&gt; cmd = &quot;&quot;&quot;_path = ['C:\\path\\to\\knowwhere']\nprint(_path)&quot;&quot;&quot; &gt;&gt;&gt; exec(cmd) ['C:\\path\to\\knowwhere'] </code></pre> <p>And the above could be an external script, but would not work.</p> <p>regards</p>
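`eval` is behaving exactly as documented here: the string itself contains single backslashes, and when `eval` re-parses it as Python source, `\t` inside the inner string literal is interpreted as an escape sequence and becomes a real tab (`\p` and `\k`, not being valid escapes, survive). A demonstration, including the escape-before-eval workaround:

```python
path_list = '["c:\\path\\to\\knowwhere"]'   # the string holds single backslashes
assert "\t" not in path_list                 # no tab here yet

# eval() re-parses the contents as source code: "\t" -> a literal TAB
broken = eval(path_list)
print("\t" in broken[0])  # True

# doubling the backslashes first makes the round trip lossless
fixed = eval(path_list.replace("\\", "\\\\"))
print(fixed)  # ['c:\\path\\to\\knowwhere']
```

The durable fix is upstream: serialize the list with `repr()` or `json.dumps()` (both double the backslashes for you) instead of plain string formatting - or use forward slashes, which Windows file APIs accept.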
<python><python-3.x><windows><eval>
2023-10-09 18:04:54
1
2,167
Ephreal
77,260,861
4,966,317
Solve University Courses Constraint satisfaction problem with Pandas
<p>I have a dataframe that contains some university courses.</p> <ul> <li>Each course has <code>pre-requisites</code> and <code>co-requisites</code> where <code>pre-requisites</code> are the courses that should be passed <strong>before</strong> being able to register a specific course.</li> <li><code>Co-requisites</code> are the courses that should be passed <strong>before</strong> taking the course, or courses that can be registered <strong>with</strong> the course that we are looking forward to registering <strong>at the same semester</strong>.</li> </ul> <p>For example, this is a part of my dataframe.</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>course_name</th> <th>pre_requisites</th> <th>co_requisites</th> </tr> </thead> <tbody> <tr> <td>Calculus A</td> <td></td> <td></td> </tr> <tr> <td>Calculus B</td> <td>Calculus A</td> <td></td> </tr> <tr> <td>Linear Algebra</td> <td></td> <td></td> </tr> <tr> <td>Calculus C</td> <td>[Calculus B, Linear Algebra]</td> <td></td> </tr> <tr> <td>Differential Equations</td> <td>Linear Algebra</td> <td>Calculus C</td> </tr> <tr> <td>Computer Programming</td> <td>[Calculus A, Linear Algebra]</td> <td></td> </tr> <tr> <td>Physics A</td> <td></td> <td>Calculus A</td> </tr> <tr> <td>Physics A Lab</td> <td></td> <td>Physics A</td> </tr> <tr> <td>Physics B</td> <td>Physics A</td> <td></td> </tr> <tr> <td>Physics B Lab</td> <td>Physics A Lab</td> <td>Physics B</td> </tr> <tr> <td>Electrical Circuits</td> <td>Physics B</td> <td>Differential Equations</td> </tr> <tr> <td>Electrical Circuits Lab</td> <td>Physics B Lab</td> <td>Electrical Circuits</td> </tr> <tr> <td>Digital Logic</td> <td>[Computer Programming, Electrical Circuits, Electrical Circuits Lab]</td> <td></td> </tr> <tr> <td>Digital Logic Lab</td> <td></td> <td>Digital Logic</td> </tr> </tbody> </table> </div> <p>I covered all of the cases.</p> <ul> <li>There are courses without <code>pre-requisites</code> nor <code>co-requisites</code> like 
<code>Calculus A</code>.</li> <li>Also, there are courses with <code>co-requisites</code> but without <code>pre-requisites</code> like <code>Digital Logic Lab</code>.</li> <li>Also, there are courses which are a chain of <code>co-requisites</code> like <code>Electrical Circuits Lab</code> which requires you to pass <code>Electrical Circuits</code> or register it at the same semester. <ul> <li>So, if you only passed <code>Physics B</code>, <code>Physics B Lab</code>, <code>Calculus B</code> and <code>Linear Algebra</code>, and you want to register <code>Electrical Circuits Lab</code>, then you need to register <code>Calculus C</code>, <code>Differential Equations</code> and <code>Electrical Circuits</code> at the same semester.</li> </ul> </li> <li>Finally, for simplicity, a course have zero or one <code>co-requisite</code>(s) only.</li> </ul> <p>I need a solution that I can give an input which is the passed courses, and I get an output which is the courses that can be registered.</p> <p>Here is a part of my courses data source (<code>courses.csv</code>):</p> <pre><code>course_name,pre_requisites,co_requisite Calculus-A,, Calculus-B,Calculus-A, Linear-Algebra,, Calculus-C,Calculus-B_Linear-Algebra, Differential-Equations,Linear-Algebra,Calculus-C Computer-Programming,Calculus-A_Linear-Algebra, Physics-A,,Calculus-A Physics-A-Lab,,Physics-A Physics-B,Physics-A, Physics-B-Lab,Physics-A-Lab,Physics-B Electrical-Circuits,Physics-B,Differential-Equations Electrical-Circuits-Lab,Physics-B-Lab,Electrical-Circuits Digital-Logic,Computer-Programming_Electrical-Circuits_Electrical-Circuits-Lab, Digital-Logic-Lab,,Digital-Logic </code></pre> <p>Here is a part of my python code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd df = pd.read_csv('courses.csv', dtype=str) df.fillna('', inplace=True) df['finished'] = False finished_courses = ['Calculus-A', 'Calculus-B', 'Linear-Algebra', 'Physics-A', 'Physics-A-Lab', 'Physics-B', 'Physics-B-Lab'] for 
finished_course in finished_courses: df.loc[df['course_name'] == finished_course, 'finished'] = True df['pre_requisites'] = df['pre_requisites'].apply(lambda pre_requisites: pre_requisites.split('_')) courses_without_pre_requisites = df[df['pre_requisites'].apply(lambda pre_requisites: pre_requisites == [''])] courses_without_pre_requisites_without_co_requisites = courses_without_pre_requisites[ courses_without_pre_requisites['co_requisite'].apply(lambda co_requisites: co_requisites == '')] courses_without_pre_requisites_with_co_requisites = courses_without_pre_requisites[ courses_without_pre_requisites['co_requisite'].apply(lambda co_requisites: co_requisites != '')] courses_with_pre_requisites = df[df['pre_requisites'].apply(lambda pre_requisites: pre_requisites != [''])] courses_with_pre_requisites_with_co_requisites = courses_with_pre_requisites[ courses_with_pre_requisites['co_requisite'].apply(lambda co_requisites: co_requisites != '')] courses_with_pre_requisites_without_co_requisites = courses_with_pre_requisites[ courses_with_pre_requisites['co_requisite'].apply(lambda co_requisites: co_requisites == '')] courses_with_finished_pre_requisites = courses_with_pre_requisites_without_co_requisites[ courses_with_pre_requisites_without_co_requisites['pre_requisites'].apply( lambda pre_requisite: set(pre_requisite).issubset(set(df[df['finished']]['course_name'])))] courses_to_be_registered = pd.concat( [courses_without_pre_requisites_without_co_requisites, courses_with_finished_pre_requisites]) courses_to_be_registered = courses_to_be_registered[~(courses_to_be_registered['finished'])] print('Courses to be registered:') print(courses_to_be_registered['course_name']) </code></pre> <p>Here is the output:</p> <pre class="lang-bash prettyprint-override"><code>Courses to be registered: Calculus-C Computer-Programming </code></pre> <p>Here is the desired output:</p> <pre class="lang-bash prettyprint-override"><code>Calculus-C Differential-Equations Computer-Programming 
Electrical-Circuits Electrical-Circuits-Lab </code></pre>
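The single-pass subset check misses exactly the co-requisite chains. One way to handle them: iterate to a fixed point - a course is registrable when all its pre-requisites are *finished* and its co-requisite is finished *or already registrable* - repeating until nothing new is added. A pure-Python sketch over the question's data (the same loop can run on the dataframe rows):

```python
raw = [  # (course, '_'-joined pre-requisites, co-requisite)
    ("Calculus-A", "", ""),
    ("Calculus-B", "Calculus-A", ""),
    ("Linear-Algebra", "", ""),
    ("Calculus-C", "Calculus-B_Linear-Algebra", ""),
    ("Differential-Equations", "Linear-Algebra", "Calculus-C"),
    ("Computer-Programming", "Calculus-A_Linear-Algebra", ""),
    ("Physics-A", "", "Calculus-A"),
    ("Physics-A-Lab", "", "Physics-A"),
    ("Physics-B", "Physics-A", ""),
    ("Physics-B-Lab", "Physics-A-Lab", "Physics-B"),
    ("Electrical-Circuits", "Physics-B", "Differential-Equations"),
    ("Electrical-Circuits-Lab", "Physics-B-Lab", "Electrical-Circuits"),
    ("Digital-Logic",
     "Computer-Programming_Electrical-Circuits_Electrical-Circuits-Lab", ""),
    ("Digital-Logic-Lab", "", "Digital-Logic"),
]
finished = {"Calculus-A", "Calculus-B", "Linear-Algebra",
            "Physics-A", "Physics-A-Lab", "Physics-B", "Physics-B-Lab"}

registrable = set()
changed = True
while changed:                 # fixed point: resolves co-requisite chains
    changed = False
    for name, pre, co in raw:
        if name in finished or name in registrable:
            continue
        prereqs = set(pre.split("_")) - {""}
        co_ok = co == "" or co in finished or co in registrable
        if prereqs <= finished and co_ok:
            registrable.add(name)
            changed = True

print(sorted(registrable))
```

Note the asymmetry that produces the desired output: pre-requisites must be in `finished`, while a co-requisite may merely be co-registered - which is why `Digital-Logic` stays out (its *pre*-requisite `Electrical-Circuits` is only registrable, not passed).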
<python><python-3.x><pandas><dataframe>
2023-10-09 17:48:58
1
2,643
Ambitions
77,260,842
335,427
Place equally grouped members into same buckets
<p>I am looking for an optimal algorithm to to the following</p> <p>I have a list of groups like so</p> <pre><code>groups = [ ['A', 'B', 'C'], ['D', 'A', 'E'], ['X', 'Y', 'Z'], ['X', 'A', 'W'], ['AA', 'BB', 'CC'], ] </code></pre> <p>I want to put them into buckets where, if there is no bucket, create a new bucket.</p> <p>For each group like the first one <code>[A, B, C]</code> put them into the first bucket. All members of that group belong to the bucket. Now the next group <code>[D,A,E]</code> there is at least 1 member which is in bucket 1. This means all members from that group belong to bucket 1. The next one <code>[X,Y,Z]</code> has no members in any bucket. This means create a new bucket and store the members there. Now the next one <code>[X,A,W]</code> has a member in bucket 1 and bucket 2. this means bucket 1 and bucket 2 are the same and needs to be merged.</p> <p>For this example I should have 2 buckets. What would be an efficient way to do it if there is thousands of groups and each group could have hundreds of entries in similar length</p>
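This is the classic disjoint-set (union-find) problem: union all members of each group, and the buckets fall out as the connected components, with merges (bucket 1 + bucket 2) handled for free. With path compression the total cost is near-linear in the number of memberships. A minimal sketch:

```python
groups = [
    ['A', 'B', 'C'],
    ['D', 'A', 'E'],
    ['X', 'Y', 'Z'],
    ['X', 'A', 'W'],
    ['AA', 'BB', 'CC'],
]

parent = {}

def find(x):
    parent.setdefault(x, x)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving keeps trees flat
        x = parent[x]
    return x

def union(a, b):
    parent[find(a)] = find(b)

for group in groups:
    for member in group[1:]:
        union(group[0], member)        # link everyone to the group's first member

buckets = {}
for member in parent:
    buckets.setdefault(find(member), set()).add(member)

print(len(buckets))  # 2
```

For thousands of groups with hundreds of members each, this stays fast; adding union-by-rank on top of path compression gives the textbook inverse-Ackermann bound, but for dictionary-based sets at this scale the version above is usually sufficient.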
<python><algorithm>
2023-10-09 17:45:50
1
3,735
Chris
77,260,839
1,874,170
Dependency injection with CFFI?
<p>I'm trying to make a toy/proof-of-concept <a href="https://cffi.readthedocs.io/" rel="nofollow noreferrer">CFFI</a>-based Python library exposing the <a href="https://github.com/pqclean/PQClean" rel="nofollow noreferrer">PQClean</a> implementation of <a href="https://classic.mceliece.org/" rel="nofollow noreferrer">Classic McEliece</a> KEM version 6960119f.</p> <p>The documentation notes that you must &quot;Provide instantiations of any of the common cryptographic algorithms used by the implementation&quot;. This is true, because this code:</p> <pre class="lang-py prettyprint-override"><code>from cffi import FFI ffibuilder = FFI() ffibuilder.cdef(&quot;&quot;&quot; int PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_enc( uint8_t *c, uint8_t *key, const uint8_t *pk ); int PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_dec( uint8_t *key, const uint8_t *c, const uint8_t *sk ); int PQCLEAN_MCELIECE6960119F_CLEAN_crypto_kem_keypair ( uint8_t *pk, uint8_t *sk ); &quot;&quot;&quot;) ffibuilder.set_source(&quot;_libmceliece6960119f&quot;, &quot;&quot;&quot; #include &quot;api.h&quot; &quot;&quot;&quot;, library_dirs=[&quot;Lib/PQClean/crypto_kem/mceliece6960119f/clean&quot;], include_dirs=[&quot;Lib/PQClean/crypto_kem/mceliece6960119f/clean&quot;], libraries=[&quot;libmceliece6960119f_clean&quot;]) if __name__ == &quot;__main__&quot;: import os assert 'x64' == os.environ['VSCMD_ARG_TGT_ARCH'] == os.environ['VSCMD_ARG_HOST_ARCH'] ffibuilder.compile(verbose=True) #^ Code crashes here from _libmceliece6960119f import lib as libmceliece6960119f ... 
</code></pre> <p>yields MSVC error 1120 due to lacking implementations for <code>void shake256(uint8_t *output, size_t outlen, const uint8_t *input, size_t inlen);</code> and <code>int randombytes(uint8_t *output, size_t n);</code>:</p> <pre><code> Creating library .\Release\_libmceliece6960119f.cp311-win_amd64.lib and object .\Release\_libmceliece6960119f.cp311-win_amd64.exp LINK : warning LNK4098: defaultlib 'LIBCMT' conflicts with use of other libs; use /NODEFAULTLIB:library libmceliece6960119f_clean.lib(operations.obj) : error LNK2001: unresolved external symbol shake256 libmceliece6960119f_clean.lib(operations.obj) : error LNK2001: unresolved external symbol PQCLEAN_randombytes libmceliece6960119f_clean.lib(encrypt.obj) : error LNK2001: unresolved external symbol PQCLEAN_randombytes .\_libmceliece6960119f.cp311-win_amd64.pyd : fatal error LNK1120: 2 unresolved externals … Traceback (most recent call last): File &quot;C:\Users\████\Documents\code\███\cffi_compile.py&quot;, line 34, in &lt;module&gt; ffibuilder.compile(verbose=True) File &quot;C:\Users\████\.venvs\███\Lib\site-packages\cffi\api.py&quot;, line 725, in compile return recompile(self, module_name, source, tmpdir=tmpdir, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\████\.venvs\███\Lib\site-packages\cffi\recompiler.py&quot;, line 1564, in recompile outputfilename = ffiplatform.compile('.', ext, ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\████\.venvs\███\Lib\site-packages\cffi\ffiplatform.py&quot;, line 20, in compile outputfilename = _build(tmpdir, ext, compiler_verbose, debug) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\████\.venvs\███\Lib\site-packages\cffi\ffiplatform.py&quot;, line 54, in _build raise VerificationError('%s: %s' % (e.__class__.__name__, e)) cffi.VerificationError: LinkError: command 'C:\\Program Files (x86)\\Microsoft Visual Studio\\2022\\BuildTools\\VC\\Tools\\MSVC\\14.37.32822\\bin\\HostX86\\x64\\link.exe' failed with exit 
code 1120 </code></pre> <p>I <strong>don't</strong> want to choose an implementation for either of these functions at compile-time. I want to <a href="https://en.wikipedia.org/wiki/Dependency_injection" rel="nofollow noreferrer">force the consumer of this library to provide one early in run-time</a>, ideally as Python functions <code>Callable[[bytes], bytes], Callable[[int], bytes]</code> (neither function needs to implement any streaming capabilities because the <code>shake256_inc_…</code> suite apparently isn't required by the <code>PQCLEAN_MCELIECE6960119F_crypto_kem_…</code> suite.)</p> <p>How can this be done?</p>
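CFFI's own mechanism for run-time injection is `extern "Python"` declarations - and, because the PQClean objects reference these symbols from C, the non-static variant `extern "Python+C"`, which exports the generated C functions so the linker can resolve `shake256` and `PQCLEAN_randombytes` against them. The consumer then registers Python callables with `@ffi.def_extern()` before the first KEM call. A sketch (it only emits and inspects the generated C source, so no compiler or PQClean tree is needed here; the real build adds the library back via `library_dirs`/`libraries`):

```python
import pathlib
import tempfile

from cffi import FFI

ffibuilder = FFI()
ffibuilder.cdef("""
    extern "Python+C" {
        void shake256(uint8_t *output, size_t outlen,
                      const uint8_t *input, size_t inlen);
        int PQCLEAN_randombytes(uint8_t *output, size_t n);
    }
""")
ffibuilder.set_source("_libmceliece6960119f",
                      "#include <stdint.h>\n#include <stddef.h>")

out = pathlib.Path(tempfile.mkdtemp()) / "_libmceliece6960119f.c"
ffibuilder.emit_c_code(str(out))
src = out.read_text()
print("shake256" in src and "PQCLEAN_randombytes" in src)

# At run time the consumer injects implementations before any KEM call
# (my_shake256 / my_randombytes are the injected Callables - names hypothetical):
#
#   from _libmceliece6960119f import ffi, lib
#
#   @ffi.def_extern()
#   def shake256(output, outlen, input, inlen):
#       data = bytes(ffi.buffer(input, inlen))
#       ffi.memmove(output, my_shake256(data, outlen), outlen)
#
#   @ffi.def_extern()
#   def PQCLEAN_randombytes(output, n):
#       ffi.memmove(output, my_randombytes(n), n)
#       return 0
```

If no `@ffi.def_extern()` implementation has been registered when the C code first calls through, cffi raises an error at that point rather than at link time - which is exactly the "fail early at run time unless the consumer provides one" behaviour wanted here.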
<python><dependency-injection><python-cffi>
2023-10-09 17:45:27
1
1,117
JamesTheAwesomeDude