Dataset schema (column: type, observed range):

QuestionId: int64 (74.8M to 79.8M)
UserId: int64 (56 to 29.4M)
QuestionTitle: string (15 to 150 chars)
QuestionBody: string (40 to 40.3k chars)
Tags: string (8 to 101 chars)
CreationDate: date (2022-12-10 09:42:47 to 2025-11-01 19:08:18)
AnswerCount: int64 (0 to 44)
UserExpertiseLevel: int64 (301 to 888k)
UserDisplayName: string (3 to 30 chars)
77,251,814
21,025,283
When running tests in Flask RuntimeError: Either 'SQLALCHEMY_DATABASE_URI' or 'SQLALCHEMY_BINDS' must be set
<p>I use SQLAlchemy for database management in a Flask application. When I run integration tests with pytest, this error is raised: <code>RuntimeError: Either 'SQLALCHEMY_DATABASE_URI' or 'SQLALCHEMY_BINDS' must be set</code>.</p> <p>I set the variable <code>SQLALCHEMY_DATABASE_URI=&quot;mysql+mysqlconnector://root:root@localhost/app&quot;</code> in the <code>.flaskenv</code> file. Manual tests in Postman all went well, but as soon as I run the tests with pytest the error appears. Here is the code snippet:</p> <pre><code>import pytest
import json

from backend.app import app

@pytest.fixture(autouse=True)
def load_env_vars():
    import dotenv
    dotenv.load_dotenv(dotenv_path=&quot;.flaskenv&quot;)

@pytest.fixture
def client():
    app.config[&quot;TESTING&quot;] = True
    with app.test_client() as client:
        yield client

# Test login endpoint
def test_login_user(client):
    payload = {&quot;username&quot;: &quot;testuser&quot;, &quot;password&quot;: &quot;password123&quot;}
    response = client.post(&quot;/login&quot;, json=payload)
    assert response.status_code == 200
    assert &quot;token&quot; in response.json
</code></pre> <p>Here is the stack trace after running <code>pytest tests/</code>:</p> <pre><code>========================== test session starts ===========================
platform win32 -- Python 3.7.4, pytest-7.4.2, pluggy-1.2.0
rootdir: D:\link_b_flaskapp\backend
plugins: dotenv-0.5.2
collected 0 items / 1 error

================================= ERRORS =================================
_______________ ERROR collecting tests/test_integration.py _______________
.venv\lib\site-packages\_pytest\runner.py:341: in from_call
    result: Optional[TResult] = func()
.venv\lib\site-packages\_pytest\runner.py:372: in &lt;lambda&gt;
    call = CallInfo.from_call(lambda: list(collector.collect()), &quot;collect&quot;)
.venv\lib\site-packages\_pytest\python.py:531: in collect
    self._inject_setup_module_fixture()
.venv\lib\site-packages\_pytest\python.py:545: in _inject_setup_module_fixture
    self.obj, (&quot;setUpModule&quot;, &quot;setup_module&quot;)
.venv\lib\site-packages\_pytest\python.py:310: in obj
    self._obj = obj = self._getobj()
.venv\lib\site-packages\_pytest\python.py:528: in _getobj
    return self._importtestmodule()
.venv\lib\site-packages\_pytest\python.py:617: in _importtestmodule
    mod = import_path(self.path, mode=importmode, root=self.config.rootpath)
.venv\lib\site-packages\_pytest\pathlib.py:567: in import_path
    importlib.import_module(module_name)
C:\Python\Python37\lib\importlib\__init__.py:127: in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
&lt;frozen importlib._bootstrap&gt;:1006: in _gcd_import
    ???
&lt;frozen importlib._bootstrap&gt;:983: in _find_and_load
    ???
&lt;frozen importlib._bootstrap&gt;:967: in _find_and_load_unlocked
    ???
&lt;frozen importlib._bootstrap&gt;:677: in _load_unlocked
    ???
.venv\lib\site-packages\_pytest\assertion\rewrite.py:178: in exec_module
    exec(co, module.__dict__)
tests\test_integration.py:4: in &lt;module&gt;
    from backend.app import app
app.py:5: in &lt;module&gt;
    app = create_app()
__init__.py:93: in create_app
    db.init_app(app)
.venv\lib\site-packages\flask_sqlalchemy\extension.py:311: in init_app
    &quot;Either 'SQLALCHEMY_DATABASE_URI' or 'SQLALCHEMY_BINDS' must be set.&quot;
E   RuntimeError: Either 'SQLALCHEMY_DATABASE_URI' or 'SQLALCHEMY_BINDS' must be set.
======================== short test summary info =========================
ERROR tests/test_integration.py - RuntimeError: Either 'SQLALCHEMY_DATABASE_URI' or 'SQLALCHEMY_BINDS' m...
!!!!!!!!!!!!!!!!! Interrupted: 1 error during collection !!!!!!!!!!!!!!!!!
============================ 1 error in 1.81s ============================
</code></pre> <p>Here is the <code>__init__.py</code> of the app:</p> <pre><code>from flask import Flask, jsonify
from datetime import timedelta
from flask_sqlalchemy import SQLAlchemy

# local imports
from .config import app_config
from .extensions import jwt, db, bcrypt, migrate
from .models import TokenBlocklist

def create_app():
    app = Flask(__name__)
    app.config[&quot;SQLALCHEMY_DATABASE_URI&quot;] = os.environ.get(&quot;SQLALCHEMY_DATABASE_URI&quot;)
    app.config[&quot;JWT_SECRET_KEY&quot;] = os.environ.get(&quot;JWT_SECRET_KEY&quot;)
    app.config[&quot;SQLALCHEMY_TRACK_MODIFICATIONS&quot;] = os.environ.get(&quot;DATABASE_TRACK_MODIFICATIONS&quot;)
    app.config[&quot;JWT_ALGORITHM&quot;] = os.environ.get(&quot;JWT_ALGORITHM&quot;)
    app.config[&quot;FLASK_DEBUG&quot;] = os.environ.get(&quot;FLASK_DEBUG&quot;)

    jwt.init_app(app)
    db.init_app(app)
    bcrypt.init_app(app)
    migrate.init_app(app, db)

    # Register blueprints and initialize extensions here
    from .routes.user_routes import user_bp
    from .routes.admin_routes import admin_bp
    from .routes.auth_routes import auth_bp

    app.register_blueprint(user_bp, url_prefix=&quot;/user&quot;)
    app.register_blueprint(auth_bp, url_prefix=&quot;/auth&quot;)
    app.register_blueprint(admin_bp, url_prefix=&quot;/admin&quot;)

    return app
</code></pre> <p><code>.flaskenv</code> file:</p> <pre><code>FLASK_DEBUG = 1
SQLALCHEMY_DATABASE_URI = &quot;mysql+mysqlconnector://root:root@localhost/app&quot;
SQLALCHEMY_TRACK_MODIFICATIONS = False

# jwt configuration
JWT_SECRET_KEY = &quot;secret_key&quot;
JWT_ALGORITHM = &quot;HS256&quot;
</code></pre>
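A detail worth noting: the autouse fixture runs too late. The line `from backend.app import app` executes `create_app()` while pytest is still *collecting* the test module, before any fixture has run, so the environment variables are not set yet. One common workaround (a sketch; the file name follows pytest convention, the SQLite URI is a placeholder for whatever test database applies) is to set the variables in `conftest.py`, which pytest imports before the test modules:

```python
# conftest.py (sketch): pytest imports this file before collecting test
# modules, so anything set here exists by the time `backend.app` is imported
# and the module-level `create_app()` call reads os.environ.
import os

os.environ.setdefault(
    "SQLALCHEMY_DATABASE_URI",
    "sqlite:///:memory:",  # placeholder test database URI
)
os.environ.setdefault("JWT_SECRET_KEY", "test_secret")
```

Loading `.flaskenv` with `dotenv.load_dotenv(...)` at the top of `conftest.py` would achieve the same ordering.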
<python><flask><pytest>
2023-10-07 23:29:07
1
333
abdou_dev
77,251,711
15,781,591
How to stack multiple columns into one and fill in with column headers
<p>I have the following dataframe:</p> <pre><code>     Date    Blue  Red  Green
-----------------------------------------
1    1/1/14   55    34    34
2    1/2/14   36    35    23
3    1/3/14   23    46    43
4    1/4/14   47    34    55
</code></pre> <p>I want to &quot;stack&quot; these color-named columns into a single &quot;Color&quot; column, with each corresponding value attached, so that each row has a date (in chronological order), a color label, and the attributed value, with the dates repeating for each color.</p> <p>So I am trying to get this dataframe:</p> <pre><code>     Date    Color
--------------------------
1    1/1/14  Blue
2    1/2/14  Blue
3    1/3/14  Blue
4    1/4/14  Blue
5    1/1/14  Red
6    1/2/14  Red
7    1/3/14  Red
8    1/4/14  Red
9    1/1/14  Green
10   1/2/14  Green
11   1/3/14  Green
12   1/4/14  Green
------------------------
</code></pre> <p>I am trying to &quot;stack&quot; my columns by color, but I am not sure whether I should be stacking, melting, or concatenating. Which method should I use to get all of my colors and their values into one column?</p>
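What the question describes is essentially what `DataFrame.melt` produces: the color columns become a label column plus a value column, keyed by `Date`. A sketch with the sample data (column names as in the question):

```python
import pandas as pd

# Sample frame mirroring the question's data
df = pd.DataFrame({
    "Date": ["1/1/14", "1/2/14", "1/3/14", "1/4/14"],
    "Blue": [55, 36, 23, 47],
    "Red": [34, 35, 46, 34],
    "Green": [34, 23, 43, 55],
})

# melt keeps Date and turns each remaining column into (Color, Value) rows
long_df = df.melt(id_vars="Date", var_name="Color", value_name="Value")
print(long_df)
```

`stack` can produce the same shape via a MultiIndex, but `melt` maps most directly onto "wide color columns to one labeled column".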
<python><pandas><stack>
2023-10-07 22:30:22
1
641
LostinSpatialAnalysis
77,251,464
8,030,794
How to merge rows with nearest values in Dataframe
<p>I have a <code>DataFrame</code> like this:</p> <pre><code>index  B
0      1
1      2
2      5
3      6
4      7
5      10
</code></pre> <p>I need to merge rows whose difference is less than or equal to 2, keep the row with the smaller value, and record the count of merged rows.</p> <p>The result should look like this:</p> <pre><code>index  B   count
0      1   2
1      5   3
2      10  1
</code></pre> <p>How can this be solved using pandas?</p>
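One way to express this in pandas is to start a new group whenever the gap between consecutive values exceeds 2, then take each group's minimum and size. A sketch with the sample data:

```python
import pandas as pd

df = pd.DataFrame({"B": [1, 2, 5, 6, 7, 10]})

# diff() > 2 marks the first row of each new cluster; cumsum() turns those
# marks into group ids: [0, 0, 1, 1, 1, 2] for this data.
group_id = df["B"].diff().gt(2).cumsum()

# Keep the smallest value per group and how many rows were merged into it.
out = df.groupby(group_id)["B"].agg(B="min", count="size").reset_index(drop=True)
print(out)
```

This assumes the column is already sorted, as in the example; otherwise a `sort_values("B")` first would be needed.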
<python><pandas>
2023-10-07 20:45:30
2
465
Fresto
77,251,444
5,339,430
How to set field `pint.Quantity._magnitude` of existing object?
<p>Using the <a href="https://pint.readthedocs.io/en/stable/" rel="nofollow noreferrer">pint library</a>, I profiled my code and found that a bottleneck is creating new <code>Quantity</code> objects with the constructor, like this:</p> <pre class="lang-py prettyprint-override"><code>import pint

ureg = pint.UnitRegistry()
...
quantity = pint.Quantity(-2.5, ureg.kcal / ureg.mol)
quantities[i] = quantity
</code></pre> <p>I have a list <code>quantities</code> of <code>Quantity</code> objects whose size doesn't change. I would like to edit the <code>Quantity</code> objects in place to try to speed this up. Since the units will never change (they will always be kcal/mol), I had hoped to be able simply to set the magnitude instead:</p> <pre class="lang-py prettyprint-override"><code>quantities[i].magnitude = -2.5
</code></pre> <p>However, this results in this error:</p> <pre><code>---&gt; 43 quantities[i].magnitude = -2.5

AttributeError: can't set attribute
</code></pre> <p>I know <code>Quantity</code> objects are mutable, so I hope there is some way to edit the <code>magnitude</code> attribute, but I can't tell how from the documentation. I don't see a method to do it.</p> <p>Of course, Python lets you edit &quot;private&quot; fields directly, so I could do this:</p> <pre class="lang-py prettyprint-override"><code>quantities[i]._magnitude = -2.5
</code></pre> <p>This does speed up the code significantly. But presumably that field is private for a reason, and something might break if I set <code>_magnitude</code> directly, i.e., perhaps some derived fields that should be recomputed don't get recomputed?</p>
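The error itself is just Python property mechanics: `Quantity.magnitude` is defined with a getter but no setter, so assigning to it raises `AttributeError`, while assigning to the underlying `_magnitude` slot bypasses the property entirely. A toy illustration of the pattern (this is a stand-in class, not pint's actual implementation):

```python
class Qty:
    """Toy stand-in for pint.Quantity: magnitude is a read-only property."""

    def __init__(self, magnitude, units):
        self._magnitude = magnitude
        self._units = units

    @property
    def magnitude(self):
        return self._magnitude


q = Qty(-2.5, "kcal/mol")
try:
    q.magnitude = 3.0  # raises AttributeError: the property has no setter
except AttributeError as e:
    print(e)

q._magnitude = 3.0  # bypasses the property, like the question's workaround
print(q.magnitude)
```

Whether bypassing the property is safe depends on pint's internals; since the units never change here, another angle worth considering is holding the magnitudes as one array wrapped in a single array-valued `Quantity`, so per-element object construction disappears altogether.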
<python><pint>
2023-10-07 20:39:57
0
355
Dave Doty
77,251,378
11,653,374
Observing the value and percentage change on a line graph like what Google does
<p>If you search <a href="https://en.wikipedia.org/wiki/Apple_Inc." rel="nofollow noreferrer">Apple</a> stock on Google you will be taken to <a href="https://www.google.com/search?q=apple%20stock&amp;rlz=1C1RXQR_enUS1073US1073&amp;oq=apple%20sto&amp;gs_lcrp=EgZjaHJvbWUqDAgBECMYJxidAhiKBTIGCAAQRRg5MgwIARAjGCcYnQIYigUyCQgCECMYJxiKBTITCAMQLhiDARjHARixAxjRAxiABDISCAQQABgUGIMBGIcCGLEDGIAEMg0IBRAuGK8BGMcBGIAEMgYIBhBFGDwyBggHEEUYPNIBCDM3NzJqMWo3qAIAsAIA&amp;sourceid=chrome&amp;ie=UTF-8&amp;bshm=rimc/1" rel="nofollow noreferrer">this page</a>. On the chart, you can left-click, hold, and move to the right or left. If you do so, the change in percentage as well as the change in value is shown to you as you move around.</p> <p>Is it possible to create the above capability, exactly as described, in Python? I tried with the <a href="https://pypi.org/project/plotly/" rel="nofollow noreferrer">Plotly</a> package, but I could not do it.</p> <p>I want to do it on the following graph:</p> <pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
x = np.random.randn(2000)
y = np.cumsum(x)
df = pd.DataFrame(y, columns=['value'])

fig, ax = plt.subplots(figsize=(20, 4))
df['value'].plot(ax=ax)
plt.show()
</code></pre> <p>In the comment section below, <a href="https://stackoverflow.com/questions/77251378/observing-the-value-and-percentage-change-on-a-line-graph-like-what-google-does/77274264#comment136189093_77251378">Joseph suggested</a> using the <a href="https://pyqtgraph.readthedocs.io/en/latest/getting_started/introduction.html#examples" rel="nofollow noreferrer">PyQtgraph pool of examples</a>, but this is my first time using this package and I am not sure how to do it.</p>
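Matplotlib can get part of the way there with its event API: record the y value where a left-click starts, then update a readout as the mouse moves. A minimal sketch on the question's own graph (the `Agg` backend is only so the snippet runs headless; interactively, drop that line and call `plt.show()`):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

np.random.seed(0)
df = pd.DataFrame(np.cumsum(np.random.randn(2000)), columns=["value"])

fig, ax = plt.subplots(figsize=(20, 4))
df["value"].plot(ax=ax)

anchor = {}  # x index where the drag started


def change(start, end):
    """Absolute and percentage change between two y values."""
    return end - start, (end - start) / start * 100


def on_press(event):
    if event.inaxes is ax:
        anchor["x"] = int(round(event.xdata))


def on_move(event):
    # Only while dragging with the left button inside the axes
    if "x" in anchor and event.inaxes is ax and event.button == 1:
        start = df["value"].iloc[anchor["x"]]
        end = df["value"].iloc[int(round(event.xdata))]
        delta, pct = change(start, end)
        ax.set_title(f"{delta:+.2f} ({pct:+.2f}%)")
        fig.canvas.draw_idle()


fig.canvas.mpl_connect("button_press_event", on_press)
fig.canvas.mpl_connect("motion_notify_event", on_move)
```

This only sketches the mechanics; replicating Google's polished overlay (shaded drag region, tooltip) would take more drawing work, and PyQtGraph's built-in crosshair/mouse-interaction examples may indeed be a better starting point for that.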
<python><visualization><graph-visualization>
2023-10-07 20:17:41
1
728
Saeed
77,251,171
2,386,113
How do I pass a function pointer to cuPy Raw kernel in Python?
<p>I am using <strong>cuPy</strong> to call raw CUDA kernels in Python scripts. I am able to load simple standalone CUDA kernels in my Python script, but I don't know the syntax to use if my CUDA kernel requires a function pointer as an argument. How do I pass a function pointer to a device function?</p> <p><strong>Sample CUDA kernel requiring a function pointer as argument:</strong></p> <pre><code>extern &quot;C&quot; {

// substraction
__device__ float substractValues(float a, float b)
{
    float value = a - b;
    return value;
}

typedef float(*FuncPtrSubstraction)(float, float, float, float, float);
__device__ FuncPtrSubstraction d_ptrSubstraction = substractValues;

// addition
__device__ float addValues(float a, float b)
{
    float sum = a + b;
    return sum;
}

typedef float(*FuncPtrAddition)(float, float, float, float, float);
__device__ FuncPtrAddition d_ptrAddition = addValues;

// main Kernel
__global__ void applyMatricesOperation(float (*funcPtrOperation)(float, float), const float* A, const float* B, float* C, int rows, int cols)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;

    if (col &lt; cols &amp;&amp; row &lt; rows)
    {
        int index = row * cols + col;
        C[index] = funcPtrOperation(A[index], B[index]); // A[index] + B[index]; OR A[index] - B[index]
        printf(&quot;computed...\n&quot;);
    }
}

}
</code></pre> <p><strong>UPDATE:</strong> I tried using <strong>cuPy</strong>'s <code>RawModule()</code> to load my CUDA code, and then used <code>get_global()</code> to get my already defined function pointer on the device side. The code runs, <strong>but I get an exception while trying to read the results</strong>:</p> <blockquote> <p>cudaErrorIllegalAddress: an illegal memory access was encountered</p> </blockquote> <pre><code>import cupy as cp

# Define the CUDA kernel code
kernel_code = &quot;&quot;&quot;
extern &quot;C&quot; {

__device__ float addValues(float a, float b)
{
    float sum = a + b;
    return sum;
}

typedef float(*FuncPtrAddition)(float, float);
__device__ FuncPtrAddition d_ptrAddition = addValues;

__global__ void addMatrices(float (*funcPtrAddValues)(float, float), const float* A, const float* B, float* C, int rows, int cols)
{
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;

    printf(&quot;computed...&quot;);

    if (col &lt; cols &amp;&amp; row &lt; rows)
    {
        int index = row * cols + col;
        C[index] = funcPtrAddValues(A[index], B[index]);
    }
}

}
&quot;&quot;&quot;

# Create a RawModule from the kernel code
raw_module = cp.RawModule(code=kernel_code)

# Load the addMatrices kernel from the module
addMatrices_kernel = raw_module.get_function(&quot;addMatrices&quot;)

# Define the dimensions for your matrix
rows, cols = 3, 3

# Allocate device memory for input and output arrays
A = cp.random.rand(rows, cols).astype(cp.float32)
B = cp.random.rand(rows, cols).astype(cp.float32)
C = cp.empty((rows, cols), dtype=cp.float32)

# Define grid and block dimensions
block_dim = (3, 3)
grid_dim = (rows // block_dim[0], cols // block_dim[1])

# Launch the kernel, passing the function pointer as an argument
funcPtr = raw_module.get_global(&quot;d_ptrAddition&quot;)
addMatrices_kernel(grid_dim, block_dim, (funcPtr, A, B, C, rows, cols))

# Copy the result back to the host
result = C.get()
print(result)
</code></pre>
<python><cuda><cupy>
2023-10-07 19:11:35
1
5,777
skm
77,250,681
8,708,364
Can't upload video over 5GB on App Engine
<p>I recently decided to move my software's hosting to Google App Engine (since it's serverless), but I am running into a problem where I cannot upload large video files. This is a big problem, since the point of the software is handling video files.</p> <p>HTML:</p> <pre class="lang-html prettyprint-override"><code>&lt;center&gt;
  &lt;form method=&quot;POST&quot; action='/tutorial' class=&quot;dropzone dz-clickable&quot; id=&quot;dropper&quot; enctype=&quot;multipart/form-data&quot;&gt;
  &lt;/form&gt;
&lt;/center&gt;

&lt;script type=&quot;application/javascript&quot;&gt;
  Dropzone.options.dropper = {
    paramName: 'file',
    chunking: true,
    maxFiles: 1,
    dictDefaultMessage: &quot;Upload Video&quot;,
    acceptedFiles: &quot;.mp4&quot;,
    forceChunking: true,
    url: '/tutorial',
    maxFilesize: 10240, // megabytes
    chunkSize: 20000000, // bytes
    retryChunks: true,
    retryChunksLimit: 3,
    autoQueue: false,
    ...
&lt;/script&gt;
</code></pre> <p>Python:</p> <pre class="lang-py prettyprint-override"><code>@app.route('/tutorial', methods=['GET', 'POST'])
def tutorial():
    if request.method == &quot;GET&quot;:
        return render_template('tutorial.html', static_folder='static')

    f = request.files['file']
    session['filename'] = secure_filename(f.filename)

    gcs_client = storage.Client()
    storage_bucket = gcs_client.get_bucket('myfiles')

    tmp_file_path = '/tmp/' + session['filename']
    fsr = f.stream.read()
    with open(tmp_file_path, &quot;ab&quot;) as out_file:
        out_file.seek(int(request.form[&quot;dzchunkbyteoffset&quot;]))
        out_file.write(fsr)

    if int(request.form['dzchunkindex']) + 1 == int(request.form['dztotalchunkcount']):
        blob = storage_bucket.blob(session['filename'])
        blob.upload_from_filename(tmp_file_path, client=gcs_client, timeout=5000)
        return redirect(&quot;/video&quot;)
    else:
        return make_response((&quot;Chunk upload successful&quot;, 200))

    return redirect(&quot;/video&quot;)
</code></pre> <p><code>app.yaml</code>:</p> <pre class="lang-yaml prettyprint-override"><code>runtime: python
entrypoint: gunicorn -b :$PORT main:app
manual_scaling:
  instances: 1
runtime_config:
  operating_system: ubuntu22
  runtime_version: &quot;3.11&quot;
resources:
  cpu: 8
  memory_gb: 32
  disk_size_gb: 20
env: flex
</code></pre> <p>I already have far more CPU and RAM than this upload should require; I am not sure why the workers keep timing out and exiting.</p> <p>Error:</p> <pre class="lang-none prettyprint-override"><code>[CRITICAL] WORKER TIMEOUT (pid:51)
[INFO] Worker exiting (pid: 51)
[INFO] Booting worker with pid: 209
</code></pre>
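One thing worth checking (a sketch, not a confirmed fix for this deployment): gunicorn's master kills any synchronous worker that does not respond within its worker timeout, 30 seconds by default, which matches the `[CRITICAL] WORKER TIMEOUT` log line. Since each chunk write and the final GCS upload are long synchronous requests, raising the timeout in the `app.yaml` entrypoint is a reasonable first experiment (the 300 is illustrative; `--timeout 0` disables the limit entirely):

```yaml
# app.yaml (sketch): longer gunicorn worker timeout for slow chunk uploads
entrypoint: gunicorn -b :$PORT --timeout 300 --workers 2 main:app
```

Separately, App Engine's front end caps individual request sizes (32 MB on many configurations), which is why chunked uploads or resumable uploads direct to Cloud Storage via signed URLs are the usual pattern for multi-gigabyte files.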
<python><flask><google-cloud-platform><google-app-engine><file-upload>
2023-10-07 16:40:10
0
71,788
U13-Forward
77,250,667
10,118,061
Web scraping through Selenium only loads the skeleton of the page and does not populate the data
<p>I am developing a Python/Selenium script to scrape publicly available data on a website.</p> <p>Within the website, there is a &quot;Search&quot; button which, when clicked, should bring up hundreds of search results.</p> <p>However, upon initiating a click through Selenium, only the skeleton of the search results is loaded, and the actual data is never populated into the skeleton.</p> <p><a href="https://i.sstatic.net/VQABS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VQABS.png" alt="enter image description here" /></a></p> <p>When I check the network tab in the Chrome console, the data is being fetched perfectly; it is just not being populated onto the webpage.</p> <p><a href="https://i.sstatic.net/E36wr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E36wr.png" alt="enter image description here" /></a></p> <p>The console logs throw this error:</p> <pre><code>Stacktrace:
GetHandleVerifier [0x00007FF7D8D67D12+55474]
(No symbol) [0x00007FF7D8CD77C2]
(No symbol) [0x00007FF7D8B8E0EB]
(No symbol) [0x00007FF7D8BCEBAC]
(No symbol) [0x00007FF7D8BCED2C]
(No symbol) [0x00007FF7D8C09F77]
(No symbol) [0x00007FF7D8BEF19F]
(No symbol) [0x00007FF7D8C07EF2]
(No symbol) [0x00007FF7D8BEEF33]
(No symbol) [0x00007FF7D8BC3D41]
(No symbol) [0x00007FF7D8BC4F84]
GetHandleVerifier [0x00007FF7D90CB762+3609346]
GetHandleVerifier [0x00007FF7D9121A80+3962400]
GetHandleVerifier [0x00007FF7D9119F0F+3930799]
GetHandleVerifier [0x00007FF7D8E03CA6+694342]
(No symbol) [0x00007FF7D8CE2218]
(No symbol) [0x00007FF7D8CDE484]
(No symbol) [0x00007FF7D8CDE5B2]
(No symbol) [0x00007FF7D8CCEE13]
BaseThreadInitThunk [0x00007FFB49117614+20]
RtlUserThreadStart [0x00007FFB4A3026B1+33]
</code></pre> <p>I have tried making the script sleep for 10, 20, 30, 60, and 120 seconds. Yet the page keeps showing me a skeleton state and fails to populate the data.</p> <p>Further on, the script tries to extract data from the search results, and I have tried using <code>WebDriverWait</code> to wait for an element's existence or visibility on the web page.</p> <p>Some other posts suggested adding headers to the Chrome driver; that too did not produce the desired results.</p> <p>All of this results in a skeleton loading state, and then the script times out with the error in the console as mentioned above.</p> <p>Note of interest: the script works perfectly fine on some devices, but fails to run on certain other devices, which has been documented <a href="https://github.com/wimpywarlord/appliFLY/issues" rel="nofollow noreferrer">here</a>.</p> <p>The code base for the script can be found <a href="https://github.com/wimpywarlord/appliFLY/tree/main/assets/script" rel="nofollow noreferrer">here</a>; it might be easy to reproduce the issue locally if you run the script.</p>
<python><selenium-webdriver><web-scraping><selenium-chromedriver>
2023-10-07 16:35:49
0
843
Kshitij Dhyani
77,250,637
8,230,132
cookiecutter conditional prompt and capture output to variable
<p>I am new to cookiecutter and am trying to create an interactive program, similar to the example below, to generate a config. The requirement is to conditionally prompt for sub-options when one prompt is answered <code>yes</code>, and to capture that value into a variable for later use. My desired menu is as below:</p> <pre><code>$ python -m cookiecutter .
  [1/5] Select your project name: (Short project name or application code): Example1
  [2/5] Select your project slug: (Example1):
  [3/5] Setting up DEV environment?
    1 - yes
    2 - no
    Choose from [1/2] (1): 1
    DEV Account Id (What is your AWS Dev account's id?): 1234567890
  [4/5] Setting up TST environment?
    1 - no
    2 - yes
    Choose from [1/2] (1):
  [5/5] Setting up PRD environment?
    1 - no
    2 - yes
    Choose from [1/2] (1):
</code></pre> <p><strong>Problem 1:</strong> In practice, the line <code>DEV Account Id (What is your AWS Dev account's id?)</code> gets printed only after all the prompts from <code>cookiecutter.json</code> complete, not right after answering <code>[3/5] Setting up DEV environment?</code>.</p> <p><strong>Problem 2:</strong> The Python script captures the value, but I am unable to import it into a cookiecutter variable for later use. <code>Error message: '_aws_dev_account_info' is undefined</code></p> <p><code>cookiecutter.json</code>:</p> <pre><code>{
  &quot;project_name&quot;: &quot;Short project name or application code&quot;,
  &quot;project_slug&quot;: &quot;{{ cookiecutter.project_name.lower().replace(' ', '-') }}&quot;,
  &quot;dev_account&quot;: [&quot;yes&quot;, &quot;no&quot;],
  &quot;tst_account&quot;: [&quot;no&quot;, &quot;yes&quot;],
  &quot;prd_account&quot;: [&quot;no&quot;, &quot;yes&quot;],
  &quot;__prompts__&quot;: {
    &quot;project_name&quot;: &quot;Select your project name:&quot;,
    &quot;project_slug&quot;: &quot;Select your project slug:&quot;,
    &quot;dev_account&quot;: &quot;Setting up DEV environment?&quot;,
    &quot;tst_account&quot;: &quot;Setting up TST environment?&quot;,
    &quot;prd_account&quot;: &quot;Setting up PRD environment?&quot;
  },
  &quot;aws_dev_account_id&quot;: &quot;{{ cookiecutter._aws_dev_account_id.strip() }}&quot;
}
</code></pre> <p>In <code>hooks/pre_gen_project.py</code> I have written sample code to show a conditional prompt only if the answer to <code>dev_account</code> is <code>yes</code>. This should return the value to cookiecutter, but that is not working for me, or I am making a mistake here:</p> <pre><code>if &quot;{{ cookiecutter.dev_account }}&quot; == &quot;yes&quot;:
    aws_dev_account_info = read_user_variable(&quot;\t_aws_dev_account_id&quot;, &quot;What is your AWS Dev account's id?&quot;)
    print(&quot;My value is {0}&quot;.format(aws_dev_account_info))
    &quot;&quot;&quot;{{ cookiecutter.update({&quot;_aws_dev_account_id&quot;: aws_dev_account_info}) }}&quot;&quot;&quot;
else:
    &quot;&quot;&quot;{{ cookiecutter.update() }}&quot;&quot;&quot;

sys.exit(0)
</code></pre>
<python><python-3.x><cookiecutter>
2023-10-07 16:29:42
0
703
Rio
77,250,622
10,291,435
pyspark dataframe takes too long to filter/transform into pandas df/ filter
<p>I have a pyspark dataframe with around 120k records and I am trying to do some operations on it. Some operations work fine, but when I try to write it to CSV, or convert it to pandas, or filter like this:</p> <pre><code>state_not_equal = df_h_with_distance_order_osm_P.filter(df_h_with_distance_order_osm['state'] != df_h_with_distance_order_osm['state_osm'])
</code></pre> <p>it takes a very long time and seems stuck, even though applying another condition on the same data works fine:</p> <pre><code>df_h_with_distance_order_osm_P = df_h_with_distance_order_osm_P.filter(df_h_with_distance_order_osm_P[&quot;city&quot;] == 'Hannover, Landeshauptstadt')
</code></pre> <p>I tried to cache the dataframe and persist it, but no luck. Any idea what I can do or check, please?</p> <p>Update:</p> <p>I added these lines to the config of the Spark cluster in Databricks:</p> <pre><code>spark.executor.memoryOverhead 4g
spark.executor.instances 4
spark.executor.memory 16g
spark.dynamicAllocation.enabled true
spark.sql.shuffle.partitions 100
</code></pre> <p>and I even set the write limit to just 4 rows, as follows:</p> <pre><code>df_h_with_distance_order_osm.limit(4).write.csv(&quot;/FileStore/tables/final_s.csv&quot;)
</code></pre> <p>and it is still stuck, even though <code>.show()</code> displays the data. I also want to mention that the same issue appears when I run on Colab Pro. Please help.</p> <p>Update again:</p> <p>When I tried to write only 4 rows it succeeded, but when I tried 500 rows, for example, it was still stuck. I am not sure what to do.</p>
<python><pyspark>
2023-10-07 16:26:00
0
1,699
Mee
77,250,557
4,399,016
Applying function to Pandas Data frame and getting result as a new data frame
<p>I have this code:</p> <pre><code>import yfinance as yF
import datetime
import pandas as pd

df = pd.DataFrame({'ID': ['1', '2'],
                   'Ticker': ['AIN', 'TILE'],
                   'Company': ['Albany International', 'Interface']})

def get_returns(tic, com):
    df_Stock = yF.download(tickers=tic, period=&quot;max&quot;, interval=&quot;1mo&quot;, prepost=False, repair=False)
    df_Stock[com + ' % Growth'] = df_Stock['High'] - df_Stock['Open']
    df_Stock[com + ' % Growth'] = (df_Stock[com + ' % Growth'] * 100) / df_Stock['Open']
    return df_Stock[com + ' % Growth']

get_returns('AIN', 'AIN')
</code></pre> <p>Everything works as expected up to this point.</p> <pre><code>df1 = df.apply(lambda x: get_returns(x.Ticker, x.Company), axis=1)
</code></pre> <p>Here I am trying to use the <code>get_returns()</code> function with pandas <code>apply</code> and a lambda on the dataframe <code>df</code> defined above. The desired output is another dataframe <code>df1</code> with 3 columns, namely:</p> <ol> <li>Date</li> <li>Company 1 - Albany International % Returns</li> <li>Company 2 - Interface % Returns</li> </ol> <p>If there are more rows in dataframe <code>df</code>, each company becomes a new column in <code>df1</code>, and its monthly return is the row value for the given time period under the Date column.</p>
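For the desired shape, building one Series per row and aligning them with `pd.concat(..., axis=1)` is usually easier to reason about than row-wise `apply` (which here returns a DataFrame of stacked Series rather than side-by-side columns). A sketch with a synthetic stand-in for `get_returns` (the real function would call `yF.download`; the dates and numbers below are made up purely for illustration):

```python
import pandas as pd

df = pd.DataFrame({"ID": ["1", "2"],
                   "Ticker": ["AIN", "TILE"],
                   "Company": ["Albany International", "Interface"]})

# Stand-in for get_returns(): a Date-indexed Series of % growth whose name
# becomes the column header after concat. Synthetic numbers for the sketch.
def get_returns(tic, com):
    dates = pd.to_datetime(["2023-08-01", "2023-09-01", "2023-10-01"])
    return pd.Series([1.5, -0.7, 2.1], index=dates, name=com + " % Growth")

# One Series per company row, aligned side by side on the shared Date index
series = [get_returns(t, c) for t, c in zip(df["Ticker"], df["Company"])]
df1 = pd.concat(series, axis=1).rename_axis("Date").reset_index()
print(df1)
```

With the real yfinance Series, `concat` aligns on the union of each ticker's dates, filling `NaN` where one company's history is shorter.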
<python><pandas><dataframe><apply>
2023-10-07 16:07:52
1
680
prashanth manohar
77,250,285
4,805,412
How to add Meta (Facebook) Pixel to web pages generated by Sphinx?
<p>I have obtained the <strong>Meta Pixel</strong> code from Meta, and I have a website built with the Python documentation tool <strong>Sphinx</strong>.</p> <p>Now I need to add the generated Meta Pixel code to the head section of my pages, right above <code>&lt;/head&gt;</code>.</p> <p>The result should be like this:</p> <pre class="lang-html prettyprint-override"><code>&lt;head&gt;
...
&lt;script&gt;Meta Pixel&lt;/script&gt;
&lt;/head&gt;
</code></pre> <p>How to do that?</p>
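Sphinx renders pages through Jinja templates, so one conventional way to inject markup just before `</head>` is to override the `extrahead` block of the theme's `layout.html` in a project template directory (this assumes `templates_path = ['_templates']` is set in `conf.py`). A sketch of `_templates/layout.html`:

```html
{% extends "!layout.html" %}
{% block extrahead %}
  {{ super() }}
  <script>/* paste the Meta Pixel snippet from Meta here */</script>
{% endblock %}
```

The `!layout.html` prefix tells Sphinx to extend the active theme's own layout rather than recursing into this override, and `{{ super() }}` keeps anything the theme already places in that block.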
<python><html><python-sphinx>
2023-10-07 14:54:01
1
4,842
Pavel Hanpari
77,249,985
5,800,969
google.auth.exceptions.RefreshError: ('invalid_grant: Bad Request', {'error': 'invalid_grant', 'error_description': 'Bad Request'})
<p>I have generated a developer_token, access_token, and refresh_token with googleads read permission. While using them in code via the google-ads-api library, I am getting the error below.</p> <p>Code:</p> <pre><code>from google.ads.googleads.client import GoogleAdsClient

googleads_client = GoogleAdsClient.load_from_storage(path=&quot;./googleads.yaml&quot;, version=&quot;v14&quot;)
</code></pre> <p>Error (raised at this line):</p> <pre><code>googleads_client = GoogleAdsClient.load_from_storage(path=&quot;./googleads.yaml&quot;, version=&quot;v14&quot;)
</code></pre> <p>I followed <a href="https://towardsdatascience.com/getting-adwords-kpis-reports-via-api-python-step-by-step-guide-245fc74d9d73" rel="nofollow noreferrer">this</a> tutorial to generate the developer token, access token, and refresh token.</p> <p>Here are the permissions in the app and a screenshot taken during authorization: <a href="https://i.sstatic.net/E6iyJ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/E6iyJ.png" alt="enter image description here" /></a></p> <p><a href="https://i.sstatic.net/HKRpG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HKRpG.png" alt="enter image description here" /></a></p>
<python><oauth-2.0><google-oauth><google-ads-api><google-ads-script>
2023-10-07 13:34:22
0
2,071
iamabhaykmr
77,249,730
206,253
Quickest way to swap 0 and 1 in a list (not numpy)
<p>What is the quickest non-numpy way of swapping 1 and 0 in a Python list?</p>
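For a list known to contain only 0s and 1s, one common approach is the arithmetic complement in a list comprehension (`x ^ 1` is an equivalent bitwise spelling):

```python
lst = [0, 1, 1, 0, 1]

# 1 - 0 == 1 and 1 - 1 == 0, so this flips every element
swapped = [1 - x for x in lst]
print(swapped)  # [1, 0, 0, 1, 0]
```

Relative speed of `1 - x`, `x ^ 1`, and `not x` variants depends on the Python version, so `timeit` on representative data is the honest way to pick a winner.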
<python><list>
2023-10-07 12:17:14
2
3,144
Nick
77,249,570
5,790,653
TypeError: list indices must be integers or slices, not str over a big and merged json file
<p>This is <code>combined.json</code>:</p> <pre><code>[ { &quot;count&quot;: 1, &quot;next&quot;: null, &quot;previous&quot;: null, &quot;results&quot;: [ { &quot;id&quot;: 5883, &quot;url&quot;: &quot;https://some.api.com/api/ipam/ip-addresses/5883/&quot;, &quot;display&quot;: &quot;1.1.1.130/24&quot;, &quot;family&quot;: { &quot;value&quot;: 4, &quot;label&quot;: &quot;IPv4&quot; }, &quot;address&quot;: &quot;1.1.1.130/24&quot;, &quot;vrf&quot;: null, &quot;tenant&quot;: null, &quot;status&quot;: { &quot;value&quot;: &quot;active&quot;, &quot;label&quot;: &quot;Active&quot; }, &quot;role&quot;: null, &quot;assigned_object_type&quot;: &quot;dcim.interface&quot;, &quot;assigned_object_id&quot;: 801, &quot;assigned_object&quot;: { &quot;id&quot;: 801, &quot;url&quot;: &quot;https://some.api.com/api/dcim/interfaces/801/&quot;, &quot;display&quot;: &quot;N2&quot;, &quot;device&quot;: { &quot;id&quot;: 123, &quot;url&quot;: &quot;https://some.api.com/api/dcim/devices/123/&quot;, &quot;display&quot;: &quot;A-F3-G23-15-Saeed Unit15&quot;, &quot;name&quot;: &quot;A-F3-G23-15-Saeed Unit15&quot; }, &quot;name&quot;: &quot;N2&quot;, &quot;cable&quot;: null, &quot;_occupied&quot;: false }, &quot;nat_inside&quot;: null, &quot;nat_outside&quot;: null, &quot;dns_name&quot;: &quot;&quot;, &quot;description&quot;: &quot;&quot;, &quot;tags&quot;: [], &quot;custom_fields&quot;: {}, &quot;created&quot;: &quot;2023-10-01&quot;, &quot;last_updated&quot;: &quot;2023-10-01T14:05:32.001606+03:30&quot; } ] }, { &quot;count&quot;: 1, &quot;next&quot;: null, &quot;previous&quot;: null, &quot;results&quot;: [ { &quot;id&quot;: 6883, &quot;url&quot;: &quot;https://some.api.com/api/ipam/ip-addresses/6883/&quot;, &quot;display&quot;: &quot;2.2.2.130/24&quot;, &quot;family&quot;: { &quot;value&quot;: 4, &quot;label&quot;: &quot;IPv4&quot; }, &quot;address&quot;: &quot;2.2.2.130/24&quot;, &quot;vrf&quot;: null, &quot;tenant&quot;: null, &quot;status&quot;: { &quot;value&quot;: &quot;active&quot;, 
&quot;label&quot;: &quot;Active&quot; }, &quot;role&quot;: null, &quot;assigned_object_type&quot;: &quot;dcim.interface&quot;, &quot;assigned_object_id&quot;: 901, &quot;assigned_object&quot;: { &quot;id&quot;: 901, &quot;url&quot;: &quot;https://some.api.com/api/dcim/interfaces/901/&quot;, &quot;display&quot;: &quot;N2&quot;, &quot;device&quot;: { &quot;id&quot;: 123, &quot;url&quot;: &quot;https://some.api.com/api/dcim/devices/223/&quot;, &quot;display&quot;: &quot;A-F3-G23-16-Saeed Unit16&quot;, &quot;name&quot;: &quot;A-F3-G23-16-Saeed Unit16&quot; }, &quot;name&quot;: &quot;N2&quot;, &quot;cable&quot;: null, &quot;_occupied&quot;: false }, &quot;nat_inside&quot;: null, &quot;nat_outside&quot;: null, &quot;dns_name&quot;: &quot;&quot;, &quot;description&quot;: &quot;&quot;, &quot;tags&quot;: [], &quot;custom_fields&quot;: {}, &quot;created&quot;: &quot;2023-10-01&quot;, &quot;last_updated&quot;: &quot;2023-10-01T14:05:32.001606+03:30&quot; } ] } ] </code></pre> <p>I want to access all elements in <code>[&quot;results&quot;][0][&quot;assigned_object&quot;][&quot;device&quot;][&quot;name&quot;]</code> with the following code (this code is given from my other question: <a href="https://stackoverflow.com/questions/77249378">how to access objects in python with for loop in different files</a>):</p> <pre><code>import json file_name = 'combined.json' with open(file_name) as file: data = json.load(file) data[&quot;results&quot;][0][&quot;assigned_object&quot;][&quot;device&quot;][&quot;name&quot;] ### OR EVEN THIS ### import json file_name = 'combined.json' with open(file_name) as file: data = json.load(file) for element in data[&quot;results&quot;]: print(element[&quot;assigned_object&quot;][&quot;device&quot;]['name']) </code></pre> <p>But I get this error:</p> <pre><code>Traceback (most recent call last): File &quot;&lt;stdin&gt;&quot;, line 1, in &lt;module&gt; TypeError: list indices must be integers or slices, not str </code></pre> <p>I just can be sure the 
syntax of <code>combined.json</code> is correct, but I'm not sure why my code doesn't print all of what I want.</p> <p>This is the expected output:</p> <pre><code>A-F3-G23-15-Saeed Unit15 A-F3-G23-16-Saeed Unit16 </code></pre>
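For reference, because `combined.json` is a JSON *array* of response objects, `json.load` returns a Python list, and indexing that list with a string key is exactly what raises the `TypeError` shown. A minimal, self-contained sketch of the two-level loop (using inline stand-in data shaped like the file above, rather than reading it from disk):

```python
import json

# Stand-in for combined.json: a JSON array of API response objects.
# Because the top level is a list, data["results"] fails with
# "list indices must be integers or slices, not str" -- loop (or index
# by position) over the outer list first, then over each "results".
raw = '''
[
  {"results": [{"assigned_object": {"device": {"name": "A-F3-G23-15-Saeed Unit15"}}}]},
  {"results": [{"assigned_object": {"device": {"name": "A-F3-G23-16-Saeed Unit16"}}}]}
]
'''
pages = json.loads(raw)

names = [item["assigned_object"]["device"]["name"]
         for page in pages
         for item in page["results"]]
print(names)  # -> ['A-F3-G23-15-Saeed Unit15', 'A-F3-G23-16-Saeed Unit16']
```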
<python>
2023-10-07 11:31:48
1
4,175
Saeed
77,249,499
1,795,245
Developing and debugging AWS Lambda Function: conda -> docker image using Visual Studio Code
<p>I'm working on a project that involves developing an AWS Lambda function in Python. In the end I want to use a Docker image containing my custom code. I want to use the following steps:</p> <ol> <li>Develop and debug the code locally using a conda virtual environment - No problem</li> <li>Debug the code within the Docker image. - No problem</li> <li>Push the Docker image to AWS ECR and deploy it as an AWS Lambda function. - No problem</li> </ol> <p>My question is what I need to do between step 1 and step 2 to switch from the conda environment to Docker and be able to start the code. I also want to be able to switch back (my assumption is to use the command &quot;Python: Select Interpreter&quot;). What do I need to do? Thanks!</p>
<python><amazon-web-services><docker><visual-studio-code>
2023-10-07 11:06:12
1
649
Jonas
77,249,422
7,149,485
Changing django model structure produces 1054 unkown column error
<p>I am trying to change the <code>model</code> structure (column names) of an existing <code>Django</code> app that was already working and had a database.</p> <p><strong>Original Files:</strong></p> <p>The original <code>model.py</code> file was:</p> <pre><code>from django.db import models from django.core.validators import MinLengthValidator class Breed(models.Model): name = models.CharField( max_length=200, validators=[MinLengthValidator(2, &quot;Breed must be greater than 1 character&quot;)] ) def __str__(self): return self.name class Cat(models.Model): nickname = models.CharField( max_length=200, validators=[MinLengthValidator(2, &quot;Nickname must be greater than 1 character&quot;)] ) weight = models.FloatField() food = models.CharField(max_length=300) breed = models.ForeignKey('Breed', on_delete=models.CASCADE, null=False) def __str__(self): return self.nickname </code></pre> <p>and the original <code>/templates/cats/cat_list.html</code> file was:</p> <pre><code>{% extends &quot;base_bootstrap.html&quot; %} {% block content %} &lt;h1&gt;Cat List&lt;/h1&gt; {% if cat_list %} &lt;ul&gt; {% for cat in cat_list %} &lt;li&gt; {{ cat.nickname }} ({{ cat.breed }}) (&lt;a href=&quot;{% url 'cats:cat_update' cat.id %}&quot;&gt;Update&lt;/a&gt; | &lt;a href=&quot;{% url 'cats:cat_delete' cat.id %}&quot;&gt;Delete&lt;/a&gt;) &lt;br/&gt; {{ cat.weight }} ({{ cat.food }} food) &lt;/li&gt; {% endfor %} &lt;/ul&gt; {% else %} &lt;p&gt;There are no cats in the library.&lt;/p&gt; {% endif %} &lt;p&gt; {% if breed_count &gt; 0 %} &lt;a href=&quot;{% url 'cats:cat_create' %}&quot;&gt;Add a cat&lt;/a&gt; {% else %} Please add a breed before you add a cat. 
{% endif %} &lt;/p&gt; &lt;p&gt; &lt;a href=&quot;{% url 'cats:breed_list' %}&quot;&gt;View breeds&lt;/a&gt; ({{ breed_count }}) | &lt;a href=&quot;{% url 'cats:breed_create' %}&quot;&gt;Add a breed&lt;/a&gt; &lt;/p&gt; {% endblock %} </code></pre> <p><strong>Desired Change:</strong></p> <p>I am trying to change:</p> <ul> <li>weight &gt;&gt; age</li> <li>food &gt;&gt; colour</li> </ul> <p><strong>What I Did:</strong></p> <p>I made the two substitutions in the files above. I deleted the <code>0001_initial.py</code> from the <code>/migrations</code> folder.</p> <p><a href="https://i.sstatic.net/VpYQQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VpYQQ.png" alt="enter image description here" /></a></p> <p>I then (1) <code>check</code> for syntax errors, (2) <code>makemigrations</code>; (3) <code>migrate</code> as per below:</p> <p><a href="https://i.sstatic.net/3W2N1.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3W2N1.png" alt="enter image description here" /></a></p> <p><strong>Error:</strong></p> <p>I get an error in the template rendering where it says the column <code>cats_cat.age</code> is unknown.</p> <p><a href="https://i.sstatic.net/KwKlt.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KwKlt.png" alt="enter image description here" /></a></p> <p><strong>Additional sources:</strong></p> <p>I have looked at a few things on the internet including:</p> <ol> <li><a href="https://stackoverflow.com/questions/44073550/django-models-1054-unknown-column-in-field-list">Django Models (1054, “Unknown column in &#39;field list&#39;”)</a></li> <li><a href="https://stackoverflow.com/questions/3787237/django-models-1054-unknown-column-in-field-list">Django Models (1054, &quot;Unknown column in &#39;field list&#39;&quot;)</a></li> <li><a href="https://stackoverflow.com/questions/32383978/no-such-column-error-in-django-models">No such column error in Django models</a></li> </ol> <p>But cannot work out how to fix this error...</p>
<python><django>
2023-10-07 10:41:48
1
1,169
brb
77,249,420
11,813,880
UnicodeDecodeError when Transcribing Live Audio using Sphinx in Python
<p>I am trying to transcribe a live playing audio, but I am getting the following error when decoding,</p> <blockquote> <p>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xff in position 0: invalid start byte</p> </blockquote> <pre><code>import pyaudio import requests from pocketsphinx import LiveSpeech def speech_to_text(audio_stream): speech = LiveSpeech( verbose=False, sampling_rate=16000, buffer_size=2048, no_search=False, ) for chunk in audio_stream.iter_content(chunk_size=1024): for phrase in speech.decode(bytes(chunk)): yield str(phrase) def main(): url = &quot;AUDIO_URL&quot; audio_stream = requests.get(url, stream=True) audio_format = pyaudio.paInt16 sample_rate = 16000 audio_chunk_size = 1024 p = pyaudio.PyAudio() stream = p.open(format=audio_format, channels=1, rate=sample_rate, input=True, frames_per_buffer=audio_chunk_size) print(&quot;Listening to audio stream...&quot;) for phrase in speech_to_text(audio_stream): print(&quot;Recognized text:&quot;, phrase) stream.stop_stream() stream.close() p.terminate() if __name__ == &quot;__main__&quot;: main() </code></pre> <p>The error occurs when I attempt to process the binary audio data received from the URL. Since this is binary audio data, I understand that decoding it as UTF-8 doesn't make sense, but I'm unsure about how to handle it correctly.</p> <p>So what I am doing wrong here? Can anybody help me? Thanks in advance.</p>
<python><cmusphinx><sphinx4>
2023-10-07 10:41:27
0
365
Abraham Arnold
77,249,378
5,790,653
how to access objects in python with for loop in different files
<p>This is my <code>file1.json</code>:</p> <pre><code>{ &quot;count&quot;: 1, &quot;next&quot;: null, &quot;previous&quot;: null, &quot;results&quot;: [ { &quot;id&quot;: 5883, &quot;url&quot;: &quot;https://some.api.com/api/ipam/ip-addresses/5883/&quot;, &quot;display&quot;: &quot;1.1.1.130/24&quot;, &quot;family&quot;: { &quot;value&quot;: 4, &quot;label&quot;: &quot;IPv4&quot; }, &quot;address&quot;: &quot;1.1.1.130/24&quot;, &quot;vrf&quot;: null, &quot;tenant&quot;: null, &quot;status&quot;: { &quot;value&quot;: &quot;active&quot;, &quot;label&quot;: &quot;Active&quot; }, &quot;role&quot;: null, &quot;assigned_object_type&quot;: &quot;dcim.interface&quot;, &quot;assigned_object_id&quot;: 801, &quot;assigned_object&quot;: { &quot;id&quot;: 801, &quot;url&quot;: &quot;https://some.api.com/api/dcim/interfaces/801/&quot;, &quot;display&quot;: &quot;N2&quot;, &quot;device&quot;: { &quot;id&quot;: 123, &quot;url&quot;: &quot;https://some.api.com/api/dcim/devices/123/&quot;, &quot;display&quot;: &quot;A-F3-G23-15-Saeed Unit15&quot;, &quot;name&quot;: &quot;A-F3-G23-15-Saeed Unit15&quot; }, &quot;name&quot;: &quot;N2&quot;, &quot;cable&quot;: null, &quot;_occupied&quot;: false }, &quot;nat_inside&quot;: null, &quot;nat_outside&quot;: null, &quot;dns_name&quot;: &quot;&quot;, &quot;description&quot;: &quot;&quot;, &quot;tags&quot;: [], &quot;custom_fields&quot;: {}, &quot;created&quot;: &quot;2023-10-01&quot;, &quot;last_updated&quot;: &quot;2023-10-01T14:05:32.001606+03:30&quot; } ] } </code></pre> <p>This is my <code>file2.json</code>:</p> <pre><code>{ &quot;count&quot;: 1, &quot;next&quot;: null, &quot;previous&quot;: null, &quot;results&quot;: [ { &quot;id&quot;: 6883, &quot;url&quot;: &quot;https://some.api.com/api/ipam/ip-addresses/6883/&quot;, &quot;display&quot;: &quot;2.2.2.130/24&quot;, &quot;family&quot;: { &quot;value&quot;: 4, &quot;label&quot;: &quot;IPv4&quot; }, &quot;address&quot;: &quot;2.2.2.130/24&quot;, &quot;vrf&quot;: null, &quot;tenant&quot;: 
null, &quot;status&quot;: { &quot;value&quot;: &quot;active&quot;, &quot;label&quot;: &quot;Active&quot; }, &quot;role&quot;: null, &quot;assigned_object_type&quot;: &quot;dcim.interface&quot;, &quot;assigned_object_id&quot;: 901, &quot;assigned_object&quot;: { &quot;id&quot;: 901, &quot;url&quot;: &quot;https://some.api.com/api/dcim/interfaces/901/&quot;, &quot;display&quot;: &quot;N2&quot;, &quot;device&quot;: { &quot;id&quot;: 123, &quot;url&quot;: &quot;https://some.api.com/api/dcim/devices/223/&quot;, &quot;display&quot;: &quot;A-F3-G23-16-Saeed Unit16&quot;, &quot;name&quot;: &quot;A-F3-G23-16-Saeed Unit16&quot; }, &quot;name&quot;: &quot;N2&quot;, &quot;cable&quot;: null, &quot;_occupied&quot;: false }, &quot;nat_inside&quot;: null, &quot;nat_outside&quot;: null, &quot;dns_name&quot;: &quot;&quot;, &quot;description&quot;: &quot;&quot;, &quot;tags&quot;: [], &quot;custom_fields&quot;: {}, &quot;created&quot;: &quot;2023-10-01&quot;, &quot;last_updated&quot;: &quot;2023-10-01T14:05:32.001606+03:30&quot; } ] } </code></pre> <p>I want to use a <code>for</code> loop to iterate over both files (in real, there are more files with different names) and access <code>assigned_object['name']</code>.</p> <p>Expected output is either of these (not sure which one is applicable):</p> <ol> <li>&quot;name&quot;: &quot;A-F3-G23-15-Saeed Unit15&quot;</li> <li>&quot;name&quot;: &quot;A-F3-G23-16-Saeed Unit16&quot;</li> </ol> <p>OR</p> <ol> <li>A-F3-G23-15-Saeed Unit15</li> <li>A-F3-G23-16-Saeed Unit16</li> </ol> <p>This is my attempt (not really sure the way to properly use <code>for</code> loop for files of the current path):</p> <pre><code>import json file_name = 'file1.json' with open(file_name) as file: data = json.load(file) print(data['assigned_object[&quot;name&quot;]']) </code></pre> <p>I get this error:</p> <pre><code>KeyError: 'assigned_object[&quot;name&quot;]' </code></pre>
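As context for the `KeyError`: `data['assigned_object["name"]']` looks up one literal top-level key, but `assigned_object` lives inside each element of `results`, so the nesting has to be stepped through level by level. A sketch using inline stand-ins for `file1.json`/`file2.json` (in real use, each dict would come from `json.load` inside a loop over the file names, e.g. via `glob.glob('*.json')`):

```python
# Inline stand-ins for the two files; each mirrors the relevant nesting.
docs = [
    {"results": [{"assigned_object": {
        "name": "N2",
        "device": {"name": "A-F3-G23-15-Saeed Unit15"}}}]},
    {"results": [{"assigned_object": {
        "name": "N2",
        "device": {"name": "A-F3-G23-16-Saeed Unit16"}}}]},
]

# Walk the nesting: results -> assigned_object -> device -> name.
# (assigned_object["name"] itself is "N2"; the expected output shown in
# the question is the *device* name one level deeper.)
device_names = [r["assigned_object"]["device"]["name"]
                for doc in docs
                for r in doc["results"]]
print(device_names)  # -> ['A-F3-G23-15-Saeed Unit15', 'A-F3-G23-16-Saeed Unit16']
```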
<python>
2023-10-07 10:28:09
2
4,175
Saeed
77,249,164
12,942,578
How to reduce runtime of parallellizable code in python?
<p>I have a program with some parallelizable parts. I want to run it in parallel (not just concurrently) to reduce its runtime.</p> <p>I tried multi-threading with <code>threading.Thread</code> and multi-processing with <code>multiprocessing.Process</code>, but the runtime of the code is not improving. In fact, it's getting a little worse (the single-threaded runtime is better than the multi-threaded and multi-processed ones).</p> <p>Can I do something to reduce the runtime?</p> <p>Here is an explanation of my code, followed by the code itself:</p> <p>Basically, I have a 2D array and I have to run a function (<code>check_occlusions_for_division</code>) on each element of it. The function's run on one element of the array is independent of its runs on the other elements.</p> <p>So, I divide the map into 4 divisions and run the function on (the elements of) each division in a separate thread.</p> <pre><code>from spatialmath import SE3 import numpy as np from math import sqrt import time from threading import Thread from multiprocessing import Process def dummy(): '''A function to simulate runtime of the original function''' time.sleep(0.5) return True def transform_pose_to_a_different_coordinate_frame(pose): '''Actual function does some coordinate transformations. 
Redacting it for convenience &amp; just returning the input pose''' return pose def check_occlusions_for_division(x_range, y_range, occlusions): print(x_range, y_range) for observer_x in x_range: for observer_y in y_range: observer_pose = SE3((observer_x, observer_y, observer_z)) observer_pose = transform_pose_to_a_different_coordinate_frame(observer_pose) for observee_x in range(array_width): for observee_y in range(array_height): observee_pose = SE3(observee_x, observee_y, observee_z) observee_pose = transform_pose_to_a_different_coordinate_frame(observee_pose) occluded = dummy() # check_occlusion(observer_pose_omap, observee_pose_omap) # Actually, the above variable `occluded` is calculated by another function `check_occlusion` in a little complex way. # But I'm just skipping that function for convenience. # As, I don't intend to make any modifications to that function to improve runtime. if occluded: occlusions[observer_x][observer_y].append([observee_x, observee_y]) occlusions[observer_x][observer_y] = np.array(occlusions[observer_x][observer_y]) return occlusions omap_res = 0.1 array_width = 61 array_height = 61 observer_z = 3 observee_z = 0.1 num_threads = 4 assert sqrt(num_threads) == int(sqrt(num_threads)) '''Split the array into `num_threads` divisions''' # Here, I'm just preparing some variables to split the square array into 4 divisions num_x_divisions = int(sqrt(num_threads)) num_y_divisions = int(sqrt(num_threads)) x_div_size = array_width // num_x_divisions y_div_size = array_height // num_y_divisions div_ranges = [] for i in range(num_x_divisions): for j in range(num_y_divisions): div_x_range = range(i*x_div_size, (i+1)*x_div_size if i!=num_x_divisions-1 else array_width) div_y_range = range(j*y_div_size, (j+1)*y_div_size if j!=num_y_divisions-1 else array_height) div_ranges.append((div_x_range, div_y_range)) '''Run the function on each division on a separate thread / process''' occlusions = np.empty((int(array_width), int(array_height), 
0)).tolist() threads = [] for thread_idx in range(num_threads): (div_x_range, div_y_range) = div_ranges[thread_idx] t = Thread(target=check_occlusions_for_division, args=(div_x_range, div_y_range, occlusions)) # t = Process(target=check_occlusions_for_division, args=(div_x_range, div_y_range, occlusions)) threads.append(t) threads[0].start() threads[1].start() threads[2].start() threads[3].start() threads[0].join() threads[1].join() threads[2].join() threads[3].join() </code></pre>
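A likely explanation for the slowdown (not stated in the question, but the usual cause): CPython's GIL lets only one thread execute Python bytecode at a time, so CPU-bound loops gain nothing from `threading`; and with `Process`, each child mutates its own *copy* of `occlusions`, so results appended there never reach the parent. A minimal sketch that sidesteps both issues by returning results through `Pool.map`, with a hypothetical CPU-bound stand-in for `check_occlusions_for_division`:

```python
from multiprocessing import Pool

def check_division(div):
    # Hypothetical CPU-bound stand-in for check_occlusions_for_division;
    # the real work must be CPU-bound (or release the GIL) for extra
    # processes to pay off.
    x_range, y_range = div
    total = 0
    for x in x_range:
        for y in y_range:
            total += x * y
    return total

if __name__ == "__main__":
    # Same 2x2 split of a 61x61 map as in the question.
    divisions = [
        (range(0, 30), range(0, 30)),
        (range(30, 61), range(0, 30)),
        (range(0, 30), range(30, 61)),
        (range(30, 61), range(30, 61)),
    ]
    # Pool.map hands each division to a worker process and collects the
    # return values in the parent -- no shared list to append to.
    with Pool(processes=4) as pool:
        per_division = pool.map(check_division, divisions)
    print(per_division)
```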
<python><parallel-processing>
2023-10-07 09:19:58
0
367
PhiloRobotist
77,248,988
273,344
How does importing from a script work differently than importing from a module?
<p>I have a structure of files and folders like this:</p> <pre><code>package1/ p1.py package2/ p2.py </code></pre> <p>Contents of <code>package1/p1.py</code>:</p> <pre><code>def p1fun(): print(&quot;p1fun&quot;) </code></pre> <p>Contents of <code>package1/package2/p2.py</code>:</p> <pre><code>import package1.p1 if __name__ == '__main__': package1.p1.p1fun() </code></pre> <p>Now, when I run <code>python -m package1.package2.p2</code>, I get the correct result = <code>p1fun</code>. When I run <code>python package1/package2/p2.py</code>, I get:</p> <pre><code>Traceback (most recent call last): File &quot;/home/marcin/projects/test/package1/package2/p2.py&quot;, line 1, in &lt;module&gt; import package1.p1 ModuleNotFoundError: No module named 'package1' </code></pre> <p>How does the import mechanism differ in these two scenarios? And how can I make the &quot;script&quot; scenario import properly?</p>
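The difference comes down to what lands on `sys.path`: `python -m package1.package2.p2` puts the current directory (the one containing `package1/`) at the front, while `python package1/package2/p2.py` puts the *script's* directory (`package1/package2/`) there instead, where no `package1` exists. A self-contained sketch that demonstrates the rule by building a throwaway copy of `package1` and importing it once its parent directory is on the path (`p1fun` returns instead of printing, purely so the result is easy to check):

```python
import importlib
import os
import sys
import tempfile

# Build a throwaway copy of the layout from the question.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "package1")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "p1.py"), "w") as f:
    f.write("def p1fun():\n    return 'p1fun'\n")

# `python -m package1.package2.p2` effectively does this: the directory
# *containing* package1/ goes on sys.path, so `import package1.p1` works.
# `python package1/package2/p2.py` puts package1/package2/ there instead,
# hence ModuleNotFoundError.
sys.path.insert(0, root)
p1 = importlib.import_module("package1.p1")
print(p1.p1fun())  # -> p1fun
```

For the "script" scenario, inserting the project root into `sys.path` before the `import` (as above) is the usual workaround.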
<python><python-import>
2023-10-07 08:23:46
1
4,309
Marcin
77,248,947
2,688,158
Am I missing any pre-processing step to data when doing outlier detection using Isolation-Forest for time series human sensor data?
<p>I am working on human sensor data and want to remove outliers, if any, from the dataset. The data is collected 50 times per second. My question is: do I need to do some pre-processing before using the Isolation Forest model? I don't get any error as such, but I want to use the model in the right way. I have never worked with time series data before, so any suggestions would be great.</p> <p>The first 20 rows of the data are shown below: <a href="https://i.sstatic.net/pGpHP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pGpHP.png" alt="Human activity sensor data" /></a></p> <p>After reading the file, I extract the sensor columns and use them on the model straight away.</p> <pre><code>import pandas as pd import numpy as np import matplotlib.pyplot as plt from sklearn.ensemble import IsolationForest from sklearn.metrics import classification_report from sklearn.model_selection import cross_val_score from sklearn.model_selection import TimeSeriesSplit from mpl_toolkits.mplot3d import Axes3D data = pd.read_csv('walking.csv') # Specify the columns that contain sensor readings (exclude 'act' and 'id' columns) sensor_columns = ['rotationRate.x', 'rotationRate.y', 'rotationRate.z', 'userAcceleration.x', 'userAcceleration.y', 'userAcceleration.z'] # Combine all sensor columns into a feature vector X = data[sensor_columns] # Define a range of contamination values to test contamination_values = np.arange(0.01, 0.11, 0.01) # Adjust the range as needed # Create an empty list to store cross-validation scores cv_scores = [] # Initialize time-based cross-validation tscv = TimeSeriesSplit(n_splits=5) # You can adjust the number of splits as needed # Loop through each contamination value and evaluate the model with time-based cross-validation for contamination in contamination_values: scores = [] for train_index, test_index in tscv.split(X): X_train, X_test = X.iloc[train_index], X.iloc[test_index] clf = IsolationForest(contamination=contamination, random_state=42) 
clf.fit(X_train) outlier_predictions = clf.predict(X_test) scores.append(np.mean(outlier_predictions == -1)) # Calculate the proportion of outliers cv_scores.append(np.mean(scores)) # Find the best contamination value with the highest cross-validation score best_contamination = contamination_values[np.argmax(cv_scores)] print(&quot;Best Contamination Value:&quot;, best_contamination) # Train the final model with the best contamination value on the entire dataset clf = IsolationForest(contamination=best_contamination, random_state=42) clf.fit(X) outlier_predictions = clf.predict(X) # Create a new column to mark outliers in your original DataFrame data['is_outlier'] = outlier_predictions # Print the number of outliers detected print(&quot;Number of Outliers Detected:&quot;, np.sum(outlier_predictions == -1)) </code></pre> <p>I get the no. of outliers along with a warning which says X does not have valid feature names, but Isolation-Forest was fitted with feature names</p>
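One pre-processing note, offered as a suggestion rather than a requirement: Isolation Forest is tree-based, so monotonic scaling of the features barely changes its splits, but standardizing is a cheap safeguard when columns mix units (rad/s rotation rates vs. g-scale accelerations) or when the same pipeline is later compared against distance-based detectors. A numpy-only sketch on synthetic data shaped like the six sensor columns:

```python
import numpy as np

# Synthetic stand-in for the six sensor columns (rotationRate.x/y/z,
# userAcceleration.x/y/z): rotation rates on a much larger scale than
# the accelerations, as is typical for IMU data.
rng = np.random.default_rng(0)
X = rng.normal(loc=[0.0, 0.0, 0.0, 1.0, 1.0, 1.0],
               scale=[2.0, 2.0, 2.0, 0.1, 0.1, 0.1],
               size=(1000, 6))

# Z-score each column: zero mean, unit variance. Isolation Forest itself
# doesn't need this, but it keeps the features comparable across units.
mu, sigma = X.mean(axis=0), X.std(axis=0)
X_scaled = (X - mu) / sigma
print(X_scaled.mean(axis=0).round(3), X_scaled.std(axis=0).round(3))
```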
<python><machine-learning><time-series><outliers><isolation-forest>
2023-10-07 08:10:06
0
417
user2688158
77,248,839
8,245,814
Using a literal number with 35660 digits raises SyntaxError
<p>I'm copying and pasting 10000! from the web, and this number has 35660 digits, but I'm getting the following error:</p> <blockquote> <p>SyntaxError: Exceeds the limit (4300) for integer string conversion: value has 35660 digits; use sys.set_int_max_str_digits() to increase the limit - Consider hexadecimal for huge integer literals to avoid decimal conversion limits.</p> </blockquote> <p>I wrote the following because it's supposed to eliminate the the maximum value allowed to use:</p> <pre><code>import sys sys.set_int_max_str_digits(0) </code></pre> <p>But I still get the error.</p> <p>I went on <a href="https://coolconversion.com/math/factorial/_10000_" rel="nofollow noreferrer">this site</a> and copied the 10000! value and pasted it into my IDE.</p> <pre><code>import sys sys.set_int_max_str_digits(0) x = 2846259680917054518906413212119868890148051401702799230794179994274411340003764443772990786757784775815884062142317528830042339940153518739052421161382716174819824199827592418289259787898124253120594659962598670656016157203603239792632873671705574197596209947972034615369811989709261127750048419884541047554464244213657330307670362882580354896746111709736957860367019107151273058728104115864056128116538532596842582599558468814643042558983664931705925171720427659740744613340005419405246230343686915405940406622782824837151203832217864462718382292389963899282722187970245938769380309462733229257055545969002787528224254434802112755901916942542902891690721909708369053987374745248337289952180236328274121704026808676921045155584056717255537201585213282903427998981844931361064038148930449962159999935967089298019033699848440466541923625842494716317896119204123310826865107135451684554093603300960721034694437798234943078062606942230268188522759205702923084312618849760656074258627944882715595683153344053442544664841689458042570946167361318760523498228632645292152942347987060334429073715868849917893258069148316885425195600617237263632397442078692464295601230628872012265295296409
15083013366309827338063539729015065818225742954758943997651138655412081257886837042392087644847615690012648892715907063064096616280387840444851916437908071861123706221334154150659918438759610239267132765469861636577066264386380298480519527695361952592409309086144719073907685857559347869817207343720931048254756285677776940815640749622752549933841128092896375169902198704924056175317863469397980246197370790418683299310165541507423083931768783669236948490259996077296842939774275362631198254166815318917632348391908210001471789321842278051351817349219011462468757698353734414560131226152213911787596883673640872079370029920382791980387023720780391403123689976081528403060511167094847222248703891999934420713958369830639622320791156240442508089199143198371204455983440475567594892121014981524545435942854143908435644199842248554785321636240300984428553318292531542065512370797058163934602962476970103887422064415366267337154287007891227493406843364428898471008406416000936239352612480379752933439287643983163903127764507224792678517008266695983895261507590073492151975926591927088732025940663821188019888547482660483422564577057439731222597006719360617635135795298217942907977053272832675014880244435286816450261656628375465190061718734422604389192985060715153900311066847273601358167064378617567574391843764796581361005996386895523346487817461432435732248643267984819814584327030358955084205347884933645824825920332880890257823882332657702052489709370472102142484133424652682068067323142144838540741821396218468701083595829469652356327648704757183516168792350683662717437119157233611430701211207676086978515597218464859859186436417168508996255168209107935702311185181747750108046225855213147648974906607528770828976675149510096823296897320006223928880566580361403112854659290840780339749006649532058731649480938838161986588508273824680348978647571166798904235680183035041338757319726308979094357106877973016339180878684749436335338933735869064058484178280651962758264344292580584222129476494029486226
70761832988229004072390403733168207417413251656688443079339447019208905620788387585342512820957359307018197708340163817638278562539516825426644614941044711579533262372815468794080423718587423026200264221822694188626212107297776657401018376182280136857586442185863011539843712299107010094061929413223202773193959467006713695377097897778118288242442920864816134179562017471831609687661043140497958198236445807368209404022211181530051433387076607063149616107771117448059552764348333385744040212757031851527298377435921878558552795591028664457917362007221858143309977294778923720717942857756271300923982397921957581197264742642878266682353915687857271620146192244266266708400765665625807109474398740110772811669918806268726626565583345665007890309050656074633078027158530817691223772813510584527326591626219647620571434880215630815259005343721141000303039242866457207328473481712034168186328968865048287367933398443971236735084527340196309427697652684170174990756947982757825835229994315633322107439131550124459005324702680312912392297979030417587823398622373535054642646913502503951009239286585108682088070662734733200354995720397086488066040929854607006339409885836349865466136727880748764700702458790118046518296111277090609016152022111461543158317669957060974618085359390400067892878548827850938637353703904049412684618991272871562655001270833039950257879931705431882752659225814948950746639976007316927310831735883056612614782997663188070063044632429112260691931278881566221591523270457695867512821990938942686601963904489718918597472925310322480210543841044325828472830584297804162405108110326914001900568784396341502696521048920272140232160234898588827371428695339681755106287470907473718188014223487248498558198439094651708364368994306189650243288353279667190184527620551085707626204244509623323204744707831190434499351442625501701771017379551124746159471731862701565571266295855125077711738338208419705893367323724453280456537178514960308802580284067847809414641838659226652806867978843250660537
94304625028710510492934726747126749989263462735816714693506049511034075540465817039348104675848562596776795976829940933402638726937836532091228771807745115262264254877183546110888636084327280622777664309728387905672861803604863346489337143941525025945965250152095953615797713559579496572977565090269442808847976127666484700361964890604376193469427044407021531794358383105140491546260872848667875054167414673164899935638131286693142761686353730563458662695789456827506581023595081488877895507393936534193736570084831850447568221544406759920313807707353997803633926733454954929666875992253089389808643060653296179316402961249267308063803187391259615113189035935126648081856836677028653774239074658239091095551717977058079778928975249023073780175314268036391424472025772889178495007811788933662975043680421466819782427298069757939174222945668318581567681628879787062453124665172762275829549342148365886891929958740209569600024356030528982986638689207699283403054971026651432230612523191513184387690382370620539920693394371688046642971147674356448637502684769814885310535406332884506201217330263067648132293156104355194176105071244902487327727311209194586513749319096516249769165755381219856643220797866630039893866023860735785811439471587280089337416503379296583261843607313332752602360511552422722844725146386326936976376251019671438012569122778442842699944082915221590469443728249865808520518657629299277550883312867263841871327778087444664387535264473356244113944762878097465068395298210817496795883645227334469487379347179071006497823646601668057203429792920744682232284866583952221144685957285840386337727803022759153049786587391951365024627419589908837438733159428737202977062020712021303857217593321116241333042277374241635355358797706530964768588607730143277829032889479581840437885856777293209447677866935753746004814237674119418267163687048105691115621561435751629052735122435008060465366891745819654948260861226075029306276147881326895528073614902252581968281505103331813212965966495815903042
12387756459909732967280666838491662579497479229053618455637410347914307715611686504842924902811029925296787352987678292690407887784802624792227507359484058174390862518779468900459420601686051427722444862724699111462001498806627235388378093806285443847630532350701320280294883920081321354464500561349870178342711061581772898192906564986880810455622337030672542512772773302834984335957725759562247037077933871465930330886296994403183326657975146765027173462988837773978482187007180267412659971587280354404784324786749071279216728985235884869435466922551013376063779151645972542571169684773399511589983490818882812639844005055462100669887926145582145653196969098272539345157604086134762587781658672944107753588241623157790825380547469335405824697176743245234514984830271703965438877376373581917365824542733474904242629460112998819165637138471118491569150547681404117498014542657123942044254410280758060013881986506137592885390389226443229479902864828400995986759635809991126953676015271730868527565721475835071222982965295649178350717508357413622825450556202709694174767992592297748886274113145876761475314568953280931170526964864101874076732969866492364373825654750228164719268155598831966298483077766668406223143158843849105190582818167407644630333001197102930364558665946518690744752508378419876229904159117936827997606541860887216266548864923443910309232569106337759697390517811227646684867917360494043937033393519006093872683972992464784837272747709774666935997848571201567890002419472692209749841273231474015499809203814598214164811763571478015542315996678385348544864069364105569135313352311840535813489409381918218986948253839609899428220275993396352062177053435720733962505742167694651016084956014393032443042715760995273086846092044222261031542299844448021100981613338248273752189987382053151649271344981059501599748005715919122021544877487501034732461906339413030308923994119850062259021841644099881732143244221085542486208962502606043981801890263177811466174549997714406652328638463638470016
55618153861098188111181734191305505024860345856755585637511729774299329074944236579668332700918367338977347901759248885660379952771540569083017311723894140326159612292912225191095948743805673381278538616491842786938417556898047100859868372033615175158097022566275200160956192229925401759878522038545913771783976389811198485803291048751666921195104514896677761598249468727420663437593207852618922687285527671324883267794152912839165407968344190239094803676688707838011367042753971396201424784935196735301444404037823526674437556740883025225745273806209980451233188102729012042997989005423126217968135237758041162511459175993279134176507292826762236897291960528289675223521425234217247841869317397460411877634604625637135309801590617736758715336803958559054827361876112151384673432884325090045645358186681905108731791346215730339540580987172013844377099279532797675531099381365840403556795731894141976511436325526270639743146526348120032720096755667701926242585057770617893798231096986788448546659527327061670308918277206432551919393673591346037757083193180845929565158875244597601729455720505595085929175506510115665075521635142318153548176884196032085050871496270494017684183980582594038182593986461260275954247433376226256287153916069025098985070798660621732200163593938611475394561406635675718526617031471453516753007499213865207768523824884600623735896608054951652406480547295869918694358811197833680141488078321213457152360124065922208508912956907835370576734671667863780908 </code></pre>
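For completeness, the limit named in the traceback is enforced when the decimal *literal* is compiled, which in a single file happens before `sys.set_int_max_str_digits(0)` in that same file ever executes; that would explain why the call appears to have no effect. Computing the value instead of pasting it avoids the literal entirely. A sketch, guarded so it also runs on Pythons older than 3.11 that have no such limit:

```python
import math
import sys

# The digit cap applies to decimal literals at compile time, so a call
# to sys.set_int_max_str_digits() later in the same file cannot rescue a
# pasted 35660-digit literal. Computing the number sidesteps that:
x = math.factorial(10000)

# The int object itself is unrestricted; only decimal str()/repr() is
# capped, so lift the cap before converting (guarded for Python < 3.11):
if hasattr(sys, "set_int_max_str_digits"):
    sys.set_int_max_str_digits(0)  # 0 disables the limit
print(len(str(x)))  # -> 35660
```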
<python><python-3.x><integer>
2023-10-07 07:36:00
5
319
Pinteco
77,248,580
6,323,080
Why/when would you use field() for a non-callable default?
<p>I'm trying to understand when it is correct to use <code>field(default=x)</code>.</p> <p>I understand that in a case where the default is mutable, it is wrong not to use <code>field</code> (with <code>default_factory</code>, since a plain mutable default raises <code>ValueError</code>). For instance:</p> <pre><code>@dataclass class SomeClass: obj_content: list = field(default_factory=list) </code></pre> <p>instead of</p> <pre><code>@dataclass class SomeClass: obj_content: list = [] </code></pre> <p>But when the default is not mutable, what is the difference?</p> <pre><code>@dataclass class SomeClass: obj_id: int = field(default=0) </code></pre> <p>versus</p> <pre><code>@dataclass class SomeClass: obj_id: int = 0 </code></pre>
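For the immutable case the two spellings are interchangeable: `field(default=0)` and a plain `= 0` produce the same generated `__init__` and record the same default; `field()` only earns its keep when you need its other parameters (`default_factory`, `repr`, `compare`, `metadata`, ...). A quick sketch checking the equivalence:

```python
from dataclasses import dataclass, field, fields

@dataclass
class WithField:
    obj_id: int = field(default=0)

@dataclass
class Plain:
    obj_id: int = 0

# Same default on instances...
assert WithField().obj_id == Plain().obj_id == 0
# ...and the same recorded default in the Field metadata.
assert fields(WithField)[0].default == fields(Plain)[0].default == 0
print("equivalent")
```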
<python><python-3.x><python-dataclasses>
2023-10-07 06:08:19
1
694
Zeke Arneodo
77,248,315
89,691
Automating a Delphi 2007 application with a Python script - how do I unambiguously reference controls?
<p>I have a legacy Delphi application that I need to automate for testing purposes. I'm having difficulty with directing the automation requests to the correct component. The included example demonstrates the issue (on my environment anyway). Heres a simple Delphi application I'm wanting to control:</p> <p><strong>Project1.dpr</strong></p> <pre><code>program Project1; uses Forms, Unit1 in 'Unit1.pas' {Form1}; {$R *.res} begin Application.Initialize; Application.MainFormOnTaskbar := True; Application.CreateForm(TForm1, Form1); Application.Run; end. </code></pre> <p><strong>Unit1.pas</strong></p> <pre><code>unit Unit1; interface uses Windows, Messages, SysUtils, Variants, Classes, Graphics, Controls, Forms, Dialogs, StdCtrls, ExtCtrls; type TForm1 = class(TForm) Memo1: TMemo; Button1: TButton; CheckBox1: TCheckBox; Timer1: TTimer; Button2: TButton; Button3: TButton; Button4: TButton; Button5: TButton; Label2: TLabel; Panel1: TPanel; procedure ControlClick(Sender: TObject); procedure Timer1Timer(Sender: TObject); private { Private declarations } public { Public declarations } end; var Form1: TForm1; implementation {$R *.dfm} procedure TForm1.ControlClick(Sender: TObject); begin Memo1.Lines.Add (Format ('Control &quot;%s&quot; clicked', [TControl (Sender).Name])) ; end; procedure TForm1.Timer1Timer(Sender: TObject); begin if Assigned(Screen.ActiveControl) then begin Panel1.Caption := Screen.ActiveControl.Name ; end else begin Panel1.Caption := 'None.' ; end ; end ; end. 
</code></pre> <p><strong>Unit1.dfm</strong></p> <pre><code>object Form1: TForm1 Left = 0 Top = 0 Caption = 'Form1' ClientHeight = 212 ClientWidth = 407 Color = clBtnFace Font.Charset = DEFAULT_CHARSET Font.Color = clWindowText Font.Height = -11 Font.Name = 'Tahoma' Font.Style = [] OldCreateOrder = False PixelsPerInch = 96 TextHeight = 13 object Label2: TLabel Left = 73 Top = 193 Width = 32 Height = 13 Caption = 'Focus:' end object Memo1: TMemo Left = 24 Top = 8 Width = 201 Height = 177 TabOrder = 0 end object Button1: TButton Left = 304 Top = 11 Width = 75 Height = 25 Caption = '1' TabOrder = 1 OnClick = ControlClick end object CheckBox1: TCheckBox Left = 296 Top = 171 Width = 97 Height = 17 Caption = 'CheckBox1' TabOrder = 6 OnClick = ControlClick end object Button2: TButton Left = 304 Top = 42 Width = 75 Height = 25 Caption = '2' TabOrder = 2 OnClick = ControlClick end object Button3: TButton Left = 304 Top = 73 Width = 75 Height = 25 Caption = '3' TabOrder = 3 OnClick = ControlClick end object Button4: TButton Left = 304 Top = 104 Width = 75 Height = 25 Caption = '4' TabOrder = 4 OnClick = ControlClick end object Button5: TButton Left = 304 Top = 135 Width = 75 Height = 25 Caption = '5' TabOrder = 5 OnClick = ControlClick end object Panel1: TPanel Left = 116 Top = 191 Width = 109 Height = 17 TabOrder = 7 end object Timer1: TTimer Interval = 300 OnTimer = Timer1Timer Left = 240 Top = 152 end end </code></pre> <p><strong>Automate.py</strong></p> <pre><code>import pywinauto from pywinauto import application def Click (AForm, AControlName): FControl = AForm [AControlName] FControl.click() return () FormTitle = &quot;Form1&quot; try: app = application.Application().connect(title=FormTitle) form1 = app[FormTitle] Click (form1, &quot;Button1&quot;) Click (form1, &quot;Button2&quot;) Click (form1, &quot;Button3&quot;) Click (form1, &quot;Button4&quot;) Click (form1, &quot;Button5&quot;) Click (form1, &quot;Checkbox1&quot;) exit (0) except 
pywinauto.application.ProcessNotFoundError: print(f&quot;No running instance of {FormTitle} found.&quot;) exit (1) except Exception as e: print(f&quot;An error occurred: {str(e)}&quot;) exit (1) </code></pre> <p>All is good, except that the buttons are reversed. If I direct a click to Button1, Button5 registers a click. Other experiments have shown that it's not necessarily predictable how it will redirect the request - the checkbox is OK for example, though I never tried a larger number of those. All I want is an unambiguous way of referencing a control - because the control name doesn't cut it.</p> <p><strong>UPDATE</strong> Further experiments seem to indicate that the control's <em>caption</em> is what I need to pass to the call to send a click message to the control.</p>
<python><delphi><ui-automation><pywinauto><delphi-2007>
2023-10-07 03:58:52
1
5,700
rossmcm
77,248,283
4,212,875
Using with open, why does 'rb' for reading json work but 'wb' for writing to a json does not?
<p>I noticed that when using <code>with open</code> to read a JSON file, using either the <code>r</code> or <code>rb</code> parameter returns identical results.</p> <pre><code>with open('something.json', 'rb') as f:  # 'r' returns the same thing t1 = json.load(f) </code></pre> <p>However, when I write to a JSON file with <code>wb</code>, I get an error:</p> <pre><code>with open('something.json', 'wb') as f: json.dump(some_dict, f) </code></pre> <blockquote> <p>TypeError: a bytes-like object is required, not 'str'</p> </blockquote> <p>But <code>w</code> works fine. Why is this the case?</p>
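<p>The asymmetry reproduces without any files at all, using in-memory buffers: <code>json.load</code> accepts bytes input, while <code>json.dump</code> writes <code>str</code> chunks:</p>

```python
import io
import json

# Reading: json.load detects the encoding of bytes input (Python 3.6+),
# so a binary source works just like a text one.
binary_in = io.BytesIO(b'{"key": "value"}')
print(json.load(binary_in))

# Writing: json.dump produces str, so a binary sink rejects it.
binary_out = io.BytesIO()
try:
    json.dump({"key": "value"}, binary_out)
except TypeError as e:
    print("TypeError:", e)
```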
<python><json><python-3.x>
2023-10-07 03:42:23
4
411
Yandle
77,248,200
4,212,875
Correct way to use with open on xml files and ET.parse
<p>For reading in XML files with <code>with open</code> and <code>ET.parse</code>, should we use 'r', 'rb', something else, or does it not matter?</p> <pre><code>with open('something.xml', '??') as f: tree = ET.parse(f) </code></pre> <p>Edit:</p> <p>The reason I want to open this way is because the filepath I have may be an s3 path and ET.parse does not appear to support S3 paths. However, using the package s3fs, I was able to load the file:</p> <pre><code>import s3fs fs = s3fs.S3FileSystem() with fs.open(file) as f: tree = ET.parse(f) </code></pre>
<python><python-3.x><xml><with-statement>
2023-10-07 02:55:19
1
411
Yandle
77,248,175
2,580,891
How can `from src.square import Square` be turned into `from square import Square`?
<p>In the following sample code:</p> <p><a href="https://github.com/panlex2010/sample-python-library" rel="nofollow noreferrer">sample-python-library</a></p> <p>the file containing unit tests is unable to find the class definition in the src directory.</p> <p>Executing the unittest command from the test/ and root directories causes different errors:</p> <pre><code>cd test/ python -m unittest test_square ... from src.square import Square ModuleNotFoundError: No module named 'src' cd .. python -m unittest test/test_square.py ... ModuleNotFoundError: No module named 'test.test_square' </code></pre> <p>For such a basic concern, I am unable to locate the best practices. Even the official documentation's <a href="https://docs.python.org/3/library/unittest.html#test-discovery" rel="nofollow noreferrer">test-discovery</a> section is of no assistance. Other official Python documentation describes how the runtime path is constructed, but does not appear to provide a clear mechanism for code developers to embed a directory layout structure which is recognized by the runtime.</p> <p>Placing a path_file.pth file containing the local directories also did not work. Not only did this not fix the problem, it appeared to have no effect on the path as reported by sys.path, which appears to contradict the documented behavior of Python when encountering *.pth files.</p> <p>Obviously, a system script could be written which manipulates the PYTHONPATH variable. Also sys could be used in the code somewhere. But neither of these solutions is at all elegant.</p> <p>What is the solution? How does everyone else write unit tests and instruct Python to locate the source files from inside the test subdirectory?</p>
<python>
2023-10-07 02:42:46
1
359
cppProgrammer
77,248,138
547,198
Why is Stop Iteration being raised in this code?
<p>I am trying to understand why is <code>StopIteration</code> being raised. My understanding is that when <code>nos</code> reaches <code>9</code>, the <code>if</code> statement evaluates to <code>True</code>. After that, <code>print(next(generate_nos([0, 1, …, 9])))</code> is called, which should result in <code>0</code> as the answer. Any help would be greatly appreciated.</p> <pre class="lang-py prettyprint-override"><code>nos = (i for i in range(10)) def generate_nos(mynos): for i in mynos: yield i * 2 try: while(1): if next(generate_nos(nos)) == 18: print(next(generate_nos(nos))) except StopIteration: print(&quot;StopIteration&quot;) </code></pre>
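<p>Here is a stripped-down version of what I observe: each call to <code>generate_nos(nos)</code> creates a brand-new generator object, but they all wrap the same underlying <code>nos</code>, which keeps advancing:</p>

```python
nos = (i for i in range(10))

def generate_nos(mynos):
    for i in mynos:
        yield i * 2

first = next(generate_nos(nos))   # new wrapper, pulls i=0 from nos, yields 0
second = next(generate_nos(nos))  # another new wrapper, but nos is now at i=1, yields 2
print(first, second)
```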
<python>
2023-10-07 02:20:36
3
7,362
TimeToCodeTheRoad
77,247,941
15,587,184
Summarizing n-grams efficiently in Python on big data
<p>I have a very large dataset of roughly 6 million records, it does look like this snippet:</p> <pre><code>data = pd.DataFrame({ 'ID': ['A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J'], 'TEXT': [ &quot;Mouthwatering BBQ ribs cheese, and coleslaw.&quot;, &quot;Delicious pizza with pepperoni and extra cheese.&quot;, &quot;Spicy Thai curry with cheese and jasmine rice.&quot;, &quot;Tiramisu dessert topped with cocoa powder.&quot;, &quot;Sushi rolls with fresh fish and soy sauce.&quot;, &quot;Freshly baked chocolate chip cookies.&quot;, &quot;Homemade lasagna with layers of cheese and pasta.&quot;, &quot;Gourmet burgers with all the toppings and extra cheese.&quot;, &quot;Crispy fried chicken with mashed potatoes and extra cheese.&quot;, &quot;Creamy tomato soup with a grilled cheese sandwich.&quot; ], 'DATE': [ '2023-02-01', '2023-02-01', '2023-02-01', '2023-02-01', '2023-02-02', '2023-02-02', '2023-02-01', '2023-02-01', '2023-02-02', '2023-02-02' ] }) </code></pre> <p>I want to generate bigrams and trigrams from the column 'TEXT.' I'm interested in two types of ngrams for both trigrams and bigrams: those that start with 'extra' and those that don't start with 'extra.' Once we have those, I want to summarize (count the unique ID frequency) of those ngrams by unique 'DATE.' This means that if an ngram appears in an ID more than once, I will count it only once because I want to know in how many different 'IDs' it ultimately appeared.</p> <p>I'm very new to Python. I come from the R world, in which there is a library called quanteda that uses C programming and parallel computing. 
Searching for those ngrams looks something like this:</p> <pre><code>corpus_food %&gt;% tokens(remove_punct = TRUE) %&gt;% tokens_ngrams(n = 2) %&gt;% tokens_select(pattern = &quot;^extra&quot;, valuetype = &quot;regex&quot;) %&gt;% dfm() %&gt;% dfm_group(groups = lubridate::date(DATE)) %&gt;% textstat_frequency() </code></pre> <p>yielding my desired results:</p> <pre><code> feature frequency rank docfreq group 1 extra_cheese 3 1 2 all </code></pre> <p>My desired result would look like this:</p> <div class="s-table-container"> <table class="s-table"> <thead> <tr> <th>ngram</th> <th>nunique</th> <th>group</th> </tr> </thead> <tbody> <tr> <td>cheese and</td> <td>3</td> <td>1/02/2023</td> </tr> <tr> <td>and extra</td> <td>2</td> <td>1/02/2023</td> </tr> <tr> <td>extra cheese</td> <td>2</td> <td>1/02/2023</td> </tr> <tr> <td>and extra cheese</td> <td>2</td> <td>1/02/2023</td> </tr> <tr> <td>mouthwatering bbq</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>bbq ribs</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>ribs cheese</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>and coleslaw</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>mouthwatering bbq ribs</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>bbq ribs cheese</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>ribs cheese and</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>cheese and coleslaw</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>delicious pizza</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>pizza with</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>with pepperoni</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>pepperoni and</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>delicious pizza with</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>pizza with pepperoni</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>with pepperoni and</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>pepperoni and extra</td> <td>1</td> <td>1/02/2023</td> </tr> <tr> <td>spicy thai</td> <td>1</td> 
<td>1/02/2023</td> </tr> <tr> <td>thai curry</td> <td>1</td> <td>1/02/2023</td> </tr> </tbody> </table> </div> <p>I am in no way comparing the two languages, Python and R. They are amazing, but at the moment, I'm interested in a very straightforward and fast method to achieve my results in Python. I am open to hearing of a way to achieve what I'm looking for in a faster and more efficient way in Python. I'm new to Python.</p> <p>So far I have found a way to create the bigrams and trigrams but I have no idea as to how perform the selection of those that start with &quot;extra&quot; and those who don't and this very process of creating the ngrams is taking over an hour so I will take all advice on how to reduce the time.</p> <p>Work around:</p> <pre><code>import nltk from nltk import bigrams from nltk.util import trigrams from nltk.tokenize import word_tokenize data['bigrams'] = data['TEXT'].apply(lambda x: list(bigrams(word_tokenize(x)))) data['trigrams'] = data['TEXT'].apply(lambda x: list(trigrams(word_tokenize(x)))) </code></pre> <p>Reading through some posts, some people suggest on using the gensim lib. Would that be a good direction?</p>
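<p>To make the desired output concrete, here is a tiny pure-Python sketch on a three-row subset of the data, counting unique IDs per date for the ngrams that start with 'extra' (the real question remains how to do this efficiently on 6 million rows):</p>

```python
import re
from collections import defaultdict

rows = [  # (ID, TEXT, DATE)
    ("B", "Delicious pizza with pepperoni and extra cheese.", "2023-02-01"),
    ("H", "Gourmet burgers with all the toppings and extra cheese.", "2023-02-01"),
    ("I", "Crispy fried chicken with mashed potatoes and extra cheese.", "2023-02-02"),
]

id_sets = defaultdict(set)  # (date, ngram) -> set of unique IDs
for _id, text, date in rows:
    tokens = re.findall(r"[a-z]+", text.lower())
    for n in (2, 3):  # bigrams and trigrams
        for i in range(len(tokens) - n + 1):
            gram = " ".join(tokens[i:i + n])
            if gram.startswith("extra"):  # the "starts with extra" bucket
                id_sets[(date, gram)].add(_id)

counts = {key: len(ids) for key, ids in id_sets.items()}
print(counts)
```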
<python><pandas><dataframe><nlp><n-gram>
2023-10-07 00:44:48
1
809
R_Student
77,247,893
20,433,449
ModuleNotFoundError: No module named 'distutils' in Python 3.12
<p>When I try to import <code>customtkinter</code> in Python 3.12, I get the following error:</p> <pre class="lang-none prettyprint-override"><code> File &quot;c:\Users\judel\OneDrive\Documents\Python\main.py&quot;, line 1, in &lt;module&gt; import customtkinter as ttk File &quot;C:\Users\judel\AppData\Local\Programs\Python\Python312\Lib\site-packages\customtkinter\__init__.py&quot;, line 10, in &lt;module&gt; from .windows.widgets.appearance_mode import AppearanceModeTracker File &quot;C:\Users\judel\AppData\Local\Programs\Python\Python312\Lib\site-packages\customtkinter\windows\__init__.py&quot;, line 1, in &lt;module&gt; from .ctk_tk import CTk File &quot;C:\Users\judel\AppData\Local\Programs\Python\Python312\Lib\site-packages\customtkinter\windows\ctk_tk.py&quot;, line 2, in &lt;module&gt; from distutils.version import StrictVersion as Version ModuleNotFoundError: No module named 'distutils' </code></pre> <p>Why is this happening? Why can't a module from the standard library be found?</p>
<python><pip><modulenotfounderror><customtkinter><python-3.12>
2023-10-07 00:22:49
1
619
DarkPhinx
77,247,885
6,814,713
Mocking a module function and it's return
<p>I hope this is a straightforward answer, but I've been stuck trying to sort this out. I'm looking to mock out the following scenario. I previously had something like this that worked:</p> <pre class="lang-py prettyprint-override"><code># path/to/some/module.py class MyClass: # None of the methods or init should ever run in a test def __init__(self): pass def do_thing(self): pass def get_my_class(): return MyClass() </code></pre> <pre class="lang-py prettyprint-override"><code># path/to/some/other/module.py from path.to.some.module import get_my_class def do_something(): get_my_class().do_thing() return 'something' </code></pre> <pre class="lang-py prettyprint-override"><code># test/path/to/some/other/module.py from unittest.mock import patch from path.to.some.other.module import do_something def test_do_something(): with patch(&quot;path.to.some.module.MyClass&quot;) as mock: assert do_something() == 'something' mock().do_thing.assert_called_once() </code></pre> <p>So that was fine - the asserts worked, and the init and class methods were not called. But then I needed to move around some logic, and now I am unable to sort out how to get this to work from a mocking standpoint. The code itself works; I am just unable to get my mocks in a row.
See below for the latest structure:</p> <pre class="lang-py prettyprint-override"><code># path/to/some/module.py class MyClass: # None of the methods or init should ever run in a test def __init__(self): pass def do_thing(self): pass class MyClassSingleton: _my_class = None def get_my_class(): if MyClassSingleton._my_class is None: MyClassSingleton._my_class = MyClass() return MyClassSingleton._my_class </code></pre> <pre class="lang-py prettyprint-override"><code># path/to/some/other/module.py from path.to.some.module import get_my_class def do_something(): get_my_class().do_thing() return 'something' </code></pre> <p>I tried updating my test to be something like:</p> <pre class="lang-py prettyprint-override"><code>with patch(&quot;path.to.some.module.MyClassSingleton._my_class&quot;) as mock: </code></pre> <p>but this leads to the <code>MyClass.__init__</code> code running, which is no bueno.</p> <p>My goal is for everything to work more or less as before from a mocking standpoint, ideally with as simple a setup/boilerplate as possible since I need to apply these changes to hundreds of tests. Any help would be appreciated. Thanks!</p>
<python><mocking><pytest><python-unittest>
2023-10-07 00:18:32
2
2,124
Brendan
77,247,860
18,649,495
Keras weights layout confusion
<p>Recently I realized that in Keras, the shape of a dense layer's weight matrix is <code>(input_shape, output_shape)</code>.</p> <p>Shouldn't it be <code>(output_shape, input_shape)</code>, so that when we dot the weights and the input together we get a length-<code>output_shape</code> tensor?</p> <p>Also, why is the kernel size of convolutional layers <code>(*kernel_size, num_channels, num_filter)</code>? Is there any explanation for putting <code>num_filter</code> at the end instead of <code>(num_filter, *k_size, num_channels)</code>? Personally I feel that putting <code>num_filter</code> on the first axis makes it easier to distinguish weights from different kernels.</p> <p>Thank you!</p>
<python><tensorflow><keras>
2023-10-07 00:06:23
0
1,488
Fed_Dragon
77,247,820
749,434
get a portable function from Polynomial.fit()
<p>I took voltage readings from a LiPo battery discharging every 5 minutes - this gave me ~2700 data points. I pulled them into numpy with the hopes that I could find a polynomial function that would take a voltage and (based on the discharge curve) return the percentage of the battery remaining; in an ideal situation, f(3.3) = 0 and f(4.2) = 100. After a few rounds of modeling I found that an 8th order function worked pretty well (though it goes off the rails if you feed it more than 4.2, which is fine, I expected that and I'll just clamp the results between 0 and 100). Eventually I want to take the coefficients over to a C++ program which means I need to get the &quot;real&quot; coefficients for x^0...x^8. But I can't quite seem to get those values lined up.</p> <p>Here's my code:</p> <pre class="lang-py prettyprint-override"><code>import pandas as pd from numpy.polynomial import Polynomial as P df = pd.read_csv('data.csv') x = df.voltage y = df.percentage p_fitted = P.fit(x, y, 8) # this next line outputs this: # [a] + [b]*x - [c]*x^2 etc print(p_fitted) def percentage(voltage): # I took the [a], [b], [c] values, eg, [a, b, -c] coefficients = [ ... ] total = 0 for i, coef in enumerate(coefficients): total += coef * voltage ** i return total </code></pre> <p>Running <code>p_fitted(3.64)</code> returns <code>49.139</code> which is pretty much right where it should be; running <code>percentage(3.64)</code> returns <code>2376004.025</code> which is definitely NOT where it should be.</p> <p>I've tried a few things - rather than use the function that is shown when I <code>print(p_fitted)</code> for coefficients, I tried <code>p_fitted.convert().coef</code> which gives me astronomically large coefficients, and a similarly large (incorrect) answer. Having seen a note that those coefficients are reversed, I tried to reverse the list produced by the <code>.convert().coef</code> method and the result was still comically large and wrong. 
I even manually copied down the formula from the first <code>print</code> statement and hard-coded it against <code>3.64</code> - got the same (wrong) answer as what <code>percentage(3.64)</code> returned so at least I know the function does what it's supposed to do. Obviously I'm not getting the correct numbers to plug into the formula but I'm scratching my head on what I <em>do</em> need to do to get those numbers. Is there a way to get an actually portable function out of this class?</p>
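<p>For reference, here is a self-contained toy of the agreement I am trying to achieve (synthetic data and degree 3 for readability, not my real readings): evaluating <code>p_fitted.convert().coef</code> as a plain power series, lowest order first, matches the fitted object:</p>

```python
import numpy as np
from numpy.polynomial import Polynomial as P

# Synthetic "discharge curve": a portable fit should reproduce this mapping.
x = np.linspace(3.3, 4.2, 200)
y = 100 * (x - 3.3) / 0.9

p_fitted = P.fit(x, y, 3)

# convert() re-expresses the fit in the raw x domain (fit() internally maps
# the data onto [-1, 1]); coef is ordered lowest power first, no reversal.
coefficients = p_fitted.convert().coef

def percentage(voltage):
    total = 0.0
    for i, coef in enumerate(coefficients):
        total += coef * voltage ** i
    return total

print(p_fitted(3.64), percentage(3.64))  # these agree
```

<p>The individual raw-domain coefficients can look enormous for higher degrees, yet they still cancel to the right value when evaluated as a power series.</p>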
<python><numpy>
2023-10-06 23:46:20
0
1,475
tmountjr
77,247,703
9,391,770
semilogy plot does not properly color y-ticks
<p>The issue is with <strong><code>semilogy</code></strong>, not with <code>twinx</code>. Starting from this code:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def plot_me(x): fig, ax1 = plt.subplots() ax1.semilogy(x, color=&quot;blue&quot;) for label in ax1.get_yticklabels(): label.set_color(&quot;blue&quot;) ax2 = ax1.twinx() ax2.semilogy(1/x, 'red') for label in ax2.get_yticklabels(): label.set_color(&quot;red&quot;) </code></pre> <p><code>plot_me(np.arange(.1, 1.3, 1e-1))</code> goes through 1e-1 and 1e0 on the left y-axis and through another two decades on the right one. All OK:</p> <p><a href="https://i.sstatic.net/nDlJx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nDlJx.png" alt="enter image description here" /></a></p> <p>On the other hand, <code>plot_me(np.arange(.1, .3, 1e-2))</code> does not span two decades (on either y-axis), so both axes get, besides one log10 y-tick, additional non-log10 y-ticks, which I cannot properly color: <a href="https://i.sstatic.net/chfGg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/chfGg.png" alt="enter image description here" /></a></p> <p>Furthermore, the y-ticks from a <code>semilogy</code> plot are all log10, which contradicts the y-tick labels we have seen in the previous plots!</p> <pre><code>def print_y_ticks(x): fig, ax1 = plt.subplots() ax1.semilogy(x, color=&quot;blue&quot;) for label in ax1.get_yticklabels(): label.set_color(&quot;blue&quot;) print(ax1.get_yticks()) return ax1 ax = print_y_ticks(np.arange(.1, 1.3, 1e-1)) # [1.e-03 1.e-02 1.e-01 1.e+00 1.e+01 1.e+02] ax = print_y_ticks(np.arange(.1, .3, 1e-2)) # [1.e-03 1.e-02 1.e-01 1.e+00 1.e+01] </code></pre> <p>My workaround is the following:</p> <pre><code>import numpy as np import matplotlib.pyplot as plt def ticks2log(ax): yticks = ax.get_yticks() ax.set_yticks(yticks, [f'{k:.1e}' for k in yticks]) return ax def plot_me_new(x1, x2): x1 = np.log10(x1) x2 = np.log10(x2) fig, ax1 = plt.subplots() ax1.plot(x1, lw=2, color=&quot;blue&quot;)
ax1 = ticks2log(ax1) for label in ax1.get_yticklabels(): label.set_color(&quot;blue&quot;) ax2 = ax1.twinx() ax2.plot(x2, lw=2, color=&quot;red&quot;) ax2 = ticks2log(ax2) for label in ax2.get_yticklabels(): label.set_color(&quot;red&quot;) </code></pre> <p>The result of <code>x = np.arange(.1, .3, 1e-2); plot_me_new(x, 1/x)</code> is properly colored, but this is much more tedious.</p> <p><a href="https://i.sstatic.net/O1lvz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O1lvz.png" alt="enter image description here" /></a></p> <p>Thanks in advance for the help!</p> <p>PS: the issue is with <strong><code>semilogy</code></strong>, not with <code>twinx</code>.</p>
<python><matplotlib><plot><logarithm><yticks>
2023-10-06 22:56:35
1
396
Xopi García
77,247,409
4,430,968
Selecting css element on this site in scrapy (python)
<p>For some reason my CSS selector doesn't return anything on this site. Any idea why?</p> <pre><code>import scrapy class TestSpider(scrapy.Spider): name = &quot;test&quot; start_urls = [ &quot;https://www.toppstiles.co.uk/fixing-finishing/tools-accessories/mixing-drilling/drilling/rubi-drill-bit-diamond-50mm&quot;, # &quot;https://books.toscrape.com/&quot; ] def parse(self, response): print(response.css('h5').get()) </code></pre> <p>What I don't get is why CSS selectors work on other sites and I can also get the <code>head</code> element. Am I being blocked? Can I get around it?</p>
<python><scrapy>
2023-10-06 21:13:53
0
528
Davis
77,247,364
7,452,220
Convert a Pandas Series from Timedelta to Microseconds
<p>I have a Pandas <code>Timedelta</code> column that may be created like this:</p> <pre><code>import pandas as pd tdelta_ser = pd.date_range(start='00:00:00', periods=3, freq='700ms') - pd.date_range(start='00:00:00', periods=3, freq='500ms') tdiff_df = pd.DataFrame(tdelta_ser, columns=['TimeDiff']) print(tdiff_df) TimeDiff 0 0 days 00:00:00 1 0 days 00:00:00.200000 2 0 days 00:00:00.400000 </code></pre> <p>Looking for a really concise <strong>one-liner</strong> that will produce a new column with this time delta converted to microseconds, without making assumptions about the internal dtype of the pandas Timedelta column being in int64 nanoseconds.</p> <p><strong>Desired result</strong></p> <pre><code> TimeDiff DiffUsec 0 0 days 00:00:00 0 1 0 days 00:00:00.200000 200000 2 0 days 00:00:00.400000 400000 </code></pre> <p>I tried several methods. The most concise was the one below, but it makes assumptions about the internal representation of the Timedelta column being int64 nanoseconds and requires a scaling factor of 1000 to get it right.</p> <pre><code>tdiff_df['DiffUsec'] = tdiff_df['TimeDiff'].astype('int64') / 1000 print(tdiff_df) TimeDiff DiffUsec 0 0 days 00:00:00 0.0 1 0 days 00:00:00.200000 200000.0 2 0 days 00:00:00.400000 400000.0 </code></pre>
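<p>One candidate I am weighing, shown as a self-contained sketch: floor-dividing by a one-microsecond <code>Timedelta</code>, which stays in timedelta arithmetic rather than reinterpreting the underlying integer:</p>

```python
import pandas as pd

tdelta_ser = (pd.date_range(start='00:00:00', periods=3, freq='700ms')
              - pd.date_range(start='00:00:00', periods=3, freq='500ms'))
tdiff_df = pd.DataFrame(tdelta_ser, columns=['TimeDiff'])

# timedelta // timedelta yields an integer count of the divisor unit,
# with no assumption about the column's internal storage resolution.
tdiff_df['DiffUsec'] = tdiff_df['TimeDiff'] // pd.Timedelta(microseconds=1)
print(tdiff_df)
```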
<python><pandas><time-series><timestamp><timedelta>
2023-10-06 21:01:53
3
301
Gerard G
77,247,360
9,422,807
PySpark, add a tag column in union dataframe with condition
<p>I have a union dataframe from two sources. I want to add a flag column based on 'ID' and 'Source'. The rules:</p> <ol> <li>If the rating is Null and from only one Source and one ID, flag N</li> <li>If the rating is not Null and from one ID, regardless of the Source, flag Y; if Null, flag N</li> <li>If the rating is Null from both Sources and one ID, flag Y</li> </ol> <p>How do I create the flag column? I tried a window function, which was totally WRONG. Is there any way I can combine a window function and when? Or is there a better way to add a flag column? I am stuck.</p> <pre><code>from pyspark.sql.types import StructType, StructField, StringType, IntegerType from pyspark.sql import SparkSession as ss from pyspark.sql.functions import when, col, lit, row_number from pyspark.sql.window import Window spark = ss.builder.appName('new').getOrCreate() data = [(14, 'AA', None), (14, 'AA', None), (15, 'BB', None), (15, 'BB', 2), (16, 'AA', None), (16, 'AA', 1), (16, 'BB', None), (16, 'BB', 2), (17, 'AA', None), (17, 'AA', None), (17, 'BB', None), (17, 'BB', None)] schema = StructType([StructField('ID', IntegerType(), False), StructField('Source', StringType(), False), StructField('rating', IntegerType(), True)]) df = spark.createDataFrame(data, schema) w = Window.partitionBy('ID').orderBy('Source', 'rating') df = df.withColumn('flag', row_number().over(w)) </code></pre> <p>input table:</p> <pre><code>|ID|Source |rating |--|-----|------ | 14 | AA |Null | 14 | AA |Null | 15 | BB |Null | 15 | BB |2 | 16 | AA |Null | 16 | AA |1 | 16 | BB |Null | 16 | BB |2 | 17 | AA |Null | 17 | AA |Null | 17 | BB |Null | 17 | BB |Null </code></pre> <p>Ideally output table:</p> <pre><code>| ID| Source | rating| flag | --| -----|------|----- | 14 | AA |Null |N | 14 | AA |Null |N | 15 | BB |Null |N | 15 | BB |2 |Y | 16 | AA |Null |N | 16 | AA |1 |Y | 16 | BB |Null |N | 16 | BB |2 |Y | 17 | AA |Null |Y | 17 | AA |Null |Y | 17 | BB |Null |Y | 17 | BB |Null |Y </code></pre>
<python><pyspark><union-all><partition-by>
2023-10-06 21:01:32
0
413
Liu Yu
77,247,271
2,299,245
Read in sheet names only from Excel using pyspark.pandas
<p>I have about 30 Excel files that I want to read into Spark dataframes, probably using pyspark.pandas.</p> <p>I am trying to read them like this:</p> <pre><code>import pyspark.pandas as ps my_files = ps.DataFrame(dbutils.fs.ls(&quot;abfss://my_container@my_env.dfs.core.windows.net/my_files/&quot;)) dataframes = [] def read_and_format_file(input_path): xl = ps.read_excel(input_path, sheet_name = &quot;Aggregate Data USD&quot;, skiprows = 5) dataframes.append(xl) my_files['path'].apply(read_and_format_file) </code></pre> <p>My problem is that I do not know the name of the sheet within each workbook. It can vary. The only pattern I can use is that the name will have 'Data USD' in it. The first sheet might be called 'Fred Bloggs Data USD', the second one 'John Smith Data USD', etc. So I think I need a way to check the sheet names before using read_excel, and then only read the sheet I want.</p> <p>Open to ideas please. Thanks.</p>
<python><apache-spark><pyspark><pyspark-pandas>
2023-10-06 20:41:55
1
949
TheRealJimShady
77,247,241
2,827,771
Pandas barh plot part of plot not showing
<p>I am using the following code to generate a barh plot with data stored in a pandas dataframe. Problem is that part of the first bar is not showing and I'm not sure how to address that. Here is the code:</p> <pre><code>fig = plt.figure(figsize = (10, 6)) ax = fig.add_subplot() ax2 = ax.twiny() import matplotlib.pyplot as plt import numpy as np import pandas as pd from io import StringIO s = StringIO(&quot;&quot;&quot;name amount price A 40929 4066443 B 93904 9611272 C 188349 19360005 D 248438 24335536 E 205622 18888604 F 140173 12580900 G 76243 6751731 H 36859 3418329 I 29304 2758928 J 39768 3201269 K 30350 2867059&quot;&quot;&quot;) df = pd.read_csv(s, delimiter=' ', skipinitialspace=True) df.plot(x = 'name', y = 'price', kind = 'barh', ax = ax, color = 'steelblue', width = 0.4, position = 0) df.plot(x = 'name', y = 'amount', kind = 'barh', ax = ax2, color = 'salmon', width = 0.4, position = 1) </code></pre> <p>and here is how the plot looks like:</p> <p><a href="https://i.sstatic.net/P6sgO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/P6sgO.png" alt="enter image description here" /></a></p> <p>as you can see, part of plot for name <code>K</code> is not showing properly. How could I add some sort of a margin to create space between that axis and the plot itself?</p>
<python><pandas><plot><bar-chart>
2023-10-06 20:34:57
1
13,628
ahajib
77,247,230
656,912
How do I specify LaTeX codes in Python 3.12 f-strings?
<p>How do I put LaTeX codes in a Python 3.12 f-string?</p> <p>For example, the following <code>matplotlib</code> fragment now results in errors (<code>\</code> is a bad escape)</p> <pre class="lang-py prettyprint-override"><code>bif.set_title(f&quot;$x_n(r); , n\in {{ {bif_steps_min},\ldots,{bif_steps_min + bif_steps_show} }}$&quot;, fontsize=9, color='gray') </code></pre> <p>and if I avoid the error with</p> <pre class="lang-py prettyprint-override"><code>bif.set_title(f&quot;$x_n(r); , n\\in {{ {bif_steps_min},\\ldots,{bif_steps_min + bif_steps_show} }}$&quot;, fontsize=9, color='gray') </code></pre> <p>I just get &quot;in&quot; and &quot;ldots&quot; as text rather than the expected symbols (which I've had rendered successfully in the past).</p> <p>How do I get the string generated by the f-string to include the full LaTeX symbol specification (e.g., <code>\ldots</code>)?</p>
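<p>A raw f-string variant of the title line (with made-up values for <code>bif_steps_min</code> and <code>bif_steps_show</code>) keeps the backslashes intact while still interpolating, so the generated string contains the literal <code>\ldots</code> and <code>\in</code> codes:</p>

```python
bif_steps_min, bif_steps_show = 100, 50

# rf"..." = raw + f-string: backslashes survive untouched, {expr} still
# interpolates, and {{ }} produce the literal braces of the LaTeX group.
title = rf"$x_n(r)\;, n\in {{ {bif_steps_min},\ldots,{bif_steps_min + bif_steps_show} }}$"
print(title)
```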
<python><latex><f-string><python-3.12>
2023-10-06 20:32:22
0
49,146
orome
77,247,141
4,729,764
Recursive lazy loading in async SQLAlchemy: Greenlet spawn error
<p>I'm trying to load and serialize a deeply nested model using SQLAlchemy into a json response (using pydantic <code>ParentType</code>), which seems difficult using an async engine.</p> <p>I have a <code>Parent</code> db model, which is related to a <code>Child</code>, which in turn is related to <code>GrandChild</code> and so on...</p> <p>Now, I can eagerly load the 1st level of relationships using:</p> <pre><code>stmt = select(Parent).filter(Parent.id == id) stmt = stmt.options(joinedload(getattr(Parent, &quot;children&quot;))) </code></pre> <p>But this will fail because <code>Child</code>'s <code>children</code> will need to be lazy-loaded.</p> <p>I can eager load every relation of my Parent model recursively and so on... but this method will go past my required depth of <code>ParentType</code> and will result in extra queries that I do not need.</p> <p>How can I lazy-load in async according to my serialization model?</p> <p>Options:</p> <ul> <li>To pair the serialization model and eager load the required relations?</li> <li>Somehow create a new greenlet to fetch the relation?</li> </ul>
<python><sqlalchemy><asyncpg>
2023-10-06 20:12:51
0
3,114
GRS
77,247,062
1,432,980
deserialize json array field into an object using pydantic
<p>I have an object, that has an inner object with custom serializer</p> <pre><code>class ParameterModel(BaseModel): size_x: int size_y: int @model_serializer def ser_model(self) -&gt; list: return [self.size_x, self.size_y] class ObjectModel(BaseModel): name: str parameters: ParameterModel </code></pre> <p>So when I create this object and serialize it, the result looks like this</p> <pre><code>o_m = ObjectModel(name='Cube', parameters=ParameterModel(size_x=10, size_y=10)) o_m_json = o_m.model_dump_json() print(o_m.model_dump_json()) # {&quot;name&quot;:&quot;Cube&quot;,&quot;parameters&quot;:[10,10]} </code></pre> <p>But now I want to deserialize that type of input. When I try using</p> <pre><code>ObjectModel.model_validate_json(o_m_json) </code></pre> <p>I get the error</p> <pre><code>Input should be an object [type=model_type, input_value=[10, 10], input_type=list] </code></pre> <p>So, as far as I understand, due to the source being a list it does not match with the expected object type. But is there a way to tell Pydantic that this field should be deserialized into an object?</p>
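<p>The closest I have found so far, shown as a sketch, is a <code>mode=&quot;before&quot;</code> model validator that rebuilds the dict from the list before field validation runs; I would prefer something more built-in if it exists:</p>

```python
from pydantic import BaseModel, model_serializer, model_validator

class ParameterModel(BaseModel):
    size_x: int
    size_y: int

    @model_serializer
    def ser_model(self) -> list:
        return [self.size_x, self.size_y]

    @model_validator(mode="before")
    @classmethod
    def accept_list(cls, data):
        # Undo the custom list serialization before normal validation.
        if isinstance(data, list):
            return {"size_x": data[0], "size_y": data[1]}
        return data

class ObjectModel(BaseModel):
    name: str
    parameters: ParameterModel

o_m = ObjectModel(name='Cube', parameters=ParameterModel(size_x=10, size_y=10))
round_tripped = ObjectModel.model_validate_json(o_m.model_dump_json())
print(round_tripped.parameters.size_x, round_tripped.parameters.size_y)
```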
<python><pydantic>
2023-10-06 19:53:42
1
13,485
lapots
77,247,057
10,969,942
Efficient Method for Flood Filling a Numpy Boolean Matrix: How to Replace All 'False' Values Enclosed by 'True'
<p>For a given image mask matrix of shape <code>H x W</code> containing only <code>True</code> and <code>False</code> values, I wish to convert all <code>False</code> values to <code>True</code> when they are entirely enclosed by <code>True</code> values.</p> <p>e.g.</p> <pre><code>mask = np.array([
    [True, True,  True,  True,  True ],
    [True, False, True,  False, True ],
    [True, True,  True,  False, True ],
    [True, False, False, False, True ],
    [True, True,  True,  True,  True ]
])
</code></pre> <p>The result should be</p> <pre><code>np.array([
    [True, True, True, True, True ],
    [True, True, True, True, True ],
    [True, True, True, True, True ],
    [True, True, True, True, True ],
    [True, True, True, True, True ]
])
</code></pre> <p>I'm seeking an efficient approach to accomplish this, perhaps through a robust image processing library, rather than crafting a custom Python helper function.</p>
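One efficient option (a sketch, assuming SciPy is available) is `scipy.ndimage.binary_fill_holes`, which fills every background region that has no path to the array border:

```python
import numpy as np
from scipy.ndimage import binary_fill_holes

mask = np.array([
    [True, True,  True,  True,  True ],
    [True, False, True,  False, True ],
    [True, True,  True,  False, True ],
    [True, False, False, False, True ],
    [True, True,  True,  True,  True ]
])

# Fills every False region that is not connected to the array border,
# i.e. exactly the "enclosed by True" regions from the question.
filled = binary_fill_holes(mask)
print(filled.all())  # True
```

If a `False` region touched the border it would be left alone, which matches the "entirely enclosed" requirement.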
<python><numpy><opencv><image-processing>
2023-10-06 19:51:25
1
1,795
maplemaple
77,247,053
1,222,122
Log response times for all HTTP requests from a docker container
<p>I have built a docker image that contains a pre-built application that I cannot add logging statements to. The application makes many HTTP requests as it executes, and I would like to log those requests, and in particular, the time it takes for those responses to be fulfilled. Is there something I add to my docker container, such as a proxy, that would allow me to log response times to all outgoing HTTP requests?</p>
<python><docker><httprequest><http-proxy>
2023-10-06 19:50:17
1
749
Gillfish
77,246,933
7,212,809
Handling timeouts with run-in-executor and asyncio and processPoolExecutor
<p>I'm using asyncio and ProcessPoolExecutor to run a piece of blocking code like this:</p> <pre><code>result = await loop.run_in_executor(pool, long_running_function)
</code></pre> <p>I don't want <code>long_running_function</code> to last more than 2 seconds and I can't do proper timeout handling within it because that function comes from a third-party library.</p> <p>What are my options?</p> <ol> <li>Add a timeout decorator to <code>long_running_function</code>? <a href="https://pypi.org/project/timeout-decorator/" rel="nofollow noreferrer">https://pypi.org/project/timeout-decorator/</a></li> <li>Use Pebble <a href="https://pebble.readthedocs.io/en/latest/#pools-and-asyncio" rel="nofollow noreferrer">https://pebble.readthedocs.io/en/latest/#pools-and-asyncio</a></li> </ol> <p>Which is the best option?</p> <p>This question is very close to <a href="https://stackoverflow.com/questions/34452590/timeout-handling-while-using-run-in-executor-and-asyncio">Timeout handling while using run_in_executor and asyncio</a>, for which no workaround was reported.</p> <p>It's been a few years. <em>Is</em> there a workaround?</p>
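A third option worth weighing is plain `asyncio.wait_for`, which puts a deadline on the `await` itself. The caveat — and the reason Pebble's terminating pools exist — is that it only abandons the result; the worker process keeps running to completion. A sketch (sleep times shortened; the function is a stand-in for the third-party call):

```python
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

def long_running_function():
    # stand-in for the third-party blocking call
    time.sleep(3)
    return "done"

async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        try:
            return await asyncio.wait_for(
                loop.run_in_executor(pool, long_running_function), timeout=1
            )
        except asyncio.TimeoutError:
            # The await is cancelled here, but the worker process is NOT
            # killed -- pool shutdown still waits for it to finish.
            return None

if __name__ == "__main__":
    print(asyncio.run(main()))  # None (after the worker finally exits)
```

So `wait_for` is enough when you only need control flow back on time; if the runaway worker must actually be terminated, Pebble's `ProcessPool` (which kills the process on timeout) is the more robust choice.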
<python><python-3.x><asynchronous><process><python-asyncio>
2023-10-06 19:23:26
1
7,771
nz_21
77,246,875
489,088
How to get csvreader's DictReader to process one line at a time?
<p>I am unzipping a very large CSV file in a memory-constrained environment. For this reason I unzip a line at a time like so:</p> <pre><code>with zipfile.ZipFile(temp_file.name) as zip_content:
    filename = zip_content.namelist()[0]
    with zip_content.open(filename, mode=&quot;r&quot;) as content:
        for line in content:
            print(line)
</code></pre> <p>Which correctly yields each line as a bytes object.</p> <pre><code>b'name,age,city\n'
b'John,12,Madrid\n'
...
</code></pre> <p>I'd like to process these lines with a <code>csv.DictReader</code> so I can reliably access each field.</p> <p>However, clearly I cannot create a new dict reader inside the loop for each line.</p> <p>I'm tempted to just roll my own solution parsing the headers and then creating these dictionaries for each line, but I wonder if there is some quick way to leverage <code>DictReader</code>.</p> <p>What is a way of accomplishing this while avoiding reading the entire file into memory first?</p>
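Since `csv.DictReader` accepts any iterable of text lines, one approach is to wrap the binary stream in `io.TextIOWrapper` and hand that straight to the reader — no loop needed, and nothing is read up front. A self-contained sketch (an in-memory zip stands in for the question's `temp_file`):

```python
import csv
import io
import zipfile

# Build a small in-memory zip to stand in for temp_file from the question.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("people.csv", "name,age,city\nJohn,12,Madrid\n")
buf.seek(0)

with zipfile.ZipFile(buf) as zip_content:
    filename = zip_content.namelist()[0]
    with zip_content.open(filename, mode="r") as content:
        # TextIOWrapper decodes the byte stream lazily, and DictReader pulls
        # from it one row at a time -- the file is never fully in memory.
        reader = csv.DictReader(io.TextIOWrapper(content, encoding="utf-8"))
        rows = list(reader)

print(rows)  # [{'name': 'John', 'age': '12', 'city': 'Madrid'}]
```

In the real memory-constrained loop you would iterate `for row in reader:` instead of materializing `rows`, keeping only one row in memory at a time.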
<python><python-3.x><csv><streaming>
2023-10-06 19:13:35
2
6,306
Edy Bourne
77,246,869
3,261,292
Python reading byte string that was saved as string
<p>I have the following string that I read from a DB:</p> <pre><code>html_page = &quot;&quot;&quot;b'&lt;html lang=&quot;en&quot;&gt;&lt;body&gt; Accompagnement g\xc3\xa9n\xc3\xa9ral Droit des Affaires :&amp;lt;/strong&amp;gt;&amp;lt;/p&amp;gt;&amp;lt;ul&amp;gt; &amp;lt;li&amp;gt;D\xc3\xa9velopper la connaissance des activit\xc3\xa9s techniques et commerciales d\xe2\x80\x99InterCloud, l\xe2\x80\x99identification et la ma\xc3\xaetrise des risques juridiques inh\xc3\xa9rents&amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Assurer une veille sur les sujets qui peuvent toucher les interlocuteurs internes (droit des contrats, droit international, Droit du num\xc3\xa9rique, Donn\xc3\xa9es personnelles, Cloud Act, Data Act, Droit de la concurrence, Anti-corruption\xe2\x80\xa6)&amp;lt;/li&amp;gt; &amp;lt;li&amp;gt;Fournir des notes d\xe2\x80\x99information, recommandations ou documentation juridiques afin de mettre en conformit\xc3\xa9 l\xe2\x80\x99entreprise ou permettre d\xe2\x80\x99anticiper les \xc3\xa9volutions r\xc3\xa8glementaires.&amp;lt;/li&amp;gt; &amp;lt;/ul&amp;gt;&lt;/body&gt;&lt;/html&gt;'&quot;&quot;&quot; </code></pre> <p>And this is how I read it:</p> <pre><code>SELECT CONVERT(uncompress(html_page) USING utf8) FROM TableX </code></pre> <p>The <code>html_page</code> column type in the DB is <code>longblob</code>.</p> <p>It seems that the HTML string was inserted into the DB as a byte string, and when I read it, I get it as a string but the byte characters. When I print the type of <code>html_page</code>, it shows <code>str</code> in python.</p> <p>My question is, How can I convert it to string without the bytes characters so I can decode characters like <code>\xe2\x80\x99</code>?</p> <p>I don't know if the solution can be through the SQL query or using Python. However, I tried both but nothing worked. I went through all the other similar questions here on Stackoverflow, and nothing helped.</p>
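If the column really does store the *repr* of a Python bytes object (a `b'...'` string), one Python-side option is `ast.literal_eval` to safely re-parse the literal back into real bytes, then decode. A sketch (the string below is a shortened stand-in for the stored value):

```python
import ast

# Shortened stand-in for the stored value: the text "b'...'" -- i.e. the
# repr of a bytes object -- read back from the longblob column as a str.
html_page = r"""b'Accompagnement g\xc3\xa9n\xc3\xa9ral d\xe2\x80\x99InterCloud'"""

raw = ast.literal_eval(html_page)   # safely re-parse the literal -> bytes
text = raw.decode("utf-8")          # \xc3\xa9 becomes 'é', \xe2\x80\x99 becomes '’'
print(text)  # Accompagnement général d’InterCloud
```

Unlike `eval`, `literal_eval` only accepts Python literals, so it is safe to run on untrusted database content; it raises `ValueError` if the stored text is not actually a valid literal.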
<python><mysql><string><character-encoding>
2023-10-06 19:12:19
0
5,527
Minions
77,246,782
2,696,565
Plotly Dash theme: LUX (Dark mode)
<p>I want to use the built-in bootswatch theme <a href="https://bootswatch.com/lux/" rel="nofollow noreferrer">LUX</a> but it looks like only light mode is included in Dash, i.e. when I use the below sample code, the theme is applied in light mode. How can I have the dark mode instead? I don't care much about the ability to toggle between light and dark modes, I just want to use this theme in dark mode as the default and only option. Thanks for helping.</p> <pre><code>import dash
import dash_bootstrap_components as dbc

app = dash.Dash(external_stylesheets=[dbc.themes.LUX])
</code></pre>
<python><plotly><plotly-dash>
2023-10-06 18:54:07
1
629
user2696565
77,246,769
2,232,418
Output a string list variable from one Python script to another Python script in Azure Pipelines
<p>I've tried several articles and threads on Stack Overflow but cannot seem to find a solution to the problem I'm having. This might have a simple answer but I can't wrap my head around it.</p> <p>I have two steps in my YAML pipeline script. Both call Python scripts.</p> <p>In the first Python script I am making an API call to ultimately retrieve a list of strings. I am storing these as a pipeline variable:</p> <pre><code>repo_languages = ['csharp', 'java']
print(f'##vso[task.setvariable variable=REPO_LANGAUGES;]{repo_languages}')
</code></pre> <p>I then want to retrieve this variable in a separate Python script in a later build step.</p> <p>I can get this to work for a string but not a list of strings with:</p> <pre><code>repo_languages = os.environ.get('REPO_LANGAUGES')
</code></pre> <p>I feel like this probably has a trivial answer, but I just can't see where I'm going wrong.</p>
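Pipeline variables are always plain strings, so one common approach (a sketch; variable names are illustrative) is to serialize the list to JSON text when setting the variable and parse it back in the later step:

```python
import json
import os

# --- step 1's script: emit the variable as JSON text ---
repo_languages = ["csharp", "java"]
print(f"##vso[task.setvariable variable=REPO_LANGUAGES]{json.dumps(repo_languages)}")

# --- step 2's script: parse the JSON text back into a list ---
# (simulated here by setting the env var directly; in the real pipeline,
# Azure DevOps sets it between the two steps)
os.environ["REPO_LANGUAGES"] = json.dumps(repo_languages)
loaded = json.loads(os.environ.get("REPO_LANGUAGES", "[]"))
print(loaded)  # ['csharp', 'java']
```

`json.dumps` / `json.loads` round-trips lists, nested structures, and special characters reliably, which `str(list)` plus ad-hoc parsing does not.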
<python><azure-devops><azure-pipelines>
2023-10-06 18:52:07
1
2,787
Ben
77,246,508
1,791,475
How to add a prefix or suffix for unnesting in Polars
<p>I have the following code snippet where I am attempting to unnest <code>metadata</code>. However, within the <code>metadata</code>, there is a value named &quot;<code>language</code>&quot; which clashes with the existing &quot;<code>language</code>&quot; column name in the dataframe, thus causing a duplication error.</p> <p>I would like to add a prefix or suffix to the column names generated during the unnesting to resolve this, but I haven't been able to do so. Even putting an alias for &quot;<code>language</code>&quot; didn't work. Any solutions would be appreciated.</p> <p>How to Avoid Column Name Duplication Errors When Unnesting in Polars?</p> <p><strong>Sample Data</strong></p> <pre class="lang-py prettyprint-override"><code>converted_data = [
    {
        &quot;tags&quot;: [&quot;tag1&quot;, &quot;tag2&quot;],
        &quot;valueNum&quot;: 123.45,
        &quot;language&quot;: &quot;English&quot;,
        &quot;metadata&quot;: {
            &quot;country&quot;: &quot;Spain&quot;,
            &quot;region&quot;: &quot;Europe&quot;
        }
    },
    {
        &quot;tags&quot;: [&quot;tag3&quot;, &quot;tag4&quot;],
        &quot;valueNum&quot;: 678.90,
        &quot;language&quot;: &quot;French&quot;,
        &quot;metadata&quot;: {
            &quot;language&quot;: &quot;German&quot;,
            &quot;country&quot;: &quot;Germany&quot;,
            &quot;region&quot;: &quot;Europe&quot;
        }
    }
]
</code></pre> <p><strong>Code:</strong></p> <pre class="lang-py prettyprint-override"><code>(
    pl.DataFrame(converted_data)
    .with_columns(
        pl.col(&quot;tags&quot;).cast(pl.List(pl.String)),
        pl.col(&quot;valueNum&quot;).cast(pl.Float64),
        pl.col(&quot;language&quot;).alias(&quot;lang&quot;)
    )
    .unnest(&quot;metadata&quot;)
)
</code></pre> <pre><code>DuplicateError: column with name 'language' has more than one occurrence
</code></pre>
<python><dataframe><python-polars>
2023-10-06 17:53:07
3
1,112
Alireza Ghaffari
77,246,485
3,800,106
Run dash application with main function
<p>I have a dash application(.py) with running code which looks like,</p> <pre><code># whatever.py from dash import Dash, html, dcc import plotly.express as px import pandas as pd app = Dash() # assume you have a &quot;long-form&quot; data frame # see https://plotly.com/python/px-arguments/ for more options df = pd.DataFrame({ &quot;Fruit&quot;: [&quot;Apples&quot;, &quot;Oranges&quot;, &quot;Bananas&quot;, &quot;Apples&quot;, &quot;Oranges&quot;, &quot;Bananas&quot;], &quot;Amount&quot;: [4, 1, 2, 2, 4, 5], &quot;City&quot;: [&quot;SF&quot;, &quot;SF&quot;, &quot;SF&quot;, &quot;Montreal&quot;, &quot;Montreal&quot;, &quot;Montreal&quot;] }) fig = px.bar(df, x=&quot;Fruit&quot;, y=&quot;Amount&quot;, color=&quot;City&quot;, barmode=&quot;group&quot;) app.layout = html.Div(children=[ html.H1(children='Hello Global Apps'), html.Div(children=''' Dash: A web application framework for your data. '''), dcc.Graph( id='example-graph', figure=fig ) ]) if __name__ == &quot;__main__&quot;: app.run_server(debug=True) </code></pre> <p>I execute this using <code>HOST=&lt;xxxx&gt; PORT=&lt;xxxx&gt; python -m whatever</code></p> <p>I would like to eliminate the if segment from the .py file while still running the dash server serving traffic</p> <pre><code>if __name__ == &quot;__main__&quot;: app.run_server(debug=True) </code></pre> <p>Is there any way to achieve this? I am willing to write another python wrapper file, or execute anything via command line.</p> <p><strong>Please note</strong>: I am not flexible with whatever.py and can't make any changes to that except abstracting away the two line segment.</p> <p>One option to abstract away the if segment that comes to mind is by appending this segment to the .py file on the fly. We can do this during execution via command line, but that doesn't seem very stable or reliable.</p>
<python><python-3.x><plotly-dash>
2023-10-06 17:46:09
1
748
Gauraang Khurana
77,246,370
2,032,771
Creating and plotting a time-series from a multi-dimensional input
<p>I have some data like:</p> <pre><code>Timestamp, Dimension, Value
t1, d1, v1
t2, d1, v2
t3, d1, v3
...
tn, d2, v4
tn+1, d2, v4
...
tm, d1, v5
...
</code></pre> <p>Is there a way with Pandas to convert these to a set of time series so that I can plot <code>d1</code>, <code>d2</code> ... <code>dn</code> with timestamp on the x-axis and value on the y-axis?</p> <p>UPDATE:</p> <p>Output of <code>df.head().to_dict()</code>:</p> <pre><code>{'Timestamp': {0: 1696603859.706954, 1: 1696603859.707053, 2: 1696603859.707162,
               3: 1696603859.707236, 4: 1696603859.707281},
 ' Dim': {0: 'WT', 1: 'WH-0x00', 2: 'WH-0x80', 3: 'WH-Others', 4: 'WT-0x00'},
 ' Value': {0: 24, 1: 1, 2: 1, 3: 1, 4: 41},
 ' Count': {0: 24, 1: 44, 2: 44, 3: 44, 4: 44}}
</code></pre>
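This long-to-wide reshape is what `DataFrame.pivot` does: one column per dimension, indexed by timestamp. A sketch with toy stand-in data:

```python
import pandas as pd

# Toy stand-in for the long-format data in the question
df = pd.DataFrame({
    "Timestamp": [1, 2, 3, 1, 2, 3],
    "Dimension": ["d1", "d1", "d1", "d2", "d2", "d2"],
    "Value": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# One column per dimension, indexed by timestamp; wide.plot() would then
# draw one line per dimension with Timestamp on the x-axis.
wide = df.pivot(index="Timestamp", columns="Dimension", values="Value")
print(wide.columns.tolist())  # ['d1', 'd2']
```

If the same `(Timestamp, Dimension)` pair can repeat — plausible given the sub-millisecond timestamps in the update — `pivot` raises, and `pivot_table` with an explicit aggregation function is the safer variant.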
<python><pandas>
2023-10-06 17:22:05
1
1,680
avrono
77,246,025
15,341,457
Keras - adding an attention layer
<p>I am building an encoder-decoder architecture to do text summarization on restaurant reviews. I have been following this <a href="https://blog.paperspace.com/implement-seq2seq-for-text-summarization-keras/" rel="nofollow noreferrer">guide</a>. My model has 3 LSTM layers for the encoder and one LSTM layer for decoding. This is what it looks like right now:</p> <pre><code>latent_dim = 300
embedding_dim = 200

encoder_inputs = Input(shape=(max_review_len, ))
enc_emb = Embedding(x_voc, embedding_dim, trainable=True)(encoder_inputs)

encoder_lstm1 = LSTM(latent_dim, return_sequences=True, return_state=True, dropout=0.4, recurrent_dropout=0.4)
(encoder_output1, state_h1, state_c1) = encoder_lstm1(enc_emb)

encoder_lstm2 = LSTM(latent_dim, return_sequences=True, return_state=True, dropout=0.4, recurrent_dropout=0.4)
(encoder_output2, state_h2, state_c2) = encoder_lstm2(encoder_output1)

encoder_lstm3 = LSTM(latent_dim, return_state=True, return_sequences=True, dropout=0.4, recurrent_dropout=0.4)
(encoder_outputs, state_h, state_c) = encoder_lstm3(encoder_output2)

decoder_inputs = Input(shape=(None, ))
dec_emb_layer = Embedding(y_voc, embedding_dim, trainable=True)
dec_emb = dec_emb_layer(decoder_inputs)

decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True, dropout=0.4, recurrent_dropout=0.2)
(decoder_outputs, decoder_fwd_state, decoder_back_state) = \
    decoder_lstm(dec_emb, initial_state=[state_h, state_c])

decoder_concat_input = Concatenate(axis=-1, name='concat_layer')([decoder_outputs, attn_out])

decoder_dense = TimeDistributed(Dense(y_voc, activation='softmax'))
decoder_outputs = decoder_dense(decoder_outputs)

model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.summary()
</code></pre> <p>In order to improve the summarization results I would like to add an <strong>attention layer</strong>, ideally like this (as suggested by this <a href="https://www.analyticsvidhya.com/blog/2019/06/comprehensive-guide-text-summarization-using-deep-learning-python/" rel="nofollow noreferrer">guide</a>):</p>
<pre><code># Attention layer
attn_layer = AttentionLayer(name='attention_layer')
attn_out, attn_states = attn_layer([encoder_outputs, decoder_outputs])

decoder_concat_input = Concatenate(axis=-1, name='concat_layer')([decoder_outputs, attn_out])
</code></pre> <p>The guide suggests doing the following import:</p> <pre><code>from attention import Attention
</code></pre> <p>However this leads to an invalid syntax error on the declaration of the <code>attn_layer</code> variable. Even resolving the error leads to an <code>AttentionLayer not defined</code> error.</p> <p>I've turned to the <a href="https://github.com/thushv89/attention_keras" rel="nofollow noreferrer">attention_keras</a> module which seems to be just what I need, but <code>pip install attention_keras</code> is unsuccessful:</p> <pre><code>ERROR: Could not find a version that satisfies the requirement attention_keras (from versions: none)
</code></pre>
<python><tensorflow><keras><deep-learning><lstm>
2023-10-06 16:18:59
1
332
Rodolfo
77,245,901
7,553,746
How to create a payload for requests using **kwargs?
<p>I'm trying to use <code>**kwargs</code> to create a payload to send to an API, but some parameters are optional. This code works fine for <code>type</code>, as when it's not present it defaults to <code>all</code>, but I also want to add <code>start_date</code> and <code>end_date</code>, then realised this would not work as those parameters are optional and I may limit the response accidentally.</p> <pre><code>class GenericApi():
    def __init__(self, generic_api_key):
        self.generic_api_key = generic_api_key
        self.generic_api_base_url = 'https://www.genericapi.com/api/v1/conferences/'

    # method to return GenericApi conferences
    def get_contacts(self, **kwargs):
        payload = {
            'type': kwargs.get('type', 'all'),
        }
        resp = requests.get(self.generic_api_base_url,
                            headers={'API-KEY': self.generic_api_key},
                            json=payload)
        return resp.json()

ga = GenericApi(generic_api_key='1234567890')
contact_list = ga.get_contacts(type='past')
print(json.dumps(contact_list, indent=4))
</code></pre> <p>Would an approach like this be appropriate or am I missing a more Pythonic way?</p> <pre><code>def test(self, **kwargs):
    payload = {}
    for key, value in kwargs.items():
        payload[key] = value
    resp = requests.get(self.generic_api_base_url,
                        headers={'API-KEY': self.generic_api_key},
                        params=payload)
    return resp.json()
</code></pre>
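A common idiom for this situation is a dict merge: defaults first, then whatever the caller actually passed, so optional keys are simply absent when omitted. A small standalone sketch (the function name is illustrative):

```python
def build_payload(**kwargs):
    # Defaults listed first; caller-supplied kwargs override or extend them.
    # Keys the caller never passed simply don't appear in the payload.
    return {"type": "all", **kwargs}

print(build_payload())                                      # {'type': 'all'}
print(build_payload(type="past", start_date="2023-01-01"))
# {'type': 'past', 'start_date': '2023-01-01'}
```

This is equivalent to the loop in the second snippet (`{**kwargs}` is just a dict copy) but also handles the default in one expression; note that `params=payload` in the query string is usually the right choice for a GET request rather than `json=payload`.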
<python><python-requests>
2023-10-06 16:00:10
1
3,326
Johnny John Boy
77,245,857
13,400,491
Use !reference or anchors with arguments in GitLab YAML
<p>In my GitLab YAML, I have several jobs that use the same basic command in their scripts but with different arguments. For example:</p> <pre class="lang-yaml prettyprint-override"><code>stages:
  - lint

isort:
  stage: lint
  script:
    - pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --default-timeout=1000 isort
    - isort . --check-only --verbose

black:
  stage: lint
  script:
    - pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --default-timeout=1000 black
    - black . --check --verbose --diff --color
</code></pre> <p>The <code>pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --default-timeout=1000</code> command is the same in every job but only the package name changes. (In my real code, this line is actually even longer and is used by several jobs multiple times, not just two jobs once each.)</p> <p>I'd like to be able to define a macro/function within GitLab's YAML to allow me to <strong>call this same line in each job with an argument for the package name</strong>. I also <strong>need to be able to call this multiple times within a job</strong>, <strong>and also have other code in the <code>script</code> section</strong> before and after these calls.</p> <p>Ideally, I'd kinda expect my code for the <code>isort</code> job to look like this:</p> <pre class="lang-yaml prettyprint-override"><code>isort:
  stage: lint
  script:
    - *.my_pip_install isort  # or: !reference [.my_pip_install] isort
    - isort . --check-only --verbose
</code></pre> <p>So far, I've tried a few approaches:</p> <ul> <li><p>Using <code>!reference</code> to reuse the command after I define it in a hidden job elsewhere e.g. inside <code>.my_pip_install</code>'s <code>script</code> section. I then reference it by using <code>!reference [.my_pip_install, script]</code> in my job.
This would work except I can't find a way to pass the argument &quot;isort&quot; to the reference or append that string to the end of the call</p> </li> <li><p>Using <code>!reference</code> with a variable. For example, the code for <code>.my_pip_install</code>'s <code>script</code> might be <code>pip install ... $PACKAGE</code>. Then in my calling job, I simply define my <code>variables</code> e.g. <code>PACKAGE = &quot;isort&quot;</code>. This partially works but I can't use this method more than once per job</p> </li> <li><p>Using YAML anchors. I define a hidden script like with the <code>!reference</code> method but mark the end of the line with <code>&amp;my_pip_install</code>. Then in my job's script I can use <code>*my_pip_install</code> or <code>&lt;&lt;: *my_pip_install</code> to have the code be placed in. This has two problems: I can't find a way to supply my argument, and -- if using the <code>&lt;&lt;:</code> method -- I can't have additional lines of code in my script</p> </li> <li><p>Defining a bash function e.g. <code>my_pip_install (){ ...}</code> in the <code>default</code> <code>before_script</code>. This works when I call <code>my_pip_install &quot;isort&quot;</code> in my job scripts but it's hacky since this is outside of YAML/GitLab validation and checking. For example, in the runner logs the actual command is not shown when this function is executed, which makes debugging hard. It also doesn't show any substitution when viewing the &quot;Merged YAML&quot; tab in the GitLab Pipeline Editor. I'd prefer to use what's provided by GitLab/YAML in an elegant way for the pipeline</p> </li> </ul>
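One hybrid sketch (job and function names are illustrative) combines the last two approaches: keep the bash function's one-line definition inside a hidden job's `script` so it is visible in the merged-YAML view and survives YAML validation, then pull it into each job with `!reference` and call it with arguments as many times as needed:

```yaml
.my_pip_install:
  script:
    - pip_install() { pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org --default-timeout=1000 "$@"; }

isort:
  stage: lint
  script:
    - !reference [.my_pip_install, script]
    - pip_install isort
    - isort . --check-only --verbose

black:
  stage: lint
  script:
    - !reference [.my_pip_install, script]
    - pip_install black
    - black . --check --verbose --diff --color
```

This doesn't make the expansion visible in the runner log (the function body still executes as shell), but the definition itself lives in the pipeline YAML rather than a `before_script`, which addresses part of the debuggability concern.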
<python><pip><gitlab><yaml><pipeline>
2023-10-06 15:53:55
0
367
user174358
77,245,851
3,177,186
Shutil copy2 says it copied the files, but I can't actually see them in Windows explorer
<pre class="lang-py prettyprint-override"><code>def do_it():
    global files_to_move
    global copymove
    for file_dict in files_to_move.values():
        if check_path(file_dict['new_path']) == False:
            return
        if (file_dict['disabled']):
            continue
        new_name = file_dict['new_path']+'\\'+file_dict['new_name']
        if os.path.exists(new_name):
            status_update(f&quot;Error! the file we were going to move already exists? {new_name}&quot;)
            continue
        try:
            if copymove == 'MOVE':
                os.rename(file_dict['path_file'], new_name)
                status_update(f&quot;Moved {file_dict['path_file']} to { new_name}&quot;)
            else:
                shutil.copy2(file_dict['path_file'], new_name)
                status_update(f&quot;Copied {file_dict['path_file']} to { new_name}&quot;)
        except OSError as e:
            status_update(f'Error renaming file: {e}')
    update_files()
    return
</code></pre> <p>Using the above code, I have a listing of files (a dict of file information including name, path, newname, etc) that I want to copy from the old location to the new location. When I run this, it seems to work and says that the files are copied. However, they don't actually show in Windows explorer on my computer so I would assume it failed.</p> <p>However, when I update the file listing, it detects the files when I do os.listdir(path). Somehow it's detecting files that weren't copied. And if I run the operation again (which detects collisions and renames the files before copy to accommodate), I end up with duplicate phantom files. What is going on!?</p> <p>How do I make the file copy actually complete? What should I be using in my code instead to make this work?</p>
<python><windows><copy>
2023-10-06 15:53:02
1
2,198
not_a_generic_user
77,245,834
16,383,578
How to update multiple dictionary keys that reference to the same value simultaneously?
<p>I am working on a GUI program using PyQt6 and I am using Qt Style Sheets to stylize it; the stylesheet is hundreds of lines long and contains a lot of redundancies, so I want to write a class to manage the styling.</p> <p>The basic idea is simple: I have a few template strings in a dictionary, and an instance of the class holds values of the attributes. A stylesheet is created by retrieving the right template strings and filling in the attributes contained within the instance's dictionary using string formatting.</p> <p>The problem is that I want to change the styling, and I want the changes made to be reflected on all widgets that use the same value. For example I have loads of widgets using the same text color, and I want to change the text color in one place and have all other places that use the same value automatically get updated.</p> <p>Example:</p> <pre><code>class Dummy:
    def __init__(self):
        self._color = '#4000ff'

    @property
    def color(self):
        return self._color

dum = Dummy()
config = {'QLabel': {'color': dum.color}, 'QCheckBox': {'color': dum.color}}
dum._color = '#ff00ff'
</code></pre> <p>The above code doesn't work:</p> <pre><code>In [115]: dum._color = '#ff00ff'

In [116]: config
Out[116]: {'QLabel': {'color': '#4000ff'}, 'QCheckBox': {'color': '#4000ff'}}
</code></pre> <p>The intended result is:</p> <pre><code>{'QLabel': {'color': '#ff00ff'}, 'QCheckBox': {'color': '#ff00ff'}}
</code></pre> <p>Of course I can change them one by one but that would be cumbersome. What are ways that I can mutate the string in one place and have all values referencing it get automatically updated?</p> <hr /> <p>What I want is basically a pointer, really.
Something like the following C++ code:</p> <pre class="lang-cpp prettyprint-override"><code>#include &lt;unordered_map&gt;
#include &lt;string&gt;

using std::unordered_map;
using std::string;

string color = &quot;#4000ff&quot;;
unordered_map&lt;string, unordered_map&lt;string, string&gt;&gt; config = {
    {&quot;QLabel&quot;, { {&quot;Color&quot;, color} }},
    {&quot;QCheckBox&quot;, { {&quot;Color&quot;, color} }}
};

string *colorp = &amp;color;
*colorp = &quot;#ff00ff&quot;;
</code></pre> <p>I know in C++ strings are mutable but that is beside the point. And of course the above code won't compile because it lacks a main function; however, it serves to illustrate the point.</p> <p>And the existing answer doesn't solve the problem, because I have many attributes in a dictionary that is passed to the class constructor. I would need to use a function to dynamically add properties to the instance, which would add much more complexity than needed.</p> <hr /> <p>And the method used in the answer won't work, either.</p> <p>Here is a more comprehensive example:</p> <pre><code>style = &quot;&quot;&quot;
QComboBox {{
    border: 2px {borderstyle} {bordercolor};
    border-radius: 8px;
    background: {background};
    color: {color};
    selection-background-color: {alternate_background};
    selection-color: {highlight};
}}
&quot;&quot;&quot;

values = {
    'borderstyle': 'outset',
    'bordercolor': '#552b80',
    'background': '#422e80',
    'color': '#c000ff',
    'alternate_background': '#7800d7',
    'highlight': '#ffb2ff'
}

style.format_map(values)
</code></pre> <p>I need to do this for many widgets, the values are passed by a dictionary and I can't just store the object in the dictionary and query its attributes.</p>
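One sketch (class and variable names are illustrative): wrap each shared value in a tiny mutable holder whose `__format__` defers to the current value. `str.format` and `format_map` resolve it at format time, so one assignment updates every template that references it, and the values dictionary itself never needs touching:

```python
class Ref:
    """A tiny mutable holder -- roughly the 'pointer' being asked for."""
    def __init__(self, value):
        self.value = value

    def __format__(self, spec):
        # str.format()/format_map() call this, so the *current* value is
        # substituted every time the stylesheet is rebuilt
        return format(self.value, spec)

color = Ref("#4000ff")
values = {"color": color, "background": "#422e80"}
style = "QLabel {{ color: {color}; background: {background}; }}"

print(style.format_map(values))  # QLabel { color: #4000ff; background: #422e80; }

color.value = "#ff00ff"          # one assignment...
print(style.format_map(values))  # ...and every template now picks up #ff00ff
```

Plain strings and `Ref` objects can live in the same dictionary, since `format` handles both, so only the values that actually need to be shared have to be wrapped.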
<python><dictionary>
2023-10-06 15:50:46
2
3,930
Ξένη Γήινος
77,245,726
19,675,781
How to use a row as legend in Seaborn barplot
<p>I have a dataframe like this:</p> <pre><code>index = ['Col-45', 'Col-68', 'Col-17', 'Col-69', 'Col-43', 'Col-49', 'Col-91', 'Col-13', 'Col-14', 'Col-18', 'Col-38', 'Col-37', 'Col-40', 'Col-44', 'Col-32', 'Col-82', 'Col-75', 'Col-19', 'Col-5', 'Col-6', 'Col-16', 'Col-4', 'Col-7', 'Col-41', 'Col-10', 'Col-31', 'Col-12', 'Col-11', 'Col-42', 'Col-30', 'Col-76', 'Col-46', 'Col-83', 'Col-73', 'Col-63', 'Col-9', 'Col-28', 'Col-51', 'Col-74', 'Col-65', 'Col-50', 'Col-64', 'Col-86', 'Col-79', 'Col-80', 'Col-81', 'Col-55', 'Col-1', 'Col-57', 'Col-2', 'Col-61', 'Col-53', 'Col-88', 'Col-47', 'Col-3', 'Col-58', 'Col-29', 'Col-59', 'Col-8', 'Col-276', 'Col-56', 'Col-62', 'Col-52', 'Col-54'] Brand = ['LG','LG','LG','LG','LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'LG', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Vivo', 'Sony', 'Sony', 'Sony', 'Sony', 'Sony', 'Sony', 'Sony', 'Pixel', 'Pixel', 'Pixel', 'Pixel', 'Huawei', 'Huawei', 'Huawei', 'Apple', 'Apple', 'Apple', 'Xiaomi', 'Xiaomi', 'Xiaomi', 'Lenovo', 'Lenovo', 'Lenovo', 'Panasonic', 'Panasonic', 'Panasonic', 'Beetle', 'Beetle', 'Samsung', 'Samsung', 'Nothing', 'Nothing', 'Nikon', 'Nikon', 'Canon', 'Canon', 'Coby', 'Coby', 'Onida', 'Amara', 'Roxy'] Score = [4.75, 0.91, 0.79, 0.65, 0.62, 0.57, 0.38, 0.33, 0.27, 0.25, 0.25, 0.22, 0.16, 0.11, 0.02, 0.01, 3.89, 3.08, 2.1 , 1.75, 0.42, 0.27, 0.18, 4.44, 1.18, 0.8 , 0.74, 0.52, 0.25, 0.08, 1.13, 0.75, 0.54, 0.04, 1.03, 0.11, 0. , 5.53, 5.24, 4.98, 0.98, 0.78, 0.06, 0.76, 0.28, 0.04, 1.1 , 0.38, 0.25, 0.98, 0.01, 1.17, 0.61, 0.29, 0.19, 0.12, 0.01, 0.13, 0. 
, 4.37, 3.59, 0.53, 0.39, 1.3 ] Choice = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0] Result = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0] colors = ['blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'pink', 'pink', 'pink', 'pink', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'green', 'red', 'red', 'blue', 'blue', 'blue', 'blue'] col_d = {'blue':'Not Detected by Choice &amp; Result', 'green':'Detected by Choice', 'pink':'Detected by Result','red':'Detected by Choice &amp; Result'} df = pd.DataFrame({'index':index,'Brand':Brand,'Score':Score,'Choice':Choice,'Result':Result}).set_index('index').T df.loc['Legend'] = [col_d[i] for i in colors] display(df) </code></pre> <p>From this dataframe, I created a barplot for index row Score Using seaborn library.<br /> This worked well too.</p> <pre><code>fig = plt.figure() sns.set(rc={'figure.figsize': (15,4)}) g1 = sns.barplot(data=df.loc[['Score']],palette=colors,hue=df.loc['Legend']) g1.patch.set_edgecolor('black') g1.patch.set_linewidth(0.5) g1.set_facecolor('white') g1.set_ylabel(f'Brand',weight='bold',fontsize=15) g1.set_xlabel(None) g1.set_xticklabels(g1.get_xticklabels(),rotation=90,fontsize=10) plt.show() </code></pre> <p>Then I want to use the index row Legend as hue in the barplot. 
This gives me the ValueError: Cannot use <code>hue</code> without <code>x</code> and <code>y</code></p> <p>Can anyone help me with this?</p>
<python><matplotlib><seaborn><bar-chart>
2023-10-06 15:32:35
1
357
Yash
77,245,654
10,973,108
Scrape data from a "dynamic" XHR - this request does not appear in the HTML
<p>I'm trying to scrape data from a URL that gets its data from an API. The problem with the XHR request is that the generated URL is dynamic.</p> <p>I mean, whenever we make a request to the same URL, the XHR API URL changes.</p> <p>The same resource after a reload request:</p> <p><a href="https://i.sstatic.net/B0xTH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/B0xTH.png" alt="enter image description here" /></a> <a href="https://i.sstatic.net/LRyBl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LRyBl.png" alt="enter image description here" /></a></p> <p>It's the first time I have seen something like this. I don't understand the logic that generates the URL used to make the request to the API.</p> <p>I'm trying to use &quot;playwright&quot; to intercept this XHR, but I don't know if this library can do it.</p> <p>In this case, if I can capture the XHR requests that this page makes, I'll be able to make the same requests the page is making.</p> <p>Is this a good way to go, or would it be better to try something else?</p> <p>Any tips for investigating the HTML further to figure out how this URL is generated?</p> <p>I can pass the URL if you need it.</p>
<python><web-scraping><python-requests><request><xmlhttprequest>
2023-10-06 15:21:53
0
348
Daniel Bailo
77,245,595
995,431
FastAPI TestClient overriding lifespan function
<p>In a more complicated setup using the <a href="https://python-dependency-injector.ets-labs.org/" rel="noreferrer">python dependency injector</a> framework I use the lifespan function for the FastAPI app object to correctly wire everything.</p> <p>When testing I'd like to replace some of the objects with different versions (fakes), and the natural way to accomplish that seems to me like I should override or mock the lifespan function of the app object. However I can't seem to figure out if/how I can do that.</p> <p>MRE follows</p> <pre class="lang-py prettyprint-override"><code>import pytest from contextlib import asynccontextmanager from fastapi.testclient import TestClient from fastapi import FastAPI, Response, status greeting = None @asynccontextmanager async def _lifespan(app: FastAPI): # Initialize dependency injection global greeting greeting = &quot;Hello&quot; yield @asynccontextmanager async def _lifespan_override(app: FastAPI): # Initialize dependency injection global greeting greeting = &quot;Hi&quot; yield app = FastAPI(title=&quot;Test&quot;, lifespan=_lifespan) @app.get(&quot;/&quot;) async def root(): return Response(status_code=status.HTTP_200_OK, content=greeting) @pytest.fixture def fake_client(): with TestClient(app) as client: yield client def test_override(fake_client): response = fake_client.get(&quot;/&quot;) assert response.text == &quot;Hi&quot; </code></pre> <p>So basically in the <code>fake_client</code> fixture I'd like to change it to use the <code>_lifespan_override</code> instead of the original <code>_lifespan</code>, making the dummy test-case above pass</p> <p>I'd have expected something like <code>with TestClient(app, lifespan=_lifespan_override) as client:</code> to work, but that's not supported. 
Is there some way I can mock it to get the behavior I want?</p> <p>(The mre above works if you replace &quot;Hi&quot; with &quot;Hello&quot; in the assert statement)</p> <p>pyproject.toml below with needed dependencies</p> <pre><code>[tool.poetry] name = &quot;mre&quot; version = &quot;0.1.0&quot; description = &quot;mre&quot; authors = [] [tool.poetry.dependencies] python = &quot;^3.10&quot; fastapi = &quot;^0.103.2&quot; [tool.poetry.group.dev.dependencies] pytest = &quot;^7.1.2&quot; httpx = &quot;^0.25.0&quot; [build-system] requires = [&quot;poetry-core&quot;] build-backend = &quot;poetry.core.masonry.api&quot; </code></pre> <p>EDIT: Tried extending my code with the suggestion from Hamed Akhavan below as follows</p> <pre class="lang-py prettyprint-override"><code>@pytest.fixture def fake_client(): app.dependency_overrides[_lifespan] = _lifespan_override with TestClient(app) as client: yield client </code></pre> <p>but it doesn't work, even though it looks like it should be the right approach. Syntax problem?</p>
<python><python-3.x><dependency-injection><fastapi>
2023-10-06 15:14:33
4
325
ajn
77,245,262
7,009,806
How to automatically remove the namespace attached to a friend function defined within a C++ class, when using SWIG?
<p>I am trying to use SWIG to wrap a C++ class designed by a colleague of mine, for scripting in Python. These classes are defined in a bunch of source files which I have, but do not want to modify.</p> <p>All the classes are defined within a given namespace and include friend functions, for operating on their objects globally without having to call methods on any particular object.</p> <p>SWIG complains about the fact that the automatic identifier generated for such functions cannot be wrapped automatically (<code>Warning 503: Can't wrap 'ns::inf' unless renamed to a valid identifier.</code>. See below code). And, when I use the <code>%rename</code> keyword in the SWIG interface file (<code>%rename(inf) ns::inf;</code>. See below), the Python library file is correctly generated, but not the C++ <code>.cxx</code> intermediate file. The new error is then:</p> <pre><code>mwe_wrap.cxx: In function ‘PyObject* _wrap_inf(PyObject*, PyObject*)’: mwe_wrap.cxx:3369:16: error: ‘inf’ is not a member of ‘ns’ 3369 | result = ns::inf((ns::mwe const &amp;)*arg1,(ns::mwe const &amp;)*arg2); | ^~~ </code></pre> <p>I have reduced my problem to a minimal working example (with which I got the preceding error messages), available in the following file extracts.</p> <p><strong>mwe.cpp</strong></p> <pre><code>#include &quot;mwe.h&quot; namespace ns { mwe::mwe(void) { a = 1; b = -1; } mwe::mwe(long par1, long par2) { a = par1; b = par2; } mwe inf(const mwe &amp;obj1, const mwe &amp;obj2) { mwe objtemp(obj1.a, obj1.b); if (obj2.a &gt; obj1.a) objtemp.a = obj2.a; if (obj2.b &lt; obj1.b) objtemp.b = obj2.b; return(objtemp); } } </code></pre> <p><strong>mwe.h</strong></p> <pre><code>#ifndef __MWE_H__ #define __MWE_H__ namespace ns { class mwe { protected: long a; long b; public: mwe(void); mwe(long, long); friend mwe inf(const mwe &amp;, const mwe &amp;); }; } #endif </code></pre> <p><strong>mwe.i</strong></p> <pre><code>// Module name %module mwe // Header %rename(newinf) ns::inf; // 
The following block is compulsory, because of the use of the `ns` namespace. See https://stackoverflow.com/a/3762478/7009806 %{ #include &quot;mwe.h&quot; %} %include &quot;mwe.h&quot; </code></pre> <p>My compilation commands are the following ones (taking advantage of <a href="https://packages.debian.org/trixie/python-dev-is-python3" rel="nofollow noreferrer">python-config</a>):</p> <pre><code>swig -c++ -python -Wall mwe.i g++ -fPIC -c -Wall mwe_wrap.cxx $(python-config --cflags) </code></pre> <p>How can I manage to <strong>automatically</strong> remove the <code>ns::</code> namespace from the generated <code>inf</code> function in <code>mwe_wrap.cxx</code> (at line 3369 on my system)?</p> <p>As a comparison, this function is correctly generated with the specified new name in <code>mwe.py</code>:</p> <pre><code>def newinf(arg1, arg2): return _mwe.newinf(arg1, arg2) </code></pre> <p>I <strong>really</strong> thought the <code>.py</code> and <code>.cxx</code> would be consistent on such a matter. In fact, if I remove that <code>ns::</code> <em>manually</em> and keep on compiling the shared object library and finally load it into Python, everything works like a charm (using <code>newinf</code>). Could this be a bug of some sort?</p>
<python><c++><swig>
2023-10-06 14:26:45
1
337
Olivier
77,245,234
512,480
AWS API Gateway, use unified Python file for Lambdas
<p>I am building my first REST API using API Gateway. I intend to use Python Lambda functions to do the work. I'm seeing that each Lambda function comes with its own space in which to insert Python code, but to me it would make a lot more sense to have one Python file containing all my handlers. I see that Lambda allows us to choose the name of the handler function for each Lambda, so this would seem to be workable. Is there some way of doing it?</p>
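For what it's worth, the Lambda "Handler" setting takes a dotted `module.function` path, so several functions in one uploaded file can each serve as a separate Lambda entry point. A minimal sketch — the route names and payloads below are hypothetical, not from the question:

```python
# handlers.py - one module, several Lambda entry points.
# Each Lambda's Handler setting would point at a different function,
# e.g. "handlers.get_user" or "handlers.create_user".
import json

def _response(status, body):
    # a small shared helper; keeping handlers together makes reuse like this easy
    return {"statusCode": status, "body": json.dumps(body)}

def get_user(event, context):
    return _response(200, {"route": "get_user"})

def create_user(event, context):
    return _response(201, {"route": "create_user"})

print(get_user({}, None)["statusCode"])  # 200
```

The trade-off is that every Lambda then ships the whole file, so the deployment package (and cold-start size) grows with the number of handlers.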
<python><aws-lambda><aws-api-gateway>
2023-10-06 14:24:16
1
1,624
Joymaker
77,245,216
2,562,058
How to concatenate map objects without transforming them into iterables in Python?
<p>Consider the following code:</p> <pre><code>def square(x): return x**2 def cube(x): return x**3 in1 = [0, 2, 3] in2 = [2, 1, 9] y1 = map(square, in1) y2 = map(cube, in2) </code></pre> <p>How can I concatenate the two map objects without explicitly converting the iterators <code>y1</code> and <code>y2</code> into lists?</p>
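If the goal is a single lazy iterator over both results, `itertools.chain` concatenates them without materializing either one; a sketch using the functions above:

```python
from itertools import chain

def square(x):
    return x ** 2

def cube(x):
    return x ** 3

y1 = map(square, [0, 2, 3])
y2 = map(cube, [2, 1, 9])

combined = chain(y1, y2)  # still lazy: nothing is evaluated yet
print(list(combined))  # [0, 4, 9, 8, 1, 729]
```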
<python><python-3.x>
2023-10-06 14:19:51
1
1,866
Barzi2001
77,244,931
15,673,412
Scipy differential evolution - non-continuous bounds
<p>I am trying to set up a <code>differential_evolution</code> algorithm. I have approximately 15 variables, each with its own bounds (let's suppose they all share the same bounds <code>(150,250)</code>). I have successfully done so.</p> <p>Now, I would like to allow any parameter to be set to 0 (or any small number).</p> <p>Is there any way I could set the bounds for my parameters to be <code>(0,0.1) U (150, 250)</code>? Can this be achieved through some kind of constraint?</p>
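One common workaround, in case it helps frame the question: keep the optimizer's bound continuous and map it onto the disjoint set inside the objective. This is only a sketch of such a mapping — the split at 0.5 is an arbitrary assumption, not anything from scipy's API:

```python
def to_disjoint(t, low=150.0, high=250.0, eps=0.1):
    """Map a smooth t in [0, 1] onto (0, eps) U (low, high)."""
    if t < 0.5:
        # lower half of the search range collapses onto the near-zero interval
        return eps * (t / 0.5)
    # upper half stretches linearly onto [low, high]
    return low + (high - low) * ((t - 0.5) / 0.5)

# the objective would call to_disjoint() on each parameter before using it
print(to_disjoint(0.25), to_disjoint(0.75))  # 0.05 200.0
```

The optimizer then works over the smooth `[0, 1]` box while the objective only ever sees values from the two allowed intervals; the discontinuity at 0.5 can make the landscape harder, which is worth keeping in mind.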
<python><scipy><genetic-algorithm><differential-evolution>
2023-10-06 13:38:09
1
480
Sala
77,244,743
13,146,029
Django AssertionError GraphQL
<p>I have an error on my GraphQL instance, I load the /graphql endpoint and it pops up with the below error.</p> <pre><code>AssertionError: You need to pass a valid Django Model in AuthorType.Meta, received &quot;None&quot;. </code></pre> <p>My schema for each is as follows:</p> <pre class="lang-py prettyprint-override"><code>class UserType(DjangoObjectType): class Meta: model = get_user_model() class AuthorType(DjangoObjectType): class Meta: models = models.Profile class PostType(DjangoObjectType): class Meta: model = models.Post class TagType(DjangoObjectType): class Meta: model = models.Tag </code></pre> <p>I also have a Query within that schema:</p> <pre class="lang-py prettyprint-override"><code>class Query(graphene.ObjectType): all_posts = graphene.List(PostType) author_by_username = graphene.Field(AuthorType, username=graphene.String()) post_by_slug = graphene.Field(PostType, slug=graphene.String()) posts_by_author = graphene.List(PostType, username=graphene.String()) posts_bt_tag = graphene.List(PostType, tag=graphene.String()) def resolve_all_posts(root, info): return ( models.Post.objects.prefetch_related(&quot;tags&quot;).select_related(&quot;author&quot;).all() ) def resolve_author_by_username(root, info, username): return models.Profile.objects.select_related(&quot;user&quot;).get( user__username=username ) def resolve_post_by_slug(root, info, slug): return ( models.Post.objects.prefetch_related(&quot;tags&quot;) .select_related(&quot;author&quot;) .get(slug=slug) ) def resolve_posts_by_author(root, info, username): return ( models.Post.objects.prefetch_related(&quot;tags&quot;) .select_related(&quot;author&quot;) .filter(author__user__username=username) ) def resolve_posts_bt_tag(root, info, tag): return ( models.Post.objects.prefetch_related(&quot;tags&quot;) .select_related(&quot;author&quot;) .filter(tags__name__iexact=tag) ) </code></pre> <p>Here is the model for the blog posts:</p> <pre class="lang-py prettyprint-override"><code>class 
Profile(models.Model): user = models.OneToOneField( get_user_model(), on_delete=models.PROTECT, ) website = models.URLField(blank=True) bio = models.CharField(max_length=240, blank=True) def __str__(self) -&gt; str: return self.user.get_username() class Tag(models.Model): name = models.CharField(max_length=50, unique=True) def __str__(self): return self.name class Post(models.Model): class Meta: ordering = [&quot;-publish_date&quot;] title = models.CharField(max_length=255, unique=True) subtitle = models.CharField(max_length=255, blank=True) slug = models.SlugField(max_length=255, unique=True) body = models.TextField meta_description = models.CharField(max_length=150, blank=True) date_created = models.DateTimeField(auto_now_add=True) date_modified = models.DateTimeField(auto_now=True) publish_date = models.DateTimeField(blank=True, null=True) published = models.BooleanField(default=False) author = models.ForeignKey(Profile, on_delete=models.PROTECT) tags = models.ManyToManyField(Tag, blank=True) </code></pre> <p>For context, I pulled from this tutorial, which is on the Real Python website: <a href="https://realpython.com/python-django-blog/" rel="nofollow noreferrer">https://realpython.com/python-django-blog/</a></p> <p>Any help would be greatly appreciated.</p>
<python><django>
2023-10-06 13:11:03
0
317
Graham Morby
77,244,582
4,530,214
Apply function element-wise
<p>In xarray, how can I apply a non-vectorized function (not a NumPy ufunc) to a DataArray, such that each element of the values is mapped to a new one? The custom function takes a scalar value and returns a scalar value:</p> <pre class="lang-py prettyprint-override"><code>import numpy as np import xarray as xr def custom_function(x): # imagine some non-vectorized, non-numpy-ufunc stuff return type(x) # dummy example function data = xr.DataArray([1, 2, 3, 4], dims='x') # this doesn't work, custom_function actually gets sent the whole array [1, 2, 3, 4] # xr.apply_ufunc(custom_function, data) # I'd expect something like this, where this is basically a loop on all elements # xr.apply(custom_function, data) # in pandas, I would just use the .apply or .map method of Series import pandas as pd s = data.to_series() s.apply(custom_function) s.map(custom_function) </code></pre>
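For reference, `xr.apply_ufunc` has a `vectorize=True` flag that wraps the callable in `np.vectorize` so it is applied element-wise. The underlying mechanism can be sketched with plain numpy (the `custom_function` below is a stand-in, not the one from the question):

```python
import numpy as np

def custom_function(x):
    # any scalar-in, scalar-out function; no ufunc machinery required
    return x ** 2 + 1

vectorized = np.vectorize(custom_function)  # loops element-wise internally
print(vectorized(np.array([1, 2, 3, 4])))  # [ 2  5 10 17]
```

With a DataArray this would look like `xr.apply_ufunc(custom_function, data, vectorize=True)`; note `np.vectorize` is a convenience loop, not a performance gain.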
<python><pandas><python-xarray>
2023-10-06 12:49:48
2
546
mocquin
77,244,347
1,143,207
DynamoDB BatchGetItem Keys Array generation with multiple sort keys in python
<p>I might be missing something, but I'm struggling to generate the dicts of keys with multiple sort keys for use with boto3 BatchGetItem in Python.</p> <p>For all the other tables, which don't use a sort key, I have something like this: a method that queries DynamoDB, using a helper that populates the keys node:</p> <pre><code> DYNAMODB_TABLE: { 'Keys': generate_keydicts(myids, table_name) } }) </code></pre> <p>and:</p> <pre><code> if table_to_query == 'BossTable': return [{'id': myid} for myid in myids] </code></pre> <p>And that works beautifully, like a charm.</p> <p>Now, this other table, which uses a sort key, needs an additional object per sort key in the loop, something like:</p> <pre><code> if table_to_query == 'SortedKeyBossTable': keys = [] for myid in myids: single_key = {{'id': myid},{&quot;sortkey&quot;:&quot;sortkey1&quot;}, {'id': myid},{&quot;sortkey&quot;:&quot;sortkey2&quot;}} keys.append(single_key) return keys </code></pre> <p>But this is a set literal containing unhashable dicts, so it fails.</p> <p>If I remove the outside curly brackets, it's a tuple, which again fails.</p> <p>I've tried passing the structure manually, in code, and it worked, but I don't know how to generate this object: a list of key objects.</p> <p>I'm not even sure what this is called, or how it can be achieved.</p> <p>Before anyone asks: I have 2k IDs that I need to check, out of 2 million entries. I guess I should have done a regular query per ID, but since I have the code for the batch item in place, along with all the post-processing I'm doing, I hoped it would be a matter of adding one line.</p> <p>I've also looked here: <a href="https://stackoverflow.com/questions/65108590/is-it-possible-to-use-multiple-sort-values-in-aws-sdk-dynamodb-batchgetitem">Is it possible to use multiple sort values in aws sdk dynamodb batchGetItem?</a></p> <p>And some other places, but I can't find a working example of how to generate that object in Python.</p>
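What `BatchGetItem` expects under `'Keys'` is a flat list of dicts, each holding both the partition key and the sort key. A sketch of the generator, assuming the attribute names `id` and `sortkey` from the question:

```python
def generate_key_dicts(my_ids, sort_keys):
    # one dict per (id, sortkey) combination -- a plain list of dicts,
    # not a set or tuple of dicts, so nothing needs to be hashable
    keys = []
    for my_id in my_ids:
        for sk in sort_keys:
            keys.append({"id": my_id, "sortkey": sk})
    return keys

print(generate_key_dicts(["a", "b"], ["sortkey1", "sortkey2"]))
```

Note that `BatchGetItem` caps each request at 100 keys, so 2k IDs times 2 sort keys would still need to be chunked across multiple requests.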
<python><amazon-dynamodb>
2023-10-06 12:13:35
1
1,520
Balkyto
77,244,324
12,133,068
snakemake: get number of inputs of a rule
<p>I want to do something simple: get the number of inputs that a <code>snakemake</code> rule has, and use this value in my shell commands. I have a working solution with the trick below, but it is very ugly and I guess/hope there is something prettier to get <code>N</code>?</p> <pre><code>rule x: input: files = [&quot;file1&quot;, &quot;file2&quot;], output: &quot;output.txt&quot; shell: &quot;&quot;&quot; N=$(echo {input.files} | tr ' ' '\\n' | wc -l); # this returns 2, but the code is ugly! touch {output} &quot;&quot;&quot; </code></pre>
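One prettier option, sketched below (untested against a live workflow): compute the count in Python at DAG-build time through a `params` function, to which snakemake passes the rule's `input`:

```
rule x:
    input:
        files = ["file1", "file2"],
    output:
        "output.txt"
    params:
        # evaluated in Python when the job is built, so no shell gymnastics
        n = lambda wildcards, input: len(input.files)
    shell:
        """
        echo {params.n}   # prints 2
        touch {output}
        """
```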
<python><snakemake>
2023-10-06 12:08:33
1
334
Quentin BLAMPEY
77,244,199
906,658
How do I freeze the requirements of a tox test environment?
<p>I usually install my requirements from an abstract file like so:</p> <pre><code># requirements.in pytest </code></pre> <p>I define test environments in a <code>tox.ini</code> like so:</p> <pre class="lang-ini prettyprint-override"><code>[testenv] deps = -rrequirements.in commands = pytest </code></pre> <p>How do I freeze the installed versions of the dependencies to a <code>requirements.txt</code> file?</p> <p>I tried using this bash one-liner:</p> <pre class="lang-ini prettyprint-override"><code>commands_post = python -m pip freeze --all &gt; requirements.d/pytest.txt </code></pre> <p>However, this prints the requirements to the standard output instead of redirecting them to the file:</p> <pre class="lang-bash prettyprint-override"><code>pytest-py312: commands_post[0]&gt; python -m pip freeze --all '&gt;' requirements.d/pytest.txt </code></pre> <p>Note that the <code>&gt;</code> is getting escaped instead of being interpreted.</p>
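The escaped `&gt;` in the output is the clue: tox runs `commands` without a shell, so the redirection operator is passed to pip as a literal argument. One workaround sketch is to do the redirection inside Python itself (paths as in the question):

```ini
[testenv]
deps = -rrequirements.in
commands = pytest
commands_post =
    python -c 'import subprocess, sys; open("requirements.d/pytest.txt", "w").write(subprocess.run([sys.executable, "-m", "pip", "freeze", "--all"], capture_output=True, text=True, check=True).stdout)'
```

Alternatives would be a small helper script invoked from `commands_post`, or `allowlist_externals = sh` with `sh -c 'pip freeze > …'`, at the cost of requiring a shell on the host.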
<python><pip><subprocess><requirements.txt><tox>
2023-10-06 11:49:21
5
14,598
Bengt
77,244,092
1,044,117
Why does monkey patching magic methods (via setattr) not work?
<p><code>x</code> is an instance of some class:</p> <pre><code>&gt; x = type('someobject', (), {})() &gt; repr(x) &lt;someobject object at 0x10172d990&gt; </code></pre> <p>Why does monkey patching a magic method (e.g. <code>__repr__</code>) not work?</p> <pre><code>&gt; setattr(x, '__repr__', lambda self: f'&lt;someobj {id(self)}&gt;') &gt; repr(x) &lt;someobject object at 0x10172d990&gt; </code></pre> <p>(original <code>__repr__</code> is used)</p> <pre><code>&gt; x.__repr__(x) &lt;someobj 4319271312&gt; </code></pre>
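For context on this behaviour: implicit invocations of special methods (`repr()`, `len()`, operators) are looked up on the type, not the instance, so the patch has to target the class. A sketch:

```python
class SomeObject:
    pass

x = SomeObject()

# patching the instance is ignored by repr() ...
setattr(x, '__repr__', lambda: '<instance patch>')
# ... but patching the class (the type of x) takes effect
setattr(SomeObject, '__repr__', lambda self: f'<someobj {id(self)}>')

print(repr(x))  # uses the class-level patch
```

Explicit attribute access (`x.__repr__(x)` in the question) bypasses this special lookup, which is why it already showed the patched result.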
<python>
2023-10-06 11:32:43
2
19,047
fferri
77,243,985
326,016
Multiple Choice user input and routing in Langchain Python
<p>I am building a chatbot for my website with Langchain and I want to lead the user through the conversation with different answer options that I want to predefine for each step.</p> <p>So my chatbot will greet the user with a message like this:</p> <blockquote> <p>Hi there, how can I help you? Do you want to know more about:</p> <ol> <li>my services</li> <li>book appointment</li> <li>personal information</li> </ol> </blockquote> <p>So now, when the user types in &quot;1&quot; or &quot;services&quot;, the chatbot should ask for the user's website. This input should then be fed into an agent that researches what the company is doing and shows the results to the user for validation. I don't need a fully working example, I just need a push in the right direction on how to approach this.</p> <p>I tried it with <code>SequentialChain</code> and <code>MultiPromptChain</code> but I didn't have any luck implementing anything close to the conversation flow I described above.</p> <p>I guess I need multiple components for this: a memory buffer and a router. I use the <code>ConversationSummaryMemoryBuffer</code> currently, but I don't know how to implement the flow I described with a router, and how should the chatbot understand which step of the conversation we are in? Is this done automatically by the <code>ConversationSummaryMemoryBuffer</code>? Any help is much appreciated.</p>
<python><langchain><py-langchain>
2023-10-06 11:11:51
1
1,158
Abenil
77,243,820
10,566,155
Import Python file in databricks notebook
<p>I am trying to import Python files from a Databricks notebook. I am able to import the function, but it keeps giving me the error - <em><strong>NameError: name 'col' is not defined</strong></em>.</p> <p>I have a file /Workspace/Shared/Common/common.py. The code in that file is:</p> <pre><code>def cast_datatypes(df_target, df_source): cols = df_target.columns types = [f.dataType for f in df_target.schema.fields] for i, c in enumerate(cols): df_source = df_source.withColumn(c, col(c).cast(types[i])) df_source = df_source.select(cols) return df_source </code></pre> <p>I am calling this function from a notebook, and tried the two approaches below.</p> <p>Approach 1:</p> <pre><code>import sys sys.path.append(&quot;/Workspace/Shared/Common/common.py&quot;) from common import cast_datatypes from pyspark.sql.functions import col &lt;Spark Session declaration&gt; df_final = cast_datatypes(df_target_A, df_source_A) </code></pre> <p>Approach 2:</p> <pre><code>spark.sparkContext.addPyFile(&quot;/Workspace/Shared/Common/common.py&quot;) import common as C from pyspark.sql.functions import col df_final = C.cast_datatypes(df_target_A, df_source_A) </code></pre> <p>Both approaches were able to import the function, but failed to use <code>col</code>. The error I am getting is:</p> <pre><code>File /Workspace/Shared/Common/common.py:13, in cast_datatypes(df_target, df_source) 11 types = [f.dataType for f in df_target.schema.fields] 12 for i, c in enumerate(cols): ---&gt; 13 df_source = df_source.withColumn(c, col(c).cast(types[i])) 14 df_source = df_source.select(cols) 15 return df_source NameError: name 'col' is not defined </code></pre> <p>Do we need to pass all the arguments that the imported function uses? If so, how do I modularize my notebooks in Databricks?</p>
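The likely cause is not the path at all: a function resolves global names in the namespace of the module it was *defined* in, so `from pyspark.sql.functions import col` in the notebook does not make `col` visible inside `common.py` — the import has to go at the top of `common.py` itself. A stdlib-only sketch of the mechanism (no pyspark involved):

```python
import types

# build a throwaway module whose function uses a name it never imported
common = types.ModuleType("common")
exec("def use_col():\n    return col('x')", common.__dict__)

col = lambda name: name  # defined in the *caller*, invisible to `common`
try:
    common.use_col()
except NameError as e:
    print(e)  # name 'col' is not defined

# the fix: make the name exist in the module's own namespace
common.__dict__["col"] = lambda name: f"Column<{name}>"
print(common.use_col())  # Column<x>
```

Applied to the question, that means `common.py` should start with `from pyspark.sql.functions import col`; nothing extra needs to be passed as an argument.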
<python><azure><pyspark><jupyter-notebook><azure-databricks>
2023-10-06 10:46:43
2
329
Harshit Mahajan
77,243,809
15,704,972
How do I select columns on the innermost level of a Pandas DataFrame with multiindexed columns without knowing the number of levels?
<p>I have a function that needs to select certain columns in the innermost level of a dataframe with multiindexed columns by name, while keeping all higher column levels. The names of the columns in the innermost level are consistent, but the number of multiindex levels above them isn't fixed. I might for instance get</p> <pre><code> foo bar baz ham spam ham spam ham spam 0 1 2 2 3 4 42 ... </code></pre> <p>or</p> <pre><code> foo bar baz one two three ham spam ham spam ham spam 0 1 2 2 3 4 42 ... </code></pre> <p>but want everything that's in the <code>spam</code> column for both, i.e.</p> <pre><code> foo bar baz spam spam spam 0 2 3 42 ... </code></pre> <p>or</p> <pre><code> foo bar baz one two three spam spam spam 0 2 3 42 ... </code></pre> <p>respectively. If I had a fixed number of MultiIndex levels, I could access this by</p> <pre class="lang-py prettyprint-override"><code>df.loc[:, pd.IndexSlice[:, [&quot;spam&quot;]]] </code></pre> <p>for two levels,</p> <pre class="lang-py prettyprint-override"><code>df.loc[:, pd.IndexSlice[:, :, [&quot;spam&quot;]]] </code></pre> <p>for three and so on, but how do I do it when I don't know the number of levels?</p>
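One level-count-agnostic approach: `pd.IndexSlice[:, :, ["spam"]]` is just a tuple of slices, so the equivalent tuple can be built from `df.columns.nlevels`. A sketch on a two-level example (the same construction works for any depth):

```python
import pandas as pd

cols = pd.MultiIndex.from_product([["foo", "bar"], ["ham", "spam"]])
df = pd.DataFrame([[0, 1, 2, 2]], columns=cols)

# as many ':' slices as there are levels above the innermost one
slicer = (slice(None),) * (df.columns.nlevels - 1) + (["spam"],)
out = df.loc[:, slicer]
print(list(out.columns))  # [('foo', 'spam'), ('bar', 'spam')]
```

`df.xs("spam", axis=1, level=-1, drop_level=False)` should also work for a single innermost label, since `xs` takes the level by position.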
<python><pandas><multi-index>
2023-10-06 10:45:51
1
561
KeyboardCat
77,243,668
3,735,871
Airflow schedule job - how to run it before the end of interval
<p>I'm trying to schedule a yearly job in Airflow that runs every year on Oct 1st. The problem is that this year's job (today is Oct 6th 2023) won't run till next year (Oct 1st 2024). I need to run the past several years' jobs as well as this year's.</p> <p>How should I schedule the job? I can't run a command to trigger the job manually. Thanks.</p>
<python><airflow>
2023-10-06 10:22:27
1
367
user3735871
77,243,599
1,739,884
How to type hint arguments of factory method, where the managed instances are from generic class with additional constructor args
<p>Consider the following case where we have a Manager which manages instances of certain classes. These classes must be subclasses of a given class but can be user-defined (i.e. are a generic).</p> <p>Since the Manager class has ownership of the instances, the question is how to correctly type-hint the situation, as the to-be-managed Instance class may have a user-defined constructor.</p> <p>Example:</p> <pre><code>class Base: &quot;&quot;&quot;Base class for all instances&quot;&quot;&quot; def __init__(self, base_arg1: int, base_arg2: float): pass InstanceType = TypeVar('InstanceType', bound=Base) # must be subclass of Base class Manager(Generic[InstanceType]): &quot;&quot;&quot;The manager class. Is a factory of Instances, creates and owns them. However instances may have custom extra arguments in their constructor. How to type-hint them correctly?&quot;&quot;&quot; def __init__(self, instance_class: Type[InstanceType]): self.factory = instance_class pass def new_instance(self, ??? ) # how to type hint the extra args of the constructor of InstanceType (without the base_args from Base.__init__)? self.factory( ???, base_arg1=123, base_arg2=4.56 ) pass class Instance(Base): def __init__(self, extra_arg: bool, base_arg1: int, base_arg2: float): # note this has extra_arg: bool pass instance_manager = Manager(Instance) # note this will be correctly inferred as Manager[Instance] instance_manager.new_instance(extra_arg=True) # How can this be correctly type-checked ?? </code></pre> <p>I was thinking along the lines of using a Callable in the manager constructor, to capture the function arguments. But then I am unsure how to proceed.</p> <pre><code>class Manager(Generic[InstanceType]): def __init__(self, instance_class: Callable[ [???], InstanceType]): # use Callable instead of Type pass </code></pre> <p>This leaves the questions:</p> <ol> <li>how to capture the arguments of the Callable and use these in the definition of 'new_instance' (but without(!) 
the base_args)</li> <li>how to enforce that the arguments contain the arguments of the Base class (i.e. base_arg1, and base_arg2).</li> </ol> <p>For me 1. is most important, I could live without 2.</p>
<python><python-typing>
2023-10-06 10:11:39
1
6,230
Andreas H.
77,243,572
717,069
Python GIL and deadlock detection?
<p>When using Python in a multi-threaded environment, it is quite easy to deadlock as we need to lock the Python GIL. While trying to lock the GIL, we may hold other locks, and the Python thread holding the GIL may call us trying to get the same lock. Ideally, we'd have a <code>PyGILSTATE_TryEnsure()</code> such that on failure we can examine the locks we hold and raise a &quot;would deadlock&quot; exception. I couldn't find anything in the Python docs that allows me to do this.</p> <p>Is there a way to achieve this?</p>
<python><multithreading><deadlock><gil>
2023-10-06 10:08:17
0
1,740
Jan Wielemaker
77,243,519
717,069
Threads, the Python GIL and save/restore thread state
<p>I'm working on making Python threads cooperate with SWI-Prolog threads, where SWI-Prolog supports fully independent native threads. We are talking about embedding Python. After initializing the embedded Python interpreter, the calling thread has a Python thread state. To allow other threads, I need to call <code>PyEval_SaveThread()</code> and before working with Python again from this thread I need to call <code>PyEval_RestoreThread()</code>. So, I <strong>think</strong> the logic is</p> <ul> <li>Thread M initialized Python.</li> <li>Thread M calls <code>tstate = PyEval_SaveThread()</code></li> <li>If I want to make a call to Python <ul> <li>If I am thread M <ul> <li>If <code>tstate != NULL</code>, call <code>PyEval_RestoreThread(tstate)</code> and set <code>tstate</code> to NULL else call <code>PyGILState_Ensure()</code> (<em>recursive call</em>)</li> <li>Increment <code>nesting</code></li> <li>call Python</li> <li>Decrement <code>nesting</code></li> <li>If <code>nesting == 0</code>, save the state again using <code>tstate = PyEval_SaveThread()</code> else call <code>PyGILState_Release()</code></li> </ul> </li> <li>else <ul> <li>call <code>PyGILState_Ensure()</code></li> <li>call Python</li> <li>call <code>PyGILState_Release()</code></li> </ul> </li> </ul> </li> </ul> <p>While this seems to work, the asymmetry is rather inconvenient and feels fragile. Is there some way to get in a symmetric situation such that I can simply get/release the GIL without bothering about the calling thread or thread state?</p>
<python><multithreading><gil>
2023-10-06 09:59:48
0
1,740
Jan Wielemaker
77,243,328
7,179,299
Web scraping of a sports table in R
<p>I need to web scrape the table information from the following link, using R or Python: <a href="https://euroleaguefantasy.euroleaguebasketball.net/en/stats-fantasy-euroleague" rel="nofollow noreferrer">https://euroleaguefantasy.euroleaguebasketball.net/en/stats-fantasy-euroleague</a></p> <p>So far I have tried the <code>rvest</code> package but no luck.</p> <pre class="lang-r prettyprint-override"><code>url &lt;- &quot;https://euroleaguefantasy.euroleaguebasketball.net/en/stats-fantasy-euroleague&quot; library(rvest) read_html(url) #&gt; {html_document} #&gt; &lt;html lang=&quot;en&quot;&gt; #&gt; [1] &lt;head&gt;\n&lt;meta http-equiv=&quot;Content-Type&quot; content=&quot;text/html; charset=UTF-8 ... #&gt; [2] &lt;body class=&quot;loading&quot;&gt;\n&lt;app-root&gt;&lt;/app-root&gt;&lt;button id=&quot;ot-sdk-btn&quot; clas ... </code></pre> <p><sup>Created on 2023-10-06 with <a href="https://reprex.tidyverse.org" rel="nofollow noreferrer">reprex v2.0.2</a></sup></p> <p>I cannot retrieve, or I do not know how to retrieve, any content from here, since <code>read_html(URL)[1]</code> or <code>read_html(URL)[2]</code> provides no content.</p> <p>How can I continue?</p>
<python><r><web-scraping><rvest>
2023-10-06 09:30:29
2
3,168
LDT
77,243,323
1,374,987
Trying to install older version of python package from pypi but pip says version not available
<p>I'm trying to install an older version of the <code>opencv-python</code> package, specifically <a href="https://pypi.org/project/opencv-python/4.1.0.25/" rel="nofollow noreferrer">4.1.0.25</a>, but when running <code>pip install opencv-python==4.1.0.25</code> it complains that <code>Could not find a version that satisfies the requirement opencv-python==4.1.0.25</code>. The package clearly exists in the PyPI repo and it doesn't seem to be yanked (unless I'm missing the flag). Is there a way to install it? Aren't all packages listed on the PyPI website available via <code>pip</code>?</p>
<python><pip><pypi>
2023-10-06 09:30:26
0
4,696
PentaKon
77,243,224
2,562,058
map() returns an iterator that yields numpy arrays. How to convert it into a numpy array?
<p>I am using a <code>map</code> function where each element yielded by the returned iterator is a <code>numpy</code> array, i.e.</p> <pre><code>result = list(map(func, values)) print(result[:2]) [array([0.981, 0.23]), array([1.231, -0.56])] </code></pre> <p>I want the result to be a single <code>numpy</code> array, i.e.</p> <pre><code>print(result[:2]) array([[0.981, 0.23],[1.231, -0.56]]) </code></pre> <p>I tried to use <code>result = np.fromiter(map(func, values))</code> but I get the following deprecation warning:</p> <pre><code> DeprecationWarning: Conversion of an array with ndim &gt; 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.) </code></pre> <p>How do I solve the problem?</p> <p>EDIT: although I had good answers, I would like to convert to np.array directly from the returned iterator from <code>map</code>, thus skipping the conversion from iterator to list.</p> <p>EDIT2: Here is a minimum working example:</p> <pre><code>def func(u): # The size of A,B,C,D may change, so it is a good idea to keep the function as general as possible A = np.array([[-1, 0], [-0.2, 1]]).reshape(2, 2) B = np.array([1, 0]).reshape(2, 1) C = np.array([[1, 0], [0, 1]]).reshape(2, 2) D = np.array([1, 1]).reshape(2, 1) xx = np.zeros((2, 1)) # initial state xx = A @ xx + B @ u return C @ xx + D @ u N = 100 values = np.ones(N).reshape(N, 1) result = np.array(list(map(func, values))) </code></pre>
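Since NumPy 1.23, `np.fromiter` accepts a sub-array dtype, which lets it consume the `map` iterator directly with no intermediate list — assuming every element has the same fixed shape. A sketch with a stand-in `func`:

```python
import numpy as np

def func(u):
    # stand-in returning a fixed-size array per scalar input
    return np.array([u, -u], dtype=float)

values = np.ones(5)

# dtype=(float, 2) tells fromiter each item is a length-2 float array
result = np.fromiter(map(func, values), dtype=np.dtype((float, 2)), count=len(values))
print(result.shape)  # (5, 2)
```

`count` is optional but avoids intermediate reallocations when the length is known up front.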
<python><python-3.x><numpy>
2023-10-06 09:15:03
3
1,866
Barzi2001
77,243,216
22,466,650
How to find the closest non null values to a target value?
<p>Sorry for the title, but basically I'm dealing with a dataset that looks like a Pacman game.</p> <p>I need to find the coordinates of the ghosts closest to the Pacman.</p> <pre><code>import pandas as pd pacman = [ list(&quot; G&quot;), list(&quot; P G &quot;), list(&quot; G &quot;), list(&quot; G &quot;), list(&quot; G &quot;), ] df = pd.DataFrame(pacman) print(df) 0 1 2 3 4 5 6 7 8 9 10 0 G 1 P G 2 G 3 G 4 G </code></pre> <p>I expect (0, 10) and (1, 5) as the output, since they are the coordinates of the ghosts closest to the Pacman.</p> <p>My code below gives only <code>(0, 10)</code>. Can you explain why, or suggest another solution?</p> <pre><code>data = df.values.flatten() g_indices = np.where(data == 'G')[0] p_indices = np.where(data == 'P')[0] min_index = np.argmin(np.abs(g_indices - p_indices)) g_index = g_indices[min_index] coordinates = (df.iloc[:, g_index].idxmax(), g_index) print(coordinates) (0, 10) </code></pre>
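The reason only `(0, 10)` comes back: in the flattened 5x11 grid, the ghosts at flat indices 10 and 16 are *both* 3 steps from P's flat index 13, and `np.argmin` returns only the first minimum. Keeping every index at the minimal distance recovers both; a sketch (grid spacing reconstructed from the printed frame, and note this measures distance in flattened order, not true grid distance):

```python
import numpy as np
import pandas as pd

pacman = [
    list("          G"),
    list("  P  G     "),
    list("        G  "),
    list("      G    "),
    list("      G    "),
]
df = pd.DataFrame(pacman)

data = df.values.flatten()
g_indices = np.where(data == "G")[0]
p_index = np.where(data == "P")[0][0]

dists = np.abs(g_indices - p_index)
closest = g_indices[dists == dists.min()]  # keep ALL ties, not just the first

n_cols = df.shape[1]
coords = [(int(i) // n_cols, int(i) % n_cols) for i in closest]
print(coords)  # [(0, 10), (1, 5)]
```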
<python><pandas><numpy>
2023-10-06 09:13:55
1
1,085
VERBOSE
77,243,198
1,136,850
Python ModuleNotFoundError even after appending it
<p>I keep getting <code>ModuleNotFoundError: No module named 'pandas'</code>, even though I already appended its location:</p> <pre><code>import pandas as pd from flask import Flask, request, jsonify, send_file from io import BytesIO import re import json from Crypto.Cipher import AES from base64 import b64decode import os import mysql.connector import sys sys.path.append(&quot;/home/master/.local/lib/python3.7/site-packages&quot;) </code></pre> <p>I am using <code>python3</code>.</p> <pre><code>python3 -m site --user-site /home/master/.local/lib/python3.7/site-packages </code></pre> <p>Any advice?</p>
<python><python-3.x><flask><python-module>
2023-10-06 09:11:18
4
3,227
CairoCoder
77,243,110
5,338,465
Python lambda type hint in pycharm
<p>I'm using PyCharm to develop Python scripts.</p> <p>I have a list of class instances and want to sort this list by a class property.</p> <p>However, when I use:</p> <pre><code>results: list[HotelInfo] = get_result() results.sort(key=lambda x: x.price) </code></pre> <p>PyCharm's type hint shows that &quot;x&quot; is of type &quot;Any&quot; and can't resolve the class property.</p> <p>Even if I annotate <code>x: CLASS_NAME</code>, the lambda still doesn't display the correct type.</p> <p><a href="https://i.sstatic.net/kTELr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kTELr.png" alt="enter image description here" /></a></p> <p>How can I solve this problem in PyCharm?</p>
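A common workaround while the IDE's lambda inference falls short: use a small named function (or `operator.attrgetter("price")`) whose parameter annotation any checker can see. A sketch with a stand-in dataclass in place of the real class from the question:

```python
from dataclasses import dataclass

@dataclass
class HotelInfo:  # stand-in for the class from the question
    price: float

results = [HotelInfo(200.0), HotelInfo(120.0)]

def by_price(h: HotelInfo) -> float:
    # explicit annotation, so the IDE knows h's attributes
    return h.price

results.sort(key=by_price)
print([h.price for h in results])  # [120.0, 200.0]
```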
<python><python-3.x><pycharm>
2023-10-06 08:56:24
1
1,050
Vic
77,243,080
3,339,667
Combining Concatenate and wraps to get passthrough kwarg type hinting in python
<p>I've got a fun question I've been struggling with. Here's a toy bit of code:</p> <pre class="lang-py prettyprint-override"><code>from functools import wraps from pydantic import BaseModel class Config(BaseModel): vegetable: str = &quot;potato&quot; fruit: str = &quot;strawberry&quot; class SomeClass: @wraps(Config) def configure(self, something: str = &quot;whoa&quot;, **kwargs): print(something) return Config(**kwargs) s = SomeClass() s.configure() </code></pre> <p>The goal here is for the type hinter to be able to tell the user that <code>self</code> doesn't have to be specified, <code>something</code> is a string, and expand the <code>kwargs</code>.</p> <p>I was hoping to combine <code>wraps</code> and <code>Concatenate</code> somehow to get the best of both worlds: annotating the default function args+kwargs (except kwargs) itself, and also annotating the available kwargs that come from the Config initialiser... but I've hit a wall with it.</p> <p>Does anyone have a simple solution I'm overlooking?</p>
<python><python-typing><python-decorators>
2023-10-06 08:49:54
0
353
Samreay
77,242,993
14,365,042
How to reshape pandas dataframe into a symmetric matrix (corr-like square matrix)?
<p>I have a df like below:</p> <pre><code> name1 name2 value 0 A B 1300 1 A C 150 2 A D 300 3 B C 450 4 B D 200 5 C D 300 </code></pre> <p>I tried to <code>pivot</code> the table to plot a corr-like heatmap based on the <code>value</code> column:</p> <pre><code>table = df.pivot(columns='name1', index='name2', values='value') table </code></pre> <p>The result is:</p> <pre><code>name1 A B C name2 B 1300.0 NaN NaN C 150.0 450.0 NaN D 300.0 200.0 300.0 </code></pre> <p>How can I create a square matrix that contains A-D in columns and rows with value of <code>name1</code> <code>name2</code> is same to <code>name2</code> <code>name1</code> (<code>A</code>/<code>B</code> = <code>B</code>/<code>A</code> = <code>1300</code>)?</p> <p>Desired output:</p> <pre><code> A B C D A NaN 1300.0 150.0 300.0 B 1300.0 NaN 450.0 200.0 C 150.0 450.0 NaN 300.0 D 300.0 200.0 300.0 NaN </code></pre> <p>Reproducible input:</p> <pre><code>import pandas as pd products_list = [['A', 'B', 1300], ['A', 'C', 150], ['A', 'D', 300], ['B', 'C', 450], ['B', 'D', 200], ['C', 'D', 300]] df = pd.DataFrame(products_list, columns=['name1', 'name2', 'value']) </code></pre>
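One possible approach (a sketch, not necessarily the only way): duplicate the pairs with `name1`/`name2` swapped, then pivot the combined frame once.

```python
import pandas as pd

products_list = [['A', 'B', 1300], ['A', 'C', 150], ['A', 'D', 300],
                 ['B', 'C', 450], ['B', 'D', 200], ['C', 'D', 300]]
df = pd.DataFrame(products_list, columns=['name1', 'name2', 'value'])

# Add each pair a second time with the name columns swapped, so that
# (A, B) and (B, A) both exist, then pivot into a square matrix whose
# diagonal stays NaN (no A/A pair exists).
swapped = df.rename(columns={'name1': 'name2', 'name2': 'name1'})
table = pd.concat([df, swapped]).pivot(index='name1', columns='name2', values='value')
print(table)
```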
<python><pandas><dataframe><pivot>
2023-10-06 08:34:52
1
305
Joe
77,242,992
11,328,614
Pytest asyncio, howto await in setup and teardown?
<p>I'm using <code>pytest-asyncio</code> to test my <code>asyncio</code> based library.</p> <p>I'm using the class-level approach, in which a <code>TestClass</code> has several <code>TestMethods</code> which are invoked by the test framework one after each other.</p> <p>The <code>setup</code> method initializes the <code>ClassUnderTest</code>. The <code>teardown</code> method currently does nothing. However, I commented the intended functionality in the <code>teardown</code>.</p> <p>What I would like to do, is to implement an <code>async</code> <code>teardown</code> and/or <code>setup</code>, so I can <code>await</code> some async clean-up code. Is this possible?</p> <p>I didn't find something about this in the <code>pytest-asyncio</code> documentation, which is very brief. Therefore, I'm asking this question. Maybe someone has stumbled over a similar problem and found a way to do it anyway.</p> <pre class="lang-py prettyprint-override"><code>import asyncio import random import pytest class ClassUnderTest: def __init__(self): self._queue = asyncio.Queue() self._task1 = None self._task2 = None async def start(self): self._task1 = asyncio.create_task(self.producer()) self._task2 = asyncio.create_task(self.consumer()) async def stop(self): self._task1.cancel() self._task2.cancel() return await asyncio.gather(self._task1, self._task2, return_exceptions = True) @property def tasks(self): return self._task1, self._task2 async def producer(self): try: while True: if self._queue.qsize() &lt; 10: self._queue.put_nowait(random.randint(0, 10)) await asyncio.sleep(50) except asyncio.CancelledError: print(&quot;Finito!&quot;) raise async def consumer(self): try: while True: if self._queue.qsize() &gt; 0: elem = self._queue.get_nowait() print(elem) await asyncio.sleep(100) except asyncio.CancelledError: print(&quot;Finito!&quot;) raise @pytest.mark.asyncio class TestClass: &quot;&quot;&quot; Tests my asynio code &quot;&quot;&quot; def setup_method(self): 
self._my_class_under_test = ClassUnderTest() def teardown_method(self): &quot;&quot;&quot; if not tasks[0].cancelled() or not tasks[1].cancelled(): await self._my_class_under_test.stop() &quot;&quot;&quot; async def test_start(self): await self._my_class_under_test.start() tasks = self._my_class_under_test.tasks assert not tasks[0].cancelled() assert not tasks[1].cancelled() await self._my_class_under_test.stop() async def test_stop(self): await self._my_class_under_test.start() tasks = self._my_class_under_test.tasks return_values = await self._my_class_under_test.stop() assert tasks[0].cancelled() assert tasks[1].cancelled() assert isinstance(return_values[0], asyncio.CancelledError) assert isinstance(return_values[1], asyncio.CancelledError) async def test_producer(self): pass async def test_consumer(self): pass if __name__ == &quot;__main__&quot;: pytest.main([__file__]) </code></pre> <p><strong>Output:</strong></p> <pre><code>/home/user/.config/JetBrains/PyCharm2023.2/scratches/asyncio_test_setup_teardown.py ============================= test session starts ============================== platform linux -- Python 3.10.13, pytest-7.4.2, pluggy-1.3.0 rootdir: /home/user/.config/JetBrains/PyCharm2023.2/scratches plugins: timeout-2.1.0, asyncio-0.21.1 asyncio: mode=strict collected 2 items asyncio_test_setup_teardown.py .. [100%] ============================== 2 passed in 0.01s =============================== Process finished with exit code 0 </code></pre>
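pytest-asyncio does support async fixtures (an async-generator fixture where the code after `yield` is the awaited teardown; the exact decorator, e.g. `@pytest_asyncio.fixture`, depends on your pytest-asyncio version). A plain-asyncio sketch of that pattern, with hypothetical names:

```python
import asyncio

# Sketch of the async-generator fixture pattern: everything before `yield`
# is the (awaitable) setup, everything after it is the awaited teardown.
async def fixture():
    resource = {"running": True}     # hypothetical async setup
    yield resource
    resource["running"] = False      # hypothetical awaited teardown

async def main():
    gen = fixture()
    res = await gen.__anext__()      # setup phase
    assert res["running"]            # ...the test body would run here...
    try:
        await gen.__anext__()        # drives the teardown phase
    except StopAsyncIteration:
        pass
    return res

result = asyncio.run(main())
print(result)  # {'running': False}
```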
<python><python-3.x><python-asyncio><teardown><pytest-asyncio>
2023-10-06 08:33:57
2
1,132
Wör Du Schnaffzig
77,242,892
6,562,240
Python List Comprehension: If Item Doesn't Exist, Insert a Value?
<p>I have three lists which I want to merge together as separate arrays into one dataframe. The issue is, the data I am working with is patchy and has missing information.</p> <p><strong>Data</strong></p> <p><a href="https://i.sstatic.net/JyQOq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JyQOq.png" alt="Data" /></a></p> <p><strong>Create 3 Lists</strong></p> <pre><code>Name = [item[0] for item in Data if item] Age = [item[1] for item in Data if item] Fruit = [item[2] for item in Data if item] </code></pre> <p>Which produces:</p> <pre><code>Name = ['John', 'Eric', 'Dave', 'Mike', 'Charlotte'] Age = ['32', '25', '31'] Fruit = ['Apple', 'Banana', 'Pear'] </code></pre> <p>However, this clearly causes an error when calling <code>pd.DataFrame</code>, as the lists are not the same length.</p> <p>Is there a way I can improve my list comprehension to insert a blank or default value to ensure my lists remain the same length, resulting in something like:</p> <pre><code>Name = ['John', 'Eric', 'Dave', 'Mike', 'Charlotte'] Age = ['32', '', '25', '31', ''] Fruit = ['', 'Apple', 'Banana', 'Pear', ''] </code></pre>
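A small sketch of one way to keep the lists aligned (the `Data` rows below are hypothetical stand-ins for the image): guard the index instead of filtering rows out.

```python
# Hypothetical rows with missing trailing fields, standing in for the
# tabular data shown in the question's image.
Data = [
    ['John', '32'],
    ['Eric'],
    ['Dave', '25', 'Apple'],
]

def column(rows, i, default=''):
    # Take column i from every row, falling back to `default` when a row
    # is too short, so all extracted lists stay the same length.
    return [row[i] if i < len(row) else default for row in rows]

Name = column(Data, 0)
Age = column(Data, 1)
Fruit = column(Data, 2)
print(Name, Age, Fruit)
```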
<python><list><list-comprehension>
2023-10-06 08:17:00
1
705
Curious Student
77,242,732
1,850,007
How to multiply array by each element of second array and concatenate them?
<p>My matrices:</p> <pre><code>a = np.array([[1, 2], [3, 4]]) b = np.array([[5, 6]]) </code></pre> <p>I want <code>c</code> as:</p> <pre><code>np.array([[5, 10, 6, 12], [15, 20, 18, 24]]) </code></pre> <p>Entry-wise multiplication of <code>a</code> by each element of <code>b</code>, then concatenation. How can I do this efficiently without double loops?</p>
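For what it's worth, the desired layout matches a Kronecker product with `b` as the left operand (blocks `b[0, j] * a` placed side by side), which the `kronecker-product` tag already hints at. A sketch:

```python
import numpy as np

a = np.array([[1, 2], [3, 4]])
b = np.array([[5, 6]])

# np.kron(b, a) tiles the blocks 5*a and 6*a horizontally,
# which is exactly the requested layout.
c = np.kron(b, a)
print(c)  # [[ 5 10  6 12]
          #  [15 20 18 24]]
```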
<python><numpy><kronecker-product>
2023-10-06 07:53:45
2
1,062
Lost1
77,242,703
2,385,132
Tensorflow dataset causes a memory leak
<p>I'm creating a dataset for my xception model training, I'm doing it like this:</p> <pre><code>def load_dataset(height: int, width: int, path: str, kind: str, batch_size=32) -&gt; tf.data.Dataset: directory = os.path.join(path, kind) return keras.utils.image_dataset_from_directory( directory=directory, labels='inferred', label_mode='categorical', batch_size=batch_size, image_size=(height, width)) def load_rebalanced_dataset( height: int, width: int, path: str, kind: str, batch_size=32, do_repeat: bool = True) -&gt; (tf.data.Dataset, list[str]): dataset = load_dataset(height, width, path, kind, batch_size) classes = dataset.class_names num_classes = len(classes) class_datasets = [] for class_idx in range(num_classes): class_datasets.append(dataset.filter(lambda x, y: tf.reduce_any(tf.equal(tf.argmax(y, axis=-1), class_idx)))) balanced_ds = tf.data.Dataset.sample_from_datasets(class_datasets, [1.0 / num_classes] * num_classes) if do_repeat: balanced_ds = balanced_ds.repeat() balanced_ds = balanced_ds.cache().prefetch(tf.data.AUTOTUNE) return balanced_ds, classes </code></pre> <p>Something in the <code>load_rebalanced_dataset</code> function is making tf fill up all the available RAM memory (64 GB). It trains w/o problems for the first ~17 epochs until I get an OOM error. Is there anything I can do about it?</p>
<python><tensorflow><tensorflow-datasets>
2023-10-06 07:50:28
1
3,930
Marek M.
77,242,502
4,706,952
Color scatterplot by grouping in pandas
<p>Having a pandas <code>df</code> as follows:</p> <pre><code># Sample data data = { 'x': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 'y': [10, 9, 8, 7, 6, 5, 4, 3, 2, 1], 'z': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C', 'A'] } # Creating a pandas DataFrame df = pd.DataFrame(data) </code></pre> <p>I try to make a scatterplot that colors each group individually.</p> <p>I tried (following my intuition..):</p> <pre><code>df.groupby('z').plot.scatter('x', 'y', color = 'z') </code></pre> <p>This doesn't work..</p> <hr /> <p>So I searched and found:</p> <pre><code>plt.figure() for z in df['z'].unique(): subset = df[df['z'] == z] plt.scatter(subset['x'], subset['y']) </code></pre> <p>This works and produces:</p> <p><a href="https://i.sstatic.net/hq6ad.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hq6ad.png" alt="enter image description here" /></a>.</p> <p>But to me, this seems like a 'workaround'. Is there really no way to use the pandas grouping directly as an argument for the plotting method?</p>
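One pandas-native sketch that avoids the loop (the explicit label-to-color mapping below is an assumption on my part): pass per-point colors through the `c` argument of a single scatter call.

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import pandas as pd

df = pd.DataFrame({
    'x': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'y': [10, 9, 8, 7, 6, 5, 4, 3, 2, 1],
    'z': ['A', 'B', 'C', 'A', 'B', 'C', 'A', 'B', 'C', 'A'],
})

# Map each group label to a color and hand the resulting per-row colors
# to one scatter call -- no manual groupby loop needed.
palette = {'A': 'tab:blue', 'B': 'tab:orange', 'C': 'tab:green'}
ax = df.plot.scatter(x='x', y='y', c=df['z'].map(palette))
```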
<python><pandas>
2023-10-06 07:17:33
1
7,517
symbolrush
77,242,474
3,161,120
pytest: how to terminate FastAPI TestClient after each test?
<p>I am testing FastAPI application that keeps open WebSocket connection forever. By design, only the client is supposed to close WebSocket. After test execution, pytest doesn't finish, it is hanging forever.</p> <ol> <li>What's the proper way of testing such an application?</li> <li>Is there any way to terminate the fixture after test execution?</li> </ol> <p><strong>EDIT 2023-11-29</strong></p> <p>This problem happens on Ubuntu 22 (<code>Python 3.11.6, pytest-7.4.3, pluggy-1.3.0</code>, <code>plugins: respx-0.20.2, anyio-3.7.1, asyncio-0.21.1</code>).</p> <p>On Fedora 38 (<code>platform linux -- Python 3.11.6, pytest-7.4.3, pluggy-1.3.0</code>, <code>plugins: anyio-3.7.1, asyncio-0.21.1, respx-0.20.2</code>) pytest exits correctly.</p> <p>It's interesting that the same plugins are printed in different order on these systems.</p> <p>Example:</p> <pre><code>$ pytest -k test_comedy_tom_hanks_async ======================================================================= test session starts ======================================================================== platform linux -- Python 3.11.5, pytest-7.4.2, pluggy-1.3.0 -- /home/gbajson/.cache/pypoetry/virtualenvs/vox-fdgLK-f1-py3.11/bin/python cachedir: .pytest_cache rootdir: /storage/amoje/Sync/area22/vox-async configfile: pytest.ini plugins: timeout-2.1.0, anyio-3.7.1, asyncio-0.21.1 asyncio: mode=Mode.STRICT collected 13 items / 9 deselected / 4 selected run-last-failure: no previously failed tests, not deselecting items. 
src/tests/test_main.py::test_comedy_tom_hanks_async[asyncio+uvloop-3200.0-0.1] PASSED &lt;&lt;&lt; HANGING FOREVER &gt;&gt;&gt; </code></pre> <p>Here is my fixture used for testing:</p> <pre><code>@pytest.fixture(scope=&quot;function&quot;, name=&quot;websocket&quot;) async def fixture_ws_audio(): client = TestClient(app) with client.websocket_connect(&quot;/ws/audio&quot;) as websocket: yield websocket </code></pre> <p>I already tried:</p> <ul> <li><p><code>@pytest.mark.timeout(20)</code></p> </li> <li><p>I added the following fixture to execute some actions after each test, but <code>pytest</code> never reaches <code>logger.info(&quot;Test finished.&quot;)</code></p> </li> </ul> <pre><code>@pytest.fixture(autouse=True) def run_before_and_after_tests(): &quot;&quot;&quot;Fixture to execute asserts before and after a test is run&quot;&quot;&quot; # Setup: fill with any logic you want logger.info(&quot;Starting test.&quot;) yield # this is where the testing happens logger.info(&quot;Test finished.&quot;) </code></pre> <p><strong>EDIT 2023-10-09</strong></p> <p>Both, backend and test code, use <code>asyncio.to_thread</code> calls, example:</p> <pre><code> try: coro = asyncio.to_thread(read_from_ws, websocket) coro_waited = asyncio.wait_for(coro, timeout) results = await asyncio.gather(coro_waited) text = results[0] except asyncio.TimeoutError: logger.info(&quot;asyncio.TimeoutError: text: %s&quot;, text) break </code></pre> <p>When I terminate pytest session after executing all tests I see:</p> <pre><code>======== 4 passed, 5 deselected in 77.85s (0:01:17) ======== ^CException ignored in: &lt;module 'threading' from '/usr/lib/python3.11/threading.py'&gt; Traceback (most recent call last): File &quot;/usr/lib/python3.11/threading.py&quot;, line 1553, in _shutdown atexit_call() File &quot;/usr/lib/python3.11/concurrent/futures/thread.py&quot;, line 31, in _python_exit t.join() File &quot;/usr/lib/python3.11/threading.py&quot;, line 1112, in join 
self._wait_for_tstate_lock() File &quot;/usr/lib/python3.11/threading.py&quot;, line 1132, in _wait_for_tstate_lock if lock.acquire(block, timeout): ^^^^^^^^^^^^^^^^^^^^^^^^^^^^ KeyboardInterrupt: </code></pre>
<python><websocket><pytest><fastapi><ubuntu-22.04>
2023-10-06 07:12:01
1
1,830
gbajson
77,242,246
386,861
How to implement conditional colors on a time series plot in Altair
<p>I've got a dataset that I'm trying to work up around hospital admissions over time.</p> <p>Here's the python</p> <pre><code>import pandas as pd import altair as alt import numpy as np pd.options.display.max_columns = 10 pd.options.display.max_rows = 50 pd.set_option('display.max_colwidth', None) alt.data_transformers.disable_max_rows() df = pd.read_csv(&quot;sorted_data.csv&quot;) </code></pre> <h1>The data is here: <a href="https://drive.google.com/file/d/1Uvaq93-pMYSUgXbq2N2WAgtWn8t2Tbn2/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/1Uvaq93-pMYSUgXbq2N2WAgtWn8t2Tbn2/view?usp=sharing</a></h1> <pre><code># Create dropdown for Major region major_region_dropdown = alt.binding_select(options=df['Major region'].unique().tolist(), name=&quot;Major region: &quot;) major_region_selection = alt.selection_point(fields=['Major region'], bind=major_region_dropdown) #, value={&quot;Major region&quot;: &quot;Major region&quot;} # Create dropdown for variable variable_dropdown = alt.binding_select(options=df['variable'].unique().tolist(), name=&quot;Category: &quot;) variable_selection = alt.selection_point(fields=['variable'], bind=variable_dropdown) #, value={&quot;variable&quot;: &quot;Total admissions&quot;} Removed because broke code # Main chart main_chart = alt.Chart(df).mark_line(point=True).encode( x='Year:O', y='mean(quantity):Q', color=alt.Color('Region:N', legend=None), # Set legend to None tooltip=[&quot;Major region&quot;, &quot;Region&quot;, &quot;mean(quantity):Q&quot;] ).add_params( major_region_selection ).transform_filter( major_region_selection ).add_params( variable_selection ).transform_filter( variable_selection ).properties( width=450, height=600, title=&quot;Region&quot;) main_chart </code></pre> <p><a href="https://i.sstatic.net/RhAKe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RhAKe.png" alt="enter image description here" /></a></p> <p>The first problem with the chart is that although the Major Region 
selector defaults to 'North West', it displays every line, which is confusing. Oddly, you can select another region and it settles down.</p> <p>The second problem is that there are often too many minor regions to make out trends. Take this for the Major region of London.</p> <p><a href="https://i.sstatic.net/j77gr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j77gr.png" alt="enter image description here" /></a></p> <p>There are too many sub-regions to list in a legend, so instead I'd like every line to be light grey and turn red when one of its points is selected. Ideally, I'd like selection on an individual line.</p> <p>How do I put that condition into place? I know it belongs under the color encoding, but I can't work it out.</p> <p>Here's the data: <a href="https://drive.google.com/file/d/13Pw2wfr82laHXw7KKqp-nXeJt2cjwtZC/view?usp=sharing" rel="nofollow noreferrer">https://drive.google.com/file/d/13Pw2wfr82laHXw7KKqp-nXeJt2cjwtZC/view?usp=sharing</a></p>
<python><pandas><altair>
2023-10-06 06:25:23
0
7,882
elksie5000
77,242,163
3,082,759
Azure Functions Postgres ID violation error on auto increment ID
<p>I have an Azure function that retrieves data from an API and saves it to a Postgres DB. Now I keep getting this error about a duplicated ID:</p> <blockquote> <p>Error occurred during database operation: duplicate key value violates unique constraint 'contacts_pkey' DETAIL: Key (id)=(12736022) already exists.</p> </blockquote> <p>Here is the insert statement in my function:</p> <pre><code>def process_contacts(dbString, data, db_table_contacts): sql_contacts = f&quot;&quot;&quot;INSERT INTO {db_table_contacts}(uuid, name, age, gender, .....) VALUES (%s, %s, %s, %s, %s, .....) ON CONFLICT (uuid) DO UPDATE SET </code></pre> <p>I'm processing records in batches of 250 per transaction. Here is also my constraint in Postgres:</p> <p><a href="https://i.sstatic.net/Tpsmx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Tpsmx.png" alt="postgres constraints" /></a></p> <p>What am I missing here? The ID column is auto-incrementing, and the <code>ON CONFLICT</code> clause should update the row when there is a duplicate.</p>
<python><postgresql><azure-functions>
2023-10-06 06:07:07
1
387
PiotrK
77,242,012
3,211,801
Using bigquery client inside asyncio causes hanging issue in "OpenSSL/SSL.py", in recv_into
<p>We are doing some read operations using python bigquery library.</p> <pre><code>Python version: Python 3.8.10 Google bigquery package: https://pypi.org/project/google-cloud-bigquery/ (version: 2.30.1) </code></pre> <pre class="lang-py prettyprint-override"><code>from google.oauth2 import service_account from google.cloud import bigquery import logging import asyncio import faulthandler import traceback from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor logger = logging.getLogger('test') logger.setLevel(logging.ERROR) def get_big_query_client(source_details): print(&quot;get client&quot;) credentials_dict = source_details['auth_config'] credentials = service_account.Credentials.from_service_account_info(credentials_dict) project_id = credentials_dict[&quot;project_id&quot;] client = bigquery.Client(credentials=credentials, project=project_id) return client async def check_and_run_dd_for_tenants(tenant_ids, source_detail, table_name): await asyncio.gather(*[flatten_column_details_of_dataset(source_detail, table_name) for tenant_id in tenant_ids]) async def flatten_column_details_of_dataset(source_details, table_name): curr_column_data = [] client = get_big_query_client(source_details) schema_name = 'test_schema' query_table_name = f&quot;{schema_name}.{table_name}&quot; details = {'query_table_name':query_table_name} table_reference = details.get(&quot;query_table_name&quot;) try: table_details = client.get_table(table_reference) print(&quot;table_details&quot;,table_details) for _schema_data in table_details.schema: data_type = _schema_data.field_type except Exception as exp: logger.error(exp) logger.error(traceback.format_exc()) pass finally: client.close() return curr_column_data def main(): logger = logging.getLogger(&quot;Run DD&quot;) logger.setLevel(logging.INFO) faulthandler.enable(all_threads=True) faulthandler.dump_traceback_later(180, repeat=True) source_details = { &quot;auth_config&quot;: { &quot;auth_provider_x509_cert_url&quot;: 
&quot;*************&quot;, &quot;auth_uri&quot;: &quot;https://accounts.google.com/o/oauth2/auth&quot;, &quot;client_email&quot;: &quot;**************&quot;, &quot;client_id&quot;: &quot;****************&quot;, &quot;client_x509_cert_url&quot;: &quot;**********************&quot;, &quot;private_key&quot;: &quot;*****************&quot;, &quot;private_key_id&quot;: &quot;***************&quot;, &quot;project_id&quot;: &quot;**********************&quot;, &quot;token_uri&quot;: &quot;https://oauth2.googleapis.com/token&quot;, &quot;type&quot;: &quot;service_account&quot; } } table_names = ['test1'] tenant_id = [] for i in range(1,30): tenant_id.append(i) for table in table_names: res = asyncio.run(check_and_run_dd_for_tenants(tenant_id, source_details, table)) print(res) if __name__ == &quot;__main__&quot;: main() </code></pre> <p>We are trying to run things concurrently using asyncio. When we run this code, the code is hanging sometimes.</p> <p>Upon examining the trace using <code>faulthandler</code> , we got the following trace.</p> <pre><code>Thread 0x00007fde83fff700 (most recent call first): File &quot;/usr/local/lib/python3.8/dist-packages/OpenSSL/SSL.py&quot;, line 1799 in recv_into File &quot;/usr/local/lib/python3.8/dist-packages/urllib3/contrib/pyopenssl.py&quot;, line 319 in recv_into File &quot;/usr/lib/python3.8/socket.py&quot;, line 669 in readinto File &quot;/usr/lib/python3.8/http/client.py&quot;, line 277 in _read_status File &quot;/usr/lib/python3.8/http/client.py&quot;, line 316 in begin File &quot;/usr/lib/python3.8/http/client.py&quot;, line 1348 in getresponse File &quot;/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py&quot;, line 440 in _make_request File &quot;/usr/local/lib/python3.8/dist-packages/urllib3/connectionpool.py&quot;, line 699 in urlopen File &quot;/usr/local/lib/python3.8/dist-packages/requests/adapters.py&quot;, line 489 in send File &quot;/usr/local/lib/python3.8/dist-packages/requests/sessions.py&quot;, line 701 in 
send File &quot;/usr/local/lib/python3.8/dist-packages/requests/sessions.py&quot;, line 587 in request File &quot;/usr/local/lib/python3.8/dist-packages/google/auth/transport/requests.py&quot;, line 480 in request File &quot;/usr/local/lib/python3.8/dist-packages/google/cloud/_http/__init__.py&quot;, line 379 in _do_request File &quot;/usr/local/lib/python3.8/dist-packages/google/cloud/_http/__init__.py&quot;, line 341 in _make_request File &quot;/usr/local/lib/python3.8/dist-packages/google/cloud/_http/__init__.py&quot;, line 482 in api_request File &quot;/usr/local/lib/python3.8/dist-packages/google/api_core/retry.py&quot;, line 190 in retry_target File &quot;/usr/local/lib/python3.8/dist-packages/google/api_core/retry.py&quot;, line 283 in retry_wrapped_func File &quot;/usr/local/lib/python3.8/dist-packages/google/cloud/bigquery/client.py&quot;, line 760 in _call_api File &quot;/usr/local/lib/python3.8/dist-packages/google/cloud/bigquery/client.py&quot;, line 1012 in get_table File &quot;/connector_interfaces/big_query.py&quot;, line 804 in get_files File &quot;/usr/lib/python3.8/concurrent/futures/thread.py&quot;, line 57 in run File &quot;/usr/lib/python3.8/concurrent/futures/thread.py&quot;, line 80 in _worker File &quot;/usr/lib/python3.8/threading.py&quot;, line 870 in run File &quot;/usr/lib/python3.8/threading.py&quot;, line 932 in _bootstrap_inner File &quot;/usr/lib/python3.8/threading.py&quot;, line 890 in _bootstrap </code></pre> <p>We can see this line in the trace. <code>table_details = client.get_table(table_reference)</code></p> <p>Normally we will have async and await for the function in our code. 
But since we are using a library, we cannot add async/await to the library's own functions.</p> <p><strong>Attempted fixes:</strong></p> <ol> <li>We tried introducing a lock:</li> </ol> <pre class="lang-py prettyprint-override"><code>import threading lock = threading.Lock() lock.acquire() # Having our code related to reading here table_details = client.get_table(table_reference) lock.release() </code></pre> <p>This approach didn't work.</p> <ol start="2"> <li><p>Adding a timeout to the <code>get_table</code> call: <code>table_details = client.get_table(table_reference, timeout=180)</code>. We still got the same error.</p> </li> <li><p>Setting socket blocking mode to False:</p> </li> </ol> <pre class="lang-py prettyprint-override"><code>import socket s = socket.socket(socket.AF_INET, socket.SOCK_STREAM) s.setblocking(False) s.settimeout(300) </code></pre> <ol start="4"> <li>Having the read part in a separate function and calling it using:</li> </ol> <pre class="lang-py prettyprint-override"><code>loop = asyncio.get_running_loop() res = await loop.run_in_executor(None, my_new_read_func,arg1) print(res) </code></pre> <p>All four approaches run into the same issue.</p> <p>How can we avoid this hanging issue?</p> <p>NOTE:</p> <p>A corresponding issue has also been raised in the BigQuery GitHub repo: <a href="https://github.com/googleapis/python-bigquery/issues/1674" rel="nofollow noreferrer">https://github.com/googleapis/python-bigquery/issues/1674</a></p> <p>The tables involved are also fairly small.</p>
<python><google-bigquery><openssl><python-asyncio>
2023-10-06 05:17:34
0
882
Nandha
77,241,984
14,963,549
How to take a screenshot by web scraping code (Selenium) in Databricks for Azure?
<p>I'm getting started with web scraping, and my attempt is quite simple. I download and install Firefox. Then I navigate to Google and, just to confirm the code is working, I try to take a screenshot and save it to a temporary path in order to display it at the end.</p> <p>Unfortunately, when I execute the code nothing seems to go wrong, but when it finishes I just get a completely blank image, as follows:</p> <p><a href="https://i.sstatic.net/cvcNc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cvcNc.png" alt="enter image description here" /></a></p> <p>My code is described below.</p> <p>Installing Selenium CMD:</p> <pre><code>!pip install selenium !pip install webdriver-manager </code></pre> <p>Firefox download and installing in temporary path CMD:</p> <pre><code>%sh wget https://ftp.mozilla.org/pub/firefox/releases/106.0.1/linux-x86_64/en-US/firefox-106.0.1.tar.bz2 -O /tmp/firefox.tar.bz2 %sh tar -xvf /tmp/firefox.tar.bz2 -C /tmp/ %sh sudo apt-get update -y %sh sudo apt-get install -y wget bzip2 libxtst6 libgtk-3-0 libx11-xcb-dev libdbus-glib-1-2 libxt6 libpci-dev libudev-dev %sh rm -rf /var/lib/apt/lists/* </code></pre> <p>Algorithm CMD:</p> <pre><code>from selenium import webdriver from selenium.webdriver.firefox.service import Service from selenium.webdriver.firefox.options import Options from webdriver_manager.firefox import GeckoDriverManager import time from IPython.display import Image # Temporary path screenshot_path = '/tmp/web.png' # Firefox controller configuration service = Service(executable_path=GeckoDriverManager().install()) options = Options() options.set_preference(&quot;browser.download.folderList&quot;, 2) options.set_preference(&quot;browser.download.manager.showWhenStarting&quot;, False) options.set_preference(&quot;browser.download.dir&quot;, '/tmp/head_count_data/') options.headless = False options.add_argument('--headless') options.binary_location = '/tmp/firefox/firefox' driver = webdriver.Firefox(options=options, service=service) login_url = &quot;https://www.google.com/&quot; time.sleep(20) # Screenshot driver.save_screenshot(screenshot_path) time.sleep(20) # Show results Image(filename=screenshot_path) </code></pre> <p><strong>Could you tell me what I'm doing wrong?</strong> I've tried many times, restarted the kernel, changed the image format to .jpg, and checked that the PIL library is installed (version 9.2 shows in the results).</p>
<python><selenium-webdriver><web-scraping><databricks><azure-databricks>
2023-10-06 05:07:48
0
419
Xkid
77,241,916
8,253,860
How to test the payload by mocking POST request in pytest?
<p>I have a function that sends some payload to an external API using <code>requests.post()</code> method. I want to test whether the payload satisfies certain conditions in the tests. Based on that, I need to mock requests post and capture the payload in the testing function. Ideally, I don't want to add another external dependency and just stick to pytest but it is flexible if there are no other ways.</p> <p>There are a few questions like this but they don't explore the similar use-case so they are not helpful.</p> <p>I'm looking for something like this:</p> <pre><code>def mock_request_post(**kwargs): # check if kwargs satisfy test conditions def test_api(args): # mock requests module with `mock_request_post` # make api call </code></pre>
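For what the question sketches, the standard library's `unittest.mock.patch` (or pytest's built-in `monkeypatch` fixture) can capture the payload without any extra dependency. A hedged sketch, with a hypothetical `send_payload` standing in for the real function:

```python
from unittest.mock import patch

import requests

def send_payload(url, payload):
    # hypothetical function under test: posts `payload` to an external API
    return requests.post(url, json=payload, timeout=10)

# Swap requests.post for a MagicMock during the call, then inspect the
# arguments the code under test actually passed to it.
with patch("requests.post") as mock_post:
    mock_post.return_value.status_code = 200
    send_payload("https://api.example.com/items", {"name": "widget"})
    args, kwargs = mock_post.call_args

print(args[0], kwargs["json"])
```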
<python><unit-testing><python-requests><pytest>
2023-10-06 04:42:20
1
667
Ayush Chaurasia
77,241,833
264,136
variables to hold "none" if the list items don't exist
<pre><code>uut_image = image_file[0] hub_image = image_file[1] hub_sn_image = image_file[2] </code></pre> <p>I want the variables to hold &quot;none&quot; (string) if the list items don't exist.</p>
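A sketch of one way to do it, assuming a hypothetical `image_file` list that may hold fewer than three items: pad the list before unpacking.

```python
image_file = ["uut.png", "hub.png"]  # hypothetical: third entry missing

# Pad the list to at least three entries so unpacking always succeeds,
# with "none" filling any missing positions.
uut_image, hub_image, hub_sn_image = (image_file + ["none"] * 3)[:3]
print(uut_image, hub_image, hub_sn_image)
```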
<python>
2023-10-06 04:14:35
1
5,538
Akshay J
77,241,797
12,892,937
Python QuasarDB connection refused
<p>I'm following QuasarDB guide here: <a href="https://doc.quasar.ai/master/primer.html" rel="nofollow noreferrer">https://doc.quasar.ai/master/primer.html</a></p> <pre><code>&gt; docker run -d --name qdb-server bureau14/qdb &gt; docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 6b28287a871b bureau14/qdb &quot;/opt/qdb/scripts/qd…&quot; 32 minutes ago Up 31 minutes 2836-2837/tcp, 3836/tcp qdb-server </code></pre> <p>This is the example code to run QuasarDB:</p> <pre><code>import quasardb # connecting, default port is 2836 c = quasardb.Cluster(&quot;qdb://127.0.0.1:2836&quot;) # creating a timeseries object my_ts = c.ts(&quot;stocks&quot;) # creating the timeseries in the database # it will throw an error if the timeseries already exist my_ts.create([quasardb.ColumnInfo(quasardb.ColumnType.Double, &quot;close&quot;)]) </code></pre> <hr /> <pre><code>python3 primer.py File &quot;/home/dtl/Documents/qdb/primer.py&quot;, line 5, in &lt;module&gt; c = quasardb.Cluster(&quot;qdb://127.0.0.1:2836&quot;) quasardb.quasardb.Error: at qdb_connect: Connection refused. </code></pre> <p>What might cause <code>Connection refused</code> in this case? The example doesn't include this scenario.</p> <p>Edit: Stackoverflow doesn't have tag <code>quasardb</code> :\</p>
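One thing worth checking: the `docker ps` output above shows the ports only exposed inside the container (`2836-2837/tcp`), not published to the host, which would refuse a connection from `127.0.0.1`. A sketch of recreating the container with the port published (container name and image taken from the question):

```shell
# Remove the old container and publish port 2836 to the host this time
docker rm -f qdb-server
docker run -d --name qdb-server -p 2836:2836 bureau14/qdb
docker ps --format '{{.Names}} {{.Ports}}'  # should now list 0.0.0.0:2836->2836/tcp
```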
<python><docker>
2023-10-06 04:02:03
1
1,831
Huy Le
77,241,756
9,357,484
Error running code taking from Gymnasium tutorial
<p>I would like to run the following code using the PyCharm IDE:</p> <pre><code>import gymnasium as gym env = gym.make(&quot;LunarLander-v2&quot;, render_mode=&quot;human&quot;) observation, info = env.reset(seed=42) for _ in range(1000): action = env.action_space.sample() # this is where you would insert your policy observation, reward, terminated, truncated, info = env.step(action) if terminated or truncated: observation, info = env.reset() env.close() </code></pre> <p>I got the code from <a href="https://gymnasium.farama.org/" rel="nofollow noreferrer">here</a>. I installed Gymnasium on my computer using <code>pip install gymnasium</code>. The Python version I am using is 3.11.5.</p> <p>I get the following error:</p> <pre><code>ERROR: Failed building wheel for box2d-py Failed to build box2d-py ERROR: Could not build wheels for box2d-py, which is required to install pyproject.toml-based projects </code></pre> <p>What is the solution?</p>
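For what it's worth, `box2d-py` builds a C++ extension and needs SWIG available at build time; a commonly suggested fix (a sketch, and versions may vary on your system) is:

```shell
# Install SWIG first so the box2d-py wheel can build, then pull the
# Box2D extra that LunarLander-v2 depends on.
pip install swig
pip install "gymnasium[box2d]"
```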
<python><openai-gym>
2023-10-06 03:48:57
2
3,446
Encipher