Columns (type, min, max):
QuestionId: int64, 74.8M to 79.8M
UserId: int64, 56 to 29.4M
QuestionTitle: string, lengths 15 to 150
QuestionBody: string, lengths 40 to 40.3k
Tags: string, lengths 8 to 101
CreationDate: string (date), 2022-12-10 09:42:47 to 2025-11-01 19:08:18
AnswerCount: int64, 0 to 44
UserExpertiseLevel: int64, 301 to 888k
UserDisplayName: string, lengths 3 to 30, ⌀ (nullable)
79,346,838
16,854,651
Issues in updating Excel table range and conditional formatting programmatically using openpyxl, python
<p>I am using Python <strong>openpyxl</strong> to update an Excel sheet that contains a table and conditional formatting. The table has a predefined style, and the conditional formatting uses a 3-color scale applied to a specific range. I am trying to:</p> <ul> <li>Extend the table range to include a new column.</li> <li>Apply the same table formatting to the extended range.</li> <li>Update the conditional formatting to cover the extended range as well.</li> </ul> <p>The image below shows the initial Excel sheet I want to update,</p> <p><a href="https://i.sstatic.net/Z4jo8APm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Z4jo8APm.png" alt="table before" /></a></p> <p>and below is the code I am using:</p> <pre><code>import openpyxl from openpyxl.formatting.formatting import ConditionalFormattingList from openpyxl.formatting.rule import ColorScaleRule path = &quot;test_colors.xlsx&quot; wb = openpyxl.load_workbook(path) sheet = wb[&quot;tab1&quot;] sheet.tables[&quot;tab1_table&quot;].ref = &quot;A1:D4&quot; new_rule_range = &quot;A2:D4&quot; rule = ColorScaleRule(start_type='min', start_color='00FF00', mid_type='percentile', mid_value=50, mid_color='FFFF00', end_type='max', end_color='AA4A44') sheet.conditional_formatting = ConditionalFormattingList() sheet.conditional_formatting.add(new_rule_range, rule) wb.save(path) wb.close() </code></pre> <p>but once I open the Excel file I get the following warning pop-up:</p> <p><a href="https://i.sstatic.net/82aZukmT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/82aZukmT.png" alt="warning message" /></a></p> <p>If I click Yes, I get the table below with the rules applied but without any table formatting.</p> <p><a href="https://i.sstatic.net/6HSbO6rB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6HSbO6rB.png" alt="table after" /></a></p> <p>Can someone help me figure out the problem and how to resolve it?</p>
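Two things stand out in the code above: assigning a fresh `ConditionalFormattingList()` wipes every existing rule on the sheet, and changing a table's `ref` in place leaves the table's declared columns (three) out of sync with the new four-column range, which is a plausible trigger for Excel's repair dialog (the repair then drops the table, matching the "no table formatting" result). A sketch of one way around it, building a throwaway workbook in memory since the original `test_colors.xlsx` isn't available (sheet name, table name, and data are made up to mirror the question):

```python
import io

import openpyxl
from openpyxl.formatting.rule import ColorScaleRule
from openpyxl.worksheet.table import Table, TableStyleInfo

# Build a stand-in workbook resembling the question's sheet.
wb = openpyxl.Workbook()
ws = wb.active
ws.title = "tab1"
for row in [["a", "b", "c", "d"], [1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]:
    ws.append(row)
ws.add_table(Table(displayName="tab1_table", ref="A1:C4",
                   tableStyleInfo=TableStyleInfo(name="TableStyleMedium9",
                                                 showRowStripes=True)))
buf = io.BytesIO()
wb.save(buf)

# Instead of mutating the loaded table's ref, delete it and re-add a table
# with the extended range, so the column metadata is rebuilt consistently.
wb2 = openpyxl.load_workbook(io.BytesIO(buf.getvalue()))
ws2 = wb2["tab1"]
old = ws2.tables["tab1_table"]
style = old.tableStyleInfo                     # keep the predefined style
del ws2.tables["tab1_table"]
ws2.add_table(Table(displayName="tab1_table", ref="A1:D4", tableStyleInfo=style))

# Extend the colour scale by adding a rule, without replacing
# sheet.conditional_formatting wholesale.
ws2.conditional_formatting.add(
    "A2:D4",
    ColorScaleRule(start_type="min", start_color="00FF00",
                   mid_type="percentile", mid_value=50, mid_color="FFFF00",
                   end_type="max", end_color="AA4A44"),
)
out = io.BytesIO()
wb2.save(out)

wb3 = openpyxl.load_workbook(io.BytesIO(out.getvalue()))
assert wb3["tab1"].tables["tab1_table"].ref == "A1:D4"
```

Whether this removes the repair warning for the real file would need checking against that file, but re-creating the table rather than patching `ref` keeps openpyxl's table XML self-consistent.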
<python><python-3.x><pandas><openpyxl>
2025-01-10 19:37:01
1
344
Josal
79,346,700
843,400
mlflow.pytorch.load_model failing with "No module named 'src.<mymodelname>'" from unpickler.load
<p>I am working to support some model developers by prototyping some functionality with MLflow model registry.</p> <p>We successfully register versions of the model fine (it's a Pytorch Lightning model), and I recently also added code_paths to the log_model call so that theoretically all of the original model code ends up registered / in the artifacts (I'm not sure if this is best practice though? But either way the model code is super complex and I'm not sure how I can extract out just the stuff needed by the serializer on the consuming side atm).</p> <pre><code> mlflow.pytorch.log_model( pytorch_model=cli.model, artifact_path=&quot;model&quot;, registered_model_name=&quot;MyModelName&quot;, code_paths=[&quot;src&quot;], ) </code></pre> <p>In any case, I can see the model registered and the artifacts tree looks like this now when I view them from the UI:</p> <pre><code>model |--checkpoints |--code |--src |--&lt;mymodelname&gt; |-- a bunch of other modules and nested stuff under here |--a bunch of other stuff here |--data |--model.pth |--pickle_module_info.txt (this is a small file with just the text &quot;mlflow.pytorch.pickle_module&quot; in it) |--MLmodel (small file w/ metadata about the model) |--conda.yaml |--python_env.yaml |--requirements.txt </code></pre> <p>I have a totally separate package outside of the model package and I'm testing consuming the model as an &quot;external&quot; client who wants to use the registered pre-trained model for further fine-tuning or inference. 
I confirmed that the code is getting retrieved when I do something like this:</p> <pre><code>model_uri = f&quot;models:/{model_name}/{model_version}&quot; local_path = mlflow.artifacts.download_artifacts(model_uri) </code></pre> <p>The retrieved artifacts include everything I listed above.</p> <p>However, when I try to load the model:</p> <pre><code> loaded_model = mlflow.pytorch.load_model(model_uri) </code></pre> <p>It fails with the error: <code>ModuleNotFoundError: No module named 'src.&lt;mymodelname&gt;'</code></p> <p>Am I going about this totally wrong, or is there just something small I'm missing here (like getting the unpickler to recognize the original model code under code/ somehow, since it's all there)?</p> <p>Any help would be much appreciated!</p>
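The error comes from how pickle records classes: `model.pth` stores only dotted import paths such as `src.<mymodelname>.SomeClass`, so the consuming process must be able to import exactly that path. MLflow prepends the downloaded `code/` directory to `sys.path`, so the layout under `code/` has to reproduce the `src` package as it was importable at training time. A self-contained sketch of the mechanism (all names here are hypothetical stand-ins, not the real model code):

```python
import pickle
import sys
import types

# Fake a class that "lives" in a package module, the way the Lightning
# model class lives in src.<mymodelname> at training time.
class Net:
    pass

Net.__module__ = "src.model"      # hypothetical module path
Net.__qualname__ = "Net"

pkg = types.ModuleType("src")
mod = types.ModuleType("src.model")
mod.Net = Net
pkg.model = mod
sys.modules["src"] = pkg
sys.modules["src.model"] = mod

data = pickle.dumps(Net())        # stores the reference "src.model.Net"

# On the consuming side, if "src.model" cannot be imported, loading fails
# with exactly the kind of ModuleNotFoundError from the question.
del sys.modules["src"], sys.modules["src.model"]
try:
    pickle.loads(data)
    err = None
except ModuleNotFoundError as exc:
    err = str(exc)

assert err == "No module named 'src'"
```

So the question to check is whether, after download, `code/src/<mymodelname>/...` exists such that `import src.<mymodelname>` succeeds with `code/` on `sys.path`; if the training script imported the model as `<mymodelname>.x` instead of `src.<mymodelname>.x` (or vice versa), the pickled paths and the shipped layout won't line up.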
<python><pytorch><mlflow><pytorch-lightning>
2025-01-10 18:31:56
0
3,906
CustardBun
79,346,575
5,344,240
Overriding __str__ and __repr__ magic methods for built-in float class in Python
<p>The class below works as expected:</p> <pre><code>class Storage(float): def __new__(cls, value, unit): instance = super().__new__(cls, value) instance.unit = unit return instance def __str__(self): return f&quot;{super().__repr__()} {self.unit}&quot; # 1. # return f&quot;{super().__str__()} {self.unit}&quot; # 2. def __repr__(self): return f'Storage({super().__repr__()}, &quot;{self.unit}&quot;)' # a. # return f'Storage({super().__str__()}, &quot;{self.unit}&quot;)' # b. </code></pre> <p>I did not know which dunder method to call on <code>super()</code> so I tried all variants, see commented lines. I thought they would all work since <code>__str__()</code> and <code>__repr__()</code> work the same for floats.</p> <ul> <li>Case 1a gives the desired output:</li> </ul> <pre><code>&gt;&gt;&gt; storage = Storage(512, &quot;GB&quot;) &gt;&gt;&gt; str(storage) '512.0 GB' &gt;&gt;&gt; repr(storage) 'Storage(512.0, &quot;GB&quot;)' </code></pre> <ul> <li><p>Case 1b gives expected <code>str()</code> output but <code>RecursionError</code> for <code>repr()</code>.</p> </li> <li><p>Case 2a gives expected <code>repr()</code> output but unexpected <code>str()</code> output:</p> </li> </ul> <pre><code>&gt;&gt;&gt; str(storage) 'Storage(512.0, &quot;GB&quot;) GB' </code></pre> <ul> <li>Case 2b gives <code>RecursionError</code> for both <code>str()</code> and <code>repr()</code>.</li> </ul> <p>Can you please help me understand how these overridden methods get tangled up?</p>
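The tangle has one root cause: `float` defines its own `__repr__` but no `__str__` of its own, so `super().__str__()` resolves to `object.__str__`, and `object.__str__` simply calls the instance's (overridden) `__repr__`. That re-entry explains every case: 1b and 2b recurse because `super().__str__()` loops back into `Storage.__repr__`, and 2a prints `Storage(512.0, "GB") GB` because `super().__str__()` inside `__str__` returns the full `Storage.__repr__` output. A runnable sketch of the working variant plus the inheritance check:

```python
class Storage(float):
    def __new__(cls, value, unit):
        instance = super().__new__(cls, value)
        instance.unit = unit
        return instance

    def __str__(self):
        # super().__str__() would resolve to object.__str__, which calls
        # self.__repr__() -- i.e. Storage.__repr__ -- hence the recursion.
        return f"{super().__repr__()} {self.unit}"

    def __repr__(self):
        return f'Storage({super().__repr__()}, "{self.unit}")'


# float has no __str__ of its own; it inherits object.__str__
assert "__str__" not in vars(float)
assert "__repr__" in vars(float)

s = Storage(512, "GB")
assert str(s) == "512.0 GB"
assert repr(s) == 'Storage(512.0, "GB")'
```

So "`__str__` and `__repr__` work the same for floats" is true of the output, but not of the implementation: only `__repr__` is actually defined on `float`, which is why only `super().__repr__()` is safe to call from both methods.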
<python>
2025-01-10 17:40:53
1
455
Andras Vanyolos
79,346,555
6,038,082
How to use Python logger in help messages without the __main__.INFO getting printed
<p>I am using the <code>logging</code> module to log my info, error and warning messages. However, to print the help message I am removing the logger handler with <code>removeHandler</code>, because otherwise <code>__main__.INFO</code> gets printed with the help messages. So I am using the <code>print</code> function instead for that.</p> <p>Is there a way to use the logger for help messages as well, without <code>__main__.INFO</code> getting printed in it?</p> <p>Below is my code:</p> <pre><code>import logging def show_help(): 'This is help messages' help_msg = '''\n The valid options:\n [-src_path : &lt;option to provide src_path ] [-dest_path : &lt;option to provide dest_path ] ''' print(help_msg) def define_logger(): log_path = &quot;/usr/scripts/logs/my_script.log&quot; # Create a custom logger logger = logging.getLogger(__name__) logger.setLevel(logging.DEBUG) # Create a console handler f_handler = logging.FileHandler(log_path) con_handler = logging.StreamHandler() con_handler.setLevel(logging.DEBUG) # Create a formatter and add it to the handler con_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s') con_handler.setFormatter(con_format) f_format = logging.Formatter('%(name)s - %(levelname)s - %(message)s') f_handler.setFormatter(f_format) # Add the handler to the logger logger.addHandler(con_handler) logger.addHandler(f_handler) return logger, con_handler if __name__ == '__main__': logger, con_handler = define_logger() if args[1] == '-help_msg': logger.removeHandler(con_handler) show_help() logger.addHandler(con_handler)</code></pre>
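One approach that avoids juggling `removeHandler` is a second, dedicated logger whose handler formats nothing but `%(message)s`: help text still flows through `logging` (and could also go to the file handler, if desired), but without the `name - level` prefix. A minimal sketch, with the stream captured so the output is checkable:

```python
import io
import logging


def make_help_logger(stream=None):
    # A dedicated logger whose formatter emits only the message itself,
    # so no "__main__ - INFO - " prefix appears in front of help text.
    logger = logging.getLogger("help")
    logger.setLevel(logging.INFO)
    logger.propagate = False          # don't bubble up to other handlers
    handler = logging.StreamHandler(stream)
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.addHandler(handler)
    return logger


buf = io.StringIO()
help_logger = make_help_logger(buf)
help_logger.info("The valid options:\n[-src_path : <option to provide src_path ]")
assert buf.getvalue() == "The valid options:\n[-src_path : <option to provide src_path ]\n"
```

In the script above, `show_help()` could then call `help_logger.info(help_msg)` (with `stream=None` the handler writes to stderr) while the main logger keeps its prefixed format untouched.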
<python>
2025-01-10 17:34:50
1
1,014
A.G.Progm.Enthusiast
79,346,395
7,425,379
Why do I get "AttributeError: 'str' object has no attribute 'value'" when trying to use darts ExponentialSmoothing with a "trend" argument?
<p>Here is the code I have:</p> <pre><code># Define models models = { 'ExponentialSmoothing': [ ExponentialSmoothing(trend='add', seasonal='add', seasonal_periods=52), ExponentialSmoothing(trend='add', seasonal='mul', seasonal_periods=12) ], 'SeasonalARIMA': [ ARIMA(p=1, d=1, q=1, seasonal_order=(1, 1, 1, 52)), ARIMA(p=1, d=1, q=1, seasonal_order=(1, 1, 1, 12)) ], 'FFT': [ FFT(nr_freqs_to_keep=10), FFT(nr_freqs_to_keep=5) ] } def evaluate_models(train, test, model_list): performance = [] for model in model_list: start_time = time.time() model.fit(train) forecast = model.predict(len(test)) end_time = time.time() # Ensure forecast and test are TimeSeries objects if not isinstance(forecast, TimeSeries): raise ValueError(f&quot;Forecast is not a TimeSeries object: {forecast}&quot;) if not isinstance(test, TimeSeries): raise ValueError(f&quot;Test is not a TimeSeries object: {test}&quot;) performance.append({ 'Model': type(model).__name__, 'MAE': mae(test, forecast), 'MSE': mse(test, forecast), 'MASE': mase(test, forecast, train), 'Forecast Bias': (forecast.mean() - test.mean()).values()[0], 'Time Elapsed (s)': end_time - start_time }) return pd.DataFrame(performance) # Evaluate weekly data performance_weekly = {} for name, model_list in models.items(): performance_weekly[name] = evaluate_models(train_weekly, test_weekly, model_list) # Evaluate monthly data performance_monthly = {} for name, model_list in models.items(): performance_monthly[name] = evaluate_models(train_monthly, test_monthly, model_list) # Display results display(pd.concat(performance_weekly.values())) display(pd.concat(performance_monthly.values())) </code></pre> <p>I get an error like this:</p> <pre class="lang-none prettyprint-override"><code> AttributeError: 'str' object has no attribute 'value' File &lt;command-3594900232608958&gt;, line 42 40 performance_weekly = {} 41 for name, model_list in models.items(): ---&gt; 42 performance_weekly[name] = evaluate_models(train_weekly, test_weekly, 
model_list) 44 # Evaluate monthly data 45 performance_monthly = {} File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/darts/models/forecasting/exponential_smoothing.py:123, in ExponentialSmoothing.fit(self, series) 118 if self.seasonal_periods is None and series.has_range_index: 119 seasonal_periods_param = 12 121 hw_model = hw.ExponentialSmoothing( 122 series.values(copy=False), --&gt; 123 trend=self.trend if self.trend is None else self.trend.value, 124 damped_trend=self.damped, 125 seasonal=self.seasonal if self.seasonal is None else self.seasonal.value, 126 seasonal_periods=seasonal_periods_param, 127 freq=series.freq if series.has_datetime_index else None, </code></pre> <p>For context: I am doing time-series forecasting.</p> <p>Is this because of the methodology I used to split the training and test datasets?</p>
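The traceback line `self.trend.value` is the giveaway: it is not about the train/test split at all. darts' `ExponentialSmoothing` expects enum members rather than plain strings for `trend` and `seasonal`; if I recall the darts API correctly (worth verifying against your installed version), those members live in `darts.utils.utils` as `ModelMode` and `SeasonalityMode`, e.g. `ExponentialSmoothing(trend=ModelMode.ADDITIVE, seasonal=SeasonalityMode.ADDITIVE, seasonal_periods=52)`. A stdlib-only sketch of why the string `'add'` blows up on that exact line:

```python
from enum import Enum


# Minimal stand-in for darts' ModelMode enum (the real one ships with darts).
class ModelMode(Enum):
    ADDITIVE = "additive"
    MULTIPLICATIVE = "multiplicative"


# darts does `self.trend.value`; an Enum member has .value, a str does not.
assert ModelMode.ADDITIVE.value == "additive"

try:
    "add".value
    err = None
except AttributeError as exc:
    err = str(exc)

assert "no attribute 'value'" in err
```

So the fix is in the model constructors in the `models` dict, not in `evaluate_models`.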
<python><time-series><forecasting><u8darts>
2025-01-10 16:42:13
1
1,134
GaB
79,346,210
3,858,619
Python cmislib3 PDF blank after createDocumentFromString
<p>I need to send a PDF file over CMIS using the Browser bindings. I have the following code, but in my CMIS application the PDF is blank. I also tried using createDocument(), but that does not work either.</p> <p>The PDF is fine before I open and read/decode/encode it, so I think there is an issue while converting to base64 or somewhere similar. Do you have any idea what I am doing wrong?</p> <pre class="lang-py prettyprint-override"><code>from cmislib.model import CmisClient from cmislib.browser.binding import BrowserBinding _cmis_client = CmisClient(repository_url, cmis_username, cmis_password, binding=BrowserBinding()) _repo = _cmis_client.getDefaultRepository() _root_folder = _repo.getObjectByPath(base_dir) </code></pre> <pre class="lang-py prettyprint-override"><code>with open(path, 'rb') as file: file_name = path.split('/')[-1] file_content = file.read().decode('ISO-8859-1') _root_folder.createDocumentFromString(file_name, contentString=file_content, contentType='application/pdf') </code></pre> <pre class="lang-py prettyprint-override"><code>with open(path, 'r') as file: file_name = path.split('/')[-1] _root_folder.createDocument(file_name, contentFile=file.read(), contentType='application/pdf') # _root_folder.createDocument(file_name, contentFile=file, contentType=content_type) # this one not working also </code></pre>
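The likely corruption point is treating the PDF as text at all: the ISO-8859-1 decode itself is lossless, but anything downstream that re-encodes the string (UTF-8 is the usual default for text payloads) rewrites every byte ≥ 0x80, which destroys a binary format. A minimal demonstration with a hypothetical binary fragment:

```python
# A hypothetical binary fragment such as a PDF header with high bytes.
blob = b"%PDF-1.7\n\x89\xfe\xff"

text = blob.decode("ISO-8859-1")
assert text.encode("ISO-8859-1") == blob   # latin-1 round trip is lossless...
assert text.encode("utf-8") != blob        # ...but a UTF-8 re-encode corrupts it
```

The hedged suggestion, then: skip the decode entirely and hand the open binary file object itself (from `open(path, 'rb')`, not `file.read()` and not a text-mode `open(path, 'r')`) to `createDocument(contentFile=...)`, so the bytes reach the repository untouched.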
<python><cmis>
2025-01-10 15:35:47
0
1,093
Nathan30
79,345,994
10,037,034
How to run a task at a specific time?
<p>I am creating a DAG in Airflow, and within this DAG I need to trigger another DAG using a TimeSensor. The goal is to set the target time to between 2:00 AM and 3:00 AM, and if the TimeSensor is triggered after 2:00 AM, it should wait until the next day's 2:00 AM. However, I am encountering the following error:</p> <pre><code>TypeError: not supported between instances of 'datetime.time' and 'datetime.datetime' </code></pre> <p>My code:</p> <pre><code>from airflow.models import DAG from datetime import timedelta, datetime from airflow.providers.amazon.aws.sensors.s3 import S3KeySensor from airflow.operators.python_operator import PythonOperator from airflow.providers.ssh.operators.ssh import SSHOperator from airflow.operators.trigger_dagrun import TriggerDagRunOperator from airflow.sensors.time_sensor import TimeSensor from datetime import datetime, time from airflow.utils.state import State from airflow.models import DagRun import pytz import pendulum def calculate_next_target_time(): now = pendulum.now('Europe/Istanbul') target_time = now.replace(hour=2, minute=0, second=0, microsecond=0) if now &gt;= target_time: target_time = target_time.add(days=1) return target_time def wait_for_time(): next_target_time = calculate_next_target_time() print(f&quot;Next target time: {next_target_time}&quot;) return next_target_time with DAG( '_dag', default_args={ 'depends_on_past': False, 'email': [''], 'email_on_failure': False, 'email_on_retry': False,}, schedule_interval=None, start_date=datetime(2021, 1, 1), catchup=False) as dag: dummy_task = SSHOperator( task_id=&quot;dummy_task&quot;, command=&quot;dummy-task&quot;, ssh_conn_id=&quot;dummy&quot;,) wait_for_2am = TimeSensor( task_id='wait_for_2am', poke_interval=60, timeout=3000, mode='poke', target_time=calculate_next_target_time(), dag=dag) wait_for_2am_task = PythonOperator( task_id=&quot;wait_for_2am_task&quot;, python_callable=wait_for_time, dag=dag) service_trigger = TriggerDagRunOperator( task_id=&quot;service_trigger&quot;, trigger_dag_id=&quot;other_dag&quot;,) dummy_task &gt;&gt; wait_for_2am_task &gt;&gt; wait_for_2am &gt;&gt; service_trigger </code></pre> <p>How can I solve this problem? TimeSensor's target_time only accepts a time of day (hour/minute), not a full datetime.</p>
<python><airflow>
2025-01-10 14:19:07
0
1,311
Sevval Kahraman
79,345,986
7,959,614
Fast(est) exponentiation of numpy 3D matrix
<p><code>Q</code> is a 3D matrix and could for example have the following shape:</p> <blockquote> <p>(4000, 25, 25)</p> </blockquote> <p>I want to raise <code>Q</code> to the power <code>n</code> for each <code>n in {0, 1, ..., k-1}</code> and sum it all. Basically, I want to calculate</p> <blockquote> <p>\sum_{n=0}^{k-1}Q^n</p> </blockquote> <p>I have the following function that works as expected:</p> <pre><code>def sum_of_powers(Q: np.ndarray, k: int) -&gt; np.ndarray: Qs = np.sum([ np.linalg.matrix_power(Q, n) for n in range(k) ], axis=0) return Qs </code></pre> <p>Is it possible to speed up my function, or is there a faster method to obtain the same output?</p>
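Computing each power from scratch makes `matrix_power` redo work: accumulating the running power needs only one batched matmul per term, and `@` already operates on stacks of matrices, so the whole (4000, 25, 25) stack advances in a single call. A sketch:

```python
import numpy as np


def sum_of_powers(Q: np.ndarray, k: int) -> np.ndarray:
    """Return Q^0 + Q^1 + ... + Q^(k-1) for a stack of square matrices."""
    n = Q.shape[-1]
    P = np.broadcast_to(np.eye(n, dtype=Q.dtype), Q.shape).copy()  # Q^0 = I
    S = P.copy()
    for _ in range(1, k):
        P = P @ Q            # one batched matmul per additional power
        S += P
    return S


rng = np.random.default_rng(0)
Q = rng.random((5, 4, 4))
reference = np.sum([np.linalg.matrix_power(Q, i) for i in range(6)], axis=0)
assert np.allclose(sum_of_powers(Q, 6), reference)
```

This does k-1 batched multiplications instead of roughly k·log k inside the repeated `matrix_power` calls. If `I - Q` happens to be well-conditioned for your matrices, the geometric-series identity `(I - Q) S = I - Q^k` can cut it further to O(log k) multiplications plus one `np.linalg.solve`, at the cost of numerical caveats when `I - Q` is near-singular.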
<python><numpy>
2025-01-10 14:16:00
3
406
HJA24
79,345,979
14,254,771
Show waiting input string when attaching to a container
<p>I'm trying to run a client-server application in Python using docker-compose:</p> <p>I wrote a client.py file that awaits input: <code>command=input(&quot;enter command&quot;)</code>. The client.py sends requests to the server based on the command it got.</p> <p>The project tree looks like this:</p> <pre><code>. ├── client │   ├── client.py │   └── Dockerfile ├── docker-compose.yml └── server ├── Dockerfile └── server.py </code></pre> <p>The server.py receives the request and sends back a response to the client.</p> <p>Now, I'm trying to build and run the scripts:</p> <p>This is the docker-compose.yml:</p> <pre class="lang-yaml prettyprint-override"><code> version: '3' services: server: build: context: ./server network: host command: python ./server.py ports: - 1337:1337 client: build: context: ./client network: host command: python ./client.py depends_on: - server network_mode: host stdin_open: true tty: true </code></pre> <p>The <code>Dockerfile</code>s are pretty simple:</p> <p>In client folder:</p> <pre><code>FROM python:latest ADD client.py /client/ WORKDIR /client/ </code></pre> <p>In server folder:</p> <pre><code>FROM python:latest RUN pip install bcrypt ADD server.py /server/ WORKDIR /server/ </code></pre> <p>I run <code>sudo docker-compose up</code> and then attach to the client container: <code>sudo docker attach a3_client_1</code> but it doesn't print the input prompt string. It prints it only in the terminal where I ran <code>sudo docker-compose up</code>, and only after I enter a command in the other attached terminal.</p> <p>Is there a way to show the prints made before the attach?</p> <p>Maybe there is a way to run the script only when someone attaches to the container?</p>
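Two things are likely at play here: `docker attach` only streams output produced after you attach (it does not replay history), and Python block-buffers stdout when it isn't connected to a TTY, so output written before the attach can sit in the buffer. One hedged tweak is to disable buffering in the client image, and to use `docker-compose run client` (which allocates a fresh interactive TTY for that one container) rather than `up` plus `attach` when you want the prompt in your own terminal from the start:

```dockerfile
FROM python:latest
ADD client.py /client/
WORKDIR /client/
# Unbuffered stdout/stderr: prompts and prints reach the log stream
# immediately instead of waiting in Python's block buffer.
ENV PYTHONUNBUFFERED=1
```

Equivalently, `python -u ./client.py` in the compose `command` has the same effect as the environment variable.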
<python><python-3.x><docker><docker-compose>
2025-01-10 14:13:53
1
499
Wahalez
79,345,801
273,593
Typing issue using a taskgroup in a taskflow-based dag
<p>I'm trying to improve the readability of my dags using taskgroups.</p> <p>I also rely on the taskflow syntax, and the ability to map function parameters / return values to xcom automatically.</p> <p>Also also :) I'm a mypy / pyright user, and I try to keep the source code of my project type-annotated.</p> <p>With this scenario in mind, I'm trying to understand what is the best way to describe my dags and taskgroups.</p> <p>Let me share a (simplified) example of my dags:</p> <pre class="lang-py prettyprint-override"><code>@task def source() -&gt; str: return 'blablabla' @task def task_len(data: str) -&gt; int: return len(data) @task def task_mul(times: int) -&gt; str: return 'x' * times @task_group def tg(data: str) -&gt; str: return task_mul(task_len(data)) @task def sink(data: str) -&gt; None: print(data) @dag() def dag_tg() -&gt; None: sink(tg(source())) dag_tg() </code></pre> <p>Here you can see that I have some tasks that return / consume from xcom.</p> <p>I also have a taskgroup, where I want to capture the needed input xcom and describe the output one.</p> <p>In the dag my goal is to &quot;use&quot; the taskgroup as a task, that just consume and produce xcom &quot;directly&quot;.</p> <p>This setup &quot;works&quot;, as it is loaded correctly in airflow, it runs as expected etcetera.</p> <p>Yet, I have a bunch of errors from mypy: on the tg definition I have the error <code>Value of type variable &quot;FReturn&quot; of &quot;task_group&quot; cannot be &quot;str&quot;</code>, and in the dag, where tg is used, I have <code>Argument 1 has incompatible type &quot;DAGNode&quot;; expected &quot;str&quot;</code>.</p> <p>It seems that - at least from the typing perspective - the taskgroup decorator expect a function that return a DAGNode (and this is also visible on <a href="https://github.com/apache/airflow/blob/main/airflow/decorators/task_group.py#L182" rel="nofollow 
noreferrer">https://github.com/apache/airflow/blob/main/airflow/decorators/task_group.py#L182</a> ).</p> <p>So my question is twofold:</p> <ul> <li><p>Assuming the airflow annotations are right - am I using the taskgroup &quot;wrong&quot;? Should I avoid using taskflow-style functions in a group? Do this dag works just by chance?</p> </li> <li><p>Assuming my usage is correct - should airflow annotations be improved to support this scenario?</p> </li> </ul>
<python><airflow><python-typing>
2025-01-10 13:08:35
0
1,703
Vito De Tullio
79,345,778
1,627,106
Poetry not updating sub dependencies despite version constraint allowing it
<p>My code is using a package (<code>taskiq</code>), which in turn depends on another package (<code>pycron</code>). Why is <code>poetry update</code> not updating <code>pycron</code> despite version constraints allowing it?</p> <pre class="lang-shell prettyprint-override"><code>$ poetry update Updating dependencies Resolving dependencies... (2.2s) No dependencies to install or update $ poetry show -o pycron 3.0.0 3.1.1 Simple cron-like parser, which determines if current datetime matches conditions. $ poetry show --why --tree pycron taskiq 0.11.10 Distributed task queue with full async support └── pycron &gt;=3.0.0,&lt;4.0.0 </code></pre>
<python><python-poetry>
2025-01-10 12:58:34
1
1,712
Daniel
79,345,695
9,467,944
Pair data located in the same string, AWK or other
<p>I have a file with strings like this:</p> <pre><code>1A,1B,1C, 2A,2B,2C, </code></pre> <p>Between group &quot;1&quot; and group &quot;2&quot; there is a tab (each string in the file has a different number of elements). I need to pair the &quot;A&quot;, &quot;B&quot;, and &quot;C&quot; elements and put each pair on a new line, e.g.:</p> <pre><code>1A-2A 1B-2B 1C-2C </code></pre> <p>I was trying with AWK or Bash, but I cannot work it out.</p>
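Since the question is also tagged python, here is a sketch of the pairing in pure Python (awk can do the same by splitting first on the tab and then on commas). The trailing commas produce empty fields, which are skipped, and `zip` stops at the shorter group:

```python
def pair_groups(line: str) -> list[str]:
    """Pair the i-th element of the left tab-separated group with the i-th of the right."""
    left, right = line.rstrip("\n").split("\t")
    a = [item for item in left.split(",") if item]   # drop empty trailing fields
    b = [item for item in right.split(",") if item]
    return [f"{x}-{y}" for x, y in zip(a, b)]


pairs = pair_groups("1A,1B,1C,\t2A,2B,2C,")
assert pairs == ["1A-2A", "1B-2B", "1C-2C"]
```

For a whole file, looping `for line in open("data.txt")` and printing each pair on its own line reproduces the desired output.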
<python><bash><awk>
2025-01-10 12:21:03
3
549
Emma Athan
79,345,689
11,790,979
Trouble calling an activate script from within a PowerShell script
<p>I'm trying to write some powershell scripts to &quot;one-click&quot; some setup for project initialisation.</p> <p>I am unable to get it to call the activate script. Typically, in a terminal I would create the virtual environment, then call the activate script using <code>.\venv\Scripts\Activate</code>. However, because I want to have the script be flexible, I am letting (forcing) users to pick a name for their virtual environment.</p> <p>Onto the script:</p> <pre class="lang-bash prettyprint-override"><code>function Make-Python-Environment { [CmdletBinding()] #make an advance function Param( [Parameter(Mandatory)] [ValidateNotNullOrEmpty()] [String]$VirtualEnvName, [Parameter(Mandatory)] [TransformBooleanInputs()] # not relevant to this, just parses string to boolean t/f [switch]$IncludeDotEnv = $False ) begin { Write-Host &quot;This is only a test&quot; } end { $ThisDir = (Get-Item .).FullName if($IncludeDotEnv){ New-Item &quot;.env&quot; -Force Write-Host &quot;.env file has been created in the current directory.&quot; } else { Write-Host &quot;Skipping .env creation.&quot; } if($VirtualEnvName){ py -m venv $VirtualEnvName Write-Host &quot;Created virtual environment at&quot; (Join-Path -Path $ThisDir -ChildPath $VirtualEnvName) .\$VirtualEnvName\Scripts\Activate #&lt;-- point of failure } } } </code></pre> <p>This is the output when I run the script:</p> <pre class="lang-none prettyprint-override"><code>PS C:\Users\Alan&gt; Make-Python-Environment cmdlet Make-Python-Environment at command pipeline position 1 Supply values for the following parameters: VirtualEnvName: fa IncludeDotEnv: 1 This is only a test Directory: C:\Users\Alan Mode LastWriteTime Length Name ---- ------------- ------ ---- -a---- 09/01/2025 13:41 0 .env .env file has been created in the current directory. 
Created virtual environment at C:\Users\Alan\fa .\$VirtualEnvName\Scripts\Activate : The term '.\$VirtualEnvName\Scripts\Activate' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling of the name, or if a path was included, verify that the path is correct and try again. At C:\Users\Alan\Documents\PythonSetup.ps1:54 char:9 + .\$VirtualEnvName\Scripts\Activate + ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + CategoryInfo : ObjectNotFound: (.\$VirtualEnvName\Scripts\Activate:String) [], CommandNotFoundException + FullyQualifiedErrorId : CommandNotFoundException </code></pre> <p>I have also tried wrapping the VirtualEnvName in parentheses <code>.\($virtualEnvName)\Scripts\Activate</code>, but that also fails.</p>
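The failure is in command resolution, not the path itself: PowerShell does not expand variables inside a bare command name like `.\$VirtualEnvName\Scripts\Activate` (the error output shows the literal `$VirtualEnvName` surviving). Building the path as a string first and invoking it with the call operator `&` should work; a sketch for the failing line (untested here):

```powershell
# Build the full path first, then invoke it with the call operator (&).
$activate = Join-Path -Path $ThisDir -ChildPath "$VirtualEnvName\Scripts\Activate.ps1"
& $activate
```

Since `Activate.ps1` sets process-level environment variables (such as `PATH`), the activation still takes effect for the rest of the session even though it is called from inside a function.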
<python><powershell>
2025-01-10 12:18:55
1
713
nos codemos
79,345,665
4,465,920
Pact file not being saved to configured directory
<p>For some reason the Json contract file Pact generates after running the test I created is being saved to the project's root folder instead of using the path I have defined on its configuration. This application is running in WSL, and it saves to the wrong place in the virtual environment and in my local folder, too.</p> <p>I have used their example <a href="https://pact-foundation.github.io/pact-python/examples/tests/test_00_consumer/#examples.tests.test_00_consumer.pact" rel="nofollow noreferrer">from their documentation</a> and tried writing the path in different ways, but nothing seems to work.</p> <p>I'm using <code>pact-python&gt;=2.2.2</code>, and installed it using uv.</p> <p>This is what I added to my conftest.py file:</p> <pre><code>@pytest.fixture(scope=&quot;module&quot;) def pact() -&gt; Generator[Pact, None, None]: consumer = Consumer(&quot;Consumer&quot;) pact_dir = Path(Path(__file__).parent / &quot;contract/pacts&quot;) pact = consumer.has_pact_with( Provider(&quot;Provider&quot;), pact_dir=pact_dir, # publish_to_broker=True, # Mock service configuration # host_name=MOCK_URL.host, # port=MOCK_URL.port, # Broker configuration # broker_base_url=str(broker), # broker_username=broker.user, # broker_password=broker.password, ) pact.start_service() yield pact pact.stop_service() </code></pre> <p>I ran the test in debug mode, checked the <code>pact_dir</code> and it is set as it's supposed to be.</p> <pre><code>PosixPath('/mnt/c/project/tests/contract/pacts') </code></pre> <p>I'm starting to suspect it could be an issue with Pact itself, but can't be sure.</p>
<python><windows-subsystem-for-linux><pact>
2025-01-10 12:09:50
1
927
apires
79,345,483
1,166,789
Install package from private Github repository with dependency on another private Github repository
<p>I am having trouble installing a package from a private Github repository (I am using a personal access token for this), but this package has a dependency on another private Github repository from the same organization (so the PAT token is valid) which is not correctly resolved during the installation. Here is the example:</p> <p>I have a package, let's call it <code>pbd</code>, that has a <code>requirements.txt</code> file:</p> <pre><code>fastapi[standard]==0.115.2 python-dotenv~=1.0.1 tagger @ git+https://${PAT_TOKEN}@github.com/MY_ORGANIZATION/tagger@v1.0.1 </code></pre> <p>As you can see, the <code>tagger</code> package is installed directly from the Github repository of my organization. As usual, this package also has a <code>requirements.txt</code> file with its own dependencies, which looks like this:</p> <pre><code>jsonpickle~=4.0.1 requests~=2.32.3 joblib~=1.4.2 tex @ git+https://${PAT_TOKEN}@github.com/MY_ORGANIZATION/tex@v1.0.2 </code></pre> <p>Like the previous package, the <code>tagger</code> package also needs to install a <code>tex</code> package from my organization, so the PAT token is also valid and works fine.</p> <p>Ok, I have already generated my PAT token, exported it to an <code>ENV</code> variable, put it in the Secrets of all my repositories, etc.</p> <p>If I try to install the <code>tagger</code> package, everything works fine and I don't get any error. The problem comes when I try to install the <code>pbd</code> package. When I do <code>pip install -r requirements.txt</code> with the <code>pbd</code> requirements.txt file, it correctly resolves the <code>tagger</code> package, but it fails when injecting the <code>${PAT_TOKEN}</code> for the internal dependency of the <code>tagger</code> (the <code>tex</code> package).</p> <p>Here is the output of <code>pip install -r requirements.txt</code> for the <code>pbd</code> package:</p> <pre><code>...
Collecting tagger@ git+https://****@github.com/MY_ORGANIZATION/tagger@v1.0.1 (from -r requirements.txt (line 3)) Cloning https://****@github.com/MY_ORGANIZATION/tagger (to revision v1.0.1) to /tmp/pip-install-l1b_8yth/tagger_afff0d9438154e0bbd86cc84bd9a6408 Running command git clone --filter=blob:none --quiet 'https://****@github.com/MY_ORGANIZATION/tagger' /tmp/pip-install-l1b_8yth/tagger_afff0d9438154e0bbd86cc84bd9a6408 Resolved https://****@github.com/MY_ORGANIZATION/tagger to commit 04060492b1907ca817366f20be6a87a32680bf04 Installing build dependencies: started Installing build dependencies: finished with status 'done' ... Collecting tex@ git+https://****@github.com/MY_ORGANIZATION/tex@v1.0.2 (from tagger@ git+https://***@github.com/MY_ORGANIZATION/tagger@v1.0.1-&gt;-r requirements.txt (line 3)) Cloning https://****@github.com/MY_ORGANIZATION/tex (to revision v1.0.2) to /tmp/pip-install-l1b_8yth/tex_66e1729f35a1430fa58d589084f2aacd Running command git clone --filter=blob:none --quiet 'https://****@github.com/MY_ORGANIZATION/tex' /tmp/pip-install-l1b_8yth/tex_66e1729f35a1430fa58d589084f2aacd fatal: could not read Password for 'https://${PAT_TOKEN}@github.com': No such device or address error: subprocess-exited-with-error Γ— git clone --filter=blob:none --quiet 'https://****@github.com/MY_ORGANIZATION/tex' /tmp/pip-install-l1b_8yth/tex_66e1729f35a1430fa58d589084f2aacd did not run successfully. β”‚ exit code: 128 ╰─&gt; See above for output. note: This error originates from a subprocess, and is likely not a problem with pip. error: subprocess-exited-with-error </code></pre> <p>Any idea on this? It seems like <code>pip</code> doesn't replace <code>ENV</code> variables for dependencies of dependencies in a &quot;recursive&quot; sense.</p> <p>Thanks.</p>
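The diagnosis at the end is right: pip substitutes environment variables only in the requirements file it reads directly; the literal `${PAT_TOKEN}` baked into `tagger`'s own metadata is never expanded. A common workaround is to keep the token out of the URLs entirely and let git inject it for every `github.com` clone, transitive ones included, via an `insteadOf` rewrite (shown with a placeholder token):

```shell
# Placeholder for illustration only; in real use, substitute your actual PAT.
PAT_TOKEN="x-access-token-placeholder"

# Rewrite every https://github.com/ clone URL to an authenticated one.
# This covers transitive VCS dependencies too, because the rewrite happens
# inside git, not inside pip.
git config --global url."https://${PAT_TOKEN}@github.com/".insteadOf "https://github.com/"

# Requirements files can then drop the token entirely, e.g.:
#   tex @ git+https://github.com/MY_ORGANIZATION/tex@v1.0.2
git config --global --get url."https://${PAT_TOKEN}@github.com/".insteadOf
```

With that in place, both `tagger`'s and `tex`'s requirement lines can use plain `git+https://github.com/...` URLs, and the same trick works in CI by running the `git config` line before `pip install`.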
<python><github><pip><access-token>
2025-01-10 11:03:51
0
351
Javier
79,345,392
1,473,517
Running functions in parallel and seeing their progress
<p>I am using joblib to run four processes on four cores in parallel. I would like to see the progress of the four processes separately on different lines. However, what I see is the progress being written on top of each other to the same line until the first process finishes.</p> <pre><code>from math import factorial from decimal import Decimal, getcontext from joblib import Parallel, delayed from tqdm import trange import time def calc(n_digits): # number of iterations n = int(n_digits+1/14.181647462725477) n = n if n &gt;= 1 else 1 # set the number of digits for our numbers getcontext().prec = n_digits+1 t = Decimal(0) pi = Decimal(0) deno = Decimal(0) for k in trange(n): t = ((-1)**k)*(factorial(6*k))*(13591409+545140134*k) deno = factorial(3*k)*(factorial(k)**3)*(640320**(3*k)) pi += Decimal(t)/Decimal(deno) pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5)) pi = 1/pi # no need to round return pi def parallel_with_joblib(): # Define the number of cores to use n_cores = 4 # Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi) tasks = [1200, 1700, 900, 1400] # Run tasks in parallel results = Parallel(n_jobs=n_cores)(delayed(calc)(n) for n in tasks) if __name__ == &quot;__main__&quot;: parallel_with_joblib() </code></pre> <p>I would also like the four lines to be labelled &quot;Job 1 of 4&quot;, &quot;Job 2 of 4&quot; etc.</p> <hr /> <p>Following the method of @Swifty and changing the number of cores to 3 and the number of tasks to 7 and changing leave=False to leave=True I have this code:</p> <pre><code>from math import factorial from decimal import Decimal, getcontext from joblib import Parallel, delayed from tqdm import trange import time def calc(n_digits, pos, total): # number of iterations n = int(n_digits + 1 / 14.181647462725477) n = n if n &gt;= 1 else 1 # set the number of digits for our numbers getcontext().prec = n_digits + 1 t = Decimal(0) pi = Decimal(0) deno = Decimal(0) for k in trange(n, position=pos, desc=f&quot;Job {pos + 
1} of {total}&quot;, leave=True): t = ((-1) ** k) * (factorial(6 * k)) * (13591409 + 545140134 * k) deno = factorial(3 * k) * (factorial(k) ** 3) * (640320 ** (3 * k)) pi += Decimal(t) / Decimal(deno) pi = pi * Decimal(12) / Decimal(640320 ** Decimal(1.5)) pi = 1 / pi # no need to round return pi def parallel_with_joblib(): # Define the number of cores to use n_cores = 3 # Define the tasks (e.g., compute first 100, 200, 300, 400 digits of pi) tasks = [1200, 1700, 900, 1400, 800, 600, 500] # Run tasks in parallel results = Parallel(n_jobs=n_cores)(delayed(calc)(n, pos, len(tasks)) for (pos, n) in enumerate(tasks)) if __name__ == &quot;__main__&quot;: parallel_with_joblib() </code></pre> <p>I have changed it to leave=True as I don't want the blank lines that appear otherwise.</p> <p>This, however, gives me:</p> <p><a href="https://i.sstatic.net/2C8DkuM6.png" rel="noreferrer"><img src="https://i.sstatic.net/2C8DkuM6.png" alt="enter image description here" /></a></p> <p>and then at the end it creates even more mess:</p> <p><a href="https://i.sstatic.net/xVJAggfi.png" rel="noreferrer"><img src="https://i.sstatic.net/xVJAggfi.png" alt="enter image description here" /></a></p> <p>How can this be fixed?</p>
<python><joblib><tqdm>
2025-01-10 10:34:59
3
21,513
Simd
79,345,354
2,915,050
FileNotFoundError within Python package on a file in same directory as calling function
<p>I have created a Python package, and one of the functions I have written opens a <code>.txt</code> file in the same directory as that function. Locally, this works fine, but when the package is built and executed a <code>FileNotFoundError</code> is produced.</p> <p>My package directory looks like this:</p> <pre><code>- my_app |_ __init__.py |_ __main__.py |_ file.txt </code></pre> <p>In <code>__main__.py</code>, I have a function that looks like this to open <code>file.txt</code>:</p> <pre><code>from pathlib import Path def open_file(): filepath = Path(__file__).parent / &quot;file.txt&quot; with open(filepath) as f: return f.read() </code></pre> <p>When I run it locally as <code>python -m my_app</code>, this works fine. However, when I build it into a package and then execute it I get the following error:</p> <p><code>FileNotFoundError: [Errno 2] No such file or directory: '/opt/hostedtoolcache/Python/3.11.11/x64/lib/python3.11/site-packages/my_app/file.txt'</code></p>
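The `Path(__file__)` pattern above can also be expressed with `importlib.resources`, which resolves files relative to the installed package and keeps working even for zipped installs. The sketch below is self-contained, so it builds a throwaway `demo_app` package at runtime (that package name and its contents are invented for illustration); in a real project the key assumption is that `file.txt` is actually shipped with the package, e.g. declared as package data in the build configuration. If the data file is never included in the wheel, no lookup strategy will find it, and the `site-packages` path in the error above suggests that may be what happened here.

```python
import importlib.resources
import pathlib
import sys
import tempfile

# Build a throwaway package on disk so this sketch is self-contained;
# in a real project "demo_app" would be the installed package.
tmp = tempfile.mkdtemp()
pkg = pathlib.Path(tmp) / "demo_app"
pkg.mkdir()
(pkg / "__init__.py").write_text("")
(pkg / "file.txt").write_text("hello")
sys.path.insert(0, tmp)

# importlib.resources resolves the file relative to the *package*,
# not the current working directory, and also works for zip installs.
text = importlib.resources.files("demo_app").joinpath("file.txt").read_text()
print(text)
```

In the real package this would replace the `open_file` body with `importlib.resources.files("my_app").joinpath("file.txt").read_text()`.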
<python><python-packaging>
2025-01-10 10:22:05
0
1,583
RoyalSwish
79,345,299
2,957,687
Best default location for shared object files
<p>I have compiled C code to be called by a Python script. Of course I can include it with <code>cdll.LoadLibrary(&quot;./whatever.so&quot;)</code>, but I would prefer it to be accessible to all Python scripts in different folders. The idea is that I use <strong>default paths</strong> for shared objects and do not change environment variables or system files to do that.</p> <p>According to one of the answers on <a href="https://stackoverflow.com/questions/1099981/why-cant-python-find-shared-objects-that-are-in-directories-in-sys-path">Why can&#39;t Python find shared objects that are in directories in sys.path?</a>, <code>/usr/local/lib</code> should work. Namely, <code>/etc/ld.so.conf.d/libc.conf</code> includes that folder. So I used <code>sudo cp -a whatever.so /usr/local/lib</code> and <code>sudo ldconfig</code>. However, <code>cdll.LoadLibrary(&quot;whatever.so&quot;)</code> does not find the file.</p> <p>Following other suggestions, I have run <code>python -m site</code>, and <code>/usr/local/lib</code> is unfortunately not on the list. Probably the third element, <code>/usr/lib/python3.9</code>, is the best choice, but how can I automatically select it on the <code>cp</code> command?</p> <p>To summarise, is there a good default place to put shared objects (<code>.so</code>) without having to change environment variables and/or system files, and how can I choose it automatically? [I want to write <code>Makefile</code> code that puts the compiled shared object onto such a path.]</p>
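As a quick sanity check (an aside, not from the question): `ctypes.util.find_library` consults roughly the same search logic as the dynamic linker, so it can tell you whether a directory such as `/usr/local/lib` is actually being searched. One possible explanation for the failure above is that `ldconfig` caches versioned sonames (e.g. `libwhatever.so.1` read from the library's `SONAME` field); an unversioned `whatever.so` without a `lib` prefix may simply not be indexed. The library name below is a stand-in, since the real `.so` from the question is not available here.

```python
import ctypes
import ctypes.util

# find_library takes the bare name without the "lib" prefix or ".so" suffix;
# it returns the soname the loader would resolve, or None if nothing matches.
resolved = ctypes.util.find_library("m")  # "m" (libm) stands in for "whatever"
print(resolved)

# Loading by an explicit absolute path always works, regardless of caches:
# lib = ctypes.CDLL("/usr/local/lib/libwhatever.so")  # hypothetical path
```

If `find_library` returns `None` for your library even after `ldconfig`, renaming it to the conventional `lib<name>.so.<version>` form is worth trying before moving it elsewhere.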
<python><shared-libraries><dynamic-linking>
2025-01-10 10:01:22
1
921
Pygmalion
79,345,142
8,638,267
Arabic (Hijri) date picker for tkinter in Python
<p>I could only find Gregorian date pickers. Is there a way to show Hijri/Arabic dates on a date picker? I'm not looking for a converter from Gregorian to Hijri.</p>
<python><tkinter><datepicker>
2025-01-10 09:08:38
1
1,173
John Sall
79,345,130
2,269,457
Create geoparquet file from large data set in chunks in python
<p>I am trying to achieve chunkwise writing of geoparquet files. While writing a parquet file chunkwise via <code>pyarrow.RecordBatch</code> is trivial for <code>pandas</code> chunks and also well documented, doing the same when creating a geoparquet file via <code>geopandas</code> (or any other approach) seems to be lacking documentation.</p> <p>So far, I've done the following:</p> <pre class="lang-py prettyprint-override"><code>import sys import pathlib import pyarrow as pa import pyarrow.parquet as pq import pandas as pd import numpy as np import geopandas as gpd def create_sample_data(filename: str): df = pd.DataFrame(np.random.randint(-90, 90, size=(1000000, 3)), columns=list('xyz')) df.to_csv(filename, sep=' ', header=False, index=False) def convert_geoparquet(input_file, output_file): df = pd.read_csv(input_file, header=None, names=['x', 'y', 'z'], sep=' ', dtype='int32') gdf = gpd.GeoDataFrame(df, geometry=gpd.points_from_xy(df.x, df.y), crs=4326) gdf.to_parquet(output_file, compression='brotli') def convert_parquet(input_file, output_file): new_schema = pa.schema([('x', pa.int32()), ('y', pa.int32()), ('z', pa.int32())]) with pq.ParquetWriter(output_file, schema=new_schema, compression='brotli') as writer: with pd.read_csv(input_file, header=None, names=['x', 'y', 'z'], sep=' ', dtype='int32', chunksize=1000) as reader: for i, df in enumerate(reader): batch = pa.RecordBatch.from_pandas(df, schema=new_schema) writer.write_batch(batch) def convert(input_file, conv_type='parquet'): output_file = f&quot;&quot;&quot;{pathlib.Path(input_file).with_suffix('')}.{conv_type}&quot;&quot;&quot; if conv_type == 'parquet': convert_parquet(input_file, output_file) elif conv_type == 'geoparquet': convert_geoparquet(input_file, output_file) else: sys.exit(1) if __name__ == '__main__': input_file = './testdata.csv' create_sample_data(input_file) convert(input_file, 'parquet') convert(input_file, 'geoparquet') </code></pre> <p>I'm running python 3.11, pyarrow 18.1.0, 
geopandas 1.0.1 and pandas 2.2.3.</p> <p>Both geoparquet and parquet files get created easily; however, geoparquet creation is obviously much slower since it's reading in the whole file instead of iterating chunkwise over it (besides other implications that this brings). There seem to be multiple 'standards' of writing geoparquets; geopandas seems to be the most intuitive for me.</p> <p>TLDR: <strong>Is there a way to implement chunkwise geoparquet creation over large tabular data sets?</strong></p> <p>Cheers and many thanks in advance</p>
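One possible workaround (my suggestion, not an established geopandas API): since GeoParquet stores geometry as WKB, the geometry column can be encoded by hand per chunk and written through the existing `ParquetWriter` loop as a `pa.binary()` field. Note the file only becomes valid GeoParquet once the `geo` key is also attached to the schema's file metadata, which this sketch omits. Encoding a 2D point needs nothing beyond the standard library:

```python
import struct

def point_wkb(x: float, y: float) -> bytes:
    # WKB little-endian point: byte-order flag (1 = little-endian),
    # geometry type (1 = Point), then x and y as IEEE 754 doubles.
    return struct.pack("<BIdd", 1, 1, float(x), float(y))

wkb = point_wkb(10, 20)
print(len(wkb))  # 1 + 4 + 8 + 8 = 21 bytes
```

Inside the chunk loop, `df.apply(lambda r: point_wkb(r.x, r.y), axis=1)` would yield the geometry column for that chunk; whether downstream tools accept the result depends on also writing the GeoParquet `geo` metadata correctly, which should be checked against the spec.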
<python><parquet><geopandas><pyarrow>
2025-01-10 09:02:53
1
403
Sacha Viquerat
79,344,960
687,331
Extracting vendor info from Probe Request using Scapy
<p>I am trying to extract the vendor information (Apple, Samsung, etc.) from Probe Requests coming from mobile devices. So far, no luck. I am not sure what corrections need to be made to get this info.</p> <p>Adding my code:</p> <pre><code>import codecs from scapy.all import * from netaddr import * def handler(p): if not (p.haslayer(Dot11ProbeResp) or p.haslayer(Dot11ProbeReq) or p.haslayer(Dot11Beacon)): return rssi = p[RadioTap].dBm_AntSignal dst_mac = p[Dot11].addr1 src_mac = p[Dot11].addr2 ap_mac = p[Dot11].addr2 global macf maco = EUI(src_mac) try: macf = maco.oui.registration().org except NotRegisteredError: macf = &quot;Not available&quot; info = f&quot;rssi={rssi:2}dBm, dst={dst_mac}, src={src_mac}, ap={ap_mac}, manf= {macf}&quot; if p.haslayer(Dot11ProbeReq): stats = p[Dot11ProbeReq].network_stats() ssid = str(stats['ssid']) channel = None if &quot;channel&quot; in stats: channel = stats['channel'] print(f&quot;[ProbReq ] {info}&quot;) print(f&quot;ssid = {ssid}, channel ={channel}&quot;) #rate= {rates} sniff(iface=&quot;wlan1&quot;, prn=handler, store=0) </code></pre>
<python><scapy><probe>
2025-01-10 07:55:17
1
1,985
Anand
79,344,934
1,849,773
Mails sent with attachments, using the library exchangelib in Python, are NOT visible in the sent folder
<p>I have email addresses stored in AWS Workmail and I want to use them with an AWS Lambda in order to send mails to different destination email addresses. I managed to send mails; however, only the mails WITHOUT attachments are visible in the sent folder (named Sent Items in AWS Workmail). This is a huge problem for us because we need to list all sent mails, with and without attachments.</p> <p>Here is my code in Python (I installed the libraries <code>exchangelib</code>, <code>pytz</code> and <code>lxml</code>); can you tell me what I am missing, or is there another way to do so?</p> <pre><code>import time from exchangelib.items import ( SEND_TO_ALL_AND_SAVE_COPY, ) from exchangelib import Account, Credentials, DELEGATE, Configuration, Message, FileAttachment, UTC, UTC_NOW # Specify my AWS Workmail email address and its password email_address = &quot;xxxx@my-domain.com&quot; password = &quot;xxxxxxx&quot; # Set credentials credentials = Credentials(username=email_address, password=password) config = Configuration( credentials=credentials, service_endpoint=f'https://ews.mail.eu-west-1.awsapps.com/EWS/Exchange.asmx', auth_type='basic' ) account = Account(primary_smtp_address=email_address, autodiscover=False, config=config, access_type=DELEGATE) # Prepare and send the mail to send with an attachment message = Message( account=account, subject=&quot;test attachment in sent&quot;, folder=account.sent, body=&quot;some test&quot;, to_recipients=[ &quot;xxxxx@gmail.com&quot; # A destination email address ], ) with open(&quot;my-file.txt&quot;, &quot;rb&quot;) as file: content = file.read() message.attach( FileAttachment(name=&quot;my-file.txt&quot;, content=content) ) message.send_and_save() print(message) time.sleep(30) print(&quot;######################&quot;) # List the mails sent sent_emails = account.sent.all() for sent_mail in sent_emails: print(&quot;Subject:&quot;, sent_mail.subject) print(&quot;Sender:&quot;, sent_mail.sender) print(&quot;Recipients:&quot;,
sent_mail.to_recipients) print(&quot;Attachments:&quot;, sent_mail.attachments) print(&quot;----------&quot;) </code></pre>
<python><amazon-web-services><exchangelib><amazon-workmail>
2025-01-10 07:45:42
0
1,042
Yassir S
79,344,879
2,944,736
How do I run torch.distributed between Docker containers on separate instances using the bridge network?
<p>I am trying to run a simple torch.distributed script between two Docker containers running on separate instances. Below is the code I am using:</p> <pre><code>import os import torch import torch.distributed as dist def init_distributed(): os.environ['MASTER_ADDR'] = &quot;10.12.27.241&quot; os.environ['MASTER_PORT'] = '29500' node_rank = int(os.environ.get('RANK', 0)) # 1 for worker world_size = 2 dist.init_process_group( backend='gloo', rank=node_rank, world_size=world_size ) print(f&quot;Initialized process group: rank {node_rank} of {world_size}&quot;) return node_rank, world_size def send_receive_message(rank, world_size): if rank == 0: # Node 0 sends a message message = torch.tensor([42, 43, 44], dtype=torch.int64) dist.send(message, dst=1) print(f&quot;Rank {rank} sent message: {message}&quot;) else: # Node 1 receives the message message = torch.zeros(3, dtype=torch.int64) dist.recv(message, src=0) print(f&quot;Rank {rank} received message: {message}&quot;) if __name__ == &quot;__main__&quot;: rank, world_size = init_distributed() send_receive_message(rank, world_size) # Barrier to ensure all processes have completed dist.barrier() # Clean up dist.destroy_process_group() </code></pre> <p>I am able to run this script successfully when using the --network=host option for docker run. However, due to organizational restrictions, I am required to use the --network=bridge option. 
When I use --network=bridge, I encounter the following error:</p> <pre><code>[E110 05:59:45.095859745 ProcessGroupGloo.cpp:143] Gloo connectFullMesh failed with [../third_party/gloo/gloo/transport/tcp/pair.cc:144] no error Traceback (most recent call last): File &quot;/data/exp/com.py&quot;, line 36, in &lt;module&gt; rank, world_size = init_distributed() File &quot;/data/exp/com.py&quot;, line 12, in init_distributed dist.init_process_group( File &quot;/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py&quot;, line 83, in wrapper return func(*args, **kwargs) File &quot;/usr/local/lib/python3.10/dist-packages/torch/distributed/c10d_logger.py&quot;, line 97, in wrapper func_return = func(*args, **kwargs) File &quot;/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py&quot;, line 1527, in init_process_group default_pg, _ = _new_process_group_helper( File &quot;/usr/local/lib/python3.10/dist-packages/torch/distributed/distributed_c10d.py&quot;, line 1744, in _new_process_group_helper backend_class = ProcessGroupGloo( RuntimeError: Gloo connectFullMesh failed with [../third_party/gloo/gloo/transport/tcp/pair.cc:144] no error </code></pre> <p>How can I configure torch.distributed to work with the bridge network when running containers on separate instances? What additional steps or configurations are required to make Gloo backend communication succeed in this setup?</p> <p>Any guidance or pointers would be greatly appreciated!</p>
<python><docker><pytorch><docker-network><gloo>
2025-01-10 07:16:35
1
579
Jay Dharmendra Solanki
79,344,840
1,573,589
mariadb MEDIUMTEXT COMPRESSED updating takes forever
<p>I am using mariadb to store HTML files, and the column is defined as MEDIUMTEXT COMPRESSED for HTML, with some other columns storing INT and VARCHAR keys. Yet I encountered a rather bizarre behaviour:</p> <ol> <li>When I create a record with HTML, it flies, however</li> <li>When I create a record first, and add HTML later, it takes several minutes (!!!) to add one HTML.</li> </ol> <p>Needless to say, HTML is not indexed (I don't think you can for a COMPRESSED column). I have about 100,000 HTML files in the table. Each HTML file is about 150K.</p> <p><strong>Added:</strong></p> <p>Here is the code:</p> <pre><code> loadHTML(self, conn, driver, report=False): cleaner = Cleaner() cleaner.javascript = True # This is True because we want to activate the javascript filter cleaner.style = True # This is True because we want to activate the styles &amp; stylesheet filter preport(f&quot;in selLoadHTML for {type(self)}&quot;) cursor = conn.cursor() try: self.openMyURL(driver, report=report) preport(f&quot;opened {self.url}&quot;) the_html = driver.page_source preport(f&quot;loaded {len(the_html)} bytes&quot;) self.html = cleaner.clean_html(the_html) preport(f&quot;cleaned {len(self.html)} bytes&quot;) cursor.execute(&quot;&quot;&quot; update Page set html=%s, htmlLoaded=NOW(), htmlError=NULL&quot;&quot;&quot;, (self.html,)) preport(f&quot;updated the database&quot;) except Exception as e: print(f&quot;Error opening {self.url}, exception {e}&quot;) cursor.execute(&quot;&quot;&quot; update Page set htmlError=NOW()&quot;&quot;&quot;) conn.commit() cursor.close() return </code></pre> <p><code>preport</code> is a wrapper around <code>print</code> that reports the time since the previous report.</p> <p>Here is an output:</p> <pre><code>After 0:00:08.334192 , opened &lt;url here&gt; After 0:00:00.049598 , loaded 918988 bytes After 0:00:00.032835 , cleaned 43692 bytes After 0:03:06.277489 , updated the database </code></pre> <p>As you can see, it took over 3 minutes to save 43K of
HTML, which prior to that was NULL.</p> <p>Here is a schema (simplified):</p> <pre><code>CREATE TABLE Page ( url VARCHAR(100) UNIQUE KEY, html MEDIUMTEXT COMPRESSED, htmlError DATETIME, refId int unsigned, foreign key (refId) references Reference (refId) ); </code></pre> <p>For some other cases, I was just populating the database:</p> <pre><code>cur.execute(&quot;&quot;&quot;INSERT INTO Reference (refId, refText) VALUES (%s, %s) ON DUPLICATE KEY UPDATE refText = %s&quot;&quot;&quot;, (self.ref_id, self.ref_text, self.ref_text)) cur.execute(&quot;&quot;&quot;INSERT IGNORE INTO Page (url, html, refId) VALUES (%s, %s, %s) &quot;&quot;&quot;, (self.url, self.html, self.ref_id)) </code></pre> <p>I don't have performance data, but it flew - no more than several seconds per record.</p>
<python><mariadb><mariadb-10.5>
2025-01-10 06:58:49
1
337
Alex J.
79,344,258
1,227,012
PostgreSQL connection problem from a Python script that uses psycopg2. What am I missing?
<p>I have a pretty vanilla PostgreSQL installation. I'm trying to use a feature of <code>jsonschema2ddl</code> to create table(s) from a JSON schema file.</p> <p>I can load the schema file fine, but my connection to the local PostgreSQL database always fails with a 403 &quot;Forbidden,&quot; which I assume is an authentication/authorization issue in my configuration.</p> <p>I tried connecting with both <code>psycopg2</code> and <code>sqlalchemy</code>, but both behave the same. I can connect fine via <code>psql</code> as both my user and the <code>postgres</code> user.</p> <p>I've tried several URI variations to no avail.</p> <p>I'm clearly overlooking something, so any pointers or suggestions are appreciated.</p> <p>Here's my Python script:</p> <pre class="lang-py prettyprint-override"><code>import json with open('/Users/n123/sauce_device.json') as f: schema = json.load(f) pg_uri = 'postgresql://postgres@127.0.0.1:5432/postgres' # With psycopg2 import psycopg2 conn = psycopg2.connect(pg_uri) #conn = psycopg2.connect(dbname='postgres', user='n123', host='localhost', password='power') # OR with sqlalchemy # from sqlalchemy import create_engine # conn = create_engine(pg_uri).raw_connection() from jsonschema2ddl import JSONSchemaToDatabase translator = JSONSchemaToDatabase( schema, root_table_name='sauce_devices', ) translator.create_tables(conn) translator.create_links(conn) translator.analyze(conn) conn.commit() </code></pre> <p>Here's the error I get when the script is run:</p> <pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last): File &quot;/Users/n123/schema-to-ddl.py&quot;, line 18, in &lt;module&gt; translator = JSONSchemaToDatabase( ^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/site-packages/jsonschema2ddl/translators.py&quot;, line 59, in __init__ self._validate_schema() File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/site-packages/jsonschema2ddl/translators.py&quot;, line 73,
in _validate_schema metaschema_uri = urlopen(metaschema_uri).url ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/urllib/request.py&quot;, line 215, in urlopen return opener.open(url, data, timeout) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/urllib/request.py&quot;, line 521, in open response = meth(req, response) ^^^^^^^^^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/urllib/request.py&quot;, line 630, in http_response response = self.parent.error( ^^^^^^^^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/urllib/request.py&quot;, line 559, in error return self._call_chain(*args) ^^^^^^^^^^^^^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/urllib/request.py&quot;, line 492, in _call_chain result = func(*args) ^^^^^^^^^^^ File &quot;/Users/n123/.pyenv/versions/3.12.1/lib/python3.12/urllib/request.py&quot;, line 639, in http_error_default raise HTTPError(req.full_url, code, msg, hdrs, fp) urllib.error.HTTPError: HTTP Error 403: Forbidden </code></pre>
<python><postgresql>
2025-01-09 23:09:14
0
1,168
David Nedrow
79,344,159
1,926,221
Disable PySpark to print info when running
<p>I have started to use PySpark. The version of PySpark is <code>3.5.4</code> and it's installed via <code>pip</code>.</p> <p>This is my code:</p> <pre><code>from pyspark.sql import SparkSession pyspark = SparkSession.builder.master(&quot;local[8]&quot;).appName(&quot;test&quot;).getOrCreate() df = pyspark.read.csv(&quot;test.csv&quot;, header=True) df.show() </code></pre> <p>Every time I run the program using:</p> <pre><code>python test_01.py </code></pre> <p>it prints all this info about PySpark (in yellow):</p> <p><a href="https://i.sstatic.net/xFavPNKi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xFavPNKi.png" alt="enter image description here" /></a></p> <p>How can I disable it so that this output is not printed?</p>
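Two usual remedies (standard Spark configuration, not specific to this setup): call `spark.sparkContext.setLogLevel("ERROR")` right after creating the session to quiet logging from that point on, or, to also silence the startup banner, put a `log4j2.properties` on Spark's conf path (Spark 3.5 uses log4j2; the fragment below mirrors the template Spark ships, with the root level raised):

```
# conf/log4j2.properties -- raise the root level so INFO/WARN chatter is dropped
rootLogger.level = error
rootLogger.appenderRef.stdout.ref = console

appender.console.type = Console
appender.console.name = console
appender.console.target = SYSTEM_ERR
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yy/MM/dd HH:mm:ss} %p %c{1}: %m%n
```

For a pip install there is no `conf/` directory by default; pointing `SPARK_CONF_DIR` at a folder containing this file is one way to make Spark pick it up.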
<python><apache-spark><pyspark><pipenv>
2025-01-09 22:07:59
2
3,726
IGRACH
79,344,148
1,504,016
FastAPI dynamic advanced dependencies /
<p>I started from the example shown on this page : <a href="https://fastapi.tiangolo.com/advanced/advanced-dependencies/#use-the-instance-as-a-dependency" rel="nofollow noreferrer">https://fastapi.tiangolo.com/advanced/advanced-dependencies/#use-the-instance-as-a-dependency</a></p> <pre><code>from typing import Annotated from fastapi import Depends, FastAPI app = FastAPI() class FixedContentQueryChecker: def __init__(self, fixed_content: str): self.fixed_content = fixed_content def __call__(self, q: str = &quot;&quot;): if q: return self.fixed_content in q return False checker = FixedContentQueryChecker(&quot;bar&quot;) @app.get(&quot;/query-checker/&quot;) async def read_query_check(fixed_content_included: Annotated[bool, Depends(checker)]): return {&quot;fixed_content_in_query&quot;: fixed_content_included} </code></pre> <p>Now, I would like to be able to use the same kind of dependency injection but with a dynamically defined value, using a decorator.</p> <pre><code>def config_checker(value): checker = FixedContentQueryChecker(value) def f(func): @functools.wraps(func) async def wrap_func(*args, fixed_content_included: Annotated[bool, Depends(checker)], **kwargs): return await func(*args, fixed_content_included, **kwargs) return wrap_func return f @app.get(&quot;/query-bar-checker/&quot;) @config_checker(value=&quot;bar&quot;) async def read_query_check_bar(fixed_content_included: Annotated[bool, Depends(??)]): return {&quot;fixed_content_in_query&quot;: fixed_content_included} @app.get(&quot;/query-foo-checker/&quot;) @config_checker(value=&quot;foo&quot;) async def read_query_check_foo(fixed_content_included: Annotated[bool, Depends(??)]): return {&quot;fixed_content_in_query&quot;: fixed_content_included} </code></pre> <p>The problem is I need to define the <code>fixed_content_included</code> as a dependency in the routes so that it won't be treated as a query parameter. 
But anything I provide to the Depends() function in the route definition cannot be overridden by the decorator, so the parametrized function is never used.</p> <p>How can I proceed?</p>
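A simpler route than decorating is a dependency factory: a plain function that builds a fresh checker closure per value, which each route then consumes directly as `Depends(make_checker("bar"))` in its signature, so no decorator and no override are needed. The sketch below shows the closure mechanics without FastAPI so it runs stand-alone; wiring it into a route is a one-liner noted in the comment.

```python
# A dependency factory: each call builds a closure over the fixed value.
# In a FastAPI route the returned function would go straight into:
#   fixed_content_included: Annotated[bool, Depends(make_checker("bar"))]
def make_checker(fixed_content: str):
    def checker(q: str = "") -> bool:
        return fixed_content in q if q else False
    return checker

bar_checker = make_checker("bar")
foo_checker = make_checker("foo")
print(bar_checker("a bar walks in"))  # True
print(foo_checker("a bar walks in"))  # False
```

FastAPI inspects the returned closure's signature, so `q` is still picked up as a query parameter, just as with the `__call__`-based class in the quoted example.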
<python><dependencies><fastapi><decorator>
2025-01-09 22:03:30
1
2,649
ibi0tux
79,344,035
29,131,715
How to add requirements.txt to uv environment
<p>I am working with <a href="https://docs.astral.sh/uv/" rel="noreferrer">uv</a> for the first time and have created a venv to manage my dependencies. Now, I'd like to install some dependencies from a <em>requirements.txt</em> file.</p> <p>How can this be achieved with uv?</p> <p>I already tried manually installing each requirement using <code>uv pip install ...</code>. However, this gets tedious for a large list of requirements.</p>
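For the venv workflow, uv's pip-compatible interface accepts the same `-r` flag as pip, and for a uv-managed project `uv add -r` imports the requirements into `pyproject.toml`. A sketch (guarded so it is a no-op when uv or the file is absent; in a real shell you would just run the inner command):

```shell
# Install into the active virtual environment, pip-style
if command -v uv >/dev/null 2>&1 && [ -f requirements.txt ]; then
  uv pip install -r requirements.txt
  # or, for a uv-managed project, record them as project dependencies:
  # uv add -r requirements.txt
fi
```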
<python><python-venv><uv>
2025-01-09 21:04:59
1
413
BitsAndBytes
79,343,989
1,718,989
Python Spyder IDE - How do view the docstring for a method in the REPL?
<p>Currently I am going through a course and am learning about Spyder and the interface. I downloaded Anaconda and am using Spyder 6.0.1 with Python 3.12.</p> <p>The textbook was going through a section on how to &quot;figure things out in Python&quot;. One suggestion was to go to the REPL in Spyder and try typing <strong>?</strong> after a method to view its docstring.</p> <p>So for example:</p> <p><code>'test'.&lt;tab on keyboard to pull up available methods&gt;capitialize</code><strong>?</strong></p> <p>The key thing was to add a question mark at the end of the piece of code; after hitting enter, you should see output like the following:</p> <p>Signature:.....</p> <p>Docstring:.....</p> <p>All I am getting is &quot;object not found&quot;.</p> <p>I know in Python there is also <code>help(module)</code>, but if I try to do a <code>help(capitalize)</code> I also get an error.</p> <p>I am not sure what's the best way to get more documentation and would appreciate any advice.</p>
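A few notes that may explain the errors (general Python behaviour, not Spyder-specific): the `?` suffix is an IPython/Spyder-console feature and only works on a name that actually exists, so the misspelling in the example (`capitialize` for `capitalize`) would itself produce "object not found"; and `help(capitalize)` fails because `capitalize` is not a top-level name, so it must be qualified by the class or an instance. Plain Python exposes the same information:

```python
# __doc__ holds the raw docstring; help() renders the formatted page.
print(str.capitalize.__doc__)
help(str.capitalize)

# Works via an instance too: the bound method carries the same docstring.
s = "test"
print(s.capitalize.__doc__ == str.capitalize.__doc__)  # True
```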
<python><documentation><docstring>
2025-01-09 20:44:17
0
311
chilly8063
79,343,939
15,547,292
Get char limit for subprocess args
<p>In Python, is there a way to get the maximum number of characters that can be passed to a subprocess on the host in question?</p> <p>I'm asking for implementing xargs-like functionality, i.e. running a command on a (potentially huge) list of file paths collected previously by python functions, so I'd need to know how many args/chars I can pass at a time without exceeding limits, and divide into multiple calls if necessary.</p>
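On POSIX systems the limit is exposed via `os.sysconf`. A sketch of an xargs-style budget follows; the subtraction mirrors what xargs does, since the environment shares the same kernel buffer as the argument list. The 2048-byte headroom and the Windows fallback are guesses of mine, not documented constants.

```python
import os

# Kernel limit on the combined size of argv + environment for one exec call.
# Windows has no os.sysconf, hence the guard and a conservative fallback
# (32 KiB, roughly the CreateProcess command-line limit).
if hasattr(os, "sysconf") and "SC_ARG_MAX" in os.sysconf_names:
    arg_max = os.sysconf("SC_ARG_MAX")
else:
    arg_max = 32 * 1024

# The environment counts against the same budget; +2 per entry covers
# the '=' separator and the trailing NUL byte.
env_size = sum(len(k) + len(v) + 2 for k, v in os.environ.items())
headroom = 2048  # guessed safety margin for per-argument pointer overhead
usable = arg_max - env_size - headroom
print(usable)
```

Batching would then pack paths greedily until the next path (plus its NUL and pointer) would exceed `usable`, and start a new subprocess call.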
<python><subprocess>
2025-01-09 20:19:20
1
2,520
mara004
79,343,888
13,100,489
Why does my SVG render correctly in web viewers but not in pydiffvg even when stroke attributes are missing?
<p>I’m trying to render an SVG using pydiffvg, but it fails or renders incorrectly (the structure is fine but the color is very off, missing elements) when certain stroke attributes like stroke-width are missing in the SVG. The same SVG renders perfectly in web browsers or tools like Inkscape, which seem to &quot;guess&quot; or use defaults for these missing attributes.</p> <p>Here’s what I’ve tried:</p> <ol> <li>Assigning default stroke values (black or transparent) in my parser.</li> <li>Ensuring stroke-width is set to 1.0 if undefined.</li> </ol> <p>Despite these attempts, pydiffvg still struggles, while browsers handle it effortlessly. What exactly are browsers doing differently when handling incomplete SVGs, and how can I replicate this behavior to ensure pydiffvg renders the SVG correctly?</p> <p>Here's the SVG I am using:</p> <pre><code> &lt;?xml version=&quot;1.0&quot; encoding=&quot;utf-8&quot;?&gt; &lt;!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools --&gt; &lt;svg width=&quot;800px&quot; height=&quot;800px&quot; viewBox=&quot;0 0 1024 1024&quot; class=&quot;icon&quot; version=&quot;1.1&quot; xmlns=&quot;http://www.w3.org/2000/svg&quot;&gt;&lt;path d=&quot;M301.226643 1002.666249A34.773319 34.773319 0 0 1 266.666658 968.106263V399.999833c0-45.226648 71.253304-152.53327 116.266618-209.706579l4.479998-5.759998V109.439954h248.959896v75.093302l4.479999 5.759998C686.079816 247.466564 757.33312 354.773186 757.33312 399.999833v568.10643A34.773319 34.773319 0 0 1 722.559801 1002.666249z&quot; fill=&quot;#F05071&quot; /&gt;&lt;path d=&quot;M615.039846 130.773279V191.99992l9.173329 11.519995c53.759978 68.053305 111.78662 162.559932 111.786621 196.479918v568.10643a13.439994 13.439994 0 0 1-13.226662 13.226661H301.226643a13.439994 13.439994 0 0 1-13.226661-13.226661V399.999833c0-33.706653 58.026642-127.999947 111.78662-196.479918L408.959932 191.99992V130.773279h206.293247m42.666649-42.666649H366.293283V177.066593S245.333333 330.453196 
245.333333 399.999833v568.10643A55.89331 55.89331 0 0 0 301.226643 1023.999573h421.333158A55.89331 55.89331 0 0 0 778.666444 968.106263V399.999833c0-69.546638-120.95995-222.93324-120.959949-222.93324V88.10663z&quot; fill=&quot;#5C2D51&quot; /&gt;&lt;path d=&quot;M321.706635 21.333324l380.373175 0 0 106.666623-380.373175 0 0-106.666623Z&quot; fill=&quot;#9FDBAD&quot; /&gt;&lt;path d=&quot;M680.959818 42.666649v63.999973H343.039959V42.666649h337.919859m15.146661-42.666649H327.893299a27.519989 27.519989 0 0 0-27.519989 27.519989v94.293294A27.519989 27.519989 0 0 0 327.893299 149.333271h367.999847a27.519989 27.519989 0 0 0 27.519988-27.519988V27.519989A27.519989 27.519989 0 0 0 695.893146 0z&quot; fill=&quot;#5C2D51&quot; /&gt;&lt;path d=&quot;M266.666658 469.333138h490.666462v341.333191H266.666658z&quot; fill=&quot;#9FDBAD&quot; /&gt;&lt;path d=&quot;M735.999796 490.666462v298.666542H287.999982V490.666462h447.999814m42.666648-42.666649H245.333333v383.99984h533.333111V447.999813z&quot; fill=&quot;#5C2D51&quot; /&gt;&lt;/svg&gt; </code></pre> <p>Here's what the SVG is supposed to look like: <a href="https://i.sstatic.net/vTf5fAno.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vTf5fAno.png" alt="enter image description here" /></a></p> <p>And this is what the rendering looks like (ignore the orientation I have changed that): <a href="https://i.sstatic.net/MbJY9DpB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MbJY9DpB.png" alt="enter image description here" /></a></p> <p>Here's my rendering code block:</p> <pre><code>if shape_type == &quot;custom&quot; and svg_path: try: _, _, parsed_shapes, parsed_shape_groups = svg_to_scene(svg_path) # Filter out invalid shapes parsed_shapes = [ shape for shape in parsed_shapes if hasattr(shape, &quot;points&quot;) or hasattr(shape, &quot;center&quot;) ] print(f&quot;Parsed {len(parsed_shapes)} valid shapes from {svg_path}.&quot;) if not parsed_shapes: print(f&quot;Warning: No valid shapes were parsed 
from {svg_path}.&quot;) continue scale_factor = shape_info.get(&quot;size&quot;, min(self.canvas_width, self.canvas_height) * 0.01) # Process each logical group based on count for g in range(count): group_shapes = [] for i, (shape, shape_group) in enumerate(zip(parsed_shapes, parsed_shape_groups)): if hasattr(shape, &quot;points&quot;): shape.points = shape.points.to(self.device) * scale_factor shape.initial_points = shape.points.clone() elif hasattr(shape, &quot;center&quot;): shape.center = shape.center.to(self.device) * scale_factor shape.initial_points = shape.center.unsqueeze(0).clone() # Set default colors if missing shape_group.fill_color = ( shape_group.fill_color.clone().to(self.device) if shape_group.fill_color is not None else torch.tensor([1.0, 1.0, 1.0, 1.0], device=self.device) ) shape_group.stroke_color = ( shape_group.stroke_color.clone().to(self.device) if shape_group.stroke_color is not None else shape_group.fill_color.clone().to(self.device) # Default to fill color ) if isinstance(shape, pydiffvg.Path): new_shape = self.copy_path(shape) elif isinstance(shape, pydiffvg.Polygon): new_shape = pydiffvg.Polygon( points=shape.points.clone(), is_closed=shape.is_closed, ) elif isinstance(shape, pydiffvg.Circle): new_shape = pydiffvg.Circle( radius=shape.radius.clone(), center=shape.center.clone(), ) else: print(f&quot;Warning: Unsupported shape type {type(shape)}.&quot;) continue new_shape.initial_points = shape.initial_points.clone() group_shapes.append(new_shape) self.shapes.append(new_shape) new_shape_group = pydiffvg.ShapeGroup( shape_ids=torch.tensor([len(self.shapes) - 1], dtype=torch.int32).to(self.device), fill_color=shape_group.fill_color, stroke_color=shape_group.stroke_color, ) self.shape_groups.append(new_shape_group) self.shapes_by_logical_group.append(group_shapes) # Initialize position and orientation position = torch.tensor( [ center_x + (torch.rand(1).item() - 0.5) * 2 * max_radius, center_y + (torch.rand(1).item() - 0.5) * 2 * 
max_radius, ], requires_grad=True, device=self.device, ) orientation = torch.tensor( [torch.rand(1).item() * 360], requires_grad=True, device=self.device, ) # Apply transformations to group shapes for shape in group_shapes: if hasattr(shape, &quot;initial_points&quot;): angle = torch.deg2rad(orientation[0]) rotation_matrix = torch.tensor([ [torch.cos(angle), -torch.sin(angle)], [torch.sin(angle), torch.cos(angle)] ]).to(self.device) transformed_points = torch.matmul(shape.initial_points, rotation_matrix.T) + position shape.points = transformed_points.clone() elif isinstance(shape, pydiffvg.Circle): shape.center = position.clone() self.parameters.position.append(position) self.parameters.orientation.append(orientation) logical_group_count += 1 except Exception as e: print(f&quot;Error processing custom SVG {svg_path}: {e}&quot;) continue </code></pre> <p>Any insights into how to preprocess or parse such SVGs would be appreciated!</p>
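One concrete difference is that browsers fall back to the SVG specification's initial values for unspecified paint properties (fill: black, stroke: none, stroke-width: 1, fill-rule: nonzero), while a parser like pydiffvg's only sees what is literally in the file. Writing those defaults out explicitly as a preprocessing pass can be done with the standard library alone; this is a sketch (the defaults come from the SVG spec, but whether this alone fixes pydiffvg's colors is untested, and other differences such as fill-rule handling may also matter):

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

# Initial values per the SVG spec that browsers apply implicitly when the
# attribute (or CSS property) is absent.
DEFAULTS = {"stroke": "none", "stroke-width": "1", "fill": "black"}

def fill_svg_defaults(svg_text: str) -> str:
    root = ET.fromstring(svg_text)
    # Only <path> is handled here; the same loop could cover other shapes.
    for el in root.iter(f"{{{SVG_NS}}}path"):
        for attr, value in DEFAULTS.items():
            if attr not in el.attrib:
                el.set(attr, value)
    return ET.tostring(root, encoding="unicode")

svg = f'<svg xmlns="{SVG_NS}"><path d="M0 0L10 10" fill="#F05071"/></svg>'
print(fill_svg_defaults(svg))
```

Existing attributes (like the `fill="#F05071"` above) are left untouched; only missing ones are filled in before the text is handed to the renderer.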
<python><svg><rendering>
2025-01-09 19:57:59
0
447
Ravish Jha
79,343,882
564,709
python, .env, and shell environment variables
<p>I could use some help understanding how python interacts with .env files and shell environment variables.</p> <p>I have a <code>.env</code> file that has general configurations for accessing our production server. I also have a number of separate customer-specific <code>.env.CUSTOMER</code> files that contain specific environment variables for that customer. My configuration looks something like this:</p> <p><code>.env</code></p> <pre class="lang-bash prettyprint-override"><code>DB_PATH=zzz DB_PASSWORD=zzz DB_USERNAME=zzz </code></pre> <p><code>.env.customer1</code></p> <pre class="lang-bash prettyprint-override"><code>CLIENT_ID=xxx CLIENT_SECRET=yyy </code></pre> <p><code>.env.customer2</code></p> <pre class="lang-bash prettyprint-override"><code>export CLIENT_ID=aaa export CLIENT_SECRET=bbb </code></pre> <p>Note that the <code>.env.customer1</code> does not use the <code>export</code> keyword whereas <code>.env.customer2</code> does use the <code>export</code> keyword.</p> <p>When I want to run a particular script, this configuration forces me to <code>source .env &amp;&amp; source .env.CUSTOMER</code> to make sure I'm not inadvertently interacting with the wrong customer's account/data/users/etc. This works as expected in the shell when I run <code>echo $DB_PATH $CLIENT_ID</code>, so I know these bash environment variables are being correctly set, regardless of whether I use <code>.env.customer1</code> or <code>.env.customer2</code>.</p> <p>When I try to use these environment variables in a python script, that's where things are really confusing. 
For example, I have a python script that looks something like this:</p> <pre class="lang-py prettyprint-override"><code># my_script.py import os import dotenv dotenv.load_dotenv() # this loads variables from .env file, which I understand print(os.environ.get(&quot;DB_PATH&quot;, &quot;no DB_PATH&quot;)) print(os.environ.get(&quot;CLIENT_ID&quot;, &quot;no CLIENT_ID&quot;)) </code></pre> <p>When I run this script, here is what I see:</p> <pre class="lang-none prettyprint-override"><code>$ source .env &amp;&amp; source .env.customer1 $ echo $DB_PATH $CLIENT_ID zzz xxx $ python my_script.py zzz no CLIENT_ID $ $ $ source .env &amp;&amp; source .env.customer2 $ echo $DB_PATH $CLIENT_ID zzz aaa $ python my_script.py zzz aaa $ $ $ CLIENT_ID=wtf python my_script.py zzz wtf </code></pre> <p>My approximate understanding is that the variables in the <code>.env</code> file are being loaded by <code>dotenv.load_dotenv()</code> regardless. But for some reason, only shell variables that have been exported are being added to <code>os.environ</code>. Does anyone know why this is?</p> <p>Is there a better way to manage a workflow like this? Is there a better way to use the dotenv package to work with these different environment variables?</p>
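The behaviour comes from how the shell treats plain assignments versus `export`ed ones: `source` sets both as shell variables, but only exported variables are copied into the environment inherited by child processes such as the `python` interpreter. A minimal sketch (not from the original post, assuming `bash` and `python3` are on the PATH) that demonstrates this:

```python
import subprocess

# A plain assignment creates a shell-local variable; only `export` puts it
# into the environment that child processes (like python) inherit.
script = """
FOO=shell_only
export BAR=exported
python3 -c "import os; print(os.environ.get('FOO', 'missing'), os.environ.get('BAR'))"
"""
result = subprocess.run(["bash", "-c", script], capture_output=True, text=True)
print(result.stdout.strip())
```

This is why `.env.customer2` (with `export`) shows up in `os.environ` while `.env.customer1` does not; `dotenv.load_dotenv()` only reads the literal `.env` file. One hedged option is to load the customer file explicitly, e.g. `load_dotenv(f".env.{customer}")`, since `load_dotenv` accepts a path as its first argument.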
<python><bash><python-dotenv>
2025-01-09 19:55:10
0
3,336
dino
79,343,784
4,613,606
pyspark - Issue in converting hex to decimal
<p>I am facing an issue while converting hex to decimal (learned from <a href="https://stackoverflow.com/q/47930150/4613606">here</a>) in pyspark.</p> <pre><code>from pyspark.sql.functions import col, sha2, conv, substring # User data with ZIPs user_data = [ (&quot;100052441000101&quot;, &quot;21001&quot;), (&quot;100052441000102&quot;, &quot;21002&quot;), (&quot;100052441000103&quot;, &quot;21002&quot;), (&quot;user1&quot;, &quot;21001&quot;), (&quot;user2&quot;, &quot;21002&quot;) ] df_users = spark.createDataFrame(user_data, [&quot;user_id&quot;, &quot;ZIP&quot;]) # Generate SHA-256 hash from the user_id df_users = df_users.withColumn(&quot;hash_key&quot;, sha2(col(&quot;user_id&quot;), 256)) # Convert the hexadecimal hash (sha2 output) to decimal df_users = df_users.withColumn(&quot;hash_substr1&quot;, substring(col('hash_key'), 1, 16)) df_users = df_users.withColumn(&quot;hash_substr2&quot;, substring(col('hash_key'), 1, 15)) df_users = df_users.withColumn(&quot;hash_int1&quot;, conv(col('hash_substr1'), 16, 10).cast(&quot;bigint&quot;)) df_users = df_users.withColumn(&quot;hash_int2&quot;, conv(col('hash_substr2'), 16, 10).cast(&quot;bigint&quot;)) df_users.show() </code></pre> <p>The output I get is:</p> <pre><code>+---------------+-----+--------------------+----------------+---------------+-------------------+-------------------+ | user_id| ZIP| hash_key| hash_substr1| hash_substr2| hash_int1| hash_int2| +---------------+-----+--------------------+----------------+---------------+-------------------+-------------------+ |100052441000101|21001|3cf4b90397964f6b2...|3cf4b90397964f6b|3cf4b90397964f6|4392338961672327019| 274521185104520438| |100052441000102|21002|e18aec7bb2a60b62d...|e18aec7bb2a60b62|e18aec7bb2a60b6| null|1015753888833888438| |100052441000103|21002|e55127f9f61bbe433...|e55127f9f61bbe43|e55127f9f61bbe4| null|1032752028895525860| | user1|21001|0a041b9462caa4a31...|0a041b9462caa4a3|0a041b9462caa4a| 721732164412679331| 45108260275792458| | 
user2|21002|6025d18fe48abd451...|6025d18fe48abd45|6025d18fe48abd4|6928174017724202309| 433010876107762644| +---------------+-----+--------------------+----------------+---------------+-------------------+-------------------+ </code></pre> <p>Note that <code>hash_int1</code> is <code>null</code> for the 2nd and 3rd records. However, when I try to get the corresponding int using Python, I get values:</p> <pre><code>hexes = [&quot;e18aec7bb2a60b62&quot;, &quot;e18aec7bb2a60b6&quot;, &quot;e55127f9f61bbe43&quot;, &quot;e55127f9f61bbe4&quot;] [int(h, 16) for h in hexes] [16252062221342215010, 1015753888833888438, 16524032462328413763, 1032752028895525860] </code></pre> <p>The values are the same when they are not null.</p> <p>The final objective is to generate replicable random values:</p> <pre><code>df_users = df_users.withColumn(&quot;random_value&quot;, (col(&quot;hash_int1&quot;) % 10**12) / 10**12) </code></pre>
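A likely explanation (worth verifying against your Spark version's docs): `conv` interprets the input as an unsigned value, and 16 hex digits can encode numbers up to 2^64 - 1, while Spark's `bigint` is a signed 64-bit long capped at 2^63 - 1, so casting an out-of-range value yields `null`. A quick plain-Python check of the ranges:

```python
LONG_MAX = 2**63 - 1  # Spark bigint is a signed 64-bit long

# The two hashes that became null start with 'e', i.e. their top bit is set,
# so their unsigned value exceeds LONG_MAX and the cast to bigint overflows.
for h in ["3cf4b90397964f6b", "e18aec7bb2a60b62", "e55127f9f61bbe43"]:
    v = int(h, 16)
    print(h, v, "fits" if v <= LONG_MAX else "overflows bigint")

# 15 hex digits are at most 16**15 - 1 = 2**60 - 1 < LONG_MAX, so they always
# fit, which is why hash_int2 (built from a 15-char substring) is never null.
assert 16**15 - 1 <= LONG_MAX
```

So sticking with the 15-character substring (or masking off the top bit before casting) avoids the nulls.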
<python><pyspark><hash><hex>
2025-01-09 19:14:15
1
1,126
Gaurav Singhal
79,343,740
2,091,169
Torch model call works fine on Github Codespaces, but crashes on Render.com
<p>The following code:</p> <pre class="lang-py prettyprint-override"><code>from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC import torch audio_proc = Wav2Vec2Processor.from_pretrained( &quot;vitouphy/wav2vec2-xls-r-300m-timit-phoneme&quot; ) audio_model = Wav2Vec2ForCTC.from_pretrained( &quot;vitouphy/wav2vec2-xls-r-300m-timit-phoneme&quot; ) def compute_phonemes(audio_content: bytes) -&gt; list[dict[str, Any]]: try: audio_file = io.BytesIO(audio_content) speech, sr = librosa.load(audio_file, sr=16000) except Exception as e: print(&quot;Error loading speech&quot;, e) return [] model_inputs = audio_proc( speech, sampling_rate=sr, return_tensors=&quot;pt&quot;, padding=True ) with torch.no_grad(): logits = audio_model(**model_inputs).logits </code></pre> <p>works completely fine on Github Codespaces 2CPU instance and runs in about .4 seconds</p> <p>However when the exact same code, the exact same API, is deployed on Render.com β€” which usually works like a charm β€” then I got the following error logs, systematically:</p> <pre class="lang-none prettyprint-override"><code>[2025-01-09 18:25:30 +0000] [98] [CRITICAL] WORKER TIMEOUT (pid:141) [2025-01-09 18:25:31 +0000] [98] [ERROR] Worker (pid:141) was sent code 134! </code></pre> <p>I traced down the issue and this is exactly this line that triggers the SIGABRT signal (code 134):</p> <pre><code>logits = audio_model(**model_inputs).logits </code></pre> <p>The same code will run &quot;fine&quot; on Railway.com but take 10s instead of 0.4s on Github Codespaces.</p> <p>I can't see any difference in configuration between the two environments. Same Python version (3.12) and same PyTorch version (2.2.0)</p> <p>Any idea or pointers to a solution (or an alternative) would be welcome.</p>
<python><pytorch><render.com>
2025-01-09 18:56:24
0
23,338
Jivan
79,343,728
5,561,472
ValueError: Invalid constraint expression. The constraint expression resolved to a trivial Boolean (True) instead of a Pyomo object
<p>My code is as follows:</p> <pre class="lang-py prettyprint-override"><code>import pyomo.environ as pyo model = pyo.ConcreteModel() model.x = pyo.Var(range(2), domain=pyo.Reals) model.Constraint2 = pyo.Constraint(expr=sum([x for x in model.x]) &gt;= 0) </code></pre> <p>I am getting the error:</p> <pre><code>ValueError: Invalid constraint expression. The constraint expression resolved to a trivial Boolean (True) instead of a Pyomo object </code></pre> <p>But the error disappears if I try:</p> <pre><code>model.Constraint2 = pyo.Constraint(expr=model.x[0] + model.x[1] &gt;= 0) </code></pre> <p>It looks like it is impossible to iterate a variable like a list in <code>pyomo</code>. Is that correct? I am not able to find this in the documentation.</p>
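Iterating an indexed Pyomo component yields its index keys, not the variable objects, much like iterating a Python dict yields keys. So `sum([x for x in model.x])` evaluates to `0 + 1 = 1`, and `1 >= 0` is a plain Python `True`, hence the "trivial Boolean" error. A dict-based sketch of the same behaviour (no Pyomo objects needed to see it):

```python
# model.x behaves like a mapping from indices to variable objects:
x = {0: "<Var x[0]>", 1: "<Var x[1]>"}

keys = [k for k in x]   # iteration yields the keys 0 and 1
total = sum(keys)       # 0 + 1 = 1, a plain int
trivial = total >= 0    # True: a Python bool, not a Pyomo expression
print(keys, total, trivial)
```

A hedged fix in Pyomo terms is to index explicitly, e.g. `pyo.Constraint(expr=sum(model.x[i] for i in model.x) >= 0)`; indexed components also expose a dict-like `.values()` view of the variable objects.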
<python><pyomo>
2025-01-09 18:51:06
1
6,639
Andrey
79,343,703
8,800,836
Generalized Kronecker product with different type of product in numpy or scipy
<p>Consider two boolean arrays</p> <pre class="lang-py prettyprint-override"><code>import numpy as np A = np.asarray([[True, False], [False, False]]) B = np.asarray([[False, True], [True, True]]) </code></pre> <p>I want to take the kronecker product of <code>A</code> and <code>B</code> under the <strong>xor</strong> operation. The result should be:</p> <pre class="lang-py prettyprint-override"><code>C = np.asarray([[True, False, False, True], [False, False, True, True], [False, True, False, True], [True, True, True, True]]) </code></pre> <p>More generally, is there a simple way to implement the Kronecker product with some multiplication operator distinct from the operator <code>*</code>, in this instance the <strong>xor</strong> operator <code>^</code>?</p>
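One way to sketch this (verified against the expected `C` above): build the 4-D outer table with the ufunc's `.outer` method, then reorder axes so the blocks line up exactly as in `np.kron`:

```python
import numpy as np

def generalized_kron(op, A, B):
    # C[i*p + k, j*q + l] = op(A[i, j], B[k, l]), i.e. np.kron with `op`
    # in place of multiplication; `op` can be any binary NumPy ufunc.
    (m, n), (p, q) = A.shape, B.shape
    return op.outer(A, B).transpose(0, 2, 1, 3).reshape(m * p, n * q)

A = np.asarray([[True, False], [False, False]])
B = np.asarray([[False, True], [True, True]])
C = generalized_kron(np.bitwise_xor, A, B)
print(C.astype(int))
```

Passing `np.multiply` instead of `np.bitwise_xor` reproduces `np.kron` on numeric arrays, which is a handy sanity check of the axis ordering.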
<python><numpy><scipy><xor><kronecker-product>
2025-01-09 18:37:35
1
539
Ben
79,343,664
3,137,388
gunicorn is not maintaining persistent TLS connections
<p>We need to write a Python server which maintains persistent connections so when another request comes, it will use the old connection instead of creating new connection. We used <code>flask</code> and <code>gunicorn</code> for this.</p> <p>Python code:</p> <pre class="lang-py prettyprint-override"><code> from flask import Flask, request, make_response, jsonify app = Flask(__name__) @app.route('/v1') def get_availability(): response = make_response(&quot;Custom Response&quot;, 204) return response @app.route('/v2') def get_ping(): response = make_response(&quot;Custom Response&quot;, 200) return response @app.errorhandler(404) def not_found(error): return jsonify({'error': 'Custom message for unavailable path'}), 404 </code></pre> <p>And we are running the python server using gunicorn as below</p> <pre><code>gunicorn --keyfile key.pem --certfile cert.pem --bind 127.0.0.1:8080 app:app </code></pre> <p>I wrote a simple shell script which sends 2 <code>curl</code> commands as below.</p> <pre><code>curl -H &quot;Connection: keep-alive&quot; -H &quot;Keep-Alive: timeout=5, max=100&quot; https://127.0.0.1:8080/v1 -v -k curl -H &quot;Connection: keep-alive&quot; -H &quot;Keep-Alive: timeout=5, max=100&quot; https://127.0.0.1:8080/v2 -v -k </code></pre> <p>When I run the shell script, I am getting the responses but TLS connections are getting closed after each curl command and new curl command is not reusing the connection as below</p> <pre><code> * Trying 127.0.0.1:8080... 
* Connected to 127.0.0.1 (127.0.0.1) port 8080 * ALPN: curl offers h2,http/1.1 * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS * ALPN: server did not agree on a protocol. Uses default. * Server certificate: * subject: C=XX; L=Default City; O=Default Company Ltd * start date: Jan 8 22:43:00 2025 GMT * expire date: Jan 8 22:43:00 2026 GMT * issuer: C=XX; L=Default City; O=Default Company Ltd * SSL certificate verify result: self-signed certificate (18), continuing anyway. * Certificate level 0: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption * using HTTP/1.x &gt; GET /v1 HTTP/1.1 &gt; Host: 127.0.0.1:8080 &gt; User-Agent: curl/8.5.0 &gt; Accept: */* &gt; Connection: keep-alive &gt; Keep-Alive: timeout=5, max=100 &gt; * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing &lt; HTTP/1.1 200 OK &lt; Server: gunicorn &lt; Date: Thu, 09 Jan 2025 18:09:21 GMT &lt; Connection: close &lt; Content-Type: text/html; charset=utf-8 &lt; Content-Length: 15 &lt; * Closing connection * TLSv1.3 (OUT), TLS alert, close notify (256): Custom Response* Trying 127.0.0.1:8080... 
* Connected to 127.0.0.1 (127.0.0.1) port 8080 * ALPN: curl offers h2,http/1.1 * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS * ALPN: server did not agree on a protocol. Uses default. * Server certificate: * subject: C=XX; L=Default City; O=Default Company Ltd * start date: Jan 8 22:43:00 2025 GMT * expire date: Jan 8 22:43:00 2026 GMT * issuer: C=XX; L=Default City; O=Default Company Ltd * SSL certificate verify result: self-signed certificate (18), continuing anyway. * Certificate level 0: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption * using HTTP/1.x &gt; GET /v1 HTTP/1.1 &gt; Host: 127.0.0.1:8080 &gt; User-Agent: curl/8.5.0 &gt; Accept: */* &gt; Connection: keep-alive &gt; Keep-Alive: timeout=5, max=100 &gt; * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing &lt; HTTP/1.1 200 OK &lt; Server: gunicorn &lt; Date: Thu, 09 Jan 2025 18:09:21 GMT &lt; Connection: close &lt; Content-Type: text/html; charset=utf-8 &lt; Content-Length: 15 &lt; * Closing connection * TLSv1.3 (OUT), TLS alert, close notify (256): </code></pre> <p>I thought <code>gunicorn</code> maintains persistent connections but it is closing the connections. Can anyone please let me know if there is anyway to maintain persistent TLS connections.</p>
<python><flask><gunicorn><keep-alive>
2025-01-09 18:21:48
1
5,396
kadina
79,343,557
311,864
Using %matplotlib widget in jupyter notebook, the interactive plot comes out nicely, but everything is duplicated
<p>Using %matplotlib widget in jupyer notebook and I get the interactive plot coming out nicely. But everything is duplicated.</p> <p>When I remove the display keyword. It renders but I need the HBox in order to align the widgets. If not nothing would be aligned.</p> <pre><code>%matplotlib widget import tomolyzer as t import numpy as np import matplotlib.pyplot as plt from ipywidgets import interact, IntSlider, FloatSlider, Dropdown, VBox, HBox, Layout from IPython.display import display # Import display for rendering widgets import logging from scipy.fft import fft2, fftshift import scipy.ndimage as S import os from tqdm import tqdm fig, axs = plt.subplots(1, 2, figsize=(10, 5)) # 1 row and 3 columns figure size of 15 inches wide and 5 inches tall. axs[0].axis(&quot;off&quot;) axs[1].axis(&quot;off&quot;) std = 0 max = 0 factor = max - std*2 i=0 image = data[i][testing_radius, :, :].copy() axs[0].imshow(image, cmap='viridis', aspect='auto') # Display the slice axs[0].set_title(f'{body.centers_scaled[title][i]}') # Title for the slice axs[0].set_xlabel(f'{np.mean(image):.2e}') axs[0].set_ylabel('YX') axs[0].set_xticks([0,mid_x/2,mid_x]) axs[0].set_yticks([0,mid_y/2,mid_y]) axs[1].imshow(image, cmap='viridis', aspect='auto') # Display the slice def updatePlot(val): min = np.min(data[i][18]) max = np.max(data[i][18]) mean = np.mean(data[i][18]) std = np.std(data[i][18]) threshold= max - std*val image = data[i][testing_radius, :, :].copy() # positve intercept image[np.where(image &gt; max - std*val)] = 0 image[np.where(image &lt; max - std*val)] = 1 axs[1].clear() # Clear previous plot axs[1].imshow(image, cmap='viridis', aspect='auto') # Display the slice axs[1].text( 0.5, -0.1, f&quot;{threshold:.2e}&quot;, transform=axs[1].transAxes, ha='center', va='top', fontsize=12 ) std_slider = FloatSlider(min=0,max=7,step=0.001,value=0,description='Std',orientation='vertical', layout=Layout(height='300px')) x=interact(updatePlot,val = std_slider) plots_and_slider = 
HBox([std_slider,fig.canvas], layout=Layout(align_items='center')) display(plots_and_slider) </code></pre>
<python><matplotlib><jupyter>
2025-01-09 17:43:16
1
413
n8CodeGuru
79,343,402
10,634,126
Pygooglenews import failure
<p>I have the following versions installed:</p> <pre><code>Name: pygooglenews Version: 0.1.3 </code></pre> <pre><code>Name: feedparser Version: 6.0.11 </code></pre> <p>Nevertheless, when I try to import pygooglenews, I get the following error:</p> <pre><code>--------------------------------------------------------------------------- AttributeError Traceback (most recent call last) &lt;ipython-input-6-7a5a1e5c2c1c&gt; in &lt;module&gt; ----&gt; 1 from pygooglenews import GoogleNews ~/venv/lib/python3.9/site-packages/pygooglenews/__init__.py in &lt;module&gt; ----&gt; 1 import feedparser 2 from bs4 import BeautifulSoup 3 import urllib 4 from dateparser import parse as parse_date 5 import requests ~/venv/lib/python3.9/site-packages/feedparser.py in &lt;module&gt; 91 else: 92 # Python 3.1 deprecates decodestring in favor of decodebytes ---&gt; 93 _base64decode = getattr(base64, 'decodebytes', base64.decodestring) 94 95 # _s2bytes: convert a UTF-8 str to bytes if the interpreter is Python 3 AttributeError: module 'base64' has no attribute 'decodestring' </code></pre> <p>Is there a workaround for this?</p>
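The traceback suggests the `feedparser` actually being imported is the old single-file 5.x layout (`feedparser.py`) rather than 6.0.11 (which installs as a package directory), i.e. a stale copy earlier on `sys.path`. The underlying failure is that `base64.decodestring` was deprecated long ago and removed in Python 3.9, so the old fallback line crashes there. A quick standard-library-only check:

```python
import base64
import sys

# base64.decodestring was deprecated in favour of decodebytes and removed in
# Python 3.9, so any module still referencing it fails to import there.
print("Python", ".".join(map(str, sys.version_info[:2])))
print("has decodebytes: ", hasattr(base64, "decodebytes"))
print("has decodestring:", hasattr(base64, "decodestring"))
```

To confirm which copy you are really importing, `import feedparser; print(feedparser.__file__)` is worth checking; a path ending in `feedparser.py` (as in the traceback) points at the 5.x layout rather than the installed 6.0.11.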
<python><feedparser><pygooglenews>
2025-01-09 16:37:45
1
909
OJT
79,343,187
14,824,108
Implementation for a recursive computation in python
<p>I'm trying to implement the recursion below for <code>tilde_alpha_t</code>: <a href="https://i.sstatic.net/YTfPlXx7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YTfPlXx7.png" alt="enter image description here" /></a></p> <p>I have <code>torch</code> tensors for <code>t</code> (time) which looks like this</p> <pre><code>t tensor([816, 724, 295, 54, 205, 373, 665, 656, 690, 280, 234, 155, 31, 684, 159, 749, 893, 795, 375, 443, 121, 881, 477, 326, 337, 970, 384, 247, 511, 432, 563, 753, 764, 77, 294, 803, 935, 507, 196, 744, 140, 641, 746, 844, 337, 4, 259, 276, 909, 962, 460, 372, 620, 466, 15, 244, 456, 829, 491, 620, 943, 925, 82, 856, 782, 117, 609, 909, 198, 626, 992, 998, 672, 762, 602, 85, 46, 529, 42, 841, 441, 237, 839, 953, 87, 941, 987, 980, 304, 690, 19, 598, 687, 483, 806, 366, 807, 136, 997, 708, 902, 751, 560, 245, 375, 688, 912, 547, 11, 285, 5, 83, 104, 346, 312, 236, 335, 664, 435, 762, 575, 184, 341, 618, 257, 634, 355, 762]) </code></pre> <p>and a tensor for corresponding <code>alphas_t</code> which is</p> <pre><code>alphas tensor([0.4948, 0.4966, 0.4992, 0.4999, 0.4995, 0.4990, 0.4973, 0.4974, 0.4970, 0.4993, 0.4994, 0.4996, 0.4999, 0.4971, 0.4996, 0.4962, 0.4909, 0.4953, 0.4989, 0.4987, 0.4997, 0.4918, 0.4985, 0.4991, 0.4991, 0.4683, 0.4989, 0.4993, 0.4984, 0.4987, 0.4981, 0.4962, 0.4960, 0.4998, 0.4992, 0.4951, 0.4850, 0.4984, 0.4995, 0.4963, 0.4996, 0.4975, 0.4963, 0.4938, 0.4991, 0.4999, 0.4993, 0.4993, 0.4893, 0.4747, 0.4986, 0.4990, 0.4977, 0.4986, 0.4999, 0.4994, 0.4986, 0.4943, 0.4985, 0.4977, 0.4830, 0.4870, 0.4998, 0.4932, 0.4956, 0.4997, 0.4978, 0.4893, 0.4995, 0.4976, 0.3951, 0.2222, 0.4972, 0.4960, 0.4978, 0.4998, 0.4999, 0.4983, 0.4999, 0.4939, 0.4987, 0.4994, 0.4940, 0.4794, 0.4998, 0.4835, 0.4311, 0.4535, 0.4992, 0.4970, 0.4999, 0.4979, 0.4971, 0.4985, 0.4950, 0.4990, 0.4950, 0.4996, 0.2813, 0.4968, 0.4900, 0.4962, 0.4981, 0.4994, 0.4989, 0.4971, 0.4889, 0.4982, 0.4999, 0.4992, 0.4999, 0.4998, 0.4997, 0.4990, 
0.4992, 0.4994, 0.4991, 0.4973, 0.4987, 0.4960, 0.4980, 0.4995, 0.4991, 0.4977, 0.4993, 0.4976, 0.4990, 0.4960]) </code></pre> <p>Any hint on how to achieve that? My attempt is as follows:</p> <pre><code>def compute_tilde_alphas(times, alphas): &quot;&quot;&quot; Compute tilde_alpha_t for each t in the batch, starting from 0. Args: times: Tensor of times (shape [batch_size]). alphas: Tensor of alpha values corresponding to times (shape [batch_size]). Returns: tilde_alphas: Tensor of computed tilde_alpha values (shape [batch_size]). &quot;&quot;&quot; # Ensure times and alphas are in the same batch dimension assert times.shape == alphas.shape, &quot;times and alphas must have the same shape&quot; batch_size = times.shape[0] tilde_alphas = torch.zeros(batch_size, device=alphas.device) # Iterate over the batch for i in range(batch_size): t = int(times[i]) # Current time for this batch element alpha_t = alphas[i] # Corresponding alpha # Recursively compute tilde_alpha for this t tilde_alpha_t = 0 # Start with tilde_alpha_0 = 0 for step in range(1, t + 1): # Assume t defines the recursion depth tilde_alpha_t = torch.sqrt(alpha_t) * (1 + tilde_alpha_t) # Store the result tilde_alphas[i] = tilde_alpha_t return tilde_alphas </code></pre> <p>I think it may be correct, although I'm pretty sure there may be faster ways of obtaining the result here.</p>
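Since the inner loop applies the same `alpha_t` at every step, the recursion tilde_k = sqrt(a) * (1 + tilde_{k-1}) with tilde_0 = 0 telescopes into a geometric series: tilde_t = r + r**2 + ... + r**t = r * (1 - r**t) / (1 - r), where r = sqrt(a). A plain-Python sketch checking the closed form against the loop (in torch this becomes the vectorized one-liner `r = torch.sqrt(alphas); r * (1 - r**times) / (1 - r)`, with a guard for r == 1):

```python
import math

def tilde_alpha_loop(t, a):
    # Direct transcription of the recursion in the question.
    v = 0.0
    for _ in range(t):
        v = math.sqrt(a) * (1 + v)
    return v

def tilde_alpha_closed(t, a):
    # Geometric-series closed form; valid for a != 1.
    r = math.sqrt(a)
    return r * (1 - r ** t) / (1 - r)

for t, a in [(816, 0.4948), (54, 0.4999), (998, 0.2222)]:
    assert math.isclose(tilde_alpha_loop(t, a), tilde_alpha_closed(t, a), rel_tol=1e-9)
```

This removes both Python loops at once, replacing the O(batch * t) recursion with a single elementwise expression.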
<python><pytorch>
2025-01-09 15:32:34
1
676
James Arten
79,342,997
13,441,462
Wagtail - add custom link option
<p>I would like to customize <code>External</code> link type in Wagtail links in admin <code>RichText</code> field - I need to change (or at least disable) the link validation (e.g., to my custom regex) in admin frontend, as my links have different format to <code>https://&lt;something&gt;</code>.</p> <p>Does anybody know how to do that without forking <code>wagtail</code> and making own addition in <a href="https://github.com/wagtail/wagtail/blob/main/wagtail/admin/templates/wagtailadmin/chooser/_link_types.html" rel="nofollow noreferrer">this</a> and <a href="https://github.com/wagtail/wagtail/blob/main/wagtail/admin/templates/wagtailadmin/chooser/external_link.html" rel="nofollow noreferrer">this</a> file? Any help would be appreciated.</p> <p><a href="https://i.sstatic.net/TTpnmSJj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TTpnmSJj.png" alt="how I envision this" /></a></p>
<python><django><wagtail><wagtail-admin>
2025-01-09 14:34:41
0
409
Foreen
79,342,989
1,788,656
Xarray.open_dataset uses more than double the size of the file itself
<p>All, I am opening NetCDF files from <a href="https://cds.climate.copernicus.eu/datasets/derived-era5-land-daily-statistics?tab=download" rel="nofollow noreferrer">Copernicus data center</a> using xarray version 2024-11-0, using open_dataset function as the following:</p> <pre><code>import xarray as xr file1=xr.open_dataset(&quot;2021-04.nc&quot;) tem = file1['t2m'] </code></pre> <p>The netcdf file is available on <a href="https://app.box.com/s/2xsp8x6ak94di47eq1bv2ojwwani3ojs" rel="nofollow noreferrer">the box</a>, the reader can also download any file sample from the aforementioned data center.</p> <p>Although the file size is 16.6 Mb, tem variable seems to take double the size of the actual file as could be seen below (end of the first line) or monitored by using the top command</p> <pre><code>&lt;xarray.DataArray 't2m' (valid_time: 30, latitude: 411, longitude: 791)&gt; Size: 39MB [9753030 values with dtype=float32] Coordinates: number int64 8B ... * latitude (latitude) float64 3kB 38.0 37.9 37.8 37.7 ... -2.8 -2.9 -3.0 * longitude (longitude) float64 6kB -18.0 -17.9 -17.8 ... 60.8 60.9 61.0 * valid_time (valid_time) datetime64[ns] 240B 2021-04-01 ... 2021-04-30 Attributes: (12/32) GRIB_paramId: 167 GRIB_dataType: fc GRIB_numberOfPoints: 325101 GRIB_typeOfLevel: surface GRIB_stepUnits: 1 GRIB_stepType: instant ... GRIB_totalNumber: 0 GRIB_units: K long_name: 2 metre temperature units: K standard_name: unknown GRIB_surface: 0.0 </code></pre> <p>Any idea why xarray uses all that memory. This is not problematic for small files, but it is too problematic for large files and for heavy computation when many copies of the same variable are created.</p> <p>I can use <code>file1[t2m].astype(β€˜float16’)</code>, which reduces the size to half, but I found that most values are rounded to the first decimal, so I am losing actual data. 
I want to read the actual data without having to use memory beyond the size of the data file.</p> <p>This is what the data looks like when read as float32:</p> <pre><code>&lt;xarray.DataArray 't2m' (valid_time: 30)&gt; Size: 120B array([293.87134, 296.0669 , 299.4065 , 302.60474, 305.29443, 306.87646, 301.10645, 302.47388, 299.23267, 294.26587, 295.239 , 299.19238, 302.20923, 307.48193, 307.2202 , 310.6953 , 315.64746, 312.76416, 305.2173 , 299.25488, 299.9475 , 302.3435 , 306.32422, 312.75342, 299.99878, 300.59155, 303.36475, 307.11768, 308.49292, 310.6853 ], dtype=float32) Coordinates: </code></pre> <p>and this is what it looks like under float16:</p> <pre><code>&lt;xarray.DataArray 't2m' (valid_time: 30)&gt; Size: 60B array([293.8, 296. , 299.5, 302.5, 305.2, 307. , 301. , 302.5, 299.2, 294.2, 295.2, 299.2, 302.2, 307.5, 307.2, 310.8, 315.8, 312.8, 305.2, 299.2, 300. , 302.2, 306.2, 312.8, 300. , 300.5, 303.2, 307. , 308.5, 310.8], dtype=float16) </code></pre> <p>Moreover, when I load the data into RAM and trace the amount of memory being used, it is several times the actual size of the file's data.</p> <pre><code>import psutil process = psutil.Process() print(&quot;memory used in MB=&quot;, process.memory_info().rss / 1024**2) tem.data print(&quot;memory used in MB=&quot;, process.memory_info().rss / 1024**2) </code></pre> <p>Thanks</p>
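The ~39 MB figure is simply 30 x 411 x 791 float32 values; the on-disk file is smaller because NetCDF typically stores the variable compressed and/or packed (e.g. as scaled short integers), and xarray decodes it to float32 in RAM. A quick check of the arithmetic with plain NumPy (an illustration, not the original file):

```python
import numpy as np

# Same shape as the t2m slice: 30 times x 411 latitudes x 791 longitudes
shape = (30, 411, 791)
a32 = np.zeros(shape, dtype=np.float32)
print(a32.nbytes, "bytes =", round(a32.nbytes / 1e6, 1), "MB")  # ~39 MB, matching xarray's "Size: 39MB"

# float16 halves the memory but keeps only ~3 significant decimal digits,
# which is why values like 293.87134 round to 293.8.
a16 = a32.astype(np.float16)
print(a16.nbytes / 1e6, "MB")
```

To stay below the file size in memory, a hedged option is lazy, chunked reading, e.g. `xr.open_dataset("2021-04.nc", chunks={"valid_time": 1})` with dask installed, which only materializes the slices you actually compute on.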
<python><python-3.x><python-2.7><multidimensional-array><python-xarray>
2025-01-09 14:33:26
1
725
Kernel
79,342,924
2,496,293
Can't enable PTP on Basler cameras
<p>I am following <a href="https://docs.baslerweb.com/precision-time-protocol#enabling-ptp-clock-synchronization" rel="nofollow noreferrer">this</a> documentation in attempt to clock-sync multiple basler cameras. However, the code instructions I find there don't seem to work.</p> <p>Below is small self-contained piece of code:</p> <pre class="lang-py prettyprint-override"><code>from pypylon import pylon _tlf: pylon.TlFactory = pylon.TlFactory.GetInstance() devices: list[pylon.DeviceInfo] = list(filter(lambda d: d.GetModelName() == &quot;a2A4200-12gcBAS&quot;, _tlf.EnumerateDevices())) print([d.GetFriendlyName() for d in devices]) cam_array: pylon.InstantCameraArray = pylon.InstantCameraArray(len(devices)) for device, cam in zip(devices, cam_array, strict=True): cam.Attach(_tlf.CreateDevice(device)) cam.Open() for cam in cam_array: assert cam.IsOpen() cam.PtpEnable.Value = True </code></pre> <p>This script correctly prints my camera serial numbers:</p> <pre><code>['Basler a2A4200-12gcBAS (40400219)', 'Basler a2A4200-12gcBAS (40400220)', 'Basler a2A4200-12gcBAS (40400221)', 'Basler a2A4200-12gcBAS (40400222)'] </code></pre> <p>But then raises an error:</p> <pre><code>Traceback (most recent call last): File &quot;/home/samdm/Safe/0041-VFE/rt-vfe/packages/vfe_rt_plugin_basler/scripts/ptp_stackoverflow.py&quot;, line 14, in &lt;module&gt; cam.PtpEnable.Value = True ^^^^^^^^^^^^^^^^^^^ File &quot;/home/samdm/Safe/0041-VFE/rt-vfe/packages/vfe_rt_plugin_basler/.venv/lib/python3.12/site-packages/pypylon/genicam.py&quot;, line 2073, in SetValue return _genicam.IBoolean_SetValue(self, Value, Verify) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ _genicam.AccessException: Node is not writable. : AccessException thrown in node 'PtpEnable' while calling 'PtpEnable.SetValue()' (file 'BooleanT.h', line 61) </code></pre> <p>The odd thing is that in the pylonviewer, I can check/uncheck the <code>Enable Ptp</code> value just fine.</p> <p>Am I missing a step?</p>
<python><basler>
2025-01-09 14:13:29
1
2,441
Sam De Meyer
79,342,896
5,118,420
How can I override settings for code run in urls.py while unit testing Django
<p>My Django app has an env var <code>DEMO</code> which, among other things, dictates what endpoints are declared in my <code>urls.py</code> file.</p> <p>I want to unit test these endpoints. I've tried <code>django.test.override_settings</code>, but I've found that <code>urls.py</code> is run only once and not once per unit test.</p> <p>My code looks like this:</p> <pre><code># settings.py DEMO = os.environ.get(&quot;DEMO&quot;, &quot;false&quot;) == &quot;true&quot; </code></pre> <pre><code># urls.py print(f&quot;urls.py: DEMO = {settings.DEMO}&quot;) if settings.DEMO: urlpatterns += [ path('my_demo_endpoint/',MyDemoAPIView.as_view(),name=&quot;my-demo-view&quot;) ] </code></pre> <pre><code># test.test_my_demo_endpoint.py class MyDemoEndpointTestCase(TestCase): @override_settings(DEMO=True) def test_endpoint_is_reachable_with_demo_equals_true(self): print(f&quot;test_endpoint_is_reachable_with_demo_equals_true: DEMO = {settings.DEMO}&quot;) response = self.client.get(&quot;/my_demo_endpoint/&quot;) # this fails with 404 self.assertEqual(response.status_code, 200) @override_settings(DEMO=False) def test_endpoint_is_not_reachable_with_demo_equals_false(self): print(f&quot;test_endpoint_is_not_reachable_with_demo_equals_false: DEMO = {settings.DEMO}&quot;) response = self.client.get(&quot;/my_demo_endpoint/&quot;) self.assertEqual(response.status_code, 404) </code></pre> <p>When running this I get:</p> <pre><code>urls.py: DEMO = False test_endpoint_is_reachable_with_demo_equals_true: DEMO = True &lt;test fails with 404&gt; test_endpoint_is_not_reachable_with_demo_equals_false: DEMO = False &lt;test succeed&gt; </code></pre> <p>urls.py is run only once before all the tests, but I want to test different behaviours of urls.py depending on settings.</p> <p>Using a different settings file for testing is not a solution because different tests require different settings. Directly calling my view in the unit test means that the urls.py code stays uncovered and its behaviour untested, so this is also not what I want.</p> <p>How can I override settings for code run in urls.py?</p> <p>Thank you for your time.</p>
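urls.py is plain module-level code: Python runs it once on first import and caches the module, so `override_settings` cannot retroactively change which `urlpatterns` were built. The usual workaround is to reload the urlconf (and clear Django's resolver cache with `django.urls.clear_url_caches()`) inside each test. A framework-free sketch of the import-caching behaviour (module names here are made up):

```python
import importlib
import os
import sys
import tempfile

# fake_urls.py plays the role of urls.py; settings_stub plays django.conf.settings.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "settings_stub.py"), "w") as f:
    f.write("DEMO = False\n")
with open(os.path.join(tmp, "fake_urls.py"), "w") as f:
    f.write(
        "import settings_stub\n"
        "urlpatterns = ['my_demo_endpoint/'] if settings_stub.DEMO else []\n"
    )
sys.path.insert(0, tmp)

import fake_urls
import settings_stub

print(fake_urls.urlpatterns)  # [] because the module body ran with DEMO = False
settings_stub.DEMO = True     # analogous to override_settings(DEMO=True)
print(fake_urls.urlpatterns)  # still [], the module body does not re-run
importlib.reload(fake_urls)   # re-executes the module under the new setting
print(fake_urls.urlpatterns)
```

In a real test you would pair `importlib.reload(your_project.urls)` with `clear_url_caches()` in `setUp`, inside the overridden-settings context, and reload again in `tearDown` so other tests see the original routes.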
<python><django><testing>
2025-01-09 14:01:02
4
385
Jean Bouvattier
79,342,804
10,567,859
pySpark Hadoop AWS s3 requester-pays.enabled config doesn't work
<p>I am trying to read an AWS S3 bucket with pyspark. The bucket requires the requester to pay to read.</p> <p>However, it doesn't seem to work, although the same credentials work with aws-cli. The reason I believe the <code>spark.hadoop.fs.s3a.requester-pays.enabled</code> config is at fault is that if I remove the <code>--request-payer requester</code> parameter from aws-cli, I get exactly the same error.</p> <p>Below is my code for the pyspark configuration</p> <pre><code>spark = SparkSession.builder \ .appName(&quot;MainnetBlocksStreamingJob&quot;) \ .config(&quot;spark.jars.packages&quot;, &quot;org.apache.hadoop:hadoop-aws:3.2.0,com.amazonaws:aws-java-sdk-bundle:1.11.375&quot;) \ .config(&quot;spark.hadoop.fs.s3a.access.key&quot;, S3_ACCESS_KEY) \ .config(&quot;spark.hadoop.fs.s3a.secret.key&quot;, S3_SECRET_KEY) \ .config(&quot;spark.hadoop.fs.s3a.endpoint&quot;, &quot;s3.amazonaws.com&quot;) \ .config(&quot;spark.hadoop.fs.s3a.impl&quot;, &quot;org.apache.hadoop.fs.s3a.S3AFileSystem&quot;) \ .config(&quot;spark.hadoop.fs.s3a.path.style.access&quot;, &quot;true&quot;) \ .config(&quot;spark.hadoop.fs.s3a.requester-pays.enabled&quot;, &quot;true&quot;) \ .config(&quot;spark.hadoop.fs.s3a.requester.pays.enabled&quot;, &quot;true&quot;) \ .config('spark.hadoop.fs.s3a.aws.credentials.provider', 'org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider')\ .getOrCreate() </code></pre> <p>And I am running pyspark with the command</p> <pre><code>spark-submit \--packages io.delta:delta-spark_2.12:3.3.0,org.apache.hadoop:hadoop-aws:3.2.0,com.amazonaws:aws-java-sdk-bundle:1.11.375 \ --conf &quot;spark.driver.extraJavaOptions=-Dlog4j.configuration=file:log4j.properties&quot; \ --conf spark.hadoop.fs.s3a.requester-pays.enabled=true \ dataproc_jobs/streaming.py </code></pre> <p>Thank you.</p>
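A hedged pointer worth checking: S3A's requester-pays support (the `fs.s3a.requester.pays.enabled` key, with dots rather than hyphens) is believed to have landed in the hadoop-aws 3.3.x line (HADOOP-14661), so with `hadoop-aws:3.2.0` the option would be silently ignored regardless of spelling. A sketch of the builder with the assumed working combination (the version numbers are assumptions to verify against your cluster, not a tested setup):

```python
# Configuration fragment only; assumes `SparkSession` is imported from pyspark.sql.
spark = (
    SparkSession.builder
    .appName("MainnetBlocksStreamingJob")
    # hadoop-aws >= 3.3.1 is believed to be required for requester-pays (HADOOP-14661)
    .config("spark.jars.packages",
            "org.apache.hadoop:hadoop-aws:3.3.1,com.amazonaws:aws-java-sdk-bundle:1.11.901")
    .config("spark.hadoop.fs.s3a.requester.pays.enabled", "true")  # dots, not hyphens
    .getOrCreate()
)
```

The aws-java-sdk-bundle version must match what that hadoop-aws release was built against, so check the hadoop-aws POM for the exact pairing.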
<python><amazon-web-services><amazon-s3><hadoop><pyspark>
2025-01-09 13:33:12
1
523
Edward Chew
79,342,757
1,506,763
Add button from admin panel to ModelForm in Django
<p>In the admin panel of a Django app, when you are adding a database entry that links data from another table on a foreign key, you get a dropdown to select the entry and 3 buttons to edit, add or view an entry, like this:</p> <p><a href="https://i.sstatic.net/LhS7OaJd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LhS7OaJd.png" alt="enter image description here" /></a></p> <p>However, when you create a form for this model to take data from a user, the linked data only shows the dropdown menu and no buttons:</p> <p><a href="https://i.sstatic.net/AJvjBAg8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJvjBAg8.png" alt="enter image description here" /></a></p> <p>Is there any way that I can add the 'add' and 'view' buttons on the input form, so that the user can view an entry in more detail if unsure, or add a new entry if the required information is not already in the database?</p> <p>It seems like it should be pretty simple, since the functionality is already there for the admin panel, and it would be neater than implementing some custom pages to load the data for viewing or redirecting to another form to add the new data.</p>
<python><django>
2025-01-09 13:20:17
0
676
jpmorr
79,342,713
10,750,537
Deducing the behavior of non-blocking sys.stdin.read()
<p>Could you please explain to me the logic of the return value of <code>sys.stdin.read(10)</code> in the following code, according to the documentation? Honestly, I was not able to deduce it.</p> <pre><code>import sys, os os.set_blocking(sys.stdin.fileno(), False) c = sys.stdin.read(10) print(c) </code></pre> <p>This is my top-down reasoning about what I should expect, which is probably flawed because it finishes in a dead end:</p> <ol> <li><p><a href="https://docs.python.org/3/library/sys.html#sys.stdin" rel="nofollow noreferrer"><code>sys.stdin</code> </a> is among (citation) &quot;regular texts file like those returned by <a href="https://docs.python.org/3/library/functions.html#open" rel="nofollow noreferrer">open()</a>&quot;, so by deduction it is a <code>TextIOWrapper</code> object.</p> </li> <li><p>A <a href="https://docs.python.org/3/library/io.html#io.TextIOWrapper" rel="nofollow noreferrer"><code>TextIOWrapper</code></a> object is</p> </li> </ol> <blockquote> <p>A buffered text stream providing higher-level access to a BufferedIOBase buffered binary stream. It inherits from TextIOBase.</p> </blockquote> <ol start="3"> <li>None of the <code>io.TextIO</code> documentation discriminates between blocking/non-blocking. Specifically, <a href="https://docs.python.org/3/library/io.html#io.TextIOBase.read" rel="nofollow noreferrer"><code>io.TextIOBase.read(size=-1, /)</code></a></li> </ol> <blockquote> <p>Read and return at most size characters from the stream as a single <em>str</em>. If size is negative or None, reads until EOF.</p> </blockquote> <ol start="4"> <li><a href="https://docs.python.org/3/library/io.html#io.BufferedIOBase" rel="nofollow noreferrer"><code>BufferedIOBase</code> documentation</a> states that methods such as <code>read()</code> (bold is mine):</li> </ol> <blockquote> <p>[...] 
<strong>can raise</strong> <code>BlockingIOError</code> if the underlying raw stream is in non-blocking mode and cannot take or give enough data;</p> </blockquote> <p>To complicate things further, I noticed that <code>sys.stdout.buffer</code> exists (at least in my CPython implementation) and it is a <code>BufferedReader</code>, which inherits from <code>BufferedIOBase</code>, and</p> <ol start="5"> <li>the documentation states that <a href="https://docs.python.org/3/library/io.html#io.BufferedReader.read" rel="nofollow noreferrer"><code>io.BufferedReader.read(size=-1, /)</code></a>:</li> </ol> <blockquote> <p>Read and return <em>size</em> bytes, or if <em>size</em> is not given or negative, until EOF or if the read call would block in non-blocking mode.</p> </blockquote> <p>(by the way, my English is not good enough to understand what shall be returned in case of a call that blocks in non-blocking mode).</p> <p>Finally:</p> <ol start="6"> <li><a href="https://peps.python.org/pep-3116/#non-blocking-i-o" rel="nofollow noreferrer">PEP3116</a> states that (bold is mine; today <code>IOError</code> has been replaced by <code>OSError</code>)</li> </ol> <blockquote> <p>At the Buffered I/O and Text I/O layers, if a read or write fails due a non-blocking condition, they <strong>raise</strong> an <code>IOError</code> with <code>errno</code> set to <code>EAGAIN</code>.</p> </blockquote> <p>which seems to be in contrast with 4) (&quot;raise&quot; vs &quot;can raise&quot;) and 5) (&quot;raise&quot; vs not-raise at all).</p>
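An aside on the one layer whose non-blocking contract is unambiguous: `RawIOBase.read` is documented to return `None` when the object is in non-blocking mode and no bytes are available. A self-contained sketch using a pipe instead of stdin; the buffered/text layers stacked on top are exactly where the documented behaviour gets murky, as the question lays out:

```python
import io
import os

# Build a raw file object over the read end of a pipe, in non-blocking mode.
r, w = os.pipe()
os.set_blocking(r, False)
raw = io.FileIO(r, "rb")

empty = raw.read(10)   # no data pending: raw layer signals "would block" with None
os.write(w, b"hello")
data = raw.read(10)    # data is now available

print(empty, data)  # None b'hello'
```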
<python><python-3.x><io>
2025-01-09 13:07:34
0
381
JtTest
79,342,389
13,259,162
Numpy grayscale image to black and white
<p>I use the MNIST dataset that contains 28x28 grayscale images represented as numpy arrays with 0-255 values. I'd like to convert images to black and white only (0 and 1) so that pixels with a value over 128 will get the value 1 and pixels with a value under 128 will get the value 0.</p> <p>Is there a simple method to do so?</p>
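For what the asker describes, a boolean comparison broadcast over the whole array does the job; note that with a strict `&gt;`, the value 128 itself maps to 0 (use `&gt;=` if it should map to 1). A minimal sketch on a toy array:

```python
import numpy as np

# Toy stand-in for a 28x28 MNIST image (values 0-255)
img = np.array([[0, 127, 128],
                [129, 200, 255]], dtype=np.uint8)

bw = (img > 128).astype(np.uint8)  # 1 where pixel > 128, else 0
print(bw)  # [[0 0 0], [1 1 1]]
```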
<python><numpy><mnist>
2025-01-09 11:16:47
1
309
NoΓ© Mastrorillo
79,342,235
6,815,232
How to download an image with Selenium from Etsy?
<p>I need to download an image from Etsy.</p> <p>But after running this code, it returns this error:</p> <p><code>requests.exceptions.SSLError: HTTPSConnectionPool(host='i.etsystatic.com', port=443): Max retries exceeded with url: /40895858/r/il/764bf3/4699592436/il_340x270.4699592436_edpm.jpg (Caused by SSLError(SSLEOFError(8, 'EOF occurred in violation of protocol (_ssl.c:1145)'))) </code></p> <p>My code is:</p> <pre><code>URL_input = &quot;https://i.etsystatic.com/40895858/r/il/764bf3/4699592436/il_340x270.4699592436_edpm.jpg&quot; headers = { &quot;User-Agent&quot;: &quot;Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.131 Safari/537.3&quot; } r = requests.get(URL_input, headers=headers, stream=True) </code></pre> <p>How can I fix this problem?</p>
<python><web-scraping><selenium-chromedriver>
2025-01-09 10:22:12
1
1,706
mySun
79,342,183
4,451,315
DuckDBPyRelation from Python dict?
<p>In Polars / pandas / PyArrow, I can instantiate an object from a dict, e.g.</p> <pre><code>In [12]: pl.DataFrame({'a': [1,2,3], 'b': [4,5,6]}) Out[12]: shape: (3, 2) β”Œβ”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β” β”‚ a ┆ b β”‚ β”‚ --- ┆ --- β”‚ β”‚ i64 ┆ i64 β”‚ β•žβ•β•β•β•β•β•ͺ═════║ β”‚ 1 ┆ 4 β”‚ β”‚ 2 ┆ 5 β”‚ β”‚ 3 ┆ 6 β”‚ β””β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”˜ </code></pre> <p>Is there a way to do that in DuckDB, without going via pandas / pyarrow / etc.?</p>
<python><duckdb>
2025-01-09 10:07:38
1
11,062
ignoring_gravity
79,342,159
1,090,562
Finding solutions to linear system of equations with integer constraint in scipy
<p>I have a system of equations where each equation is a linear equation with boolean constraints. For example:</p> <pre><code>x1 + x2 + x3 = 2 x1 + x4 = 1 x2 + x1 = 1 </code></pre> <p>And each <code>x_i</code> is either 0 or 1. Sometimes there might be a small positive (&lt;5) coefficient (for example <code>x1 + 2 * x3 + x4 = 3</code>). Basically a standard linear programming task. What I need to do is to find all <code>x_i</code> which are guaranteed to be 0 and all <code>x_j</code> which are guaranteed to be 1. Sorry if my terminology is not correct here, but by guaranteed I mean that if you generate all possible solutions, then in all of them <code>x_i</code> will be 0 and in all of them <code>x_j</code> will be 1.</p> <p>For example, my system has only 2 solutions:</p> <ul> <li><code>1, 0, 1, 0</code></li> <li><code>0, 1, 1, 1</code></li> </ul> <p>So there is no guaranteed 0, and <code>x_3</code> is a guaranteed 1.</p> <p><strong>I know how to solve this problem with <a href="https://developers.google.com/optimization/introduction/python" rel="nofollow noreferrer">or-tools</a> by generating all solutions</strong> and it works for my use cases (equations are pretty constrained so usually there are &lt; 500 solutions, although the number of variables is big enough to make the whole combinatorial search impossible).</p> <p>The big problem is that I can't use that library (system restrictions beyond my control) and the only libraries available in my case are numpy and scipy. 
I found that scipy has <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.linprog.html" rel="nofollow noreferrer">scipy.optimize.linprog</a>.</p> <p>It seems like I have found a way to generate one solution</p> <pre><code>import numpy as np from scipy.optimize import linprog A_eq = np.array([ [1, 1, 1, 0], # x1 + x2 + x3 = 2 [1, 0, 0, 1], # x1 + x4 = 1 [1, 1, 0, 0] # x1 + x2 = 1 ]) b_eq = np.array([2, 1, 1]) c = np.zeros(4) bounds = [(0, 1)] * 4 res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method='highs-ipm') if res.success: print(res.x) </code></pre> <p>But I can't find a way to generate all solutions. Also I am not sure whether there is a better way to do it as all I need to know is to find guaranteed values.</p>
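One scipy-only approach (an editorial sketch, not the asker's code, assuming scipy &gt;= 1.9 so that `scipy.optimize.milp` is available): the guaranteed values can be found without enumerating all solutions. Pin each variable to 0 and to 1 in turn and test feasibility; a variable is guaranteed 1 exactly when pinning it to 0 makes the system infeasible, and vice versa:

```python
import numpy as np
from scipy.optimize import Bounds, LinearConstraint, milp

A = np.array([[1, 1, 1, 0],   # x1 + x2 + x3 = 2
              [1, 0, 0, 1],   # x1 + x4 = 1
              [1, 1, 0, 0]])  # x1 + x2 = 1
b = np.array([2, 1, 1])
n = A.shape[1]

def feasible_with(var, value):
    # Is the system still solvable over {0, 1} when x[var] is pinned to `value`?
    lb, ub = np.zeros(n), np.ones(n)
    lb[var] = ub[var] = value
    res = milp(c=np.zeros(n),                          # pure feasibility problem
               constraints=LinearConstraint(A, b, b),  # equalities: b <= Ax <= b
               integrality=np.ones(n),                 # all variables integer
               bounds=Bounds(lb, ub))
    return res.success

guaranteed_one = [i for i in range(n) if not feasible_with(i, 0)]
guaranteed_zero = [i for i in range(n) if not feasible_with(i, 1)]
print(guaranteed_one, guaranteed_zero)  # [2] [] -- only x3 is forced (to 1)
```

This costs 2n small MILP solves instead of an exhaustive enumeration.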
<python><numpy><scipy><linear-programming><scipy-optimize>
2025-01-09 09:59:05
8
224,221
Salvador Dali
79,342,104
11,348,853
Dynamic type annotation for a Django model manager's custom method
<p>I need help with type hint for a custom model manager method.</p> <p>This is my custom manager and a base model. I inherit this base model to other models so that I don't have to write common fields again and again.</p> <pre class="lang-py prettyprint-override"><code>class BaseManager(models.Manager): def get_or_none(self, *args, **kwargs): try: return self.get(*args, **kwargs) except self.model.DoesNotExist: return None class BaseModel(models.Model): id = models.UUIDField( primary_key=True, default=uuid.uuid4, verbose_name=&quot;ID&quot;, editable=False ) created_at = models.DateTimeField(auto_now_add=True) updated_at = models.DateTimeField(auto_now=True) objects = BaseManager() # Set the custom manager class Meta: abstract = True </code></pre> <p>This is an example model:</p> <pre class="lang-py prettyprint-override"><code>class MyModel(BaseModel): category = models.CharField(max_length=10) </code></pre> <p>Now for this:</p> <pre class="lang-py prettyprint-override"><code>my_object = MyModel.objects.get_or_none( category=&quot;...&quot;, ) </code></pre> <p>The type annotation is like this when I hover in my IDE:</p> <pre><code>my_object: BaseModel | None = MyModel.objects. get_or_none(... </code></pre> <p>But I want the type annotation like this:</p> <pre><code>my_object: MyModel | None = MyModel.objects. get_or_none(... </code></pre> <p>How can I do that? This works for the default methods like get and filter. But how to do this for custom methods like get_or_none?</p> <p>Please help me.</p> <p>Thanks</p>
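An editorial aside (names hypothetical, not the asker's code): the usual pattern is to make the manager generic over the model type so `get_or_none` can return `Optional[T]`; in real Django this is typically done with django-stubs' generic `models.Manager[T]` plus an explicit `objects: BaseManager["MyModel"]` annotation on each model. A framework-free sketch of the typing pattern:

```python
from typing import Generic, Optional, TypeVar

T = TypeVar("T")

class BaseManager(Generic[T]):
    # Toy stand-in for models.Manager, parametrized by the model type
    def __init__(self, model_cls: type) -> None:
        self._model_cls = model_cls
        self._rows: list = []

    def add(self, obj: T) -> None:
        self._rows.append(obj)

    def get_or_none(self, **kwargs) -> Optional[T]:
        for obj in self._rows:
            if all(getattr(obj, k) == v for k, v in kwargs.items()):
                return obj
        return None

class MyModel:
    # The explicit annotation is what gives the IDE "MyModel | None"
    objects: "BaseManager[MyModel]"

    def __init__(self, category: str) -> None:
        self.category = category

MyModel.objects = BaseManager(MyModel)
MyModel.objects.add(MyModel("news"))
obj = MyModel.objects.get_or_none(category="news")  # inferred MyModel | None
```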
<python><django><django-models>
2025-01-09 09:40:24
1
2,387
Nahidujjaman Hridoy
79,341,901
4,907,305
How to enter a multi-line command in %debug in a python Jupyter notebook?
<p>How to enter a multi-line command in %debug in a python Jupyter notebook?</p> <p>If you try enter, ctrl+enter, shift+enter, alt+enter the first line will be executed, instead of going to a second line.</p>
<python><debugging><jupyter-notebook><ipdb>
2025-01-09 08:33:31
1
384
Jens Wagemaker
79,341,886
1,045,755
Pandera validation on dynamic/unknown columns
<p>I am trying to use Pandera to validate at least some of my dataframe.</p> <p>As an example, I have one that only contains a date column, and then some columns which I don't really know in advance, at least not their names. However, I know that every column should contain floats or NaN's.</p> <p>So how do I validate that dataframe?</p> <p>For example my DataFrameModel class looks like this:</p> <pre><code>class OutputSchema(pa.DataFrameModel): date: Series[dt.datetime] </code></pre> <p>And then I will have, let's say, 10 other columns I don't know the names of, but I know that they should contain floats. How would that look if I, for example, use pa.check_types such as:</p> <pre><code>@pa.check_types(lazy=True) def get_data(self) -&gt; DataFrame[OutputSchema]: df = get_df() return df </code></pre> <p>So right now I can have 100 other columns beside the date column which are not validated. So in theory they could contain stuff I don't want. But as stated, I know that every other column should contain floats or NaNs. So how do I make a general validation on columns I don't know in advance?</p>
<python><pandera>
2025-01-09 08:26:03
1
2,615
Denver Dang
79,341,787
14,739,428
Threads not running correctly in Celery
<p>here is the testing code:</p> <pre><code>@app.task(base=AbortableServerSideContextTask) def test_watch_dog(): from common.utils.test_with_heartbeat import TestWithWatchDog with TestWithWatchDog(): times = 0 while times &lt; 10: from flask import current_app current_app.logger.info(f'main thread counting times:{times}') times += 1 time.sleep(2) # processing </code></pre> <pre><code>import threading from flask import current_app class TestWithWatchDog: def __init__(self): self.watch_dog = threading.Thread(target=self._watch_dog, daemon=False) self.watch_dog_shutdown_event = threading.Event() self.logger = current_app.logger def __enter__(self): self.logger.info(&quot;TestWithHeartBeat entered&quot;) self.watch_dog.start() def __exit__(self, exc_type, exc_val, exc_tb): self.watch_dog_shutdown_event.set() self.watch_dog.join() current_app.logger.info(&quot;TestWithHeartBeat exited&quot;) def _watch_dog(self): self.logger.info('watch_dog started') while not self.watch_dog_shutdown_event.is_set(): self.logger.info(f'watch_dog heartbeat!') self.watch_dog_shutdown_event.wait(timeout=3) self.logger.info('watch_dog stopped') </code></pre> <p>I'm trying to run a Celery instance to process some async task, I need a heartbeat thread to check if the task is alive.</p> <p>starting by command: <code>celery -A cell worker -l debug --pool gevent --concurrency=500</code></p> <p>When I use sleep to block the main thread and simulate a time-consuming operation, the heartbeat thread can run normally and print 'watch_dog heartbeat!'. 
However, when actually executing the time-consuming business logic, the heartbeat thread cannot run unless I manually insert commands like time.sleep in the code to make the main thread pause.</p> <p>This is a big problem: if the time-consuming task takes too long to complete and the heartbeat doesn't arrive in time, I'll terminate the task from another service.</p> <p>How can I ensure that the daemon thread runs normally at the time it is supposed to execute?</p>
<python><python-3.x><flask><celery>
2025-01-09 07:57:18
0
301
william
79,341,326
2,896,120
Semantic Kernel Plugin Function: Arguments must be in JSON format
<p>I'm using semantic kernel and azure open ai as my search. I have this plugin I created:</p> <pre><code>class QuerySQLTablesPlugin: @kernel_function( description=&quot;Get the schema of the table to create the query the user needs&quot;) async def fetch_schema_async(self, table_name: str): table_name = table_name.split(&quot;.&quot;)[-1] return await sync_to_async(self.fetch_schema_sync)(table_name) @kernel_function( description=&quot;Takes in a query to be executed in the database&quot;) async def execute_query(self, query_to_execute: str): return await sync_to_async(self.fetch_query_sync)(query_to_execute) def fetch_schema_sync(self, table_name): with connections['default'].cursor() as cursor: cursor.execute( f&quot;SELECT COLUMN_NAME, DATA_TYPE FROM INFORMATION_SCHEMA.COLUMNS WHERE TABLE_NAME = '{table_name}'&quot; ) schema = cursor.fetchall() print(f&quot;Fetched schema: {schema}&quot;) return schema def fetch_query_sync(self, query_to_execute: str): with connections['default'].cursor() as cursor: cursor.execute(query_to_execute) result = cursor.fetchall() results = [dict(row) for row in result] print(f&quot;Query results: {results}&quot;) return json.dumps(results) </code></pre> <p>The <code>fetch_schema_async</code> function gets called successfully, however, for the <code>execute_query</code> function, it gives me the following error:</p> <p><code>QuerySQLTables-execute_query: Function Call arguments are not valid JSON.. Trying tool call again.</code></p> <p>Even though, I'm able to see the query_to_execute, it looks like this: <code> {'name': 'QuerySQLTables-execute_query', 'arguments': '{\n &quot;query_to_execute&quot;: &quot;SELECT is_staff FROM dbo.user WHERE username = \'Bob\'&quot;\n }'}}]}, {'role': 'tool', 'content': 'The tool call arguments are malformed. Arguments must be in JSON format. Please try again.', 'tool_call_id': 'call_1BcXUS4WUdozITfhWmY1BuGB'},</code></p> <p>Why am I getting this error? The function is not executing at all.</p>
<python><openai-api><semantic-kernel>
2025-01-09 03:45:09
0
2,960
user2896120
79,341,136
3,137,388
Flask is closing https connections even after setting WSGIRequestHandler.protocol_version to http/1.1
<p>We need to write a flask application which should not close the connection after serving the client. Below is the python code</p> <pre><code>from flask import Flask, request, make_response, jsonify from werkzeug.serving import WSGIRequestHandler app = Flask(__name__) @app.route('/v1') def get_availability(): response = make_response(&quot;Custom Response&quot;, 204) return response @app.route('/v2') def get_ping(): response = make_response(&quot;Custom Response&quot;, 200) return response @app.errorhandler(404) def not_found(error): return jsonify({'error': 'Custom message for unavailable path'}), 404 if __name__ == &quot;__main__&quot;: WSGIRequestHandler.protocol_version = &quot;HTTP/1.1&quot; app.run(ssl_context=('cert.pem', 'key.pem'), port=8080) </code></pre> <p>I used the below curl command to send a request.</p> <pre><code>curl --cert cert.pem --key key.pem https://127.0.0.1:8080/v1 -v -k </code></pre> <p>It is working and I am getting the response, but the connection is not kept alive.</p> <p>Below is the curl command output</p> <pre><code>* Trying 127.0.0.1:8080... * Connected to 127.0.0.1 (127.0.0.1) port 8080 * ALPN: curl offers h2,http/1.1 * TLSv1.3 (OUT), TLS handshake, Client hello (1): * TLSv1.3 (IN), TLS handshake, Server hello (2): * TLSv1.3 (IN), TLS handshake, Encrypted Extensions (8): * TLSv1.3 (IN), TLS handshake, Certificate (11): * TLSv1.3 (IN), TLS handshake, CERT verify (15): * TLSv1.3 (IN), TLS handshake, Finished (20): * TLSv1.3 (OUT), TLS change cipher, Change cipher spec (1): * TLSv1.3 (OUT), TLS handshake, Finished (20): * SSL connection using TLSv1.3 / TLS_AES_256_GCM_SHA384 / X25519 / RSASSA-PSS * ALPN: server did not agree on a protocol. Uses default. * Server certificate: * subject: C=XX; L=Default City; O=Default Company Ltd * start date: Jan 8 22:43:00 2025 GMT * expire date: Jan 8 22:43:00 2026 GMT * issuer: C=XX; L=Default City; O=Default Company Ltd * SSL certificate verify result: self-signed certificate (18), continuing anyway. 
* Certificate level 0: Public key type RSA (4096/152 Bits/secBits), signed using sha256WithRSAEncryption * using HTTP/1.x &gt; GET /v1 HTTP/1.1 &gt; Host: 127.0.0.1:8080 &gt; User-Agent: curl/8.5.0 &gt; Accept: */* &gt; * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * TLSv1.3 (IN), TLS handshake, Newsession Ticket (4): * old SSL session ID is stale, removing &lt; HTTP/1.1 204 NO CONTENT &lt; Server: Werkzeug/3.1.3 Python/3.11.9 &lt; Date: Thu, 09 Jan 2025 01:12:02 GMT &lt; Content-Type: text/html; charset=utf-8 &lt; Connection: close &lt; * Closing connection * TLSv1.3 (OUT), TLS alert, close notify (256): </code></pre> <p>Through tcpdump, I verified and it is infact flask application is sending reset connection.</p> <pre><code>ip-10-10-10-10:100$ sudo tcpdump -i lo port 8080 dropped privs to tcpdump tcpdump: verbose output suppressed, use -v[v]... for full protocol decode listening on lo, link-type EN10MB (Ethernet), snapshot length 262144 bytes 01:00:01.142531 IP localhost.47782 &gt; localhost.webcache: Flags [S], seq 3077322951, win 65495, options [mss 65495,sackOK,TS val 3930690793 ecr 0,nop,wscale 7], length 0 01:00:01.142547 IP localhost.webcache &gt; localhost.47782: Flags [S.], seq 2243081118, ack 3077322952, win 65483, options [mss 65495,sackOK,TS val 3930690793 ecr 3930690793,nop,wscale 7], length 0 01:00:01.142561 IP localhost.47782 &gt; localhost.webcache: Flags [.], ack 1, win 512, options [nop,nop,TS val 3930690793 ecr 3930690793], length 0 01:00:01.145999 IP localhost.47782 &gt; localhost.webcache: Flags [P.], seq 1:518, ack 1, win 512, options [nop,nop,TS val 3930690796 ecr 3930690793], length 517: HTTP 01:00:01.146012 IP localhost.webcache &gt; localhost.47782: Flags [.], ack 518, win 508, options [nop,nop,TS val 3930690796 ecr 3930690796], length 0 01:00:01.156443 IP localhost.webcache &gt; localhost.47782: Flags [P.], seq 1:2198, ack 518, win 512, options [nop,nop,TS val 3930690807 ecr 3930690796], length 2197: HTTP 01:00:01.156465 IP 
localhost.47782 &gt; localhost.webcache: Flags [.], ack 2198, win 499, options [nop,nop,TS val 3930690807 ecr 3930690807], length 0 01:00:01.157599 IP localhost.47782 &gt; localhost.webcache: Flags [P.], seq 518:598, ack 2198, win 512, options [nop,nop,TS val 3930690808 ecr 3930690807], length 80: HTTP 01:00:01.157743 IP localhost.webcache &gt; localhost.47782: Flags [P.], seq 2198:2453, ack 598, win 512, options [nop,nop,TS val 3930690808 ecr 3930690808], length 255: HTTP 01:00:01.157786 IP localhost.47782 &gt; localhost.webcache: Flags [P.], seq 598:722, ack 2453, win 511, options [nop,nop,TS val 3930690808 ecr 3930690808], length 124: HTTP 01:00:01.157798 IP localhost.webcache &gt; localhost.47782: Flags [P.], seq 2453:2708, ack 722, win 512, options [nop,nop,TS val 3930690808 ecr 3930690808], length 255: HTTP 01:00:01.169686 IP localhost.webcache &gt; localhost.47782: Flags [FP.], seq 2708:2940, ack 722, win 512, options [nop,nop,TS val 3930690820 ecr 3930690808], length 232: HTTP 01:00:01.169801 IP localhost.47782 &gt; localhost.webcache: Flags [.], ack 2941, win 512, options [nop,nop,TS val 3930690820 ecr 3930690808], length 0 01:00:01.169867 IP localhost.47782 &gt; localhost.webcache: Flags [P.], seq 722:746, ack 2941, win 512, options [nop,nop,TS val 3930690820 ecr 3930690808], length 24: HTTP 01:00:01.169878 IP localhost.webcache &gt; localhost.47782: Flags [R], seq 2243084059, win 0, length 0 01:00:41.470495 IP localhost.59314 &gt; localhost.webcache: Flags [S], seq 2761050291, win 65495, options [mss 65495,sackOK,TS val 3930731121 ecr 0,nop,wscale 7], length 0 01:00:41.470509 IP localhost.webcache &gt; localhost.59314: Flags [S.], seq 3096878784, ack 2761050292, win 65483, options [mss 65495,sackOK,TS val 3930731121 ecr 3930731121,nop,wscale 7], length 0 01:00:41.470521 IP localhost.59314 &gt; localhost.webcache: Flags [.], ack 1, win 512, options [nop,nop,TS val 3930731121 ecr 3930731121], length 0 01:00:41.473914 IP localhost.59314 &gt; 
localhost.webcache: Flags [P.], seq 1:518, ack 1, win 512, options [nop,nop,TS val 3930731124 ecr 3930731121], length 517: HTTP 01:00:41.473927 IP localhost.webcache &gt; localhost.59314: Flags [.], ack 518, win 508, options [nop,nop,TS val 3930731124 ecr 3930731124], length 0 01:00:41.482397 IP localhost.webcache &gt; localhost.59314: Flags [P.], seq 1:2198, ack 518, win 512, options [nop,nop,TS val 3930731133 ecr 3930731124], length 2197: HTTP 01:00:41.482413 IP localhost.59314 &gt; localhost.webcache: Flags [.], ack 2198, win 499, options [nop,nop,TS val 3930731133 ecr 3930731133], length 0 01:00:41.483329 IP localhost.59314 &gt; localhost.webcache: Flags [P.], seq 518:598, ack 2198, win 512, options [nop,nop,TS val 3930731134 ecr 3930731133], length 80: HTTP 01:00:41.483483 IP localhost.webcache &gt; localhost.59314: Flags [P.], seq 2198:2453, ack 598, win 512, options [nop,nop,TS val 3930731134 ecr 3930731134], length 255: HTTP 01:00:41.483509 IP localhost.59314 &gt; localhost.webcache: Flags [P.], seq 598:735, ack 2453, win 511, options [nop,nop,TS val 3930731134 ecr 3930731134], length 137: HTTP 01:00:41.483539 IP localhost.webcache &gt; localhost.59314: Flags [P.], seq 2453:2708, ack 735, win 511, options [nop,nop,TS val 3930731134 ecr 3930731134], length 255: HTTP 01:00:41.495010 IP localhost.webcache &gt; localhost.59314: Flags [FP.], seq 2708:2891, ack 735, win 512, options [nop,nop,TS val 3930731145 ecr 3930731134], length 183: HTTP 01:00:41.495079 IP localhost.59314 &gt; localhost.webcache: Flags [.], ack 2892, win 512, options [nop,nop,TS val 3930731145 ecr 3930731134], length 0 01:00:41.495176 IP localhost.59314 &gt; localhost.webcache: Flags [P.], seq 735:759, ack 2892, win 512, options [nop,nop,TS val 3930731145 ecr 3930731134], length 24: HTTP 01:00:41.495185 IP localhost.webcache &gt; localhost.59314: Flags [R], seq 3096881676, win 0, length 0 </code></pre> <p>Flags in last line of tcpdump is [R] which is reset connection. 
Can anyone please help me understand why flask is closing the connection even after setting <strong>WSGIRequestHandler.protocol_version</strong> to <strong>HTTP/1.1</strong> in the code, and is there any way to fix this?</p>
<python><flask>
2025-01-09 01:19:58
0
5,396
kadina
79,341,135
11,505,680
Python initialization with multiple inheritance
<p>I have the following class hierarchy. The goal is for the calling code to choose either a base <code>Foo</code> object or a <code>Foobar</code> object that also provides the additional <code>Bar</code> functionality.</p> <pre class="lang-py prettyprint-override"><code>class Foo: def __init__(self): self.foo = 'foo' class Bar: def __init__(self, attr): self.attr = attr def bar(self): print(self.attr + 'bar') class Foobar(Foo, Bar): def __init__(self): super().__init__(self.foo) </code></pre> <p>But when I try to run it:</p> <pre class="lang-py prettyprint-override"><code>&gt;&gt;&gt; fb = Foobar() AttributeError: 'Foobar' object has no attribute 'foo' </code></pre> <p>What's the right way to initialize <code>Foobar</code>? I've read a number of articles and SO posts about initialization with multiple inheritance, but none where one base constructor requires a property of the other base class.</p> <p>EDIT:</p> <p>My actual use case is derived from <a href="https://www.pythonguis.com/tutorials/pyside6-plotting-matplotlib/" rel="nofollow noreferrer">https://www.pythonguis.com/tutorials/pyside6-plotting-matplotlib/</a>. &quot;<code>Bar</code>&quot; is actually <code>FigureCanvasQTAgg</code> and &quot;<code>Foobar</code>&quot; corresponds to <code>MplCanvas</code>. <code>Foobar</code> must derive from <code>FigureCanvasQTAgg</code> because the <code>Foobar</code> object will be passed to a bunch of PySide6 code that uses attributes I don't know about. I'm trying to break out the regular <code>matplotlib</code> code into another base class (<code>Foo</code>) so I can make an alternate front end that doesn't use PySide6, but there may be a different way to achieve this goal.</p> <p>EDIT 2:</p> <p>Looks like the whole approach may be flawed. It would certainly be less taxing for my little brain to create <code>foo</code> in a separate function before trying to instantiate either a <code>Foo</code> or a <code>Foobar</code>.</p>
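For the hierarchy as posted, one way out (a sketch, not necessarily the cleanest design) is to call the base initializers explicitly, ordering them so that `self.foo` exists before `Bar.__init__` needs it:

```python
class Foo:
    def __init__(self):
        self.foo = 'foo'

class Bar:
    def __init__(self, attr):
        self.attr = attr
    def bar(self):
        print(self.attr + 'bar')

class Foobar(Foo, Bar):
    def __init__(self):
        Foo.__init__(self)            # sets self.foo first
        Bar.__init__(self, self.foo)  # safe: self.foo now exists

fb = Foobar()
fb.bar()  # prints 'foobar'
```

Explicit base calls sidestep `super()`'s single MRO chain, which is awkward here because the two bases take different constructor arguments.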
<python><multiple-inheritance><super>
2025-01-09 01:18:36
3
645
Ilya
79,341,058
9,686,427
Counting the hashtags in a collection of tweets: two methods with inconsistent results
<p>I'm playing around with a numpy dataframe containing two columns: 'tweet_text' and 'cyberbullying_type'. It was created through <a href="https://www.kaggle.com/datasets/saurabhshahane/cyberbullying-dataset" rel="nofollow noreferrer">this dataset</a> as follows:</p> <p><code>df = pd.read_csv('data/cyberbullying_tweets.csv')</code></p> <p>I'm currently trying to count the total number of hashtags used in each 'cyberbullying_type' using two different methods, each of which -I think- counts duplicates. However, each method gives me a different answer:</p> <hr /> <h2>First Method:</h2> <pre><code>import re # Define the pattern for valid hashtags hashtag_pattern = r'#[A-Za-z0-9]+' # Function to count the total number of hashtags in a dataframe def count_total_hashtags(dataframe): return dataframe['tweet_text'].str.findall(hashtag_pattern).apply(len).sum() for category in df['cyberbullying_type'].unique(): count = count_total_hashtags(df[df['cyberbullying_type'] == category]) print(f&quot;Number of hashtags in all tweets for the '{category}' category: {count}&quot;) </code></pre> <p>Output: <code>'not_cyberbullying': 3265, 'gender': 2691, 'religion': 1798, 'other_cyberbullying': 1625, 'age': 728, 'ethnicity': 1112,</code></p> <hr /> <h2>Second Method:</h2> <p>The next method is more manual:</p> <pre><code>def count_hashtags_by_category(dataframe): hashtag_counts = {} for category in dataframe['cyberbullying_type'].unique(): # Filter tweets by category category_tweets = dataframe[dataframe['cyberbullying_type'] == category] # Count hashtags in each tweet hashtag_counts[category] = category_tweets['tweet_text'].apply( lambda text: sum(1 for word in text.split() if word.startswith('#') and word[1:].isalnum()) ).sum() return hashtag_counts # Count hashtags for each category hashtags_per_category = count_hashtags_by_category(df) print(hashtags_per_category) </code></pre> <p>The output: <code>{'not_cyberbullying': 3018, 'gender': 2416, 'religion': 1511, 
'other_cyberbullying': 1465, 'age': 679, 'ethnicity': 956}</code></p> <hr /> <p>Why do the answers differ?</p>
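The gap comes from tokens the two predicates judge differently: the regex matches a leading alphanumeric run even when punctuation follows, matches hashtags embedded mid-word, and stops at underscores, while the `split`-based check requires the *entire* whitespace-delimited token after `#` to be alphanumeric. A small illustration (the sample text is made up):

```python
import re

pattern = r'#[A-Za-z0-9]+'
text = "great game #win! also#nested #snake_case"

# Regex: finds '#win' (ignores the '!'), '#nested' (embedded), '#snake' (stops at '_')
regex_count = len(re.findall(pattern, text))

# Split-based: '#win!' fails isalnum(), 'also#nested' doesn't start with '#',
# '#snake_case' fails isalnum() because of the underscore
split_count = sum(1 for w in text.split()
                  if w.startswith('#') and w[1:].isalnum())

print(regex_count, split_count)  # 3 0
```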
<python><dataframe><numpy><numpy-ndarray><python-re>
2025-01-09 00:17:27
2
484
Sam
79,340,961
511,302
Google credentials not getting a refresh token?
<p>I have tried to access gmail through the google oauth2 client. However I cannot seem to make it work:</p> <pre><code>def readEmails(): &quot;&quot;&quot;Shows basic usage of the Gmail API. Lists the user's Gmail labels. &quot;&quot;&quot; creds = None # The file token.json stores the user's access and refresh tokens, and is # created automatically when the authorization flow completes for the first # time. if os.path.exists('token.json'): creds = Credentials.from_authorized_user_file('token.json', SCOPES) # If there are no (valid) credentials available, let the user log in. if not creds or not creds.valid: if creds and creds.expired and creds.refresh_token: creds.refresh(Request()) else: flow = InstalledAppFlow.from_client_secrets_file( # your creds file here. Please create json file as here https://cloud.google.com/docs/authentication/getting-started credential_path, SCOPES) creds = flow.run_local_server(port=4000) # Save the credentials for the next run with open('token.json', 'w') as token: token.write(creds.to_json()) try: # Call the Gmail API service = build('gmail', 'v1', credentials=creds) results = service.users().messages().list(userId='me', labelIds=['INBOX'], q=&quot;is:unread&quot;).execute() messages = results.get('messages',[]); if not messages: print('No new messages.') else: print('would read') return except Exception as error: print(f'An error occurred: {error}') </code></pre> <p>It runs fine for the first run, getting all the information I wish. However the second run, where it would read the token file it gives the error:</p> <pre><code>ValueError: Authorized user info was not in the expected format, missing fields refresh_token. </code></pre> <hr /> <p>if I look into the json I <em>indeed</em> do not see any refresh token part of the json.</p> <p>I notice this already happens in the &quot;from_authorized_user_file&quot;, indeed the refresh token isn't even sent initially.</p>
<python><google-oauth>
2025-01-08 23:07:42
0
9,627
paul23
79,340,945
20,295,949
Selenium Twitter Scraper Closes Immediately – Not Detecting New Tweets
<p>I'm trying to write a Selenium script that scrapes Twitter for new tweets from a specific user after the script starts running. The goal is to print and save tweets to a CSV if they are posted after the script begins execution.</p> <p>Here’s the code I’m working with:</p> <pre><code>from selenium import webdriver from selenium.webdriver.common.by import By from selenium.webdriver.support.ui import WebDriverWait from selenium.webdriver.support import expected_conditions as EC import pandas as pd import time from datetime import datetime, timezone class TwitterScraper: def __init__(self, username, max_scrolls=10): self.username = username self.max_scrolls = max_scrolls # Set start time to timezone-aware datetime self.start_time = datetime.now(timezone.utc) options = webdriver.ChromeOptions() options.add_argument(&quot;--disable-gpu&quot;) options.add_argument(&quot;--no-sandbox&quot;) options.add_argument(&quot;--disable-dev-shm-usage&quot;) self.driver = webdriver.Chrome(options=options) self.url = f'https://twitter.com/{username}' self.last_tweet_time = None def start(self): self.driver.get(self.url) scroll_attempts = 0 while scroll_attempts &lt; self.max_scrolls: try: WebDriverWait(self.driver, 10).until( EC.presence_of_element_located((By.CSS_SELECTOR, &quot;article&quot;)) ) tweets = self.driver.find_elements(By.CSS_SELECTOR, &quot;article&quot;) for tweet in tweets: timestamp_element = tweet.find_element(By.TAG_NAME, &quot;time&quot;) tweet_time = datetime.fromisoformat(timestamp_element.get_attribute(&quot;datetime&quot;)) # Ensure tweet_time is timezone-aware if tweet_time.tzinfo is None: tweet_time = tweet_time.replace(tzinfo=timezone.utc) else: tweet_time = tweet_time.astimezone(timezone.utc) if tweet_time &gt; self.start_time: tweet_text = tweet.text print(&quot;New Tweet Detected:&quot;) print(tweet_text) # Save to CSV df = pd.DataFrame([{ &quot;text&quot;: tweet_text, &quot;time&quot;: tweet_time }]) df.to_csv('latest_tweet.csv', mode='a', 
header=False, index=False) print(&quot;Latest tweet saved to latest_tweet.csv&quot;) # Scroll down self.driver.execute_script(&quot;window.scrollTo(0, document.body.scrollHeight);&quot;) time.sleep(5) scroll_attempts += 1 except Exception as e: print(f&quot;Error: {e}&quot;) scroll_attempts += 1 self.driver.quit() if __name__ == &quot;__main__&quot;: scraper = TwitterScraper(&quot;FabrizioRomano&quot;, max_scrolls=20) scraper.start() </code></pre> <p>Issue: When I run the script, the Chrome browser opens but closes after a few seconds without printing or detecting any tweets. I plan to switch to headless mode later, but for now, I just need the core functionality to work. The output is empty.</p> <p>I suspect the issue is related to either:</p> <p>How tweets are detected (perhaps the article or time selectors are incorrect or not being found).</p> <p>The scrolling logic or the start_time comparison. Would appreciate any guidance on why the browser closes so quickly without scraping tweets and if the approach for detecting tweets posted after the script starts is correct.</p>
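One easy-to-miss failure mode, unrelated to Selenium itself: the `datetime` attribute of Twitter's `<time>` elements is typically an ISO-8601 string with a trailing `Z` (the sample value below is made up), and `datetime.fromisoformat` only accepts the `Z` suffix from Python 3.11 onward. On older interpreters the parse raises inside the `try` block, the exception is swallowed, and a scroll attempt is silently burned. A hedged sketch of a tolerant parse:

```python
from datetime import datetime, timezone

# Hypothetical value of timestamp_element.get_attribute("datetime")
raw = "2025-01-08T22:52:57.000Z"

# Python < 3.11 cannot parse a trailing "Z"; normalise it to an offset first
tweet_time = datetime.fromisoformat(raw.replace("Z", "+00:00"))
if tweet_time.tzinfo is None:
    tweet_time = tweet_time.replace(tzinfo=timezone.utc)
```

If the comparison against `start_time` still never fires, printing each parsed `tweet_time` next to `start_time` should show quickly whether parsing or selection is the problem.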
<python><selenium-webdriver><web-scraping><xpath><css-selectors>
2025-01-08 22:52:57
0
319
HamidBee
79,340,748
5,125,230
Regex for a particular character in an unpaired tag
<p>I'm trying to find a character within an unbalanced pair of tags. I can identify it within a matched set, and when the matching pairs are a single character each, but I can't seem to hit on the syntax to find it within an unmatched set when the pairs are multiple characters each. I've tried several combinations of lookaheads without success.</p> <p>The tags are <code>&lt;lsq&gt;</code> and <code>&lt;rsq&gt;</code>; one of each is a matching pair. A line can have neither, one or more pairs, and/or one or more unmatched pairs, i.e. <code>&lt;lsq&gt;</code> without a matching <code>&lt;rsq&gt;</code>. (Although it's theoretically possible to have an <code>&lt;rsq&gt;</code> without a <code>&lt;lsq&gt;</code>, I haven't run across any and I'm not concerned with them.)</p> <p>I'm trying to find instances of a ’ (right single quote) in an unmatched pair, i.e. after an <code>&lt;lsq&gt;</code> that has no corresponding <code>&lt;rsq&gt;</code>. This might be because EOL is reached before an <code>&lt;rsq&gt;</code>, or because another <code>&lt;lsq&gt;</code> occurs first.</p> <p>Sample data:</p> <pre><code>&lt;p&gt;&lt;lsq&gt;Line one, matched one,&lt;rsq&gt;&lt;/p&gt; &lt;p&gt;&lt;lsq&gt;Line two, unmatched’ one. &lt;lsq&gt;Line two, matched’ pair one.&lt;rsq&gt;&lt;/p&gt; &lt;p&gt;Line three, ’fore no tag.&lt;/p&gt; &lt;p&gt;Line four, ’fore first tag. &lt;lsq&gt;Line four, unmatched one’.&lt;/p&gt; &lt;p&gt;Line five free text before. &lt;lsq&gt;Line five, matched one,&lt;rsq&gt; &lt;lsq&gt;line five, ’matched two.&lt;rsq&gt; Line five’ free text after.&lt;/p&gt; &lt;p&gt;&lt;lsq&gt;Line six matched one&lt;rsq&gt;, line six free text! &lt;lsq&gt;Line six matched two hittin’ and sittin’ and goin’ on forever.&lt;rsq&gt;&lt;/p&gt; &lt;p&gt;&lt;lsq&gt;Line seven unmatched’ one.&lt;/p&gt; &lt;p&gt;Line eight free text. 
Line &lt;lsq&gt;eight’ unmatched one, &lt;lsq&gt;unmatched’ two.&lt;/p&gt; </code></pre> <p>The regex should match only the ’ in line two (unmatched’ one only), line four (unmatched one’ only), line seven (unmatched’), and the first one (eight’) in line eight (I can run it multiple times to find the next one(s)). (I included the words here to make it clear where the match is; but I'm only looking for the ’ itself.)</p> <p>This is in python, with the regex to be part of a search and replace, e.g.</p> <pre><code>regex.sub(r&quot;regex&quot;, r&quot;&lt;tag&gt;&quot;, text_being_processed) </code></pre> <p>This assumes only the ’ is matched; if it's easier with capture groups, I can adjust the replace as necessary.</p> <p>Ignoring EOL for a moment, I tried looking for text between two <code>&lt;lsq&gt;</code>'s without an intervening <code>&lt;rsq&gt;</code>, but I am obviously not handling the negative lookahead correctly:</p> <pre><code>(?&lt;=&lt;lsq&gt;)(?!&lt;rsq&gt;).*?(?=&lt;lsq&gt;) </code></pre> <p>It does find consecutive <code>&lt;lsq&gt;</code>s, but even ones with an <code>&lt;rsq&gt;</code> in-between. I tried moving the <code>&lt;rsq&gt;</code> lookahead around and a few other things, but none of it was correct. And that's before trying to find a particular character inside the unmatched pair. I've searched both SO and the web in general for similar examples, but haven't been able to find one.</p>
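One way to see why the attempted pattern matches across an `<rsq>`: the lone `(?!<rsq>)` is evaluated exactly once, at the position immediately after the lookbehind, and never again while `.*?` consumes characters. A "tempered dot" re-tests the lookahead before every character consumed. This is only a sketch of that distinction on a toy string, not a full solution to the quote-matching problem:

```python
import re

text = "<lsq>matched<rsq> tail <lsq>"

# Lookahead fires once (at 'm'), then .*? happily walks over <rsq>
naive = re.search(r"(?<=<lsq>)(?!<rsq>).*?(?=<lsq>)", text)

# Tempered dot: re-check "no tag starts here" before consuming each character,
# so a region containing <rsq> (or another <lsq>) can never be crossed
tempered = re.search(r"(?<=<lsq>)(?:(?!<[lr]sq>).)*?(?=<lsq>)", text)
```

Here `naive` matches right through the `<rsq>`, while `tempered` correctly finds nothing, since the only `<lsq>`-to-`<lsq>` region contains an `<rsq>`.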
<python><python-3.x><regex>
2025-01-08 21:03:23
1
614
vr8ce
79,340,503
20,295,949
How to Adjust Nitter Scraper to Print New Tweets in Real-Time?
<p>I'm using the ntscraper library to fetch tweets from a specific user. Currently, the script fetches the most recent tweet, but it only pulls pre-existing tweets at the time the script runs. Here's the code I'm using:</p> <pre><code>from ntscraper import Nitter import pandas as pd # Initialize the scraper scraper = Nitter() # Fetch the most recent tweet (limit to 1) tweets_data = scraper.get_tweets(&quot;Vader_AI_&quot;, mode='user', number=1) # Extract the latest tweet if tweets_data and 'tweets' in tweets_data and len(tweets_data['tweets']) &gt; 0: latest_tweet = tweets_data['tweets'][0] # First tweet is the most recent print(&quot;Latest Tweet:&quot;) print(f&quot;Text: {latest_tweet['text']}&quot;) print(f&quot;Link: {latest_tweet['link']}&quot;) # Optional: Save to CSV df = pd.DataFrame([latest_tweet]) df.to_csv('latest_tweet.csv', index=False) print(&quot;Latest tweet saved to latest_tweet.csv&quot;) else: print(&quot;No tweets found.&quot;) </code></pre> <p>Is there a way to adjust this so that it continuously monitors the Twitter page and prints a new tweet in real-time as soon as it is posted? Essentially, I'd like the script to wait and detect new tweets instead of fetching older ones.</p> <p><strong>Would something like Selenium or Scrapy be necessary, or can this be achieved with ntscraper alone?. I'm trying to avoid APIs.</strong></p> <p>Any suggestions on the best approach to implement this would be greatly appreciated.</p> <p>Thank you.</p>
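Whatever fetch backend ends up being used, the "wait for new tweets" part is just a polling loop keyed on what has already been seen. A backend-agnostic sketch, where `fake_fetch` is a stand-in for a real ntscraper or Selenium call (the helper names are made up for illustration):

```python
import time

def poll_for_new(fetch_latest, seen, polls, interval=0.0):
    """Call fetch_latest() repeatedly; collect items not seen before."""
    new_items = []
    for _ in range(polls):
        latest = fetch_latest()
        if latest is not None and latest not in seen:
            seen.add(latest)
            new_items.append(latest)  # a real script would print/save here
        time.sleep(interval)
    return new_items

# Stand-in fetcher that "posts" a new tweet on the third poll
_calls = {"n": 0}
def fake_fetch():
    _calls["n"] += 1
    return "old tweet" if _calls["n"] < 3 else "new tweet"
```

With a real backend, `interval` would be seconds (or minutes) between requests, and `seen` could be seeded with the tweets present when the script starts.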
<python><selenium-webdriver><web-scraping><scrapy>
2025-01-08 19:14:13
1
319
HamidBee
79,340,487
5,312,606
Properly re-expose submodule (or is this a bug in pylance)
<p>I am working on a python package <code>chemcoord</code> with several subpackages, some of whom should be exposed to the root namespace.</p> <p>The repository is <a href="https://github.com/mcocdawc/chemcoord" rel="nofollow noreferrer">here</a>, the relevant <code>__init__.py</code> file is <a href="https://github.com/mcocdawc/chemcoord/blob/master/src/chemcoord/__init__.py" rel="nofollow noreferrer">here</a>.</p> <p>For example there is a <code>chemcoord.cartesian_coordinates.xyz_functions</code> that should be accessible as <code>chemcoord.xyz_functions</code></p> <p>Accessible, in particular, means that the user should be able to write:</p> <pre class="lang-py prettyprint-override"><code>from chemcoord.xyz_functions import allclose </code></pre> <p>If I write in my <code>__init__.py</code></p> <pre class="lang-py prettyprint-override"><code>import chemcoord.cartesian_coordinates.xyz_functions as xyz_functions </code></pre> <p>then I can use <code>chemcoord.xyz_functions</code> in the code, but I cannot do</p> <pre class="lang-py prettyprint-override"><code>from chemcoord.xyz_functions import allclose </code></pre> <p>If I do the additional ugly/hacky (?) 
trick of modifying <code>sys.modules</code> in the <code>__init__.py</code> as in</p> <pre class="lang-py prettyprint-override"><code>import sys sys.modules[&quot;chemcoord.xyz_functions&quot;] = xyz_functions </code></pre> <p>then I can write</p> <pre class="lang-py prettyprint-override"><code>from chemcoord.xyz_functions import allclose </code></pre> <p>But it feels ugly and hacky.</p> <p>Recently I got warnings from <code>PyLance</code> about</p> <blockquote> <p>Import &quot;chemcoord.xyz_functions&quot; could not be resolved</p> </blockquote> <p>Which leads to my two questions:</p> <ol> <li>Is my approach of reexposing the submodule correct, or is there a cleaner way?</li> <li>If the answer to question 1 is solved and I still get warnings from <code>PyLance</code>, is there a bug in <code>PyLance</code>?</li> </ol>
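For what it's worth, the `sys.modules` trick works because `from X import Y` consults `sys.modules` before the import machinery goes searching for `X`. A self-contained sketch with throwaway module names (no relation to chemcoord's real layout):

```python
import sys
import types

# Stand-ins for a real package and a nested submodule
pkg = types.ModuleType("fakepkg")
sub = types.ModuleType("fakepkg.inner.xyz_functions")
sub.allclose = lambda a, b: abs(a - b) < 1e-9

# Register the alias that the user-facing import should resolve to
sys.modules["fakepkg"] = pkg
sys.modules["fakepkg.xyz_functions"] = sub

# The import system finds the alias in sys.modules and never looks on disk
from fakepkg.xyz_functions import allclose
```

Static checkers resolve imports from the filesystem, not from runtime `sys.modules` state, which is presumably why Pylance flags the alias even though it works at runtime.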
<python><packaging><pyright>
2025-01-08 19:09:33
1
1,897
mcocdawc
79,340,472
2,711,474
How to change drop-down object in Rally
<p>I have yet to figure out how to change any object represented by a drop-down selector on Rally to another value. I can copy an existing one from another User Story or the original Object into a new User Story, but I can't change any of the values in that object.</p> <p>I've tried way too many different methods to list here, including two recommended by Google's AI search that didn't work at all. Any suggestions?</p>
<python><pyral>
2025-01-08 19:03:25
0
425
TesseractE
79,340,141
7,631,505
Issue solving differential equation with solve_ivp depending on t_span
<p>I'm using <code>solve_ivp</code> but I'm getting strange results depending on the settings of the problem, though i think it's more an issue with the implementation of the solver than a coding issue, but I'd love for someone to provide an input. So my code is as follows:</p> <pre><code>import numpy as np from scipy.integrate import solve_ivp import matplotlib.pyplot as plt omega_ir = 1.0 * 2 * np.pi gamma_ir = 0.5 def pulse(t): E0 = 1 w = 0.35 return -E0 * np.exp(-((t - 0) ** 2) / (w**2)) * np.sin((t - 0) / 0.33) def equations(t, y): Q_ir, P_ir = y # Equations of motion dQ_ir = P_ir dP_ir = -(omega_ir**2) * Q_ir - gamma_ir * P_ir + pulse(t) return [dQ_ir, dP_ir] initial_conditions = [0, 0] # Time span for the simulation t_span = (-1, 40) t_eval = np.linspace(t_span[0], t_span[1], 1000) solution = solve_ivp( equations, t_span, initial_conditions, t_eval=t_eval, ) Q_ir = solution.y[0] print(solution.message) fig, ax = plt.subplots() ax.plot(t_eval, Q_ir / max(Q_ir)) ax.plot(t_eval, pulse(t_eval) / max(pulse(t_eval))) ax.set_xlabel(&quot;Time&quot;) ax.set_ylabel(&quot;Normalised intensity Intesnity&quot;) plt.show() </code></pre> <p>So it's a simple pendulum/oscillator problem. if I run this it works without issues, the message is: <code>The solver successfully reached the end of the integration interval.</code> great. The plot is as follows (blue is the position of the oscillator, orange is the initial excitation, both normalised for clarity): <a href="https://i.sstatic.net/izrOIXj8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/izrOIXj8.png" alt="" /></a> Very nice. However if I try to change the <code>t_span=(-15, 40)</code> The plot is as follows: <a href="https://i.sstatic.net/AaHpSw8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AaHpSw8J.png" alt="" /></a> Note that is still normalised, the blue line is actually ~1e-50 or something like that. 
I of course thought it was an issue with sampling, so I tried changing to finer sampling (like 10000 points) without success. This seems to happen if the first time point is earlier than -11.</p> <p>I think this is an issue with the math or the implementation of the solver. I tried changing to other methods, but they give similarly nonsensical (~0) results. I don't know much about the theory of these solvers, so I'd be happy if someone could point me in the direction of solving this issue, or let me know if it is not solvable. Thanks</p>
<python><scipy><ode>
2025-01-08 17:07:37
1
316
mmonti
79,339,965
4,431,798
Why does pyrtools imshow() print a value range that's different from np.min() and np.max()?
<p><strong>quick problem description:</strong></p> <p>pyrtools imshow function giving me different and negative ranges</p> <p><strong>details:</strong></p> <p>im following the tutorial at <a href="https://pyrtools.readthedocs.io/en/latest/tutorials/01_tools.html" rel="nofollow noreferrer">https://pyrtools.readthedocs.io/en/latest/tutorials/01_tools.html</a></p> <p>since i dont have .pgm image, i'm using the below .jpg image.</p> <p><a href="https://i.sstatic.net/zBCBJG5n.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zBCBJG5n.jpg" alt="ein.jpg , input image" /></a></p> <p>here is the modified python code</p> <pre><code>import matplotlib.pyplot as plt import pyrtools as pt import numpy as np import cv2 # Load the JPG image oim = cv2.imread('/content/ein.jpg') print(f&quot;input image shape: {oim.shape}&quot;) # Convert to grayscale if it's an RGB image if len(oim.shape) == 3: # Check if the image has 3 channels (RGB) oim_gray = cv2.cvtColor(oim, cv2.COLOR_BGR2GRAY) else: oim_gray = oim # Already grayscale print(f&quot;grayscale image shape: {oim_gray.shape}&quot;) # Check the range of the oim_gray print(f&quot;value range of oim_gray: {np.min(oim_gray), np.max(oim_gray)}&quot;) # Subsampling imSubSample = 1 im = pt.blurDn(oim_gray, n_levels=imSubSample, filt='qmf9') # Check the range of the subsampled image print(f&quot;value range of im: {np.min(im), np.max(im)}&quot;) # Display the original and subsampled images pt.imshow([oim_gray, im], title=['original (grayscale)', 'subsampled'], vrange='auto2', col_wrap=2); </code></pre> <p>and here is the output</p> <pre><code>input image shape: (256, 256, 3) grayscale image shape: (256, 256) value range of oim_gray: (4, 245) value range of im: (5.43152380173187, 251.90158197570906) </code></pre> <p>with the given image</p> <p><a href="https://i.sstatic.net/woz6FSY8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/woz6FSY8.png" alt="output image" /></a></p> <p>as you can see, from the printouts, 
both images, oim_gray and im, are in a positive range; there are no negative values in either of them.</p> <p>But when checking the output image, I see the range -38 &amp; 170 (see the text over the output image). This doesn't make sense to me. Can you help me understand this?</p>
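If `vrange='auto2'` behaves as its name suggests (clipping the display range to roughly mean ± 2 standard deviations rather than to min/max — worth confirming against the pyrtools documentation), then a negative *display* range for all-positive data is expected whenever the values are spread out. A stdlib-only illustration of that effect, with made-up pixel values:

```python
import statistics

# All-positive "pixel" values, but in two well-separated clusters
vals = [5, 6, 7, 200, 210, 220]

mean = statistics.mean(vals)
std = statistics.pstdev(vals)

# An "auto2"-style display range: mean +/- 2*std
lo, hi = mean - 2 * std, mean + 2 * std
```

The printed range would then describe the colormap scaling, not the actual data values returned by `np.min`/`np.max`.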
<python><image-processing>
2025-01-08 16:09:10
1
441
SoajanII
79,339,759
4,536,981
Custom thumbnail upload succeeds but is not applied to YouTube Shorts
<p>I’m uploading a custom thumbnail for a YouTube Shorts video using the YouTube Data API v3. The API response indicates success, but the uploaded thumbnail is not applied to the video.</p> <pre><code>if thumbnail_path: try: # Verify thumbnail meets requirements img = Image.open(thumbnail_path) logging.info(f&quot;Verifying thumbnail dimensions: {img.size}&quot;) if img.size != (720, 1280): logging.error(f&quot;Invalid thumbnail dimensions: {img.size}. Expected: (720, 1280)&quot;) img.close() return video_id img.close() if os.path.getsize(thumbnail_path) &gt; 2 * 1024 * 1024: logging.error(&quot;Thumbnail file size exceeds 2MB limit&quot;) return video_id # Create MediaFileUpload object for thumbnail thumbnail_media = MediaFileUpload( thumbnail_path, mimetype='image/jpeg', resumable=False ) # Set thumbnail with proper request thumbnail_response = self.youtube.thumbnails().set( videoId=video_id, media_body=thumbnail_media ).execute() if 'items' in thumbnail_response: logging.info(f&quot;Thumbnail successfully set for video {video_id}&quot;) logging.info(f&quot;Original thumbnail dimensions: 720x1280&quot;) logging.info(f&quot;YouTube generated thumbnails: {thumbnail_response['items']}&quot;) else: logging.warning(f&quot;Thumbnail upload response missing items: {thumbnail_response}&quot;) # Wait for thumbnail processing time.sleep(10) # Increased wait time except Exception as e: logging.error(f&quot;Failed to upload thumbnail: {str(e)}&quot;) logging.error(f&quot;Thumbnail path: {thumbnail_path}&quot;) logging.error(f&quot;Video ID: {video_id}&quot;) logging.error(f&quot;Full error: {traceback.format_exc()}&quot;) </code></pre> <p>The uploaded thumbnail meets all the documented requirements:</p> <ol> <li>Dimensions: 1280x720</li> <li>File size: Less than 2MB</li> <li>Format: JPEG</li> </ol> <p>Is this a limitation of YouTube Shorts, or is there something specific I need to do to apply a custom thumbnail to Shorts?</p>
<python><youtube-api><youtube-data-api>
2025-01-08 15:28:31
0
596
Wocugon
79,339,719
6,413,657
Extract Hijri date from excel file and plot it
<p>I have an excel sheet contain the following data:</p> <div class="s-table-container"><table class="s-table"> <thead> <tr> <th>Date</th> <th>counts</th> </tr> </thead> <tbody> <tr> <td>1446/05/25</td> <td>12</td> </tr> <tr> <td>1446/05/26</td> <td>2</td> </tr> <tr> <td>1446/05/26</td> <td>6</td> </tr> <tr> <td>1446/05/26</td> <td>1</td> </tr> <tr> <td>1446/05/26</td> <td>1</td> </tr> <tr> <td>1446/05/26</td> <td>6</td> </tr> <tr> <td>1446/05/27</td> <td>6</td> </tr> <tr> <td>1446/05/27</td> <td>6</td> </tr> <tr> <td>1446/05/28</td> <td>4</td> </tr> <tr> <td>1446/05/28</td> <td>6</td> </tr> <tr> <td>1446/05/29</td> <td>9</td> </tr> </tbody> </table></div> <p>where I want to plot them together. But my problem is I have a duplicated date in x shown in plot. How to make them unique by summing up the counts and show unique date? <a href="https://i.sstatic.net/Ix7fUeWk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ix7fUeWk.png" alt="enter image description here" /></a></p>
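Summing the duplicate dates before plotting is what `groupby` is for. A minimal sketch using an inline frame in place of the Excel file (in practice the frame would come from `pd.read_excel`):

```python
import pandas as pd

# Stand-in for the data read from the Excel sheet
df = pd.DataFrame({
    "Date": ["1446/05/25", "1446/05/26", "1446/05/26", "1446/05/27"],
    "counts": [12, 2, 6, 6],
})

# Collapse duplicate dates by summing their counts; keep Date as a column
daily = df.groupby("Date", as_index=False)["counts"].sum()
```

`daily.plot(x="Date", y="counts")` would then show one point per unique date. Since the Hijri dates are zero-padded `YYYY/MM/DD` strings, the default lexicographic sort that `groupby` applies also happens to be chronological.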
<python><pandas><dataframe>
2025-01-08 15:18:53
1
363
Yousra Gad
79,339,649
9,396,198
Static type checker (mypy, pytype, pyright) complains with dynamically generated `Union`
<p>How does one go about creating a TypeAdapter from a dynamic <code>Union</code> without running into linting errors?</p> <pre class="lang-py prettyprint-override"><code>from typing import Union from pydantic import BaseModel, TypeAdapter class Lion(BaseModel): roar: str class Tiger(BaseModel): roar: str ZOO = { &quot;lion&quot;: Lion, &quot;tiger&quot;: Tiger, # ... } Animal = Union[*[animal for animal in ZOO.values()]] # Error: Invalid type alias: expression is not a valid type AnimalTypeAdapter: TypeAdapter[Animal] = TypeAdapter(Animal) # Error: Variable &quot;Animal&quot; is not valid as a type </code></pre>
<python><python-typing><mypy><pydantic>
2025-01-08 14:58:20
1
1,558
fmagno
79,339,594
13,132,640
How to implement a custom equality comparison which can test pd.DataFrame attributes?
<p>I have a custom dataclass, which is rather large (many attributes, methods). Some attributes are pandas dataframes. The default __eq__ comparison does not work for the attributes which are pandas dataframes. Hence, I started trying to write a custom __eq__ function to handle this. I came up with this, which seems to work:</p> <pre><code> def __eq__(self, other): attribs = [a for a in dir(self) if (not a.startswith('_'))&amp;(callable(self.__getattribute__(a))==False)] for a in attribs: if isinstance(self.__getattribute__(a),pd.DataFrame): same = self.__getattribute__(a).equals(other.__getattribute__(a)) else: same = (self.__getattribute__(a)==other.__getattribute__(a)) if not same: break return same </code></pre> <p>I'm testing by creating a class instance, saving to pickle and then reading that pickle file to a new variable.</p> <p>However, it left me with two questions:</p> <ol> <li><p>I had to add the limitation that no callables are compared, since when I compare the callables I get &quot;false&quot; even though the instances are the same (the error was not clear enough for me to understand why I get &quot;False&quot;). How does the default dataclass __eq__ actually handle this, are callables also ignored?</p> </li> <li><p>I assume there is a faster, vectorized way to check the attributes without having to use this for loop approach, but I don't see exactly how it would work. Any thoughts?</p> </li> </ol>
<python><class><equality>
2025-01-08 14:40:18
0
379
user13132640
79,339,539
10,200,255
Execution Reverted on Base Layer 2 Buy transaction using web3.py
<p>I'm trying to write a service using python and web3.py to execute a &quot;buy&quot; transaction on a <a href="https://basescan.org/address/0xf66dea7b3e897cd44a5a231c61b6b4423d613259" rel="nofollow noreferrer">Base smart contract</a>.</p> <p>I can't figure out why my buy transactions aren't working from my code but seemingly the same transaction is working fine on the UI site for this contract. This is my wallet I've been testing with the failed txns are sent from the code: <a href="https://basescan.org/address/0xf276b155bed843b3daf21847ce25cf3f806bdca0" rel="nofollow noreferrer">https://basescan.org/address/0xf276b155bed843b3daf21847ce25cf3f806bdca0</a></p> <p>My python code to execute the buy looks like this, the transactions are successfully sent and return hashes, but the buy transaction is reverted: (I also tried converting the gas values from str to hex because thats how the txn looked from the website, as well as removing value key)</p> <pre><code>from rest_framework.views import APIView from rest_framework.response import Response from rest_framework import status from .models import Holding import os import sys from web3 import Web3 from eth_account import Account from .config import w3, CONTRACT_ADDRESS, GAS_LIMIT, VIRTUALS_CA from .contract_abi import CONTRACT_ABI from .virtual_protocol_contract import VIRTUAL_ABI from .utils import estimate_gas_price class BuyTokenView(APIView): def post(self, request): data = request.data # Validate input if 'contract_address' not in data: return Response({&quot;error&quot;: &quot;Missing required fields&quot;}, status=status.HTTP_400_BAD_REQUEST) ca = data['contract_address'] amount = data['amount'] private_key = load_environment() account = setup_account(private_key) print(f&quot;Connected with address: {account.address}&quot;) # Execute transaction execute_buy_transaction(account, ca, float(amount)) return Response({&quot;Message&quot;: f&quot;Buying Token {ca}&quot;}, status=status.HTTP_200_OK) def 
load_environment(): &quot;&quot;&quot;Load environment variables.&quot;&quot;&quot; private_key = os.getenv('PRIVATE_KEY') if not private_key: print(&quot;Error: PRIVATE_KEY not found in environment variables&quot;) sys.exit(1) return private_key def setup_account(private_key): &quot;&quot;&quot;Setup account from private key.&quot;&quot;&quot; try: account = Account.from_key(private_key) return account except ValueError as e: print(f&quot;Error setting up account: {e}&quot;) sys.exit(1) def execute_buy_transaction(account, token_address, amount): &quot;&quot;&quot;Execute token purchase transaction.&quot;&quot;&quot; contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI) virtuals_contract = w3.eth.contract(address=VIRTUALS_CA, abi=VIRTUAL_ABI) float_value = float(amount) approve_tokens(account, virtuals_contract, int(amount)) # Prepare transaction transaction = contract.functions.buy(amountIn=int(amount), tokenAddress=token_address).build_transaction({ 'from': account.address, 'gas': hex(GAS_LIMIT), 'maxFeePerGas': hex(estimate_gas_price(w3)), 'maxPriorityFeePerGas': hex(w3.to_wei('2', 'gwei')), 'nonce': hex(w3.eth.get_transaction_count(account.address)), }) transaction.pop('value', None) # Sign and send transaction try: signed_txn = w3.eth.account.sign_transaction(transaction, account.key) tx_hash = w3.eth.send_raw_transaction(signed_txn.raw_transaction) print(f&quot;Transaction sent. 
Hash: {tx_hash.hex()}&quot;) # Wait for transaction receipt print(&quot;Waiting for transaction confirmation...&quot;) receipt = w3.eth.wait_for_transaction_receipt(tx_hash) return receipt except Exception as e: print(f&quot;Error executing transaction: {e}&quot;) return None # Function to approve tokens def approve_tokens(account, contract, amount): nonce = w3.eth.get_transaction_count(account.address) tx = { 'nonce': nonce, 'to': contract.address, 'value': w3.to_wei(0, 'ether'), 'gas': 200000, 'gasPrice': w3.eth.gas_price, 'data': contract.encode_abi(&quot;approve&quot;, args=[account.address, amount]), 'chainId': w3.eth.chain_id } try: signed_tx = w3.eth.account.sign_transaction(tx, account.key) tx_hash = w3.eth.send_raw_transaction(signed_tx.raw_transaction) print(f&quot;Approve Transaction sent. Hash: {tx_hash.hex()}&quot;) # Wait for transaction receipt print(&quot;Waiting for transaction confirmation...&quot;) receipt = w3.eth.wait_for_transaction_receipt(tx_hash) return receipt except Exception as e: print(f&quot;Error approving tokens: {e}&quot;) return None </code></pre>
<python><ethereum><web3py><evm>
2025-01-08 14:28:08
1
360
degenTy
79,339,486
1,089,161
Finding loops between numbers in a list of sets
<h5>I marked <a href="https://stackoverflow.com/a/79341725/1089161">my answer</a> as the answer because it is the one that is doing what I was wanting and anyone wanting to do the same thing should start there. But I would love to see a better answer (order of magnitude for the generation of all loops for example D shown) in Python and will happily select a better answer. The original OP follows.</h5> <p>Given a list of sets like:</p> <pre><code>sets=[{1,2},{2,3},{1,3}] </code></pre> <p>the product <code>(1,2,3)</code> will be generated twice in <code>itertools.product(*sets)</code>, as the literals <code>(1,2,3)</code> and <code>(2,3,1)</code>, because there is a loop. If there is no loop there will be no duplication, even though there might be lots of commonality between sets.</p> <p>A loop is formed to A in a set when you travel to B in the same set and then to B in another set that has A or to B in another set with C which connects to a set with A. e.g. <code>1&gt;2--2&gt;3--3&gt;1</code> where '--' indicates movement between sets and '&gt;' indicates movement within the set. The smallest loop would involve a pair of numbers in common between two sets, e.g. <code>a&gt;b--b&gt;a</code>. {edit: @ravenspoint's notation is nice, I suggest using <code>{a}-b-{a}</code> instead of the above.} Loops in canonical form should not have a bridging value used more than once: either this represents a case where the loop traced back on itself (like in a cursive &quot;i&quot;) or there are smaller loops that could be made (like the outer and inner squares on the <a href="https://commons.wikimedia.org/wiki/File:Paris_Arc_de_Triomphe.jpg" rel="nofollow noreferrer">Arc de Triomphe</a>).</p> <p>What type of graph structure could I be using to represent this?
I have tried representing each set as a node and then indicating which sets are connected to which, but this is not right since for <code>[{1,2},{1,3},{1,4}]</code>, there is a connection between all sets -- the common 1-- but there is no loop. I have also tried assigning a letter to each number in each set, but that doesn't seem right, either, since then I don't know how to discriminate against loops <em>within</em> a set.</p> <p>This was motivated by this question <a href="https://stackoverflow.com/a/51887027/1089161">about generating unique products</a>.</p> <p>Sample sets like the following (which has the trivial loop <code>4&gt;17--17&gt;4</code> and longer loops like <code>13&gt;5--5&gt;11--11&gt;13</code>)</p> <pre><code>[{1, 13, 5}, {11, 13}, {17, 11, 4, 5}, {17, 4, 1}] </code></pre> <p>can be generated as shown in the docstring of <a href="https://stackoverflow.com/a/79341725/1089161">core</a>.</p> <h6>Alternate visualization analogy</h6> <p>Another way to visualize the &quot;path/loop&quot; is to think of connecting points on a grid: columns x contain elements y of the sets and equal elements are in the same row. A loop is a path that starts at one point and ends at the same point by travelling vertically or horizontally from point to point and must include both directions of motion. A suitable permutation of rows and columns would reveal a <a href="https://mathworld.wolfram.com/StaircasePolygon.html" rel="nofollow noreferrer">staircase polygon</a>.</p> <h6>see also</h6> <p><a href="https://github.com/networkx/networkx/discussions/6394" rel="nofollow noreferrer">simple cycles in undirected graph</a></p> <p><a href="https://stackoverflow.com/questions/9804127/finding-polygons-within-an-undirected-graph">polygons in undirected graph</a></p>
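The duplication described at the top can be checked directly: with the three-set example, exactly two tuples in the raw product are permutations of the same multiset `{1, 2, 3}`:

```python
from itertools import product

sets = [{1, 2}, {2, 3}, {1, 3}]

# Collect every raw product whose elements sort to (1, 2, 3)
hits = [p for p in product(*sets) if sorted(p) == [1, 2, 3]]
```

The two hits are `(1, 2, 3)` and `(2, 3, 1)`, which is precisely the duplicate pair attributed to the loop `1>2--2>3--3>1`.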
<python><set><graph-theory><graph-traversal>
2025-01-08 14:11:40
5
19,565
smichr
79,339,377
7,441,757
Remove X.Y.dev0 packages from poetry package resolution
<p>We have several packages where feature branch releases are versioned with .devN (.dev0). Releasing packages from feature branches allows you to more quickly test integrations. This follows <a href="https://peps.python.org/pep-0440/" rel="nofollow noreferrer">PEP 440</a>.</p> <p>However, this creates some problems because <code>pip</code> and <code>poetry</code> (we use the latter for our environment management) now pick up the <code>.dev</code> packages when running <code>poetry update</code>. Is there any option to filter these out?</p>
<python><pip><python-poetry>
2025-01-08 13:36:41
0
5,199
Roelant
79,339,311
14,720,380
Is it possible to run a callable/function with a different virtual environment in Python?
<p>I have a WebAPI where it is wrapping an external library, the API needs to be able to handle multiple versions of a external library. Two ways I can think of handling this would be to:</p> <ol> <li>Have multiple instances of the API server, for each version of the library and use a reverse proxy to route traffic to the specific version based on a URL parameter</li> <li>Within python, have the API create a virtual environment for each version of the library you want to run and then when you call the endpoint it will run my code within a process, within the corresponding virtual environment.</li> </ol> <p>I am leaning towards trying to get #2 to work, if it is possible to do, as it will be simpler to deploy.</p> <p>To do this, I would like to be able to create a FastAPI instance where it can get the version of a package you want to run, then run the function within a process and return the result. A minimal example of this would be:</p> <pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI import numpy as np app = FastAPI() @app.get(&quot;/{version}/version&quot;) def get_version(version: str) -&gt; str: &quot;&quot;&quot; This function will create a venv for the numpy version if it doesnt exist, and then activate that venv and run the code within as a process on that virtual environment. &quot;&quot;&quot; return np.__version__ </code></pre>
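A minimal sketch of option 2's core mechanic: once a venv exists (e.g. created with the stdlib `venv` module), "running code in it" just means invoking that environment's interpreter as a subprocess. The helper name is made up, and `sys.executable` stands in here for a venv's python so the sketch is self-contained:

```python
import subprocess
import sys

def run_in_interpreter(python_path, code):
    """Run a code string under the given interpreter and return its stdout."""
    result = subprocess.run(
        [python_path, "-c", code],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# With a real venv this would be e.g. ".venvs/numpy-1.26/bin/python"
out = run_in_interpreter(sys.executable, "print(6 * 7)")
```

In the FastAPI endpoint, results would need to cross the process boundary as text (e.g. JSON on stdout), since objects from a different interpreter, with a different numpy, cannot be returned directly.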
<python>
2025-01-08 13:21:22
1
6,623
Tom McLean
79,339,277
20,295,949
AttributeError: module 'requests' has no attribute 'models' when using Twikit in Python
<p>I'm trying to fetch tweets using the twikit Python library, but I keep encountering this error:</p> <pre><code>Traceback (most recent call last): File &quot;c:\users\gaming\anaconda3\lib\site-packages\requests\__init__.py&quot;, line 71, in main await get_tweets(QUERY, minimum_tweets=MINIMUM_TWEETS) File &quot;c:\users\gaming\anaconda3\lib\site-packages\requests\__init__.py&quot;, line 42, in get_tweets tweets = await client.search_tweet(query, product='top') ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\client\client.py&quot;, line 706, in search_tweet response, _ = await self.gql.search_timeline(query, product, count, cursor) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\client\gql.py&quot;, line 157, in search_timeline return await self.gql_get(Endpoint.SEARCH_TIMELINE, variables, FEATURES) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\client\gql.py&quot;, line 122, in gql_get return await self.base.get(url, params=flatten_params(params), headers=headers, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\client\client.py&quot;, line 210, in get return await self.request('GET', url, **kwargs) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\client\client.py&quot;, line 141, in request await self.client_transaction.init(self.http, ct_headers) File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\x_client_transaction\transaction.py&quot;, line 34, in init self.home_page_response = self.validate_response(home_page_response) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File &quot;C:\Users\Gaming\anaconda3\Lib\site-packages\twikit\x_client_transaction\transaction.py&quot;, line 60, 
in validate_response if not isinstance(response, (bs4.BeautifulSoup, requests.models.Response)): ^^^^^^^^^^^^^^^ AttributeError: module 'requests' has no attribute 'models' </code></pre> <p>This is the relevant portion of my code:</p> <pre><code>from twikit import Client import asyncio from configparser import ConfigParser import csv from datetime import datetime from random import randint import time # Constants MINIMUM_TWEETS = 30 QUERY = &quot;chatgpt&quot; config = ConfigParser() config.read(r'C:\Users\Gaming\Documents\Python Tweets\Account.ini') USERNAME = config['x']['username'] EMAIL = config['x']['email'] PASSWORD = config['x']['password'] client = Client(language='en-us') async def authenticate(): try: client.load_cookies('cookies.json') except FileNotFoundError: await client.login(auth_info_1=USERNAME, auth_info_2=EMAIL, password=PASSWORD) client.save_cookies('cookies.json') async def get_tweets(query, minimum_tweets=MINIMUM_TWEETS): tweet_count = 0 tweets = None with open('tweets.csv', 'w', newline='', encoding='utf-8') as file: writer = csv.writer(file) writer.writerow(['Count', 'Username', 'Text', 'Created_At', 'Retweets', 'Likes']) while tweet_count &lt; minimum_tweets: if tweets is None: tweets = await client.search_tweet(query, product='top') else: tweets = await tweets.next() for tweet in tweets: tweet_count += 1 tweet_data = [tweet_count, tweet.user.name, tweet.text, tweet.created_at, tweet.retweet_count, tweet.like_count] writer.writerow(tweet_data) if not tweets: break wait_time = randint(5, 10) await asyncio.sleep(wait_time) async def main(): await authenticate() await get_tweets(QUERY, minimum_tweets=MINIMUM_TWEETS) if __name__ == &quot;__main__&quot;: try: loop = asyncio.get_running_loop() if loop.is_running(): asyncio.create_task(main()) else: asyncio.run(main()) except RuntimeError: asyncio.run(main()) </code></pre> <p>What I've tried: Ensured requests is correctly installed (pip show requests shows version 2.31.0). 
There are no requests.py files in the project directory that might shadow the real module. Reinstalled requests using pip install --force-reinstall requests. Question: Why is this error happening, and how can I fix the AttributeError: module 'requests' has no attribute 'models' when trying to fetch tweets with twikit?</p> <p>If there is an alternative way of doing this without having to deal with Twitter's API limit, I'm happy to try it. Thanks</p>
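A detail worth noting in the traceback: the script's own functions (`main`, `get_tweets`) are reported as living inside `site-packages\requests\__init__.py`, which suggests the interpreter is resolving `requests` from the wrong location. A stdlib-only way to check where a module actually resolves from (shown with `json` as a neutral example; substitute `requests`):

```python
import importlib.util

def module_origin(name):
    """Return the file path a module would be imported from, or None if not found."""
    spec = importlib.util.find_spec(name)
    return spec.origin if spec else None

# For "requests", if this does not end in .../requests/__init__.py, a local
# file or directory named "requests" is shadowing the installed package.
print(module_origin("json"))
print(module_origin("requests"))
```

If the printed path points anywhere other than the real `site-packages` package, renaming or removing the shadowing file/directory (and any stale `__pycache__`) is the usual fix.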
<python>
2025-01-08 13:11:32
0
319
HamidBee
79,339,017
378,783
What is wrong with my python setup? Wrong lib loaded in some cases when multiple versions installed
<p>I think it is best to describe my problem with a series of command-prompt sessions. I am missing something, but I don't know what. How does Python 3.13 load Python 3.11 libs?</p> <p>Solutions like deleting the other versions are not acceptable; I want to fix the setup as it is.</p> <pre><code>C:\Temp\2025\01\python_ver_check (master) &gt;where python C:\Program Files\Python313\python.exe C:\Program Files\Python311\python.exe C:\OSGeo4W\bin\python.exe C:\Users\UserName\AppData\Local\Microsoft\WindowsApps\python.exe C:\Temp\2025\01\python_ver_check (master) &gt;python --version Python 3.13.1 C:\Temp\2025\01\python_ver_check (master) &gt;type globaj.py import glob for g in glob.glob('*'): print(g) C:\Temp\2025\01\python_ver_check (master) &gt;globaj.py Traceback (most recent call last): File &quot;C:\Temp\2025\01\python_ver_check\globaj.py&quot;, line 1, in &lt;module&gt; import glob File &quot;C:\Program Files\Python313\Lib\glob.py&quot;, line 5, in &lt;module&gt; import re File &quot;C:\Program Files\Python313\Lib\re\__init__.py&quot;, line 126, in &lt;module&gt; from . import _compiler, _parser File &quot;C:\Program Files\Python313\Lib\re\_compiler.py&quot;, line 18, in &lt;module&gt; assert _sre.MAGIC == MAGIC, &quot;SRE module mismatch&quot; ^^^^^^^^^^^^^^^^^^^ AssertionError: SRE module mismatch C:\Temp\2025\01\python_ver_check (master) &gt;python globaj.py globaj.py net48 net8 C:\Temp\2025\01\python_ver_check (master) &gt;assoc .py .py=Python.File C:\Temp\2025\01\python_ver_check (master) &gt;ftype Python.File Python.File=&quot;C:\WINDOWS\py.exe&quot; &quot;%L&quot; %* C:\Temp\2025\01\python_ver_check (master) &gt;py --version Python 3.13.1 C:\Temp\2025\01\python_ver_check (master) &gt;echo %PATH% C:\Program Files\Python313\Scripts\;C:\Program Files\Python313\;C:\Program Files (x86)\Common Files\Oracle\Java\java8path;... </code></pre>
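When one interpreter appears to load another version's compiled stdlib (the "SRE module mismatch" assertion), a stale `PYTHONPATH` or `PYTHONHOME` environment variable pointing at the other install is a common culprit. A small diagnostic to run under each launch method (double-click association vs. `python globaj.py`) to compare what each one actually resolves:

```python
import os
import sys

# Which binary is running, and which stdlib directories it picked up.
print(sys.executable)
print(sys.version)
print(sys.path[:5])

# Environment overrides that can force another version's Lib onto sys.path.
for var in ("PYTHONPATH", "PYTHONHOME"):
    print(var, "=", os.environ.get(var))
```

If the two launch methods print different `sys.executable` or `sys.path` entries, that difference is where the 3.11/3.13 mix-up comes from.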
<python><python-3.x><environment-variables>
2025-01-08 11:45:28
1
7,117
watbywbarif
79,338,848
4,451,315
"n_unique" aggregation using DuckDB relational API which counts nulls
<p>Similar to <a href="https://stackoverflow.com/questions/79314406/n-unique-aggregation-using-duckdb-relational-api/">&quot;n_unique&quot; aggregation using DuckDB relational API</a></p> <p>But, I need to count null values</p> <p>Say I have</p> <pre class="lang-py prettyprint-override"><code>import duckdb rel = duckdb.sql('select * from values (1, 4), (2, null), (null, null) df(a, b)') rel </code></pre> <pre class="lang-py prettyprint-override"><code>Out[3]: β”Œβ”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β” β”‚ a β”‚ b β”‚ β”‚ int32 β”‚ int32 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€ β”‚ 1 β”‚ 4 β”‚ β”‚ 2 β”‚ NULL β”‚ β”‚ NULL β”‚ NULL β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>I would like to make a <code>duckdb.Expression</code> which I can use to count the number of unique values including nulls</p> <p>The solution suggested in the linked question:</p> <pre class="lang-py prettyprint-override"><code>def n_unique(column_name: str) -&gt; duckdb.Expression: return duckdb.FunctionExpression( 'array_unique', duckdb.FunctionExpression( 'array_agg', duckdb.ColumnExpression(column_name) ) ) </code></pre> <p>is not quite right here, as it skips nulls:</p> <pre><code>In [39]: rel.aggregate([n_unique('a'), n_unique('b')]) Out[39]: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ array_unique(array_agg(a)) β”‚ array_unique(array_agg(b)) β”‚ β”‚ uint64 β”‚ uint64 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ 2 β”‚ 1 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>My expected output would be:</p> <pre><code>In 
[39]: rel.aggregate([n_unique('a'), n_unique('b')]) Out[39]: β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β” β”‚ array_unique(array_agg(a)) β”‚ array_unique(array_agg(b)) β”‚ β”‚ uint64 β”‚ uint64 β”‚ β”œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€ β”‚ 3 β”‚ 2 β”‚ β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ </code></pre> <p>How can I achieve that?</p>
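Since `array_agg`/`array_unique` skip NULLs, one commonly suggested SQL-level fix is to add a separate "any NULLs present" term, e.g. `count(DISTINCT x) + CASE WHEN count(*) > count(x) THEN 1 ELSE 0 END` (treat that as a sketch to verify against the DuckDB docs rather than a guaranteed expression). The intended semantics, demonstrated in plain Python:

```python
def n_unique_including_nulls(values):
    """Count distinct values, with any NULL/None occurrences counting as one extra value."""
    non_null = {v for v in values if v is not None}
    has_null = any(v is None for v in values)
    return len(non_null) + (1 if has_null else 0)

print(n_unique_including_nulls([1, 2, None]))      # column a -> 3
print(n_unique_including_nulls([4, None, None]))   # column b -> 2
```

Any `duckdb.Expression` that reproduces this two-term structure (distinct non-null count plus a 0/1 null indicator) should give the expected output.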
<python><duckdb>
2025-01-08 10:53:10
1
11,062
ignoring_gravity
79,338,790
7,959,614
Get adjacency matrices of networkx.MultiDiGraph
<p>I want to obtain the adjacency matrices of a <code>networkx.MultiDiGraph</code>. My code looks as follows:</p> <pre><code>import numpy as np import networkx as nx np.random.seed(123) n_samples = 10 uv = [ (1, 2), (2, 3), (3, 4), (4, 5), (5, 6) ] G = nx.MultiDiGraph() for u, v in uv: weights = np.random.uniform(0, 1, size=n_samples) G.add_edges_from([(u, v, dict(sample_id=s+1, weight=weights[s])) for s in range(n_samples)]) A = nx.to_numpy_array(G=G, nodelist=list(G.nodes)) </code></pre> <p>As the <a href="https://networkx.org/documentation/stable/reference/generated/networkx.convert_matrix.to_numpy_array.html" rel="nofollow noreferrer">docs</a> state the default of <code>nx.to_numpy_array()</code> for this type of graph is to sum the weights of the multiple edges. Therefore, the output look as follows:</p> <pre><code>[[0. 5.44199353 0. 0. 0. 0. ] [0. 0. 4.12783997 0. 0. 0. ] [0. 0. 0. 5.37945594 0. 0. ] [0. 0. 0. 0. 4.95418265 0. ] [0. 0. 0. 0. 0. 5.18942126] [0. 0. 0. 0. 0. 0. ]] </code></pre> <p>I would like to obtain 10 adjacency matrices, one for each <code>s</code>. My desired output should look as follows:</p> <pre><code>print(A.shape) &gt;&gt; (6, 6, 10) </code></pre> <p>Please advice</p>
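Since `nx.to_numpy_array` collapses parallel edges, one route to a `(n, n, k)` stack is to build one matrix per `sample_id` directly from the edge list. A numpy-only sketch of that shape (indices here are 0-based, unlike the 1-based node ids and sample ids in the question, so an offset would be needed):

```python
import numpy as np

def stacked_adjacency(edges, n_nodes, n_samples):
    """Build a (n_nodes, n_nodes, n_samples) weight array.

    edges: iterable of (u, v, sample_id, weight) tuples with 0-based indices,
    e.g. derived from G.edges(data=True) in the question's graph.
    """
    A = np.zeros((n_nodes, n_nodes, n_samples))
    for u, v, s, w in edges:
        A[u, v, s] = w
    return A

edges = [(0, 1, 0, 0.5), (0, 1, 1, 0.7), (1, 2, 0, 0.2)]
A = stacked_adjacency(edges, n_nodes=3, n_samples=2)
print(A.shape)  # (3, 3, 2)
```

Slicing `A[:, :, s]` then gives the per-sample adjacency matrix for sample `s`.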
<python><networkx>
2025-01-08 10:34:59
1
406
HJA24
79,338,759
4,105,440
Explicit cast of a lazy frame not possible with type mismatch?
<p>I've only been using polars for a few months now (coming from pandas) so forgive me if I'm interpreting things wrong :) I want to read many parquet files, merge them into a single dataframe and then write this to disk. As some of the files have columns with the wrong type I'm trying to do an explicit cast</p> <pre class="lang-py prettyprint-override"><code>df = pl.scan_parquet('../output/extraction/part_*.parquet') schema = { 'station_id': pl.String, 'datetime_utc': pl.Datetime(time_unit='ns', time_zone='UTC'), 'rain_rate': pl.Float64, } df = df.with_columns([ pl.col(name).cast(dtype, strict=False) for name, dtype in schema.items() ]) df.sink_parquet('../output/extraction/merged.parquet') </code></pre> <p>But I'm getting a <code>SchemaError: data type mismatch for column rain_rate: expected: binary, found: f64</code>.</p> <p>The eager version of this works without issues</p> <pre class="lang-py prettyprint-override"><code>dfs = [] for file in glob('../output/extraction/part_*.parquet'): df = pl.read_parquet(file) df = df.with_columns([ pl.col(name).cast(dtype, strict=False) for name, dtype in schema.items() ]) dfs.append(df) df = pl.concat(dfs) df.write_parquet('../output/extraction/merged.parquet') </code></pre> <p>I kind of understand the error, and the eager version works for me as I don't have many files, but it could become an issue if I'm working with large amount of data. Is there no way to do the same with lazy dataframes?</p>
<python><dataframe><casting><python-polars><polars>
2025-01-08 10:26:21
1
673
Droid
79,338,728
1,021,819
Snowflake / Snowpark / Python connector - how to resolve unexpected '@"SNOWPARK_TEMP_STAGE_..."'?
<p>When trying to append to an existing Snowflake table via a Python stored procedure using write_pandas, i.e.</p> <pre class="lang-py prettyprint-override"><code>session.write_pandas(my_df, table_name=my_table_name, overwrite=True,database='MY_DATABASE',schema='MY_SANDBOX') </code></pre> <p>I am getting:</p> <pre><code>snowflake.connector.errors.ProgrammingError: 001003 (42000): SQL compilation error: syntax error line 1 at position 106 unexpected '@&quot;SNOWPARK_TEMP_STAGE_...&quot;'. in function MY_FUNCTION with handler udf_py_....compute </code></pre> <p>What's going on? Thanks!</p>
<python><stored-procedures><snowflake-cloud-data-platform>
2025-01-08 10:15:02
0
8,527
jtlz2
79,338,599
8,510,149
Why does the shap value array in a shap Explanation object have a different shape than shap plots expect?
<p>I'm creating an Explanation object using SHAP. This object holds shap values, base values and the actual data. My problem is that the shap value array in the explanation object comes in a shape that is different from what shap plots expect. Does anybody know why this is occurring?</p> <p>Ref: <a href="https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/migrating-to-new-api.html" rel="nofollow noreferrer">https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/migrating-to-new-api.html</a></p> <pre><code>import pandas as pd import numpy as np from sklearn.datasets import make_classification from sklearn.model_selection import train_test_split from sklearn.ensemble import RandomForestClassifier import shap # Create a dataset with 5 features and 1 binary target X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, n_redundant=0, n_clusters_per_class=1, random_state=42) # Split the dataset into training and testing sets X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42) # Build a random forest classifier, fit to training data model = RandomForestClassifier(n_estimators=75, random_state=42, max_depth=4, min_samples_split = 30) model.fit(X_train, y_train) # Create an Explanation object explainer = shap.TreeExplainer(model, X_test) explanation = explainer(X_test) </code></pre> <p>Then we can have a look at the explanation object</p> <pre><code>print(explanation.values.shape) print(explanation.base_values.shape) print(explanation.data.shape) Output: (200, 5, 2) (200, 2) (200, 5) </code></pre> <p>Now, this explanation object doesn't work because of the shape of the explanation.values array (shap values).</p> <p>The code below should work according to the Shap documentation, but returns an error.
Ref: <a href="https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/bar.html" rel="nofollow noreferrer">https://shap.readthedocs.io/en/latest/example_notebooks/api_examples/plots/bar.html</a></p> <pre><code>shap.plots.bar(explanation) Output: TypeError: only integer scalar arrays can be converted to a scalar index File &lt;command-2990284182217532&gt;, line 1 ----&gt; 1 shap.plots.bar(explanation) File /local_disk0/.ephemeral_nfs/cluster_libraries/python/lib/python3.11/site-packages/shap/plots/_bar.py:220, in &lt;listcomp&gt;(.0) 218 # see how many individual (vs. grouped at the end) features we are plotting 219 if num_features &lt; len(values[0]): --&gt; 220 num_cut = np.sum([len(orig_inds[feature_order[i]]) for i in range(num_features-1, len(values[0]))]) 221 values[:,feature_order[num_features-1]] = np.sum([values[:,feature_order[i]] for i in range(num_features-1, len(values[0]))], 0) 223 # build our y-tick labels </code></pre> <p>However, if I do this operation it works, although this can hardly be the way it is supposed to work. Does anybody have some insights into this problem?</p> <pre><code>explanation.values = explanation.values[:, :, 1] shap.plots.bar(explanation) </code></pre>
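The trailing dimension of `(200, 5, 2)` is one slice of SHAP values per output class (two classes in this binary problem), while the bar plot expects a 2-D `(samples, features)` array, so selecting one class — as the question's workaround already does — is the expected reshaping. The indexing in isolation, with a numpy stand-in for `explanation.values`:

```python
import numpy as np

# Stand-in for explanation.values: 200 samples, 5 features, 2 classes.
values = np.arange(200 * 5 * 2, dtype=float).reshape(200, 5, 2)

# Keep only the SHAP values for class 1 (the "positive" class).
class1 = values[:, :, 1]
print(class1.shape)  # (200, 5)
```

Newer shap releases document slicing the Explanation object itself (e.g. `explanation[:, :, 1]`) rather than mutating `.values` in place — worth checking against the installed version, since whether that indexing is available depends on the shap release.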
<python><shap>
2025-01-08 09:40:20
0
1,255
Henri
79,338,219
13,396,497
Pandas: iterate rows and multiply nth row values with the next (n+1)th row values
<p>I am trying to iterate multiple column rows and multiply nth row to n+1 row after that add columns.</p> <p>I tried below code and it's working fine.</p> <p>Is there any other simply way to achieve the subtraction and multiplication part together?</p> <pre><code>import pandas as pd df = pd.DataFrame({'C': [&quot;Spark&quot;,&quot;PySpark&quot;,&quot;Python&quot;,&quot;pandas&quot;,&quot;Java&quot;], 'F' : [2,4,3,5,4], 'D':[3,4,6,5,5]}) df1 = pd.DataFrame({'C': [&quot;Spark&quot;,&quot;PySpark&quot;,&quot;Python&quot;,&quot;pandas&quot;,&quot;Java&quot;], 'F': [1,2,1,2,1], 'D':[1,2,2,2,1]}) df = pd.merge(df, df1, on=&quot;C&quot;) df['F_x-F_y'] = df['F_x'] - df['F_y'] df['D_x-D_y'] = df['D_x'] - df['D_y'] for index, row in df.iterrows(): df['F_mul'] = df['F_x-F_y'].mul(df['F_x-F_y'].shift()) df['D_mul'] = df['D_x-D_y'].mul(df['D_x-D_y'].shift()) df['F+D'] = df['F_mul'] + df['D_mul'] </code></pre> <p>Output -</p> <pre><code> C F_x D_x F_y D_y F_x-F_y D_x-D_y F_mul D_mul F+D 0 Spark 2 3 1 1 1 2 NaN NaN NaN 1 PySpark 4 4 2 2 2 2 2.0 4.0 6.0 2 Python 3 6 1 2 2 4 4.0 8.0 12.0 3 pandas 5 5 2 2 3 3 6.0 12.0 18.0 4 Java 4 5 1 1 3 4 9.0 12.0 21.0 </code></pre>
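For reference, the `iterrows` loop is unnecessary — `mul(...shift())` is already vectorized over the whole column, so computing each expression once reproduces the same result columns. A sketch on a cut-down frame (first three rows of the question's merged data):

```python
import pandas as pd

df = pd.DataFrame({
    "C": ["Spark", "PySpark", "Python"],
    "F_x": [2, 4, 3], "D_x": [3, 4, 6],
    "F_y": [1, 2, 1], "D_y": [1, 2, 2],
})

# Subtraction and "multiply row n by row n-1" done once, without any loop.
f_diff = df["F_x"] - df["F_y"]
d_diff = df["D_x"] - df["D_y"]
df["F+D"] = f_diff.mul(f_diff.shift()) + d_diff.mul(d_diff.shift())
print(df["F+D"].tolist())  # [nan, 6.0, 12.0]
```

The first row is NaN because there is no preceding row to multiply with, matching the question's output.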
<python><pandas><dataframe>
2025-01-08 07:11:11
3
347
RKIDEV
79,337,879
1,783,652
Pyinstaller Windows executable cannot find some joysticks when Python script can
<p>This Python script will print out a list of attached joystick devices on a Windows PC:</p> <pre class="lang-none prettyprint-override"><code>from pyjoystick.sdl2 import Joystick print (&quot;--- Joysticks ---&quot;) print (&quot;\n&quot;.join([j.name for j in Joystick.get_joysticks()])) </code></pre> <p><a href="https://pypi.org/project/pyjoystick/" rel="nofollow noreferrer">pyjoystick</a> is required. It can be installed through pip: <code>pip install pyjoystick</code></p> <p><strong>Problem:</strong></p> <p>When I run the above script through the Python interpreter, I get:</p> <pre><code>&gt; python jtest.py --- Joysticks --- Xbox One Controller Heusinkveld Sim Pedals Sprint vJoy Device </code></pre> <p>but if I compile that program with PyInstaller, the &quot;Heusinkveld Sim Pedals Sprint&quot; device does not show:</p> <pre><code>&gt; pyinstaller jtest.py ... &gt;dist\jtest\jtest.exe --- Joysticks --- Xbox One Controller vJoy Device </code></pre> <p><strong>Question:</strong></p> <p>Why is that? How can I make the PyInstaller executable report the same joystick devices?</p> <p>I'm running Python 3.9.7 and pyinstaller 6.4.0 (with pyinstaller-hooks-contrib 2024.2)</p>
<python><pyinstaller><sdl-2>
2025-01-08 03:04:21
0
567
Brian K
79,337,859
1,190,077
In a numba function, replace cuda.popc() by CPU equivalent if not in CUDA
<p>I am writing common code that supports both numba-jitting on CPU and numba.cuda-jitting on GPU.</p> <p>It all works well, except that deep inside the common code, I would like to use an intrinsic instruction that counts the number of bits in an integer. It is <code>cuda.popc()</code> for the CUDA path, and a helper function <code>cpu_popc()</code> for the CPU path. Unfortunately, <code>cuda.popc</code> is only valid in numba.cuda-jitted GPU kernel and <code>cpu_popc</code> is only valid in a numba-jitted CPU function.</p> <p>Is there any way to realize both <code>cpu_compute</code> and <code>gpu_compute</code> -- short of duplicating the entire common code?</p> <p>Here is a simple framework to test this:</p> <pre class="lang-py prettyprint-override"><code># CPU equivalent ctpop, from https://stackoverflow.com/a/77103233 @numba.extending.intrinsic def popc_helper(typing_context, src): def codegen(context, builder, signature, args): return numba.cpython.mathimpl.call_fp_intrinsic(builder, &quot;llvm.ctpop.i64&quot;, args) return numba.uint64(numba.uint64), codegen @numba.njit(numba.uint64(numba.uint64)) def cpu_popc(x): &quot;&quot;&quot;Return the (population) count of set bits in an integer.&quot;&quot;&quot; return popc_helper(x) @numba.njit def common_function(x): # ... # some_long_code_that_should_not_get_duplicated. # ... # return cpu_popc(x) # This works on the CPU path. return cuda.popc(x) # This works on the GPU path. 
@numba.njit def cpu_compute(n=5): array_in = np.arange(n) array_out = np.empty_like(array_in) for i, value in enumerate(array_in): array_out[i] = common_function(value) return array_out @cuda.jit def gpu_kernel(array_in, array_out): thread_index = cuda.grid(1) if thread_index &lt; len(array_in): array_out[thread_index] = common_function(array_in[thread_index]) def gpu_compute(n=5): array_in = np.arange(n) array_out = cuda.device_array_like(array_in) gpu_kernel[1, len(array_in)](cuda.to_device(array_in), array_out) return array_out.copy_to_host() # print(cpu_compute()) print(gpu_compute()) </code></pre>
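One numba-friendly pattern here is to turn the shared body into a closure factory that takes the popc implementation as an argument: the long common code is written once, and each target jit-compiles its own specialization of the returned closure. Illustrated below with plain Python callables — the jit decorators are elided, and on the real paths you would wrap the CPU closure with `numba.njit` (passing `cpu_popc`) and the GPU closure with `cuda.jit(device=True)` (passing `cuda.popc`); whether your numba version compiles the closure cleanly on both targets should be verified:

```python
def make_common_function(popc):
    """Return the shared body specialized for one popc implementation.

    The body exists in exactly one place; only the popc primitive varies
    between the CPU and CUDA compilations.
    """
    def common_function(x):
        # ... some_long_code_that_should_not_get_duplicated ...
        return popc(x)
    return common_function

# Plain-Python stand-in for cpu_popc, to show the pattern working.
cpu_common = make_common_function(lambda x: bin(x).count("1"))
print(cpu_common(0b1011))  # 3
```

`cpu_compute` and `gpu_kernel` would then each call their own specialization instead of one shared `common_function`.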
<python><cuda><numba>
2025-01-08 02:44:05
1
3,196
Hugues
79,337,751
4,316,166
"Right" way to define helper functions for Django models' Meta classes
<p>I'm trying to simplify a very verbose and repetitive <code>Check</code> constraint for a model, whose logic is currently very hard to parse, factoring out and aptly renaming the repetitive parts.</p> <p>Basically I want to turn this (simplified and abstracted snippet):</p> <pre class="lang-py prettyprint-override"><code>class MyModel(models.Model): class Meta: constraints = [ models.CheckConstraint( check = ( ( models.Q(foo = True) &amp; ( ( models.Q(field1 = 'X') &amp; models.Q(field2 = 'Y') &amp; models.Q(field3 = 'Z') ) | ( models.Q(field4 = 1) &amp; models.Q(field5 = 2) &amp; models.Q(field6 = 3) ) ) ) | ( models.Q(foo = False) &amp; ( ( models.Q(field1 = 'X') &amp; models.Q(field2 = 'Y') &amp; models.Q(field3 = 'Z') ) | ( models.Q(field4 = 4) &amp; models.Q(field5 = 5) &amp; models.Q(field6 = 6) ) ) ) ), name = 'foo', ), ] </code></pre> <p>Into this:</p> <pre class="lang-py prettyprint-override"><code>class MyModel(models.Model): class Meta: constraints = [ models.CheckConstraint( check = ( ( models.Q(foo = True) &amp; ( condition1 | condition2 ) ) | ( models.Q(foo = False) &amp; ( condition1 | condition3 ) ) ), name = 'foo', ), ] </code></pre> <p>What I've tried / thought of trying:</p> <ol> <li>Factoring out the conditions, both as attributes and methods in Meta itself; <strong>that didn't work</strong>: <code>TypeError: 'class Meta' got invalid attribute(s): condition1,condition2,condition3</code>;</li> <li>Factoring out the conditions, both as attributes and methods in MyModel; <strong>that of course wouldn't work</strong>, as you can't directly reference <code>MyModel</code> from inside <code>Meta</code>;</li> <li>Factoring out the conditions as attributes at <code>models.py</code>'s root (hence outside of both <code>Meta</code> and <code>MyModel</code>); <strong>that worked</strong>; however, <strong>this leaves the conditions loosely coupled with <code>MyModel</code> and its <code>Meta</code></strong>, which is undesirable in this case, as those 
conditions are strictly tied to both, and they're not meant to be reused anywhere anyways.</li> </ol> <p>So even though the problem has been solved by simply putting the conditions somewhere else, I was wondering: <em>is there an elegant way to do the same but preserving the strict coupling between the conditions and <code>MyModel</code> or its <code>Meta</code> class</em>?</p>
<python><django>
2025-01-08 01:14:01
0
858
kos
79,337,556
1,914,781
plotly - add vline with default grid color
<p>I would like to add a vline with the default grid color. The code below uses <code>lightgray</code> as the vline color; how can I make it the same as the default grid color?</p> <pre><code>import plotly.graph_objects as go def main(): x = [1, 2, 3, 4, 5] y = [10, 11, 12, 11, 9] fig = go.Figure() fig.add_trace(go.Scatter(x=x, y=y, mode='lines')) fig.add_vline(x=2.5, line_width=.5, line_dash=&quot;solid&quot;, line_color=&quot;lightgray&quot;) fig.update_layout(title='demo',template=&quot;plotly_dark&quot;,xaxis_title='x', yaxis_title='y') fig.show() return </code></pre> <p><a href="https://i.sstatic.net/blubGOUr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/blubGOUr.png" alt=" [" /></a></p>
<python><plotly>
2025-01-07 22:52:47
1
9,011
lucky1928
79,337,434
4,240,413
What's the best way to use a sklearn feature selector in a grid search, to evaluate the usefulness of all features?
<p>I am training a sklearn classifier, and inserted in a pipeline a feature selection step. Via grid search, I would like to determine what's the number of features that allows me to maximize performance. Still, I'd like to explore in the grid search the possibility that <em>no feature selection</em>, just a &quot;passthrough&quot; step, is the optimal choice to maximize performance.</p> <p>Here's a reproducible example:</p> <pre class="lang-py prettyprint-override"><code>import seaborn as sns from sklearn.pipeline import Pipeline from sklearn.model_selection import GridSearchCV from sklearn.linear_model import LogisticRegression from sklearn.feature_selection import SequentialFeatureSelector from sklearn.preprocessing import StandardScaler, OneHotEncoder from sklearn.compose import ColumnTransformer from sklearn.impute import SimpleImputer # Load the Titanic dataset titanic = sns.load_dataset('titanic') # Select features and target features = ['age', 'fare', 'sex'] X = titanic[features] y = titanic['survived'] # Preprocessing pipelines for numeric and categorical features numeric_features = ['age', 'fare'] numeric_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant')), ('scaler', StandardScaler()) ]) categorical_features = ['sex'] categorical_transformer = Pipeline(steps=[ ('imputer', SimpleImputer(strategy='constant')), ('onehot', OneHotEncoder(drop='first')) ]) # Combine preprocessing steps preprocessor = ColumnTransformer(transformers=[ ('num', numeric_transformer, numeric_features), ('cat', categorical_transformer, categorical_features) ]) # Initialize classifier and feature selector clf = LogisticRegression(max_iter=1000, solver='liblinear') sfs = SequentialFeatureSelector(clf, direction='forward') # Create a pipeline that includes preprocessing, feature selection, and classification pipeline = Pipeline(steps=[ ('preprocessor', preprocessor), ('feature_selection', sfs), ('classifier', clf) ]) # Define the parameter grid to search over 
param_grid = { 'feature_selection__n_features_to_select': [2], 'classifier__C': [0.1, 1.0, 10.0], # Regularization strength } # Create and run the grid search grid_search = GridSearchCV(pipeline, param_grid, cv=5) grid_search.fit(X, y) # Output the best parameters and score print(&quot;Best parameters found:&quot;, grid_search.best_params_) print(&quot;Best cross-validation score:&quot;, grid_search.best_score_) </code></pre> <p><code>X</code> here has three features (even after the <code>preprocessor</code> step), but the grid search code above doesn't allow to explore models in which all 3 features are used, as setting</p> <pre><code> feature_selection__n_features_to_select: [2,3] </code></pre> <p>will give a <code>ValueError: n_features_to_select must be &lt; n_features</code>.</p> <p>The obstacle here is that <code>SequentialFeatureSelector</code> doesn't consider the selection of all features (aka a passthrough selector) as a valid feature selection.</p> <p>In other words, I would like to run a grid search that considers also the setting of</p> <pre><code>('feature_selection', 'passthrough') </code></pre> <p>in the space of possible pipeline configurations. Is there an idiomatic/nice way to do that?</p>
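`GridSearchCV` accepts a *list* of parameter grids, and a named pipeline step can itself be swapped for the string `'passthrough'` via the grid. So one option is two sub-grids — one that tunes the selector, one that removes it. A minimal, self-contained sketch on synthetic data (not the Titanic pipeline above, so the preprocessing step is omitted here):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=200, n_features=3, n_informative=2,
                           n_redundant=0, random_state=0)

clf = LogisticRegression(max_iter=1000)
pipe = Pipeline([
    ("feature_selection", SequentialFeatureSelector(clf)),
    ("classifier", clf),
])

# A list of grids: the second entry replaces the selector with a no-op,
# which is how "use all features" enters the search space.
param_grid = [
    {"feature_selection__n_features_to_select": [1, 2],
     "classifier__C": [0.1, 1.0]},
    {"feature_selection": ["passthrough"],
     "classifier__C": [0.1, 1.0]},
]

search = GridSearchCV(pipe, param_grid, cv=3).fit(X, y)
print(search.best_params_)
```

`cv_results_` then contains both families of candidates, so you can see directly whether the passthrough (all-features) configuration wins.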
<python><machine-learning><scikit-learn>
2025-01-07 21:39:17
1
6,039
Davide Fiocco
79,337,364
610,569
How to align two strings' offsets given a list of substring pairs?
<p>Given <code>a</code> and <code>b</code> relating to a list of substrings in <code>c</code>:</p> <pre><code>a = &quot;how are you ?&quot; b = &quot;wie gehst's es dir?&quot; c = [ (&quot;how&quot;, &quot;wie&quot;), (&quot;are&quot;, &quot;gehst's&quot;), (&quot;you&quot;, &quot;es&quot;) ] </code></pre> <p>What's the optimal method to get the offsets that produce:</p> <pre><code>offsets = [ (&quot;how&quot;, &quot;wie&quot;, (0, 3), (0, 3)), (&quot;are&quot;, &quot;gehst's&quot;, (4, 6), (4, 11)), (&quot;you&quot;, &quot;es&quot;, (7, 9), (12, 14)) ] </code></pre> <hr /> <p>From ChatGPT, it suggests the simplistic manner by doing:</p> <p>To generate the desired offsets from the given strings a and b and the list of substring pairs c, we need to find the starting and ending positions (indices) of each substring from a in a itself, and each substring from b in b itself.</p> <p>Steps:</p> <ul> <li>Iterate over each pair of substrings in the list c.</li> <li>Find the starting and ending positions of the substring from a within the string a.</li> <li>Find the starting and ending positions of the substring from b within the string b.</li> <li>Store the pair of substrings and their corresponding positions.</li> </ul> <pre><code>a = &quot;how are you ?&quot; b = &quot;wie gehst's es dir?&quot; c = [ (&quot;how&quot;, &quot;wie&quot;), (&quot;are&quot;, &quot;gehst's&quot;), (&quot;you&quot;, &quot;es&quot;) ] # Create the offsets list offsets = [] for substring_a, substring_b in c: # Find the start and end indices for substring_a in string a start_a = a.find(substring_a) end_a = start_a + len(substring_a) - 1 # Find the start and end indices for substring_b in string b start_b = b.find(substring_b) end_b = start_b + len(substring_b) - 1 # Append the result as a tuple offsets.append((substring_a, substring_b, (start_a, end_a), (start_b, end_b))) # Output the result print(offsets) </code></pre> <p>But is there something more optimal especially of the terms are repeated? 
E.g.</p> <pre><code>a = &quot;how are you ? are you okay ?&quot; b = &quot;wie gehst's es dir? geht es dir gut &quot; c = [ (&quot;how&quot;, &quot;wie&quot;), (&quot;are&quot;, &quot;gehst's&quot;), (&quot;you&quot;, &quot;es&quot;), (&quot;are&quot;, &quot;geht&quot;), (&quot;you&quot;, &quot;es&quot;), (&quot;okay&quot;, &quot;gut&quot;) ] </code></pre>
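Since the pairs in `c` appear in reading order, repeated terms can be handled by carrying a cursor per string and searching only from it — `str.find(sub, start)` — which also makes the whole pass roughly linear instead of re-scanning from position 0 each time. A sketch using half-open `(start, end)` spans (note that the expected output in the question mixes end-index conventions, so the exact numbers below differ):

```python
def align_offsets(a, b, pairs):
    """Map aligned (sub_a, sub_b) pairs to half-open character spans in a and b."""
    offsets = []
    pos_a = pos_b = 0
    for sub_a, sub_b in pairs:
        start_a = a.find(sub_a, pos_a)  # search only past the previous match
        start_b = b.find(sub_b, pos_b)
        if start_a == -1 or start_b == -1:
            raise ValueError(f"pair {(sub_a, sub_b)!r} not found after cursor")
        end_a, end_b = start_a + len(sub_a), start_b + len(sub_b)
        offsets.append((sub_a, sub_b, (start_a, end_a), (start_b, end_b)))
        pos_a, pos_b = end_a, end_b  # advance cursors past this match
    return offsets

a = "how are you ? are you okay ?"
b = "wie gehst's es dir? geht es dir gut "
c = [("how", "wie"), ("are", "gehst's"), ("you", "es"),
     ("are", "geht"), ("you", "es"), ("okay", "gut")]
for row in align_offsets(a, b, c):
    print(row)
```

Because the cursor only moves forward, the second `("are", ...)` and `("you", ...)` pairs resolve to their second occurrences rather than repeating the first match.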
<python><string><dynamic-programming><offset><text-alignment>
2025-01-07 21:16:17
4
123,325
alvas
79,337,347
6,357,916
Different values in group by columns before and after group by in pandas dataframe
<p>I havw following code:</p> <pre><code>sum_columns = ['p', 'q', 'r', 'Ax', 'Ay', 'Az'] avg_columns = ['Bx', 'By', 'Bz', 'G2 C03'] agg_map = {col: 'sum' for col in sum_columns} agg_map.update({col: 'mean' for col in avg_columns}) df = df.groupby(['MAC C01', 'MAC C02', 'MAC C03'], as_index=False).agg(agg_map) </code></pre> <p>Dataframe before group by:</p> <pre><code> p q r Ax Ay Az Bx By Bz G2 C03 MAC C01 MAC C02 MAC C03 0 0.0 0.0 0.1 0 0 0 0 0 0 0.0 0.00000 0.00000 0.0 1 0.0 0.1 0.1 0 0 0 0 0 0 0.0 0.00000 0.00000 0.0 2 0.0 0.1 0.0 0 0 0 0 0 0 0.0 0.00000 0.00000 0.0 3 0.0 0.0 0.1 0 0 0 0 0 0 0.0 0.00000 0.00000 0.0 4 0.0 0.0 0.0 -34 2 251 -23 78 45 0.0 19.12377 73.07937 0.0 ... ... ... ... .. .. ... .. .. .. ... ... ... ... 14676 0.3 -0.3 -1.0 -39 -9 250 66 -23 49 0.0 19.12376 73.07938 0.0 14677 0.9 0.3 -1.0 -39 -9 250 66 -23 49 0.0 19.12376 73.07938 0.0 14678 0.5 0.1 2.5 -39 -9 250 66 -23 49 0.0 19.12376 73.07938 0.0 14679 -0.2 -0.1 1.8 -39 -9 250 66 -23 49 0.0 19.12376 73.07938 0.0 14680 -0.9 0.2 -2.3 -39 -9 250 66 -23 49 0.0 19.12376 73.07938 0.0 [14681 rows x 13 columns] </code></pre> <p>Data frame after group by:</p> <pre><code> MAC C01 MAC C02 MAC C03 p q r Ax Ay Az Bx By Bz G2 C03 0 0.00000 0.00000 0.0 0.000000e+00 0.2 0.3 0 0 0 0.0 0.0 0.0 0.0 1 19.12135 73.07947 75.0 1.330000e+01 6.4 -31.9 -140 0 1230 -4.0 -79.0 26.0 75.0 2 19.12135 73.07959 75.8 1.160000e+01 8.3 -47.0 -248 8 2080 1.0 -78.0 28.0 75.0 3 19.12135 73.07968 76.6 1.030000e+01 5.1 -32.9 -174 6 1560 6.0 -77.0 29.0 76.0 4 19.12136 73.07938 74.6 2.260000e+01 14.9 -56.7 -224 0 2144 -10.0 -78.0 23.0 74.0 ... ... ... ... ... ... ... ... .. ... ... ... ... ... 
1452 19.12607 73.07969 184.8 8.881784e-16 -18.9 -27.7 216 -24 2240 8.0 74.0 47.0 184.0 1453 19.12608 73.07933 178.4 -1.600000e+00 15.2 -17.8 408 -16 2080 1.0 75.0 47.0 178.0 1454 19.12608 73.07941 180.1 -4.100000e+00 8.6 -19.0 328 8 1656 3.0 76.0 46.0 180.0 1455 19.12608 73.07949 181.6 -3.500000e+00 -10.8 -22.5 312 -32 1768 5.0 75.0 46.0 181.0 1456 19.12608 73.07960 183.2 2.600000e+00 -17.1 -23.9 296 -8 2296 6.0 75.0 47.0 183.0 [1457 rows x 13 columns] </code></pre> <p>How &quot;first non zero&quot; MAC C01 (row at index 4) <code>19.12377</code> before group-by, but the same after group-by is <code>19.12135</code> (row at index 2)? Shouldn't both be same? Similarly, how last MAC C01 before group by is <code>19.12376</code>, but that after group by is <code>19.12608</code>?</p> <p>I guess what group by does is it keeps unique values for a group-by columns and aggregates other columns using function specified. Since it &quot;keeps unique values&quot; for group-by-columns, I feel the &quot;first non-zero&quot; group-by-column values should be same and similarly last group-by-column values should also be same. But that does not seem to be case here.</p> <p>What am I missing?</p>
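Part of what you're seeing is that `groupby` sorts by the group keys by default (`sort=True`), so the rows of the result follow the *sorted* key values, not the original row order — comparing "first" or "last" rows across the two frames therefore compares different groups, which explains the mismatched `MAC C01` values. A minimal demonstration:

```python
import pandas as pd

df = pd.DataFrame({"key": [19.12377, 0.0, 19.12135], "v": [1, 2, 3]})

# Default: result rows ordered by sorted key, not by first appearance.
grouped = df.groupby("key", as_index=False)["v"].sum()
print(grouped["key"].tolist())   # [0.0, 19.12135, 19.12377]

# sort=False keeps groups in the order they first appear in df.
unsorted = df.groupby("key", as_index=False, sort=False)["v"].sum()
print(unsorted["key"].tolist())  # [19.12377, 0.0, 19.12135]
```

The key values themselves are unchanged by the groupby; only their ordering in the output differs.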
<python><arrays><pandas>
2025-01-07 21:10:36
0
3,029
MsA
79,337,316
4,408,818
python-eve Cerberus keeps throwing "unknown rule" error for embedded dictionary?
<p>A production server is down due to some outdated drivers, so I've been upgrading packages to get it working again. I think I'm quite close, but eve is giving me some trouble. I have the following schema to validate. I've removed some fields for clarity:</p> <p>survey.py</p> <pre><code>from . import question from . import building_data from . import address schema = { 'buildingAddress': { 'type': 'dict', 'schema': address.schema, }, 'surveyType': { 'type': 'string', 'allowed': [ &quot;local&quot;, &quot;global&quot;, ], }, 'questions': { 'type': 'list', 'schema': { 'type': 'dict', 'schema': question.embedded_schema, } }, } </code></pre> <p>questions.py</p> <pre><code>import copy schema = { 'name': { 'type': 'string', 'required': True }, 'type': { 'type': 'string', 'allowed': [ 'boolean', 'shortText', ], 'required': True, }, 'group': { 'type': 'string' }, 'title': { 'type': 'string', 'required': True }, 'oldCode': { 'type': 'string' }, 'questionSets': { 'type': 'list', 'schema': { 'type': 'string' }, }, 'min': { 'type': 'number', 'nullable': True, }, 'max': { 'type': 'number', 'nullable': True, }, 'range': { 'type': 'number', 'nullable': True, 'min': 1, }, 'minValue': { 'type': 'string', }, 'maxValue': { 'type': 'string', }, # single / multiple choice 'choices': { 'type': 'dict', 'propertyschema': { 'type': 'string', 'regex': '\d+' } }, } embedded_schema = copy.deepcopy(schema) embedded_schema.update( { '_id': { 'type': 'objectid', 'data_relation': { 'resource': 'questions', 'field': '_id', 'embeddable': False } }, } ) </code></pre> <p>For whatever reason, a SchemaError(self.schema_validator.errors) exception is raised when making a POST call. The error seemingly comes from the &quot;questions&quot; field; the relevant part of the exception is as follows.
I've clipped the rest for readability.</p> <pre><code>{'questions': [{'schema': ['no definitions validate', {'anyof definition 0': [{'schema': [{'_id': ['unknown rule'], '_version': ['unknown rule'], 'choices': ['unknown rule'], 'group': ['unknown rule'], 'maxValue': ['unknown rule'], 'minValue': ['unknown rule'], 'name': ['unknown rule'], 'oldCode': ['unknown rule'], 'questionSets': ['unknown rule'], 'range': ['unknown rule'], 'title': ['unknown rule'], 'type': [&quot;must be of ['string', 'list'] type&quot;]}], 'type': ['null value not allowed']}], 'anyof definition 1': [{'schema': ['no definitions validate', {'anyof definition 0': [{'choices': [{'propertyschema': ['unknown rule']}]}] ... </code></pre> <p>I've tried several different combinations of eve and cerberus with no luck. I've looked through the repository and Cerberus is not being imported anywhere in the code, so it has to be the eve version of cerberus being used.</p> <p>I'm currently on eve 1.1.5 and cerberus 1.3.6. Upgraded from eve 0.7.10 and Cerberus 0.9.2.</p> <p>I would appreciate any help/suggestions.</p>
<python><validation><eve><cerberus>
2025-01-07 20:48:09
1
959
Ozymandias
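One likely contributor to the `'propertyschema': ['unknown rule']` entry above: Cerberus renamed several rules after 0.9 (as an assumption to verify against the Cerberus changelog: `propertyschema` became `keyschema` in 1.0 and `keysrules` in 1.3, and `valueschema` became `valuesrules`), so 0.9-era schemas trip "unknown rule" errors under cerberus 1.3.6. A pure-Python sketch of a mechanical rewrite (the `modernize` helper is hypothetical, not part of eve or Cerberus):

```python
# Old Cerberus rule names mapped to their 1.3.x spellings (assumed renames).
RENAMED_RULES = {
    'propertyschema': 'keysrules',
    'keyschema': 'keysrules',
    'valueschema': 'valuesrules',
}

def modernize(schema):
    """Recursively rewrite pre-1.0 Cerberus rule names to 1.3 spellings."""
    if isinstance(schema, dict):
        return {RENAMED_RULES.get(k, k): modernize(v) for k, v in schema.items()}
    if isinstance(schema, list):
        return [modernize(item) for item in schema]
    return schema

# The `choices` field from the question's questions.py, rewritten:
choices = modernize({
    'type': 'dict',
    'propertyschema': {'type': 'string', 'regex': r'\d+'},
})
print(choices)
```

This does not address the other "unknown rule" entries (which may come from eve's own schema handling), but it removes the rules that Cerberus 1.3 genuinely no longer knows.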
79,337,242
2,913,725
Pyenv build failed when trying to install Python 3.10.15
<p>I have pyenv set up on my MacBook Pro (M2, 2022) running Sonoma 14.6, which was migrated from an Intel-based Mac a long while back. I was able to install and run Python 3.10.13 via pyenv with no problem, but when I try to install Python 3.10.15 the build fails, and I can't figure out why. I've tried both reinstalling Homebrew and reinstalling the command line tools, but the problem persists. Any ideas?</p> <pre class="lang-none prettyprint-override"><code>python -V Python 3.10.13 pyenv install 3.10.15 python-build: use openssl from homebrew python-build: use readline from homebrew Downloading Python-3.10.15.tar.xz... -&gt; https://www.python.org/ftp/python/3.10.15/Python-3.10.15.tar.xz Installing Python-3.10.15... python-build: use readline from homebrew python-build: use zlib from xcode sdk BUILD FAILED (OS X 14.6 using python-build 20180424) Inspect or clean up the working tree at /var/folders/4h/8h2hjqjx659_l93m2_2qbqbw0000gn/T/python-build.20250107151340.62026 Results logged to /var/folders/4h/8h2hjqjx659_l93m2_2qbqbw0000gn/T/python-build.20250107151340.62026.log Last 10 log lines: ld: warning: duplicate -rpath '/opt/homebrew/lib' ignored ld: warning: duplicate -rpath '/Users/kalicious/.pyenv/versions/3.10.15/lib' ignored ld: warning: duplicate -rpath '/opt/homebrew/lib' ignored ld: warning: search path '/Users/kalicious/.pyenv/versions/3.10.15/lib' not found ld: warning: search path '/Users/kalicious/.pyenv/versions/3.10.15/lib' not found ld: warning: search path '/Users/kalicious/.pyenv/versions/3.10.15/lib' not found /opt/local/bin/ranlib: object: libpython3.10.a(getbuildinfo.o) malformed object (unknown load command 1) ar: internal ranlib command failed make: *** [libpython3.10.a] Error 1 make: *** Waiting for unfinished jobs.... </code></pre>
<python><macos><pyenv><macos-sonoma>
2025-01-07 20:16:01
1
1,599
JackKalish
79,337,201
839,733
mypy --explicit-package-bases vs setuptools
<p>I’ve a project structured as follows:</p> <pre><code>. ├── hello │ ├── __init__.py │ └── animal.py ├── tests │ ├── __init__.py │ └── test_animal.py ├── README └── pyproject.toml </code></pre> <p>This is just a personal Python library, and doesn’t need to be published or distributed. The usage consists of running pytest and mypy from the root directory.</p> <p>Among other things, the <code>pyproject.toml</code> contains the following sections:</p> <pre><code>[project.optional-dependencies] test = [ &quot;pytest&quot;, ] lint = [ &quot;ruff&quot;, &quot;mypy&quot;, ] [tool.mypy] exclude = [ 'venv', ] ignore_errors = false warn_return_any = true disallow_untyped_defs = true </code></pre> <p>I install the dependencies locally as follows:</p> <pre><code>% $(brew --prefix python)/bin/python3 -m venv ./venv % ./venv/bin/python -m pip install --upgrade pip '.[test]' '.[lint]' </code></pre> <p>But my GitHub CI fails with the following error:</p> <pre><code>hello/__init__.py: error: Duplicate module named &quot;hello&quot; (also at &quot;./build/lib/hello/__init__.py&quot;) hello/__init__.py: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#mapping-file-paths-to-modules for more info hello/__init__.py: note: Common resolutions include: a) using `--exclude` to avoid checking one of them, b) adding `__init__.py` somewhere, c) using `--explicit-package-bases` or adjusting MYPYPATH Found 1 error in 1 file (errors prevented further checking) Error: Process completed with exit code 2.
</code></pre> <p>As suggested, running mypy with <code>--explicit-package-bases</code> fixes this problem, but so does adding the following section to <code>pyproject.toml</code>:</p> <pre><code>[tool.setuptools] py-modules = [] </code></pre> <p>I’ve reviewed the <a href="https://mypy.readthedocs.io/en/stable/command_line.html" rel="nofollow noreferrer">mypy</a> and <a href="https://setuptools.pypa.io/en/latest/userguide/pyproject_config.html" rel="nofollow noreferrer">setuptools</a> documentation, but am not sure which of the two is better suited to my purpose, or why either is even necessary. As mentioned earlier, I’m not trying to publish or distribute this as a Python package.</p> <p>Which of the two configurations is the recommended way to go, and why?</p>
<python><setuptools><mypy>
2025-01-07 19:59:34
1
25,239
Abhijit Sarkar
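On the CI error itself: the duplicate copy lives under ./build/lib/hello, a directory that a setuptools-backed `pip install .` can leave behind. A third, equivalent fix (a sketch; the `build` entry is the only addition to the question's own config) is to exclude that directory from mypy:

```toml
[tool.mypy]
exclude = [
    'venv',
    'build',   # ignore the copy that `pip install .` may leave under ./build
]
ignore_errors = false
warn_return_any = true
disallow_untyped_defs = true
```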
79,337,116
1,106,952
Trying to understand how Flask sets the Location string in the response header
<p>I have a uwsgi application behind an nginx reverse proxy. In Flask I was originally trying to see what Location value was being set in the response headers, so I added:</p> <pre><code>@app.after_request def add_header(response): print(response.headers) return response </code></pre> <p>And in my uwsgi logs I see:</p> <pre><code>Jan 07 13:54:21 dev1 uwsgi[133646]: Content-Type: text/html; charset=utf-8 Jan 07 13:54:21 dev1 uwsgi[133646]: Content-Length: 208 Jan 07 13:54:21 dev1 uwsgi[133646]: Location: / </code></pre> <p>But in a packet capture on the unix socket between uwsgi and nginx I see:</p> <pre><code> 43 6f 6e 74 65 6e 74 2d 54 79 70 65 3a 20 74 65 Content-Type: te 78 74 2f 68 74 6d 6c 3b 20 63 68 61 72 73 65 74 xt/html; charset 3d 75 74 66 2d 38 0d 0a =utf-8.. 43 6f 6e 74 65 6e 74 2d 4c 65 6e 67 74 68 3a 20 Content-Length: 32 30 38 0d 0a 208.. 4c 6f 63 61 74 69 6f 6e 3a 20 68 74 74 70 3a 2f Location: http:/ &lt;url lines redacted&gt; 6d 2f 0d 0a m/.. 53 65 74 2d 43 6f 6f 6b 69 65 3a 20 73 65 73 73 Set-Cookie: </code></pre> <p>I'm trying to figure out where that additional URL information is added to the Location header.</p> <p>As is often the case, after writing this out and posting the question I realized that the uwsgi_params in nginx might be affecting this. That was in fact the case. I am leaving this open for my own education: I assume the replacement of &quot;Location: /&quot; with &quot;Location: PROTO://URL/&quot; happens in uwsgi, not Flask, and I would like to know more about that.</p> <p>For anyone who stumbles in here via Google: the original problem was that some browsers were not sending cookies during a redirect. It turned out to be because the Location header specified the http protocol on an https connection, which happened because both nginx and uwsgi sit behind a load balancer that off-loads https. Hard-coding https in the uwsgi_params file caused the correct protocol to be returned and fixed the issue.</p>
<python><flask><uwsgi>
2025-01-07 19:22:27
0
1,761
TheDavidFactor
79,337,114
28,063,240
Detect if Tag is a block-level element?
<p>How can I check if a BeautifulSoup Tag is a block-level element (e.g. <code>&lt;p&gt;</code>, <code>&lt;div&gt;</code>, <code>&lt;h2&gt;</code>), or a &quot;phrase content&quot; element like <code>&lt;span&gt;</code>, <code>&lt;strong&gt;</code>?</p> <p>Basically I want to have a function that returns True for any Tag that is allowed inside of <code>&lt;p&gt;</code> tag according to the HTML spec, and false for any Tag that is not allowed inside of a <code>&lt;p&gt;</code> tag.</p> <p>I'm asking this question because I don't want to hardcode the list of allowed tags myself, but I can't find anything from <code>bs4</code> or <code>html</code> docs about judging whether a Tag is phrasing content or not.</p> <p>BeautifulSoup already knows which elements are allowed inside of <code>&lt;p&gt;</code> and which are not:</p> <pre><code>&gt;&gt;&gt; BeautifulSoup('&lt;p&gt;&lt;h2&gt;') &lt;html&gt;&lt;body&gt;&lt;p&gt;&lt;/p&gt;&lt;h2&gt;&lt;/h2&gt;&lt;/body&gt;&lt;/html&gt; &gt;&gt;&gt; BeautifulSoup('&lt;p&gt;&lt;em&gt;') &lt;html&gt;&lt;body&gt;&lt;p&gt;&lt;em&gt;&lt;/em&gt;&lt;/p&gt;&lt;/body&gt;&lt;/html&gt; </code></pre> <p>I would also be happy to use Python's <code>html</code> module if it can give me the answer.</p>
<python><html><beautifulsoup>
2025-01-07 19:20:52
3
404
Nils
79,337,064
210,867
How to run async code in IPython startup files?
<p>I have set <code>IPYTHONDIR=.ipython</code>, and created a startup file at <code>.ipython/profile_default/startup/01_hello.py</code>. Now, when I run <code>ipython</code>, it executes the contents of that file as if they had been entered into the IPython shell.</p> <p>I can run sync code this way:</p> <pre class="lang-py prettyprint-override"><code># contents of 01_hello.py print( &quot;hello!&quot; ) </code></pre> <pre class="lang-bash prettyprint-override"><code>$ ipython Python 3.12.0 (main, Nov 12 2023, 10:40:37) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help. hello In [1]: </code></pre> <p>I can also run async code directly in the shell:</p> <pre class="lang-py prettyprint-override"><code># contents of 01_hello.py print( &quot;hello!&quot; ) async def foo(): print( &quot;foo&quot; ) </code></pre> <pre class="lang-bash prettyprint-override"><code>$ ipython Python 3.12.0 (main, Nov 12 2023, 10:40:37) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help. hello In [1]: await foo() foo In [2]: </code></pre> <p>However, I cannot run async code in the startup file, even though it's supposed to be as if that code was entered into the shell:</p> <pre class="lang-py prettyprint-override"><code># contents of 01_hello.py print( &quot;hello!&quot; ) async def foo(): print( &quot;foo&quot; ) await foo() </code></pre> <pre class="lang-bash prettyprint-override"><code>$ ipython Python 3.12.0 (main, Nov 12 2023, 10:40:37) [GCC 11.4.0] Type 'copyright', 'credits' or 'license' for more information IPython 8.31.0 -- An enhanced Interactive Python. Type '?' for help. 
[TerminalIPythonApp] WARNING | Unknown error in handling startup files: File ~/proj/.ipython/profile_default/startup/01_imports.py:5 await foo() ^ SyntaxError: 'await' outside function </code></pre> <p>Question: Why doesn't this work, and is there a way to run async code in the startup file without explicitly starting a new event loop just for that? (<code>asyncio.run()</code>)</p> <p>Doing that wouldn't make sense, since that event loop would have to close by the end of the file, which makes it impossible to do any initialization work that involves context vars (which is where Tortoise-ORM stores its connections), which defeats the purpose.</p> <p>Or stated differently: How can I access the event loop that IPython starts for the benefit of the interactive shell?</p>
<python><asynchronous><python-asyncio><ipython>
2025-01-07 18:52:25
3
8,548
odigity
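The SyntaxError above has a precise stdlib explanation, even if it doesn't expose IPython's own loop: startup files are executed as plain module code, while the interactive shell compiles input with a special flag that permits top-level `await`. A sketch demonstrating the difference (made-up source string; `asyncio.run` here only stands in for whatever loop ends up driving the code):

```python
import ast
import asyncio
import inspect

SRC = "async def foo():\n    return 'foo'\n\nresult = await foo()\n"

# Plain compile/exec, which is how startup files are run, rejects it:
try:
    compile(SRC, "<startup>", "exec")
except SyntaxError as exc:
    print("plain compile:", exc.msg)

# The shell compiles with this flag, which is why the same lines work there:
code = compile(SRC, "<shell>", "exec", flags=ast.PyCF_ALLOW_TOP_LEVEL_AWAIT)
assert code.co_flags & inspect.CO_COROUTINE  # the module body is a coroutine

# Evaluating such a code object yields a coroutine some event loop must drive:
namespace = {}
asyncio.run(eval(code, namespace))
print(namespace["result"])
```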
79,336,883
6,160,119
How to add a prefix to layer names in AutoCAD using ezdxf
<p>I am trying to add a prefix to the layer names in a <code>.dxf</code> file. Here is my code:</p> <pre class="lang-py prettyprint-override"><code>import ezdxf doc = ezdxf.readfile('infile.dxf') layer_names = [layer.dxf.name for layer in doc.layers] for old_name in layer_names: try: layer = doc.layers.get(old_name) new_name = 'MY_PREFIX_' + old_name layer.rename(new_name) print(f'Layer &quot;{old_name}&quot; renamed to &quot;{new_name}&quot;.') except ValueError as exc: print(exc) doc.saveas('outfile.dxf') </code></pre> <p>The layer list of <code>infile.dxf</code> looks like this:</p> <p><a href="https://i.sstatic.net/oTiu5C8A.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTiu5C8A.jpg" alt="Layer list of infile.dxf" /></a></p> <p>When I run the snippet above I get this output:</p> <pre><code>Can not rename layer &quot;0&quot;. Layer &quot;Layer1&quot; renamed to &quot;MY_PREFIX_Layer1&quot;. Layer &quot;Layer2&quot; renamed to &quot;MY_PREFIX_Layer2&quot;. Layer &quot;Layer3&quot; renamed to &quot;MY_PREFIX_Layer3&quot;. Layer &quot;Layer4&quot; renamed to &quot;MY_PREFIX_Layer4&quot;. Can not rename layer &quot;Defpoints&quot;. </code></pre> <p>Apparently the code works fine, but when I open <code>outfile.dxf</code> with AutoCAD, the list of layers includes two unwanted layers:</p> <ol> <li><code>Defpoints</code> (I can live with this).</li> <li><code>Layer1</code>, which is empty. Please note that <code>Layer1</code> was the current layer when I saved <code>infile.dxf</code> from the AutoCAD application.</li> </ol> <p><a href="https://i.sstatic.net/7Rm7gzeK.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7Rm7gzeK.jpg" alt="Layer list of outfile.dxf" /></a></p> <p>How can I avoid that these two layers show up in the list of layers of the Layer Property Manager?</p>
<python><autocad><dxf><ezdxf>
2025-01-07 17:41:28
1
13,793
Tonechas
79,336,866
25,625,672
Half-precision in ctypes
<p>I need to be able to seamlessly interact with <a href="https://en.wikipedia.org/wiki/Half-precision_floating-point_format" rel="noreferrer">half-precision</a> floating-point values in a ctypes structure. I have a working solution, but I'm dissatisfied with it:</p> <pre class="lang-py prettyprint-override"><code>import ctypes import struct packed = struct.pack('&lt;Ife', 4, 2.3, 1.2) print('Packed:', packed.hex()) class c_half(ctypes.c_ubyte*2): @property def value(self) -&gt; float: result, = struct.unpack('e', self) return result class Triple(ctypes.LittleEndianStructure): _pack_ = 1 _fields_ = ( ('index', ctypes.c_uint32), ('x', ctypes.c_float), ('y', c_half), ) unpacked = Triple.from_buffer_copy(packed) print(unpacked.y.value) </code></pre> <pre class="lang-none prettyprint-override"><code>Packed: 0400000033331340cd3c 1.2001953125 </code></pre> <p>I am dissatisfied because, unlike with <code>c_float</code>, <code>c_uint32</code> etc., there is no automatic coercion of the buffer data to the Python primitive (<code>float</code> and <code>int</code> respectively for those examples); I would expect <code>float</code> in this half-precision case.</p> <p>Reading into the CPython source, the built-in types are subclasses of <a href="https://github.com/python/cpython/blob/3.13/Modules/_ctypes/_ctypes.c#L5163" rel="noreferrer">_SimpleCData</a>:</p> <pre class="lang-c prettyprint-override"><code>static PyType_Spec pycsimple_spec = { .name = &quot;_ctypes._SimpleCData&quot;, .flags = (Py_TPFLAGS_DEFAULT | Py_TPFLAGS_BASETYPE | Py_TPFLAGS_IMMUTABLETYPE), .slots = pycsimple_slots, }; </code></pre> <p>and only declare a <code>_type_</code>, for instance</p> <pre class="lang-py prettyprint-override"><code>class c_float(_SimpleCData): _type_ = &quot;f&quot; </code></pre> <p>However, attempting the naive</p> <pre class="lang-py prettyprint-override"><code>class c_half(ctypes._SimpleCData): _type_ = 'e' </code></pre> <p>results in</p> <pre class="lang-none 
prettyprint-override"><code>AttributeError: class must define a '_type_' attribute which must be a single character string containing one of 'cbBhHiIlLdfuzZqQPXOv?g'. </code></pre> <p>as defined by <a href="https://github.com/python/cpython/blob/3.13/Modules/_ctypes/_ctypes.c#L1767" rel="noreferrer">SIMPLE_TYPE_CHARS</a>:</p> <pre class="lang-c prettyprint-override"><code>static const char SIMPLE_TYPE_CHARS[] = &quot;cbBhHiIlLdfuzZqQPXOv?g&quot;; // ... if (!strchr(SIMPLE_TYPE_CHARS, *proto_str)) { PyErr_Format(PyExc_AttributeError, &quot;class must define a '_type_' attribute which must be\n&quot; &quot;a single character string containing one of '%s'.&quot;, SIMPLE_TYPE_CHARS); goto error; } </code></pre> <p>The end goal is to have a <code>c_half</code> type that I can use with the exact same API as the other built-in <code>ctypes.c_</code> classes, ideally without myself writing a C module. I <em>think</em> I need to mimic much of the behaviour seen in the neighbourhood of <a href="https://github.com/python/cpython/blob/3.13/Modules/_ctypes/_ctypes.c#L2201" rel="noreferrer">PyCSimpleType_init</a> but that code is difficult for me to follow.</p>
<python><floating-point><ctypes>
2025-01-07 17:36:28
1
601
avigt
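As a side note to the question above, the quantization it observes (1.2 becoming 1.2001953125) comes purely from the binary16 representation that struct's `'e'` format packs, independent of any ctypes machinery:

```python
import struct

# Pack a Python float into IEEE 754 binary16 (half precision).
raw = struct.pack('<e', 1.2)
print(raw.hex())        # same trailing bytes as the question's packed dump

# Unpacking shows the nearest representable half-precision value.
value, = struct.unpack('<e', raw)
print(value)
```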
79,336,815
11,405,174
Merge two dataframes together dependent on one column containing the values of another column
<p>Let's say I have Dataframes A and B that look like this:</p> <pre class="lang-none prettyprint-override"><code>A: | B: Code Price | Name ID 1 ABC1210 128.14 | 1 TEXTABC1211987654 351 (contains A 2) 2 ABC1211 3620.10 | 2 SAMPLE12345 20 (doesn't contain any) 3 ABC1212 96.44 | 3 ABC1220 SAMPLETEXT 164 (contains A 4) 4 ABC1220 35.78 | 4 Sample ABC1210 Text 776 (contains A 1) </code></pre> <p>I'd like to combine Code and Name, but I can't find a way to search for Code values in Name without the strings exactly matching.</p> <p>My prior research shows both <code>df.str.contains(value)</code> and <code>df.isin(value)</code> as potentially useful, but both of these can only work on one <code>value</code> at once - I can't do <code>A[&quot;Code&quot;].isin(B[&quot;Name&quot;])</code>, for example.</p> <p>The conclusion I've come to is that I'd have to iterate over every value of the <code>Code</code> column in A, check for any corresponding values in the <code>Name</code> column of B, then rename that value to A's value, with the end goal of using <code>df.merge()</code> to put the two Dataframes together. This seems absurdly slow and against the purpose of the <code>pandas</code> module, which is meant to avoid this kind of iteration.</p> <p>Is there any way to do what I'm describing with <code>pandas</code> or <code>numpy</code>? Is there a different implementation that I'm missing?</p>
<python><python-3.x><pandas>
2025-01-07 17:15:57
0
464
Corsaka
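One vectorized approach to the question above (a sketch, with the frames re-typed from the question): build a single alternation regex from all of A's codes, extract the match from B's names with `str.extract`, then merge on the extracted column. This avoids the per-value Python loop entirely.

```python
import re
import pandas as pd

A = pd.DataFrame({"Code": ["ABC1210", "ABC1211", "ABC1212", "ABC1220"],
                  "Price": [128.14, 3620.10, 96.44, 35.78]})
B = pd.DataFrame({"Name": ["TEXTABC1211987654", "SAMPLE12345",
                           "ABC1220 SAMPLETEXT", "Sample ABC1210 Text"],
                  "ID": [351, 20, 164, 776]})

# One regex of all codes, longest first so longer codes win over any that
# happen to be prefixes of them; re.escape guards special characters.
codes = sorted(A["Code"], key=len, reverse=True)
pattern = "(" + "|".join(map(re.escape, codes)) + ")"

# Vectorized extraction: NaN where no code is contained in the name.
B["Code"] = B["Name"].str.extract(pattern, expand=False)

merged = B.merge(A, on="Code", how="left")
print(merged)
```

Rows of B whose Name contains no code keep NaN in Code and Price after the left merge.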
79,336,754
2,243,490
pytest not maintaining the module structure
<p>pytest is not maintaining the module structure</p> <p><strong>Folder structure</strong></p> <pre><code>hello_world β”œβ”€β”€ config.py β”œβ”€β”€ main.py └── tests β”œβ”€β”€ config.py └── test_main.py </code></pre> <p><strong>./config.py</strong></p> <pre><code>name=&quot;app-config&quot; </code></pre> <p><strong>./main.py</strong></p> <pre><code>import config as app_config import tests.config as test_config def print_config_name(): print(app_config.name) print(test_config.name) if __name__ == &quot;__main__&quot;: print_config_name() </code></pre> <p><strong>./tests/config.py</strong></p> <pre><code>name=&quot;test-config&quot; </code></pre> <p><strong>./tests/test_main.py</strong></p> <pre><code>import main def test_main(): main.print_config_name() </code></pre> <p><strong>App Run Output:</strong> <code>$hello_world&gt; python main.py</code></p> <pre><code>app-config test-config </code></pre> <p><strong>Test Run Output:</strong> <code>$hello_world&gt; pytest -sss -vvv</code></p> <pre><code>tests/test_main.py::test_main test-config test-config PASSED </code></pre> <p><strong>Expected Test Run Output:</strong> <code>$hello_world&gt; pytest -sss -vvv</code></p> <pre><code>tests/test_main.py::test_main app-config test-config PASSED </code></pre>
<python><python-3.x><pytest>
2025-01-07 16:53:48
0
1,886
Dinesh
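Whatever pytest's exact rootdir/sys.path insertion rules turn out to be for this layout, the symptom above is classic module shadowing: the same `import config` statement binds whichever `config.py` sits on the earliest matching `sys.path` entry. A stdlib-only sketch of the mechanism, using a hypothetical temporary layout mirroring the question's:

```python
import importlib
import os
import sys
import tempfile

# Recreate the layout: two different modules both named `config`.
root = tempfile.mkdtemp()
tests_dir = os.path.join(root, "tests")
os.makedirs(tests_dir)
with open(os.path.join(root, "config.py"), "w") as fh:
    fh.write('name = "app-config"\n')
with open(os.path.join(tests_dir, "config.py"), "w") as fh:
    fh.write('name = "test-config"\n')
importlib.invalidate_caches()

# With the project root first on sys.path, the root config wins:
sys.modules.pop("config", None)   # make sure nothing is cached
sys.path.insert(0, root)
config = importlib.import_module("config")
print(config.name)                # app-config

# If a test runner ends up putting tests/ ahead of the project root, the
# very same import silently resolves to the other file:
sys.path.insert(0, tests_dir)
del sys.modules["config"]
importlib.invalidate_caches()
config = importlib.import_module("config")
print(config.name)                # test-config
```

So the fix is usually to give the two files distinct module paths (e.g. import `tests.config` only via its package, as main.py already does), rather than relying on path order.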