QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
76,780,677 | 6,315,736 | Python common cx_Oracle connection in multiple files | <p>Please guide me; I think there is some gap that I am not able to see.
I have used cx_Oracle to connect to my Oracle database and extract data. Now I need to use this connection in multiple different scripts. However, I am not able to establish the connection in the other files. I searched this topic and found a post that suggested having a common file, so I created <strong>conn_var.py</strong> that contains the connection string. This file looks like below:</p>
<pre><code>import cx_Oracle
import pandas as pd

report_id = 8412
country = 'es'
period = '2023'
data_schema = 'SUPPORT'
type = 'sup1'

if type == 'sup1':
    conn1 = cx_Oracle.connect('abc','xyz','//....')
elif type == 'sup2':
    conn1 = cx_Oracle.connect('pqr','klm','//....')
elif type == 'sup3':
    conn1 = cx_Oracle.connect('lmn','hij','//....')
</code></pre>
<p>Now I include this file in my first file, and it works fine: I am able to extract. <strong>file1.py</strong></p>
<pre><code>import cx_Oracle
import pandas as pd
import file2
from conn_var import *
extract1 = pd.read_sql("select * from SUPPORT.report_d",con=conn1)
</code></pre>
<p>But when I follow the same approach in file2, it gives me an error. Note that I import file2 in file1.</p>
<p><strong>file2.py</strong> looks like below -</p>
<pre><code>import cx_Oracle
import pandas as pd
from conn_var import *
extract2 = pd.read_sql("select * from SUPPORT.report_f",con=conn1)
</code></pre>
<p>Please help! Where am I going wrong, and why does it not work for file2? Also, in file2 I do not get the variables that I declared in <strong>conn_var.py</strong>, like <strong>report_id, country, period, etc.</strong> How can I declare them globally so that they are available in all files?</p>
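A hedged sketch of one way to restructure <strong>conn_var.py</strong> (credentials and DSNs below are placeholders, not the real ones): wrap the connection in a lazy getter so importing the module has no side effects, and cache it so every importing file receives the same connection object. The `connect` parameter is injectable purely so the cache logic can be exercised without a real Oracle server; in real use it defaults to `cx_Oracle.connect`.

```python
# Hedged sketch of a shared connection module; credentials and DSNs are
# placeholders. `connect` is injectable so the caching can be tested
# without a database; it defaults to cx_Oracle.connect in real use.
_conn_cache = {}

_CREDENTIALS = {
    'sup1': ('abc', 'xyz', '//host/service1'),
    'sup2': ('pqr', 'klm', '//host/service2'),
    'sup3': ('lmn', 'hij', '//host/service3'),
}

def get_connection(conn_type='sup1', connect=None):
    """Return one shared connection per type, creating it on first use."""
    if connect is None:
        import cx_Oracle
        connect = cx_Oracle.connect
    if conn_type not in _conn_cache:
        user, password, dsn = _CREDENTIALS[conn_type]
        _conn_cache[conn_type] = connect(user, password, dsn)
    return _conn_cache[conn_type]
```

Each script would then do `from conn_var import get_connection` and `conn1 = get_connection('sup1')`; because Python caches imported modules, the plain variables (`report_id`, `country`, ...) are also shared the same way via `import conn_var` and `conn_var.report_id`.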
| <python><sql><global-variables><database-connection><cx-oracle> | 2023-07-27 14:20:01 | 0 | 729 | user1412 |
76,780,466 | 14,627,352 | Airflow task XComArg result not found | <p>I have a DAG which is built like this:</p>
<pre><code>...
default_args = {
    'owner': 'airflow',
}

@dag(default_args=default_args, start_date=datetime.datetime(2021, 1, 1), schedule_interval=None, tags=['mimir'])
def data_comparism_psql():
    @task(task_id="pull_release_data_from_blob")
    ....

    @task(task_id="deviation_calculation_metrik_calls_mean_execution_time_std_dev")
    def compare_release_files(release_data_export: List[Dict], querytype: Querytype):
        ...

    ....

    release_data_calls = get_files_from_blob(querytype=Querytype.CALLS)
    report_calls = compare_release_files(release_data_calls, querytype=Querytype.CALLS)["significant_differences"]
    report_calls_std_dev = compare_release_files(release_data_calls, querytype=Querytype.STD_DEV)["significant_differences"]

    release_data_mean_exec_time = get_files_from_blob(querytype=Querytype.MEAN_EXEC_TIME)
    report_mean_exec_time = compare_release_files(release_data_mean_exec_time, querytype=Querytype.MEAN_EXEC_TIME)["significant_differences"]
    report_mean_exec_time_std_dev = compare_release_files(release_data_mean_exec_time, querytype=Querytype.STD_DEV)["significant_differences"]

    create_release_report([report_calls, report_mean_exec_time, report_calls_std_dev, report_mean_exec_time_std_dev], "Queries")

data_comparism_dag = data_comparism_psql()
</code></pre>
<p>Somehow in the <code>create_release_report</code> it fails:</p>
<pre><code>...
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/utils/session.py", line 75, in wrapper
return func(*args, session=session, **kwargs)
File "/home/airflow/.local/lib/python3.7/site-packages/airflow/models/xcom_arg.py", line 342, in resolve
raise XComNotFound(ti.dag_id, task_id, self.key)
airflow.exceptions.XComNotFound: XComArg result from deviation_calculation_metrik_calls_mean_execution_time_std_dev at data_comparism_psql with key="significant_differences" is not found!
</code></pre>
<p>If I check every step in the <code>Airflow UI</code>, it's green; additionally, checking the XCom of the referenced task, I can find <code>significant_differences</code>...</p>
<p>Any tip would be helpful. I tried several approaches, also with task dependencies, but it didn't work.</p>
| <python><airflow> | 2023-07-27 13:54:20 | 1 | 442 | AI Humanizer |
76,780,057 | 10,232,932 | datetime struggle with a pandas dataframe | <p>I am running the following <strong>code</strong>:</p>
<pre><code>filled_df = (ts.set_index(date_col)
               .groupby(grouping)
               .apply(lambda d: d.reindex(
                   pd.date_range(
                       mf_heuristic(d.index, champion_end),
                       pd.to_datetime(dt.datetime.now()),
                       freq='MS')
                   )
               )
               .drop(grouping, axis=1)
               .reset_index(grouping)
               .fillna(0)
             )
</code></pre>
<p><em>with the predefinitions:</em></p>
<pre><code>date_col = 'timepoint'
grouping = ['Level', 'Kontogruppe']
champion_end = fc_date - pd.DateOffset(months=13)
print(champion_end)
2021-03-01 00:00:00
print(ts)
Kontogruppe Level timepoint data_value
0 A ZBFO 2020-03-01 1
1 B ZBFO 2020-07-01 1612.59
2 A ZBFO 2020-08-01 1046.1
print(ts.dtypes)
Kontogruppe object
Level object
timepoint object
data_value float64
dtype: object
</code></pre>
<pre><code>def mf_heuristic(date, champion_end):
    if min(date) < champion_end:
        start_date = min(date)
    else:
        start_date = min(date)
    return start_date
</code></pre>
<p>When I run the code, I get the following <strong>error message</strong>:</p>
<blockquote>
<p>TypeError: Cannot compare Timestamp with datetime.date. Use ts ==
pd.Timestamp(date) or ts.date() == date instead.</p>
</blockquote>
<p>This error occurs on the line with the <code>mf_heuristic</code> function.</p>
<p>If I try to put this before it:</p>
<pre><code>ts[date_col] = pd.Timestamp(ts[date_col])
</code></pre>
<p>I get the error message:</p>
<blockquote>
<p>TypeError: function missing required argument 'year' (pos 1)</p>
</blockquote>
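A minimal sketch of the dtype fix, on sample data shaped like `ts` above: `pd.Timestamp` builds one scalar timestamp (hence the "missing required argument 'year'" error when handed a whole column), while `pd.to_datetime` converts a Series element-wise.

```python
import pandas as pd

# Sample frame shaped like `ts`; `timepoint` starts as object (strings)
ts = pd.DataFrame({
    "Kontogruppe": ["A", "B", "A"],
    "Level": ["ZBFO", "ZBFO", "ZBFO"],
    "timepoint": ["2020-03-01", "2020-07-01", "2020-08-01"],
    "data_value": [1.0, 1612.59, 1046.1],
})

# pd.Timestamp(...) expects a single scalar value.
# pd.to_datetime(...) converts the whole column:
ts["timepoint"] = pd.to_datetime(ts["timepoint"])

# Comparisons against a pd.Timestamp now work element-wise:
champion_end = pd.Timestamp("2021-03-01")
all_before = (ts["timepoint"] < champion_end).all()
```

With the column converted up front, `min(date) < champion_end` inside `mf_heuristic` compares two timestamps and no longer raises.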
| <python><pandas><datetime> | 2023-07-27 13:06:38 | 1 | 6,338 | PV8 |
76,779,940 | 4,718,423 | Python unittest undo patch when exception is raised | <p>This is a difficult one with a complete fake class, but the idea is the same.</p>
<p>Here I need to open a blank file, but if this method raises FileExistsError, I delete the file and call open again. If I want to test the openBlankFile method for situations where the file exists, I need to patch out the 'open' call and set its side effect to a FileExistsError. <em>But</em> when you do this, the second time 'open' is called, it is obviously also patched. How do you go about this type of issue?</p>
<h3>File <em>mockPatch.py</em></h3>
<pre><code>import unittest
from unittest import TestCase
from unittest.mock import patch
from os import remove


class WriteModule(object):
    def openBlankFile(self):
        try:
            with open("Foo.txt", "x"):
                print("created")
        except FileExistsError:
            remove("Foo.txt")
            open("Foo.txt", "x")
            print("removed and created")


class testWriteModule(TestCase):
    @patch("mockPatch.open", side_effect=FileExistsError)
    @patch("mockPatch.remove")
    def testWriteModuleOpen(self, mock_remove, mock_open):
        WriteModule().openBlankFile()


if __name__ == "__main__":
    unittest.main()
# EOF
</code></pre>
<p>Run:</p>
<pre class="lang-none prettyprint-override"><code>python -m unittest mockPatch
</code></pre>
<p>Running this does not seem to mock the <code>FileExistsError</code> at all.</p>
<p><a href="https://i.sstatic.net/jLVdm.png" rel="nofollow noreferrer">Error output</a></p>
<p>What is up with this piece of code?</p>
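One common approach, sketched below with a standalone function rather than the class above: give <code>side_effect</code> a <em>list</em>, which mock consumes call by call, so the first <code>open</code> raises and the second returns a usable handle. File name and function name here are illustrative.

```python
import os
from unittest.mock import MagicMock, patch

def open_blank_file(path="Foo.txt"):
    """Same shape as openBlankFile above, returning what it printed."""
    try:
        with open(path, "x"):
            return "created"
    except FileExistsError:
        os.remove(path)
        with open(path, "x"):
            return "removed and created"

# A list side_effect is consumed one item per call: an exception class is
# raised, anything else is returned. So call 1 raises, call 2 succeeds.
with patch("os.remove") as mock_remove, \
        patch("builtins.open", side_effect=[FileExistsError, MagicMock()]):
    result = open_blank_file()
```

`MagicMock()` works as the second return value because it supports the context-manager protocol out of the box.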
| <python><unit-testing><mocking><patch> | 2023-07-27 12:53:41 | 1 | 1,446 | hewi |
76,779,874 | 504,383 | Python Django: TypeError: cannot unpack non-iterable MatchAll object | <p>I am facing the error below when trying to query using 'Q' in a viewset. The same query works without any issues in a management command file.</p>
<p>My view.</p>
<pre><code>@permission_classes((AllowAny,))
class ClipartViewSet(viewsets.GenericViewSet):
    serializer_class = ClipartSerializer
    queryset = Clipart.objects.filter(is_active=True).all()

    def list(self, request, **kwargs):
        # Some extra logic
        # qs = Clipart.objects.filter(name="Apes")  # This line will work without any issues
        qs = Clipart.objects.filter(Q(name="Apes") | Q(name="Dog"))  # This line will show error
        print(qs)
        return super(ClipartViewSet, self).list(self, request, **kwargs)
</code></pre>
<h2>Error:</h2>
<pre><code>Internal Server Error: /api/s/configurator/cliparts
backend_1 | Traceback (most recent call last):
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/exception.py", line 47, in inner
backend_1 | response = get_response(request)
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/core/handlers/base.py", line 181, in _get_response
backend_1 | response = wrapped_callback(request, *callback_args, **callback_kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
backend_1 | return view_func(*args, **kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/viewsets.py", line 125, in view
backend_1 | return self.dispatch(request, *args, **kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 509, in dispatch
backend_1 | response = self.handle_exception(exc)
backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 469, in handle_exception
backend_1 | self.raise_uncaught_exception(exc)
backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 480, in raise_uncaught_exception
backend_1 | raise exc
backend_1 | File "/usr/local/lib/python3.7/site-packages/rest_framework/views.py", line 506, in dispatch
backend_1 | response = handler(request, *args, **kwargs)
backend_1 | File "/backend/mycomp/apps/ecommerce/configurator/views/design_views.py", line 109, in list
backend_1 | qs = Clipart.objects.filter(Q(name="Apes") | Q(name="Dog")) # This line will show error
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/manager.py", line 85, in manager_method
backend_1 | return getattr(self.get_queryset(), name)(*args, **kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/safedelete/queryset.py", line 72, in filter
backend_1 | return super(SafeDeleteQueryset, queryset).filter(*args, **kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 941, in filter
backend_1 | return self._filter_or_exclude(False, args, kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 961, in _filter_or_exclude
backend_1 | clone._filter_or_exclude_inplace(negate, args, kwargs)
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/query.py", line 968, in _filter_or_exclude_inplace
backend_1 | self._query.add_q(Q(*args, **kwargs))
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/query.py", line 1391, in add_q
backend_1 | clause, _ = self._add_q(q_object, self.used_aliases)
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/query.py", line 1413, in _add_q
backend_1 | split_subq=split_subq, check_filterable=check_filterable,
backend_1 | File "/usr/local/lib/python3.7/site-packages/django/db/models/sql/query.py", line 1281, in build_filter
backend_1 | arg, value = filter_expr
backend_1 | TypeError: cannot unpack non-iterable MatchAll object
</code></pre>
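For what it's worth, `MatchAll` is not a Django class, which suggests the name `Q` in the view module was rebound by a later import from another (search-oriented) library. A minimal dependency-free sketch of how such shadowing produces exactly this unpacking error; all class names below are stand-ins, not real imports:

```python
class MatchAll:
    """Stand-in for a search library's query object (not Django's Q)."""
    def __or__(self, other):
        return self

def Q(**kwargs):
    """Stand-in for the *other* library's Q factory that shadowed Django's."""
    return MatchAll()

expr = Q(name="Apes") | Q(name="Dog")

# Django's build_filter does `arg, value = filter_expr` for anything that
# is not a django Q object:
try:
    arg, value = expr
except TypeError as exc:
    message = str(exc)
```

Checking the imports at the top of `design_views.py` for a second `Q` (or a wildcard import) and aliasing it, e.g. `from django.db.models import Q as DjangoQ`, removes the ambiguity.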
| <python><django><django-models><django-rest-framework><django-views> | 2023-07-27 12:45:04 | 1 | 1,921 | Arun SS |
76,779,871 | 1,175,788 | How do I do a SQL-like window function in Pandas? | <p>I have data that looks like the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>org_id</th>
<th>org_name</th>
<th>person_id</th>
<th>date</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>Microsoft</td>
<td>453241</td>
<td>1/1/05</td>
</tr>
<tr>
<td>222</td>
<td>Zebra</td>
<td>21341</td>
<td>6/1/95</td>
</tr>
<tr>
<td>333</td>
<td>Company</td>
<td>42343241</td>
<td>1/1/23</td>
</tr>
<tr>
<td>111</td>
<td>Microsoft</td>
<td>098678</td>
<td>2/1/13</td>
</tr>
<tr>
<td>111</td>
<td>Microsoft Inc</td>
<td>6786</td>
<td>6/1/23</td>
</tr>
<tr>
<td>222</td>
<td>Zebra</td>
<td>546</td>
<td>4/1/06</td>
</tr>
<tr>
<td>333</td>
<td>Company</td>
<td>vcxv313</td>
<td>2/1/23</td>
</tr>
<tr>
<td>222</td>
<td>NewZebra</td>
<td>876</td>
<td>4/1/23</td>
</tr>
<tr>
<td>333</td>
<td>Company</td>
<td>432gf</td>
<td>4/1/23</td>
</tr>
</tbody>
</table>
</div>
<p>And I want to run Pandas functions similar to this type of SQL query:</p>
<pre><code>SELECT org_id, org_name
FROM (
SELECT ROW_NUMBER() OVER(PARTITION BY org_id ORDER BY date DESC) as row_num,
org_id, org_name
FROM dataframe
)
WHERE row_num = 1
</code></pre>
<p>The result set should be:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>org_id</th>
<th>org_name</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>Microsoft Inc</td>
</tr>
<tr>
<td>222</td>
<td>NewZebra</td>
</tr>
<tr>
<td>333</td>
<td>Company</td>
</tr>
</tbody>
</table>
</div>
<p>I'm having trouble with the Pandas groupby syntax and aggregate functions. Any help would be appreciated.</p>
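A hedged sketch of one common pandas equivalent, reproducing the sample data above: `sort_values` plus `groupby(...).cumcount()` plays the role of `ROW_NUMBER() OVER (PARTITION BY ... ORDER BY ...)`.

```python
import pandas as pd

df = pd.DataFrame({
    "org_id":   [111, 222, 333, 111, 111, 222, 333, 222, 333],
    "org_name": ["Microsoft", "Zebra", "Company", "Microsoft", "Microsoft Inc",
                 "Zebra", "Company", "NewZebra", "Company"],
    "person_id": ["453241", "21341", "42343241", "098678", "6786",
                  "546", "vcxv313", "876", "432gf"],
    "date": ["1/1/05", "6/1/95", "1/1/23", "2/1/13", "6/1/23",
             "4/1/06", "2/1/23", "4/1/23", "4/1/23"],
})
df["date"] = pd.to_datetime(df["date"], format="%m/%d/%y")

# ROW_NUMBER() OVER (PARTITION BY org_id ORDER BY date DESC):
# cumcount numbers rows within each group in the sorted order, and the
# assignment aligns back to df by index.
df["row_num"] = (df.sort_values("date", ascending=False)
                   .groupby("org_id")
                   .cumcount() + 1)

# ... WHERE row_num = 1
latest = df.loc[df["row_num"] == 1, ["org_id", "org_name"]].sort_values("org_id")
```

A shorter idiom for this particular "keep the latest row per group" case is `df.sort_values("date").groupby("org_id", as_index=False).last()`.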
| <python><pandas><window-functions> | 2023-07-27 12:44:47 | 1 | 3,011 | simplycoding |
76,779,821 | 22,213,065 | Find files that don't exist a specific regex in even lines | <p>I have a large number of txt files in the <code>E:\Desktop\social\Output_folder</code> directory, and the files must have a format like the following list:</p>
<pre><code>Botelt
2,006,910
Classtertmates
932,977
SiretexDettegrees
740,025
PlantrthyhetAll
410,810
theGkykyulobe
316,409
NOVEMBER
1997
</code></pre>
<p><strong>This means that the files must have the following characteristics:</strong></p>
<ol>
<li>Only odd lines may contain letters.</li>
<li>Even lines must match only this regex: <code>^.*?(?<!\d)(?<!\d,)(\d{1,3}(?:,\d{3})*)(?!,?\d).*</code></li>
<li>The last non-empty line must contain only a 4-digit number like 2020 or 2014 (year format).</li>
<li>Multiple number lines (matching my regex) cannot appear consecutively.</li>
<li>Multiple letter lines cannot appear consecutively.</li>
</ol>
<p>Now I need a regex that finds files in the <code>E:\Desktop\social\Output_folder</code> directory that do <em>not</em> have the above characteristics, for example the following list:</p>
<pre><code>QrtQrt
316,935,269
Frtaceertbrtortok
220,138,444
Reertdertdertit
113,759,355
YourtretTrtuertbete
87,035,728
Tatjjuygguked
85,739,300
MyshtyhSpyrtyactye
81,000,349
Ftyryriendttyysteyr
71,734,802
560,492,430
51,682,046
Tutymrtybrtylr
51,245,350
Crtyltyatrysrtysmarytetys
41,314,645
Tjyozytonyje
38
VtyyjKyjontyjaktyje
29,011,910
JUNE
2009
</code></pre>
<p>If you look at the example above, <code>71,734,802</code>, <code>560,492,430</code> and <code>51,682,046</code> are consecutive number lines.</p>
<p>I wrote the following Python script that should check my directory's files and find files with incorrect characteristics:</p>
<pre><code>import os
import re


def is_valid_line(line, is_even):
    if is_even:
        return re.match(r'^.*?(?<!\d)(?<!\d,)(\d{1,3}(?:,\d{3})*)(?!,?\d).*$', line)
    else:
        return re.match(r'^[A-Z]', line)


def is_valid_file(file_path):
    with open(file_path, 'r') as file:
        lines = file.readlines()

    if len(lines) % 2 == 0:
        return False

    for i, line in enumerate(lines):
        is_even = i % 2 == 0
        if not is_valid_line(line.strip(), is_even):
            return False

    # Check if the last line is a four-digit number
    last_line = lines[-1].strip()
    if not re.match(r'^\d{4}$', last_line):
        return False

    return True


def find_invalid_files(directory_path):
    invalid_files = []
    for file_name in os.listdir(directory_path):
        if file_name.endswith('.txt'):
            file_path = os.path.join(directory_path, file_name)
            if not is_valid_file(file_path):
                invalid_files.append(file_name)
    return invalid_files


if __name__ == "__main__":
    directory_path = r"E:\Desktop\social\Output_folder"
    invalid_files = find_invalid_files(directory_path)

    report_file = "invalid_files_report.txt"
    with open(report_file, "w") as f:
        if invalid_files:
            f.write("The following files do not follow the specified format:\n")
            for file_name in invalid_files:
                f.write(file_name + "\n")
        else:
            f.write("All files in the directory follow the specified format.\n")

    print("Report generated. Check 'invalid_files_report.txt' for details.")
</code></pre>
<p>But my script is not working: it reports all file names as invalid.<br />
Where is the problem in my script?</p>
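For what it's worth, a likely culprit is the parity: with 0-based <code>enumerate</code>, index 0 is the <em>first</em> (1-based odd) line, so <code>i % 2 == 0</code> selects the name lines, yet the script checks exactly those against the number regex (and the sample files appear to have an even line count, which the script rejects outright). Below is a hedged sketch of a validator with the parity the spec describes; it works on a list of lines so it can be checked without real files, and the alternation itself enforces the "no consecutive number/letter lines" rules.

```python
import re

NUM_RE = re.compile(r'\d{1,3}(?:,\d{3})*')
WORD_RE = re.compile(r'[A-Za-z]+')

def is_valid_content(lines):
    """Name/number pairs, ending with a MONTH/YEAR pair -> even line count."""
    lines = [ln.strip() for ln in lines if ln.strip()]
    if len(lines) < 2 or len(lines) % 2 != 0:
        return False
    if not re.fullmatch(r'\d{4}', lines[-1]):          # final year line
        return False
    for i, line in enumerate(lines[:-1]):
        if i % 2 == 0:                                 # 1st, 3rd, ...: words
            if not WORD_RE.fullmatch(line):
                return False
        else:                                          # 2nd, 4th, ...: numbers
            if not NUM_RE.fullmatch(line):
                return False
    return True
```

With the examples from the question, `["Botelt", "2,006,910", "NOVEMBER", "1997"]` passes, while a file containing two number lines in a row fails because the second number lands on a word position.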
| <python><string><parsing> | 2023-07-27 12:38:54 | 1 | 781 | Pubg Mobile |
76,779,817 | 9,097,114 | Huggingface pipeline with langchain | <p>Hi, I am trying to do speaker diarization with the OpenAI Whisper model.</p>
<pre><code>from langchain.llms import HuggingFacePipeline
import torch
from transformers import AutoTokenizer, WhisperProcessor, AutoModelForCausalLM, pipeline, AutoModelForSeq2SeqLM
model_id = 'openai/whisper-large-v2'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = WhisperProcessor.from_pretrained(model_id)
pipe = pipeline(
"automatic-speech-recognition",
model=model,
tokenizer=tokenizer,
max_length=100
)
local_llm = HuggingFacePipeline(pipeline=pipe)
</code></pre>
<p>The error I am getting is "<strong>AttributeError: 'WhisperProcessor' object has no attribute 'config'</strong>".</p>
<p>Is there anything to change in the above code?<br />
Thanks in advance!</p>
| <python><langchain><openai-whisper> | 2023-07-27 12:38:39 | 1 | 523 | san1 |
76,779,758 | 7,228,351 | Convert 2D dataframe to 3D based upon column names | <p>I have a dataframe with this shape:</p>
<pre><code>hpa02upc00 hpa01upc01 hpa01upc00 hpa00upc02 hpa00upc01 hpa00upc00
pwr
0 58.28 58.23 58.95 58.09 58.84 NaN
-1 NaN NaN 58.16 NaN 58.06 58.85
-2 NaN NaN NaN NaN NaN 58.08
</code></pre>
<p>I want to turn it into a 3D data frame for filtering, with the hpaXX values and the upcXX values as the new axes.</p>
<p>I am new to multiaxis and could use some help getting started on how to both extract the columns and regenerate the new dataframe.</p>
<p>Thanks!</p>
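One hedged way to get started, sketched on a small copy of the frame above: parse the column names into a two-level `MultiIndex` with `str.extract`, then `stack` those levels so each value is addressed by a (`pwr`, `hpa`, `upc`) triple. Filtering then becomes a simple `query` or `xs`.

```python
import numpy as np
import pandas as pd

# Small frame shaped like the one in the question
df = pd.DataFrame(
    {
        "hpa02upc00": [58.28, np.nan, np.nan],
        "hpa01upc01": [58.23, np.nan, np.nan],
        "hpa00upc01": [58.84, 58.06, np.nan],
        "hpa00upc00": [np.nan, 58.85, 58.08],
    },
    index=pd.Index([0, -1, -2], name="pwr"),
)

# Split each column name into its hpa / upc parts and make them a
# two-level column MultiIndex...
parts = df.columns.str.extract(r"hpa(\d+)upc(\d+)")
df.columns = pd.MultiIndex.from_arrays(
    [parts[0].astype(int), parts[1].astype(int)], names=["hpa", "upc"]
)

# ...then stack those levels: one row per (pwr, hpa, upc) value
long = df.stack(["hpa", "upc"]).rename("value").reset_index()
```

After this, `long.query("hpa == 1")` filters on the hpa axis, and `df.xs(0, axis=1, level="upc")` slices the wide frame directly.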
| <python><pandas><multidimensional-array><multi-index> | 2023-07-27 12:29:28 | 1 | 334 | AAmes |
76,779,737 | 7,713,770 | How to show name instead of id with ForeignKey with Django REST framework | <p>I have a Django application with a foreign-key relationship. But the API call shows the id instead of the name, and I want to show the name instead.</p>
<p>So this is the property:</p>
<pre><code>category = models.ForeignKey(Category, related_name='animals', on_delete=models.CASCADE, verbose_name="test")
</code></pre>
<p>and serializer:</p>
<pre><code>class SubAnimalSerializer(serializers.ModelSerializer):
    class Meta:
        model = Animal
        fields = ['id', 'name', 'description', "category", "pet_list", 'images']
        read_only_fields = ['id']
</code></pre>
<p>And this is part of the API response:</p>
<pre><code>"animals": [
{
"id": 3,
"name": "Vicuña",
"description": "kjhkkhkhkjhjkhj Hello",
"category": 6,
"pet_list": "D",
"images": "http://192.168.1.135:8000/media/media/photos/animals/mammal.jpg"
},
{
"id": 5,
"name": "Goudhamster",
"description": "khkjh",
"category": 6,
"pet_list": "C",
"images": "http://192.168.1.135:8000/media/media/photos/animals/imgbin_dog-grooming-puppy-cat-pet-png_I7rLPen.png"
}
],
</code></pre>
<p>And of course I googled; I found this:</p>
<p><a href="https://stackoverflow.com/questions/70219330/how-to-show-name-instad-of-id-in-django-foreignkey">How to show name instad of id in django ForeignKey?</a></p>
<p>But it doesn't exactly fit what I am looking for.</p>
<p>Question: how do I show the name instead of the id in the API call?</p>
| <python><django><django-rest-framework> | 2023-07-27 12:26:51 | 2 | 3,991 | mightycode Newton |
76,779,557 | 264,136 | Comment python script programmatically | <p>My script file has lots of classes:</p>
<pre><code>def func1():
    ...

class IPSEC(aetest.Testcase):
    ....

class IPSEC_DPI(aetest.Testcase):
    ....

class IPSEC_FNF(aetest.Testcase):
    ....

.
.
more classes
.
.

class all_results(aetest.Testcase):
    ....
</code></pre>
<p><code>all_results</code> is the last class.</p>
<p>I need to create a function that accepts a class name and the file path of the script above, and comments out that particular class. <code>all_results</code> will never be commented out. I know it's weird, but that's the requirement as of now.</p>
<p>I created the function below, but it fails. Is there any other short method to achieve the same?</p>
<pre><code>def comment_lines(file_name, profile):
    lines = open(file_name, 'r').readlines()
    start = None
    end = None

    print(f"looking for class {profile}(aetest.Testcase):")
    for i in range(0, len(lines)):
        if f"class {profile}(aetest.Testcase):" in lines[i]:
            start = i
            break

    for i in range(start + 1, len(lines)):
        if "class " in lines[i]:
            end = i - 2
            break

    for i in range(start, end + 1):
        lines[i] = "#" + lines[i]

    out = open(file_name, 'w')
    out.writelines(lines)
    out.close()
</code></pre>
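A shorter and more robust alternative, sketched below: let the `ast` module report the class's exact line span instead of scanning for the next `"class "` string (which breaks for the last class, or when `class` appears inside a body). Requires Python 3.8+ for `end_lineno`; the sample class names here are stand-ins for the real ones.

```python
import ast

def comment_out_class(source: str, class_name: str) -> str:
    """Prefix every line of the named top-level class with '#'."""
    tree = ast.parse(source)
    lines = source.splitlines(keepends=True)
    for node in tree.body:
        if isinstance(node, ast.ClassDef) and node.name == class_name:
            # end_lineno needs Python 3.8+; decorator lines (if any) are
            # not included in this span.
            for i in range(node.lineno - 1, node.end_lineno):
                lines[i] = "#" + lines[i]
            break
    return "".join(lines)

src = (
    "class IPSEC:\n"
    "    x = 1\n"
    "\n"
    "class all_results:\n"
    "    y = 2\n"
)
out = comment_out_class(src, "IPSEC")
```

The function reads and returns strings, so the file handling stays a two-liner: read the file, write back `comment_out_class(text, profile)`.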
| <python> | 2023-07-27 12:04:27 | 3 | 5,538 | Akshay J |
76,779,553 | 5,898,079 | How to distinguish between same-named files and directories in Google Drive using fsspec in Python? | <p>I am working with Google Drive in <code>Python</code> using <code>fsspec</code> to perform various operations like listing and downloading files and directories. However, I have encountered a challenge when dealing with items that share the same name. For example, there might be a file and a directory both named "example.txt" in different locations. When using <code>fsspec</code>, I find it difficult to differentiate between these items solely based on their names. I need a way to distinguish between same-named files and directories while working with Google Drive using <code>fsspec</code>.</p>
<p>I have attempted to list the contents of directories using <code>fsspec</code> and then loop through the results to handle the files and directories accordingly. However, since items with the same name appear identical in the listing, I couldn't reliably tell them apart. As a result, my attempts to download specific files or navigate to directories were not successful.</p>
<p>I am looking for guidance on how to address this issue and implement a solution that allows me to differentiate between same-named files and directories effectively using <code>fsspec</code> in <code>Python</code>.</p>
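One hedged starting point is fsspec's detail listing: `ls(path, detail=True)` returns one dict per entry with a `"type"` field (`"file"` vs `"directory"`), and backends typically add their own fields (for Google Drive, the entry's unique id) that disambiguate same-named items even in the same folder. Sketched below on the local filesystem, since it needs no credentials; the same pattern should apply to a Drive filesystem instance.

```python
import os
import tempfile

import fsspec

# Build a directory and a file, where the DIRECTORY is named like a file
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "example.txt"))
with open(os.path.join(tmp, "data.txt"), "w") as f:
    f.write("hello")

fs = fsspec.filesystem("file")
entries = fs.ls(tmp, detail=True)

# The "type" field tells same-named files and directories apart
types = {os.path.basename(e["name"].rstrip("/")): e["type"] for e in entries}
```

Rather than branching on the name, branch on `entry["type"]` (or `fs.isdir(path)` / `fs.isfile(path)`), and for Drive keep the backend's id around as the stable handle for download operations.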
| <python><google-drive-api><fsspec><pydrive2> | 2023-07-27 12:03:40 | 1 | 687 | muhammad ali e |
76,779,502 | 389,489 | Slice a 2D array using built-in slice function | <p>I need to write a function that returns a single row or a column from a 2D array. The input to the function tells what to return.</p>
<pre class="lang-py prettyprint-override"><code># 3x3 array with values 1 to 9
a = np.arange(1, 10).reshape(3, 3)
rowCount, colCount = a.shape

# return last row [7, 8, 9]
a[rowCount - 1, :]

# return first row [1, 2, 3]
a[0, :]

# return last column [3, 6, 9]
a[:, colCount - 1]

# return first column [1, 4, 7]
a[:, 0]
</code></pre>
<p>How can I do this in a function, such that the function receives the row or column to return?</p>
<p>Something like,</p>
<pre class="lang-py prettyprint-override"><code>def getSlice(slice):
return a[slice]
</code></pre>
<p>Where the <a href="https://docs.python.org/3/glossary.html#term-slice" rel="nofollow noreferrer">slice object</a> is created using the built-in <a href="https://docs.python.org/3/library/functions.html#slice" rel="nofollow noreferrer">slice function</a>.</p>
<p>However, I cannot figure out how to create a slice object for a 2D array, due to the fact that <code>slice</code> function does not accept the colon operator like <code>slice(0, :)</code>.</p>
<p>Also, is there a way to represent "last row" or "last column" if I do not know the shape of the 2D array beforehand?</p>
<h2>Use-Case</h2>
<p>Below are a couple of use-cases why I need a function instead of directly using the <code>a[:, 0]</code> expression:</p>
<ol>
<li>The caller does not have access to the array. The caller can get a desired row or column from the array by calling the <code>getSlice</code> function.</li>
<li>The preferred row or column needs to be pre-configured. For instance, <code>{a1: 'first row', a2: 'last column'}</code>. Both <code>a1</code> and <code>a2</code> may get transposed and modified many times. But at all times, I am interested only in the configured row/column of the two arrays.</li>
</ol>
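A hedged sketch of one way to satisfy both use-cases: the colon in `a[0, :]` is literally `slice(None)`, so a full 2-D index can be built as a plain tuple and handed around, and negative indices express "last" without knowing the shape.

```python
import numpy as np

a = np.arange(1, 10).reshape(3, 3)

# A ":" inside a[...] is just slice(None), so a 2-D index is a tuple:
last_row     = (-1, slice(None))    # same as a[-1, :]
first_row    = (0, slice(None))     # same as a[0, :]
last_column  = (slice(None), -1)    # same as a[:, -1]
first_column = (slice(None), 0)     # same as a[:, 0]

def get_slice(index):
    """Caller passes a pre-configured index tuple; no access to `a` needed."""
    return a[index]
```

`-1` handles "last row/column" for any shape, and the tuples can live in the configuration (e.g. `{'a1': first_row, 'a2': last_column}`) without the caller ever touching the array.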
| <python> | 2023-07-27 11:57:59 | 2 | 3,810 | Somu |
76,779,345 | 2,416,984 | Initiating opcua-asyncio Client in thread | <p>In the context of writing tests for an implementation using <code>asyncua.Client</code>, I'd like to (1) write a fixture for <code>asyncua.Server</code> and (2) call the function that eventually starts the client in a separate thread. A minimal example of what I'd like to achieve, without the complexity of writing <code>conftest.py</code> etc., is running something like below:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import asyncua
import threading


async def listen(endpoint):
    async with asyncua.Client(url=endpoint) as client:
        print(f'OPC UA client has started {client}.')
    return


async def run_all(endpoint):
    server = asyncua.Server()
    await server.init()
    server.set_endpoint(endpoint)
    async with server:
        t = threading.Thread(target=asyncio.run, args=(listen(endpoint),))
        t.start()
        t.join()


if __name__ == '__main__':
    asyncio.run(run_all('opc.tcp://0.0.0.0:5840/freeopcua/server/'))
</code></pre>
<p>This always leads to a <code>TimeoutError</code> in the line <code>async with asyncua.Client</code>.</p>
<p>Most examples I've found have two files called <code>server.py</code> for initializing <code>asyncua.Server</code> and <code>client.py</code> for initializing <code>asyncua.Client</code>. Then the server and client are started in separate terminals. However in order to run pytest, I believe both should start from the same entrypoint. How can this be achieved?</p>
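For what it's worth, a likely reason for the timeout is that `t.join()` blocks the event-loop thread, so the server coroutines running on that loop can never answer the client. A hedged, dependency-free sketch of the single-entrypoint pattern: run the client as a task on the <em>same</em> loop as the server and `await` it instead of joining a thread. The fake client below stands in for `asyncua.Client`.

```python
import asyncio

async def fake_client(ready: asyncio.Event) -> str:
    """Stand-in for the asyncua.Client connect; waits for the server."""
    await asyncio.wait_for(ready.wait(), timeout=1)
    return "client connected"

async def run_all() -> str:
    ready = asyncio.Event()
    # Schedule the client on the same loop instead of a thread
    client_task = asyncio.create_task(fake_client(ready))
    ready.set()                  # the server is up and serving
    return await client_task     # await: the loop stays free to serve

result = asyncio.run(run_all())
```

With the real libraries, the same shape works as an async pytest fixture: start the server, `await` the client task, and never call a blocking `join()` from inside a coroutine.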
| <python><multithreading><python-asyncio><opc-ua> | 2023-07-27 11:37:26 | 2 | 973 | user2416984 |
76,779,313 | 4,202,198 | Slurm Cluster Python Script Not Running on Multiple Nodes using SBATCH | <p>We recently set up a Slurm cluster with 2 nodes (1 head node + compute node and 1 compute node) for some HPC CFD simulations. Right now I am trying to run a Python script used for feature selection in one of our machine learning projects, which would run for around a day on one system. I have configured Python and installed all libraries on the 2 machines, verified Slurm node availability, and configured the job script with the required parameters (shown below). But when I run the job using SBATCH, I see the script executing on the head node where I run SBATCH, but not on the other node. The execution happens on the exact number of cores I specified in the job script. However, when I specify the compute node's name on the command line using --nodelist, the script runs on the other compute node.
Kindly help, as I am a starter in both Slurm cluster management and Python.</p>
<p><strong>Python Script</strong></p>
<pre><code>import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# import seaborn as sns
from sklearn.feature_selection import f_classif, mutual_info_classif
from lightgbm import LGBMClassifier
from mlxtend.plotting import plot_sequential_feature_selection as plot_sfs
import matplotlib as mpl
mpl.style.use('seaborn')
# df = pd.read_csv('vib_processed_data.csv')#Vibration
df = pd.read_csv('curr_processed_data.csv')#Current
# print(df.shape)
# print(df.describe().T.to_string())
meta = pd.read_csv("data.csv", names=["mimb", "bbl", "mbl", "fcl", "file_name"],
header=None, skiprows=1)
# print(meta.shape)
meta_new = pd.DataFrame()
for i in range(6):
    dummy = meta.copy()
    dummy["file_name"] = dummy["file_name"].apply(lambda x: f"{x}_{i}")
    meta_new = pd.concat([
        meta_new,
        dummy
    ])
# print(meta_new.head(), meta_new.shape)
meta = meta_new.reset_index(drop=True)
no_single_defects_df = pd.concat([meta.query('mimb==0&bbl==0&mbl==0&fcl==0'),
meta.query('(mimb==1 |mimb==2) &bbl==0&mbl==0&fcl==0'),
meta.query('mimb==0&(bbl==1|bbl==2)&mbl==0&fcl==0'),
meta.query('mimb==0&bbl==0&(mbl==1|mbl==2)&fcl==0'),
meta.query('mimb==0&bbl==0&mbl==0&(fcl==1|fcl==2)')]).reset_index(drop=True)
# print(no_single_defects_df.tail())
no_single_defects_df.reset_index(drop=True,inplace=True)
defects = ['mimb', 'bbl', 'mbl', 'fcl']
labels = np.argmax(no_single_defects_df[defects].values, axis=1).tolist()
no_single_defects_df=pd.merge(no_single_defects_df, df, on="file_name", how="left")
f_values, p_values = f_classif(no_single_defects_df.drop(columns=defects+["file_name"]), labels)
anova_test = pd.DataFrame(columns=["features", "f_values", "p_values"])
anova_test["features"] = no_single_defects_df.drop(columns=defects+["file_name"]).columns
anova_test["f_values"] = f_values
anova_test["p_values"] = p_values
anova_test = anova_test.sort_values(by=["f_values", "p_values"], ascending=False).reset_index(drop=True)
# print(anova_test.head(100).to_string())
features = anova_test[anova_test.index < 100]["features"].values
# corr = df[features].corr()
# mask = np.triu(corr)
# plt.figure(figsize=(20,15))
# sns.heatmap(corr, mask=mask)
# plt.show()
#-----------------------------------------------------------------------------------------------------------------------
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import hamming_loss, multilabel_confusion_matrix, classification_report
from sklearn.model_selection import StratifiedGroupKFold
from sklearn.pipeline import Pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.multiclass import OneVsRestClassifier
from mlxtend.feature_selection import SequentialFeatureSelector as sfs
from sklearn.metrics import get_scorer_names
from mlxtend.plotting import plot_sequential_feature_selection
from scipy.special import binom
def multinomial(params):
    if len(params) == 1:
        return 1
    return binom(sum(params), params[-1]) * multinomial(params[:-1])
for i in range(3):
    meta[f"Count_{i}"] = (meta[defects] == i).astype(int).sum(axis=1)
meta['groups'] = meta.apply(lambda row: int(multinomial([row['Count_0'],row['Count_1'], row['Count_2']])),axis=1)
# print(meta.shape)
df = pd.merge(meta, df, on="file_name", how = 'left')
train_data = df[features.tolist()+["groups"]+defects].reset_index(drop=True)
print(train_data.head())
Y = np.zeros((train_data.shape[0], 8))
counter = 0
for idx, defect in enumerate(defects):
    for i in range(1, 3):
        Y[:, counter] = (train_data[defect].to_numpy() == i)
        counter += 1
# print(list(Y))
# print(train_data[defects].tail())
# print(np.sum(Y))
Y_str = []
for label in Y:
    Y_str.append("".join(list(map(str, label))))
# print(len(Y_str))
print('classification start')
gstrf = StratifiedGroupKFold(n_splits=3, shuffle=True, random_state=43)
# clf = Pipeline([('scaler', StandardScaler()), ('lr', OneVsRestClassifier(LogisticRegression()))])
clf = OneVsRestClassifier(LGBMClassifier())
feature_selector = sfs(
estimator=clf,
cv=list(
gstrf.split(df[features],y=Y_str, groups=df["groups"])
),
k_features=15,
scoring='roc_auc',
verbose=2,
floating=True,
n_jobs=-1
)
feature_selector.fit(df[features], Y)
print(feature_selector.k_feature_names_)
selected_features = list(feature_selector.k_feature_names_)
# fig1 = plot_sequential_feature_selection(feature_selector.get_metric_dict(), figsize=(10,20))
# plt.ylim([0,1])
# plt.show()
dump_features_list = selected_features.copy()
dump_features_list.append('file_name')
for defect in defects:
    dump_features_list.append(defect)
dump_features_list.append('groups')
feature_dump = df[dump_features_list]
# feature_dump.to_csv("curr_feature_dump.csv", index=False)
# corr = df[list(feature_selector.k_feature_names_)].corr()
# mask = np.triu(corr)
# plt.figure(figsize=(20,15))
# sns.heatmap(corr, mask=mask, annot=True);
# plt.show()
selected_features = list(feature_selector.k_feature_names_)
plt.rc('font', size=5)
fig,ax = plot_sfs(feature_selector.get_metric_dict(), kind='std_dev',
figsize=(10, 7));
# ax.set_xticklabels(list(feature_selector.k_feature_names_))
plt.ylim(0,1)
ax.tick_params(axis="x", rotation=5)
ax.set_xlabel("Sequential Features")
ax.set_ylabel("Performance - AUC")
fig.align_labels()
plt.show()
fig.savefig("./vib_feature_selection_lgbm.png", dpi=300, bbox_inches='tight')
train_1_indices = list(train_data.query("groups==1|groups==4|groups==6").index)
val_1_indices = list(set(train_data.index) - set(train_1_indices))
train_2_indices = list(train_data.query("groups==1|groups==12").index)
val_2_indices = list(set(train_data.index) - set(train_2_indices))
cv = [
(train_1_indices, val_1_indices),
(train_2_indices, val_2_indices),
(val_1_indices, train_1_indices),
(val_2_indices, train_2_indices),
]
# ps = PredefinedSplit(train_data["groups"])
# train_data["kfold"] = -1
# for fold, (train_indices, val_indices) in enumerate(cv):
# train_data.loc[val_indices, "kfold"] = fold
# train_data[selected_features].head()
for fold, (train_indices, val_indices) in enumerate(cv):
print(f"FOLD {fold}:\n")
# splitting train and val based on k-fold
x_train = train_data.loc[train_indices]
x_val = train_data.loc[val_indices]
print(f"train_groups: {x_train.groups.unique()}")
print(f"val_groups: {x_val.groups.unique()}")
# exclude 'group'
x_train = x_train[selected_features]
x_val = x_val[selected_features]
y_train = Y[x_train.index]
y_val = Y[x_val.index]
clf = OneVsRestClassifier(LGBMClassifier())
# clf = OneVsRestClassifier(LGBMClassifier(boosting_type="dart",
# objective="binary",
# verbose=-1))
clf.fit(x_train, y_train)
# predict train and val
train_preds = clf.predict(x_train)
val_preds = clf.predict(x_val)
# calculate train and val loss (hamming)
train_loss = hamming_loss(y_train, train_preds)
val_loss = hamming_loss(y_val, val_preds)
print(f"TRAIN LOSS: {train_loss}")
print(f"OOF LOSS: {val_loss}\n")
print(classification_report(y_val, val_preds))
cMat = multilabel_confusion_matrix(y_val, val_preds)
print(cMat)
print("\n")
</code></pre>
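For reference, the recursive <code>multinomial</code> helper at the top of the script computes a multinomial coefficient; it can be checked standalone (a sketch using <code>math.comb</code> in place of scipy's <code>binom</code> to stay dependency-free):

```python
from math import comb

def multinomial(params):
    # Same recursion as in the script, with math.comb standing in for scipy's binom
    if len(params) == 1:
        return 1
    return comb(sum(params), params[-1]) * multinomial(params[:-1])

# Counts distinct arrangements: 4! / (2! * 1! * 1!) = 12
print(multinomial([2, 1, 1]))  # 12
```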
<p><strong>Job Script</strong></p>
<pre><code>#!/bin/bash
#SBATCH --job-name=testjob
#SBATCH --nodes=2
#SBATCH --ntasks=32
#SBATCH --ntasks-per-node=16
#SBATCH --partition=accel_ai
python3 featureselector.py
</code></pre>
<p>I have tried running the script with <code>srun</code>/<code>mpirun python3</code> in the job script, which does launch the job on both nodes, but the same script is then run on every core as an independent instance, which (as I understand the job script) is not what I want.</p>
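For context on the symptom: <code>srun</code> starts one identical process per task, and each process must partition the work itself, conventionally via SLURM's <code>SLURM_PROCID</code>/<code>SLURM_NTASKS</code> environment variables (a sketch of that pattern, not a fix for the sklearn/mlxtend pipeline itself, which parallelises within a single node through <code>n_jobs</code>):

```python
import os

# Each srun-launched task sees its own rank and the total task count.
task_id = int(os.environ.get('SLURM_PROCID', '0'))
n_tasks = int(os.environ.get('SLURM_NTASKS', '1'))

work_items = list(range(10))
my_items = work_items[task_id::n_tasks]  # round-robin partition of the work
print(f'task {task_id}/{n_tasks} handles {my_items}')
```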
| <python><cluster-computing><slurm><feature-selection><mlxtend> | 2023-07-27 11:32:04 | 0 | 1,626 | akhil kumar |
76,779,213 | 5,457,202 | AttributeError: Cutoff time DataFrame must contain a column with either the same name as the target dataframe index or a column named "instance_id" | <p>I'm learning how to use Featuretools with this <a href="https://github.com/Featuretools/predict-remaining-useful-life/blob/master/Advanced%20Featuretools%20RUL.ipynb" rel="nofollow noreferrer">tutorial</a> and I've made it to a snippet which is right below this paragraph:</p>
<pre class="lang-py prettyprint-override"><code>from featuretools.tsfresh import CidCe
import featuretools as ft
fm, features = ft.dfs(
entityset=es,
target_dataframe_name='RUL data',
agg_primitives=['last', 'max'],
trans_primitives=[],
chunk_size=.26,
cutoff_time=cutoff_time_list[0],
max_depth=3,
verbose=True,
)
fm.to_csv('advanced_fm.csv')
fm.head()
</code></pre>
<p>The variable <code>es</code> holds the dataset:</p>
<p><a href="https://i.sstatic.net/HDKr6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/HDKr6.png" alt="enter image description here" /></a></p>
<p>Then, the <code>cutoff_time_list[0]</code> has a table which looks like this:</p>
<p><a href="https://i.sstatic.net/ZN8Zb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZN8Zb.png" alt="enter image description here" /></a></p>
<p>However, I get this error:</p>
<pre><code>AttributeError: Cutoff time DataFrame must contain a column with either the same name as the target dataframe index or a column named "instance_id"
</code></pre>
<p>This happens even though the dataframe and the cut-off table both have an "engine_no" column. What is causing this? I'm using version <code>1.27.0</code> of <code>featuretools</code>.</p>
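For reference, the error message names the column it expects; a toy sketch of renaming a cutoff-time column to <code>instance_id</code> (the DataFrame values here are made up for illustration):

```python
import pandas as pd

# Toy cutoff-time table shaped like the tutorial's (values are illustrative)
cutoff = pd.DataFrame({
    'engine_no': [1, 2],
    'cutoff_time': pd.to_datetime(['2023-01-01', '2023-01-02']),
    'RUL': [110, 95],
})

# The error asks for the target dataframe's index name or a column 'instance_id'
cutoff = cutoff.rename(columns={'engine_no': 'instance_id'})
print(list(cutoff.columns))  # ['instance_id', 'cutoff_time', 'RUL']
```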
| <python><deep-learning><feature-engineering><featuretools> | 2023-07-27 11:16:46 | 1 | 436 | J. Maria |
76,779,175 | 567,059 | Run Azure Function outside of functions runtime to allow for pytest testing | <p>I have an Azure Function App (Python Programming Model V2) that I am trying to trigger outside of the Azure Functions runtime (with my goal being to use <code>pytest</code> to test the function). However, when not running in Azure Functions locally using <code>func host start</code>, the decorator for the function is preventing it from being run.</p>
<p>How can I trigger the function outside of the Azure Functions runtime to allow me to test it?</p>
<hr />
<p>Take the example below. When I run the function locally and trigger it using cURL, as expected, the desired response is successfully returned</p>
<h3><code>example_project/function_app.py</code></h3>
<pre class="lang-py prettyprint-override"><code>import azure.functions as func
app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)
@app.route(route='HttpTrigger')
def HttpTrigger(req: func.HttpRequest) -> func.HttpResponse:
return func.HttpResponse('Success', status_code=200)
</code></pre>
<pre class="lang-bash prettyprint-override"><code>user@computer:~/example_project$ curl http://localhost:7071/api/HttpTrigger
Success
</code></pre>
<p>However, if I run the function directly, nothing happens. In the example below I would expect an error because a required keyword argument (<code>req</code>) is missing. Instead, it seems as though the function wasn't triggered at all.</p>
<pre class="lang-py prettyprint-override"><code>>>> from example_project.function_app import HttpTrigger
>>> HttpTrigger()
>>>
</code></pre>
<p>Now if I were to comment out the decorator from <code>example_project/function_app.py</code>, you can see that the function is run and an error occurred.</p>
<pre class="lang-py prettyprint-override"><code># @app.route(route='HttpTrigger')
</code></pre>
<pre class="lang-py prettyprint-override"><code>>>> from example_project.function_app import HttpTrigger
>>> HttpTrigger()
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: HttpTrigger() missing 1 required positional argument: 'req'
>>>
</code></pre>
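To illustrate the mechanics with a hypothetical stand-in (this is not Azure's actual implementation): a framework decorator can register the original function and return a wrapper that ignores direct calls, which matches the observed behaviour:

```python
import functools

# Hypothetical stand-in (NOT Azure's real internals): a route decorator that
# registers the function and returns a wrapper whose direct calls do nothing.
registry = {}

def route(path):
    def deco(fn):
        registry[path] = fn                 # framework keeps the original

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            return None                     # direct calls never reach fn
        return wrapper
    return deco

@route('HttpTrigger')
def HttpTrigger(req):
    return f'Success: {req}'

print(HttpTrigger())                 # None -- looks like "nothing happened"
print(HttpTrigger.__wrapped__('x'))  # Success: x -- functools.wraps keeps a handle
```

If the real decorator behaves this way, the original body may still be reachable through an attribute such as <code>__wrapped__</code>, depending on how the wrapper is built.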
| <python><azure-functions><pytest> | 2023-07-27 11:12:28 | 1 | 12,277 | David Gard |
76,779,124 | 16,852,041 | PyTest for @singledispatch and @typechecked not raising expected error | <p>Goal: Successfully pass test cases, for <code>NotImplementedError</code> in <code>test_score_not_implemented_error()</code>.</p>
<p>The purpose of <code>@singledispatch</code> <code>def score()</code> is to raise <code>NotImplementedError</code> when the provided arguments for <code>count_neg</code> and <code>count_pos</code> match neither <code>Tuple[int, int]</code> nor <code>Tuple[List[int], List[int]]</code>.</p>
<p>I want to test for this exception handling via <code>test_score_not_implemented_error()</code>.</p>
<p>However, unexpectedly, I instead get errors raised by the <code>@typechecked</code> decorator that I applied to the other polymorphic functions.</p>
<p>I'm confident that a polymorphic function is the right approach and that my test function has the appropriate test cases. I suspect the issue lies in how I've implemented <code>def score()</code>'s polymorphic functions.</p>
<p>Tweak: removing <code>@typechecked</code> from the polymorphic functions throws:</p>
<pre><code>FAILED tests/test_tps.py::test_score_not_implemented_error[0-01] - TypeError: can only concatenate str (not "int") to str
FAILED tests/test_tps.py::test_score_not_implemented_error[count_neg3-count_pos3] - TypeError: unsupported operand type(s) for +: 'int' and 'str'
FAILED tests/test_tps.py::test_score_not_implemented_error[count_neg4-count_pos4] - TypeError: unsupported operand type(s) for +: 'int' and 'str'
FAILED tests/test_tps.py::test_score_not_implemented_error[count_neg5-count_pos5] - TypeError: unsupported operand type(s) for +: 'int' and 'str'
</code></pre>
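For context, <code>functools.singledispatch</code> selects an implementation from the type of the first argument only, so a call like <code>score(0, '0')</code> reaches the <code>int</code> handler no matter what the second argument is. A minimal sketch:

```python
from functools import singledispatch

@singledispatch
def score(count_neg, count_pos):
    raise NotImplementedError(f'{type(count_neg)} is not supported.')

@score.register(int)
def _(count_neg, count_pos):
    return 'int handler'

@score.register(list)
def _(count_neg, count_pos):
    return 'list handler'

print(score(0, '0'))      # int handler -- second argument never affects dispatch
print(score(['0'], [0]))  # list handler
# Only a first argument with no registered type reaches the base implementation:
# score('0', 0) raises NotImplementedError
```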
<hr />
<p><code>tests/test_score.py</code>:</p>
<pre><code>from typeguard import typechecked
from functools import singledispatch
import pytest
from pytest_cases import parametrize
from typing import Any, List, Tuple, Type, Union
@singledispatch
def score(count_neg: Any, count_pos: Any) -> None:
raise NotImplementedError(f'{type(count_neg)} and or {type(count_pos)} are not supported.')
@score.register(int)
@typechecked
def score_int(count_neg: int, count_pos: int) -> float:
return round(100 * count_pos / (count_pos + count_neg), 1)
@score.register(list)
@typechecked
def score_list(count_neg: List[int], count_pos: List[int]) -> float:
return round(100 * sum(count_pos) / (sum(count_pos) + sum(count_neg)), 1)
@parametrize('count_neg, count_pos',
[('0', 0),
(0, '0'),
('0', '0'),
(['0'], [0]),
([0], ['0']),
(['0'], ['0']),
(None, None)])
def test_score_not_implemented_error(count_neg: Union[str, int, List[str], List[int], None],
count_pos: Union[str, int, List[str], List[int], None],
error: Type[BaseException] = NotImplementedError):
with pytest.raises(error) as exc_info:
score(count_neg, count_pos)
assert exc_info.type is error
</code></pre>
<p>Traceback:</p>
<pre><code>(venv) me@laptop:~/BitBucket/project $ python -m pytest tests/test_score.py
===================================================================================================================== test session starts =====================================================================================================================
platform linux -- Python 3.9.16, pytest-7.4.0, pluggy-1.0.0
rootdir: /home/danielbell/BitBucket/pdl1-lung
plugins: hydra-core-1.3.2, typeguard-3.0.2, mock-3.11.1, cases-3.6.13, dvc-3.2.3, anyio-3.5.0
collected 7 items
tests/test_tps.py .F.FFF. [100%]
========================================================================================================================== FAILURES ===========================================================================================================================
___________________________________________________________________________________________________________ test_score_not_implemented_error[0-01] ____________________________________________________________________________________________________________
count_neg = 0, count_pos = '0', error = <class 'NotImplementedError'>
@parametrize('count_neg, count_pos',
[('0', 0),
(0, '0'),
('0', '0'),
(['0'], [0]),
([0], ['0']),
(['0'], ['0']),
(None, None)])
def test_score_not_implemented_error(count_neg: Union[str, int, List[str], List[int], None],
count_pos: Union[str, int, List[str], List[int], None],
error: Type[BaseException] = NotImplementedError):
with pytest.raises(error) as exc_info:
> score(count_neg, count_pos)
tests/test_tps.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../miniconda3/envs/pdl1lung/lib/python3.9/functools.py:888: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
tests/test_tps.py:16: in score_int
def score_int(count_neg: int, count_pos: int) -> float:
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_functions.py:113: in check_argument_types
check_type_internal(value, expected_type, memo=memo)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = '0', annotation = <class 'int'>, memo = <typeguard.CallMemo object at 0x7f591213ccc0>
def check_type_internal(value: Any, annotation: Any, memo: TypeCheckMemo) -> None:
"""
Check that the given object is compatible with the given type annotation.
This function should only be used by type checker callables. Applications should use
:func:`~.check_type` instead.
:param value: the value to check
:param annotation: the type annotation to check against
:param memo: a memo object containing configuration and information necessary for
looking up forward references
"""
if isinstance(annotation, ForwardRef):
try:
annotation = evaluate_forwardref(annotation, memo)
except NameError:
if global_config.forward_ref_policy is ForwardRefPolicy.ERROR:
raise
elif global_config.forward_ref_policy is ForwardRefPolicy.WARN:
warnings.warn(
f"Cannot resolve forward reference {annotation.__forward_arg__!r}",
TypeHintWarning,
stacklevel=get_stacklevel(),
)
return
if annotation is Any or annotation is SubclassableAny or isinstance(value, Mock):
return
# Skip type checks if value is an instance of a class that inherits from Any
if not isclass(value) and SubclassableAny in type(value).__bases__:
return
extras: tuple[Any, ...]
origin_type = get_origin(annotation)
if origin_type is Annotated:
annotation, *extras_ = get_args(annotation)
extras = tuple(extras_)
origin_type = get_origin(annotation)
else:
extras = ()
if origin_type is not None:
args = get_args(annotation)
# Compatibility hack to distinguish between unparametrized and empty tuple
# (tuple[()]), necessary due to https://github.com/python/cpython/issues/91137
if origin_type in (tuple, Tuple) and annotation is not Tuple and not args:
args = ((),)
else:
origin_type = annotation
args = ()
for lookup_func in checker_lookup_functions:
checker = lookup_func(origin_type, args, extras)
if checker:
checker(value, origin_type, args, memo)
return
if not isinstance(value, origin_type):
> raise TypeCheckError(f"is not an instance of {qualified_name(origin_type)}")
E typeguard.TypeCheckError: argument "count_pos" (str) is not an instance of int
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:680: TypeCheckError
___________________________________________________________________________________________________ test_score_not_implemented_error[count_neg3-count_pos3] ___________________________________________________________________________________________________
count_neg = ['0'], count_pos = [0], error = <class 'NotImplementedError'>
@parametrize('count_neg, count_pos',
[('0', 0),
(0, '0'),
('0', '0'),
(['0'], [0]),
([0], ['0']),
(['0'], ['0']),
(None, None)])
def test_score_not_implemented_error(count_neg: Union[str, int, List[str], List[int], None],
count_pos: Union[str, int, List[str], List[int], None],
error: Type[BaseException] = NotImplementedError):
with pytest.raises(error) as exc_info:
> score(count_neg, count_pos)
tests/test_tps.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../miniconda3/envs/pdl1lung/lib/python3.9/functools.py:888: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
tests/test_tps.py:22: in score_list
def score_list(count_neg: List[int], count_pos: List[int]) -> float:
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_functions.py:113: in check_argument_types
check_type_internal(value, expected_type, memo=memo)
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:676: in check_type_internal
checker(value, origin_type, args, memo)
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:273: in check_list
check_type_internal(v, args[0], memo)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = '0', annotation = <class 'int'>, memo = <typeguard.CallMemo object at 0x7f5854383770>
def check_type_internal(value: Any, annotation: Any, memo: TypeCheckMemo) -> None:
"""
Check that the given object is compatible with the given type annotation.
This function should only be used by type checker callables. Applications should use
:func:`~.check_type` instead.
:param value: the value to check
:param annotation: the type annotation to check against
:param memo: a memo object containing configuration and information necessary for
looking up forward references
"""
if isinstance(annotation, ForwardRef):
try:
annotation = evaluate_forwardref(annotation, memo)
except NameError:
if global_config.forward_ref_policy is ForwardRefPolicy.ERROR:
raise
elif global_config.forward_ref_policy is ForwardRefPolicy.WARN:
warnings.warn(
f"Cannot resolve forward reference {annotation.__forward_arg__!r}",
TypeHintWarning,
stacklevel=get_stacklevel(),
)
return
if annotation is Any or annotation is SubclassableAny or isinstance(value, Mock):
return
# Skip type checks if value is an instance of a class that inherits from Any
if not isclass(value) and SubclassableAny in type(value).__bases__:
return
extras: tuple[Any, ...]
origin_type = get_origin(annotation)
if origin_type is Annotated:
annotation, *extras_ = get_args(annotation)
extras = tuple(extras_)
origin_type = get_origin(annotation)
else:
extras = ()
if origin_type is not None:
args = get_args(annotation)
# Compatibility hack to distinguish between unparametrized and empty tuple
# (tuple[()]), necessary due to https://github.com/python/cpython/issues/91137
if origin_type in (tuple, Tuple) and annotation is not Tuple and not args:
args = ((),)
else:
origin_type = annotation
args = ()
for lookup_func in checker_lookup_functions:
checker = lookup_func(origin_type, args, extras)
if checker:
checker(value, origin_type, args, memo)
return
if not isinstance(value, origin_type):
> raise TypeCheckError(f"is not an instance of {qualified_name(origin_type)}")
E typeguard.TypeCheckError: item 0 of argument "count_neg" (list) is not an instance of int
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:680: TypeCheckError
___________________________________________________________________________________________________ test_score_not_implemented_error[count_neg4-count_pos4] ___________________________________________________________________________________________________
count_neg = [0], count_pos = ['0'], error = <class 'NotImplementedError'>
@parametrize('count_neg, count_pos',
[('0', 0),
(0, '0'),
('0', '0'),
(['0'], [0]),
([0], ['0']),
(['0'], ['0']),
(None, None)])
def test_score_not_implemented_error(count_neg: Union[str, int, List[str], List[int], None],
count_pos: Union[str, int, List[str], List[int], None],
error: Type[BaseException] = NotImplementedError):
with pytest.raises(error) as exc_info:
> score(count_neg, count_pos)
tests/test_tps.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../miniconda3/envs/pdl1lung/lib/python3.9/functools.py:888: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
tests/test_tps.py:22: in score_list
def score_list(count_neg: List[int], count_pos: List[int]) -> float:
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_functions.py:113: in check_argument_types
check_type_internal(value, expected_type, memo=memo)
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:676: in check_type_internal
checker(value, origin_type, args, memo)
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:273: in check_list
check_type_internal(v, args[0], memo)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = '0', annotation = <class 'int'>, memo = <typeguard.CallMemo object at 0x7f58540d45e0>
def check_type_internal(value: Any, annotation: Any, memo: TypeCheckMemo) -> None:
"""
Check that the given object is compatible with the given type annotation.
This function should only be used by type checker callables. Applications should use
:func:`~.check_type` instead.
:param value: the value to check
:param annotation: the type annotation to check against
:param memo: a memo object containing configuration and information necessary for
looking up forward references
"""
if isinstance(annotation, ForwardRef):
try:
annotation = evaluate_forwardref(annotation, memo)
except NameError:
if global_config.forward_ref_policy is ForwardRefPolicy.ERROR:
raise
elif global_config.forward_ref_policy is ForwardRefPolicy.WARN:
warnings.warn(
f"Cannot resolve forward reference {annotation.__forward_arg__!r}",
TypeHintWarning,
stacklevel=get_stacklevel(),
)
return
if annotation is Any or annotation is SubclassableAny or isinstance(value, Mock):
return
# Skip type checks if value is an instance of a class that inherits from Any
if not isclass(value) and SubclassableAny in type(value).__bases__:
return
extras: tuple[Any, ...]
origin_type = get_origin(annotation)
if origin_type is Annotated:
annotation, *extras_ = get_args(annotation)
extras = tuple(extras_)
origin_type = get_origin(annotation)
else:
extras = ()
if origin_type is not None:
args = get_args(annotation)
# Compatibility hack to distinguish between unparametrized and empty tuple
# (tuple[()]), necessary due to https://github.com/python/cpython/issues/91137
if origin_type in (tuple, Tuple) and annotation is not Tuple and not args:
args = ((),)
else:
origin_type = annotation
args = ()
for lookup_func in checker_lookup_functions:
checker = lookup_func(origin_type, args, extras)
if checker:
checker(value, origin_type, args, memo)
return
if not isinstance(value, origin_type):
> raise TypeCheckError(f"is not an instance of {qualified_name(origin_type)}")
E typeguard.TypeCheckError: item 0 of argument "count_pos" (list) is not an instance of int
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:680: TypeCheckError
___________________________________________________________________________________________________ test_score_not_implemented_error[count_neg5-count_pos5] ___________________________________________________________________________________________________
count_neg = ['0'], count_pos = ['0'], error = <class 'NotImplementedError'>
@parametrize('count_neg, count_pos',
[('0', 0),
(0, '0'),
('0', '0'),
(['0'], [0]),
([0], ['0']),
(['0'], ['0']),
(None, None)])
def test_score_not_implemented_error(count_neg: Union[str, int, List[str], List[int], None],
count_pos: Union[str, int, List[str], List[int], None],
error: Type[BaseException] = NotImplementedError):
with pytest.raises(error) as exc_info:
> score(count_neg, count_pos)
tests/test_tps.py:38:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../../miniconda3/envs/pdl1lung/lib/python3.9/functools.py:888: in wrapper
return dispatch(args[0].__class__)(*args, **kw)
tests/test_tps.py:22: in score_list
def score_list(count_neg: List[int], count_pos: List[int]) -> float:
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_functions.py:113: in check_argument_types
check_type_internal(value, expected_type, memo=memo)
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:676: in check_type_internal
checker(value, origin_type, args, memo)
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:273: in check_list
check_type_internal(v, args[0], memo)
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
value = '0', annotation = <class 'int'>, memo = <typeguard.CallMemo object at 0x7f58541444f0>
def check_type_internal(value: Any, annotation: Any, memo: TypeCheckMemo) -> None:
"""
Check that the given object is compatible with the given type annotation.
This function should only be used by type checker callables. Applications should use
:func:`~.check_type` instead.
:param value: the value to check
:param annotation: the type annotation to check against
:param memo: a memo object containing configuration and information necessary for
looking up forward references
"""
if isinstance(annotation, ForwardRef):
try:
annotation = evaluate_forwardref(annotation, memo)
except NameError:
if global_config.forward_ref_policy is ForwardRefPolicy.ERROR:
raise
elif global_config.forward_ref_policy is ForwardRefPolicy.WARN:
warnings.warn(
f"Cannot resolve forward reference {annotation.__forward_arg__!r}",
TypeHintWarning,
stacklevel=get_stacklevel(),
)
return
if annotation is Any or annotation is SubclassableAny or isinstance(value, Mock):
return
# Skip type checks if value is an instance of a class that inherits from Any
if not isclass(value) and SubclassableAny in type(value).__bases__:
return
extras: tuple[Any, ...]
origin_type = get_origin(annotation)
if origin_type is Annotated:
annotation, *extras_ = get_args(annotation)
extras = tuple(extras_)
origin_type = get_origin(annotation)
else:
extras = ()
if origin_type is not None:
args = get_args(annotation)
# Compatibility hack to distinguish between unparametrized and empty tuple
# (tuple[()]), necessary due to https://github.com/python/cpython/issues/91137
if origin_type in (tuple, Tuple) and annotation is not Tuple and not args:
args = ((),)
else:
origin_type = annotation
args = ()
for lookup_func in checker_lookup_functions:
checker = lookup_func(origin_type, args, extras)
if checker:
checker(value, origin_type, args, memo)
return
if not isinstance(value, origin_type):
> raise TypeCheckError(f"is not an instance of {qualified_name(origin_type)}")
E typeguard.TypeCheckError: item 0 of argument "count_neg" (list) is not an instance of int
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/typeguard/_checkers.py:680: TypeCheckError
====================================================================================================================== warnings summary =======================================================================================================================
../../miniconda3/envs/pdl1lung/lib/python3.9/site-packages/cytomine/models/collection.py:26
/home/danielbell/miniconda3/envs/pdl1lung/lib/python3.9/site-packages/cytomine/models/collection.py:26: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
from collections import MutableSequence
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
=================================================================================================================== short test summary info ===================================================================================================================
FAILED tests/test_tps.py::test_score_not_implemented_error[0-01] - typeguard.TypeCheckError: argument "count_pos" (str) is not an instance of int
FAILED tests/test_tps.py::test_score_not_implemented_error[count_neg3-count_pos3] - typeguard.TypeCheckError: item 0 of argument "count_neg" (list) is not an instance of int
FAILED tests/test_tps.py::test_score_not_implemented_error[count_neg4-count_pos4] - typeguard.TypeCheckError: item 0 of argument "count_pos" (list) is not an instance of int
FAILED tests/test_tps.py::test_score_not_implemented_error[count_neg5-count_pos5] - typeguard.TypeCheckError: item 0 of argument "count_neg" (list) is not an instance of int
=========================================================================================================== 4 failed, 3 passed, 1 warning in 0.94s ============================================================================================================
</code></pre>
| <python><pytest><single-dispatch> | 2023-07-27 11:06:50 | 1 | 2,045 | DanielBell99 |
76,779,110 | 5,094,589 | Deleting all greetings and footers from email text. What is the approach? | <p>I have thousands of emails (dataset) and one step in a preprocessing pipeline is a deletion of all greetings (For example: Dear Sir and Madam, Dear Team X etc.) and footers (like: Best Regards, Sincerely etc.)</p>
<p>I see 3 approaches of solving the problem:</p>
<ol>
<li>Define all possible greeting and footer variants and use something like <code>if word in set_of_greetings or word in set_of_footers -> delete word</code> with the <code>string.replace()</code> function.</li>
<li>Train or use a pre-trained neural network for NER (named entity recognition) to tag "greeting" and "footer" entities, and then delete all footers and greetings.</li>
<li>A combination of these two approaches.</li>
</ol>
<p>With the first approach, the difficulty is finding and storing all possible forms of greetings (the emails are mostly in German, but some are in English).</p>
<p>With the second approach, the difficulty is finding a dataset or labelling the existing emails, which might take days or even longer.</p>
<p>What would be the best approach in your opinion and how would you tackle the problem using this approach?</p>
<p>Each email is just text (a string).</p>
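A minimal sketch of approach 1 using regular expressions (the phrase lists are illustrative only and would need to be extended for real German/English mail):

```python
import re

# Illustrative (not exhaustive) phrase lists; extend as needed.
GREETINGS = r'(dear|hello|hi|hallo|sehr geehrte\w*|liebe\w*)'
FOOTERS = r'(best regards|kind regards|sincerely|mit freundlichen grüßen|viele grüße|beste grüße)'

greeting_re = re.compile(rf'^{GREETINGS}\b.*$', re.IGNORECASE | re.MULTILINE)
footer_re = re.compile(rf'^{FOOTERS}\b[\s\S]*\Z', re.IGNORECASE | re.MULTILINE)

def strip_boilerplate(email: str) -> str:
    email = footer_re.sub('', email)    # drop the footer line and everything after it
    email = greeting_re.sub('', email)  # drop the greeting line
    return email.strip()

print(strip_boilerplate('Dear Team X,\nPlease check invoice 42.\nBest Regards,\nAnna'))
# Please check invoice 42.
```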
| <python><text> | 2023-07-27 11:04:24 | 0 | 1,106 | Daniil Yefimov |
76,779,058 | 11,350,845 | Timestamp of current time change to 1970-01-20 after serialisation by Serialising Producer | <p>I've been the confluent-kafka[avro] (2.1.1) library to send the result of an AWS lambda to our kafka with an avro schema.</p>
<p>The serialisation of the schema is fine, except for the first field, "creationTime", which invariably arrives as something like <code>1970-01-20T13:34:12.378Z</code> in Kafka (as visualised in AKHQ).
The timestamp of the message itself is fine as long as I leave it at the default; if I try to set it to the same timestamp as in the schema, the message is shown in AKHQ as sent over 50 years ago.</p>
<p>I have the problem with the Kafka clusters in all our environments, and I can reproduce it on my local dev environment.</p>
<p>I tried to debug the code, but I can't get any info after the serialisation. Here's what I'm sure of:</p>
<ul>
<li>Timestamp variable content just before serialisation (float): <code>1690451888.45323</code></li>
<li>Time shown on the message in AKHQ: <code>1970-01-20T13:34:11.888Z</code></li>
</ul>
<p>After conversion, this time gives approximately <code>1690452</code> as a Unix timestamp. I initially thought the value was somehow truncated before serialisation, but it doesn't look like it.</p>
<p>Here's how I get my timestamp in the values:</p>
<pre class="lang-py prettyprint-override"><code>timestamp = datetime.now(tz=tz.gettz(self.config.timezone)).timestamp()
values = {
"creationTime": timestamp,
"description": message,
"eventTypeId": self.config.metric_name,
"pgd": [],
"contracts": [],
"points": [],
"objects": [],
}
</code></pre>
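One way to see why the value lands on 1970-01-20: <code>datetime.timestamp()</code> returns seconds, and if the Avro field is declared with a milliseconds logical type such as <code>timestamp-millis</code> (an assumption here, since the schema is truncated), the rendered datetime is effectively the original value divided by 1000:

```python
from datetime import datetime, timezone

seconds_ts = 1690451888.45323  # what datetime.now(...).timestamp() returns (seconds)

# Interpreted as *milliseconds* since the epoch, the same number corresponds to:
as_millis = datetime.fromtimestamp(seconds_ts / 1000, tz=timezone.utc)
print(as_millis)  # 1970-01-20 13:34:11.888453+00:00
```

That division reproduces the AKHQ display down to the millisecond.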
<p>My Kafka code:</p>
<pre class="lang-py prettyprint-override"><code>"""This module contains everything necessary to send messages to kafka"""
import logging
from confluent_kafka import SerializingProducer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import StringSerializer
from param_store_models import KafkaInfo
LOGGER = logging.getLogger(__name__)
class KafkaProducer:
"""Class used to send messages to kafka"""
def __init__(self, schema: str, kafka_info: KafkaInfo):
producer_ssm_conf = {
"bootstrap.servers": kafka_info.bootstrap_servers,
"security.protocol": kafka_info.security_protocol,
"sasl.mechanism": kafka_info.sasl_mecanism,
"sasl.username": kafka_info.sasl_username,
"sasl.password": kafka_info.sasl_password,
}
registry_ssm_conf = {"url": kafka_info.schema_registry_url}
serializer = AvroSerializer(
SchemaRegistryClient(registry_ssm_conf), schema, conf={"auto.register.schemas": False}
)
producer_default_conf = {
"value.serializer": serializer,
"key.serializer": StringSerializer(),
"enable.idempotence": "true",
"max.in.flight.requests.per.connection": 1,
"retries": 5,
"acks": "all",
"retry.backoff.ms": 500,
"queue.buffering.max.ms": 500,
"error_cb": self.delivery_report,
}
self.__serializing_producer = SerializingProducer({**producer_default_conf, **producer_ssm_conf})
def produce(self, topic: str, key=None, value=None, timestamp=0):
"""Asynchronously produce message to a topic"""
LOGGER.info(f"Produce message {value} to topic {topic}")
self.__serializing_producer.produce(topic, key, value, on_delivery=self.delivery_report, timestamp=timestamp)
def flush(self):
"""
Flush messages and trigger callbacks
:return: Number of messages still in queue.
"""
LOGGER.debug("Flushing messages to kafka")
return self.__serializing_producer.flush()
@staticmethod
def delivery_report(err, msg):
"""
Called once for each message produced to indicate delivery result.
Triggered by poll() or flush().
"""
if err:
LOGGER.error(f"Kafka message delivery failed: {err}")
else:
LOGGER.info(f"Kafka message delivered to {msg.topic()} [{msg.partition()}]")
</code></pre>
<p>My avro schema (partially redacted to hide the customer info)</p>
<pre class="lang-json prettyprint-override"><code>{
"type": "record",
"name": "EventRecord",
"namespace": "com.event",
"doc": "Schema of a raw supervision event",
"fields": [
{
"name": "creationTime",
"type": {
"type": "long",
"logicalType": "timestamp-millis"
}
},
{
"name": "eventTypeId",
"type": "string"
},
{
"name": "internalProductId",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "description",
"type": [
"null",
"string"
],
"default": null
},
{
"name": "contracts",
"type": {
"type": "array",
"items": "string"
}
},
{
"name": "points",
"type": {
"type": "array",
"items": "string"
}
},
{
"name": "objects",
"type": {
"type": "array",
"items": "string"
}
}
]
}
</code></pre>
<p>Received message on the akhq topic</p>
<pre class="lang-json prettyprint-override"><code>{
"creationTime": "1970-01-20T13:34:12.378Z",
"eventTypeId": "test",
"description": "Test",
"pgd": [],
"contracts": [],
"points": [],
"objects": []
}
</code></pre>
<h1>Versions</h1>
<p>confluent-kafka[avro]==2.1.1</p>
<p>kafka broker = latest</p>
<p>Operating system: Windows (Kafka stack on Docker) / AWS ECS (Docker containers also)</p>
| <python><apache-kafka><avro><confluent-schema-registry><akhq> | 2023-07-27 10:58:34 | 1 | 382 | Junn Sorran |
76,779,015 | 6,035,977 | Why does sympy.Min cause a TypeError? | <p>The following minimalistic script crashes with SymPy 1.12 and Python 3.11:</p>
<pre><code>import sympy as sp
u, v = sp.symbols("u v")
sp.Min(u**1.0*v, v**1.0*u)
</code></pre>
<p>The result is a <code>TypeError: cannot determine truth value of Relational</code>. I have this situation arising implicitly in a general framework, so just dropping the (here) unnecessary minimum or the <code>**1.0</code> is not feasible for me. Using something implicit to simplify the expression would be fine, but for example calling <code>sp.simplify</code> on the first or second argument doesn't succeed at simplifying the expression.</p>
| <python><math><sympy><minimum> | 2023-07-27 10:52:01 | 2 | 333 | Corram |
76,778,934 | 4,551,325 | Pandas DataFrame: select row-wise max value in absolute terms | <p>I have a Dataframe with only numeric data:</p>
<pre><code>[ In1]: df = pd.DataFrame(np.random.randn(5, 3).round(2), columns=['A', 'B', 'C'])
df
[Out1]: A B C
0 -0.27 1.22 1.10
1 -3.22 0.48 -1.64
2 1.42 0.24 -0.12
3 -1.12 0.44 0.23
4 1.88 -0.38 0.62
</code></pre>
<p>How do I select, for each row, the max value in absolute terms while preserving the sign?</p>
<p>In this case it would be:</p>
<pre><code>0 1.22
1 -3.22
2 1.42
3 -1.12
4 1.88
</code></pre>
<p>I got as far as determining which column to use:</p>
<pre><code>[ In2]: loc_max = df.abs().idxmax(axis=1)
loc_max
[Out2]:
0 B
1 A
2 A
3 A
4 A
</code></pre>
<p>Performance is important because my actual dataframe is big.</p>
<hr />
<p><strong>SOLUTIONS COMPARISON:</strong></p>
<p>All answers below will give the desired outcome.</p>
<p>Performance comparison on a slightly bigger dataframe:</p>
<pre><code>df = pd.DataFrame(np.random.randn(1000, 100).round(2))
def numpy_argmax():
idx_max = np.abs(df.values).argmax(axis=1)
val = df.values[range(len(df)), idx_max]
return pd.Series(val, index=df.index)
def check_sign():
row_max = df.abs().max(axis=1)
return row_max * (-1) ** df.ne(row_max, axis=0).all(axis=1)
def loop_rows():
return df.apply(lambda s: s[s.abs().idxmax()], axis=1)
def pandas_loc():
s = df.abs().idxmax(axis=1)
val = [df.loc[x, y] for x, y in zip(s.index, s)]
return pd.Series(val, index=df.index)
%timeit numpy_argmax()
%timeit check_sign()
%timeit loop_rows()
%timeit pandas_loc()
</code></pre>
<p>Results:</p>
<p><a href="https://i.sstatic.net/VGK7N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VGK7N.png" alt="enter image description here" /></a></p>
<p>As usual, going down to the <code>numpy</code> level behind the <code>pandas</code> curtain achieves the best performance. (Let me know if that's not always true.)</p>
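For reference, a minimal self-contained sketch of the numpy-level approach on the first three sample rows (values taken from the example above):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': [-0.27, -3.22, 1.42],
                   'B': [1.22, 0.48, 0.24],
                   'C': [1.10, -1.64, -0.12]})

# Column position of the largest absolute value per row...
idx = np.abs(df.to_numpy()).argmax(axis=1)
# ...then fancy-index back into the original (signed) values
result = pd.Series(df.to_numpy()[np.arange(len(df)), idx], index=df.index)
print(result.tolist())  # [1.22, -3.22, 1.42]
```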
| <python><pandas><dataframe> | 2023-07-27 10:40:57 | 3 | 1,755 | data-monkey |
76,778,911 | 2,485,799 | Faster way to split a large CSV file evenly by Groups into smaller CSV files? | <p>I'm sure there is a better way for this but I am drawing a blank. I have a CSV file in this format. The ID column is sorted so everything is grouped together at least:</p>
<pre><code>Text ID
this is sample text, AAAA
this is sample text, AAAA
this is sample text, AAAA
this is sample text, AAAA
this is sample text, AAAA
this is sample text2, BBBB
this is sample text2, BBBB
this is sample text2, BBBB
this is sample text3, CCCC
this is sample text4, DDDD
this is sample text4, DDDD
this is sample text5, EEEE
this is sample text5, EEEE
this is sample text6, FFFF
this is sample text6, FFFF
</code></pre>
<p>What I want to do is quickly split the CSV across X smaller CSV files. So if X==3, then AAAA would go into "1.csv", BBBB would go into "2.csv", CCCC would go into "3.csv" and the next group would loop back around and go into "1.csv".</p>
<p>The groups vary in size so a hardcoded split by numbers won't work here.</p>
<p>Is there a faster way to split these reliably than my current method, which just uses Pandas groupby in Python to write them?</p>
<pre><code> file_ = 0
num_files = 3
for name, group in df.groupby(by=['ID'], sort=False):
file_+=1
group['File Num'] = file_
group.to_csv(str(file_) + '.csv', index=False, header=False, mode='a')
if file_ == num_files:
file_ = 0
</code></pre>
<p>This is a python based solution but I am open to stuff using <code>awk</code> or bash if it gets the job done.</p>
<p>EDIT:</p>
<p>For clarification, I want the groups split across a fixed amount of files I can set.</p>
<p>In this case, 3. (So x = 3). The first group (AAAA) would go into 1.csv, the 2nd into 2.csv, the third into 3.csv, and then for the fourth group it would loop back and insert it into 1.csv, and so on.</p>
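As a sketch of the round-robin idea without pandas: stream consecutive groups with <code>itertools.groupby</code> (this assumes the ID column is pre-sorted, as stated) and rotate the target bucket per group. The in-memory buckets here stand in for the output files and are illustrative:

```python
import itertools

rows = [("this is sample text", "AAAA")] * 5 + \
       [("this is sample text2", "BBBB")] * 3 + \
       [("this is sample text3", "CCCC")] + \
       [("this is sample text4", "DDDD")] * 2

num_files = 3
buckets = {n: [] for n in range(1, num_files + 1)}

# groupby yields each run of rows sharing an ID exactly once;
# i % num_files + 1 assigns group 0 -> file 1, group 1 -> file 2, ...
for i, (key, group) in enumerate(itertools.groupby(rows, key=lambda r: r[1])):
    buckets[i % num_files + 1].extend(group)

print({n: len(v) for n, v in buckets.items()})  # {1: 7, 2: 3, 3: 1}
```

In a real script each <code>extend</code> would instead be a <code>csv.writer</code> appending to the corresponding open file handle, so the input is read exactly once.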
<p>Example output 1.csv:</p>
<pre><code>Text ID
this is sample text, AAAA
this is sample text, AAAA
this is sample text, AAAA
this is sample text, AAAA
this is sample text, AAAA
this is sample text4, DDDD
this is sample text4, DDDD
</code></pre>
<p>Example output 2.csv:</p>
<pre><code>Text ID
this is sample text2, BBBB
this is sample text2, BBBB
this is sample text2, BBBB
this is sample text5, EEEE
this is sample text5, EEEE
</code></pre>
<p>Example output 3.csv:</p>
<pre><code>Text ID
this is sample text3, CCCC
this is sample text6, FFFF
this is sample text6, FFFF
</code></pre>
| <python><pandas><csv><file><awk> | 2023-07-27 10:38:08 | 5 | 6,883 | GreenGodot |
76,778,866 | 18,029,617 | Converting Datetime Format to 'DD-MM-YYYY' in Pandas Alters Dtype to Object | <p>In my recent work with pandas, I encountered a challenge related to formatting datetime values. Unfortunately, whenever I attempt to change the datetime column's format to "<strong>DD-MM-YYYY</strong>" it results in the <strong>dtype</strong> being converted to an <strong>object</strong>, which is not desirable.</p>
<pre><code>import pandas as pd
data = {
'id': [1, 2, 3],
'_time': ["2023-07-27 10:30:00", "2023-07-28 15:45:00", "2023-07-29 09:15:00"]
}
df = pd.DataFrame(data)
df['_time'] = pd.to_datetime(df['_time'])
df['_time'] = df['_time'].dt.strftime("%d-%m-%Y %H:%M:%S")
print(df)
print(df.dtypes)
</code></pre>
<ul>
<li>The code above successfully formats the data as desired, but it alters the <strong>dtype</strong> of the "_time" column to an <strong>object</strong> type.</li>
</ul>
<p>Despite trying various methods using <strong>pandas</strong> and the <strong>datetime module</strong>, I haven't been able to find a solution that preserves the desired dtype (<strong>datetime64</strong>). The issue persists, and I'm still searching for a resolution.</p>
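For context, a <code>datetime64</code> column carries no display format at all; a common pattern is to keep the dtype on the column and apply <code>strftime</code> only at rendering time. A minimal sketch:

```python
import pandas as pd

df = pd.DataFrame({'_time': pd.to_datetime(["2023-07-27 10:30:00"])})

# The column keeps its datetime64 dtype as long as strftime's
# string result is not assigned back to it
print(df['_time'].dtype)

# The "DD-MM-YYYY" form is produced only for output
formatted = df['_time'].dt.strftime("%d-%m-%Y %H:%M:%S")
print(formatted.iloc[0])  # 27-07-2023 10:30:00
```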
| <python><pandas><datetime><python-datetime> | 2023-07-27 10:33:04 | 0 | 691 | Bennison J |
76,778,848 | 11,028,689 | AttributeError: 'numpy.ndarray' object has no attribute 'torch' or 'matmul' | <p>I am having attribute errors with my code. I have numpy version '1.25.1', python 3.10 and torch 2.0.1.</p>
<p>When using transform and target_transform, I was following this pytorch tutorial (<a href="https://pytorch.org/tutorials/beginner/basics/transforms_tutorial.html" rel="nofollow noreferrer">https://pytorch.org/tutorials/beginner/basics/transforms_tutorial.html</a>)</p>
<pre><code>import numpy as np
import torch
...
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor, Lambda
..
# defining the class
class EmbeddingDataset(Dataset):
def __init__(self, embedding_fp, transform=None, target_transform=None):
with open(embedding_fp, "rb") as fIn:
stored_data = pickle.load(fIn)
stored_labels = stored_data['labels']
stored_embeddings = stored_data['embeddings']
self.X = stored_embeddings
self.y = stored_labels.to_numpy()
self.transform = transform
self.target_transform = target_transform
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
X = self.X[idx]
y = self.y[idx]
if self.transform:
X= self.transform(X)
# X = torch.tensor(x).float()
if self.target_transform:
y = self.target_transform(y)
# y = torch.tensor(y).float().unsqueeze(1)
return X, y
# instantiating a class
train_data = EmbeddingDataset('embeddings_db_train.pkl', transform=ToTensor(),
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)))
X_train = train_data.X
y_train = train_data.y
test_data = EmbeddingDataset('embeddings_db_test.pkl', transform=ToTensor(),
target_transform=Lambda(lambda y: torch.zeros(10, dtype=torch.float).scatter_(0, torch.tensor(y), value=1)))
X_test = test_data.X
y_test = test_data.y
type(X_test)
numpy.ndarray
# defining some functions
def plain_accuracy(self, X_test, y_test):
# evaluate accuracy of the model on
# the plain (x_test, y_test) dataset
w = torch.tensor(self.weight)
b = torch.tensor(self.bias)
out = torch.softmax(X_test.matmul(w) + b).reshape(-1, 1)
correct = torch.abs(y_test - out) < 0.5
return correct.float().mean()
...
def softmax(self, enc_x):
val_out = enc_x.T.dot(self.weight) + self.bias
val_expo = np.array([EncryptedLR.our_exp(x) for x in val_out])
val_pred = np.array([x * EncryptedLR.goldschmidt(x, M=1/EncryptedLR.GOLDSCHMIDT_CONST, n=EncryptedLR.GOLDSCHMIDT_ITER) for x in val_expo])
return val_pred
# error with the line
out = torch.softmax(X_test.matmul(w)+b).reshape(-1,1)
AttributeError: 'numpy.ndarray' object has no attribute 'matmul'
# also an error
X_test_ = X_test.torch.tensor(X).float()
AttributeError: 'numpy.ndarray' object has no attribute 'torch'
</code></pre>
<p>I have two questions. First, I do not quite understand why my X_test is still a numpy array; I thought it would be converted to a tensor when instantiating/creating the test_data object from my class. Second, I do not know how to fix the AttributeErrors.
Can someone advise me on what I need to change in my code?</p>
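As a side note on the failing lines: <code>ndarray</code> has no <code>.matmul</code> (or <code>.torch</code>) method; matrix products on arrays use the <code>@</code> operator or the <code>np.matmul</code> function, and converting an array to a tensor would normally go through <code>torch.from_numpy</code>. A numpy-only sketch of the operator form, with illustrative values:

```python
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
w = np.array([0.5, 0.5])

# ndarray exposes matrix multiplication via the @ operator
# (np.matmul(X, w) is equivalent); there is no X.matmul(...) method
out = X @ w
print(out)  # [1.5 3.5]
```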
| <python><numpy><pytorch><torchvision> | 2023-07-27 10:30:10 | 0 | 1,299 | Bluetail |
76,778,746 | 18,649,992 | Parallel RNG with JAX sharding | <p>What is the correct approach for generating pseudo random numbers in parallel using sharding in <code>jax</code>?</p>
<p>The following doesn't work (due to sampling the same chain)</p>
<pre><code>sharding = jax.sharding.PositionalSharding(
jax.experimental.mesh_utils.create_device_mesh((8,)),
)
@jax.jit(static_argnum=1, out_sharding=sharding.reshape(8, 1))
def uniform_sharded(rng_key, n):
return jax.random.uniform(key=rng_key, shape=(n,))
</code></pre>
<p>I considered performing a <code>pmap</code> across devices with an array of keys, but the output would then be dependent on the device count and would seem to defeat the intent of sharding.</p>
| <python><random><parallel-processing><jax> | 2023-07-27 10:18:02 | 1 | 440 | DavidJ |
76,778,641 | 9,906,395 | How can I apply reduction rates to a capital and estimate future capital values | <p>I have 10 companies with a capital that gets eroded over time or does not get eroded at all. The erosion rate may vary according to the time period, e.g. for the first case the erosion rate before 2030 (included) is -2% and after that is -5% until 2050.</p>
<p>I want to create additional columns with the future capital values until 2050, discounting the capital at the different rates - how can I achieve this?</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
import numpy as np
# Define target scopes
erossion1 = [
(-0.02, 2030),
(-0.05, 2050)
]
erossion2 = [
(-0.02, 2050)
]
erossion3 = np.nan
# Define a simple DataFrame
data = {
'Decrease Rate': [erossion1 for _ in range(5)] + [erossion2 for _ in range(3)] + [np.nan] + [np.nan] ,
'Capital 2021': np.random.rand(10)*1000
}
test = pd.DataFrame(data)
print(test)
</code></pre>
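For illustration, a sketch of how the future values could be compounded year by year, assuming each <code>(rate, year)</code> pair applies through that year and that rates compound annually (the starting capital here is illustrative):

```python
capital_2021 = 1000.0
erosion = [(-0.02, 2030), (-0.05, 2050)]  # (rate, last year it applies)

values = {}
capital = capital_2021
for year in range(2022, 2051):
    # pick the first rate whose end year has not yet passed
    rate = next(r for r, until in erosion if year <= until)
    capital *= (1 + rate)
    values[year] = capital

print(round(values[2030], 2))  # 833.75
print(round(values[2050], 2))
```

Applied row-wise (with the NaN rows passed through unchanged), each year's value would become one of the additional columns.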
| <python><pandas> | 2023-07-27 10:05:10 | 1 | 1,122 | Filippo Sebastio |
76,778,628 | 8,233,873 | Find overlapping ranges from a large set of data efficiently | <p>Is there an efficient algorithm for checking for overlaps between multiple bounded continuous ranges?</p>
<p>If we have a pair of tasks, each represented by a start time and an end time, a function such as the one below can detect clashes.</p>
<pre class="lang-python prettyprint-override"><code># tasks are of the form (start_time, end_time)
def has_clash(task_1: tuple[float, float], task_2: tuple[float, float]) -> bool:
"""Check if a clash exists between two tasks."""
return (task_2[1] > task_1[0]) and (task_2[0] < task_1[1])
</code></pre>
<p>To check multiple tasks, we would have to run the above test pairwise on the set of tasks, for an overall complexity of O(n^2), e.g.:</p>
<pre class="lang-python prettyprint-override"><code>def has_any_clash(tasks: list[tuple[float, float]]) -> bool:
"""Check if any tasks clash; return True if a clash exists, False otherwise."""
if len(tasks) < 2:
return False
clash = any(
has_clash(task, other_task)
for i, task in enumerate(tasks[1:])
for other_task in tasks[:i + 1]
)
return clash
</code></pre>
<p>Can this complexity be improved upon by e.g. sorting or some other mathematical magic?</p>
<p>Is there an algorithm that does this more efficiently?</p>
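One standard improvement: sort by start time, after which it suffices to compare each task with its immediate successor (if any two tasks overlap, some adjacent pair in start order must overlap), giving O(n log n) overall. A sketch, matching the strict-inequality semantics of <code>has_clash</code> above:

```python
def has_any_clash_sorted(tasks: list[tuple[float, float]]) -> bool:
    """O(n log n): after sorting by start, only adjacent pairs need checking."""
    tasks = sorted(tasks)
    return any(tasks[i][1] > tasks[i + 1][0] for i in range(len(tasks) - 1))

print(has_any_clash_sorted([(0, 1), (2, 3), (0.5, 2)]))  # True
print(has_any_clash_sorted([(0, 1), (1, 2), (5, 9)]))    # False
```

The adjacency argument: if task i ends after a later-starting task j begins, it also ends after every start between them, so the clash is caught at i's immediate neighbour.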
| <python><sorting><time-complexity> | 2023-07-27 10:03:30 | 1 | 313 | multipitch |
76,778,558 | 17,082,611 | The requested array has an inhomogeneous shape after 1 dimensions when converting list to numpy array | <p>I am trying to load training and test data using a function named <code>load_data_new</code> which reads data from <code>topomaps/</code> folder and labels from <code>labels/</code> folder. They both contain <code>.npy</code> files.</p>
<p>Specifically <code>topomaps/</code> folder contains:</p>
<p><a href="https://i.sstatic.net/tvHyG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tvHyG.png" alt="topomaps" /></a></p>
<p>where, for example, <code>s01_trial03.npy</code> contains 128 topomaps while <code>s01_trial12</code> contains 2944 topomaps (that is, they might differ in shape!)</p>
<p>while <code>labels/</code> folder contains:</p>
<p><a href="https://i.sstatic.net/yHrnM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yHrnM.png" alt="labels" /></a></p>
<p>Moreover training data must contain only topomaps whose label is 0 (while test data can contain topomaps whose label is 0, 1 or 2). This is my code:</p>
<pre><code>def load_data_new(topomap_folder: str, labels_folder: str, test_size: float = 0.2) -> tuple:
"""
Load and pair topomap data and corresponding label data from separate folders
:param topomap_folder: (str) The path to the folder containing topomaps .npy files
:param labels_folder: (str) The path to the folder containing labels .npy files
:param test_size: (float) The proportion of data to be allocated to the testing set (default is 0.2)
:return: (tuple) Two tuples, each containing a topomap ndarray and its corresponding label 1D-array.
Note:
The function assumes that the filenames of the topomaps and labels are in the same order.
It also assumes that there is a one-to-one correspondence between the topomap files and the label files.
If there are inconsistencies between the shapes of the topomap and label files, it will print a warning message.
Example:
topomap_folder = "topomaps"
labels_folder = "labels"
(x_train, y_train), (x_test, y_test) = load_data_new(topomap_folder, labels_folder, test_size=0.2)
"""
topomap_files = os.listdir(topomap_folder)
labels_files = os.listdir(labels_folder)
# Sort the files to ensure the order is consistent
topomap_files.sort()
labels_files.sort()
labels = []
topomaps = []
for topomap_file, label_file in zip(topomap_files, labels_files):
if topomap_file.endswith(".npy") and label_file.endswith(".npy"):
topomap_path = os.path.join(topomap_folder, topomap_file)
label_path = os.path.join(labels_folder, label_file)
topomap_data = np.load(topomap_path)
label_data = np.load(label_path)
if topomap_data.shape[0] != label_data.shape[0]:
raise ValueError(f"Warning: Inconsistent shapes for {topomap_file} and {label_file}")
topomaps.append(topomap_data)
labels.append(label_data)
x = np.array(topomaps)
y = np.array(labels)
# Training set only contains images whose label is 0 for anomaly detection
train_indices = np.where(y == 0)[0]
x_train = x[train_indices]
y_train = y[train_indices]
# Split the remaining data into testing sets
remaining_indices = np.where(y != 0)[0]
x_remaining = x[remaining_indices]
y_remaining = y[remaining_indices]
_, x_test, _, y_test = train_test_split(x_remaining, y_remaining, test_size=test_size)
return (x_train, y_train), (x_test, y_test)
(x_train, y_train), (x_test, y_test) = load_data_new("topomaps", "labels")
</code></pre>
<p>But unfortunately I am getting this error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/vae.py", line 574, in <module>
(x_train, y_train), (x_test, y_test) = load_data_new("topomaps", "labels")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/Users/alex/PycharmProjects/VAE-EEG-XAI/vae.py", line 60, in load_data_new
x = np.array(topomaps)
^^^^^^^^^^^^^^^^^^
ValueError: setting an array element with a sequence. The requested array has an inhomogeneous shape after 1 dimensions. The detected shape was (851,) + inhomogeneous part.
</code></pre>
<p>This indicates that the elements within the <code>topomaps</code> list have different shapes, leading to an inhomogeneous array when trying to convert it to a NumPy array: NumPy arrays require elements of consistent shape.</p>
<p>How may I fix?</p>
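One way out, assuming the per-trial arrays share every dimension except the first (the number of topomaps), is to pool the samples with <code>np.concatenate</code> instead of <code>np.array</code>; the labels list can be concatenated the same way so the index-based filtering still lines up. A sketch with illustrative topomap dimensions:

```python
import numpy as np

a = np.zeros((128, 8, 8))   # e.g. one trial with 128 topomaps
b = np.zeros((2944, 8, 8))  # e.g. another trial with 2944 topomaps

# np.array([a, b]) fails: the first dimensions differ (inhomogeneous shape).
# Concatenating along axis 0 pools all samples into one array instead:
x = np.concatenate([a, b], axis=0)
print(x.shape)  # (3072, 8, 8)
```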
| <python><numpy><machine-learning><scikit-learn><numpy-ndarray> | 2023-07-27 09:55:07 | 1 | 481 | tail |
76,778,521 | 8,973,620 | Find last column number where condition is true | <p>I have the following 2D array, where ones are always on the left:</p>
<pre><code>arr = np.array([
[1, 1, 1, 0],
[1, 0, 0, 0],
[1, 1, 0, 0],
[1, 1, 1, 1],
[1, 0, 0, 0]
])
</code></pre>
<p>How can I find, for each row, the index of the last (rightmost) occurrence of 1?</p>
<p>My expected result would be:</p>
<pre><code>array([2, 0, 1, 3, 0])
</code></pre>
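One vectorised sketch: reverse each row, take <code>argmax</code> (which returns the position of the first maximum), and map that position back to the original column index. This assumes every row contains at least one 1, as in the example:

```python
import numpy as np

arr = np.array([[1, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 1, 0, 0],
                [1, 1, 1, 1],
                [1, 0, 0, 0]])

# argmax on the reversed rows finds the first 1 from the right;
# subtracting from the last column index maps it back
idx = arr.shape[1] - 1 - np.argmax(arr[:, ::-1], axis=1)
print(idx)  # [2 0 1 3 0]
```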
| <python><python-3.x><numpy> | 2023-07-27 09:50:59 | 5 | 18,110 | Mykola Zotko |
76,778,403 | 189,035 | Numba: Cannot determine the type of nested functions | <p>I have this numba script (in a file called 'MySource.py') that I'm trying to compile:</p>
<pre><code>from numpy import empty_like, inf, minimum, max
from numba import pycc
cc = pycc.CC('base_model')
cc.verbose = True
@cc.export('cumMin','float64[:](float64[:])')
def cumMin(A):
r = empty_like(A)
t = inf
for i in range(len(A)):
t = minimum(t, A[i])
r[i] = t
return r
# nm -C base_model.so > bingo.txt
@cc.export('da_comp','List(float64)(float64[:, :])')
def da_comp(rolled_ndarray):
this_series_p = rolled_ndarray[:, 0]
this_series_v = rolled_ndarray[:, 1]
return [max(cumMin(this_series_p)), max(cumMin(this_series_v))]
if __name__ == "__main__":
cc.compile()
</code></pre>
<p>In bash, I do:</p>
<pre><code> python3 MySource.py
</code></pre>
<p>but I get a compilation error:</p>
<pre><code> raise TypingError(msg, loc=inst.loc)
numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
Untyped global name 'cumMin': Cannot determine Numba type of <class 'function'>
File "MySource.py", line 24:
def da_comp(rolled_ndarray):
<source elided>
return [max(cumMin(this_series_p)), max(cumMin(this_series_v))]
</code></pre>
<p>I use numba 0.57.1 How to fix this?</p>
<p><strong>Edit:</strong>
I tried splitting the functions into two files as suggested in the answer by Eric below, but this doesn't solve the issue (I get the same error).</p>
| <python><numba> | 2023-07-27 09:36:26 | 2 | 5,809 | user189035 |
76,778,374 | 16,498,000 | Is there a way to use "static" variables in a function to return values later on? | <p>I have the following input:</p>
<pre><code>[['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0'],
['0', '0', '10', '10', '0', '0', '10', '10', '0', '0', '0', '10', '10', '10', '10', '10', '0', '0', '0'],
['0', '0', '10', '10', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0', '0', '10', '10', '0', '0'],
['0', '0', '10', '10', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0', '0', '10', '10', '0', '0'],
['0', '0', '10', '10', '10', '10', '10', '10', '0', '0', '0', '0', '10', '10', '10', '10', '0', '0', '0'],
['0', '0', '0', '10', '10', '10', '10', '10', '0', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0'],
['0', '0', '0', '0', '0', '0', '10', '10', '0', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0'],
['0', '0', '0', '0', '0', '0', '10', '10', '0', '0', '0', '10', '10', '10', '10', '10', '10', '0', '0'],
['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0'],
['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0']]
</code></pre>
<p>For the following function:</p>
<pre><code>def assign_points(lists):
"""
Assigns points from a list of lists to Vec4 objects.
...
"""
z_max = float('-inf')
z_min = float('inf')
points = []
for y, l in enumerate(lists):
for x, point in enumerate(l):
try:
z, color = parse_point(point)
if color is None:
color = no_color(z)
points.append(Vec4(x, y, z, 0, color))
# Update z_max and z_min if necessary
if z > z_max:
z_max = z
if z < z_min:
z_min = z
except MyWarning as e:
exit(f"Error on line {str(y)}, item {str(x)}: {e}")
return points, z_max, z_min
</code></pre>
<p>However, my problem is that:</p>
<ol>
<li>I feel <code>assign_points</code> is too bloated and does/returns too much.</li>
<li>the input lists can get quite big and I think looping through them more than once is wasteful</li>
<li>I'm not sure global vars is best practice</li>
</ol>
<p>I thought about writing a function that stored the values and (depending on a flag argument) returned the maximum values once called outside <code>assign_points</code>. But I'm pretty sure Python doesn't have static vars (right?)</p>
<p>How would you go about this problem? Should I leave it as is or make global vars or do a 3rd option?</p>
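Python indeed has no static local variables, but closure-held state (or a function attribute, or a small class) gives the same effect without globals. A sketch of the flag-free variant, with illustrative names:

```python
def make_extent_tracker():
    """Closure-held state stands in for C-style static variables."""
    state = {"z_max": float("-inf"), "z_min": float("inf")}

    def update(z):
        state["z_max"] = max(state["z_max"], z)
        state["z_min"] = min(state["z_min"], z)

    def extents():
        return state["z_max"], state["z_min"]

    return update, extents

update, extents = make_extent_tracker()
for z in (3, -1, 7):
    update(z)
print(extents())  # (7, -1)
```

<code>assign_points</code> could then call <code>update(z)</code> inside its single loop and return only the points, while the caller reads the extremes from <code>extents()</code> afterwards.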
| <python><python-3.x><static-variables> | 2023-07-27 09:33:45 | 1 | 572 | MiguelP |
76,778,348 | 2,991,891 | Is there a way to check if a PDF is flat/flattened, using Python or Node.js | <p>I have a large set of PDFs that are created on different devices and applications. I just need to know if a PDF is flat/flattened or not. I'd prefer solutions implementable in Python or Node.js, but any POSIX CLI tool would also be helpful.</p>
<p>I would appreciate any suggestions even if it works most of the time.</p>
<h3>Update</h3>
<p>Since it's asked in the comments about my definition of a flat PDF, I'd add two definitions:</p>
<ol>
<li>Definition 1: a PDF is flat if it only has one layer.</li>
<li>Definition 2: a PDF is flat if it doesn't have any interactive elements.</li>
</ol>
<p>Any solution that solves the problem either for definition 1 or 2 is fine.</p>
| <python><node.js><pdf><flatten-pdf> | 2023-07-27 09:30:48 | 1 | 1,124 | Reza |
76,778,044 | 10,771,559 | Converting a dataframe from a matrix format | <p>I have a dataframe that looks like this:</p>
<pre><code> 1 2 3
1 1 4 2
2 4 1 8
3 2 8 1
</code></pre>
<p>I am trying to convert it to a dataframe with this format:</p>
<pre><code>Comparison Value
1 v 1 1
1 v 2 4
1 v 3 2
2 v 3 8
</code></pre>
<p>Reproducible dataframe:</p>
<pre><code>{1: {1: 1.0, 2: 4, 3: 2}, 2: {1: 4, 2: 1.0, 3: 8}, 3: {1: 2, 2: 8, 3: 1.0}}
</code></pre>
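One sketch of the reshaping, walking the upper triangle so each symmetric pair appears once (this keeps the whole diagonal, which the desired output only partially does; use <code>j &gt; i</code> instead if self-comparisons aren't wanted):

```python
import pandas as pd

d = {1: {1: 1.0, 2: 4, 3: 2}, 2: {1: 4, 2: 1.0, 3: 8}, 3: {1: 2, 2: 8, 3: 1.0}}
df = pd.DataFrame(d)

# Keep only cells on or above the diagonal of the symmetric matrix
records = [(f"{df.index[i]} v {df.columns[j]}", df.iat[i, j])
           for i in range(len(df.index))
           for j in range(len(df.columns)) if j >= i]
long = pd.DataFrame(records, columns=["Comparison", "Value"])
print(long)
```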
| <python><pandas> | 2023-07-27 08:49:34 | 2 | 578 | Niam45 |
76,777,987 | 15,177,019 | SciPy griddata does not work as I would expect | <p>I wanted to interpolate pixels of a picture from the surrounding pixels, because some pixels do not contain the correct values.
For this, I found this <a href="https://stackoverflow.com/questions/37662180/interpolate-missing-values-2d-python/39596856#39596856">answer</a>.</p>
<p>I put it in this function:</p>
<pre class="lang-py prettyprint-override"><code>from typing import List, Tuple
import numpy as np
from scipy import interpolate
def bilinear_interpolation_and_insert(indices_list: List[Tuple[int, int]],
array: np.ndarray) -> np.ndarray:
"""
Perform bilinear interpolation using scipy.interpolate.griddata and insert the interpolated values into the 2D array
at the given indices.
Parameters:
indices_list (list of tuples): A list of tuples, each containing the row and column indices (both integers) for interpolation.
array (list of lists): A 2D array with numerical values.
Returns:
list of lists: The updated 2D array with the interpolated values inserted at the given indices.
"""
def interpolate_missing_pixels(image: np.ndarray,
mask: np.ndarray,
method: str = 'cubic',
fill_value: int = 0) -> np.ndarray:
h, w = image.shape[:2]
xx, yy = np.meshgrid(np.arange(w), np.arange(h))
known_x = xx[~mask]
known_y = yy[~mask]
known_v = image[~mask]
missing_x = xx[mask]
missing_y = yy[mask]
interp_values = interpolate.griddata((known_x, known_y),
known_v, (missing_x, missing_y),
method=method,
fill_value=fill_value)
interp_image = image.copy()
interp_image[missing_y, missing_x] = interp_values
return interp_image
mask = np.zeros(array.shape, dtype=bool)
for row, col in indices_list:
mask[row, col] = True
interpolated_array = interpolate_missing_pixels(array, mask)
return interpolated_array
# Example usage:
array_2d = np.array([
[100, 2, 100],
[40, 5, 40],
[100, 2, 100],
])
indices_to_interpolate = [(1, 1)] # Use integers for indices
updated_array = bilinear_interpolation_and_insert(indices_to_interpolate,
array_2d)
print("Updated 2D array:")
for row in updated_array:
print(row)
</code></pre>
<p>I would expect it to yield a number midway between 2 and 40, e.g. 21; however, the result for the central value of the array is 14. With the <code>linear</code> mode it seems to interpolate along only one axis. What I would like to have is the mean of the surrounding values. Any idea how I can achieve this?</p>
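If the goal is literally the mean of the surrounding values, a direct neighbourhood average (or a convolution with a uniform kernel restricted to the masked pixels) may be a better fit than triangulation-based <code>griddata</code>. A minimal sketch for a single interior pixel, using the array from the question:

```python
import numpy as np

a = np.array([[100.0, 2.0, 100.0],
              [40.0, 5.0, 40.0],
              [100.0, 2.0, 100.0]])

# Replace an interior pixel by the mean of its 4-connected neighbours
r, c = 1, 1
a[r, c] = np.mean([a[r - 1, c], a[r + 1, c], a[r, c - 1], a[r, c + 1]])
print(a[1, 1])  # 21.0
```

For many masked pixels at once, the same idea generalises to summing shifted copies of the image over the valid neighbours and dividing by their count.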
| <python><image-processing><scipy> | 2023-07-27 08:44:03 | 0 | 433 | David Zanger |
76,777,819 | 9,868,301 | How can I overwrite the default print output of a python dataclass? | <p>Given the <code>InventoryItem</code> example from the <code>dataclasses</code> documentation:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
@dataclass
class InventoryItem:
"""Class for keeping track of an item in inventory."""
name: str
unit_price: float
quantity_on_hand: int = 0
InventoryItem(name="Banana", unit_price=5, quantity_on_hand=3)
# OUTPUT:
# InventoryItem(name='Banana', unit_price=5, quantity_on_hand=3)
</code></pre>
<p>How can I override the default output message such that the string</p>
<pre class="lang-py prettyprint-override"><code>"3 Banana(s) at a unit price of 5."
</code></pre>
<p>is displayed?</p>
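One sketch: define <code>__str__</code> on the dataclass, which <code>print()</code> uses, while the generated <code>__repr__</code> stays available:

```python
from dataclasses import dataclass

@dataclass
class InventoryItem:
    """Class for keeping track of an item in inventory."""
    name: str
    unit_price: float
    quantity_on_hand: int = 0

    def __str__(self) -> str:
        return (f"{self.quantity_on_hand} {self.name}(s) "
                f"at a unit price of {self.unit_price}.")

item = InventoryItem(name="Banana", unit_price=5, quantity_on_hand=3)
print(item)  # 3 Banana(s) at a unit price of 5.
```

Note that the bare-expression echo in an interactive session still uses <code>__repr__</code>; override that too (or pass <code>repr=False</code> selectively to fields) if that output should change as well.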
| <python><python-dataclasses> | 2023-07-27 08:22:38 | 1 | 1,269 | wueli |
76,777,656 | 552,247 | Python3: a subprocess sleeps when started by Popen | <p>I need to run two commands in parallel. Each takes a long time to perform its processing (about 1 minute for one and almost 2 minutes for the other), and both produce many bytes on the stdout and stderr streams (about 300kB on stderr and several MB on stdout). And I have to capture both streams.</p>
<p>I used to use subprocess.run() to execute them, but that way I was serializing them, and since the commands executed are single-threaded I thought of parallelizing with Popen.</p>
<p>Unfortunately, the simple way doesn't work:</p>
<pre class="lang-py prettyprint-override"><code>class test:
def __init__(self, param):
cmd = "... %d" % param # the command line parametrized with param
self.__p = subprocess.Popen(shlex.split(cmd), shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
def waitTerm(self, t=None):
self.__p.wait(t)
if self.__p.returncode != 0:
print(self.__p.stderr.read(), file=sys.stderr)
raise Exception('failure')
self.o = self.__p.stdout.read()
t1 = test(1)
t8 = test(8)
t1.waitTerm() # t1 should be longer
t8.waitTerm()
# here I can use stdout from both process
print(t1.o) # this is an example
</code></pre>
<p>The processes stall in a sleep state. I believe this is caused by the pipe buffers filling up.</p>
<p>In this case what is the smartest thing to do?</p>
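Calling <code>wait()</code> before reading does deadlock once a pipe buffer fills; the usual fix is <code>communicate()</code>, which drains stdout and stderr while waiting. A sketch with two parallel children (the flooding child command here is illustrative):

```python
import subprocess
import sys

# A child that floods stdout far beyond a typical pipe buffer (~64 KiB)
child = [sys.executable, "-c", "import sys; sys.stdout.write('x' * 1000000)"]

# Start both first so they run in parallel...
p1 = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)
p2 = subprocess.Popen(child, stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

# ...then drain each; communicate() reads both streams while waiting,
# so neither child blocks on a full pipe
out1, err1 = p1.communicate()
out2, err2 = p2.communicate()
print(len(out1), len(out2))  # 1000000 1000000
```

In the <code>waitTerm</code> method above, replacing <code>self.__p.wait(t)</code> plus the later reads with a single <code>self.o, err = self.__p.communicate(timeout=t)</code> would follow the same pattern.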
| <python><python-3.x><subprocess><pipe> | 2023-07-27 08:01:40 | 1 | 1,598 | mastupristi |
76,777,450 | 20,240,835 | Loss of certain values when using dict and expand together in Snakemake | <p>I have tried to use a function that returns a dict as a rule input; here is a simple example:</p>
<pre><code># debug.smk
input_a='/{output}/to/{group_name}/input_a'
input_b='/{output}/to/{group_name}/input_b'
rule_a=True
rule_b=True
output='output'
group_name='group_name'
def get_rule_output():
output={}
if rule_a:
output['output_a']=expand(input_a, output=output, group_name=group_name)
print(expand(input_a, output=output, group_name=group_name))
if rule_b:
output['output_xx']=expand(input_b, output=output, group_name=group_name)
print(output)
return output
rule all:
input:
get_rule_output(),
</code></pre>
<p>I use the following command to run it:</p>
<pre><code>snakemake -s debug.smk
</code></pre>
<p>What I expect:</p>
<p><code>get_rule_output()</code> should return a dict with two outputs:</p>
<pre><code>{'output_a': ['/output_a/to/group_name/input_a'], 'output_xx': ['/output_a/to/group_name/input_b']}
</code></pre>
<p>However, its actual return value is</p>
<pre><code>{'output_a': [], 'output_xx': ['/output_a/to/group_name/input_b']}
</code></pre>
<p>The debug output from print() is the following:</p>
<pre><code>snakemake -s snakemake/debug.smk
['/output_a/to/group_name/input_a']
{'output_a': [], 'output_xx': ['/output_a/to/group_name/input_b']}
Building DAG of jobs...
#other snakemake debug output
</code></pre>
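The output above is consistent with a name collision: inside `get_rule_output()` the local dict `output = {}` shadows the module-level string `output = 'output'`, so `expand(..., output=output, ...)` receives the dict. On the first call the dict is empty, so the wildcard expands to nothing; by the second call the dict has the key `'output_a'`, which also explains why `output_xx` came out as `'/output_a/...'` (iterating a dict yields its keys). A dependency-free sketch of the same shadowing, with `expand` approximated by `str.format` (an assumption, to keep it runnable outside Snakemake):

```python
# Sketch of the suspected bug: the local dict named `output` shadows the
# module-level string `output`, so the wildcard gets the dict, not 'output'.
input_a = '/{output}/to/{group_name}/input_a'
output = 'output'            # module-level value the wildcard should receive
group_name = 'group_name'

def expand(template, **wildcards):
    # Snakemake-like behaviour (approximation): iterables expand to one entry
    # per element, so an empty dict expands to an empty list.
    vals = wildcards['output']
    vals = vals if isinstance(vals, (list, dict)) else [vals]
    return [template.format(output=v, group_name=wildcards['group_name']) for v in vals]

def buggy():
    output = {}              # shadows the global string!
    output['output_a'] = expand(input_a, output=output, group_name=group_name)
    return output

def fixed():
    result = {}              # distinct name, no shadowing
    result['output_a'] = expand(input_a, output=output, group_name=group_name)
    return result

print(buggy())   # {'output_a': []}
print(fixed())   # {'output_a': ['/output/to/group_name/input_a']}
```

Renaming the local variable (as in `fixed()`) should restore the expected behaviour.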
| <python><bioinformatics><snakemake> | 2023-07-27 07:33:15 | 2 | 689 | zhang |
76,777,253 | 1,973,798 | List comprehension: avoid the warning 'Cannot access member "get" for type "list[Unknown]"'. How to? | <p>I'm trying to clean up my Python code a bit and remove as many Pylance warnings as possible (the code has been tested and it works).
Particularly, I'm struggling with the following one.</p>
<p><a href="https://i.sstatic.net/CEQ7J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CEQ7J.png" alt="enter image description here" /></a></p>
<p>My <code>ctr</code> variable holds a list, and each <code>item</code> is a pair made of a numerical index and a dictionary.</p>
<pre><code>ctr = sorted(ctr_calc.find_ctr().items())
vals = [(r, round(v.get('ctr'), 2)) for r, v in ctr[1:-1]]
</code></pre>
<p>The error I get is <code>Cannot access member "get" for type "list[Unknown]"</code>. The warning itself is inaccurate: once accessed, <code>v</code> is a dictionary, not a list.</p>
<p>Is there any way to get rid of this warning (other than adding <code># type: ignore</code>)?</p>
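One warning-free sketch (the return type of `find_ctr()` is an assumption here — adjust to the real one): give the sorted list an explicit annotation so Pyright knows each `v` is a dict, and the `.get(...)` call type-checks without any suppression comment:

```python
# Assumption: find_ctr() returns dict[int, dict[str, float]]; annotating the
# sorted result tells Pyright the element shape, so v is typed as a dict.
from typing import Dict, List, Tuple

ctr_data: Dict[int, Dict[str, float]] = {1: {"ctr": 0.1234}, 2: {"ctr": 0.5678}, 3: {"ctr": 0.9}}

ctr: List[Tuple[int, Dict[str, float]]] = sorted(ctr_data.items())
vals = [(r, round(v.get("ctr", 0.0), 2)) for r, v in ctr[1:-1]]
print(vals)   # [(2, 0.57)]
```

Adding a return annotation to `find_ctr()` itself achieves the same thing at the source; the default argument to `.get` also removes the separate "round(None)" concern.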
| <python><python-typing><pyright> | 2023-07-27 07:07:06 | 1 | 1,041 | Andrea Moro |
76,777,096 | 4,885,544 | Installing python packages locally for Azure Function App | <p>I have created a new Azure Function (Python 3.10.4) that executes an HTTP trigger, following the instructions for the Azure Functions Core Tools in my CLI. Everything works great until I try to add a package to my requirements.txt/<code>__init__.py</code> script. All I would like to do is run a simple script to fetch from a database, but I am running into the error below when typing <code>func start</code> and it executes my <code>__init__.py</code> script. Below is the error:</p>
<pre><code> Exception: ModuleNotFoundError: No module named 'psycopg2'. Please check the requirements.txt file for the missing module. For more info, please refer the troubleshooting guide: https://aka.ms/functions-modulenotfound
</code></pre>
<p>Here is the content of my requirements.txt:</p>
<pre><code>azure-functions
psycopg2==2.9.1
</code></pre>
<p>And my <code>__init__.py</code> is still the default file that is generated, I am just trying to import my package at the top:</p>
<pre><code>import logging
import azure.functions as func
import psycopg2


def main(req: func.HttpRequest) -> func.HttpResponse:
    logging.info('Python HTTP trigger function processed a request.')

    name = req.params.get('name')
    if not name:
        try:
            req_body = req.get_json()
        except ValueError:
            pass
        else:
            name = req_body.get('name')

    if name:
        return func.HttpResponse(f"Hello, {name}. This HTTP triggered function executed successfully.")
    else:
        return func.HttpResponse(
            "This HTTP triggered function executed successfully. Pass a name in the query string or in the request body for a personalized response.",
            status_code=200
        )
</code></pre>
<p>And here is my file hierarchy for reference:</p>
<pre><code>.
├── TestTrigger
│ ├── __init__.py
│ ├── __pycache__
│ │ └── __init__.cpython-311.pyc
│ └── function.json
├── getting_started.md
├── host.json
├── local.settings.json
└── requirements.txt
</code></pre>
<p>Am I doing something wrong? When I push it up to Azure, it says it is installing <code>psycopg</code> but it is broken in Azure too.</p>
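One thing worth trying (an assumption to verify against the Azure Functions Python docs): `psycopg2` ships as a source distribution that needs PostgreSQL build headers, while `psycopg2-binary` ships pre-built wheels, so a requirements.txt along these lines often avoids the native build step both locally and in Azure:

```
azure-functions
psycopg2-binary==2.9.1
```

Locally, the Core Tools resolve packages against the active Python environment, so `pip install -r requirements.txt` must also be run in the same virtual environment `func start` uses.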
| <python><python-3.x><azure-functions><psycopg2><azure-http-trigger> | 2023-07-27 06:41:48 | 1 | 486 | sclem72 |
76,777,086 | 3,164,187 | Haystack: PromptNode takes too much time to load the model | <p>I use the below code based on the tutorials from Haystack:</p>
<pre><code>lfqa_prompt = PromptTemplate("deepset/question-answering-with-references", output_parser=AnswerParser(reference_pattern=r"Document\[(\d+)\]"))
prompt_node = PromptNode(model_name_or_path="google/flan-t5-large", default_prompt_template=lfqa_prompt)

pipe = Pipeline()
pipe.add_node(component=retriever, name="retriever", inputs=["Query"])
pipe.add_node(component=prompt_node, name="prompt_node", inputs=["retriever"])

output = pipe.run(query="A question?")
print(output["answers"][0].answer)
</code></pre>
<p>The very first time when I ran this the below line took time as it downloaded the model to my cache:</p>
<pre><code>prompt_node = PromptNode(model_name_or_path="google/flan-t5-large", default_prompt_template=lfqa_prompt)
</code></pre>
<p>My assumption was that the next run would use the cached model. As expected it's not downloading again, but it still takes a lot of time.</p>
<p>Can we reduce this time by saving the already processed model?</p>
| <python><huggingface-datasets><haystack> | 2023-07-27 06:40:01 | 1 | 1,442 | user3164187 |
76,777,047 | 7,357,166 | Can Pandas write excel files with merged cells? | <p>I have some data that I want to export to an Excel spreadsheet. The data contains nested dictionaries with a variable number of elements.</p>
<p>It looks like this</p>
<pre><code>[{ "12345" :
{ "cup" : "123456789",
"spoon" : "234567891",
}
},
{ "23456" :
{ "plate" : "345678912",
}
}
]
</code></pre>
<p>I want to export this data in an Excel spreadsheet that looks like this:</p>
<p><a href="https://i.sstatic.net/S12uv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S12uv.png" alt="enter image description here" /></a></p>
<p>My data is more complex, but I guess if I understand how to get this done I can apply it myself.</p>
<p>So I was thinking about using the xlsxwriter Python module, but I would have to loop through the data to create the cells.
Then I remembered that Pandas has an easy way to import such data into a dataframe and has a nice Excel export.</p>
<p>But I don't know if Pandas supports something like merged cells.</p>
<p>What would you suggest to use in such case?</p>
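Pandas alone does not merge arbitrary data cells, but the nested dicts can first be flattened into rows where the outer key appears only on its first row; those blank continuation rows mark exactly the cells a later xlsxwriter `worksheet.merge_range` call would merge (the merge itself is not shown, to keep the sketch dependency-free):

```python
# Flatten the nested structure into (outer, item, value) rows; the outer key
# is blanked on continuation rows, which is the shape a merged layout needs.
data = [
    {"12345": {"cup": "123456789", "spoon": "234567891"}},
    {"23456": {"plate": "345678912"}},
]

rows = []
for entry in data:
    for outer, inner in entry.items():
        for i, (item, value) in enumerate(inner.items()):
            rows.append((outer if i == 0 else "", item, value))

for row in rows:
    print(row)
# ('12345', 'cup', '123456789')
# ('', 'spoon', '234567891')
# ('23456', 'plate', '345678912')
```

From here, a pandas `DataFrame(rows)` can be written with `to_excel(..., engine="xlsxwriter")`, and the runs of blank outer cells give the row ranges to pass to `merge_range`; alternatively, writing the outer key as a `MultiIndex` level lets pandas render it merged by default.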
| <python><pandas><xlsxwriter> | 2023-07-27 06:33:03 | 1 | 422 | Empusas |
76,776,813 | 10,327,984 | cuda error while generating features from resnet50 | <p>I am attempting to generate features using ResNet-50 and loading the model onto my GPU to speed up the process. Here is the code I am using:</p>
<pre><code>import torch
from torchvision import transforms
from transformers import AutoImageProcessor, ResNetModel
from tqdm import tqdm

# Check if GPU is available, otherwise use CPU
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load the image processor and ResNet-50 model
image_processor = AutoImageProcessor.from_pretrained("microsoft/resnet-50")
model = ResNetModel.from_pretrained("microsoft/resnet-50").to(device)

# Define the transformation to convert image to tensor
transform = transforms.Compose([
    transforms.ToTensor()
])

# Function to get features from an image using the model
def get_features_image(image):
    inputs = image_processor(image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    last_hidden_states = outputs.last_hidden_state
    return last_hidden_states

# List to store the computed features for training dataset
chart_features_train = []

# Iterate through the images in the training dataset
for image in tqdm(train_chart_dataset):
    image = image.convert("RGB")
    image_transformed = transform(image)

    # Move the transformed image tensor to the GPU
    image_cuda = image_transformed.to(device)

    # Compute features for the image and append to the list
    chart_features_train.append(get_features_image(image_cuda))
</code></pre>
<p>However, I encounter the following error during the execution:</p>
<pre><code>RuntimeError: Input type (torch.FloatTensor) and weight type (torch.cuda.FloatTensor) should be the same
</code></pre>
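The traceback points inside `get_features_image`: `image_processor(...)` returns fresh CPU tensors, so the CUDA model receives CPU input regardless of the earlier `image_transformed.to(device)`. The usual fix is to move every tensor in the processor's output to the model's device before the forward pass; here is the pattern with a stand-in tensor class so the sketch runs without a GPU (in real code, calling `.to(device)` on the returned `BatchFeature` should do the same — verify against the transformers docs):

```python
# Minimal sketch of the fix pattern: every tensor the processor returns must
# be moved to the model's device before calling the model.
class Tensor:                     # stand-in for torch.Tensor
    def __init__(self, device="cpu"):
        self.device = device
    def to(self, device):         # returns a copy on the target device
        return Tensor(device)

device = "cuda"
inputs = {"pixel_values": Tensor()}                     # processor output: CPU
inputs = {k: v.to(device) for k, v in inputs.items()}   # the fix
print(inputs["pixel_values"].device)                    # cuda
```

With torch, the same one-liner inside `get_features_image` (`inputs = {k: v.to(device) for k, v in inputs.items()}`) should resolve the FloatTensor/cuda.FloatTensor mismatch.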
| <python><pytorch> | 2023-07-27 05:48:03 | 1 | 622 | Mohamed Amine |
76,776,757 | 567,328 | Python API documentation for FAISS | <p>I am searching for Python API documentation for FAISS but am unable to find it. The wiki page says the Python translation is very close to the C++ classes, whose documentation can be found <a href="https://faiss.ai/index.html" rel="nofollow noreferrer">here</a>.</p>
| <python><nlp><faiss><similarity-search> | 2023-07-27 05:34:43 | 1 | 6,444 | ChrisOdney |
76,776,748 | 2,153,235 | Python convention indicating whether method chaining is method cascading? | <p>I am spinning up on Python and Spark. I noticed that Python uses a
lot of chaining of methods and/or properties. A lot of what I found
online about this for Python describes method <em>cascading</em>, even though
it is referred to as method chaining. According to
<a href="https://en.wikipedia.org/wiki/Method_chaining" rel="nofollow noreferrer">Wikipedia</a>, <em>chaining</em>
returns an object, from which another method is invoked. On the other
hand, <em>cascading</em> requires that the returned object is "self", so it
is a subset of chaining.</p>
<p>Is there a convention by which one can quickly recognize when
cascading is specifically being used as opposed to general chaining? Here are two examples
I saw from a <a href="https://sparkbyexamples.com/pyspark-tutorial/?expand_article=1" rel="nofollow noreferrer">PySpark
tutorial</a>:</p>
<pre><code>df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "192.168.1.100:9092")
      .option("subscribe", "json_topic")
      .option("startingOffsets", "earliest")  # from the start
      .load())

(df.selectExpr("CAST(id AS STRING) AS key", "to_json(struct(*)) AS value")
   .writeStream
   .format("kafka")
   .outputMode("append")
   .option("kafka.bootstrap.servers", "192.168.1.100:9092")
   .option("topic", "json_data_topic")
   .start()
   .awaitTermination())
</code></pre>
<p>It really helps accelerate understanding the code, especially
when one is unfamiliar with the ecosystem of classes in question. In
cascading, it's just consecutive calls to methods of the same object.
Chaining that is <em>not</em> cascading, however, requires much more care to
decipher, since the object could change part way through the chain.</p>
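Python has no syntactic marker for the distinction; the only reliable signals are the documentation and the methods' return values (by convention, a docstring stating "Returns self" indicates cascading, while a new return type indicates general chaining). A minimal sketch of both styles in one class:

```python
# Cascading returns self (same object all along); general chaining may return
# a different object at each step.
class Builder:
    def __init__(self):
        self.opts = {}

    def option(self, key, value):
        """Cascading: returns the same Builder object."""
        self.opts[key] = value
        return self

    def build(self):
        """Chaining: returns a *different* object (a plain dict)."""
        return dict(self.opts)

b = Builder()
result = b.option("format", "kafka").option("mode", "append").build()
assert b.option("x", 1) is b        # cascading: identity preserved
print(result)                       # {'format': 'kafka', 'mode': 'append'}
```

In the PySpark example, `readStream`/`writeStream` switch to reader and writer objects while the `.option(...)` calls in between cascade on that same reader or writer, so the chain mixes both styles.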
| <python><method-chaining> | 2023-07-27 05:32:50 | 1 | 1,265 | user2153235 |
76,776,745 | 2,966,197 | Cannot resolve numpy and other packages version dependency using Docker | <p>I am trying to set up a <code>Docker</code> build for some Python code, but <code>numpy</code> keeps giving dependency issues with other package versions.</p>
<p>Here is my <code>Dockerfile</code> (this one uses a two-stage build, but I have also tried a single-stage build with both python 3.11 and python 3.11-slim):</p>
<pre><code># Stage 1
FROM python:3.11 as builder
#tried FROM python:3.11 and FROM python:3.11-slim as well
# working directory
WORKDIR /app
RUN apt-get update && apt-get install -y \
build-essential \
&& rm -rf /var/lib/apt/lists/*
COPY requirements.txt .
#virtual environment
RUN python -m venv venv
ENV PATH="/app/venv/bin:$PATH"
RUN pip install --no-cache-dir --upgrade pip
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
# Stage 2. Tried without this as well in a single stage execution
FROM python:3.11-slim
# Set the working directory inside the container
WORKDIR /app
COPY --from=builder /app .
# Run command
CMD [ "python", "./test.py"]
</code></pre>
<p>When I run this <code>Dockerfile</code>, I get following error:</p>
<pre><code>ERROR: Cannot install -r requirements.txt because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested numpy==1.23.5
argilla 1.13.2 depends on numpy<1.24.0
contourpy 1.1.0 depends on numpy>=1.16
langchain 0.0.200 depends on numpy<2 and >=1
layoutparser 0.3.4 depends on numpy
matplotlib 3.7.1 depends on numpy>=1.20
numexpr 2.8.4 depends on numpy>=1.13.3
onnxruntime 1.15.1 depends on numpy>=1.24.2
</code></pre>
<p>I have tried a single-stage build as well, with both the base <code>python 3.11</code> image and <code>python 3.11-slim</code>, but both give the same dependency issue. Despite pinning <code>numpy 1.23.5</code> (which satisfies most of the package dependencies), it still errors on the rest. Not sure how to get this issue resolved.</p>
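As listed, the constraints are mutually unsatisfiable regardless of Docker: `onnxruntime 1.15.1` requires `numpy>=1.24.2` while `argilla 1.13.2` requires `numpy<1.24.0`, so no single numpy pin can satisfy both. The way out is to relax one side, e.g. by pinning an older onnxruntime whose numpy constraint fits under 1.24 (the exact version below is an assumption — check its metadata on PyPI before committing to it):

```
# requirements.txt sketch -- version choices are assumptions to verify:
numpy==1.23.5
argilla==1.13.2        # needs numpy<1.24.0
onnxruntime==1.14.1    # older release, assumed compatible with numpy<1.24
```

Running `pip install -r requirements.txt` in a clean local virtual environment reproduces the resolver's answer faster than rebuilding the image each time.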
| <python><docker><numpy> | 2023-07-27 05:31:35 | 1 | 3,003 | user2966197 |
76,776,506 | 993,856 | Unable to install dlib 19.24.0 on Windows 11 after putting cmake in path | <p>I'm having difficulties installing dlib 19.24.0 for Stable Diffusion. I've got Python 3.10.11 installed. When launching the Automatic 1111 Web UI, it fails to install dlib. So I tried installing directly via the console like this:</p>
<pre><code>C:\Windows\System32>pip install dlib==19.24.0
</code></pre>
<p>After doing so, the console returns a long error that ends with this message:</p>
<pre><code>ERROR: Could not build wheels for dlib, which is required to install pyproject.toml-based projects
</code></pre>
<p>Here is the full output after attempting the install command.</p>
<pre><code> Collecting dlib==19.24.0
Using cached dlib-19.24.0.tar.gz (3.2 MB)
Preparing metadata (setup.py) ... done
Building wheels for collected packages: dlib
Building wheel for dlib (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [68 lines of output]
running bdist_wheel
running build
running build_py
running build_ext
C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\setup.py:129: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if LooseVersion(cmake_version) < '3.1.0':
Building extension for Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Invoking CMake setup: 'cmake C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\tools\python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\build\lib.win-amd64-cpython-310 -DPYTHON_EXECUTABLE=C:\Users\kenpa\AppData\Local\Programs\Python\Python310\python.exe -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\build\lib.win-amd64-cpython-310 -A x64'
-- Building for: NMake Makefiles
CMake Error at CMakeLists.txt:5 (message):
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
You must use Visual Studio to build a python extension on windows. If you
are getting this error it means you have not installed Visual C++. Note
that there are many flavors of Visual Studio, like Visual Studio for C#
development. You need to install Visual Studio for C++.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
-- Configuring incomplete, errors occurred!
Traceback (most recent call last):
File "<string>", line 2, in <module>
File "<pip-setuptools-caller>", line 34, in <module>
File "C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\setup.py", line 222, in <module>
setup(
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\__init__.py", line 107, in setup
return distutils.core.setup(**attrs)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\core.py", line 185, in setup
return run_commands(dist)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\core.py", line 201, in run_commands
dist.run_commands()
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
self.run_command(cmd)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py", line 1234, in run_command
super().run_command(command)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\wheel\bdist_wheel.py", line 325, in run
self.run_command("build")
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py", line 1234, in run_command
super().run_command(command)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\command\build.py", line 131, in run
self.run_command(cmd_name)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\cmd.py", line 318, in run_command
self.distribution.run_command(command)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\dist.py", line 1234, in run_command
super().run_command(command)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
cmd_obj.run()
File "C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\setup.py", line 134, in run
self.build_extension(ext)
File "C:\Users\kenpa\AppData\Local\Temp\pip-install-us0rb766\dlib_71fc7c85fef145f580765efe56a6152b\setup.py", line 171, in build_extension
subprocess.check_call(cmake_setup, cwd=build_folder)
File "C:\Users\kenpa\AppData\Local\Programs\Python\Python310\lib\subprocess.py", line 369, in check_call
raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['cmake', 'C:\\Users\\kenpa\\AppData\\Local\\Temp\\pip-install-us0rb766\\dlib_71fc7c85fef145f580765efe56a6152b\\tools\\python', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Users\\kenpa\\AppData\\Local\\Temp\\pip-install-us0rb766\\dlib_71fc7c85fef145f580765efe56a6152b\\build\\lib.win-amd64-cpython-310', '-DPYTHON_EXECUTABLE=C:\\Users\\kenpa\\AppData\\Local\\Programs\\Python\\Python310\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\kenpa\\AppData\\Local\\Temp\\pip-install-us0rb766\\dlib_71fc7c85fef145f580765efe56a6152b\\build\\lib.win-amd64-cpython-310', '-A', 'x64']' returned non-zero exit status 1.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for dlib
Running setup.py clean for dlib
Failed to build dlib
ERROR: Could not build wheels for dlib, which is required to install pyproject.toml-based projects
</code></pre>
<p>I do have Visual Studio Code installed. And CMake is installed and is in my user and system environment variables, like so:</p>
<p><a href="https://i.sstatic.net/5jANO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5jANO.png" alt="cmake path in system settings" /></a></p>
<p>How can I successfully install dlib 19.24.0? Thanks.</p>
| <python><cmake><dlib><stable-diffusion> | 2023-07-27 04:29:16 | 0 | 2,475 | Ken Palmer |
76,776,460 | 19,157,137 | How to Shade a Region Between Two Curves on a Graph in Python | <p>I have two functions representing the Marginal Abatement Cost (MAC) and Marginal Damage Cost (MD), as follows:</p>
<pre class="lang-py prettyprint-override"><code>def MAC(E):
return 750 - E
def MD(E):
return 0.5 * E
</code></pre>
<p>I want to create a graph that shows both the MAC and MD curves on the same plot. Additionally, I would like to shade the region between the two curves where the MAC is higher than 0 in cost but lower than 250, and the emission level (E) varies between 0 and approximately 750.</p>
<p>I have tried using the <code>fill_between</code> function from Matplotlib, but the shading is not displaying as expected. How can I properly shade this region between the MAC and MD curves on the graph?</p>
<p>Code Sample:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
def MAC(E):
return 750 - E
def MD(E):
return 0.5 * E
# Generate values for E from 0 to 1000
E_values = np.linspace(0, 1000, 100)
# Calculate MAC and MD values for each E value
MAC_values = MAC(E_values)
MD_values = MD(E_values)
# Plot MAC and MD on the same graph
plt.plot(E_values, MAC_values, label='MAC', color='blue')
plt.plot(E_values, MD_values, label='MD', color='red')
# Add labels and a legend
plt.xlabel('Emission Level (E)')
plt.ylabel('Cost')
plt.legend()
# Add a title to the graph
plt.title('Marginal Abatement Cost (MAC) and Marginal Damage Cost (MD)')
# Shade the region between MAC and MD curves where MAC is higher than 0 and lower than 250
plt.fill_between(E_values, MAC_values, MD_values, where=(MAC_values >= 0) & (MAC_values <= 250), color='lightblue', alpha=0.5)
# Show the graph
plt.grid(True)
plt.axhline(0, color='black', linewidth=0.5)
plt.axvline(0, color='black', linewidth=0.5)
plt.legend()
plt.show()
</code></pre>
<p><strong>Expected Output:</strong></p>
<p>The code should produce a graph with two curves for MAC and MD, and the region between the two curves where the MAC is higher than 0 in cost but lower than 250 should be shaded in light blue. The emission level (E) should vary between 0 and approximately 750 on the x-axis. The blue-circled region is where I want to shade:</p>
<p><a href="https://i.sstatic.net/ghfNb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ghfNb.png" alt="enter image description here" /></a></p>
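The `where=` mask itself can be checked without plotting: `MAC(E) = 750 - E` lies in `[0, 250]` exactly when `E` is in `[500, 750]`, so the shading should span those x-values (the circled region). A dependency-free sanity check of the mask logic:

```python
# Sanity-check the shading mask without matplotlib: MAC(E) = 750 - E lies in
# [0, 250] exactly when E is in [500, 750] -- the region fill_between's
# where= argument should select.
def MAC(E):
    return 750 - E

def MD(E):
    return 0.5 * E

E_values = [i * 10 for i in range(101)]          # 0, 10, ..., 1000
mask = [0 <= MAC(E) <= 250 for E in E_values]

shaded = [E for E, m in zip(E_values, mask) if m]
print(shaded[0], shaded[-1])   # 500 750
```

If the mask is right but nothing renders, the usual culprits (assumptions worth checking) are too few sample points across the region and calling `fill_between` without `interpolate=True`, which can clip the shading at the mask boundaries.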
| <python><numpy><matplotlib> | 2023-07-27 04:15:11 | 1 | 363 | Bosser445 |
76,776,415 | 2,572,790 | Broken python 3.10 at Ubuntu 22.04 | <p>I tried to upgrade my Ubuntu from 20.04 to 22.04 and broke Python with this command:</p>
<pre class="lang-bash prettyprint-override"><code>sudo apt --fix-missing purge $(dpkg -l | grep 'python3\.1[01]' | awk '{print $2}')
</code></pre>
<p>Now I could not use commands as <code>apt-get update</code> or <code>install</code> with message:</p>
<pre class="lang-none prettyprint-override"><code>The following packages have unmet dependencies:
libpython3.10-stdlib : Depends: libpython3.10-minimal (= 3.10.4-1+focal1) but it is not going to be installed
python3.10-minimal : Depends: libpython3.10-minimal (= 3.10.4-1+focal2) but it is not going to be installed
Recommends: python3.10 but it is not going to be installed
E: Unmet dependencies. Try 'apt --fix-broken install' with no packages (or specify a solution).
</code></pre>
<p>The system has <code>python3.11</code> and <code>python3.10</code>, but the latter now has most of its standard-library modules broken, <code>http</code> for example, so the system is unstable.</p>
<p>Suggestions like <code>apt --fix-broken install</code> fail with messages like "dpkg found too many errors and stopped" or "no <code>http</code> module in python".</p>
<p>I have also seen advice to remove the packages mentioned above from the <code>/var/lib/dpkg/status</code> file, but it had no effect.</p>
<p>How can I solve this without a full reinstall of 22.04?</p>
| <python><ubuntu><ubuntu-22.04> | 2023-07-27 03:59:12 | 2 | 486 | user2572790 |
76,776,396 | 9,929,364 | Using angr's symbolic stack for solving binaries | <p>I am trying to adapt the technique mentioned in <a href="https://blog.notso.pro/2019-03-26-angr-introduction-part2/" rel="nofollow noreferrer">https://blog.notso.pro/2019-03-26-angr-introduction-part2/</a> on another binary (02_angr_find_condition). The binary can be found at <a href="https://github.com/jakespringer/angr_ctf/tree/master/dist" rel="nofollow noreferrer">https://github.com/jakespringer/angr_ctf/tree/master/dist</a></p>
<p>I am trying to figure out the offset for the padding, etc., but I cannot find the correct offset that makes the script print the correct password for the binary.</p>
<p>My code snippet as follows</p>
<pre><code>def main():
    base_address = 0x08048000
    start_address = 0x08048645

    def success(state):
        stdout_output = state.posix.dumps(sys.stdout.fileno())
        if b'Good Job.' in stdout_output:
            return True
        else:
            return False

    def bad(state):
        stdout_output = state.posix.dumps(sys.stdout.fileno())
        if b'Try again.' in stdout_output:
            return True
        else:
            return False

    getproject = angr.Project('angr_ctf/dist/02_angr_find_condition', auto_load_libs=False)
    getstate = getproject.factory.entry_state(addr=start_address)

    # set up the stack
    getstate.regs.ebp = getstate.regs.esp

    # this padding is for bytes prior (higher addresses) to the memory location we want to observe.
    padding_length_bytes = 0x30
    getstate.regs.esp -= padding_length_bytes

    # Input is %8s so it's eight characters;
    # the character array to store the input is 9 bytes long.
    input0 = claripy.BVS("input0", 64)
    getstate.stack_push(input0)

    for z in input0.chop(8):
        getstate.solver.add(z >= 0x20)
        getstate.solver.add(z <= 0x7f)

    simgr = getproject.factory.simgr(getstate)
    simgr.explore(find=success, avoid=bad)
    print(simgr)

    if len(simgr.found) > 0:
        print(simgr.found[0].posix.dumps(0))
        print(simgr.found[0].posix.dumps(1))
        print(simgr.found[0].solver.eval(input0, cast_to=bytes))
</code></pre>
| <python><angr> | 2023-07-27 03:52:01 | 0 | 797 | localacct |
76,776,309 | 19,157,137 | Issue with Python Code for Emission Abatement Calculation | <p>I'm facing an issue with the Python code that calculates emission abatement (EA) for two industries with marginal abatement costs MACA and MACB. The code is not giving the expected results for EA and MACA.</p>
<p>Here's the code:</p>
<pre class="lang-py prettyprint-override"><code>import sympy as sp
EA, EB = sp.symbols('EA EB')
abatement_percentage = 40
MACA = 450 - 3 * EA
MACB = 300 - 2 * EB
EA_value = sp.solve(sp.Eq(MACA, 0), EA)[0]
EB_value = sp.solve(sp.Eq(MACB, 0), EB)[0]
EA_abatement = EA_value * (abatement_percentage / 100)
EB_abatement = EB_value * (abatement_percentage / 100)
total_abatement = EA_abatement + EB_abatement
EB_value = sp.solve(sp.Eq(MACA, MACB), EB)[0]
EA_value = total_abatement - EB_value
MACA_value = MACA.subs(EB, EB_value)
print("EA =", EA_value)
print("EB =", EB_value)
print("EA after abatement:", EA_abatement)
print("EB after abatement:", EB_abatement)
print("Total abatement:", total_abatement)
print("MACA =", MACA_value)
</code></pre>
<p>Expected Output:</p>
<pre><code>EA = 78
EB = 42
EA after abatement: 31.2
EB after abatement: 16.8
Total abatement: 48.0
MACA = 216
</code></pre>
<p><strong>Mathematical steps taken:</strong></p>
<ol>
<li><p>Given: MACA = 450 - 3EA and MACB = 300 - 2EB</p>
</li>
<li><p>To find when MACA = 0 and MACB = 0:</p>
<ul>
<li>450 - 3EA = 0, solving for EA gives EA = 150</li>
<li>300 - 2EB = 0, solving for EB gives EB = 150</li>
</ul>
</li>
<li><p>Abatement of 40%:</p>
<ul>
<li>EA = 150 * 40% = 60</li>
<li>EB = 150 * 40% = 60</li>
</ul>
</li>
<li><p>EA + EB = 60 + 60 = 120</p>
</li>
<li><p>EA = 120 - EB</p>
</li>
<li><p>Finding the value of EB when MACA = MACB:</p>
<ul>
<li>450 - 3EA = 300 - 2EB</li>
<li>Substituting EA = 120 - EB in the above equation, we get: 450 - 3(120 - EB) = 300 - 2EB</li>
<li>Simplifying the equation, we find 5EB = 210, which gives EB = 42</li>
<li>EA = 120 - 42 = 78</li>
</ul>
</li>
<li><p>Now, MACA = 450 - 3EA = 450 - 3(78) = 216</p>
</li>
</ol>
<p><strong>Issue:</strong>
The code does not produce the expected results: MACA should be 216 and EA should be 78, but the current code computes different values for both.</p>
<p>Please help in fixing the code to obtain the correct results for the emission abatement calculation.</p>
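One likely culprit in the sympy version: the relation `EA = total_abatement - EB` is never substituted into the equal-marginal-cost equation before solving, so `sp.solve(sp.Eq(MACA, MACB), EB)` runs with two free symbols and returns an expression in EA rather than a number. The hand derivation itself checks out with plain arithmetic:

```python
# Dependency-free check of the intended algebra (mirrors the hand derivation):
# MACA = 450 - 3*EA, MACB = 300 - 2*EB, 40% abatement of each zero-cost level.
EA_max = 450 / 3           # MACA = 0  ->  EA = 150
EB_max = 300 / 2           # MACB = 0  ->  EB = 150

total = 0.40 * EA_max + 0.40 * EB_max    # 60 + 60 = 120

# Equal marginal costs with EA = total - EB:
#   450 - 3*(120 - EB) = 300 - 2*EB  ->  5*EB = 210
EB = 210 / 5               # 42
EA = total - EB            # 78
MACA = 450 - 3 * EA        # 216

print(EA, EB, MACA)        # 78.0 42.0 216.0
```

In sympy, the equivalent fix would be substituting `EA` with `total_abatement - EB` (via `.subs`) in the equation before calling `solve`, so that only one unknown remains.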
| <python><sympy> | 2023-07-27 03:22:18 | 1 | 363 | Bosser445 |
76,776,304 | 3,579,417 | Using UTF-8 in Python 3 string literals | <p>I have a script I'm writing where I need to print the character sequence "Qä" to the terminal. My terminal is using UTF-8 encoding. My file has <code># -*- coding: utf-8 -*-</code> at the top of it, which I think is not actually necessary for Python 3, but I put it there in case it made any difference. In the code, I have something like</p>
<pre><code>print("...Qä...")
</code></pre>
<p>This does not produce Qä. Instead it produces Q▒.</p>
<p>I then tried</p>
<pre><code>qa = "Qä".encode('utf-8')
print(f"...{qa}...")
</code></pre>
<p>This also does not produce Qä. It produces 'Q\xc3\xa4'.</p>
<p>I also tried</p>
<pre><code>qa = u"Qä"
print(f"...{qa}...")
</code></pre>
<p>This also produces Q▒.</p>
<p>However, I know that Python 3 can open files that contain UTF-8 and use the contents properly, so I created a file called qa.txt, pasted Qä into it, and then used</p>
<pre><code>with open("qa.txt") as qa_file:
qa = qa_file.read().strip()
print(f"...{qa}...")
</code></pre>
<p>This works. However, it's beyond dumb that I have to create this file in order to print this string. How can I put this text into my code as a string literal?</p>
<p>This question is NOT a duplicate of a question asking about Python 2.7; I am not using Python 2.7.</p>
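Given the git-bash tag, the symptoms match a terminal/stream encoding mismatch rather than a problem with the string literal itself: `'Q\xc3\xa4'` in the second attempt is exactly the UTF-8 byte representation of `Qä`, printed as a bytes object. Since Python 3.7, stdout can be forced to UTF-8 at runtime (setting `PYTHONUTF8=1` or `PYTHONIOENCODING=utf-8` in the environment is the equivalent without code changes):

```python
# Sketch: the literal is fine; force stdout to UTF-8 so the terminal receives
# the bytes it expects (requires Python 3.7+ for TextIOWrapper.reconfigure).
import sys

if hasattr(sys.stdout, "reconfigure"):
    sys.stdout.reconfigure(encoding="utf-8")

s = "Q\u00e4"                               # same string as the literal "Qä"
assert s.encode("utf-8") == b"Q\xc3\xa4"    # the bytes seen in attempt two
print(s)
```

That reading the string from a file works is consistent with this: `open()` decoded the file with a working default encoding, while the interactive console's encoding for literals typed or embedded in the script did not match, which is worth checking with `print(sys.stdout.encoding)`.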
| <python><unicode><utf-8><git-bash><python-unicode> | 2023-07-27 03:21:41 | 1 | 369 | faiuwle |
76,776,167 | 3,247,006 | How to get the all descendants of a node including itself with Django treebeard? | <p>I have <code>Category</code> model extending <a href="https://django-treebeard.readthedocs.io/en/latest/mp_tree.html" rel="nofollow noreferrer">MP_Node</a> with <a href="https://github.com/django-treebeard/django-treebeard" rel="nofollow noreferrer">Django treebeard</a> as shown below:</p>
<pre class="lang-py prettyprint-override"><code># "models.py"
from django.db import models
from treebeard.mp_tree import MP_Node
class Category(MP_Node):
name = models.CharField(max_length=50)
node_order_by = ('name',)
def __str__(self):
return self.name
</code></pre>
<p>Then, I could get all descendants of a category not including itself with <code>get_descendants()</code> using <strong>Django treebeard</strong> as shown below:</p>
<pre class="lang-py prettyprint-override"><code>categories = Category.objects.get(name="Food").get_descendants()
print(categories)
# <MP_NodeQuerySet [<Category: Meat>, <Category: Fish>]>
</code></pre>
<p>But, when I tried to get all descendants of a category including itself with <code>get_descendants(include_self=True)</code> using <strong>Django treebeard</strong>, I got the error as shown below:</p>
<pre class="lang-py prettyprint-override"><code>categories = Category.objects.get(name="Food").get_descendants(include_self=True)
print(categories) # Error
</code></pre>
<blockquote>
<p>TypeError: get_descendants() got an unexpected keyword argument 'include_self'</p>
</blockquote>
<p>Actually, I could get all descendants of a category including itself with <code>get_descendants(include_self=True)</code> using <a href="https://github.com/django-mptt/django-mptt" rel="nofollow noreferrer">Django mptt</a> as shown below. *I switched <strong>Django mptt</strong> to <strong>Django treebeard</strong> because <strong>Django mptt</strong> is unmaintained and gives some error:</p>
<pre class="lang-py prettyprint-override"><code>categories = Category.objects.get(name="Food").get_descendants(include_self=True)
print(categories)
# <TreeQuerySet [<Category: Food>, <Category: Meat>, <Category: Fish>]>
</code></pre>
<p>So, how can I get the all descendants of a category including itself with <strong>Django treebeard</strong>?</p>
| <python><django><django-models><descendant><django-treebeard> | 2023-07-27 02:40:20 | 1 | 42,516 | Super Kai - Kazuya Ito |
76,776,134 | 3,685,918 | Bloomberg API bdp function with LEI (xbbg) | <p>I am trying to pull the Moody's credit rating of an entity using the Python API.
The Excel formula works well, and Python using a ticker also works well,
but Python using an LEI instead of a ticker does not work.</p>
<p>For example, following two works well.</p>
<pre><code>Excel formula : BDP("KNPC1X7GHDZW8U2ZSF89 LEI", "RTG_MDY_OUTLOOK")
Python Xbbg : blp.bdp("1055z HK Equity", "RTG_MDY_OUTLOOK")
(fyi, KNPC1X7GHDZW8U2ZSF89 and 1055z HK Equity are Bank of China HongKong Ltd)
</code></pre>
<p>But following doesn't work in Python xbbg.</p>
<pre><code>blp.bdp("KNPC1X7GHDZW8U2ZSF89 LEI", "RTG_MDY_OUTLOOK")
</code></pre>
<p>Does <code>xbbg</code> not support LEI code?</p>
| <python><bloomberg> | 2023-07-27 02:25:12 | 0 | 427 | user3685918 |
76,775,904 | 1,688,501 | Understanding Tkinter Tk, Frame, and Toplevel object types (from a Java Swing perspective) | <p>I'm having trouble understanding the types of different variables related to Frames and windows in Tkinter (in Python). Note that I come from the world of Java Swing (and some C# WinForms).</p>
<p>There are 3 main types in Tkinter that I'm aware of: <code>Tk</code>, <code>Frame</code>, and <code>Toplevel</code>. Right now, I think of the types this way:</p>
<ul>
<li>A <code>Tk</code> object is a window, so it's a <code>Frame</code> from Java Swing.</li>
<li>A <code>Frame</code> object is where you place widgets, so it's a <code>Panel</code> from Java Swing.</li>
<li>A <code>Toplevel</code> object is both a window and where you place widgets, so it's a combination of <code>Frame</code> and <code>Panel</code> from Java Swing. All popup windows must be of this type. You shouldn't make another pair of <code>Tk</code> and <code>Frame</code> instances. At the same time, the main window can't be of this type.</li>
</ul>
<p>With Tkinter, if I want to add a widget to a window, I add it to a <code>Frame</code> or a <code>Toplevel</code>, as shown below.</p>
<pre><code>btnCreatePopup = tk.Button(<Frame or Toplevel instance>, text='Create popup', command=createPopup)
</code></pre>
<p>However, if I want to make a popup window modal, I don't use the same 2 types as I did for the widget, as shown below.</p>
<pre><code><Tk or Toplevel instance>.wm_attributes('-disabled', True)
</code></pre>
<p>I guess I'm confused because <code>Toplevel</code> objects accept both window attributes and widgets, but <code>Tk</code> objects only accept window attributes and <code>Frame</code> objects only accept widgets? Is my understanding of the 3 types in the bulleted list above correct and I am thinking about the types correctly? Do any of these 3 classes share a parent class that would help me understand the concept in Tkinter? I saw <code>Wm</code> (Window Manager) is a shared parent class of <code>Tk</code> and <code>Toplevel</code>, which tells me they are both windows, but I only add widgets to 1 type of <code>Wm</code> and not the other type of <code>Wm</code>?</p>
| <python><tkinter> | 2023-07-27 00:56:54 | 1 | 340 | cmasupra |
76,775,874 | 14,679,834 | Supabase is returning blank when I search for a specific row that I know exists | <p>I'm searching for a specific row in my database table in Supabase but it's returning <code>[]</code> for some reason. Here's my code:</p>
<pre><code>def check_if_connection_exists(collection_name: str):
supabase_url = os.environ.get(
"SUPABASE_URL") + "/rest/v1/my_table"
supabase_key = os.environ.get("SUPABASE_KEY")
response = requests.get(
url=supabase_url + f"?name=eq.{collection_name}&select=*",
headers={
"apikey": supabase_key,
"authorization": "Bearer " + supabase_key,
},
)
if (response.status_code != 200):
raise Exception(response.json())
if (response.json() == []):
return False
else:
return True
</code></pre>
<p>I've also tried doing this to see if there's any difference between the stored <code>collection_name</code> and the one I'm passing into the function: <code>print(stored_collection_name == collection_name_passed_in_function)</code></p>
<p>The <code>print</code> call returns true, and when searching through the DB directly, I can also find the row record it refers to.</p>
<p>RLS is disabled and this only happens for a specific value for the <code>collection_name</code> for some reason</p>
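<p>One thing I have not ruled out is whether the value needs URL-encoding before going into the query string. A sketch of that check (the URL here is made up):</p>

```python
from urllib.parse import quote

# Hypothetical collection name containing characters that are
# significant in a URL query string (space, '+').
collection_name = "my collection+name"
encoded = quote(collection_name, safe="")
url = f"https://example.supabase.co/rest/v1/my_table?name=eq.{encoded}&select=*"
print(url)
```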
| <python><postgresql><rest><supabase> | 2023-07-27 00:43:23 | 0 | 526 | neil_ruaro |
76,775,674 | 2,280,741 | How to use `Mapped[]` and `with_variant` in SQLAlchemy 2? | <p>I used to use the <code>.with_variant()</code> function, like this:</p>
<pre class="lang-py prettyprint-override"><code> num_followers = Column(
Integer().with_variant(
postgresql.INTEGER, "postgresql"
).with_variant(
mysql.INTEGER(unsigned=True), "mysql"
),
unique=False,
index=True,
nullable=True,
comment="Number of followers of a user",
)
</code></pre>
<p>But now SQLAlchemy version 2 has <code>Mapped[]</code> (<a href="https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#declarative-mapping" rel="nofollow noreferrer">https://docs.sqlalchemy.org/en/20/orm/mapping_styles.html#declarative-mapping</a>)</p>
<p>Is this correct to use?</p>
<pre class="lang-py prettyprint-override"><code> num_followers: Mapped[int | None] = mapped_column(
type_=Integer()
.with_variant(postgresql.INTEGER, "postgresql")
.with_variant(mysql.INTEGER(unsigned=True), "mysql", "mariadb"),
primary_key=False,
unique=False,
index=True,
nullable=True,
server_default=None,
comment="Number of followers of a user",
)
</code></pre>
<p>Because I feel that <code>Mapped[int | None]</code> is redundant with <code>type_</code> and <code>nullable</code>.</p>
<hr />
<p>If it is correct, can I also use it with datetime, like this?</p>
<pre class="lang-py prettyprint-override"><code> created_at: Mapped[datetime] = mapped_column(
type_=DateTime(timezone=True)
.with_variant(postgresql.TIMESTAMP, "postgresql")
.with_variant(mysql.DATETIME(timezone=True), "mysql", "mariadb"),
primary_key=False,
unique=False,
index=True,
nullable=False,
server_default=func.now(),
comment="Date when the user was created inside the database"
)
</code></pre>
<p>Thanks</p>
| <python><sqlalchemy> | 2023-07-26 23:38:52 | 1 | 3,996 | Rui Martins |
76,775,578 | 189,035 | AttributeError: undefined symbol when importing own C-compiled function in Python | <p>I'm trying to domesticate the <code>numba</code> <code>cfunc</code> compiler ;)</p>
<p>Here is my base_model.py file (the source of my function):</p>
<pre><code>import numpy
import numba
import numba.pycc
cc = numba.pycc.CC('base_model')
cc.verbose = True
@cc.export('cumulativeMin','float64[:](float64[:])')
def cumulativeMin(A):
r = numpy.empty(len(A))
t = numpy.inf
for i in range(len(A)):
t = numpy.minimum(t, A[i])
r[i] = t
return r
if __name__ == "__main__":
cc.compile()
</code></pre>
<p>Then I do (in the terminal; I run Ubuntu):</p>
<pre><code>$ python3 base_model.py
base_model.py:3: NumbaPendingDeprecationWarning: The 'pycc' module is pending deprecation. Replacement technology is being developed.
Pending Deprecation in Numba 0.57.0. For more information please see: https://numba.readthedocs.io/en/stable/reference/deprecation.html#deprecation-of-the-numba-pycc-module
import numba.pycc
</code></pre>
<p>then I do (in python):</p>
<pre><code>import numpy
import ctypes
mylib = ctypes.cdll.LoadLibrary('./base_model.cpython-310-x86_64-linux-gnu.so')
array = numpy.random.uniform(-1,0,1000)
mylib.cumulativeMin(array)
</code></pre>
<p>then I get this error:</p>
<pre><code>---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[1], line 7
5 mylib = ctypes.cdll.LoadLibrary('./base_model.cpython-310-x86_64-linux-gnu.so')
6 array = numpy.random.uniform(-1,0,1000)
----> 7 mylib.cumulativeMin(array)
File /usr/lib/python3.10/ctypes/__init__.py:387, in CDLL.__getattr__(self, name)
385 if name.startswith('__') and name.endswith('__'):
386 raise AttributeError(name)
--> 387 func = self.__getitem__(name)
388 setattr(self, name, func)
389 return func
File /usr/lib/python3.10/ctypes/__init__.py:392, in CDLL.__getitem__(self, name_or_ordinal)
391 def __getitem__(self, name_or_ordinal):
--> 392 func = self._FuncPtr((name_or_ordinal, self))
393 if not isinstance(name_or_ordinal, int):
394 func.__name__ = name_or_ordinal
AttributeError: ./base_model.cpython-310-x86_64-linux-gnu.so: undefined symbol: cumulativeMin
</code></pre>
<p><strong>Edit:</strong></p>
<p>I would like this function (cumulativeMin) to be compiled ahead of time (<a href="https://numba.pydata.org/numba-doc/dev/user/pycc.html" rel="nofollow noreferrer">https://numba.pydata.org/numba-doc/dev/user/pycc.html</a>).</p>
| <python><ctypes><numba> | 2023-07-26 23:09:29 | 1 | 5,809 | user189035 |
76,775,461 | 3,641,630 | If else condition to create a new column | <p>I am trying to create a new column "name_correct" that removes special characters from the name column in the df below. Additionally, I also want to create a flag column that is set to 1 if a correction was made. In the code below, name_correct is created as expected, but the binary flag is 1 for all rows when it should be 1 for the first row and 0 for the rest. Can someone help me correct this? Thanks!</p>
<pre><code>d= {'name': ["AÃÑA", "23234", "BJORK", "3","JANEDOE123"]}
df=pd.DataFrame(data=d, index=[0, 1, 2, 3,4])
df
# remove special characters if there are any & correct_flag=1 if correction was made
if df['name'].str.contains('|Ã|Ñ').any():
df['name_correct'] = df['name'].str.translate(str.maketrans('ÃÑ','AN'))
df['correct_flag'] = 1
else:
#if no correction is made, name_correct = name and correct flag=0.
df['name_correct'] = df['name']
df['correct_flag'] = 0
print(df)
</code></pre>
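<p>A row-wise sketch of what I'm after, for reference (translate first, then flag only the rows that actually changed):</p>

```python
import pandas as pd

df = pd.DataFrame({"name": ["AÃÑA", "23234", "BJORK", "3", "JANEDOE123"]})

# Translate unconditionally, then flag only the rows whose value changed.
df["name_correct"] = df["name"].str.translate(str.maketrans("ÃÑ", "AN"))
df["correct_flag"] = (df["name"] != df["name_correct"]).astype(int)
print(df)
```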
| <python><pandas><string><if-statement> | 2023-07-26 22:36:26 | 1 | 435 | user3641630 |
76,775,388 | 11,782,991 | Python input() function doesn't accept input after focus is returned to Terminal | <p>My Python code below will sometimes stop accepting keyboard input if I change Terminal window focus. (It's paused on "Press [Enter] key to start distribution...") but when I press enter it doesn't do anything...just hangs and I have to restart the program. There are other tasks I need to complete in the background before pressing the return key to continue running the script, so it's impossible to keep window focus on Terminal. If I restart the script then it works fine again. <strong>How can I fix this?</strong> I asked AI and it says, "It's essential to ensure that you have the focus on the Terminal window when the input() function is expecting input to avoid any unintended consequences." (I do things like I go to Parallels, open Word documents, read email...etc.)</p>
<pre><code>while True:
try:
# Prompt for user input
input("Press [Enter] key to start distribution...").strip()
# Check to see there's 10 MP2 Files.
mp2_files = glob.glob(os.path.expanduser("~/Desktop/Amboss/*.MP2"))
destination_folder = os.path.join(myPath, f"TTB/TTB_Finals_Weekly/TTB_{dir_week}/Amboss_Distribution[7]/".format(dir_week))
# Copy all files first. This ensures you have all files in appropriate directory before uploading.
for file in mp2_files:
shutil.copy(file, destination_folder)
escaped_folder_path = glob.escape(destination_folder) # There's [] in filename so you gotta escape those.
number_of_MP2_files = len(glob.glob(os.path.join(escaped_folder_path, "*.MP2")))
if number_of_MP2_files != 8:
raise ValueError("Number of MP2 files does not equal 8, please check MP2 files.")
break
except ValueError as e:
print(str(e))
continue
</code></pre>
| <python><macos><input><terminal> | 2023-07-26 22:16:32 | 1 | 332 | nwood21 |
76,775,348 | 14,509,604 | Multiclass confusion matrix in python | <p>I'm trying to create a <strong>single</strong> multiclass confusion matrix in Python.</p>
<pre><code>df_true = pd.DataFrame({
"y_true": [0,0,1,1,0,2]
})
df_pred = pd.DataFrame({
"y_pred": [0,1,2,0,1,2]
})
</code></pre>
<p>And I want a <strong>single</strong> confusion matrix that tells me the actual and predicted values for each case. Like this:</p>
<p><a href="https://i.sstatic.net/y3zxI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/y3zxI.png" alt="enter image description here" /></a></p>
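<p>To make the expected output concrete, here is a plain-Python sketch of the counts I'm after (rows are actual labels, columns are predicted labels). I assume <code>sklearn.metrics.confusion_matrix(y_true, y_pred)</code> would produce the same counts as a NumPy array:</p>

```python
from collections import Counter

y_true = [0, 0, 1, 1, 0, 2]
y_pred = [0, 1, 2, 0, 1, 2]

labels = sorted(set(y_true) | set(y_pred))
counts = Counter(zip(y_true, y_pred))
# cm[i][j] = number of samples with actual label i predicted as label j
cm = [[counts[(t, p)] for p in labels] for t in labels]
print(cm)
```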
| <python><machine-learning><scikit-learn><classification> | 2023-07-26 22:04:47 | 1 | 329 | juanmac |
76,775,248 | 30,461 | ArgumentParser.add_argument raises AttributeError: 'str' object has no attribute 'prefix_chars' | <p>When I run my program, I immediately get an <code>AttributeError</code> when it's setting up the argument parser:</p>
<pre><code>Traceback (most recent call last):
File "blah.py", line 55, in <module>
parser.add_argument('--select-equal', '--equal', nargs=2, metavar=[ 'COLUMN', 'VALUE' ], action='append', dest='filter_equality_keys_and_values', default=[], help='Only return rows for which this column is exactly equal to this value.')
File "…/Developer/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/argparse.py", line 1346, in add_argument
chars = self.prefix_chars
AttributeError: 'str' object has no attribute 'prefix_chars'
</code></pre>
<p>Well, that sounds reasonable. <code>str</code>s indeed don't have a <code>prefix_chars</code> attribute. Clearly the exception message is correct.</p>
<p>Except I can't parse arguments, or even get <em>to</em> parsing arguments, because of this exception. Why is this throwing an exception? Why is it trying to get <code>prefix_chars</code> of a <code>str</code> in the first place? The exception is coming from inside the Python module; is this a bug in Python or something I'm doing wrong?</p>
| <python><exception><argparse> | 2023-07-26 21:48:13 | 1 | 96,493 | Peter Hosey |
76,775,163 | 2,300,643 | Updating multiple traces using python and plotly | <p>I'm trying to update two traces at a time using the drop-down box. I don't understand how the <code>args</code> should be set up.</p>
<pre><code>fig = go.Figure()
for col in ['DELIVERED', 'RECEIVED']:
fig.add_trace(go.Scatter(
x=data.loc[data['ID']==str(x[1]),'DATETIME'],
y=data.loc[data['ID']==str(x[1]), col],
visible=True,
name = col
)
)
updatemenu = []
buttons = []
for id in data['ID'].unique():
buttons.append(
dict(method='restyle',
label=id,
visible=True,
args=[
{
'y':[data.loc[data['ID']==str(id), 'RECEIVED']],
'x':[data.loc[data['ID']==str(id), 'DATETIME']],
'type':'scatter'
},
{
'y':[data.loc[data['ID']==str(id), 'DELIVERED']],
'x':[data.loc[data['ID']==str(id), 'DATETIME']],
'type':'scatter'
},
[0]],
),
)
updatemenu = []
your_menu = dict()
updatemenu.append(your_menu)
updatemenu[0]['buttons'] = buttons
updatemenu[0]['direction'] = 'down'
updatemenu[0]['showactive'] = True
fig.update_layout(showlegend=False, updatemenus=updatemenu)
fig.show()
</code></pre>
| <python><plotly> | 2023-07-26 21:29:39 | 1 | 1,017 | yokota |
76,775,155 | 272,023 | How to load initial reference data within Flink job when using broadcast state pattern? | <p>I have some slow-changing reference data that I want to have available when processing events in Flink using PyFlink. For example, imagine there is information about employee IDs, teams and departments and how they relate to one another. The reference data can fit into memory.</p>
<p>I then want to process events that make reference to this reference data, perhaps things that people did and which I then want to aggregate later downstream by team or department.</p>
<p>I am currently thinking of having 2 streams: one for reference data and the other for the main data. The reference data stream has state (the map of employee->team->dept) and I intend to broadcast that state to the main event stream. This seems to fit the Broadcast State Pattern in the <a href="https://nightlies.apache.org/flink/flink-docs-release-1.17/docs/dev/datastream/fault-tolerance/broadcast_state/" rel="nofollow noreferrer">Flink docs</a>. The streams will be in some form of event log, e.g. Kinesis or Kafka, so in the event of restart I can go back to the beginning of the available log so there should always be reference data available.</p>
<p>My questions:</p>
<ol>
<li>How do I ensure that there is reference data available when I start to process the main event stream? (I do not want to do something like start a job with only reference data, snapshot state, restart job from that snapshot as that is very difficult to orchestrate inside Kubernetes in an automated fashion).</li>
<li>Do I need to buffer the main events and only emit them downstream when there is broadcast state available that that particular main event needs?</li>
<li>Is there an example of this buffering approach (ideally in Python)?</li>
<li>What watermark and watermarking strategy would be best to use for the reference data events?</li>
</ol>
| <python><apache-flink><pyflink> | 2023-07-26 21:26:59 | 1 | 12,131 | John |
76,775,116 | 6,118,986 | How to compare pandas dataframes ignoring column order | <p>Say I have two pandas dataframes:</p>
<pre><code>df_a = pd.DataFrame({ 0: ['a', 'b', 'c'], 1: [9, 8, 7], 2: [True, True, False] })
df_b = pd.DataFrame({ 0: [9, 8, 7], 1: [True, True, False], 2: ['a', 'b', 'c'] })
</code></pre>
<p>These two should be equal if ignoring column order, because they each contain the same 3 columns with the same row order. Every solution I've seen for this sort of thing tries to match based on column names, but that doesn't matter for me.</p>
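<p>To make "equal ignoring column order" concrete, here is a sketch of the idea: treat the columns as an order-independent multiset (this assumes the rows are already aligned, and compares columns via their string form so mixed dtypes stay sortable):</p>

```python
import pandas as pd

df_a = pd.DataFrame({0: ['a', 'b', 'c'], 1: [9, 8, 7], 2: [True, True, False]})
df_b = pd.DataFrame({0: [9, 8, 7], 1: [True, True, False], 2: ['a', 'b', 'c']})

def columns_as_multiset(df):
    # One string per column, so columns of different dtypes can be sorted
    # and compared without TypeError.
    return sorted(str(tuple(df[c])) for c in df.columns)

same = columns_as_multiset(df_a) == columns_as_multiset(df_b)
print(same)
```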
| <python><pandas><dataframe> | 2023-07-26 21:19:15 | 3 | 403 | user6118986 |
76,775,010 | 17,889,328 | python match case - compare to result of a function return | <p>I'm using PySimpleGUI with an event-listener loop, comparing the event against a function return. The functions return strings; can I use match instead?</p>
<p>something like this:</p>
<pre class="lang-py prettyprint-override"><code>window = main_window()
while True:
event, values = window.read()
if event == keys_and_strings.GO_SHIP_KEY():
return process_shipments()
</code></pre>
<pre><code>def BOXES_KEY(shipment):
return f'-{shipment.shipment_name_printable.upper()}_BOXES-'.upper()
</code></pre>
<p>I wanted to change to using match/case statements because, well, they're prettier.</p>
<p>So I tried, e.g.:</p>
<pre><code> match event
case keys_and_strings.BOXES_KEY(shipment_to_edit):
package = boxes_click(shipment_to_edit=shipment_to_edit, window=window)
</code></pre>
<p>But got <code>(<class 'TypeError'>, TypeError('called match pattern must be a type'), <traceback object at 0x00000277B66FA4C0>)</code></p>
<p>So after some googling I tried:</p>
<p><code> match event: case [keys_and_strings.BOXES_KEY(shipment_to_edit)]:</code></p>
<p>But that failed.</p>
<p>So how can I use match/case to compare <code>event</code> to the result of a function call, where the function returns a string?</p>
<p>And what are the pros and cons of match vs. if/else?</p>
<p>Thanks!</p>
| <python><match> | 2023-07-26 20:59:28 | 2 | 704 | prosody |
76,774,946 | 5,503,494 | How to match a regex expression only if a word is present before or after | <p>I'm really struggling with some regex. I've had a good look at similar questions and I can't work out why it's not working!</p>
<p>I'm trying to match the string 'ok' when it is preceded by 4 digits ((?<=\d{4}\s)ok) but only when the word 'devo' appears anywhere before or anywhere after in the string</p>
<pre><code>text_to_search="devo XXXXXXXXXX 9999 ok ferial blabla"
pattern = re.compile(r"(?<=devo)((?<=\d{4}\s)ok)|((?<=\d{4}\s)ok)(?=devo)")
matches = pattern.finditer (text_to_search)
[print (x) for x in matches]
</code></pre>
<p>This does not return any matches. If I try with:</p>
<pre><code>text_to_search=" XXXXXXXXXX 9999 ok ferial blabla devo"
</code></pre>
<p>it does not work either.</p>
<p>Just for clarity, the two examples above should match, but if I take another example like:</p>
<pre><code>text_to_search=" XXXXXXXXXX 9999 ok ferial blabla"
</code></pre>
<p>then this should not produce a match since 'devo' is not present.</p>
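<p>For comparison, here is a two-step version (check for 'devo' first, then run the lookbehind pattern). As far as I understand, Python's <code>re</code> lookbehind must be fixed-width, so <code>(?<=devo)</code> can only mean 'devo immediately before', not 'devo anywhere before':</p>

```python
import re

def find_ok(text):
    # Only search for 'ok' when 'devo' occurs anywhere in the string.
    if not re.search(r"\bdevo\b", text):
        return []
    return [m.start() for m in re.finditer(r"(?<=\d{4}\s)ok\b", text)]

print(find_ok("devo XXXXXXXXXX 9999 ok ferial blabla"))
print(find_ok(" XXXXXXXXXX 9999 ok ferial blabla"))
```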
<p>Thank you in advance for your help</p>
| <python><regex><python-re> | 2023-07-26 20:47:43 | 2 | 469 | tezzaaa |
76,774,835 | 1,175,788 | Pandas groupby is keeping other non-groupby columns | <p>I have a situation where in a Pandas groupby function, the dataframe is retaining all the other non-groupby fields, even though I want to discard them.</p>
<p>Here's the dataframe before the groupby:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>org_id</th>
<th>inspection</th>
<th>person_id</th>
<th>date</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>TRUE</td>
<td>453241</td>
<td>1/1/23</td>
</tr>
<tr>
<td>222</td>
<td>FALSE</td>
<td>21341</td>
<td>6/1/23</td>
</tr>
<tr>
<td>333</td>
<td>FALSE</td>
<td>42343241</td>
<td>3/1/23</td>
</tr>
<tr>
<td>111</td>
<td>FALSE</td>
<td>098678</td>
<td>2/1/23</td>
</tr>
<tr>
<td>111</td>
<td>TRUE</td>
<td>6786</td>
<td>6/1/23</td>
</tr>
<tr>
<td>222</td>
<td>TRUE</td>
<td>546</td>
<td>4/1/23</td>
</tr>
<tr>
<td>333</td>
<td>TRUE</td>
<td>vcxv313</td>
<td>4/1/23</td>
</tr>
<tr>
<td>222</td>
<td>TRUE</td>
<td>876</td>
<td>4/1/23</td>
</tr>
<tr>
<td>333</td>
<td>TRUE</td>
<td>432gf</td>
<td>4/1/23</td>
</tr>
</tbody>
</table>
</div>
<p>and my groupby function is being used as: <code>df.groupby(by=['org_id', 'inspection'], dropna=False).count()</code></p>
<p>For some reason, it's keeping person_id and date in the output:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>org_id</th>
<th>inspection</th>
<th>person_id</th>
<th>date</th>
</tr>
</thead>
<tbody>
<tr>
<td>111</td>
<td>TRUE</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>222</td>
<td>FALSE</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>333</td>
<td>FALSE</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>111</td>
<td>FALSE</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>222</td>
<td>TRUE</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>333</td>
<td>TRUE</td>
<td>2</td>
<td>2</td>
</tr>
</tbody>
</table>
</div>
<p>The counts are correct, but they appear under the two columns carried over from the original dataframe. I've tried reassigning the dataframe, but that hasn't fixed anything. Is there a parameter I'm supposed to set to remove the two columns?</p>
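<p>Here is a reduced, runnable version of the setup for reference; I also show <code>.size()</code>, which as I understand counts rows per group without carrying the other columns along:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "org_id":     [111, 222, 333, 111, 111, 222, 333, 222, 333],
    "inspection": [True, False, False, False, True, True, True, True, True],
    "person_id":  ["453241", "21341", "42343241", "098678",
                   "6786", "546", "vcxv313", "876", "432gf"],
})

# .count() counts non-null values per remaining column;
# .size() just counts rows per group.
counts = (df.groupby(["org_id", "inspection"], dropna=False)
            .size()
            .reset_index(name="n"))
print(counts)
```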
| <python><pandas><dataframe> | 2023-07-26 20:28:07 | 1 | 3,011 | simplycoding |
76,774,829 | 5,563,584 | Is there a faster way to match common elements in a numpy array? | <p>Given an n-length 2D array of strings, I need to return an n x n array that does a simple matching operation across itself. The matching operation checks whether the sub-arrays are exactly the same (return 2), share a common element (return 1), or neither (return 0). This is implemented as:</p>
<pre><code>def match(a, b):
a_set = set(a)
b_set = set(b)
if a_set == b_set:
return 2
elif a_set & b_set:
return 1
else:
return 0
</code></pre>
<p>example:</p>
<pre><code>arr = np.array([['a', 'b'], ['a', 'b'], ['a', 'c'], ['d', 'e']])
array([['a', 'b'],
['a', 'b'],
['a', 'c'],
['d', 'e']], dtype='<U1')
</code></pre>
<p>should return:</p>
<pre><code>[[2, 2, 1, 0]
[2, 2, 1, 0]
[1, 1, 2, 0]
[0, 0, 0, 2]]
</code></pre>
<p>My current solution below works fine but doesn't scale very well. Looking for something that could run quickly on 50k elements in a couple of seconds.</p>
<pre><code>np.reshape(np.array([match(i, j) for i, j in it.product(arr, repeat=2)]),(len(arr),len(arr)))
</code></pre>
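<p>For reference, one direction I'm considering encodes each row as a bitmask over the unique strings, replacing the per-pair set operations with integer ops (still O(n²) pairs, so probably not the final answer):</p>

```python
import numpy as np

arr = np.array([['a', 'b'], ['a', 'b'], ['a', 'c'], ['d', 'e']])
n = len(arr)

# Map every string to an integer code, then build one arbitrary-precision
# bitmask per row: bit k is set iff string k appears in that row.
_, inv = np.unique(arr, return_inverse=True)
inv = inv.reshape(arr.shape)
masks = [0] * n
for i in range(n):
    for k in inv[i]:
        masks[i] |= 1 << int(k)

out = np.zeros((n, n), dtype=int)
for i in range(n):
    for j in range(n):
        if masks[i] == masks[j]:
            out[i, j] = 2          # identical sets of strings
        elif masks[i] & masks[j]:
            out[i, j] = 1          # overlapping sets
print(out)
```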
| <python><numpy> | 2023-07-26 20:27:16 | 1 | 327 | greenteam |
76,774,666 | 17,741,308 | PyQt5 User Click to Order PushButtons | <p>The following somewhat long code runs after pip installing PyQt5:</p>
<pre><code>import sys
from PyQt5.QtWidgets import QApplication, QWidget, QPushButton, QVBoxLayout, QLabel
from PyQt5.QtCore import pyqtSignal
class MyPushButton(QPushButton):
self_emit_toggle_signal = pyqtSignal(list)
def __init__(self, text):
super().__init__(text)
self.emit_self_toggle = False
self.toggled.connect(self.when_toggled)
def when_toggled(self, checked):
if self.emit_self_toggle:
self.self_emit_toggle_signal.emit([self, checked])
class ButtonWidget(QWidget):
def __init__(self):
super().__init__()
self.buttons = []
self.checked_buttons = []
self.init_ui()
def init_ui(self):
self.setWindowTitle("Widget with 10 Push Buttons")
# Create 10 push buttons
for i in range(0, 10):
button = MyPushButton(f"Button {i}")
self.buttons.append(button)
# Add buttons to the layout
layout = QVBoxLayout()
self.label = QLabel("Game not started.")
layout.addWidget(self.label)
self.start_button = QPushButton("Start")
layout.addWidget(self.start_button)
self.clear_button = QPushButton("Clear")
layout.addWidget(self.clear_button)
for button in self.buttons:
layout.addWidget(button)
self.setLayout(layout)
#Callbacks:
self.start_button.clicked.connect(self.start_game)
self.clear_button.clicked.connect(self.clear_game)
self.show()
def start_game(self):
self.label.setText("Game started.")
for button in self.buttons:
button.setCheckable(True)
button.emit_self_toggle = True
button.self_emit_toggle_signal.connect(self.upon_toggle)
def upon_toggle(self, list_button_checked):
button, checked = list_button_checked[0], list_button_checked[1]
if checked:
self.checked_buttons.append(button)
else:
self.checked_buttons.remove(button)
self.update_label()
def update_label(self):
for button in self.buttons:
if button in self.checked_buttons:
index = self.buttons.index(button)
checked_index = self.checked_buttons.index(button)
button.setText(str(checked_index + 1) + " :" + f"Button {index}") #line (1)
else:
index = self.buttons.index(button)
button.setText(f"Button {index}")
def clear_game(self):
self.label.setText("Game not started.")
self.checked_buttons.clear()
for button in self.buttons:
button.emit_self_toggle = False
button.setCheckable(False)
button.setChecked(False)
button.self_emit_toggle_signal.disconnect(self.upon_toggle)
self.update_label()
if __name__ == "__main__":
app = QApplication(sys.argv)
window = ButtonWidget()
sys.exit(app.exec_())
</code></pre>
<p>It gives the following:</p>
<p><a href="https://i.sstatic.net/ioK7C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ioK7C.png" alt="enter image description here" /></a></p>
<p>What the above code does, in words, is the following: after the user clicks "Start", the user will be able to click buttons. The order of user clicks is reflected directly in the button labels. If a button is clicked and the user clicks on it again, then the click is cancelled and the program treats the button as if it was never clicked before. Clicking "Clear" will reset the program. Programmatically, the code saves the list of checked buttons, updates the list upon user click, and rewrites the text on every button accordingly.</p>
<p>Example: if the user clicks "Start", and then in order button 1,2,3,4,5,1, it will show the following:</p>
<p><a href="https://i.sstatic.net/R8ppv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R8ppv.png" alt="enter image description here" /></a></p>
<p>Questions:</p>
<ol>
<li><p>For now, I reflect the order of user click by directly changing the text on the button (see line (1)). Is it possible to modify the above code such that the order is still reflected and the text on the button is unchanged at all? The most preferred outcome would be some colorful number "floating" on the left hand side of the button.</p>
</li>
<li><p>(Optional) Is there a way to avoid using the check state at all? If not, can I at least hide the check state?</p>
</li>
</ol>
<p>Here is a demonstration of what I mean when both 1 and 2 are achieved, with the clicking being exactly the same as the example above:</p>
<p><a href="https://i.sstatic.net/TSNeg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TSNeg.png" alt="enter image description here" /></a></p>
<p>Note:</p>
<p>The provided code is meant to serve as a demo. Feel free to change the provided code in any way, including completely rewriting it. There is no need to worry about performance. In my actual project the number of buttons will surely be smaller than 15.</p>
| <python><python-3.x><qt><pyqt><pyqt5> | 2023-07-26 20:02:18 | 1 | 364 | 温泽海 |
76,774,433 | 16,498,000 | How to raise an exception that tells me which loop I was in? | <p>I have the following input to an <code>assign_points(lists)</code> function:</p>
<pre><code>[['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0'],
['0', '0', '10', '10', '0', '0', '10', '10', '0', '0', '0', '10', '10', '10', '10', '10', '0', '0', '0'],
['0', '0', '10', '10', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0', '0', '10', '10', '0', '0'],
['0', '0', '10', '10', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0', '0', '10', '10', '0', '0'],
['0', '0', '10', '10', '10', '10', '10', '10', '0', '0', '0', '0', '10', '10', '10', '10', '0', '0', '0'],
['0', '0', '0', '10', '10', '10', '10', '10', '0', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0'],
['0', '0', '0', '0', '0', '0', '10', '10', '0', '0', '0', '10', '10', '0', '0', '0', '0', '0', '0'],
['0', '0', '0', '0', '0', '0', '10', '10', '0', '0', '0', '10', '10', '10', '10', '10', '10', '0', '0'],
['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0'],
['0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0', '0']]
</code></pre>
<p>And in my code I have something like:</p>
<pre><code>def A(point):
try:
#something
B(val_1, val_2, val_3)
except ValueError:
#It should quit here but I want to say what line and item (Y, X loop) it was in
def B(val_1, val_2, val_3):
try:
...
except ValueError:
#It should quit here but I want to say what line and item (Y, X loop) it was in
def assign_points(lists)
for y, list in enumerate(lists):
for x, point in enumerate(list):
A(point)
</code></pre>
<p>I have a function <code>B()</code> inside a function <code>A()</code>, inside two <code>for</code> loops.
If there's anything wrong inside function <code>B</code>, I want to raise an exception that tells me which loop it was on.</p>
<p>I know I can pass x and y, but I feel like it bloats the code and makes it less reusable. I was wondering if there is an alternative to that.
I want the exception to say "item x on line y is improperly formatted".</p>
<p>Is this possible to do?</p>
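<p>One pattern I'm considering is exception chaining at the loop level, so the inner functions stay unaware of <code>x</code> and <code>y</code> (sketch with a made-up <code>parse_point</code> standing in for <code>A()</code>/<code>B()</code>):</p>

```python
def parse_point(point):
    # Stand-in for A()/B(); raises ValueError on bad input.
    return int(point)

def assign_points(lists):
    for y, row in enumerate(lists):
        for x, point in enumerate(row):
            try:
                parse_point(point)
            except ValueError as e:
                # 'from e' keeps the original traceback attached.
                raise ValueError(
                    f"item {x} on line {y} is improperly formatted"
                ) from e

assign_points([["0", "10"], ["10", "0"]])  # no error
```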
| <python><python-3.x> | 2023-07-26 19:23:30 | 1 | 572 | MiguelP |
76,774,415 | 848,277 | Vectorized sum and product with list of pandas data frames | <p>I have a list of Data Frames, each corresponding to a different time period t from 0 to N. Each data frame has multiple types; I need to perform the calculation below for each type in the data frame.</p>
<p>An example data set would be as follows, I made each <code>df</code> in the list the same values for simplicity but the calculation would remain the same.</p>
<pre><code>d = {'type': ['a', 'b', 'c'], 'x': [1, 2, 3], 'y':[3,4,5]}
df = pd.DataFrame(data=d)
l = []
window = 20
[l.append(df.copy(deep=True)) for i in range(window)]
</code></pre>
<p>I need to compute a vectored sum <code>$$\sum$$</code> and product <code>$$\prod$$</code> using the above list of dataframes for each type (a, b, c) in an efficient manner. e.g. for each calculation below I need to filter by a single type <code>df[df['type'] == 'a']</code> for every <code>df</code> in the list.</p>
<p><a href="https://i.sstatic.net/kW43f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kW43f.png" alt="enter image description here" /></a></p>
<p>If I use a <code>df.groupby('type')</code> on every <code>df</code> in the list it could be very slow using a larger data set. The same goes if I use nested for loops for the sum and the product and filtering by type in each iteration of the for loop. How can I compute this sum product in an efficient manner ?</p>
<p><em>Update</em></p>
<p>One possible way as suggested in the comments is below, however this will start to be very inefficient if I use a larger dataset or window:</p>
<pre><code>import pandas as pd
import math
d = {'type': ['a', 'b', 'c'], 'x': [1, 2, 3], 'y':[3, 4, 5]}
df = pd.DataFrame(data=d)
l = []
window = 20
for i in range(window):
df['window'] = i
l.append(df.copy(deep=True))
df = pd.concat(l)
sums = {}
types = df['type'].unique()
for s in types:
sums[s] = 0
tdf = df[df['type'] == s]
for i in range(window):
sums[s] += tdf[tdf['window'] == i]['x'].values[0] * math.prod(2 - tdf[tdf['window'] == i+1]['x'].values[0] for i in range(0, window-1)) * tdf[tdf['window'] == i]['y'].values[0]
</code></pre>
| <python><pandas><dataframe> | 2023-07-26 19:21:21 | 1 | 12,450 | pyCthon |
76,774,140 | 125,244 | How to set decimals in Pandas dataframe equal to anothet column | <p>in a Pandas dataframe I have some columns that could need 2 or 4 or 5 decimals depending on how many decimals other columns have.
i.e. Indexes usually have 2 decimals but forexpairs do have more,</p>
<p>If I calculate a value (i.e. use an indicator) I want that value to be rounded to the same number of decimals as the OHLC values.</p>
<p>I know by df.round(decimals=2) you can set the rounding for a particular column but I don't know how to retrieve that specification in order to use it for another column.</p>
<p>Is there a function or variable I can get, or should I look for the position of the separator in a row in a column to find out?</p>
<p>The picture below contains columns OHLC having 2 decimals followed by various columns with 4 or 6 decimals. I want all those columns to have 2 decimals as well.
But I will also have columns with 0 or 1 decimal in the future.
In case of forex pairs (EURUSD) OHLC will have more than 2 decimals. In that case all the columns should have that number of decimals as well.</p>
<p><a href="https://i.sstatic.net/O8YvQ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/O8YvQ.png" alt="enter image description here" /></a></p>
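One possible sketch (my own, not from the question): infer the decimal count from the string form of a reference column, then round the other columns to match. This assumes plain decimal notation (no scientific format) in the reference column.

```python
import pandas as pd

df = pd.DataFrame({"Close": [1.23, 4.5], "Indicator": [1.234567, 2.345678]})

def decimals_in(series: pd.Series) -> int:
    # Count the digits after "." in each value's string form and take the max
    return int(series.astype(str).str.split(".").str[-1].str.len().max())

n = decimals_in(df["Close"])          # 2 for this data
df["Indicator"] = df["Indicator"].round(n)
print(df["Indicator"].tolist())       # [1.23, 2.35]
```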
| <python><pandas><dataframe> | 2023-07-26 18:35:58 | 1 | 1,110 | SoftwareTester |
76,773,981 | 16,383,578 | Memorizing order of occurrence of huge data, using list + set or dict? | <p>I have literally millions (no exaggeration) of rows of data like the following:</p>
<pre><code>sample = [
[16777216, 33554431, None, 'AU', False, False, False, False],
[16777216, 16777471, 13335, 'AU', False, True, False, False],
[16777472, 16777727, None, 'CN', False, False, False, False],
[16777728, 16778239, None, 'CN', False, False, False, False],
[16778240, 16779263, 38803, 'AU', False, False, False, False],
[16778496, 16778751, 38803, 'AU', False, False, False, False],
[16779264, 16781311, None, 'CN', False, False, False, False],
[16781312, 16785407, None, 'JP', False, False, False, False],
[16781312, 16781567, 2519, 'JP', False, False, False, False],
[16785408, 16793599, None, 'CN', False, False, False, False],
[16785408, 16785663, 141748, 'CN', False, False, False, False],
[16793600, 16809983, 18144, 'JP', False, False, False, False],
[16809984, 16842751, 23969, 'TH', False, False, False, False],
[16809984, 16826367, 23969, 'TH', False, False, False, False],
[16809984, 16818175, 23969, 'TH', False, False, False, False],
[16809984, 16810239, 23969, 'TH', False, False, False, False],
[16810240, 16810495, 23969, 'TH', False, False, False, False],
[16810496, 16811007, 23969, 'TH', False, False, False, False],
[16811008, 16811263, 23969, 'TH', False, False, False, False],
[16811264, 16811519, 23969, 'TH', False, False, False, False],
[16812032, 16812287, 23969, 'TH', False, False, False, False],
[16812288, 16812543, 23969, 'TH', False, False, False, False],
[16812544, 16812799, 23969, 'TH', False, False, False, False],
[16812800, 16813055, 23969, 'TH', False, False, False, False],
[16813312, 16813567, 23969, 'TH', False, False, False, False],
[16814080, 16818175, 23969, 'TH', False, False, False, False],
[16818176, 16826367, 23969, 'TH', False, False, False, False],
[16818176, 16819199, 23969, 'TH', False, False, False, False],
[16819200, 16819455, 23969, 'TH', False, False, False, False],
[16819456, 16819711, 23969, 'TH', False, False, False, False],
[16819712, 16819967, 23969, 'TH', False, False, False, False],
[16819968, 16820223, 23969, 'TH', False, False, False, False]
]
</code></pre>
<p>They represent IPv4 networks, and I have data about IPv6 networks, too.</p>
<p>I got the data from a text file, and I wish to convert the data to an sqlite3 database. In fact I have already done so, but the data updates daily and for reasons I won't go into here I need to frequently update my database too.</p>
<p>As you can see from the sample, only the start and end are unique; there is a large amount of duplication among groups of the other six elements. Storing the rows like this is extremely inefficient. If we store only unique groups of the other elements and replace them in the rows with a reference to the stored group, compressing the data in this way, we can save a huge amount of storage space (this is true for my data).</p>
<p>One easy way to compress the data is to store the repeated elements in a list in their order of occurrence, and replace them in the rows with their index in the list.</p>
<p>The basic idea is simple: start with an empty list; for each row, check if the data is in the list; if not, append the data to the list and replace the data in the row with the length of the list prior to appending; otherwise, replace the data with the index of the data in the list.</p>
<p>The problem is, list membership checking uses linear search, which is O(n) and very inefficient for large data, and my data is really huge, it would take a lot of time. Even binary search would be slow in my case, and as you can see I can't do binary search here.</p>
<p>But set membership checking is amortized O(1), it is almost constant, it is extremely fast and remains fast no matter how many elements it contains, so that is exactly what I need.</p>
<p>The process would be like this:</p>
<pre><code>groups = []
unique_groups = set()
compressed = []
for row in sample:
    data = tuple(row[2:])
    if data in unique_groups:
        compressed.append([*row[:2], groups.index(data)])
    else:
        unique_groups.add(data)
        compressed.append([*row[:2], len(groups)])
        groups.append(data)
</code></pre>
<p>The problem with this approach is that <code>list.index</code> uses linear search which is inefficient for my purposes as explained above, and for each unique group I need to store two copies of it, it would use twice as much space as needed.</p>
<p>A different approach would be using a dictionary, like this:</p>
<pre><code>groups = {}
compressed = []
for row in sample:
    data = tuple(row[2:])
    compressed.append([*row[:2], groups.setdefault(data, len(groups))])
</code></pre>
<p>This would be much faster, but I don't know about its space complexity. I heard that Python <code>dict</code> is memory expensive, but using <code>list</code> + <code>set</code> approach each item would be stored twice...</p>
<p>It may be that for small amount of data the first approach will use less space, but for a large amount of data the second approach will use less space, and my data is extremely large. I don't know how they will scale, and I haven't tested yet, just loading my data would take one Gibibyte of RAM, and I can't reliably generate test cases with the same duplication level as my data, because I never saved the intermediate stage.</p>
<p>So which approach will have less space complexity? Note that I also want performance, so there is a tradeoff: if the faster method takes about twice as much space but one tenth the execution time, it is acceptable, but if it takes ten times as much space, then even if it is instant it isn't acceptable (I only have 16 GiB RAM, it is finite).</p>
<hr />
<h2>Edit</h2>
<p>All existing answers fail to solve my problem.</p>
<p>Because I need to process the data in Python first. There are a lot of overlapping ranges with different data, overlapping ranges with same data, and adjacent ranges with same data, I need to process the ranges so that there are no overlaps, the data is always from the smaller range, I will subtract the smaller range from the larger range, and I will merge adjacent ranges with same data. So that there is no ambiguity and the data is as small as possible.</p>
<p>See this <a href="https://stackoverflow.com/questions/76713694/how-to-efficiently-split-overlapping-ranges">question</a> for more details and code.</p>
<p>Given the above sample, the result would be:</p>
<pre><code>[(16777216, 16777471, 1),
(16777472, 16778239, 2),
(16778240, 16779263, 3),
(16779264, 16781311, 2),
(16781312, 16781567, 5),
(16781568, 16785407, 4),
(16785408, 16785663, 6),
(16785664, 16793599, 2),
(16793600, 16809983, 7),
(16809984, 16842751, 8),
(16842752, 33554431, 0)]
</code></pre>
<p>And no, I am sure as hell I can't use sqlite3 to do that.</p>
<p>I would then store two tables in my sqlite3 data base.</p>
<p>One with the above data, but each third column is incremented by one (sqlite3 id is one-based). The other with below data:</p>
<pre><code>[(None, 'AU', False, False, False, False),
(13335, 'AU', False, True, False, False),
(None, 'CN', False, False, False, False),
(38803, 'AU', False, False, False, False),
(None, 'JP', False, False, False, False),
(2519, 'JP', False, False, False, False),
(141748, 'CN', False, False, False, False),
(18144, 'JP', False, False, False, False),
(23969, 'TH', False, False, False, False)]
</code></pre>
<p>In the first table, each third column is the id of the actual data in the second table.</p>
<p>This is to achieve maximum storage efficiency, it can then be complimented by sqlite3 compression further.</p>
<p>But I need the data as such. And sqlite3 can't do that alone.</p>
<p>And I don't actually want to use sqlite3 or Pandas DataFrame and such, I only store the data in sqlite3 database because I need to persist the data between sessions, and any other serialization format such as JSON and CSV will be inefficient for this. Pickle can be very efficient but it isn't secure and not compatible across versions.</p>
<p>And no, I have benchmarked the performance of sqlite3 query on a disk based database, a single query to retrieve one item by id takes milliseconds to complete because of I/O delay, this is unacceptable. I will load all data from the database at the start of the program.</p>
<p>Into Python lists because DataFrame is also slow on my machine according to my benchmarks.</p>
<p>My goal is to find the network of any given IP address, to do this I will do binary search on the starts and determine IP is less than or equal to the end, like the following:</p>
<pre><code>from bisect import bisect
ASN = [
(None, 'AU', False, False, False, False),
(13335, 'AU', False, True, False, False),
(None, 'CN', False, False, False, False),
(38803, 'AU', False, False, False, False),
(None, 'JP', False, False, False, False),
(2519, 'JP', False, False, False, False),
(141748, 'CN', False, False, False, False),
(18144, 'JP', False, False, False, False),
(23969, 'TH', False, False, False, False)
]
NETWORKS = [
(16777216, 16777471, 1),
(16777472, 16778239, 2),
(16778240, 16779263, 3),
(16779264, 16781311, 2),
(16781312, 16781567, 5),
(16781568, 16785407, 4),
(16785408, 16785663, 6),
(16785664, 16793599, 2),
(16793600, 16809983, 7),
(16809984, 16842751, 8),
(16842752, 33554431, 0)
]
STARTS, ENDS, ASNS = zip(*NETWORKS)
def get_network(ip):
    index = bisect(STARTS, ip) - 1
    if ip <= (end := ENDS[index]):
        # look up the shared data row via its id in ASNS
        return [STARTS[index], end, *ASN[ASNS[index]]]
    return None
</code></pre>
<p>And if you want more details see <a href="https://community.ipfire.org/t/how-do-i-convert-location-db-to-sqlite3-format/9718" rel="nofollow noreferrer">this</a>.</p>
<hr />
<h2><em><strong>Update</strong></em></h2>
<p>I have benchmarked the size of the objects, it seems for arbitrarily generated data, <code>dict</code> has smaller space complexity, and <code>dict</code> has better time complexity, too. One important fact observed, however, is that the benchmark reported the memory usage of the objects in hundreds of Mebibytes range, but I didn't actually observe the interpreter use that much more RAM.</p>
<pre><code>from itertools import product
from typing import Mapping, Sequence, Set
def get_size(obj: object) -> int:
    size = obj.__sizeof__()
    if isinstance(obj, Mapping):
        size += sum(get_size(k) + get_size(v) for k, v in obj.items())
    elif isinstance(obj, Sequence | Set) and not isinstance(obj, str):
        size += sum(get_size(e) for e in obj)
    return size

def generate(a, b, mode):
    gen = ((i, j, *k) for i, j in product(range(a), range(b)) for k in product((0, 1), repeat=4))
    if mode:
        return {k: i for i, k in enumerate(gen)}
    l = list(gen)
    return l, set(l)
print(get_size(generate(16, 16, 0)))
print(get_size(generate(16, 16, 1)))
print(get_size(generate(256, 256, 0)))
print(get_size(generate(256, 256, 1)))
ls = list(range(32768))
d = dict.fromkeys(range(32768))
</code></pre>
<pre><code>2060792
1210444
528477112
314540108
In [348]: %timeit ls.index(128)
1.67 µs ± 20.1 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [349]: %timeit ls.index(4096)
52.2 µs ± 810 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
In [350]: %timeit d[128]
48.8 ns ± 0.758 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
In [351]: %timeit d[4096]
61 ns ± 0.937 ns per loop (mean ± std. dev. of 7 runs, 10,000,000 loops each)
</code></pre>
<p>Overall, the <code>dict</code> approach is clearly better than the <code>list</code> + <code>set</code> one. But can we do better?</p>
| <python><python-3.x><optimization> | 2023-07-26 18:06:33 | 3 | 3,930 | Ξένη Γήινος |
76,773,890 | 11,370,582 | Display Full Date {DD-MM-YYYY} in Hover Text, python, plotly | <p>I'm plotting a chart that spans multiple years using the python plotly library - <code>import plotly.express as px</code></p>
<p>When the chart is displayed in full, the <code>date</code> that is shown in the hover text only includes the month and year <code>M-YYYY</code>.</p>
<p><a href="https://i.sstatic.net/JVgSF.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JVgSF.png" alt="enter image description here" /></a></p>
<p>Yet when it is zoomed in the full date is displayed <code>M-DD-YYYY</code></p>
<p><a href="https://i.sstatic.net/OER9Q.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OER9Q.png" alt="enter image description here" /></a></p>
<p>Are there any settings in or related to <code>hovertemplate</code> or <code>hovermode</code> that will allow the full date to be displayed at all times?</p>
<p>This can be replicated in the following example:</p>
<pre><code>import numpy as np
import pandas as pd
import plotly.express as px
start=pd.to_datetime("2016-01-01")
end=pd.to_datetime("2022-12-31")
length = end - start
df = pd.DataFrame({'Date': pd.date_range(start="2016-01-01", end="2022-12-31"),
                   'Value': np.random.randint(5, 30, size=length.days + 1)})
fig = px.area(df, x='Date', y="Value",
              title="Example Data")
fig.update_traces(line_color='firebrick')
fig.update_layout(hovermode="x unified")
fig.show()
</code></pre>
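A possible fix (a sketch, untested here): `hovertemplate` accepts a d3-style date format for the x value, which pins the hover label to a full date regardless of zoom level.

```python
# "%{x|%d-%m-%Y}" formats the x value as DD-MM-YYYY in the hover label;
# "<extra></extra>" suppresses the secondary trace-name box.
hover_fmt = "%{x|%d-%m-%Y}<br>Value: %{y}<extra></extra>"
# fig.update_traces(hovertemplate=hover_fmt)  # applied to the figure built above
print(hover_fmt)
```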
| <python><pandas><date><plotly><hover> | 2023-07-26 17:50:42 | 3 | 904 | John Conor |
76,773,881 | 828,824 | Django Model.objects.create vs Model.save() | <p>New to Django ORM and had a small question.</p>
<p>If I have a model like:</p>
<pre><code>class Person(models.Model):
    id = models.BigAutoField(primary_key=True)
    name = models.CharField(max_length=255)
</code></pre>
<p>I can add a new person in the following ways:</p>
<pre><code>p1 = Person.objects.create(name="Bob")
</code></pre>
<p>or</p>
<pre><code>p2 = Person(name="Bob")
p2.save()
</code></pre>
<p>In the first case (p1), I can expect that the "id" field is set (auto-incremented when insertion is done). Can I expect the same in the second case? Does django update the in-memory "Person" object's properties (in this case the id field) once a save is done?</p>
<p>Follow-up questions:</p>
<ol>
<li>Let's say I had another model</li>
</ol>
<pre><code>class Business(models.Model):
    id = models.BigAutoField(primary_key=True)
    owner = models.ForeignKey(Person, on_delete=models.CASCADE)
</code></pre>
<p>Does it make a difference if I do</p>
<pre><code>b = Business(owner=p1)
b.save()
</code></pre>
<p>vs</p>
<pre><code>b = Business(owner=p2)
b.save()
</code></pre>
<ol start="2">
<li>What happens if I don't save or create the person object before passing it into Business? e.g.</li>
</ol>
<pre><code>p = Person(name="Bob")
b = Business(owner=p)
b.save()?
</code></pre>
| <python><django> | 2023-07-26 17:49:56 | 1 | 3,345 | de1337ed |
76,773,880 | 14,742,965 | SLOW conversion of strings to floats in Python - ways to speed this up? | <h1>Summary of Problem</h1>
<p>I have a data set supplied to me which has geographic coordinates provided as strings with leading spaces and spaces between the positive/negative signs and the numbers. The data set is approximately 7000 rows, so not terribly huge, but the simple function I wrote to convert the coordinates to floats takes about 00:01:20 (1 minute, 20 seconds) per column - this is exceedingly slow! I am using Python 3.9, on a tablet (Windows 11 Home, 16 Gb of RAM, Intel Core i7 1.3 GHz) using VSCode and Jupyter Notebooks. I am bringing the data in as a Pandas DataFrame. My goal to convert the strings to numbers is accomplished with the code below, but the more columns I need to do this to, the slower it gets.</p>
<h1>What I've tried...</h1>
<p>My code below shows that all I am doing is going through the DataFrame with a for loop using <code>iterrows()</code>, using <code>split</code> to break each string up around the spaces, and putting the results through an <code>if</code> loop to determine whether negative or not. Then the numeric part of the split string is converted to either a <code>-float</code> or a <code>float</code> depending on the outcome of the logic test. At that point, I re-assign the float to the entry of that row and insert it back into the DataFrame.</p>
<p>This works. But I think it can work faster. I have a hunch that I am employing Pandas the wrong way and bogging this down. I would like it to go faster because I will use this routine to do larger and larger data sets.</p>
<p>Below I lay out two functions:</p>
<blockquote>
<ol>
<li><code>generate_string_coords</code> - A function to generate the types of strings I am supplied with. I'm working on a tool here, and I have no control over the raw materials (data). So this function simply generate strings that behave as my raw data do, and you can generate as many or as few as you would like using the <code>length</code> variable.</li>
<li><code>convert_string_coords_to_floats</code> - A function to convert the contrived strings to floats. This is the one that is the bane of my current code; once a dataset is generated with the function described in #1, I'd like to know how to optimize the function described in #2.</li>
</ol>
</blockquote>
<p>Here's the rub: when I try this even with a <code>length=10000</code> (in excess of my data set), both of these functions take about 00:00:02.7 (2.7 seconds) to run! This is way faster than in my code, and I wonder if Pandas is searching through each of the 60+ members of each Series (row) of my DataFrame.</p>
<h1>Some Code:</h1>
<h3>Generate the dummy data that behaves like the coordinates supplied to me:</h3>
<pre class="lang-py prettyprint-override"><code>
import numpy as np
import pandas as pd


def generate_string_coords(length):
    nums = np.random.uniform(-3, 6, (length, )).tolist()
    signs = []
    for num in nums:
        if num < 0:
            signs.append(' - ')
        else:
            signs.append(' + ')
    strings = [sign + str(abs(num)) for sign, num in zip(signs, nums)]
    strings_df = pd.DataFrame(strings, columns=['coords'])
    return strings_df
</code></pre>
<h3>Convert the dummy data from strings to floats:</h3>
<pre class="lang-py prettyprint-override"><code>def convert_str_coords_to_floats(strings_df):
    for x, row in enumerate(strings_df.iterrows()):
        print(x)
        a = row[1]['coords']
        if a.split()[0] == '-':
            num_a = -float(a.split()[1])
        else:
            num_a = float(a.split()[1])
        row[1]['coords'] = num_a
        strings_df.iloc[x] = row[1]
    return strings_df
</code></pre>
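For what it's worth, a vectorized sketch of the conversion (my own, assuming the only noise is spaces): deleting every space turns <code>' - 2.25'</code> into <code>'-2.25'</code>, which parses directly, so in pandas the whole column can be done as <code>strings_df['coords'].str.replace(' ', '', regex=False).astype(float)</code> with no <code>iterrows()</code>. The same idea in pure Python:

```python
vals = [" + 1.5", " - 2.25", " + 0.75"]

# Removing every space leaves "+1.5" / "-2.25", both of which float() accepts
floats = [float(s.replace(" ", "")) for s in vals]
print(floats)  # [1.5, -2.25, 0.75]
```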
<p>Can anyone tell me if there is a more efficient way to do what I need?
Thank you!</p>
| <python><pandas><string><optimization><numeric> | 2023-07-26 17:49:53 | 1 | 302 | brosenheim |
76,773,710 | 1,118,895 | Enable datadog tracing for apache wsgi flask app | <p>I have a Python Flask docker app running on an Apache server with mod_wsgi. I wanted to add Datadog tracing to the app, but I am confused about where in the entrypoint I should add the ddtrace-run command. The only thing we do in the entrypoint is "/run-apache.sh".</p>
<p>I tried ["ddtrace-run", "sh", "/run-apache.sh"] but it didn't work.</p>
<p>My run-apache.sh script has</p>
<pre><code>exec /usr/sbin/apachectl -DFOREGROUND
</code></pre>
<p>Can you help me understand how we can run the app with ddtrace here.</p>
| <python><docker><mod-wsgi><wsgi><datadog> | 2023-07-26 17:25:58 | 0 | 1,089 | Darshan Patil |
76,773,671 | 13,086,128 | 6.3//2.1 gives 2.0. Please can someone explain? | <p>I am trying to divide 2 float numbers using integer division. I am <strong>not</strong> using <code>numpy</code> or any other library.</p>
<p>Just pure python.</p>
<pre><code>6.3//2.1
#output
2.0
</code></pre>
<p>I tried to break my problem into fractions like:</p>
<pre><code>(63/10)//(21/10)
#output
2.0
</code></pre>
<p>Still the same output.</p>
<p>I know <code>2.1 * 3.0</code> == <code>6.3</code></p>
<p>So, I am expecting <code>3.0</code> as output.</p>
<p>Please can someone explain?</p>
| <python><division> | 2023-07-26 17:21:44 | 0 | 30,560 | Talha Tayyab |
76,773,472 | 7,019,069 | How can I parametrize a function in pytest with a dynamic number of arguments that are based on another fixture? | <p>I'm looking to do something similar to the solution described here:
<a href="https://stackoverflow.com/questions/46941838/can-i-parameterize-a-pytest-fixture-with-other-fixtures">Can I parameterize a pytest fixture with other fixtures?</a></p>
<p>But in my use case I want to have a dynamic number of tests based on a query in a database. In order to do the query I need first the server which is defined by another session fixture.</p>
<p>So my issue is that I cannot find a way of passing the server fixture to a static function so that it returns the collection directly.</p>
<pre class="lang-py prettyprint-override"><code>
import pytest


@pytest.fixture(scope="session")
def server():
    """Fixture that is in the global configuration of many tests."""
    return "some_ip"


def function_based_on_server(server):
    """External function that returns the list of tests, for example a query."""
    if server == "some_ip":
        return [1, 2, 3, 4, 5]
    return [1, 2, 3]


@pytest.fixture
def my_test(server, request):
    """Fixture that I want to return my tests."""
    # request is actually not needed
    return function_based_on_server(server)


@pytest.mark.parametrize('my_test', [x for x in range(2)], indirect=True)
def test_me(my_test):
    # I want this my_test to be 1, then 2...
    print(f"\n{my_test}")
    assert my_test == 1
</code></pre>
<p>In this example I make use of an artificial loop that returns two tests. But what I actually want is for that loop to be something that depends on the server fixture, so that 5 tests are run, each giving the variable my_test a value from 1 to 5, instead of a list like in this example.</p>
<p>Thank you.</p>
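One pattern that may fit (a sketch, not a verified answer): <code>pytest_generate_tests</code> runs at collection time and can parametrize a test from computed values. Session fixtures are not available during collection, so the server address would have to come from configuration (e.g. a command-line option) rather than the <code>server</code> fixture itself.

```python
def function_based_on_server(server):
    # stand-in for the database query from the question
    if server == "some_ip":
        return [1, 2, 3, 4, 5]
    return [1, 2, 3]

def pytest_generate_tests(metafunc):
    # Collection-time hook (usually placed in conftest.py)
    if "my_test" in metafunc.fixturenames:
        server = metafunc.config.getoption("--server", default="some_ip")
        metafunc.parametrize("my_test", function_based_on_server(server))

def test_me(my_test):
    assert my_test in {1, 2, 3, 4, 5}
```

With this, pytest collects one `test_me` per value returned by the query, with no fixed-size `parametrize` list.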
| <python><pytest><parametrize><pytest-fixtures> | 2023-07-26 16:51:56 | 1 | 2,288 | nck |
76,773,428 | 2,040,432 | Optimal way to run a function on all rows of a Polars DataFrame | <p>I'm applying a relatively complex function over a moderately sized data frame of around 5k rows; however, it takes a long time (~10 min) and the CPU cores are only running at around 30% load.</p>
<p>How could I minimize the running time and ensure a 100% loading of all the CPU cores?</p>
<p>Is that way of calling the function and packing / unpacking the results the most appropriate?</p>
<pre><code>class my_class():
    def __init__(self, bla: float = np.nan):
        pass

    def my_complex_function(self, param_A: float = np.nan, ...):
        # bla bla
        return {f"{prefix_name}_result_A": result_A,
                f"{prefix_name}_result_B": result_B,
                f"{prefix_name}_result_C": result_C,
                ...}

my_object = my_class(999.9)
float_param_D = 999.9
float_param_E = 999.9
float_param_F = 999.9
prefix_name = "column_name_prefix"

df = df.with_columns(
    pl.struct('float_param_A',
              'float_param_B',
              'float_param_C').map_elements(
        lambda x: my_object.my_complex_function(
            param_A=x['float_param_A'],
            param_B=x['float_param_B'],
            param_C=x['float_param_C'],
            param_D=float_param_D,
            param_E=float_param_E,
            param_F=float_param_F,
            float_param_G=0,
            prefix_name=prefix_name)
    ).alias('calculation_results')
).unnest('calculation_results')
</code></pre>
| <python><python-polars> | 2023-07-26 16:46:08 | 1 | 2,436 | kolergy |
76,773,374 | 1,502,718 | When subscribing to a multicast group, what is the best way to filter traffic from other hosts in the group? | <p>There is a server that will be sending messages to a multicast group.</p>
<p>I have multiple clients that will be receiving messages from that multicast group.</p>
<p>It's my understanding that anyone can send messages to the multicast group if they're subscribed on the socket. This means that subscribing would open me up to a lot of traffic that I don't care about.</p>
<p>In my client code, I could setup a conditional to check if the messages comes from the sender I want to listen to:</p>
<pre><code>data, address = sock.recvfrom(1024)
if address not in trusted:
continue
</code></pre>
<p>However, I was wondering if there was a way to configure the socket to only see the traffic of trusted hosts.</p>
<p>Even further, I'd like to know if there's a way to filter by source address when subscribing to the multicast group from the python client code.</p>
<p>If not, would I just need to configure the router to filter multicast traffic by the source IP?</p>
<p>What options are available here?</p>
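If the platform supports it, a source-specific join pushes the filtering into the kernel, so the socket never delivers datagrams from other senders. A hedged sketch for Linux (the <code>ip_mreq_source</code> field order below — group, interface, source — is Linux's and differs on Windows/BSD; the addresses are hypothetical):

```python
import socket

MCAST_GRP, MCAST_PORT = "239.1.1.1", 5000
SOURCE_IP = "192.0.2.10"   # the only sender we want to hear from
IFACE_IP = "0.0.0.0"       # let the kernel pick the interface

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", MCAST_PORT))

# struct ip_mreq_source, Linux layout: group, interface, source (12 bytes)
mreq = (socket.inet_aton(MCAST_GRP)
        + socket.inet_aton(IFACE_IP)
        + socket.inet_aton(SOURCE_IP))
# the constant may be missing from the socket module; it is 39 on Linux
IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)
try:
    sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)
except OSError:
    pass  # no multicast-capable route in this environment
```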
| <python><sockets><multicast> | 2023-07-26 16:36:39 | 1 | 645 | Axoren |
76,773,336 | 111,888 | Python http response stream getting closed prematurely | <p>I'm trying to write simple code that reads lines one by one from a file streamed over http. Here is what I have:</p>
<pre class="lang-py prettyprint-override"><code>url = "https://drive.google.com/uc?id=1P2hMOGgz9tEN5DYb7rqTE0MeTbgb1Pd5"
response = requests.get(url, stream=True)
reader = io.TextIOWrapper(response.raw, encoding="utf-8")
line1 = reader.readline()
line2 = reader.readline()
</code></pre>
<p>This correctly reads the first line, but when trying to read the second line, it blows up with <code>ValueError: I/O operation on closed file</code>.</p>
<p>The test file just contains:</p>
<pre><code>1
2
3
</code></pre>
<p>What is causing the underlying stream to get closed prematurely? I see there are alternative solutions that rely on <code>Response.iter_lines()</code>, but I'd like to understand why the code that I have doesn't work.</p>
| <python> | 2023-07-26 16:31:49 | 1 | 43,343 | David Ebbo |
76,773,188 | 4,623,971 | Convert pyspark string column into new columns in pyspark dataframe | <p>I am using databricks and have a spark dataframe as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>data</th>
<th>ida</th>
<th>ids</th>
<th>mode</th>
</tr>
</thead>
<tbody>
<tr>
<td>[{fname=name1, value=777.7777777}, {fname=name2, value=456.4564574683}, {fname=name3, value=123.52624767358654}, {fname=name4, value=1.76}]</td>
<td>hfhdj-345</td>
<td>trhryr-365</td>
<td>all</td>
</tr>
<tr>
<td>[{fname=name1, value=123.123123123}, {fname=name2, value=321.321321321}, {fname=name3, value=765.234234234}, {fname=name4, value=3.45}]</td>
<td>hfdfg-453</td>
<td>trhret-123</td>
<td>some</td>
</tr>
</tbody>
</table>
</div>
<p>I want to create new columns in the dataframe based on the fname in each dictionary (name1, name2, name3, name4 - each of these becomes a new column in the dataframe) and then the associated value being the data for that column.</p>
<p>I've tried various methods, such as converting the data into json, but I still cannot seem to get it to work. I want to be able to do this using PySpark and then drop the data column as this will no longer be needed.</p>
| <python><dataframe><apache-spark><pyspark><azure-databricks> | 2023-07-26 16:08:12 | 2 | 334 | JGW |
76,773,012 | 20,295,949 | ImportError: Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage | <p>I am using Spyder and I am running the following code:</p>
<pre><code>import pandas

storage_options = {'account_name': 'bidi', 'account_key': 'xxxxxxxxxxxxxxxxxxx'}
df = pandas.read_csv('abfs://taxi@bidi.dfs.core.windows.net/raw/taxi_zone.csv', storage_options=storage_options)
print(df)
</code></pre>
<p>But this is giving me the following error:</p>
<pre><code>ImportError: Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage
</code></pre>
<p>So I opened Anaconda Prompt as administrator and typed:</p>
<pre><code>conda install adlfs
</code></pre>
<p>I received the following error:</p>
<pre><code>(base) C:\WINDOWS\system32>conda install adlfs
Collecting package metadata (current_repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
Collecting package metadata (repodata.json): done
Solving environment: failed with initial frozen solve. Retrying with flexible solve.
PackagesNotFoundError: The following packages are not available from current channels:
- adlfs
Current channels:
- https://repo.anaconda.com/pkgs/main/win-64
- https://repo.anaconda.com/pkgs/main/noarch
- https://repo.anaconda.com/pkgs/r/win-64
- https://repo.anaconda.com/pkgs/r/noarch
- https://repo.anaconda.com/pkgs/msys2/win-64
- https://repo.anaconda.com/pkgs/msys2/noarch
To search for alternate channels that may provide the conda package you're
looking for, navigate to
https://anaconda.org
and use the search bar at the top of the page.
</code></pre>
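For what it's worth, <code>adlfs</code> is published on conda-forge rather than Anaconda's default channels, which is consistent with the <code>PackagesNotFoundError</code> above — so specifying the channel (or installing with pip into the same environment) is the usual workaround:

```shell
# conda-forge carries adlfs; the default channels do not
conda install -c conda-forge adlfs

# or, with the environment's own pip:
pip install adlfs
```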
| <python><anaconda><spyder> | 2023-07-26 15:47:24 | 1 | 319 | HamidBee |
76,772,978 | 6,626,531 | Resolving TypeVar errors in Python for a decorator | <p>I've been adding types to my decorator to make mypy happy. I've been following <a href="https://stackoverflow.com/questions/65621789/mypy-untyped-decorator-makes-function-my-method-untyped">this</a> and have been looking at <a href="https://peps.python.org/pep-0484/" rel="nofollow noreferrer">pep 484</a>. I've made a lot of progress, but I'm getting the error message</p>
<pre><code>decorators.py:12: error: A function returning TypeVar should receive at least one argument containing the same TypeVar [type-var]
decorators.py:12: note: Consider using the upper bound "Callable[..., Any]" instead
</code></pre>
<p>However, when you look at the code, I have used <code>bound=Callable[..., Any]</code>, so I'm not sure how it wants me to proceed.</p>
<p>Script: decorators.py</p>
<pre><code>"""Decorator Functions."""
from typing import Any, Callable, List, TypeVar, Union, cast

from pandas import DataFrame, concat

F = TypeVar('F', bound=Callable[..., Any])


# Decorator to process DataFrame with ignore columns
def process_without_columns(
    ignore_cols: List[str], final_cols_order: Union[List[str], None] = None
) -> F:
    """
    Decorate to process a DataFrame, removing specified ignore columns, and then joining them back.

    Parameters
    ----------
    ignore_cols: List[str]
        List of column names to ignore during processing.
    final_cols_order: Union[List[str], None]
        List specifying the desired order of columns in the final DataFrame.
        If None, the original DataFrame's column order will be used. Default is None.

    Returns
    -------
    decorator_process: Decorator function that processes the DataFrame.
    """

    def decorator_process(func: F) -> F:
        def inner(self, data_df: DataFrame, *args: Any, **kwargs: Any) -> DataFrame:
            """
            Inner function that performs the actual processing of the DataFrame.

            Parameters
            ----------
            data_df: DataFrame
                DataFrame to be processed.
            *args
                args passed into inner function
            **kwargs
                Kwargs passed into inner function

            Returns
            -------
            DataFrame: Processed DataFrame with the original columns
            """
            # Extract the ignore columns as a separate DataFrame
            ignore_df = data_df[ignore_cols]
            # Remove the ignore columns from the original DataFrame
            data_df = data_df.drop(columns=ignore_cols)

            # Process the DataFrame (smaller DataFrame without ignore columns)
            processed_df = func(self, data_df, *args, **kwargs)

            # Join back the processed DataFrame with the ignore columns DataFrame
            processed_df = concat([processed_df, ignore_df], axis=1)

            # Reorder DataFrame columns if final_cols_order is specified
            if final_cols_order is not None:
                processed_df = processed_df[final_cols_order]
            return processed_df

        return cast(F, inner)

    return cast(F, decorator_process)
</code></pre>
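<p>A hedged sketch of what mypy appears to be asking for (stripped down from the code above, with the DataFrame handling omitted): the factory itself never receives an <code>F</code>, so its return type should be <code>Callable[[F], F]</code> rather than <code>F</code>:</p>

```python
# Hedged sketch: the factory returns a decorator, not an F, so annotate it
# as Callable[[F], F]; mypy can then track the TypeVar through the call.
from typing import Any, Callable, List, Optional, TypeVar, cast

F = TypeVar('F', bound=Callable[..., Any])

def process_without_columns(
    ignore_cols: List[str], final_cols_order: Optional[List[str]] = None
) -> Callable[[F], F]:
    def decorator_process(func: F) -> F:
        def inner(*args: Any, **kwargs: Any) -> Any:
            return func(*args, **kwargs)  # real column handling omitted here
        return cast(F, inner)
    return decorator_process

@process_without_columns(["ignore_me"])
def add(a: int, b: int) -> int:
    return a + b

print(add(2, 3))  # 5
```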
| <python><python-3.x><decorator><mypy> | 2023-07-26 15:41:46 | 1 | 1,975 | Micah Pearce |
76,772,958 | 4,734,920 | Python logging config to file AND stdout: not possible to use different log levels | <p>Under Ubuntu I experienced that using different log levels for the stdout and file loggers does not work. In my Windows environment (Spyder with Anaconda) it works as expected. Please see my example code. Does anyone have an idea what's wrong here?</p>
<p>This is my example code:</p>
<pre class="lang-py prettyprint-override"><code>import logging
#from logging.handlers import RotatingFileHandler
logfile = "log.log"
levels =('NOTSET', 'DEBUG', 'INFO', 'WARNING', 'ERROR', 'CRITICAL')
dateformat = '%y-%m-%d %H:%M:%S'
strformat = '%(asctime)s %(levelname)s in %(funcName)s at %(lineno)s: %(message)s'
formatter = logging.Formatter(strformat,datefmt=dateformat)
for handler in logging.root.handlers[:]:
logging.root.removeHandler(handler)
logging.basicConfig(
format = strformat,
datefmt = dateformat,
force = True,
level = logging.WARNING,
)
if logfile:
print(f"Creating {logfile}")
fh = logging.FileHandler(logfile)
#fh = RotatingFileHandler(logfile, maxBytes=(1000), backupCount=1)
fh.setLevel(logging.DEBUG)
fh.setFormatter(formatter)
logging.getLogger('').addHandler(fh)
logging.debug(f"logging level is set to {levels[logging.root.level//10]}")
logging.warning("level warning test")
</code></pre>
<p>I would expect to see the debug-level message in the file, but this is not the case.
I see the warning in both stdout and the file.</p>
<p>What am I doing wrong?</p>
<p>Do you need the logging module versions? How can I display the version?</p>
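<p>For what it's worth, a hedged sketch of the usual cause: the root logger's own level (here <code>WARNING</code>) filters records before any handler sees them, so the file handler's <code>DEBUG</code> level never gets a chance. Setting the root to <code>DEBUG</code> and raising the console handler's level instead (Python ≥ 3.8 for <code>force=True</code>) behaves as expected:</p>

```python
# Hedged sketch: the root logger's level gates every record before handlers
# see it, so give the root DEBUG and restrict the console handler instead.
import logging

logging.basicConfig(level=logging.DEBUG, force=True)  # root now passes DEBUG
logging.root.handlers[0].setLevel(logging.WARNING)    # console: WARNING and up

fh = logging.FileHandler("log.log", mode="w")
fh.setLevel(logging.DEBUG)                            # file: everything
logging.root.addHandler(fh)

logging.debug("goes to the file only")
logging.warning("goes to both")
```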
| <python><file><ubuntu><logging> | 2023-07-26 15:39:11 | 2 | 332 | Stefatronik |
76,772,918 | 10,628,959 | How to Optimize Slow Aggregation with Large Collection in MongoDB? | <p>EDIT: Here is how I managed to optimize it, but it is still not fast enough. It seems that the <code>$setWindowFields</code> stage is causing the slowdown.</p>
<pre><code>[
{
"$limit": 1
},
{
"$lookup": {
"from": "mobile_devices",
"pipeline": [
{
"$match": {
"orgUnitPath": {
"$regularExpression": {
"pattern": "^/",
"options": "i"
}
}
}
},
{
"$setWindowFields": {
"output": {
"totalCount": {
"$count": {}
}
}
}
},
{
"$limit": 100
},
{
"$project": {
"_id": 0
}
}
],
"as": "mobiledevices"
}
},
{
"$addFields": {
"nextPageToken": {
"$cond": {
"if": {
"$lt": [
100,
"$mobiledevices.totalCount"
]
},
"then": "MTAw",
"else": "$$REMOVE"
}
}
}
},
{
"$unset": "mobiledevices.totalCount"
},
{
"$group": {
"_id": "_id",
"kind": {
"$first": "$kind"
},
"etag": {
"$first": "$etag"
},
"mobiledevices": {
"$first": "$mobiledevices"
},
"nextPageToken": {
"$first": "$nextPageToken"
}
}
},
{
"$project": {
"_id": 0
}
},
{
"$set": {
"nextPageToken": {
"$ifNull": [
"$nextPageToken",
"$$REMOVE"
]
}
}
}
]
</code></pre>
<p>I have an aggregation pipeline in MongoDB that processes a collection (collection_list) of several gigabytes. The current aggregation is taking too much time to complete. The pipeline returns results in the following format:</p>
<pre class="lang-json prettyprint-override"><code>{
"nextPageToken": "MTA=",
"kind": "admin#directory#mobiledevices",
"etag": "\sdsd",
"target_model": [
{
// ...
},
// ...
]
}
</code></pre>
<p>Upon investigating the performance, it seems that the <code>$unwind</code> stage is causing the slowdown. Here is the current aggregation pipeline:</p>
<pre class="lang-python prettyprint-override"><code>lookup_pipeline = []
if scopes_ou_path:
"""Concatenate all OU paths with a regex OR operator,
e.g.: ["/vesa", "/europe/france"] -> "^/vesa|^/europe/france"""
scopes_ou_path = "^" + "|^".join(scopes_ou_path)
lookup_pipeline.append(
{"$match": {"orgUnitPath": {"$regex": scopes_ou_path, "$options": "i"}}}
)
skip, next_page = manage_next_page_token(next_page_token=next_page, limit=limit)
if filters:
lookup_pipeline.append({"$match": filters}) # type: ignore
if skip:
lookup_pipeline.append({"$skip": skip}) # type: ignore
lookup_pipeline.append(
{"$setWindowFields": {"output": {"totalCount": {"$count": {}}}}} # type: ignore
)
if limit:
lookup_pipeline.append({"$limit": limit}) # type: ignore
lookup_pipeline.append({"$project": {"_id": 0}}) # type: ignore
return [
{
"$lookup": {
"from": collection_list,
"pipeline": lookup_pipeline,
"as": target_model,
},
},
{"$unwind": {"path": f"${target_model}"}},
{
"$addFields": {
"nextPageToken": {
"$cond": {
"if": {
"$lt": [limit, f"${target_model}.totalCount"],
},
"then": next_page,
"else": "$$REMOVE",
},
},
},
},
{
"$unset": f"{target_model}.totalCount",
},
{
"$group": {
"_id": "_id",
"kind": {"$first": "$kind"},
"etag": {"$first": "$etag"},
f"{target_model}": {"$push": f"${target_model}"},
"nextPageToken": {"$first": "$nextPageToken"},
},
},
{"$project": {"_id": 0}},
{
"$set": {
"nextPageToken": {
"$ifNull": ["$nextPageToken", "$$REMOVE"],
},
},
},
]
</code></pre>
<p>I've noticed that <code>$unwind</code> takes considerable time. Is there a way to optimize this aggregation pipeline and improve its performance without using <code>$unwind</code>? Any guidance or alternative approaches to speed up the aggregation would be greatly appreciated. Thank you!</p>
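<p>A hedged alternative sketch (field and variable names taken from the pipeline above): <code>$facet</code> can produce the total count and the page slice in a single pass, which removes the need for the per-document <code>$setWindowFields</code> count:</p>

```python
# Hedged sketch: $facet computes the total count and the page slice in one
# pass, replacing the per-document $setWindowFields count.
skip, limit = 0, 100

lookup_pipeline = [
    {"$match": {"orgUnitPath": {"$regex": "^/", "$options": "i"}}},
    {"$facet": {
        "totalCount": [{"$count": "n"}],
        "page": [
            {"$skip": skip},
            {"$limit": limit},
            {"$project": {"_id": 0}},
        ],
    }},
]
print(lookup_pipeline)
```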
| <python><mongodb><pymongo><mongodb-atlas> | 2023-07-26 15:34:21 | 1 | 475 | Raphael Obadia |
76,772,917 | 1,579,976 | How to package root-level modules in namespace packages? | <p>Consider this project structure:</p>
<pre><code><project root>
setup.py
<namespace_package>
module_a.py
module_b.py
<package>
__init__.py
</code></pre>
<p>The <a href="https://packaging.python.org/en/latest/guides/packaging-namespace-packages/#" rel="nofollow noreferrer">PyPA documentation</a> suggests that such a layout is permissible. It works for me locally using pip editable installs. My issue is that when I create a sdist or wheel <code>module_a.py</code> and <code>module_b.py</code> are not being included. Here is my <code>setup.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import find_namespace_packages, setup
setup(
name="foo",
version="0.1",
description="foo",
long_description="fooooooooooooooooo",
author="me",
author_email="me@non-ya.business",
packages=find_namespace_packages(include=["namespace_package.*"]),
include_package_data=True,
)
</code></pre>
<p>I know I can work around this problem by converting all of those root-level modules into <code>__init__.py</code> files inside a similarly named directory, but that seems like a very non-Pythonic solution. <strong>Is there a better/cleaner way to include my root-level modules?</strong></p>
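<p>A hedged sketch of a likely cause, demonstrated on a throwaway layout: the pattern <code>"namespace_package.*"</code> matches only dotted subpackages, not <code>namespace_package</code> itself, so the root-level modules are never collected; including the bare name as well may be enough:</p>

```python
# Hedged sketch on a throwaway layout: "namespace_package.*" matches only
# dotted subpackages, so include the bare package name as well.
import os
import tempfile
from setuptools import find_namespace_packages

root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "namespace_package", "package"))
open(os.path.join(root, "namespace_package", "module_a.py"), "w").close()
open(os.path.join(root, "namespace_package", "package", "__init__.py"), "w").close()

only_subs = find_namespace_packages(root, include=["namespace_package.*"])
with_root = find_namespace_packages(
    root, include=["namespace_package", "namespace_package.*"]
)
print(only_subs)   # top-level package (and its root-level modules) missing
print(with_root)
```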
| <python><setuptools><setup.py><python-packaging> | 2023-07-26 15:34:09 | 1 | 606 | carej |
76,772,900 | 7,274,182 | Tensor to PIL Image | <p>I am using the <a href="https://github.com/eugenesiow/super-image" rel="nofollow noreferrer">super-image</a> library to upscale an image. Here is the <code>ImageLoader</code> class from <code>super-image</code>.</p>
<pre><code>import numpy as np
import cv2
from PIL import Image
import torch
from torch import Tensor
class ImageLoader:
@staticmethod
def load_image(image: Image):
lr = np.array(image.convert('RGB'))
lr = lr[::].astype(np.float32).transpose([2, 0, 1]) / 255.0
return torch.as_tensor([lr])
@staticmethod
def _process_image_to_save(pred: Tensor):
pred = pred.data.cpu().numpy()
pred = pred[0].transpose((1, 2, 0)) * 255.0
# pred = pred[scale:-scale, scale:-scale, :]
pred = cv2.cvtColor(pred, cv2.COLOR_BGR2RGB)
return pred
@staticmethod
def save_image(pred: Tensor, output_file: str):
pred = ImageLoader._process_image_to_save(pred)
cv2.imwrite(output_file, pred)
# -- snip --
</code></pre>
<p>Currently only a <code>save_image</code> convenience method is provided, but I want to convert the array directly into a PIL <code>Image</code>.
I am struggling to work out how the image data is stored so that I can convert it to a PIL <code>Image</code> correctly.</p>
<p>Here is what I have tried so far but it only gives an incorrect output image.</p>
<pre><code>def upscale(image, scale):
inputs = ImageLoader.load_image(image)
model = PanModel.from_pretrained("pan", scale=scale)
preds = model(inputs)
pred = ImageLoader._process_image_to_save(preds)
image = Image.fromarray(pred, "RGB")
image.show()
image = Image.open("temp.png")
upscale(image, 4)
</code></pre>
<p>Here is an example of the image that is shown when trying to use <code>fromarray()</code></p>
<p><a href="https://i.sstatic.net/SHHEU.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SHHEU.jpg" alt="enter image description here" /></a></p>
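<p>A hedged sketch of a possible conversion helper (the function name below is mine): <code>_process_image_to_save</code> returns a float32 array with channels swapped for <code>cv2.imwrite</code>, which expects BGR; PIL wants uint8 RGB, so skip the swap and cast instead:</p>

```python
# Hedged sketch (helper name is mine): the library's save path produces a
# float32 array and swaps channels for cv2.imwrite, which expects BGR;
# PIL wants uint8 RGB, so skip the swap and cast to uint8 instead.
import numpy as np
import torch
from PIL import Image

def tensor_to_pil(pred: torch.Tensor) -> Image.Image:
    arr = pred.detach().cpu().numpy()[0].transpose(1, 2, 0) * 255.0  # CHW -> HWC
    arr = arr.clip(0, 255).astype(np.uint8)
    return Image.fromarray(arr)  # data is already RGB; no cvtColor needed

# usage sketch: tensor_to_pil(model(inputs)).show()
```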
| <python><deep-learning><pytorch><python-imaging-library> | 2023-07-26 15:32:57 | 1 | 850 | Jacob |
76,772,674 | 5,868,293 | Merge by year-week, plus or minus n weeks in pandas | <p>I have the following dataframes:</p>
<pre><code>import pandas as pd
aa = pd.DataFrame({'id': ['a','a','a','b'],
'week': ['2022-W13','2022-W14', '2022-W19', '2022-W14']})
bb = pd.DataFrame({'id': ['a','a','a','a','a','a','b','b','b','b'],
'week': ['2022-W12','2022-W13','2022-W14','2022-W15','2022-W16','2022-W20',
'2022-W13','2022-W14','2022-W15','2022-W16'],
'val': [0,1,2,3,4,5,6,7,8,9]})
id week
0 a 2022-W13
1 a 2022-W14
2 a 2022-W19
3 b 2022-W14
id week val
0 a 2022-W12 0
1 a 2022-W13 1
2 a 2022-W14 2
3 a 2022-W15 3
4 a 2022-W16 4
5 a 2022-W20 5
6 b 2022-W13 6
7 b 2022-W14 7
8 b 2022-W15 8
9 b 2022-W16 9
</code></pre>
<p>I would like to merge the two dataframes by <code>id</code> and <code>week</code>, but the <code>week</code> I want to give a <strong>plus or minus 1 week</strong>.</p>
<p>Explanation:</p>
<ul>
<li>for the first row of <code>aa</code> where <code>id=='a'</code> and <code>week='2022-W13'</code>, i would have to get the rows from <code>bb</code> where <code>id=='a'</code> and <code>week='2022-W12'</code> or <code>week='2022-W13'</code> or <code>week='2022-W14'</code></li>
<li>for the second row of <code>aa</code> where <code>id=='a'</code> and <code>week='2022-W14'</code>, i would have to get the rows from <code>bb</code> where <code>id=='a'</code> and <code>week='2022-W13'</code> or <code>week='2022-W14'</code> or <code>week='2022-W15'</code></li>
<li>etc</li>
</ul>
<p>The output datafame should be this:</p>
<pre><code>pd.DataFrame({'id': ['a','a','a','a','a','b','b','b'],
'week': ['2022-W12','2022-W13','2022-W14','2022-W15','2022-W20',
'2022-W13','2022-W14','2022-W15'],
'val': [0,1,2,3,5,
6,7,8]})
id week val
0 a 2022-W12 0
1 a 2022-W13 1
2 a 2022-W14 2
3 a 2022-W15 3
4 a 2022-W20 5
5 b 2022-W13 6
6 b 2022-W14 7
7 b 2022-W15 8
</code></pre>
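<p>A hedged sketch of one approach (helper names are mine; the naive week shift assumes the window never crosses an ISO-year boundary): expand each row of <code>aa</code> to week-1/week/week+1, then inner-merge with <code>bb</code>:</p>

```python
# Hedged sketch: expand each (id, week) in `aa` to week-1, week, week+1 and
# inner-merge with `bb`.  The naive shift assumes no ISO-year rollover.
import pandas as pd

def shift_week(week: str, k: int) -> str:
    year, num = week.split("-W")
    return f"{year}-W{int(num) + k:02d}"

def merge_pm_one_week(aa: pd.DataFrame, bb: pd.DataFrame) -> pd.DataFrame:
    exp = (
        aa.assign(week=aa["week"].map(lambda w: [shift_week(w, -1), w, shift_week(w, 1)]))
        .explode("week")
        .drop_duplicates()
    )
    return bb.merge(exp, on=["id", "week"]).reset_index(drop=True)

aa = pd.DataFrame({"id": ["a", "a", "a", "b"],
                   "week": ["2022-W13", "2022-W14", "2022-W19", "2022-W14"]})
bb = pd.DataFrame({"id": ["a"] * 6 + ["b"] * 4,
                   "week": ["2022-W12", "2022-W13", "2022-W14", "2022-W15",
                            "2022-W16", "2022-W20",
                            "2022-W13", "2022-W14", "2022-W15", "2022-W16"],
                   "val": range(10)})
print(merge_pm_one_week(aa, bb))
```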
| <python><pandas> | 2023-07-26 15:05:19 | 2 | 4,512 | quant |
76,772,262 | 15,804,353 | match case statement changing input when not supposed to | <p>I am using the following function to make sure that none of the array elements start with two specific values:</p>
<pre class="lang-py prettyprint-override"><code>def checkDuplicates(x, y, array):
for l in array:
match l:
case [x, y]:
return False
case [x, y, _]:
return False
case other:
return True
</code></pre>
<p>At some point I had <code>x</code> as 1, <code>y</code> as 2, and <code>l</code> as <code>[1,2]</code>; there it matched the first case and returned <code>False</code>. However, when I used <code>x</code> as 1, <code>y</code> as 3, and <code>l</code> as <code>[1,2]</code>, it for some reason changed <code>y</code> to 2 once it reached the first case statement, and didn't return <code>True</code> when it should have. Why could that be?</p>
<p>I learned about this concept on <a href="https://www.infoworld.com/article/3609208/how-to-use-structural-pattern-matching-in-python.html" rel="nofollow noreferrer">this</a> website</p>
| <python><match> | 2023-07-26 14:19:49 | 0 | 531 | hellwraiz |
76,772,249 | 4,542,117 | Python: import libraries in same script | <p>Let's say I have a library of plotting functions, all of which require importing matplotlib among a few other libraries. It would be cumbersome to do something like:</p>
<pre><code>def plot1(a,b,c,d):
import matplotlib
import matplotlib.pyplot as plt
...
...
def plot2(a,b,c,d):
import matplotlib
import matplotlib.pyplot as plt
...
...
</code></pre>
<p>Is there a way to be able to define the import matplotlib (and others) as a function, and simply import <em>that</em>? For example:</p>
<pre><code>script1.py
def load_matplotlib_funcs():
import matplotlib
import matplotlib.pyplot as plt
...
...
def plot1(a,b,c,d):
import load_matplotlib_funcs
...
</code></pre>
<p>Doing this does not seem to produce the desired result of making matplotlib usable inside the calling function.</p>
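<p>A hedged sketch of why this fails and the usual pattern (the file name <code>plotting_common.py</code> is illustrative): imports inside a function body are local to that function, so importing a module that merely defines <code>load_matplotlib_funcs()</code> gives the caller nothing. Put the imports at module level in a shared file and import the <em>names</em>, e.g. <code>from plotting_common import plt</code>. The snippet below demonstrates the scoping rule with <code>math</code> standing in for matplotlib:</p>

```python
# Hedged demonstration (math stands in for matplotlib): function-local
# imports do not leak out of the function, so a loader function cannot
# provide names to its caller; a module-level import can.
def load_matplotlib_funcs():
    import math  # bound only inside this function

load_matplotlib_funcs()
try:
    math  # noqa: F821 -- not defined at module level yet
    leaked = True
except NameError:
    leaked = False
print("leaked:", leaked)  # leaked: False

import math  # module level: visible everywhere in this module

def plot1(x):
    return math.sqrt(x)

print(plot1(16))  # 4.0
```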
| <python><matplotlib> | 2023-07-26 14:18:39 | 1 | 374 | Miss_Orchid |
76,772,127 | 1,414,475 | Problem with using Python via R's reticulate in Docker container | <p>I create a Docker image based on <code>rocker/shinyverse</code>with the Dockerfile:</p>
<pre><code># File: Dockerfile
FROM rocker/shiny-verse:4.2.2
RUN echo "apt-get start"
RUN apt-get update && apt-get install -y \
python3 \
python3-pip
# install R packages
RUN R -e "install.packages('remotes')"
RUN R -e "install.packages('reticulate')"
RUN R -e "install.packages('tidyverse')"
# Install Python packages datetime and zeep
RUN python3 -m pip install datetime zeep
# Set the environment variable to use Python3
ENV RETICULATE_PYTHON /usr/bin/python3
</code></pre>
<p>Then I have the simple R file:</p>
<pre><code>library("tidyverse")
library(reticulate)
# Call a simple Python command to calculate 3*5
py_run_string("z = 3+4")
py$z %>% print()
</code></pre>
<p>After I launched the container, I want to run this R script with the shell command:</p>
<pre><code>docker exec shiny_new Rscript /home/shiny/ETL/reticulate_test.R
</code></pre>
<p>but I get the following error:</p>
<pre><code>Error: Python shared library not found, Python bindings not loaded.
Use reticulate::install_miniconda() if you'd like to install a Miniconda Python environment.
Execution halted
</code></pre>
<p>It fails when executing the Python code.</p>
<p>I am unsure how to setup the Python in such a way that I can use python code via <code>reticulate</code> in my R script. Does anybody have an idea where I go wrong in setting up the Docker image?</p>
| <python><r><docker><reticulate> | 2023-07-26 14:04:34 | 1 | 3,428 | Jochem |
76,772,119 | 748,742 | Databricks Python run notebook in parallel with dynamic arguments | <p>I want to run a single notebook in parallel with the arguments passed dynamically. I want a generic function that takes parameters (notebook_path,threads, array_number_of_iterations and arguments). The user should be able to pass theses parameters and the notebook will automatically run the notebook in parallel.</p>
<p>Example:</p>
<pre><code>run_notebook_in_parallel(path, threads, array_number_of_iterations, arguments)
run_notebook_in_parallel('path_to_notebook', 2, array, argument_array)
</code></pre>
<p>I have written some code that almost does this correctly, but it passes the whole array, e.g. <code>[0,1]</code>, to the called notebook instead of a single value.</p>
<p>The only way I can make this work is by directly passing the arguments in the map. I don't want to do this, as I want the code to be as generic as possible:</p>
<pre><code>map(arguments = {"id":str(id_array), "otherargument":1})
</code></pre>
<p>I have the following code:</p>
<pre><code>from multiprocessing.pool import ThreadPool
df= spark.sql(f"""select id from xyz """)
id_array= df.select("id").rdd.flatMap(lambda x: x).collect()
arguments = {"id":str(id_array), "otherargument":1}
pool = ThreadPool(2)
pool.map(lambda id_array: dbutils.notebook.run(notebook_full_path,timeout_seconds =300,arguments = arguments), id_array)
</code></pre>
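<p>A hedged sketch (with a stub standing in for <code>dbutils.notebook.run</code>, so it runs outside Databricks): build the per-item argument dict inside the lambda so each thread receives a single id rather than the whole array:</p>

```python
# Hedged sketch with a stub in place of dbutils.notebook.run: build the
# per-item argument dict inside the lambda so each thread gets one id.
from multiprocessing.pool import ThreadPool

def run_notebook(path, timeout_seconds, arguments):  # stand-in for dbutils
    return f"{path}:{arguments['id']}"

def run_notebook_in_parallel(path, threads, id_array, base_arguments):
    with ThreadPool(threads) as pool:
        return pool.map(
            lambda one_id: run_notebook(
                path, 300, {**base_arguments, "id": str(one_id)}
            ),
            id_array,
        )

print(run_notebook_in_parallel("nb", 2, [7, 8], {"otherargument": 1}))
```

<p>On Databricks, <code>run_notebook</code> would be replaced by <code>dbutils.notebook.run</code> with the same per-item arguments dict.</p>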
| <python><parallel-processing><databricks> | 2023-07-26 14:03:42 | 1 | 383 | uba2012 |
76,772,095 | 1,551,817 | How can I make my Slurm script loop over a list of file names? | <p>I have a slurm script to run my python code:</p>
<pre><code>#!/bin/bash -l
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=10G
#SBATCH --account=my_account
#SBATCH --qos=default
#SBATCH --time=2-00:00:00
###Array setup here
#SBATCH --array=1
#SBATCH --open-mode=truncate
#SBATCH --output=out_files/output.o
module purge
module load my_cluster
module load Miniconda3/4.9.2
eval "$(${EBROOTMINICONDA3}/bin/conda shell.bash hook)"
conda activate my_conda_env
cd /my_directory
python my_python_code.py -filename file_a.txt
</code></pre>
<p>This works, but at the moment, it just launches 1 job and uses <code>file_a.txt</code> as an argument.</p>
<p>How can I launch 10 simultaneous jobs? I know I can use:</p>
<pre><code>#SBATCH --array=1-10
</code></pre>
<p>but I want to use <code>file_a.txt</code> as the argument for job 1, <code>file_b.txt</code> as the argument for job 2 etc..</p>
<p>I would like to provide the lists of file names as a separate text file if possible, which is read by the slurm script.</p>
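<p>A hedged sketch of the relevant lines (<code>file_list.txt</code> is an assumed name holding one file name per line): use <code>$SLURM_ARRAY_TASK_ID</code> to pick the matching line:</p>

```shell
# Hedged sketch: file_list.txt (assumed name) holds one file name per line;
# pick the line matching this array task's 1-based index.
# In the real job script, add:  #SBATCH --array=1-10
printf 'file_a.txt\nfile_b.txt\nfile_c.txt\n' > file_list.txt  # demo input
FILE=$(sed -n "${SLURM_ARRAY_TASK_ID:-1}p" file_list.txt)
echo "task ${SLURM_ARRAY_TASK_ID:-1} -> ${FILE}"
# python my_python_code.py -filename "$FILE"
```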
| <python><shell><slurm> | 2023-07-26 14:00:00 | 1 | 7,561 | user1551817 |
76,772,038 | 567,059 | VSCode Function App debug can't find task 'func: host start' | <p>I am trying to locally debug an Azure Function App (Python) using VS Code in WSL, Ubuntu 22.04. Unfortunately, debugging doesn't seem to work.</p>
<p>I have installed the following pre-requisites -</p>
<ul>
<li>Azure extension (and logged in)</li>
<li>Azure Functions extension (and created a Function App project using <kbd>Ctl</kbd>, <kbd>Shift</kbd> + <kbd>p</kbd> -> Azure Functions: Create New Project...)</li>
<li>Installed <code>azurite</code> (and started using <kbd>Ctl</kbd>, <kbd>Shift</kbd> + <kbd>p</kbd> -> Azurite: Start)</li>
<li>Ensure <em><strong>local.settings.json</strong></em> is configured with <code>"AzureWebJobsStorage": "UseDevelopmentStorage=true"</code></li>
</ul>
<p>But when I try to debug using the launch configuration <strong>Attach to Python Functions</strong> from <em><strong>.vscode/launch.json</strong></em> (added when I created a Function App project), an error message is displayed.</p>
<blockquote>
<p>Could not find the task 'func: host start'.
<a href="https://i.sstatic.net/vervU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vervU.png" alt="enter image description here" /></a></p>
</blockquote>
<p>Using <kbd>Ctl</kbd>, <kbd>Shift</kbd> + <kbd>p</kbd> and searching for 'func' confirms that the task is missing.</p>
<p>However, on the CLI I can confirm that Azure Function App Tools is installed.</p>
<pre class="lang-bash prettyprint-override"><code>user@computer:~$ func --version
4.0.5198
</code></pre>
<p>I don't have the Python extension installed for two reasons -</p>
<ul>
<li>It's already installed at the version I require through WSL.</li>
<li>Installing it makes the debugger not work - it just freezes and 'Python extension loading...' is displayed in the tool bar forever. Nothing is displayed in the 'Python' output. I have attempted the fix as described in <a href="https://stackoverflow.com/questions/72791043/how-would-i-fix-the-issue-of-the-python-extension-loading-and-extension-activati">this question</a>, but it does not work.</li>
</ul>
<p>How can I get a Function App to reliably debug using VS Code?</p>
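<p>For reference, a hedged sketch of the task the launch configuration expects, roughly what the Azure Functions extension generates in <em><strong>.vscode/tasks.json</strong></em> (field values are assumed; recreating the file via the extension is the safer route):</p>

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "type": "func",
      "label": "func: host start",
      "command": "host start",
      "problemMatcher": "$func-python-watch",
      "isBackground": true
    }
  ]
}
```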
| <python><azure><azure-functions> | 2023-07-26 13:53:23 | 1 | 12,277 | David Gard |
76,772,023 | 4,999,991 | How to download all the messages in a Telegram group? | <p>I am trying to use the below code in a Jupyter NoteBook to download all the messages of a specific Telegram group:</p>
<pre class="lang-py prettyprint-override"><code>import csv
import asyncio
from telethon import TelegramClient
from telethon.sessions import StringSession
api_id = 'YOUR_API_ID'
api_hash = 'YOUR_API_HASH'
session = StringSession()
async def print_messages():
async with TelegramClient(session, api_id, api_hash) as client:
with open('messages.csv', 'w', newline='', encoding='utf-8') as file:
writer = csv.writer(file)
writer.writerow(['Sender ID', 'Text'])
channel_username = 'channelusername'
entity = await client.get_entity(channel_username)
async for message in client.iter_messages(entity, limit=None):
writer.writerow([message.sender_id, message.text])
task = asyncio.create_task(print_messages())
await task
</code></pre>
<p>It works without errors; however, it only gives me the first 25 or so messages, not all of them. I also tried changing the limit to <code>limit=1000</code> to no avail. Suspecting that the Telegram rate limit is the issue, I also tried adding</p>
<pre><code>await asyncio.sleep(2) # sleep for 2 seconds
</code></pre>
<p>inside the <code>async for</code> loop, but it did not help either.</p>
<p>I would appreciate it if you could help me know what is the problem and how to solve it.</p>
<p><strong>P.S.</strong> I posted the question also <a href="https://t.me/Python/1889166" rel="nofollow noreferrer">here</a> on Telegram.</p>
| <python><jupyter-notebook><python-asyncio><telegram><telethon> | 2023-07-26 13:51:58 | 0 | 14,347 | Foad S. Farimani |
76,771,858 | 7,192,927 | Ruff does not autofix line-too-long violation | <p>I have a python project and I am configuring latest version of <code>ruff</code> for that project for linting and formating purpose. I have the below settings in my <code>pyproject.toml</code> file:</p>
<pre class="lang-ini prettyprint-override"><code>[tool.ruff]
select = ["E", "F", "W", "Q", "I"]
ignore = ["E203"]
# Allow autofix for all enabled rules (when `--fix`) is provided.
fixable = ["ALL"]
unfixable = []
# restrict Line length to 99
line-length = 99
</code></pre>
<p>The <code>ruff check</code> command with autofix feature (<code>--fix</code>) of ruff identifies that the lines are long with <code>E501</code> errors, but it does not format that code to wrap to next line to maintain the line-length restriction. Is there something I need to enable or do to ensure that ruff fixes this? Or is this not possible in ruff currently? Please help.</p>
<p>I tried going through the documentation to find anything, but I am clueless what to do here.</p>
| <python><ruff> | 2023-07-26 13:32:55 | 2 | 884 | Sukanya Pai |
76,771,815 | 12,057,138 | How to plot an excel line graph in python from an existing csv? | <p>I have the following code which takes a csv string and saves it as an excel file.
I would like to plot a line graph out of this csv, excluding some columns and using the only
string column as x axis (all other columns are numbers).
Creating this graph in excel is a simple three steps procedure:</p>
<ol>
<li>Marking the relevant rows</li>
<li>Click on Insert</li>
<li>Click on line chart</li>
</ol>
<p>However, I couldn't find a way to do it in Python.</p>
<p>Here is my code:</p>
<pre><code>def save_to_excel(csv, type):
csv_data_frame = pd.read_csv(StringIO(csv))
excel_file = pd.ExcelWriter(f"{type}_results.xlsx", engine='openpyxl')
csv_data_frame.to_excel(excel_file, index=False, sheet_name="results")
with excel_file:
pass
</code></pre>
<p>Can anyone assist in finding an elegant way to go about this?</p>
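<p>A hedged sketch using openpyxl's chart API (column positions and sheet layout assumed, with a small stand-in frame): write the data, then attach a <code>LineChart</code> with the string column as the category axis:</p>

```python
# Hedged sketch: write the frame with openpyxl, then attach a LineChart
# using the first (string) column as the category axis.  Layout assumed.
import pandas as pd
from openpyxl import Workbook
from openpyxl.chart import LineChart, Reference

df = pd.DataFrame({"label": ["a", "b", "c"], "v1": [1, 2, 3], "v2": [4, 5, 6]})

wb = Workbook()
ws = wb.active
ws.append(list(df.columns))
for row in df.itertuples(index=False):
    ws.append(list(row))

chart = LineChart()
data = Reference(ws, min_col=2, max_col=3, min_row=1, max_row=ws.max_row)
chart.add_data(data, titles_from_data=True)  # numeric columns; headers as series names
chart.set_categories(Reference(ws, min_col=1, min_row=2, max_row=ws.max_row))
ws.add_chart(chart, "E2")
wb.save("type_results.xlsx")
```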
| <python><pandas><excel> | 2023-07-26 13:27:40 | 1 | 688 | PloniStacker |