QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 |
|---|---|---|---|---|---|---|---|---|
77,167,011 | 9,064,266 | Why do my tf-idf values not appear consistent? | <p>I have a series of tweets that I've converted to tokens. Among them are the following:</p>
<ul>
<li>geraldkutney happen realize happen conveniently rename catch yet emergency post fact come government</li>
<li>michaelemann burn happen chickenshittily change get make stupid argument good deed go unpunished</li>
<li>rickcaughell thomas_6278 coderedearth jrockstrom jordanbpeterson fact exxon predict good accuracy would happen temperature today back 1970s 80 prof model accurate</li>
</ul>
<p>Note that the first two tweets have 13 total tokens and the third one has three.</p>
<p>Using the following code, I have created TF-IDF values:</p>
<pre><code>vectoriser = sk_text.TfidfVectorizer()
vectoriser.fit(twit_api['text_clean'])
twit_vec = vectoriser.transform(twit_api['text_clean'])
twit_vec.columns = vectoriser.get_feature_names_out()
tokens_enc = twit_vec.toarray()
</code></pre>
<p>When I look at the tf-idf values for the word 'happen' in each of these, I get values
<code>0.41124561276932653</code>, <code>0.18906439908376366</code> and <code>0.1523571031416618</code>.</p>
<p>This is with the code</p>
<pre><code>print(tokens_enc[row_nos[0], vectoriser.vocabulary_['happen']])
</code></pre>
<p>These values don't appear consistent to me. I would expect the first value to be double the second, as the tf is exactly double; however, this doesn't appear to be the case.</p>
<p>Have I misunderstood something?</p>
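The behaviour described is consistent with the vectoriser's default settings: <code>TfidfVectorizer</code> L2-normalises each document vector by default (<code>norm='l2'</code>), so a term's final value depends on every term in its document, and doubling the raw tf does not double the output. A minimal stdlib sketch of the effect (not sklearn itself):

```python
import math

def l2_normalize(vec):
    # Divide each component by the vector's Euclidean norm.
    norm = math.sqrt(sum(v * v for v in vec))
    return [v / norm for v in vec]

# 'happen' has pre-normalisation weight 2 in doc_a but 1 in doc_b;
# all other term weights are equal.
doc_a = [2.0, 1.0, 1.0]
doc_b = [1.0, 1.0, 1.0]
ratio = l2_normalize(doc_a)[0] / l2_normalize(doc_b)[0]
print(ratio)  # ~1.414 (sqrt(2)), not 2.0
```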
| <python><scikit-learn> | 2023-09-24 12:10:14 | 3 | 1,380 | Daniel V |
77,166,930 | 7,422,392 | Add method to the Django CMS Page Model | <p><strong>Problem/Gap</strong></p>
<p>The Django CMS documentation clearly describes the process for <a href="https://docs.django-cms.org/en/latest/how_to/extending_page_title.html" rel="nofollow noreferrer">extending the Page and Title Models</a>.
However, documentation on the possibility of adding a method such as the property method below is lacking, or I can't seem to find it.</p>
<blockquote>
<p>Note: The example provided works perfectly when I include it directly
under the Django CMS <code>Page</code> model. This is not the preferred way of
course.</p>
</blockquote>
<p><strong>Question</strong></p>
<p>Say I want to add the below method to the Page (cms.models.pagemodel.Page) or Title Model (assuming they are likely to follow the same process) <em>outside the CMS app</em>. How can this be achieved?</p>
<pre><code> @property
def my_custom_page_property(self):
try:
logger.info(f'Working....')
return {}
except Exception as e:
logger.error(f'Error: {e}')
return {}
</code></pre>
<p><strong>Attempt</strong></p>
<p>In a separate app I added the code below and migrated. The image field is properly configured and works fine. The method, however, doesn't seem to return anything.</p>
<pre><code>from cms.extensions import PageExtension
from cms.extensions.extension_pool import extension_pool
class MyCustomExtension(PageExtension):
image = models.ForeignKey(Image,
on_delete=models.SET_DEFAULT,
default=None,
blank=True,
null=True)
@property
def my_custom_page_property(self):
..
</code></pre>
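For illustration, a minimal Python-level sketch of attaching a property to an existing class from outside it. A stand-in class is used here instead of <code>cms.models.Page</code>; with Django, such a patch is typically applied in an <code>AppConfig.ready()</code> hook (treat that placement as an assumption):

```python
class Page:  # stand-in for cms.models.Page
    pass

def _my_custom_page_property(self):
    return {}

# Attach at runtime; existing and future instances pick it up immediately,
# because property lookup happens on the class at access time.
Page.my_custom_page_property = property(_my_custom_page_property)

page = Page()
print(page.my_custom_page_property)  # {}
```

Note that a property defined on a <code>PageExtension</code> subclass lives on the extension object, not on <code>Page</code> itself, which may be why it appears to return nothing when accessed via the page.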
| <python><django><django-cms> | 2023-09-24 11:41:07 | 1 | 1,006 | sitWolf |
77,166,897 | 2,604,247 | How to Form a Polars DataFrame from SQL Alchemy Asynchronous Query Result? | <p>The title says it all. Here is the code snippet.</p>
<pre class="lang-py prettyprint-override"><code>async with EngineContext(uri=URI) as engine:
session = async_sessionmaker(bind=engine, expire_on_commit=True)()
async with session.begin():
stmt: Select = select(User).order_by(_GenerativeSelect__first=User.login_date.desc()).limit(limit=10)
result = await session.execute(statement=stmt)
</code></pre>
<p>Equivalent to the very simple query,</p>
<pre class="lang-sql prettyprint-override"><code>SELECT * FROM User ORDER BY login_date DESC LIMIT 10;
</code></pre>
<p>The query is working fine and I am getting the results as expected. But I want to form a polars dataframe from the result, and without having to hardcode (or externally supply) the column names. I cannot seem to get the correct public APIs of the result object to manage it. When I try <code>result.keys()</code>, I get <code>RMKeyView(['User'])</code>, not the column names.</p>
<p>I tried iterating over the results, where each object has the column names as attributes, but there are many other attributes. So the code cannot be dynamic enough to pick up the column names.</p>
<p>So any help on the cleanest way to form a polars Dataframe (preferably lazy) from this result?</p>
<p>Related, when I begin the session, is there a way to explicitly mark the session as read-only in SQL alchemy, to preclude any possibility of data corruption when no write is intended in that session?</p>
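One hedged sketch (assuming SQLAlchemy 2.x-style mapped classes): since <code>result.keys()</code> only exposes the entity name, the column names can be taken from the mapped class instead, e.g. <code>User.__table__.columns.keys()</code>, and each ORM row converted to a plain dict before handing the records to <code>pl.DataFrame(...).lazy()</code>. Illustrated with a stand-in row type so it runs without a database:

```python
def rows_to_records(rows, cols):
    # Build plain dicts keyed by column name from ORM-style row objects.
    return [{c: getattr(row, c) for c in cols} for row in rows]

class FakeUser:  # stand-in for a mapped User instance
    def __init__(self, user_id, login_date):
        self.user_id = user_id
        self.login_date = login_date

records = rows_to_records([FakeUser(1, "2023-09-24")], ["user_id", "login_date"])
print(records)  # [{'user_id': 1, 'login_date': '2023-09-24'}]
```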
| <python><mysql><sqlalchemy><python-polars> | 2023-09-24 11:33:30 | 5 | 1,720 | Della |
77,166,604 | 6,494,707 | How we can group the spectral bands into subgroups? | <p>The simplest way is to group based on the neighboring wavelength groups in <code>hyperspectral images</code>, as shown in the following image, where there are $G$ different groups and the <code>spectral bands</code> are grouped in equally sized bands. For example, each group $$g_i$$ has 10 spectral neighboring bands grouped together.</p>
<p><a href="https://i.sstatic.net/ey9Ok.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ey9Ok.png" alt="enter image description here" /></a></p>
<p>I am looking for a more sophisticated technique to group the spectral bands. Moreover, as far as I know, this is not related to band selection since we need all bands. Do you know any research paper/study for this aim? I would appreciate it if you share the studies. Thanks</p>
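As one data-driven alternative to fixed-width groups (a sketch, not a published method): start a new group wherever the correlation between neighbouring bands drops below a threshold, so group boundaries follow the spectrum rather than a fixed $G$:

```python
import math

def corr(x, y):
    # Pearson correlation between two band vectors.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy)

def group_bands(bands, threshold=0.9):
    # bands: list of per-band pixel vectors, in wavelength order.
    groups = [[0]]
    for b in range(1, len(bands)):
        if corr(bands[b - 1], bands[b]) >= threshold:
            groups[-1].append(b)   # still in the same correlated run
        else:
            groups.append([b])     # correlation dropped: new group
    return groups

bands = [[1, 2, 3, 4], [1.1, 2, 3.1, 4], [4, 3, 2, 1], [4, 3.1, 2, 1.1]]
print(group_bands(bands))  # [[0, 1], [2, 3]]
```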
| <python><pca><satellite-image><spectral-python> | 2023-09-24 10:13:22 | 1 | 2,236 | S.EB |
77,166,543 | 5,049,813 | Number in df column, but not in list version of that column | <p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code> if 0 in df[RATING_COL]:
rating_col_list = df[RATING_COL].to_list()
assert 0 in rating_col_list
</code></pre>
<p>The assert is triggering an <code>AssertionError</code>. How can this be possible? <strong>How can it be that there's a 0 in the column, but when I convert the column to a list, the 0 disappears?</strong></p>
<p>The dataframe I'm loading is based on MovieLens-1M and looks like:</p>
<pre><code>user_id,item_id,rating
1,1193000,2
1,1193001,3
1,1193002,4
1,1193003,5
1,1193004,6
1,1193005,7
1,1193006,8
1,1193007,9
1,1193008,10
1,661000,6
1,661001,7
1,661002,8
1,661003,9
1,661004,10
1,661005,9
1,661006,8
1,661007,7
1,661008,6
</code></pre>
<p>In this format, <code>1,1193008,10</code> indicates that user 1 rated item 1193 with an 8. The 10 indicates that this is the rating, and all other items starting with 1193 will have a rating lower than 10. (So <code>1,661004,10</code> indicates that user 1 rated item 661 with a 4.)</p>
<p>(Also, I've checked with CTRL-F: there is no 0 rating in the rating column.)</p>
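The likely culprit (an observation about the code above, not a change to it): <code>in</code> on a pandas Series tests membership in the <em>index</em>, not the values — the same way <code>in</code> on a dict tests keys. With a default RangeIndex, <code>0 in df[RATING_COL]</code> is True for any non-empty column. A plain dict shows the analogy:

```python
ratings = {0: 2, 1: 3, 2: 4}   # key = row label (index), value = rating
print(0 in ratings)            # True  -- checks the keys / index labels
print(0 in ratings.values())   # False -- no rating value is 0
# With a Series, the value-based checks are `0 in df[RATING_COL].values`
# or `df[RATING_COL].eq(0).any()`.
```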
| <python><pandas><assert> | 2023-09-24 09:57:54 | 2 | 5,220 | Pro Q |
77,166,452 | 1,954,677 | How to control pytest-cov (or generally any pytest plugin) from my own plugin | <p>I have a plugin <code>myplugin</code> with the following behavior: When calling <code>pytest ... --myplugin=X</code> it should trigger the same behavior as <code>pytest ... --cov=X --cov-report=json</code>.</p>
<p>I'm still new to <code>pytest</code> and while my implementation technically works, I feel very uncomfortable with it because my implementation seems to break <code>pytest</code> behavior (see below), and I cannot manage to find general enough information on the pytest plugin concept in the pytest API reference or tutorials/videos to understand my mistake.</p>
<p>As I'm eager to learn, my <strong>question</strong> here is twofold</p>
<ul>
<li>Concrete: What am I doing wrong in terms of pytest plugin design?</li>
<li>General: Are there better approaches for controlling another plugin? If yes, how would one apply them to <code>pytest_cov</code>?</li>
</ul>
<p>We start with an example test project</p>
<pre class="lang-py prettyprint-override"><code>
# myproject/src/__init__.py
def func():
return 42
# myproject/test_src.py
import src
def test_src():
assert src.func() == 42
</code></pre>
<p>Then there is the plugin</p>
<pre class="lang-py prettyprint-override"><code>import pytest
def pytest_addoption(parser):
group = parser.getgroup('myplugin')
group.addoption(
'--myplugin',
action='store',
dest='myplugin_source',
default=None,
)
def _reconfigure_cov_parameters(options):
options.cov_source = [options.myplugin_source]
options.cov_report = {
'json': None
}
# FIXME this solution to control pytest_cov strongly relies on their implementation details
# - because pytest_cov uses the same hook without hookwrapper,
# we are guaranteed to come first
# - we modify the config parameters, hence strongly rely on their interface
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_load_initial_conftests(early_config, parser, args):
print('\ncode point 1')
print('early_config.known_args_namespace.cov_source', early_config.known_args_namespace.cov_source)
print('early_config.known_args_namespace.cov_report', early_config.known_args_namespace.cov_report)
print('early_config.known_args_namespace.myplugin_source', early_config.known_args_namespace.myplugin_source)
if early_config.known_args_namespace.myplugin_source is not None:
_reconfigure_cov_parameters(early_config.known_args_namespace)
print('\ncode point 2')
print('early_config.known_args_namespace.cov_source', early_config.known_args_namespace.cov_source)
print('early_config.known_args_namespace.cov_report', early_config.known_args_namespace.cov_report)
print('early_config.known_args_namespace.myplugin_source', early_config.known_args_namespace.myplugin_source)
yield
def pytest_sessionfinish(session, exitstatus):
print('\ncode point 3')
print('session.config.option.cov_source=', session.config.option.cov_source)
print('session.config.option.cov_report', session.config.option.cov_report)
print('session.config.option.myplugin_source=', session.config.option.myplugin_source)
</code></pre>
<p>When I run the plugin, it technically does what it should: <code>cov</code> behaves exactly like I want, producing the <code>json</code> output.</p>
<p>However, if I look at the debug output, it is as follows (truncated to relevant details):</p>
<ul>
<li>scenario 1 without <code>myplugin</code></li>
</ul>
<pre><code>$ python -m pytest -vs test_src.py --cov=./src --cov-report=html
code point 1
early_config.known_args_namespace.cov_source ['./src']
early_config.known_args_namespace.cov_report {'html': None}
early_config.known_args_namespace.myplugin_source None
code point 2
early_config.known_args_namespace.cov_source ['./src']
early_config.known_args_namespace.cov_report {'html': None}
early_config.known_args_namespace.myplugin_source None
plugins: cov-4.1.0, myplugin-0.1.0
code point 3
session.config.option.cov_source= ['./src']
session.config.option.cov_report {'html': None}
session.config.option.myplugin_source= None
</code></pre>
<ul>
<li>scenario 2 with <code>myplugin</code></li>
</ul>
<pre><code>$ python -m pytest -vs --myplugin=./src
code point 1
early_config.known_args_namespace.cov_source []
early_config.known_args_namespace.cov_report {}
early_config.known_args_namespace.myplugin_source ./src
code point 2
early_config.known_args_namespace.cov_source ['./src']
early_config.known_args_namespace.cov_report {'json': None}
early_config.known_args_namespace.myplugin_source ./src
plugins: cov-4.1.0, myplugin-0.1.0
code point 3
session.config.option.cov_source= []
session.config.option.cov_report {}
session.config.option.myplugin_source= ./src
Coverage JSON written to file coverage.json
</code></pre>
<p>So what puzzles me here is that <code>myplugin</code> seems to break <code>pytest</code>'s processing of the <code>cov_source</code> option, so that my manipulations of <code>cov_</code> options in <code>early_config.known_args_namespace</code> are not correctly transferred to <code>session.config</code>. Even more surprising is that <code>cov</code> still sees my changes.</p>
<p>That is due to the fact that <code>cov</code> seems to mainly rely on <code>early_config.known_args_namespace</code>; maybe that is a non-standard paradigm which I shouldn't have followed.</p>
<p>Details:</p>
<ul>
<li>plugin structure taken from <a href="https://github.com/pytest-dev/cookiecutter-pytest-plugin" rel="nofollow noreferrer">https://github.com/pytest-dev/cookiecutter-pytest-plugin</a> and installed via <code>pip install -e</code>.</li>
<li><code>Python 3.10.12</code></li>
<li><code>pytest-7.3.1</code></li>
<li><code>cov-4.1.0</code></li>
</ul>
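One plausible explanation (an assumption, not confirmed from pytest internals): after <code>pytest_load_initial_conftests</code>, pytest performs a full re-parse of the command line when building <code>config.option</code>, which can overwrite edits made only to <code>early_config.known_args_namespace</code>. Mirroring the change in <code>pytest_configure</code> would keep <code>session.config.option</code> consistent too; sketched here with stand-in objects so it runs outside a pytest session:

```python
def _reconfigure_cov(options):
    # Same option rewrite as in the plugin above.
    options.cov_source = [options.myplugin_source]
    options.cov_report = {"json": None}

def pytest_configure(config):
    # Re-apply after pytest has built the final option namespace.
    if getattr(config.option, "myplugin_source", None) is not None:
        _reconfigure_cov(config.option)

# Stand-ins to illustrate the effect:
class NS:
    pass

cfg = NS()
cfg.option = NS()
cfg.option.myplugin_source = "./src"
cfg.option.cov_source, cfg.option.cov_report = [], {}
pytest_configure(cfg)
print(cfg.option.cov_source, cfg.option.cov_report)
```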
| <python><plugins><pytest><pytest-cov> | 2023-09-24 09:26:26 | 1 | 3,916 | flonk |
77,166,143 | 714,564 | Creating a custom Python type from existing dataclass / pydandic class | <p>I have a kind of standard response format I want to apply to all my fastAPI responses. For this example, we'll talk about something like this:</p>
<pre><code>@dataclass
class Response:
status: str = "OK"
data: Any
</code></pre>
<p>Where the data can be any other dataclass. However, I don't want to have to create a separate dataclass for each. In my dreams, I'd love to have the option to do something like</p>
<pre><code>@dataclass
class Customer:
name: str
age: int
@app.get()
def get_customer(customer_id) -> Response[Customer]:
....
</code></pre>
<p>Is there any way for me to do this in Python? Create these kind of custom types?</p>
<p>Thanks</p>
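For what it's worth, a sketch of the standard typing approach: make the response a generic dataclass via <code>typing.Generic</code>. Note the field order is flipped relative to the example above, since a dataclass cannot declare a non-default field after a defaulted one. FastAPI's handling of generic response models may add further requirements; treat this as a plain-typing sketch:

```python
from dataclasses import dataclass
from typing import Generic, TypeVar

T = TypeVar("T")

@dataclass
class Response(Generic[T]):
    data: T                 # non-default field must come before defaults
    status: str = "OK"

@dataclass
class Customer:
    name: str
    age: int

# Response[Customer] is a generic alias that is callable at runtime.
resp = Response[Customer](data=Customer(name="Ada", age=36))
print(resp.status, resp.data.name)  # OK Ada
```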
| <python><python-typing><pydantic><python-dataclasses> | 2023-09-24 07:45:07 | 1 | 671 | GuruYaya |
77,165,974 | 971,355 | NameError: name 'ExceptionGroup' is not defined | <p>I work through Python tutorial and ran against a sample in <a href="https://docs.python.org/3/tutorial/errors.html" rel="nofollow noreferrer">https://docs.python.org/3/tutorial/errors.html</a> in section <strong>8.9. Raising and Handling Multiple Unrelated Exceptions</strong> which doesn't work by me:</p>
<pre><code>$ python
Python 3.10.12 (main, Jun 11 2023, 05:26:28) [GCC 11.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> excs = [OSError('error 1'), SystemError('error 2')]
>>> raise ExceptionGroup('there were problems', excs)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'ExceptionGroup' is not defined
>>>
</code></pre>
<p>Why? Isn't <code>ExceptionGroup</code> a built-in exception? The compiler doesn't give any errors and the IDE pops up the documentation for this class...</p>
<p>The next thought was - I have to import something:</p>
<pre><code>>>> from builtins import ExceptionGroup
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 'ExceptionGroup' from 'builtins' (unknown location)
</code></pre>
<p>What's wrong?</p>
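The likely cause: <code>ExceptionGroup</code> only became a builtin in Python 3.11, and the interpreter shown above is 3.10.12. A version-aware sketch (the <code>exceptiongroup</code> backport mentioned in the comment is a separate PyPI package — an assumption that it fits your setup):

```python
import sys

excs = [OSError("error 1"), SystemError("error 2")]
if sys.version_info >= (3, 11):
    eg = ExceptionGroup("there were problems", excs)  # builtin since 3.11
    n = len(eg.exceptions)
else:
    # On 3.10, `pip install exceptiongroup` and import ExceptionGroup
    # from that package instead; it is not in builtins there.
    n = None
print(n)
```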
| <python><exception> | 2023-09-24 06:37:20 | 1 | 3,401 | ka3ak |
77,165,787 | 11,141,271 | Get current song title from Windows Media Player UI in the bottom left | <p><a href="https://i.sstatic.net/QIexZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/QIexZ.png" alt="enter image description here" /></a></p>
<p>I want to get the text from the UI element in the bottom left of Windows Media Player. I want to store it in a python variable. How can I programmatically navigate the UI hierarchy to achieve this?</p>
| <python><ui-automation><windows-media-player><microsoft-ui-automation><comtypes> | 2023-09-24 05:15:46 | 1 | 1,045 | TeamDman |
77,165,623 | 489,607 | Django: Validate DateTimeField based on another DateTimeField | <p>I'd like to validate a <code>DateTimeField</code> based on another <code>DateTimeField</code> in the same mode, like so:</p>
<pre><code>class Operation(models.Model):
machine = models.ForeignKey(Machine, on_delete=models.CASCADE)
start = models.DateTimeField(auto_now_add=True)
end = models.DateTimeField(
null=True,
blank=True,
validators=[
MinValueValidator(start)
]
)
</code></pre>
<p>I get a <code>TypeError</code> exception when POSTing:</p>
<blockquote>
<p><em>'<' not supported between instances of 'datetime.datetime' and 'DateTimeField'</em></p>
</blockquote>
<p>These are the variables:</p>
<blockquote>
<p>a
datetime.datetime(2023, 1, 10, 0, 25, 29, tzinfo=zoneinfo.ZoneInfo(key='America/Sao_Paulo'))</p>
<p>b
<django.db.models.fields.DateTimeField: start></p>
<p>self <django.core.validators.MinValueValidator object at 0x104ded4d0></p>
</blockquote>
<p>I suppose I need to extract the <code>datetime.datetime</code> from the field, but I can't seem to do it from inside the model validator.</p>
<p>Any tips? Thank you.</p>
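A plain-Python sketch of the usual fix (Django specifics omitted; in a real model this logic would live in <code>Model.clean()</code> and raise <code>django.core.exceptions.ValidationError</code>): compare the field <em>values</em> at validation time, instead of handing the field object itself to a validator at class-definition time, which is what produces the <code>TypeError</code> above:

```python
from datetime import datetime

class Operation:
    def __init__(self, start, end=None):
        self.start = start
        self.end = end

    def clean(self):
        # Both attributes are datetime *values* here, so comparison works.
        if self.end is not None and self.end < self.start:
            raise ValueError("end must not be earlier than start")

op = Operation(start=datetime(2023, 1, 10), end=datetime(2023, 1, 9))
try:
    op.clean()
    ok = True
except ValueError:
    ok = False
print(ok)  # False: end before start is rejected
```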
| <python><django><validation><django-models> | 2023-09-24 03:37:37 | 2 | 16,248 | davidcesarino |
77,165,603 | 308,827 | Repeating values in a specific pattern in pandas dataframe column | <p>I want to create a column in a pandas dataframe such that it contains values from 10 to 100 with an increment of 10. The value of 10 is assigned to the first 10 percent of the rows and the value of 20 is assigned to the next 10% of the rows and so on. Assuming that the dataframe has a length of 153, this is what I tried:</p>
<p><code>arr = range(10, 110, 10)</code><br />
<code>np.resize(arr, 153)</code></p>
<p>I get this:
<code>array([ 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100, 10, 20, 30])</code></p>
<p>How can I fix this? I want the foll. result:</p>
<p><code>array([ 10, 10, 10, 10, 10, 10,... 100, 100, 100, 100])</code></p>
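One way to get contiguous ~10% blocks instead of a cycling pattern (a sketch, assuming the default row order is the ranking you want): index the values array by each row's decile rather than resizing/tiling.

```python
import numpy as np

n = 153
values = np.arange(10, 110, 10)              # [10, 20, ..., 100]
# Row i falls in decile (i * 10) // n, giving 10 contiguous blocks.
col = values[np.arange(n) * len(values) // n]
print(col[:3], col[-3:])                     # [10 10 10] [100 100 100]
```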
| <python><pandas> | 2023-09-24 03:26:27 | 2 | 22,341 | user308827 |
77,165,535 | 2,357,712 | Can regex transformations be stored in a dictionary? | <p>I have thousands of lines of text to which I can apply a regex approach. I'm doing it in two steps, but I don't think my second step is possible.</p>
<ul>
<li>Step 1: Pick the correct regex pattern from a dictionary</li>
<li>Step 2: apply the correct transformation <strong>from a dictionary</strong> to create new output</li>
</ul>
<p>I'm trying to confirm that there is no way to store the transformation instruction in a dictionary.</p>
<p>I can solve the problem differently by having a decode function for each type, and then have a dictionary of decode functions, but I'm curious to know if there's a way to do it like I first attempted (below).</p>
<pre><code>text_lines = """
D3/25' 9
U-106.00
T-106.00
CX
PPlayHouse
LSchooling:School Fees
SSchooling:School Fees
EWeek 14
$-38.00
SSchooling:School Fees
EWeek 15
$-19.00
SSchooling:School Fees
EWeek 16
$-38.00
SSchooling:School Fees
$-11.00
"""
dict_regex_patterns = {
'AccountDefinitions': {
"D": r'(D)(\d{1,2})/\s*(\d{1,2})\'\s*(\d{1,2})$',
"S": r'(S)(.+)',
"E": r'(E)(.+)',
"$": r'($)(.+)',
...etc
},
'CatDefinitions': {
"T": r'(T)([+-]?\d+(\.\d{1,2})?)',
"U": r'(U)(.+)',
"C": r'(C)(.+)',
"P": r'(P)(.+)',
"L": r'(L)(.+)', },
...etc
}
# ------------------this wont work--------------------
dict_decoders = {
'AccountDefinitions': {
"D" : '20'+"{:02d}".format(int(source.group(4))) +f"-{source.group(2)}-{source.group(3)}",
...etc
}
}
# ------------------this wont work--------------------
....
current_column_name = None
lines = text_lines.splitlines()
for line in lines:
# If the line is one character long, consider it as part of the previous line
if len(line) >= 1:
current_column_name = line[0]
regXpatterns = dict_regex[fragmentType]
try:
matcher = re.match(regXpatterns[current_column_name], line)
decoder = re.match(regXpatterns[current_column_name], line)
except:
print('Error: No regex pattern defined for field value type ', current_column_name)
print (matcher)
if decoded:
answer = '20'+"{:02d}".format(int(match_date.group(4))) +f"-{match_date.group(2)}-{match_date.group(3)}"
print (line, ' <-> ', current_transaction['Date'])
</code></pre>
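To the core question: yes, the transformation can live in a dictionary, provided it is stored as a <em>function</em> (e.g. a lambda taking the match object) rather than as an expression evaluated at dict-creation time — the latter is why the attempt above cannot work. A small sketch with patterns adapted from the ones above:

```python
import re

# (pattern, transform) pairs; the transform is a *function* of the match
# object, so it can safely live in a dictionary.
decoders = {
    "D": (
        r"D(\d{1,2})/\s*(\d{1,2})'\s*(\d{1,2})$",
        lambda m: f"20{int(m.group(3)):02d}-{m.group(1)}-{m.group(2)}",
    ),
    "$": (r"\$(.+)", lambda m: float(m.group(1))),
}

def decode(line):
    pattern, transform = decoders[line[0]]
    m = re.match(pattern, line)
    return transform(m) if m else None

print(decode("D3/25' 9"))  # 2009-3-25
print(decode("$-38.00"))   # -38.0
```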
<p>This is an odd little file format called "QIF" (<a href="https://en.wikipedia.org/wiki/Quicken_Interchange_Format" rel="nofollow noreferrer">https://en.wikipedia.org/wiki/Quicken_Interchange_Format</a>) and I want to write my own convertor. Input QIF, output CSV.</p>
<p>For numerous reasons, not least of which is a peculiar implementation of this file format here, I want to create a function that will take a couple of lines of text and convert it to a CSV style set of records.</p>
<p>Input - as above, desired output - as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>D</th>
<th>U</th>
<th>T</th>
<th>C</th>
<th>P</th>
<th>L</th>
<th>S</th>
<th>E</th>
<th>$</th>
</tr>
</thead>
<tbody>
<tr>
<td>3/25' 9</td>
<td>-106.00</td>
<td>-106.00</td>
<td>X</td>
<td>PlayHouse</td>
<td>Schooling:School Fees</td>
<td>Schooling:School Fees</td>
<td>Week 14</td>
<td>-38.00</td>
</tr>
<tr>
<td>3/25' 9</td>
<td>-106.00</td>
<td>-106.00</td>
<td>X</td>
<td>PlayHouse</td>
<td></td>
<td>Schooling:School Fees</td>
<td>Week 15</td>
<td>-19.00</td>
</tr>
<tr>
<td>3/25' 9</td>
<td>-106.00</td>
<td>-106.00</td>
<td>X</td>
<td>PlayHouse</td>
<td></td>
<td>Schooling:School Fees</td>
<td>Week 16</td>
<td>-38.00</td>
</tr>
<tr>
<td>3/25' 9</td>
<td>-106.00</td>
<td>-106.00</td>
<td>X</td>
<td>PlayHouse</td>
<td></td>
<td>Schooling:School Fees</td>
<td></td>
<td>-11.00</td>
</tr>
</tbody>
</table>
</div>
<p>EDIT #2 - using Quiffen</p>
<p>Produces this:</p>
<pre><code>C:\Users\Maxcot>python
Python 3.11.3 (tags/v3.11.3:f3909b8, Apr 4 2023, 23:49:59) [MSC v.1934 64 bit (AMD64)] on win32
Type "help", "copyright", "credits" or "license" for more information.
>>> from quiffen import Qif, QifDataType
>>> import os
>>> import decimal
>>> folder = r'C:\Users\Maxcot\Desktop\Files'
>>> sourcefile = os.path.join(folder,'Exported.qif')
>>> qif = Qif.parse(sourcefile, day_first=False)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Python311\Lib\site-packages\quiffen\core\qif.py", line 195, in parse
new_category = Category.from_list(sanitised_section_lines)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python311\Lib\site-packages\quiffen\core\category.py", line 466, in from_list
raise ValueError(f"Unknown line code: {line_code}")
ValueError: Unknown line code: t
>>>
</code></pre>
| <python><regex> | 2023-09-24 02:19:06 | 1 | 1,617 | Maxcot |
77,165,423 | 2,223,706 | Strictly Convert JSON dict to dataclass with enums at runtime | <p>So, this actually does work:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from enum import Enum
from typing import Any, TypeVar
from pydantic import TypeAdapter, parse_obj_as
T = TypeVar('T')
def dict_to_dataclass_2(dict_: dict[Any, Any], dataclass_: type[T]) -> T:
adapter = TypeAdapter(dataclass_)
return adapter.validate_python(dict_)
class Color(Enum):
RED = 'red'
BLUE = 'blue'
@dataclass
class Fish:
color: Color
dict_to_dataclass_2({'color': 'red'}, Fish)
</code></pre>
<p>But I'd like to pass <code>strict=True</code> to <code>validate_python</code>. Is there any way to pass that option, but still handle the enum conversion?</p>
| <python><pydantic><python-dataclasses><typeddict> | 2023-09-24 01:04:18 | 0 | 4,643 | Garrett |
77,165,412 | 1,230,724 | Fastest way to assign series to dataframe | <p>I need to convert a series of a dataframe. The dataframe has about ~1M rows. The conversion works fine, but I noticed that assigning the series back to the dataframe (same column, with same index, name, etc) takes a lot of time (supposedly because the index is aligned, data is copied and some validation which isn't necessary).</p>
<p>Is there a way to bypass all of this and assign the new series to the dataframe?</p>
<p>My current (slow approach is):</p>
<pre><code>s1 = df[c]
s2 = convert(s1)
df[c] = s2
</code></pre>
<p>Is there anything glaringly obvious that I could try? I tried <code>.loc[:, c] = s2</code>, <code>.iloc[:, c_index] = s2</code> and <code>.loc[s2.index, c] = s2</code>, but there wasn't much of a difference. What other ways are there which I could try out?</p>
| <python><pandas> | 2023-09-24 00:56:25 | 1 | 8,252 | orange |
77,165,374 | 2,223,706 | Runtime checking for extra keys in TypedDict | <p><a href="https://stackoverflow.com/a/68483666/2223706">This answer</a> tells how to validated a TypedDict at runtime, but it won't catch the case of an extra key being present.</p>
<p>For example, this doesn't throw. How can I make it throw?</p>
<pre class="lang-py prettyprint-override"><code>from typing import Any, TypeVar
from pydantic import TypeAdapter
from typing_extensions import TypedDict
T = TypeVar('T')
def runtime_check_typed_dict(TypedDictClass: type[T], object_: Any) -> T:
TypedDictValidator = TypeAdapter(TypedDictClass)
return TypedDictValidator.validate_python(object_, strict=True)
class MyThing(TypedDict):
key: str
obj_: Any = {'key': 'stuff', 'extra': 'this should throw!'}
runtime_check_typed_dict(MyThing, obj_)
</code></pre>
| <python><pydantic><typeddict> | 2023-09-24 00:31:56 | 1 | 4,643 | Garrett |
77,165,193 | 3,917,465 | pytest @patch does not work for mocking the whole class (all the methods of the class) but works for mocking individual methods of the class | <p>I have two classes like below:</p>
<p>in parent.py:</p>
<pre><code>class TestClassParent(ABC):
def test_method(self):
print('the method TestMethod is called')
return True
</code></pre>
<p>in child.py:</p>
<pre><code>from src.scripts.parent import TestClassParent
class TestClassChild(TestClassParent):
custom_vars = [1, 2, 3]
</code></pre>
<p>I have a test class for child.py called, child_test.py like this:</p>
<pre><code>from unittest import TestCase
from unittest.mock import patch
from src.scripts.child import TestClassChild
class ChildTest(TestCase):
@patch('src.scripts.child.TestClassParent', autospec=True)
def test(self, mock_parent_class):
mock_parent_class.return_value.test_method.return_value = False
i = TestClassChild()
a = i.test_method()
assert a is False
</code></pre>
<p>Looks like the patch <code>@patch('src.scripts.child.TestClassParent', autospec=True)</code> didn't work and when <code>a = i.test_method()</code> is executed, the actual implementation is run.</p>
<p>However mocking an individual methods of the <code>TestClassParent</code> works as expected for example if I change the implementation like below, it works fine:</p>
<pre><code>@patch('src.scripts.child.TestClassParent.test_method', autospec=True)
def test(self, mock_method):
mock_method.return_value = False
i = TestClassChild()
a = i.test_method()
assert a is False
</code></pre>
<p>I also read from <a href="https://stackoverflow.com/questions/55597811/pytest-patching-a-class-does-not-work-calls-class-instead">pytest - Patching a class does not work, calls class instead</a> that I should mock the class where it's used not where it's defined and I did it exactly like that, the <code>TestClassParent</code> was used inside <code>TestClassChild</code> so I mocked where it's used: <code>@patch('src.scripts.child.TestClassParent')</code> but no luck.
I even mocked both <code>TestClassChild</code> and <code>TestClassParent</code> (both the usage and the definition), still no luck, and also removed <code>autospec=True</code>, but again no luck.</p>
<p>Any ideas ??</p>
<p>Any help would be hugely appreciated.</p>
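This matches how name binding works at import time (a sketch with plain classes, not mocks): <code>@patch('src.scripts.child.TestClassParent')</code> only rebinds the <em>name</em> in <code>child.py</code>, but <code>TestClassChild</code>'s bases were fixed when its <code>class</code> statement ran, so method lookup never consults the module-level name again. Patching an attribute on the real class (the second, working variant) does take effect, because lookup walks the MRO at call time:

```python
class Parent:
    def test_method(self):
        return True

class Child(Parent):
    pass

# Rebinding the module-level name (all @patch does) changes nothing for Child:
Parent = object
before = Child().test_method()   # still True

# Patching the attribute on the *real* parent class does take effect:
Child.__mro__[1].test_method = lambda self: False
after = Child().test_method()
print(before, after)  # True False
```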
| <python><pytest><pytest-mock><pytest-mock-patch> | 2023-09-23 22:55:16 | 2 | 4,500 | Code_Worm |
77,165,149 | 19,628,700 | setting a lower bound constraint based on condition in gekko | <p>I am trying to set up something similar to the facility location problem using gekko in python. I'm trying to minimize the distance between a set of facilities and corresponding counties while ensuring that if a facility is used as an optimal assignment it accounts for at least 10% of the total population in all of the counties.</p>
<p>I'm trying to allow the algorithm to decide the best facilities to use, so I don't want to limit how many it can select.</p>
<p>So, my objective is to minimize the distance between the chosen facilities and the counties that are mapped to them. My constraints are that each county can only be mapped to one facility, and, if a facility is selected in the final solution, the sum of the population in each county mapped to that facility must be at least 10% of the total population in all the counties.</p>
<p>I keep getting "@error: Solution Not Found" and don't know what other steps to take using this method. So, is there a <strong>better way to set this problem up</strong>, and if not, <strong>what other algorithms would you recommend for solving this problem?</strong></p>
<p><strong>Relevant Data</strong></p>
<ul>
<li><p>Note: this is a very small selection of my real dataset, I'm just trying to show what it looks like</p>
</li>
<li><p>county_idx = list of integers from 0 to I-1 where I = total # of counties</p>
</li>
<li><p>selected_facilities = list of integers from 0 to J-1 where J = total # of facilities</p>
</li>
<li><p>distances_mapped = array where rows are the counties index and columns are the facility index. The values are corresponding distances between the county and facility</p>
</li>
<li><p>populations_mapped = pandas series that has the population of each county where the rows are indexed by the county_idx and value is the corresponding population</p>
</li>
</ul>
<hr />
<pre><code>county_idx = [0,1,2]
selected_facilities = [0, 1, 2]
distances_mapped = np.array([[193, 85, 226],
[139, 112, 241],
[175, 110, 249]])
populations_mapped = [981447, 327286, 176622]
cutoff_population = 153
</code></pre>
<hr />
<p><strong>Relevant Code</strong></p>
<pre><code>m = GEKKO(remote=False)
n_rows = len(county_idx)
n_cols = len(selected_facilities)
# Create binary variables for facility selection
facility_assignment = m.Array(m.Var, (n_cols), lb=0, ub=1, integer=True)
# Create binary variables for county-facility mapping
assignment_matrix = m.Array(m.Var, (n_rows, n_cols), lb=0, ub=1, integer=True)
# Define the objective function - weighted sum of distances
m.Minimize(m.sum(assignment_matrix * distances_mapped))
# Each county is serviced by exactly one facility
for i in county_idx:
m.Equation(m.sum([assignment_matrix[(i, j)] for j in selected_facilities]) == 1)
# need to bind facility assignment to make sure it ties to the assignment matrix
for j in selected_facilities:
# need to make sure facility assignment updates when a facility is selected
# so, if any of the counties i have a facility j mapped to it then facility_assignement needs to keep track of that
m.Equation(facility_assignment[j] == m.sum([assignment_matrix[(i, j)] for i in county_idx]))
# Ensure that the population assigned to a facility is >= x% of the total population
for j in selected_facilities:
# logic: if a facility is selected then the sum of its population must be >= x% of the total population
m.Equation(m.sum([assignment_matrix[(i, j)] * populations_mapped.iloc[i, 0] for i in county_idx]) >= cutoff_population * facility_assignment[j])
# ensures integer solutions using apopt solver
m.options.SOLVER = 1
m.options.IMODE = 3
# Solve the optimization problem
m.solve(disp=True, debug=True)
</code></pre>
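<p>As an aside (a small sketch, independent of GEKKO): if <code>populations_mapped</code> really is a one-dimensional pandas Series, two-dimensional indexing like <code>.iloc[i, 0]</code> raises; converting it to a plain array first keeps the indexing uniform whether the data arrived as a list or a Series:</p>

```python
import numpy as np
import pandas as pd

populations_mapped = pd.Series([981447, 327286, 176622])

# np.asarray accepts a list, a Series, or a single column alike,
# so pops[i] works regardless of the original container
pops = np.asarray(populations_mapped)
print(int(pops[1]))  # 327286
```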
| <python><linear-programming><gekko><integer-programming> | 2023-09-23 22:36:46 | 1 | 311 | finman69 |
77,165,100 | 5,843,769 | Only the first row of annotations displayed on seaborn heatmap | <p>As is usually advised, I have reduced my problem to a minimal reproducible example:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
matrix = np.array([[0.1234, 1.4567, 0.7890, 0.1234],
[0.9876, 0, 0.5432, 0.6789],
[0.1111, 0.2222, 0, 0.3333],
[0.4444, 0.5555, 0.6666, 0]])
sns.heatmap(matrix, annot=True)
plt.show()
</code></pre>
<p>Vaguely based on Seaborn <a href="https://seaborn.pydata.org/generated/seaborn.heatmap.html?highlight=heatmap#seaborn.heatmap:%7E:text=values%20with%20text%3A-,sns.heatmap(glue%2C%20annot%3DTrue),-Control%20the%20annotations" rel="noreferrer">official documentation</a>.</p>
<p>Unfortunately, unlike what would be expected (all numbers visible), I get only the numbers in the top row visible:</p>
<p><a href="https://i.sstatic.net/iedcm.png" rel="noreferrer"><img src="https://i.sstatic.net/iedcm.png" alt="enter image description here" /></a></p>
<hr />
<p>As there is not really much room for error in this one, I'm out of ideas and google/SO doesn't seem to have this question asked before. Is this a bug?</p>
<hr />
<p>I am running:</p>
<pre><code>Seaborn 0.12.2
Matplotlib 3.8.0
PyCharm 2023.1.4
Windows 10
</code></pre>
| <python><matplotlib><seaborn> | 2023-09-23 22:19:13 | 6 | 1,558 | Mantas KandrataviΔius |
77,164,983 | 1,080,189 | Type hinting Python dataclasses subclass without redefining default value | <p>When using keyword only dataclasses to define fields of a base class that are inherited by subclasses, how should the fields be type hinted in the subclasses to denote that the fields have a limited set of permissible values without re-assigning the default value?</p>
<p>Example (for illustrative purposes - not a real world example):</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from enum import auto, StrEnum
from typing import Literal
class Direction(StrEnum):
DOWN = auto()
LEFT = auto()
UP = auto()
RIGHT = auto()
UNKNOWN = auto()
@dataclass(kw_only=True)
class Base:
direction: Direction = Direction.UNKNOWN
@dataclass(kw_only=True)
class Horizontal(Base):
direction: Literal[Direction.LEFT, Direction.RIGHT, Direction.UNKNOWN]
@dataclass(kw_only=True)
class Vertical(Base):
direction: Literal[Direction.DOWN, Direction.UP, Direction.UNKNOWN] = Direction.UNKNOWN
print(Horizontal())
print(Vertical())
</code></pre>
<p>I appreciate that this is purely type hinting and not enforced at runtime.</p>
<p>The <code>Horizontal</code> class can be instantiated without error (surprisingly, given that <code>direction</code> is redefined in this subclass without a default value), but pylint reports the following: "E1125: Missing mandatory keyword argument 'direction' in constructor call".
Setting a default value for <code>direction</code> in <code>Vertical</code> defeats the point of the default in the base class.</p>
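<p>For what it's worth, the "surprising" instantiation can be reproduced with a minimal stdlib sketch (no <code>kw_only</code> or <code>Literal</code> needed; the class names here are made up): the dataclass machinery looks defaults up with <code>getattr(cls, name, MISSING)</code>, which finds the base class's class attribute through ordinary inheritance. That is why runtime succeeds while a static checker, seeing no default on the re-declared field, flags it:</p>

```python
from dataclasses import dataclass, fields

@dataclass
class Base:
    direction: str = "unknown"

@dataclass
class Sub(Base):
    # Re-annotated with no default, yet dataclasses still finds
    # Base's class attribute "unknown" via getattr on the subclass.
    direction: str

print(Sub())                   # Sub(direction='unknown')
print(fields(Sub)[0].default)  # unknown
```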
| <python><python-typing><python-dataclasses> | 2023-09-23 21:28:25 | 0 | 1,626 | gratz |
77,164,975 | 20,258,214 | _tkinter.TclError: invalid command name | <p>I am getting the following error while dragging the window to another screen:</p>
<pre><code>Exception in Tkinter callback
_tkinter.TclError: invalid command name ".!sortingframe.!dynamicgrid.!imagebuttonwidget"
</code></pre>
<p>I am trying to use <a href="https://stackoverflow.com/users/7432/bryan-oakley">@bryan-oakley</a> solution from <a href="https://stackoverflow.com/questions/47704736/tkinter-grid-dynamic-layout">Tkinter Grid Dynamic Layout</a> to show buttons in a dynamic grid layout. When one button is pressed in the frame, I need to delete all the buttons inside the grid and clear the <code>tk.Text</code>. I tried adding a function for clearing the content:</p>
<pre><code> def clear_all(self):
self.text.configure(state='normal')
tags = self.text.tag_names()
for tag in tags:
self.text.tag_delete(tag)
self.text.delete('1.0', tk.END)
self.text.configure(state='disabled')
</code></pre>
<p>I've read that this Exception occurs when attempting to access a widget that has already been destroyed.</p>
<p>To my understanding, the error message suggests that the error is caused by trying to access an <code>ImageButtonWidget</code>.
I am getting the error when dragging the window to another screen, after clearing the <code>tk.Text</code>, so I assume the error comes from the way I am deleting the buttons.</p>
<p>Besides the <code>tk.Text</code> widget, I don't have any other references to the buttons.</p>
<p>I have no idea what I am doing wrong.</p>
<p>Thank you for your help.</p>
<hr />
<p>Edit:</p>
<pre class="lang-py prettyprint-override"><code>import tkinter as tk
import customtkinter as ctk
class DynamicGrid(ctk.CTkFrame):
def __init__(self, parent, *args, **kwargs):
super().__init__(parent, *args, **kwargs)
bg = 'gray80'
self.text = tk.Text(self, wrap="char", borderwidth=0,
background=bg, selectbackground=bg, selectforeground=bg,
exportselection=0, insertofftime=0, relief="flat",
state="disabled", spacing2=8)
self.text.grid(row=0, column=0, sticky='nswe')
def append(self, object):
self.text.configure(state="normal")
self.text.window_create("end", window=object)
self.text.insert("end", ' ')
self.text.configure(state="disabled")
def clear_all(self):
self.text.configure(state='normal')
tags = self.text.tag_names()
for tag in tags:
self.text.tag_delete(tag)
self.text.delete('1.0', tk.END)
self.text.configure(state='disabled')
class ImageButtonWidget(ctk.CTkButton):
def __init__(self, parent, filename):
super().__init__(parent)
self.configure(text=filename)
class SortingFrame(ctk.CTkFrame):
def __init__(self, parent):
super().__init__(parent)
self.images_frame = DynamicGrid(self)
self.create_images()
self.create_images()
self.images_frame.grid(row=1, column=0, sticky='nswe', columnspan=2)
def create_images(self):
# Create and display ImageButtonWidget instances for each image file
self.images_frame.clear_all()
for filename in range(2):
image_widget = ImageButtonWidget(self.images_frame, str(filename)+'.png')
self.images_frame.append(image_widget)
class App(ctk.CTk):
def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.title("Example")
SortingFrame(self).grid(row=0, column=0, sticky='nsew')
if __name__ == "__main__":
app = App()
app.mainloop()
</code></pre>
<p>A full traceback:</p>
<pre><code>Exception in Tkinter callback
Traceback (most recent call last):
File "...AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1921, in __call__
return self.func(*args)
File "...AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 839, in callit
func(*args)
File "...\explorerEnv\lib\site-packages\customtkinter\windows\widgets\scaling\scaling_tracker.py", line 187, in check_dpi_scaling
cls.update_scaling_callbacks_for_window(window)
File "...\explorerEnv\lib\site-packages\customtkinter\windows\widgets\scaling\scaling_tracker.py", line 64, in update_scaling_callbacks_for_window
set_scaling_callback(cls.window_dpi_scaling_dict[window] * cls.widget_scaling,
File "...\explorerEnv\lib\site-packages\customtkinter\windows\widgets\ctk_button.py", line 136, in _set_scaling
super()._set_scaling(*args, **kwargs)
File "...\explorerEnv\lib\site-packages\customtkinter\windows\widgets\core_widget_classes\ctk_base_class.py", line 228, in _set_scaling
super().configure(width=self._apply_widget_scaling(self._desired_width),
File "...AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1675, in configure
return self._configure('configure', cnf, kw)
File "...AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 1665, in _configure
self.tk.call(_flatten((self._w, cmd)) + self._options(cnf))
_tkinter.TclError: invalid command name ".!sortingframe.!dynamicgrid.!imagebuttonwidget"
</code></pre>
| <python><tkinter> | 2023-09-23 21:24:50 | 1 | 1,598 | nokla |
77,164,963 | 11,494,082 | How to Merge Fine-tuned Adapter and Pretrained Model in Hugging Face Transformers and Push to Hub? | <p>I have fine-tuned the Llama-2 model following the <a href="https://github.com/facebookresearch/llama-recipes" rel="nofollow noreferrer">llama-recipes</a> repository's <a href="https://github.com/facebookresearch/llama-recipes/blob/main/examples/quickstart.ipynb" rel="nofollow noreferrer">tutorial</a>. Currently, I have the pretrained model and fine-tuned adapter stored in two separate directories as follows:</p>
<p>Pretrained Model Directory:</p>
<pre><code>Llama2-Finetuning/models_hf/
βββ 7B
βββ config.json
βββ generation_config.json
βββ pytorch_model-00001-of-00002.bin
βββ pytorch_model-00002-of-00002.bin
βββ pytorch_model.bin.index.json
βββ special_tokens_map.json
βββ tokenizer.json
βββ tokenizer.model
βββ tokenizer_config.json
</code></pre>
<p>Fine-tuned Adapter Directory:</p>
<pre><code>Llama2-Finetuning/tmp/llama-output/
βββ README.md
βββ adapter_config.json
βββ adapter_model.bin
βββ logs
</code></pre>
<p>I want to achieve two things:</p>
<ol>
<li><strong>Merge Pretrained Model and Adapter as a Single File</strong>:
I have observed that when I push the model to the Hugging Face Hub using <code>model.push_to_hub("myrepo/llama-2-7B-ft-summarization")</code>, it only pushes the adapter weights. How can I merge the pretrained model and fine-tuned adapter into a single file, similar to how "TheBloke" does, and upload them together to the Hugging Face Hub?</li>
<li><strong>Push and Load Pretrained Model and Adapter Separately</strong>:
Alternatively, I'd like to know how to push the pretrained model and fine-tuned adapter from their respective directories separately to the Hub, while still being able to load them together in my Python code for inference, just like how I loaded them from directories using the code below:</li>
</ol>
<pre><code>import torch
from transformers import LlamaForCausalLM, LlamaTokenizer
model_id="./models_hf/7B"
adapter_id="./tmp/llama-output"
tokenizer = LlamaTokenizer.from_pretrained(model_id)
model = LlamaForCausalLM.from_pretrained(model_id, load_in_8bit=True, device_map='auto', torch_dtype=torch.float16)
model.load_adapter(adapter_id)
</code></pre>
<p>I appreciate any guidance on how to achieve these two objectives efficiently. Thank you!</p>
| <python><huggingface-transformers><huggingface><llama><huggingface-hub> | 2023-09-23 21:20:21 | 2 | 315 | Aun Zaidi |
77,164,957 | 5,956,947 | Python: Using kwargs with patch() decorator | <p><strong>I can only use positional arguments with a function that uses <code>@patch</code></strong>. Running <code>being_patched(True)</code> works. Running <code>being_patched(arg=True)</code> does not.</p>
<p>The general principle is that positional args cannot come after keyword args. I just find the interaction with <code>@patch</code> to be unfortunate.</p>
<p><strong>Is there an alternative?</strong></p>
<p>I am running Python 3.11.</p>
<pre><code>import traceback
from unittest.mock import patch
import os.path
@patch('os.path.isfile')
def being_patched(arg, isfile_mock):
print(f"Inside being_patched. arg={arg} isfile_mock={isfile_mock}")
if __name__ == '__main__':
being_patched(True) # <== Works
print("\n----------\n")
try:
being_patched(arg=True) # <== Fails
except TypeError:
traceback.print_exc()
</code></pre>
<p>Produces this output:</p>
<pre><code>Inside being_patched. arg=True isfile_mock=<MagicMock name='isfile' id='3181488491344'>
----------
Traceback (most recent call last):
File "C:\Users\Mike Ulm\PycharmProjects\Exercises\USDataMap\patch_after_kwargs.py", line 13, in <module>
being_patched(arg=True)
File "C:\PythonFiles\Python311\Lib\unittest\mock.py", line 1375, in patched
return func(*newargs, **newkeywargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: being_patched() got multiple values for argument 'arg'
</code></pre>
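<p>One alternative sometimes used (a sketch, not the only option; it reuses the question's function name for illustration) is applying <code>patch</code> as a context manager inside the function, so the signature keeps only your own parameters and keyword calls work as usual:</p>

```python
from unittest.mock import patch
import os.path

def being_patched(arg):
    # The patch lives only inside the with-block; no extra
    # positional mock parameter is injected into the signature.
    with patch("os.path.isfile") as isfile_mock:
        return arg, isfile_mock

result, mock = being_patched(arg=True)  # keyword call now works
print(result)  # True
```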
<p>Running python 3.11</p>
| <python><python-3.x><mocking><monkeypatching> | 2023-09-23 21:18:26 | 0 | 400 | Mike Ulm |
77,164,909 | 17,342,313 | module "QtQml.StateMachine" is not installed in PyQT application | <p>I wanted to use <a href="https://doc.qt.io/qt-6/qml-qtqml-statemachine-state.html" rel="nofollow noreferrer">Qt State Machine QML</a> in my PyQt 6 app, however when I try to run it, I get the following error:</p>
<pre class="lang-bash prettyprint-override"><code>QQmlApplicationEngine failed to load component
.../src/qml/Main.qml:8:1: module "QtQml.StateMachine" is not installed
</code></pre>
<p>Code:</p>
<ul>
<li>main.py:</li>
</ul>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt6.QtGui import QGuiApplication
from PyQt6.QtQml import QQmlApplicationEngine
from PyQt6.QtCore import QObject, pyqtSlot, pyqtSignal
app = QGuiApplication(sys.argv)
engine = QQmlApplicationEngine()
engine.quit.connect(app.quit)
engine.load('qml/Main.qml')
if not engine.rootObjects():
sys.exit(-1)
sys.exit(app.exec())
</code></pre>
<ul>
<li>qml/Main.qml:</li>
</ul>
<pre><code>import QtQuick
import QtQml.StateMachine as DSM

Window {
... // Qt State Machine QML Guide code
DSM.StateMachine {
id: stateMachine
// set the initial state
initialState: s1
// start the state machine
running: true
DSM.State {
id: s1
DSM.SignalTransition {
targetState: s2
signal: button.clicked
}
onEntered: console.log("s1 entered")
onExited: console.log("s1 exited")
}
DSM.State {
id: s2
onEntered: console.log("s2 entered")
onExited: console.log("s2 exited")
}
}
}
</code></pre>
<p>To run the app I use pip to install the following packages from <code>requirements.txt</code>:</p>
<pre><code>PyQt6
pyqt6-plugins
PyQt6-Qt6
PyQt6-sip
pyqt6-tools
python-dotenv
qt6-applications
qt6-tools
</code></pre>
<p>I think the issue is with some missing package not defined there.</p>
<p>I installed <a href="https://archlinux.org/packages/extra/x86_64/qt6-scxml/" rel="nofollow noreferrer">qt6-scxml package</a> for Arch but I still get the same error and right now I am completely clueless.</p>
<p>Does anyone know what the missing pip package is?</p>
<p>Is it even possible to run it in PyQt6?</p>
| <python><qt><qml><pyqt6><qt6> | 2023-09-23 21:06:13 | 1 | 554 | satk0 |
77,164,700 | 7,212,809 | How to view plot in ipython? | <p>Repro</p>
<pre><code>In [2]: df = pd.read_csv('https://raw.githubusercontent.com/facebook/prophet/main/examples/example_pedestr
...: ians_covid.csv')
...:
In [3]: df.set_index('ds').plot();
...:
</code></pre>
<p>But this isn't plotting anything? How do I view the plot?</p>
<p>Following docs here: <a href="https://facebook.github.io/prophet/docs/handling_shocks.html#treating-covid-19-lockdowns-as-a-one-off-holidays" rel="nofollow noreferrer">https://facebook.github.io/prophet/docs/handling_shocks.html#treating-covid-19-lockdowns-as-a-one-off-holidays</a></p>
| <python><ipython> | 2023-09-23 19:58:30 | 0 | 7,771 | nz_21 |
77,164,571 | 11,686,518 | Reverse a series within DataFrame to new column | <p>How would I reverse the order of a series within a DataFrame into a new column?</p>
<pre><code>data = [["US Dollar", 7.00],
['Euro', 2.00],
['British Pound', 4.00],
['Indian Rupee', 2.00],
['Australian Dollar', 9.00],
['Canadian Dollar', 3.00],
['Singapore Dollar', 3.00],
['Swiss Franc', 5.00]
]
df = pd.DataFrame(data, columns=['Currency', 'Order'])
</code></pre>
<p>I tried using <code>iloc</code> to do this, but it gives me the same order back.</p>
<pre><code>
df['Reverse Order'] = df['Order'].iloc[::-1]
df
# Currency Order Reverse Order
# 0 US Dollar 7.0 7.0
# 1 Euro 2.0 2.0
# 2 British Pound 4.0 4.0
# 3 Indian Rupee 2.0 2.0
# 4 Australian Dollar 9.0 9.0
# 5 Canadian Dollar 3.0 3.0
# 6 Singapore Dollar 3.0 3.0
# 7 Swiss Franc 5.0 5.0
</code></pre>
<p>I'm trying to get the reverse order such that</p>
<pre><code># Currency Order Reverse Order
# 0 US Dollar 7.0 5.0
# 1 Euro 2.0 3.0
# 2 British Pound 4.0 3.0
# 3 Indian Rupee 2.0 9.0
# 4 Australian Dollar 9.0 2.0
# 5 Canadian Dollar 3.0 4.0
# 6 Singapore Dollar 3.0 2.0
# 7 Swiss Franc 5.0 7.0
</code></pre>
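<p>A sketch of what is happening (assuming the data above): <code>.iloc[::-1]</code> does reverse the rows, but assigning a Series back into the frame realigns it on the index, which undoes the reversal. Dropping the index first, e.g. via <code>.to_numpy()</code>, sidesteps the alignment:</p>

```python
import pandas as pd

df = pd.DataFrame({"Order": [7.0, 2.0, 4.0, 2.0, 9.0, 3.0, 3.0, 5.0]})

# to_numpy() discards the index, so assignment keeps positional order
df["Reverse Order"] = df["Order"].to_numpy()[::-1]
print(df["Reverse Order"].tolist())  # [5.0, 3.0, 3.0, 9.0, 2.0, 4.0, 2.0, 7.0]
```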
| <python><pandas> | 2023-09-23 19:13:58 | 4 | 1,633 | Jesse Sealand |
77,164,541 | 395,857 | How can I programmatically remove all the datasets that HuggingFace has saved on the disk? | <p>I have downloaded a HuggingFace dataset (<a href="https://huggingface.co/datasets/uonlp/CulturaX" rel="nofollow noreferrer"><code>uonlp/CulturaX</code></a>) with the following Python code:</p>
<pre><code>from datasets import load_dataset
ds = load_dataset("uonlp/CulturaX", "ar")
</code></pre>
<p>It downloaded all the .parquet files of the HuggingFace dataset and also generated the .arrow files after the download was completed:</p>
<p><a href="https://i.sstatic.net/bgp3u.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bgp3u.png" alt="enter image description here" /></a></p>
<ul>
<li><code>downloads</code> contains the .parquet files (original dataset data files and format).</li>
<li><code>uonlp___cultura_x</code> contains the .arrow files.</li>
</ul>
<p>How can I programmatically remove all the datasets that HuggingFace has saved on the disk? I want to remove both the generated .arrow files and the original dataset data files. Is there some Python function for that?</p>
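<p>For context, the <code>datasets</code> library offers per-object helpers such as <code>Dataset.cleanup_cache_files()</code> (worth checking its docs), while the blunt programmatic route is deleting the cache directory itself. A sketch — the default path and the function name below are assumptions; honor <code>HF_DATASETS_CACHE</code> on your machine:</p>

```python
import os
import shutil
from pathlib import Path

def remove_hf_datasets_cache(cache_root=None):
    """Delete the whole datasets cache: downloads/ (.parquet files)
    and the generated .arrow directories. Default path is an
    assumption; override via cache_root or HF_DATASETS_CACHE."""
    root = Path(cache_root) if cache_root else Path(
        os.environ.get("HF_DATASETS_CACHE",
                       Path.home() / ".cache" / "huggingface" / "datasets"))
    if root.exists():
        shutil.rmtree(root)
        return True
    return False
```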
| <python><download><huggingface><huggingface-datasets> | 2023-09-23 19:06:34 | 1 | 84,585 | Franck Dernoncourt |
77,164,443 | 4,811,803 | Why does my match/case statement not work for class members? | <p>I'm finally using a Python version with support for the <code>match</code> statement, and was surprised that one of my <code>case</code>s didn't match. It seems to work fine for local or global variables, but not class/object members.</p>
<p>I've boiled it down to this example:</p>
<pre><code>#!/usr/bin/env python3
globl = None
class MyObj(object):
pass
def check(v):
global globl
globl = 2
m = MyObj()
m.val = 2
local = 2
match v:
case globl:
print(v, 'globl OK', globl)
match v:
case local:
print(v, 'local OK', local)
match v:
case m.val:
print(v, 'm.val OK', m.val)
check(1)
check(2)
check(3)
</code></pre>
<p>...which gives this output:</p>
<pre><code>tmp$ ./match-experiment.py
1 globl OK 1
1 local OK 1
2 globl OK 2
2 local OK 2
2 m.val OK 2
3 globl OK 3
3 local OK 3
</code></pre>
<p>It looks like the global and local variable are used "by reference", while the class member is used "by value". So the normal variables are assigned the matched-on value, as I expect, but the class member <code>case</code> is only executed when the current value of that variable is equal to the matched-on <code>v</code>.</p>
<p>Why does the match/case statement not work for class members?</p>
| <python><pattern-matching> | 2023-09-23 18:37:03 | 2 | 6,886 | Snild Dolkow |
77,164,318 | 1,170,805 | Error with LangChain ChatPromptTemplate.from_messages | <p>As shown in <a href="https://python.langchain.com/docs/get_started/quickstart" rel="nofollow noreferrer">LangChain Quickstart</a>, I am trying the following Python code:</p>
<pre><code>from langchain.prompts.chat import ChatPromptTemplate
template = "You are a helpful assistant that translates {input_language} to {output_language}."
human_template = "{text}"
chat_prompt = ChatPromptTemplate.from_messages([
("system", template),
("human", human_template),
])
chat_prompt.format_messages(input_language="English", output_language="French", text="I love programming.")
</code></pre>
<p>But when I run the above code, I get the following error:</p>
<pre><code>Traceback (most recent call last):
File "/home/yser364/Projets/SinappsIrdOpenaiQA/promptWorkout.py", line 6, in <module>
chat_prompt = ChatPromptTemplate.from_messages([
File "/home/yser364/.local/lib/python3.10/site-packages/langchain/prompts/chat.py", line 220, in from_messages
return cls(input_variables=list(input_vars), messages=messages)
File "/home/yser364/.local/lib/python3.10/site-packages/langchain/load/serializable.py", line 64, in __init__
super().__init__(**kwargs)
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 4 validation errors for ChatPromptTemplate
messages -> 0
value is not a valid dict (type=type_error.dict)
messages -> 0
value is not a valid dict (type=type_error.dict)
messages -> 1
value is not a valid dict (type=type_error.dict)
messages -> 1
value is not a valid dict (type=type_error.dict)
</code></pre>
<p>I use Python 3.10.12.</p>
| <python><langchain> | 2023-09-23 18:00:22 | 3 | 380 | Yannick Serra |
77,164,310 | 4,396,778 | How to enter text into a website via selenium and xpath | <p>I am using selenium to try and input text into a text field on a webpage via its xpath. My code to do this is as follows:</p>
<pre><code>import names
from fake_email import Email
import time
import undetected_chromedriver as uc
from selenium.webdriver.common.by import By
#initialize_VPN(save=1,area_input=['united kingdom'])
url = "https://freesim.vodafone.co.uk/check-out-payasyougo-campaign?kw=2076373398_78217941804_388970929371_aud-857274915513:kwd-319973502705_c___k_CjwKCAjwmbqoBhAgEiwACIjzEJyXn61Hv89l8EceOUp8kVymNDekTIKzbdghus8v4g_3lYX7eaFMGBoCiqIQAvD_BwE_k_;expubdata=ken_clickid;cpdir=%7bunescapedlpurl%7d&cid=ppc-UK_19_AO_P_X_J_I_D_PAYG_BAU_Promo_Text-Brand_Google_Free-SIM_NA_NA_Free-SIM_Mix_Mix_NA_Exact__2076373398&kpid=go_cmp-2076373398_adg-78217941804_ad-388970929371_aud-857274915513:kwd-319973502705_dev-c_ext-_prd-&vfadid=2076373398&cid=ppc&gad=1&gclid=CjwKCAjwmbqoBhAgEiwACIjzEJyXn61Hv89l8EceOUp8kVymNDekTIKzbdghus8v4g_3lYX7eaFMGBoCiqIQAvD_BwE&gclsrc=aw.ds"
driver = uc.Chrome()
fname = names.get_first_name()
driver.get(url)
time.sleep(10)
button = driver.find_element(by="xpath", value='//*[@id="onetrust-accept-btn-handler"]')
button.click()
time.sleep(10)
fname_field = driver.find_element(by="xpath", value='//*[@id="txtFirstName"]')
fname_field.send_keys(fname)
</code></pre>
<p>I am 99% sure the XPath is correct since I inspected the element and used the copy-XPath function. I am really not sure where the error could be; any help would be appreciated.</p>
| <python><selenium-webdriver><xpath> | 2023-09-23 17:58:24 | 3 | 335 | tribo32 |
77,164,221 | 10,452,700 | Visualizaion of count records of categorical variables including their missing values (`None` or `NaN`) | <p>Let's say I have the following time series data dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import random
np.random.seed(2019)
# Generate TS
#rng = pd.date_range('2019-01-01', freq='MS', periods=N)
ts = pd.date_range('2000-01-01', '2000-12-31 23:00', freq='H') #.strftime('%d-%m-%Y %H:%M:%S') # freq='MS'set the frequency of date in months and start from day 1. You can use 'T' for minutes and so on
# number of samples
N = len(ts)
# Create a random dataset
data = {
#"TS": ts,
'Appx': [random.choice(['App1', 'App2', 'App3', None]) for _ in range(N)], # generate categorical data including missing data "None"
'VM': [random.choice(['VM1' , 'VM2' ]) for _ in range(N)]
}
df = pd.DataFrame(data, index=ts)
#df.resample('M').mean().plot()
df
# Appx VM
#2000-01-01 00:00:00 App1 VM2
#2000-01-01 01:00:00 None VM1
#2000-01-01 02:00:00 None VM2
#2000-01-01 03:00:00 App3 VM2
#2000-01-01 04:00:00 App1 VM1
#... ... ...
#2000-12-31 19:00:00 App2 VM1
#2000-12-31 20:00:00 App3 VM1
#2000-12-31 21:00:00 App3 VM1
#2000-12-31 22:00:00 App1 VM1
#2000-12-31 23:00:00 App1 VM1
# 8784 rows Γ 2 columns
</code></pre>
<p>After checking other available resources:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/27862214/plotting-categorical-data-counts-over-time">Plotting categorical data counts over time</a></li>
<li><a href="https://stackoverflow.com/questions/65694410/how-to-make-a-line-plot-from-a-dataframe-with-multiple-categorical-columns-in-ma">How to make a line plot from a dataframe with multiple categorical columns in matplotlib</a></li>
<li><a href="https://stackoverflow.com/questions/63308183/how-to-groupby-dataframe-with-categorical-variables-for-making-linechart-in-matp">How to groupby dataframe with categorical variables for making linechart in matplotlib?</a></li>
<li><a href="https://stackoverflow.com/questions/43832311/how-to-plot-by-category-over-time">How to plot by category over time</a></li>
<li><a href="https://stackoverflow.com/questions/65354689/plot-time-series-on-category-level">Plot time series on category level</a></li>
<li><a href="https://stackoverflow.com/questions/30942755/plotting-multiple-time-series-after-a-groupby-in-pandas">Plotting multiple time series after a groupby in pandas</a></li>
<li><a href="https://stackoverflow.com/questions/38197964/pandas-plot-multiple-time-series-dataframe-into-a-single-plot">Pandas: plot multiple time series DataFrame into a single plot</a></li>
<li><a href="https://stackoverflow.com/questions/68328524/plot-a-line-graph-with-categorical-columns-for-each-line">Plot a line graph with categorical columns for each line</a></li>
</ul>
<p><strong>Problem:</strong> plotting count records of categorical variables including their missing values (<code>None</code> or <code>NaN</code>) within <a href="/questions/tagged/pandas" class="post-tag" title="show questions tagged 'pandas'" aria-label="show questions tagged 'pandas'" rel="tag" aria-labelledby="tag-pandas-tooltip-container">pandas</a> dataframe</p>
<p><strong>My tries:</strong>
I tried to plot count records using the following scripts, unsuccessfully.
At first, I used a simple example (inspired by <a href="https://stackoverflow.com/a/50824069/10452700">this answer</a>) that depicts both the data and the missing values, plotting App column records with missing values (dashed line) over time for each desired VM:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
day = ([ 1 , 2 , 3, 4, 5 , 6 , 7 , 8 , 9])
App1 = ([0.6 , 0.8 , np.nan, np.nan, 4 , 6 , 6.5 ,7 , 8])
App2 = ([ 1 , 2 , np.nan, np.nan, 0.5 , 7 , 8 , 9 , 10])
App3 = ([ 1.5 , 2.5 , np.nan, np.nan, 3 , 4 , 6 , 8 , 11])
cf = pd.DataFrame({'App1': App1, 'App2': App2, 'App3': App3}, index = day)
cf.index.name = 'day'
fig, ax = plt.subplots()
line, = ax.plot(cf['App1'].fillna(method='ffill'), color='r', ls = '--', lw = 1, label='_nolegend_')
ax.plot(cf['App1'], color='k', lw=1.5, marker = 'v', label='App1',)
line, = ax.plot(cf['App2'].fillna(method='ffill'), color='r', ls = '--', lw = 1, label='_nolegend_')
ax.plot(cf['App2'], color='k', lw=1.5, marker = 's', label='App2')
line, = ax.plot(cf['App3'].fillna(method='ffill'), color='r', ls = '--', lw = 1, label='_nolegend_')
ax.plot(cf['App3'], color='k', lw=1.5, marker = 'o', label='App3')
plt.xlabel('Time Stamp')
plt.ylabel('Record counts')
plt.title('Apps within missing values for VM1')
plt.legend()
plt.show()
</code></pre>
<hr />
<p>My outputs so far:</p>
<p><img src="https://i.imgur.com/W1PQsXB.jpg" alt="img" /></p>
<hr />
<p>But I get an error when I apply it to my generated time-series data, based on this <a href="https://stackoverflow.com/a/71086101/10452700">answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
df['Appx'].fillna(value=np.nan, inplace=True)
df['Appx'].astype('category') # or str for string
#df = df.astype(int)
# Filter the DataFrame by a list of string values in the "App1" column
filtered_df = df[ df["Appx"].isin([np.nan])]
filtered_dff = df[~df["Appx"].isin([np.nan])]
cf = pd.DataFrame({'Appx': filtered_dff["Appx"]}, index = df.index)
#cf.index.name = df.index #'TS'
fig, ax = plt.subplots()
line, = ax.plot(cf['Appx'].fillna(method='ffill'), ls = '--', lw = 1, label='_nolegend_')
ax.plot(cf['Appx'], color=line.get_color(), lw=1.5, marker = 'o')
ax.tick_params(axis='x', labelrotation=45)
plt.xlabel('TS')
plt.ylabel('mm')
plt.legend('best')
plt.show()
</code></pre>
<blockquote>
<p>TypeError: 'value' must be an instance of str or bytes, not a float</p>
</blockquote>
<p>I even dug further using <code>groupby()</code>:</p>
<pre class="lang-py prettyprint-override"><code># reset_index() gives a column for counting, after groupby uses year and category
ctdf = (df.reset_index()
.groupby(['Appx','VM'], as_index=False)
.count()
# rename isn't strictly necessary here, it's just for readability
.rename(columns={'index':'ct'})
)
ctdf
# Appx VM ct
#0 App1 VM1 1127
#1 App1 VM2 1084
#2 App2 VM1 1066
#3 App2 VM2 1098
#4 App3 VM1 1084
#5 App3 VM2 1049
df['Appx'].fillna(value=np.nan, inplace=True)
df['Appx'].astype('category') # or str for string
#df = df.astype(int)
# Filter the DataFrame by a list of string values in the "App1" column
filtered_df = df[ df["Appx"].isin([np.nan])]
#filtered_dff = df[~df["Appx"].isin([np.nan])]
# reset_index() gives a column for counting, after groupby uses year and category
ctdff = (filtered_df
#.isna()
.reset_index()
.groupby(['VM'], as_index=False)
.count()
# rename isn't strictly necessary here, it's just for readability
.rename(columns={'index':'ct'})
)
ctdff
# VM ct Appx
#0 VM1 1153 0
#1 VM2 1123 0
</code></pre>
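<p>A compact way to keep the missing category in the tally (a sketch with made-up data) is <code>value_counts(dropna=False)</code>; <code>groupby(..., dropna=False)</code> behaves the same way for grouped counts:</p>

```python
import pandas as pd

s = pd.Series(["App1", None, None, "App3", "App1"])

# dropna=False keeps the missing category as its own row in the counts
counts = s.value_counts(dropna=False)
print(counts)
```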
<p>Similar to this <a href="https://stackoverflow.com/a/59127231/10452700">answer</a>, I might be interested in such a plot, the so-called <code>cat_horizontal_plot</code>:
<img src="https://i.imgur.com/ccSp95a.jpg" alt="img" /></p>
<p>Note: as far as possible, I'm not interested in removing or imputing solutions:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/61868829/handling-missing-categorical-values-ml">Handling missing categorical values ML</a></li>
<li><a href="https://stackoverflow.com/questions/63260772/applying-onehotencoding-on-categorical-data-with-missing-values">Applying OneHotEncoding on categorical data with missing values</a></li>
<li><a href="https://stackoverflow.com/questions/46120727/replace-missing-values-in-categorical-data">replace missing values in categorical data</a></li>
<li><a href="https://stackoverflow.com/questions/46125486/deal-with-missing-categorical-data-python">Deal with missing categorical data python</a></li>
</ul>
<hr />
<p>In those corner cases I can't show missing values:</p>
<pre class="lang-py prettyprint-override"><code>import seaborn as sns
import matplotlib.pyplot as plt
sns.lineplot(data = df, x = df.index, y = 'Appx', hue = 'Appx', marker='o', alpha=0.2)
plt.legend(bbox_to_anchor=[0.5, 1.02], loc='lower center')
plt.xticks(rotation=45)
plt.show()
</code></pre>
<p><img src="https://i.imgur.com/VcW0tEt.jpg" alt="img" /></p>
<hr />
<pre class="lang-py prettyprint-override"><code>grouped = df.groupby(['VM','Appx'])
for key, group in grouped:
data = group.groupby(lambda x: x.hour).count()
data['Appx'].plot(label=key , legend=True)
</code></pre>
<p><img src="https://i.imgur.com/MpMBNws.jpg" alt="img" /></p>
| <python><dataframe><matplotlib><time-series><missing-data> | 2023-09-23 17:35:07 | 1 | 2,056 | Mario |
77,163,990 | 617,864 | How to programmatically determine the minimum set of files required to import a given Python module? | <p>I'm working on a Python3 script that uses <a href="https://pypi.org/project/pyobjc/" rel="nofollow noreferrer">pyobjc</a> to access the macOS clipboard via <code>NSPasteboard</code>. To do this, it requires the following import:</p>
<pre class="lang-py prettyprint-override"><code>from AppKit import NSPasteboard
</code></pre>
<p>In order to keep my distribution to a minimum size (the full pyobjc 9.2 package is <strong>30MB</strong>), I wanted to find out the smallest set of files needed in order to allow this import to succeed. Through trial-and-error (using a REPL and attempting to import <code>NSPasteboard</code>, looking at the stack trace errors and adding in missing modules one by one), I determined this to be:</p>
<pre><code>./lib
βββ AppKit
βββ CoreFoundation
βββ Foundation
βββ objc
βββ PyObjCTools
</code></pre>
<p>This slimmed set is only <strong>7MB</strong> by comparison.</p>
<p><strong>My question is</strong>: is there a more pragmatic way to determine this? Using <a href="https://importlib-metadata.readthedocs.io/en/stable/using.html#package-distributions" rel="nofollow noreferrer">importlib</a> or something similar?</p>
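<p>One pragmatic stdlib-only sketch (the function name is made up, and it only sees modules imported eagerly — lazy imports inside functions are missed): snapshot <code>sys.modules</code>, import the target, and diff the top-level package names:</p>

```python
import sys

def packages_pulled_in(module_name):
    """Return top-level packages newly loaded by importing module_name.

    Rough approximation: only eager imports are observed, and modules
    already loaded before the call are not reported (except the target
    itself, which is always included).
    """
    before = set(sys.modules)
    __import__(module_name)
    added = set(sys.modules) - before
    tops = {name.split(".")[0] for name in added}
    return sorted(tops | {module_name.split(".")[0]})

print(packages_pulled_in("colorsys"))
```

<p><code>modulefinder.ModuleFinder</code> from the standard library is another option worth checking; it analyzes a script statically and reports every module it references.</p>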
| <python><macos><python-import><appkit><pyobjc> | 2023-09-23 16:37:25 | 1 | 861 | luckman212 |
77,163,863 | 2,023,111 | Should I wait after searching for elements and/or scrolling while web scraping? | <p>I'm trying to understand when to wait during a simple python/selenium web scraping script. I'm pulling titles and links for the top 100 results for approximately 50 search terms. I understand some waits are needed to ensure the page content has loaded, and other waits are needed to mimic human behavior (to avoid getting blocked by Google).</p>
<ol>
<li>Do these <code>find_elements()</code> and <code>find_element()</code> calls interact with the server? In other words, should I wait after these calls (for both reasons mentioned previously)?</li>
</ol>
<pre><code>#get whatever tags
div_elements = driver.find_elements(By.XPATH, parentDivXPath)
title = div_element.find_element(By.XPATH, titleXPath)
</code></pre>
<ol start="2">
<li>Same question for scrolling, since I just noticed Google is now using infinite scroll. (I could swear it wasn't like this just yesterday! Am I hallucinating?) So now I need to implement scrolling, and I want to know whether scrolling with JavaScript will interact with the server. I'm guessing I should also wait here, since infinite scroll needs to load content from the server?</li>
</ol>
<pre><code>driver.execute_script("arguments[0].scrollIntoView();", element)
</code></pre>
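<p>For context, my current understanding (which I'd like confirmed) is that element lookups are commands sent to the local browser over the WebDriver protocol, not requests to Google's servers, so only human-mimicking delays would apply to them. The explicit-wait pattern I'm experimenting with looks like this (a sketch; <code>collect_titles</code> is a name I made up, and the import is guarded in case Selenium isn't installed):</p>

```python
# Sketch: explicit waits instead of fixed sleeps.
try:
    from selenium.webdriver.common.by import By
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC

    def collect_titles(driver, parent_div_xpath, title_xpath, timeout=10):
        # Block (up to `timeout` seconds) until at least one result
        # container is present in the DOM.
        wait = WebDriverWait(driver, timeout)
        divs = wait.until(
            EC.presence_of_all_elements_located((By.XPATH, parent_div_xpath))
        )
        # find_element on an already-fetched element talks to the local
        # browser session, not to the site being scraped.
        return [d.find_element(By.XPATH, title_xpath).text for d in divs]

    HAVE_SELENIUM = True
except ImportError:
    HAVE_SELENIUM = False
```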
| <python><web-scraping><selenium-chromedriver><infinite-scroll> | 2023-09-23 16:08:08 | 0 | 319 | Jonathan Cakes |
77,163,838 | 1,224,886 | Why are my arguments ignored only when passed with `subprocess.call`? | <p>I'm trying to call the command</p>
<pre><code>& 'C:\Users\USERNAME\AppData\Local\Programs\Microsoft VS Code\bin\code' --folder-uri='vscode-remote://ssh-remote+SERVER.domain.net/home/USERNAME/Dropbox/Projects/PROJECTNAME'
</code></pre>
<p>from a Python script with</p>
<pre class="lang-py prettyprint-override"><code>codepath = os.path.join(os.environ['LocalAppData'], 'Programs', 'Microsoft VS Code', 'bin', 'code')
path_parts = (
"--folder-uri='vscode-remote://ssh-remote+SERVER.domain.net/home/USERNAME/Dropbox/Projects/PROJECTNAME'",
)
subprocess.call([codepath] + list(path_parts), shell=True)
</code></pre>
<p>If I put that first command into an interactive powershell, vscode opens fine, (modulo replacing the upper-case placeholders with actually existing things).</p>
<p>If I do</p>
<pre><code>"C:\Users\USERNAME\AppData\Local\Programs\Microsoft VS Code\Code.exe" --folder-uri="vscode-remote://ssh-remote+SERVER.tomsb.net/home/USERNAME/Dropbox/Projects/PROJECTNAME"
</code></pre>
<p>in an interactive windows Command Prompt, it opens.</p>
<p>However, the python code just opens vscode as if no arguments were given (or as if a garbled local path was given -- I <em>can</em> open local directories by providing them as the second entry in the first <code>call</code> argument).</p>
<p>How can I debug this?</p>
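<p>My current suspicion (unconfirmed): with <code>shell=True</code> and a sequence, only the first item is treated as the command and the rest goes to the shell itself rather than to <code>code</code>, and the literal single quotes around the <code>--folder-uri</code> value are never stripped, because no shell parses them when they are passed in an argv list. A sketch of the two usual call shapes (the function names are mine):</p>

```python
import os
import subprocess

def open_remote_folder(code_path: str, folder_uri: str) -> None:
    # No shell: one argv element per argument, and *no* extra quoting,
    # because no shell will ever strip it.
    subprocess.call([code_path, f"--folder-uri={folder_uri}"])

def open_remote_folder_shell(code_path: str, folder_uri: str) -> None:
    # With shell=True, pass a single string and let the shell do the quoting.
    subprocess.call(f'"{code_path}" --folder-uri="{folder_uri}"', shell=True)

# Example wiring (paths as in the question; env lookup guarded off-Windows):
code_path = os.path.join(os.environ.get("LocalAppData", ""), "Programs",
                         "Microsoft VS Code", "bin", "code")
```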
| <python><shell><visual-studio-code> | 2023-09-23 16:02:07 | 1 | 1,569 | tsbertalan |
77,163,726 | 1,008,596 | square api add image with python fails | <p>When using the squareup python API to upload an image to the catalog it fails on the file parameter. I will post the code, the failure and then some observations regarding the API documentation.</p>
<h2>pip install method #1</h2>
<p>I actually used a <code>requirements.txt</code>, but this is effectively how I installed the module:</p>
<pre><code>$ pip install squareup
</code></pre>
<h2>The code</h2>
<pre><code># Import the os package
import os
# import the dotenv package
from dotenv import load_dotenv
# Get the current working directory
cwd = os.getcwd()
# Construct the .env file path
env_path = os.path.join(cwd, '.env')
# Load the .env file
load_dotenv(dotenv_path=env_path)
# Import square
from square.client import Client
# init the square API
sq_client = Client(
access_token=os.environ['SQUARE_ACCESS_TOKEN'],
environment='production'
)
# This works to verify api key and access id to upload to production environment. ie
# not sandbox
result = sq_client.catalog.upsert_catalog_object(
body = {
"idempotency_key": "{UNIQUE_KEY}",
"object": {
"type": "ITEM",
"id": "#coffee",
"item_data": {
"name": "Coffee",
"description": "Coffee Drink",
"abbreviation": "Co",
"variations": [
{
"type": "ITEM_VARIATION",
"id": "#small_coffee",
"item_variation_data": {
"item_id": "#coffee",
"name": "Small",
"pricing_type": "FIXED_PRICING",
"price_money": {
"amount": 300,
"currency": "USD"
}
}
},
{
"type": "ITEM_VARIATION",
"id": "#large_coffee",
"item_variation_data": {
"item_id": "#coffee",
"name": "Large",
"pricing_type": "FIXED_PRICING",
"price_money": {
"amount": 350,
"currency": "USD"
}
}
}
]
}
}
}
)
if result.is_success():
print(result.body)
elif result.is_error():
print(result.errors)
# afterwards, I could look in the square storefront and see the
# sample item in the storefront catalog.
# As a further test of the API, here is getting info about the
# api in the sample item. This also works as expected.
result = sq_client.catalog.list_catalog()
if result.is_success():
print(result.body)
elif result.is_error():
print(result.errors)
# This fails though.
file_to_upload_path = "./sample_imgs/drdoom.jpeg" # Modify this to point to your desired file.
f_stream = open(file_to_upload_path, "rb")
result = sq_client.catalog.create_catalog_image(
request = {
"idempotency_key": "{UNIQUE_KEY}",
"image": {
"type": "IMAGE",
"id": "#image_id",
"image_data": {
"name": "Image name",
"caption": "Image caption"
}
}
},
file = f_stream
)
if result.is_success():
print(result.body)
elif result.is_error():
print(result.errors)
</code></pre>
<p>The last api call as noted in the comments fails. It generates a stack trace with failure on the <code>file = f_stream</code> parameter. Using the read operation on f_stream works, so it's not that the image cannot be found; it's the file parameter itself.</p>
<p>Other points of interest: I used the Square API Explorer webpage, ran the exact same code, and it worked there. The only difference was that the explorer generates a key rather than using the <code>{UNIQUE_KEY}</code> placeholder. I also tried putting the image as a peer to
the code so that I did not need to use a relative directory path, i.e. no <code>./sample_imgs</code> prefix.</p>
<p>Thinking that the error might be related to the version of the Python library and the file parameter, I went back to the Square docs to find out how they specify to install the SDK.
This <a href="https://developer.squareup.com/docs/sdks/python" rel="nofollow noreferrer">page</a> has a link on how to get the SDK package, which says to visit PyPI. Note that the link on PyPI specifies to use git. Consequently, I used this in my requirements.txt:</p>
<h3>requirements.txt for method #2</h3>
<pre><code># square api install for method #1
#squareup
# square api install for method #2 using git clone
#square-python-sdk
git+https://github.com/square/square-python-sdk.git
</code></pre>
<p>However, using the git version has the same error. Please advise.</p>
| <python><square> | 2023-09-23 15:31:22 | 1 | 4,629 | netskink |
77,163,685 | 6,498,757 | Implementation of Perceptron algorithm, but not efficient when I run it | <p>When the sample size is set to 10, the average number of iterations until convergence should be around 15. However, when implementing the algorithm in my code, it takes approximately 225 (or more!) iterations to reach convergence. This leads me to suspect that there may be an issue with the while loop in my code, but I'm unable to identify it.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

def gen_data(N=10):
size = (N, 2)
data = np.random.uniform(-1, 1, size)
point1, point2 = data[np.random.choice(data.shape[0], 2, replace=False), :]
m = (point2[1] - point1[1]) / (point2[0] - point1[0])
c = point1[1] - m * point1[0]
labels = np.array([+1 if y >= m * x + c else -1 for x, y in data])
data = np.column_stack((data, labels))
return data, point1, point2
class PLA:
def __init__(self, data):
m, n = data.shape
self.X = np.hstack((np.ones((m, 1)), data[:, :2]))
self.w = np.zeros(n)
self.y = data[:, -1]
self.count = 0
def fit(self):
while True:
self.count += 1
y_pred = self.predict(self.X)
misclassified = np.where(y_pred != self.y)[0]
if len(misclassified) == 0:
break
idx = np.random.choice(misclassified)
self.update_weight(idx)
def update_weight(self, idx):
self.w += self.y[idx] * self.X[idx]
def sign(self, z):
return np.where(z > 0, 1, np.where(z < 0, -1, 0))
def predict(self, x):
z = np.dot(x, self.w)
return self.sign(z)
</code></pre>
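<p>For reference, here is a self-contained, condensed version of the same experiment that can be rerun quickly; note it counts weight <em>updates</em> rather than passes of the while loop, in case the "~15 iterations" figure was defined that way:</p>

```python
import numpy as np

def run_once(N=10, rng=np.random.default_rng()):
    # Random points in [-1, 1]^2, labelled by a random line through two of them.
    X = rng.uniform(-1, 1, (N, 2))
    p1, p2 = X[rng.choice(N, 2, replace=False)]
    m = (p2[1] - p1[1]) / (p2[0] - p1[0])
    c = p1[1] - m * p1[0]
    y = np.where(X[:, 1] >= m * X[:, 0] + c, 1.0, -1.0)
    Xb = np.hstack([np.ones((N, 1)), X])   # prepend bias column
    w = np.zeros(3)
    updates = 0
    while True:
        pred = np.sign(Xb @ w)
        bad = np.where(pred != y)[0]
        if len(bad) == 0:
            return updates
        i = rng.choice(bad)
        w += y[i] * Xb[i]
        updates += 1                        # count *updates*, not loop passes

avg = np.mean([run_once() for _ in range(100)])
print(f"average updates to converge: {avg:.1f}")
```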
| <python><python-3.x><numpy><machine-learning><neural-network> | 2023-09-23 15:22:05 | 2 | 351 | Yiffany |
77,163,626 | 7,581,507 | Running code after Jupyter notebook finishes | <p>I would like to run code after the notebook finishes running (the code should run inside the same kernel).</p>
<p>I am familiar with registering callbacks on <a href="https://ipython.readthedocs.io/en/stable/config/callbacks.html" rel="nofollow noreferrer">IPython events</a>, but these only work at the cell level.</p>
<p>Is there anything similar at the notebook level? Or maybe a way to detect that a cell is the last one?</p>
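<p>For context, the cell-level hooks I mentioned look like this; the sketch below registers only when actually running inside IPython (the hook and event names follow the IPython events API, the callback is mine):</p>

```python
try:
    from IPython import get_ipython
    ip = get_ipython()  # None outside an IPython session
except ImportError:
    ip = None

def after_cell(result):
    # Fires after *every* cell; there is no built-in "after last cell" event.
    print("cell finished:", result.execution_count)

if ip is not None:
    ip.events.register("post_run_cell", after_cell)
```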
| <python><jupyter-notebook><ipython> | 2023-09-23 15:11:27 | 0 | 1,686 | Alonme |
77,163,495 | 1,259,374 | Refreshing Google API credentials throws TypeError after working several times in Python | <p>I am trying to authenticate and send an email using the <strong>Gmail API</strong> in <strong>Python</strong>. I've set up a function to handle the authentication. It worked well several times, but then I started encountering an issue when trying to refresh the credentials:</p>
<pre><code>from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
import os
CREDENTIALS_FILE_NAME = 'credentials.json'
TOKEN_NAME = 'gmail_token.json'
def authenticate(self) -> Any:
creds = None
if os.path.exists(self.TOKEN_NAME):
creds = Credentials.from_authorized_user_file(self.TOKEN_NAME, self.get_scopes())
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request()) # <-- This line causes the error
else:
flow = InstalledAppFlow.from_client_secrets_file(os.path.join(Project.root_path(), self.CREDENTIALS_FILE_NAME), self.get_scopes())
creds = flow.run_local_server(port=0)
with open(self.TOKEN_NAME, 'w') as token:
token.write(creds.to_json())
return build('gmail', 'v1', credentials=creds)
def get_scopes(self) -> list:
return ['https://www.googleapis.com/auth/gmail.send']
</code></pre>
<p>After successfully authenticating and sending emails multiple times, I now get the following <em>error</em>:</p>
<pre><code>creds.refresh(Request())
^^^^^^^^^ TypeError: Request.__init__() missing 1 required positional argument: 'url'
</code></pre>
<p>Can anyone help me fix this error or provide a detailed explanation of why it's occurring?</p>
<p><strong>Edit</strong></p>
<p>After <code>print(creds.token_uri)</code></p>
<p>This <code>url</code> received:
<code>https://oauth2.googleapis.com/token</code></p>
<p>So I try this:
<code>credentials.refresh(Request(url=credentials.token_uri))</code></p>
<p>And got <code>TypeError: 'Request' object is not callable</code></p>
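<p>One possibility worth checking (this is a guess based on the message): the snippet never shows where <code>Request</code> is imported from, and the traceback matches classes like the stdlib one, which requires a <code>url</code> argument. A sketch of that suspicion:</p>

```python
# The stdlib Request reproduces exactly this TypeError when called bare:
from urllib.request import Request as UrllibRequest

try:
    UrllibRequest()
except TypeError as e:
    err_msg = str(e)
    print(err_msg)  # ... missing 1 required positional argument: 'url'

# creds.refresh() instead expects the google-auth transport class:
#   from google.auth.transport.requests import Request
#   creds.refresh(Request())
```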
| <python><google-oauth><gmail-api><google-api-python-client> | 2023-09-23 14:39:24 | 2 | 1,139 | falukky |
77,163,494 | 3,076,544 | How to load specific Firefox profile into selenium (Macos M1) | <p>I'm trying to load a specific profile into selenium, but it always opens with the default profile. Here is my code:</p>
<pre><code>profile_path = '/Users/mymachine/Library/Application Support/Firefox/Profiles/specificprofile.default'
options = webdriver.FirefoxOptions()
options.set_preference('profile', profile_path)
service = FirefoxService(GeckoDriverManager().install())
driver = webdriver.Firefox(service=service, options=options)
</code></pre>
<p>I also tried pointing directly at the geckodriver binary, but that doesn't work either.</p>
<pre><code>service = webdriver.FirefoxService(executable_path="/usr/local/bin/geckodriver")
</code></pre>
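<p>For what it's worth, <code>set_preference</code> writes an <code>about:config</code> preference, and as far as I can tell <code>'profile'</code> is not such a preference. The two approaches I've seen suggested instead are (a) the <code>-profile</code> command-line argument and (b) assigning a <code>FirefoxProfile</code>; a sketch, guarded in case Selenium isn't available:</p>

```python
try:
    from selenium import webdriver

    profile_path = ("/Users/mymachine/Library/Application Support/"
                    "Firefox/Profiles/specificprofile.default")

    options = webdriver.FirefoxOptions()
    # (a) hand the directory to Firefox as a command-line argument:
    options.add_argument("-profile")
    options.add_argument(profile_path)
    # (b) ... or assign a FirefoxProfile, which copies the profile in:
    # options.profile = webdriver.FirefoxProfile(profile_path)
    configured = True
except ImportError:
    configured = False
```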
<p>When I log the output, I obtain this:</p>
<pre><code>1695479350799 mozrunner::runner INFO Running command: MOZ_CRASHREPORTER="1" MOZ_CRASHREPORTER_NO_REPORT="1" MOZ_CRASHREPORTER_SHUTDOWN="1" MOZ_NO_REMOTE="1" "/App ... s" "localhost" "-foreground" "-no-remote" "-profile" "/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/rust_mozprofileecFA3Z"
console.warn: services.settings: Ignoring preference override of remote settings server
console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
1695479351216 Marionette INFO Marionette enabled
console.error: "Warning: unrecognized command line flag" "-remote-allow-hosts"
1695479351323 Marionette INFO Listening on port 60997
Read port: 60997
WebDriver BiDi listening on ws://127.0.0.1:60987
1695479351366 RemoteAgent WARN TLS certificate errors will be ignored for this session
UNSUPPORTED (log once): POSSIBLE ISSUE: unit 1 GLD_TEXTURE_INDEX_2D is unloadable and bound to sampler type (Float) - using zero texture because texture unloadable
DevTools listening on ws://127.0.0.1:60987/devtools/browser/2bbd8a80-f4ee-46a3-875d-7be3c1b2d504
console.warn: services.settings: Ignoring preference override of remote settings server
console.warn: services.settings: Allow by setting MOZ_REMOTE_SETTINGS_DEVTOOLS=1 in the environment
console.error: (new TypeError("lazy.AsyncShutdown.profileBeforeChange is undefined", "resource://services-settings/Database.sys.mjs", 593))
</code></pre>
<p>where <code>"-profile" "/var/folders/0k/hz9yngm56nz1htdc3c3t3d0c0000gn/T/rust_mozprofileecFA3Z"</code> catches my attention, as it indicates that a temporary profile is being created.</p>
<p>I'm in MacOS Darwin M1, with selenium 4.12.0, webdriver-manager 4.0.0.</p>
| <python><selenium-webdriver><selenium-firefoxdriver> | 2023-09-23 14:38:57 | 3 | 993 | MrT77 |
77,162,989 | 1,881,329 | Flask get public IP from client behind NGINX - not quite working | <p>I'm struggling to get the public IP from clients.</p>
<p>I have a Flask app served using Waitress behind an NGINX running on an AWS EC2 instance.</p>
<p>When I access the website using <a href="https://myservice.com" rel="nofollow noreferrer">https://myservice.com</a>, I get the AWS EC2 private IPv4.</p>
<p>When I access the website using <a href="https://www.myservice.com" rel="nofollow noreferrer">https://www.myservice.com</a>, it works: I get the public IP from my machine as intended. It also works if I access <a href="https://myservice.com" rel="nofollow noreferrer">https://myservice.com</a> but using a VPN. Finally, it also works if I access <a href="https://1.2.3.4" rel="nofollow noreferrer">https://1.2.3.4</a> (where 1.2.3.4 is the public IPv4 assigned to my AWS EC2 instance).</p>
<p>Can anyone help me understand why this is happening?</p>
<p>Here are the relevant configurations:</p>
<p>NGINX:</p>
<pre><code>upstream flask {
server flask:5000;
}
server {
listen [::]:80 default_server;
listen 80 default_server;
server_name myservice.com www.myservice.com;
return 301 https://$http_host$request_uri;
}
server {
listen [::]:443 ssl ipv6only=on;
listen 443 ssl;
server_name myservice.com www.myservice.com;
ssl_certificate /etc/letsencrypt/live/myservice.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/myservice.com/privkey.pem;
include /etc/letsencrypt/options-ssl-nginx.conf;
ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
client_max_body_size 5M;
location / {
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_pass http://flask;
}
location /static/ {
alias /static/;
autoindex off;
}
}
</code></pre>
<p>Flask:</p>
<pre class="lang-py prettyprint-override"><code># __init__.py
def create_app():
...
app.wsgi_app = ProxyFix(
app.wsgi_app, x_for=1, x_proto=1, x_host=1, x_prefix=1
)
...
@app.route('/inspect_public_ip')
def inspect_public_ip():
obj = {
'HTTP_X_FORWARDED_FOR': request.environ.get('HTTP_X_FORWARDED_FOR'),
'HTTP_X_REAL_IP': request.environ.get('HTTP_X_REAL_IP'),
'REMOTE_ADDR': request.environ.get('REMOTE_ADDR'),
'list': request.headers.getlist("X-Forwarded-For"),
'X-Forwarded-For': request.headers.get('X-Forwarded-For', None),
'X-Real-IP': request.headers.get('X-Real-IP', None),
'access_route_all': request.access_route,
}
return jsonify(obj)
...
return app
</code></pre>
<p>Docker:</p>
<pre class="lang-yaml prettyprint-override"><code>version: '3.8'
services:
flask:
command: [ "waitress-serve", "--port", "5000", "--call", "myservice:create_app" ]
expose:
- 5000
...
nginx:
ports:
- 80:80
- 443:443
depends_on:
- flask
...
</code></pre>
<p>PS: I believe I have already read all of StackOverflow's questions on this topic (there are a lot) and tried to solve the problem using them (as can be seen in the code above), but none of the solutions worked so far (they basically amount to setting <code>proxy_set_header</code>s in NGINX and <code>ProxyFix</code> in Flask).</p>
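<p>To see exactly which headers reach the app behind the proxy under each hostname variant, I put together a stdlib-only echo app that can temporarily replace the Flask service (a sketch; the <code>RUN_ECHO_APP</code> opt-in flag is a name I made up):</p>

```python
import json
import os
from wsgiref.simple_server import make_server

def echo_app(environ, start_response):
    # Echo back exactly what reaches the app behind the proxy.
    body = json.dumps({
        "REMOTE_ADDR": environ.get("REMOTE_ADDR"),
        "HTTP_X_FORWARDED_FOR": environ.get("HTTP_X_FORWARDED_FOR"),
        "HTTP_X_REAL_IP": environ.get("HTTP_X_REAL_IP"),
        "HTTP_HOST": environ.get("HTTP_HOST"),
    }).encode()
    start_response("200 OK", [("Content-Type", "application/json")])
    return [body]

if os.environ.get("RUN_ECHO_APP"):
    make_server("0.0.0.0", 5000, echo_app).serve_forever()
```

<p>Comparing the <code>HTTP_HOST</code> and forwarding headers between the apex and www responses might show whether the two hostnames are even hitting the same NGINX server block.</p>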
| <python><amazon-web-services><nginx><flask> | 2023-09-23 12:17:29 | 0 | 436 | Carlos Souza |
77,162,824 | 1,186,991 | Unable to merge data from FRED | <p>The following code, which uses the pandas-datareader <code>get_data_fred()</code> method, works.</p>
<pre><code>start = dt.datetime(2018, 1, 1)
end = dt.datetime(2022, 3, 28)
stk_tickers = ["AAPL", "MSFT", "TSLA", "GOOG"]
ccy_tickers = ["DEXJPUS", "DEXUSUK"]
idx_tickers = ["SP500"] # , 'DJIA', 'VIXCLS']
stk_data = yf.download(stk_tickers, start, end)
ccy_data = web.get_data_fred(ccy_tickers, start, end)
idx_data = web.get_data_fred(idx_tickers, start, end)
x_merged = stk_data['Adj Close'].merge(ccy_data, how='inner', left_index=True, right_index=True)
x_data = x_merged.merge(idx_data, how='inner', left_index=True, right_index=True)
x_data.head()
</code></pre>
<p>And produces the following.</p>
<p><a href="https://i.sstatic.net/UvVTN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UvVTN.png" alt="Output from code using Pandas get_data_fred()" /></a></p>
<p>I tried using the FredAPI but I'm getting an error. I've printed the individual data frames and it appears as though I'm missing a title above the returned values.</p>
<pre><code>start = dt.datetime(2018, 1, 1)
end = dt.datetime(2022, 3, 28)
stk_tickers = ["AAPL", "MSFT", "TSLA", "GOOG"]
ccy_tickers = ["DEXJPUS", "DEXUSUK"]
idx_tickers = ["SP500"] # , 'DJIA', 'VIXCLS']
api_key_fred = os.getenv("API_KEY")
fred = fa.Fred(api_key='MY API KEY IS WRITTEN IN PLAIN TEXT HERE - I TRIED USING THE "api_key_fred' above but it wouldnt accept it)
stk_data = yf.download(stk_tickers, start, end)
ccy_data = fred.get_series('DEXJPUS', start, end)
idx_data = fred.get_series('SP500', start, end)
idx_data.columns = pd.MultiIndex.from_product([['SP500'],idx_data.columns])
ccy_data.head()
#x_merged = stk_data['Adj Close'].merge(ccy_data, how='inner', left_index=True, right_index=True)
#x_data = x_merged.merge(idx_data, how='inner', left_index=True, right_index=True)
#x_data.head()
</code></pre>
<p>This is what the dataframe looks like when using Pandas.</p>
<p><a href="https://i.sstatic.net/f9hsd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f9hsd.png" alt="Dataframe with title" /></a></p>
<p>And this is what I get when I use <code>get_data_fred()</code>.</p>
<p><a href="https://i.sstatic.net/hvLE6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hvLE6.png" alt="Dataframe from get_data_fred()" /></a></p>
<p>I'm assuming the issue is the missing title; see the error below.</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
c:\Users\mannf\Documents\My Juypter Notebook.ipynb Cell 9 line 2
16 #idx_data.columns = pd.MultiIndex.from_product([['SP500'],idx_data.columns])
18 ccy_data.head()
---> 20 x_merged = stk_data['Adj Close'].merge(ccy_data, how='inner', left_index=True, right_index=True)
21 x_data = x_merged.merge(idx_data, how='inner', left_index=True, right_index=True)
22 x_data.head()
File c:\Users\mannf\anaconda3\Lib\site-packages\pandas\core\frame.py:10093, in DataFrame.merge(self, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
10074 @Substitution("")
10075 @Appender(_merge_doc, indents=2)
10076 def merge(
(...)
10089 validate: str | None = None,
10090 ) -> DataFrame:
10091 from pandas.core.reshape.merge import merge
> 10093 return merge(
10094 self,
10095 right,
10096 how=how,
10097 on=on,
10098 left_on=left_on,
10099 right_on=right_on,
10100 left_index=left_index,
10101 right_index=right_index,
10102 sort=sort,
10103 suffixes=suffixes,
10104 copy=copy,
10105 indicator=indicator,
10106 validate=validate,
10107 )
File c:\Users\mannf\anaconda3\Lib\site-packages\pandas\core\reshape\merge.py:110, in merge(left, right, how, on, left_on, right_on, left_index, right_index, sort, suffixes, copy, indicator, validate)
93 @Substitution("\nleft : DataFrame or named Series")
94 @Appender(_merge_doc, indents=0)
95 def merge(
(...)
108 validate: str | None = None,
109 ) -> DataFrame:
--> 110 op = _MergeOperation(
111 left,
112 right,
113 how=how,
114 on=on,
115 left_on=left_on,
116 right_on=right_on,
117 left_index=left_index,
118 right_index=right_index,
119 sort=sort,
120 suffixes=suffixes,
121 indicator=indicator,
122 validate=validate,
123 )
124 return op.get_result(copy=copy)
File c:\Users\mannf\anaconda3\Lib\site-packages\pandas\core\reshape\merge.py:645, in _MergeOperation.__init__(self, left, right, how, on, left_on, right_on, axis, left_index, right_index, sort, suffixes, indicator, validate)
628 def __init__(
629 self,
630 left: DataFrame | Series,
(...)
642 validate: str | None = None,
643 ) -> None:
644 _left = _validate_operand(left)
--> 645 _right = _validate_operand(right)
646 self.left = self.orig_left = _left
647 self.right = self.orig_right = _right
File c:\Users\mannf\anaconda3\Lib\site-packages\pandas\core\reshape\merge.py:2422, in _validate_operand(obj)
2420 elif isinstance(obj, ABCSeries):
2421 if obj.name is None:
-> 2422 raise ValueError("Cannot merge a Series without a name")
2423 else:
2424 return obj.to_frame()
ValueError: Cannot merge a Series without a name
</code></pre>
<p>How do i add this in to the dataframe?</p>
| <python><dataframe> | 2023-09-23 11:29:21 | 1 | 3,631 | Hans Rudel |
77,162,780 | 10,354,066 | Debugger in Python is running like in no debug mode | <p>I am coding in VS Code with Python and I have the Python extension installed. I put breakpoints in my program (I am sure the code works), and when I launch it in debugging mode (F5 or "Start Debugging") it runs just as in non-debugging mode. Any ideas what could cause this?</p>
<p>Simplest code I try to run:</p>
<pre><code>import pandas as pd
tsv_file = 'historicalData.tsv'
df = pd.read_table(tsv_file)
</code></pre>
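<p>One thing I would double-check (this is an assumption): that F5 actually picks up a debug configuration rather than falling back to a plain run. A typical <code>.vscode/launch.json</code> for debugging the current file looks roughly like this (newer versions of the extension use <code>"type": "debugpy"</code> instead of <code>"python"</code>):</p>

```json
{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Current File",
            "type": "python",
            "request": "launch",
            "program": "${file}",
            "console": "integratedTerminal",
            "justMyCode": true
        }
    ]
}
```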
| <python><visual-studio-code><debugging> | 2023-09-23 11:13:47 | 1 | 1,548 | vytaute |
77,162,718 | 2,005,559 | pandas dataframe style format not printing specified precision | <p>I am trying to format the dataframe using <a href="https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.io.formats.style.Styler.format.html</a>.</p>
<p>But, I am not getting the desired result:</p>
<pre><code>>>> df = pd.DataFrame([[np.nan, 1.534231, 'A'], [2.32453251, np.nan, 3.0]])
>>> df.style.format(na_rep='MISS', precision=3)
<pandas.io.formats.style.Styler object at 0x7f0b32eff740>
>>> print(df.head())
0 1 2
0 NaN 1.534231 A
1 2.324533 NaN 3.0
>>> print(pd.__version__)
1.5.3
</code></pre>
<p>with</p>
<pre><code>python --version
Python 3.12.0rc3
</code></pre>
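<p>One detail that may explain the mismatch: <code>df.style.format(...)</code> returns a separate <code>Styler</code> object (hence the <code>&lt;pandas.io.formats.style.Styler ...&gt;</code> line) and leaves <code>df</code> itself untouched, and Styler output only appears when rendered, e.g. in a notebook. For plain console printing, display options cover the same ground; just a sketch:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame([[np.nan, 1.534231, "A"], [2.32453251, np.nan, 3.0]])

# Temporarily round floats to 3 decimals for the console repr:
with pd.option_context("display.precision", 3):
    print(df)

# Or render once with a NaN placeholder:
print(df.to_string(na_rep="MISS"))
```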
<p>What am I doing wrong here?</p>
| <python><pandas><format> | 2023-09-23 10:56:35 | 2 | 3,260 | BaRud |
77,162,687 | 1,460,461 | AWS lambda with python and langchain in docker: No module named 'langchain' (Runtime.ImportModuleError) | <p>I cannot make my python script run in <code>AWS lambda</code> using <code>langchain</code> as a dependency in a docker-container. I got the docker-container created and the lambda-function deployed correctly. But as soon as I try to execute it, I get the message:</p>
<pre><code>"errorMessage": "Unable to import module 'handler': No module named 'langchain'"
</code></pre>
<p>Does anyone know why?</p>
<p>This is my <code>Dockerfile</code>:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.9
COPY requirements.txt ./
COPY handler.py ./
# for chromaDB
RUN yum install gcc-c++ -y
RUN pip install --no-cache-dir -r requirements.txt
# CPU only version of pytorch (smaller)
RUN pip install --no-cache-dir torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cpu
CMD [ "handler.handler" ]
</code></pre>
<p>This is my <code>requirements.txt</code>:</p>
<pre><code>transformers
langchain
chromadb
pypdf
xformers
sentence_transformers
InstructorEmbedding
boto3
</code></pre>
<p>This is my <code>handler.py</code>:</p>
<pre><code>import sys
print("SYS-Path:")
print(sys.path)
import json
import torch
from langchain.document_loaders import (
AmazonTextractPDFLoader,
DirectoryLoader,
TextLoader
)
from langchain.vectorstores import Chroma
# langchain/chromaDB logic here ...
def handler(event, context):
print(event)
print(context)
body = json.loads(event.get('body'))
return {
"statusCode": 200,
"headers": {
'Content-Type': 'application/json',
'Access-Control-Allow-Origin': '*',
"Access-Control-Allow-Credentials": True
},
"body": {
"result": "ok"
}
}
</code></pre>
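<p>One way I can think of to narrow this down is a stdlib-only check, dropped into the handler (or run with <code>docker run ... python -c ...</code> against the built image), to see whether the package is visible to the Lambda interpreter at all; just a sketch:</p>

```python
import importlib.util
import sys

print("interpreter:", sys.executable)
for name in ("langchain", "torch", "chromadb"):
    spec = importlib.util.find_spec(name)
    print(name, "->", spec.origin if spec else "NOT FOUND")
```

<p>If <code>langchain</code> shows as not found inside the image, the pip install step (or the interpreter it installed into) is the place to look.</p>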
| <python><docker><aws-lambda><langchain> | 2023-09-23 10:48:29 | 1 | 7,802 | Benvorth |
77,162,653 | 16,171,413 | Python match-case return statement works in an interpreter but doesn't work in a text editor | <p>I am looking at the <a href="https://docs.python.org/3.10/tutorial/controlflow.html" rel="nofollow noreferrer">official python tutorial</a> for educational purposes, and I tried the simple match-case code below:</p>
<pre><code>def http_error(status):
match status:
case 400:
return "Bad request"
case 404:
return "Not found"
case 418:
return "I'm a teapot"
case _:
return "Something's wrong with the internet"
</code></pre>
<p>I noticed that the <code>return</code> keyword works only in an interpreter but didn't seem to work in my text editor until I changed it to a <code>print</code> statement. For the purpose of understanding, can anyone clarify why this happened? I've looked at similar questions such as <a href="https://stackoverflow.com/questions/61844226/return-statement-not-working-in-sublime-text-editor">Return statement not working in sublime text editor</a> and <a href="https://stackoverflow.com/questions/54469142/code-runs-from-interpreter-but-not-in-editor">code runs from interpreter but not in editor</a> but it's either I'm missing something or it's not related to this match-case scenario.</p>
<p>Thanks.</p>
| <python> | 2023-09-23 10:38:32 | 1 | 5,413 | Uchenna Adubasim |
77,162,585 | 14,820,295 | Allocate resources to tasks | <p>I'm working on a Resource-Task allocation problem with two datasets:</p>
<ol>
<li>for each Resource and month I have availability (in working days)</li>
<li>for each Task I need certain working days</li>
</ol>
<p>Is there any way in Python to randomly allocate my resource IDs to tasks (within a single month) so as to use up the working days each task needs?</p>
<p>I can't imagine a solution without a while loop, but I'd be happy if you could help me. Thank you.</p>
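<p>To make the target concrete, here is a pure-Python greedy sketch that reproduces the desired output from the sample numbers (hard-coded; all names are mine, and whether this generalises is exactly what I'm unsure about):</p>

```python
# Greedy pass: walk the tasks in order and pour in resource days
# until each task's requirement is met (assumes total capacity suffices).
resources = [("123", 14), ("234", 15), ("345", 12), ("456", 17)]  # (id, days free)
tasks = [("1", 24), ("2", 27), ("3", 7)]                          # (id, days needed)

allocation = []                    # rows of (task_id, resource_id, days)
res_iter = iter(resources)
res_id, res_left = next(res_iter)

for task_id, need in tasks:
    while need > 0:
        if res_left == 0:
            res_id, res_left = next(res_iter)  # move to the next resource
        take = min(need, res_left)
        allocation.append((task_id, res_id, take))
        need -= take
        res_left -= take

for row in allocation:
    print(row)
```

<p>Shuffling <code>resources</code> and <code>tasks</code> beforehand would make the assignment random rather than in input order.</p>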
<p><em><strong>Example of my Resources dataset (1):</strong></em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_resource</th>
<th>month</th>
<th>working_days</th>
</tr>
</thead>
<tbody>
<tr>
<td>123</td>
<td>2023-10-01</td>
<td>14</td>
</tr>
<tr>
<td>234</td>
<td>2023-10-01</td>
<td>15</td>
</tr>
<tr>
<td>345</td>
<td>2023-10-01</td>
<td>12</td>
</tr>
<tr>
<td>456</td>
<td>2023-10-01</td>
<td>17</td>
</tr>
</tbody>
</table>
</div>
<p><em><strong>Example of my Tasks dataset (2):</strong></em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_task</th>
<th>month</th>
<th>working_days_needed</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>2023-10-01</td>
<td>24</td>
</tr>
<tr>
<td>2</td>
<td>2023-10-01</td>
<td>27</td>
</tr>
<tr>
<td>3</td>
<td>2023-10-01</td>
<td>7</td>
</tr>
</tbody>
</table>
</div>
<p><em><strong>Desidered Output:</strong></em></p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id_task</th>
<th>id_resource</th>
<th>days</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>123</td>
<td>14</td>
</tr>
<tr>
<td>1</td>
<td>234</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>234</td>
<td>5</td>
</tr>
<tr>
<td>2</td>
<td>345</td>
<td>12</td>
</tr>
<tr>
<td>2</td>
<td>456</td>
<td>10</td>
</tr>
<tr>
<td>3</td>
<td>456</td>
<td>7</td>
</tr>
</tbody>
</table>
</div> | <python><pandas><while-loop> | 2023-09-23 10:22:17 | 0 | 347 | Jresearcher |
77,162,489 | 11,630,148 | join() argument must be str, bytes, or os.PathLike object, not 'NoneType' with Django | <p>I'm running into an issue in my Django app: when I click on <code>Add a Record</code>, the app returns the error <code>join() argument must be str, bytes, or os.PathLike object, not 'NoneType'</code>, even though it shouldn't, since I'm not doing any joining in my code. The view for this is:</p>
<pre class="lang-py prettyprint-override"><code>def add_job_record(request):
add_form = AddJobTrackerForm(request.POST or None)
if request.user.is_authenticated:
if request.method == "POST":
if add_form.is_valid():
add_job_record = add_form.save()
messages.success(request, 'Job Tracker added')
return redirect('core:home')
return render(request, 'applications/add_record.html', {'add_form': add_form})
else:
messages.danger(request, "You must be logged in")
return redirect('core:home')
</code></pre>
<p>I found a post here (I forgot to save the link) where the answer was to remove <code>{'add_form': add_form}</code> from my code, but the form doesn't show up on the page when I remove it.</p>
<p>The traceback is:</p>
<pre class="lang-bash prettyprint-override"><code> File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\core\handlers\exception.py", line 55, in inner
response = get_response(request)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\core\handlers\base.py", line 197, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\project\apps\applications\views.py", line 35, in add_job_record
return render(request, 'applications/add_record.html', {'add_form': add_form})
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\shortcuts.py", line 24, in render
content = loader.render_to_string(template_name, context, request, using=using)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loader.py", line 62, in render_to_string
return template.render(context, request)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\backends\django.py", line 61, in render
return self.template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 175, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\test\utils.py", line 112, in instrumented_test_render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loader_tags.py", line 157, in render
return compiled_parent._render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\test\utils.py", line 112, in instrumented_test_render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loader_tags.py", line 63, in render
result = block.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1070, in render
return render_value_in_context(output, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1047, in render_value_in_context
value = str(value)
^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\utils.py", line 75, in render
return mark_safe(renderer.render(template, context))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\renderers.py", line 29, in render
return template.render(context, request=request).strip()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\backends\django.py", line 61, in render
return self.template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 175, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\test\utils.py", line 112, in instrumented_test_render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loader_tags.py", line 208, in render
return template.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 177, in render
return self._render(context)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\test\utils.py", line 112, in instrumented_test_render
return self.nodelist.render(context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in render
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1005, in <listcomp>
return SafeString("".join([node.render_annotated(context) for node in self]))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\defaulttags.py", line 238, in render
nodelist.append(node.render_annotated(context))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 966, in render_annotated
return self.render(context)
^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1070, in render
return render_value_in_context(output, context)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\base.py", line 1047, in render_value_in_context
value = str(value)
^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\utils\html.py", line 420, in <lambda>
klass.__str__ = lambda self: mark_safe(klass_str(self))
^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\boundfield.py", line 34, in __str__
return self.as_widget()
^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\boundfield.py", line 107, in as_widget
return widget.render(
^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\widgets.py", line 281, in render
return self._render(self.template_name, context, renderer)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\widgets.py", line 286, in _render
return mark_safe(renderer.render(template_name, context))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\renderers.py", line 28, in render
template = self.get_template(template_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\forms\renderers.py", line 34, in get_template
return self.engine.get_template(template_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\backends\django.py", line 33, in get_template
return Template(self.engine.get_template(template_name), self)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\engine.py", line 175, in get_template
template, origin = self.find_template(template_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\engine.py", line 157, in find_template
template = loader.get_template(name, skip=skip)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loaders\cached.py", line 57, in get_template
template = super().get_template(template_name, skip)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loaders\base.py", line 17, in get_template
for origin in self.get_template_sources(template_name):
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loaders\cached.py", line 70, in get_template_sources
yield from loader.get_template_sources(template_name)
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\template\loaders\filesystem.py", line 35, in get_template_sources
name = safe_join(template_dir, template_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\admin\DEV\job-tracker\venv\Lib\site-packages\django\utils\_os.py", line 17, in safe_join
final_path = abspath(join(base, *paths))
^^^^^^^^^^^^^^^^^^
File "<frozen ntpath>", line 147, in join
File "<frozen genericpath>", line 152, in _check_arg_types
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'NoneType'
[23/Sep/2023 20:17:59] "GET /tracker/add-record/ HTTP/1.1" 500 332790
</code></pre>
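For context on where this fails: the last frame shows `safe_join(template_dir, template_name)` receiving `None`. Two common causes are a `None` entry in a template engine's `DIRS` (e.g. from an unset environment variable) or a widget whose `template_name` resolves to `None`. A minimal sketch of the first case — all names here are hypothetical, not taken from the project:

```python
from pathlib import Path

BASE_DIR = Path("/srv/job-tracker")  # stand-in for a real settings.BASE_DIR

# An unset environment variable is a classic way None sneaks into DIRS:
EXTRA_TEMPLATE_DIR = None  # e.g. os.environ.get("EXTRA_TEMPLATE_DIR")

raw_dirs = [BASE_DIR / "templates", EXTRA_TEMPLATE_DIR]

# Drop None entries before handing the list to TEMPLATES[0]["DIRS"],
# so os.path.join() never sees a NoneType.
DIRS = [d for d in raw_dirs if d is not None]
print(DIRS)
```

If `DIRS` is already clean, the other suspect is the form widget being rendered at the moment of the crash — a widget whose `template_name` is `None` triggers the same `join()` failure.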
| <python><django> | 2023-09-23 09:56:09 | 1 | 664 | Vicente Antonio G. Reyes |
77,162,385 | 839,733 | mypy errors with decorator | <pre><code>from abc import ABC


class Cell(ABC):
def __init__(self, value: int | None = None):
...
@property
def value(self) -> int | None:
return self._value
class InputCell(Cell):
@Cell.value.setter
def value(self, value: int) -> None:
...
</code></pre>
<p>Running mypy with <code>--strict</code> generates:</p>
<pre><code>react.py:28: error: "Callable[[Cell], int | None]" has no attribute "setter" [attr-defined]
react.py:28: error: Untyped decorator makes function "value" untyped [misc]
</code></pre>
<p>Line 28 happens to be the <code>@Cell.value.setter</code>.
I've seen <a href="https://stackoverflow.com/a/65641392/839733">this answer</a> that uses <code>cast</code> in a custom decorator, but in my case, I don't have a custom decorator. How do I get rid of these errors?</p>
<p>Edit:</p>
<p>It appears that there's an <a href="https://github.com/python/mypy/issues/1465" rel="nofollow noreferrer">open ticket</a> for the first error that has been open for 7 years now!!!</p>
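One pattern that, in my experience, passes <code>mypy --strict</code> is to declare the property as read-write already in the base class (with a setter that refuses writes) and override both accessors in the subclass. The sketch below is a simplified rework of the classes above — the <code>ABC</code> base and method bodies are assumptions, not the original code:

```python
from __future__ import annotations


class Cell:
    def __init__(self, value: int | None = None) -> None:
        self._value = value

    @property
    def value(self) -> int | None:
        return self._value

    @value.setter
    def value(self, value: int | None) -> None:
        # Read-only by default; writable subclasses override the pair below.
        raise AttributeError("value is read-only")


class InputCell(Cell):
    @property
    def value(self) -> int | None:
        return self._value

    @value.setter
    def value(self, value: int | None) -> None:
        self._value = value


cell = InputCell(1)
cell.value = 5
print(cell.value)  # 5
```

This sidesteps both reported errors: there is no reach into `Cell.value.setter`, and mypy sees a consistently typed read-write property in both classes.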
| <python><python-decorators><mypy> | 2023-09-23 09:22:37 | 0 | 25,239 | Abhijit Sarkar |
77,162,381 | 18,164,421 | CoinBase API not returning product ticker data | <p>I am trying to get the product ticker from the Coinbase API <a href="https://docs.cloud.coinbase.com/exchange/reference/exchangerestapi_getproductticker" rel="nofollow noreferrer">https://docs.cloud.coinbase.com/exchange/reference/exchangerestapi_getproductticker</a> using two different functions. The first one is the following:</p>
<pre><code>import json
import requests

def connect_to_coinbase_ticker(product_id: str):
url = f"https://api.exchange.coinbase.com/products/{product_id}/ticker"
headers = {"accept": "application/json"}
response = requests.get(url, headers=headers).json()
print(json.dumps(response, indent=4))
</code></pre>
<p>But I am getting this response message : <code>{"message": "NotFound"}</code></p>
<p>By looking at the above link I also tried to use the function it is provided there to get these data:</p>
<pre><code>import http.client

def connect_to_coinbase_ticker(product_id: str):
conn = http.client.HTTPSConnection("api.exchange.coinbase.com")
payload = ''
headers = {'Content-Type': 'application/json'}
conn.request("GET", f"/products/{product_id}/ticker", payload, headers)
res = conn.getresponse()
data = res.read()
print(data.decode("utf-8"))
</code></pre>
<p>But I keep getting an error message: <code>{"message":"User-Agent header is required."}</code></p>
<p>So my question is: how can I modify one of the above functions so that I no longer get these messages and instead receive the product ticker data I request?</p>
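Two observations, hedged since I can't hit the live API here: the <code>User-Agent header is required</code> message goes away once any <code>User-Agent</code> header is sent, and <code>{"message": "NotFound"}</code> usually means the <code>product_id</code> isn't a valid trading pair such as <code>BTC-USD</code>. A standard-library sketch (the header value and product id are arbitrary examples):

```python
import urllib.request

def build_ticker_request(product_id: str) -> urllib.request.Request:
    url = f"https://api.exchange.coinbase.com/products/{product_id}/ticker"
    # The exchange rejects requests without a User-Agent; any value works.
    headers = {"Accept": "application/json", "User-Agent": "ticker-client/1.0"}
    return urllib.request.Request(url, headers=headers)

req = build_ticker_request("BTC-USD")
print(req.full_url)  # https://api.exchange.coinbase.com/products/BTC-USD/ticker

# The actual network call (needs connectivity):
# import json
# with urllib.request.urlopen(req) as resp:
#     print(json.dumps(json.load(resp), indent=4))
```

With <code>requests</code>, the equivalent is simply adding <code>"User-Agent"</code> to the <code>headers</code> dict already being passed to <code>requests.get</code>.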
<p>Thank you!</p>
| <python><coinbase-api> | 2023-09-23 09:21:54 | 1 | 302 | useeeeer132 |
77,162,144 | 7,383,971 | Average precision implementation for object detection - low confidence detections do not impact the score | <p>I have the following code that calculates a precision-recall curve for an object detection task, where detections are first matched to ground truth by creating 1-to-1 pairs, starting from the detection with the highest confidence score and matching it to the ground-truth object with the highest overlap. The results are stored in the <code>detection_matches</code> vector, in which the value is <code>True</code> if the detection was matched against some ground-truth object and <code>False</code> otherwise. Then this PR curve is used to calculate the Average Precision score.</p>
<pre><code>import numpy as np

def precision_recall_curve(
detection_matches: np.ndarray, detection_scores: np.ndarray, total_ground_truths: int
):
sorted_detection_indices = np.argsort(detection_scores, kind="stable")[::-1]
detection_scores = detection_scores[sorted_detection_indices]
detection_matches = detection_matches[sorted_detection_indices]
threshold_indices = np.r_[np.where(np.diff(detection_scores))[0], detection_matches.size - 1]
confidence_thresholds = detection_scores[threshold_indices]
true_positives = np.cumsum(detection_matches)[threshold_indices]
false_positives = np.cumsum(~detection_matches)[threshold_indices]
precision = true_positives / (true_positives + false_positives)
precision[np.isnan(precision)] = 0
recall = true_positives / total_ground_truths
full_recall_idx = true_positives.searchsorted(true_positives[-1])
reversed_slice = slice(full_recall_idx, None, -1)
return np.r_[precision[reversed_slice], 1], np.r_[recall[reversed_slice], 0]
def ap_score(precision, recall):
return -np.sum(np.diff(recall) * np.array(precision)[:-1])
</code></pre>
<p>This can be used to calculate AP-score for the example vectors:</p>
<pre><code>detection_matches = np.array([True, True, True, True, True, True, False, True])
detection_scores = np.array([0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55])
total_ground_truths = 10
precision, recall = precision_recall_curve(detection_matches, detection_scores, total_ground_truths)
# (array([0.875 , 0.85714286, 1. , 1. , 1. ,
# 1. , 1. , 1. , 1. ]),
# array([0.7, 0.6, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0. ]))
ap_score(precision, recall)
# 0.6875
</code></pre>
<p>However, adding more detections, even with super-low confidence increases the AP-score, which doesn't seem correct.</p>
<pre><code>detection_matches = np.array([True, True, True, True, True, True, False, True, True, False, False, False, False, False, False])
detection_scores = np.array([0.9, 0.85, 0.8, 0.75, 0.7, 0.65, 0.6, 0.55, 0.04, 0.03, 0.02, 0.015, 0.012, 0.011, 0.01])
total_ground_truths = 10
precision, recall = precision_recall_curve(detection_matches, detection_scores, total_ground_truths)
# (array([0.88888889, 0.875 , 0.85714286, 1. , 1. ,
# 1. , 1. , 1. , 1. , 1. ]),
# array([0.8, 0.7, 0.6, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0. ]))
ap_score(precision, recall)
# 0.7763888888888889
</code></pre>
<p>I can see this is because low precision scores from precision vector (<code>array([1., 1., 1., 1., 1., 1., 0.85714286, 0.875, 0.88888889, 0.8, 0.72727273, 0.66666667, 0.61538462, 0.57142857, 0.53333333])</code>) are effectively ignored by the fact that both precision and recall are trimmed at the index where recall reaches full value. However, even when we don't trim, recall is constant and therefore the difference of recall is 0, so low precision scores are not taken into account anyway.</p>
<p>Is there a bug in this implementation? If so, what should be adjusted so that low precision scores negatively impact the AP score? Or is this a case where the AP score just doesn't work intuitively?</p>
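For comparison, below is a minimal rank-based reference implementation of AP (detections assumed pre-sorted by descending confidence). Run against both examples, it suggests the behaviour is expected rather than a bug: trailing false positives leave the score untouched, and the rise from 0.6875 to ~0.7764 comes entirely from the extra true positive at confidence 0.04, which lifts recall from 0.7 to 0.8:

```python
def average_precision(matches, total_ground_truths):
    # Precision is sampled at every true positive's rank; false positives
    # only affect the score through the ranks they occupy before later
    # true positives — FPs after the last TP change nothing.
    true_positives, total = 0, 0.0
    for rank, is_match in enumerate(matches, start=1):
        if is_match:
            true_positives += 1
            total += true_positives / rank
    return total / total_ground_truths

base = [True] * 6 + [False, True]
print(average_precision(base, 10))                         # 0.6875
print(average_precision(base + [True] + [False] * 6, 10))  # ~0.77639
print(average_precision(base + [False] * 6, 10))           # 0.6875 again
```

If unmatched low-confidence detections should hurt the score, AP is the wrong metric by construction; a point metric such as F1 at a fixed confidence threshold does penalize them.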
| <python><numpy><scikit-learn><precision-recall><average-precision> | 2023-09-23 08:01:02 | 2 | 832 | Kuba_ |
77,161,886 | 2,217,263 | Compilation of Fortran code with f2py creates empty Python module | <p>I have a large Fortran 90 code that I would like to compile with <code>f2py</code> so I can run it from within Python. I do get the compilation to run without any errors, like this:</p>
<p><code>f2py -c -lgomp -lcfitsio.10 -L/usr/local/opt/cfitsio/lib/ -m code_name code_name.f90</code></p>
<p>Importing <code>code_name</code> into Python works, but the subroutines are not included. It seems to have created an empty Python module.</p>
<p>I think it might be related to how I had set up the Fortran code, which has the following structure:</p>
<pre><code>program code_name
implicit none
"many defined variables"
call run
contains
subroutine run
end subroutine run
"many more subroutines"
end program code_name
</code></pre>
<p>From a terminal, the code would execute the <code>run</code> subroutine. I would like to run it in the same way from Python, by calling <code>code_name.run()</code>.</p>
<p>Any suggestions on how I could make this work?</p>
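For what it's worth, <code>f2py</code> generates wrappers only for subroutines and functions it can see at module or top level — routines <code>contain</code>ed inside a <code>program</code> unit are internal procedures and are skipped, which would explain the empty module. A sketch of the usual restructure (the module name is my invention):

```fortran
module code_name_mod
  implicit none
  ! module-level variables replace the ones declared in the program
contains
  subroutine run()
    ! body of the original run subroutine
  end subroutine run
  ! ... remaining subroutines ...
end module code_name_mod
```

After compiling with the same <code>f2py -c ... -m code_name code_name.f90</code> command, the routine should be reachable as <code>code_name.code_name_mod.run()</code>; a separate thin <code>program</code> that just calls <code>run</code> can keep the command-line entry point working.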
<p>Thanks a lot for any help!</p>
| <python><fortran><f2py> | 2023-09-23 06:24:49 | 0 | 543 | Tomas |
77,161,846 | 4,794 | Model structure for regression from images | <p>I'm trying to build a TensorFlow model for analysing board games, so I started with a simpler 2D dataset. I generated 1000 images of black semicircles like these:</p>
<p><a href="https://i.sstatic.net/07Ik7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/07Ik7.png" alt="input image 1" /></a> <a href="https://i.sstatic.net/S5Gzn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/S5Gzn.png" alt="input image 2" /></a></p>
<p>I thought it would be a good exercise to try and recover the angle of the flat side. I labeled these two example images as 210.474Β° and 147.593Β°.</p>
<p>Unfortunately, the results I get are terrible. All the predictions on the test data are roughly 180Β°, presumably close to the mean value of the labels.</p>
<p>Can anyone give me advice on how to improve my model architecture or otherwise improve my results? If all of the input data is boolean pixels, do I need to normalize it?</p>
<p>I create the model like this:</p>
<pre><code>def build_and_compile_model():
num_channels = 200
kernel_size = 3
image_height = 64
image_width = 64
regularizer = regularizers.l2(0.0001)
model = keras.Sequential(
[layers.Conv2D(num_channels,
kernel_size,
padding='same',
activation='relu',
input_shape=(image_height, image_width, 1),
activity_regularizer=regularizer),
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)])
model.compile(loss='mean_absolute_error',
optimizer=tf.keras.optimizers.Adam(0.001))
return model
</code></pre>
<p>When I try to fit the model, it improves for a few epochs, then stabilizes at a high error.</p>
<p><a href="https://i.sstatic.net/2juM0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2juM0.png" alt="Plot of model training" /></a></p>
<p>Here's the complete example:</p>
<pre><code>import math
import shutil
import typing
from datetime import datetime
from pathlib import Path
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from PIL import Image, ImageDraw
import tensorflow as tf
from space_tracer import LivePillowImage
from tensorflow import keras
from tensorflow.python.keras import layers, regularizers
def build_and_compile_model():
num_channels = 200
kernel_size = 3
image_height = 64
image_width = 64
regularizer = regularizers.l2(0.0001)
model = keras.Sequential(
[layers.Conv2D(num_channels,
kernel_size,
padding='same',
activation='relu',
input_shape=(image_height, image_width, 1),
activity_regularizer=regularizer),
layers.Dense(64, activation='relu'),
layers.Dense(64, activation='relu'),
layers.Dense(1)])
model.compile(loss='mean_absolute_error',
optimizer=tf.keras.optimizers.Adam(0.001))
return model
def main():
image_folder = Path(__file__).parent / 'circle_images'
num_images = 1000
image_data, label_data = read_input_data(num_images, image_folder)
# Make NumPy printouts easier to read.
np.set_printoptions(precision=3, suppress=True)
image_count = image_data.shape[0]
image_data = image_data.reshape(image_data.shape + (1, ))
train_size = math.floor(image_count * 0.8)
train_dataset = image_data[:train_size, :, :]
test_dataset = image_data[train_size:, :, :]
train_labels = label_data[:train_size]
test_labels = label_data[train_size:]
test_results = {}
dnn_model = build_and_compile_model()
print('training dataset:', train_dataset.shape)
print('training labels:', train_labels.shape)
start = datetime.now()
history = dnn_model.fit(
train_dataset,
train_labels,
validation_split=0.2,
verbose=0, epochs=25)
print('Trained for', datetime.now() - start)
test_results['dnn_model'] = dnn_model.evaluate(test_dataset, test_labels, verbose=0)
print(pd.DataFrame(test_results, index=['Mean absolute error [game value]']).T)
test_predictions = dnn_model.predict(test_dataset).flatten()
print(test_labels[:10])
print(test_predictions[:10])
plot_loss(history)
def create_images(num_images: int, image_folder: Path) -> None:
print(f'Creating {num_images} images.')
image_folder.mkdir()
start_angles = np.random.random(num_images)
start_angles *= 360
rng = np.random.default_rng()
rng.shuffle(start_angles)
for i, start_angle in enumerate(start_angles):
image_path = image_folder / f'image{i}.png'
image = create_image(start_angle)
image.save(image_path)
label_text = '\n'.join(str(start_angle) for start_angle in start_angles)
(image_folder / 'labels.csv').write_text(label_text)
def create_image(start_angle: float) -> Image.Image:
image = Image.new('1', (64, 64)) # B&W 64x64
drawing = ImageDraw.Draw(image)
drawing.rectangle((0, 0, 64, 64), fill='white')
drawing.pieslice(((0, 0), (63, 63)),
-start_angle,
-start_angle+180,
fill='black')
return image
def read_input_data(num_images: int, image_folder: Path) -> typing.Tuple[
np.ndarray,
np.ndarray]:
""" Read input data from the image folder.
:returns: (images, labels)
"""
labels = []
if image_folder.exists():
with (image_folder / 'labels.csv').open() as f:
for line in f:
labels.append(float(line))
image_count = len(labels)
if image_count != num_images:
# Size has changed, so recreate the input data.
shutil.rmtree(image_folder, ignore_errors=True)
create_images(num_images, image_folder)
return read_input_data(num_images, image_folder)
label_data = np.array(labels)
images = np.zeros((image_count, 64, 64))
for i, image_path in enumerate(sorted(image_folder.glob('*.png'))):
image = Image.open(image_path)
bits = np.array(image)
images[i, :, :] = bits
return images, label_data
def plot_loss(history):
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['val_loss'], label='val_loss')
plt.ylim(bottom=0)
plt.xlabel('Epoch')
plt.ylabel('Error [angle]')
plt.legend()
plt.grid(True)
plt.show()
def demo():
image = create_image(226.634)
LivePillowImage(image).display()
if __name__ == '__main__':
main()
elif __name__ == '__live_coding__':
demo()
</code></pre>
<p>At the end, I see this output:</p>
<pre><code>Trained for 0:00:09.155005
Mean absolute error [game value]
dnn_model 92.051697
7/7 [==============================] - 0s 4ms/step
[210.474 147.593 327.796 120.112 163.402 178.04 333.604 342.488 119.694
240.8 ]
[177.15 181.242 181.242 181.242 181.242 181.242 181.242 181.242 181.242
181.242]
</code></pre>
<p>You can see that all the predictions are close to 180Β°.</p>
| <python><tensorflow><machine-learning><deep-learning><neural-network> | 2023-09-23 06:02:33 | 3 | 56,676 | Don Kirkby |
77,161,326 | 308,827 | groupby in pandas giving error when combining columns with NaNs | <p>I have the following dataframe and I want to combine the three columns <code>["Area (ha)", "Yield (tn per ha)", "Production (tn)"]</code> such that no NaNs are present:</p>
<pre><code> Country Crop Season Year Area Yield Production
0 Argentina maize 1 2000 3088715
22 Argentina maize 1 2000 3088715
44 Argentina maize 1 2000 3088715
66 Argentina maize 1 2000 3088715
88 Argentina maize 1 2000 3088715
110 Argentina maize 1 2000 3088715
132 Argentina maize 1 2000 3088715
154 Argentina maize 1 2000 3088715
176 Argentina maize 1 2000 3088715
198 Argentina maize 1 2000 3088715
220 Argentina maize 1 2000 3088715
242 Argentina maize 1 2000 3088715
264 Argentina maize 1 2000 3088715
8754 Argentina maize 1 2000 5.433
8776 Argentina maize 1 2000 5.433
8798 Argentina maize 1 2000 5.433
8820 Argentina maize 1 2000 5.433
8842 Argentina maize 1 2000 5.433
8864 Argentina maize 1 2000 5.433
8886 Argentina maize 1 2000 5.433
8908 Argentina maize 1 2000 5.433
8930 Argentina maize 1 2000 5.433
8952 Argentina maize 1 2000 5.433
8974 Argentina maize 1 2000 5.433
8996 Argentina maize 1 2000 5.433
9018 Argentina maize 1 2000 5.433
17508 Argentina maize 1 2000 16780650
17530 Argentina maize 1 2000 16780650
17552 Argentina maize 1 2000 16780650
17574 Argentina maize 1 2000 16780650
17596 Argentina maize 1 2000 16780650
17618 Argentina maize 1 2000 16780650
17640 Argentina maize 1 2000 16780650
17662 Argentina maize 1 2000 16780650
17684 Argentina maize 1 2000 16780650
17706 Argentina maize 1 2000 16780650
17728 Argentina maize 1 2000 16780650
17750 Argentina maize 1 2000 16780650
17772 Argentina maize 1 2000 16780650
</code></pre>
<p>Here is what I tried:
<code>df.groupby(["Country", "Crop", "Season", "Year"], dropna=False).mean().reset_index()</code></p>
<p>However I get this error: <code>*** TypeError: agg function failed [how->mean,dtype->object] </code></p>
<p>How do I fix it? Expected output is:</p>
<pre><code>Country Crop Season Year Area Yield Production
Argentina maize 1 2000 3088715 5.433 16780650
</code></pre>
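That `TypeError` is pandas saying at least one of the three value columns has `object` dtype — with data merged from several sources, the blank cells are often empty strings rather than `NaN`, and `mean()` cannot average strings. Coercing the columns to numeric first should fix it; a sketch on a reduced, assumed version of the frame:

```python
import pandas as pd

df = pd.DataFrame({
    "Country": ["Argentina"] * 3,
    "Crop": ["maize"] * 3,
    "Season": [1, 1, 1],
    "Year": [2000, 2000, 2000],
    "Area": [3088715, "", ""],      # blanks are empty strings, not NaN
    "Yield": ["", 5.433, ""],
    "Production": ["", "", 16780650],
})

value_cols = ["Area", "Yield", "Production"]
# errors="coerce" turns anything non-numeric (like "") into NaN,
# which mean() then skips.
df[value_cols] = df[value_cols].apply(pd.to_numeric, errors="coerce")

out = df.groupby(["Country", "Crop", "Season", "Year"], dropna=False).mean().reset_index()
print(out)
```

After coercion the original one-liner works unchanged, since the grouped mean ignores the `NaN` cells.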
| <python><pandas> | 2023-09-23 01:18:30 | 1 | 22,341 | user308827 |
77,161,295 | 19,251,203 | Reorder Series Data Based on Month | <p>I have a data frame with the following data:</p>
<pre><code>month
April 4484.900000
August 5664.419355
December 3403.806452
February 2655.298246
January 2176.338710
July 5563.677419
June 5772.366667
March 3692.258065
May 5349.774194
November 4247.183333
October 5199.225806
September 5766.516667
</code></pre>
<p>I want the data to be ordered from January to December, not alphabetically. Is there a way to reorder this?</p>
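One way is to sort by position in an explicit month list instead of alphabetically; with pandas, a Series indexed by month name can simply be reindexed by the same list. A standard-library sketch of the idea:

```python
MONTH_ORDER = ["January", "February", "March", "April", "May", "June",
               "July", "August", "September", "October", "November", "December"]

values = {
    "April": 4484.900000, "August": 5664.419355, "December": 3403.806452,
    "February": 2655.298246, "January": 2176.338710, "July": 5563.677419,
    "June": 5772.366667, "March": 3692.258065, "May": 5349.774194,
    "November": 4247.183333, "October": 5199.225806, "September": 5766.516667,
}

# Rebuild the mapping in calendar order rather than alphabetical order.
ordered = {month: values[month] for month in MONTH_ORDER}
print(list(ordered)[:3])  # ['January', 'February', 'March']
```

With a pandas Series `s` indexed by month name, the equivalent one-liner is `s.reindex(MONTH_ORDER)`.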
| <python><python-3.x><dataframe><series> | 2023-09-23 01:04:26 | 1 | 392 | user19251203 |
77,161,259 | 1,231,714 | Pandas - exclude rows that have wrong format | <p>I am logging some data and for some reason i have <code>@^</code> in the date column. Is there a way to exclude the rows that are not dates. It may happen that there are undesirable characters in other columns as well.</p>
<pre><code>2023/08/22 14:26:11, 1.3337, 1.0915, 1.2119, 1.5757
2023/08/22 14:27:15, 1.3161, 1.2086, 1.2007, 1.5865
2023/08/22 14:28:19, 1.2946, 1.2274, 1.2393, 1.5082
2023/08/22 14:29:23, 1.2868, 1.2155, 1.2886, 1.5413
2023/08/22 14:30:27, 1.3072, 1.2443, 1.3193, 1.5766
@^@^@2023/08/22 14:18:27, 1.5304, 1.4373, 1.4711, 1.7271
2023/08/22 14:19:31, 1.5675, 1.3973, 1.5239, 1.8931
</code></pre>
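One robust approach is to parse the first column with `errors="coerce"` so any row whose timestamp is polluted becomes `NaT`, then drop those rows; `pd.to_numeric` generalizes the same idea to the other columns. A sketch on an inline copy of the sample (the column names are invented):

```python
import io
import pandas as pd

raw = io.StringIO(
    "2023/08/22 14:26:11, 1.3337, 1.0915, 1.2119, 1.5757\n"
    "@^@^@2023/08/22 14:18:27, 1.5304, 1.4373, 1.4711, 1.7271\n"
    "2023/08/22 14:19:31, 1.5675, 1.3973, 1.5239, 1.8931\n"
)
df = pd.read_csv(raw, header=None, names=["ts", "a", "b", "c", "d"])

# Rows with stray characters fail to parse and become NaT ...
df["ts"] = pd.to_datetime(df["ts"], format="%Y/%m/%d %H:%M:%S", errors="coerce")
# ... and are then excluded.
df = df.dropna(subset=["ts"])
print(len(df))  # 2
```

If those rows should be kept rather than dropped, stripping the junk first — `df["ts"].str.lstrip("@^")` — before parsing recovers them instead.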
| <python><pandas><dataframe><csv> | 2023-09-23 00:41:39 | 2 | 1,390 | SEU |
77,161,209 | 22,466,650 | How to nest a list based on increasing sequences and ignore left overlapping ranges | <p>This is my input</p>
<pre><code>mylist = [2, 7, 8, 11, 7, 9, 10, 15, 22, 30, 32]
</code></pre>
<ul>
<li>from 2 to 11, it's increasing, so we need to grab the min max <code>[2, 11]</code></li>
<li>from 7 to 10 it's increasing, but we need to ignore it because the range
(7 to 10) is included in the first grabbed list</li>
<li>from 15 to 32, it's increasing, so we need to grab the min max: <code>[15, 32]</code></li>
</ul>
<p>the final list should be : <code>[[2, 11], [15, 32]]</code></p>
<p>I tried something like below, but it does not work:</p>
<pre><code>final = []
mi = mylist[0]
ma = mylist[1]
for i, j in zip(mylist, mylist[1:]):
if i < j:
ma = j
elif i > j:
mi = i
continue
elif mi == ma:
continue
final.append([mi, ma])
</code></pre>
<p><strong>update:</strong></p>
<p>Let me add more scenarios :</p>
<ul>
<li>for <code>[5, 8, 10, 3, 4, 5, 7]</code> we should get <code>[[5,10]]</code> because even if <code>[3, 7]</code> is overlapping with <code>[5, 10]</code>, the start of <code>[3, 7]</code> is behind <code>[5, 10]</code></li>
<li>for <code>[5, 8, 10, 8, 9, 12]</code> we should get <code>[[5,12]]</code> which is <code>[5,10] βͺ [8, 12]</code> because <code>[8, 12]</code> is overlapping with <code>[5, 10]</code> from the right (ahead of it)</li>
<li>for <code>[1, 3, 5, 4, 3, 2, 1]</code> we should get <code>[[1, 5]]</code> because from 4 to 1 it is a decreasing sequence, so we have to ignore it</li>
</ul>
<p><strong>I'm not necessarily looking for a Python code. I just need the algorithm or the correct way to approach this problem.</strong></p>
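The four scenarios don't quite pin down a single rule (the first example splits an overlapping run while the third unions it), but one consistent reading reproduces all of them: take each maximal strictly increasing run; ignore it if it starts before the last kept range; among its values above the last kept maximum, two or more start a new range, while a single leftover value — which cannot form a range on its own — extends the previous one. A sketch of that interpretation:

```python
def increasing_ranges(seq):
    # 1. Collect maximal strictly increasing runs of at least two elements.
    runs, i = [], 0
    while i < len(seq) - 1:
        if seq[i] < seq[i + 1]:
            j = i
            while j < len(seq) - 1 and seq[j] < seq[j + 1]:
                j += 1
            runs.append(seq[i:j + 1])
            i = j
        else:
            i += 1
    # 2. Resolve each run against the last kept range [lo, hi].
    kept = []
    for run in runs:
        if not kept or run[0] > kept[-1][1]:
            kept.append([run[0], run[-1]])     # disjoint: new range
            continue
        lo, hi = kept[-1]
        if run[0] < lo:                        # starts behind: ignore
            continue
        above = [v for v in run if v > hi]
        if len(above) >= 2:                    # enough values past hi
            kept.append([above[0], above[-1]]) # to form a new range
        elif len(above) == 1:                  # lone leftover value:
            kept[-1][1] = above[0]             # extend the old range
        # no values past hi: run fully covered, ignore it
    return kept

print(increasing_ranges([2, 7, 8, 11, 7, 9, 10, 15, 22, 30, 32]))  # [[2, 11], [15, 32]]
print(increasing_ranges([5, 8, 10, 3, 4, 5, 7]))                   # [[5, 10]]
print(increasing_ranges([5, 8, 10, 8, 9, 12]))                     # [[5, 12]]
print(increasing_ranges([1, 3, 5, 4, 3, 2, 1]))                    # [[1, 5]]
```

This is one reading among several, so treat the one-leftover-value rule as an assumption to confirm against more cases.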
| <python><algorithm><math> | 2023-09-23 00:15:19 | 2 | 1,085 | VERBOSE |
77,161,176 | 2,824,791 | Simple Python Script in Nextflow fails | <p>I am attempting to run Python from inside a <strong>Nextflow</strong> pipeline launched via <strong>Nextflow-Tower</strong>. I forked the <strong>nextflow-io/hello</strong> repo and made sure I could launch it successfully from my own repo. What am I doing wrong? If there is a simple, foolproof Python pipeline repo that can be launched using Nextflow-Tower, that would be helpful.</p>
<p>My dir structure is:</p>
<p>root<br>
--main.nf<br>
--nextflow.config <br>
--bin <br>
----test.py<br></p>
<p><strong>nextflow.config</strong></p>
<pre><code>process.container = "us-east1-docker.pkg.dev/xxxx-yyyy/pass/python3:latest"
docker.enabled = true
</code></pre>
<p><strong>main.nf</strong></p>
<pre><code>#!/usr/bin/env nextflow
nextflow.enable.dsl=2
process sayHello {
input:
val x
output:
stdout
script:
"""
echo '$x world!'
"""
}
process processPython {
input:
val y
output:
stdout
script:
"""
#!/usr/bin/env python3
test.py
"""
}
workflow {
Channel.of('Bonjour', 'Ciao', 'Hello', 'Hola') | sayHello | view
Channel.of('foo', 'bar', 'cluster', 'frack') | processPython | view
}
</code></pre>
<p><strong>test.py</strong></p>
<pre><code>import sys
print(f"Python Version: {sys.version}")
</code></pre>
<p><strong>Error report</strong></p>
<pre><code>Error executing process > 'processPython (1)'
Caused by:
Process `processPython (1)` terminated with an error exit status (127)
Command executed:
#!/usr/bin/env python3
test.py
Command exit status:
127
Command output:
(empty)
Command error:
/usr/bin/env: 'python3': No such file or directory
Work dir:
gs://nf-tower-6897ad32-d3d4-4bda-ade1-97d25c0e680c/scratch/4wCvYTX4GV6S3k/98/707cf12d829d888f4f5179ee1ae55e
Tip: you can try to figure out what's wrong by changing to the process work dir and showing the script file named `.command.sh`
</code></pre>
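Two separate issues appear to be in play. The <code>/usr/bin/env: 'python3': No such file or directory</code> error means the container resolved for this process has no <code>python3</code> on its <code>PATH</code>, so the image itself needs checking. Independently, the script block mixes two styles: with a <code>#!/usr/bin/env python3</code> shebang the body must be Python source, but <code>test.py</code> on its own line is not a call — Nextflow's convention is to put executable scripts in <code>bin/</code> (made executable, with their own shebang) and invoke them as shell commands. A hedged sketch of the process rewritten that way:

```groovy
process processPython {
    input:
    val y

    output:
    stdout

    script:
    // No shebang: the block runs under bash, and bin/test.py is on PATH
    // provided it is executable (chmod +x bin/test.py) and starts with
    // its own `#!/usr/bin/env python3` line.
    """
    test.py
    """
}
```

Alternatively, keep the shebang and inline the Python directly (e.g. <code>print('$y')</code>) instead of naming the file.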
| <python><nextflow><nextflow-tower> | 2023-09-22 23:59:57 | 2 | 5,096 | jlo-gmail |
77,161,173 | 395,857 | How can I multithreadedly download a HuggingFace dataset? | <p>I want to download a HuggingFace dataset, e.g. <a href="https://huggingface.co/datasets/uonlp/CulturaX" rel="nofollow noreferrer"><code>uonlp/CulturaX</code></a>:</p>
<pre><code>from datasets import load_dataset
ds = load_dataset("uonlp/CulturaX", "en")
</code></pre>
<p>However, it downloads on one thread at 50 MB/s, while my network is 10 Gbps. Since this dataset is 16 TB, I'd prefer to download it faster so that I don't have to wait for a few days. How can I multithreadedly download a HuggingFace dataset?</p>
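Recent versions of <code>datasets</code> can parallelize the download and preparation themselves via a <code>num_proc</code> argument to <code>load_dataset</code>. A sketch — the worker count is an arbitrary guess, the function name is mine, and a 16 TB dataset still needs matching disk throughput:

```python
def load_culturax_parallel(num_proc: int = 16):
    # Deferred import so the sketch is readable without `datasets` installed.
    from datasets import load_dataset
    return load_dataset("uonlp/CulturaX", "en", num_proc=num_proc)

if __name__ == "__main__":
    ds = load_culturax_parallel()
```

If only the raw files are wanted, <code>huggingface_hub.snapshot_download</code> with its <code>max_workers</code> parameter is another option; check your installed versions for both, since the parameters depend on the release.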
| <python><multithreading><download><huggingface><huggingface-datasets> | 2023-09-22 23:59:18 | 2 | 84,585 | Franck Dernoncourt |
77,160,966 | 9,072,753 | Is there a convention which fields are reserved for implementation in Python? | <p>I read the documentation of <code>threading.Thread</code>, which mentions:</p>
<blockquote>
<p>There are two ways to specify the activity: by passing a callable object to the constructor, or by overriding the run() method in a subclass</p>
</blockquote>
<p>I have written the following program to test it:</p>
<pre><code>import threading

class MyThread(threading.Thread):
def __init__(self):
super().__init__()
self._started = False
def run(self):
self._started = True
obj = MyThread()
obj.start()
obj.join()
print(self._started)
</code></pre>
<p>However it fails:</p>
<pre><code>$ python3 ./1.py
Traceback (most recent call last):
File "/dev/shm/.1000.home.tmp.dir/./1.py", line 10, in <module>
obj.start()
File "/usr/lib/python3.11/threading.py", line 951, in start
if self._started.is_set():
^^^^^^^^^^^^^^^^^^^^
AttributeError: 'bool' object has no attribute 'is_set'
</code></pre>
<p>The documentation does not mention that "_started" is used internally for threading.Thread.</p>
<p>What should I expect from such a python program? Should I expect that all undocumented attributes of <code>threading.Thread</code> will start with a leading <code>_</code> and when inheriting from <code>threading.Thread</code> I should not make any fields with <code>_</code>? Finally, which attributes are safe to set and which are not?</p>
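<p>For what it's worth, one conventional defence is double-underscore name mangling: an attribute spelled <code>__ran</code> inside <code>MyThread</code> is stored as <code>_MyThread__ran</code>, so it cannot collide with any single-underscore internals of the base class. A sketch (the attribute name here is mine, not from the original code):</p>

```python
import threading

class MyThread(threading.Thread):
    def __init__(self):
        super().__init__()
        # Mangled to _MyThread__ran, so it cannot shadow Thread's _started
        self.__ran = False

    def run(self):
        self.__ran = True

    @property
    def ran(self):
        return self.__ran

obj = MyThread()
obj.start()
obj.join()
```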
| <python><python-3.x><inheritance> | 2023-09-22 22:33:47 | 0 | 145,478 | KamilCuk |
77,160,935 | 5,256,563 | Python PIL error 'L' format requires 0 <= number <= 4294967295 | <p>I create a numpy array of int, then I transform it as a PIL <code>Image</code> of type <code>uint32</code>, and then I try to save it. See simple mock code below:</p>
<pre><code>import numpy
from PIL import Image
resLabels = numpy.ndarray(shape=(38000, 38000), dtype=numpy.int32)
resLabels.fill(0)
resLabels[0,0] = 10000000 # Just a dumb value to ensure that the int32 is required.
arr = resLabels.astype("uint32")
im = Image.fromarray(arr)
im.save("Test.tif")
</code></pre>
<p>Unfortunately I get the following error on the last line when trying to save the image:</p>
<pre><code>Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/Image.py", line 2431, in save
save_handler(self, fp, filename)
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/TiffImagePlugin.py", line 1860, in _save
offset = ifd.save(fp)
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/TiffImagePlugin.py", line 945, in save
result = self.tobytes(offset)
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/TiffImagePlugin.py", line 889, in tobytes
data = self._write_dispatch[typ](self, *values)
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/TiffImagePlugin.py", line 699, in <lambda>
b"".join(self._pack(fmt, value) for value in values)
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/TiffImagePlugin.py", line 699, in <genexpr>
b"".join(self._pack(fmt, value) for value in values)
File "/Users//miniconda3/envs/Py39dev/lib/python3.9/site-packages/PIL/TiffImagePlugin.py", line 666, in _pack
return struct.pack(self._endian + fmt, *values)
struct.error: 'L' format requires 0 <= number <= 4294967295
</code></pre>
<p>After a quick test, it seems to come from the image dimensions. Everything works fine for images 30Kx30K, but fails for images 40Kx40K.</p>
<p>How can this be fixed?</p>
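<p>For a sense of scale (assuming the overflowing tag is the strip byte count, which the traceback shows being packed with the 32-bit <code>'L'</code> struct format): the raw pixel data of a 38Kx38K 32-bit image no longer fits in an unsigned 32-bit integer, while a 30Kx30K one still does:</p>

```python
MAX_UINT32 = 2**32 - 1  # 4294967295, the upper bound of the 'L' struct format

def raw_bytes(side, bytes_per_pixel=4):  # mode 'I' stores 4 bytes per pixel
    return side * side * bytes_per_pixel

small = raw_bytes(30_000)  # 3.6e9 bytes, fits
big = raw_bytes(38_000)    # ~5.8e9 bytes, overflows
```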
| <python><python-3.x><python-imaging-library> | 2023-09-22 22:21:39 | 1 | 5,967 | FiReTiTi |
77,160,797 | 9,811,964 | Automatically shift rows with same spatial coordinates into a different cluster | <p>I have a dataframe <code>df</code>:</p>
<pre><code>import pandas as pd
data = {
"latitude": [49.5659508, 49.568089, 49.5686342, 49.5687609, 49.5695834, 49.5706579, 49.5711228, 49.5716422, 49.5717749, 49.5619579, 49.5619579, 49.5628938, 49.5628938, 49.5630028, 49.5633175, 49.56397639999999, 49.566359, 49.56643220000001, 49.56643220000001, 49.5672061, 49.567729, 49.5677449, 49.5679685, 49.5679685, 49.5688543, 49.5690616, 49.5713705],
"longitude": [10.9873409, 10.9894035, 10.9896749, 10.9887881, 10.9851579, 10.9853273, 10.9912959, 10.9910182, 10.9867083, 10.9995758, 10.9995758, 11.000319, 11.000319, 10.9990996, 10.9993819, 11.004145, 11.0003023, 10.9999593, 10.9999593, 10.9935709, 11.0011213, 10.9954016, 10.9982288, 10.9982288, 10.9975928, 10.9931367, 10.9939141],
"cluster": [0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2],
"dup_location_count": [0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 2, 2, 2, 0, 0, 0, 0, 2, 2, 0, 0, 0, 2, 2, 0, 0, 0]
}
df = pd.DataFrame(data)
df.head(11)
latitude longitude cluster dup_location_count
0 49.565951 10.987341 0 0
1 49.568089 10.989403 0 0
2 49.568634 10.989675 0 0
3 49.568761 10.988788 0 0
4 49.569583 10.985158 0 0
5 49.570658 10.985327 0 0
6 49.571123 10.991296 0 0
7 49.571642 10.991018 0 0
8 49.571775 10.986708 0 0
9 49.561958 10.999576 1 2
10 49.561958 10.999576 1 2
</code></pre>
<p>The columns <code>latitude</code> and <code>longitude</code> represent the spatial coordinates of people. The column <code>cluster</code> represents the cluster. People who live at the same building or close to each other are usually in the same cluster. Each cluster has a cluster size of 9 people. The column <code>dup_location_count</code> represents the number of other people who share exactly the same coordinates.</p>
<p>I am looking for an automatic way to move people who share the same cluster, but have exactly the same coordinates, into a different cluster (see for example index 9 and 10). Preferably into a cluster which is "relatively close" to the original cluster. I have no exact definition of "relatively close" but the clusters with similar cluster numbers are closer to each other.</p>
<p>Note: make sure that even after shifting the people into different clusters, the cluster size stays the same (9).</p>
<p>Ideally I end up with a dataframe where each cluster contains people who live not at the same place and each cluster contains exactly 9 rows.</p>
<p>The original dataframe has 3k rows. Therefore I need some kind of algorithm to do the job.
Any ideas?</p>
<p><strong>Manual solution</strong></p>
<pre><code># sort values
df.sort_values(by=["cluster", "latitude", "longitude"], inplace=True)
df.reset_index(drop=True, inplace=True)
# swap people
cluster_size = 9
temp_value = df.loc[2, "cluster"]
df.loc[2, "cluster"] = df.loc[2+cluster_size, "cluster"]
df.loc[2+cluster_size, "cluster"] = temp_value
temp_value = df.loc[4, "cluster"]
df.loc[4, "cluster"] = df.loc[4+cluster_size, "cluster"]
df.loc[4+cluster_size, "cluster"] = temp_value
temp_value = df.loc[5, "cluster"]
df.loc[5, "cluster"] = df.loc[5+2*cluster_size, "cluster"]
df.loc[5+2*cluster_size, "cluster"] = temp_value
temp_value = df.loc[18, "cluster"]
df.loc[18, "cluster"] = df.loc[18+cluster_size, "cluster"]
df.loc[18+cluster_size, "cluster"] = temp_value
temp_value = df.loc[20, "cluster"]
df.loc[20, "cluster"] = df.loc[20+cluster_size, "cluster"]
df.loc[20+cluster_size, "cluster"] = temp_value
# end of cluster therefore go backwards
df.sort_values(by=["cluster", "latitude", "longitude"], inplace=True)
shift_value = cluster_size - 1
temp_value = df.loc[30, "cluster"]
df.loc[30, "cluster"] = df.loc[30-shift_value, "cluster"]
df.loc[30-shift_value, "cluster"] = temp_value
</code></pre>
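<p>The repeated three-line swap pattern above can at least be factored into a helper before tackling the automatic selection of rows (a sketch; it only generalizes the manual moves, it does not decide which rows to move):</p>

```python
import pandas as pd

def swap_cluster(df, i, j, col="cluster"):
    # Swap the cluster labels of rows i and j in place.
    df.loc[i, col], df.loc[j, col] = df.loc[j, col], df.loc[i, col]

# tiny illustrative frame
toy = pd.DataFrame({"cluster": [0, 0, 1, 1]})
swap_cluster(toy, 1, 2)
```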
<p>This way I end up with the final result:</p>
<pre><code>data = {
"latitude": [49.5633175, 49.5659508, 49.566359, 49.56643220000001, 49.56643221, 49.567729, 49.567729, 49.568089, 49.5687609, 49.5630028, 49.5659508, 49.56643220000001, 49.56643221, 49.5686342, 49.5695834, 49.5706579, 49.5716422, 49.5717749, 49.5619579, 49.5628938, 49.5633175, 49.56397639999999, 49.56397639999999, 49.566359, 49.56643221, 49.5677449, 49.5679685, 49.5619579, 49.5628938, 49.5630028, 49.5672061, 49.5679685, 49.5688543, 49.5690616, 49.5711228, 49.5713705],
"longitude": [10.9993819, 10.9999593, 11.0003023, 10.9999593, 11.001122, 10.9982288, 11.0011213, 10.9894035, 10.9887881, 10.9873409, 10.9873409, 10.9999593, 11.001122, 10.9896749, 10.9851579, 10.9853273, 10.9910182, 10.9867083, 10.9995758, 11.000319, 11.0003023, 10.9999593, 11.004145, 10.9935709, 11.001122, 10.9954016, 10.9982288, 10.9995758, 11.000319, 10.9990996, 10.9935709, 10.9982288, 10.9975928, 10.9931367, 10.9912959, 10.9939141],
"cluster": [0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 3]
}
df = pd.DataFrame(data)
</code></pre>
| <python><pandas><dataframe><cluster-computing> | 2023-09-22 21:38:55 | 1 | 1,519 | PParker |
77,160,785 | 2,363,252 | sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table expected to update X row(s); Y were matched | <p>Using sqlalchemy 2.0.21 with postgresql 14, I have the following model:</p>
<pre class="lang-py prettyprint-override"><code>
class MyModel(TimestampedMixin):
id = Column(String(12), primary_key=True)
rating = Column(Float, index=True)
... more fields ...
</code></pre>
<p>Inspired by the <a href="https://docs.sqlalchemy.org/en/20/orm/queryguide/dml.html#orm-bulk-update-by-primary-key" rel="nofollow noreferrer">docs</a>, I want to update all <code>rating</code> values (currently Null) for each <strong>matching</strong> <code>id</code> from a list of dicts. If <code>id</code> is not already in the database it should ignore that dict and continue, since it's an update op...</p>
<pre class="lang-py prettyprint-override"><code> # ...
sess.flush()
sess.execute(
update(MyModel),
[
{
'id': row[0], # str, may or may not exist
'rating': row[1] # float, Null so far
}
for row in values # values is a list of lists
]
)
sess.commit()
</code></pre>
<p>But I get:</p>
<pre class="lang-py prettyprint-override"><code>Traceback (most recent call last):
File "/home/user/project/scripts/populate_db.py", line 128, in <module>
sess.execute(
File "/home/user/.venv/project/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2262, in execute
return self._execute_internal(
File "/home/user/.venv/project/lib/python3.10/site-packages/sqlalchemy/orm/session.py", line 2144, in _execute_internal
result: Result[Any] = compile_state_cls.orm_execute_statement(
File "/home/user/.venv/project/lib/python3.10/site-packages/sqlalchemy/orm/bulk_persistence.py", line 1596, in orm_execute_statement
result = _bulk_update(
File "/home/user/.venv/project/lib/python3.10/site-packages/sqlalchemy/orm/bulk_persistence.py", line 332, in _bulk_update
persistence._emit_update_statements(
File "/home/user/.venv/project/lib/python3.10/site-packages/sqlalchemy/orm/persistence.py", line 943, in _emit_update_statements
raise orm_exc.StaleDataError(
sqlalchemy.orm.exc.StaleDataError: UPDATE statement on table 'mymodel' expected to update 80000 row(s); 24017 were matched.
python-BaseException
</code></pre>
<p>I don't understand the reasoning here: why shouldn't a partial match simply be the expected behavior for an update? What am I missing?</p>
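<p>One workaround (a sketch, independent of the ORM internals) is to pre-filter the parameter list to primary keys that actually exist, so the rowcount check cannot trip. The helper and sample data below are illustrative; in real code the id set would come from a <code>select(MyModel.id)</code> query:</p>

```python
def filter_existing(params, existing_ids):
    # Keep only update dicts whose primary key is present in the table.
    existing = set(existing_ids)
    return [p for p in params if p["id"] in existing]

params = [{"id": "abc", "rating": 1.5}, {"id": "missing", "rating": 2.0}]
kept = filter_existing(params, ["abc", "xyz"])
```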
| <python><postgresql><sqlalchemy> | 2023-09-22 21:35:11 | 1 | 2,865 | stelios |
77,160,753 | 5,937,760 | How to format unconventional output into simple json dump | <p>In the answer from this post:
<a href="https://stackoverflow.com/a/55832055/5937760">https://stackoverflow.com/a/55832055/5937760</a></p>
<p>The output is:</p>
<pre><code>{TopicPartition(topic=u'python-test', partition=0): 5,
TopicPartition(topic=u'python-test', partition=1): 20,
TopicPartition(topic=u'python-test', partition=2): 0}
</code></pre>
<p>When I do a <code>json.dumps</code> on it, I get this:</p>
<pre><code>[["python-test", 0], ["python-test", 1], ["python-test", 2]]
</code></pre>
<p>How can I get it to output like so:</p>
<pre><code>[[0,5],[1,20],[2,0]]
</code></pre>
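<p>For illustration (using a <code>namedtuple</code> stand-in for kafka's <code>TopicPartition</code>), building the <code>[partition, offset]</code> pairs explicitly before serializing gives that shape:</p>

```python
import json
from collections import namedtuple

# Stand-in for kafka.TopicPartition, purely for illustration.
TopicPartition = namedtuple("TopicPartition", ["topic", "partition"])

end_offsets = {
    TopicPartition("python-test", 0): 5,
    TopicPartition("python-test", 1): 20,
    TopicPartition("python-test", 2): 0,
}

pairs = [[tp.partition, offset] for tp, offset in end_offsets.items()]
result = json.dumps(pairs)
```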
| <python><json><kafka-python> | 2023-09-22 21:29:11 | 1 | 1,317 | sojim2 |
77,160,736 | 188,159 | How to insert (append) a string containing XML as XML into inner XML (or remove a parent tag but keep the content) with lxml in Python? | <p>I'm trying to insert text into XML with lxml. The text contains XML which is supposed to become part of the XML it gets inserted into.</p>
<p>The following code does not work:</p>
<pre class="lang-py prettyprint-override"><code>from lxml import etree
tree = etree.fromstring('<tag><src></src><src></src></tag>')
new_mix = 'This <x id="1"/> rocks the <x id="2"/>!'
for src in tree.findall('.//src'):
src.append(etree.fromstring(new_mix))
print(etree.tostring(tree))
</code></pre>
<pre><code>lxml.etree.XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
</code></pre>
<p>If I add a <code><p></p></code> parent tag to <code>new_mix</code>, so it becomes <code>'<p>This <x id="1"/> rocks the <x id="2"/>!</p>'</code>, it works:</p>
<pre class="lang-xml prettyprint-override"><code>b'<tag><src><p>This <x id="1"/> rocks the <x id="2"/>!</p></src><src><p>This <x id="1"/> rocks the <x id="2"/>!</p></src></tag>'
</code></pre>
<p>But how do I avoid or remove the <code><p></code> or any other parent tags while keeping the content?</p>
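<p>One common workaround (sketched here with the standard library's <code>xml.etree.ElementTree</code>; lxml elements expose the same <code>.text</code>/<code>append</code> API) is to keep the throwaway <code><p></code> wrapper but transplant its text and children into the target, then discard it:</p>

```python
import xml.etree.ElementTree as ET

tree = ET.fromstring('<tag><src></src><src></src></tag>')
new_mix = 'This <x id="1"/> rocks the <x id="2"/>!'

for src in tree.findall('.//src'):
    wrapper = ET.fromstring('<p>' + new_mix + '</p>')  # temporary parent
    src.text = (src.text or '') + (wrapper.text or '')
    for child in list(wrapper):  # move the <x> elements; their tails come along
        src.append(child)

result = ET.tostring(tree)
```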
| <python><xml><lxml> | 2023-09-22 21:22:25 | 4 | 9,813 | qubodup |
77,160,524 | 2,291,710 | What is type annotation for "any callable but a class" in Python using Mypy? | <p>I'm trying to perfectly type annotate the following Python function:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Callable, Any
import inspect
def foo(func: Callable[..., Any]) -> None:
if inspect.isclass(func):
raise ValueError
# Do something...
class Bad:
pass
def good() -> Bad:
return Bad()
foo(good) # OK
foo(Bad) # NOK
</code></pre>
<p>I would like to narrow down the <code>Callable[..., Any]</code> type so that it rejects <code>Bad</code> passed as argument.</p>
<p>See Mypy playground <a href="https://mypy-play.net/?mypy=latest&python=3.10&flags=show-error-codes%2Cstrict&gist=ded1db9e24ac4ef3aaec2a01b9f55463" rel="nofollow noreferrer">here</a>.</p>
<p>I can't find anything in the <a href="https://docs.python.org/3/library/typing.html" rel="nofollow noreferrer"><code>typing</code></a> module nor the <a href="https://mypy.readthedocs.io/en/stable/" rel="nofollow noreferrer">Mypy</a> documentation to distinguish between a simple function and a class. Is this even possible?</p>
<p>It's not clear why <code>type(Bad)</code> and <code>type(good)</code> are very distinct at run-time, but I can't find the equivalent for type annotation.</p>
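<p>For reference, the runtime distinction is easy to observe, even though (to my knowledge) no standard annotation captures "callable but not a class" (a quick check):</p>

```python
import inspect
import types

class Bad:
    pass

def good() -> Bad:
    return Bad()

bad_is_class = inspect.isclass(Bad)                  # True
good_is_func = isinstance(good, types.FunctionType)  # True
bad_meta, good_type = type(Bad), type(good)          # type vs function
```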
| <python><mypy><python-typing> | 2023-09-22 20:34:09 | 2 | 19,937 | Delgan |
77,160,469 | 471,376 | mock patch for the entire test session | <p><em><strong>tl;dr</strong></em> how to have a <code>mock.patch</code> for a <code>@classmethod</code> last an entire test session instead of only within <code>with</code> scope or function scope?</p>
<p>I want to <a href="https://docs.python.org/3/library/unittest.mock.html#unittest.mock.patch" rel="nofollow noreferrer">mock patch</a> a class method. However, I would like to run <code>patch.object</code> once instead of once within every test.</p>
<p>Currently, I must call <code>patch.object</code> twice</p>
<pre class="lang-py prettyprint-override"><code>class MyClass():
@classmethod
def print_hello(cls):
print("hello from the real MyClass.print_hello")
def do_something(self):
pass
def mock_print_hello(_cls):
print("hello from the patched mock_print_hello")
class TestMyClass(unittest.TestCase):
def test_init(self):
with mock.patch.object(MyClass, "print_hello", new_callable=mock_print_hello) as patch:
MyClass.print_hello()
MyClass()
def test_do_something(self):
with mock.patch.object(MyClass, "print_hello", new_callable=mock_print_hello) as patch:
MyClass.print_hello()
MyClass().do_something()
</code></pre>
<p>However, I would like to only call <code>patch.object</code> once:</p>
<pre class="lang-py prettyprint-override"><code>class MyClass():
@classmethod
def print_hello(cls) :
print("hello from the real MyClass.print_hello")
def do_something(self):
pass
def mock_print_hello(_cls):
print("hello from the patched mock_print_hello")
class TestMyClass(unittest.TestCase):
@classmethod
def setUpClass(cls):
# this patch will not remain after setUpClass returns
patch = mock.patch.object(MyClass, "print_hello", new_callable=mock_print_hello)
def test_init(self):
# this calls the real MyClass.print_hello, not mock_print_hello
MyClass.print_hello()
MyClass()
def test_do_something(self):
# this calls the real MyClass.print_hello, not mock_print_hello
MyClass.print_hello()
MyClass().do_something()
</code></pre>
<p>In practice, patching <code>MyClass.print_hello</code> within <code>setUpClass</code> does not last beyond the scope of the function. In other words, <code>test_init</code> and <code>test_do_something</code> do not call <code>mock_print_hello</code>; they call the real <code>MyClass.print_hello</code>.</p>
<p>How can I have the mock patch last beyond the scope of one function?</p>
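<p>For comparison, a pattern that does survive the whole class (a sketch; it uses a plain <code>MagicMock</code> replacement instead of <code>new_callable</code>, and <code>addClassCleanup</code> requires Python 3.8+): start the patcher once in <code>setUpClass</code> and register its <code>stop</code> as a class cleanup:</p>

```python
import io
import unittest
from unittest import mock

class MyClass:
    @classmethod
    def print_hello(cls):
        return "real"

class TestMyClass(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        patcher = mock.patch.object(MyClass, "print_hello", return_value="patched")
        patcher.start()                    # the patch takes effect now...
        cls.addClassCleanup(patcher.stop)  # ...and is undone after the class runs

    def test_one(self):
        self.assertEqual(MyClass.print_hello(), "patched")

    def test_two(self):
        self.assertEqual(MyClass.print_hello(), "patched")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestMyClass)
result = unittest.TextTestRunner(stream=io.StringIO()).run(suite)
```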
| <python><python-unittest><python-mock><python-unittest.mock> | 2023-09-22 20:22:13 | 1 | 7,289 | JamesThomasMoon |
77,160,268 | 19,127,570 | How to reconstruct the 3D coordinate system using feature points from an image? | <p>Given a picture of a rubiks cube, is it possible to construct the world space by using feature points from the faces and edges in the image?</p>
<p>By using some Topological Data Analysis methods I can take an image of a rubiks cube and identify the squares on each face. I can also determine the vertices and edges that are visible (see examples below). Using this data, it is possible to separate the faces and find associated angles. There seems to be plenty of information to construct a 3D model of the cube, but I cannot figure out how.</p>
<h3>Examples of Feature Points</h3>
<p><img src="https://i.sstatic.net/mPHbq.png" alt="Cube with faces identified" /></p>
<p><img src="https://i.sstatic.net/LBU1D.png" alt="Cube with faces and edges identified" /></p>
<p>I was able to reconstruct a 2D model of the cube with each face colored correctly. The issue is still that there is no depth to the model.</p>
<p><a href="https://i.sstatic.net/2EXrq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2EXrq.png" alt="2D model of the cube" title="Cube from image in 2D" /></a></p>
<p>I tried calculating the depth as a function of the distance from the center vertex (see below). This didn't work because it created a cone instead of flat faces. <strong>I think I will need to use some sort of transformation matrix to get this to work properly.</strong></p>
<pre class="lang-py prettyprint-override"><code>fig = plt.figure()
ax = fig.add_subplot(projection='3d')
# Function to get the depth values
# based on distance from the center vertex
def zVal(xPts, yPts, center, theta):
zLst = np.zeros(xPts.shape).flatten()
tan = np.tan(np.deg2rad(theta))
x = xPts.flatten()
y = yPts.flatten()
for i in range(x.shape[0]):
dist = np.linalg.norm(np.array([x[i],y[i]]) - np.array(center))
zLst[i] = (dist*tan)
return zLst.reshape(xPts.shape)
# X and Y data comes from lstPairs0 which is a list of each line that
# connects the feature points on a face
xs = np.array(sum(lstPairs0, [])).T[0].reshape((8,9))
ys = np.array(sum(lstPairs0, [])).T[1].reshape((8,9))
# calculating depth values
zs = (zVal(xs, ys, [175,140], 45))
ax.scatter(xs, ys, zs)
plt.show()
</code></pre>
<p>Here is some sample data for the lines on one of the faces. Each row in the array is three feature points that make a line.</p>
<pre class="lang-py prettyprint-override"><code>lstPairs = [[[216.0, 207.0], [239.0, 181.0], [259.0, 158.0]],
[[216.0, 207.0], [185.0, 213.0], [144.0, 221.0]],
[[221.0, 66.0], [242.0, 94.0], [262.0, 121.0]],
[[221.0, 66.0], [187.0, 58.0], [146.0, 49.0]],
[[92.0, 136.0], [104.0, 167.0], [118.0, 199.0]],
[[92.0, 136.0], [104.0, 104.0], [119.0, 66.0]],
[[259.0, 158.0], [207.0, 185.0], [144.0, 221.0]],
[[262.0, 121.0], [210.0, 90.0], [146.0, 49.0]]]
</code></pre>
<p>The transformation matrix is probably the key, but please let me know what y'all think!</p>
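<p>On the transformation-matrix idea: a flat face maps to a canonical square through a homography, and four point correspondences determine it via the direct linear transform. A NumPy sketch (the corner ordering below is an illustrative guess from the sample data, not a verified correspondence):</p>

```python
import numpy as np

def homography(src, dst):
    # Solve the 8 DLT equations for H, with the bottom-right entry fixed to 1.
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, pt):
    # Apply the projective map and de-homogenize.
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

face = [(92.0, 136.0), (221.0, 66.0), (262.0, 121.0), (144.0, 221.0)]
square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = homography(face, square)
```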
| <python><image-processing><3d><coordinate-transformation> | 2023-09-22 19:35:59 | 0 | 773 | trent |
77,160,258 | 8,942,319 | SQL Alchemy: table.col == None or table.col.is_(None)? | <p>Running SQLAlchemy version < 2.0.0 and mysql. Wondering what the difference is between these</p>
<pre><code>select_stmt.where(
table.c.my_column.is_(None)
)
</code></pre>
<p>versus</p>
<pre><code>select_stmt.where(
table.c.my_column == None
)
</code></pre>
<p>Or even</p>
<pre><code>select_stmt.where(
table.c.my_column.is_(null())
)
</code></pre>
| <python><mysql><sqlalchemy> | 2023-09-22 19:34:03 | 1 | 913 | sam |
77,160,202 | 1,609,514 | How to manually select features for Scikit-Learn model regression? | <p>There are various methods for doing <a href="https://scikit-learn.org/stable/modules/classes.html#module-sklearn.feature_selection" rel="nofollow noreferrer">automated feature selection</a> in Scikit-learn.</p>
<p>E.g.</p>
<pre><code>from sklearn.feature_selection import SelectKBest, f_regression

my_feature_selector = SelectKBest(score_func=f_regression, k=3)
my_feature_selector.fit_transform(X, y)
</code></pre>
<p>The selected features are then retrievable using</p>
<pre><code>feature_idx = my_feature_selector.get_support(indices=True)
feature_names = X.columns[feature_idx]
</code></pre>
<p>(Note, in my case <code>X</code> and <code>y</code> are Pandas dataframes with named columns).</p>
<p>They are also saved as an attribute of a fitted model:</p>
<pre><code>feature_names = my_model.feature_names_in_
</code></pre>
<p>However, I want to build a pipeline with a manual (i.e. pre-specified) set of features.</p>
<p>Obviously, I could manually select the features from the full data-set every time I do training or prediction:</p>
<pre><code>model1_feature_names = ['MedInc', 'AveRooms', 'Latitude']
model1.fit(X[model1_feature_names], y)
y_pred1 = model1.predict(X[model1.feature_names_in_])
</code></pre>
<p>But I want a more convenient way to construct different models (or pipelines) each of which uses a potentially different set of (manually specified) features. Ideally, I would specify the <code>feature_names_in_</code> as an initialization parameter so that later I don't have to worry about transforming the data and can run my model (or pipeline) on the full data set as follows:</p>
<pre><code>model1.fit(X, y) # uses a pre-defined sub-set of features in X
model2.fit(X, y) # uses a different sub-set of features
y_pred1 = model1.predict(X)
y_pred2 = model2.predict(X)
</code></pre>
<p>Do I need to build a <a href="https://stackoverflow.com/a/65178066/1609514">custom feature selector</a> to do this? Surely there's an easier way.</p>
<p>I guess I was expecting to find something like a built-in <code>FeatureSelector</code> class that I could use in a pipeline as follows:</p>
<pre><code>my_feature_selector1 = FeatureSelector(feature_names=['MedInc', 'AveRooms', 'Latitude'])
my_feature_selector1.fit_transform(X, y) # This would do nothing
pipe1 = Pipeline([('feature_selector', my_feature_selector1), ('model', LinearRegression())])
</code></pre>
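<p>A minimal version of that imagined <code>FeatureSelector</code> can be written as a tiny transformer (a sketch in plain pandas; for full scikit-learn <code>Pipeline</code> support you would also inherit <code>BaseEstimator</code>/<code>TransformerMixin</code>, and <code>ColumnTransformer</code> offers name-based selection out of the box):</p>

```python
import pandas as pd

class FeatureSelector:
    """Pass-through step that keeps only a fixed list of columns."""

    def __init__(self, feature_names):
        self.feature_names = list(feature_names)

    def fit(self, X, y=None):
        return self

    def transform(self, X):
        return X[self.feature_names]

    def fit_transform(self, X, y=None):
        return self.fit(X, y).transform(X)

X = pd.DataFrame({"MedInc": [1, 2], "AveRooms": [3, 4],
                  "Latitude": [5, 6], "Other": [7, 8]})
selected = FeatureSelector(["MedInc", "Latitude"]).fit_transform(X)
```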
| <python><scikit-learn><pipeline><feature-selection> | 2023-09-22 19:20:02 | 1 | 11,755 | Bill |
77,160,103 | 1,769,327 | Exponential Moving Average (EMA) calculations in Polars dataframe | <p>I have the following list of 20 values:</p>
<pre><code>values = [143.15,143.1,143.06,143.01,143.03,143.09,143.14,143.18,143.2,143.2,143.2,143.31,143.38,143.35,143.34,143.25,143.33,143.3,143.33,143.36]
</code></pre>
<p>In order to find the Exponential Moving Average, across a span of 9 values, I can do the following in Python:</p>
<pre><code>def calculate_ema(values, periods, smoothing=2):
ema = [sum(values[:periods]) / periods]
for price in values[periods:]:
ema.append((price * (smoothing / (1 + periods))) + ema[-1] * (1 - (smoothing / (1 + periods))))
return ema
ema_9 = calculate_ema(values, periods=9)
</code></pre>
<pre><code>[143.10666666666668,
143.12533333333334,
143.14026666666666,
143.17421333333334,
143.21537066666667,
143.24229653333333,
143.26183722666667,
143.25946978133334,
143.27357582506667,
143.27886066005334,
143.28908852804267,
143.30327082243414]
</code></pre>
<p>The resulting list of EMA values is 12 items long, the first value [0] corresponding to the 9th [8] value from <em>values</em>.</p>
<p>Using Pandas and TA-Lib, I can perform the following:</p>
<pre><code>import pandas as pd
import talib as ta
df_pan = pd.DataFrame(
{
'value': values
}
)
df_pan['ema_9'] = ta.EMA(df_pan['value'], timeperiod=9)
df_pan
</code></pre>
<pre><code> value ema_9
0 143.15 NaN
1 143.10 NaN
2 143.06 NaN
3 143.01 NaN
4 143.03 NaN
5 143.09 NaN
6 143.14 NaN
7 143.18 NaN
8 143.20 143.106667
9 143.20 143.125333
10 143.20 143.140267
11 143.31 143.174213
12 143.38 143.215371
13 143.35 143.242297
14 143.34 143.261837
15 143.25 143.259470
16 143.33 143.273576
17 143.30 143.278861
18 143.33 143.289089
19 143.36 143.303271
</code></pre>
<p>The Pandas / TA-Lib output corresponds with that of my Python function.</p>
<p>However, when I try to replicate this using funtionality purely in Polars:</p>
<pre><code>import polars as pl
df = (
pl.DataFrame(
{
'value': values
}
)
.with_columns(
pl.col('value').ewm_mean(span=9, min_periods=9,).alias('ema_9')
)
)
df
</code></pre>
<p>I get different values:</p>
<pre><code>value ema_9
f64 f64
143.15 null
143.1 null
143.06 null
143.01 null
143.03 null
143.09 null
143.14 null
143.18 null
143.2 143.128695
143.2 143.144672
143.2 143.156777
143.31 143.189683
143.38 143.229961
143.35 143.255073
143.34 143.272678
143.25 143.268011
143.33 143.280694
143.3 143.284626
143.33 143.293834
143.36 143.307221
</code></pre>
<p>Can anyone please explain what adjustments I need to make to my Polars code in order to get the expected results?</p>
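<p>For what it's worth, the gap seems to come from the <code>adjust</code> behavior: with the default <code>adjust=True</code> every output is a weight-normalized average over the whole prefix, while TA-Lib uses the recursive form seeded with the SMA of the first <code>timeperiod</code> values (and <code>adjust=False</code> alone still seeds from the first value, so an exact TA-Lib match needs manual seeding). A pure-Python check of the first non-null output:</p>

```python
values = [143.15, 143.10, 143.06, 143.01, 143.03,
          143.09, 143.14, 143.18, 143.20]
alpha = 2 / (9 + 1)  # span = 9

# Polars-style (adjust=True): normalized weighted average of the prefix
weights = [(1 - alpha) ** i for i in range(len(values))]
adjusted = sum(w * x for w, x in zip(weights, reversed(values))) / sum(weights)

# TA-Lib style seed: plain SMA of the first 9 values
sma_seed = sum(values) / len(values)
```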
| <python><python-polars> | 2023-09-22 18:56:56 | 2 | 631 | HapiDaze |
77,160,087 | 113,586 | Correct typing of a method without a fixed signature | <p>I wonder what is the correct way to type hint <a href="https://github.com/scrapy/scrapy/blob/2.11.0/scrapy/spiders/__init__.py#L81" rel="nofollow noreferrer"><code>scrapy.Spider.parse()</code></a>.</p>
<p>If defined in a subclass, it has a specific first argument (<code>response: Response</code>) but can have an arbitrary list of additional arguments (including an empty one), with or without <code>**kwargs</code>. As a best approximation for this, it's currently defined in the base class with a <code>self, response: Response, **kwargs: Any</code> signature, but overriding it with a different one causes mypy errors.</p>
<p>Changing the signature is fine from the API perspective because the correct signature in specific user code depends on other parts of the user code. If <code>SpiderA</code> and <code>SpiderB</code> inherit from <code>scrapy.Spider</code>, it usually makes sense to expect compatible signatures between <code>SpiderA</code> and its descendants but not between any two in <code>SpiderA</code>, <code>SpiderB</code> and <code>scrapy.Spider</code>. But declaring <code>scrapy.Spider.parse()</code> as an abstract method and <code>scrapy.Spider</code> as an ABC doesn't make sense as its descendants aren't required to define <code>parse()</code> at all (as they don't always use that method).</p>
<p>So I wonder if there is an accepted way to tell the tools that child classes can have somewhat different signatures in this method if they override it at all, or in the worst case mark the parent class code in a way that doesn't produce typing errors in descendant user code (I guess removing type hints will achieve this but this is the very last resort: while it can be argued that type hints for this method aren't very useful it feels strange to omit them from one method while striving to type hint all other code).</p>
| <python><python-typing> | 2023-09-22 18:53:17 | 1 | 25,704 | wRAR |
77,160,020 | 14,692,430 | Get proper intellisense with overloaded multimethods in python | <p>The following code works as expected:</p>
<pre class="lang-py prettyprint-override"><code>from multimethod import multimethod
@multimethod
def foo(a: int):
print(f"only {a}")
@multimethod
def foo(a: int, b: int):
print(f"{a} and {b}")
foo(1)
foo(1, 2)
</code></pre>
<p>Output:</p>
<pre><code>only 1
1 and 2
</code></pre>
<p>However, the Intellisense in VSCode for the function arguments look like this</p>
<p><a href="https://i.sstatic.net/vOpjh.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vOpjh.png" alt="intellisense" /></a></p>
<p>I know this is caused by the multimethod decorator accepting any number of arguments so that it can decide which overload of the function to call. However, what I want is something like this:</p>
<p><a href="https://i.sstatic.net/3MFHb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3MFHb.png" alt="enter image description here" /></a></p>
<p>with multiple argument lists.</p>
<p>Does anyone know how to achieve this?</p>
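<p>A stdlib-only workaround (a sketch; it trades multimethod's dispatch for manual branching) is to declare <code>typing.overload</code> stubs, which most IDEs, including VSCode's Pylance, render as separate signature pages:</p>

```python
from typing import overload

# Stubs seen by the type checker / IDE:
@overload
def foo(a: int) -> None: ...
@overload
def foo(a: int, b: int) -> None: ...

# Single runtime implementation:
def foo(a, b=None):
    if b is None:
        print(f"only {a}")
    else:
        print(f"{a} and {b}")
```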
| <python><overloading><intellisense><multiple-dispatch><multimethod> | 2023-09-22 18:43:24 | 1 | 352 | DaNubCoding |
77,159,772 | 22,466,650 | How to expand a Multiindex while incrementing the data inserted? | <p>This is my input :</p>
<pre><code>df = pd.DataFrame(
np.arange(0, 30).reshape(-1, 5),
index=pd.MultiIndex.from_product([["a"], ["b", "c", "d"], ["e", "g"]]),
columns= ["class", "type", "id", "name", "date"]
)
class type id name date
a b e 0 1 2 3 4
g 5 6 7 8 9
c e 10 11 12 13 14
g 15 16 17 18 19
d e 20 21 22 23 24
g 25 26 27 28 29
</code></pre>
<p>And I want to get this one by inserting the <code>f</code> between <code>e</code> and <code>g</code> with incremented numbers.</p>
<pre><code> class type id name date
a b e 0 1 2 3 4
f 0 0 0 0 0
g 5 6 7 8 9
c e 10 11 12 13 14
f 1 1 1 1 1
g 15 16 17 18 19
d e 20 21 22 23 24
f 2 2 2 2 2
g 25 26 27 28 29
</code></pre>
<p>I tried doing it with a loop but I failed :</p>
<pre><code>for i in range(3):
df = pd.concat([df, pd.DataFrame([[i]*5], index=pd.Index([("a", "b", "f")]), columns=df.columns)])
df = df.sort_index(level=2)
</code></pre>
<p>This got me :</p>
<pre><code> class type id name date
a b e 0 1 2 3 4
c e 10 11 12 13 14
d e 20 21 22 23 24
b f 0 0 0 0 0
f 1 1 1 1 1
f 2 2 2 2 2
g 5 6 7 8 9
c g 15 16 17 18 19
d g 25 26 27 28 29
</code></pre>
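<p>For reference, a loop-free version of the same idea (a sketch): build every <code>f</code> row at once, one per second-level group with an incrementing fill value, then concatenate and sort:</p>

```python
import numpy as np
import pandas as pd

df = pd.DataFrame(
    np.arange(0, 30).reshape(-1, 5),
    index=pd.MultiIndex.from_product([["a"], ["b", "c", "d"], ["e", "g"]]),
    columns=["class", "type", "id", "name", "date"],
)

pairs = df.index.droplevel(2).unique()  # [("a","b"), ("a","c"), ("a","d")]
new_rows = pd.DataFrame(
    [[i] * df.shape[1] for i in range(len(pairs))],  # 0s, 1s, 2s, ...
    index=pd.MultiIndex.from_tuples([(a, b, "f") for a, b in pairs]),
    columns=df.columns,
)
out = pd.concat([df, new_rows]).sort_index()  # "f" lands between "e" and "g"
```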
| <python><pandas> | 2023-09-22 17:55:24 | 2 | 1,085 | VERBOSE |
77,159,548 | 8,512,262 | How can I watch for user activity from a service (session 0) in Windows 11? | <p>I came up with the solution below to watch for user activity from a service. This is working as expected in Windows 10, but doesn't work for mouse activity in Windows 11 (only key press/release events are detected); evidently something has changed <em>WRT</em> the behavior of csrss.exe under Windows 11.</p>
<p>I've looked into all of the available information returned by <code>psutil</code> for csrss.exe and wasn't able to come up with an alternative approach to monitoring mouse activity in Windows 11. Ultimately, my question is: How can I watch for user activity from a service (session 0) in Windows 11?</p>
<p>I'm trying to find an approach that I can use either in conjunction with or instead of my existing solution - any help/info is much appreciated!</p>
<p><em>More info on this can be found <a href="https://stackoverflow.com/questions/76130069/how-can-i-check-for-user-activity-idle-from-a-windows-service-written-in-python">here</a> in a Q/A detailing my initial solution.</em></p>
<pre><code>import psutil
def get_csrss_pids() -> list[int]:
"""Get the PID(s) for the Windows Client Server Runtime process"""
return [
proc.pid for proc in psutil.process_iter(attrs=['name'])
if proc.name() == 'csrss.exe'
]
def get_io(pids: list[int]) -> list[int]:
"""Returns the last `read_bytes` value for the given csrss.exe PID(s)"""
return [psutil.Process(pid).io_counters().read_bytes for pid in pids]
pids = get_csrss_pids()
last_io = get_io(pids) # store an initial value to compare against
while True:
try:
if (tick := get_io(pids)) != last_io: # if activity is detected...
print(tick) # do something
last_io = tick # store the new value to compare against
except KeyboardInterrupt:
break
</code></pre>
| <python><service><windows-11><user-activity> | 2023-09-22 17:10:53 | 1 | 7,190 | JRiggles |
77,159,384 | 7,658,051 | How to run Python code on the host like the 'ansible.builtin.shell' module does for Bash? | <p>Sometimes I cannot use the native Ansible / Jinja2 functions to get what I want, so I go get it via Bash.</p>
<p>For example, I had to type yesteday's date in <code>yyyy-mm-dd</code> format, and I found the easisest Ansible way to get it to be over complicated, being the task such a basic one</p>
<pre class="lang-yaml prettyprint-override"><code>- name: print yesterday date
debug:
msg: "{{ '%Y-%m-%d'|strftime(ansible_date_time.epoch|int - 86400) }}"
</code></pre>
<p>So I wrote the following tasks to get it via Bash</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Register yesterday date using Bash
  ansible.builtin.???:
    cmd: |
      echo "$(date -d 'yesterday' '+%Y-%m-%d')"
  register: yesterday_date_bash_echo_result

- name: Show yesterday_date_bash_echo_result
  ansible.builtin.debug:
    msg: "yesterday_date_bash_echo_result.stdout is {{ yesterday_date_bash_echo_result.stdout }}"
</code></pre>
<p>Now, I would like to do the same via Python, which is even more friendly to handle when it comes to string transforming and formatting (do not focus on getting yesterday's date task, for example, think of some specific regex substitution).</p>
<p>So, the python commands to get yesterday's date are, for example:</p>
<pre class="lang-python prettyprint-override"><code>from datetime import datetime, timedelta
yesterday = datetime.now() - timedelta(1)
print( datetime.strftime(yesterday, '%Y-%m-%d') )
</code></pre>
<p>How can I have my Control Machine to run them in a task so that I can register the Python output?</p>
<p>I would need something like the following</p>
<pre class="lang-yaml prettyprint-override"><code>- name: Register yesterday date using Python
  ansible.builtin.shell:
    cmd: |
      from datetime import datetime, timedelta
      yesterday = datetime.now() - timedelta(1)
      print( datetime.strftime(yesterday, '%Y-%m-%d') )
  register: yesterday_date_python_echo_result

- name: Show yesterday_date_python_echo_result
  ansible.builtin.debug:
    msg: "yesterday_date_python_echo_result.stdout is {{ yesterday_date_python_echo_result.stdout }}"
</code></pre>
<p>Is there any builtin Ansible module to achieve this?</p>
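<p>For reference, a hedged sketch of the equivalent control-machine invocation: the snippet can be piped into <code>python3</code> via a heredoc (assuming <code>python3</code> is on the control machine's PATH), and the same heredoc could serve as the <code>cmd</code> of an <code>ansible.builtin.shell</code> task:</p>

```shell
# Hedged sketch: run an inline Python snippet and print its stdout --
# the same output a shell task with this cmd would register.
python3 - <<'EOF'
from datetime import datetime, timedelta
yesterday = datetime.now() - timedelta(1)
print(datetime.strftime(yesterday, '%Y-%m-%d'))
EOF
```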
| <python><bash><cmd><ansible><stdout> | 2023-09-22 16:40:52 | 1 | 4,389 | Tms91 |
77,159,305 | 10,999,070 | A regular expression with a pattern to exclude, but the pattern itself contains a substring we want to include | <p>For string: <code>.test[?(@.foo=="bazz")].boo</code>, using Python I want to get matches on this string based on <code>.</code>, <code>[?</code>, <code>]</code>, but ignore on <code>@.</code>.</p>
<p>I've managed to figure out how to match the pattern for all splits, but can't seem to figure out the ignore rule.</p>
<p><code>\[\?\(|\.|]</code></p>
<p>My best attempt at the ignore rule as of now:
<code>^((?!@\.).)*$|\[\?\(|\.|]</code></p>
<p>Expected results where highlighted substrings are matches:
<code>.</code>test<code>[?</code>(@.foo=="bazz")]<code>.</code>boo</p>
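<p>For context, a hedged sketch of the ignore rule using a negative lookbehind, so a dot preceded by <code>@</code> never matches:</p>

```python
import re

s = '.test[?(@.foo=="bazz")].boo'
# (?<!@)\. matches a '.' only when the previous character is not '@'
pattern = r'\[\?\(|(?<!@)\.|\]'
print(re.findall(pattern, s))  # ['.', '[?(', ']', '.']
```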
| <python><regex> | 2023-09-22 16:23:50 | 1 | 385 | phaseTiny |
77,159,222 | 5,916,316 | Django Serializer with Many To Many field returns empty list, How to fix it? | <p>I have two models, <code>Product</code> and <code>Lesson</code>. I join them in one Product serializer, but I get an empty list in the <code>lessons</code> field. This is what I am getting now:</p>
<pre><code>{
    "id": 1,
    "name": "Python",
    "owner": 1,
    "lessons": []  # here should be lessons
},
</code></pre>
<p>My models.py:</p>
<pre><code>class Product(models.Model):
    name = models.CharField(max_length=250)
    owner = models.ForeignKey(User, on_delete=models.CASCADE)


class Lesson(models.Model):
    name = models.CharField(max_length=255)
    video_link = models.URLField()
    duration = models.PositiveIntegerField()
    products = models.ManyToManyField(Product, related_name='lessons')
</code></pre>
<p>My serializers.py</p>
<pre><code>class LessonSerializer(serializers.ModelSerializer):
    class Meta:
        model = Lesson
        fields = ('id', 'name', 'video_link', 'duration')


class ProductSerializer(serializers.ModelSerializer):
    lessons = LessonSerializer(read_only=True, many=True)

    class Meta:
        model = Product
        fields = ['id', 'name', 'owner', 'lessons']
</code></pre>
<p>And my view.py</p>
<pre><code>class LessonListView(ListAPIView):
    serializer_class = ProductSerializer
    permission_classes = [permissions.IsAuthenticated]

    def get_queryset(self):
        products_view = Product.objects.all()
        return products_view
</code></pre>
<p>I can see many topics on the internet about this, but it is not working for me at all.
How can I resolve it and get the data I have for the <code>lessons</code> field?</p>
| <python><django><django-rest-framework> | 2023-09-22 16:10:25 | 1 | 429 | Mike |
77,159,166 | 8,372,455 | BACnet server as a docker application | <p>On the Python BACnet stack bacpypes there are simple examples of how to make a BACnet server, like this <a href="https://github.com/JoelBender/bacpypes/blob/master/samples/mini_device.py" rel="nofollow noreferrer">mini_device.py</a> in the git repo.</p>
<p>BACpypes applications require a <code>.ini</code> config file that states the <code>address</code> of the NIC card you want to use, and looks like this:</p>
<pre><code>[BACpypes]
objectName: OpenDsm
address: 192.168.0.109/24
objectIdentifier: 500001
maxApduLengthAccepted: 1024
segmentationSupported: segmentedBoth
vendorIdentifier: 15
</code></pre>
<p>Trying to turn this into a docker container if I put this <code>mini_device.py</code> in a dir with the <code>BACpypes.ini</code>, <code>requirements.txt</code> for bacypes, and a <code>Dockerfile</code> that looks like this:</p>
<pre><code># Use an official Python runtime as a parent image with Python 3.10
FROM python:3.10-alpine
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
COPY . /app
# Install any needed packages specified in requirements.txt
RUN pip install -r requirements.txt
# Make port 47808 available to the world outside this container
EXPOSE 47808/udp
# Define environment variable
ENV PYTHONUNBUFFERED=1
# Run your Python script when the container launches
CMD ["python", "app.py"]
</code></pre>
<p>A run in terminal <code>$ docker build -t bacnet-server-test .</code></p>
<p>It builds just fine but when running it with <code>$ docker run -p 47808:47808/udp bacnet-server-test</code> I get an <code>OSError: [Errno 99] Address not available</code> error I think because the <code>BACpypes.ini</code> file is stating an incorrect <code>address</code> to use.</p>
<p>Would anyone have any advice or pointers for researching this? I am sort of a newbie with Docker, so thanks for any tips. Ideally, if possible, it would be nice for the Python script to just bind the <code>address</code> to something like the <code>eth0</code> adapter in Linux...?</p>
<p>Full traceback:</p>
<pre><code>Traceback (most recent call last):
  File "/app/app.py", line 136, in <module>
    main()
  File "/app/app.py", line 96, in main
    test_application = SampleApplication(this_device, args.ini.address)
  File "/usr/local/lib/python3.10/site-packages/bacpypes/app.py", line 535, in __init__
    self.mux = UDPMultiplexer(self.localAddress)
  File "/usr/local/lib/python3.10/site-packages/bacpypes/bvllservice.py", line 96, in __init__
    self.directPort = UDPDirector(self.addrTuple)
  File "/usr/local/lib/python3.10/site-packages/bacpypes/udp.py", line 155, in __init__
    self.bind(address)
  File "/usr/local/lib/python3.10/asyncore.py", line 333, in bind
    return self.socket.bind(addr)
OSError: [Errno 99] Address not available
</code></pre>
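<p>A hedged note for readers: <code>Errno 99</code> usually means the address in <code>BACpypes.ini</code> does not exist inside the container's network namespace. Besides <code>docker run --network host</code>, one illustrative option (the <code>/24</code> prefix and field values are assumptions) is to rewrite the address at container start-up from the container's own IP:</p>

```shell
# Hedged sketch: discover this host/container's primary IP and write it
# into BACpypes.ini before launching the BACnet app.
ip=$(hostname -i 2>/dev/null | awk '{print $1}')
[ -n "$ip" ] || ip=0.0.0.0   # fall back to all interfaces
printf '[BACpypes]\nobjectName: OpenDsm\naddress: %s/24\n' "$ip" > BACpypes.ini
cat BACpypes.ini
```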
| <python><docker><bacnet><bacpypes> | 2023-09-22 16:00:48 | 1 | 3,564 | bbartling |
77,159,136 | 15,587,184 | Efficiently using Hugging Face transformers pipelines on GPU with large datasets | <p>I'm relatively new to Python and facing some performance issues while using Hugging Face Transformers for sentiment analysis on a relatively large dataset. I've created a DataFrame with 6000 rows of text data in Spanish, and I'm applying a sentiment analysis pipeline to each row of text. Here's a simplified version of my code:</p>
<pre><code>import pandas as pd
import torch
from tqdm import tqdm
from transformers import pipeline
data = {
    'TD': [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
    'text': [
        # ... (your text data here)
    ]
}
df_model = pd.DataFrame(data)
device = 0 if torch.cuda.is_available() else -1
py_sentimiento = pipeline("sentiment-analysis", model="finiteautomata/beto-sentiment-analysis", tokenizer="finiteautomata/beto-sentiment-analysis", device=device, truncation=True)
tqdm.pandas()
df_model['py_sentimiento'] = df_model['text'].progress_apply(py_sentimiento)
df_model['py_sentimiento'] = df_model['py_sentimiento'].apply(lambda x: x[0]['label'])
</code></pre>
<p>However, I've encountered a warning message that suggests I should use a dataset for more efficient processing. The warning message is as follows:</p>
<pre><code>"You seem to be using the pipelines sequentially on GPU. In order to maximize efficiency please use a dataset."
</code></pre>
<p>I have two questions:</p>
<p>What does this warning mean, and why should I use a dataset for efficiency?</p>
<p>How can I modify my code to batch my data and use parallel computing to make better use of my GPU resources, what code or function or library should be used with hugging face transformers?</p>
<p>I'm eager to learn and optimize my code.</p>
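<p>For context on the warning: <code>progress_apply</code> sends one string per pipeline call, so the GPU never sees a batch. Pipelines also accept a whole list (or a <code>datasets.Dataset</code>) together with a <code>batch_size</code> argument, e.g. <code>py_sentimiento(df_model['text'].tolist(), batch_size=32)</code>. A hedged, model-free sketch of the underlying chunking idea:</p>

```python
# Hedged, model-free sketch: yield fixed-size batches from a list of texts
# so each pipeline call can process many rows at once.
def batched(items, batch_size):
    for i in range(0, len(items), batch_size):
        yield items[i:i + batch_size]

texts = [f"text {i}" for i in range(10)]
print([len(b) for b in batched(texts, 4)])  # [4, 4, 2]
```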
| <python><gpu><huggingface-transformers><huggingface-datasets> | 2023-09-22 15:57:36 | 3 | 809 | R_Student |
77,159,092 | 1,659,599 | CHORD_TYPE for Chord in music21 | <p>The property <code>Chord.commonName</code> returns the most common name associated with a chord. The returned string is from the table <a href="https://github.com/cuthbertLab/music21/blob/f457a2ba52ea3f978d805cd52fa101dfb0dd8577/music21/chord/tables.py#L981" rel="nofollow noreferrer">chord.tables.tnIndexToChordInfo</a> and has e.g. the form <code>dominant seventh chord</code></p>
<p>On the other side there is the table <a href="https://github.com/cuthbertLab/music21/blob/f457a2ba52ea3f978d805cd52fa101dfb0dd8577/music21/harmony.py#L60" rel="nofollow noreferrer">harmony.CHORD_TYPES</a> where the chord is named <code>dominant-seventh</code>.</p>
<p><code>CHORD_TYPES</code> is used by class <code>Harmony</code>. <code>tnIndexToChordInfo</code> is used by class <code>Chord</code>.</p>
<p>Is there a way in <code>music21</code> to get the representation in <code>CHORD_TYPES</code> having the <code>tnIndexToChordInfo</code>?</p>
<p>Use case: I use Chords generated from <code>RomanNumeral</code> in a <code>Scale</code> and want to display the shortname in the <code>lyric</code>. The <code>commonName</code> is too long.</p>
<pre class="lang-py prettyprint-override"><code>k = scale.MelodicMinorScale('c')
rn = roman.RomanNumeral('V7', k, caseMatters=False)
c = chord.Chord(rn)
c.lyric = rn.figure + "\n" + rn.commonName
</code></pre>
| <python><music21> | 2023-09-22 15:49:46 | 1 | 7,359 | wolfrevo |
77,159,000 | 20,358 | xml manipulation with Python | <p>I am using Python 3.9. I have a nested xml document string like this</p>
<pre><code>payload_xml = """
<AllData>
    <MyPayload>
        ...
        ...
        ...
    </MyPayload>
</AllData>
"""
</code></pre>
<p>Now I want to create another parent xml document string and ingest this payload into the newly created xml document like this</p>
<p>New XML</p>
<pre><code><Full_Message prop1="" prop2="">
    <Header>
        <headerValue1> </headerValue1>
        <headerValue2> </headerValue2>
        <headerValue3> </headerValue3>
        <NestedValues>
            <someval1> </someval1>
        </NestedValues>
    </Header>
    <Body>
        <!--Insert MyPayload xml string here ignoring AllData node-->
    </Body>
</Full_Message>
</code></pre>
<p>Here is where I am so far</p>
<pre><code>from lxml import etree

FullMessage_root = etree.Element("Full_Message")
AllData_root = etree.fromstring(payload_xml)
payload_only = AllData_root[0]
FullMessage_root.append(payload_only)
FullMessage_root.insert(0, etree.Element("Header"))
FullMessage_root.insert(1, etree.Element("Body"))
FullMessage_root.attrib['prop1'] = 'hello world'
</code></pre>
<p>This results in:</p>
<pre><code><Full_Message prop1="hello world">
    <Header/>
    <Body/>
    <MyPayload>
    </MyPayload>
</Full_Message>
</code></pre>
<p>How do I nest <code><MyPayload></code> within the <code><Body></code> tags and create the multiple nested values within <code><Header></code>?</p>
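<p>A hedged sketch of the nesting step, written with the stdlib <code>xml.etree.ElementTree</code> for brevity (the same <code>append</code>/<code>SubElement</code> calls exist on <code>lxml.etree</code>; the payload content is a placeholder):</p>

```python
import xml.etree.ElementTree as ET

payload_xml = "<AllData><MyPayload><val>1</val></MyPayload></AllData>"
payload_only = ET.fromstring(payload_xml)[0]  # skip the AllData wrapper

root = ET.Element("Full_Message", prop1="hello world")
header = ET.SubElement(root, "Header")   # build the nested header values
ET.SubElement(header, "headerValue1")
nested = ET.SubElement(header, "NestedValues")
ET.SubElement(nested, "someval1")
body = ET.SubElement(root, "Body")
body.append(payload_only)                # MyPayload now sits inside Body

print(ET.tostring(root).decode())
```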
| <python><python-3.x><lxml> | 2023-09-22 15:34:03 | 1 | 14,834 | user20358 |
77,158,844 | 8,440,356 | Creating and Working with a Class of Classes | <p>I have inherited some Python code that contains a class that is designed to get a series of objects from a running application using their API. However, I am struggling to work out the best way to handle those objects within my class.</p>
<p>I was thinking of dividing this class into two classes:</p>
<ul>
<li>the first to contain a group of these objects</li>
<li>a second for the individual objects</li>
</ul>
<p>Within the second class, there would be a number of methods that act as wrappers around the api calls - this is to make it easier for users of the library I am working on.</p>
<p>At the moment, I have a class similar to below. This is the bare bones of what I have</p>
<pre><code>from xyz import api

class GroupOfObjects():
    def __init__(self):
        # Set some attributes
        self.objects = self.get_objects()

    def get_objects(self):
        objects = api.service.Objects()
        # In order to convert objects from the API, I need to put them into
        # a python list
        return [object for object in objects]

    def get_object_name(self, object):
        # object.name is available through the api
        return object.internalclass.internalfunc.name
</code></pre>
<p>In order to loop through each of the objects to get the names or other properties of the objects I have to call:</p>
<pre><code>objects = GroupOfObjects()

for x in objects:
    if objects.get_object_name(x) == 'abcdef':
        # Do something
</code></pre>
<p>This seems quite clunky. I do have methods in my class that returns a list of all the names. But this is just a list of strings.</p>
<p>What I would like to do is access a single object in the GroupOfObjects class and access the attributes and methods of it without needing to do a for loop.</p>
<p>Can anyone suggest better ways to handle this?</p>
<p>An alternative I had in mind would be to rewrite the whole module so it has a function that returns back a list of the individual objects. However, this would break existing functionality. But if this is the best way, then I would be happy to do it.</p>
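<p>For illustration, a hedged sketch (all names are stand-ins for the real API) of giving the group class dict-like access, so a single object can be reached without a loop:</p>

```python
class GroupOfObjects:
    def __init__(self, objects):
        # index every wrapped object by its name once, at construction time
        self._by_name = {self.get_object_name(o): o for o in objects}

    @staticmethod
    def get_object_name(obj):
        return obj.name  # stand-in for obj.internalclass.internalfunc.name

    def __iter__(self):  # keeps "for x in objects" style usage working
        return iter(self._by_name.values())

    def __getitem__(self, name):
        return self._by_name[name]


class FakeApiObject:  # stand-in for the objects returned by the API
    def __init__(self, name):
        self.name = name


group = GroupOfObjects([FakeApiObject("abcdef"), FakeApiObject("ghijkl")])
print(group["abcdef"].name)  # direct access, no loop needed
```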
| <python><class><oop> | 2023-09-22 15:12:07 | 1 | 565 | rockdoctor |
77,158,832 | 7,972,989 | Dash suppress error "A nonexistent object was used in an `Input` of a Dash callback" | <p>I have a complex Dash app where different layouts are displayed depending on some inputs. These output layouts (created in callbacks) contain some inputs.</p>
<p>Because those inputs are not defined yet in the layout, I have the error "A nonexistent object was used in an <code>Input</code> of a Dash callback."</p>
<p>Is there a way nowadays to get rid of this error other than playing with the CSS style "visibility" to make them exist in the HTML code, but invisible to the user?</p>
<p>I tried to add <code>suppress_callback_exceptions=True</code> when creating the app but it is not working.</p>
<p>I am using Dash v2.9.3.</p>
<pre class="lang-py prettyprint-override"><code>from dash import dcc, Dash, html, Input, Output, callback

app = Dash(__name__, suppress_callback_exceptions=True)

app.layout = html.Div(
    children=[
        dcc.Store(id="store_filters", storage_type="memory"),
        dcc.RadioItems(
            id="choice_filters",
            options=["filters1", "filters2", "filters3"],
            value="filters1",
            inline=True,
        ),
        html.Div(id="filters"),
    ]
)


@callback(Output("filters", "children"), Input("choice_filters", "value"))
def update_filters(choice_filters):
    if choice_filters == "filters1":
        layout = [dcc.Input(id="input1", type="text")]
    elif choice_filters == "filters2":
        layout = [dcc.Input(id="input2", type="text")]
    elif choice_filters == "filters3":
        layout = [dcc.Input(id="input3", type="text")]
    else:
        layout = []
    return layout


@callback(
    Output("store_filters", "data"),
    [
        Input("input1", "value"),
        Input("input2", "value"),
        Input("input3", "value"),
    ],
)
def filters_result(input1, input2, input3):
    filters = {"filter1": input1, "filter2": input2, "filter3": input3}
    print("Last filters entered : ")
    print(filters)
    return filters


if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=8222)
</code></pre>
| <python><plotly-dash> | 2023-09-22 15:10:51 | 2 | 2,505 | gdevaux |
77,158,712 | 1,147,321 | Looping through a list of strings, getting tuples instead | <p>I have a problem and hope anyone here has seen something similar. What looks like a loop through a list of strings returns a tuple with data of unknown origin.</p>
<p>I am using Pycharm as my IDE. The error only happens if the code is executed in debug mode.</p>
<p>I have the following code.</p>
<pre><code>from multiprocessing import Process, Manager, Pool, Queue, Lock
import fnmatch
import os  # needed for os.walk / os.path below


def find(_pat_set: set, _path: str, _ignore_pat: set = None, _exclude_path_set: set = None):
    if _exclude_path_set is None:
        _exclude_path_set = set()
    if _ignore_pat is None:
        _ignore_pat = set()

    manager = Manager()
    files_dict = manager.dict()

    for root, dirs, files in os.walk(_path):
        _ex_dir_set = set()
        _full_dir_list = []
        for _dir in dirs:
            _full_dir = os.path.join(root, _dir)
            _full_dir_list.append(_full_dir)
        for _ex_dir in _exclude_path_set:
            _filtered = list(fnmatch.filter(_full_dir_list, _ex_dir))
            for f in _filtered:
                _ex_dir_set.add(os.path.basename(f))
        dirs[:] = sorted(list(set(dirs) - _ex_dir_set))

        _searched_files = []
        for _pat in _pat_set:
            _searched_files += list(fnmatch.filter(files, _pat))
        _ignore_files = []
        for _ig_pat in _ignore_pat:
            _ignore_files += list(fnmatch.filter(files, _ig_pat))
        _parse_files = set(_searched_files) - set(_ignore_files)
        files_dict[root] = dict((f, {}) for f in _parse_files)
    return files_dict


def main():
    exclude_set = {'*\\.*',
                   '*\\syn\\*',
                   '*\\tmp',
                   '*\\temp',
                   '*\\__pycache__'}
    _files = find(_pat_set={'*'}, _path='c:\\work\\git_project\\project', _ignore_pat={'*.rpt', '.*', '*.sdc'}, _exclude_path_set=exclude_set)


if __name__ == '__main__':
    main()
    print('done')
</code></pre>
<p>It's a find routine to locate files and folders and build a shared dictionary which later is shared between threads to work on.</p>
<p>The problem is the following segment of the code:</p>
<pre><code>for _dir in dirs:
    _full_dir = os.path.join(root, _dir)
</code></pre>
<p>When I execute my code it gets the following error from the part of the code</p>
<pre><code>  File "C:\Python310\lib\ntpath.py", line 143, in join
    genericpath._check_arg_types('join', path, *paths)
  File "C:\Python310\lib\genericpath.py", line 152, in _check_arg_types
    raise TypeError(f'{funcname}() argument must be str, bytes, or '
TypeError: join() argument must be str, bytes, or os.PathLike object, not 'tuple'
</code></pre>
<p>And when I look in my debug window i have the following data</p>
<pre><code>dirs = ['.git', '.settings', 'interface', 'scripts', 'tmp', 'tst_sw']
</code></pre>
<p>but the loop variable <code>_dir</code> contains this</p>
<pre><code>_dir = ('c:\\work\\git_project\\project\\.git', ['hooks', 'info', 'lfs', 'logs', 'modules',
'objects', 'refs'], ['COMMIT_EDITMSG', 'config', 'description', 'FETCH_HEAD',
'gitk.cache', 'HEAD', 'index', 'ORIG_HEAD', 'packed-refs', 'sourcetreeconfig.json'])
</code></pre>
<p>Where does all that data come from? It should only have contained the string</p>
<pre><code>_dir '.git'
</code></pre>
<p>There is no issue if the code is executed normally only in debug mode.</p>
<p>I hope someone has seen this before, I do not know if I can give any more information.</p>
<p>Regards</p>
| <python><debugging><pycharm><python-3.10> | 2023-09-22 14:55:21 | 1 | 2,167 | Ephreal |
77,158,299 | 6,368,549 | Python, create dataframe from data dump | <p>We pull some data from a URL. The data comes down as one huge string, but within it is delimited into small lists / pairs. A small sample of the data is here:</p>
<pre><code>[{"date_of_fix ":"9\/4\/2023","fix_description":"Broken report links\r","issue_no ":"1788"},{"date_of_fix ":"8\/30\/2023","fix_description":"Icon on password fields","issue_no ":"1769"},{"date_of_fix ":"8\/21\/2023","fix_description":"Add Tracking to Quote Page\r","issue_no ":"1744"}]
</code></pre>
<p>I would like to convert this to a dataframe so I can later insert it into Oracle. But it is coming down as one huge delimited string, so I am not sure how to loop over, split, and convert it and append it to a dataframe.</p>
<p>Any help would be awesome.</p>
<p>Thanks!</p>
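<p>For what it's worth, the dump shown above is valid JSON, so a hedged sketch (note the sample keys carry trailing spaces, and some descriptions end in <code>\r</code>):</p>

```python
import json

import pandas as pd

# Hedged sketch on a trimmed copy of the sample dump; in practice `raw`
# would be the response text pulled from the URL.
raw = ('[{"date_of_fix ":"9\\/4\\/2023","fix_description":"Broken report links\\r","issue_no ":"1788"},'
       '{"date_of_fix ":"8\\/30\\/2023","fix_description":"Icon on password fields","issue_no ":"1769"}]')

df = pd.DataFrame(json.loads(raw))
df.columns = [c.strip() for c in df.columns]               # keys have trailing spaces
df["fix_description"] = df["fix_description"].str.strip()  # drop stray \r
print(df.to_string(index=False))
```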
| <python><dataframe> | 2023-09-22 14:00:42 | 1 | 853 | Landon Statis |
77,158,196 | 8,184,694 | Get list of column names with values > 0 for every row in polars | <p>I want to add a column <code>result</code> to a polars <code>DataFrame</code> that contains a list of the column names with a value greater than zero at that position.</p>
<p>So given this:</p>
<pre><code>import polars as pl
df = pl.DataFrame({"apple": [1, 0, 2, 0], "banana": [1, 0, 0, 1]})
cols = ["apple", "banana"]
</code></pre>
<p>How do I get:</p>
<pre><code>shape: (4, 3)
βββββββββ¬βββββββββ¬ββββββββββββββββββββββ
β apple β banana β result β
β --- β --- β --- β
β i64 β i64 β list[str] β
βββββββββͺβββββββββͺββββββββββββββββββββββ‘
β 1 β 1 β ["apple", "banana"] β
β 0 β 0 β [] β
β 2 β 0 β ["apple"] β
β 0 β 1 β ["banana"] β
βββββββββ΄βββββββββ΄ββββββββββββββββββββββ
</code></pre>
<p>All I have so far is the truth values:</p>
<pre><code>df.with_columns(pl.concat_list(pl.col(cols).gt(0)).alias("result"))
shape: (4, 3)
βββββββββ¬βββββββββ¬βββββββββββββββββ
β apple β banana β result β
β --- β --- β --- β
β i64 β i64 β list[bool] β
βββββββββͺβββββββββͺβββββββββββββββββ‘
β 1 β 1 β [true, true] β
β 0 β 0 β [false, false] β
β 2 β 0 β [true, false] β
β 0 β 1 β [false, true] β
βββββββββ΄βββββββββ΄βββββββββββββββββ
</code></pre>
| <python><python-polars> | 2023-09-22 13:48:14 | 3 | 541 | spettekaka |
77,157,820 | 54,557 | How can I disable tab complete in IPython? | <p>I have a complex data structure, with essentially a tree-like structure, that takes a while (minutes) to load (it has to scan the filesystem and look for changes to lots files). I want to interrogate this structure, with a top-level variable called <code>obj</code>, in IPython. When I do so, muscle memory often kicks in, and I hit <code>obj.<tab></code> which triggers a tab complete. This seemingly forces a reload of the entire structure (rescanning the filesystem), which again takes minutes. I would like to disable this - how can I do this?</p>
<p>Ideally, the solution would involve changing the IPython settings within the session. It would also be acceptable to modify the code that builds the structure (I have control over this).</p>
<p>I have previously noticed similar behaviour with <code>pandas.DataFrame</code>.</p>
| <python><ipython> | 2023-09-22 12:57:48 | 0 | 9,654 | markmuetz |
77,157,721 | 5,013,752 | use-case streaming in pyspark | <p>I am working with Databricks on Azure, my data are hosted on ADLS2.<br />
Current version of runtime is 10.4 LTS (I can upgrade if needed)</p>
<p>I have a table <strong>Pimproduct</strong>:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>id</th>
<th>name</th>
<th>action</th>
<th>dlk_last_modified</th>
<th>pim_timestamp</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>Article1</td>
<td>A</td>
<td>2022-03-01</td>
<td>2022-02-28 22:34:00</td>
</tr>
</tbody>
</table>
</div>
<ul>
<li>id : the ID of the Article (unic)</li>
<li>name : the Name of the Article</li>
<li>Action : The action to execute on the line (A = add, D = delete)</li>
<li>dlk_last_modified : the insert date in my table</li>
<li>pim_timestamp : the extract date from the source system</li>
</ul>
<p>Every ~15 min, I receive a new file which contains the modifications I need to insert. For each line in my file, I consider only the most recent <code>pim_timestamp</code> for each ID:</p>
<ol>
<li>If the line is an action=A and the ID does not exist, I add the line</li>
<li>If the line is an action=A and the ID exists, I replace the existing line for that ID with the new line.</li>
<li>If the line is an action=D, I need to delete the ID from the table.</li>
</ol>
<hr />
<p>Initially, the modifications were daily. I was using this code:</p>
<pre class="lang-py prettyprint-override"><code>from pyspark.sql import functions as F, Window as W

df = spark.table("Pimproduct").unionByName(
    spark.read.format("avro").load("/path/to/daily/data")
)
df = df.withColumn(
    "rn",
    F.row_number().over(W.partitionBy("id").orderBy(F.col("pim_timestamp").desc())),
)
df = df.where("rn = 1").where("action <> 'D'")
df.write.saveAsTable("pimproduct", format="delta", mode="overwrite")
</code></pre>
<p>But now, I want to do the same in streaming, and I do not really know how I could do that. I tried this:</p>
<pre class="lang-py prettyprint-override"><code>import tempfile

from pyspark.sql import functions as F, Window as W

df = spark.readStream.table("Pimproduct").unionByName(
    spark.readStream.schema(schema).format("avro").load("/path/to/daily/data")
)
df = df.withColumn(
    "rn",
    F.row_number().over(W.partitionBy("id").orderBy(F.col("pim_timestamp").desc())),
)
df = df.where("rn = 1").where("action <> 'D'")

with tempfile.TemporaryDirectory() as d:
    df.writeStream.toTable("Pimproduct", checkpointLocation=d)
</code></pre>
<p>but I got the error:</p>
<blockquote>
<p>AnalysisException: Non-time-based windows are not supported on streaming DataFrames/Datasets;</p>
</blockquote>
<p>Any idea how I could perform this streamed data ingestion? I'm open to suggestions.</p>
| <python><azure><pyspark><azure-databricks><spark-structured-streaming> | 2023-09-22 12:45:20 | 1 | 15,420 | Steven |
77,157,695 | 15,341,457 | Scrapy - Only first url in url list is scraped | <p>I'm scraping reviews from restaurants in Rome, Milan and Bergamo. For each one of those cities there's one dedicated url containing 30 or more restaurants. The scraper starts crawling the Rome restaurants but never switches to the other cities. It correctly scrapes all the restaurants and reviews from Rome but then the spider is closed.</p>
<p>The Rome restaurants are scraped concurrently, and I would expect the same behaviour with the starting urls, but only the first one is taken into consideration.</p>
<pre><code>class ReviewSpider2(scrapy.Spider):
    name = 'reviews2'

    def start_requests(self):
        urls = [
            'https://www.tripadvisor.it/Restaurants-g187791-Rome_Lazio.html'
            'https://www.tripadvisor.it/Restaurants-g187849-Milan_Lombardy.html'
            'https://www.tripadvisor.it/Restaurants-g187830-Bergamo_Province_of_Bergamo_Lombardy.html'
        ]
        for url in urls:
            yield scrapy.Request(url, callback=self.parse_restaurants)

    def parse_restaurants(self, response):
        all_restaurants = list(set(response.xpath("//div[contains(@data-test,'_list_item')]//div/div/div/span/a[starts-with(@href,'/Restaurant_Review')]/@href").extract()))
        for restaurant in all_restaurants:
            url = 'https://www.tripadvisor.it' + restaurant
            yield response.follow(url, callback=self.parse_restaurant)
</code></pre>
<pre><code>    def parse_restaurant(self, response):
        all_reviews_containers = response.xpath('//div[@class="rev_wrap ui_columns is-multiline"]/div[2]')
        if all_reviews_containers is not None:
            for review_container in all_reviews_containers:
                items = ReviewscraperItem()
                items['restaurant_name'] = response.css('.HjBfq::text').extract_first()
                items['rating'] = 0
                rating_classes = {
                    'ui_bubble_rating bubble_50': 5,
                    'ui_bubble_rating bubble_40': 4,
                    'ui_bubble_rating bubble_30': 3,
                    'ui_bubble_rating bubble_20': 2,
                    'ui_bubble_rating bubble_10': 1
                }
                rating_class = review_container.css('span::attr(class)').extract_first()
                items['rating'] = rating_classes.get(rating_class)
                items['quote'] = review_container.css('.noQuotes::text').extract_first()
                items['address'] = response.xpath("//span/span/a[@class='AYHFM']/text()").extract_first()
                items['review'] = review_container.css('.partial_entry::text').extract_first()
                yield items

        # check if the next page button is disabled (there are no pages left)
        if response.xpath('//a[@class = "nav next ui_button primary disabled"]').extract_first() is None:
            next_page = 'https://www.tripadvisor.it' + response.xpath('//a[@class = "nav next ui_button primary"]/@href').extract_first()
            yield response.follow(url=next_page, callback=self.parse_restaurant)
</code></pre>
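<p>One hedged guess worth double-checking: the <code>urls</code> list in <code>start_requests</code> has no commas between the literals, and Python implicitly concatenates adjacent string literals, which would leave a single (malformed) url in the list. A minimal demonstration with placeholder urls:</p>

```python
# Hedged demonstration: without commas, adjacent string literals are
# concatenated, so this list contains ONE element instead of three.
urls = [
    'https://example.com/rome.html'
    'https://example.com/milan.html'
    'https://example.com/bergamo.html'
]
print(len(urls))  # 1
print(urls[0])
```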
| <python><html><web-scraping><xpath><scrapy> | 2023-09-22 12:42:21 | 1 | 332 | Rodolfo |
77,157,638 | 2,363,252 | Pandas: parse_csv throws "ValueError: Unable to parse string", when column starts with: " | <p>I am trying to read a tab-separated <a href="https://datasets.imdbws.com/title.basics.tsv.gz" rel="nofollow noreferrer">file</a> with pandas</p>
<pre class="lang-py prettyprint-override"><code>
import pandas as pd

basics_cols_to_use = [
    'tconst', 'titleType', 'originalTitle', 'startYear', 'runtimeMinutes', 'genres'
]

chunk_gen = pd.read_csv(
    'reproduce.tsv',
    delimiter='\t',
    iterator=True,
    chunksize=10,
    na_values="\\N",
    usecols=basics_cols_to_use,
    dtype={
        'startYear': 'Int16',  # Int8 fails with NaN values :S
        'runtimeMinutes': 'Int16'
    },
    on_bad_lines="warn"
)

for chunk in chunk_gen:
    pass
</code></pre>
<p>and I get the following exception:</p>
<pre class="lang-py prettyprint-override"><code>
Traceback (most recent call last):
  File "lib.pyx", line 2368, in pandas._libs.lib.maybe_convert_numeric
ValueError: Unable to parse string "Reality-TV"

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/user/proj/raw_data/tiskata.py", line 21, in <module>
    for chunk in chunk_gen:
  File "/home/user/.venv/imdbapi/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1668, in __next__
    return self.get_chunk()
  File "/home/user/.venv/imdbapi/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1777, in get_chunk
    return self.read(nrows=size)
  File "/home/user/.venv/imdbapi/lib/python3.10/site-packages/pandas/io/parsers/readers.py", line 1748, in read
    ) = self._engine.read(  # type: ignore[attr-defined]
  File "/home/user/.venv/imdbapi/lib/python3.10/site-packages/pandas/io/parsers/c_parser_wrapper.py", line 234, in read
    chunks = self._reader.read_low_memory(nrows)
  File "parsers.pyx", line 855, in pandas._libs.parsers.TextReader.read_low_memory
  File "parsers.pyx", line 920, in pandas._libs.parsers.TextReader._read_rows
  File "parsers.pyx", line 1065, in pandas._libs.parsers.TextReader._convert_column_data
  File "parsers.pyx", line 1104, in pandas._libs.parsers.TextReader._convert_tokens
  File "parsers.pyx", line 1210, in pandas._libs.parsers.TextReader._convert_with_dtype
  File "/home/user/.venv/imdbapi/lib/python3.10/site-packages/pandas/core/arrays/numeric.py", line 275, in _from_sequence_of_strings
    scalars = to_numeric(strings, errors="raise", dtype_backend="numpy_nullable")
  File "/home/user/.venv/imdbapi/lib/python3.10/site-packages/pandas/core/tools/numeric.py", line 222, in to_numeric
    values, new_mask = lib.maybe_convert_numeric(  # type: ignore[call-overload]  # noqa: E501
  File "lib.pyx", line 2410, in pandas._libs.lib.maybe_convert_numeric
ValueError: Unable to parse string "Reality-TV" at position 1
</code></pre>
<p>(Ignore "position 1" above; it refers to the line number inside the chunk - <code>reproduce.tsv</code> has only a header and two lines.)</p>
<p>By decreasing chunksize I managed to pinpoint one of the points where it happens:</p>
<pre><code>tconst titleType primaryTitle originalTitle isAdult startYear endYear runtimeMinutes genres
tt10233360 tvEpisode La venganza de BenjamΓn La venganza de BenjamΓn 0 2019 \N \N Crime,Drama,Thriller
tt10233364 tvEpisode "Rolling in the Deep Dish "Rolling in the Deep Dish 0 2019 \N \N Reality-TV
</code></pre>
<p>It seems when a column is starting with a double quote like <code>"Rolling in the Deep Dish</code> above, it throws that misleading error.</p>
<p>Is there a way to ignore such lines?</p>
<p>Thanks in advance</p>
<pre><code>Python 3.10.12
pandas==2.1.0
</code></pre>
<p>PS: I can probably get rid of the quotation marks at the beginning with:</p>
<pre><code>sed -i -e 's/\t"/\t/g' reproduce.tsv
</code></pre>
<p>but I would prefer to have a way to calm down pandas parser on any ValueError it encounters.</p>
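<p>For reference, a hedged sketch of the usual remedy: since the IMDb TSVs are unquoted, the parser can be told to never treat <code>"</code> as a quote character via <code>quoting=csv.QUOTE_NONE</code> (the sample data below is hypothetical):</p>

```python
import csv
import io

import pandas as pd

# Hypothetical TSV where one field starts with a stray double quote.
data = (
    'tconst\ttitle\tgenres\n'
    't1\tLa venganza\tCrime,Drama\n'
    't2\t"Rolling in the Deep Dish\tReality-TV\n'
)
# QUOTE_NONE: the parser keeps the " as literal data instead of opening a
# quoted field that would swallow the following tab.
df = pd.read_csv(io.StringIO(data), sep="\t", quoting=csv.QUOTE_NONE)
print(df["genres"].tolist())  # ['Crime,Drama', 'Reality-TV']
```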
| <python><pandas> | 2023-09-22 12:32:29 | 1 | 2,865 | stelios |
77,157,476 | 5,074,060 | Running tslearn via reticulate leads to TypeError | <p>I am trying to run <code>TimeSeriesKMeans</code> from <a href="https://tslearn.readthedocs.io/en/latest/gen_modules/clustering/tslearn.clustering.TimeSeriesKMeans.html" rel="nofollow noreferrer">TSlearn</a> via reticulate but apparently fail to properly convert my R data.frame to a numpy array. Full code and error below. Please have a look what I do wrong. Explanation appreciated because my knowledge of Python internals is quite poor.</p>
<pre class="lang-r prettyprint-override"><code>library(reticulate)
library(magrittr)
reticulate::py_install(c("numpy", "pandas", "tslearn"))
#> Using virtual environment '/home/rstudio/.virtualenvs/r-reticulate' ...
#> + /home/rstudio/.virtualenvs/r-reticulate/bin/python -m pip install --upgrade --no-user numpy pandas tslearn
# Simulate rhythmic time series
time <- seq(1, 21, 2)
time_series <-
lapply(seq(0, 1, .2), function(x) x*sin(2*pi*time/24)) %>%
unlist %>%
matrix(., byrow=TRUE, ncol=length(time)) %>%
data.frame %>% setNames(paste0("time", time))
# Convert time_series to numpy array
np <- reticulate::import("numpy")
ts_np <- r_to_py(np$array(time_series))
# Run https://tslearn.readthedocs.io/en/latest/gen_modules/clustering/tslearn.clustering.TimeSeriesKMeans.html via reticulate
ts <- reticulate::import("tslearn.clustering")
tsk <- ts$TimeSeriesKMeans(n_clusters=2, metric="dtw")
# Problematic line
try(tsk$fit(ts_np))
#> Error in py_call_impl(callable, call_args$unnamed, call_args$named) :
#> TypeError: 'float' object cannot be interpreted as an integer
#> Run `reticulate::py_last_error()` for details.
reticulate::py_list_packages()
#> package version requirement
#> 1 contourpy 1.1.0 contourpy==1.1.0
#> 2 cycler 0.11.0 cycler==0.11.0
#> 3 dtaidistance 2.3.10 dtaidistance==2.3.10
#> 4 fonttools 4.42.0 fonttools==4.42.0
#> 5 h5py 3.9.0 h5py==3.9.0
#> 6 importlib-metadata 6.8.0 importlib-metadata==6.8.0
#> 7 importlib-resources 6.0.1 importlib-resources==6.0.1
#> 8 joblib 1.3.2 joblib==1.3.2
#> 9 kiwisolver 1.4.4 kiwisolver==1.4.4
#> 10 llvmlite 0.41.0 llvmlite==0.41.0
#> 11 matplotlib 3.7.2 matplotlib==3.7.2
#> 12 natsort 8.4.0 natsort==8.4.0
#> 13 networkx 3.1 networkx==3.1
#> 14 numba 0.58.0 numba==0.58.0
#> 15 numpy 1.24.4 numpy==1.24.4
#> 16 packaging 23.1 packaging==23.1
#> 17 pandas 2.0.3 pandas==2.0.3
#> 18 patsy 0.5.3 patsy==0.5.3
#> 19 Pillow 10.0.0 Pillow==10.0.0
#> 20 pkg_resources 0.0.0 pkg_resources==0.0.0
#> 21 polars 0.18.15 polars==0.18.15
#> 22 progressbar 2.5 progressbar==2.5
#> 23 pyarrow 12.0.1 pyarrow==12.0.1
#> 24 pyparsing 3.0.9 pyparsing==3.0.9
#> 25 python-dateutil 2.8.2 python-dateutil==2.8.2
#> 26 pytz 2023.3 pytz==2023.3
#> 27 scikit-fuzzy 0.4.2 scikit-fuzzy==0.4.2
#> 28 scikit-learn 1.3.1 scikit-learn==1.3.1
#> 29 scipy 1.10.1 scipy==1.10.1
#> 30 seaborn 0.12.2 seaborn==0.12.2
#> 31 six 1.16.0 six==1.16.0
#> 32 statsmodels 0.14.0 statsmodels==0.14.0
#> 33 threadpoolctl 3.2.0 threadpoolctl==3.2.0
#> 34 tslearn 0.6.2 tslearn==0.6.2
#> 35 tzdata 2023.3 tzdata==2023.3
#> 36 zipp 3.16.2 zipp==3.16.2
reticulate::py_version()
#> [1] '3.8'
sessionInfo()
#> R version 4.2.1 (2022-06-23)
#> Platform: x86_64-pc-linux-gnu (64-bit)
#> Running under: Ubuntu 20.04.4 LTS
#>
#> Matrix products: default
#> BLAS: /usr/lib/x86_64-linux-gnu/openblas-pthread/libblas.so.3
#> LAPACK: /usr/lib/x86_64-linux-gnu/openblas-pthread/liblapack.so.3
#>
#> locale:
#> [1] LC_CTYPE=en_US.UTF-8 LC_NUMERIC=C
#> [3] LC_TIME=en_US.UTF-8 LC_COLLATE=en_US.UTF-8
#> [5] LC_MONETARY=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8
#> [7] LC_PAPER=en_US.UTF-8 LC_NAME=C
#> [9] LC_ADDRESS=C LC_TELEPHONE=C
#> [11] LC_MEASUREMENT=en_US.UTF-8 LC_IDENTIFICATION=C
#>
#> attached base packages:
#> [1] stats graphics grDevices utils datasets methods base
#>
#> other attached packages:
#> [1] magrittr_2.0.3 reticulate_1.31
#>
#> loaded via a namespace (and not attached):
#> [1] Rcpp_1.0.9 rstudioapi_0.13 knitr_1.39 lattice_0.20-45
#> [5] R.cache_0.16.0 rlang_1.0.4 fastmap_1.1.0 fansi_1.0.3
#> [9] stringr_1.4.0 styler_1.10.2 highr_0.9 tools_4.2.1
#> [13] grid_4.2.1 xfun_0.32 png_0.1-7 R.oo_1.25.0
#> [17] utf8_1.2.2 cli_3.3.0 withr_2.5.0 htmltools_0.5.3
#> [21] yaml_2.3.5 digest_0.6.29 lifecycle_1.0.1 Matrix_1.4-1
#> [25] purrr_0.3.4 vctrs_0.4.1 R.utils_2.12.0 fs_1.5.2
#> [29] glue_1.6.2 evaluate_0.16 rmarkdown_2.14 reprex_2.0.1
#> [33] stringi_1.7.8 pillar_1.8.0 compiler_4.2.1 R.methodsS3_1.8.2
#> [37] jsonlite_1.8.0
Created on 2023-09-22 by the reprex package (v2.0.1)
</code></pre>
<p>The <code>py_last_error</code>:</p>
<pre><code>> reticulate::py_last_error()
── Python Exception Message ────────────────────────────────────────────────
Traceback (most recent call last):
File "/home/rstudio/.virtualenvs/r-reticulate/lib/python3.8/site-packages/tslearn/clustering/kmeans.py", line 821, in fit
self._fit_one_init(X_, x_squared_norms, rs)
File "/home/rstudio/.virtualenvs/r-reticulate/lib/python3.8/site-packages/tslearn/clustering/kmeans.py", line 675, in _fit_one_init
self.cluster_centers_ = _k_init_metric(
File "/home/rstudio/.virtualenvs/r-reticulate/lib/python3.8/site-packages/tslearn/clustering/kmeans.py", line 98, in _k_init_metric
centers = numpy.empty((n_clusters, n_timestamps, n_features), dtype=X.dtype)
TypeError: 'float' object cannot be interpreted as an integer
── R Traceback ─────────────────────────────────────────────────────────────
 ▆
 1. ├─base::try(tsk$fit(ts_np))
 2. │ └─base::tryCatch(...)
 3. │   └─base (local) tryCatchList(expr, classes, parentenv, handlers)
 4. │     └─base (local) tryCatchOne(expr, names, parentenv, handlers[[1L]])
 5. │       └─base (local) doTryCatch(return(expr), name, parentenv, handler)
 6. └─tsk$fit(ts_np)
 7.   └─reticulate:::py_call_impl(callable, call_args$unnamed, call_args$named)
</code></pre>
| <python><r><reticulate> | 2023-09-22 12:06:28 | 1 | 878 | ATpoint |
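For what it's worth, the usual cause of this error with reticulate is that R numerics arrive in Python as floats, so `n_clusters=2` becomes `2.0`, and tslearn eventually calls `numpy.empty((n_clusters, ...))`, which rejects float dimensions; passing an R integer literal (`n_clusters=2L`) typically avoids it. A minimal Python sketch of the failing call:

```python
import numpy as np

message = ""

# An R call like TimeSeriesKMeans(n_clusters=2) sends the R double 2.0,
# so tslearn ends up executing numpy.empty((2.0, n_timestamps, n_features)).
try:
    np.empty((2.0, 3, 1))
except TypeError as exc:
    message = str(exc)

print(message)  # 'float' object cannot be interpreted as an integer

# Passing a true integer (the equivalent of 2L in R) works.
centers = np.empty((2, 3, 1))
print(centers.shape)
```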
77,157,310 | 7,093,241 | What's the correct index-url for PyPI? | <p>I am trying to install packages from <code>requirements.txt</code> on a machine with a different <code>global.index</code> and different <code>search.index</code>. I am using the <code>--index-url</code> option but I get an error.</p>
<pre><code>$ pip3 install --index-url https://pypi.org/ -r requirements.txt
Looking in indexes: https://pypi.org/
ERROR: Could not find a version that satisfies the requirement annotated-types==0.5.0 (from -r requirements.txt (line 1)) (from versions: none)
ERROR: No matching distribution found for annotated-types==0.5.0 (from -r requirements.txt (line 1))
</code></pre>
<p>I had Python <code>3.8</code> but I updated to <code>3.11</code> to check if there was a version incompatibility but I get the same error. Is there a different URL I should use? I tried <a href="https://pypi.org/projects/" rel="nofollow noreferrer">https://pypi.org/projects/</a> with <code>projects</code> at the end but it failed at the next package.</p>
| <python><pip><windows-subsystem-for-linux><pypi> | 2023-09-22 11:40:55 | 1 | 1,794 | heretoinfinity |
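For reference, pip expects the PEP 503 "simple" repository API, which on PyPI lives under `/simple/` rather than at the site root:

```shell
# The simple index is what pip queries for package listings;
# https://pypi.org/ alone returns the human-facing website, so pip
# finds no distributions there.
pip3 install --index-url https://pypi.org/simple/ -r requirements.txt
```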
77,157,208 | 2,725,810 | Understanding the pickled object size | <p>I execute the following line in Python interpreter:</p>
<pre><code>len(pickle.dumps([numpy.random.random(384).tolist()]*55))
</code></pre>
<p>It gives 3584.</p>
<p>Given that <code>numpy.random.random</code> produces 8-byte floats, the data being pickled is 384 * 8 * 55 = 168,960 bytes. How come <code>len</code> gives such a small number?</p>
| <python><numpy><serialization><pickle> | 2023-09-22 11:26:25 | 2 | 8,211 | AlwaysLearning |
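The short answer is pickle's memo table: `[x]*55` repeats one and the same list object, and pickle serializes a shared object once, writing short back-references for the remaining occurrences. A sketch contrasting a shared inner list with 55 distinct copies:

```python
import pickle

import numpy as np

row = np.random.random(384).tolist()

# [row] * 55 creates 55 references to the *same* list object.  pickle
# memoizes objects it has already serialized, so the 384 floats are
# written once and the other 54 entries are tiny memo references.
shared = pickle.dumps([row] * 55)

# Copying the list 55 times forces pickle to serialize every float.
distinct = pickle.dumps([list(row) for _ in range(55)])

print(len(shared), len(distinct))
```

The second length is close to the 384 * 8 * 55 estimate (plus per-float framing overhead); the first is what the question observed.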
77,157,114 | 1,797,689 | How to understand when the user has changed the subject in a conversation? | <p>I am using LangChain for a chatbot. I have managed to create the prompt and send the chat history in it. A problem I am facing is that when the user talks about a certain topic for some time and then starts another, the LLM is unable to recognize the change of subject. How do I make it understand? Is there a tuned LLM for this, or an agent?</p>
<pre><code>recommender = VertexAI(model_name="text-bison@001",
max_output_tokens=1024,
temperature=0.9)
conv_memory = ConversationStringBufferMemory(
input_key="input_text",
buffer=user.chat_history,
memory_key="chat_history"
)
conversation = ConversationChain(
llm=recommender,
prompt=prompt,
input_key="input_text",
memory=conv_memory
)
llm_response: str = conversation.predict(input_text = input_text)
</code></pre>
| <python><langchain><py-langchain> | 2023-09-22 11:12:03 | 0 | 5,509 | khateeb |
77,156,996 | 5,439,470 | spark worker is receiving job but does not start working | <p>I use this compose-file to create a local spark "cluster"</p>
<pre><code>version: "3"
services:
spark-master:
image: spark:3.5.0
entrypoint: /opt/spark/bin/spark-class org.apache.spark.deploy.master.Master
ports:
- 8080:8080
- 7077:7077
spark-worker:
image: spark:3.5.0
depends_on:
- spark-master
entrypoint: /opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker spark://spark-master:7077
</code></pre>
<p>Afterwards I use a conda environment to connect to the Spark cluster from my host machine:</p>
<pre><code>name: spark
channels:
- conda-forge
- defaults
dependencies:
- python=3.8
- pyspark
- jupyter
</code></pre>
<p>However, when I open a notebook and run the following command, I can see that there is a new application in my Spark web UI, but it does not seem to be doing anything.</p>
<pre><code>import pyspark
spark = pyspark.sql.SparkSession.builder.master("spark://localhost:7077").getOrCreate()
spark.range(1000 * 1000 * 1000).count()
</code></pre>
<p>When I check the logs of my worker, I see that it received something, but it does not do anything.</p>
<pre><code>23/09/22 10:06:19 INFO ExecutorRunner: Launch command: "/opt/java/openjdk/bin/java" "-cp" "/opt/spark/conf:/opt/spark/jars/*" "-Xmx1024M" "-Dspark.driver.port=55497" "-Djava.net.preferIPv6Addresses=false" "-XX:+IgnoreUnrecognizedVMOptions" "--add-opens=java.base/java.lang=ALL-UNNAMED" "--add-opens=java.base/java.lang.invoke=ALL-UNNAMED" "--add-opens=java.base/java.lang.reflect=ALL-UNNAMED" "--add-opens=java.base/java.io=ALL-UNNAMED" "--add-opens=java.base/java.net=ALL-UNNAMED" "--add-opens=java.base/java.nio=ALL-UNNAMED" "--add-opens=java.base/java.util=ALL-UNNAMED" "--add-opens=java.base/java.util.concurrent=ALL-UNNAMED" "--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED" "--add-opens=java.base/sun.nio.ch=ALL-UNNAMED" "--add-opens=java.base/sun.nio.cs=ALL-UNNAMED" "--add-opens=java.base/sun.security.action=ALL-UNNAMED" "--add-opens=java.base/sun.util.calendar=ALL-UNNAMED" "--add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED" "-Djdk.reflect.useDirectMethodHandle=false" "org.apache.spark.executor.CoarseGrainedExecutorBackend" "--driver-url" "spark://CoarseGrainedScheduler@example.net:55497" "--executor-id" "0" "--hostname" "172.28.0.4" "--cores" "10" "--app-id" "app-20230922100619-0000" "--worker-url" "spark://Worker@172.28.0.4:41199" "--resourceProfileId" "0"
</code></pre>
<p>any idea what is wrong here?</p>
| <python><docker><apache-spark><pyspark><docker-compose> | 2023-09-22 10:54:44 | 0 | 1,303 | jan-seins |
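One common culprit in this setup: the executor launch command above advertises the driver at `example.net:55497`, an address the worker container typically cannot resolve or reach, so the executor never registers back with the driver and the job sits idle. A configuration sketch (not run here; the host name and port numbers are assumptions for your network) of the driver-side settings that usually need pinning:

```python
# Pin the driver's advertised address and ports so executors inside the
# Docker network can connect back.  "host.docker.internal" is an
# assumption -- substitute an address the worker container can reach,
# and publish the chosen ports if needed.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .master("spark://localhost:7077")
    .config("spark.driver.host", "host.docker.internal")
    .config("spark.driver.bindAddress", "0.0.0.0")
    .config("spark.driver.port", "55497")
    .config("spark.blockManager.port", "55498")
    .getOrCreate()
)
```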
77,156,969 | 3,198,895 | Numpy promotion in integer-float product | <p>When profiling some code today I was surprised by the following behaviour:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
print(repr(10**19 * np.ones(1, dtype=np.float64)))
print(repr(10**20 * np.ones(1, dtype=np.float64)))
</code></pre>
<p>which returns:</p>
<pre><code>array([1.e+19])
array([1e+20], dtype=object)
</code></pre>
<p>The latter product is 'promoted' so that <code>dtype=object</code>. I find this a bit surprising since <code>type(10**20 * 1.0)</code> is <code>float</code> in pure Python. Where can I read about this behaviour?</p>
| <python><numpy> | 2023-09-22 10:51:29 | 1 | 1,721 | Svaberg |
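The cutoff is the largest integer dtype numpy knows about. This is numpy 1.x value-based promotion; it is documented under numpy's promotion rules and was changed by NEP 50 in numpy 2.0, so newer versions behave differently. A small sketch of where the boundary sits:

```python
import numpy as np

# int64 tops out near 9.2e18, uint64 near 1.8e19.
print(np.iinfo(np.int64).max, np.iinfo(np.uint64).max)

# 10**19 no longer fits in int64, but it does fit in uint64, so numpy
# 1.x can still treat the Python int as a number and promote to float64.
print(10**19 <= np.iinfo(np.uint64).max)  # True

# 10**20 exceeds every numpy integer dtype, so the Python int is wrapped
# as a generic Python object and the product becomes an object array.
print(10**20 > np.iinfo(np.uint64).max)  # True
```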
77,156,768 | 5,505,224 | Importing module error and relative import error | <p>I went through posts on Stack Overflow and couldn't figure this out. My current workaround is hard-coding <code>sys.path.append</code> in run.py so that the application runs locally.</p>
<p>[![Folder Structure Image][1]][1]</p>
<p>Without the path append, when I try to run <code>python3 app/run.py</code> from the project folder, I receive a "No module named app" error. I have also tried setting PYTHONPATH to the project folder from the VSCode terminal before running the application, as I didn't want to hard-code the workaround.</p>
<blockquote>
<p>SET PYTHONPATH="c:\project\directory:$PYTHONPATH"</p>
</blockquote>
<pre><code>import os, sys
#Project Directory if relative import or no module errors
# sys.path.append(r"C:\\Users\\iswar\\Downloads\\Alembic\\Alembic")
import uvicorn
from fastapi import FastAPI
import logging
from app.api import routes as api_routes
</code></pre>
<p>I'm missing something plain and obvious. I don't have trouble with any other files that follow the same <code>from dir1.dir2 import moduleName</code> pattern. Please provide suggestions.
[1]: <a href="https://i.sstatic.net/mYSgH.png" rel="nofollow noreferrer">https://i.sstatic.net/mYSgH.png</a></p>
| <python> | 2023-09-22 10:21:25 | 0 | 482 | Rajesh Mappu |
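A minimal demonstration of the underlying issue (the `app` package here is a throwaway, hypothetical layout): running a file inside a package as a script puts the package directory, not the project root, on `sys.path`, whereas `python -m app.run` from the project root works without any `sys.path.append`:

```python
import os
import subprocess
import sys
import tempfile
from pathlib import Path

# Build a tiny project: root/app/{__init__.py, run.py}.
root = Path(tempfile.mkdtemp())
(root / "app").mkdir()
(root / "app" / "__init__.py").write_text("")
(root / "app" / "run.py").write_text("import app\nprint('ok')\n")

env = {**os.environ, "PYTHONPATH": ""}

# `python app/run.py` puts app/ (the script's directory) on sys.path,
# so `import app` cannot be resolved.
as_script = subprocess.run(
    [sys.executable, "app/run.py"],
    cwd=root, env=env, capture_output=True, text=True,
)

# `python -m app.run` from the project root puts the root on sys.path.
as_module = subprocess.run(
    [sys.executable, "-m", "app.run"],
    cwd=root, env=env, capture_output=True, text=True,
)

print(as_script.returncode, as_module.returncode)
```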
77,156,673 | 15,189,432 | How do I get the output as a dataframe? | <p>How do I get the output as a dataframe? I can't seem to find out what type the output is.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = "https://www.bvb.de/Spiele/Alle-Spiele"
soup = BeautifulSoup(requests.get(url).content, "html.parser")
table = soup.select("table.statistics")
for row in soup.select("tr:has(td)"):
tds = [td.get_text(strip=True, separator=" ") for td in row.select("td")]
if len(tds) > 2:
team1, team2 = tds[1], tds[2]
date = tds[0]
opponent1 = tds[1]
opponent2 = tds[2]
score = tds[3]
print(score)
</code></pre>
| <python><dataframe> | 2023-09-22 10:06:38 | 0 | 361 | Theresa_S |
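One way to get a DataFrame is to collect each row into a dict and hand the list of dicts to `pandas.DataFrame`. A self-contained sketch using an inline sample table (the real page's markup is an assumption here):

```python
import pandas as pd
from bs4 import BeautifulSoup

# Inline sample standing in for the fetched page.
html = """
<table class="statistics">
  <tr><th>Date</th><th>Team1</th><th>Team2</th><th>Score</th></tr>
  <tr><td>Sa 01.01.</td><td>BVB</td><td>FCB</td><td>2:1</td></tr>
  <tr><td>Sa 08.01.</td><td>S04</td><td>BVB</td><td>0:3</td></tr>
</table>
"""

soup = BeautifulSoup(html, "html.parser")

# Collect one dict per table row, then build the DataFrame in one go.
records = []
for row in soup.select("tr:has(td)"):
    tds = [td.get_text(strip=True, separator=" ") for td in row.select("td")]
    if len(tds) >= 4:
        records.append({
            "date": tds[0],
            "opponent1": tds[1],
            "opponent2": tds[2],
            "score": tds[3],
        })

df = pd.DataFrame(records)
print(df)
```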
77,156,285 | 7,188,690 | create a generalized function to transpose the columns and make dictionaries as values in the other column | <p>I have this Spark data frame. I am trying to convert the columns to rows, using the values of the <code>gw</code> column as the keys of a dictionary in the other column. I want to write a generalized function for this which can take multiple columns as input.</p>
<pre><code>+-----------------+--------+--------+
| gw              | rrc    | re_est |
+-----------------+--------+--------+
| 210.142.27.137  | 1400.0 | 26.0   |
| 210.142.27.202  | 2300   | 12     |
+-----------------+--------+--------+
</code></pre>
<p>expected output like this:</p>
<pre><code>+--------+------------------------------------------------+
| index  | gw_mapping                                     |
+--------+------------------------------------------------+
| rrc    | {210.142.27.137: 1400.0, 210.142.27.202: 2300} |
| re_est | {210.142.27.137: 26.0, 210.142.27.202: 12}     |
+--------+------------------------------------------------+
</code></pre>
<p>What I have done:</p>
<pre><code>result_df = (
df
.select('gw', F.expr("stack(2, 'rrc', rrc, 're_est', re_est) AS (index, value)"))
.groupby('index')
.agg(F.expr("map_from_entries(collect_list(struct(gw, value))) as gw_mapping")))
</code></pre>
<p>Somehow I am unable to get this output when I write a generalized function that can take multiple columns like <code>rrc</code> and <code>re_est</code>.</p>
| <python><pyspark> | 2023-09-22 09:13:13 | 1 | 494 | sam |
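A sketch of one way to generalize this: build the `stack(...)` SQL string from a column list. The string builder is plain Python, so it can be checked without a Spark session; the `gw_mapping` helper mirrors the question's pipeline and is only a sketch (it needs a live SparkSession to actually run):

```python
from typing import Sequence


def stack_expr(cols: Sequence[str]) -> str:
    """Build the stack(...) expression for any list of metric columns."""
    pairs = ", ".join(f"'{c}', {c}" for c in cols)
    return f"stack({len(cols)}, {pairs}) AS (index, value)"


def gw_mapping(df, key_col: str, value_cols: Sequence[str]):
    """Transpose value_cols into rows, mapping key_col -> value per metric."""
    import pyspark.sql.functions as F  # imported lazily; needs a SparkSession
    return (
        df.select(key_col, F.expr(stack_expr(value_cols)))
          .groupby("index")
          .agg(F.expr(
              f"map_from_entries(collect_list(struct({key_col}, value)))"
              f" AS {key_col}_mapping"))
    )


print(stack_expr(["rrc", "re_est"]))
# stack(2, 'rrc', rrc, 're_est', re_est) AS (index, value)
```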
77,156,217 | 8,430,629 | Calling model.fit on target array of columns vs array of rows with separate output nodes | <p>The two <em>should</em> be equivalent in terms of how they are fed into the model or one should be accepted and the other incompatible. A list of columns vs a list of rows. My model architecture takes 5 inputs and spits out 3 outputs. I noticed that in training - when my target data was a 3 x 1000 numpy array I had losses in the region of 0.3 (which is unusually high considering when I predict on the test set my loss is 0.01). Often the loss at each output node is similar when using the standard columns and then differs greatly when using the rows.</p>
<p>Then transposing it <code>y.T</code> does not work - only when converting to a list <code>list(y.T)</code> does the model accept it as a target. Then the losses are much more comparable to what is expected. I know this should not work and passing in the columns should be the way to go - but why is my loss so much lower.</p>
<p>I know it sounds nonsensical, but here's an MRE to validate what's happening:</p>
<pre><code>import numpy as np
import pandas as pd
import tensorflow as tf

inputs = tf.keras.layers.Input(shape=(5,))
x = tf.keras.layers.Dense(units=16, activation="relu")(inputs)
x = tf.keras.layers.Dense(units=16, activation="relu")(x)
out1 = tf.keras.layers.Dense(1)(x)
out2 = tf.keras.layers.Dense(1)(x)
out3 = tf.keras.layers.Dense(1)(x)
outputs = [out1, out2, out3]
model = tf.keras.models.Model(inputs=inputs, outputs=outputs)
# Creating data
data = np.random.random(size=(300,8))
df = pd.DataFrame(data)
df.columns = ["A", "B", "C","D", "E", "F", "G", "H"]
df["F"] = df["F"] * 2 # Perturbing
df["G"] = df["G"] * 3
df["H"] = df["H"] * 4
model.compile(loss="mae")
X = df[["A", "B", "C", "D", "E"]]
y = df[["F", "G", "H"]]
ny = y.to_numpy()
# history = model.fit(X, list(ny.T), epochs=200) # Alternate - uncomment when you want to use
history = model.fit(X, ny, epochs=200) # standard
pred = model.predict(X)
pred = np.concatenate(pred, axis=1)
resid = np.mean(np.abs(ny - pred), axis=0)
print(resid) # Training loss is lower for rows...
</code></pre>
<p>Any help is appreciated. Maybe defining the model with separate output branches as opposed to a single <code>Dense(3)</code> changes things?</p>
| <python><tensorflow><keras><deep-learning><neural-network> | 2023-09-22 09:02:43 | 1 | 312 | Governor |
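One plausible reading of the discrepancy, consistent with the inflated loss: with three single-unit outputs, Keras expects a list of three targets, and handing it one `(n, 3)` array can end up comparing each `(n, 1)` head against the full target via broadcasting. A pure-numpy sketch of that effect (no TensorFlow needed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300
y_true = rng.random((n, 3))   # three targets as columns
y_pred = y_true.copy()        # a "perfect" set of predictions

# What `fit(X, ny)` can effectively do per output head: compare the
# head's (n, 1) prediction against the whole (n, 3) target, which
# broadcasts to (n, 3) inside the MAE.
head0 = y_pred[:, :1]
broadcast_mae = np.mean(np.abs(y_true - head0))

# What `fit(X, list(ny.T))` does: each head sees only its own column.
correct_mae = np.mean(np.abs(y_true[:, 0] - head0[:, 0]))

print(correct_mae, broadcast_mae)
```

Even for perfect predictions, the broadcast version reports a clearly nonzero loss, which would explain why the column-list targets train to much lower values.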
77,156,187 | 11,267,783 | Set axis to specific value in pyqtgraph | <p>I would like to specify the values shown on the axes of my ImageItem.</p>
<p>This is my code:</p>
<pre><code>import numpy as np
import pyqtgraph as pg
app = pg.mkQApp('Example')
layout = pg.GraphicsLayoutWidget(show=True)
plot_item = pg.PlotItem(title="")
image_item = pg.ImageItem()
plot_item.addItem(image_item)
layout.addItem(plot_item)
data = np.random.rand(10,10)
image_item.setImage(data)
axb = pg.graphicsItems.AxisItem.AxisItem(orientation='bottom')
axb.setRange(-100,100)
plot_item.setAxisItems(axisItems={'bottom': axb})
if __name__ == '__main__':
pg.exec()
</code></pre>
<p>This code does not work because the bottom axis is not between -100 and 100.</p>
<p>Moreover, is it possible to pass an entire array, so that my bottom axis corresponds to [-100,-75,-50,-25,0,25,50,75,100] (or np.linspace(-100,100,9))?</p>
| <python><pyqt><pyqtgraph> | 2023-09-22 08:59:32 | 1 | 322 | Mo0nKizz |