| QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
75,933,074
| 14,546,482
|
Python nested dictionary - remove "" and data with extra spaces but keep None values
|
<p>I have a dictionary and would like to keep <code>None</code> values but remove values that are <code>""</code>, as well as values made up of any combination of <code>" "</code>s.</p>
<p>I have the following dictionary:</p>
<pre><code>{'UserName': '',
'Location': [{'City': '',
'Country': 'Japan',
'Address 1': ' ',
'Address 2': ' '}],
'PhoneNumber': [{'Number': '123-456-7890', 'ContactTimes': '', 'PreferredLanguage': None}],
'EmailAddress': [{'Email': 'test@test.com', 'Subscribed': None}],
'FriendCount': [{'SumAsString': 'xndiofa!#$*9'}]
}
</code></pre>
<p>Expected result:</p>
<pre><code>{
'Location': [{
'Country': 'Japan',
}],
'PhoneNumber': [{'Number': '123-456-7890', 'PreferredLanguage': None}],
'EmailAddress': [{'Email': 'test@test.com', 'Subscribed': None}],
'FriendCount': [{'SumAsString': 'xndiofa!#$*9'}]
}
</code></pre>
<p>I have this function and it's partially working, but I can't figure out how to remove keys whose values are just extra spaces.</p>
<pre><code>def delete_junk(_dict):
for key, value in list(_dict.items()):
if isinstance(value, dict):
delete_junk(value)
elif value == '':
del _dict[key]
elif isinstance(value, list):
for v_i in value:
if isinstance(v_i, dict):
delete_junk(v_i)
return _dict
</code></pre>
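<p>For reference, a minimal sketch of one way to handle the whitespace-only values as well: rebuild the structure recursively and drop any key whose value is a string that strips to empty. <code>None</code> survives because only strings are tested.</p>

```python
def clean(d):
    """Recursively drop keys whose value is '' or whitespace-only; keep None."""
    if isinstance(d, dict):
        return {k: clean(v) for k, v in d.items()
                if not (isinstance(v, str) and v.strip() == '')}
    if isinstance(d, list):
        return [clean(v) for v in d]
    return d
```

<p>Unlike in-place deletion, rebuilding avoids mutating a dict while iterating over it.</p>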
|
<python><json><dictionary>
|
2023-04-04 19:19:26
| 2
| 343
|
aero8991
|
75,933,037
| 11,092,636
|
flask and Jinja2 control structure doesn't work with render_template.format but works when passing the variable directly
|
<p>Here is an MRE.</p>
<p><code>templates/index.html</code>:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<title>Flask App</title>
</head>
<body>
{{test}} {% if test %}lol{% endif %}
</body>
</html>
</code></pre>
<p><code>my_test.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from flask import Flask, render_template
app = Flask(__name__)
@app.route("/")
def index():
return render_template("index.html", test=True)
if __name__ == "__main__":
# local run
app.run()
</code></pre>
<p>This works perfectly.</p>
<p>However, when using the <code>.format</code> syntax, to output a variable I need to do <code>{test}</code> instead of <code>{{test}}</code> (which is fine, I don't mind), but more importantly, the <code>{% if ... %}{% endif %}</code> doesn't work anymore. Any reason why? I can't understand the difference and I don't know how to google it (I've tried reading <a href="https://flask.palletsprojects.com/en/2.0.x/templating/" rel="nofollow noreferrer">https://flask.palletsprojects.com/en/2.0.x/templating/</a> but I'm not sure I got any answer).</p>
<p><strong>My question is about why this behaves like this. I evidently do know how to make the <code>{% if %}</code> work, but my whole project uses <code>.format()</code> and I was wondering if I really needed to change everything and whether there would be any drawbacks* in doing so.</strong></p>
<p>*EDIT: I found that if I did <code>{my_variable}</code> with <code>my_variable</code> containing a <code><br></code> it used to do a linebreak but with <code>{{my_variable}}</code> (not using <code>.format</code>), the <code><br></code> literally appears. So I guess, I'm not actually finding a way to make it work and I need some help.</p>
<p>EDIT2:
By using <code>.format()</code> I mean:</p>
<pre class="lang-py prettyprint-override"><code>def index():
return render_template("index.html").format(test=True)
</code></pre>
<p>and <code>index.html</code> would look like this:</p>
<pre class="lang-html prettyprint-override"><code><!DOCTYPE html>
<html>
<head>
<title>Flask App</title>
</head>
<body>
{test} {% if test %}lol{% endif %}
</body>
</html>
</code></pre>
<p>and the <code>if</code> statement won't work (<code>test</code> will be displayed correctly though).</p>
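<p>A sketch of what I suspect happens (an assumption: Jinja renders the template before <code>.format()</code> ever runs, so the <code>{% if %}</code> is evaluated with <code>test</code> undefined and its body is dropped; <code>.format()</code> then only substitutes the leftover <code>{test}</code> placeholder):</p>

```python
# Stage 1: Jinja renders first. `test` was not passed to render_template,
# so `{% if test %}` sees an undefined (falsy) variable and drops "lol";
# the bare `{test}` is not Jinja syntax and passes through unchanged.
after_jinja = "{test} "  # assumed Jinja output for "{test} {% if test %}lol{% endif %}"

# Stage 2: str.format only replaces {name} placeholders in the
# already-rendered string; it never sees any Jinja tags.
final = after_jinja.format(test=True)
print(final)  # "True "
```

<p>This would also explain the <code>&lt;br&gt;</code> difference: Jinja autoescapes variables in <code>.html</code> templates, while <code>str.format</code> inserts the raw string.</p>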
|
<python><flask>
|
2023-04-04 19:13:39
| 1
| 720
|
FluidMechanics Potential Flows
|
75,932,966
| 183,315
|
Why am I getting a MissingAuthenticationToken error when calling sns.publish from Python boto3?
|
<p>I'm working on a Python 3.10.3 application in a Docker container that sends messages using AWS SNS. I've written this code:</p>
<pre class="lang-py prettyprint-override"><code>import os
from uuid import uuid4

import boto3
import botocore
from botocore.config import Config

config = Config(signature_version=botocore.UNSIGNED)
def send_sns_message(topic_arn: str, message: str, message_attributes: dict) -> None:
sns = boto3.client(
"sns",
region_name=os.environ["AWS_REGION"],
aws_access_key_id= os.environ["AWS_ACCESS_KEY_ID"],
aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
config=config,
)
att_dict = {}
for key, value in message_attributes.items():
if isinstance(value, str):
att_dict[key] = {"DataType": "String", "StringValue": value}
response = sns.publish(
TopicArn=topic_arn,
Message=message,
MessageGroupId=str(uuid4()),
MessageAttributes=att_dict,
)
</code></pre>
<p>When I call this function with a valid topic_arn, message and message_attributes it generates this error:</p>
<pre><code>botocore.exceptions.ClientError: An error occurred (MissingAuthenticationToken) when calling the Publish operation: Request is missing Authentication Token
</code></pre>
<p>The AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY are valid. Does anyone have any suggestion about how I can resolve this issue? Or, where to find/generate an authentication token?</p>
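<p>For comparison, a hypothetical sketch of the same client arguments without the unsigned config. My understanding (an assumption worth verifying) is that <code>signature_version=botocore.UNSIGNED</code> disables SigV4 request signing entirely, which matches the "Request is missing Authentication Token" message:</p>

```python
import os

def sns_client_kwargs() -> dict:
    # Same arguments as above, but with no `config=` entry, so botocore
    # falls back to its default signed requests.
    return {
        "region_name": os.environ.get("AWS_REGION", "us-east-1"),
        "aws_access_key_id": os.environ.get("AWS_ACCESS_KEY_ID", ""),
        "aws_secret_access_key": os.environ.get("AWS_SECRET_ACCESS_KEY", ""),
    }

kwargs = sns_client_kwargs()
```

<p><code>boto3.client("sns", **kwargs)</code> would then sign each request with the supplied credentials.</p>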
|
<python><amazon-web-services><amazon-sns>
|
2023-04-04 19:03:19
| 1
| 1,935
|
writes_on
|
75,932,923
| 14,608,529
|
How to play 2 pieces of audio in parallel through different speakers? - python
|
<p>I have two pieces of audio: (1) 15 heartbeat clips and (2) 15 frequency noises.
The frequency noises are only 1 second and shorter in duration than the heartbeat clips (generated on-the-fly, variable length).</p>
<p>I'm able to successfully play both pieces through different speakers but only in series (one at a time), not in parallel (together). I'm doing this through <code>sounddevice</code> as follows:</p>
<pre><code>import numpy as np
import sounddevice as sd
import soundfile as sf
for i, clip in enumerate(user_heartbeats):
clip.export("temp.wav", format="wav")
audio_data, sample_rate = sf.read("temp.wav", dtype='float32')
sd.play(audio_data, sample_rate, device=heartbeat_speaker_id)
sd.wait()
t = np.linspace(0.0, freq_duration_sec, int(freq_duration_sec * sampling_freq), endpoint=False)
waveform = (volume) * np.sin(2.0 * np.pi * frequencies_to_play[i] * t)
sd.play(waveform, sampling_freq, device=frequency_speaker_id)
sd.wait()
</code></pre>
<p>I looked into the <code>threading</code> library to play both simultaneously but I'm not able to get the intended result:</p>
<pre><code>def play_audio_pairing(audio, delay, device):
# load heartbeat sound
audio_data, sample_rate = sf.read(audio, dtype='float32')
if delay > 0:
padding = np.zeros((int(delay * sample_rate), audio_data.shape[1]))
audio_data = np.concatenate((padding, audio_data), axis=0)
# Play audio file on the specified sound device
sd.play(audio_data, sample_rate, device=device)
sd.wait()
max_audio_duration = max([sf.info(file).duration for file in heartbeat_sounds + frequencies_to_play])
threads = []
for i in range(len(heartbeat_sounds)):
heartbeat_sound = heartbeat_sounds[i]
frequency_noise = frequencies_to_play[i % len(frequencies_to_play)]
heartbeat_delay = max_audio_duration - sf.info(heartbeat_sound).duration
frequency_delay = max_audio_duration - sf.info(frequency_noise).duration
thread = threading.Thread(target=play_audio_pairing, args=(heartbeat_sound, heartbeat_delay - frequency_delay, heartbeat_speaker_id))
threads.append(thread)
thread.start()
thread = threading.Thread(target=play_audio_pairing, args=(frequency_noise, 0, frequency_speaker_id))
threads.append(thread)
thread.start()
for thread in threads:
thread.join()
</code></pre>
<p>Any help on how to merge these so that each frequency noise starts and plays with its corresponding heartbeat sound, in different speakers, would be really helpful.</p>
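<p>A stand-in sketch of the timing pattern (the audio calls are replaced by <code>time.sleep</code>, an assumption to keep it self-contained): the key is to start both playbacks before waiting on either one. With <code>sounddevice</code> itself, each device would also need its own <code>OutputStream</code>, since the module-level <code>sd.play()</code> keeps only one stream alive at a time.</p>

```python
import threading
import time

def play(label, duration, log):
    time.sleep(duration)   # stand-in for sd.play(...) followed by sd.wait()
    log.append(label)

log = []
t1 = threading.Thread(target=play, args=("heartbeat", 0.3, log))
t2 = threading.Thread(target=play, args=("frequency", 0.3, log))
start = time.time()
t1.start()
t2.start()                 # both started before any join
t1.join()
t2.join()
elapsed = time.time() - start  # roughly 0.3 s, not 0.6 s: they overlapped
```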
|
<python><multithreading><audio><python-multithreading><python-sounddevice>
|
2023-04-04 18:57:04
| 0
| 792
|
Ricardo Francois
|
75,932,695
| 935,376
|
reading a string with newlines and commas into a nested list
|
<p>I have a string as shown below that correspond to date, agency, drivers license and car.</p>
<pre><code>st = '2020-09-12, Budget, DF8576, Ford,\n 2020-04-22, Avis, D143266, Toyota, \n ..'
</code></pre>
<p><a href="https://i.sstatic.net/zGcnn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zGcnn.png" alt="enter image description here" /></a></p>
<p>I want to split by new line (\n) first and then by comma and read into a list like below</p>
<pre><code>lst = [['2020-09-12', 'Budget', 'DF8576', 'Ford'],
['2020-04-22', 'Avis', 'D143266', 'Toyota'],
:
]
</code></pre>
<p>I tried this code below:</p>
<pre><code> st.split('\n')
</code></pre>
<p>It gives:</p>
<pre><code>['2020-09-12, Budget, DF8576, Ford,\\n 2020-04-22, Avis, D143266,..]
</code></pre>
<p>I am not sure why it is giving <code>\\n</code> when I want to split by new line first and then by comma.</p>
<p>Can you please help?</p>
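<p>A minimal sketch of the two-stage split (note: if the echoed <code>\\n</code> means the string really contains a literal backslash followed by <code>n</code>, e.g. because it came from a raw string, split on <code>'\\n'</code> instead):</p>

```python
st = '2020-09-12, Budget, DF8576, Ford,\n 2020-04-22, Avis, D143266, Toyota'

# Split on newlines first, then on commas, stripping the padding and the
# trailing comma each record carries.
lst = [[field.strip() for field in line.strip().rstrip(',').split(',')]
       for line in st.split('\n') if line.strip()]
print(lst)
```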
|
<python><string>
|
2023-04-04 18:29:15
| 2
| 2,064
|
Zenvega
|
75,932,654
| 834,808
|
Unable to compile C extensions for clickhouse-connect python module
|
<p>Unexpectedly stuck on installation of python module <strong>clickhouse-connect</strong>, with this error:</p>
<pre><code>Building wheels for collected packages: clickhouse-connect
Running setup.py bdist_wheel for clickhouse-connect ... error
Complete output from command /usr/bin/python3 -u -c "import setuptools, tokenize;__file__='/tmp/pip-req-build-4fl5kcjv/setup.py';f=getattr(tokenize, 'open', open)(__file__);code=f.read().replace('\r\n', '\n');f.close();exec(compile(code, __file__, 'exec'))" bdist_wheel -d /tmp/pip-wheel-o14i2v_m --python-tag cp37:
Using Cython 3.0.0b1 to build cython modules
Unable to compile C extensions for faster performance due to usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
error: invalid command 'bdist_wheel', will use pure Python
usage: -c [global_opts] cmd1 [cmd1_opts] [cmd2 [cmd2_opts] ...]
or: -c --help [cmd1 cmd2 ...]
or: -c --help-commands
or: -c cmd --help
</code></pre>
<p>And absolutely no ideas :(</p>
<p>Details:</p>
<ul>
<li>Python 3.7.3 (default, Oct 31 2022, 14:04:00) [GCC 8.3.0] on linux</li>
<li>pip 18.1 from /usr/lib/python3/dist-packages/pip (python 3.7)</li>
<li>cython, wheel, python3-dev - installed.</li>
<li>clickhouse-connect using pure Python installed successfully.</li>
</ul>
<p>I have tried to build module from sources (<a href="https://github.com/ClickHouse/clickhouse-connect" rel="nofollow noreferrer">github</a>), but again, no luck.</p>
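<p>For what it's worth, a hedged sketch of the usual first remedy: the <code>invalid command 'bdist_wheel'</code> usage dump typically means the <code>wheel</code> package is not visible to the build, which the old pip 18.x handles poorly. This only constructs the upgrade command rather than running it:</p>

```python
import sys

# Upgrade the build tooling for the *same* interpreter that runs pip,
# then retry `pip install clickhouse-connect`.
upgrade_cmd = [sys.executable, "-m", "pip", "install",
               "--upgrade", "pip", "setuptools", "wheel"]
print(" ".join(upgrade_cmd))
```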
|
<python><python-3.x><pip><module><clickhouse>
|
2023-04-04 18:23:55
| 1
| 468
|
Vasilij
|
75,932,642
| 1,715,544
|
How to test subclass of tornado.web.RequestHandler without using AsyncTestCase or AsyncHTTPTestCase (since they're deprecated)?
|
<p>I've got a subclass of <code>tornado.web.RequestHandler</code>, but the common test-cases are deprecated as of 6.2. Most resources I've found (including many questions on StackOverflow) use the deprecated classes, which I would like to avoid.</p>
<p>Tornado's documentation says to use <code>unittest.IsolatedAsyncioTestCase</code> instead, but without much of an indication as to how to actually do that. Furthermore, I think I'd like to simply instantiate my RequestHandler to directly test its methods without relying on a TestCase class, but am having a hard time doing that.</p>
<p>So I have a couple of additional questions besides that in the title:</p>
<ol>
<li><p><code>tornado.web.RequestHandler</code> typically needs an <code>Application</code> and a <code>Request</code> to be instantiated, but when I provide them, I receive a <code>'HTTPRequest' object has no attribute 'supports_http_1_1'</code> error. Both my request and application are pretty straightforward:</p>
<pre><code>application = tornado.web.Application([
(r"/", tornado.web.RequestHandler)
])
request = tornado.httpclient.HTTPRequest(method="POST", url="/", headers=None, body=None)
</code></pre>
<p>I attempt to instantiate w/ <code>tornado.web.RequestHandler(application, request)</code>.</p>
</li>
<li><p>How can I use <code>unittest.IsolatedAsyncioTestCase</code> and why would I need to?</p>
</li>
</ol>
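<p>A generic sketch of the <code>unittest.IsolatedAsyncioTestCase</code> pattern (no Tornado involved, so the handler method here is a hypothetical stand-in): test methods may be coroutines, and the case runs each one on a fresh event loop, which is why Tornado's docs point at it as the replacement for <code>AsyncTestCase</code>. As far as I can tell, the <code>supports_http_1_1</code> error comes from passing the client-side <code>tornado.httpclient.HTTPRequest</code>; handlers expect the server-side <code>tornado.httputil.HTTPServerRequest</code>.</p>

```python
import unittest

class HandlerLogicTest(unittest.IsolatedAsyncioTestCase):
    async def test_async_helper(self):
        async def fake_prepare():   # stand-in for an async handler method
            return "ready"
        self.assertEqual(await fake_prepare(), "ready")

suite = unittest.defaultTestLoader.loadTestsFromTestCase(HandlerLogicTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```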
|
<python><unit-testing><tornado>
|
2023-04-04 18:22:47
| 1
| 1,410
|
AmagicalFishy
|
75,932,600
| 1,176,432
|
Distribute data from list into column in panda dataframe
|
<p>I have a list <code>users=['a','b','c','d']</code></p>
<p>I have a dataframe X with 100 rows.
I want to populate the <code>X['users']</code> with list users, such that</p>
<ol>
<li>distribution is even. In the above example there must be 25 entries of each element</li>
<li>the distribution is done in a random way. It shouldn't have a fix pattern of distribution each time I run. <code>abcdabcd vs aaabbbcccddd vs accbddab</code> are all valid distributions.</li>
</ol>
<p>How do I go about this?</p>
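<p>A minimal sketch (assuming <code>len(X)</code> is divisible by <code>len(users)</code>, as in the example): repeat each user equally, then shuffle the pool so the arrangement changes on every run.</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng()
users = ['a', 'b', 'c', 'd']
X = pd.DataFrame(index=range(100))

pool = np.repeat(users, len(X) // len(users))  # 25 of each
X['users'] = rng.permutation(pool)             # random order, even counts
```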
|
<python><pandas><dataframe>
|
2023-04-04 18:17:16
| 2
| 772
|
anotherCoder
|
75,932,557
| 1,964,489
|
Calculate overall balance with records including various add and sub operations
|
<p>I have a data frame <code>df</code> with columns <code>type</code>, <code>timestamp</code>, <code>value</code> and <code>team</code>. The type can be <code>add</code> (adding value) or <code>sub</code> (subtracting value). The records log the payments given or taken away for a particular team. I would like to calculate the overall balance: how much I spent across all teams and how much per team. Note that I could pay <code>X</code> to a particular team <code>A</code> on, e.g., Monday, then on Tuesday get back <code>X</code>, and then pay team <code>A</code> amount <code>Y</code> on Friday. Is there some nice pandas way to do that? Here is an example of how the data looks:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Timestamp</th>
<th>Value</th>
<th>Team</th>
</tr>
</thead>
<tbody>
<tr>
<td>add</td>
<td>1675971432</td>
<td>20</td>
<td>A</td>
</tr>
<tr>
<td>add</td>
<td>1675971392</td>
<td>50</td>
<td>C</td>
</tr>
<tr>
<td>sub</td>
<td>1675877813</td>
<td>15</td>
<td>A</td>
</tr>
<tr>
<td>add</td>
<td>1675877579</td>
<td>10</td>
<td>D</td>
</tr>
<tr>
<td>add</td>
<td>1675877528</td>
<td>20</td>
<td>B</td>
</tr>
<tr>
<td>add</td>
<td>1675877128</td>
<td>15</td>
<td>A</td>
</tr>
</tbody>
</table>
</div>
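<p>One possible sketch with the example rows above: map the type to a sign, then a single sum gives the overall balance and a <code>groupby</code> sum gives the per-team balances.</p>

```python
import pandas as pd

df = pd.DataFrame({
    'Type': ['add', 'add', 'sub', 'add', 'add', 'add'],
    'Timestamp': [1675971432, 1675971392, 1675877813,
                  1675877579, 1675877528, 1675877128],
    'Value': [20, 50, 15, 10, 20, 15],
    'Team': ['A', 'C', 'A', 'D', 'B', 'A'],
})

# sub rows become negative, add rows stay positive
signed = df['Value'].where(df['Type'].eq('add'), -df['Value'])
overall = signed.sum()                       # 100
per_team = signed.groupby(df['Team']).sum()  # A 20, B 20, C 50, D 10
```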
|
<python><pandas><dataframe>
|
2023-04-04 18:10:20
| 1
| 3,541
|
Ziva
|
75,932,505
| 3,394,510
|
How can I debug IPython display methods?
|
<p>I'm writing <code>_repr_latex_</code>, <code>_repr_html_</code>, <code>__repr__</code> methods.</p>
<p>The problem that I'm facing is that I want to debug the call chain, because I bumped into a situation like this.</p>
<pre class="lang-py prettyprint-override"><code>class A:
def _get_text(self):
return self.__class__.__name__
def _repr_html_(self):
text = self._get_text()
return f"<b>{text}</b>"
class B(A):
def __repr__(self):
return f"<{self.__class__.__name__} >"
class C(B):
def _get_text(self):
return "I'm C, an equivalent data structure with different meaning."
</code></pre>
<p>And whenever I try to display by putting an object <code>c = C</code> at the end of a code block, it displays the <code>__repr__</code>, not the <code>_repr_html_</code>, corresponding to the class <code>C</code>.</p>
<p>Does anyone know a common debugging process for this situation?</p>
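<p>One thing worth checking first (an assumption about the snippet): <code>c = C</code> without parentheses binds the <em>class</em>, and the notebook's HTML formatter only fires for instances whose bound <code>_repr_html_</code> call succeeds, so <code>c = C()</code> may already change what gets displayed. To trace the lookup by hand:</p>

```python
class A:
    def _get_text(self):
        return self.__class__.__name__
    def _repr_html_(self):
        return f"<b>{self._get_text()}</b>"

class B(A):
    def __repr__(self):
        return f"<{self.__class__.__name__} >"

class C(B):
    def _get_text(self):
        return "custom text"

c = C()
print(type(c).__mro__)   # (C, B, A, object) - where each hook is found
print(c._repr_html_())   # inherited from A, dispatches to C._get_text
print(repr(c))           # defined on B
```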
|
<python><jupyter-notebook><ipython><jupyter-lab>
|
2023-04-04 18:02:12
| 1
| 840
|
ekiim
|
75,932,482
| 10,698,244
|
Generic way to check validity of file name in Python?
|
<p>Creating a file can fail in Python because of various file name related reasons:</p>
<ol>
<li><p>the file path is too long (see e.g.: <a href="https://askubuntu.com/questions/859945/what-is-the-maximum-length-of-a-file-path-in-ubuntu">what-is-the-maximum-length-of-a-file-path-in-ubuntu</a> )</p>
</li>
<li><p>the file name may be too long, as the file system is encrypted - see e.g. comment in <a href="https://askubuntu.com/a/859953">answer to that question</a>:</p>
<blockquote>
<p>On encrypted filesystems the max filename length is 143 bytes. To decide whether a filename is short enough you can find his byte length in Python with <code>len(filename.encode())</code>. – Marvo, Mar 9, 2018 at 12:45</p>
</blockquote>
</li>
<li><p>there are characters that either make issues in the file system, are not recommended or could make issues in another file system like: <code> \n!@#$%^&*()[]{};:,/<>?\|`~=+</code></p>
</li>
<li><p>... .</p>
</li>
</ol>
<p>Is there any convenience function that tells me beforehand whether</p>
<ul>
<li>a) my file name will work</li>
<li>b) may lead to issues in other file systems</li>
<li>c) and why it will fail?</li>
</ul>
<p>Solutions like:
<a href="https://stackoverflow.com/questions/9532499/check-whether-a-path-is-valid-in-python-without-creating-a-file-at-the-paths-ta">Check whether a path is valid in Python without creating a file at the path's target</a>
or
<a href="https://stackoverflow.com/questions/8686880/validate-a-filename-in-python">Validate a filename in python</a></p>
<p>unfortunately do not fulfill the requirements a) - c) (even the first one does not recognize the 143 character restrictions for encrypted folders / drives)</p>
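<p>A hedged sketch of a "why will it fail" pre-check. The limits are assumptions (255 bytes on most Linux filesystems, 143 bytes on eCryptfs) and the character set is a conservative cross-filesystem one, so this reports likely problems rather than guaranteeing success:</p>

```python
RISKY_CHARS = set('<>:"/\\|?*\n')  # assumption: chars risky on some filesystem

def filename_problems(name: str, max_bytes: int = 143) -> list:
    problems = []
    n = len(name.encode())
    if n > max_bytes:
        problems.append(f"name is {n} bytes, over the assumed {max_bytes}-byte limit")
    bad = sorted(set(name) & RISKY_CHARS)
    if bad:
        problems.append(f"characters risky on other filesystems: {bad}")
    if not name or name in ('.', '..'):
        problems.append("empty or reserved name")
    return problems
```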
|
<python><filesystems>
|
2023-04-04 18:00:25
| 2
| 1,369
|
user7468395
|
75,932,278
| 12,131,472
|
how to "pivot" my dataframe without changing row and column orders
|
<p>I wish to be able to "pivot" an existing dataframe without changing the row and column order of the existing categories.</p>
<p>I have this dataframe</p>
<pre><code> shortCode period WS $/mt change
0 TC2 2023-04-03 279.17 NaN NaN
1 TC2 BALMO 228.49 39.30 -5.88
2 TC2 Apr 23 231.31 39.78 -5.40
3 TC2 May 23 222.33 38.24 -2.30
4 TC2 Jun 23 212.64 36.57 -1.38
5 TC2 Jul 23 193.36 33.26 -0.51
6 TC2 Aug 23 189.42 32.58 -0.62
7 TC2 Sep 23 185.48 31.90 -0.75
8 TC2 Q2 23 222.09 38.20 -3.02
9 TC2 Q3 23 189.42 32.58 -0.63
10 TC2 Q4 23 190.58 32.78 -0.56
11 TC2 Q1 24 NaN 28.75 0.34
12 TC2 Q2 24 NaN 26.52 0.04
13 TC2 Cal 24 NaN 26.55 -0.03
14 TC2 Cal 25 NaN 26.00 -0.26
15 TC5 2023-04-03 203.57 NaN NaN
16 TC5 BALMO 198.80 54.05 -1.42
17 TC5 Apr 23 199.07 54.13 -1.34
18 TC5 May 23 203.35 55.29 -2.33
19 TC5 Jun 23 195.55 53.17 -1.21
.....
55 TC14 Q4 23 159.97 38.66 -0.05
56 TC14 Q1 24 NaN 30.00 0.30
57 TC14 Q2 24 NaN 28.58 0.04
58 TC14 Cal 24 NaN 28.38 -0.02
59 TC14 Cal 25 NaN 28.04 -0.14
60 TC17 2023-04-03 252.50 NaN NaN
61 TC17 BALMO 272.74 36.00 -0.16
62 TC17 Apr 23 271.62 35.85 -0.31
63 TC17 May 23 281.60 37.17 -0.44
64 TC17 Jun 23 281.38 37.14 -0.70
65 TC17 Jul 23 272.04 35.91 -0.43
66 TC17 Aug 23 267.04 35.25 -0.43
67 TC17 Sep 23 257.08 33.94 -0.43
68 TC17 Q2 23 278.20 36.72 -0.48
69 TC17 Q3 23 265.39 35.03 -0.44
70 TC17 Q4 23 275.70 36.39 -0.44
71 TC17 Q1 24 NaN 31.84 0.00
72 TC17 Q2 24 NaN 30.36 0.00
73 TC17 Cal 24 NaN 31.20 0.16
74 TC17 Cal 25 NaN 30.63 0.16
</code></pre>
<p>the rows and columns for the "TC"s are always following the same order</p>
<p>I wish to achieve (not enough space to show it all, but the dataframe is 16 x 16):</p>
<pre><code> shortCode TC2 Unnamed: 2 Unnamed: 3 TC5 Unnamed: 5 Unnamed: 6 \
0 period WS $/mt change WS $/mt change
1 2023-04-03 279.17 NaN NaN 203.57 NaN NaN
2 BALMO 228.49 39.3 -5.88 198.8 54.05 -1.42
3 Apr 23 231.31 39.78 -5.4 199.07 54.13 -1.34
4 May 23 222.33 38.24 -2.3 203.35 55.29 -2.33
5 Jun 23 212.64 36.57 -1.38 195.55 53.17 -1.21
6 Jul 23 193.36 33.26 -0.51 194.63 52.92 -2.64
7 Aug 23 189.42 32.58 -0.62 193.34 52.57 -2.55
8 Sep 23 185.48 31.9 -0.75 192.03 52.21 -2.48
9 Q2 23 222.09 38.2 -3.02 199.32 54.2 -1.62
10 Q3 23 189.42 32.58 -0.63 193.33 52.57 -2.55
11 Q4 23 190.58 32.78 -0.56 185.81 50.52 -1.74
12 Q1 24 NaN 28.75 0.34 NaN 46.58 0
13 Q2 24 NaN 26.52 0.04 NaN 41.41 0
14 Cal 24 NaN 26.55 -0.03 NaN 41.13 -0.37
15 Cal 25 NaN 26 -0.26 NaN 40.62 -0.42
TC6 Unnamed: 8 Unnamed: 9 TC14 Unnamed: 11 Unnamed: 12 TC17 \
0 WS $/mt change WS $/mt change WS
1 440 NaN NaN 123.33 NaN NaN 252.5
2 288.02 22.29 -0.16 169.04 40.86 -0.23 272.74
3 296.47 22.95 0.5 166.5 40.24 -0.85 271.62
4 241.89 18.72 -0.21 170.31 41.16 -0.85 281.6
5 225.58 17.46 -0.18 165.76 40.06 -0.18 281.38
6 216.72 16.77 0 158.97 38.42 -0.14 272.04
7 190.63 14.76 0.05 154.2 37.27 -0.02 267.04
8 194.32 15.04 0 151.83 36.7 0.05 257.08
9 254.65 19.71 0.04 167.52 40.49 -0.62 278.2
10 200.56 15.52 0.01 155 37.46 -0.04 265.39
11 234.77 18.17 0 159.97 38.66 -0.05 275.7
12 NaN 14.08 0 NaN 30 0.3 NaN
13 NaN 13.96 0 NaN 28.58 0.04 NaN
14 NaN 13.98 0.01 NaN 28.38 -0.02 NaN
15 NaN 13.5 0.01 NaN 28.04 -0.14 NaN
Unnamed: 14 Unnamed: 15
0 $/mt change
1 NaN NaN
2 36 -0.16
3 35.85 -0.31
4 37.17 -0.44
5 37.14 -0.7
6 35.91 -0.43
7 35.25 -0.43
8 33.94 -0.43
9 36.72 -0.48
10 35.03 -0.44
11 36.39 -0.44
12 31.84 0
13 30.36 0
14 31.2 0.16
15 30.63 0.16
</code></pre>
<p>basically just a pivot but keep the row order and the column order of 1. WS 2. $/mt 3. change</p>
<p>If I use pivot,</p>
<pre><code>df_test = df_tc.pivot(index='period' , columns='shortCode', values=['WS', '$/mt', 'change'])
</code></pre>
<p>the order will be lost</p>
<pre><code> WS $/mt \
shortCode TC14 TC17 TC2 TC5 TC6 TC14 TC17 TC2
period
2023-04-03 123.33 252.50 279.17 203.57 440.00 NaN NaN NaN
Apr 23 166.50 271.62 231.31 199.07 296.47 40.24 35.85 39.78
Aug 23 154.20 267.04 189.42 193.34 190.63 37.27 35.25 32.58
BALMO 169.04 272.74 228.49 198.80 288.02 40.86 36.00 39.30
Cal 24 NaN NaN NaN NaN NaN 28.38 31.20 26.55
Cal 25 NaN NaN NaN NaN NaN 28.04 30.63 26.00
Jul 23 158.97 272.04 193.36 194.63 216.72 38.42 35.91 33.26
Jun 23 165.76 281.38 212.64 195.55 225.58 40.06 37.14 36.57
May 23 170.31 281.60 222.33 203.35 241.89 41.16 37.17 38.24
Q1 24 NaN NaN NaN NaN NaN 30.00 31.84 28.75
Q2 23 167.52 278.20 222.09 199.32 254.65 40.49 36.72 38.20
Q2 24 NaN NaN NaN NaN NaN 28.58 30.36 26.52
Q3 23 155.00 265.39 189.42 193.33 200.56 37.46 35.03 32.58
Q4 23 159.97 275.70 190.58 185.81 234.77 38.66 36.39 32.78
Sep 23 151.83 257.08 185.48 192.03 194.32 36.70 33.94 31.90
</code></pre>
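<p>One way to sketch the fix (with a small stand-in frame): <code>pivot</code> sorts both axes, but reindexing with the orders taken from the original frame restores them.</p>

```python
import pandas as pd

df = pd.DataFrame({
    'shortCode': ['TC2'] * 3 + ['TC5'] * 3,
    'period': ['BALMO', 'Apr 23', 'May 23'] * 2,
    'WS': [228.49, 231.31, 222.33, 198.80, 199.07, 203.35],
    '$/mt': [39.30, 39.78, 38.24, 54.05, 54.13, 55.29],
})

row_order = df['period'].drop_duplicates()      # original row order
col_order = df['shortCode'].drop_duplicates()   # original TC order
wide = (df.pivot(index='period', columns='shortCode', values=['WS', '$/mt'])
          .reindex(row_order)
          .reindex(columns=col_order, level='shortCode'))
```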
|
<python><pandas><dataframe><pivot>
|
2023-04-04 17:33:37
| 1
| 447
|
neutralname
|
75,932,117
| 10,918,680
|
Pandas read_csv() converts integer columns to float if there are NaN. How does one keep them integers?
|
<p>I need to use <code>read_csv()</code> to make a dataframe from a csv file. Most columns in the csv file are integer type (such as number of products bought) or string type (such as store name), but sometimes there might be float type (such as the weight of product in lbs).</p>
<p>I realize that Pandas convert the integer (int64) columns into float (float64) if there are blanks cells, which becomes NaN which is a float type.</p>
<p>I'd like to keep cells that originally had integers to stay integers. There are other parts of the program that relies on this.</p>
<p>I tried:</p>
<pre><code>for col in data.columns:
if data[col].dtype == np.float64:
data[col] = data[col].astype(float).astype('Int64')
</code></pre>
<p>But this will try to change all float columns into integers. There might be columns that were originally float (as opposed to being coerced into float due to NaN) that I'd like to keep as float.</p>
<p>I cannot specify a dictionary for dtype of each column when using <code>read_csv()</code> as each dataset is different and it'll be a lot of manual work.</p>
<p>I wonder if there is a way to treat all columns as "object" types when reading the csv? As I understand, "object" types allow for mixed types in columns.</p>
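<p>One option that may fit better than reading everything as <code>object</code>: <code>convert_dtypes()</code> moves columns onto pandas' nullable dtypes, so blank cells become <code>&lt;NA&gt;</code> and all-integer columns stay integer, while genuinely fractional columns stay floating. A minimal sketch:</p>

```python
import io
import pandas as pd

csv_text = "store,qty,weight\nA,3,1.5\nB,,2.0\nC,7,\n"

# qty would normally be coerced to float64 because of the blank cell;
# convert_dtypes() turns it into nullable Int64 instead.
df = pd.read_csv(io.StringIO(csv_text)).convert_dtypes()
print(df.dtypes)
```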
|
<python><pandas><dataframe><numpy><csv>
|
2023-04-04 17:13:48
| 1
| 425
|
user173729
|
75,932,074
| 21,343,992
|
Python webbot library, example returns "session not created exception: Missing or invalid capabilities"
|
<p>I'm trying to run this simple Python webbot example on Ubuntu 22.04:</p>
<pre><code>from webbot import Browser
web = Browser()
web.go_to('google.com')
web.type('hello its me') # or web.press(web.Key.SHIFT + 'hello its me')
web.press(web.Key.ENTER)
web.go_back()
web.click('Sign in')
web.type('mymail@gmail.com' , into='Email')
web.click('NEXT' , tag='span')
web.type('mypassword' , into='Password' , id='passwordFieldId')
web.click('NEXT' , tag='span') # you are logged in . woohoooo
</code></pre>
<p>However, when I run the example using Python3 I get this error:</p>
<pre><code> web = Browser()
File "/usr/local/lib/python3.10/dist-packages/webbot/webbot.py", line 68, in __init__
self.driver = webdriver.Chrome(executable_path=driverpath, options=options)
File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/chrome/webdriver.py", line 80, in __init__
super().__init__(
File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/chromium/webdriver.py", line 104, in __init__
super().__init__(
File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/webdriver.py", line 286, in __init__
self.start_session(capabilities, browser_profile)
File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/webdriver.py", line 378, in start_session
response = self.execute(Command.NEW_SESSION, parameters)
File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "/usr/local/lib/python3.10/dist-packages/selenium/webdriver/remote/errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.SessionNotCreatedException: Message: session not created exception: Missing or invalid capabilities
(Driver info: chromedriver=2.39.562737 (dba483cee6a5f15e2e2d73df16968ab10b38a2bf),platform=Linux 5.19.0-38-generic x86_64)
</code></pre>
<p>I installed Chrome using:</p>
<pre><code>wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
sudo apt install ./google-chrome-stable_current_amd64.deb
</code></pre>
<p>How do I resolve this?</p>
<p>I am very new to this; I literally just want to log in to a website and don't care what browser.</p>
<p>I Googled to try and find how to use Firefox instead, but the name of the library made searching difficult.</p>
|
<python><python-3.x><selenium-webdriver><webbot>
|
2023-04-04 17:09:19
| 1
| 491
|
rare77
|
75,932,059
| 8,121,824
|
Selenium Attribute Error when selecting dropdown
|
<p>I had some code that was running fine but has recently ran into an error. When I run the following code to select the dropdown to iterate through pages, it returns this error:</p>
<pre><code>File C:\Program Files\Spyder\pkgs\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\shawn\documents\python scripts\webscraping\basketball\nba stats scrape daily update.py:42
select = Select(browser.find_element(by=By.CLASS_NAME, value="DropDown_select__4pIg9"))
File C:\Python310\Lib\site-packages\selenium\webdriver\support\select.py:36 in __init__
if webelement.tag_name.lower() != "select":
AttributeError: 'dict' object has no attribute 'tag_name'.
</code></pre>
<p>If I change this line to:</p>
<pre><code>select = Select(browser.find_element(by=By.CLASS_NAME, value="DropDown_select__4pIg9")['ELEMENT'])
</code></pre>
<p>I get the following error:</p>
<pre><code>Traceback (most recent call last):
File C:\Program Files\Spyder\pkgs\spyder_kernels\py3compat.py:356 in compat_exec
exec(code, globals, locals)
File c:\users\shawn\documents\python scripts\webscraping\basketball\nba stats scrape daily update.py:42
select = Select(browser.find_element(by=By.CLASS_NAME, value="DropDown_select__4pIg9")['ELEMENT'])
File C:\Python310\Lib\site-packages\selenium\webdriver\support\select.py:36 in __init__
if webelement.tag_name.lower() != "select":
AttributeError: 'str' object has no attribute 'tag_name'
</code></pre>
<p>Any help?</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.keys import Keys
import time
from selenium.webdriver.support.ui import Select
from bs4 import BeautifulSoup
import pandas as pd
import os
import datetime
import numpy as np
import lxml
from selenium.webdriver.common.by import By
#os.chdir(r'C:\Users\shawn\Documents\Python Scripts\Webscraping')
directory = os.getcwd()
beginningTime = time.time()
##Change the file path to where your chromedriver.exe file is
browser = webdriver.Chrome(r'C:\Temp\ChromeDriver\chromedriver.exe')
url = 'https://www.nba.com/stats/players/boxscores'
browser.get(url)
browser.maximize_window()
x = 1
dfStats = []
for x in range(50):
x+=1
time.sleep(10)
print(x)
select = Select(browser.find_element(by=By.CLASS_NAME, value="DropDown_select__4pIg9"))
select.select_by_visible_text(str(x))
table = browser.find_element_by_class_name('Crom_table__p1iZz').get_attribute('outerHTML')
soup = BeautifulSoup(table, 'html.parser')
dfData = pd.read_html(str(soup))[0]
print(dfData.head())
print(dfData.shape)
dfStats.append(dfData)
print(dfStats)
</code></pre>
|
<python><selenium-webdriver>
|
2023-04-04 17:08:02
| 1
| 904
|
Shawn Schreier
|
75,931,752
| 4,570,472
|
How to add an empty facet to a relplot or FacetGrid
|
<p>I have a <code>relplot</code> with columns split on one variable. I'd like to add one additional column <strong>with no subplot or subpanel</strong>. To give a clear example, suppose I have the following plot:</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
x = np.random.random(10000)
t = np.random.randint(low=0, high=3, size=10000)
y = np.multiply(x, t)
df = pd.DataFrame({'x': x, 't': t, 'y': y})
g = sns.relplot(df, x='x', y='y', col='t')
</code></pre>
<p>This generates a plot something like</p>
<p><a href="https://i.sstatic.net/282ZH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/282ZH.png" alt="enter image description here" /></a></p>
<p>I want a 4th column for <code>t=3</code> that displays no data nor axes. I just want a blank white subplot of equal size as the first three subplots. How can I do this?</p>
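<p>One approach that may work (an assumption: <code>col_order</code> accepts a level that never occurs in the data, which produces an empty facet): list the extra column, then blank its axes.</p>

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import numpy as np
import pandas as pd
import seaborn as sns

x = np.random.random(1000)
t = np.random.randint(low=0, high=3, size=1000)
df = pd.DataFrame({'x': x, 't': t, 'y': x * t})

# t=3 never occurs, so the fourth facet is empty; set_axis_off() then
# removes its frame, ticks and labels, leaving a blank panel of equal size.
g = sns.relplot(data=df, x='x', y='y', col='t', col_order=[0, 1, 2, 3])
g.axes.flat[-1].set_axis_off()
```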
|
<python><matplotlib><seaborn><facet-grid><relplot>
|
2023-04-04 16:30:39
| 1
| 2,835
|
Rylan Schaeffer
|
75,931,703
| 8,942,319
|
MongoDB/pymongo error: The DNS response does not contain an answer to the question ... IN SRV
|
<p>I have a python script running in K8s that's been working fine. Trying to run the code locally within its docker container, I'm getting the following error when it attempts to connect to MongoDB</p>
<p><code>pymongo.errors.ConfigurationError: The DNS response does not contain an answer to the question: _mongodb._tcp.cluster_name.abc123.mongodb.net IN SRV</code></p>
<p>The connection string
<code>mongodb+srv://username:password@cluster_name.abc123.mongodb.net</code> is from the Mongo UI "connect" instructions and I validated this against the secret kept in prod vaults.</p>
<p>When I run
<code>dig srv _mongodb._tcp.cluster_name.abc123.mongodb.net</code></p>
<p>I get 3 answers, 1 for each shard. But I think that's fine.</p>
<p>When I use <a href="https://toolbox.googleapps.com/apps/dig/#ANY/" rel="nofollow noreferrer">https://toolbox.googleapps.com/apps/dig/#ANY/</a> (ANY, not A record) it also returns the above 3 answers. A record returns no answer.</p>
<p>As I understand it, this points to it being an issue with my own router/ISP not being able to resolve the DNS record? Is that so? How do I go about diagnosing this and fixing it so I can run this code locally?</p>
<p>EDIT: running the script outside of docker works. Running it inside the container produces the above situation.</p>
|
<python><mongodb><docker><dns>
|
2023-04-04 16:25:18
| 0
| 913
|
sam
|
75,931,697
| 6,515,755
|
Sentry traces_sampler(sampling_context) how to use with FastAPI
|
<p>In the Sentry docs there is this example:</p>
<p><a href="https://docs.sentry.io/platforms/python/configuration/sampling/" rel="nofollow noreferrer">https://docs.sentry.io/platforms/python/configuration/sampling/</a></p>
<pre><code>def traces_sampler(sampling_context):
    # Examine provided context data (including parent decision, if any)
    # along with anything in the global namespace to compute the sample rate
    # or sampling decision for this transaction

    if "...":
        # These are important - take a big sample
        return 0.5
    else:
        # Default sample rate
        return 0.1

sentry_sdk.init(
    # ...
    traces_sampler=traces_sampler,
)
</code></pre>
<p>However, there is no typing for the <code>sampling_context</code> variable, and I have no idea how to filter a specific HTTP route in the case of the FastAPI integration.</p>
<p>Please advise how to set a different sample rate for a specific route.</p>
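<p>A sketch of one way to do this, under the assumption (from Sentry's ASGI-based integrations, which FastAPI uses) that the sampling context exposes the raw ASGI scope under the <code>"asgi_scope"</code> key; the routes below are made up:</p>

```python
def traces_sampler(sampling_context):
    # FastAPI runs on ASGI, so the raw ASGI scope (request path,
    # method, headers, ...) is expected under "asgi_scope".
    scope = sampling_context.get("asgi_scope") or {}
    path = scope.get("path", "")

    if path == "/health":
        return 0.0          # never trace health checks
    if path.startswith("/checkout"):
        return 0.5          # important flow - take a big sample
    return 0.1              # default sample rate

# The function can be exercised without Sentry at all:
print(traces_sampler({"asgi_scope": {"path": "/health"}}))         # 0.0
print(traces_sampler({"asgi_scope": {"path": "/checkout/cart"}}))  # 0.5
print(traces_sampler({}))                                          # 0.1
```

<p>The exact contents of <code>sampling_context</code> depend on the integration, so printing the whole dict once inside the sampler is a quick way to confirm which keys are actually present.</p>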
|
<python><fastapi><sentry>
|
2023-04-04 16:24:50
| 1
| 12,736
|
Ryabchenko Alexander
|
75,931,652
| 3,130,747
|
How to shade portions of a matplotlib axis face based on timeseries values
|
<p>Given a timeseries and values associated with it, I'd like to shade the background of the axis plot depending on the value of the timeseries, in order to highlight values which have a particular meaning relating to the times (e.g. a season, opening hours, etc.).</p>
<p>I don't really know how to do this - but do have a picture of the sort of thing I mean:
<a href="https://i.sstatic.net/BxCvq.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BxCvq.jpg" alt="enter image description here" /></a></p>
<p>I don't work with timeseries much, but I've tried to create a dataset which should be suitable for example data:</p>
<pre class="lang-py prettyprint-override"><code>import io

import matplotlib
import matplotlib.pyplot as plt
import pandas as pd

so_data = pd.read_csv(
    io.StringIO(
        "x,y,plt_mask\n2023-03-22 02:29:51,0.0,False\n2023-03-22 03:20:26,0.0,False\n2023-03-23 00:51:06,0.0,False\n2023-03-23 01:29:42,0.0,False\n2023-03-23 04:48:22,23.081085,False\n2023-03-23 07:13:11,50.0,True\n2023-03-23 08:46:27,50.0,True\n2023-03-23 12:34:13,0.0,False\n2023-03-23 12:46:35,0.0,False\n2023-03-23 16:02:13,0.0,False\n2023-03-23 17:58:47,0.0,False\n2023-03-23 18:34:27,0.0,False\n2023-03-23 20:28:29,1.0,False\n2023-03-24 05:25:20,0.0,True\n2023-03-24 09:03:36,0.0,True\n2023-03-24 09:06:09,0.0,True\n2023-03-24 10:53:44,70.0,True\n2023-03-24 13:10:03,1273.676636,False\n2023-03-24 17:03:16,21.0,False\n2023-03-24 18:22:23,1.0,False\n"
    )
)
so_data["x"] = pd.to_datetime(so_data["x"])  # parse the strings so the date formatter works

fig, ax = plt.subplots()
fig.autofmt_xdate()
xfmt = matplotlib.dates.DateFormatter("%d-%m-%y %H:%M")
ax.xaxis.set_major_formatter(xfmt)
ax.plot(so_data["x"], so_data["y"])
</code></pre>
<p>Here the axis background should be a different colour (green / whatever) when the <code>plt_mask</code> value is <code>True</code>.</p>
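<p>One possible approach, sketched on a cut-down, hand-made version of the data rather than the full CSV above: shade each x-interval whose starting sample is masked <code>True</code>, using <code>axvspan</code>:</p>

```python
import matplotlib
matplotlib.use("Agg")            # headless backend; drop when working interactively
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({
    "x": pd.to_datetime(["2023-03-23 04:48", "2023-03-23 07:13",
                         "2023-03-23 08:46", "2023-03-23 12:34"]),
    "y": [23.1, 50.0, 50.0, 0.0],
    "plt_mask": [False, True, True, False],
})

fig, ax = plt.subplots()
ax.plot(df["x"], df["y"])

# Shade every x-interval whose *starting* sample is masked True.
for start, end, masked in zip(df["x"], df["x"].shift(-1), df["plt_mask"]):
    if masked and pd.notna(end):
        ax.axvspan(start, end, color="green", alpha=0.2)
```

<p>Each <code>axvspan</code> call fills the full height of the axes between two x-values, which gives exactly the "background stripe" look in the picture; adjacent masked intervals simply produce touching stripes.</p>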
|
<python><matplotlib>
|
2023-04-04 16:18:34
| 1
| 4,944
|
baxx
|
75,931,596
| 2,108,344
|
Why is my Python app slow connecting to Cassandra compared to C#?
|
<p>I have a default-configuration Cassandra instance running in a default Docker installation on Windows 11. The table <code>data</code> contains 19 rows.</p>
<p>The Python driver is exceptionally slow and crashes in about 20% of cases (connection timeout).</p>
<p>I first expected this has something to do with docker or the container configuration, but I noticed that RazorSQL has no issues and therefore I did some performance testing by comparing the official datastax python driver to the official datastax .NET driver.</p>
<p>The results are devastating:</p>
<ul>
<li>Python: 22.908 seconds (!)</li>
<li>.NET: 0.168 seconds</li>
</ul>
<p>Is this normal behavior of the python driver?</p>
<p>My python code:</p>
<pre><code>from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
import time

start = time.time()
for i in range(10):
    auth_provider = PlainTextAuthProvider(username="cassandra", password="cassandra")
    cluster = Cluster(["localhost"], auth_provider=auth_provider, connect_timeout=30)
    session = cluster.connect("rds")
    session.execute("SELECT COUNT(*) FROM data").one()
end = time.time()
print((end - start) / 10)
</code></pre>
<p>My C# code:</p>
<pre><code>using Cassandra;
using System;
using System.Diagnostics;

public void TestReliability()
{
    Stopwatch stopwatch = new Stopwatch();
    stopwatch.Start();
    for (int i = 0; i < 100; i++) { Test(); }
    stopwatch.Stop();
    Console.WriteLine("Average connect + one query in ms: " + (stopwatch.ElapsedMilliseconds / 100));
}

public void Test()
{
    Cluster cluster = Cluster.Builder().AddContactPoint("localhost").WithAuthProvider(new PlainTextAuthProvider("cassandra", "cassandra")).Build();
    ISession session = cluster.Connect("rds");
    var result = session.Execute("SELECT COUNT(*) FROM data");
    session.Dispose();
    cluster.Dispose();
}
</code></pre>
<p>EDIT: The python driver does not crash when timeout is set high enough (35 seconds(!))</p>
|
<python><cassandra><datastax-python-driver>
|
2023-04-04 16:12:18
| 2
| 783
|
SalkinD
|
75,931,562
| 11,922,765
|
Remove the white gaps, or no data regions, from the histogram
|
<p>I want to plot a histogram of temperature data. Real temperatures will never exceed ±150, but the raw data contains some bad values, possibly inserted into the database by the datalogger when it fails to hear from the sensor in time. From the histogram I want to see whether any bad data is present and how much of it there is.</p>
<p>My code:</p>
<pre><code> # I did log on the y-scale. It did help me in finding the complete range and their percentage. This was good
ax = sns.histplot(df,x='Temperature',stat='percent',bins=200)
ax.set_yscale('log')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/EaWKs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EaWKs.png" alt="enter image description here" /></a></p>
<p>I wanted to see the x-axis ticks completely. Mainly, there is a large white gap between -8000 and 0, and I want to remove this space where there is no data. So I applied a log scale on the x-axis. It helped in seeing the complete positive range, but it dropped the negative values.</p>
<pre><code> ax = sns.histplot(df,x='Temperature',stat='percent',bins=200)
ax.set_yscale('log')
ax.set_xscale('log')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/lyAm0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lyAm0.png" alt="enter image description here" /></a></p>
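<p>A <code>'symlog'</code> x-scale is logarithmic in both directions away from zero, so it compresses the empty stretch while keeping negative values, unlike plain <code>'log'</code>. A sketch with synthetic data standing in for the real <code>Temperature</code> column:</p>

```python
import matplotlib
matplotlib.use("Agg")            # headless backend
import matplotlib.pyplot as plt
import numpy as np

# Plausible temperatures plus a handful of bad sentinel readings near -8000.
rng = np.random.default_rng(0)
temps = np.concatenate([rng.normal(20, 15, 10_000), np.full(5, -8000.0)])

fig, ax = plt.subplots()
ax.hist(temps, bins=200)
ax.set_yscale("log")
# 'symlog' is logarithmic away from zero in both directions, so the
# -8000 outliers stay visible while the empty gap is compressed.
ax.set_xscale("symlog")
```

<p>The same call should work on the Axes returned by <code>sns.histplot</code>: just replace <code>ax.set_xscale('log')</code> with <code>ax.set_xscale('symlog')</code>.</p>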
|
<python><pandas><matplotlib><seaborn><histogram>
|
2023-04-04 16:09:06
| 0
| 4,702
|
Mainland
|
75,931,485
| 11,447,688
|
Vectorized calculation with condition in pandas
|
<p>I want a vectorized calculation, since <code>apply</code> or <code>df.iterrows()</code> is slow. The code below works fine and gives the expected result.</p>
<pre><code>import pandas as pd

df = pd.DataFrame({"a": [0, 12, 0, 5], "b": [5, 89, 45, 6], "c": [85, 23, 14, 10]})

def cal(data):
    val1 = data["a"]
    val2 = data["b"]
    val3 = data["c"]
    return val1+val2, val1-val2, val1*val3

df["add"], df["subract"], df["multiply"] = cal(df)

    a   b   c  add  subract  multiply
0   0   5  85    5       -5         0
1  12  89  23  101      -77       276
2   0  45  14   45      -45         0
3   5   6  10   11       -1        50
</code></pre>
<p>Now I want <code>val1</code>, <code>val2</code> and <code>val3</code> to be 0 for every row where column <code>a</code> is 0. Below is the code:</p>
<pre><code>def cal_with_zero(data):
    if data["a"] == 0:
        val1 = 0
        val2 = 0
        val3 = 0
    else:
        val1 = data["a"]
        val2 = data["b"]
        val3 = data["c"]
    return val1+val2, val1-val2, val1*val3

df["add"], df["subract"], df["multiply"] = cal_with_zero(df)
</code></pre>
<p>I get error on code line <code>if data["a"] == 0:</code></p>
<pre><code>ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
</code></pre>
<p>What I want as the result is:</p>
<pre><code>    a   b   c  add  subract  multiply
0   0   5  85    0        0         0
1  12  89  23  101      -77       276
2   0  45  14    0        0         0
3   5   6  10   11       -1        50
</code></pre>
<p>I want a vectorized approach instead of <code>apply</code> or <code>df.iterrows()</code>, and I want to keep the function <code>cal_with_zero</code>: in the actual code there are many more conditions determining the values of <code>val1</code>, <code>val2</code> and <code>val3</code>, which is why I was unable to use <code>np.where</code> (though maybe I missed something). Calling <code>cal_with_zero</code> with the whole dataframe as the argument is necessary.</p>
<p>Thanks in advance!</p>
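<p>One way to keep <code>cal_with_zero</code> as a whole-dataframe function is to replace the scalar <code>if</code> with a row-wise boolean mask via <code>Series.where</code>; a sketch that reproduces the expected output above:</p>

```python
import pandas as pd

df = pd.DataFrame({"a": [0, 12, 0, 5], "b": [5, 89, 45, 6], "c": [85, 23, 14, 10]})

def cal_with_zero(data):
    # Zero out the inputs row-wise wherever column "a" is 0, then
    # compute everything with ordinary whole-column arithmetic.
    nonzero = data["a"] != 0
    val1 = data["a"].where(nonzero, 0)
    val2 = data["b"].where(nonzero, 0)
    val3 = data["c"].where(nonzero, 0)
    return val1 + val2, val1 - val2, val1 * val3

df["add"], df["subract"], df["multiply"] = cal_with_zero(df)
print(df)
```

<p><code>Series.where(cond, other)</code> keeps the original value where <code>cond</code> is True and substitutes <code>other</code> elsewhere, so extra conditions can be layered by combining boolean masks (<code>&amp;</code>, <code>|</code>) before the arithmetic.</p>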
|
<python><pandas><numpy>
|
2023-04-04 16:01:12
| 2
| 1,511
|
young_minds1
|
75,931,467
| 11,668,258
|
json.decoder.JSONDecodeError: Invalid control character when we try to parse a JSON
|
<p>I get a <code>json.decoder.JSONDecodeError: Invalid control character</code> error when I try to parse a JSON string.</p>
<pre><code>import json
import pprint

with open("C:\\Users\\75\\OneDrive\\PROJECT P1\\Work_1.0\\CBP\\server.txt", "r") as f:
    data = f.read()

json_data = json.loads(data)
pprint.pprint(json_data)

jsonString = json.dumps(json_data, default=str)
with open("converted.json", "w") as jsonFile:
    jsonFile.write(jsonString)
</code></pre>
<p>The requirement is to import unformatted dump data from a text file, convert it into JSON, and write it to a .json file using Python.</p>
<p>I get the error below:</p>
<pre><code>Traceback (most recent call last):
File "C:\\Users\\75\\OneDrive\\PROJECT P1\\Work_1.0\\CBP\\server.txt", line 11, in <module>
json_data = json.loads(data)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\json\__init__.py", line 346, in loads
return _default_decoder.decode(s)
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.9_3.9.3568.0_x64__qbz5n2kfra8p0\lib\json\decoder.py", line 353, in raw_decode
obj, end = self.scan_once(s, idx)
json.decoder.JSONDecodeError: Invalid control character at: line 1 column 124 (char 123)
</code></pre>
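<p>If the control characters sit inside string values (as the error position suggests they might), <code>json.loads</code> accepts them with <code>strict=False</code>. A self-contained sketch with made-up content standing in for <code>server.txt</code>:</p>

```python
import json

# A raw newline inside a JSON string is an "invalid control character"
# for the default (strict) parser:
raw = '{"name": "server-1", "motd": "line one\nline two"}'

try:
    json.loads(raw)
except json.JSONDecodeError as exc:
    print("strict parse failed:", exc.msg)

# strict=False lets control characters through inside string values:
data = json.loads(raw, strict=False)
print(data["motd"])
```

<p>Note this only helps when the file is otherwise valid JSON; if the dump is not JSON at all, the control character error is just the first symptom and the data needs cleaning first.</p>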
|
<python><json>
|
2023-04-04 15:58:19
| 1
| 916
|
Tono Kuriakose
|
75,931,445
| 10,981,411
|
how do I change my dataframe shape in python
|
<p>Below is my code:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame({'Date' : [np.nan, '2035-12-31', np.nan, '2036-12-31',np.nan, '2037-12-31',np.nan, '2038-12-31'],
'Cftype': ["Interest", "Principal", "Interest", "Principal","Interest", "Principal", "Interest", "Principal"],
'CF': [12,23,34,89,45,4,54,78]
})
</code></pre>
<p><a href="https://i.sstatic.net/F2859.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/F2859.png" alt="enter image description here" /></a></p>
<p>I want to see output as shown below</p>
<p><a href="https://i.sstatic.net/aZjlg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/aZjlg.png" alt="enter image description here" /></a></p>
<p>Can someone help please?</p>
|
<python><dataframe>
|
2023-04-04 15:56:36
| 2
| 495
|
TRex
|
75,931,368
| 19,504,610
|
FastAPI - switching off a switch some time later
|
<p>Let's say I have this <code>sqlmodel.SQLModel</code> which has a table name <code>Project</code>.</p>
<p>My intention is to feature a record of a <code>Project</code> for a definite period of time, e.g. 3 days, i.e. setting its field <code>featured</code> to <code>True</code>, and automatically setting <code>featured</code> back to <code>False</code> thereafter.</p>
<pre><code>class Project(SQLModel):
    featured: bool = False
</code></pre>
<p>How can I achieve this behavior using FastAPI? Is this done via background tasks, or some other mechanism?</p>
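<p>One pattern that avoids scheduling anything at all: store an expiry timestamp (a hypothetical <code>featured_until</code> field) and compute <code>featured</code> on read. A plain-Python sketch of the idea, without SQLModel:</p>

```python
from datetime import datetime, timedelta, timezone

class Project:
    def __init__(self, featured_until=None):
        # None means the project was never featured.
        self.featured_until = featured_until

    @property
    def featured(self) -> bool:
        # Featured exactly while the expiry lies in the future;
        # nothing ever has to flip a flag back to False.
        return (self.featured_until is not None
                and datetime.now(timezone.utc) < self.featured_until)

p = Project(featured_until=datetime.now(timezone.utc) + timedelta(days=3))
print(p.featured)  # True until the 3 days elapse
```

<p>Featuring a project then just means writing <code>featured_until = now + 3 days</code>; in SQLModel the column would hold the timestamp and the boolean would be derived, either in Python or in the query's WHERE clause.</p>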
|
<python><fastapi><pydantic><sqlmodel>
|
2023-04-04 15:49:25
| 1
| 831
|
Jim
|
75,931,355
| 4,466,012
|
pandas Series sort_index with key
|
<p>I am trying to sort a Series using <code>sort_index(key = lambda idx: foo(idx))</code>, which should take the first item of the list and put it at the end. My sorting function <code>foo</code> looks like this:</p>
<pre><code>def foo(idx):
    print("pre", idx)
    if idx.name == "pca_n":
        ret = pd.Index(list(idx[1:]) + list(idx[:1]), name=idx.name)
    else:
        ret = idx.copy()
    print("post", ret)
    return ret
</code></pre>
<p>I call it like this:</p>
<pre><code>print("index before sort",byHyp.index)
byHyp = byHyp.sort_index(key = lambda x: foo(x))
print("index after sort",byHyp.index)
</code></pre>
<p>This results in the following output:</p>
<pre><code>index before sort Int64Index([-1, 2, 5, 10, 20], dtype='int64', name='pca_n')
pre Int64Index([-1, 2, 5, 10, 20], dtype='int64', name='pca_n')
post Int64Index([2, 5, 10, 20, -1], dtype='int64', name='pca_n')
index after sort Int64Index([20, -1, 2, 5, 10], dtype='int64', name='pca_n')
</code></pre>
<p>In other words, the output of <code>foo</code> gives a list of indices, but they are not retained in the Series. (I am expecting <code>[2,5,10,20,-1]</code>, as is the output of foo). Perhaps I am misunderstanding how to use the <code>key</code> argument of <code>sort_index</code>?</p>
|
<python><pandas>
|
2023-04-04 15:47:52
| 2
| 740
|
GregarityNow
|
75,931,340
| 20,176,161
|
Extract a number within a string
|
<p>I have a string that looks like this:</p>
<pre><code>T/12345/C
T/153460/613
</code></pre>
<p>I would like to extract the number between the slash <code>/</code> i.e. <code>12345</code> and <code>153460</code>. Sometimes I have 5 numbers, sometimes 6 or more.</p>
<p>I tried <code>df[2:7]</code>, which extracts elements 2 through 6, but I don't know where to stop, as sometimes I have 5 or 6 digits.
How can I extract the number, please?</p>
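<p>A sketch using a regular expression with pandas' string accessor (assuming the values live in a Series; plain <code>re.search</code> works the same on single strings):</p>

```python
import pandas as pd

s = pd.Series(["T/12345/C", "T/153460/613"])

# One capture group between the two slashes, so the number's length
# no longer matters.
nums = s.str.extract(r"/(\d+)/", expand=False)
print(nums.tolist())  # ['12345', '153460']
```

<p>Appending <code>.astype(int)</code> would convert the extracted strings to integers if needed.</p>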
|
<python><string><dataframe><extract>
|
2023-04-04 15:46:24
| 1
| 419
|
bravopapa
|
75,931,337
| 19,003,861
|
Call parent model field m2m relationship field from child
|
<p>I am trying to access, from <code>ModelC</code>, the list of <code>M2M</code> objects referenced in <code>ModelB</code>.</p>
<p>I normally go the other way around (start with the parent model and filter down to the child), but this time I can't see a way around it.</p>
<p>I feel this might have something to do with the related name, but the behaviour seems different from a normal FK.</p>
<p>Any ideas?</p>
<p><strong>models.py</strong></p>
<pre><code>class ModelA(models.Model):
    title = models.CharField(verbose_name="title", max_length=100, null=True, blank=True)


class ModelB(models.Model):
    ModelB_Field1 = models.ManyToManyField(ModelA, blank=True)


class ModelC(models.Model):
    ModelC_Field1 = models.ForeignKey(ModelB, null=True, blank=True, on_delete=models.SET_NULL, related_name='modelc_field1')
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def function(request, modelc_id):
    q = ModelC.objects.filter(pk=modelc_id)
</code></pre>
|
<python><django><django-views>
|
2023-04-04 15:46:06
| 2
| 415
|
PhilM
|
75,931,332
| 16,739,739
|
Python server is killed when started using docker-compose exec
|
<p>I start a Docker container and want to run a simple Python server in it.
I can't use the Dockerfile's <code>ENTRYPOINT/CMD</code>, as it is already used for other things.</p>
<p>I am trying to do it the following way:</p>
<pre><code>docker-compose exec <service_name> /bin/bash -c "./server.py &"
</code></pre>
<p>But as soon as <code>docker-compose exec</code> ends server stops.</p>
<p>Yet if I run application written on C++ in the same way it continues to run:</p>
<pre><code>docker-compose exec <service_name> /bin/bash -c "./app &"
</code></pre>
<p>I inspected this using <code>htop</code> and found that both <code>server.py</code> and <code>app</code> start as child processes, but after <code>docker-compose exec</code> ends, <code>app</code> reattaches while <code>server.py</code> stops.</p>
<ol>
<li>How can I make my Python server continue to run when starting it through <code>docker-compose exec</code>?</li>
</ol>
|
<python><ubuntu><docker-compose>
|
2023-04-04 15:45:19
| 1
| 693
|
mouse_00
|
75,931,038
| 6,466,366
|
How to correctly filter a QuerySet based on a form MultipleChoice field?
|
<p>This is how I have defined my form:</p>
<pre><code>class TaskFilterForm(forms.Form):
    status = forms.MultipleChoiceField(
        required=False,
        widget=forms.CheckboxSelectMultiple,
        choices=PLAN_STATUS,
    )
</code></pre>
<p>And I'm trying to get my QuerySet filtered like this:</p>
<pre><code>class MyTasksView(generic.ListView):
    template_name = 'my-tasks.html'
    context_object_name = 'tasks'
    object_list = Task.objects.all()

    def get_context_data(self, **kwargs):
        # Call the base implementation first to get a context
        context = super().get_context_data(**kwargs)
        # Add in a QuerySet of all the books
        context['filter_form'] = TaskFilterForm(self.request.GET)
        return context

    def get_queryset(self, **kwargs):
        print(self.request.GET)
        qs = Task.objects.filter(assignee=self.request.user)
        status = self.request.GET.get('status')
        if status:
            qs = qs.filter(status__in=status)
        return qs

    def post(self, request, *args, **kwargs):
        context = super().get_context_data(**kwargs)
        context['filter_form'] = TaskFilterForm(self.request.POST)
        print('MyTaskView.request.POST', self.request.POST)
        return render(request, self.template_name, context)
</code></pre>
<p>It correctly filters for a single item, but I'd like it to filter on multiple statuses, say 0 and 1, when both are selected.</p>
<p>How should I correct this?</p>
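<p>I believe the issue is that <code>.get()</code> returns only a single value for a repeated key; Django's <code>QueryDict</code> has <code>getlist('status')</code> for exactly this. The underlying multi-value behaviour can be sketched with only the standard library (no Django required):</p>

```python
from urllib.parse import parse_qs

# Two checkboxes ticked -> the same key appears twice in the query string.
query = "status=0&status=1"
params = parse_qs(query)

# All selected values survive as a list:
print(params["status"])  # ['0', '1']
```

<p>In the view, <code>status = self.request.GET.getlist('status')</code> followed by <code>qs.filter(status__in=status)</code> should then match every selected status rather than just the last one.</p>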
|
<python><django><python-requests><request>
|
2023-04-04 15:11:21
| 2
| 656
|
rdrgtec
|
75,931,002
| 425,893
|
Unable to set a lambda's role from CDK
|
<p>I have the following code:</p>
<pre class="lang-py prettyprint-override"><code>EventbridgeToLambda(
    self,
    "Some",
    lambda_function_props=lambda_.FunctionProps(
        code=lambda_.InlineCode(lambda_code),
        handler="index.lambda_handler",
        runtime=lambda_.Runtime.PYTHON_3_8,
        # Set timeout to something other than 3 seconds
        timeout=Duration.seconds(45),
        layers=[lambdaLayer],
        environment={
            "S3_BUCKET": "dev_environment_bucket",
        },
        role=iam.Role.from_role_arn(
            self, id="x", role_arn="arn:aws:iam::numbers:path/rolename", mutable=True
        ),
        vpc=ec2.Vpc.from_lookup(self, "VPC", vpc_id="vpc-0hex"),
        allow_public_subnet=True,
        vpc_subnets=ec2.SubnetSelection(
            subnets=[ec2.Subnet.from_subnet_id(self, "Subnet", "subnet-0hex")]
        ),
        security_groups=[ec2.SecurityGroup.from_lookup_by_id(self, "SG", "sg-0hex")],
    ),
    event_rule_props=events.RuleProps(
        schedule=events.Schedule.cron(
            minute="*", hour="0-3,11-23", day="*", month="*", year="*"
        )
    ),
)
</code></pre>
<p>If I manually install/create this lambda, there is a specific role I need to set in the Configuration->Basic Settings->Edit panel. If I forget to set this role, then when I go to set up the configuration in Configuration->VPC, the correct vpc/subnet/security-groups are not available to choose from the list.</p>
<p>The above CDK code will work to set arbitrary vpc/subnet/security-groups; I've tested it and that part seems to work. However, I am unable to figure out why I'm not able to set the role. When I attempt to synth/deploy, I receive the following warnings/errors:</p>
<pre><code>WARN AWS_SOLUTIONS_CONSTRUCTS_WARNING: An override has been provided for the property: role[physicalName].
WARN AWS_SOLUTIONS_CONSTRUCTS_WARNING: An override has been provided for the property: role[grantPrincipal].
WARN AWS_SOLUTIONS_CONSTRUCTS_WARNING: An override has been provided for the property: role[roleName].
WARN AWS_SOLUTIONS_CONSTRUCTS_WARNING: An override has been provided for the property: role[roleArn].
WARN AWS_SOLUTIONS_CONSTRUCTS_WARNING: An override has been provided for the property: role[policyFragment][principalJson][AWS][0].
jsii.errors.JavaScriptError:
TypeError: Cannot read properties of undefined (reading 'cfnOptions')
at Object.addCfnSuppressRules (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/jsii-kernel-VO2cY2/node_modules/@aws-solutions-constructs/core/lib/utils.js:138:18)
at deployLambdaFunction (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/jsii-kernel-VO2cY2/node_modules/@aws-solutions-constructs/core/lib/lambda-helper.js:135:17)
at Object.buildLambdaFunction (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/jsii-kernel-VO2cY2/node_modules/@aws-solutions-constructs/core/lib/lambda-helper.js:33:20)
at new EventbridgeToLambda (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/jsii-kernel-VO2cY2/node_modules/@aws-solutions-constructs/aws-eventbridge-lambda/lib/index.js:23:40)
at Kernel._create (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/tmpucsljbww/lib/program.js:9964:29)
at Kernel.create (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/tmpucsljbww/lib/program.js:9693:29)
at KernelHost.processRequest (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/tmpucsljbww/lib/program.js:11544:36)
at KernelHost.run (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/tmpucsljbww/lib/program.js:11504:22)
at Immediate._onImmediate (/private/var/folders/_0/8_jcp7n556xdgnb70mfxh95m0000gn/T/tmpucsljbww/lib/program.js:11505:46)
at process.processImmediate (node:internal/timers:471:21)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "app.py", line 16, in <module>
SomeStack(app, "DataIngestion", env=Environment(account='numbers', region='us-east-1'))
File "/Users/john/Projects/z-data-ingestion-v2/.venv/lib/python3.8/site-packages/jsii/_runtime.py", line 112, in __call__
inst = super().__call__(*args, **kwargs)
File "/Users/john/Projects/z-data-ingestion-v2/cdk_stacks/lambdasSomeStack.py", line 43, in __init__
EventbridgeToLambda(self, 'Some',
File "/Users/john/Projects/quext-data-ingestion-v2/.venv/lib/python3.8/site-packages/jsii/_runtime.py", line 112, in __call__
inst = super().__call__(*args, **kwargs)
File "/Users/john/Projects/z-data-ingestion-v2/.venv/lib/python3.8/site-packages/aws_solutions_constructs/aws_eventbridge_lambda/__init__.py", line 204, in __init__
jsii.create(self.__class__, self, [scope, id, props])
File "/Users/john/Projects/z-data-ingestion-v2/.venv/lib/python3.8/site-packages/jsii/_kernel/__init__.py", line 334, in create
response = self.provider.create(
File "/Users/john/Projects/quext-data-ingestion-v2/.venv/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 363, in create
return self._process.send(request, CreateResponse)
File "/Users/john/Projects/z-data-ingestion-v2/.venv/lib/python3.8/site-packages/jsii/_kernel/providers/process.py", line 340, in send
raise RuntimeError(resp.error) from JavaScriptError(resp.stack)
RuntimeError: Cannot read properties of undefined (reading 'cfnOptions')
</code></pre>
<p>I can use this role when configuring Lambdas manually. It is pre-existing, I'm not trying to create a new one. The syntax seems correct. I suspect that there is some missing configuration that I need to set if I need a role other than default, but it's unclear what that'd be. I saw something in the documents about how sometimes it's necessary to remove the path portion of the ARN, and I've tried it both with and without that. Without the path portion, the stack trace is identical, except that the last line says <code>cannot read properties of undefined (reading 'split')</code>. I assume this is because it's expecting the path to be present.</p>
<p>iam.Role.from_role_name() behaves similarly.</p>
<p>I'm just trying to match with CDK what I can do by hand. What am I missing or failing to understand?</p>
|
<python><aws-lambda><aws-cdk>
|
2023-04-04 15:07:34
| 1
| 5,645
|
John O
|
75,930,775
| 236,195
|
Pydantic field with custom data type and mypy
|
<p>Following the <a href="https://docs.pydantic.dev/usage/types/#custom-data-types" rel="nofollow noreferrer">Custom Data Types</a> example from Pydantic docs, I made my own <code>Timestamp</code> data type subclassing <code>str</code>. Everything works OK except that mypy is complaining when passing literal string values where it expects a <code>Timestamp</code> object.</p>
<p>Considering the code below, how can I explain to mypy that line 25 is not an error?</p>
<p>I tried with <code>Annotated[str, Timestamp]</code>, but only to get the reverse error (line 25 is OK, but line 29 gets <code>error: "<typing special form>" not callable</code>).</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from pydantic import BaseModel


class Timestamp(str):
    @classmethod
    def __get_validators__(cls):
        yield cls.validate

    @classmethod
    def validate(cls, value: str) -> Timestamp:
        ...
        return cls(value)


class Foo(BaseModel):
    title: str
    bar: Timestamp


def _main() -> None:
    f1 = Foo(
        title="F1",
        bar="2023-04-05T06:07:08Z",  # line 25: error: Argument "bar" to "Foo" has incompatible type "str"; expected "Timestamp" [arg-type]
    )
    f2 = Foo(
        title="F2",
        bar=Timestamp("2023-04-05T06:07:08Z"),  # line 29
    )
</code></pre>
|
<python><mypy><python-typing><pydantic>
|
2023-04-04 14:44:46
| 1
| 13,011
|
frnhr
|
75,930,734
| 896,627
|
How to wrap custom C++ types for use with pybind11 if they can be initialized with literals?
|
<p>I'm embedding a Python interpreter into my C++ code with pybind11.</p>
<p>I want to call C++ functions from Python code, so I have to wrap them first. Unfortunately, they take all kinds of custom types as parameters.</p>
<p><code>void func_i_like_to_use(CustomeType a, AnotherCustomeType b);</code></p>
<p>Typically, I would now start to wrap those too, but they are a deep hierarchy of inheritance mixed with a lot of template classes and typedef'ed names in between. In other words: it would be a real pain to replicate the whole tree.</p>
<pre><code>//... more BaseTemplate hierarchy

template <typename T>
class CustomTemplate : public BaseTemplate<T>;

typedef CustomTemplate<float> CustomType;
</code></pre>
<p>Thankfully, almost all parameters can be initialized with plain literals or enum-values. Mostly because they have constructors that relay those POD types deeper into the class tree or because they have type conversion features.</p>
<p><code>function_i_like_to_use(5.7, "My string literal");</code></p>
<p>Now from Python, it would totally suffice to use those literal forms. Is there a way to wrap these functions with alternative parameter types that a literal call would work with, but which do not match the actual function parameter types as declared?</p>
<pre><code>PYBIND11_EMBEDDED_MODULE(example, m)
{
    // this compiles, but can't be called with literals
    m.def("func_i_like_to_use", &func_i_like_to_use, "Want to call from Python",
          pybind11::arg("a"), pybind11::arg("b"));

    // this doesn't compile, as it doesn't match the function declaration
    m.def("func_i_like_to_use", static_cast<void (*)(float, std::string)>(&func_i_like_to_use), "Want to call from Python",
          pybind11::arg("a"), pybind11::arg("b"));
}

pybind11::exec("import example\n"
               "example.func_i_like_to_use(5.8, 'hello world')");
</code></pre>
<p>How could I get this to work? I could wrap the function again in C++ with POD parameters, but sometimes, even if calling a function with literals works, calling them with POD types doesn't, so I can't relay the wrapped parameters to the original function.</p>
<p>I could wrap the first order of CustomType with pybind11, but wrapping the actual constructors isn't sufficient to allow calling with literals. Maybe I could insert 'virtual' constructors here for use just by Python? How would this work with pybind11?</p>
<p>(<strong>Update</strong>: fixed usage of pybind11::arg(); modified experimental function bind with static_cast)</p>
|
<python><c++><pybind11>
|
2023-04-04 14:41:31
| 1
| 2,324
|
Chaos_99
|
75,930,720
| 10,266,106
|
Selecting Array Values With A Separate Array of Indices
|
<p>I am attempting to index a stacked 3-D Numpy array (named <code>stack</code>) by a group of 2-D indices contained inside of a separate Numpy array (named <code>piece</code>). The separate array contains several groups of 2-D index pairs. Inspection of these indices at the first entry of the array is as follows:</p>
<pre><code>[[0 1]
[1 0]
[0 0]
[1 1]
[2 0]
[0 2]
[2 1]
[1 2]]
</code></pre>
<p>The dimensions of the stacked 3-D array are <code>(1228, 2606, 14)</code>, which was created by stacking multiple 2-D arrays in order to inspect values along common index pairs. I've tried indexing <code>stack</code> by <code>piece</code> in two ways, neither of which produced the desired result:</p>
<ol>
<li>extraction = stack[tuple(piece), :]</li>
<li>extraction = stack[piece, :]</li>
</ol>
<p>Attempt #1 was derived from this question <a href="https://stackoverflow.com/questions/5508352/indexing-numpy-array-with-another-numpy-array">here</a>. In both instances, the shape of extraction yields <code>(8, 2, 2606, 14)</code> as opposed to an array that contains 112 values across the 8 provided index pairs. Any insight as to where corrections can be made is appreciated!</p>
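<p>A sketch with a small stand-in array: split the pairs into a row-index array and a column-index array, so each pair selects one full 14-vector along the last axis:</p>

```python
import numpy as np

stack = np.arange(3 * 4 * 14).reshape(3, 4, 14)  # small stand-in for (1228, 2606, 14)
piece = np.array([[0, 1], [1, 0], [2, 3]])        # three (row, col) index pairs

# Fancy-index with paired arrays: piece[:, 0] gives the rows,
# piece[:, 1] the columns; the third axis is taken whole.
extraction = stack[piece[:, 0], piece[:, 1], :]
print(extraction.shape)  # (3, 14)
```

<p>With 8 index pairs this yields shape <code>(8, 14)</code>, i.e. the 112 values described. Passing <code>piece</code> as a single array instead broadcasts it as row indices only, which is why the original attempts produced <code>(8, 2, 2606, 14)</code>.</p>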
|
<python><numpy><multidimensional-array><indexing><numpy-ndarray>
|
2023-04-04 14:39:52
| 0
| 431
|
TornadoEric
|
75,930,656
| 1,614,862
|
Splunk python SDK to upload a CSV file to lookup table of specific destination app
|
<p>I want to automate uploading a CSV file to the lookup tables of a specific destination app in my Splunk instance. I can do this from the Splunk GUI as shown below; however, I was trying to find a way to do it from Python. I tried the following code, which doesn't seem to be correct.</p>
<pre><code>path = 'data/inputs/oneshot'
Service.post(path, name=filename, **kwargs)
</code></pre>
<p><a href="https://i.sstatic.net/weYZZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/weYZZ.png" alt="enter image description here" /></a></p>
|
<python><csv><upload><splunk><splunk-sdk>
|
2023-04-04 14:33:44
| 1
| 4,269
|
user1614862
|
75,930,570
| 3,744,747
|
SQLAlchemy cast to PostgreSQL TIMESTAMP WITH TIME ZONE
|
<p>I have seen that in postgresql we can cast datetime to <code>timestamp</code> or <code>timestamptz</code> which is <code>TIMESTAMP WITHOUT TIME ZONE</code> or <code>TIMESTAMP WITH TIME ZONE</code>.</p>
<p>For instance using <code>psql(Postgres shell)</code>:</p>
<pre><code>db=# select '2023-03-31 00:00:00'::timestamp;
timestamp
---------------------
2023-03-31 00:00:00
</code></pre>
<p>or</p>
<pre><code>db=# select '2023-03-31 00:00:00'::timestamptz;
timestamptz
------------------------
2023-03-31 00:00:00+00
</code></pre>
<p>The second example is exactly what I'm trying to do with SqlAlchemy.</p>
<p>I know that we can use <code>Model.created_date.cast(TIMESTAMP)</code> which is like the first example, but I need to cast <code>created_date</code> to <code>timestamptz</code>.</p>
<p>The purpose of doing this is to extract some parts of that date in many different time zones.</p>
<p>in addition, the field in my model is like this:</p>
<pre><code>created_date = Column(DateTime, nullable=False, default=datetime.utcnow)
</code></pre>
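<p>A sketch of what I believe is the relevant API: the PostgreSQL dialect's <code>TIMESTAMP</code> type accepts <code>timezone=True</code>, and casting to it renders as <code>TIMESTAMP WITH TIME ZONE</code> (verified here only by compiling the expression, not against a live database):</p>

```python
from sqlalchemy import DateTime, cast, column
from sqlalchemy.dialects import postgresql
from sqlalchemy.dialects.postgresql import TIMESTAMP

# TIMESTAMP(timezone=True) is the dialect type behind "timestamptz".
expr = cast(column("created_date", DateTime), TIMESTAMP(timezone=True))

print(expr.compile(dialect=postgresql.dialect()))
# CAST(created_date AS TIMESTAMP WITH TIME ZONE)
```

<p>In a query this would look like <code>Model.created_date.cast(TIMESTAMP(timezone=True))</code>, which the timezone-aware extraction functions can then be applied to.</p>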
|
<python><postgresql><datetime><sqlalchemy><timestamp>
|
2023-04-04 14:25:12
| 1
| 11,103
|
Mehrdad Pedramfar
|
75,930,541
| 3,980,808
|
Extracting information from oddly shaped tables with python-docx
|
<p>I am using <code>python-docx</code> to try and extract some information from tables in word documents. Most of my sample files work well but I have a few tables that are failing. The basic code I am running is this:</p>
<pre><code>from docx.api import Document
document = Document(file_path)
[[cell.text for cell in row.cells] for row in document.tables[0].rows]
</code></pre>
<p>This normally works but for some examples not all cells are read and some data is misaligned. See example below and attached file.</p>
<p>The failing table looks like this in word
<a href="https://i.sstatic.net/baVnT.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/baVnT.png" alt="enter image description here" /></a>
<a href="https://docs.google.com/document/d/1Qvcqy01QI8UyT7lsRjjL3LSegpwtznrW/edit?usp=share_link&ouid=104601276623209492420&rtpof=true&sd=true" rel="nofollow noreferrer">File</a></p>
<p>and the output I get in python running my code is</p>
<pre><code>[['', '', '', '', '', 'Nr.', '0b', '0c'],
['0d', '0e', '', '1b', '1c', '1d', '1e', ''],
['2b', '2c', '2d', '2e', '', '3b', '3c', '3d'],
['3e', '', '4b', '4c', '4d', '4e', '', '5b'],
['5c', '5d', '5e', '', '6b', '6c', '6d', '6e'],
['', '7b', '7c', '7d', '7e', '', '8b', '8c'],
['8d', '8e', 'foo', 'foo', 'foo', 'foo', 'foo', ''],
['',
'Click here to enter a date.',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n'],
[],
[],
[],
[]]
</code></pre>
<p>What I am expecting here is something along the lines of this</p>
<pre><code>[['', '', '', '', ''],
['Nr.', '0b', '0c', '0d', '0e']
['1.', '1b', '1c', '1d', '1e'],
['2.', '2b', '2c', '2d', '2e'],
['3.', '3b', '3c', '3d', '3e'],
['4.', '4b', '4c', '4d', '4e'],
['5.', '5b', '5c', '5d', '5e'],
['6.', '6b', '6c', '6d', '6e'],
['7.', '7b', '7c', '7d', '7e'],
['8.', '8b', '8c', '8d', '8e'],
['foo', 'foo', 'foo', 'foo', 'foo', ''],
['bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n',
'bar yes no\n\nfoobar\n']]
</code></pre>
<p>Do you have any suggestions as to why this is and how to remedy it? Note that it is not just because some cells span several columns; normally this is handled semi-gracefully.</p>
|
<python><python-docx>
|
2023-04-04 14:22:22
| 0
| 973
|
Ivar Eriksson
|
75,930,486
| 4,647,519
|
pandas.Series.map depending on order of items in dictionary?
|
<p>I tried to map the data types of a pandas DataFrame to different names using the map method. It works for 2 out of 3 permutations of the data types within the dictionary argument to map. But the 3rd one is ignoring 'int64'. Is the order of the dictionary keys supposed to matter or what am I missing here?</p>
<pre><code>import pandas as pd
df = pd.DataFrame({
'x': [1,2,3],
'y': [1.0, 2.2, 3.5],
'z': ['one', 'two', 'three']
})
df.dtypes
df.dtypes.map({'int64': 'integer', 'float64': 'decimal', 'object': 'character'}) # works
df.dtypes.map({'object': 'character', 'float64': 'decimal', 'int64': 'integer'}) # works
df.dtypes.map({'float64': 'decimal', 'int64': 'integer', 'object': 'character'}) # NaN for x
</code></pre>
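One workaround that sidesteps the lookup quirk entirely, assuming string labels are acceptable: convert the dtypes to strings before mapping, so every dictionary key comparison is string-to-string and the result no longer depends on key order.

```python
import pandas as pd

df = pd.DataFrame({
    'x': [1, 2, 3],
    'y': [1.0, 2.2, 3.5],
    'z': ['one', 'two', 'three']
})

# Mapping over string names instead of dtype objects is order-independent.
mapped = df.dtypes.astype(str).map(
    {'float64': 'decimal', 'int64': 'integer', 'object': 'character'})
```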
|
<python><pandas>
|
2023-04-04 14:17:56
| 1
| 545
|
tover
|
75,930,356
| 19,130,803
|
python documentation using sphinx
|
<p>I am using Sphinx to create documentation for a Python project. I have Google-style docstrings with the following settings:</p>
<pre><code>extensions = [
"sphinx.ext.autodoc",
"sphinx.ext.autosummary",
"sphinx.ext.viewcode",
"sphinx_autodoc_typehints",
"sphinx.ext.napoleon",
]
</code></pre>
<pre><code>napoleon_use_rtype = False
</code></pre>
<p>If it is set to true, the output shows <code>Return type: tuple[bool, str]</code> twice (in bold); setting it to false displays it once (in bold).</p>
<pre><code>html_theme = "sphinxdoc"
</code></pre>
<p>Output As below:</p>
<pre><code>set_data(*, data) [source]
Set the acquire task data.
Return type: tuple[bool, str] # This gets displayed in bold
Args:
data (Any): data.
Returns:
tuple[bool, str]: (True, msg) if successful, (False, msg) otherwise.
Raises: None
</code></pre>
<p>Since the return type already appears under <code>Returns:</code>, I want to remove the <code>Return type: tuple[bool, str]</code> line from the output. If that is not possible, can it at least be rendered as normal text instead of bold?</p>
<p>Is there any configuration setting for above?</p>
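One configuration worth trying, under the assumption that the duplicated bold line is injected by the `sphinx_autodoc_typehints` extension listed in `extensions` (that extension adds its own "Return type" field, separately from napoleon):

```python
# conf.py (sketch) -- typehints_document_rtype is an option of
# sphinx-autodoc-typehints; this assumes that extension is the source
# of the bold "Return type:" line.
napoleon_use_rtype = False        # keep napoleon from emitting "Return type:"
typehints_document_rtype = False  # keep sphinx-autodoc-typehints from emitting it too
```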
|
<python>
|
2023-04-04 14:07:44
| 0
| 962
|
winter
|
75,930,300
| 4,716,625
|
PySpark - filter data frame based on field containing any value from a list
|
<p>I have a list of values called <code>codes</code>, and I want to exclude any record from a Spark dataframe whose <code>codelist</code> field includes any of the values in the <code>codes</code> list.</p>
<pre><code>codes = ['O30', 'O81', 'Z38']
from pyspark.sql.types import StructType,StructField, StringType, IntegerType
dfrows = [
("Jane", "Doe", "I13; Z22; F11"),
("Janet", "Doser", "O81; F22; I11"),
("Jean", "Dew", "D11; O30; Z00; D10"),
("Janey", "Doedoe", "D11; Z38; Z00; O81"),
("Jena", "Dote", "I13"),
("Jenae", "Dee", "O30")
]
schema = StructType([ \
StructField("fakefirstname",StringType(),True), \
StructField("fakelastname",StringType(),True), \
StructField("codelist", StringType(), True)
])
scdf = sc.createDataFrame(data=dfrows, schema=schema)
scdf.show()
# +-------------+------------+------------------+
# |fakefirstname|fakelastname| codelist|
# +-------------+------------+------------------+
# | Jane| Doe| I13; Z22; F11|
# | Janet| Doser| O81; F22; I11|
# | Jean| Dew|D11; O30; Z00; D10|
# | Janey| Doedoe|D11; Z38; Z00; O81|
# | Jena| Dote| I13|
# | Jenae| Dee| O30|
# +-------------+------------+------------------+
</code></pre>
<p>After removing all records where the <code>codelist</code> field contains any value from the <code>codes</code> list, I should end up with the final dataframe:</p>
<pre><code>+-------------+------------+-------------+
|fakefirstname|fakelastname| codelist|
+-------------+------------+-------------+
| Jane| Doe|I13; Z22; F11|
| Jena| Dote| I13|
+-------------+------------+-------------+
</code></pre>
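The core predicate can be sketched in plain Python (the `"; "`-separated format is assumed from the sample data); wrapped in a `udf` returning `BooleanType`, it could drive `scdf.filter(...)`:

```python
codes = ['O30', 'O81', 'Z38']

def keep_row(codelist: str) -> bool:
    """True if none of the semicolon-separated codes is in the exclusion list."""
    return not any(c.strip() in codes for c in codelist.split(';'))
```

Alternatively, since these particular codes contain no regex metacharacters, a filter such as `scdf.filter(~col('codelist').rlike('|'.join(codes)))` may work without a UDF.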
|
<python><pyspark>
|
2023-04-04 14:02:01
| 1
| 1,223
|
bshelt141
|
75,930,288
| 14,653,659
|
Send e-mail with ssl.PROTOCOL_TLS_CLIENT
|
<p>I wrote some code to send e-mails over a relay in Python. The code can be seen below; I have marked the variables that I changed from the real code with {}:</p>
<pre><code>import smtplib, ssl

port = 587  # For TLS

# Create a secure SSL context
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)

message = """\
Subject: Hi there

This message is sent from Python."""

with smtplib.SMTP({relay_name}, port) as server:
    server.starttls(context=context)
    server.login({from_mail}, {password})
    server.sendmail({from_mail}, {to_mail}, message)
</code></pre>
<p>This works without a problem, but I get a warning:</p>
<pre><code> DeprecationWarning: ssl.PROTOCOL_TLSv1_2 is deprecated
</code></pre>
<p>Now I have studied the <a href="https://docs.python.org/3/library/ssl.html#module-ssl" rel="nofollow noreferrer">documentation</a> for quite a while and it seems that I should be using <em>ssl.PROTOCOL_TLS_CLIENT</em> instead of <em>ssl.PROTOCOL_TLSv1_2</em>, but I cannot really figure out how this would work. The simple way of just replacing the protocols does not work, as I then get an error:</p>
<pre><code>[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
</code></pre>
<p>I assume that to make it work I need a line similar to <code>context.load_verify_locations('path/to/cabundle.pem')</code>, as written in one of the examples in the documentation, but I am unsure which .pem file I can and should use here.
Do I need a file whose public key is then deployed on the relay? Can I just create any .pem file? Why did it work before without that? I have read somewhere that the base version uses some kind of default certificate; how can I use that here?</p>
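A sketch of the usual replacement: `ssl.create_default_context()` builds a client context preloaded with the system's trusted CA bundle, which is likely why the bare `SSLContext(ssl.PROTOCOL_TLS_CLIENT)` failed verification (it requires certificate verification but starts with no CAs loaded). The relay/credential names below are placeholders from the question.

```python
import smtplib
import ssl

# create_default_context() loads the system CA bundle and enables
# hostname checking, unlike a bare SSLContext(ssl.PROTOCOL_TLS_CLIENT).
context = ssl.create_default_context()

# To trust a private or internal relay CA, its certificate can be added:
# context.load_verify_locations("path/to/ca_bundle.pem")

def send(relay_name, from_mail, password, to_mail, message, port=587):
    with smtplib.SMTP(relay_name, port) as server:
        server.starttls(context=context)
        server.login(from_mail, password)
        server.sendmail(from_mail, to_mail, message)
```

A separate `.pem` is only needed if the relay's certificate is not signed by a CA already in the system bundle; it worked before because `PROTOCOL_TLSv1_2` contexts default to no verification at all.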
|
<python><ssl>
|
2023-04-04 14:01:01
| 0
| 807
|
Manuel
|
75,930,214
| 15,673,412
|
python - boolean mask on array of custom objects
|
<p>Let's suppose I have defined the following class:</p>
<pre><code>class Event(object):
    def __init__(self, time):
        self.time = time
        self.alive = True

    def __repr__(self):
        return f"t = {self.time:.2E}"
</code></pre>
<p>And then I have a <code>numpy.ndarray</code> of <code>Event</code> objects, sorted by time:</p>
<pre><code>timestamps
>>>> array([t = 5.00E-11, t = 1.51E-08, t = 3.15E-08, t = 4.69E-08], dtype=object)
</code></pre>
<p>I want to select for example all the array elements that have the <code>time</code> field between two values, and set their <code>alive</code> field to <code>False</code>.</p>
<p>Is there a way to do it?</p>
<p>The only thing I tried is with an explicit for loop, but this can become really slow:</p>
<pre><code>for i in range(len(timestamps)):
    # ...
    # element i gets selected
    for j in range(i, len(timestamps)):
        if timestamps[i].time < timestamps[j].time < timestamps[i].time + t_dead:
            timestamps[j].alive = False
</code></pre>
<p>while I am looking for something like</p>
<pre><code>target = timestamps[i].time
timestamps[target < timestamps.time < target + t_dead].alive = False
# I know the syntax is very wrong here
</code></pre>
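One hedged approach: extract the `time` fields into a plain float array once, build the boolean mask on that array, and then touch only the selected objects to flip `alive`.

```python
import numpy as np

class Event(object):
    def __init__(self, time):
        self.time = time
        self.alive = True

timestamps = np.array([Event(5.00e-11), Event(1.51e-08),
                       Event(3.15e-08), Event(4.69e-08)], dtype=object)

t_dead = 2e-08
times = np.array([e.time for e in timestamps])  # vectorised view of the field

target = timestamps[0].time
mask = (target < times) & (times < target + t_dead)
for event in timestamps[mask]:  # loop only over the selected elements
    event.alive = False
```

For many repeated queries, keeping `times` as a parallel array (or switching to a structured/record array) avoids rebuilding it each time; object arrays themselves cannot be masked on an attribute directly.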
|
<python><arrays><object><numpy-ndarray>
|
2023-04-04 13:53:16
| 1
| 480
|
Sala
|
75,930,204
| 9,690,045
|
Using pandas library without bz2 (No root access, can't install bz2)
|
<p>I am trying to run Python code which uses the <code>pandas</code> library. I am getting an error saying that <code>bz2</code> cannot be found. I do not have root access and cannot install anything. <strong>Is there a way to use <code>pandas</code> without it?</strong> <em>There are similar questions to mine on Stack Overflow, but they require root access to install <code>bz2</code>.</em></p>
<p>Importing <code>pandas</code> (Python 3.8.3, CentOS Linux release 7.6.1810 (Core)):</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
</code></pre>
<p>output:</p>
<pre class="lang-bash prettyprint-override"><code>Traceback (most recent call last):
File "tmp.py", line 1, in <module>
import pandas as pd
File "/home/myusername/src/dataprocessing/.venv/lib/python3.8/site-packages/pandas/__init__.py", line 22, in <module>
from pandas.compat import is_numpy_dev as _is_numpy_dev # pyright: ignore # noqa:F401
File "/home/myusername/src/dataprocessing/.venv/lib/python3.8/site-packages/pandas/compat/__init__.py", line 24, in <module>
import pandas.compat.compressors
File "/home/myusername/src/dataprocessing/.venv/lib/python3.8/site-packages/pandas/compat/compressors.py", line 7, in <module>
import bz2
ModuleNotFoundError: No module named 'bz2'
</code></pre>
|
<python><python-3.x><pandas><bz2>
|
2023-04-04 13:51:43
| 2
| 836
|
SMMousaviSP
|
75,930,172
| 10,581,944
|
Type hinting for loading a machine learning PMML model pipeline using `pickle.load()`
|
<p>I have a function like:</p>
<pre><code>def load_model(model_path: str) -> ???:
    with open(model_path, "rb") as f:
        model_pipeline = pickle.load(f)
    return model_pipeline
</code></pre>
<p>My question is: what type hint should I specify for the output? Thanks.</p>
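Since `pickle.load` can return any object, `typing.Any` is the honest general-purpose hint; if the pickled object is known to be, say, an sklearn `Pipeline` (an assumption the question does not confirm), that concrete class would be a tighter choice.

```python
import pickle
from typing import Any

def load_model(model_path: str) -> Any:
    # pickle.load is untyped by nature; Any is the accurate hint here.
    with open(model_path, "rb") as f:
        return pickle.load(f)
```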
|
<python><pickle><type-hinting><pmml>
|
2023-04-04 13:49:15
| 0
| 3,433
|
wawawa
|
75,930,160
| 8,962,929
|
How to override createsuperuser command in django
|
<p>I'm trying to customize the <code>createsuperuser</code> command.</p>
<p>The purpose of this is:</p>
<p>I have a table called <code>administrator</code>, and it has a OneToOne relationship with the <code>auth_user</code> table:</p>
<pre><code>class Administrator(models.Model):
    user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE, related_name="admin")
</code></pre>
<p>And whenever I create a new superuser using the <code>createsuperuser</code> command, I'd like to create a record in the <code>administrator</code> table as well.</p>
<p>But I'm not sure how to do this yet.
Please help, thanks.</p>
<p>Here's my repo:</p>
<p><a href="https://github.com/congson95dev/regov-pop-quiz-backend-s-v1/blob/main/auth_custom/management/commands/createsuperuser.py" rel="nofollow noreferrer">https://github.com/congson95dev/regov-pop-quiz-backend-s-v1/blob/main/auth_custom/management/commands/createsuperuser.py</a></p>
|
<python><django><django-rest-framework><superuser>
|
2023-04-04 13:47:49
| 1
| 830
|
fudu
|
75,930,094
| 8,962,929
|
Nested Serializer for prefetch_related in Django Rest Framework
|
<p>I'm trying to make a nested serializer with <code>prefetch_related</code> but it doesn't work, here's my code:</p>
<p>models.py</p>
<pre><code>from django.db import models

class Student(models.Model):
    phone = models.IntegerField(null=True)
    birth_date = models.DateField(null=True)
    user = models.OneToOneField(get_user_model(), on_delete=models.CASCADE, related_name="student")

class Course(models.Model):
    title = models.CharField(max_length=100, blank=True, default='')

class CourseEnroll(models.Model):
    course = models.ForeignKey(Course, on_delete=models.PROTECT, related_name='course_enroll')
    student = models.ForeignKey(Student, on_delete=models.PROTECT, related_name='student_enroll')
</code></pre>
<p>views.py</p>
<pre><code>from rest_framework import mixins
from rest_framework.viewsets import GenericViewSet

from quiz.models import Course
from quiz.serializers import CourseSerializer

class CourseViewSet(mixins.CreateModelMixin,
                    mixins.ListModelMixin,
                    mixins.RetrieveModelMixin,
                    GenericViewSet):
    serializer_class = CourseSerializer

    def get_queryset(self):
        queryset = Course.objects.prefetch_related("course_enroll").all()
        return queryset
</code></pre>
<p>serializers.py</p>
<pre><code>from rest_framework import serializers

from quiz.models import Course, CourseEnroll

class CourseEnrollSerializer(serializers.ModelSerializer):
    class Meta:
        model = CourseEnroll
        fields = ['id']

class CourseSerializer(serializers.ModelSerializer):
    student_enrolled = CourseEnrollSerializer(many=True, read_only=True)

    class Meta:
        model = Course
        fields = ['id', 'title', 'student_enrolled']
</code></pre>
<p>Here's my repo:
<a href="https://github.com/congson95dev/regov-pop-quiz-backend-s-v1" rel="nofollow noreferrer">https://github.com/congson95dev/regov-pop-quiz-backend-s-v1</a></p>
<p>Did I do something wrong here? Please help, thanks.</p>
|
<python><django><django-rest-framework><django-serializer><django-prefetch-related>
|
2023-04-04 13:41:20
| 1
| 830
|
fudu
|
75,930,033
| 18,018,869
|
How to avoid building numpy array iteratively
|
<p>I know iterating through numpy arrays is more costly than using the numpy functions. This is crucial to me since my arrays are quite large.</p>
<p>Please use the provided code as an explanation of what I want to achieve:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np

start_value = 12
start_arr = np.array([-2, -4, -60, -0.5, 2, 2, 1, 70, -2, -5, 2])

out_arr = []
ans = start_value
for i in start_arr:
    if i > 0:
        out_arr.append(i)
        ans = i
    else:
        out_arr.append(ans)
out_arr = np.array(out_arr)
# [12, 12, 12, 12, 2, 2, 1, 70, 70, 70, 2]
</code></pre>
<p>I don't know how to tell numpy to use the "previously" assigned value in case <code>i <= 0</code>. Also I can't explain the problem to my browser's search engine in a way it outputs something useful.</p>
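One vectorised sketch of this "carry the last positive value forward" idea: record the index of each positive element, forward-fill those indices with `np.maximum.accumulate`, then gather, falling back to `start_value` where no positive element has occurred yet.

```python
import numpy as np

start_value = 12
start_arr = np.array([-2, -4, -60, -0.5, 2, 2, 1, 70, -2, -5, 2])

mask = start_arr > 0
idx = np.where(mask, np.arange(len(start_arr)), -1)
idx = np.maximum.accumulate(idx)  # forward-fill the last positive index
out_arr = np.where(idx >= 0, start_arr[idx], start_value)
# [12, 12, 12, 12, 2, 2, 1, 70, 70, 70, 2] (as floats)
```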
|
<python><arrays><numpy>
|
2023-04-04 13:35:43
| 3
| 1,976
|
Tarquinius
|
75,929,991
| 10,581,944
|
Type hinting pandas dataframe with specific columns/vectors
|
<p>I have a function like this:</p>
<pre><code>def func(y, predicted_y, sample_weights: pd.core.series.Series) -> float:
    result = y / (predicted_y + y)
    return (result * sample_weights).sum() / sample_weights.sum()
</code></pre>
<p>where <code>y</code>, <code>predicted_y</code> are vectors - the actual target column of my dataframe and the predicted target column of my dataframe.</p>
<p>How should I specify the type hint for them? I used <code>pd.core.series.Series</code> for <code>sample_weights</code>; is this correct? Thanks.</p>
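`pd.Series` is the public alias for `pd.core.series.Series`, so the hint is technically correct but conventionally written via the top-level name; a sketch applying it to all three vector arguments:

```python
import pandas as pd

def func(y: pd.Series, predicted_y: pd.Series,
         sample_weights: pd.Series) -> float:
    result = y / (predicted_y + y)
    return (result * sample_weights).sum() / sample_weights.sum()
```

For static checkers such as mypy, the `pandas-stubs` package supplies the actual type information behind these hints.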
|
<python><pandas><machine-learning><type-hinting>
|
2023-04-04 13:31:20
| 1
| 3,433
|
wawawa
|
75,929,908
| 11,261,546
|
Get channel to split in web request
|
<p>I have a web request that can get string parameters as (I'm simplifying the code to show the challenging part):</p>
<pre><code>ch1 = "RGB" # or "HSV"
sing1 = "G" # or "V"
</code></pre>
<p>I also get an image from the request, and I want to perform a color conversion and a split according to the previous variables. For this I take the variable <code>ch1</code> and loop over it, trying to find whether <code>sing1</code> is in it; if so, the index is the channel to split:</p>
<pre><code>for i in range(0, len(ch1)):
    if ch1[i] == sing1:
        decoded_image = cv2.split(decoded_image)[i]
        break
</code></pre>
<p>This works but seems pretty wrong. I thought about building a <code>dict</code> of all possible colorSpace/channel combinations, but that also seems wrong.</p>
<p>Is there a more proper way to achieve this?</p>
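`str.index` gives the channel position directly and replaces the manual loop; a hedged sketch (validating membership first avoids an unhandled `ValueError` on bad input, and the `cv2.split` call is kept as a comment since it needs the real image):

```python
ch1 = "RGB"
sing1 = "G"

if sing1 in ch1:
    channel = ch1.index(sing1)  # 1 -> the G plane of an RGB image
    # decoded_image = cv2.split(decoded_image)[channel]
else:
    raise ValueError(f"channel {sing1!r} not in colour space {ch1!r}")
```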
|
<python><string><opencv>
|
2023-04-04 13:24:00
| 1
| 1,551
|
Ivan
|
75,929,807
| 13,606,342
|
Run two different jobs depending upon the start and end time
|
<ul>
<li>I am trying to implement a simple schedule in python3.6. Given two times, a start and an end time, it will execute the respective job.</li>
<li>All the time will be in 24hrs clock. Execute JobA if the current time falls between JobA start and end time. Execute JobB if the current time falls between JobB start and end time.</li>
<li>JobA start time is 09:00:00 and JobA end time is 17:55:00.</li>
<li>JobB start time is 18:00:00 and JobB end time is 08:55:00.</li>
<li>The issue is I can make it work if this is the only case, if the start and end time for JobA and JobB changes then I am unable to serve the use-case.</li>
<li>Is there a better way through which I can serve the use-case and execute the respective activity depending upon the start and end time?</li>
<li>The code I tried is below; with it I am getting 'Both jobs are currently offline.'</li>
</ul>
<pre><code>import datetime
import pytz

# Define the start and end times of Job A and Job B in the format (hh:mm:ss)
job_a_start = '09:00:00'
job_a_end = '17:55:00'
job_b_start = '18:00:00'
job_b_end = '08:55:00'

# Define the IST timezone
ist_tz = pytz.timezone('Asia/Kolkata')

# Get the current time in IST
now_ist = datetime.datetime.now(ist_tz).time()

# Convert job start and end times to datetime objects
job_a_start_dt = datetime.datetime.combine(datetime.date.today(), datetime.datetime.strptime(job_a_start, '%H:%M:%S').time())
job_a_end_dt = datetime.datetime.combine(datetime.date.today(), datetime.datetime.strptime(job_a_end, '%H:%M:%S').time())
job_b_start_dt = datetime.datetime.combine(datetime.date.today(), datetime.datetime.strptime(job_b_start, '%H:%M:%S').time())
job_b_end_dt = datetime.datetime.combine(datetime.date.today(), datetime.datetime.strptime(job_b_end, '%H:%M:%S').time())

# Check which job is currently active
if job_a_start_dt.time() <= now_ist < job_a_end_dt.time():
    print("Job A is currently active.")
elif job_b_start_dt.time() <= now_ist < job_b_end_dt.time():
    print("Job B is currently active.")
elif now_ist < job_a_start_dt.time():
    time_diff = job_a_start_dt - datetime.datetime.combine(datetime.date.today(), now_ist)
    print(f"Next job (Job A) starts in {time_diff}.")
elif now_ist < job_b_start_dt.time():
    time_diff = job_b_start_dt - datetime.datetime.combine(datetime.date.today(), now_ist)
    print(f"Next job (Job B) starts in {time_diff}.")
else:
    print("Both jobs are currently offline.")
</code></pre>
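The likely culprit is that Job B's window (18:00 to 08:55) wraps past midnight, so `start <= now < end` can never hold for it. A wrap-aware predicate, sketched with plain `datetime.time` values, handles arbitrary start/end pairs:

```python
import datetime

def in_window(now: datetime.time, start: datetime.time,
              end: datetime.time) -> bool:
    """True if now falls in [start, end), handling windows that wrap midnight."""
    if start <= end:
        return start <= now < end
    return now >= start or now < end  # window crosses midnight

t = datetime.time
print(in_window(t(23, 0), t(18, 0), t(8, 55)))   # True  (inside Job B's wrapped window)
print(in_window(t(10, 0), t(9, 0), t(17, 55)))   # True  (inside Job A's window)
print(in_window(t(17, 57), t(18, 0), t(8, 55)))  # False (in the gap between jobs)
```

With this helper the active job can be chosen by two `in_window` checks, regardless of how the configured start and end times change.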
|
<python><datetime>
|
2023-04-04 13:12:46
| 1
| 337
|
Lav Sharma
|
75,929,742
| 7,603,109
|
Pyspark: Read csv file with multiple sheets
|
<p>The <strong>.csv</strong> file I am using will have <strong>multiple sheets</strong> (Dynamic sheet names).</p>
<p>I have to create <strong>dataFrames</strong> for all the sheets</p>
<p>The syntax I am using:</p>
<pre><code>df = self.spark.read \
    .option("sheetName", None) \
    .option('header', 'true') \
    .csv(file_path)
sheet_names = df.keys()
print(sheet_names)
</code></pre>
<p>Error:</p>
<blockquote>
<p>'DataFrame' object has no attribute 'keys'</p>
</blockquote>
|
<python><dataframe><apache-spark><pyspark>
|
2023-04-04 13:05:46
| 1
| 22,283
|
Adrita Sharma
|
75,929,721
| 13,518,426
|
How to show full column width of a Polars dataframe?
|
<p>I'm trying to display the full width of column in polars dataframe. Given the following polars dataframe:</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame({
'column_1': ['TF-IDF embeddings are done on the initial corpus, with no additional N-Gram representations or further preprocessing', 'In the eager API, the expression is evaluated immediately. The eager API produces results immediately after execution, similar to pandas. The lazy API is similar to Spark, where a plan is formed upon execution of a query, but the plan does not actually access the data until the collect method is called to execute the query in parallel across all CPU cores. In simple terms: Lazy execution means that an expression is not immediately evaluated.'],
'column_2': ['Document clusterings may misrepresent the visualization of document clusterings due to dimensionality reduction (visualization is pleasing for its own sake - rather than for prediction/inference)', 'Polars has two APIs, eager and lazy. In the eager API, the expression is evaluated immediately. The eager API produces results immediately after execution, similar to pandas. The lazy API is similar to Spark, where a plan is formed upon execution of a query, but the plan does not actually access the data until the collect method is called to execute the query in parallel across all CPU cores. In simple terms: Lazy execution means that an expression is not immediately evaluated.']
})
</code></pre>
<p>I tried the following:</p>
<pre class="lang-py prettyprint-override"><code>pl.Config.set_fmt_str_lengths = 200
pl.Config.set_tbl_width_chars = 200
</code></pre>
<p>The result:</p>
<pre class="lang-py prettyprint-override"><code>shape: (2, 2)
┌───────────────────────────────────┬───────────────────────────────────┐
│ column_1 ┆ column_2 │
│ --- ┆ --- │
│ str ┆ str │
╞═══════════════════════════════════╪═══════════════════════════════════╡
│ TF-IDF embeddings are done on th… ┆ Document clusterings may misrepr… │
│ In the eager API, the expression… ┆ Polars has two APIs, eager and l… │
└───────────────────────────────────┴───────────────────────────────────┘
</code></pre>
<p>How can I display the full width of columns in a polars DataFrame in Python?</p>
<p>Thanks in advance!</p>
|
<python><dataframe><python-polars>
|
2023-04-04 13:03:41
| 3
| 433
|
Ahmad
|
75,929,620
| 10,232,932
|
Count space character before a letter/number in a python pandas column
|
<p>I have a pandas dataframe, which looks like this:</p>
<pre><code>columnA columnB
A 10
B 12
C 13
D 14
010 17
</code></pre>
<p>How can I count the space characters before the first non-space character in column A, storing the result in a new column? For example:</p>
<pre><code>columnA columnB counter
A 10 0
B 12 1
C 13 2
D 14 2
010 17 1
</code></pre>
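A vectorised sketch: the count of leading spaces is the length difference before and after `lstrip` (the values are assumed to contain ordinary space characters, as in the sample):

```python
import pandas as pd

df = pd.DataFrame({'columnA': ['A', ' B', '  C', '  D', ' 010'],
                   'columnB': [10, 12, 13, 14, 17]})

# leading-space count = original length minus length after stripping spaces
df['counter'] = df['columnA'].str.len() - df['columnA'].str.lstrip(' ').str.len()
```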
|
<python><pandas>
|
2023-04-04 12:52:44
| 3
| 6,338
|
PV8
|
75,929,501
| 402,649
|
Python connect to MySQL and MariaDB from same application?
|
<p>How can I connect to both MySQL and MariaDB from the same Python 3.6+ application? The application is installed on RHEL 8; the same issue seems to exist on Ubuntu.</p>
<p>Due to security vulnerabilities, I have to use MySQL client 8.0.30 or later.</p>
<p>mysql.connector with client 8.0.30 cannot connect to MariaDB (charset error, known issue, no listed workaround).</p>
<p>MariaDB cannot be installed alongside MySQL due to package conflicts:</p>
<pre><code>file /usr/share/mysql/charsets/Index.xml from install of MariaDB-common-10.6.12-1.el8.x86_64 conflicts with file from package mysql-common-8.0.30-1.module+el8.6.0+16523+5cb0e868.x86_64
</code></pre>
|
<python><mysql><mariadb><mysql-connector>
|
2023-04-04 12:39:43
| 0
| 3,948
|
Wige
|
75,929,474
| 2,966,197
|
dictionary update() only keeping last dictionary when adding multiple dictionary from different files in a directory in python
|
<p>I have a directory which contains multiple <code>JSON</code> files. Each <code>JSON</code> file has a <code>JSON</code> structure (with 3 dictionary elements in each). I want to read each <code>JSON</code> file and merge its dictionary into a single master <code>dictionary</code> that contains all of them. This is my code:</p>
<pre><code>def json_read():
    pathToDir = '/path/to/directory'
    json_mstr = {}
    for file_name in [file for file in os.listdir(pathToDir) if file.endswith('.json')]:
        with open(os.path.join(pathToDir, file_name)) as input_file:
            print(file_name)
            json_str = json.load(input_file)
            json_mstr.update(json_str)
    print(len(json_mstr))
    return json_mstr
</code></pre>
<p>When I print the length, I see only <code>3</code> as the length of the final master <code>dictionary</code>, and only the content of the dictionary from the last <code>JSON</code> file is in it. I am not sure why <code>update()</code> seems to reset the dictionary after each file read.</p>
<p><strong>Note</strong>: An example sample structure of <code>JSON</code> within each file would be:</p>
<pre><code>{
"resourceType": "Single",
"type": "transaction",
"entry": [
{
"fullUrl": "urn:uuid",
"resource": {
"resourceType": "Employee",
"id": "4cb1a87c",
"text": {
"status": "generated",
"div": "generated"
},
"extension": [],
"identifier": [
{
"system": "https://github.com",
"value": "43f123441901"
}
],
"name": [
{
"use": "official",
"family": "Shields52",
"given": [
"Aaro97"
],
"prefix": [
"Mr."
]
}
],
"maritalStatus": {
"coding": [
{
"system": "MaritalStatus",
"code": "M",
"display": "M"
}
],
"text": "M"
},
"multipleBirthBoolean": false
},
"request": {
"method": "POST",
"url": "User"
}
},
{
"fullUrl": "f411764e1f01",
"resource": {
"resourceType": "Claim",
"id": "411764e1f01",
"status": "active",
"type": {
"coding": [
{
"system": "type",
"code": "Company"
}
]
},
"use": "claim",
"employee": {
"reference": "1141dfb308"
},
"billablePeriod": {
"start": "2009-12-24T16:42:36-05:00",
"end": "2009-12-24T16:57:36-05:00"
},
"created": "2009-12-24T16:57:36-05:00",
"provider": {
"reference": "7e31e2b3feb"
},
"priority": {
"coding": [
{
"system": "Employee",
"code": "normal"
}
]
},
"procedure": [
{
"sequence": 1,
"procedureReference": {
"reference": "58f373a0a0e"
}
}
],
"insurance": [
{
"sequence": 1,
"focal": true,
"coverage": {
"display": "Employer"
}
}
],
"item": [
{
"sequence": 1,
"productOrService": {
"coding": [
{
"system": "http://comp1.info/",
"code": "1349003",
"display": "check up"
}
],
"text": "check up"
},
"encounter": [
{
"reference": "bc0a5705f6"
}
]
},
{
"sequence": 2,
"procedureSequence": [
1
],
"productOrService": {
"coding": [
{
"system": "http://comp.info",
"code": "421000124101",
"display": "Documentation"
}
],
"text": "Documentation"
},
"net": {
"value": 116.60,
"currency": "USD"
}
}
],
"total": {
"value": 163.7,
"currency": "USD"
}
},
"request": {
"method": "POST",
"url": "Employee"
}
}
]
}
</code></pre>
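If every file shares the same three top-level keys (`resourceType`, `type`, `entry`, as in the sample), then each `update()` overwrites the previous file's values, which matches the symptom: length 3 and only the last file's content surviving. One hedged alternative is to key the master dictionary by filename so nothing can collide:

```python
import json
import os

def read_json_dir(path_to_dir):
    """Collect every JSON file into one master dict, keyed by filename
    so identical top-level keys cannot overwrite each other."""
    json_mstr = {}
    for file_name in os.listdir(path_to_dir):
        if file_name.endswith('.json'):
            with open(os.path.join(path_to_dir, file_name)) as input_file:
                json_mstr[file_name] = json.load(input_file)
    return json_mstr
```

A plain list of parsed documents would work equally well if the filenames are not needed later.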
|
<python><json><dictionary>
|
2023-04-04 12:36:22
| 1
| 3,003
|
user2966197
|
75,929,413
| 11,338,984
|
Cannot install diffusion with pip
|
<p>I am trying to install diffusion by running <code>pip install diffusion</code>, but I get the error below. I tried it with both Python versions 3.10.9 and 3.9.16. I also tried it on another computer, but there I got "Module not found" for <code>diffusion.one_shot</code> when I used <code>from diffusion.one_shot import pose_to_video</code>.</p>
<pre class="lang-none prettyprint-override"><code>ERROR: Cannot install diffusion==6.7.0, diffusion==6.7.10, diffusion==6.7.2, diffusion==6.7.3, diffusion==6.7.5, diffusion==6.8.0, diffusion==6.8.1, diffusion==6.8.2, diffusion==6.8.3, diffusion==6.8.4, diffusion==6.8.5, diffusion==6.8.6, diffusion==6.8.7, diffusion==6.8.8, diffusion==6.9.0 and diffusion==6.9.1 because these package versions have conflicting dependencies.
The conflict is caused by:
diffusion 6.9.1 depends on diffusion-core==0.0.28
diffusion 6.9.0 depends on diffusion-core==0.0.28
diffusion 6.8.8 depends on diffusion-core==0.0.16
diffusion 6.8.7 depends on diffusion-core==0.0.16
diffusion 6.8.6 depends on diffusion-core==0.0.16
diffusion 6.8.5 depends on diffusion-core==0.0.16
diffusion 6.8.4 depends on diffusion-core==0.0.16
diffusion 6.8.3 depends on diffusion-core==0.0.16
diffusion 6.8.2 depends on diffusion-core==0.0.16
diffusion 6.8.1 depends on diffusion-core==0.0.16
diffusion 6.8.0 depends on diffusion-core==0.0.16
diffusion 6.7.10 depends on diffusion-cbor==6.7.10
diffusion 6.7.5 depends on diffusion-cbor==6.7.5
diffusion 6.7.3 depends on diffusion-cbor==6.7.3
diffusion 6.7.2 depends on diffusion-cbor==6.7.2
diffusion 6.7.0 depends on diffusion-cbor==6.7.0
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
|
<python>
|
2023-04-04 12:29:39
| 0
| 1,783
|
Ertan Hasani
|
75,929,336
| 610,569
|
How to test exceptions in mock unittests for non-deterministic functions?
|
<p>I have a chain of functions in my library that looks like this, from <code>myfuncs.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>import copy
import random

def func_a(x, population=[0, 0, 123, 456, 789]):
    sum_x = 0
    for _ in range(x):
        pick = random.choice(population)
        if pick == 0:  # Reset the sum.
            sum_x = 0
        else:
            sum_x += pick
    return {'input': sum_x}

def func_b(y):
    sum_x = func_a(y)['input']
    scale_x = sum_x * 1_00_000
    return {'a_input': sum_x, 'input': scale_x}

def func_c(z):
    bz = func_b(z)
    scale_x = bz['b_input'] = copy.deepcopy(bz['input'])
    bz['input'] = scale_x / (scale_x * 2)**2
    return bz
</code></pre>
<p>Due to the randomness in <code>func_a</code>, the output of <code>func_c</code> is non-deterministic. So sometimes when you do:</p>
<pre><code>>>> func_c(12)
{'a_input': 1578, 'input': 1.5842839036755386e-09, 'b_input': 157800000}
>>> func_c(12)
{'a_input': 1947, 'input': 1.2840267077555213e-09, 'b_input': 194700000}
>>> func_c(12)
---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-121-dd3380e1c5ac> in <module>
----> 1 func_c(12)
<ipython-input-119-cc87d58b0001> in func_c(z)
21 bz = func_b(z)
22 scale_x = bz['b_input'] = copy.deepcopy(bz['input'])
---> 23 bz['input'] = scale_x / (scale_x *2)**2
24 return bz
ZeroDivisionError: division by zero
</code></pre>
<p>Then I've modified <code>func_c</code> to catch the error and explain to users why <code>ZeroDivisionError</code> occurs, i.e.</p>
<pre><code>def func_c(z):
    bz = func_b(z)
    scale_x = bz['b_input'] = copy.deepcopy(bz['input'])
    try:
        bz['input'] = scale_x / (scale_x * 2)**2
    except ZeroDivisionError as e:
        raise Exception("You've lucked out, the pick from func_a gave you 0!")
    return bz
</code></pre>
<p>And the expected behavior that raises a <code>ZeroDivisionError</code> now shows:</p>
<pre><code>---------------------------------------------------------------------------
ZeroDivisionError Traceback (most recent call last)
<ipython-input-123-4082b946f151> in func_c(z)
23 try:
---> 24 bz['input'] = scale_x / (scale_x *2)**2
25 except ZeroDivisionError as e:
ZeroDivisionError: division by zero
During handling of the above exception, another exception occurred:
Exception Traceback (most recent call last)
<ipython-input-124-dd3380e1c5ac> in <module>
----> 1 func_c(12)
<ipython-input-123-4082b946f151> in func_c(z)
24 bz['input'] = scale_x / (scale_x *2)**2
25 except ZeroDivisionError as e:
---> 26 raise Exception("You've lucked out, the pick from func_a gave you 0!")
27 return bz
Exception: You've lucked out, the pick from func_a gave you 0!
</code></pre>
<p>I can test <code>func_c</code> in a deterministic way, avoiding the zero division without iterating <code>func_c</code> multiple times, and I've tried:</p>
<pre><code>from mock import patch
from myfuncs import func_c

with patch("myfuncs.func_a", return_value={"input": 345}):
    assert func_c(12) == {'a_input': 345, 'input': 7.246376811594203e-09, 'b_input': 34500000}
</code></pre>
<p>And when I need to test the new exception, I don't want to arbitrarily iterate <code>func_c</code> such that I hit the exception, instead I want to mock the outputs from <code>func_a</code> directly to return the 0 value.</p>
<h3>Q: How do I get the mock to catch the new exception without iterating multiple time through <code>func_c</code>?</h3>
<p>I've tried this in my <code>testfuncs.py</code> file in the same directory as <code>myfuncs.py</code>:</p>
<pre><code>from mock import patch
from myfuncs import func_c

with patch("myfuncs.func_a", return_value={"input": 0}):
    try:
        func_c(12)
    except Exception as e:
        assert str(e).startswith("You've lucked out")
</code></pre>
<h5>Is checking the error message content like this the right way to verify the exception in the mock test?</h5>
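A self-contained sketch of a stricter pattern: `unittest`'s `assertRaisesRegex` makes the intent explicit and, unlike a bare `try`/`except` with an `assert` inside, actually fails when no exception is raised at all. Stand-in functions replicate `myfuncs.py` here so the block runs on its own; with the real module the patch target would be `"myfuncs.func_a"`.

```python
import copy
import unittest
from unittest import mock

def func_a(x):
    return {'input': 345}  # stand-in for the random version in myfuncs.py

def func_b(y):
    sum_x = func_a(y)['input']
    return {'a_input': sum_x, 'input': sum_x * 1_00_000}

def func_c(z):
    bz = func_b(z)
    scale_x = bz['b_input'] = copy.deepcopy(bz['input'])
    try:
        bz['input'] = scale_x / (scale_x * 2) ** 2
    except ZeroDivisionError:
        raise Exception("You've lucked out, the pick from func_a gave you 0!")
    return bz

class TestFuncC(unittest.TestCase):
    def test_zero_pick_raises(self):
        # Swap func_a for a Mock; with a real module this would be
        # mock.patch("myfuncs.func_a", return_value={"input": 0}).
        global func_a
        original, func_a = func_a, mock.Mock(return_value={"input": 0})
        try:
            with self.assertRaisesRegex(Exception, "lucked out"):
                func_c(12)
            func_a.assert_called_once_with(12)
        finally:
            func_a = original
```

`assertRaisesRegex` matches the message with `re.search`, so a distinctive fragment like `"lucked out"` is enough; `pytest.raises(Exception, match=...)` is the equivalent in pytest.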
|
<python><unit-testing><mocking><monkeypatching>
|
2023-04-04 12:19:56
| 2
| 123,325
|
alvas
|
75,929,309
| 4,940,741
|
grid arrange tsne plots from a for loop in scanpy using matplotlib
|
<p>I want to put together the t-SNE plots generated by the loop below:</p>
<pre><code>import scanpy as sc
import seaborn as sns
import matplotlib.pyplot as plt
# Subset the data by condition
conditions = adata_all.obs['condition'].unique()
# Create a grid of subplots with 1 row and 3 columns
fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
for i, condition in enumerate(conditions):
adata_sub = adata_all[adata_all.obs['condition'] == condition]
# Run t-SNE
sc.tl.tsne(adata_sub)
# Plot t-SNE with LNP color and condition facet in the appropriate subplot
sc.pl.tsne(adata_sub, color='CD11b', palette=cmap, title=condition, ncols=1, ax=axes[i])
# Save the figure
plt.savefig('tsne_plots.png', dpi=300)
</code></pre>
<p>I have three conditions, like this:</p>
<pre><code>conditions: ['AB', 'AC', 'AD']
Categories (3, object): ['AB', 'AC', 'AD'].
</code></pre>
<p>But in the output only the first t-SNE is drawn and the last two subplots are empty. What is the problem?</p>
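<p>For reference, the underlying matplotlib pattern of drawing each panel into its own <code>Axes</code> looks like the sketch below; a plain scatter stands in for the t-SNE embedding, since this doesn't use scanpy at all:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so no display is required
import matplotlib.pyplot as plt
import numpy as np

conditions = ["AB", "AC", "AD"]
fig, axes = plt.subplots(1, 3, figsize=(12, 4), sharex=True, sharey=True)
rng = np.random.default_rng(0)
for ax, condition in zip(axes, conditions):
    xy = rng.normal(size=(50, 2))  # stand-in for the t-SNE coordinates
    ax.scatter(xy[:, 0], xy[:, 1], s=8)
    ax.set_title(condition)
fig.savefig("tsne_plots.png", dpi=300)
```

<p>Each iteration draws into a distinct <code>axes[i]</code>, so all three panels end up filled.</p>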
|
<python><grid><visualization><scanpy><tsne>
|
2023-04-04 12:16:52
| 1
| 575
|
minoo
|
75,929,123
| 13,836,083
|
Losing Data in TCP socket connectivity
|
<p>I am working on socket programming. Here the server sends a random number of bytes to the machine that connects to it. This random number simulates the fact that the server is also receiving data from somewhere else, so the data length can vary a lot and we are not sure by how much.</p>
<p>I am running both the server and the client on the same machine. I was expecting no data loss, but to my surprise I can see data loss happening here too. I must be doing something wrong in my code. I have tried to find the cause, but I didn't find anything suspicious. I can only guess that the length of the data may be a problem; however, I am still not sure.</p>
<p>From the server I first send the length of the message to the client and then send the actual message, so that I can be sure I have received the complete message.</p>
<p>I connect to the server 100 times, and data loss happens on many of those connections.</p>
<p>Below is my server code:</p>
<pre><code>import socket
import struct
import random as rand
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# bind the socket to a specific address and port
server_address = ('localhost', 12345)
sock.bind(server_address)
sock.listen(1)
while True:
message = "12345" * rand.randint(100000, 200000)
connection, client_address = sock.accept()
message_in_bytes=message.encode()
length = len(message_in_bytes)
print(client_address, "connected and message length is ",length)
length_bytes = struct.pack("<I", length)
connection.send(length_bytes + message_in_bytes)
connection.close()
</code></pre>
<p>Below is client code:</p>
<pre><code>import socket
import struct
def connect_server():
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server_address = ('localhost', 12345)
sock.connect(server_address)
message_length_bytes=sock.recv(4)
length_of_message = struct.unpack("<I",message_length_bytes)[0]
whole_message=sock.recv(length_of_message)
sock.close()
if length_of_message == len(whole_message):
return "success"
else:
return "failed"
stats={'success':0,'failed':0}
for _ in range(100):
stats[connect_server()] +=1
print(stats)
</code></pre>
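<p>For what it's worth, a single <code>recv(n)</code> call may return fewer than <code>n</code> bytes on a TCP stream, so large messages are usually read in a loop. A minimal sketch of such a helper, demonstrated standalone with a local socket pair:</p>

```python
import socket

def recv_all(sock, length):
    """Call recv repeatedly until exactly `length` bytes have arrived."""
    chunks = []
    remaining = length
    while remaining:
        chunk = sock.recv(min(remaining, 65536))
        if not chunk:  # peer closed before sending everything
            raise ConnectionError("socket closed before full message arrived")
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)

# Demonstration with a connected pair of sockets
a, b = socket.socketpair()
payload = b"12345" * 2000  # 10,000 bytes, small enough to fit in the buffer
a.sendall(payload)
received = recv_all(b, len(payload))
a.close()
b.close()
```

<p>The same helper applies on the client side of the question: after reading the 4-byte length prefix, loop until that many bytes have arrived instead of issuing a single <code>recv</code>.</p>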
|
<python><sockets>
|
2023-04-04 12:00:40
| 2
| 540
|
novice
|
75,929,062
| 17,119,272
|
One loop with all lists in one instead of two loops with separated lists
|
<p>I have an algorithmic question. Sorry if it seems silly to you.</p>
<p>Consider I have two lists:</p>
<pre class="lang-py prettyprint-override"><code>list_a = ["Red", "Blue", "Black"]
list_b = ["Samsung", "Apple"]
</code></pre>
<p>So the question is: is iterating through the lists above with two <code>for</code> loops faster, or merging them and using just one?</p>
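<p>For context, visiting both lists in a single loop without first building a merged copy can be done with <code>itertools.chain</code> (just an illustration of the option, not a claim about timing):</p>

```python
from itertools import chain

list_a = ["Red", "Blue", "Black"]
list_b = ["Samsung", "Apple"]

# chain yields items from list_a, then list_b, without allocating a new list
combined = [item.lower() for item in chain(list_a, list_b)]
```

<p>In practice the total work is the same either way (each item is visited once); the difference is only whether an intermediate merged list is allocated.</p>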
|
<python><algorithm>
|
2023-04-04 11:53:53
| 1
| 510
|
Ali Bahaari
|
75,929,002
| 11,942,948
|
How to properly patch sys.argv using mock patch in python unit test
|
<p>I have a file seed_dynamodb.py whose code is below. I want to write a unit test for it using mock patch. I am successfully able to patch boto3. Now I need to patch <code>sys.argv</code> as well. I have tried the test code below, but it is giving an <code>IndexError</code>.</p>
<p>==========seed_dynamodb.py==========</p>
<pre><code>import sys
import boto3
def main(env,region):
dynamodb_client= boto3.client('dynamodb')
timestamp = '1234567'
table_name = 'syn-fcad-nielsen-' + env + '-time'
print(f'{table_name=}')
if env == 'uat':
timestamp = 1234567
if env == 'prod':
timestamp = 1234567
response = dynamodb_client.get_item(TableName=table_name,
Key={'BaseTime':{'S':'Timestamp'}})
if 'Item' in response:
print("Item exists in Dynamo DB table")
timestamp = response['Items']['Value']['N']
else:
response = dynamodb_client.put_item(TableName=table_name,
Item={
'BaseTime':{'S':'Timestamp'},
'Value': {'N': timestamp}
})
env = sys.argv[1]
region = sys.argv[2]
l = len(sys.argv)
print(f'{env=}{region=}{l=}')
main(env,region)
</code></pre>
<p>=======================test_dynamo.py=========</p>
<pre><code>from module import seed_dynamodb
import unittest
from unittest import mock
from unittest.mock import patch
import boto3
import sys
@mock.patch("module.seed_dynamodb.boto3.client")
class SeedDynamoDBTest(unittest.TestCase):
@patch.object(boto3, "client")
@patch.object(sys, 'argv', ["pr", "us-east-1"])
def test_seed_dynamodb(self, *args):
mock_boto = args[0]
mock_dynamo = args[0]
mock_dynamo.get_item.return_value = {"Item": {"Value": {"N": 1678230539}}}
mock_dynamo.put_item.return_value = {"Item": {"BaseTime": {"S": "Timestamp"}, "Value": {"N": 1678230539}}}
seed_dynamodb.dynamodb_client = mock_dynamo
self.assertIsNotNone(mock_boto.client.get_item.return_value)
# seed_dynamodb.main("pr-173", "us-east-1")
if __name__ == "__main__":
unittest.main(verbosity=2)
</code></pre>
<p>I am getting below issue:</p>
<pre><code>env = sys.argv[1]
IndexError: list index out of range
</code></pre>
<p>Can you please help me fix this issue, or show how to write a test case that patches <code>sys.argv</code>?</p>
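<p>For reference, when <code>sys.argv</code> is patched directly, index 0 conventionally holds the program name, so the real arguments need to start at index 1. A standalone sketch of that pattern, using a toy <code>read_args</code> rather than the script above:</p>

```python
import sys
from unittest.mock import patch

def read_args():
    # Real arguments start at index 1; argv[0] is the program name
    return sys.argv[1], sys.argv[2]

# Patch argv with a dummy program name at position 0
with patch.object(sys, "argv", ["seed_dynamodb.py", "pr", "us-east-1"]):
    env, region = read_args()
```

<p>A two-element patched list like <code>["pr", "us-east-1"]</code> leaves nothing at index 2, which is one way an <code>IndexError</code> can appear.</p>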
|
<python><unit-testing><mocking><patch><sys>
|
2023-04-04 11:47:48
| 3
| 417
|
Deepak Gupta
|
75,928,961
| 1,616,738
|
ICE connection is stuck on checking and PeerConnection ConnectionState is stuck on connecting in Python Linux desktop application
|
<p>I am currently working on a video calling application for Linux desktops. To establish peer-to-peer connections, I have used aiortc for RTC connections and utilized WebSockets for handling offer, answer, candidate, and message data transactions. Moreover, I have integrated OpenCV for video capture and streaming through aiortc.</p>
<p>However, I am facing an issue where the ICE connection state remains stuck in the "checking" phase and does not progress further.</p>
<p>Please have a look at my code and help me with the issue:</p>
<pre><code>import asyncio
import json
import time
import cv2
import websocket
import threading
from aiortc import (RTCIceCandidate,
RTCPeerConnection,
RTCSessionDescription,
VideoStreamTrack,
RTCConfiguration,
RTCIceServer)
from aiortc.contrib.media import MediaBlackhole
from CameraVideoStreamTrack import CameraVideoStreamTrack
from OpenCVVideoStreamTrack import OpenCVVideoStreamTrack
from VideoReceiver import VideoReceiver
class WebRTCClient:
def __init__(self, signaling_url, main):
self.local_video_track = None
self.Main = main
self.signaling_url = signaling_url
self.ws = None
self.pc = RTCPeerConnection(
configuration=RTCConfiguration(iceServers=[RTCIceServer(urls=['stun:stun.l.google.com:19302'])]))
self.playing = False
self.pc.on('datachannel', self.on_datachannel)
self.pc.on('track', self.on_track)
#self.pc.on("track", lambda track: asyncio.ensure_future(self.on_track(track)))
self.pc.on('iceconnectionstatechange', self.on_iceconnectionstatechange)
self.pc.on("icecandidate",self.on_icecandidate)
self.pc.on("icegatheringstatechange",self.on_icegatheringstatechange)
def connect(self):
self.ws = websocket.WebSocketApp(
self.signaling_url,
on_open=self.ws_on_open,
on_message=self.ws_on_message,
on_error=self.ws_on_error,
on_close=self.ws_on_close)
ws_thread = threading.Thread(target=self.ws.run_forever)
ws_thread.daemon = True
ws_thread.start()
async def create_offer(self):
# Add an audio transceiver with sendrecv direction
self.pc.addTransceiver("audio", direction="sendrecv")
self.pc.addTransceiver("video", direction="sendrecv")
if self.local_video_track is None:
# Capture local video stream
self.local_video_track = OpenCVVideoStreamTrack()
# Create a local video receiver thread
local_video_receiver = VideoReceiver(self.local_video_track)
local_video_receiver.update_signal.connect(lambda img: self.Main.update_image(img, "local"))
local_video_receiver.start()
# Add the local video track to the connection
self.pc.addTrack(self.local_video_track)
offer = await self.pc.createOffer()
print(f'Offer sending: {offer}')
await self.pc.setLocalDescription(offer)
print('Offer sending 2')
self.ws.send(json.dumps(self.pc.localDescription, default=lambda o: o.__dict__, sort_keys=True, indent=4))
print('Offer sent 2')
async def on_track(self, track):
print(f"Track {track.kind} received")
if track.kind == "audio":
print(f"Track {track.kind} received")
#self.pc.addTrack(self.player.audio)
#self.recorder.addTrack(track)
elif track.kind == "video":
print(f"Track {track.kind} received")
while True:
_, frame = await track.recv()
return frame
if not frame:
print(f"Track {track.kind} not received done")
break
# Convert frame to numpy array and update QLabel
img = frame.to_ndarray(format="bgr24")
self.Main.update_image(img, "remote")
print(f"Track {track.kind} received done 2")
print(f"Track {track.kind} received done")
# remote_video = VideoTransformTrack(track)
#
# # Create a remote video receiver thread
# remote_video_receiver = VideoReceiver(remote_video)
# remote_video_receiver.update_signal.connect(lambda img: self.Main.update_image(img, "remote"))
# remote_video_receiver.start()
def on_datachannel(self, channel):
@channel.on("message")
def on_message(message):
print(f"datachannel message: {message}")
if isinstance(message, str) and message.startswith("ping"):
channel.send("pong" + message[4:])
async def on_iceconnectionstatechange(self):
print(f"ICE connection state is {self.pc.iceConnectionState}")
print(f"Connection state is {self.pc.connectionState}")
if self.pc.iceConnectionState == "failed":
await self.pc.close()
async def on_icecandidate(self, candidate):
if candidate:
print("Local ICE candidate:", candidate)
# Send the candidate to the remote peer using your signaling server
jsoncandidate = json.dumps(candidate, default=lambda o: o.__dict__, sort_keys=True, indent=4)
print(f"Local ICE candidate: {candidate}")
self.ws.send(jsoncandidate)
async def on_icegatheringstatechange(self):
print("ICE gathering state changed to:", self.pc.iceGatheringState)
if self.pc.iceGatheringState == "complete":
print("All local ICE candidates have been gathered")
async def add_ice_candidate(self, message):
print(f'Candidate message {message}')
#message = json.loads(candidate)
if message["candidate"]["sdp"] is not None:
parts = message["candidate"]["sdp"].split()
foundation = parts[0].split(':')[1]
component_id = int(parts[1])
protocol = parts[2]
priority = int(parts[3])
ip_address = parts[4]
port = int(parts[5])
candidate_type = parts[7]
candidate = RTCIceCandidate(
foundation=foundation,
component=component_id,
protocol=protocol,
priority=priority,
ip=ip_address,
port=port,
type=candidate_type
)
candidate.sdpMLineIndex = message["candidate"]["sdpMLineIndex"]
candidate.sdpMid = message["candidate"]["sdpMid"]
print("Start Adding Candidate")
await self.pc.addIceCandidate(candidate)
print("End Adding Candidate")
async def close(self):
await self.ws.close()
await self.pc.close()
def ws_disconnect(self):
self.ws.close()
def ws_on_open(self, ws):
print("WebSocket connection opened")
def ws_on_message(self, ws, message):
print(f"Received message: {message}")
msg = self.parse_json(message)
asyncio.run(self.read_message(msg))
def ws_on_error(self, ws, error):
print(f"WebSocket error: {error}")
def ws_on_close(self, ws):
self.pc.close()
print("WebSocket connection closed")
async def read_message(self, message):
if message["type"] and message["type"].lower() == "offer":
print('Handle Offer')
offer = RTCSessionDescription(message["sdp"], "offer")
await self.handle_offer(offer)
elif message["type"] and message["type"].lower() == "answer":
print('Handle Answer')
self.pc.addTransceiver("video", direction="sendrecv")
self.pc.addTransceiver("audio", direction="sendrecv")
# Make sure to convert the remote_answer string to a RTCSessionDescription object
answer = RTCSessionDescription(message["sdp"], message["type"].lower())
print('Handle Answer 1')
await self.pc.setRemoteDescription(answer)
print('Handle Answer Done')
elif message["candidate"]:
await self.add_ice_candidate(message)
def parse_json(self, json_str):
# Parse the JSON string into a dictionary
data = json.loads(json_str)
# Convert all property names to lowercase
data = {key.lower(): value for key, value in data.items()}
return data
async def handle_offer(self, offer):
# handle offer
# Add transceiver
self.pc.addTransceiver("video")
self.pc.addTransceiver("audio")
time.sleep(1)
print("Handling Offer")
await self.pc.setRemoteDescription(offer)
print("Set Remote Description Done")
print("Recoder Started")
# send answer
answer = await self.pc.createAnswer()
print(f"Answer Created: {answer}")
await self.pc.setLocalDescription(answer)
print("Set Local Description Done")
# Send the answer to the other peer
ans = json.dumps(answer, default=lambda o: o.__dict__, sort_keys=True, indent=4)
print(f"answer sent json {ans}")
self.ws.send(ans)
print(f"answer sent {answer}")
class VideoTransformTrack(VideoStreamTrack):
def __init__(self, track):
super().__init__() # don't forget this!
self.track = track
async def recv(self):
return await self.track.recv()
</code></pre>
|
<python><linux><webrtc><pyqt6><aiortc>
|
2023-04-04 11:44:25
| 0
| 1,954
|
Pavan V Parekh
|
75,928,922
| 6,439,229
|
Is it possible to pass kwargs to customised python enum
|
<p>You can customise an enum so the members have extra attributes.
There are plenty of examples of that to be found.
They all look something like this:</p>
<pre><code>class EnumWithAttrs(Enum):
def __new__(cls, *args, **kwargs):
value = len(cls.__members__) + 1
obj = object.__new__(cls)
obj._value_ = value
return obj
def __init__(self, a, b):
self.a = a
self.b = b
GREEN = 'a', 'b'
BLUE = 'c', 'd'
</code></pre>
<p>this example is from <a href="https://stackoverflow.com/a/19300424/6439229">https://stackoverflow.com/a/19300424/6439229</a></p>
<p>The majority of the examples I found show <code>__new__</code> accepting kwargs, but is it actually syntactically possible to pass kwargs to <code>__new__</code>?</p>
<p>This gives SyntaxErrors:</p>
<pre><code>class EnumWithAttrs(Enum):
GREEN = 'a', foo='b'
BLUE = 'c', **{'bar': 'd'}
</code></pre>
<p>So does this functional syntax:</p>
<pre><code>Color = EnumWithAttrs('Color', [('GREEN', 'a', foo='b'), ('BLUE', 'c', **{'bar': 'd'})])
</code></pre>
<p>Do people put the <code>**kwargs</code> in the function def of <code>__new__</code> just out of habit, or is there really a way to use them?</p>
<p>EDIT:
I'm not looking for a workaround to using kwargs, like passing a dict as a positional argument.</p>
|
<python><enums><keyword-argument>
|
2023-04-04 11:39:57
| 2
| 1,016
|
mahkitah
|
75,928,730
| 1,432,980
|
return JSON payload and byte response
|
<p>I have REST endpoint that returns</p>
<pre><code>{
"dataset": [
{
"name": "file1",
"location": "https://container/file1.csv"
},
{
"name": "file2",
"location": "https://container/file2.csv"
}
]
}
</code></pre>
<p>I return it as a pydantic model that looks like this</p>
<pre><code>class ResponseBodySuccessDataset(BaseModel):
name: str = Field(description="Name of the generated data object")
location: HttpUrl = Field(
description="URL of generated data object as returned to requestor",
)
class DummyDataResponseBodySuccess(BaseModel):
dataset: list[ResponseBodySuccessDataset] = Field(
description="Dataset array as returned to requestor",
)
</code></pre>
<p>However, aside from returning the JSON payload, I also want to return an archive of all the files in the dataset.</p>
<p>I tried using <code>Response</code></p>
<pre><code> return Response(content=zip_stream.getvalue(), media_type="application/zip", headers=dummy_datasets[0])
</code></pre>
<p>by zipping all the files and passing the dataset as a header (I use only the first element, as it requires an additional key to make the whole response a dictionary).</p>
<p>However, I also want to have the JSON payload in the response. Is there a way to achieve this?</p>
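<p>For what it's worth, one workaround I've seen (an assumption on my part, not something FastAPI-specific) is to embed the archive as base64 inside the JSON body, so a single JSON response carries both the dataset metadata and the bytes. A standard-library sketch of the round trip:</p>

```python
import base64
import io
import json
import zipfile

# Build the zip archive in memory
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("file1.csv", "a,b\n1,2\n")

# JSON payload carrying both the dataset metadata and the archive bytes
payload = {
    "dataset": [{"name": "file1", "location": "https://container/file1.csv"}],
    "archive_b64": base64.b64encode(buf.getvalue()).decode("ascii"),
}
body = json.dumps(payload)

# Client side: decode the field and reopen the archive
decoded = base64.b64decode(json.loads(body)["archive_b64"])
names = zipfile.ZipFile(io.BytesIO(decoded)).namelist()
```

<p>The trade-off is the ~33% size overhead of base64; for large archives, a multipart response or a separate download URL is usually preferred.</p>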
|
<python><fastapi>
|
2023-04-04 11:20:19
| 0
| 13,485
|
lapots
|
75,928,414
| 14,269,252
|
Range-slider or similar solution to avoid overlapping labels on the Y axis in Plotly
|
<p>I cannot find a solution to my question. I made a plot with Plotly Express; does a range slider exist for the Y axis? I already implemented a range slider for the X axis as follows, but the Y axis is messy. I was trying to come up with a solution, but I cannot implement a range slider on the Y axis. Is there any way to make this chart perform better?</p>
<pre><code>def chart(df):
color_discrete_map = {'df1': 'rgb(255,0,0)',
'df2': 'rgb(0,255,0)',
'df3': '#11FCE4',
'df4': '#9999FF',
'df5': '#606060',
'df6': '#CC6600'}
fig = px.scatter(df, x='DATE', y='CODE', color='SOURCE', width=1200,
height=800,color_discrete_map=color_discrete_map)
fig.update_layout(xaxis_type='category')
fig.update_layout(yaxis={'dtick':1})
fig.update_layout(
margin=dict(l=250, r=0, t=0, b=20),
)
# fig.update_layout(xaxis=dict(tickformat="%y-%m", tickmode = 'linear', tick0 = 0.5,dtick = 0.75))
# Add range slider
fig.update_layout(
xaxis=dict(
rangeselector=dict(
buttons=list([
dict(count=1,
label="1m",
step="month",
stepmode="backward"),
dict(count=6,
label="6m",
step="month",
stepmode="backward"),
dict(count=1,
label="YTD",
step="year",
stepmode="todate"),
dict(count=1,
label="1y",
step="year",
stepmode="backward"),
dict(step="all")
])
),
rangeslider=dict(
visible=True
),
type="date"
)
)
# Add range slider
fig.update_layout(yaxis={'dtick':1})
fig.update_xaxes(tickformat = '%Y-%B', dtick='M1')
fig.update_xaxes(ticks= "outside",
ticklabelmode= "period",
tickcolor= "black",
ticklen=10,
minor=dict(
ticklen=4,
dtick=7*24*60*60*1000,
tick0="2016-07-03",
griddash='dot',
gridcolor='white')
)
st.plotly_chart(fig)
return
</code></pre>
<p><a href="https://i.sstatic.net/FdQtX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FdQtX.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><plot><charts><plotly>
|
2023-04-04 10:44:27
| 1
| 450
|
user14269252
|
75,928,216
| 1,850,007
|
Implementing __mul__ for timeseries in Python for different items
|
<p>I am trying to implement a piecewise constant function class in Python.
I am wondering what the best way is to implement these <code>__mul__</code> and <code>__add__</code> functions.
Essentially, I want to be able to handle two cases: multiplying by a constant or multiplying by another timeseries.</p>
<p>For multiplying or adding a constant, I am just going to apply the constant to the values and keep the existing timepoints. For multiplying by a timeseries, I will construct a new series whose timepoints are the union of both timepoint sets and multiply/add them one by one.</p>
<p>Whilst I know how to do both, should I just do</p>
<pre><code>if type(other) == int or type(other) == float:
# implementation
else:
# implementation
</code></pre>
<p>This looks very ugly to me. In C++, one would be able to write operator overloads with different input types. What is the Pythonic way of doing this? In particular, I am concerned that I might have missed an eligible type with a statement like this one:</p>
<pre><code>if type(other) == int or type(other) == float:
</code></pre>
<p>Also, the implementations of the add, mul, sub, and div functions will be almost identical (except for the operator bit). Is there a way of writing one function which each of them would just call?</p>
<p>Essentially I was thinking about a method like</p>
<pre><code>def operation_by_const(self, operator, other):
return PiecewiseConstant(self.times, self.values .... other)
</code></pre>
<p>where <code>...</code> should be replaced by applying the add/mul/div/sub operator from the np.array to the constant, but I am not sure what the correct syntax for this is in Python. Even though this function does not save much ink for the multiply-by-constant case, it would save a lot of copy-pasting for the timeseries-multiplied-by-timeseries implementation.</p>
<p>Here is the rest of the code</p>
<pre><code>from bisect import bisect_left
from functools import cached_property
import numpy as np
class PiecewiseConstant:
def __init__(self, times: np.array, values: np.array):
# entire non-empty
assert(len(times) == len(values))
assert(len(times))
for i, j in zip(times[0:-1], times[1:]):
if i >= j:
raise ValueError("Time series needs to have increasing indices")
self.times = times
self.values = values
def __getitem__(self, time):
if time < self.times[0]:
return self.values[0]
time_index = bisect_left(self.times, time)
# if time is greater than all our time points or less than the time corresponding to the index
if time_index == len(self.times) or time < self.times[time_index]:
return self.values[time_index - 1]
return self.values[time_index]
def __setitem__(self, time):
raise NotImplementedError("Setting values of timeseries through indexing is not allowed")
def __mul__(self, other):
pass
def __add__(self, other):
pass
</code></pre>
<p>EDIT: I have now done this. Would appreciate some feedback.</p>
<pre><code>def apply_binary_operator(self, operator, other):
assert isinstance(other, PiecewiseConstant)
if self.times == other.times:
return PiecewiseConstant(self.times, f"self.values.{operator}(other.values)")
else:
times = []
values = []
self_it = 0
other_it = 0
self_time = self.times[self_it]
other_time = other.values[other_it]
while self_it != len(self.times) and other_it != len(other.times):
if self_time <= other_time:
times.append(self_time)
values.append(eval(f"self.values[self_it].{operator}(other[self_time])"))
self_it += 1
if self_time == other_time:
other_it += 1
other_time = other.times[other_it]
self_time = self.times[self_it]
else:
times.append(other_time)
values.append(eval(f"other.values[other_it].{operator}(self[other_time])"))
other_it += 1
def __mul__(self, other):
if isinstance(other, PiecewiseConstant):
return self.apply_binary_operator("__mul__", other)
</code></pre>
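<p>As a point of feedback, the usual way to collapse the near-identical dunder bodies is to pass a function from the <code>operator</code> module instead of building operator-name strings for <code>eval</code>, and to use <code>numbers.Number</code> for the scalar check so no numeric type is missed. A simplified sketch covering only the scalar case:</p>

```python
import numbers
import operator

class PiecewiseConstantSketch:
    def __init__(self, times, values):
        self.times = list(times)
        self.values = list(values)

    def _apply(self, op, other):
        # Scalar case: apply the operator pointwise, keeping the timepoints
        if isinstance(other, numbers.Number):
            return PiecewiseConstantSketch(
                self.times, [op(v, other) for v in self.values])
        return NotImplemented

    def __mul__(self, other):
        return self._apply(operator.mul, other)

    def __add__(self, other):
        return self._apply(operator.add, other)

    def __sub__(self, other):
        return self._apply(operator.sub, other)

    def __truediv__(self, other):
        return self._apply(operator.truediv, other)
```

<p>Passing real function objects keeps the code introspectable and avoids <code>eval</code> on strings; the timeseries-union branch would slot into <code>_apply</code> the same way.</p>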
|
<python><operator-overloading>
|
2023-04-04 10:20:53
| 1
| 1,062
|
Lost1
|
75,928,204
| 4,502,950
|
Fill nan based on two columns in another data frame
|
<p>I have two data frames with DIFFERENT COLUMN HEADERS (highlighting this only because people are marking the question as a duplicate):</p>
<pre><code>df1 = pd.DataFrame({'Date':['7/03/2022', '9/3/2022'],
'Client':['Client 1','Client 2'],
'Course 2':['Computer skill','CCC']})
df2 = pd.DataFrame({'Session Date':['7/03/2022', '9/3/2022'],
'Org':['Client 1','Client 3'],
'Session name':[np.nan,'CCC']})
</code></pre>
<p>What I want to do is fill the null values in <code>Session name</code> in df2 with the corresponding value from df1 when the client and date are the same.</p>
<p>This is the code that I have</p>
<pre><code>merged_df = pd.merge(df1, df2, left_on=['Date', 'Client'], right_on=['Session Date', 'Org'], how='inner')
df2['Session Name'] = merged_df.apply(lambda x: x['Course 2'] if pd.isna(x['Session Name']) else x['Session Name'], axis=1)
df2
</code></pre>
<p>But it's obviously not working; the output it prints is</p>
<pre><code> Session Date Org Session Name
0 7/03/2022 Client 1 Computer skill
1 10/3/2022 Client 3 NaN
</code></pre>
<p>Whereas it should print</p>
<pre><code> Session Date Org Session Name
0 7/03/2022 Client 1 Computer skill
1 10/3/2022 Client 3 CCC
</code></pre>
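<p>For comparison, a left merge that keeps df2's rows, followed by <code>fillna</code>, would produce the expected frame. This is a sketch with the sample data above; note the lowercase <code>'Session name'</code> column name, which the frames actually use:</p>

```python
import numpy as np
import pandas as pd

df1 = pd.DataFrame({'Date': ['7/03/2022', '9/3/2022'],
                    'Client': ['Client 1', 'Client 2'],
                    'Course 2': ['Computer skill', 'CCC']})
df2 = pd.DataFrame({'Session Date': ['7/03/2022', '9/3/2022'],
                    'Org': ['Client 1', 'Client 3'],
                    'Session name': [np.nan, 'CCC']})

# how='left' keeps every df2 row (and its order); unmatched rows get NaN
merged = df2.merge(df1, left_on=['Session Date', 'Org'],
                   right_on=['Date', 'Client'], how='left')
df2['Session name'] = merged['Session name'].fillna(merged['Course 2'])
```

<p>An inner merge, by contrast, drops the rows that have no match, which is one reason row counts can stop lining up.</p>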
|
<python><pandas><dataframe>
|
2023-04-04 10:19:53
| 3
| 693
|
hyeri
|
75,927,965
| 146,289
|
How can the Django ORM perform an aggregate subquery in a WHERE statement?
|
<p>I'm trying to construct a query similar to the following in the Django ORM:</p>
<p><code>SELECT * FROM table WHERE depth = (SELECT MIN(depth) FROM table)</code></p>
<p>How can this be written in Django ORM notation? So far it seems hard to use an aggregate like this, because <code>QuerySet.aggregate()</code> isn't lazily evaluated, but executes directly.</p>
<p>I'm aware that this basic example could be written as <code>Model.objects.filter(depth=Model.objects.aggregate(m=Min('depth'))['m'])</code>, but then it does not evaluate lazily and needs 2 separate queries. For my more complex case, I definitely need a lazily evaluated queryset.</p>
<hr />
<p>FYI, things I've tried and failed:</p>
<ul>
<li>a subquery with <code>Model.objects.order_by().values().annotate(m=Min('depth')).values('m')</code> will result in a <code>GROUP BY id</code> that seems hard to lose.</li>
<li>a subquery with <code>Model.objects.annotate(m=Min('depth')).filter(depth=F('m'))</code> will give a <code>GROUP BY id</code>, and include the <code>m</code> value in the main results as that's what annotate does.</li>
</ul>
<p>My current workaround is using <code>QuerySet.extra(where=[...])</code> but I'd much rather like to see the ORM generate that code.</p>
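<p>For reference, the equivalent single lazily-evaluated query can usually be expressed with <code>Subquery</code> over an ordered, sliced queryset, since <code>ORDER BY depth LIMIT 1</code> yields the same value as <code>MIN(depth)</code>. A sketch (assuming a model named <code>Model</code> with a <code>depth</code> field):</p>

```
from django.db.models import Subquery

# SELECT * FROM table
#  WHERE depth = (SELECT depth FROM table ORDER BY depth LIMIT 1)
qs = Model.objects.filter(
    depth=Subquery(Model.objects.order_by('depth').values('depth')[:1])
)
```

<p>The outer queryset stays lazy, and the slice keeps the inner query scalar without introducing a <code>GROUP BY</code>.</p>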
|
<python><django><django-orm>
|
2023-04-04 09:53:30
| 2
| 22,626
|
vdboor
|
75,927,945
| 1,720,897
|
What does `pip install unstructured[local-inference]` do?
|
<p>I was following a tutorial on langchain, and after using loader.load() to load a PDF file, it gave me an error suggesting that some dependencies were missing and that I should install them using <code>pip install unstructured[local-inference]</code>. So I did, but it then started installing a whole lot of packages, many of them <code>nvidia-*</code> packages. Can someone please explain what this command does? It took a good couple of hours to complete.</p>
|
<python><langchain>
|
2023-04-04 09:51:17
| 4
| 1,256
|
user1720897
|
75,927,772
| 10,564,162
|
Python script on Docker printing twice while in detached mode
|
<p>I am running a simple python script that prints "Hello" every minute</p>
<p><strong>app.py</strong></p>
<pre><code>import time
from croniter import croniter
from pytz import timezone
from datetime import datetime
def main():
print('Starting...')
# Set the timezone to 'Europe/London' to account for daylight savings
gtm = timezone('Europe/London')
# Create a cron expression that runs every 1 minute
cron_expr = '* * * * *'
while True:
# Get the current time in GMT
now = datetime.now(gtm)
# Get the next run time of the cron job
next_run_time = croniter(cron_expr, now).get_next(datetime)
# Wait until the next run time
wait_time = (next_run_time - now).total_seconds()
# Make sure it does not run straight away
time.sleep(wait_time)
# Run job at the scheduled time
print('Hello')
main()
</code></pre>
<p>The script works well but when I run it on Docker and use its detached mode:</p>
<pre><code>docker run -d image
</code></pre>
<p>I get "Hello" printed twice every minute</p>
<p>If I do</p>
<pre><code>docker run -it image
</code></pre>
<p>It works well too and only prints once every minute</p>
<p>This is my <strong>Dockfile</strong></p>
<pre><code># Use an official Python runtime as a parent image
FROM python:3.11-slim-buster
# Set the working directory to /app
WORKDIR /app
# Create and activate a virtual environment
RUN python -m venv venv
ENV PATH="/app/venv/bin:$PATH"
RUN . venv/bin/activate
# Copy the requirements.txt file into the container
COPY requirements.txt .
# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the application's code into the container
COPY . .
# Run the command to start the application
CMD ["python", "-u", "app.py"]
</code></pre>
|
<python><docker>
|
2023-04-04 09:33:59
| 0
| 2,668
|
Álvaro
|
75,927,766
| 1,581,090
|
How to avoid a recursion error when returning python dict elements as attributes?
|
<p>Suppose you want to return elements of a Python dictionary as attributes of the class itself; you might implement it as follows:</p>
<pre><code>class MyClass:
def __init__(self):
print(self.x)
self.metadata = {
"a": 42,
"b": 1,
}
def __getattr__(self, item):
if item in self.metadata:
return self.metadata[item]
else:
raise KeyError(f"No attribute {item} found.")
test = MyClass()
</code></pre>
<p>But that implementation raises a <code>RecursionError</code>.</p>
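<p>For context, the loop happens because <code>__getattr__</code> runs whenever normal lookup fails — including for <code>self.metadata</code> itself whenever it is touched before <code>__init__</code> has set it (as in the <code>print(self.x)</code> line). One common fix is to read <code>metadata</code> through <code>self.__dict__</code>, which never re-enters <code>__getattr__</code>; a sketch:</p>

```python
class MyClass:
    def __init__(self):
        self.metadata = {"a": 42, "b": 1}

    def __getattr__(self, item):
        # self.__dict__ access never triggers __getattr__, so no recursion
        metadata = self.__dict__.get("metadata", {})
        if item in metadata:
            return metadata[item]
        # AttributeError (not KeyError) is what the attribute protocol expects,
        # so things like hasattr() behave correctly
        raise AttributeError(f"No attribute {item} found.")

test = MyClass()
```

<p>Raising <code>AttributeError</code> instead of <code>KeyError</code> also keeps <code>hasattr()</code> and <code>getattr(obj, name, default)</code> working as expected.</p>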
|
<python>
|
2023-04-04 09:33:20
| 1
| 45,023
|
Alex
|
75,927,519
| 9,974,205
|
Problem installing a Python program from GitHub
|
<p>I am trying to install the <a href="https://github.com/DamascenoRafael/mqtt-simulator" rel="nofollow noreferrer">MQTT simulator</a>.</p>
<p>I have python v3.3.10 installed on my computer. I followed <a href="https://www.youtube.com/watch?v=jjF9wmA4ik4&t=398s" rel="nofollow noreferrer">this video</a> between 1:48 and 5:43, so now I have a folder on my desktop called python_project, inside of which there is a folder called venv.</p>
<p>I have downloaded mqtt-simulator-master from GitHub, put it inside python_project, and copied the contents of mqtt-simulator-master into python_project itself. I have activated the virtual environment with <code>C:\Users\Me\Desktop\python_project>.\venv\Scripts\activate</code>.</p>
<p>Then I wrote <code>C:\Users\Jaime\Desktop\python_project>python3 -m venv venv</code> in the CMD,
which didn't have any visible output.</p>
<p>The next line was <code>source venv/bin/activate</code>, which resulted in an error saying that <code>source</code> is an unrecognized command.</p>
<p>Then I wrote <code>pip3 install -r requirements.txt</code>. This gave me an output with the warning</p>
<blockquote>
<p>DEPRECATION: paho-mqtt is being installed using the legacy 'setup.py
install' method, because it does not have a 'pyproject.toml' and the
'wheel' package is not installed. pip 23.1 will enforce this behaviour
change. A possible replacement is to enable the '--use-pep517' option.
Discussion can be found at <a href="https://github.com/pypa/pip/issues/8559" rel="nofollow noreferrer">https://github.com/pypa/pip/issues/8559</a></p>
</blockquote>
<p>and the output <code>Successfully installed paho-mqtt-1.5.0</code></p>
<p>However, if I write <code>python3 mqtt-simulator/main.py</code>, i get the following error:</p>
<blockquote>
<p>File "C:\Users\Me\Desktop\python_project\mqtt-simulator\main.py",
line 3, in
from simulator import Simulator File "C:\Users\Me\Desktop\python_project\mqtt-simulator\simulator.py",
line 2, in
from topic import TopicAuto File "C:\Users\Me\Desktop\python_project\mqtt-simulator\topic.py", line
6, in
import paho.mqtt.client as mqtt ModuleNotFoundError: No module named 'paho'</p>
</blockquote>
<p>I need some tips on how to make this work since I am clueless about what to do.</p>
|
<python><json><windows><github><pip>
|
2023-04-04 09:09:34
| 1
| 503
|
slow_learner
|
75,927,503
| 6,695,297
|
Use Column Value as key and another Column Value as dict from django ORM
|
<p>In Django, I am trying to build the output below without using a for loop, to optimise the operation.</p>
<p>My model structure looks like this</p>
<pre><code>class Test(BaseClass):
    id = models.UUIDField(unique=True)
    ref = models.ForeignKey(Ref, on_delete=models.CASCADE)
    name = models.CharField(max_length=50)
</code></pre>
<p>Sample data in the table</p>
<pre><code>+--------------------------------------+--------+----------------+
| id | ref_id | name |
+--------------------------------------+--------+----------------+
| 412dacb1-6451-4a1a-b3ac-09a30979b733 | 1 | test |
| fa7abc8e-2070-40b6-af84-6a8d78676b89 | 2 | new_rule |
| dd70a968-778c-4e45-9044-599effd243a6 | 3 | new_rule_test |
+--------------------------------------+--------+----------------+
</code></pre>
<p>And I need to output in dict something like this</p>
<pre><code>{
1: {
"name": "test",
"id": UUID(412dacb1-6451-4a1a-b3ac-09a30979b733)
},
2: {
"name": "new_rule",
"id": UUID(fa7abc8e-2070-40b6-af84-6a8d78676b89)
},
3: {
"name": "new_rule_test",
"id": UUID(dd70a968-778c-4e45-9044-599effd243a6)
}
}
</code></pre>
<p>In the above output, the key is <code>ref_id</code> and the value is again a dict having name and id.</p>
<p>I have tried <code>annotate</code>, but it is not working. Only the code below works, but I need something more optimised as the data is massive.</p>
<pre><code>res_dict = Test.objects.filter(publisher_state='Published').values('ref', 'name', 'id')
final_dict = {row['ref']: {'name': row['name'], 'id':row['id']} for row in res_dict}
</code></pre>
|
<python><python-3.x><django><django-models><django-orm>
|
2023-04-04 09:08:32
| 1
| 1,323
|
Shubham Srivastava
|
75,927,440
| 11,193,122
|
Group by a column and then sum an array column elementwise in pyspark
|
<p>Hi I have a pyspark dataframe in the form:</p>
<pre><code> CATEGORY VALUE
0 A [4, 5, 6]
1 A [1, 2, 3]
2 B [7, 8, 9]
</code></pre>
<p>I would like my output to be</p>
<pre><code> CATEGORY VALUE
0 A [5, 7, 9]
1 B [7, 8, 9]
</code></pre>
<p>The actual dataframe is ~2 billion records and each array is ~1500 elements, so this needs to be as efficient as possible. I have tried expanding the array into columns and then grouping, which worked fine on my sample, but I need a more efficient solution for the full dataframe.</p>
<p>Thanks!</p>
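<p>The core per-group operation is an elementwise sum of equal-length arrays. A minimal sketch of that reduction in plain Python (in PySpark this is typically expressed with <code>posexplode</code> plus a <code>groupBy</code>, or with an array <code>aggregate</code>/<code>zip_with</code> expression; this snippet only illustrates the logic, not a distributed solution):</p>

```python
# groups taken from the question's sample data
rows = {
    "A": [[4, 5, 6], [1, 2, 3]],
    "B": [[7, 8, 9]],
}

# zip(*arrays) pairs up the i-th elements of every array in the group,
# so each "col" tuple holds one position across all rows
summed = {cat: [sum(col) for col in zip(*arrays)]
          for cat, arrays in rows.items()}

print(summed)  # {'A': [5, 7, 9], 'B': [7, 8, 9]}
```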
|
<python><pyspark>
|
2023-04-04 09:02:11
| 1
| 336
|
liamod
|
75,927,299
| 13,775,842
|
After running an open3d window and closing it, tkinter messagebox crashes and gives X Error of failed request: BadWindow (invalid Window Parameter)
|
<p>I am trying to add a messagebox yes no question after closing a open3d window.</p>
<p>code to reproduce the error:</p>
<pre class="lang-py prettyprint-override"><code>open3d.visualization.draw_geometries([])
#create yes no
dialougdialog = messagebox.askquestion("Remove Object", "Remove Object?", icon='warning')
</code></pre>
<p>i cant find an answer online and chatgpt gives me a generic answer that I don't understand.
if someone had this issue or similar i would be grateful to know how you handled this error.</p>
<p>Thank you in advance.</p>
<p>the error produce after clicking on "yes":</p>
<pre><code>X Error of failed request: BadWindow (invalid Window parameter)
Major opcode of failed request: 15 (X_QueryTree)
Resource id in failed request: 0x6a04d88
Serial number of failed request: 58195
Current serial number in output stream: 58195
</code></pre>
|
<python><tkinter><open3d>
|
2023-04-04 08:46:40
| 2
| 483
|
Meyer Buaharon
|
75,927,084
| 1,864,294
|
Are pickle files reproducible?
|
<p>I would like to hash some <code>pickle</code> files for verification but I wonder if Python's <code>pickle</code> always produces the same output for the same input, at least within a protocol version? I wonder if the OS makes a difference? Do you have any references?</p>
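<p>As a quick sanity check (one case, not a general guarantee): two equal dicts built in the same insertion order pickle to identical bytes under a fixed protocol. Note, however, that containers whose iteration order depends on hashing, such as <code>set</code>s of strings, can serialize differently across interpreter runs when hash randomization is enabled, so byte-identical pickles across processes are not guaranteed in general:</p>

```python
import pickle

a = {"x": 1, "y": [1, 2, 3]}
b = {"x": 1, "y": [1, 2, 3]}

# same protocol, same insertion order -> identical byte strings here
assert pickle.dumps(a, protocol=4) == pickle.dumps(b, protocol=4)

# the protocol version changes the bytes, so pin it when hashing
print(len(pickle.dumps(a, protocol=2)), len(pickle.dumps(a, protocol=4)))
```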
|
<python><hash><pickle>
|
2023-04-04 08:23:19
| 1
| 20,605
|
Michael Dorner
|
75,927,048
| 1,571,823
|
stacking 0s and 1s vectors in TensorFlow
|
<p>Suppose <code>z = 3</code> and <code>e = 3</code>. I need to create a <code>z x (e*z)</code> design matrix that looks like this</p>
<pre><code>[[1., 1., 1., 0., 0., 0., 0., 0., 0.],
[0., 0., 0., 1., 1., 1., 0., 0., 0.],
[0., 0., 0., 0., 0., 0., 1., 1., 1.]]
</code></pre>
<p>How can I get such a matrix in <code>TensorFlow</code> using <code>concat</code>, <code>stack</code>, or <code>linalg.band_part</code>? I cannot simply define it by hand, and then convert it into a tensor.</p>
<p>The first and last <code>z x e</code> blocks are easy to achieve by stacking/concatenating vectors of 1s and 0s, but the one in the middle is what I cannot achieve.</p>
<p>UPDATE:
I have used <code>tf.Variable</code>, which has an <code>assign</code> method</p>
<pre><code>m = tf.Variable(tf.zeros((batch_size, z, z*e), dtype=tf.float32))
for b in range(batch_size):
    for i in range(z):
        m = m[b, i, i*e:e*(i+1)].assign(1)
</code></pre>
<p>While <code>assign</code> is very handy for this purpose, it can be problematic in graph mode since <code>tf.Variable</code> can only be created once in the computational graph.</p>
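<p>For what it's worth, the whole matrix is a block pattern that can be built in one shot without <code>assign</code>. A sketch using NumPy's <code>kron</code>; the analogous TensorFlow expression should be along the lines of <code>tf.repeat(tf.eye(z), repeats=e, axis=1)</code>, which avoids variables entirely (treat that TF call as an unverified suggestion):</p>

```python
import numpy as np

z, e = 3, 3

# Kronecker product of the identity with a row of ones repeats each
# identity column e times, giving the z x (e*z) design matrix
design = np.kron(np.eye(z), np.ones((1, e)))

print(design)
```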
|
<python><concatenation><tensorflow2.0>
|
2023-04-04 08:20:13
| 0
| 404
|
user1571823
|
75,926,918
| 2,622,678
|
Pytest AttributeError: 'TestMyTest' object has no attribute
|
<p>I have multiple tests written in the following format. When the test is run, why do some tests fail with exception <code>AttributeError: 'TestMyTest' object has no attribute 'var1'</code>?</p>
<p><code>var1</code> is defined as class variable in <code>test_init()</code>. Still the exception is thrown in <code>test_1()</code>.</p>
<pre><code>@pytest.fixture(scope="module")
def do_setup(run_apps):
    """
    Do setup
    """
    # Do something
    yield some_data


class TestMyTest(BaseTestClass):
    ids = dict()
    task_id = list()
    output_json = None

    def test_init(self, do_setup):
        TestMyTest.var1 = Feature1()

    def test_1(self):
        self._logger.info("self.var1: {}".format(self.var1))
</code></pre>
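<p>A likely mechanism (not a confirmed diagnosis): <code>test_1</code> can run without <code>test_init</code> having run first, e.g. when tests are filtered with <code>-k</code>, re-ordered, or distributed across workers with <code>pytest-xdist</code>; a class attribute created as a side effect of one test simply does not exist for the others. The minimal sketch below reproduces the failure outside pytest (all names are illustrative):</p>

```python
class Demo:
    def init_state(self):
        # creating the attribute as a side effect of one method ...
        Demo.var1 = "ready"

    def use_state(self):
        # ... means this fails whenever init_state did not run first
        return self.var1

d = Demo()
try:
    d.use_state()
except AttributeError as exc:
    print(exc)  # 'Demo' object has no attribute 'var1'

d.init_state()
assert d.use_state() == "ready"
```

<p>A module-scoped fixture that returns the <code>Feature1</code> object and is requested by every test that needs it removes the ordering dependency entirely.</p>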
|
<python><pytest>
|
2023-04-04 08:04:36
| 1
| 443
|
user2622678
|
75,926,636
| 19,079,397
|
Databricks: Issue while creating spark data frame from pandas
|
<p>I have a pandas data frame which I want to convert into a Spark data frame. I usually use the code below to create a Spark data frame from pandas, but all of a sudden I started to get the error below. I am aware that pandas removed <code>iteritems()</code>, but my current pandas version is 2.0.0; I also tried installing a lesser version and creating the Spark df, but I still get the same error. The error is raised inside the Spark function. What is the solution? Which pandas version should I install in order to create the Spark df? I also tried changing the Databricks cluster runtime and re-running, but I still get the same error.</p>
<pre><code>import pandas as pd
spark.createDataFrame(pd.DataFrame({'i':[1,2,3],'j':[1,2,3]}))
error:-
UserWarning: createDataFrame attempted Arrow optimization because 'spark.sql.execution.arrow.pyspark.enabled' is set to true; however, failed by the reason below:
'DataFrame' object has no attribute 'iteritems'
Attempting non-optimization as 'spark.sql.execution.arrow.pyspark.fallback.enabled' is set to true.
warn(msg)
AttributeError: 'DataFrame' object has no attribute 'iteritems'
</code></pre>
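<p>One stopgap that is often suggested for this pandas-2.0 / older-Spark mismatch (a sketch, not an official fix; the cleaner options are pinning <code>pandas&lt;2.0</code> or moving to a Databricks runtime whose Spark already calls <code>.items()</code>) is to restore <code>iteritems</code> as an alias before creating the Spark frame:</p>

```python
import pandas as pd

# pandas 2.0 removed DataFrame.iteritems(); the Arrow conversion path
# in older Spark versions still calls it, hence the AttributeError.
if not hasattr(pd.DataFrame, "iteritems"):
    pd.DataFrame.iteritems = pd.DataFrame.items

pdf = pd.DataFrame({"i": [1, 2, 3], "j": [1, 2, 3]})
cols = [name for name, _series in pdf.iteritems()]
print(cols)  # ['i', 'j']
```

<p>After the alias is in place, <code>spark.createDataFrame(pdf)</code> should no longer hit the missing attribute.</p>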
|
<python><pandas><apache-spark><pyspark><databricks>
|
2023-04-04 07:32:51
| 5
| 615
|
data en
|
75,926,634
| 10,020,441
|
pandas column-slices with mypy
|
<p>Lately I've found myself in a strange situation I cannot solve for myself:</p>
<p>Consider this MWE:</p>
<pre class="lang-py prettyprint-override"><code>import pandas
import numpy as np
data = pandas.DataFrame(np.random.rand(10, 5), columns=list("abcde"))
observations = data.loc[:, :"c"]
features = data.loc[:, "c":]
print(data)
print(observations)
print(features)
</code></pre>
<p>According to <a href="https://stackoverflow.com/a/44736467/10020441">this Answer</a> the slicing itself is done correctly, and it works in the sense that the correct results are printed.
But when I run mypy over it I get this error:</p>
<pre class="lang-bash prettyprint-override"><code>mypy.exe .\t.py
t.py:1: error: Skipping analyzing "pandas": module is installed, but missing library stubs or py.typed marker
t.py:1: note: See https://mypy.readthedocs.io/en/stable/running_mypy.html#missing-imports
t.py:6: error: Slice index must be an integer or None
t.py:7: error: Slice index must be an integer or None
Found 3 errors in 1 file (checked 1 source file)
</code></pre>
<p>Which is also correct since the slicing is not done with an integer.
How can I either satisfy or disable the <code>Slice index must be an integer or None</code> error?</p>
<p>Of course you could use <code>iloc[:, :3]</code> to solve this problem, but that feels like bad practice, since with <code>iloc</code> we depend on the order of the columns (in this example <code>loc</code> also depends on the ordering, but only to keep the MWE short).</p>
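<p>Two common ways to deal with this (both hedged suggestions, not the only options): install <code>pandas-stubs</code>, which usually also removes the first "missing library stubs" error, or silence just the offending lines with a narrow <code>type: ignore</code> so the rest of the file stays type-checked:</p>

```python
import numpy as np
import pandas as pd

data = pd.DataFrame(np.random.rand(10, 5), columns=list("abcde"))

# the error-code-scoped comment targets only mypy's slice check here;
# it has no effect at runtime
observations = data.loc[:, :"c"]  # type: ignore[misc]
features = data.loc[:, "c":]      # type: ignore[misc]

print(list(observations.columns))  # ['a', 'b', 'c']
```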
|
<python><pandas><slice><mypy>
|
2023-04-04 07:32:34
| 1
| 515
|
Someone2
|
75,926,569
| 9,776,699
|
Excluding certain string using regex in python
|
<p>I would like to apply regex to the below code such that I remove any string that appears between a comma and the word 'AS'.</p>
<pre><code>Select customer_name, customer_type, COUNT(*) AS volume\nFROM table\nGROUP BY customer_name, customer_type\nORDER BY volume DESC\nLIMIT 10
</code></pre>
<p>Expected output:</p>
<pre><code>Select customer_name, customer_type, volume\nFROM table\nGROUP BY customer_name, customer_type\nORDER BY volume DESC\nLIMIT 10
</code></pre>
<p>I tried the below but that did not give the desired output</p>
<pre><code>result = re.sub(r",\s*COUNT\(\*\)\s*AS\s*\w+", "", text)
</code></pre>
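<p>One pattern that produces the expected output, assuming the expression between the comma and <code>AS</code> never itself contains a comma (something like <code>SUM(a, b) AS x</code> would defeat it): match lazily up to <code> AS </code> and substitute a comma back in, so the alias survives:</p>

```python
import re

text = ("Select customer_name, customer_type, COUNT(*) AS volume\n"
        "FROM table\nGROUP BY customer_name, customer_type\n"
        "ORDER BY volume DESC\nLIMIT 10")

# ",<anything without a comma> AS " collapses to ", ", keeping the alias
result = re.sub(r",\s*[^,]*?\s+AS\s+", ", ", text)
print(result.splitlines()[0])
# Select customer_name, customer_type, volume
```

<p><code>[^,]*?</code> stops the match from spilling across other comma-separated items, so the commas in the <code>GROUP BY</code> clause are left alone.</p>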
|
<python><regex>
|
2023-04-04 07:25:31
| 2
| 1,571
|
Kevin Nash
|
75,926,017
| 7,745,011
|
os.environ['SSL_CERT_FILE'] stores a path to a non-existing file in visual studio code while debugging
|
<p>This question is directly connected to <a href="https://stackoverflow.com/q/75904821/7745011">my last question</a>, however tackles a different topic so I am opening a new one.</p>
<p>As mentioned there I am getting an error relating to a missing SSL cert. The error does not appear when the script is started from Terminal, using PyCharm or running from VSCode, but without the debugger. Only when the script is run with the debugger, the exception is thrown.</p>
<p><strong>After debugging a while I have found that the reason for the problem is the environment variable <code>os.environ['SSL_CERT_FILE']</code> which in this case leads to a non-existing <code>C:\\Users\\MYUSER~1\\AppData\\Local\\Temp\\_MEI97082\\certifi\\cacert.pem</code></strong></p>
<ol>
<li>Starting the script without the debugger or in PyCharm, this variable is not set (debugging the imported minio package showed me that the result of <code>certifi.where()</code> is used if the variable is empty.</li>
<li>With the debugger on, it is set before any of my script is executed (import os and print out all environment variables in the first line)</li>
<li>If I manually delete the variable with <code>del os.environ['SSL_CERT_FILE']</code> the rest of the script works fine, but the variable is again set in the next run</li>
<li>I am using python 3.11, MiniConda and Windows 10, Visual Studio Code is updated to the last version 1.77.0</li>
<li>Setting the environment variable in <code>launch.json</code> with <code>"env": {"SSL_CERT_FILE": "foo"}</code> will override the varible as expected, however as soon as I remove this line the wrong value appears again.</li>
<li>The part "<code>..\\_MEI247522\\...</code>" in the value will change from run to run</li>
<li>Creating a completely new folder/project the problem still exists</li>
<li>I also tested with another python environment (Python 3.9.7) and the problem still is the same.</li>
<li>From user @Horsing's suggestion: I have also removed all the code from my script, except for <code>import os</code>. As soon as <code>os</code> is imported and I can inspect <code>os.environ</code>, the environment variable is already set.</li>
</ol>
<p>I honestly have no idea, where and why this variable is set when the script is run in the debugger and what triggers it. Any help would be much appreciated, since manually deleting it is not really a good solution!</p>
<p><strong>Addition</strong>
Here is the Python Debug Console output in VS Code (with my username changed). For this I have removed the launch.json and started the debugger with <code>Python:File</code></p>
<p><em>complete code:</em></p>
<pre><code>import os
print(os.environ.get('SSL_CERT_FILE'))
</code></pre>
<p><em>console output:</em></p>
<pre><code>(minio) PS C:\Users\myuser\Documents\source\Python\minio-project> c:; cd 'c:\Users\myuser\Documents\source\Python\minio-project'; & 'C:\Users\myuser\Miniconda3\envs\minio\python.exe' 'c:\Users\myuser\.vscode\extensions\ms-python.python-2023.6.0\pythonFiles\lib\python\debugpy\adapter/../..\debugpy\launcher' '60007' '--' 'C:\Users\myuser\Documents\source\Python\minio-project\main.py'
C:\Users\MYUSER~1\AppData\Local\Temp\_MEI223042\certifi\cacert.pem
</code></pre>
<p>Again, the printed path does not exist on my computer</p>
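<p>Until the root cause is found, a defensive workaround (a sketch that sidesteps the symptom, not a fix for whatever injects the variable) is to drop <code>SSL_CERT_FILE</code> at startup whenever it points at a path that no longer exists, such as a stale PyInstaller <code>_MEI*</code> temp directory:</p>

```python
import os

def scrub_stale_ssl_cert_file() -> None:
    """Remove SSL_CERT_FILE if it references a missing file, so
    libraries fall back to certifi.where() as they normally would."""
    cert = os.environ.get("SSL_CERT_FILE")
    if cert and not os.path.exists(cert):
        del os.environ["SSL_CERT_FILE"]

scrub_stale_ssl_cert_file()
```

<p>Running this before any network library is imported restores the default certificate resolution for the rest of the process.</p>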
|
<python><visual-studio-code><environment-variables><urllib3>
|
2023-04-04 06:12:24
| 2
| 2,980
|
Roland Deschain
|
75,925,905
| 1,780,761
|
openCvSharp4 - apply graytone mask to image
|
<p>In my code, I take a picture, find the edges, make them thicker, invert the image, apply a blur and end up with a white image, with black blurred lines. The blurred zones are obviously gray tones.</p>
<p>I want to use this as a mask to apply to a base image, but for some reason the mask is treated as a binary mask, either black or white. The graytones produced by the blur are ignored.
Why is this happening? How can my code be changed so it takes graytones into account in the mask?</p>
<pre><code>'Load overlay image
Dim overlayBmp As New Bitmap("dust/dust1.png")
'Load the Image
Dim matImg As Mat = BitmapConverter.ToMat(fr_bm)
'Convert Image to Grayscale
Dim grayImg As New Mat()
Cv2.CvtColor(matImg, grayImg, ColorConversionCodes.BGR2GRAY)
'Detect Edges using the Canny Edge Detection Algorithm
Dim edgesImg As New Mat()
Cv2.Canny(grayImg, edgesImg, 100, 200)
'Apply Dilation to the Edge Detected Image
Dim dilatedImg As New Mat()
Dim element As Mat = Cv2.GetStructuringElement(MorphShapes.Ellipse, New Size(5, 5))
Cv2.Dilate(edgesImg, dilatedImg, element)
'invert the edges
Cv2.BitwiseNot(dilatedImg, dilatedImg)
'Apply Gaussian Blur to the Image
Dim blurMask As New Mat()
Cv2.GaussianBlur(dilatedImg, blurMask, New Size(21, 21), 0)
'Create a 4-Channel Mat from the Original Image
Dim originalMat As Mat = New Mat(fr_bm.Height, fr_bm.Width, MatType.CV_8UC4, 4)
Cv2.CvtColor(BitmapConverter.ToMat(fr_bm), originalMat, ColorConversionCodes.BGR2BGRA)
'Create a 4-Channel Mat from the Overlay Image
Dim overlayMat As Mat = New Mat(overlayBmp.Height, overlayBmp.Width, MatType.CV_8UC4, 4)
Cv2.CvtColor(BitmapConverter.ToMat(overlayBmp), overlayMat, ColorConversionCodes.BGR2BGRA)
'Create a Composite Image
Dim compositeMat1 As New Mat()
Cv2.BitwiseAnd(overlayMat, overlayMat, compositeMat1, blurMask)
</code></pre>
|
<vb.net><mask><opencv><grayscale><python>
|
2023-04-04 05:54:32
| 1
| 4,211
|
sharkyenergy
|
75,925,852
| 12,357,035
|
Getting syntax error while using virtualenv package in Ubuntu 16.04
|
<p>I am running Ubuntu 16.04 in a VMware virtual environment. When I try to install virtualenv, I am facing issues.</p>
<p>What I did:</p>
<pre><code>sudo apt install python3-pip
sudo pip3 install virtualenv
virtualenv --version
</code></pre>
<p>The last command is showing this:</p>
<pre><code>Traceback (most recent call last):
File "/home/sbedanabett/.local/bin/virtualenv", line 7, in <module>
from virtualenv.__main__ import run_with_catch
File "/home/sbedanabett/.local/lib/python3.5/site-packages/virtualenv/__init__.py", line 1, in <module>
from .run import cli_run, session_via_cli
File "/home/sbedanabett/.local/lib/python3.5/site-packages/virtualenv/run/__init__.py", line 70
raise RuntimeError(f"failed to find interpreter for {discover}")
^
SyntaxError: invalid syntax
</code></pre>
<p>This seems to be because it is using the Python 2 interpreter. But I have changed my default interpreter to Python 3, following <a href="https://stackoverflow.com/q/41986507/12357035">this question</a>. So I am clueless about why this error occurs.</p>
<pre><code>$ python --version
Python 3.5.2
$ python2 --version
Python 2.7.12
</code></pre>
|
<python><python-3.x><virtualenv><ubuntu-16.04><apt>
|
2023-04-04 05:44:19
| 1
| 3,414
|
Sourav Kannantha B
|
75,925,741
| 15,010,256
|
Monitor batch job by Prometheus
|
<p>There is a Python batch job that pushes huge file(s) to a shared location; once the file(s) are pushed, a couple of tests are run against those file(s).
I'm trying to get some metrics around the batch job and am planning to use Node exporter with the metrics or labels below.</p>
<pre><code>file_push_status (success or failure)
first_test_status (Pass or Fail)
second_test_status (Pass or Fail)
first_test_time_taken (How long)
second_test_time_taken (How long)
</code></pre>
<p>I have gone through the Prometheus documentation, but I cannot get clarity on whether a Summary or a Histogram should be used here. I understand Prometheus doesn't support Booleans (the first 3 cases); how should those be handled?</p>
<p>If needed I will attach the existing batch job code, thank you.</p>
|
<python><prometheus><prometheus-node-exporter>
|
2023-04-04 05:18:22
| 1
| 466
|
Chel MS
|
75,925,705
| 21,346,793
|
Answers after fune-tuning model
|
<p>I fine-tune my model on essays like:</p>
<pre><code>[{
"topic": "Мы были достаточно цивилизованны, чтобы построить машину, но слишком примитивны, чтобы ею пользоваться». (Карл Краус)",
"text": "Высказывание Карла Крауса, австрийского писателя, о том, что «мы были достаточно цивилизованны, чтобы построить машину...
}]
</code></pre>
<p>And also when i try to make prompt like "Напиши сочинение на тему война" (Write an essay on the topic of war) my model answered like: <strong>'_"Война и мир" \u043f\u0443. /."},{"topic":"1845_1853","text'(word and peace)'</strong> or <strong>'_?¿)\u00bb \u041f\e0440\d044f, / "", ".",","text":"'</strong></p>
<p>Where is the mistake?
Here is my code:</p>
<pre><code>modelpath = 'tinkoff-ai/ruDialoGPT-medium'
tokenizer = AutoTokenizer.from_pretrained(modelpath)
model = AutoModelWithLMHead.from_pretrained('toodles_essays')
</code></pre>
<p>And the parameters are:</p>
<pre><code>async def generate_response(text):
    inputs = tokenizer(text, return_tensors='pt')
    generated_token_ids = model.generate(
        **inputs,
        top_k=8,
        top_p=0.65,
        num_beams=4,
        num_return_sequences=1,
        do_sample=True,
        no_repeat_ngram_size=2,
        temperature=1.5,
        repetition_penalty=1.5,
        length_penalty=1.0,
        eos_token_id=50256,
        pad_token_id=50256,
        max_new_tokens=40
    )
    response = [tokenizer.decode(sample_token_ids) for sample_token_ids in generated_token_ids][0]
    return response
</code></pre>
<p>I need my model to write an essay; how can I do it?</p>
|
<python><nlp>
|
2023-04-04 05:10:50
| 0
| 400
|
Ubuty_programmist_7
|
75,925,620
| 13,049,379
|
Find all pixels bounded between two contours in a binary image in Python
|
<p>I have a binary image, like the one shown below, which has two contours shown in white,</p>
<p><a href="https://i.sstatic.net/VDCOz.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VDCOz.png" alt="enter image description here" /></a></p>
<p>I wish to make all the pixels in between the contours white some thing like below,</p>
<p><a href="https://i.sstatic.net/VB43S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VB43S.png" alt="enter image description here" /></a></p>
<p>How can I do this in Python using Numpy or OpenCV or Scipy?</p>
<p>I currently use FloodFill as shown below,</p>
<pre><code>from PIL import Image, ImageDraw
# read image
seed = [377,273]
rep_value = (255, 255, 0)
ImageDraw.floodfill(img, seed, rep_value, thresh=50)
img.save(f'cloud_floodfill_{seed}.png')
</code></pre>
<p>But here I need to provide a <code>seed</code> value. I have thousands of images like this one where the two contours can be translated anywhere. How can the process be automated for several images?</p>
<p>I guess OpenCV's FloodFill will also have the same limitation.</p>
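<p>One seed-free approach (a sketch in plain NumPy; <code>scipy.ndimage.binary_fill_holes</code> does essentially the same thing in one call, and is the better choice for thousands of images) is to flood-fill the <em>background</em> from the image border instead: any black pixel that is not reachable from the border is enclosed by a contour and can be turned white:</p>

```python
from collections import deque

import numpy as np

def fill_enclosed(binary):
    """Set to 1 every 0-pixel that is not 4-connected to the border."""
    h, w = binary.shape
    outside = np.zeros((h, w), dtype=bool)
    queue = deque()
    # seed the background flood fill with every border pixel that is 0
    for r in range(h):
        for c in range(w):
            if (r in (0, h - 1) or c in (0, w - 1)) and binary[r, c] == 0:
                outside[r, c] = True
                queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < h and 0 <= nc < w
                    and binary[nr, nc] == 0 and not outside[nr, nc]):
                outside[nr, nc] = True
                queue.append((nr, nc))
    filled = binary.copy()
    filled[(binary == 0) & ~outside] = 1  # enclosed background -> white
    return filled

# tiny synthetic example: a square ring contour with a hollow centre
img = np.zeros((7, 7), dtype=np.uint8)
img[1, 1:6] = img[5, 1:6] = img[1:6, 1] = img[1:6, 5] = 1
out = fill_enclosed(img)
print(out[3, 3])  # 1 (interior filled), while out[0, 0] stays 0
```

<p>The pure-Python BFS is only for illustration; for production-scale batches the vectorised SciPy or OpenCV equivalents are far faster.</p>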
|
<python><numpy><opencv><image-processing><binary>
|
2023-04-04 04:52:16
| 1
| 1,433
|
Mohit Lamba
|
75,925,524
| 610,569
|
Accessing the `chrome://` pages inside Python without Selenium
|
<p>There's the <code>requests</code> and <code>urllib</code> page that can access <code>http(s)</code> protocol pages in Python, e.g.</p>
<pre class="lang-py prettyprint-override"><code>import requests
requests.get('stackoverflow.com')
</code></pre>
<p>but when it comes to <code>chrome://</code> pages, e.g. <code>chrome://settings/help</code>, the url libraries won't work and this:</p>
<pre><code>import requests
requests.get('chrome://settings/help')
</code></pre>
<p>throws the error:</p>
<pre><code>InvalidSchema: No connection adapters were found for 'chrome://settings/help'
</code></pre>
<p>I guess there's no way for requests or urllib to determine which Chrome to use and where the binary executable file for the browser is. So the adapter can't be easily coded.</p>
<p>The goal here is to pythonically obtain the string <code>Version 111.0.5563.146 (Official Build) (x86_64)</code> from the <code>chrome://settings/help</code> page of the default chrome browser on the machine, e.g. it looks like this:</p>
<p><a href="https://i.sstatic.net/paSCW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/paSCW.png" alt="enter image description here" /></a></p>
<hr />
<p>Technically, it is possible to get to the page through <code>selenium</code> e.g.</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver
driver = webdriver.Chrome("./lib/chromedriver_111.0.5563.64")
driver.get('chrome://settings/help')
</code></pre>
<p>But even if we can get the selenium driver to navigate to <code>chrome://settings/help</code>, the <code>Version ...</code> information is missing from the <code>.page_source</code>.</p>
<p>Also, other than the chrome version, the access to <code>chrome://</code> pages would be used to retrieve other information, e.g. <a href="https://www.ghacks.net/2012/09/04/list-of-chrome-urls-and-their-purpose/" rel="nofollow noreferrer">https://www.ghacks.net/2012/09/04/list-of-chrome-urls-and-their-purpose/</a></p>
<hr />
<p>While there's a way to call a Windows command-line function to retrieve the browser version details, e.g. <a href="https://stackoverflow.com/questions/50880917/how-to-get-chrome-version-using-command-prompt-in-windows">How to get chrome version using command prompt in windows</a>, that solution won't generalize to macOS / Linux.</p>
|
<python><google-chrome><selenium-webdriver><unix><python-requests>
|
2023-04-04 04:28:53
| 2
| 123,325
|
alvas
|
75,925,200
| 1,898,734
|
How to sum values of rows based on condition within each row?
|
<p>Can you please help me with a better solution on summarizing data based on conditions in separate rows.</p>
<p>I have to group by the deal with custom logic on the amount column: add Investment and Capital, and subtract Borrow and Interest amounts. This should happen only when the currency keys match; otherwise, create a new column with the same logic, keeping the currencies separate rather than mixing amounts of different currencies. One more condition: do this computation only if all 4 types of flows are present.</p>
<p>For example my data is</p>
<pre class="lang-py prettyprint-override"><code> Deal Flow Amount Currency
0 1 Investment 100 USD
1 1 Borrow 10 USD
2 1 Interest 5 USD
3 1 Capital 50 EUR
4 2 Investment 100 USD
5 2 Borrow 10 USD
6 2 Interest 5 USD
7 2 Capital 50 USD
8 3 Investment 100 USD
9 3 Borrow 10 EUR
10 3 Interest 5 USD
11 4 Investment 100 USD
12 4 Borrow 10 EUR
13 4 Interest 5 USD
14 4 Capital 50 USD
</code></pre>
<p>Expected output would be</p>
<pre><code> Deal Amount Currency Amount Currency
0 1 85 USD 50.0 EUR
1 2 135 USD NaN None
2 4 145 USD -10.0 EUR
</code></pre>
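<p>A hedged sketch of one way to get there with vectorised pandas (the flow-to-sign mapping and the "exactly four flow types" completeness check are taken from the description; reshaping the long result into the side-by-side column layout is left as a final <code>pivot</code>):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "Deal":     [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 4, 4, 4, 4],
    "Flow":     ["Investment", "Borrow", "Interest", "Capital",
                 "Investment", "Borrow", "Interest", "Capital",
                 "Investment", "Borrow", "Interest",
                 "Investment", "Borrow", "Interest", "Capital"],
    "Amount":   [100, 10, 5, 50, 100, 10, 5, 50,
                 100, 10, 5, 100, 10, 5, 50],
    "Currency": ["USD", "USD", "USD", "EUR", "USD", "USD", "USD", "USD",
                 "USD", "EUR", "USD", "USD", "EUR", "USD", "USD"],
})

# Investment and Capital add, Borrow and Interest subtract
sign = {"Investment": 1, "Capital": 1, "Borrow": -1, "Interest": -1}
df["Signed"] = df["Amount"] * df["Flow"].map(sign)

# keep only deals where all four flow types are present (drops Deal 3)
complete = df.groupby("Deal")["Flow"].transform("nunique") == 4

result = (df[complete]
          .groupby(["Deal", "Currency"], as_index=False)["Signed"]
          .sum())
print(result)
# Deal 1: USD 85, EUR 50; Deal 2: USD 135; Deal 4: USD 145, EUR -10
```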
|
<python><pandas>
|
2023-04-04 03:00:45
| 1
| 460
|
Pavan Kumar Polavarapu
|
75,925,138
| 342,553
|
Is it safe to add user_id column to Django db session model?
|
<p>Currently the Django session model doesn't have an user ID field:</p>
<pre class="lang-py prettyprint-override"><code>class AbstractBaseSession(models.Model):
    session_key = models.CharField(_('session key'), max_length=40, primary_key=True)
    session_data = models.TextField(_('session data'))
    expire_date = models.DateTimeField(_('expire date'), db_index=True)
</code></pre>
<p>This makes it impossible for us to implement "back-channel logout", because every service provider would have different session ids. To make this work, I will need to add a user identification field to the model, e.g. <code>username</code>, so that the IdP can issue a log-out signal to all service providers to log the user out by using the <code>username</code>:</p>
<pre class="lang-py prettyprint-override"><code>class AbstractBaseSession(models.Model):
    session_key = models.CharField(_('session key'), max_length=40, primary_key=True)
    session_data = models.TextField(_('session data'))
    expire_date = models.DateTimeField(_('expire date'), db_index=True)
    username = models.CharField(...)
</code></pre>
<p>I am not 200% sure whether this will have any security implications, so I thought I'd post here to check with the experts.</p>
|
<python><django>
|
2023-04-04 02:43:55
| 0
| 26,828
|
James Lin
|
75,924,802
| 3,312,274
|
web2py: How to target a div using the response.menu
|
<p>The A() helper can take a <code>cid</code> or <code>target</code> attribute to target a DIV to load content. Using the response.menu tuples, how can this be achieved?</p>
<pre><code>response.menu = [
(T('Home'), False, URL('default', 'index'), []),
(T('Library'), False, None, [
(T('Services'), False, URL('default', 'library.load', args=['service'], user_signature=True))
])
]
</code></pre>
<p><em>edit:</em></p>
<p>My ultimate goal is to load the page in library.load into a target div.</p>
|
<python><web2py>
|
2023-04-04 01:16:06
| 1
| 565
|
JeffP
|
75,924,661
| 2,591,343
|
New users on Django app at heroku doesn't persist on database
|
<p>We started a project on Heroku using Django, but new users aren't being persisted in the Django User table of our database; they only appear as admin users in Django. Why?</p>
<p>We use User.objects.create_user() from django.contrib.auth.models.User</p>
<p>Database is MySQL</p>
<p>Any tips?</p>
<h2><strong>Update 1 (2023-04-04)</strong></h2>
<p>Some code snippets</p>
<p><em>views.py</em></p>
<pre><code>from django.shortcuts import render, redirect
from django.contrib.auth import authenticate, login, logout
from django.contrib.auth.models import User

...code...

def insertUser(request):
    ...code...
    # Username validation
    if User.objects.filter(username=request.POST['name']).exists():
        data['msg'] = 'Usuário já cadastrado!'
        data['class'] = 'alert-danger'
        return render(request, 'cadUser.html', data)
    # Email validation
    if User.objects.filter(email=request.POST['email']).exists():
        data['msg'] = 'Email já cadastrado!'
        data['class'] = 'alert-danger'
        return render(request, 'cadUser.html', data)
    # Functional-registration validation
    # New user recording
    user = User.objects.create_user(request.POST['name'], request.POST['email'], request.POST['password'])
    Docente.reg_funcional = request.POST['reg_funcional']
    #user.reg_funcional = request.POST['reg_funcional']
    user.save()
    data['msg'] = 'Usuário cadastrado com sucesso!'
    data['class'] = 'alert-success'
    return render(request, 'loginUser.html', data)
</code></pre>
<p><em>settings.py</em></p>
<pre><code>from pathlib import Path
import django_on_heroku

...settings...

INSTALLED_APPS = [
    <our stuff>,
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'django_simple_cookie_consent',
]

...settings...

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        <connection stuff>
    }
}

AUTH_PASSWORD_VALIDATORS = [
    {
        'NAME': 'django.contrib.auth.password_validation.UserAttributeSimilarityValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.MinimumLengthValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.CommonPasswordValidator',
    },
    {
        'NAME': 'django.contrib.auth.password_validation.NumericPasswordValidator',
    },
]

...settings...

django_on_heroku.settings(locals())
</code></pre>
|
<python><mysql><django><heroku>
|
2023-04-04 00:32:56
| 1
| 463
|
HufflepuffBR
|
75,924,536
| 5,212,614
|
Trying to Optimize Process Using Linear Programming. Getting error about: IndexError: index 1 is out of bounds for axis 0 with size 1
|
<p>I am trying to optimize worker's schedules, based on the following dataframe.</p>
<pre><code> Time Windows Shift 1 Shift 2 Shift 3 Shift 4 Workers Required
0 6:00 - 9:00 1 0 0 1 55.0
1 9:00 - 12:00 1 0 0 0 46.0
2 12:00 - 15:00 1 1 0 0 59.0
3 15:00 - 18:00 0 1 0 0 23.0
4 18:00 - 21:00 0 1 1 0 60.0
5 21:00 - 24:00 0 0 1 0 38.0
6 24:00 - 3:00 0 0 1 1 20.0
7 3:00 - 6:00 0 0 0 1 30.0
8 Wage_Rate 135 140 190 188 0.0
</code></pre>
<p>First (create dataframe):</p>
<pre><code>import pandas as pd
df = pd.read_clipboard(sep='\\s+')
df = pd.DataFrame(df)
</code></pre>
<p>Here is the code that I am testing.</p>
<pre><code>import pandas as pd
import pulp
from pulp import LpMaximize, LpMinimize, LpProblem, LpStatus, lpSum, LpVariable
import numpy as np

df = df.fillna(0).applymap(lambda x: 1 if x == "X" else x)
df.set_index('Time Windows')
a = df.drop(columns=["Workers Required"]).values
a

df.drop(df.tail(1).index, inplace=True)
print(df.shape)

df = df.fillna(0).applymap(lambda x: 1 if x == "X" else x)
print(df.shape)

a = df.to_numpy()
a

# number of shifts
n = a.shape[0]

# number of time windows
T = a.shape[0]

# number of workers required per time window
d = df["Workers Required"].values

# wage rate per shift
# Get last row of dataframe
last_row = df.iloc[-1:, 1:]
# Get last row of dataframe as numpy array
w = last_row.to_numpy()
w

# Decision variables
y = LpVariable.dicts("num_workers", list(range(n)), lowBound=0, cat="Integer")
y

# Create problem
prob = LpProblem("scheduling_workers", LpMinimize)

prob += lpSum([w[j] * y[j] for j in range(n)])

for t in range(T):
    prob += lpSum([a[t, j] * y[j] for j in range(n)]) >= d[t]

prob.solve()
print("Status:", LpStatus[prob.status])

for shift in range(n):
    print(f"The number of workers needed for shift {shift} is {int(y[shift].value())} workers")
</code></pre>
<p>When I get to this line:</p>
<pre><code>prob += lpSum([w[j] * y[j] for j in range(n)])
</code></pre>
<p>I get this error.</p>
<pre><code>Traceback (most recent call last):
Cell In[197], line 1
prob += lpSum([w[j] * y[j] for j in range(n)])
Cell In[197], line 1 in <listcomp>
prob += lpSum([w[j] * y[j] for j in range(n)])
IndexError: index 1 is out of bounds for axis 0 with size 1
</code></pre>
<p>The example I am trying to follow is from the link below.</p>
<p><a href="https://towardsdatascience.com/how-to-solve-a-staff-scheduling-problem-with-python-63ae50435ba4" rel="nofollow noreferrer">https://towardsdatascience.com/how-to-solve-a-staff-scheduling-problem-with-python-63ae50435ba4</a></p>
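<p>The error is consistent with <code>w</code> being a 2-D array of shape <code>(1, 4)</code>: <code>last_row.to_numpy()</code> returns one <em>row</em>, so <code>w[1]</code> asks for a second row that does not exist. A minimal sketch of the symptom and the usual fix, flattening before indexing (the wage values are taken from the question's last row):</p>

```python
import numpy as np

w = np.array([[135, 140, 190, 188]])  # shape (1, 4): a single row
print(w.shape)

try:
    w[1]  # indexes axis 0, which only has size 1
except IndexError as exc:
    print(exc)

w_flat = w.flatten()  # shape (4,); w_flat[j] is now one wage rate
print(w_flat[1])      # 140
```

<p>Replacing <code>w = last_row.to_numpy()</code> with <code>w = last_row.to_numpy().flatten()</code> should make <code>w[j] * y[j]</code> index as intended, assuming the rest of the model is set up correctly.</p>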
|
<python><python-3.x><optimization><linear-programming>
|
2023-04-03 23:54:41
| 1
| 20,492
|
ASH
|
75,924,358
| 1,405,736
|
How can I convert my aggregate result set data type into the correct data type for Jinja 2?
|
<p>How can I convert my aggregate result set data type into the correct data type for Jinja 2?</p>
<p>I can't figure out the difference between what a MongoDB .find() returns and what .aggregate() returns. The .find() works as expected; the .aggregate() doesn't. I'm sure my problem is converting the aggregate result set into the correct data type to be rendered in Jinja 2.</p>
<p>Python Code:</p>
<pre><code>###################################################################################
# The Python code for the find() operations works as expected
###################################################################################
findResults = current_app.db.inventory.find({"account_id": session["user_id"]})
findResults_form = [InventoryDetailClass(**inventory) for inventory in findResults ]
print(findResults)
print(findResults_form)
return render_template('Inventorylist.html',findResults_form = findResults_form )
###################################################################################
# I get the desired result set from the aggregate operation see print(aggregateResults) below
# But the aggregateResults_form is empty
###################################################################################
aggregateResults = current_app.db.inventory.aggregate([{
'$lookup': {
'from': 'contacts',
'localField': 'seller_id',
'foreignField': '_id',
'as': 'contacts_docs'
}
}, {
'$project': {
"_id": "$_id",
"item": "$item",
"quantity": "$quantity",
"price": "$askingPrice",
"sellerName": "$contacts_docs.contactName"
}
}])
# >> I believe this is where my problem is, but I don't know how to fix it.
aggregateResults_form = [InvListClass(**aggList) for aggList in aggregateResults]
print(aggregateResults)
for data in aggregateResults:
print(f'aggregateResults: {data}\n')
print(aggregateResults_form)
return render_template('Inventorylist.html',aggregateResults_form = aggregateResults_form )
</code></pre>
<p>Find - Output from print function:</p>
<pre><code>print(findResults):
<pymongo.cursor.Cursor object at 0x0000021CE4767E90>
print(findResults_form):
InventoryDetailClass(
_id='d61a84c3d2bb4f548a31aff39deb12de',
item='Dresser',
quantity=1,
price='500',
sellerName=''
),
InventoryDetailClass(
_id='eef188ade2fe4384aeff201893b006ec',
item='Car',
quantity=1,
price='2500',
sellerName=''
)
</code></pre>
<p>Aggregate - Output from print function:</p>
<pre><code>print(aggregateResults):
<pymongo.command_cursor.CommandCursor object at 0x000001CED9CC0610>
for data in aggregateResults:
print(f'aggregateResults: {data}\n')
aggregateResults:
{
'_id': 'd61a84c3d2bb4f548a31aff39deb12de',
'item': 'Dresser',
'quantity': 1,
'price': '500',
'sellerName': ['Bob Smith']
}
aggregateResults:
{
'_id':
'eef188ade2fe4384aeff201893b006ec',
'item': 'Car',
'quantity': 1,
'price': '2500',
'sellerName': ['Jane Doe']}
aggregateResults_form:
[]
</code></pre>
<p>Jinja2 Code:</p>
<pre><code>{% for i in aggregateResults_form %}
<tr>
<td id='{{ i._id }}'>{{ i.item }}</td>
<td id='{{ i._id }}'>{{ i.quantity }}</td>
<td id='{{ i._id }}'>{{ i.price }}</td>
<td id='{{ i._id }}'>{{ i.sellerName }}</td>
</tr>
{% endfor %}
</code></pre>
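One likely cause (a guess, since <code>InvListClass</code> isn't shown): a PyMongo <code>CommandCursor</code> can only be iterated once, so any loop over it before the list comprehension leaves the comprehension empty, and <code>$lookup</code> makes <code>sellerName</code> a one-element list rather than a string. A sketch with a stand-in dataclass and inline documents in place of <code>list(current_app.db.inventory.aggregate(...))</code>:

```python
from dataclasses import dataclass

@dataclass
class InvListClass:  # stand-in; the real class definition isn't in the question
    _id: str
    item: str
    quantity: int
    price: str
    sellerName: str

# Materialize the cursor exactly once, e.g.:
#   raw = list(current_app.db.inventory.aggregate(pipeline))
raw = [
    {"_id": "d61a84c3d2bb4f548a31aff39deb12de", "item": "Dresser",
     "quantity": 1, "price": "500", "sellerName": ["Bob Smith"]},
]

# Flatten the one-element sellerName list produced by $lookup.
aggregateResults_form = [
    InvListClass(**{**doc, "sellerName": ", ".join(doc["sellerName"])})
    for doc in raw
]
```

With the list materialized up front, both the debug prints and the comprehension can read the same data.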
|
<python><jinja2>
|
2023-04-03 23:13:37
| 1
| 375
|
user1405736
|
75,924,321
| 1,876,739
|
Setting the timezone of a pandas datetime64 column to another column
|
<p>Given a pandas <code>DataFrame</code> with a <code>datetime64</code> column and a column representing the timezone:</p>
<pre><code>dates_df = pandas.read_json("""[{"event_time": "2023-04-03T15:03:52", "event_tz": "EDT"}, {"event_time": "2023-04-03T15:03:52", "event_tz": "EDT"}]""")
</code></pre>
<p><strong>How can the timezone of the <code>event_time</code> column be overwritten with the timezone specified in the <code>event_tz</code> column?</strong></p>
<p>The <code>tz_localize</code> and <code>tz_convert</code> functions of the datetime column seem to only accept single values, as opposed to a <code>Series</code>.</p>
<p>Thank you in advance for your consideration and response.</p>
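Since <code>tz_localize</code> takes a single zone, one approach (sketched below, assuming the abbreviations are first mapped to IANA zone names, because <code>"EDT"</code> alone is not a valid zone key) is to localize the column group by group:

```python
import pandas as pd

dates_df = pd.DataFrame({
    "event_time": pd.to_datetime(["2023-04-03T15:03:52", "2023-04-03T15:03:52"]),
    "event_tz": ["EDT", "EDT"],
})

# Assumed mapping from abbreviation to IANA zone name.
tz_map = {"EDT": "America/New_York"}

# tz_localize accepts one zone at a time, so localize each tz group separately
# and stitch the pieces back together on the original index.
pieces = [
    grp.dt.tz_localize(tz)
    for tz, grp in dates_df.groupby(dates_df["event_tz"].map(tz_map))["event_time"]
]
dates_df["event_time"] = pd.concat(pieces)
```

With mixed zones the resulting column becomes <code>object</code> dtype, since a single <code>datetime64</code> column can hold only one timezone.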
|
<python><pandas>
|
2023-04-03 23:00:25
| 1
| 17,975
|
Ramón J Romero y Vigil
|
75,923,964
| 10,140,821
|
extracting contents of file as variables in python
|
<p>I have a file like below in <code>Linux</code>.</p>
<p><code>file_name</code> is <code>batch_file.txt</code>.</p>
<p><code>sub_directory</code> is <code>code_base/workflow_1</code></p>
<p><code>script_name</code> is <code>code_base/workflow_1/session_1.py</code></p>
<p><code>batch_file.txt</code> contents are:</p>
<pre><code>1#1#workflow_1#1#session_1#2023-04-02#FDR#2
1#2#workflow_2#2#session_2#2023-04-02#FDR#2
1#3#workflow_1_2#3#session_2#2023-04-02#FDR#2
</code></pre>
<p>I want to read the contents of <code>batch_file.txt</code> in the <code>session_1.py</code> file and create variables based on the <code>file_name</code> and <code>sub_directory</code>. The variables would be:</p>
<pre><code>batch_id = number before 1st #
workflow_id = number between 1st and 2nd #
workflow_name = number between 2nd and 3rd #
session_id = number between 3rd and 4th #
session_name = number between 4th and 5th #
run_date = number between 5th and 6th #
flow_name = number between 6th and 7th #
flow_id = number after 7th #
</code></pre>
<p>I have this:</p>
<pre><code>batch_content = open('batch_file.txt', 'r')
batch_content.readlines()
</code></pre>
<p>But I am not sure how to proceed further.</p>
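A minimal sketch: split each line on <code>#</code> and zip the pieces with the field names. It is shown here on an inline copy of the file contents so it is self-contained; reading via <code>with open('batch_file.txt') as fh</code> works the same way.

```python
# Inline copy of batch_file.txt so the sketch is self-contained.
sample = """1#1#workflow_1#1#session_1#2023-04-02#FDR#2
1#2#workflow_2#2#session_2#2023-04-02#FDR#2
1#3#workflow_1_2#3#session_2#2023-04-02#FDR#2"""

fields = ["batch_id", "workflow_id", "workflow_name", "session_id",
          "session_name", "run_date", "flow_name", "flow_id"]

# One dict per line, keyed by field name.
rows = [dict(zip(fields, line.strip().split("#"))) for line in sample.splitlines()]

# Pick the row matching this script, e.g. workflow_1 / session_1 for session_1.py.
mine = next(r for r in rows
            if r["workflow_name"] == "workflow_1" and r["session_name"] == "session_1")
batch_id = mine["batch_id"]
```

Each value in <code>mine</code> is a string; convert the numeric ones with <code>int()</code> if needed.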
|
<python>
|
2023-04-03 21:39:26
| 3
| 763
|
nmr
|
75,923,928
| 12,297,666
|
Check duplicated indices for each subset of values in pandas dataframe
|
<p>I have the following dataframe:</p>
<pre><code>import pandas as pd
df_test = pd.DataFrame(data=[['AP1', 'House1'],
['AP1', 'House1'],
['AP2', 'House1'],
['AP3', 'House2'],
['AP4','House2'],
['AP5', 'House2']],
columns=['AP', 'House'],
index=[0, 1, 2, 0, 1, 1])
</code></pre>
<p>I need to check each subset of values of a column and see whether there are duplicated indices. For example, in column <code>House</code>, we have three entries of <code>House1</code> and no duplicated indices. But for entry <code>House2</code> we have one duplicated index, <code>1</code>.</p>
<p>I have tried this:</p>
<pre><code>print(f'{df_test.index.duplicated().sum()} repeated entries')
</code></pre>
<p>But this gives <code>3</code> duplicated entries, since it does not consider each value of the column separately.</p>
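One way (a sketch) is to group by the column first and count duplicated indices inside each group, so each <code>House</code> is checked in isolation:

```python
import pandas as pd

df_test = pd.DataFrame(
    data=[["AP1", "House1"], ["AP1", "House1"], ["AP2", "House1"],
          ["AP3", "House2"], ["AP4", "House2"], ["AP5", "House2"]],
    columns=["AP", "House"],
    index=[0, 1, 2, 0, 1, 1],
)

# For each House value, count how many index labels repeat within that group.
dups = df_test.groupby("House").apply(lambda g: int(g.index.duplicated().sum()))
```

The result is a Series indexed by <code>House</code>: <code>0</code> for <code>House1</code> and <code>1</code> for <code>House2</code>.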
|
<python><pandas>
|
2023-04-03 21:31:42
| 2
| 679
|
Murilo
|
75,923,812
| 2,178,850
|
Error when trying to concatenate lists with pd.concat
|
<p>I have a folder with 100 PDF files. Page 1 of all PDF files contains a table that I am extracting. Then I am concatenating all the tables into a dataframe and writing it as a CSV file. However, I am getting an error while concatenating.</p>
<pre class="lang-py prettyprint-override"><code>import os
import camelot
import pandas as pd
import PyPDF2
import tabula
# Set the directory path where the PDF files are located
dir_path = "my/path/"
# Create an empty list to store the tables
tables = []
# Loop through each file in the directory
for filename in os.listdir(dir_path):
# Check if the file is a PDF file
if filename.endswith(".pdf"):
# Open the PDF file
with open(os.path.join(dir_path, filename), "rb") as pdf_file:
# Create a PDF reader object
pdf_reader = PyPDF2.PdfFileReader(pdf_file)
# Get the first page of the PDF file
page = pdf_reader.getPage(0)
# Extract the table from the first page using tabula-py
table = tabula.read_pdf(pdf_file, pages=1, pandas_options={"header": True})
print(table)
# Append the table to the tables list
tables.append(table)
# Concatenate all tables into a single DataFrame
df = pd.concat(tables)
# Write the DataFrame to a CSV file
df.to_csv("Output.csv", index=False)
</code></pre>
<blockquote>
<p>TypeError: cannot concatenate object of type '<class 'list'>'; only Series and DataFrame objs are valid</p>
</blockquote>
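The error arises because <code>tabula.read_pdf</code> returns a <em>list</em> of DataFrames per call, so <code>tables</code> ends up a list of lists. A sketch of the fix (with stand-in DataFrames instead of real PDFs): flatten before <code>pd.concat</code>.

```python
import pandas as pd

# Stand-ins for what tabula.read_pdf(...) returns: one list of DataFrames per PDF.
tables = [
    [pd.DataFrame({"col": [1, 2]})],
    [pd.DataFrame({"col": [3]})],
]

# Flatten the list of lists, then concatenate the DataFrames themselves.
flat = [df for per_pdf in tables for df in per_pdf]
combined = pd.concat(flat, ignore_index=True)
```

Equivalently, replace <code>tables.append(table)</code> with <code>tables.extend(table)</code> inside the loop so <code>tables</code> stays a flat list of DataFrames.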
|
<python><pandas><concatenation><tabula-py>
|
2023-04-03 21:12:52
| 1
| 669
|
akang
|
75,923,450
| 5,032,387
|
How memory consumption works when passing large objects to functions
|
<p>I would like to understand how memory management works when an argument is passed to a function in Python. The motivation for this question is that I am passing a dictionary with many keys to wrapper functions as my code progresses. The wrapper functions take in some arguments, write new key-value pairs to the dictionary, and return it. I know it's best practice for functions to take in only the arguments they use. But let's assume it's easier to keep passing this dictionary and adding to it for the sake of code readability and docstring brevity.</p>
<p>I would like to understand the following. When I pass this large dictionary to the function, is more memory consumed than if I were to pass only the values the function uses? In other words, does Python create a copy of the input argument, thus taking up more memory than necessary? If not, then if I were to pass only the necessary arguments, does Python still hold the dictionary in memory so that we can work with it after the particular function finishes executing?</p>
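CPython passes references, not copies, so handing a large dict to a function adds only one new name for the same object. A quick check:

```python
big = {f"key_{i}": i for i in range(100_000)}

def add_pair(d):
    # `d` is just another reference to the same dict -- no copy is made.
    d["new_key"] = "new_value"
    return d

result = add_pair(big)
assert result is big  # same object: the function mutated the original in place
```

The dict stays alive as long as any reference to it exists, regardless of which arguments were passed; only an explicit <code>dict(d)</code> or <code>copy.deepcopy(d)</code> duplicates the storage.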
|
<python><memory-management>
|
2023-04-03 20:17:35
| 0
| 3,080
|
matsuo_basho
|
75,923,324
| 5,859,583
|
python separating values
|
<p>I am getting a response like this <code>id=51555263943&code=q15cd225s6s8</code> and I want to put the value of <code>id</code> and the value of <code>code</code> into separate strings.
How can I do this in Python (I'm using Selenium), maybe with regex, which I don't completely understand?</p>
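No regex is needed: the response is a standard query string, which the standard library can parse. A sketch, assuming the response is already held in a Python string:

```python
from urllib.parse import parse_qs

response = "id=51555263943&code=q15cd225s6s8"

# parse_qs maps each key to a list of values (keys can repeat in query strings).
params = parse_qs(response)
id_value = params["id"][0]
code_value = params["code"][0]
```

For this simple two-key case, <code>dict(pair.split("=") for pair in response.split("&"))</code> works as well.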
|
<python>
|
2023-04-03 19:56:16
| 3
| 589
|
CatChMeIfUCan
|