| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,288,965
| 10,859,114
|
How to get unlimited multiple options from typer.Option with a custom divider
|
<p>I want to split an unlimited number of values on a custom divider in <code>typer.Option</code>. Below is an example of what I need:</p>
<pre class="lang-bash prettyprint-override"><code>$ python main.py add -C "first one","second one","third"
</code></pre>
<p>and output should be:</p>
<pre class="lang-py prettyprint-override"><code>["first one","second one","third"]
</code></pre>
<p>Is there any way to do this with Typer and Python?</p>
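A hedged sketch of one approach: the shell collapses `"first one","second one","third"` into the single argument `first one,second one,third`, so a plain callback can split it back apart. The Typer wiring is shown only as a comment (it assumes Typer's `callback` hook, which receives the raw option value); the split helper itself is stdlib-only.

```python
def split_on_divider(value: str, divider: str = ",") -> list[str]:
    # The shell joins "first one","second one","third" into one
    # argument: 'first one,second one,third'. Split it back apart.
    return [part.strip() for part in value.split(divider)]

# Hypothetical Typer wiring (not executed in this sketch):
#   import typer
#   app = typer.Typer()
#   @app.command()
#   def add(c: str = typer.Option(..., "-C", callback=split_on_divider)):
#       print(c)

print(split_on_divider("first one,second one,third"))
```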
|
<python><command-line-interface><typer>
|
2023-01-30 19:01:30
| 1
| 470
|
Alirezaarabi
|
75,288,946
| 6,027,879
|
python iterate yaml with lookup
|
<p>I have this sample YAML:</p>
<pre><code>data:
- name: data1
sourceType: aws
sourceSpecifier: acme/data123.zip
- name: data2
sourceType: aws
sourceSpecifier: acme/data234.zip
- name: data3
sourceType: webdav
sourceSpecifier: acme/data334.zip
sources:
- type: aws
baseUrl: example.s3.us-east-2.amazonaws.com
- type: webdav
baseUrl: https://internal-acme.example.com/
</code></pre>
<p>Please share a Python script that will give this output:</p>
<ul>
<li>if data1, source url will be <code>example.s3.us-east-2.amazonaws.com/acme/data123.zip</code></li>
<li>if data3, source url will be <code>https://internal-acme.example.com/acme/data334.zip</code></li>
</ul>
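A hedged sketch of the lookup logic: assuming `yaml.safe_load` has already parsed the document into the nested structure below (the dict literal mirrors the question's YAML so the sketch stays self-contained), a dict keyed by `type` joins each data entry to its source's base URL.

```python
# Structure as yaml.safe_load(...) would return it for the sample above.
doc = {
    "data": [
        {"name": "data1", "sourceType": "aws", "sourceSpecifier": "acme/data123.zip"},
        {"name": "data3", "sourceType": "webdav", "sourceSpecifier": "acme/data334.zip"},
    ],
    "sources": [
        {"type": "aws", "baseUrl": "example.s3.us-east-2.amazonaws.com"},
        {"type": "webdav", "baseUrl": "https://internal-acme.example.com/"},
    ],
}

def source_urls(doc):
    # Index the sources by type once, then join base URL and specifier.
    base = {s["type"]: s["baseUrl"].rstrip("/") for s in doc["sources"]}
    return {d["name"]: base[d["sourceType"]] + "/" + d["sourceSpecifier"]
            for d in doc["data"]}

print(source_urls(doc))
```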
|
<python><yaml>
|
2023-01-30 18:59:13
| 1
| 406
|
hare krshn
|
75,288,863
| 10,796,158
|
What is the default behavior of index_col in Pandas' read_csv?
|
<p>In the pandas <a href="https://pandas.pydata.org/docs/user_guide/io.html#column-and-index-locations-and-names" rel="nofollow noreferrer">docs</a>, for the function <code>read_csv</code>, I'm trying to understand what the following explanation about the behavior of the function is when <code>index_col</code> is set to its default value, <code>None</code>:</p>
<blockquote>
<p>The default value of None instructs pandas to guess. If the number of
fields in the column header row is equal to the number of fields in
the body of the data file, then a default index is used. If it is
larger, then the first columns are used as index so that the remaining
number of fields in the body are equal to the number of fields in the
header.</p>
<p>The first row after the header is used to determine the number of
columns, which will go into the index. If the subsequent rows contain
less columns than the first row, they are filled with NaN.</p>
</blockquote>
<p>So I came up with the following toy example:</p>
<pre><code>with open("io_tools_example_index.txt", "w") as f:
f.write("pandas, koalas, lizards, kangaroos\n1,2,3\n4,5,6")
</code></pre>
<p>When I do <code>pd.read_csv("io_tools_example_index.txt")</code>, I get:</p>
<p><a href="https://i.sstatic.net/tspBA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tspBA.png" alt="enter image description here" /></a></p>
<p>whereas based on their explanation, I would have expected Pandas to use the <code>pandas</code> column as the index since the number of fields in the column header is larger than the number of fields in the remaining lines. What am I missing here?</p>
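For contrast, a minimal sketch of the case the quoted docs actually describe: it is the *body* that must have more fields than the header for index inference to kick in. When the body rows carry an extra leading field, that column becomes the index; in the question's file the header is the longer side, so pandas instead pads the short rows.

```python
import io
import pandas as pd

# Body rows have one MORE field than the header row: the extra
# leading (unnamed) column is then used as the index.
csv = "koalas,lizards,kangaroos\n1,2,3,4\n5,6,7,8"
df = pd.read_csv(io.StringIO(csv))
print(df)
```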
|
<python><pandas>
|
2023-01-30 18:51:06
| 1
| 1,682
|
An Ignorant Wanderer
|
75,288,775
| 683,233
|
How to solve this error while installing libraries in Python?
|
<p>While running the command <code>pip install simpletransformers datasets tqdm pandas</code>, I'm getting errors. I'm attaching the logs.</p>
<pre><code>Collecting simpletransformers
Using cached simpletransformers-0.63.9-py3-none-any.whl (250 kB)
Collecting datasets
Using cached datasets-2.9.0-py3-none-any.whl (462 kB)
Collecting tqdm
Using cached tqdm-4.64.1-py2.py3-none-any.whl (78 kB)
Collecting pandas
Using cached pandas-1.5.3-cp311-cp311-win_amd64.whl (10.3 MB)
Collecting numpy
Using cached numpy-1.24.1-cp311-cp311-win_amd64.whl (14.8 MB)
Requirement already satisfied: requests in d:\program files\python\lib\site-packages (from simpletransformers) (2.28.2)
Collecting regex
Using cached regex-2022.10.31-cp311-cp311-win_amd64.whl (267 kB)
Collecting transformers>=4.6.0
Using cached transformers-4.26.0-py3-none-any.whl (6.3 MB)
Collecting scipy
Using cached scipy-1.10.0-cp311-cp311-win_amd64.whl (42.2 MB)
Collecting scikit-learn
Using cached scikit_learn-1.2.1-cp311-cp311-win_amd64.whl (8.2 MB)
Collecting seqeval
Using cached seqeval-1.2.2-py3-none-any.whl
Collecting tensorboard
Using cached tensorboard-2.11.2-py3-none-any.whl (6.0 MB)
Requirement already satisfied: tokenizers in d:\program files\python\lib\site-packages (from simpletransformers) (0.13.2)
Collecting wandb>=0.10.32
Using cached wandb-0.13.9-py2.py3-none-any.whl (2.0 MB)
Collecting streamlit
Using cached streamlit-1.17.0-py2.py3-none-any.whl (9.3 MB)
Collecting sentencepiece
Using cached sentencepiece-0.1.97.tar.gz (524 kB)
Preparing metadata (setup.py): started
Preparing metadata (setup.py): finished with status 'done'
Collecting pyarrow>=6.0.0
Using cached pyarrow-11.0.0-cp311-cp311-win_amd64.whl (20.5 MB)
Collecting dill<0.3.7
Using cached dill-0.3.6-py3-none-any.whl (110 kB)
Collecting xxhash
Using cached xxhash-3.2.0-cp311-cp311-win_amd64.whl (30 kB)
Collecting multiprocess
Using cached multiprocess-0.70.14-py310-none-any.whl (134 kB)
Collecting fsspec[http]>=2021.11.1
Using cached fsspec-2023.1.0-py3-none-any.whl (143 kB)
Collecting aiohttp
Using cached aiohttp-3.8.3-cp311-cp311-win_amd64.whl (317 kB)
Collecting huggingface-hub<1.0.0,>=0.2.0
Using cached huggingface_hub-0.12.0-py3-none-any.whl (190 kB)
Requirement already satisfied: packaging in d:\program files\python\lib\site-packages (from datasets) (23.0)
Collecting responses<0.19
Using cached responses-0.18.0-py3-none-any.whl (38 kB)
Requirement already satisfied: pyyaml>=5.1 in d:\program files\python\lib\site-packages (from datasets) (6.0)
Requirement already satisfied: colorama in d:\program files\python\lib\site-packages (from tqdm) (0.4.6)
Requirement already satisfied: python-dateutil>=2.8.1 in d:\program files\python\lib\site-packages (from pandas) (2.8.2)
Requirement already satisfied: pytz>=2020.1 in d:\program files\python\lib\site-packages (from pandas) (2022.7.1)
Requirement already satisfied: attrs>=17.3.0 in d:\program files\python\lib\site-packages (from aiohttp->datasets) (22.2.0)
Collecting charset-normalizer<3.0,>=2.0
Using cached charset_normalizer-2.1.1-py3-none-any.whl (39 kB)
Collecting multidict<7.0,>=4.5
Using cached multidict-6.0.4-cp311-cp311-win_amd64.whl (28 kB)
Collecting async-timeout<5.0,>=4.0.0a3
Using cached async_timeout-4.0.2-py3-none-any.whl (5.8 kB)
Collecting yarl<2.0,>=1.0
Using cached yarl-1.8.2-cp311-cp311-win_amd64.whl (55 kB)
Collecting frozenlist>=1.1.1
Using cached frozenlist-1.3.3-cp311-cp311-win_amd64.whl (32 kB)
Collecting aiosignal>=1.1.2
Using cached aiosignal-1.3.1-py3-none-any.whl (7.6 kB)
Collecting filelock
Using cached filelock-3.9.0-py3-none-any.whl (9.7 kB)
Collecting typing-extensions>=3.7.4.3
Using cached typing_extensions-4.4.0-py3-none-any.whl (26 kB)
Requirement already satisfied: six>=1.5 in d:\program files\python\lib\site-packages (from python-dateutil>=2.8.1->pandas) (1.16.0)
Requirement already satisfied: idna<4,>=2.5 in d:\program files\python\lib\site-packages (from requests->simpletransformers) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in d:\program files\python\lib\site-packages (from requests->simpletransformers) (1.26.14)
Requirement already satisfied: certifi>=2017.4.17 in d:\program files\python\lib\site-packages (from requests->simpletransformers) (2022.12.7)
Collecting Click!=8.0.0,>=7.0
Using cached click-8.1.3-py3-none-any.whl (96 kB)
Collecting GitPython>=1.0.0
Using cached GitPython-3.1.30-py3-none-any.whl (184 kB)
Requirement already satisfied: psutil>=5.0.0 in d:\program files\python\lib\site-packages (from wandb>=0.10.32->simpletransformers) (5.9.4)
Collecting sentry-sdk>=1.0.0
Using cached sentry_sdk-1.14.0-py2.py3-none-any.whl (178 kB)
Collecting docker-pycreds>=0.4.0
Using cached docker_pycreds-0.4.0-py2.py3-none-any.whl (9.0 kB)
Collecting pathtools
Using cached pathtools-0.1.2-py3-none-any.whl
Collecting setproctitle
Using cached setproctitle-1.3.2-cp311-cp311-win_amd64.whl (11 kB)
Requirement already satisfied: setuptools in d:\program files\python\lib\site-packages (from wandb>=0.10.32->simpletransformers) (65.5.0)
Collecting appdirs>=1.4.3
Using cached appdirs-1.4.4-py2.py3-none-any.whl (9.6 kB)
Collecting protobuf!=4.21.0,<5,>=3.19.0
Using cached protobuf-4.21.12-cp310-abi3-win_amd64.whl (527 kB)
Collecting joblib>=1.1.1
Using cached joblib-1.2.0-py3-none-any.whl (297 kB)
Collecting threadpoolctl>=2.0.0
Using cached threadpoolctl-3.1.0-py3-none-any.whl (14 kB)
Collecting altair>=3.2.0
Using cached altair-4.2.2-py3-none-any.whl (813 kB)
Collecting blinker>=1.0.0
Using cached blinker-1.5-py2.py3-none-any.whl (12 kB)
Collecting cachetools>=4.0
Using cached cachetools-5.3.0-py3-none-any.whl (9.3 kB)
Collecting importlib-metadata>=1.4
Using cached importlib_metadata-6.0.0-py3-none-any.whl (21 kB)
Collecting pillow>=6.2.0
Using cached Pillow-9.4.0-cp311-cp311-win_amd64.whl (2.5 MB)
Collecting protobuf!=4.21.0,<5,>=3.19.0
Using cached protobuf-3.20.3-py2.py3-none-any.whl (162 kB)
Collecting pympler>=0.9
Using cached Pympler-1.0.1-py3-none-any.whl (164 kB)
Collecting rich>=10.11.0
Using cached rich-13.3.1-py3-none-any.whl (239 kB)
Collecting semver
Using cached semver-2.13.0-py2.py3-none-any.whl (12 kB)
Collecting toml
Using cached toml-0.10.2-py2.py3-none-any.whl (16 kB)
Collecting tzlocal>=1.1
Using cached tzlocal-4.2-py3-none-any.whl (19 kB)
Collecting validators>=0.2
Using cached validators-0.20.0-py3-none-any.whl
Collecting pydeck>=0.1.dev5
Using cached pydeck-0.8.0-py2.py3-none-any.whl (4.7 MB)
Requirement already satisfied: tornado>=5.0 in d:\program files\python\lib\site-packages (from streamlit->simpletransformers) (6.2)
Collecting watchdog
Using cached watchdog-2.2.1-py3-none-win_amd64.whl (78 kB)
Collecting absl-py>=0.4
Using cached absl_py-1.4.0-py3-none-any.whl (126 kB)
Collecting grpcio>=1.24.3
Using cached grpcio-1.51.1-cp311-cp311-win_amd64.whl (3.7 MB)
Collecting google-auth<3,>=1.6.3
Using cached google_auth-2.16.0-py2.py3-none-any.whl (177 kB)
Collecting google-auth-oauthlib<0.5,>=0.4.1
Using cached google_auth_oauthlib-0.4.6-py2.py3-none-any.whl (18 kB)
Collecting markdown>=2.6.8
Using cached Markdown-3.4.1-py3-none-any.whl (93 kB)
Collecting tensorboard-data-server<0.7.0,>=0.6.0
Using cached tensorboard_data_server-0.6.1-py3-none-any.whl (2.4 kB)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in d:\program files\python\lib\site-packages (from tensorboard->simpletransformers) (1.8.1)
Collecting werkzeug>=1.0.1
Using cached Werkzeug-2.2.2-py3-none-any.whl (232 kB)
Collecting wheel>=0.26
Using cached wheel-0.38.4-py3-none-any.whl (36 kB)
Collecting entrypoints
Using cached entrypoints-0.4-py3-none-any.whl (5.3 kB)
Requirement already satisfied: jinja2 in d:\program files\python\lib\site-packages (from altair>=3.2.0->streamlit->simpletransformers) (3.1.2)
Requirement already satisfied: jsonschema>=3.0 in d:\program files\python\lib\site-packages (from altair>=3.2.0->streamlit->simpletransformers) (4.17.3)
Collecting toolz
Using cached toolz-0.12.0-py3-none-any.whl (55 kB)
Collecting gitdb<5,>=4.0.1
Using cached gitdb-4.0.10-py3-none-any.whl (62 kB)
Collecting pyasn1-modules>=0.2.1
Using cached pyasn1_modules-0.2.8-py2.py3-none-any.whl (155 kB)
Collecting rsa<5,>=3.1.4
Using cached rsa-4.9-py3-none-any.whl (34 kB)
Collecting requests-oauthlib>=0.7.0
Using cached requests_oauthlib-1.3.1-py2.py3-none-any.whl (23 kB)
Collecting zipp>=0.5
Using cached zipp-3.12.0-py3-none-any.whl (6.6 kB)
Collecting markdown-it-py<3.0.0,>=2.1.0
Using cached markdown_it_py-2.1.0-py3-none-any.whl (84 kB)
Requirement already satisfied: pygments<3.0.0,>=2.14.0 in d:\program files\python\lib\site-packages (from rich>=10.11.0->streamlit->simpletransformers) (2.14.0)
Collecting pytz-deprecation-shim
Using cached pytz_deprecation_shim-0.1.0.post0-py2.py3-none-any.whl (15 kB)
Collecting tzdata
Using cached tzdata-2022.7-py2.py3-none-any.whl (340 kB)
Requirement already satisfied: decorator>=3.4.0 in d:\program files\python\lib\site-packages (from validators>=0.2->streamlit->simpletransformers) (5.1.1)
Requirement already satisfied: MarkupSafe>=2.1.1 in d:\program files\python\lib\site-packages (from werkzeug>=1.0.1->tensorboard->simpletransformers) (2.1.2)
Collecting smmap<6,>=3.0.1
Using cached smmap-5.0.0-py3-none-any.whl (24 kB)
Requirement already satisfied: pyrsistent!=0.17.0,!=0.17.1,!=0.17.2,>=0.14.0 in d:\program files\python\lib\site-packages (from jsonschema>=3.0->altair>=3.2.0->streamlit->simpletransformers) (0.19.3)
Collecting mdurl~=0.1
Using cached mdurl-0.1.2-py3-none-any.whl (10.0 kB)
Collecting pyasn1<0.5.0,>=0.4.6
Using cached pyasn1-0.4.8-py2.py3-none-any.whl (77 kB)
Collecting oauthlib>=3.0.0
Using cached oauthlib-3.2.2-py3-none-any.whl (151 kB)
Installing collected packages: sentencepiece, pyasn1, pathtools, appdirs, zipp, xxhash, wheel, werkzeug, watchdog, validators, tzdata, typing-extensions, tqdm, toolz, toml, threadpoolctl, tensorboard-data-server, smmap, setproctitle, sentry-sdk, semver, rsa, regex, pympler, pyasn1-modules, protobuf, pillow, oauthlib, numpy, multidict, mdurl, markdown, joblib, grpcio, fsspec, frozenlist, filelock, entrypoints, docker-pycreds, dill, Click, charset-normalizer, cachetools, blinker, async-timeout, absl-py, yarl, scipy, pytz-deprecation-shim, pydeck, pyarrow, pandas, multiprocess, markdown-it-py, importlib-metadata, google-auth, gitdb, aiosignal, tzlocal, scikit-learn, rich, responses, requests-oauthlib, huggingface-hub, GitPython, altair, aiohttp, wandb, transformers, streamlit, seqeval, google-auth-oauthlib, tensorboard, datasets, simpletransformers
Running setup.py install for sentencepiece: started
Running setup.py install for sentencepiece: finished with status 'error'
Note: you may need to restart the kernel to use updated packages.
DEPRECATION: sentencepiece is being installed using the legacy 'setup.py install' method, because it does not have a 'pyproject.toml' and the 'wheel' package is not installed. pip 23.1 will enforce this behaviour change. A possible replacement is to enable the '--use-pep517' option. Discussion can be found at https://github.com/pypa/pip/issues/8559
error: subprocess-exited-with-error
Running setup.py install for sentencepiece did not run successfully.
exit code: 1
[15 lines of output]
running install
D:\Program Files\Python\Lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_py
creating build
creating build\lib.win-amd64-cpython-311
creating build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/__init__.py -> build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/_version.py -> build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/sentencepiece_model_pb2.py -> build\lib.win-amd64-cpython-311\sentencepiece
copying src\sentencepiece/sentencepiece_pb2.py -> build\lib.win-amd64-cpython-311\sentencepiece
running build_ext
building 'sentencepiece._sentencepiece' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure
Encountered error while trying to install package.
sentencepiece
note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.
</code></pre>
<p><a href="https://i.sstatic.net/wlx2R.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/wlx2R.png" alt="screenshot" /></a>
I'm new to Python; please help. System: Windows 10, with MS Visual C++ installed and updated.</p>
|
<python><windows><jupyter-notebook>
|
2023-01-30 18:42:13
| 2
| 17,588
|
Sourav
|
75,288,528
| 1,715,579
|
Specify DC cover with espresso_exprs
|
<p>I'm just getting started learning pyeda, and I'm fairly new to Python in general. I have a very complex partially-defined Boolean expression (80~140 variables, 10K terms) that's too big to express as a truth table, but I can express it pretty easily as two DNFs (one DNF that describes the defined ON and OFF sets, and one DNF that describes the DC set). My problem is that pyeda offers only two APIs:</p>
<ul>
<li><a href="https://pyeda.readthedocs.io/en/latest/2llm.html#minimize-boolean-expressions" rel="nofollow noreferrer"><code>espresso_exprs</code></a> accepts a list of DNF's and seems to assume these are fully-defined expressions because there's apparently no consideration of DC terms.</li>
<li><a href="https://pyeda.readthedocs.io/en/latest/2llm.html#minimize-truth-tables" rel="nofollow noreferrer"><code>espresso_tts</code></a> accepts a linearized truth table, potentially with DC terms.</li>
</ul>
<p>Is there any way to specify a DC DNF or cover when calling espresso with pyeda?</p>
<p>My current workaround is to solve for the sum of both DNFs and then try to drop any terms that only cover DCs, but I'm not sure that's working correctly yet, and in any case this approach is probably relatively inefficient.</p>
|
<python><boolean-logic><minimization><truthtable><pyeda>
|
2023-01-30 18:16:07
| 0
| 149,556
|
p.s.w.g
|
75,288,451
| 15,500,727
|
Change column value with arithmetic sequences using df.loc in pandas
|
<p>Suppose I have the following dataframe :</p>
<pre><code>data = {"age":[2,3,2,5,9,12,20,43,55,60],'alpha' : [0,0,0,0,0,0,0,0,0,0]}
df = pd.DataFrame(data)
</code></pre>
<p>I want to change the value of column <code>alpha</code> based on column <code>age</code> using <code>df.loc</code> and an arithmetic sequence, but I get a syntax error:</p>
<pre><code>df.loc[((df.age <=4)) , "alpha"] = ".4"
df.loc[((df.age >= 5)) & ((df.age <= 20)), "alpha"] = 0.4 + (1 - 0.4)*((df$age - 4)/(20 - 4))
df.loc[((df.age > 20)) , "alpha"] = "1"
</code></pre>
<p>Thank you in advance</p>
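A hedged sketch of one working version: the R-style `df$age` becomes `df.age`, the string values like `".4"` become floats so the column keeps a single numeric dtype, and the middle assignment applies the same mask to `age` so the arithmetic sequence aligns row by row.

```python
import pandas as pd

df = pd.DataFrame({"age": [2, 3, 2, 5, 9, 12, 20, 43, 55, 60],
                   "alpha": [0.0] * 10})

# Use df.age (not R-style df$age), and assign floats, not strings.
df.loc[df.age <= 4, "alpha"] = 0.4
mask = (df.age >= 5) & (df.age <= 20)
df.loc[mask, "alpha"] = 0.4 + (1 - 0.4) * ((df.age[mask] - 4) / (20 - 4))
df.loc[df.age > 20, "alpha"] = 1.0
print(df)
```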
|
<python><pandas>
|
2023-01-30 18:09:27
| 3
| 485
|
mehmo
|
75,288,401
| 14,353,779
|
Split a string column based on a logic into two new columns in pandas
|
<p>I have a dataframe <code>df</code> :-</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
</tr>
</thead>
<tbody>
<tr>
<td>9F,5F/6F,3T,1T</td>
</tr>
<tr>
<td>18F/19F,2F,9T,4T</td>
</tr>
<tr>
<td>17F/12F</td>
</tr>
<tr>
<td>3T</td>
</tr>
<tr>
<td>3T,2F/3F</td>
</tr>
<tr>
<td>No Types</td>
</tr>
</tbody>
</table>
</div>
<p>I want to generate two additional columns based on the <code>Type</code> column: all parts ending with F separated into one column, and all parts ending with T into another. If <code>Type</code> is <code>No Types</code>, then put <code>No Types</code> in both columns:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Type</th>
<th>Fcol</th>
<th>Tcol</th>
</tr>
</thead>
<tbody>
<tr>
<td>9F,5F/6F,3T,1T</td>
<td>9F,5F/6F</td>
<td>3T,1T</td>
</tr>
<tr>
<td>18F/19F,2F,9T,4T</td>
<td>18F/19F,2F</td>
<td>9T,4T</td>
</tr>
<tr>
<td>17F/12F</td>
<td>17F/12F</td>
<td>Absent</td>
</tr>
<tr>
<td>3T</td>
<td>Absent</td>
<td>3T</td>
</tr>
<tr>
<td>3T,2F/3F</td>
<td>2F/3F</td>
<td>3T</td>
</tr>
<tr>
<td>No Types</td>
<td>No Types</td>
<td>No Types</td>
</tr>
</tbody>
</table>
</div>
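One hedged sketch: split each cell on commas, route the parts by their final letter, and fall back to `Absent`/`No Types` (this assumes every comma-separated part ends in F or T, as in the sample).

```python
import pandas as pd

df = pd.DataFrame({"Type": ["9F,5F/6F,3T,1T", "18F/19F,2F,9T,4T",
                            "17F/12F", "3T", "3T,2F/3F", "No Types"]})

def split_types(s):
    # Route comma-separated parts by their last letter (F vs T).
    if s == "No Types":
        return pd.Series({"Fcol": "No Types", "Tcol": "No Types"})
    parts = s.split(",")
    fs = [p for p in parts if p.endswith("F")]
    ts = [p for p in parts if p.endswith("T")]
    return pd.Series({"Fcol": ",".join(fs) or "Absent",
                      "Tcol": ",".join(ts) or "Absent"})

df[["Fcol", "Tcol"]] = df["Type"].apply(split_types)
print(df)
```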
|
<python><pandas>
|
2023-01-30 18:04:00
| 1
| 789
|
Scope
|
75,288,377
| 9,262,339
|
Celery: 'function' object has no attribute 'apply_async'
|
<p>I get the error <code>'function' object has no attribute 'apply_async'</code> when trying to run a Celery task.</p>
<p><strong>db_sinc.py</strong></p>
<pre><code>def create_or_update_google_creative():
...logic
</code></pre>
<p><strong>tasks.py</strong></p>
<pre><code>@shared_task
def run_create_or_update_google_creative():
print('Task running')
try:
result = create_or_update_google_creative()
return 201, {'message': 'Database successfully updated'}
except Exception as ex:
return 500, {'message': 'Database query error'}
</code></pre>
<p><strong>views.py</strong></p>
<pre><code>def run_db_sinc():
result = run_create_or_update_google_creative.apply_async()
print('+++++++++')
print(result.get(timeout=1))
</code></pre>
<p>When I run <code>run_db_sinc</code>, an error occurs. How do I fix it?</p>
<p>P.S.</p>
<p>After changing the code in line with sahasrara62's comment, the function runs, but in the end I get:</p>
<pre><code>tuple' object has no attribute 'apply_async'
gpanel_1 | Traceback (most recent call last):
gpanel_1 | File "/usr/local/lib/python3.10/site-packages/ninja/operation.py", line 99, in run
gpanel_1 | result = self.view_func(request, **values)
gpanel_1 | File "/app/apps/creative_performer/api.py", line 63, in save_creative
gpanel_1 | return run_db_sinc()
gpanel_1 | File "/app/apps/creative_performer/views.py", line 136, in run_db_sinc
gpanel_1 | result = x.apply_async()
gpanel_1 | AttributeError: 'tuple' object has no attribute 'apply_async'
</code></pre>
<p>And I can't see the result of</p>
<pre><code> print('+++++++++')
print(result.get(timeout=1))
</code></pre>
|
<python><celery>
|
2023-01-30 18:01:38
| 1
| 3,322
|
Jekson
|
75,288,354
| 9,983,652
|
how to slice a string dynamically?
|
<p>I'd like to keep the first few characters of a string using a[:-3] etc., but with the -3 as a variable, so it could also be a[:-1]; I will slice the string dynamically with a[:-b]. With this format, though, how do I keep all the characters, as a[:] does? I don't want to use a[:len(a)], because I am passing the variable into a function where the slicing is done, so I don't know the string a outside the function. Thanks.</p>
<pre><code>def slicing_string(slice_variable):
a='mytext'
return a[:slice_variable]
b=slicing_string(-3)
b
</code></pre>
<p>How can I keep all the characters using this function without knowing the string a?</p>
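One common answer, sketched: `a[:None]` is equivalent to `a[:]`, so passing `None` keeps the whole string without the caller ever needing `len(a)`.

```python
def slicing_string(slice_variable):
    a = "mytext"
    # a[:None] behaves exactly like a[:], so None means "keep everything".
    return a[:slice_variable]

print(slicing_string(-3))    # drops the last three characters
print(slicing_string(None))  # keeps the full string
```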
|
<python>
|
2023-01-30 17:59:25
| 2
| 4,338
|
roudan
|
75,288,232
| 14,640,064
|
How to detect horizontal line (stagnation) in data?
|
<p>How can I detect horizontal lines in my data in Python?</p>
<p>I have used the function <code>scipy.signal.find_peaks()</code> to find local minima and maxima, and I can use that to separate ascending and descending parts. But I need to isolate the peaks from the flat lines (marked with red circles in the image).</p>
<p>What method should I use? Is there any library that could do that?</p>
<p><a href="https://i.sstatic.net/oILCX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oILCX.png" alt="Graph with red markers around flat horizontal lines" /></a></p>
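A hedged sketch of a plateau detector in plain NumPy (no SciPy needed): mark runs where consecutive differences stay below a tolerance, and keep runs of at least a minimum length. `tol` and `min_len` are assumptions that would need tuning to the real signal's noise level.

```python
import numpy as np

def flat_regions(y, tol=1e-3, min_len=3):
    # Indices where the absolute first difference is below tol
    # are "flat"; collect maximal runs of at least min_len points.
    flat = np.abs(np.diff(y)) < tol
    regions, start = [], None
    for i, f in enumerate(flat):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if i - start + 1 >= min_len:
                regions.append((start, i))
            start = None
    if start is not None and len(flat) - start + 1 >= min_len:
        regions.append((start, len(y) - 1))
    return regions

print(flat_regions([0, 1, 2, 2, 2, 2, 3, 4]))
```

Regions outside these runs can then be passed to `find_peaks` as before, so the plateaus never masquerade as peaks.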
|
<python><math><scipy><data-science>
|
2023-01-30 17:48:12
| 1
| 705
|
herdek550
|
75,287,824
| 5,539,674
|
Split concatenated functions keeping the delimiters
|
<p>I am trying to split strings containing Python functions, so that the resulting output keeps separate functions as list elements.<br />
<code>s='hello()there()'</code> should be split into <code>['hello()', 'there()']</code><br />
To do so I use a regex lookahead to split on the closing parenthesis, but not at the end of the string.</p>
<p>While the lookahead seems to work, I cannot keep the <code>)</code> in the resulting strings as suggested in various posts. Simply splitting with the regex discards the separator:</p>
<pre><code>import re
s='hello()there()'
t=re.split("\)(?!$)", s)
</code></pre>
<p>This results in: <code>['hello(', 'there()']</code>.</p>
<pre><code>s='hello()there()'
t=re.split("(\))(?!$)", s)
</code></pre>
<p><a href="https://stackoverflow.com/questions/2136556/in-python-how-do-i-split-a-string-and-keep-the-separators">Wrapping the separator as a group</a> results in the <code>)</code> being retained as a separate element: <code>['hello(', ')', 'there()']</code>.
As does <a href="https://stackoverflow.com/questions/21208223/python-how-can-i-include-the-delimiters-in-a-string-split">this approach using the <code>filter()</code> function</a>:</p>
<pre><code>s='hello()there()'
u = list(filter(None, re.split("(\))(?!$)", s)))
</code></pre>
<p>resulting again in the parenthesis as a separate element: <code>['hello(', ')', 'there()']</code></p>
<p>How can I split such a string so that the functions remain intact in the output?</p>
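A hedged sketch of one way out: instead of splitting *on* `)` and losing it, split at the zero-width position *after* each `)` using a lookbehind. Zero-width splits are supported by `re.split` since Python 3.7, and because nothing is consumed, both parentheses stay attached to their function.

```python
import re

s = "hello()there()"

# Split AFTER each ')', not on it: the lookbehind match is zero-width,
# so no character is consumed, and (?!$) avoids a trailing empty piece.
t = re.split(r"(?<=\))(?!$)", s)
print(t)
```

When the call shape is known, `re.findall(r"\w+\(\)", s)` is a simpler alternative.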
|
<python><regex><string><split><python-re>
|
2023-01-30 17:10:18
| 2
| 315
|
O René
|
75,287,819
| 3,755,861
|
'list' object has no attribute 'add_subplot' in matplotlib
|
<p>I have followed the suggestions from another thread, but this code still gives me the 'list' object has no attribute error. What is the correction?</p>
<pre><code>import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
#create data
x = np.array([4.1,7.1,5.2,7.1,0.3, 8.3, 9.7, 8.2])
y = np.array([0.7,0.72,0.5,0.73, 0.65, 0.57, 0.7, 0.8])
#create basic scatterplot
plt.plot(x, y, 'o',markersize=10,c='black')
#obtain m (slope) and b(intercept) of linear regression line
m, b = np.polyfit(x, y, 1)
#add linear regression line to scatterplot
fig = plt.plot(x, m*x+b)
ax = fig.add_subplot(111)
ax.set_ylim([0.45,0.8])
sns.despine(top=True, right=True, left=False, bottom=False)
plt.xlabel('Accuracy', fontsize=22)
plt.ylabel('Score', fontsize=22)
</code></pre>
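A hedged sketch of the fix: `plt.plot` returns a list of `Line2D` objects, not a `Figure`, which is why `.add_subplot` fails on it. Creating the `Figure`/`Axes` pair first and plotting onto the `Axes` avoids the problem; the seaborn `despine` and label calls from the question would follow unchanged. The `Agg` backend line is only so the sketch runs headless.

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, just for this sketch
import numpy as np
import matplotlib.pyplot as plt

x = np.array([4.1, 7.1, 5.2, 7.1, 0.3, 8.3, 9.7, 8.2])
y = np.array([0.7, 0.72, 0.5, 0.73, 0.65, 0.57, 0.7, 0.8])

# plt.plot returns a list of Line2D objects, not a Figure.
# Make the Figure/Axes first, then draw everything on the Axes.
fig, ax = plt.subplots()
ax.plot(x, y, "o", markersize=10, c="black")
m, b = np.polyfit(x, y, 1)
ax.plot(x, m * x + b)
ax.set_ylim([0.45, 0.8])
```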
|
<python><matplotlib>
|
2023-01-30 17:10:00
| 1
| 452
|
Pugl
|
75,287,817
| 8,012,206
|
Psycopg2 - Error during insert due to double quotes
|
<p>I am using Python + Psycopg2 to insert an array of texts; the <code>elems</code> column is of type <code>text[]</code>.</p>
<pre><code>a = ["A ''B'' C"]
cursor.execute(f"""\
INSERT INTO table(elems) \
VALUES (ARRAY{a}::text[]) \
""")
</code></pre>
<p>Error:</p>
<pre><code>column "A ''B'' C" does not exist
</code></pre>
<p>The error above is caused by the double quotes that appear when the Python list is interpolated into the query string.</p>
<p>Using a database tool, the exact query that works is:</p>
<pre><code>INSERT INTO table(elems)
VALUES (ARRAY['A ''B'' C']::text[])
</code></pre>
<p>Now my question: what's the proper way to insert a Python list of strings where an element may contain a single quote?</p>
|
<python><postgresql><psycopg2>
|
2023-01-30 17:09:52
| 1
| 12,242
|
Joseph D.
|
75,287,773
| 9,720,696
|
Getting binary labels on from a dataframe and a list of labels
|
<p>Suppose I have the following list of labels,</p>
<pre><code>labs = ['G1','G2','G3','G4','G5','G6','G7']
</code></pre>
<p>and also suppose that I have the following df:</p>
<pre><code> group entity_label
0 0 G1
1 0 G2
3 1 G5
4 1 G1
5 2 G1
6 2 G2
7 2 G3
</code></pre>
<p>To produce the above df you can use:</p>
<pre><code>df_test = pd.DataFrame({'group': [0,0,0,1,1,2,2,2,2],
'entity_label':['G1','G2','G2','G5','G1','G1','G2','G3','G3']})
df_test.drop_duplicates(subset=['group','entity_label'], keep='first')
</code></pre>
<p>For each group, I want to use a mapping to look up the labels and make a new dataframe with binary labels:</p>
<pre><code> group entity_label_binary
0 0 [1, 1, 0, 0, 0, 0, 0]
1 1 [1, 0, 0, 0, 1, 0, 0]
2 2 [1, 1, 1, 0, 0, 0, 0]
</code></pre>
<p>Namely, for group 0 we have G1 and G2, hence the 1s in the table above, and so on. How can one do this?</p>
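One hedged sketch using `pd.crosstab`: count label occurrences per group, clip the counts to 0/1, and reindex the columns to the full `labs` order so labels absent from a group contribute zeros.

```python
import pandas as pd

labs = ["G1", "G2", "G3", "G4", "G5", "G6", "G7"]
df_test = pd.DataFrame({"group": [0, 0, 0, 1, 1, 2, 2, 2, 2],
                        "entity_label": ["G1", "G2", "G2", "G5", "G1",
                                         "G1", "G2", "G3", "G3"]})

# Group-by-label counts -> 0/1 membership -> columns in labs order.
onehot = (pd.crosstab(df_test["group"], df_test["entity_label"])
            .clip(upper=1)
            .reindex(columns=labs, fill_value=0))
result = pd.DataFrame({"group": onehot.index,
                       "entity_label_binary": onehot.values.tolist()})
print(result)
```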
|
<python><pandas>
|
2023-01-30 17:06:31
| 1
| 1,098
|
Wiliam
|
75,287,621
| 9,074,190
|
TensorFlow random forest get label as prediction output
|
<p>I'm using TensorFlow Decision Forests to predict the suitable crop based on a few parameters. How do I get the <code>predict()</code> method to return the label?</p>
<p>I'm using <a href="https://www.kaggle.com/datasets/atharvaingle/crop-recommendation-dataset" rel="nofollow noreferrer">this</a> dataset for training.</p>
<p>My code:</p>
<pre><code>import tensorflow_decision_forests as tfdf
import tensorflow as tf
import pandas as pd
import numpy as np
df = pd.read_csv("Crop_recommendation.csv")
#TensorFlow dataset
train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df,label="label")
# Train the model
model = tfdf.keras.RandomForestModel()
model.fit(train_ds)
print(model.summary())
pd_serving_dataset = pd.DataFrame({
"N": [83],
"P": [45],
"K" : [30],
"temperature" : [25],
"humidity" : [80.3],
"ph" : [6],
"rainfall" : [200.91],
})
tf_serving_dataset = tfdf.keras.pd_dataframe_to_tf_dataset(pd_serving_dataset)
prediction = model.predict(tf_serving_dataset)
print(prediction)
</code></pre>
<p>My Output</p>
<pre><code>1/1 [==============================] - 0s 38ms/step
[[0. 0. 0. 0. 0.02333334 0.07666666
0.04666667 0. 0.08333332 0. 0. 0.
0. 0. 0. 0. 0. 0.
0. 0. 0.7699994 0. ]]
</code></pre>
<p>Expected output: <code>rice</code></p>
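`predict` returns one probability per class, so the label comes from taking the argmax and mapping it through the class names the model stored at training time; in TF-DF that ordering is available via `model.make_inspector().label_classes()`. A minimal sketch of just the mapping step, with a hypothetical class list standing in for the real one (only NumPy is used here):

```python
import numpy as np

# Hypothetical class order -- in practice this would come from
# model.make_inspector().label_classes() on the trained model.
label_classes = ["maize", "rice", "wheat"]

prediction = np.array([[0.1, 0.7, 0.2]])  # one row of class probabilities
predicted_label = label_classes[int(np.argmax(prediction[0]))]
print(predicted_label)
```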
|
<python><tensorflow><tensorflow-decision-forests>
|
2023-01-30 16:54:10
| 1
| 1,745
|
Nishuthan S
|
75,287,501
| 5,695,336
|
Seems like a bug in Pylance type checking
|
<p>I got this weird error that <code>list[dict[str, str | int]]</code> cannot be assigned to <code>Sequence[dict[str, str | float | int] | None]</code>.</p>
<pre><code>Argument of type "list[dict[str, str | int]]" cannot be assigned to parameter "params" of type "Sequence[dict[str, str | float | int] | None]" in function "batch_call"
"list[dict[str, str | int]]" is incompatible with "Sequence[dict[str, str | float | int] | None]"
TypeVar "_T_co@Sequence" is covariant
Type "dict[str, str | int]" cannot be assigned to type "dict[str, str | float | int] | None"
"dict[str, str | int]" is incompatible with "dict[str, str | float | int]"
TypeVar "_VT@dict" is invariant
Type cannot be assigned to type "None"
</code></pre>
<p>This makes no sense. The list is obviously a subset of the sequence.</p>
|
<python><python-typing><pyright>
|
2023-01-30 16:43:24
| 1
| 2,017
|
Jeffrey Chen
|
75,287,357
| 11,197,301
|
python: apply a function with two arguments to all elements of a matrix
|
<p>Let's say that I have the following function:</p>
<pre><code>def my_func(a,b):
res = a[0] + a[1]*b
return res
</code></pre>
<p>I know how to apply it to one element of a matrix:</p>
<pre><code>import numpy as np
mydata = np.matrix([[1, 2], [3, 4]])
my_par = np.array([1, 2])
res = my_func(my_par,mydata[1,1])
</code></pre>
<p>I would now like to apply it to all the elements of the matrix mydata. I have tried this:</p>
<pre><code>myfunc_vec = np.vectorize(my_func)
res = myfunc_vec(my_par,mydata)
</code></pre>
<p>and I have the following error:</p>
<pre><code>in my_func
res = a[0] + a[1]*b
IndexError: invalid index to scalar variable.
</code></pre>
<p>I believe the error is due to the fact that I pass two arguments to the function.</p>
<p>Is there any way to apply my function to all the elements of the matrix without getting an error?</p>
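Two hedged options, sketched below: tell `np.vectorize` to leave the parameter vector alone via its `excluded` argument, or skip `vectorize` entirely, since the expression already broadcasts (an `np.array` is used in place of the legacy `np.matrix`).

```python
import numpy as np

def my_func(a, b):
    return a[0] + a[1] * b

mydata = np.array([[1, 2], [3, 4]])
my_par = np.array([1, 2])

# excluded=[0] stops vectorize from iterating over the first argument,
# so `a` arrives intact and only `b` is applied element-wise.
myfunc_vec = np.vectorize(my_func, excluded=[0])
res = myfunc_vec(my_par, mydata)

# Equivalent, and much faster: plain NumPy broadcasting.
res2 = my_par[0] + my_par[1] * mydata
print(res)
```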
|
<python><numpy><function><matrix><vectorization>
|
2023-01-30 16:33:47
| 2
| 623
|
diedro
|
75,287,178
| 10,967,961
|
Comparison of two long lists for every combination
|
<p>I have two lists of 170,000 elements each, say list1 and list2, containing strings; the two lists hold the same elements. I would like to compare each element of list1 with all the elements of list2 and get a score of their similarity.</p>
<p>What I am trying to do at the moment is:</p>
<pre><code>%%time
from fuzzywuzzy import fuzz
fuzzy_match = {}
lista1 = data_min.doc_std_name.to_list()
lista2 = data_min.doc_std_name.to_list()
for i in range(len(lista1)):
for j in range(len(lista2)):
fuzzy_match[lista1[i]+"_vs_"+lista2[j]]=fuzz.partial_ratio(lista1[i], lista2[j])
</code></pre>
<p>However, this takes rapidly growing time to run (I tried with the first 10 elements: almost 140 ms; with the first 100: 4 seconds; and with the first 1000: 7 minutes).
Is there a way to perform the same task in linear time?</p>
<p>Thank you</p>
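A note on complexity first: all-pairs scoring is inherently quadratic (10 → 100 → 1000 elements scale roughly as n²), and no exact all-pairs method is linear. What can be cut: deduplicate, score each unordered pair once when the scorer is symmetric (`fuzz.ratio` is; check `partial_ratio` for your use), and prefer a vectorized scorer such as RapidFuzz's `process.cdist` over a Python loop. A stdlib sketch of the dedup-plus-symmetry half, with `difflib` standing in for fuzzywuzzy:

```python
from difflib import SequenceMatcher
from itertools import combinations

def pairwise_scores(items):
    # Deduplicate (preserving order), then score each unordered pair
    # exactly once: still O(n^2) pairs, but half the scorer calls of
    # the nested double loop, and no self-comparisons.
    unique = list(dict.fromkeys(items))
    return {a + "_vs_" + b: SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(unique, 2)}

print(pairwise_scores(["abc", "abd", "abc"]))
```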
|
<python><string><list><similarity>
|
2023-01-30 16:18:01
| 0
| 653
|
Lusian
|
75,287,112
| 6,494,707
|
How to have an ordered list of files with digits?
|
<p>I have a folder of files and want to read the files one by one because they are frames of a video.</p>
<p>However, when I am trying to have an ordered list of files, it is ordered as follows:</p>
<pre><code>from os import listdir

data_dir = './data/'
filenames = listdir(data_dir)
N = len(filenames)
filenames.sort()
filenames
['Image1.jpg',
'Image10.jpg',
'Image11.jpg',
'Image12.jpg',
'Image13.jpg',
'Image14.jpg',
'Image15.jpg',
'Image2.jpg',
'Image3.jpg',
'Image4.jpg',
'Image5.jpg',
'Image6.jpg',
'Image7.jpg',
'Image8.jpg',
'Image9.jpg']
</code></pre>
<p>How to have an ordered list of images based on the numbers?</p>
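<p>A hedged sketch using a natural-sort key: split each name into text and number runs so that 2 sorts before 10. The third-party <code>natsort</code> package does the same thing, but this needs only the stdlib:</p>

```python
import re

def natural_key(name):
    # split "Image10.jpg" into ["Image", 10, ".jpg"] so numbers compare numerically
    return [int(part) if part.isdigit() else part
            for part in re.split(r"(\d+)", name)]

filenames = ["Image1.jpg", "Image10.jpg", "Image2.jpg", "Image11.jpg"]
filenames.sort(key=natural_key)
print(filenames)  # ['Image1.jpg', 'Image2.jpg', 'Image10.jpg', 'Image11.jpg']
```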
|
<python><glob><listdir>
|
2023-01-30 16:11:18
| 2
| 2,236
|
S.EB
|
75,287,066
| 1,818,120
|
splprep and splrep returning different results for the same data in scipy
|
<p>I am trying to fit a spline through some data points and I am getting a different spline if I use <code>splprep</code> than with <code>splrep</code>, given the same conditions and data. <code>splrep</code> results are much easier to use, but <code>splprep</code> returns the result I think is better for my data, and I can't tell why. I also tried using <code>make_interp_spline</code>, but the results are the same as with <code>splrep</code>.
I want the points I give to be the peaks of the spline, and for the function to interpolate between them, like <code>splprep</code> does.</p>
<p>I am not sure what is causing the discrepancy.</p>
<pre><code>x_points = [ 0, 5, 10, 15, 20, 30, 40, 50, 60]
y_points = [12, 5, 19, 5, 19, 5, 19, 5 ,12]
</code></pre>
<pre><code>import numpy as np
from scipy import interpolate as itp

def get_spline_points(xs, ys):
    mytck, myu = itp.splprep([xs, ys], k=2, s=0)
    print(myu)
    xnew, ynew = itp.splev(np.linspace(0, 1, 1000), mytck)
    return (xnew, ynew)
</code></pre>
</code></pre>
<p><a href="https://i.sstatic.net/txxIm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/txxIm.png" alt="splprep output" /></a></p>
<pre><code>def get_spline_points(xs, ys):
mytck=itp.splrep(x=xs, y=ys, k=2, s=0)
ys = itp.splev(np.arange(0, 60.5, 0.5), mytck)
return ys
</code></pre>
<pre><code>def get_spline_points(xs, ys):
bspline = interpolate.make_interp_spline(xs, ys, k=2)
fig, ax = plt.subplots()
xs_n = np.arange(0, 60.5, 0.5)
ys_n = bspline(xs_n)
ax.plot(xs_n, ys_n, linewidth=2.0)
ax.scatter(x_points, y_points, c='green')
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/i8FHr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/i8FHr.png" alt="enter image description here" /></a></p>
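<p>For what it's worth, the discrepancy is expected: <code>splrep</code> and <code>make_interp_spline</code> fit a function y = f(x) with knots along x, while <code>splprep</code> fits a parametric curve (x(u), y(u)) with knots along the curve's parameter, which changes the shape between the data points. A small sketch (hedged, using the data above) showing that both variants still pass exactly through the points when <code>s=0</code>:</p>

```python
import numpy as np
from scipy import interpolate as itp

x = [0, 5, 10, 15, 20, 30, 40, 50, 60]
y = [12, 5, 19, 5, 19, 5, 19, 5, 12]

# function spline: one y per x, knots along x
tck_f = itp.splrep(x, y, k=2, s=0)
y_f = itp.splev(x, tck_f)

# parametric spline: evaluated at the parameter values of the data points
tck_p, u = itp.splprep([x, y], k=2, s=0)
x_p, y_p = itp.splev(u, tck_p)

print(np.allclose(y_f, y), np.allclose(y_p, y))  # both interpolate the data
```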
|
<python><scipy><spline>
|
2023-01-30 16:07:26
| 1
| 2,027
|
Yoav Schwartz
|
75,287,041
| 10,671,274
|
How to optimally set active/inactive flag based on a date and store id in a dataframe?
|
<p>I have a dataframe with the following structure (showing the relevant columns):</p>
<pre><code>store_price_date: date
store_id: string
</code></pre>
<p>For every store, I have to set <code>isActive</code> column to a value of <code>X</code> if the store was open. For example, I have a start and end date (e.g. 2022-01-01, 2023-01-29), and if the store was opened on 2023-01-04 (based on <code>store_price_date</code>), the flag should be set to <code>X</code>. If it wasn't, it should be empty.</p>
<p>I am dealing with a large dataset (>1TB) so I wanted to ask for an optimal way to do this in PySpark.</p>
<p>Resulting dataframe should have the date (not <code>store_price_date</code>), <code>store_id</code>, and <code>isActive</code> flag.</p>
<p>Example input:</p>
<pre><code>+----------------+--------+
|store_price_date|store_id|
+----------------+--------+
|      2022-01-05|    T105|
|      2022-01-07|    T105|
|      2022-01-11|    T105|
|      2022-01-05|    WQ05|
|      2023-01-06|    WQ05|
|      2022-07-05|    RT00|
+----------------+--------+
</code></pre>
<p>Expected output:</p>
<pre><code>+----------+--------+--------+
|      date|store_id|isActive|
+----------+--------+--------+
|2022-01-01|    T105|        |
|2022-01-02|    T105|        |
|2022-01-05|    T105|       X|
|2022-01-06|    T105|       X|
|2022-01-06|    WQ05|       X|
|2022-01-04|    WQ05|        |
|2022-01-05|    RT00|       X|
|2022-01-06|    RT00|        |
+----------+--------+--------+
</code></pre>
<p>Thank you!</p>
|
<python><pyspark>
|
2023-01-30 16:06:13
| 1
| 601
|
ms12
|
75,286,846
| 7,342,782
|
uwsgi only sending stdout logs to GCP in Kubernetes Engine
|
<p>I have a django application for which I want to send logs to GCP.</p>
<p>Locally, everything works fine using django dev server and Cloud Logging for Python. I see the logs on my GCP dashboard with the right level, I can also see the json structured logs when I use them.
It also works well when I'm using gunicorn in a local docker instead of the django dev server.</p>
<p>However, as soon as I'm using uwsgi locally, I can't find any trace of my logs in the GCP dashboard. When I deploy my docker image in Kubernetes Engine, all the logs are only displayed as info and they are not json structured anymore. I noticed that the logger name is <code>stdout</code> in my log explorer.</p>
<p>I'm supposing that somehow uwsgi doesn't use my Python logging config and only writes to <code>stdout</code>, which is automatically ingested as info by some internal GCP process.</p>
<p>Here's my <code>uwsgi.ini</code>:</p>
<pre><code>[uwsgi]
chdir=xxx
module=xxx
http = 0.0.0.0:8080
vacuum = true
enable-threads = true
listen = 128
# socket-timeout, http-timeout and harakiri are in s
socket-timeout = 180
http-timeout = 180
harakiri = 180
harakiri-verbose = true
py-autoreload = false
processes = 4
memory-report = false
master = true
master-fifo = /tmp/master-fifo
post-buffering = 65536
buffer-size = 65535
max-requests = 1500
max-requests-delta = 100
max-worker-lifetime = 3600
hook-accepting1-once = write:/tmp/bkm.ready ok
# Logging configuration
# Disable request logging. It's done from the application
disable-logging = true
log-master = false
# this will only prefix system logs
log-prefix = [uWSGI]
</code></pre>
<p>and my logging configuration</p>
<pre><code>LOGGING = {
'version': 1,
'disable_existing_loggers': False,
'filters': {
'require_debug_false': {
'()': 'django.utils.log.RequireDebugFalse',
},
'require_debug_true': {
'()': 'django.utils.log.RequireDebugTrue',
},
},
'formatters': {
'verbose': {
'format': '%(name)s [%(module)s:%(funcName)s:%(lineno)d] > %(message)s',
},
'simple': {
'format': '%(levelname)s > %(message)s',
},
},
'handlers': {
'console_simple': {
'class': 'logging.StreamHandler',
'filters': ['require_debug_true'],
'formatter': 'simple',
'level': 'DEBUG',
},
'console_verbose': {
'class': 'logging.StreamHandler',
'formatter': 'verbose',
'level': 'INFO',
},
'slack': {
'level': 'ERROR',
'filters': ['require_debug_false'],
'class': 'api.project.logger.SlackHandler',
},
'stackdriver_logging': {
'level': 'INFO',
'class': 'google.cloud.logging_v2.handlers.CloudLoggingHandler',
'formatter': 'verbose',
'client': logging_v2.Client(),
'labels': {'env': ENV_LABEL},
},
},
'loggers': {
'api': {
'handlers': [
# 'console_simple',
'console_verbose',
'stackdriver_logging',
'slack',
],
'level': 'DEBUG',
'propagate': False,
},
},
'root': {
'handlers': ['console_verbose'],
'level': 'DEBUG',
},
}
</code></pre>
<p>I've tried different configurations of uwsgi (<code>log-master</code>, <code>log-encoder</code>...) but without any luck. I can correctly see my logs when I use gunicorn, which is why I suspect uwsgi to be the cause, but I'd like to avoid changing my WSGI server if possible.</p>
<p>I'm using a json service key and the environment variable <code>GOOGLE_APPLICATION_CREDENTIALS</code> to manage my gcp credentials.</p>
|
<python><django><google-cloud-platform><logging><uwsgi>
|
2023-01-30 15:50:03
| 0
| 476
|
RogerFromSpace
|
75,286,784
| 4,909,923
|
How do I gracefully close (terminate) Gradio from within gradio.Blocks()?
|
<p>I am new to using Gradio. I am attempting to modify a Google Colaboratory Notebook (i.e. a Jupyter Notebook) for my own purposes.</p>
<p>I would like to terminate Gradio from within a <code>gradio.Column()</code> that performs my function <code>myfunc()</code>, once the function has been executed.</p>
<p>How can I end Gradio (in this case, <code>demo</code>) gracefully from within the column? I have found references to <code>gradio.Blocks.close()</code> and <code>gradio.Interface.close()</code> but the sources I found are sparse on details on how to implement this in my example.</p>
<p>My structure is:</p>
<pre><code>import gradio as gr
demo = gr.Blocks(title="title")
with demo:
with gr.Tab("tab"):
with gr.Row():
with gr.Column():
myfunc()
# Quit Gradio here
# Continue code execution here
my_next_func()
</code></pre>
<p>Please note that the Notebook is supposed to continue execute code after closing Gradio. Therefore I don't believe I can use <code>sys.exit()</code> or something similar as I think it will stop all code execution in the Notebook cell.</p>
|
<python><jupyter-notebook><google-colaboratory><gradio>
|
2023-01-30 15:45:20
| 3
| 5,942
|
P A N
|
75,286,641
| 11,770,286
|
Add x-axis including tickmarks at 0 with matplotlib
|
<p>I have a chart with data going below and above 0, and I want my x-axis tick marks at y == 0 while the tick labels stay below the chart. Note that using <code>axhline</code> is not sufficient, as I need tick marks. There are also workarounds on SO that use <code>spines</code> to put the top spine at 0 with tick marks, but in my case I need to keep the spines at the top and bottom.</p>
<p>Is there a way to do this?</p>
<pre><code>import matplotlib.pyplot as plt
fig, ax = plt.subplots()
ax.plot(range(-2, 3))
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/Ux3V0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Ux3V0.png" alt="example" /></a></p>
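<p>One possible approach (a sketch, not necessarily the cleanest): add a secondary x-axis positioned at the axes fraction corresponding to y = 0, and hide its labels so only the tick marks show at the zero line. Note the fraction must be recomputed if the y-limits change afterwards.</p>

```python
import matplotlib
matplotlib.use("Agg")  # non-GUI backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(-2, 3))

# convert y = 0 from data coordinates to an axes fraction
ymin, ymax = ax.get_ylim()
frac = (0 - ymin) / (ymax - ymin)

# secondary_xaxis accepts a float position in axes coordinates
sec = ax.secondary_xaxis(frac)
sec.tick_params(labelbottom=False)  # tick marks at y == 0, labels stay below
```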
|
<python><matplotlib>
|
2023-01-30 15:35:05
| 1
| 3,271
|
Wouter
|
75,286,549
| 4,753,897
|
TypeError: col should be Column with apache spark
|
<p>I have this method where I am gathering positive values</p>
<pre><code>def pos_values(df, metrics):
num_pos_values = df.where(df.ttu > 1).count()
df.withColumn("loader_ttu_pos_value", num_pos_values)
df.write.json(metrics)
</code></pre>
<p>However, I get <code>TypeError: col should be Column</code> whenever I go to test it. I tried casting it, but that doesn't seem to be an option.</p>
|
<python><python-3.x><apache-spark><pyspark>
|
2023-01-30 15:29:01
| 1
| 12,145
|
Mike3355
|
75,286,526
| 4,872,291
|
Getting 'RuntimeError: Working outside of application context.' when trying to import function from another Blueprint
|
<p><em><strong>Initial Notes:</strong> The project uses Blueprints and below are the file structure and extracts of the code used...</em></p>
<h2><strong>File Structure:</strong></h2>
<pre><code>/app
├── flaskapp/
│ ├── posts/
│ │ ├── __init__.py
│ │ ├── forms.py
│ │ ├── routes.py
│ │ ├── utils.py
│ ├── users/
│ │ ├── __init__.py
│ │ ├── forms.py
│ │ ├── routes.py
│ │ ├── utils.py
│ ├── main/
│ │ ├── __init__.py
│ │ ├── crons.py
│ │ ├── routes.py
│ ├── templates/
│ │ ├── users.html
│ ├── __init__.py
│ ├── config.py
│ ├── models.py
├── run.py
</code></pre>
<hr />
<h2><em>posts/utils.py</em></h2>
<pre><code># Function to get all posts from DB
def get_all_posts():
post = Post.query.order_by(Post.id.asc())
return post
</code></pre>
<h2>users/routes.py</h2>
<pre><code># Importing 'get_all_posts' function from 'posts/utils.py'
from flaskapp.posts.utils import get_all_posts
users = Blueprint('users', __name__)
#All Users Route + Related Posts
@users.route("/posts", methods=['GET'])
@login_required
def all_users():
users = User.query.order_by(User.id.asc())
return render_template('users.html', USERS=users, POSTS=get_all_posts())
</code></pre>
<h2>main/crons.py</h2>
<pre><code># Importing 'get_all_posts' function from 'posts/utils.py'
from flaskapp.posts.utils import get_all_posts
# A function to be called using 'scheduler' from 'flaskapp/__init__.py' on launch
def list_expired_posts():
posts = get_all_posts()
for p in posts:
if p.expired == 1:
print(p.title)
scheduler = BackgroundScheduler()
scheduler.add_job(func=list_expired_posts, trigger="interval", seconds=60)
scheduler.start()
# Terminate Scheduler on APP termination
atexit.register(lambda: scheduler.shutdown())
</code></pre>
<h2>flaskapp/__init__.py</h2>
<pre><code># __init__.py Main
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager
from flaskapp.config import Config
db = SQLAlchemy()
login_manager = LoginManager()
login_manager.login_view = 'users.login'
login_manager.login_message_category = 'warning'
# Create APP Function
def create_app(config_class=Config):
app = Flask(__name__)
# Import Configuration from Config File Class
app.config.from_object(Config)
db.init_app(app)
login_manager.init_app(app)
# Import Blueprint objects
from flaskapp.posts.routes import posts
from flaskapp.users.routes import users
from flaskapp.main.routes import main
# Register Blueprints
app.register_blueprint(posts)
app.register_blueprint(users)
app.register_blueprint(main)
# END
return(app)
# Calling scheduler function 'list_expired_posts' FROM '/main/crons.py' as a scheduled job to be triggered on app initiation
from flaskapp.main.crons import list_expired_posts
list_expired_posts()
</code></pre>
<h1>Explanation:</h1>
<p>I have the function '<strong>get_all_posts()</strong>' located in '<strong>posts/utils.py</strong>' which works fine when I import it and use it in another blueprint's <strong>routes.py</strong> file (e.g. <strong>users/routes.py</strong>) as shown above.</p>
<p>But I'm getting the below error when importing the same function in another blueprint (specifically <strong>main/crons.py</strong>) as shown above.</p>
<p>I'm trying to use the '<strong>get_all_posts()</strong>' function from '<strong>posts/utils.py</strong>' within the '<strong>list_expired_posts()</strong>' in '<strong>main/crons.py</strong>' and then calling the '<strong>list_expired_posts()</strong>' function from '<strong>flaskapp/__init__.py</strong>' to trigger it on launch and keep on executing it every 60 minutes.</p>
<h2>Error:</h2>
<p><em>RuntimeError: Working outside of application context.</em></p>
<p><em>This typically means that you attempted to use functionality that needed</em>
<em>the current application. To solve this, set up an application context</em>
<em>with app.app_context(). See the documentation for more information.</em></p>
<p><strong>Conclusion Notes + Steps attempted:</strong></p>
<p><em>I have already eliminated the '<strong>Scheduler</strong>' temporarily and tried working only with the function itself, not even calling it from '<strong>flaskapp/__init__.py</strong>'.</em></p>
<hr />
<p>I have also tried moving the below code to the '<strong>def create_app(config_class=Config)</strong>' section without any luck</p>
<pre><code>from flaskapp.main.crons import list_expired_posts
list_expired_posts()
</code></pre>
<hr />
<p>I have also tried creating a specific Blueprint for '<strong>crons.py</strong>' and registering it to my '<strong>flaskapp/__init__.py</strong>' but still got the same result.</p>
<hr />
<p>As a final outcome, I am trying to call the '<strong>get_all_posts()</strong>' FROM <strong>'posts/utils.py</strong>', then filter out the '<strong>expired posts</strong>' using the '<strong>list_expired_posts()</strong>' function FROM '<strong>main/crons.py</strong>' and schedule it to print the title of the expired posts every 60 minutes.</p>
<p>Since I've eliminated the scheduler already to test out, I'm quite sure this is not a scheduler issue but some import mixup I'm not figuring out.</p>
<hr />
<p>I am also aware that the '<strong>list_expired_posts()</strong>' can become as another function in '<strong>posts/utils.py</strong>' and then directly call the function from there using the scheduler which I've also tried but keep getting the same error.</p>
<hr />
<p>I also tried manually configuring the app's context as instructed in other posts but I keep getting the same error.</p>
<pre><code>with app.app_context():
</code></pre>
<hr />
<p>I'm not really a Python pro and I always try seeking multiple online resources prior to posting a question here but it seems like i'm out of luck this time. Your help is truly appreciated.</p>
|
<python><python-3.x><function><flask><flask-sqlalchemy>
|
2023-01-30 15:27:36
| 2
| 378
|
Eric
|
75,286,299
| 498,504
|
detectAllFaces() method can not recognize any faces in face-api.js
|
<p>I'm using <a href="https://github.com/justadudewhohacks/face-api.js" rel="nofollow noreferrer">face-api.js</a> Javascript API to develop a web app that user uploads her/his picture and we want to detect faces in the picture.</p>
<p>this is my HTML code:</p>
<pre><code><input type="file" id="user_pic" accept="image/x-png,image/gif,image/jpeg">
<img src="images/250x250.webp" id="preview" alt="">
</code></pre>
<p>and following code are what I wrote to face detection:</p>
<pre><code>document.addEventListener('DOMContentLoaded', function() {
run()
});
async function run() {
// load the models
await faceapi.loadMtcnnModel('../faceapi_models')
await faceapi.loadFaceRecognitionModel('../faceapi_models')
}
const user_pic = document.getElementById('user_pic')
const preview = document.getElementById('preview')
user_pic.addEventListener('change', () => {
const reader = new FileReader()
reader.onload = (e) => {
preview.src = e.target.result
}
reader.readAsDataURL(user_pic.files[0]);
detectFaces(user_pic.files[0])
})
preview.onclick = () => user_pic.click()
async function detectFaces(input) {
let imgURL = URL.createObjectURL(input)
const imgElement = new Image()
imgElement.src = imgURL
const results = faceapi.detectAllFaces(imgElement)
.withFaceLandmarks()
.withFaceExpressions()
console.log(results)
results.forEach(result => {
const { x, y, width, height } = result.detection.box;
ctx.strokeRect(x, y, width, height);
});
}
</code></pre>
<p>Now whenever I select an image, the <code>results</code> variable is empty and this error occurs:</p>
<pre><code>Uncaught (in promise) TypeError: results.forEach is not a function
</code></pre>
|
<javascript><python><keras><face-detection><face-api>
|
2023-01-30 15:09:15
| 1
| 6,614
|
Ahmad Badpey
|
75,286,292
| 2,163,392
|
Get maximum of spectrum from audio file with python (audacity-like)
|
<p>I am brand new to Digital Signal Processing and I am trying to find the peak of an audio file spectrum, I usually open an audio file with Audacity and plot the spectrum.</p>
<p><a href="https://i.sstatic.net/bCz4m.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bCz4m.png" alt="enter image description here" /></a></p>
<p>I could find the peak at 120 Hz by visualizing the spectrum above, but it requires some manual work.</p>
<p>I would like to find the peak in a more programmatic way with Python. I am not sure which spectrum is plotted in Audacity, but I am supposing it is the spectrogram. I tried to find such a peak programmatically as below:</p>
<pre><code>import matplotlib.pyplot as plt
from scipy import signal
from scipy.io import wavfile
import numpy as np
sample_rate, samples = wavfile.read('audio1.wav')
frequencies, times, spectrogram = signal.spectrogram(samples, sample_rate)
#get maximum
x,y=np.where(spectrogram == spectrogram.max())
print("Frequency index where the maximum is")
print(x)
print("Frequency Value")
print(frequencies[x])
</code></pre>
<p>However, by running the code above I find the frequency of the maximum to be 74.21875 Hz, which is very far away from the 120 Hz I found in Audacity.</p>
<p>So, what am I doing wrong here? Is there any way to do such a task with Python? Or is the spectrogram the wrong place to look for the maximum?</p>
<p>P.s: you can find my audio file <a href="https://1drv.ms/u/s!Ami480Qt9yV8gQmne-6WY3VuxUHU?e=wHnDGx" rel="nofollow noreferrer">here</a></p>
|
<python><plot><audio><signal-processing><fft>
|
2023-01-30 15:08:38
| 2
| 2,799
|
mad
|
75,286,009
| 2,129,263
|
python hard-coded multi-char delimiter vs passed multi-char delimiter
|
<p>I have a python script which is using csv DictReader to read a csv with unicode delimiter '\x1f'.</p>
<p>I am running the script by calling a bash shell script which is passing the delimiter as follows:</p>
<pre><code>python python_script.py '\x1f' import.csv
</code></pre>
<p>however, I am getting the following error:
<strong>TypeError: "delimiter" must be a 1-character string</strong></p>
<p>but when I hard-code the delimiter into the python script like this:</p>
<p><code>reader = csv.DictReader(import.csv, delimiter='\x1f')</code> it works, while</p>
<p><code>reader = csv.DictReader(import.csv, delimiter=sys.argv[1])</code> gives above 1-character string error mentioned above.</p>
<p>How can I pass the <code>\x1f</code> delimiter from the shell script above without hard-coding the delimiter in the Python script?</p>
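<p>One sketch of a Python-side fix: the shell passes the four literal characters <code>\x1f</code>, not the control byte itself, so the escape sequence can be decoded before handing it to <code>csv</code> (alternatively, bash can expand it with <code>$'\x1f'</code> so the script receives the real byte):</p>

```python
raw = r"\x1f"  # what sys.argv[1] contains when called as: python script.py '\x1f'
delimiter = raw.encode("ascii").decode("unicode_escape")

print(len(raw), len(delimiter))  # 4 1
```

<p><code>csv.DictReader(f, delimiter=delimiter)</code> should then work, since the delimiter really is one character.</p>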
|
<python><csv>
|
2023-01-30 14:45:40
| 1
| 499
|
Zeeshan Arif
|
75,285,909
| 7,739,375
|
How to get a C object and pass it in argument of a function in cython
|
<p>I am trying to use a C library "myClib" from Python.</p>
<p>This library has a function "myCfunction" that returns a pointer to a struct "myCStruct", and other functions that take a pointer to this struct as an argument.</p>
<p>I don't have the definition of this struct; I only get this in the .h file:</p>
<pre><code>typedef struct myCStruct myCStruct
</code></pre>
<p>The library is provided as a static library with a .a file. Some .so files are also provided; they contain some dependencies used by the library. The .h files are also provided, but the full definition of "myCStruct" does not seem to be in any of them.</p>
<p>I don't have to read or process any information contained in this object, but I need to pass it as an argument to other C functions of the library.</p>
<p>Right now to execute the C code I am using cython to generate a .so file that I can import in my python code.</p>
<p>The definition of the function in my pyx file looks like this :</p>
<pre><code>cdef extern from "myClib.h":
void myCfunction()
def py_myCfunction():
return myCfunction()
</code></pre>
<p>I didn't declare the return type of py_myCfunction, as I didn't define any class matching myCStruct since I don't have any idea about its members.</p>
<p>My problem is that when I call py_myCfunction in my Python program I am getting an object of type NoneType.</p>
<p>So 2 questions :</p>
<ul>
<li><p>Could I get a NoneType object because I am missing the declaration in the pyx? Or does it necessarily mean that myCfunction returns nothing (it's supposed to return something)?</p>
</li>
<li><p>If the problem is the missing declaration, which return type could my py_myCfunction have, since I don't know the members of the equivalent C struct?</p>
</li>
</ul>
|
<python><c><cython>
|
2023-01-30 14:36:59
| 1
| 630
|
bloub
|
75,285,772
| 244,297
|
Using bisect functions with a key
|
<p>While using <a href="https://docs.python.org/3/library/bisect.html#bisect.bisect_left" rel="nofollow noreferrer"><code>bisect_left</code></a> with a <code>key</code> parameter I've noticed that the key function is not applied to the searched value. For example, I want to find the first range in a sorted list of ranges where a start value is bigger than the given number:</p>
<pre><code>>>> from bisect import bisect_left
>>> ranges = [[1, 3], [4, 10], [11, 20]]
>>> num = 5
>>> bisect_left(ranges, [num+1, num+2], key=lambda range: range[0] > num)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: '<' not supported between instances of 'bool' and 'list'
>>> bisect_left(ranges, True, key=lambda range: range[0] > num) # this works
2
</code></pre>
<p>The documentation on <code>bisect_left</code> reads:</p>
<blockquote>
<p><em>key</em> specifies a <a href="https://docs.python.org/3/glossary.html#term-key-function" rel="nofollow noreferrer">key function</a> of one argument that is used to extract a
comparison key from each element in the array. To support searching
complex records, the key function is not applied to the <em>x</em> value.</p>
<p>If key is <code>None</code>, the elements are compared directly with no intervening
function call.</p>
</blockquote>
<p>What exactly is the reason the key function is not applied to the value to be searched for here? This seems a bit inconsistent with <a href="https://docs.python.org/3/library/bisect.html#bisect.insort_left" rel="nofollow noreferrer"><code>insort_left</code></a> where the key function <em>is</em> applied to it.</p>
|
<python><binary-search><key-function>
|
2023-01-30 14:24:29
| 2
| 151,764
|
Eugene Yarmash
|
75,285,751
| 7,714,681
|
What is the most time efficient way to calculate the distance between tuples in a list?
|
<p>I have a list with tuples:</p>
<p><code>tuple_list = [(1,3),(4,7),(8,1),(5,4),(9,3),(7,2),(2,7),(3,1),(8,9),(5,2)]</code></p>
<p>From this list, I want to return the minimum distance of two numbers in a tuple.</p>
<p>In the naive approach, I would do the following:</p>
<pre><code>distance = 10
for tup in tuple_list:
if abs(tup[0]-tup[1]) < distance:
distance = abs(tup[0]-tup[1])
</code></pre>
<p>Then, in the end, <code>distance</code> would equal 1.</p>
<p>However, I suspect there is a faster method to obtain the minimum distance that calculates all the distances in parallel.</p>
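<p>Two equivalent sketches: <code>min()</code> with a generator expression is still O(n) but avoids the explicit loop, while NumPy evaluates all the differences in one vectorized pass ("parallel" at the C level, though the total work remains linear):</p>

```python
import numpy as np

tuple_list = [(1, 3), (4, 7), (8, 1), (5, 4), (9, 3),
              (7, 2), (2, 7), (3, 1), (8, 9), (5, 2)]

# pure-Python one-liner
distance = min(abs(a - b) for a, b in tuple_list)

# vectorized: compute every |a - b| at once, then take the minimum
arr = np.array(tuple_list)
distance_np = np.abs(arr[:, 0] - arr[:, 1]).min()

print(distance, distance_np)  # 1 1
```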
|
<python><list><loops><distance>
|
2023-01-30 14:22:19
| 2
| 1,752
|
Emil
|
75,285,712
| 4,478,466
|
How to add additional arguments to existing decorator with custom decorator
|
<p>I know my question sounds a bit silly, but there is an existing decorator from a library that can either take a kwarg or not. I would like to pass this parameter in when a certain condition is achieved. So my initial thought was to write a decorator which will wrap up this function. But I have no idea if this is possible in Python.</p>
<p>So for example this is the original decorator:</p>
<pre><code>@existing_decorator('This is argument')
</code></pre>
<p>And if a certain condition is met, it needs to be called like:</p>
<pre><code>@existing_decorator('This is argument', additional=False)
</code></pre>
<p>And I imagined my final result to be something like:</p>
<pre><code>@check_condition(existing_decorator('This is argument'))
</code></pre>
<p>I tried writing a custom decorator:</p>
<pre><code>def check_condition(f):
@wraps(f)
def wrapper(*args, **kwargs):
if is_production():
return f(additional=False, *args, **kwargs)
return f(*args, **kwargs)
return wrapper
</code></pre>
<p>This works when the condition is False, but returns the following error when the condition is True and the argument should be included:</p>
<pre><code>TypeError: wrapper() got an unexpected keyword argument 'doc'
</code></pre>
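<p>One way that avoids the TypeError entirely (a sketch with stand-in names: <code>existing_decorator</code> and <code>is_production</code> below are placeholders for the real library pieces): decide which arguments the existing decorator gets at decoration time, instead of wrapping the already-decorated function and injecting extra kwargs into the call:</p>

```python
import functools

def existing_decorator(msg, additional=True):   # stand-in for the library decorator
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            return f(*args, **kwargs)
        return wrapper
    return deco

def is_production():                            # hypothetical condition
    return True

def conditional_decorator(msg):
    # pick the decorator's own arguments based on the condition,
    # rather than passing extra kwargs to the wrapped function
    if is_production():
        return existing_decorator(msg, additional=False)
    return existing_decorator(msg)

@conditional_decorator("This is argument")
def handler():
    return "ok"

print(handler())  # ok
```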
|
<python><python-decorators>
|
2023-01-30 14:18:32
| 1
| 658
|
Denis Vitez
|
75,285,711
| 13,962,514
|
Difference between collections.abc.Sequence and typing.Sequence
|
<p>I was reading an article about the collections.abc and typing modules in the Python standard library and discovered that both provide a Sequence class with the same features.</p>
<p>I tried both options using the code below and got the same results</p>
<pre><code>from collections.abc import Sequence
def average(sequence: Sequence):
return sum(sequence) / len(sequence)
print(average([1, 2, 3, 4, 5])) # result is 3.0
from typing import Sequence
def average(sequence: Sequence):
return sum(sequence) / len(sequence)
print(average([1, 2, 3, 4, 5])) # result is 3.0
</code></pre>
<p>Under what conditions is collections.abc a better option than typing? Are there benefits of using one over the other?</p>
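<p>One practical difference worth knowing (hedged, based on PEP 585): <code>typing.Sequence</code> is deprecated since Python 3.9 in favor of the collections.abc version, which is both subscriptable in annotations and a real class usable for runtime <code>isinstance</code> checks:</p>

```python
from collections.abc import Sequence

def average(sequence: Sequence[float]) -> float:  # subscriptable since Python 3.9
    if not isinstance(sequence, Sequence):        # runtime check works on the ABC
        raise TypeError("expected a sequence")
    return sum(sequence) / len(sequence)

print(average([1, 2, 3, 4, 5]))  # 3.0
```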
|
<python><python-typing><generic-collections>
|
2023-01-30 14:18:27
| 2
| 311
|
Oluwasube
|
75,285,604
| 12,845,199
|
Optimization of map, in grouped by object
|
<p>I have the following dataframe</p>
<pre><code>test_df = pd.DataFrame({'Category': {0: 'product-availability address-confirmation input',
1: 'registration register-data-confirmation options',
2: 'onboarding return-start input',
3: 'registration register-data-confirmation input',
4: 'decision-tree first-interaction-validation options'},
'Original_UserId': {0: '5511949551865@wa.gw.msging.net',
1: '5511949551865@wa.gw.msging.net',
2: '5511949551865@wa.gw.msging.net',
3: '5511949551865@wa.gw.msging.net',
4: '5511949551865@wa.gw.msging.net'}})
</code></pre>
<p>Thanks to jezrael, I am applying the following map, which follows the logic given in this question <a href="https://stackoverflow.com/questions/75285136/after-certain-string-is-found-mark-every-after-string-as-true-pandas/75285152?noredirect=1#comment132846602_75285152">After certain string is found mark every after string as true,pandas</a></p>
<pre><code>test_df.groupby('Original_UserId',observed=True)['Category'].apply(lambda s : s.eq('onboarding return-start input').cummax())
</code></pre>
<p>Which returns the following series</p>
<pre><code>pd.Series({0: False, 1: False, 2: True, 3: True, 4: True})
</code></pre>
<p>The thing is, when I apply this to a larger dataset it takes quite a while to run. Any clues on how to optimize?</p>
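<p>A sketch that may be faster: <code>groupby(...).apply</code> runs a Python-level function per group, while building the boolean mask once and calling the groupby's own <code>cummax</code> keeps everything in vectorized pandas code:</p>

```python
import pandas as pd

test_df = pd.DataFrame({
    'Category': ['product-availability address-confirmation input',
                 'registration register-data-confirmation options',
                 'onboarding return-start input',
                 'registration register-data-confirmation input',
                 'decision-tree first-interaction-validation options'],
    'Original_UserId': ['5511949551865@wa.gw.msging.net'] * 5,
})

# compute the equality test once, then run cummax per group
mask = test_df['Category'].eq('onboarding return-start input')
result = mask.groupby(test_df['Original_UserId']).cummax()
print(result.tolist())  # [False, False, True, True, True]
```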
|
<python><pandas>
|
2023-01-30 14:08:45
| 2
| 1,628
|
INGl0R1AM0R1
|
75,285,427
| 1,848,244
|
Custom Aggregation Across Parallel Hierarchy Levels in a Multi-Index
|
<p>I have a dataframe that is organised hierarchically. Consider this:</p>
<pre><code> baseval
indexlevel0 indexlevel1 indexlevel2
L0-0 L1-0 L2-0 1
L2-1 1
L2-2 20
L1-1 L2-0 2
L2-1 2
L2-2 10
</code></pre>
<p>What I need to do is create a new dataframe, that collapses the intermediate level (indexlevel1) by replacing the corresponding (indexlevel2) with a single value that is the minimum of the two levels that were once contained in indexlevel1.</p>
<p>Probably easier to just show what I mean - the solution to the above example would be (that is, indexlevel0, and 2 are preserved, along with the minimum basevals per-indexlevel2):</p>
<pre><code> minbylevel
indexlevel0 indexlevel2
L0-0 L2-0 1
L2-1 1
L2-2 10
</code></pre>
<p>I have not the slightest idea of where to even start with this. All the examples of aggregation etc work from the bottom up.</p>
<p>Here's some test code to create the starting point dataframe.</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from io import StringIO
testdata = """
indexlevel0,indexlevel1,indexlevel2,baseval
L0-0,L1-0,L2-0,1
L0-0,L1-0,L2-1,1
L0-0,L1-0,L2-2,20
L0-0,L1-1,L2-0,2
L0-0,L1-1,L2-1,2
L0-0,L1-1,L2-2,10
"""
testinput = StringIO(testdata)
data_df = pd.read_csv(testinput, index_col=[0,1,2], header=[0]).sort_index()
print(data_df)
</code></pre>
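<p>For what it's worth, one hedged sketch against the test data above: collapsing the intermediate level amounts to grouping on the two index levels to keep and taking the per-group minimum:</p>

```python
import pandas as pd
from io import StringIO

testdata = """
indexlevel0,indexlevel1,indexlevel2,baseval
L0-0,L1-0,L2-0,1
L0-0,L1-0,L2-1,1
L0-0,L1-0,L2-2,20
L0-0,L1-1,L2-0,2
L0-0,L1-1,L2-1,2
L0-0,L1-1,L2-2,10
"""
data_df = pd.read_csv(StringIO(testdata), index_col=[0, 1, 2], header=[0]).sort_index()

# collapse indexlevel1: group by the levels to keep, take the per-group minimum
result = (data_df.groupby(level=['indexlevel0', 'indexlevel2'])
                 .min()
                 .rename(columns={'baseval': 'minbylevel'}))
print(result)
```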
|
<python><pandas><multi-index>
|
2023-01-30 13:56:16
| 2
| 437
|
user1848244
|
75,285,362
| 6,748,145
|
How does Python know to use the same object in memory?
|
<p>If I use the below:</p>
<pre><code>a = 1000
print(id(a))
myList = [a,2000,3000,4000]
print(id(myList[0]))
# prints the same IDs
</code></pre>
<p>I get the same id. This makes sense to me. I can understand how the memory manager could assign the same object to these variables, because I am directly referencing <code>a</code> in the list.</p>
<p>However, if I do this instead:</p>
<pre><code>a = 1000
print(id(a))
myList = [1000,2000,3000,4000]
print(id(myList[0]))
# prints the same IDs
</code></pre>
<p>I STILL get the same id being output for both prints. How does Python know to use the same object for these assignments? Searching for pre-existence would surely be hugely inefficient so I am presuming something more clever is going on here.</p>
<p>My first thought was something to do with the integer itself being used to calculate the objects address, but the behaviour also holds true for strings:</p>
<pre><code>a = "car"
print(id(a))
myList = ["car",2000,3000,4000]
print(id(myList[0]))
# prints the same IDs
</code></pre>
<p>The behaviour does NOT however, hold true for list elements:</p>
<pre><code>a = [1,2,3]
print(id(a))
myList = [[1,2,3],2000,3000,4000]
print(id(myList[0]))
# prints different IDs
</code></pre>
<p>Can someone explain the behaviour I am seeing?</p>
<p>EDIT - I have encountered that for small values between -5 and 256, the same object may be used. The thing is that I am seeing the same object still being used even for huge values, or even strings:</p>
<pre><code>a = 1000000000000
myList = [1000000000000,1000,2000]
print(a is myList[0])
# outputs True!
</code></pre>
<p>My question is <strong>How can Python work out that it is the same object in these cases without searching for pre-existence?</strong> Let's say CPython specifically</p>
<p>EDIT - I am using Python V3.8.10</p>
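<p>Part of the answer (a sketch of CPython's compile-time behavior, not the whole story): within a single code object the compiler deduplicates equal constants, so there is no runtime search for pre-existing values at all. You can see it in <code>co_consts</code>:</p>

```python
# Compile both statements as ONE code object, the way a module/script is compiled.
src = "a = 1000000000000\nmyList = [1000000000000, 1000, 2000]\n"
code = compile(src, "<demo>", "exec")

# The big literal occupies a single slot in the constants table, so both
# LOAD_CONST instructions load the very same int object.
print(code.co_consts.count(1000000000000))  # 1

ns = {}
exec(code, ns)
print(ns["a"] is ns["myList"][0])  # True
```

<p>Entered line by line in a REPL, each statement is compiled separately, so the sharing may disappear; this is an implementation detail of CPython, not a language guarantee.</p>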
|
<python><memory-management><cpython>
|
2023-01-30 13:51:04
| 1
| 2,270
|
SuperHanz98
|
75,285,357
| 14,808,637
|
Graph Edges intialization between nodes using numpy?
|
<p>Let's say I have to initialise the bi-directional edges for the following graph between the nodes:</p>
<p><a href="https://i.sstatic.net/J59zH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/J59zH.png" alt="enter image description here" /></a></p>
<p>I can easily do this using the following code:</p>
<pre><code>import numpy as np
node_num = 3
graph = np.ones([node_num, node_num]) - np.eye(node_num)
</code></pre>
<p>Now I am extending this graph in the following way:</p>
<p><a href="https://i.sstatic.net/YjwbO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/YjwbO.png" alt="enter image description here" /></a></p>
<p>What is the simple and efficient way to make it code for this graph?</p>
|
<python><numpy><graph><edges>
|
2023-01-30 13:50:18
| 1
| 774
|
Ahmad
|
75,285,242
| 911,971
|
Strange pylint errors
|
<p>I want to add a linter to my GitHub repository, and during tests on dummy code I got strange results.</p>
<p>Code:</p>
<pre class="lang-py prettyprint-override"><code>"""
Dummy module for pylint tests
"""
def is_prime(num):
""" Checking if a number is prime """
if num > 1:
for i in range(2, num//2):
if (num % i) == 0:
#print(num, "is not a prime number")
break
else:
print(num, "is a prime number")
if __name__ == "__main__":
for x in range(2,100000):
is_prime(x)
</code></pre>
<p>I got something like this:</p>
<pre class="lang-none prettyprint-override"><code>pylint .\pytest.py
*************
<?>:16:4: W0622: Redefining built-in 'exit' (redefined-builtin)
*************
<?>:4:4: W0611: Unused import _pytest.mark (unused-import)
*************
<?>:5:4: W0611: Unused import _pytest.recwarn (unused-import)
*************
<?>:6:4: W0611: Unused import _pytest.runner (unused-import)
*************
<?>:7:4: W0611: Unused import _pytest.python (unused-import)
*************
<?>:8:4: W0611: Unused import _pytest.skipping (unused-import)
*************
<?>:9:4: W0611: Unused import _pytest.assertion (unused-import)
*************
<?>:36:4: W0611: Unused import _pytest.freeze_support (unused-import)
*************
<?>:40:8: W0611: Unused import _pytest.genscript (unused-import)
*************
<?>:46:4: W0611: Unused import _pytest.debugging (unused-import)
*************
<?>:50:8: W0611: Unused import _pytest.pdb (unused-import)
*************
<?>:56:4: W0611: Unused import _pytest.fixtures (unused-import)
-----------------------------------
Your code has been rated at 0.00/10
</code></pre>
<p>What is it about? Where did these results come from?</p>
<p>Yes, I can disable W0611 and W0622, but that's not the point.</p>
|
<python><pylint>
|
2023-01-30 13:39:28
| 1
| 506
|
parasit
|
75,285,147
| 159,072
|
Unable to render Flask web-page
|
<p>I am using Ubuntu 16.04, which shipped with Python 3.5. I was unable to install Flask on that version, so I switched to Anaconda Python 3.8.</p>
<p>However, I am unable to render any Flask page.</p>
<p>The following command output shows that, when I run <code>app.py</code> of my <code>Flask</code> application, the web server doesn't print any web address:</p>
<pre><code>(base) user_1@dell-vostro:~$ cd git
(base) user_1@dell-vostro:~/git$ cd MyFlaskProject1/
(base) user_1@dell-vostro:~/git/MyFlaskProject1$ python3.5 app.py
Traceback (most recent call last):
File "app.py", line 1, in <module>
from flask import Flask, render_template, jsonify
File "/usr/local/lib/python3.5/dist-packages/flask/__init__.py", line 4, in <module>
from . import json as json
File "/usr/local/lib/python3.5/dist-packages/flask/json/__init__.py", line 1
from __future__ import annotations
SyntaxError: future feature annotations is not defined
(base) user_1@dell-vostro:~/git/MyFlaskProject1$ python3.8 app.py
* Restarting with inotify reloader
* Debugger is active!
* Debugger PIN: 107-653-334
</code></pre>
<p>Then, I manually entered <code>127.0.0.1:5000</code> in the web-browser, but I see nothing.</p>
<p>How can I solve this issue?</p>
|
<python><flask><ubuntu-16.04><python-3.8>
|
2023-01-30 13:32:04
| 1
| 17,446
|
user366312
|
75,285,136
| 12,845,199
|
After a certain string is found, mark every subsequent string as True (pandas)
|
<pre><code>s = pd.Series({0: 'registration address-complement-insert-confirmation input',
1: 'decision-tree first-interaction-validation options',
2: 'decision-tree invalid-format-validation options',
3: 'decision-tree first-interaction-validation options',
4: 'registration address-complement-request view',
5: 'onboarding return-start origin',
6: 'registration address-complement-request origin',
7: 'decision-tree identified-regex options',
8: 'decision-tree first-interaction-validation options',
9: 'decision-tree first-interaction-validation options'})
</code></pre>
<p>I have the following series object. What I want to do is to map it and mark every single string after 'onboarding return-start origin' as true. Any ideas on how I could build this condition?</p>
<p>Wanted result</p>
<pre><code>s = pd.Series({0: False,
1: False,
2: False,
3: False,
4: False,
5: True,
6: True,
7: True,
8: True,
9: True})
</code></pre>
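<p>A minimal sketch of one way to build this condition (assuming the marker string matches exactly): compare against the marker and take the cumulative maximum, which stays <code>True</code> from the first match onward. A small stand-in series is used here so the sketch is self-contained:</p>

```python
import pandas as pd

# Small stand-in for the series in the question.
s = pd.Series(['a', 'b', 'onboarding return-start origin', 'c', 'd'])

# False until the marker appears, True from that element onward.
mask = (s == 'onboarding return-start origin').cummax()
print(mask.tolist())  # [False, False, True, True, True]
```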
|
<python><pandas>
|
2023-01-30 13:31:15
| 1
| 1,628
|
INGl0R1AM0R1
|
75,284,890
| 14,667,788
|
How to crop an image according to two points in OpenCV?
|
<p>I have this input image (feel free to download it and try your solution, please):</p>
<p><a href="https://i.sstatic.net/GIy6j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GIy6j.png" alt="enter image description here" /></a></p>
<p>I need to find points A and B that are closest to the bottom-left and top-right corners. Then I would like to cut off the image. See the desired output:</p>
<p><a href="https://i.sstatic.net/ue5BP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ue5BP.png" alt="enter image description here" /></a></p>
<p>So far I have this function, but it does not find points A, B correctly:</p>
<pre class="lang-py prettyprint-override"><code>
def CheckForLess(list1, val):
return(all(x < val for x in list1))
def find_corner_pixels(img):
# Get image dimensions
height, width = img.shape[:2]
# Find the first non-black pixel closest to the left-down and right-up corners
nonempty = []
for i in range(height):
for j in range(width):
# Check if the current pixel is non-black
if not CheckForLess(img[i, j], 10):
nonempty.append([i, 1080 - j])
return min(nonempty) , max(nonempty)
</code></pre>
<p>Can you help me please?</p>
<hr />
<p>The <a href="https://stackoverflow.com/a/75285714/5446749">solution provided by Achille</a> works on one picture, but if I change the input image to this:</p>
<p><a href="https://i.sstatic.net/qZ8WM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qZ8WM.png" alt="enter image description here" /></a></p>
<p>It gives wrong output:</p>
<p><a href="https://i.sstatic.net/ntcHy.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ntcHy.jpg" alt="enter image description here" /></a></p>
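<p>A vectorised sketch that avoids the per-pixel Python loop (assuming "non-black" means any channel above a small threshold, and using a synthetic image as a stand-in for the real one): take the bounding box of all non-black pixels.</p>

```python
import numpy as np

# Synthetic stand-in for the question's image: black canvas with a bright block.
img = np.zeros((100, 200, 3), dtype=np.uint8)
img[30:60, 50:120] = 255

mask = (img > 10).any(axis=2)        # True where any channel is "non-black"
ys, xs = np.nonzero(mask)            # coordinates of all content pixels
top, bottom = ys.min(), ys.max()
left, right = xs.min(), xs.max()

cropped = img[top:bottom + 1, left:right + 1]
print(cropped.shape)  # (30, 70, 3)
```

<p>This crops to the bounding box of everything non-black, which matches the two-corner description when the content is one connected region; a noisy background would need cleaning (e.g. a threshold or morphology) first.</p>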
|
<python><opencv>
|
2023-01-30 13:11:02
| 2
| 1,265
|
vojtam
|
75,284,768
| 14,163,533
|
"ModuleNotFoundError" after installing a python package
|
<h1>Problem summary</h1>
<p>I am very new to Python package development. I developed a package and <a href="https://test.pypi.org/project/spark-map/" rel="noreferrer">published it at TestPyPI</a>. I install this package through <code>pip</code> with no errors. However, Python gives me a "ModuleNotFoundError" when I try to import it, and I have no idea why. Can someone help me?</p>
<h1>Repro steps</h1>
<p>First, I install the package with:</p>
<pre><code>pip install -i https://test.pypi.org/simple/ spark-map==0.2.76
</code></pre>
<p>Then, I open a new terminal, start the python interpreter, and try to import this package, but python gives me a <code>ModuleNotFoundError</code>:</p>
<pre class="lang-py prettyprint-override"><code>>>> import spark_map
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'spark_map'
</code></pre>
<h1>What I discovered</h1>
<ul>
<li><p>When I <code>cd</code> to the package's root folder, open the Python interpreter, and run <code>import spark_map</code>, it works fine with no errors;</p>
</li>
<li><p>That <code>pip</code> did not install the package successfully. However, I checked this: I got no error messages when I installed the package, and when I run <code>pip list</code> after the <code>pip install</code> command, I see <code>spark_map</code> on the list of installed packages.</p>
</li>
</ul>
<pre class="lang-bash prettyprint-override"><code>> pip list
... many packages
spark-map 0.2.76
... more packages
</code></pre>
<ul>
<li>The folder where <code>spark_map</code> was installed might be outside Python's module search path; I checked this as well. <code>pip</code> is installing the package in a folder called <code>Python310\lib\site-packages</code>, and this folder is included in the <code>sys.path</code> variable:</li>
</ul>
<pre><code>>>> import sys
>>> for path in sys.path:
... print(path)
C:\Users\pedro\AppData\Local\Programs\Python\Python310\python310.zip
C:\Users\pedro\AppData\Local\Programs\Python\Python310\DLLs
C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib
C:\Users\pedro\AppData\Local\Programs\Python\Python310
C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages
C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages\win32
C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages\win32\lib
C:\Users\pedro\AppData\Local\Programs\Python\Python310\lib\site-packages\Pythonwin
</code></pre>
<h1>Information about the system</h1>
<p>I am on Windows 10, Python 3.10.9, trying to install and import the <code>spark_map</code> package, version 0.2.76.(<a href="https://test.pypi.org/project/spark-map/" rel="noreferrer">https://test.pypi.org/project/spark-map/</a>).</p>
<h1>Information about the code</h1>
<p><a href="https://github.com/pedropark99/spark_map" rel="noreferrer">The package source code is hosted at GitHub</a>, and the folder structure of this package is essentially this:</p>
<pre><code>root
│
├───spark_map
│ ├───__init__.py
│ ├───functions.py
│ └───mapping.py
│
├───tests
│ ├───functions
│ └───mapping
│
├───.gitignore
├───LICENSE
├───pyproject.toml
├───README.md
└───README.rst
</code></pre>
<p>The <code>pyproject.toml</code> file of the package:</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=61.0", "toml"]
build-backend = "setuptools.build_meta"
[project]
name = "spark_map"
version = "0.2.76"
authors = [
{ name="Pedro Faria", email="pedropark99@gmail.com" }
]
description = "Pyspark implementation of `map()` function for spark DataFrames"
readme = "README.md"
requires-python = ">=3.7"
license = { file = "LICENSE.txt" }
classifiers = [
"Programming Language :: Python :: 3",
"License :: OSI Approved :: MIT License",
"Operating System :: OS Independent",
]
dependencies = [
"pyspark",
"setuptools",
"toml"
]
[project.urls]
Homepage = "https://pedropark99.github.io/spark_map/"
Repo = "https://github.com/pedropark99/spark_map"
Issues = "https://github.com/pedropark99/spark_map/issues"
[tool.pytest.ini_options]
pythonpath = [
"."
]
[tool.setuptools]
py-modules = []
</code></pre>
<h1>What I tried</h1>
<p>As @Dorian Turba suggested, I moved the source code into a <code>src</code> folder. Now, the structure of the package is this:</p>
<pre><code>root
├───src
│ └───spark_map
│ ├───__init__.py
│ ├───functions.py
│ └───mapping.py
│
├───tests
├───.gitignore
├───LICENSE
├───pyproject.toml
├───README.md
└───README.rst
</code></pre>
<p>After that, I executed <code>python -m pip install -e .</code> (the log of this command is on the image below). The package was compiled and installed successfully. However, when I open a new terminal, in a different location, and try to run <code>python -c "import spark_map"</code>, I still get the same error.</p>
<p><a href="https://i.sstatic.net/zVNQQ.png" rel="noreferrer"><img src="https://i.sstatic.net/zVNQQ.png" alt="enter image description here" /></a></p>
<p>I also tried to start a virtual environment (with <code>python -m venv env</code>), and install the package inside this virtual environment (with <code>pip install -e .</code>). Then, I executed <code>python -c "import spark_map"</code>. But the problem still remains. I executed <code>pip list</code> too, to check if the package was installed. The full log of commands is on the image below:</p>
<p><a href="https://i.sstatic.net/ZG7tW.png" rel="noreferrer"><img src="https://i.sstatic.net/ZG7tW.png" alt="enter image description here" /></a></p>
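<p>One thing worth checking (a guess from the posted <code>pyproject.toml</code>, not a confirmed diagnosis): the final <code>[tool.setuptools]</code> table sets <code>py-modules = []</code>, which tells setuptools to ship no modules at all, so the build can "succeed" while the wheel contains no <code>spark_map</code> package. A sketch of a configuration that points setuptools at the <code>src</code> layout instead:</p>

```toml
# Replace the "[tool.setuptools] py-modules = []" section with package discovery:
[tool.setuptools.packages.find]
where = ["src"]           # look for importable packages under src/
include = ["spark_map*"]  # pick up spark_map and any subpackages
```

<p>After changing this, reinstall and inspect the installed files (e.g. <code>pip show -f spark-map</code>) to confirm the modules are actually included.</p>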
|
<python><pip><package>
|
2023-01-30 13:00:33
| 4
| 889
|
Pedro Faria
|
75,284,417
| 14,037,055
|
AttributeError: module 'networkx' has no attribute 'from_numpy_matrix'
|
<p><code>A</code> is a co-occurrence dataframe. Why does this raise <code>AttributeError: module 'networkx' has no attribute 'from_numpy_matrix'</code>?</p>
<pre><code>import numpy as np
import networkx as nx
import matplotlib
A=np.matrix(coocc)
G=nx.from_numpy_matrix(A)
</code></pre>
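<p><code>from_numpy_matrix</code> (along with <code>np.matrix</code> support) was removed in NetworkX 3.0; the replacement is <code>from_numpy_array</code>, which takes a plain 2-D array. A sketch with a small stand-in for <code>coocc</code>:</p>

```python
import numpy as np
import networkx as nx

# Plain ndarray, not np.matrix (np.matrix is deprecated in NumPy as well).
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]])

G = nx.from_numpy_array(A)      # replaces nx.from_numpy_matrix in NetworkX >= 3.0
print(G.number_of_edges())      # 3
```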
|
<python><python-3.x><graph><networkx>
|
2023-01-30 12:32:36
| 3
| 469
|
Pranab
|
75,284,350
| 9,720,696
|
operate over values of a dictionary when values are lists
|
<p>Suppose I have the following dictionary:</p>
<pre><code>data = {'ACCOUNT_CLOSURE': ['account closure',
'close account',
'close bank',
'terminate account',
'account deletion',
'cancel account',
'account cancellation'],
'ACCOUNT_CHANGE': ['change my account',
'switch my account',
'change from private into savings',
'convert into family package',
'change title of the account',
'make title account to family',
'help me access the documentation']}
</code></pre>
<p>I want to go through each key and subsequently the elements of the values and drop the stopwords, so I do:</p>
<pre><code>stop_words = set(stopwords.words("english"))
for key, values in data.items():
data[key] = [value for value in values if value not in stop_words]
</code></pre>
<p>but this returns the exact same dictionary as my original one. What am I doing wrong?</p>
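<p>The comparison never matches because each <code>value</code> is a whole phrase, while the stopword set contains single words; the phrases need to be split into words first. A sketch, with a small stand-in set instead of NLTK's list so it is self-contained:</p>

```python
# Stand-in for nltk's stopwords.words("english") so the sketch is self-contained.
stop_words = {"my", "the", "into", "from", "of", "me", "to"}

data = {"ACCOUNT_CHANGE": ["change my account", "switch my account"]}

# Split each phrase into words, drop stopwords, and rejoin.
for key, phrases in data.items():
    data[key] = [" ".join(w for w in phrase.split() if w not in stop_words)
                 for phrase in phrases]

print(data)  # {'ACCOUNT_CHANGE': ['change account', 'switch account']}
```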
|
<python><dictionary><nltk>
|
2023-01-30 12:25:53
| 1
| 1,098
|
Wiliam
|
75,284,301
| 20,051,041
|
How to remove all digits after the last string
|
<p>I would like to remove all the digits from the end of: "Car 7 5 8 7 4".
How can I achieve it using regex or other approaches?
I tried the following, but it only deletes one digit:</p>
<pre><code>re.sub(r'\s*\d+$', '', text)
</code></pre>
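<p>A sketch of one fix: repeat the whitespace-plus-digits group so the anchor consumes every trailing number, not just the last one.</p>

```python
import re

text = "Car 7 5 8 7 4"
# (?:\s+\d+)+$ matches one or more runs of "whitespace then digits" at the end.
cleaned = re.sub(r'(?:\s+\d+)+$', '', text)
print(repr(cleaned))  # 'Car'
```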
<p>Thanks</p>
|
<python><regex><substitution>
|
2023-01-30 12:20:30
| 2
| 580
|
Mr.Slow
|
75,284,240
| 13,440,165
|
Pass from filter's frequency response to time-domain [b, a] representation
|
<p>FIR and IIR filters are usually defined by [b, a] coefficients in time-domain regression formula:</p>
<pre><code>y[n] = b_0 * x[n] + b_1 * x[n-1] + ... + b_N * x[n - N] - a_1 * y[n-1] - ... - a_M * y[n-M]
</code></pre>
<p>One can extract the frequency response <code>h</code> from <code>[b,a]</code> using Fourier Transform or use the built-in <code>scipy</code> function <code>signal.freqz</code>:</p>
<pre><code>w, h = signal.freqz(b, a, worN=FFT_LENGTH, fs=FS)
</code></pre>
<p><a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.freqz.html" rel="nofollow noreferrer">https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.freqz.html</a></p>
<p>My question is how to go the other way around. Assuming I get a frequency response <code>h(w)</code> generated by <code>freqz</code> or <code>fft</code> of <code>[b, a]</code>, how to recover those <code>[b, a]</code> so I can apply the filter in time-domain using <code>signal.lfilter</code> for example?</p>
<p>I am aware that this problem may have no general solution and that inverse may not exist.</p>
<p>Added in Edit:</p>
<p>In the case of FIR I think one can do FFT, because basically: <code>h = fft(b)</code>. In IIR case <code>h = fft(b)/fft(a)</code> so it might not work.</p>
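<p>For the FIR case the round trip can be sketched with plain FFTs (an illustration, assuming <code>b</code> is shorter than the FFT length; the IIR case is genuinely ill-posed without fixing the model orders): sample the frequency response on a full FFT grid, inverse-transform, and truncate to the filter length.</p>

```python
import numpy as np

b = np.array([0.25, 0.5, 0.25])        # example FIR filter (a = [1.0])
N = 16                                  # FFT length, >= len(b)

h = np.fft.fft(b, N)                    # frequency response on the full grid
b_rec = np.fft.ifft(h).real[:len(b)]    # inverse transform, truncate, drop ~0 imag part

print(np.allclose(b, b_rec))  # True
```

<p>Note that <code>signal.freqz</code> by default returns only half of the unit circle (<code>whole=False</code>); to invert, sample the full circle (<code>whole=True</code>, <code>worN=N</code>) so the grid matches a length-<code>N</code> FFT.</p>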
|
<python><scipy><filtering><signal-processing>
|
2023-01-30 12:14:10
| 0
| 883
|
Triceratops
|
75,284,222
| 15,112,773
|
Find the most similar number from list of lists in a dictionary python
|
<p>I have a dictionary where the values are a list of lists:</p>
<pre><code>dict1 = {"['x1', 'y1']": [['r1', 'r2'], [78, 125]],
"['x1', 'y1']": [['r1', 'r2'], [77, 112]],
"['x1', 'y1']": [['r1', 'r2'], [73, 110]],
"['x2', 'y2']": [['r2', 'r3'], [71, 103]]}
</code></pre>
<p>I am also giving a list of lists as input, <strong>which I want to find in dict1</strong>:</p>
<pre><code> input1 = [['r1', 'r2'], [72, 112]]
</code></pre>
<p>The first list <code>['r1', 'r2']</code> is the same as in <code>dict1</code> and can still be easily found. But for the second list, I need to find approximate numbers in <code>dict1</code>. So for <code>[72, 112]</code> it's <code>[73, 110]</code>. I don't understand how to do it.</p>
<p>Expected output:</p>
<pre><code>output = { "['x1', 'y1']": [['r1', 'r2'], [73, 110]]}
</code></pre>
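<p>A sketch of one approach (note that, as written, <code>dict1</code> repeats the same key string, so Python keeps only the last of those entries; the keys below are hypothetical and distinct for illustration): filter to entries whose first list matches, then pick the one minimizing the total absolute difference of the numbers.</p>

```python
# Hypothetical distinct keys standing in for the question's repeated keys.
dict1 = {"a": [['r1', 'r2'], [78, 125]],
         "b": [['r1', 'r2'], [73, 110]],
         "c": [['r2', 'r3'], [71, 103]]}
input1 = [['r1', 'r2'], [72, 112]]

labels, numbers = input1
# Keep only entries whose label list matches exactly.
candidates = {k: v for k, v in dict1.items() if v[0] == labels}
# Among those, minimize the sum of absolute differences of the numbers.
best = min(candidates,
           key=lambda k: sum(abs(x - y) for x, y in zip(candidates[k][1], numbers)))
print({best: dict1[best]})  # {'b': [['r1', 'r2'], [73, 110]]}
```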
|
<python><list>
|
2023-01-30 12:12:32
| 1
| 383
|
Rory
|
75,284,207
| 15,414,616
|
Encrypting java JWEObject with RSA in python
|
<p>I asked a while ago how to do something similar: <a href="https://stackoverflow.com/questions/71928580/decrypting-and-encrypting-java-jweobject-with-algorithm-rsa-oaep-256-on-python">Decrypting and encrypting java JWEObject with algorithm RSA-OAEP-256 on python</a>
Now I have a different encryption key and that code is not working for me anymore.</p>
<p>I need to be able to encrypt my data: <code>{"value": "Object Encryption"}</code> with JWE using RSA.</p>
<p>I have this key id: <code>a4d4039e-a8c7-4d06-98c8-2bda90ab169c</code>
and this encryption key:</p>
<pre><code>MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA9JJaeFiDdB+dGvi3jUzKUMU73kG6vvc/P+jwZXRKKpJSwf8PU4SapMyFPFFoHwca6Z8vZogF4ghEJ18JipNyF3BLnfCt1EHuZ15FG1Aywvpi+xw7F0UoJ9DWItBM1SodKXIh1be44/9SiLrpcyROKId349zWMOl3IVVxekLPKWTHsy2Iowp7JsjNEK3t9RdV+PAtUzp1ACyqHD/MDYSmpJuEOR9AbmBayaFIWVP+52q1ir7ea88zocmklDg0SGjiRNXq1tUAljWezpKstKQNz/IZN1kMLQ8SknrlpZL0vjjAnHFlgtLfcwPbESt76surRshfGwwfx8T9AOfXMgELNQIDAQAB
</code></pre>
<p>and I should get this:</p>
<pre><code>eyJhbGciOiJSU0EtT0FFUC0yNTYiLCJlbmMiOiJBMjU2R0NNIiwia2lkIjoiYTRkNDAzOWUtYThjNy00ZDA2LTk4YzgtMmJkYTkwYWIxNjljIn0.2hGqQVSbgZ9-9Hiz8VZizORpWRR2yioHb8vK6R9tQCpxr0jxBGehNL0K36XfJWJC5KxcxDdD9byeI_YTtB_hYTgsuMTHS5p-4aJ4nLk43Ya5yR8p8nn4s11wbkfSj0jbqSFb_1IOCMgX0Xu8lmnVe7Tjc4vACwBoaM6VpudEsLHpyQ9OxNaa1apbRp-BX3DEVM3l7ltHhMIh_DCRWbC4-LbS51L4RqLWxmihqRA97FYX4HX38Vbt3O__2tq5KfSjq78UEOffEFe_CRg8mXZ1CHgyH4YPMNmjS-jAI4m07Coja4zLXgv7ctFaFQePISLaZLgdp3a0a-Sht5cwwZfAhg.mc7_YA9mg3l7VV5B.ZOnYjkiXx1YSxDIILjcHUXluwW8jqsSO5NuIkto.9KtJGJRS9QevrqZPYYlcTQ
</code></pre>
<p>That's the java code I'm trying to rewrite in python:</p>
<pre><code> private RSAPublicKey getObjectEncryptionKey()
throws NoSuchAlgorithmException, InvalidKeySpecException {
logger.debug("Getting object encryption key");
if (Objects.isNull(objectEncryptionKey)) {
objectEncryptionKey = getActiveKey(Algorithm.RSA);
}
byte[] encryptionKey = base64Decode(String.valueOf(objectEncryptionKey.getEncryptionKeyValue()).getBytes());
KeyFactory keyFactory = getInstance(Algorithm.RSA.name());
return (RSAPublicKey) keyFactory.generatePublic(new X509EncodedKeySpec(encryptionKey));
}
public String encryptObject(Object object) {
logger.debug("Encrypting object with keyId: {}", getObjectEncryptionKeyId());
JsonWebEncryption encryptedObject = getJWEObject(object);
try {
return encryptedObject.getCompactSerialization();
} catch (JoseException e) {
throw new CryptoException("Could not encrypt object/event", e);
}
}
private JsonWebEncryption getJWEObject(Object object) {
JsonWebEncryption jwe = new JsonWebEncryption();
try {
jwe.setAlgorithmHeaderValue(KeyManagementAlgorithmIdentifiers.RSA_OAEP_256);
jwe.setEncryptionMethodHeaderParameter(ContentEncryptionAlgorithmIdentifiers.AES_256_GCM);
jwe.setKey(getObjectEncryptionKey());
jwe.setKeyIdHeaderValue(getObjectEncryptionKeyId());
} catch (NoSuchAlgorithmException | InvalidKeySpecException e) {
throw new CryptoException("Could not create JsonWebEncryption", e);
        }
        return jwe;  // missing in the snippet as posted; the method must return the built JWE
    }
</code></pre>
<p>How is it different from my previous question and what is the correct way to do it in python?</p>
<p>I tried doing something like that:</p>
<pre><code>def grouper(iterable, n, fillvalue=''):
args = [iter(iterable)] * n
return zip_longest(*args, fillvalue=fillvalue)
def decryption_key_to_pem(decryption_key: str) -> bytes:
pem = ['-----BEGIN PRIVATE KEY-----']
for group in grouper(decryption_key, 64):
pem.append(''.join(group))
pem.append('-----END PRIVATE KEY-----')
return str.encode('\n'.join(pem))
jwk.JWK.from_pem(decryption_key_to_pem(key))
</code></pre>
<p>but I get this exception:</p>
<pre><code>ValueError: ('Could not deserialize key data. The data may be in an incorrect format, it may be encrypted with an unsupported algorithm, or it may be an unsupported key type (e.g. EC curves with explicit parameters).', [_OpenSSLErrorWithText(code=503841036, lib=60, reason=524556, reason_text=b'error:1E08010C:DECODER routines::unsupported'), _OpenSSLErrorWithText(code=109052072, lib=13, reason=168, reason_text=b'error:068000A8:asn1 encoding routines::wrong tag'), _OpenSSLErrorWithText(code=109576458, lib=13, reason=524554, reason_text=b'error:0688010A:asn1 encoding routines::nested asn1 error'), _OpenSSLErrorWithText(code=109576458, lib=13, reason=524554, reason_text=b'error:0688010A:asn1 encoding routines::nested asn1 error')])
</code></pre>
<p>Also tried something like:</p>
<pre><code>def get_jwe_key(data, encryption_key, encryption_key_id):
jwe = jwcrypto.jwe.JWE()
jwe.plaintext = json.dumps(data).encode('utf-8')
jwe.alg = 'RSA-OAEP-256'
jwe.enc = 'A256GCM'
jwe.recipient = encryption_key
jwe.header = {'kid': encryption_key_id}
return jwe
jwe_key = get_jwe_key(decrypted_data, key, key_id)
jwe_key.serialize()
</code></pre>
<p>and I get: <code>jwcrypto.common.InvalidJWEOperation: No available ciphertext</code></p>
|
<python><java><encryption><rsa><jwe>
|
2023-01-30 12:11:04
| 1
| 437
|
Ema Il
|
75,284,167
| 14,311,263
|
Why does DataFrame.cumsum work with lists/strings, but GroupBy.cumsum doesn't?
|
<p>With the following dataframe example</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
df = pd.DataFrame({
"group": [1, 1, 2, 2, 2],
"list": [[1], [2], [1], [2], [3]],
"string": ["a", "b", "a", "b", "c"]
})
</code></pre>
<p><a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.cumsum.html" rel="nofollow noreferrer"><code>df.cumsum()</code></a> works fine</p>
<pre class="lang-none prettyprint-override"><code> group list string
0 1 [1] a
1 2 [1, 2] ab
2 4 [1, 2, 1] aba
3 6 [1, 2, 1, 2] abab
4 8 [1, 2, 1, 2, 3] ababc
</code></pre>
<p>but <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.core.groupby.GroupBy.cumsum.html" rel="nofollow noreferrer"><code>df.groupby("group").cumsum()</code></a> doesn't, it seems to be designed only for columns with numeric dtype:</p>
<blockquote>
<p>FutureWarning: The default value of numeric_only in DataFrameGroupBy.cumsum is deprecated. In a future version, numeric_only will default to False. Either specify numeric_only or select only columns which should be valid for the function.
df = df.groupby("group").cumsum()</p>
</blockquote>
<pre class="lang-none prettyprint-override"><code>Empty DataFrame
Columns: []
Index: [0, 1, 2, 3, 4]
</code></pre>
<p>My question is <strong>why</strong> is that the case? (I'm not looking for workarounds, I have a couple that work fine.)</p>
<p>If I <a href="https://github.com/pandas-dev/pandas/blob/v1.5.3/pandas/core/generic.py#L11233" rel="nofollow noreferrer">understand correctly</a>, then <code>DataFrame.cumsum</code> essentially delegates to <a href="https://numpy.org/doc/stable/reference/generated/numpy.cumsum.html" rel="nofollow noreferrer"><code>numpy.cumsum</code></a></p>
<pre class="lang-py prettyprint-override"><code> def cumsum(self, axis: Axis | None = None, skipna: bool_t = True, *args, **kwargs):
return self._accum_func("cumsum", np.cumsum, axis, skipna, *args, **kwargs)
</code></pre>
<p>and <code>numpy.cumsum</code> is fine with lists/strings.</p>
<p>Whereas <code>GroupBy.cumsum</code> seems to involve <a href="https://github.com/pandas-dev/pandas/blob/v1.5.3/pandas/core/groupby/groupby.py#L3677" rel="nofollow noreferrer">cythonisation</a>:</p>
<pre class="lang-py prettyprint-override"><code> def cumsum(self, axis=0, *args, **kwargs) -> NDFrameT:
"""
Cumulative sum for each group.
Returns
-------
Series or DataFrame
"""
nv.validate_groupby_func("cumsum", args, kwargs, ["numeric_only", "skipna"])
if axis != 0:
f = lambda x: x.cumsum(axis=axis, **kwargs)
return self._python_apply_general(f, self._selected_obj, is_transform=True)
return self._cython_transform("cumsum", **kwargs)
</code></pre>
<p>I don't have any experience with Cython, and maybe that isn't the reason behind it, so can anyone point out to me why lists/strings aren't viable here?</p>
|
<python><pandas><group-by><cumsum>
|
2023-01-30 12:06:50
| 0
| 11,434
|
Timus
|
75,284,052
| 420,892
|
Dataframe timestamp interpolation for multiple variables
|
<p>I have a dataframe with three columns: <code>timestamp</code>, <code>variable_name</code> and <code>value</code>. There is a total of 10 variables whose names are in the <code>variable_name</code> column.</p>
<p>I would like to have a single dataframe, indexed by timestamp, with one column per variable. Ideally, the dataframe should be "full", i.e. each timestamp should have an interpolated value for each variable.</p>
<p>I'm struggling to find a direct way to do that (without looping over the variable list, etc.). The dataframe comes from Spark but is small enough to be converted to Pandas. Any pointers will be most welcome.</p>
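<p>After converting to pandas, this can be sketched as a pivot followed by interpolation (assuming each timestamp/variable pair is unique; otherwise use <code>pivot_table</code> with an aggregation). The variable names and timestamps below are made up for illustration:</p>

```python
import pandas as pd

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2023-01-01", "2023-01-02",
                                 "2023-01-01", "2023-01-03"]),
    "variable_name": ["temp", "temp", "pressure", "pressure"],
    "value": [1.0, 2.0, 10.0, 30.0],
})

# One column per variable, indexed by timestamp, gaps filled by time-weighted
# interpolation between each variable's observed points.
wide = (df.pivot(index="timestamp", columns="variable_name", values="value")
          .sort_index()
          .interpolate(method="time"))
print(wide)
```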
|
<python><pandas><dataframe><apache-spark><pyspark>
|
2023-01-30 11:55:32
| 1
| 8,316
|
Cedric H.
|
75,283,937
| 10,656,093
|
Capture real-time `stdout` and `stderr` when running a function in a separate process in Python
|
<p>I have a python function and want to run it as a separate process with <code>multiprocessing</code> package.</p>
<pre><code>def run(ctx: Context):
print("hello world!")
return ctx
</code></pre>
<p>Afterward, I run it as a separate process with the following script:</p>
<pre><code>import multiprocessing
p = multiprocessing.Process(target=run, args=(ctx, ))
p.start()
p.join()
</code></pre>
<p>Now, I need to capture the live <code>stdout</code> and <code>stderr</code> of the above process. Is there a way to do it like this:</p>
<pre><code>import subprocess
proc = subprocess.Popen(['python','fake_utility.py'],stdout=subprocess.PIPE)
while True:
line = proc.stdout.readline()
if not line:
break
</code></pre>
<p>But I need to pass a function, not run a command with <code>Popen</code>. Do you know how I can read <code>stdout</code> when I run my function in a separate process?</p>
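<p>One sketch (assuming a fork-based start method, the Linux default, so the raw file descriptor is inherited by the child): redirect the child's <code>stdout</code>/<code>stderr</code> into an <code>os.pipe</code> and read it line by line in the parent, which gives live output rather than a dump at the end.</p>

```python
import multiprocessing
import os
import sys

def run():
    print("hello world!")

def _target(write_fd):
    os.dup2(write_fd, 1)   # child's stdout fd now points at the pipe
    os.dup2(write_fd, 2)   # same for stderr
    run()
    sys.stdout.flush()     # push any buffered output into the pipe

if __name__ == "__main__":
    read_fd, write_fd = os.pipe()
    p = multiprocessing.Process(target=_target, args=(write_fd,))
    p.start()
    os.close(write_fd)                    # parent keeps only the read end
    with os.fdopen(read_fd) as reader:
        for line in reader:               # streams lines as the child prints them
            print("captured:", line, end="")
    p.join()
```

<p>With the <code>spawn</code> start method (Windows, and macOS by default) the raw descriptor is not inherited; there you would pass a <code>multiprocessing.Pipe</code> or <code>Queue</code> instead.</p>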
|
<python><multiprocessing><subprocess><stdout>
|
2023-01-30 11:43:56
| 1
| 730
|
Maryam
|
75,283,556
| 14,343,465
|
Access to fetch at 'http://localhost:8000/api/v1' from origin 'http://0.0.0.0:8000' has been blocked by CORS policy
|
<p>This issue is well documented but my attempts have been unsuccessful... any suggestions are welcome!</p>
<ul>
<li>cookiecutter project on Github: <a href="https://github.com/Buuntu/fastapi-react" rel="nofollow noreferrer">Buuntu/fastapi-react</a></li>
</ul>
<h2>Recreating Error</h2>
<pre class="lang-bash prettyprint-override"><code>cookiecutter gh:Buuntu/fastapi-react --no-input
cd fastapi-react-project
</code></pre>
<p><em>modified files before running build script (in order to address prior errors):</em></p>
<p><code>frontend/Dockerfile</code>
line 7</p>
<pre class="lang-dockerfile prettyprint-override"><code>RUN npm install --legacy-peer-deps
</code></pre>
<p><code>docker-compose.yml</code>
line 37</p>
<pre class="lang-yaml prettyprint-override"><code> # flower:
# image: mher/flower
# command: celery flower --broker=redis://redis:6379/0 --port=5555
# ports:
# - 5555:5555
# depends_on:
# - "redis"
</code></pre>
<pre class="lang-bash prettyprint-override"><code>cd fastapi-react-project
chmod +x scripts/build.sh
./scripts/build.sh
</code></pre>
<ul>
<li>open: <a href="http://0.0.0.0:8000/" rel="nofollow noreferrer">http://0.0.0.0:8000/</a></li>
<li>click: "Click to make request to backend"</li>
</ul>
<blockquote>
<p>Access to fetch at 'http://localhost:8000/api/v1' from origin 'http://0.0.0.0:8000' has been blocked by CORS policy: The request client is not a secure context and the resource is in more-private address space <code>local</code>.</p>
</blockquote>
<blockquote>
<p>GET http://localhost:8000/api/v1 net::ERR_FAILED <code>api.ts:4</code> <-- fetch()</p>
</blockquote>
<h2>Attempts</h2>
<ul>
<li><a href="https://serverfault.com/questions/162429/how-do-i-add-access-control-allow-origin-in-nginx">adding Access-Control-Allow-Origin to nginx</a></li>
<li><a href="https://stackoverflow.com/questions/47902840/enabling-cors-in-create-react-app-utility">adding proxy to package.json</a></li>
<li><a href="https://stackoverflow.com/questions/61238680/access-to-fetch-at-from-origin-http-localhost3000-has-been-blocked-by-cors">adding no-cors to fetch()</a></li>
<li><a href="https://fastapi.tiangolo.com/tutorial/cors/?h=cors#use-corsmiddleware" rel="nofollow noreferrer">adding CORSMiddleware to FastAPI app</a></li>
</ul>
<h2>System</h2>
<pre class="lang-bash prettyprint-override"><code>➜ uname -v
# Darwin Kernel Version 21.6.0: Wed Aug 10 14:25:27 PDT 2022; root:xnu-8020.141.5~2/RELEASE_X86_64
➜ docker-compose version
# Docker Compose version v2.13.0
</code></pre>
<h2>Additional Info</h2>
<ul>
<li><a href="https://github.com/Buuntu/fastapi-react/blob/master/%7B%7Bcookiecutter.project_slug%7D%7D/nginx/nginx.conf" rel="nofollow noreferrer">nginx.conf</a></li>
</ul>
|
<python><reactjs><docker-compose><cors><fastapi>
|
2023-01-30 11:08:15
| 1
| 3,191
|
willwrighteng
|
75,283,435
| 12,858,691
|
Tf keras tuning Adamax optimizer: InvalidArgumentError: lr is not a scalar : [1]
|
<p>I want to tune my LSTM Model. Playing around with different optimizers, I stumbled on an issue with the Adamax optimizer. My code:</p>
<pre><code>import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense, SimpleRNN, LSTM, Dropout
import keras_tuner
from tensorflow.keras.callbacks import EarlyStopping
def build_lstm_for_tuning(hp):
activation=['relu','sigmoid']
lossfct='binary_crossentropy'
hidden_units_first_layer = hp.Choice('neurons first layer',[32,64,128,256,512,1024])
lr = hp.Choice('learning_rate', [0.0005]), #0.005,0.001,,0.0001,5e-05,1e-05
optimizer_name = hp.Choice('optimizer', ["Adamax"])#,"Ftrl","Adadelta","Adagrad","RMSprop","Nadam","SGD"
model = Sequential()
model.add(LSTM(hidden_units_first_layer,input_shape=(24, 237),activation=activation[0]))
model.add(Dense(units=21, activation=activation[1]))
optimizer = {"Ftrl":tf.keras.optimizers.Ftrl(lr),"Adadelta":tf.keras.optimizers.Adadelta(lr),"Adagrad":tf.keras.optimizers.Adagrad(lr),\
"Adamax":tf.keras.optimizers.Adamax(lr),"RMSprop":tf.keras.optimizers.RMSprop(lr),\
"Nadam":tf.keras.optimizers.Nadam(lr),"SGD":tf.keras.optimizers.SGD(lr)}[optimizer_name]
model.compile(loss=lossfct, optimizer= optimizer,\
metrics=[tf.keras.metrics.Precision(),tf.keras.metrics.Recall(),tf.keras.metrics.TruePositives(),tf.keras.metrics.AUC(multi_label=True)])
return model
tuner = keras_tuner.RandomSearch(
build_lstm_for_tuning,
objective=keras_tuner.Objective("val_auc", direction="max"),
max_trials=20,
overwrite=True)
tuner.search(input_data['X_train'], input_data['Y_train'], epochs=1, batch_size=512,
validation_data=(input_data['X_valid'], input_data['Y_valid']))
</code></pre>
<p>Output:</p>
<pre><code>WARNING:tensorflow:Layer lstm_1 will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
Search: Running Trial #1
Value |Best Value So Far |Hyperparameter
1024 |? |neurons first layer
0.0005 |? |learning_rate
Adamax |? |optimizer
WARNING:tensorflow:Layer lstm will not use cuDNN kernels since it doesn't meet the criteria. It will use a generic GPU kernel as fallback when running on GPU.
---------------------------------------------------------------------------
InvalidArgumentError Traceback (most recent call last)
/tmp/ipykernel_263/2162334881.py in <module>
27 overwrite=True)
28 tuner.search(input_data['X_train'], input_data['Y_train'], epochs=1, batch_size=512,
---> 29 validation_data=(input_data['X_valid'], input_data['Y_valid']))
~/.local/lib/python3.7/site-packages/keras_tuner/engine/base_tuner.py in search(self, *fit_args, **fit_kwargs)
177
178 self.on_trial_begin(trial)
--> 179 results = self.run_trial(trial, *fit_args, **fit_kwargs)
180 # `results` is None indicates user updated oracle in `run_trial()`.
181 if results is None:
~/.local/lib/python3.7/site-packages/keras_tuner/engine/tuner.py in run_trial(self, trial, *args, **kwargs)
292 callbacks.append(model_checkpoint)
293 copied_kwargs["callbacks"] = callbacks
--> 294 obj_value = self._build_and_fit_model(trial, *args, **copied_kwargs)
295
296 histories.append(obj_value)
~/.local/lib/python3.7/site-packages/keras_tuner/engine/tuner.py in _build_and_fit_model(self, trial, *args, **kwargs)
220 hp = trial.hyperparameters
221 model = self._try_build(hp)
--> 222 results = self.hypermodel.fit(hp, model, *args, **kwargs)
223 return tuner_utils.convert_to_metrics_dict(
224 results, self.oracle.objective, "HyperModel.fit()"
~/.local/lib/python3.7/site-packages/keras_tuner/engine/hypermodel.py in fit(self, hp, model, *args, **kwargs)
135 If return a float, it should be the `objective` value.
136 """
--> 137 return model.fit(*args, **kwargs)
138
139
~/.local/lib/python3.7/site-packages/keras/utils/traceback_utils.py in error_handler(*args, **kwargs)
65 except Exception as e: # pylint: disable=broad-except
66 filtered_tb = _process_traceback_frames(e.__traceback__)
---> 67 raise e.with_traceback(filtered_tb) from None
68 finally:
69 del filtered_tb
~/.local/lib/python3.7/site-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
57 ctx.ensure_initialized()
58 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 59 inputs, attrs, num_outputs)
60 except core._NotOkStatusException as e:
61 if name is not None:
InvalidArgumentError: lr is not a scalar : [1]
[[node Adamax/Adamax/update_4/ResourceApplyAdaMax
(defined at /home/cdsw/.local/lib/python3.7/site-packages/keras/optimizer_v2/adamax.py:141)
]] [Op:__inference_train_function_1833827]
Errors may have originated from an input operation.
...
</code></pre>
<p>Does anyone have an idea what is causing the error and how to avoid it? My guess is that it is connected to <code>hp.Choice</code>. The tuning framework might change the dtype of <code>lr</code> or something similar, but I did not manage to find solid proof for that.</p>
|
<python><tensorflow><keras><hyperparameters>
|
2023-01-30 10:57:20
| 1
| 611
|
Viktor
|
75,283,426
| 4,028,662
|
python serial readline() vs C# serial ReadLine()
|
<p>I'm trying to read serial input from my device, and have gotten it to work in Python using pyserial, e.g.</p>
<pre><code>import serial
port = serial.Serial(port='COM1', baudrate=19200, bytesize=8, parity='N', stopbits=1, timeout=None, xonxoff=False, rtscts=False, dsrdtr=False)
while 1:
N = port.in_waiting
if N>4:
msg = port.readline(N)
print(list(msg))
</code></pre>
<p>I'm trying to implement this same code in C#, but it does not quite seem to work, e.g.</p>
<pre><code>port = new SerialPort(COM1);
port.BaudRate = baudRate;
port.DataBits = 8;
port.Parity = Parity.None;
port.StopBits = StopBits.One;
port.ReadTimeout = SerialPort.InfiniteTimeout;
port.Handshake = Handshake.None;
port.RtsEnable = false;
port.DtrEnable = false;
port.DataReceived += new SerialDataReceivedEventHandler(DataReceived);
</code></pre>
<p>and</p>
<pre><code>void DataReceived(object sender, SerialDataReceivedEventArgs e)
{
int N = port.BytesToRead;
if (N > 4)
{
string line = port.ReadLine();
}
}
</code></pre>
<p>I'm able to read data properly using port.Read() in C#; however, ReadLine doesn't quite work: it doesn't seem to find the end-of-line character ("\n"), and the program just freezes. Yet I am not sure why it works with pyserial's readline(), which also looks for the same character (and works without a timeout). As far as I can tell, the rest of the port settings are identical.</p>
<p>Thanks!</p>
|
<python><c#><serial-port><pyserial>
|
2023-01-30 10:56:41
| 1
| 362
|
wenhoo
|
75,283,405
| 7,415,134
|
how to declare cursor in flask_mysqldb for only once
|
<p>I have this code(not complete)</p>
<pre><code>@app.route('/', methods = ['GET','POST'])
def home():
"""
the main function for routing home
"""
if request.method == 'POST':
try:
if not cursor:
cursor = mysql.connection.cursor()
url = request.form.get('link',"")
</code></pre>
<p>The thing is, I have to re-declare the cursor every time inside the request.method == 'POST' block.
If I declare it outside (say, on the first line), I get an error saying the cursor is not defined, because the POST connection is not active yet.</p>
|
<python><mysql><flask><mysql-python>
|
2023-01-30 10:54:28
| 1
| 379
|
Sid
|
75,283,203
| 10,889,650
|
How to simulate number of occurences over a time period
|
<p>I have an event which happens on average once every x seconds. In Python, I wish to "simulate" a specific time interval of t seconds, and obtain a reasonable randomly sampled integer n which denotes the number of times the event happened in that time period.</p>
<p>How can I achieve this?</p>
<p>(And no, before you ask, this is not my homework, and I'm asking the experienced statistician-coders on here instead of working out exactly which combination of scipy.stats.poisson calls I need so I can get on with something else in the meantime.)</p>
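<p>For what it's worth, a minimal sketch of the idea: events occurring at a constant average rate form a Poisson process, so the count over t seconds can be drawn from a Poisson distribution with mean t / x (the x and t values below are placeholders):</p>

```python
import numpy as np

rng = np.random.default_rng(seed=0)

x = 5.0   # mean seconds between events (assumed value)
t = 60.0  # simulated interval in seconds (assumed value)

# The number of events in t seconds is Poisson-distributed with mean t / x.
n = rng.poisson(t / x)
print(n)
```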
|
<python><statistics><poisson>
|
2023-01-30 10:37:42
| 1
| 1,176
|
Omroth
|
75,283,167
| 5,680,504
|
How to replace the variable in Json format string in Python?
|
<p>I have the following json format string as below.</p>
<pre><code>json_data_string = "{\"size\":0,\"query\":{\"bool\":{\"must\":[{\"range\":{\"@timestamp\":{\"time_zone\":\"+09:00\",\"gte\":\"2023-01-24T00:00:00.000Z\",\"lt\":\"2023-01-25T00:00:00.000Z\"}}},{\"term\":{\"serviceid.keyword\":{\"value\":\"430011397\"}}}]}},\"aggs\":{\"by_day\":{\"auto_date_histogram\":{\"field\":\"@timestamp\",\"minimum_interval\":\"minute\"},\"aggs\":{\"agg-type\":{\"terms\":{\"field\":\"nxlogtype.keyword\",\"size\":1000},\"aggs\":{\"my-sub-agg-name\":{\"avg\":{\"field\":\"size\"}}}}}}}}"
</code></pre>
<p>In this payload there is a <code>gte</code> field, which is the starting time.
I would like to set this value from a variable, not the constant string shown above.</p>
<p>For example, I want to generate many JSON strings using the <code>for-loop</code> below.</p>
<pre><code>fmt = '%Y-%m-%d %H:%M:%S'
d1 = datetime.strptime('2010-01-01 00:00:00', fmt)
d2 = datetime.strptime('2010-01-02 00:00:00', fmt)
minutesDiff = (d2 - d1).days * 24 * 60
for n in range(minutesDiff):
print(json_data_string_variable.format(datetime.strptime(str(d1 + timedelta(minutes=n)),fmt)))
</code></pre>
<p>Using this iterative style I think I can generate the multiple JSON strings, but I have no idea how to insert a variable placeholder into <code>json_data_string</code>. I have googled it and tried the <code>{}</code> syntax in <code>json_data_string</code>, but it did not work.</p>
<p>How to achieve it?</p>
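<p>One hedged sketch of a parse-modify-dump approach, rather than string formatting: load the string once with json.loads, set the field on each iteration, and re-serialize with json.dumps (the payload below is trimmed to just the field being varied):</p>

```python
import json
from datetime import datetime, timedelta

# A trimmed stand-in for the question's payload; the full json_data_string
# would be loaded the same way.
template = json.loads(
    '{"query": {"bool": {"must": [{"range": '
    '{"@timestamp": {"gte": "2023-01-24T00:00:00.000Z"}}}]}}}'
)

d1 = datetime(2010, 1, 1)
for n in range(3):
    # Build the timestamp for this minute and set it on the parsed dict.
    ts = (d1 + timedelta(minutes=n)).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    template["query"]["bool"]["must"][0]["range"]["@timestamp"]["gte"] = ts
    print(json.dumps(template))
```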
<p>Thanks.</p>
|
<python>
|
2023-01-30 10:34:37
| 2
| 1,329
|
sclee1
|
75,283,069
| 19,369,310
|
New column based to boost entries who have raced with other entries in this race in the past
|
<p>I have the following large dataset recording the result of a math competition among students in descending order of date: So for example, student 1 comes third in Race 1 while student 3 won Race 2, etc.</p>
<pre><code>Race_ID Date Student_ID Rank
1 1/1/2023 1 3
1 1/1/2023 2 2
1 1/1/2023 3 1
1 1/1/2023 4 4
2 11/9/2022 1 2
2 11/9/2022 2 3
2 11/9/2022 3 1
3 17/4/2022 5 4
3 17/4/2022 2 1
3 17/4/2022 3 2
3 17/4/2022 4 3
4 1/3/2022 1 1
4 1/3/2022 2 2
5 1/1/2021 1 2
5 1/1/2021 2 3
5 1/1/2021 3 1
</code></pre>
<p>I want to add a new column called "Competitor", which is a recency-weighted sum of the rank differences with all the other students in this student's past races, given by the formula:
(1 / (date of today's race - date of race t-1)) * (sum of (student's rank at race t-1 - rank at race t-1 of each student in the current race))</p>
<p>So the desired column looks like this:</p>
<pre><code>Race_ID Date Student_ID Rank Competitor
1 1/1/2023 1 3 -0.003268 =(1/112)((2-3)+(2-1))+(1/259)(0)+(1/306)(1-2)+(1/730)((2-3)+(2-1))
1 1/1/2023 2 2 0.03150884
1 1/1/2023 3 1 -0.0308953
1 1/1/2023 4 4 0.01158301
2 11/9/2022 1 2 -0.0051546
2 11/9/2022 2 3 0.00320629
2 11/9/2022 3 1 -0.0048544
3 17/4/2022 5 4 0
3 17/4/2022 2 1 0.02764602
3 17/4/2022 3 2 -0.0063694
3 17/4/2022 4 3 0
4 1/3/2022 1 1 -0.0023585
4 1/3/2022 2 2 0.00235849
5 1/1/2021 1 2 0
5 1/1/2021 2 3 0
5 1/1/2021 3 1 0
</code></pre>
<p>So for example for the first entry: -0.003268 =(1/112)((2-3)+(2-1))+(1/259)(0)+(1/306)(1-2)+(1/730)((2-3)+(2-1)) because 112 is the difference between current race date 1/1/2023 and last race date: 11/9/2022, and in the last race, student 1 ranked 2 while his competitor this race (namely student 2,3,4) ranked 3,1,NaN respectively, hence the first weight factor is (1/112)((2-3)+(2-1)+0) and so on and so forth</p>
<p>Here is a little excel to illustrate how I compute the new column:
<a href="https://i.sstatic.net/3YL7T.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3YL7T.png" alt="enter image description here" /></a></p>
<p>I don't know of any fast way to do that.</p>
<p>I know it's quite complicated so any help would be appreciated. Thanks in advance</p>
|
<python><python-3.x><pandas><dataframe>
|
2023-01-30 10:26:28
| 1
| 449
|
Apook
|
75,282,891
| 6,856,520
|
How to merge predicted values to original pandas test data frame where X_test has been converted using CountVectorizer before splitting
|
<p>I want to merge the predicted results for my test data back into X_test. I was able to merge them with y_test, but since X_test comes from a vectorized corpus, I'm not sure how to identify the indexes to merge on.
My code is below:</p>
<pre><code>def lr_model(df):
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
import pandas as pd
# Create corpus as a list
corpus = df['text'].tolist()
cv = CountVectorizer()
X = cv.fit_transform(corpus).toarray()
y = df.iloc[:, -1].values
# Splitting to testing and training
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.2, random_state = 0)
# Train Logistic Regression on Training set
classifier = LogisticRegression(random_state = 0)
classifier.fit(X_train, y_train)
# Predicting the Test set results
y_pred = classifier.predict(X_test)
# Merge true vs predicted labels
true_vs_pred = pd.DataFrame(np.concatenate((y_pred.reshape(len(y_pred),1), y_test.reshape(len(y_test),1)),1))
return true_vs_pred
</code></pre>
<p>This gives me y_test and y_pred, but I'm not sure how I can attach the original data frame rows for X_test (their ids) to this.
Any guidance is much appreciated. Thanks.</p>
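<p>A common pattern for this, sketched below with toy stand-in data rather than the question's corpus, is to pass df.index as an extra array to train_test_split, so the split returns the original row ids alongside X and y:</p>

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical stand-ins for the question's df and vectorized corpus.
df = pd.DataFrame({"text": ["a b", "b c", "c d", "d e"], "label": [0, 1, 0, 1]})
X = [[1], [2], [3], [4]]  # stands in for cv.fit_transform(corpus).toarray()
y = df["label"].values

# df.index is split the same way as X and y, so idx_test maps each
# test row back to its original dataframe row.
X_train, X_test, y_train, y_test, idx_train, idx_test = train_test_split(
    X, y, df.index, test_size=0.5, random_state=0)

results = df.loc[idx_test].copy()
results["true"] = y_test
# results["pred"] = classifier.predict(X_test)  # after fitting, as in the question
print(results)
```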
|
<python><pandas><scikit-learn><nlp>
|
2023-01-30 10:12:09
| 1
| 313
|
Jessie
|
75,282,840
| 4,584,863
|
use trial.suggest_int to pick values from given list in optuna, just like trial.suggest_categorical does
|
<p>I'm working with optuna for hyperparameter tuning of ML models in Python. While defining the objective function for tuning a deep learning model, I tried to define a list of choices from which <code>trial.suggest_int</code> can pick values.
<strong>For example</strong>:</p>
<pre><code>'batch_size': trial.suggest_int('batch_size', [16, 32, 64, 128, 256])
</code></pre>
<p>optuna documentation suggest that <code>trial.suggest_int</code> should be in the following format</p>
<pre><code>'some_param': trial.suggest_int('some_param', low, high, step)
</code></pre>
<p>my code looks something like below</p>
<pre><code>def objective(trial):
DL_param = {
'learning_rate': trial.suggest_float('learning_rate', 1e-3, 1e-1),
'optimizer': trial.suggest_categorical('optimizer', ["Adam", "RMSprop", "SGD"]),
'h_units': trial.suggest_int('h_units', 50, 250, step = 50),
'alpha': trial.suggest_float('alpha', [0.001,0.01, 0.1, 0.2, 0.3]),
'batch_size': trial.suggest_int('batch_size', [16, 32, 64, 128, 256]),
}
DL_model = build_model(DL_param)
DL_model.compile(optimizer=DL_param['optimizer'], loss='mean_squared_error')
DL_model.fit(x_train, y_train, validation_split = 0.3, shuffle = True,
batch_size = DL_param['batch_size'], epochs = 30)
y_pred_2 = DL_model.predict(x_test)
return mse(y_test_2, y_pred_2, squared=True)
</code></pre>
<p>I'm facing a problem defining a list for the parameters <code>'alpha'</code> and <code>'batch_size'</code>. Is there a way to do this, the same way <code>trial.suggest_categorical</code> picks strings from a given list in the code above?</p>
<pre><code>'optimizer': trial.suggest_categorical('optimizer', ["Adam", "RMSprop", "SGD"])
</code></pre>
<p>Any suggestions are welcome. Thanks in advance.</p>
|
<python><machine-learning><hyperparameters><optuna>
|
2023-01-30 10:06:22
| 3
| 510
|
Bhanu Chander
|
75,282,809
| 6,883,721
|
Python module not found when calling from another
|
<p>I have this project structure:</p>
<pre><code>/project_name
main.py
----- __init__.py
------ /modules
-------- __init__.py
-------- module1.py
-------- module2.py
</code></pre>
<p>I've edited to add more information. After trying a lot of recommended fixes, nothing works.</p>
<p><strong>Enviroment</strong></p>
<ul>
<li>Windows</li>
<li>Conda virtual enviroment project python 3.10</li>
<li>VSCode</li>
</ul>
<p><strong>Problem</strong></p>
<p>When running main.py from VScode</p>
<pre><code>from modules.module1 import *
if __name__ == "__main__":
pass
</code></pre>
<p>this error raise</p>
<pre><code>from module1 import *
ModuleNotFoundError: No module named 'module2'
</code></pre>
<p><strong>Modules</strong></p>
<p><em>module1.py</em></p>
<pre><code>from module2 import *
</code></pre>
<p><em>module2.py</em></p>
<pre><code>def test():
print("just testing")
</code></pre>
<p>So the problem always occurs when, from main.py, I import a module that itself imports another module: that second module is not found.</p>
<p>Still looking for the solution</p>
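<p>One hedged way to see the usual fix, absolute imports rooted at the directory containing main.py, is the self-contained sketch below: it recreates the question's layout in a temporary directory and imports module1, whose import of module2 is written as <code>modules.module2</code>:</p>

```python
import os
import sys
import tempfile

# Recreate the question's layout (hypothetical, in a temp dir for illustration).
root = tempfile.mkdtemp()
pkg = os.path.join(root, "modules")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "module2.py"), "w") as f:
    f.write("def test():\n    print('just testing')\n")
with open(os.path.join(pkg, "module1.py"), "w") as f:
    # The fix: import module2 via its package path, not as a bare top-level name.
    f.write("from modules.module2 import *\n")

# When main.py is run directly, its own directory is on sys.path; simulate that.
sys.path.insert(0, root)
from modules.module1 import test  # resolves module2 through the package

test()  # prints: just testing
```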
|
<python>
|
2023-01-30 10:03:24
| 5
| 567
|
Vince
|
75,282,782
| 2,491,592
|
Choosing the earliest date per record when equal dates are present
|
<p>I have a table with multiple dates per record. Example of the table:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>identifier</th>
<th>date</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1985-01-01</td>
<td>ex1</td>
</tr>
<tr>
<td>a</td>
<td>1985-01-01</td>
<td>ex2</td>
</tr>
<tr>
<td>a</td>
<td>1985-01-03</td>
<td>ex3</td>
</tr>
<tr>
<td>b</td>
<td>1990-01-05</td>
<td>ex1</td>
</tr>
<tr>
<td>b</td>
<td>1990-05-10</td>
<td>ex4</td>
</tr>
<tr>
<td>c</td>
<td>1987-01-01</td>
<td>ex1</td>
</tr>
<tr>
<td>c</td>
<td>1987-01-01</td>
<td>ex3</td>
</tr>
<tr>
<td>d</td>
<td>1986-01-01</td>
<td>ex1</td>
</tr>
<tr>
<td>d</td>
<td>1986-01-01</td>
<td>ex3</td>
</tr>
</tbody>
</table>
</div>
<p>I found out how to extract the earliest date in a group using:</p>
<pre><code>df2 = df.loc[df.groupby('identifier')['date'].idxmin()]
</code></pre>
<p>However, when I have two equal dates, since the value column is sorted alphabetically, I always end up choosing the first value in alphabetical order.<br />
I would like to find a way to randomize this behavior whenever I have equal dates, in order to pick:</p>
<ul>
<li>the first value the 1st time</li>
<li>the second value the 2nd time</li>
<li>the third value (whenever present) the 3rd time</li>
</ul>
<p>and restart accordingly</p>
<p>Is there a way to use the formula above together with a condition or a randomize method? How can I do that?</p>
<p>Expected output :</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>identifier</th>
<th>date</th>
<th>value</th>
</tr>
</thead>
<tbody>
<tr>
<td>a</td>
<td>1985-01-01</td>
<td>ex1</td>
</tr>
<tr>
<td>b</td>
<td>1990-01-05</td>
<td>ex1</td>
</tr>
<tr>
<td>c</td>
<td>1987-01-01</td>
<td>ex3</td>
</tr>
<tr>
<td>d</td>
<td>1986-01-01</td>
<td>ex1</td>
</tr>
</tbody>
</table>
</div>
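<p>A hedged sketch of one approach: keep every row tied for the earliest date per identifier, then pick one of the ties at random with <code>groupby(...).sample</code> (available since pandas 1.1). Note this randomizes the choice rather than strictly rotating first/second/third as described; a strict rotation would need extra state:</p>

```python
import pandas as pd

# The question's sample data, reconstructed.
df = pd.DataFrame({
    "identifier": ["a", "a", "a", "b", "b", "c", "c", "d", "d"],
    "date": ["1985-01-01", "1985-01-01", "1985-01-03", "1990-01-05",
             "1990-05-10", "1987-01-01", "1987-01-01", "1986-01-01", "1986-01-01"],
    "value": ["ex1", "ex2", "ex3", "ex1", "ex4", "ex1", "ex3", "ex1", "ex3"],
})
df["date"] = pd.to_datetime(df["date"])

# Keep every row tied for the earliest date per identifier...
earliest = df[df["date"] == df.groupby("identifier")["date"].transform("min")]
# ...then pick one tied row per identifier at random.
picked = earliest.groupby("identifier").sample(n=1, random_state=0)
print(picked)
```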
|
<python><pandas><dataframe>
|
2023-01-30 10:01:44
| 2
| 315
|
K3it4r0
|
75,282,758
| 9,606,828
|
Django Model Formset from ManyToMany not accepting queryset
|
<p>I have my model called Game that has a ManyToMany field.</p>
<pre><code>consoles = models.ManyToManyField('Console', through='GameConsole')
</code></pre>
<p>That ManyToMany has some additional attributes</p>
<pre><code>class GameConsole(models.Model):
game = models.ForeignKey(Game, on_delete=models.CASCADE)
console = models.ForeignKey(Console, on_delete=models.CASCADE)
released = models.DateTimeField
exclusive = models.BooleanField(default=False)
</code></pre>
<p>I have a page where I want to create/edit those relations.</p>
<pre><code>#forms.py
class GameConsoleForm(ModelForm):
class Meta:
model = GameConsole
fields = ['console', 'released', 'exclusive']
#to prevent the submission of consoles with the same id (taken from django topics forms formsets)
class BaseGameConsoleFormSet(BaseFormSet):
def clean(self):
"""Checks that no two alias have the same name."""
if any(self.errors):
# Don't bother validating the formset unless each form is valid on its own
return
console_ids = []
for form in self.forms:
if self.can_delete and self._should_delete_form(form):
continue
console= form.cleaned_data.get('console')
if console in console_ids:
raise ValidationError("Consoles in a set must be different.")
console_ids.append(console)
NewGameConsoleFormSet = modelformset_factory(GameConsole, form=GameConsoleForm, formset=BaseGameConsoleFormSet, extra=1, can_delete=True)
GameConsoleFormSet = modelformset_factory(GameConsole, form=GameConsoleForm, formset=BaseGameConsoleFormSet, extra=0, can_delete=True)
</code></pre>
<p>Creating multiple GameConsoles works fine. The problem is with editing.
In the view, when I do the following: <code>formset = GameConsoleFormSet(queryset = game_consoles)</code>, I get the error <code>__init__() got an unexpected keyword argument 'queryset'</code>, which is strange, since I already used this logic with another model (a normal table, not a ManyToMany) and it worked.</p>
<p>Full view:</p>
<pre><code>def mng_game(request, game_id):
#get game and verify if it exists
game = Game.objects.filter(id = game_id).first()
if not game:
request.session['error_code'] = 33
return redirect('error')
game_consoles = game.consoles.all()
form = GameForm(instance = game)
if game_consoles :
#TODO understand why this does not accept queryset
formset = GameConsoleFormSet(queryset = game_consoles)
else:
formset = NewGameConsoleFormSet()
consoles = Console.objects.all()
context = {
'form':form,
'game': game,
'formset': formset,
'consoles': consoles
}
#change details from the Game
if 'name' in request.POST:
#update game
return render(request, 'games/mng_game.html', context)
</code></pre>
<p>My question is: am I doing something wrong, or does a ManyToMany model formset not support querysets for editing?</p>
|
<python><django><django-views><django-forms><django-queryset>
|
2023-01-30 09:59:24
| 1
| 303
|
Pedro Silva
|
75,282,670
| 1,805,275
|
Plotly animation speed : Faster than duration 0
|
<p>I have a plot with a single line of 6,500 points.
Above this line I have a marker that is animated to follow the line, as in this example: <a href="https://plotly.com/python/animations/#moving-point-on-a-curve" rel="nofollow noreferrer">Intro to animations in Python</a></p>
<p>Everything works great: when I press play, the point starts to move along the line. But my issue is the speed; I need to make it as fast as the real recorded data.</p>
<p>I have set the frame duration and the transition duration to 0 but still it is not fast enough !</p>
<pre><code># Buttons
fig.update_layout(
updatemenus=[dict(buttons = [dict(label="Play",
method="animate",
args=[None, {"frame": {"duration": 0, "redraw": False}, "mode": "immediate", "transition": {"duration": 0}}])],
type='buttons',
showactive=False,
y=1,
x=1.12,
xanchor='right',
yanchor='top')])
</code></pre>
<p><a href="https://i.sstatic.net/hmYLh.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hmYLh.gif" alt="enter image description here" /></a></p>
<p>How can I make this even faster ?</p>
<p>Thanks</p>
|
<python><plotly>
|
2023-01-30 09:51:55
| 0
| 3,322
|
SJU
|
75,282,588
| 7,095,530
|
Try next view in Django URL dispatcher
|
<p>We have the following URL configuration in Django.</p>
<p>Django will try to match the URL with the rules down below. Once it finds a match, it will use the appropriate view and lookup the object in the model.</p>
<p>The thing is, once it finds a match in the URL pattern, it will match the view. But once the object in the view can't be found, it will return a page not found (404) error.</p>
<p><code>urls.py</code>:</p>
<pre><code>from django.urls import path
from . import views
urlpatterns = [
path('articles/<slug:category>/<slug:editor>/', views.ArticleByThemeView.as_view(), name='articles_by_editor'),
path('articles/<slug:category>/<slug:theme>/', views.ArticleDetailView.as_view(), name='articles_by_theme')
]
</code></pre>
<p><code>views.py</code></p>
<pre><code>class ArticleByThemeView(ListView):
"""
List all articles by a certain theme; "World War 2".
"""
model = Article
def dispatch(self, request, *args, **kwargs):
try:
# Check if the theme_slug matches a theme
theme = ArticleTheme.objects.get(slug=self.kwargs['theme_slug'])
except ArticleTheme.DoesNotExist:
# Theme does not exist, slug must be an article_slug
return redirect(
'article_detail',
                category_slug=category_slug,
article_slug=theme_slug
)
return super().dispatch(request, *args, **kwargs)
class ArticleDetailView(DetailView):
"""
Detailview for a certain article
"""
model = Article
def get_object(self):
return get_object_or_404(
Article,
category__slug=self.kwargs['category_slug'],
slug=self.kwargs['article_slug']
)
</code></pre>
<p>We have the following URL patterns, we can sort articles either by the editor or by theme. We do this to create a logical URL structure for SEO purposes.</p>
<p>Is there any way we can redirect to another view once the object isn't found?</p>
<p>Can we modify the <code>dispatch</code> method to return to the URL patterns and find the following matching rule?</p>
|
<python><django>
|
2023-01-30 09:44:02
| 2
| 315
|
Kevin D.
|
75,282,560
| 8,541,953
|
pip.conf file does not exist
|
<p>I am looking for the <code>pip.conf</code> file to add an URL from which install some packages. When running <code>python3 -m pip config debug</code> I get:</p>
<pre><code>env_var:
env:
global:
  /Library/Application Support/pip/pip.conf, exists: False
site:
  /Users/myuser/miniconda3/pip.conf, exists: False
user:
  /Users/myuser/.pip/pip.conf, exists: False
  /Users/user/.config/pip/pip.conf, exists: False
</code></pre>
<p>So the file does not exist. Why is that? How can I generate it?</p>
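<p>pip does not create this file automatically; you create it yourself at one of the paths listed by <code>pip config debug</code>. A minimal sketch for the per-user location (the index URL is a placeholder; substitute your own):</p>

```shell
# pip.conf does not exist until you create it at a listed path.
mkdir -p ~/.pip
cat > ~/.pip/pip.conf <<'EOF'
[global]
extra-index-url = https://example.com/simple/
EOF
# python3 -m pip config list   # should now show the value from the user file
```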
|
<python><pip>
|
2023-01-30 09:42:06
| 1
| 1,103
|
GCGM
|
75,282,511
| 13,568,193
|
df to table throw error TypeError: __init__() got multiple values for argument 'schema'
|
<p>I have dataframe in pandas :- purchase_df. I want to convert it to sql table so I can perform sql query in pandas. I tried this method</p>
<pre><code>purchase_df.to_sql('purchase_df', con=engine, if_exists='replace', index=False)
</code></pre>
<p>It throws an error:</p>
<pre><code>TypeError: __init__() got multiple values for argument 'schema'
</code></pre>
<p>I have a dataframe named purchase_df and I need to run SQL queries on it, like this: <code>engine.execute('''select * from purchase_df where condition''')</code>. For this I need to convert the dataframe into a SQL table, since on our server pandas_sql is not installed; only SQLAlchemy is installed.</p>
<p>I ran this code locally in PyCharm and it works perfectly fine, but when I try it in a Databricks notebook it shows an error, even though a week ago it was running fine in the Databricks notebook too. Help me fix this.</p>
<p>Note: pandas version 1.3.4, SQLAlchemy version 2.0.0.</p>
|
<python><pandas><dataframe><sqlalchemy><azure-databricks>
|
2023-01-30 09:37:33
| 3
| 383
|
Arpan Ghimire
|
75,282,464
| 826,983
|
Alembic creates tables despite errors in migration upgrade()
|
<p>I've a simple migration script which is supposed to create a few tables.</p>
<p>For brevity, this is basically the content:</p>
<pre class="lang-py prettyprint-override"><code>op.create_table(
'a',
sa.Column('id', sa.BIGINT, primary_key=True),
)
op.create_table(
'b',
sa.Column('id', sa.BIGINT, primary_key=True),
sa.Column('id', sa.BIGINT, primary_key=True),
)
</code></pre>
<p>Table <code>b</code> has two <code>id</code> columns, which causes an error.</p>
<p>After fixing this and restarting the server, I get an error saying that table <code>a</code> already exists.</p>
<p>The table <code>alembic_version</code> does not contain a record of this revision:</p>
<pre class="lang-none prettyprint-override"><code>mysql> select * from alembic_version;
Empty set (0.00 sec)
</code></pre>
<p>Yet the table actually exists:</p>
<pre class="lang-none prettyprint-override"><code>mysql> show tables;
+-------------------------+
| Tables_in_fingerprinter |
+-------------------------+
| alembic_version |
| a |
+-------------------------+
</code></pre>
<p>How can I make sure that alembic puts everything into one transaction and avoid committing partial changes to the database?</p>
|
<python><sqlalchemy><alembic>
|
2023-01-30 09:33:14
| 0
| 25,793
|
Stefan Falk
|
75,281,731
| 18,330,370
|
pre-commit fails to install isort 5.10.1 with error "RuntimeError: The Poetry configuration is invalid"
|
<pre><code>[INFO] Installing environment for https://github.com/pycqa/isort.
[INFO] Once installed this environment will be reused.
[INFO] This may take a few minutes...
An unexpected error has occurred: CalledProcessError: command: ('/builds/.../.cache/pre-commit/repo0_h0f938/py_env-python3.8/bin/python', '-mpip', 'install', '.')
return code: 1
expected return code: 0
[...]
stderr:
ERROR: Command errored out with exit status 1:
[...]
File "/tmp/pip-build-env-_3j1398p/overlay/lib/python3.8/site-packages/poetry/core/masonry/api.py", line 40, in prepare_metadata_for_build_wheel
poetry = Factory().create_poetry(Path(".").resolve(), with_groups=False)
File "/tmp/pip-build-env-_3j1398p/overlay/lib/python3.8/site-packages/poetry/core/factory.py", line 57, in create_poetry
raise RuntimeError("The Poetry configuration is invalid:\n" + message)
RuntimeError: The Poetry configuration is invalid:
- [extras.pipfile_deprecated_finder.2] 'pip-shims<=0.3.4' does not match '^[a-zA-Z-_.0-9]+$'
</code></pre>
<p>I know I can upgrade the hook to isort 5.12.0 to fix the issue.
However, our project uses Python 3.7, which isort 5.12.0 does not support. For compatibility reasons, we don't want to update Python for now. What should I do?</p>
|
<python><pre-commit-hook><pre-commit><pre-commit.com><isort>
|
2023-01-30 08:19:34
| 2
| 331
|
LOTEAT
|
75,281,725
| 291,557
|
My Azure Function in Python v2 doesn't show any signs of running, but it probably is
|
<p>I have a simple function app in Python v2. The plan is to process millions of images, but right now I just want to get the scaffolding right, i.e. no image processing, just dummy data. So I have two functions:</p>
<ul>
<li><code>process</code> with an HTTP trigger <code>@app.route</code>, this inserts 3 random image URLs to the Azure Queue Storage,</li>
<li><code>process_image</code> with a Queue trigger <code>@app.queue_trigger</code>, that processes one image URL from above (currently only logs the event).</li>
</ul>
<p>I trigger the first one with a <code>curl</code> request and, as expected, I can see the invocation in the Azure portal in the function's invocation section and I can see the items in the Storage Explorer's queue.</p>
<p>But unexpectedly, I do not see any invocations for the second function, even though after a few seconds the items disappear from the <code>images</code> queue and end up in the <code>images-poison</code> queue. So this means that something did run with the queue items 5 times. I see the following warning in the application insights checking <code>traces</code> and <code>exceptions</code>:</p>
<pre><code>Message has reached MaxDequeueCount of 5. Moving message to queue 'case-images-deduplication-poison'.
</code></pre>
<p>Can anyone help with what's going on? Here's the <a href="https://gist.github.com/rokcarl/3aa843b66ee645a038d364c2b700bcc6" rel="nofollow noreferrer">gist of the code</a>.</p>
|
<python><azure><azure-functions>
|
2023-01-30 08:18:39
| 1
| 18,906
|
duality_
|
75,281,579
| 13,568,193
|
Check three column condtion and delete based on condition
|
<p>I have following dataframe df in pandas</p>
<pre><code> item purchase_date purchase_qty purchase_price other_adjustments sold
0 0040030 2022-01 0 0.00 0 0.0
1 0050064 2022-01 0 0.00 -5 854.0
2 0050066 2022-01 0 0.00 2979 0.0
3 0050202 2022-01 0 0.00 14673 1320.0
4 0050204 2022-01 0 0.00 2538 0.0
</code></pre>
<p>I need to delete rows where purchase_qty, other_adjustments, and sold are all 0.</p>
<p>I tried this</p>
<pre class="lang-py prettyprint-override"><code>test_df = df[(df['purchase_qty'] != 0) & (df['other_adjustments'] != 0) & (df['sold'] != 0)]
</code></pre>
<p>This code deletes every row where any of those columns is 0, but what I want is to check all three columns and delete a row only if all three are 0. Please help me.</p>
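<p>For reference, a hedged sketch of the usual fix: <code>&amp;</code> keeps only rows where every column is nonzero, so rows with just one zero column get dropped too, while <code>|</code> (any column nonzero) drops a row only when all three are 0:</p>

```python
import pandas as pd

# Three of the question's rows, enough to show the difference.
df = pd.DataFrame({
    "purchase_qty": [0, 0, 1],
    "other_adjustments": [0, -5, 0],
    "sold": [0.0, 854.0, 0.0],
})

# | keeps a row if ANY column is nonzero, i.e. drops it only when all three are 0.
out = df[(df["purchase_qty"] != 0) | (df["other_adjustments"] != 0) | (df["sold"] != 0)]
# Equivalent: df[df[["purchase_qty", "other_adjustments", "sold"]].ne(0).any(axis=1)]
print(out)
```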
|
<python><pandas>
|
2023-01-30 08:02:35
| 5
| 383
|
Arpan Ghimire
|
75,281,452
| 11,311,729
|
read json file nested dictionary
|
<p>Consider this example:</p>
<pre><code>{
"items": {
"PYGXGE": {
"id": "a",
"number": "1"
},
"sbe7oa": {
"id": "b",
"number": "2"
},
"sbe7ob": {
"id": "c",
"number": "3"
},
"sbe7oc": {
"id": "d",
"number": "4"
},
"sbe7od": {
"id": "e",
"number": "5"
},
"sbe7oe": {
"id": "f",
"number": "6"
}
}
}
</code></pre>
<p>I want to access all the nested number values. How can I do that in Python? Here is my code so far:</p>
<pre><code>import json
f = open('sample.json')
data = json.load(f)
for i in data['items']:
print(i)
f.close()
</code></pre>
<p>Also, is this JSON format better, or a list of dicts?</p>
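<p>A small sketch, assuming the structure shown above: iterating over <code>data['items'].values()</code> yields the inner dicts directly, so the number fields can be collected without knowing the outer keys:</p>

```python
import json

# The question's structure, shortened to two items.
raw = '{"items": {"PYGXGE": {"id": "a", "number": "1"}, "sbe7oa": {"id": "b", "number": "2"}}}'
data = json.loads(raw)

# .values() yields the inner dicts, so the outer keys ("PYGXGE", "sbe7oa")
# never need to be known in advance.
numbers = [item["number"] for item in data["items"].values()]
print(numbers)  # ['1', '2']
```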
|
<python><json><dictionary>
|
2023-01-30 07:47:21
| 3
| 407
|
vdotup
|
75,281,339
| 420,827
|
Grouping Python dictionary by 2nd item in key tuple
|
<p>I have a dictionary:</p>
<pre><code>{(bob, CA): 90, (tom, NJ): 80, (sarah, CA): 90, (jon, TX): 90}
</code></pre>
<p>I would like to group by the 2nd item in the key tuple:</p>
<pre><code>{CA: {bob: 90, sarah: 90},
NJ: {tom: 80},
TX: {jon: 90}}
</code></pre>
<p>I could do this by iterating through the original dictionary, but I am wondering if there is a way to do it using <code>itertools</code>.</p>
<p>I have tried something along the lines of:</p>
<pre class="lang-py prettyprint-override"><code>groupby(l, operator.itemgetter(list(l.keys())[0][1])):
</code></pre>
<p>but it is not working.</p>
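<p>As a point of comparison: <code>itertools.groupby</code> only groups consecutive items, so it would require sorting the keys by state first; a single pass with <code>collections.defaultdict</code> may be simpler (a sketch below, with the names quoted as strings):</p>

```python
from collections import defaultdict

scores = {("bob", "CA"): 90, ("tom", "NJ"): 80, ("sarah", "CA"): 90, ("jon", "TX"): 90}

# One pass: unpack each tuple key and index by its second element (the state).
grouped = defaultdict(dict)
for (name, state), score in scores.items():
    grouped[state][name] = score

print(dict(grouped))  # {'CA': {'bob': 90, 'sarah': 90}, 'NJ': {'tom': 80}, 'TX': {'jon': 90}}
```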
|
<python><dictionary><tuples>
|
2023-01-30 07:34:41
| 2
| 551
|
Manas
|
75,281,277
| 16,568,780
|
Suggest a faster way to check if a process is active or not
|
<p>I am developing a voice chatbot using a Raspberry Pi 4 and AWS Lex. Because there are many if conditions and many things to check,
it takes a little time to reach the recording part that captures the user's voice,</p>
<p>which is sometimes inconvenient, since the user may start speaking before the recording part has been reached.</p>
<pre><code> if loadingVid.poll() is None:
os.killpg(os.getpgid(loadingVid.pid), signal.SIGTERM)
</code></pre>
<p>This is the part that I think needs improvement: I play videos in the background to signal that the user's voice is being processed, and I have to check whether the video is still playing so that I can kill it when the answer arrives from Lex.</p>
<p>Also, the video is played by omxplayer, a player specially designed for the Raspberry Pi that decodes video directly on the GPU. I am using subprocess to open the video, but I can't use it to terminate the player; I don't know the reason, but the method above works.</p>
<p>I think I need to run the check in a separate thread or something, but that would complicate the whole code.</p>
<p>Also, I am currently using 2 threads, for hot-word detection and motion detection.</p>
<p>This is how I am recording:</p>
<pre><code>def record():
"""Record audio from the microphone"""
os.system( "sox -d -t wavpcm -c 1 -b 16 -r 16000 -e signed-integer --endian little - silence 1 0 1% 5 0.8t 5% -highpass 300> request.wav" )
</code></pre>
<p>This is the main function, to give a better idea about the script and how things work, and in case you have other suggestions for improvement.</p>
<p>I am not looking for a ready answer; I am looking for guidance on how to approach this problem, what I can improve, and what to look out for.</p>
<p>If something is not clear, please feel free to ask and I will answer.</p>
<pre><code>def main():
"""
Main function:
1. Load environment variables and start idle video.
2. Initialize Amazon Lex runtime client.
3. Set max waiting time, current session id, and last response to initial values.
4. Enter an infinite loop:
5. If the last response's session state's intent state is "Fulfilled" or "Failed":
a. Wait for a hot word and return the time elapsed waiting for it.
b. If the idle time duration is greater than the max waiting time, start a new session and play a greeting video. Otherwise, play a confirmation video.
6. If the last response is None:
a. Wait for a hot word and return the time elapsed waiting for it.
b. Start a new session and play a greeting video.
7. Stop the listening video and start a loading video.
8. Record audio and send it to the Amazon Lex runtime to get a response.
9. Handle the response by playing an audio file, displaying an image, or playing a video.
10. Stop the loading video and start the idle video again.
"""
dotenv.load_dotenv()
hologram.minimze()
hologram.hide_cursor()
# hologram.bench_idle()
idle = hologram.play_idle("/home/alexa/Videos/menu.mp4", 5)
lexruntimev2 = boto3.client(
"lexv2-runtime",
aws_access_key_id=os.environ.get("aws_access_key_id"),
aws_secret_access_key=os.environ.get("aws_secret_access_key"),
region_name="us-east-1",
)
maxWaitingTime = 30.0
currentSessionId = None
last_response = None
current_process = None
listeningVid = None
intermediate_vid = False
while True:
if last_response and (
last_response["sessionState"]["intent"]["state"] == "Fulfilled"
or last_response["sessionState"]["intent"]["state"] == "Failed"
):
if idle.poll() is not None:
idle = hologram.play_idle("/home/alexa/Videos/menu.mp4", 5)
# print(last_response)
# wait for hot word and return time elased waiting for it
idleTimeDuration = triggers.wait_for_triggers()
# print(idleTimeDuration)
if idleTimeDuration > maxWaitingTime:
currentSessionId = lex.newSession()
greeting_vid = lex.say_greeting()
else:
listeningVid = lex.say_confirm_listening()
elif last_response == None:
triggers.wait_for_triggers()
currentSessionId = lex.newSession()
lex.say_greeting()
# if this is a follow-up question (slot elicitation)
else:
# listeningVid = lex.say_confirm_listening()
listeningVid= hologram.play_idle("/home/alexa/project/video/speaking.mp4",8)
lex.record()
# say one moment please
loadingVid = lex.say_one_moment()
# if the idle vid is still running (return value is still none)
if idle.poll() is None:
# then kill it
os.killpg(os.getpgid(idle.pid), signal.SIGTERM)
if listeningVid is not None:
# then kill the listening
if listeningVid.poll() is None:
os.killpg(os.getpgid(listeningVid.pid), signal.SIGTERM)
# listeningVid.kill()
response, responseAssetURL = lex.recognize_audio(lexruntimev2, currentSessionId)
last_response = response
video_case_handling = True
Image_case_handling = True
if responseAssetURL != None:
if ".png" not in responseAssetURL.lower():
audio.play_audio("audio/lex_response.mpeg")
video_case_handling = False
if ".png" in responseAssetURL.lower():
Image_case_handling = False
feh = hologram.display_image(
responseAssetURL, "/home/alexa/project/images/image.png"
)
# hologram.play_with_mpv(responseAssetURL)
if Image_case_handling:
hologram.play_with_omx(responseAssetURL, 9)
# hologram.play_idle()
# time.sleep(5)
# loadingVid.terminate()
# if the loading vid is still running (return value is still none)
if loadingVid.poll() is None:
# then kill it
os.killpg(os.getpgid(loadingVid.pid), signal.SIGTERM)
if responseAssetURL == None:
intermediate_vid = True
intermediate = hologram.play_idle(
"/home/alexa/project/video/speaking.mp4", 8
)
if video_case_handling:
audio.play_audio("audio/lex_response.mpeg")
if intermediate_vid:
os.killpg(os.getpgid(intermediate.pid), signal.SIGTERM)
intermediate_vid = False
if Image_case_handling == False:
idle = hologram.play_idle("/home/alexa/Videos/menu.mp4", 5)
os.system("pkill feh")
</code></pre>
<p>I tried multiprocessing, but I don't think it helps. I also tried reducing the if conditions I used to have and simplifying what is needed.</p>
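One small, low-risk cleanup for the repeated poll/kill blocks in the main loop: factor them into a single helper. It does not remove the latency before recording, but it shrinks the loop and makes a future watcher thread trivial (the thread would just call the helper periodically, since `poll()` is cheap). A self-contained sketch, with `sleep` standing in for omxplayer:

```python
import os
import signal
import subprocess
import time

def kill_if_running(proc):
    """Terminate a player's process group if it has not exited yet;
    poll() returning None means the process is still running."""
    if proc is not None and proc.poll() is None:
        os.killpg(os.getpgid(proc.pid), signal.SIGTERM)

# Demo with a stand-in child process; start_new_session gives it its
# own process group, like the shell-started players in the script.
p = subprocess.Popen(["sleep", "30"], start_new_session=True)
kill_if_running(p)
p.wait(timeout=5)
print(p.returncode)  # negative: ended by a signal
```

In the main loop, `kill_if_running(idle)`, `kill_if_running(listeningVid)`, and `kill_if_running(loadingVid)` then replace the three near-identical blocks, and the `None` checks come for free.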
|
<python><multithreading><raspberry-pi><amazon-lex>
|
2023-01-30 07:27:03
| 1
| 345
|
Ali Redha
|
75,281,172
| 16,220,410
|
scrape fiba stats box score
|
<p>I am barely a beginner at Python, and I would like to build a dataset of my favorite local basketball team. That's why I searched for code that scrapes FIBA stats box scores; I found some here on Stack Overflow and tried to edit the headers, but it just generates an empty CSV file. I am wondering if anyone could help me edit the code below to scrape the box score of each team.</p>
<pre><code>import requests
from bs4 import BeautifulSoup
import pandas
stats_basic = ['NO.', 'PLAYER', 'POS', 'MINS', 'PTS', 'FG', 'FG%', '2P', '2P%', '3P', '3P%', 'FT', 'FT%', 'OFF', 'DEF', 'REB', 'AST', 'TO', 'STL', 'BLK', 'BLKR', 'PF', 'FLS ON', '+/-']
#stats_adv = ['TS%', 'eFG%', '3PAr', 'FTr', 'ORB%', 'DRB%', 'TRB%', 'AST%', 'STL%', 'BLK%', 'TOV%', 'USG%', #'ORtg', 'DRtg', 'BPM']
url_boxscore = "https://fibalivestats.dcd.shared.geniussports.com/u/PBA/2145647/bs.html"
stats1 = []
r = requests.get(url_boxscore)
c = r.content
soup = BeautifulSoup(c, "html.parser")
box_scores_content = soup.find_all("div",{"id":"content"})
d = {}
for item in box_scores_content:
for stat in stats_basic:
d[stat] = (item.find_all("td",{"data-stat":"fg"})[11].text)
stats1.append(d)
df=pandas.DataFrame(stats1)
df.to_csv("ginebra.csv")
</code></pre>
<p><a href="https://i.sstatic.net/5PPuM.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5PPuM.jpg" alt="page to scrape" /></a></p>
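Two separate things are worth checking here. First, these live-stats pages are typically filled in by JavaScript, so the HTML that `requests` receives may contain no `<td>` cells at all; print `r.text` to confirm, and if so a browser-automation tool (or the page's underlying JSON feed, if you can find it in the network tab) is needed. Second, independent of the site, the loop appends the same `d` dict object on every iteration, so even when data is found every row in the CSV ends up identical to the last one. A minimal demonstration of that second bug and its fix:

```python
rows = [("a", 1), ("b", 2)]

stats_buggy, d = [], {}
for name, val in rows:
    d["name"], d["val"] = name, val
    stats_buggy.append(d)            # appends the SAME dict object twice

stats_fixed = []
for name, val in rows:
    stats_fixed.append({"name": name, "val": val})   # fresh dict per row

print(stats_buggy)  # [{'name': 'b', 'val': 2}, {'name': 'b', 'val': 2}]
print(stats_fixed)  # [{'name': 'a', 'val': 1}, {'name': 'b', 'val': 2}]
```

In the scraper, move the `d = {}` (or a dict literal) inside the outer loop so each box-score row gets its own dict.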
|
<python><dataframe><web-scraping><beautifulsoup><python-requests>
|
2023-01-30 07:12:07
| 2
| 1,277
|
k1dr0ck
|
75,280,913
| 20,508,530
|
How to access request object of django in react?
|
<p>I have multiple apps in my Django project, but for one app I would like to use React, so I have created two apps: one for APIs and the other for the frontend.
I used webpack to combine Django and React.
Now I want to access <code>request.user</code> and <code>user.is_authenticated</code> in my components.
How can I access those without calling APIs? I am not using token-based authentication, so I cannot call the APIs.</p>
<p>views.py</p>
<pre><code>def index(request):
return render(request,'index.html')
</code></pre>
<p>urls.py</p>
<pre><code>urlpatterns = [
re_path(r'^(?:.*)/?$',index),
]
</code></pre>
<p>I would like to use is_authenticated in my sidebar everywhere.</p>
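Since `index.html` itself is rendered by a session-authenticated Django view, one common pattern is to serialize the auth state into the page and have React read it at startup, with no API call and no tokens; Django's `json_script` template filter handles the escaping. A sketch (`user_context` and the element id `user-info` are made-up names, not Django APIs):

```python
# views.py -- pass the session user's state into the template context.
from django.shortcuts import render

def user_context(user):
    """JSON-serializable auth state for the template."""
    return {
        "is_authenticated": bool(user.is_authenticated),
        "username": user.username if user.is_authenticated else None,
    }

def index(request):
    return render(request, "index.html", {"user_info": user_context(request.user)})

# index.html (Django template), anywhere in <body>:
#     {{ user_info|json_script:"user-info" }}
# Any React component (e.g. the sidebar) can then read it synchronously:
#     const user = JSON.parse(document.getElementById("user-info").textContent);
#     if (user.is_authenticated) { ... }
```

The data is fixed at page-render time; if auth state can change without a full page reload, an endpoint would still be needed.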
|
<javascript><python><reactjs><django><babeljs>
|
2023-01-30 06:33:03
| 1
| 325
|
Anonymous
|
75,280,818
| 11,922,765
|
Pandas Dataframe: Slice a part of read_html table into a dataframe
|
<p>I want to import a portion of html table into a dataframe.</p>
<p>Here is the table. From it, I want to import only the "Total Electric Industry" section:
<a href="https://i.sstatic.net/urU9J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/urU9J.png" alt="enter image description here" /></a></p>
<p>I am running below code in the Google colab:</p>
<pre><code># Total residential customers
us_res_html = 'https://www.eia.gov/electricity/annual/html/epa_02_01.html'
us_res = pd.read_html(us_res_html)
us_res = us_res[1]
print(type(us_res))
print(us_res.columns)
print(us_res.dtypes)
# Consider only the total electric industry table
# The table starts with "Total Electric Industry"
# Next table starts with "Full-Service Providers"
# These texts were repeated in all the columns; Hence, search in the first column
start_idx = us_res[us_res[us_res.columns[0]]=="Total Electric Industry"].index
end_idx = us_res[us_res[us_res.columns[0]]=="Full-Service Providers"].index
</code></pre>
<p>Present output:</p>
<pre><code>Int64Index([], dtype='int64')
</code></pre>
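The empty `Int64Index` usually means the comparison string does not match the cell text exactly; cells scraped from HTML often carry stray whitespace or non-breaking spaces (`\xa0`). Whether that is what this EIA table does is an assumption (print `us_res[us_res.columns[0]].tolist()` to check the real values), but normalizing before comparing is cheap. A sketch on a toy frame standing in for `us_res`:

```python
import pandas as pd

# Toy stand-in: the label cell carries a trailing non-breaking space,
# so exact equality finds nothing.
us_res = pd.DataFrame({
    0: ["Total Electric Industry\xa0", "2020", "Full-Service Providers", "2020"],
    1: ["", "120", "", "80"],
})

first_col = (us_res[us_res.columns[0]].astype(str)
             .str.replace("\xa0", " ").str.strip())
start_idx = us_res.index[first_col.eq("Total Electric Industry")][0]
end_idx = us_res.index[first_col.eq("Full-Service Providers")][0]
section = us_res.iloc[start_idx:end_idx]   # rows of the wanted sub-table
print(len(section))  # 2
```

If the labels still do not match after stripping, compare against the printed `tolist()` output to see exactly what `read_html` produced.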
|
<python><html><pandas><dataframe>
|
2023-01-30 06:18:20
| 1
| 4,702
|
Mainland
|
75,280,665
| 14,353,779
|
Get Date in specified format in python
|
<p>I am looking to get today's <code>date</code>, and today's <code>date</code> minus <code>n</code> days, in the format below:</p>
<pre><code>tod = datetime.datetime.now()
d = datetime.timedelta(days = 365)
x = tod - d
</code></pre>
<p>I want <code>x</code> and <code>tod</code> in YYYYMMDD format, e.g. 20230130.
How do I convert them to this format?</p>
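`datetime.strftime` produces that form directly with the `%Y%m%d` format string:

```python
import datetime

tod = datetime.datetime.now()
x = tod - datetime.timedelta(days=365)

tod_str = tod.strftime("%Y%m%d")  # e.g. '20230130'
x_str = x.strftime("%Y%m%d")
print(tod_str, x_str)
```

If an integer rather than a string is needed, wrap the result in `int(...)`; the format also sorts chronologically as a string, which is handy for filenames.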
|
<python><python-3.x>
|
2023-01-30 05:51:20
| 3
| 789
|
Scope
|
75,280,572
| 427,494
|
PyQt5: Moving QTableView rows up and down - segmentation fault error on down but works on up?
|
<p>I have a simple GUI in PyQt5 that uses a QTableView to display a list of incidents. I am trying to change the order of these incidents by clicking the up and down buttons. I want the selected row to either move up one or down one row from where it is when the respective button is pressed.</p>
<p>However, I am having issues with my <code>on_move_down_button_clicked</code> slot method. It keeps crashing the app with a <code>segmentation fault</code> error. The <code>on_move_up_button_clicked</code> method works as intended. Using the debugger, I determined that the call to <code>self.endMoveRows()</code> is where the app crashes. I do not have that method overridden in my <code>IncidentTableModel</code> class.</p>
<p>I cannot figure out why the down button method is not working. I found this post <a href="https://www.qtcentre.org/threads/43640-beginMoveRows-working-down-to-up-but-not-up-to-down-Any-insignt-would-be-great" rel="nofollow noreferrer">here</a> which seems to be the same problem I am having. However, the solution is not very clear and in C++. Any help or insight into why I have this problem and how to solve it would be very much appreciated! Thanks!</p>
<pre><code>class IncidentTableModel(QAbstractTableModel):
def moveRow(self, source_row, destination_row):
if destination_row > -1 and destination_row < len(self.incidents):
self.beginMoveRows(
QModelIndex(),
source_row,
source_row,
QModelIndex(),
destination_row,
)
incident = self.incidents.pop(source_row)
self.incidents.insert(destination_row, incident)
self.endMoveRows()
return True
class MainWindow(QMainWindow):
# Works
def on_move_up_button_clicked(self):
"""Move selected table row up in table"""
index = self.table_view.currentIndex()
row = index.row()
if row != -1:
self.table_model.moveRow(row, row - 1)
self.table_view.update()
# Segmentation Fault Errors
def on_move_down_button_clicked(self):
"""Move selected table row down in table"""
index = self.table_view.currentIndex()
if index.row() != -1 and index.row() < len(self.incidents) - 1:
# index.row() + 2, will work but will skip one row (not desired).
self.table_model.moveRow(index.row(), index.row() + 1)
self.table_view.update()
index = self.table_view.currentIndex()
</code></pre>
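A reading of the Qt semantics that would explain the crash (hedged, but it matches the documented `beginMoveRows()` contract): the destination is expressed in pre-move indices, and a move whose destination falls inside `[first, last + 1]` is invalid, so moving a row down by one requires passing `source_row + 2` to `beginMoveRows()` while still inserting at `source_row + 1` in the Python list. Calling `moveRow(row, row + 1)` therefore hands Qt an invalid destination, `beginMoveRows()` fails, and the unconditional `endMoveRows()` that follows crashes. A small pure helper keeps the two index systems apart (`qt_move_destination` is a made-up name):

```python
def qt_move_destination(source_row, destination_row):
    """Translate the row's final index into the pre-move destinationChild
    value that QAbstractItemModel.beginMoveRows() expects."""
    if destination_row > source_row:      # moving down
        return destination_row + 1        # one step down -> source_row + 2
    return destination_row                # moving up: unchanged

# Inside moveRow(), guarding on the return value avoids the segfault:
#     if not self.beginMoveRows(QModelIndex(), source_row, source_row,
#                               QModelIndex(),
#                               qt_move_destination(source_row, destination_row)):
#         return False
#     self.incidents.insert(destination_row, self.incidents.pop(source_row))
#     self.endMoveRows()
#     return True
```

This also explains the "+2 works but skips one row" observation: +2 was correct for `beginMoveRows()`, but the list `insert` then needs `source_row + 1`, not the same +2 value.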
|
<python><pyqt5>
|
2023-01-30 05:34:52
| 0
| 17,190
|
ab217
|
75,280,472
| 1,316,668
|
Updating Image in a python canvas not working (Updated)
|
<p>Edit: I've narrowed down the problem to the fact that the code inside the function is not correctly referencing the main canvas widget, but because the function is called via a bind event, I'm not quite sure how to reference the main canvas...(more detail at the bottom)</p>
<p>I have the following code that displays an image inside a tkinter canvas widget. This is the code for the container creation:</p>
<pre><code>canvas = tk.Canvas(width=640, height=480)
canvas.place(x=50, y=50)
canvas.bind("<B1-Motion>", draw_rectangle)
test_img = ImageTk.PhotoImage(Image.open("test_image.png"))
image_container = canvas.create_image(0, 0, anchor="nw", image=test_img)
</code></pre>
<p>This works exactly as expected: the image displays inside the canvas. The next step is to update the image displayed using an opencv image that has a rectangle added, converted to a TkImage, and then handed to the canvas as follows (all of the tutorials I've found use some version of this same methodology). This was done inside a function:</p>
<pre><code>def draw_rectangle(event):
cv2img = cv2.rectangle(data, pt1=(x1, y1), pt2=(x2, y2), color=(255, 255, 255), thickness=5)
img = Image.fromarray(cv2img)
photo_img = ImageTk.PhotoImage(img)
canvas.itemconfig(image_container, image=photo_img)
</code></pre>
<p>The cv2img displays correctly in it's own window if I call:</p>
<pre><code>cv2.imshow("image", cv2img)
</code></pre>
<p>...and the Image.fromarray shows correctly in it's own window if I call:</p>
<pre><code>img.show()
</code></pre>
<p>Yet when I try to pass the image to the canvas, I just end up with a blank canvas. There are no errors displayed in the console. Any ideas?</p>
<p>P.S. I know that the color channels will be reversed but I haven't swapped them yet as it seemed less of a priority than getting an image to display :)</p>
<p>UPDATE</p>
<p>I neglected a very important piece of the puzzle in my original question. The code to update the canvas is in a function that is called when the mouse button is moved through a binding.</p>
<p>I moved the updated code outside of the function, and it works as expected. But this leads me to a related problem...</p>
<p>Because the function is called via an event binding (mouse move), I suspect that the "canvas" reference inside the function is not pointing to the canvas created outside the function... but I'm not quite sure how to correct that, as the first positional argument needs to be the event...</p>
<p>I've updated the code above.</p>
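A note on the suspicion above: the bound callback can see a module-level `canvas` through normal Python scoping (the `event` parameter does not prevent that), so the likelier culprit is garbage collection. Tk keeps no Python-side reference to a `PhotoImage`, and one held only in a local variable is collected as soon as the callback returns, which blanks the canvas with no error. A sketch of the callback with the usual fix; the names (`canvas`, `image_container`, `data`, `x1..y2`) are the question's own and are assumed to exist at module level:

```python
import cv2
from PIL import Image, ImageTk

def draw_rectangle(event):
    cv2img = cv2.rectangle(data, pt1=(x1, y1), pt2=(x2, y2),
                           color=(255, 255, 255), thickness=5)
    photo_img = ImageTk.PhotoImage(Image.fromarray(cv2img))
    canvas.itemconfig(image_container, image=photo_img)
    # Keep a reference on a long-lived object, or the image is
    # garbage-collected when this function returns.
    canvas.photo_img = photo_img
```

This also explains why the same code worked when moved outside the function: at module level the variable stays alive for the program's lifetime.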
|
<python><tkinter-canvas>
|
2023-01-30 05:12:25
| 0
| 767
|
Greg Steiner
|
75,280,345
| 4,281,353
|
In detail explanation on how to use DeepSort
|
<p>The <a href="https://github.com/nwojke/deep_sort" rel="nofollow noreferrer">Deep SORT</a> GitHub repo does not give much information on how to use it, e.g.:</p>
<ul>
<li>what inputs it expects in what format</li>
<li>which function in which code file handles the input</li>
<li>What are the outputs</li>
</ul>
<p>The github lists the code files.</p>
<blockquote>
<p>In package deep_sort is the main tracking code:</p>
<ul>
<li>detection.py: Detection base class.</li>
<li>kalman_filter.py: A Kalman filter implementation and concrete parametrization for image space filtering.</li>
<li>linear_assignment.py: This module contains code for min cost matching and the matching cascade.</li>
<li>iou_matching.py: This module contains the IOU matching metric.</li>
<li>nn_matching.py: A module for a nearest neighbor matching metric.</li>
<li>track.py: The track class contains single-target track data such as Kalman state, number of hits, misses, hit streak, associated feature vectors, etc.</li>
<li>tracker.py: This is the multi-target tracker class.
The deep_sort_app.py expects detections in a custom format, stored in .npy files. These can be computed from MOTChallenge detections using generate_detections.py. We also provide pre-generated detections.</li>
</ul>
</blockquote>
<p>By looking at <code>tracker.py</code>, I guess <code>update</code> method seems like the interface to feed the data about the detected objects. But what to do after?</p>
<pre><code> def update(self, detections):
"""Perform measurement update and track management.
Parameters
----------
detections : List[deep_sort.detection.Detection]
A list of detections at the current time step.
"""
</code></pre>
<p>The detected objects seem to be provided as instances of the Detection class in <code>detection.py</code>, but I am not sure exactly what data to use.</p>
<pre><code>class Detection(object):
"""
This class represents a bounding box detection in a single image.
Parameters
----------
tlwh : array_like
Bounding box in format `(x, y, w, h)`.
confidence : float
Detector confidence score.
feature : array_like
A feature vector that describes the object contained in this image.
Attributes
----------
tlwh : ndarray
Bounding box in format `(top left x, top left y, width, height)`.
confidence : ndarray
Detector confidence score.
feature : ndarray | NoneType
A feature vector that describes the object contained in this image.
"""
def __init__(self, tlwh, confidence, feature):
self.tlwh = np.asarray(tlwh, dtype=np.float)
self.confidence = float(confidence)
self.feature = np.asarray(feature, dtype=np.float32)
</code></pre>
<h1>Question</h1>
<p>Instead of reverse engineering the sample application, are there resources that explain the input, output, formats to use?</p>
<h1>Research</h1>
<p>There are multiple articles and githubs implementing object tracking with Yolo and DeepSort but it does not explain it either.</p>
<ul>
<li><a href="https://medium.com/analytics-vidhya/object-tracking-using-deepsort-in-tensorflow-2-ec013a2eeb4f" rel="nofollow noreferrer">Object Tracking using DeepSORT in TensorFlow 2
</a></li>
<li><a href="https://github.com/anushkadhiman/ObjectTracking-DeepSORT-YOLOv3-TF2/blob/master/object_tracker.py" rel="nofollow noreferrer">ObjectTracking-DeepSORT-YOLOv3-TF2</a></li>
</ul>
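Based on how `deep_sort_app.py` in that repo drives the tracker (this is a reading of the sample app, not official documentation): per frame you wrap each detector output as a `Detection(tlwh, confidence, feature)`, where `tlwh` is the top-left-based bounding box and `feature` is the appearance embedding produced by the encoder from `generate_detections.py`; you then call `predict()` followed by `update()` and read the confirmed tracks. `video_frames` and `detect()` below are placeholders for your own frame source and detector:

```python
from deep_sort import nn_matching
from deep_sort.detection import Detection
from deep_sort.tracker import Tracker

# Appearance metric: cosine distance with a gallery budget per track.
metric = nn_matching.NearestNeighborDistanceMetric("cosine", 0.2, budget=100)
tracker = Tracker(metric)

for frame in video_frames:                      # placeholder frame source
    boxes, scores, features = detect(frame)     # placeholder detector + encoder
    detections = [
        Detection(tlwh, score, feat)            # tlwh = (x, y, w, h), top-left
        for tlwh, score, feat in zip(boxes, scores, features)
    ]
    tracker.predict()                           # advance the Kalman filters
    tracker.update(detections)                  # association + track management
    for track in tracker.tracks:                # outputs: id + current box
        if not track.is_confirmed() or track.time_since_update > 1:
            continue
        x, y, w, h = track.to_tlwh()
        print(track.track_id, x, y, w, h)
```

So the input is (box, confidence, appearance feature) triples per frame, and the output is the set of confirmed `Track` objects with stable `track_id`s; the `.npy` files the app mentions are just these triples precomputed for the MOTChallenge sequences.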
|
<python><object-tracking>
|
2023-01-30 04:45:33
| 1
| 22,964
|
mon
|
75,280,182
| 2,666,270
|
Topic Modelling Coherence Score:
|
<p>I'm trying to calculate the coherence score after using BERTopic modelling to discover topics from an input text. I'm facing this error though <code>"unable to interpret topic as either a list of tokens or a list of ids"</code>, and I'm not sure why.</p>
<p>This is how I get the tokens and topics words:</p>
<pre><code> from bertopic import BERTopic
import gensim.corpora as corpora
from gensim.models.coherencemodel import CoherenceModel
topic_model = BERTopic(n_gram_range=(2, 3), min_topic_size=5)
topics, _ = topic_model.fit_transform(docs)
cleaned_docs = topic_model._preprocess_text(docs)
vectorizer = topic_model.vectorizer_model
analyzer = vectorizer.build_analyzer()
tokens = [analyzer(doc) for doc in cleaned_docs]
dictionary = corpora.Dictionary(tokens)
corpus = [dictionary.doc2bow(token) for token in tokens]
topics = topic_model.get_topics()
topics.pop(-1, None)
topic_words = [
[word for word, _ in topic_model.get_topic(topic) if word != ""] for topic in topics
]
topic_words = [[words for words, _ in topic_model.get_topic(topic)]
for topic in range(len(set(topics))-1)]
# Evaluate
coherence_model = CoherenceModel(topics=topic_words,
texts=tokens,
corpus=corpus,
dictionary=dictionary,
coherence='c_v')
coherence = coherence_model.get_coherence()
</code></pre>
<p>It fails here:</p>
<pre><code> def _ensure_elements_are_ids(self, topic):
ids_from_tokens = [self.dictionary.token2id[t] for t in topic if t in self.dictionary.token2id]
ids_from_ids = [i for i in topic if i in self.dictionary]
if len(ids_from_tokens) > len(ids_from_ids):
return np.array(ids_from_tokens)
elif len(ids_from_ids) > len(ids_from_tokens):
return np.array(ids_from_ids)
else:
raise ValueError('unable to interpret topic as either a list of tokens or a list of ids')
</code></pre>
<p>It seems that something weird is happening in the <code>topic_words</code> step. I'm getting words that don't exist in the data, and I don't understand why. To test it out, I manually set docs to a given list of strings.</p>
<p>For instance:</p>
<pre><code>logger.info(id2word.token2id[t])
KeyError: 'calendar happy'
</code></pre>
<p>I don't have any entry for <code>calendar happy</code> in docs. But I see it when I log the topic words:</p>
<pre><code>logger.info(topic_words)
[['new year', 'chinese new year', 'chinese new', 'calendar happy', '2023 chinese new', '2023 chinese', 'new month', 'happy new month', 'happy new year', 'monthly calendar happy']...
</code></pre>
<p>I'm not sure how this can be, and I see this is how people usually evaluate BERTopic, for instance: <code>https://www.theanalyticslab.nl/topic-modeling-with-bertopic/</code></p>
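A plausible explanation (hedged, but it matches how BERTopic builds topics): the CountVectorizer is fitted on documents concatenated per topic for c-TF-IDF, so with `ngram_range=(2, 3)` an n-gram like "calendar happy" can straddle the boundary between two joined documents and never occur inside any single document. It is then missing from the gensim dictionary built from per-document tokens, and `CoherenceModel` raises. A common workaround is to drop topic words absent from the dictionary vocabulary before scoring, shown here with a plain-set stand-in (in the question's code, build `vocab` from `dictionary.token2id` instead):

```python
# Stand-in for the per-document token lists and the topic word lists.
tokens = [["new year", "happy new"], ["new year", "monthly calendar"]]
vocab = {tok for doc in tokens for tok in doc}

topic_words = [["new year", "calendar happy"], ["happy new"]]
topic_words = [[w for w in words if w in vocab] for words in topic_words]
topic_words = [words for words in topic_words if words]   # drop emptied topics
print(topic_words)  # [['new year'], ['happy new']]
```

Filtering changes the scored word sets slightly, so the coherence value is an approximation; that trade-off is inherent to scoring cross-document n-grams.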
|
<python><gensim><topic-modeling>
|
2023-01-30 04:09:43
| 1
| 9,924
|
pceccon
|
75,280,115
| 1,358,829
|
python mpire: modifying internal state of object within multiprocessing
|
<p>I have a class with a method which modifies its internal state, for instance:</p>
<pre class="lang-py prettyprint-override"><code>class Example():
def __init__(self, value):
self.param = value
def example_method(self, m):
self.param = self.param * m
# By convention, these methods in my implementation return the object itself
return self
</code></pre>
<p>I wanna run <code>example_method</code> in parallel (I am using the <a href="https://slimmer-ai.github.io/mpire/v2.1.0/index.html" rel="nofollow noreferrer">mpire</a> lib, but other options are welcome as well), for many instances of <code>Example</code>, and have their internal states altered in my instances. Something like:</p>
<pre class="lang-py prettyprint-override"><code>
import mpire
list_of_instances = [Example(i) for i in range(1, 6)]
def run_method(ex):
ex.example_method(10)
print("Before parallel calls, this should print <1>")
print(f"<{list_of_instances[0].param}>")
with mpire.WorkerPool(n_jobs=3) as pool:
pool.map_unordered(run_method, [(example,) for example in list_of_instances])
print("After parallel calls, this should print <10>")
print(f"<{list_of_instances[0].param}>")
</code></pre>
<p>However, the way that <code>mpire</code> works, what is being modified are copies of <code>example</code>, not the objects within <code>list_of_instances</code>, so any changes to internal state are lost after the parallel processing. The second print will therefore print <code><1></code>, because that object's internal state was not changed; a copy of it was.</p>
<p>I am wondering if there are any solutions to have the internal state changes be applied to the original objects in <code>list_of_instances</code>.</p>
<p>The only solution I can think of is:</p>
<ul>
<li>replace <code>list_of_instances</code> by the result of <code>pool.map_unordered</code> (changing to <code>pool.map_ordered</code> if order is important).</li>
</ul>
<p>In any other case (even when using <code>shared_objects</code>), a copy of the original objects is made, resulting in the state changes being lost.</p>
<p>Is there any way to solve this with parallel processing? I also accept answers using other libs.</p>
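The replace-the-list solution you describe is indeed the idiomatic one: processes cannot mutate the parent's objects in place without shared memory, so the worker returns its mutated copy and the caller rebinds the list. A sketch with the standard library's `ProcessPoolExecutor` (mpire's `pool.map` behaves the same way; use ordered `map` rather than `map_unordered` when positions matter):

```python
from concurrent.futures import ProcessPoolExecutor

class Example:
    def __init__(self, value):
        self.param = value

    def example_method(self, m):
        self.param = self.param * m
        return self               # same convention as in the question

def run_method(ex):
    return ex.example_method(10)  # return the worker's mutated copy

if __name__ == "__main__":
    list_of_instances = [Example(i) for i in range(1, 6)]
    # map() preserves input order, so rebinding keeps each object's position.
    with ProcessPoolExecutor(max_workers=3) as pool:
        list_of_instances = list(pool.map(run_method, list_of_instances))
    print([ex.param for ex in list_of_instances])  # [10, 20, 30, 40, 50]
```

The key change from the question's `run_method` is the `return`: without it, the mutated copies are discarded inside the workers and nothing useful comes back.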
|
<python><parallel-processing>
|
2023-01-30 03:53:36
| 1
| 1,232
|
Alb
|
75,280,083
| 14,251,610
|
Resampling timeseries Data Frame for different customized seasons and finding aggregates
|
<p>I am working on a huge timeseries dataset of following format:</p>
<pre><code>import pandas as pd
import numpy as np
import random
import seaborn as sns
df = pd.DataFrame({'date':pd.date_range('1990',end = '1994',freq='3H'),
'A': np.random.randint(low = 0,high=100,size=11689),
'B': np.random.randint(low = 10,high=45,size=11689) })
df['date'] = pd.to_datetime(df.date.astype(str), format='%Y/%m/%d %H:%M',errors ='coerce')
df.index = pd.DatetimeIndex(df.date)
df.drop('date', axis = 1, inplace = True)
</code></pre>
<p>My aim is to first filter the dataframe according to the customized seasons: <code>winter (12,1,2)</code> (i.e. Dec, Jan, Feb), <code>Pre-monsoon (3,4,5)</code>, <code>monsoon (6,7,8,9)</code> and <code>post-monsoon (10,11)</code>. I am aware of the <code>resample('Q-NOV')</code> function, but it is quarterly only; as mentioned, I need to customize the months.
I have been able to do so by executing the following code:</p>
<pre><code># DF-Winter
winterStart = '-12'
winterEnd = '-02'
df_winter = pd.concat([df.loc[str(year) + winterStart : str(year+1) + winterEnd].mean() for year in range(1990, 1994)]) # DJF
# used year and year+1 because winter season spans from an initial year to the next year.
</code></pre>
<pre><code># DF - Premonsoon
df_preMonsoon = df[df.index.month.isin([3,4,5])] # MAM
</code></pre>
<p>and so on.</p>
<p><strong>Problem</strong></p>
<p>I want to find the seasonal average values (every year and season) of my parameters <code>A</code> and <code>B</code> for my data period. Any help will be highly appreciated.</p>
<p>Thank you in advance.</p>
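One sketch (on made-up random data mirroring the question's frame): map each month to a season, shift December into the following year's winter so every DJF block groups together, then group by (year, season) and take the mean. This gives all seasonal averages in one pass instead of per-season slicing:

```python
import numpy as np
import pandas as pd

idx = pd.date_range("1990-01-01", "1993-12-31", freq="3H")
rs = np.random.RandomState(0)
df = pd.DataFrame({"A": rs.randint(0, 100, len(idx)),
                   "B": rs.randint(10, 45, len(idx))}, index=idx)

month_to_season = {12: "winter", 1: "winter", 2: "winter",
                   3: "pre-monsoon", 4: "pre-monsoon", 5: "pre-monsoon",
                   6: "monsoon", 7: "monsoon", 8: "monsoon", 9: "monsoon",
                   10: "post-monsoon", 11: "post-monsoon"}
season = df.index.month.map(month_to_season)
# December belongs to the *next* year's winter, so each DJF stays one group.
season_year = np.where(df.index.month == 12, df.index.year + 1, df.index.year)

seasonal_means = df.groupby([season_year, season]).mean()
print(seasonal_means.head())
```

The result is a frame indexed by (season year, season name) with the mean of `A` and `B` per group; swap `.mean()` for `.agg(["mean", "max"])` or similar for other aggregates.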
|
<python><pandas><dataframe><datetime><time-series>
|
2023-01-30 03:45:09
| 1
| 303
|
Peshal1067
|
75,280,067
| 2,355,730
|
RTL (Arabic) ligatures problem when extracting text from PDF
|
<p>When extracting Arabic text from a PDF file using libraries like PyMuPDF or PDFMiner, the words are returned in backward order, which is normal behavior for RTL languages, and you need to use the bidi algorithm to display it correctly in UIs/GUIs.</p>
<p>The problem is that when you have ligature chars composed of two chars, these ligature chars are not reversed, which makes the extracted text inaccurate.</p>
<p><em><strong>Here's an example :</strong></em></p>
<p>Let's say we have a font with a ligature glyph "لا" that maps to "uni0644 uni0627". The pdf is rendered like this:</p>
<p><a href="https://i.sstatic.net/JllMa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JllMa.png" alt="enter image description here" /></a></p>
<p>When you extract the pdf text using the library text extraction method, you get this:</p>
<pre><code>كارتــــــشلاا
</code></pre>
<p>Notice how all chars are in reverse order except "لا".</p>
<p>And here's the final result after applying bidi algorithm:</p>
<pre><code>االشــــــتراك
</code></pre>
<p>Am I missing something? Is there any workaround to fix this without detecting false positives and breaking them, or should I write my own implementation that correctly handles ligatures decomposition in bidirectional text?</p>
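One workaround sketch for the "write my own implementation" route: the extracted line is in visual (right-to-left) order except that each ligature expands to its components in logical order, so re-reversing just those expansions makes the string uniformly visual, after which the normal reordering applies (shown here with a plain `[::-1]` on purely-RTL text; on real mixed text, run the bidi algorithm instead). The expansion list holds only lam+alef from the question and is an assumption; it would need extending for whatever ligatures your fonts actually map:

```python
LAM, ALEF = "\u0644", "\u0627"
LIGATURE_EXPANSIONS = [LAM + ALEF]   # extend for other ligature expansions

def normalize_visual(extracted):
    """Flip each ligature expansion so the whole string is visual order."""
    for seq in LIGATURE_EXPANSIONS:
        extracted = extracted.replace(seq, seq[::-1])
    return extracted

# The question's extracted string:
# kaf, alef, reh, teh, 6 tatweels, sheen, lam, alef, alef
raw = "\u0643\u0627\u0631\u062a" + "\u0640" * 6 + "\u0634" + LAM + ALEF + ALEF
flipped = normalize_visual(raw)[::-1]

# Expected logical text: alef, lam, alef, sheen, 6 tatweels, teh, reh, alef, kaf
expected = ALEF + LAM + ALEF + "\u0634" + "\u0640" * 6 + "\u062a\u0631\u0627\u0643"
print(flipped == expected)  # True
```

A blind `str.replace` can misfire when the same character pair occurs legitimately in visual order, so a production version should key off the font's ligature cmap rather than raw character sequences.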
|
<python><pdfminer><pymupdf><bidi><pdf-extraction>
|
2023-01-30 03:41:03
| 2
| 793
|
Naourass Derouichi
|
75,280,031
| 1,357,015
|
Pythonic way to reshape dataframe for LSTM input in Keras
|
<p>I want to build a simple LSTM that takes as input the last 4 observations to make a prediction for the next time point. Each observation is a 5-tuple.</p>
<p>I realize LSTM wants something in the format <code>[num_samples, time_steps, features]</code>. For example if I had 100 people, I would want something in the shape of [100, 4, 5].</p>
<p>However, my dataset is in this format (currently loaded as a pandas dataframe -- also anonymized):</p>
<pre><code> col1 col2 col3 col4 col5
id date
x1 01-01-22 43 304 340 340 394
x1 01-02-22 65 595 304 204 124
...
x1 09-10-22 54 340 283. 194 304
x2 02-01-22 ...
...
</code></pre>
<p>Note, the id and date are index values. Each id has multiple observations, sometimes 100's. I want to take every group of 5 sequential dates per id to make a dataset that I can then pass into something like this:</p>
<pre><code>import tensorflow as tf
model = tf.keras.Sequential()
model.add(tf.keras.layers.LSTM(50, activation='relu', input_shape=[None, 4,5]))
model.add(tf.keras.layers.Dense(2))
model.compile(optimizer='adam', loss='mse')
</code></pre>
<p>I believe putting None in the first parameter will make it infer the correct shape, but how do I rearrange my data into the correct format? I could write an ugly for loop where I keep track of the indices and the dates and append to a dataframe, but I'm hoping there's a cleaner way.</p>
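A short sketch on toy data (2 ids, 5 feature columns, window of 4 past rows): group by id so windows never cross people, sort by date, and slide a window with NumPy. Whether `y` keeps all 5 columns or only the 2 that the question's `Dense(2)` implies depends on the prediction target, so adjust the `y` slice accordingly:

```python
import numpy as np
import pandas as pd

rs = np.random.RandomState(1)
ids = ["x1"] * 6 + ["x2"] * 5
dates = (pd.date_range("2022-01-01", periods=6).tolist()
         + pd.date_range("2022-02-01", periods=5).tolist())
df = pd.DataFrame(rs.rand(11, 5),
                  columns=[f"col{i}" for i in range(1, 6)],
                  index=pd.MultiIndex.from_arrays([ids, dates],
                                                  names=["id", "date"]))

window = 4
X, y = [], []
for _, g in df.groupby(level="id"):        # windows never cross ids
    values = g.sort_index(level="date").to_numpy()
    for i in range(len(values) - window):
        X.append(values[i:i + window])     # the 4 past observations
        y.append(values[i + window])       # the next time step
X, y = np.asarray(X), np.asarray(y)
print(X.shape, y.shape)  # (3, 4, 5) (3, 5)
```

Here x1 contributes 6 - 4 = 2 windows and x2 contributes 1, hence 3 samples. Note also that Keras's `input_shape` excludes the batch dimension, so the LSTM layer should receive `input_shape=(4, 5)` rather than `[None, 4, 5]`.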
|
<python><pandas><dataframe><keras><lstm>
|
2023-01-30 03:32:11
| 0
| 11,724
|
user1357015
|
75,279,933
| 12,427,876
|
shutil.unpack_archive and ziplib both return FileNotFoundError: [Errno 2] No such file or directory
|
<p>I'm trying to unzip some zip files in Python:</p>
<pre><code>
print(f"Unzipping:\n{os.getcwd()}\\{constants.DOWNLOADS_FOLDER}\\{case_number}\\{username_hostname}.zip")
# with zipfile.ZipFile(f"{os.getcwd()}\\{constants.DOWNLOADS_FOLDER}\\{case_number}\\{username_hostname}.zip") as z:
# z.extractall(path=f"{os.getcwd()}\\{constants.DOWNLOADS_FOLDER}\\{case_number}\\{username_hostname}")
shutil.unpack_archive(f"{os.getcwd()}\\{constants.DOWNLOADS_FOLDER}\\{case_number}\\{username_hostname}.zip",
f"{os.getcwd()}\\{constants.DOWNLOADS_FOLDER}\\{case_number}\\{username_hostname}")
print(f"Unzipped:\n{os.getcwd()}\\{constants.DOWNLOADS_FOLDER}\\{case_number}\\{username_hostname}")
</code></pre>
<p>It works for most of the zip files (<code>case_number</code> being the same, and <code>username_hostname</code> being different), but I got only 2~3 files which always return:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: 'D:\\current_workdirectory\\downloads\\case_number\\username_hostname\\path\\OmniDesk \\OmniDesk.pdf'
</code></pre>
<p>... or ...</p>
<pre><code>'D:\\current_workdirectory\\downloads\\case_number\\username_hostname\\path\\Users\\username\\Desktop\\path\\OCT 11 to 16th \\2022-10-11.xlsx'
</code></pre>
<p>I tried to manually unzip the file (in Windows) and it works fine.</p>
<p>I do notice a strange space at the end of these files' parent foldername, and when I manually unzip the archive, the space is NOT in the foldername of the unpacked archive.</p>
<p>Is this what caused the problem?</p>
<hr />
<p><code>case_number</code> and <code>username_hostname</code> is created from <code>sys.argv</code>:</p>
<pre><code>case_number = sys.argv[1] # e.g., XXX-YYY
username_hostname = sys.argv[2] # e.g., johnsmith_PC12345
</code></pre>
<hr />
<p>printout:</p>
<pre><code>DEBUG: This is case_number -->F23-003<--
Unzipping:
D:\path\F23-003\username_hostname.zip
...
FileNotFoundError: [Errno 2] No such file or directory:
...
</code></pre>
<hr />
<p>I saw a discussion on cpython' github repo: <a href="https://github.com/python/cpython/issues/94018#issuecomment-1160309581" rel="nofollow noreferrer">https://github.com/python/cpython/issues/94018#issuecomment-1160309581</a></p>
<p>But, according to the repo, this issue should have been solved: <code>gvanrossum pushed a commit to gvanrossum/cpython that referenced this issue on Jul 1, 2022</code></p>
<p>I'm using Python 3.11.1</p>
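The trailing space you spotted is very likely the cause (an inference, not verified against your archives): Windows does not allow path components ending in a space or a dot, so a folder extracted as "OmniDesk " cannot be written into, while Explorer silently strips the space when you unzip by hand, which is why manual extraction works. A sketch that performs the same stripping during extraction; the `" ."`-stripping rule is the assumption here:

```python
import os
import tempfile
import zipfile

def safe_extract(zip_path, dest_dir):
    """Extract a zip, stripping trailing spaces/dots from each path
    component, since Windows rejects them in directory and file names."""
    with zipfile.ZipFile(zip_path) as z:
        for info in z.infolist():
            parts = [p.rstrip(" .") for p in info.filename.split("/")]
            target = os.path.join(dest_dir, *[p for p in parts if p])
            if info.is_dir():
                os.makedirs(target, exist_ok=True)
                continue
            os.makedirs(os.path.dirname(target), exist_ok=True)
            with z.open(info) as src, open(target, "wb") as dst:
                dst.write(src.read())

# Demo: an archive containing "OmniDesk /OmniDesk.pdf" (trailing space).
tmp = tempfile.mkdtemp()
zip_path = os.path.join(tmp, "archive.zip")
with zipfile.ZipFile(zip_path, "w") as z:
    z.writestr("OmniDesk /OmniDesk.pdf", b"data")
safe_extract(zip_path, os.path.join(tmp, "out"))
extracted = os.path.join(tmp, "out", "OmniDesk", "OmniDesk.pdf")
print(os.path.exists(extracted))  # True
```

This trades `extractall`'s convenience for control over each entry name; it also sidesteps the related absolute-path and `..` issues, since every component is rebuilt under `dest_dir`.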
|
<python><unzip><shutil>
|
2023-01-30 03:05:23
| 1
| 411
|
TaihouKai
|
75,279,859
| 7,347,911
|
How can i schedule multiple redshift (plpgsql) stored procedure in Airflow
|
<p>How can I schedule multiple Redshift (plpgsql) stored procedures in Airflow using Python?</p>
<p>for eg.</p>
<pre><code>call stored_p_1() ;
call stored_p_2() ;
call stored_p_3() ;
</code></pre>
<p>I want all of the tasks to run in parallel; they are all independent of each other (except for the slight exception described at the end), and I want to keep a weekly schedule interval.</p>
<p>Also, how do I add the config and other database credential details for the connection? It would be nice if someone could share that.</p>
<p><strong>PS:</strong>
I want <code>stored_p_1</code> to start slightly before the others, but the other <code>stored_p_%d</code> procedures should not wait for <code>stored_p_1</code> to complete.</p>
<p>This is because <code>stored_p_1</code> writes some data to a target table during the first few minutes of its run, and the other stored procedures read that data from the target table in order to operate.</p>
<p>The above kind of violates the acyclic-graph criterion, so I am wondering if there is any trick that can work.</p>
|
<python><stored-procedures><airflow><amazon-redshift><plpgsql>
|
2023-01-30 02:47:27
| 0
| 404
|
manoj
|
75,279,826
| 1,256,495
|
Stop QTimer after a few seconds in PyQt6
|
<p>I am trying to understand how QTimer works.</p>
<p>I have a list to be randomized, and I tried to use QTimer to do the randomization at an interval of every few milliseconds; I would like the QTimer to stop a few seconds after that.</p>
<p>Here is my code.</p>
<pre><code>class Client(QObject):
application = QApplication(sys.argv)
def __init__(self):
super(Client, self).__init__()
self._window = QMainWindow()
self._window.showMaximized()
self._ui = Ui_MainWindow()
self._ui.setupUi(self._window)
name_list = ['A', 'V', 'D', 'E', 'L']
Utils.set_name_list(name_list)
self.timer = QTimer()
self.timer.timeout.connect(self.draw)
self._ui.pushButton.clicked.connect(self.start)
def draw(self):
self._ui.label.setText(random.choice(Utils.name_list()))
def start(self):
self.timer.start(250)
</code></pre>
<p>once the pushButton is clicked, the timer.start fires, so how should I stop the timer after a few seconds? Or should I use another timer/loop to monitor?</p>
|
<python><pyqt6>
|
2023-01-30 02:36:31
| 1
| 559
|
ReverseEngineer
|
75,279,802
| 1,812,464
|
How to find smallest cluster of location which are within a given distance
|
<p>I have a set of coordinates in latitude and longitude format. I need to find the smallest cluster of these coordinates in which the points are within, say, a 50-mile distance of each other.</p>
<p>I am new to data science; how can I implement this in Python without using the sklearn library?</p>
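<p>A pure-stdlib starting point is the haversine great-circle distance plus a brute-force neighbour scan. The Earth-radius constant and the O(n²) grouping below are simplifying assumptions for small point sets, not a full clustering algorithm:</p>

```python
from math import asin, cos, radians, sin, sqrt

EARTH_RADIUS_MILES = 3956  # approximate mean radius

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) pairs, in miles.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def neighbors_within(points, max_miles=50):
    # Brute-force O(n^2) scan: for each point index, list the indices of
    # every other point closer than max_miles.  Fine for small inputs.
    return {
        i: [j for j, q in enumerate(points)
            if j != i and haversine_miles(*points[i], *q) <= max_miles]
        for i in range(len(points))
    }
```

<p>From the neighbour lists you can then pick the smallest connected group of mutually close points.</p>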
|
<python><geolocation><data-science>
|
2023-01-30 02:30:28
| 1
| 2,136
|
Indranil
|
75,279,785
| 5,092,662
|
How to retrieve month with optional separator?
|
<p>I tried to write python script to retrieve year and month from string.</p>
<p>The requirements are,</p>
<ul>
<li>year is fixed at 4 digits</li>
<li>month is allowed to be two consecutive digits, or one or two digits when preceded by a non-numeric separator</li>
</ul>
<pre class="lang-py prettyprint-override"><code>"""
This is the "search year and month" module.
>>> search_year_month('202301')
True
>>> search_year_month('2023-1')
True
>>> search_year_month('2023-01')
True
>>> search_year_month('20231')
False
"""
import re
_re = re.compile(
r"(?P<year>\d{4})"
r"(?P<month>\d{2}|(?<=[^\d])\d{1,2})"
)
def search_year_month(v):
match = _re.search(v)
return match is not None
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
<p>But <code>2023-1</code> and <code>2023-01</code> fail...
Is there a better way to build this regular expression?</p>
<p>When I tried only the month part, I got the expected result.</p>
<pre class="lang-py prettyprint-override"><code>"""
This is the "single lookbehind sample" module.
>>> search_lookbehind('01')
True
>>> search_lookbehind('-1')
True
>>> search_lookbehind('-01')
True
>>> search_lookbehind('1')
False
"""
import re
_re = re.compile(
r"(?P<month>\d{2}|(?<=[^\d])\d{1,2})"
)
def search_lookbehind(v):
match = _re.search(v)
return match is not None
if __name__ == "__main__":
import doctest
doctest.testmod()
</code></pre>
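<p>The lookbehind never consumes the separator: after the <code>year</code> group the engine is parked on the <code>-</code>, and neither alternative of the month group can start on a non-digit, so the whole match dies. One sketch of a fix is to consume the separator explicitly instead of looking behind for it:</p>

```python
import re

# Either the month is exactly two digits glued to the year, or a single
# non-digit separator is consumed and followed by one or two digits.
_re = re.compile(
    r"(?P<year>\d{4})"
    r"(?:\D(?P<sep_month>\d{1,2})|(?P<month>\d{2}))"
)

def search_year_month(v):
    return _re.search(v) is not None

print(search_year_month('202301'))   # True
print(search_year_month('2023-1'))   # True
print(search_year_month('2023-01'))  # True
print(search_year_month('20231'))    # False
```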
|
<python><regex>
|
2023-01-30 02:24:20
| 2
| 346
|
Kohei Sugimura
|
75,279,666
| 5,987
|
Safe use of ctypes.create_string_buffer?
|
<p>Usually in Python when you do an assignment of a variable, you don't get a copy - you just get a second reference to the same object.</p>
<pre><code>a = b'Hi'
b = a
a is b # shows True
</code></pre>
<p>Now when you use <code>ctypes.create_string_buffer</code> to get a buffer to e.g. interact with a Windows API function, you can use the <code>.raw</code> attribute to access the bytes. But what if you want to access those bytes after you've deleted the buffer?</p>
<pre><code>c = ctypes.create_string_buffer(b'Hi')
d = c.raw
e = c.raw
d is e # shows False?
d == e # shows True as you'd expect
c.raw is c.raw # shows False!
del c
</code></pre>
<p>At this point are <code>d</code> and <code>e</code> still safe to use? From my experimentation it looks like the <code>.raw</code> attribute makes <em>copies</em> when you access it, but I can't find anything in the official documentation to support that.</p>
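<p>A quick experiment supports the reading that <code>.raw</code> builds a fresh immutable <code>bytes</code> object on every access (which is also why <code>c.raw is c.raw</code> is <code>False</code>), so the copies outlive the buffer:</p>

```python
import ctypes

buf = ctypes.create_string_buffer(b'Hi')   # 3 bytes: b'H', b'i', NUL
snapshot = buf.raw                         # copies the bytes out

# Overwrite the buffer's first two bytes with 0x58 ('X').
ctypes.memset(ctypes.addressof(buf), 0x58, 2)

print(buf.raw)    # b'XX\x00' -- the buffer changed
print(snapshot)   # b'Hi\x00' -- the earlier .raw copy is unaffected
```

<p>Since <code>snapshot</code> is an ordinary <code>bytes</code> object independent of the buffer's memory, it stays valid after <code>del buf</code>.</p>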
|
<python><ctypes>
|
2023-01-30 01:57:32
| 1
| 309,773
|
Mark Ransom
|
75,279,532
| 1,331,446
|
Implementing my own code to handle "*" and / or "**" on a class
|
<p>I know what <code>*</code> and <code>**</code> do in Python (e.g. <a href="https://stackoverflow.com/questions/2921847/what-do-the-star-and-double-star-operators-mean-in-a-function-call">What do the * (star) and ** (double star) operators mean in a function call?</a>). But if I want to create my own "dict-like" class, I'm wondering whether <code>**</code> maps to some dunder method that I can add to my class such that my class can respond to <code>**</code> in a similar way. If not a dunder, is there some other way to create a fully dict-like class that <em>does</em> support the <code>**</code> prefix operator?</p>
<p>(Bonus points for the same on a tuple-like class for <code>*</code>, but it's the dict version I need a solution to!)</p>
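<p>There is no single <code>**</code>-specific dunder; as far as I know, <code>**</code> unpacking in a call goes through the mapping protocol, essentially <code>keys()</code> plus <code>__getitem__</code> (and <code>*</code> only needs iteration, i.e. <code>__iter__</code>). A minimal sketch:</p>

```python
class MyMapping:
    # Duck-typed mapping: `**` unpacking only needs keys() and
    # __getitem__, not inheritance from dict.
    def __init__(self, data):
        self._data = dict(data)

    def keys(self):
        return self._data.keys()

    def __getitem__(self, key):
        return self._data[key]

def show(**kwargs):
    return kwargs

print(show(**MyMapping({'a': 1, 'b': 2})))  # {'a': 1, 'b': 2}
```

<p>Subclassing <code>collections.abc.Mapping</code> gives you the same protocol plus the other dict-like methods for free.</p>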
|
<python><python-3.x><dictionary>
|
2023-01-30 01:17:54
| 0
| 5,332
|
dsz
|
75,279,531
| 2,778,405
|
Guarantee all logs go to file
|
<p>I need to set up logging so it will always log to a file, no matter what level a user or another programmer might set. For instance, I have this logger set up:</p>
<p>initialize_logging.py</p>
<pre class="lang-py prettyprint-override"><code>import logging
logger = logging.getLogger('attribution')
formatter = logging.Formatter('%(asctime)s - %(levelname)s: %(message)s')
fh = logging.FileHandler('./files/logs/attribution.log')
fh.setLevel(logging.DEBUG)
fh.setFormatter(formatter)
ch = logging.StreamHandler()
ch.setLevel(logging.WARNING)
ch.setFormatter(formatter)
logger.setLevel(logging.DEBUG)
logger.addHandler(fh)
logger.addHandler(ch)
</code></pre>
<p>then I have this file:</p>
<p>main.py</p>
<pre class="lang-py prettyprint-override"><code>%load_ext autotime
import initialize_logging
import logging
logger = logging.getLogger('attribution')
logger.setLevel(logging.WARNING)
logger.info('TEST')
</code></pre>
<p>The above results in nothing being logged to my file and nothing being output.</p>
<p>However in main, if I <code>setLevel(logging.INFO)</code> then everything gets written to the file and to stderr (I'm using a Jupyter notebook so it prints on screen immediately).</p>
<p>The behavior I would like is that the user of my notebook can <code>setLevel</code> to determine what they want to see printed on screen but no matter what all logs get sent to the log file.</p>
<p>How do I do this?</p>
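<p>The reason nothing reached the file is that <code>logger.setLevel(logging.WARNING)</code> in <code>main.py</code> filters records before any handler sees them. One approach (a sketch, with the log path shortened) is to pin the logger at <code>DEBUG</code> and expose only the console handler's level to users:</p>

```python
import logging

logger = logging.getLogger('attribution')
logger.setLevel(logging.DEBUG)        # the logger itself lets everything through

fh = logging.FileHandler('attribution.log')
fh.setLevel(logging.DEBUG)            # the file handler keeps every record
ch = logging.StreamHandler()
ch.setLevel(logging.WARNING)          # default on-screen threshold
logger.addHandler(fh)
logger.addHandler(ch)

def set_console_level(level):
    # The only knob exposed to notebook users: it moves the console
    # handler's threshold, never the logger's.
    ch.setLevel(level)

logger.info('goes to file only')      # below WARNING: file yes, console no
```

<p>Users call <code>set_console_level(logging.INFO)</code> instead of <code>logger.setLevel</code>, so the file handler always sees every record.</p>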
|
<python><logging><python-logging>
|
2023-01-30 01:17:48
| 1
| 2,386
|
Jamie Marshall
|
75,279,346
| 8,150,682
|
Python weasyprint unable to find 'gobject-2.0-0' library
|
<p>While following the <a href="https://docs.saleor.io/docs/3.x/category/setup" rel="nofollow noreferrer">installation</a> process for <a href="https://github.com/romeopeter/saleor.git" rel="nofollow noreferrer">Saleor</a> headless e-commerce, the Python <a href="https://doc.courtbouillon.org/weasyprint/latest/index.html" rel="nofollow noreferrer">Weasyprint</a> package fails to load the <code>gobject-2.0.0</code> dependency, which I have already <a href="https://doc.courtbouillon.org/weasyprint/latest/first_steps.html#macports" rel="nofollow noreferrer">installed</a> on my machine using MacPorts.</p>
<p>Below is the source code showing where the error is emitting from after starting the Django server. The file holds <a href="https://github.com/romeopeter/saleor/blob/main/saleor/plugins/invoicing/utils.py" rel="nofollow noreferrer">utility functions</a> for generating invoices for the <a href="https://github.com/romeopeter/saleor/blob/main/saleor/plugins/invoicing/plugin.py" rel="nofollow noreferrer">plugin</a> file.</p>
<h3>utils.py</h3>
<pre><code>import os
import re
from datetime import datetime
from decimal import Decimal
import pytz
from django.conf import settings
from django.template.loader import get_template
from prices import Money
from weasyprint import HTML # <----------- This what is emitting the error because the
# package can't find the needed dependency.
from ...giftcard import GiftCardEvents
from ...giftcard.models import GiftCardEvent
from ...invoice.models import Invoice
MAX_PRODUCTS_WITH_TABLE = 3
MAX_PRODUCTS_WITHOUT_TABLE = 4
MAX_PRODUCTS_PER_PAGE = 13
def make_full_invoice_number(number=None, month=None, year=None):
now = datetime.now()
current_month = int(now.strftime("%m"))
current_year = int(now.strftime("%Y"))
month_and_year = now.strftime("%m/%Y")
if month == current_month and year == current_year:
new_number = (number or 0) + 1
return f"{new_number}/{month_and_year}"
return f"1/{month_and_year}"
def parse_invoice_dates(number: str):
match = re.match(r"^(\d+)\/(\d+)\/(\d+)", number)
if not match:
raise ValueError("Unrecognized invoice number format")
return int(match.group(1)), int(match.group(2)), int(match.group(3))
def generate_invoice_number():
last_invoice = Invoice.objects.filter(number__isnull=False).last()
if not last_invoice or not last_invoice.number:
return make_full_invoice_number()
try:
number, month, year = parse_invoice_dates(last_invoice.number)
return make_full_invoice_number(number, month, year)
except (IndexError, ValueError, AttributeError):
return make_full_invoice_number()
def chunk_products(products, product_limit):
"""Split products to list of chunks.
Each chunk represents products per page, product_limit defines chunk size.
"""
chunks = []
for i in range(0, len(products), product_limit):
limit = i + product_limit
chunks.append(products[i:limit])
return chunks
def get_product_limit_first_page(products):
if len(products) < MAX_PRODUCTS_WITHOUT_TABLE:
return MAX_PRODUCTS_WITH_TABLE
return MAX_PRODUCTS_WITHOUT_TABLE
def get_gift_cards_payment_amount(order):
events = GiftCardEvent.objects.filter(
type=GiftCardEvents.USED_IN_ORDER, order_id=order.id
)
total_paid = Decimal(0)
for event in events:
balance = event.parameters["balance"]
total_paid += Decimal(balance["old_current_balance"]) - Decimal(
balance["current_balance"]
)
return Money(total_paid, order.currency)
def generate_invoice_pdf(invoice): # <------- The function calling the HTML module from
# weasyprint
font_path = os.path.join(
settings.PROJECT_ROOT, "templates", "invoices", "inter.ttf"
)
all_products = invoice.order.lines.all()
product_limit_first_page = get_product_limit_first_page(all_products)
products_first_page = all_products[:product_limit_first_page]
rest_of_products = chunk_products(
all_products[product_limit_first_page:], MAX_PRODUCTS_PER_PAGE
)
order = invoice.order
gift_cards_payment = get_gift_cards_payment_amount(order)
creation_date = datetime.now(tz=pytz.utc)
rendered_template = get_template("invoices/invoice.html").render(
{
"invoice": invoice,
"creation_date": creation_date.strftime("%d %b %Y"),
"order": order,
"gift_cards_payment": gift_cards_payment,
"font_path": f"file://{font_path}",
"products_first_page": products_first_page,
"rest_of_products": rest_of_products,
}
)
return HTML(string=rendered_template).write_pdf(), creation_date
</code></pre>
<h3>plugins.py</h3>
<pre><code>from typing import Any, Optional
from uuid import uuid4
from django.core.files.base import ContentFile
from django.utils.text import slugify
from ...core import JobStatus
from ...invoice.models import Invoice
from ...order.models import Order
from ..base_plugin import BasePlugin
from .utils import generate_invoice_number, generate_invoice_pdf
class InvoicingPlugin(BasePlugin):
PLUGIN_ID = "mirumee.invoicing"
PLUGIN_NAME = "Invoicing"
DEFAULT_ACTIVE = True
PLUGIN_DESCRIPTION = "Built-in saleor plugin that handles invoice creation."
CONFIGURATION_PER_CHANNEL = False
def invoice_request(
self,
order: "Order",
invoice: "Invoice",
number: Optional[str],
previous_value: Any,
) -> Any:
invoice_number = generate_invoice_number()
invoice.update_invoice(number=invoice_number)
file_content, creation_date = generate_invoice_pdf(invoice)
invoice.created = creation_date
slugified_invoice_number = slugify(invoice_number)
invoice.invoice_file.save(
f"invoice-{slugified_invoice_number}-order-{order.id}-{uuid4()}.pdf",
ContentFile(file_content), # type: ignore
)
invoice.status = JobStatus.SUCCESS
invoice.save(
update_fields=[
"created_at",
"number",
"invoice_file",
"status",
"updated_at",
]
)
return invoice
</code></pre>
<p>To fix the issue, I followed <a href="https://github.com/Kozea/WeasyPrint/issues/1448#issuecomment-925549118" rel="nofollow noreferrer">this</a> instruction of creating a symlink, which I did, and pointed to it in my machine's PATH environment variable, yet it didn't fix the issue. Does that mean that Django isn't checking for the dependency using the PATH environment variable?</p>
<p>It's also worth noting that an <a href="https://github.com/Kozea/WeasyPrint/issues/1448#issuecomment-925559421" rel="nofollow noreferrer">installation</a> of Python and WeasyPrint using Homebrew would fix the issue, but I don't use Homebrew because I'm on macOS Catalina 10.15, which isn't supported anymore, so the builds for it are unstable.</p>
<p>I know the dependency is on my machine, but it's been difficult to point to it. What am I doing wrong?</p>
<p>I've been on this for days!</p>
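<p>WeasyPrint loads <code>gobject</code> via <code>ctypes</code>/<code>dlopen</code>, and on macOS that lookup honours <code>DYLD_FALLBACK_LIBRARY_PATH</code> rather than <code>PATH</code>. Assuming the default MacPorts prefix of <code>/opt/local</code> (an assumption — adjust to your install), something like this before starting Django may help:</p>

```shell
# Hypothetical fix for a MacPorts install (libraries under /opt/local/lib):
# let ctypes' dlopen find libgobject-2.0 before starting the Django server.
export DYLD_FALLBACK_LIBRARY_PATH=/opt/local/lib:$DYLD_FALLBACK_LIBRARY_PATH
python manage.py runserver
```

<p>The variable must be set in the same shell (or process environment) that launches Django, which is why a symlink plus a <code>PATH</code> change had no effect.</p>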
|
<python><django><weasyprint>
|
2023-01-30 00:24:00
| 1
| 818
|
Romeo
|
75,279,306
| 6,150,310
|
Check if any words in two lists are anagrams
|
<p>I'm trying to write a function that takes in two lists of any length and returns <code>True</code> if any word in <code>list_1</code> is an anagram of a word in <code>list_2</code>, and <code>False</code> if not.</p>
<p>For example:</p>
<p>words1 = [“dog”, “kitten”]
words2 = [“tiger”, “god”]
return True, as “dog” and “god” are anagrams.</p>
<p>words1 = [“doggy”, “cat”, “tac”]
words2 = [“tiger”, “lion”, “dog”]
return False, as no anagram pair between words1 and words2 exists.</p>
<p>I have a version of the code where I can check specific strings for anagrams, but I'm not sure how to go through the two lists and cross-check the words:</p>
<pre><code>def anagram_pair_exists(words1, words2):
if(sorted(words1)== sorted(words2)):
return True
else:
return False
</code></pre>
<p>Any help is appreciated</p>
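<p>The posted function sorts the whole lists rather than the individual words. A sketch that compares per-word canonical forms instead (an anagram class is identified by its sorted characters):</p>

```python
def anagram_pair_exists(words1, words2):
    # Two words are anagrams iff their sorted characters are equal,
    # so a word's canonical form is ''.join(sorted(word)).
    canon = {''.join(sorted(w)) for w in words1}
    return any(''.join(sorted(w)) in canon for w in words2)

print(anagram_pair_exists(["dog", "kitten"], ["tiger", "god"]))               # True
print(anagram_pair_exists(["doggy", "cat", "tac"], ["tiger", "lion", "dog"])) # False
```

<p>Building the set once keeps this O(n + m) word comparisons instead of comparing every pair twice.</p>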
|
<python><python-3.x><list><anagram>
|
2023-01-30 00:14:45
| 2
| 1,264
|
mlenthusiast
|
75,279,304
| 10,761,353
|
Why can't Idle import from .py files created after it was launched?
|
<p>I can't explain this phenomenon, other than to assume that Idle <em>somehow</em> works off a <code>snapshot</code> of the filesystem, taken at launch time.</p>
<p>Repro steps:</p>
<ul>
<li>create a <code>myLib.py</code> file with (e.g.):</li>
</ul>
<pre><code>#!/usr/bin/env python3
pre_launch_str = "Pre-launch!"
# post_launch_str = "Post-launch!"
</code></pre>
<ul>
<li>launch Idle (from the containing folder)</li>
<li><code>from myLib import pre_launch_str</code> works as expected: the string is imported/usable</li>
<li>Keep Idle running/open</li>
<li>[from another application/terminal] <em>modify</em> <code>myLib.py</code> to include a new object (e.g.) <code>post_launch_str</code></li>
<li><code>from myLib import post_launch_str</code> will throw an error:</li>
</ul>
<pre><code>Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ImportError: cannot import name 'post_launch_str' from 'myLib' (/home/myUser/myLib.py)
</code></pre>
<p>Anyone know what the cause of this is?</p>
<p>Linux (zsh) + Python 3.10 in the example above, but I've noticed this long ago (~Python 3.5, and on macOS too)</p>
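<p>Idle isn't snapshotting anything — Python caches every imported module in <code>sys.modules</code>, so a later <code>from myLib import ...</code> reuses the module object built at first import and never re-reads the file; <code>importlib.reload</code> re-executes it. A self-contained sketch reproducing this outside Idle (using a throwaway temp module, not the real <code>myLib</code>):</p>

```python
import importlib
import sys
import tempfile
from pathlib import Path

# Write a throwaway module, import it, then modify it on disk.
tmp = Path(tempfile.mkdtemp())
(tmp / "myLib.py").write_text("pre_launch_str = 'Pre-launch!'\n")
sys.path.insert(0, str(tmp))

import myLib                          # cached in sys.modules from here on
(tmp / "myLib.py").write_text(
    "pre_launch_str = 'Pre-launch!'\n"
    "post_launch_str = 'Post-launch!'\n"
)
importlib.reload(myLib)               # without this, the cached module is reused
print(myLib.post_launch_str)          # Post-launch!
```

<p>In the Idle shell, running <code>import importlib; importlib.reload(myLib)</code> after editing the file makes <code>post_launch_str</code> importable.</p>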
|
<python>
|
2023-01-30 00:14:24
| 2
| 1,424
|
Adam Smooch
|
75,279,176
| 8,681,882
|
How to connect from GKE to Cloud SQL using Python and Private IP
|
<p>I want to connect to my MySQL database from my GKE pods using python using a private IP</p>
<p>I've done all the configurations, the connection is working inside a test pod through bash using</p>
<pre><code>mysql -u root -p --host X.X.X.X --port 3306
</code></pre>
<p>But it doesn't work inside my Python app... maybe I'm missing something.</p>
<p>Here is my current code</p>
<pre><code># initialize Connector object
connector = Connector(ip_type=IPTypes.PRIVATE)
# function to return the database connection object
def getconn():
conn = connector.connect(
INSTANCE_CONNECTION_NAME,
"pymysql",
user=DB_USER,
password=DB_PASS,
db=DB_NAME
)
return conn
# create connection pool with 'creator' argument to our connection object function
pool = sqlalchemy.create_engine(
"mysql+pymysql://",
creator=getconn,
)
</code></pre>
<p>I'm still getting these errors</p>
<pre><code>aiohttp.client_exceptions.ClientResponseError: 403, message="Forbidden: Authenticated IAM principal does not seeem authorized to make API request. Verify 'Cloud SQL Admin API' is enabled within your GCP project and 'Cloud SQL Client' role has been granted to IAM principal.", url=URL('https://sqladmin.googleapis.com/sql/v1beta4/projects/manifest-altar-223913/instances/rapminerz-apps/connectSettings')
</code></pre>
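<p>The 403 suggests the problem is IAM rather than networking: the <code>mysql</code> CLI talks straight TCP to the private IP, while the Python connector additionally calls the Cloud SQL Admin API, which requires the Cloud SQL Client role on whatever identity the pod authenticates as (a Workload Identity service account or the node's service account). A sketch of the grants, where <code>PROJECT_ID</code> and <code>GSA_NAME</code> are placeholders to substitute:</p>

```shell
# 1. Make sure the Admin API the connector talks to is enabled:
gcloud services enable sqladmin.googleapis.com --project=PROJECT_ID

# 2. Grant the Cloud SQL Client role to the identity the pod runs as:
gcloud projects add-iam-policy-binding PROJECT_ID \
  --member="serviceAccount:GSA_NAME@PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/cloudsql.client"
```

<p>After the role propagates, the same <code>Connector(ip_type=IPTypes.PRIVATE)</code> code should get past the <code>connectSettings</code> call.</p>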
|
<python><mysql><kubernetes><google-cloud-platform>
|
2023-01-29 23:39:24
| 1
| 337
|
Noa Be
|
75,278,894
| 4,652,358
|
youtube-dl :: search for video and listen
|
<p>One of the features of youtube-dl allows you to search for a video and download it.</p>
<p>Example: if I want to search for <em><strong>P!NK - Never Gonna Not Dance Again</strong></em> I just need to do:</p>
<pre><code>youtube-dl "ytsearch1:P!NK - Never Gonna Not Dance Again"
</code></pre>
<p>And the video is downloaded on my computer.</p>
<p>But what I want to do instead is <em>not</em> to download it.</p>
<p>I want to grab the URL ID which for <em>P!NK - Never Gonna Not Dance Again</em> is <code>iqYK79jCssA</code> :</p>
<p><a href="https://i.sstatic.net/OGQ8x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OGQ8x.png" alt="enter image description here" /></a></p>
<p>and I want to listen to it on a web browser this way:</p>
<pre><code>youtube-dl -g 'iqYK79jCssA'
</code></pre>
<p>The steps should be:</p>
<ol>
<li>Search for <em>P!NK - Never Gonna Not Dance Again</em> and not download</li>
<li>Grab the URL ID</li>
<li>Listen to web browser</li>
</ol>
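<p>youtube-dl can do the search-without-download step itself: the <code>--get-id</code> flag prints only the video ID of the <code>ytsearch</code> result instead of downloading it. A small sketch that just builds the command line (the helper name is made up):</p>

```python
import shlex

def search_id_command(query, n=1):
    # youtube-dl's --get-id prints the video ID(s) of the top n search
    # results without downloading anything.
    return ["youtube-dl", "--get-id", f"ytsearch{n}:{query}"]

print(shlex.join(search_id_command("P!NK - Never Gonna Not Dance Again")))
```

<p>The printed ID (e.g. <code>iqYK79jCssA</code>) can then be fed to <code>youtube-dl -g <ID></code> to get a direct stream URL for the browser.</p>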
|
<python><youtube><youtube-data-api><youtube-dl>
|
2023-01-29 22:38:44
| 1
| 12,818
|
Francesco Mantovani
|