QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,391,156 | 8,373,832 | Custom Labels in Vertex AI Pipeline PipelineJobSchedule | <p>I would like to know the steps involved in adding custom labels to a Vertex AI pipeline's <code>PipelineJobSchedule</code>. It does not work when I add them inside the <code>PipelineJob</code> parameters; can anyone provide guidance?</p>
<pre><code># https://cloud.google.com/vertex-ai/docs/pipelines/schedule-pipeline-run#create-a-schedule
pipeline_job = aiplatform.PipelineJob(
template_path="COMPILED_PIPELINE_PATH",
pipeline_root="PIPELINE_ROOT_PATH",
display_name="DISPLAY_NAME",
labels="{"name":"test_xx"}"
)
pipeline_job_schedule = aiplatform.PipelineJobSchedule(
pipeline_job=pipeline_job,
display_name="SCHEDULE_NAME"
)
pipeline_job_schedule.create(
cron="TZ=CRON",
max_concurrent_run_count=MAX_CONCURRENT_RUN_COUNT,
max_run_count=MAX_RUN_COUNT,
)
</code></pre>
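<p>For reference, my understanding (an assumption based on the <code>google-cloud-aiplatform</code> docs, not verified here) is that <code>labels</code> expects a plain <code>Dict[str, str]</code> rather than a JSON-formatted string, along these lines:</p>

```python
# Assumption: `labels` takes a dict of str -> str, not a JSON string.
labels = {"name": "test_xx"}

# The PipelineJob call would then look like (sketch, requires GCP credentials,
# so it is left commented out here):
# pipeline_job = aiplatform.PipelineJob(
#     template_path="COMPILED_PIPELINE_PATH",
#     pipeline_root="PIPELINE_ROOT_PATH",
#     display_name="DISPLAY_NAME",
#     labels=labels,
# )

print(all(isinstance(k, str) and isinstance(v, str) for k, v in labels.items()))
```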
| <python><google-cloud-platform><google-cloud-vertex-ai><kubeflow-pipelines><vertex-ai-pipeline> | 2023-10-30 18:44:49 | 2 | 356 | Rituraj kumar |
77,391,144 | 489,088 | How to get pandas pct_change to affect rows with a given index value independently from each other? | <p>I have a dataframe like so:</p>
<pre><code>df = pd.DataFrame([
['A', 2],
['B', 4],
['C', 20],
['B', 8],
['C', 2],
['A', 2]],
columns=['Label', 'Val1',])
print(df)
Label Val1
0 A 2
1 B 4
2 C 20
3 B 8
4 C 2
5 A 2
</code></pre>
<p>If I calculate the percentage change of <code>Val1</code>:</p>
<pre><code>df['Val1_change'] = df['Val1'].pct_change(periods=1)
</code></pre>
<p>I get this:</p>
<pre><code> Label Val1 Val1_change
0 A 2 NaN
1 B 4 1.00
2 C 20 4.00
3 B 8 -0.60
4 C 2 -0.75
5 A 2 0.00
</code></pre>
<p>Each row has the change according to its value in relation to the previous value. Cool.</p>
<p>I would like however to calculate percentage change between rows that have the same Label value, so each value in a row that has Label <code>A</code> is calculated according the change in relation to the previous row with value <code>A</code>, and so forth.</p>
<p>So I would get this:</p>
<pre><code> Label Val1 Val1_change
0 A 2 NaN
1 B 4 NaN
2 C 20 NaN
3 B 8 1.00 # 100% increase from previous B row
4 C 2 -0.90 # 90% decrease from previous C row
5 A 2 0.00 # no change from previous A row
</code></pre>
<p>I tried by setting Label as the index first:</p>
<pre><code>df.set_index('Label', inplace=True)
df['Val1_change'] = df['Val1'].pct_change(periods=1)
</code></pre>
<p>But there is no change to the calculated pct_change:</p>
<pre><code>Label
A 2 NaN
B 4 1.00
C 20 4.00
B 8 -0.60
C 2 -0.75
A 2 0.00
</code></pre>
<p>How can I accomplish this in pandas?</p>
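<p>For context, a sketch of the direction I'm leaning (grouping before <code>pct_change</code>, so each Label's series is shifted independently):</p>

```python
import pandas as pd

df = pd.DataFrame(
    [["A", 2], ["B", 4], ["C", 20], ["B", 8], ["C", 2], ["A", 2]],
    columns=["Label", "Val1"],
)

# pct_change within each Label group: the "previous" row is the previous
# row with the same Label, not the previous row of the whole frame
df["Val1_change"] = df.groupby("Label")["Val1"].pct_change(periods=1)
print(df)
```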
| <python><python-3.x><pandas><dataframe> | 2023-10-30 18:43:20 | 1 | 6,306 | Edy Bourne |
77,391,095 | 10,557,442 | How to Calculate Time Difference from Previous Value Change in PySpark DataFrame | <p>Suppose I have the following dataframe in pyspark:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th><strong>object</strong></th>
<th><strong>time</strong></th>
<th><strong>has_changed</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>A</td>
<td>4</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>7</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>2</td>
<td>1</td>
</tr>
<tr>
<td>B</td>
<td>5</td>
<td>0</td>
</tr>
</tbody>
</table>
</div>
<p>What I want is to add a new column that, for each row, keeps track of the time difference with respect to the last value change for the current object (or first element of the corresponding partition if no value changes exists). For the table I've posted above, the result would be the following:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th><strong>object</strong></th>
<th><strong>time</strong></th>
<th><strong>has_changed</strong></th>
<th><strong>time_alive</strong></th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>A</td>
<td>2</td>
<td>1</td>
<td>1</td>
</tr>
<tr>
<td>A</td>
<td>4</td>
<td>0</td>
<td>2</td>
</tr>
<tr>
<td>A</td>
<td>7</td>
<td>1</td>
<td>5</td>
</tr>
<tr>
<td>B</td>
<td>2</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>B</td>
<td>5</td>
<td>0</td>
<td>3</td>
</tr>
</tbody>
</table>
</div>
<p>That is, within each partition by the "object" column, sorted by the "time" column, each value of the corresponding row is calculated as the difference between the time of that row and the previous time at which there is a 1 in the "has_changed" column (if a 1 is not found, the window will scroll to the first element of the partition).</p>
<p>What I would like to implement would be something like the following (pseudo-code):</p>
<pre><code>from pyspark.sql.window import Window as w
from pyspark.sql import functions as f
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
# Define the data
data = [("A", 1, 0), ("A", 2, 1), ("A", 4, 0), ("A", 7, 1), ("B", 2, 1), ("B", 5, 0)]
# Define the schema
schema = ["object", "time", "has_changed"]
# Create the DataFrame
df = spark.createDataFrame(data, schema)
# Window function (pseudo-code, this won't work)
window = (
w.partitionBy("object")
.orderBy("time")
.rowsBetween(f.when(f.col("has_changed") == 1), w.currentRow)
)
df.withColumn("time_alive", f.col("time") - f.lag("time", 1).over(window))
</code></pre>
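<p>To make the target behaviour concrete, here is the logic in plain Python (easy to check by hand), with my guess at the Spark equivalent in the comments. The window expression is an assumption on my part and untested, but it mirrors the same rule: take the time of the last previous row with <code>has_changed == 1</code>, falling back to the first row of the partition.</p>

```python
from itertools import groupby

# Guessed Spark equivalent (untested):
#   w_prev = Window.partitionBy("object").orderBy("time") \
#                  .rowsBetween(Window.unboundedPreceding, -1)
#   w_all  = Window.partitionBy("object").orderBy("time") \
#                  .rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)
#   ref = F.coalesce(
#       F.last(F.when(F.col("has_changed") == 1, F.col("time")),
#              ignorenulls=True).over(w_prev),
#       F.first("time").over(w_all),
#   )
#   df = df.withColumn("time_alive", F.col("time") - ref)

def time_alive(rows):
    """rows: (object, time, has_changed) tuples; returns time_alive per row
    in partition/time order, mirroring the window logic above."""
    out = []
    for _, part in groupby(sorted(rows), key=lambda r: r[0]):
        part = list(part)
        last_change = None  # time of the most recent previous has_changed == 1
        for _, t, changed in part:
            ref = last_change if last_change is not None else part[0][1]
            out.append(t - ref)
            if changed == 1:
                last_change = t
    return out

data = [("A", 1, 0), ("A", 2, 1), ("A", 4, 0), ("A", 7, 1), ("B", 2, 1), ("B", 5, 0)]
print(time_alive(data))  # [0, 1, 2, 5, 0, 3]
```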
| <python><pyspark><rolling-computation> | 2023-10-30 18:35:48 | 1 | 544 | Dani |
77,390,990 | 10,010,688 | How to extract html text based on the lowest html tag level | <p>Is there a way to extract text with Beautifulsoup that is associated with the most relevant html tag? For example:</p>
<pre><code><div>
I'm a div
<p>I'm a paragraph</p>
</div>
</code></pre>
<p>Is there a way that I end up with</p>
<pre><code>I'm a div
</code></pre>
<p>when getting the text from the div tag and I end up with:</p>
<pre><code>I'm a paragraph
</code></pre>
<p>when getting the text from the p tag?</p>
<p>I've been working with the code below:</p>
<pre><code>soup = BeautifulSoup(html_description, 'html.parser')
TAGS_TO_APPEND = ['div', 'p', 'h1']
for tag in soup.find_all(True):
if tag.name in TAGS_TO_APPEND:
sanitised_description += tag.get_text(strip=True) + '\n\n' # Add two new lines for <p> tags
elif tag.name == 'li':
sanitised_description += '\n* ' + tag.get_text(strip=True) # Add '*' for <li> tags
</code></pre>
<p>Because <code>tag.get_text()</code> returns all the text within the tag, i.e. I get "I'm a div I'm a paragraph" when looking at the div tag, I end up with duplicated texts. I also can't just get all the texts at the highest level, because I need to reformat the text.</p>
<p>I've looked at multiple threads, one of them being: <a href="https://stackoverflow.com/questions/54994297/show-text-inside-the-tags-beautifulsoup">Show text inside the tags BeautifulSoup</a>, but I don't think it's the same situation as I'm encountering for the solution provided.</p>
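<p>One sketch I've been experimenting with (collecting only a tag's <em>direct</em> text nodes via <code>find_all(string=True, recursive=False)</code>, so nested tags' text is never repeated):</p>

```python
from bs4 import BeautifulSoup

html = """<div>
I'm a div
<p>I'm a paragraph</p>
</div>"""
soup = BeautifulSoup(html, "html.parser")

def own_text(tag):
    # only the NavigableStrings that are direct children of this tag
    return " ".join(
        s.strip() for s in tag.find_all(string=True, recursive=False) if s.strip()
    )

print(own_text(soup.div))  # I'm a div
print(own_text(soup.p))    # I'm a paragraph
```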
| <python><beautifulsoup> | 2023-10-30 18:18:17 | 1 | 3,858 | Mark |
77,390,865 | 12,474,157 | Error Compiling `spacy` Package with Pip and Requirements.txt (Mac M1) | <p>I'm trying to install the <code>spacy</code> package and its dependencies using <code>pip</code> and a <code>requirements.txt</code> file in a Python environment. However, I'm encountering the following error during the installation process:</p>
<h2>requirements.txt</h2>
<pre class="lang-none prettyprint-override"><code>-i https://pypi.python.org/simple
anyio==3.7.0
appdirs==1.4.4
...
spacy==3.0.3
spacy-legacy==3.0.1
spacy-lookups-data==1.0.3
...
websockets==11.0.3
wrapt==1.15.0; python_version >= '2.7' and python_version not in '3.0, 3.1, 3.2, 3.3, 3.4'
xmltodict==0.13.0; python_version >= '3.4'
zipp==3.16.2; python_version >= '3.8'
</code></pre>
<p>I run the command <code>pip install -r requirements.txt</code>, also tried <code>conda install --file requirements.txt</code></p>
<h2>Error</h2>
<pre class="lang-none prettyprint-override"><code> Compiling spacy/tokens/morphanalysis.pyx because it changed.
Compiling spacy/tokens/_retokenize.pyx because it changed.
Compiling spacy/matcher/matcher.pyx because it changed.
Compiling spacy/matcher/phrasematcher.pyx because it changed.
Compiling spacy/matcher/dependencymatcher.pyx because it changed.
Compiling spacy/symbols.pyx because it changed.
Compiling spacy/vectors.pyx because it changed.
[ 1/41] Cythonizing spacy/attrs.pyx
[ 2/41] Cythonizing spacy/kb.pyx
Traceback (most recent call last):
File ".../lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
main()
File ".../lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File ".../lib/python3.9/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
File ".../site-packages/setuptools/build_meta.py", line 355, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
File ".../site-packages/setuptools/build_meta.py", line 325, in _get_build_requires
self.run_setup()
File ".../site-packages/setuptools/build_meta.py", line 341, in run_setup
exec(code, locals())
File "<string>", line 225, in <module>
File "<string>", line 211, in setup_package
File ".../site-packages/Cython/Build/Dependencies.py", line 1154, in cythonize
cythonize_one(*args)
File ".../site-packages/Cython/Build/Dependencies.py", line 1321, in cythonize_one
raise CompileError(None, pyx_file)
Cython.Compiler.Errors.CompileError: spacy/kb.pyx
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>I'm using Python 3.9.16 on macOS (M1). I've tried updating Pipenv, ensuring the correct dependencies are installed, and recreating the virtual environment, but I'm still encountering this issue. How can I resolve this error and successfully install the <code>spacy</code> package with its dependencies?</p>
| <python><pip><anaconda><conda><pipenv> | 2023-10-30 17:53:53 | 0 | 1,720 | The Dan |
77,390,686 | 1,095,202 | Construct a callable argument in embedded Python with Python C API | <p>I am writing code that starts in Python, then goes to C via <code>ctypes</code> and inside C it uses Python embedding to invoke a Python function, that is, the flow looks like this:</p>
<pre><code>Python user code passes function name -> C mediator library -> Python "backend code"
</code></pre>
<ol>
<li><p>Python user code loads C mediator library and passes to it a function name <code>funcname</code> and its arguments (types and values) via <code>ctypes</code>.</p>
</li>
<li><p>C mediator library embeds Python, loads the required module, and calls
the function named <code>funcname</code> with the passed arguments.</p>
</li>
<li><p>Embedded Python executes the function and returns the result.</p>
</li>
</ol>
<p>The Python function accepts different parameters and one of them
is a callback function. When I am at the C mediator library, I have a <code>void</code> pointer to this callback. <em>Question</em>: How to convert it to a Python <code>callable</code>?</p>
<p>Thank you!</p>
<p>Minimal (not-)working example consists of files <code>run.py</code> (user code), <code>callstuff.c</code> (C mediator library), <code>dostuff.py</code> (Python "backend" code) and <code>CMakeLists.txt</code> to compile the C mediator library.</p>
<pre class="lang-py prettyprint-override"><code># File run.py
import ctypes
import sys

if __name__ == "__main__":
    if sys.platform == "darwin":
        ext = "dylib"
    elif sys.platform == "linux":
        ext = "so"
    else:
        raise ValueError("Handle me somehow")

    lib = ctypes.PyDLL(f"build/libcallstuff.{ext}")
    initialize = lib.__getattr__("initialize")
    initialize()

    call = lib.__getattr__("call")
    # signature: int call(char *funcname, void * fn_p, int x)
    call.restype = ctypes.c_int
    call.argtypes = [ctypes.c_char_p, ctypes.c_void_p, ctypes.c_int]

    def myfn(x):
        return 2 * x

    # Prepare all arguments to call func
    # fn_p signature is int f(int x)
    fn_t = ctypes.CFUNCTYPE(ctypes.c_int, *[ctypes.c_int])
    fn_p = ctypes.cast(ctypes.pointer(fn_t(myfn)), ctypes.c_void_p)
    a = ctypes.c_int(21)
    result = call("apply".encode(), fn_p, a)
    print("Result is", result)

    finalize = lib.__getattr__("finalize")
    finalize()
</code></pre>
<pre class="lang-c prettyprint-override"><code>#define PY_SSIZE_T_CLEAN
#include <Python.h>
#include <stdio.h>
#include <string.h>
/**
* This function invokes function `func` from the Python module `dostuff.py`
* via Python embedding.
* Here, the function is constrained, as the other arguments are passed
* explicitly because there is only one value of `func` for this example.
* In general case, it will be a list that carries types information as ints
* and void pointers to values.
* All memory release and error checks are omitted.
*/
int call(const char *fn_name, void *fn_p, int x) {
    printf("I am here\n");
    PyObject *pFileName = PyUnicode_FromString("dostuff");
    printf("I am here 2\n");
    PyObject *pModule = PyImport_Import(pFileName);
    printf("I am here 3\n");
    PyObject *pFunc = PyObject_GetAttrString(pModule, fn_name);
    PyObject *pArgs = PyTuple_New(2); // We have args: f, x
    PyObject *pValue;
    pValue = (PyObject *) fn_p; // ??????? How to convert void *fn_p?
    PyTuple_SetItem(pArgs, 0, pValue);
    pValue = PyLong_FromLong(x);
    PyTuple_SetItem(pArgs, 1, pValue);
    PyObject *pResult = PyObject_CallObject(pFunc, pArgs);
    if (pResult != NULL) {
        return PyLong_AsLong(pResult);
    } else {
        return -1;
    }
}

int initialize() {
    Py_Initialize();
    return 0;
}

int finalize() {
    Py_Finalize();
    return 0;
}
</code></pre>
<pre class="lang-py prettyprint-override"><code># file dostuff.py
from typing import Callable
def apply(f: Callable, x):
return f(x)
</code></pre>
<pre><code>cmake_minimum_required(VERSION 3.18)
project(PyCInterop LANGUAGES C)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
find_package(Python REQUIRED Interpreter Development)
add_library(callstuff SHARED callstuff.c)
target_link_libraries(callstuff PRIVATE Python::Python)
</code></pre>
<p>To compile:</p>
<pre class="lang-bash prettyprint-override"><code>$ cmake -S. -B build -DCMAKE_BUILD_TYPE=Debug && cmake --build build
</code></pre>
<p>To run:</p>
<pre class="lang-bash prettyprint-override"><code>$ python run.py
</code></pre>
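<p>A possible simplification I'm considering (an assumption on my part: since the library is loaded with <code>ctypes.PyDLL</code>, the GIL is held during the call): declare the parameter as <code>ctypes.py_object</code> and pass <code>myfn</code> itself, so the C side receives a genuine <code>PyObject *</code> and the cast <code>(PyObject *) fn_p</code> becomes valid. The marshalling can be demonstrated without the C library by driving CPython's own API through <code>ctypes.pythonapi</code>:</p>

```python
import ctypes

def myfn(x):
    return 2 * x

# ctypes passes a py_object argument as the raw PyObject * -- which is exactly
# what the C mediator would need to cast back with `(PyObject *) fn_p`.
call_object = ctypes.pythonapi.PyObject_CallObject
call_object.restype = ctypes.py_object
call_object.argtypes = [ctypes.py_object, ctypes.py_object]

result = call_object(myfn, (21,))
print(result)  # 42
```

<p>On the ctypes side this would mean (hypothetical change to <code>run.py</code>) <code>call.argtypes = [ctypes.c_char_p, ctypes.py_object, ctypes.c_int]</code> and passing <code>myfn</code> directly instead of the <code>CFUNCTYPE</code> cast.</p>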
| <python><function-pointers><python-c-api> | 2023-10-30 17:23:22 | 0 | 927 | Dmitry Kabanov |
77,390,685 | 10,958,326 | How can I make PyCharm recognize method parameters when extracting a method? | <p>PyCharm has a few refactoring features. One of them is the <code>Extract Method</code> feature. It usually does not work as I would expect. e.g. a minimal example:</p>
<pre><code>a = 2
b = a**2
</code></pre>
<p>after selecting <code>a**2</code> and performing the <code>Extract Method</code> feature I am getting:</p>
<pre><code>a = 2
def method_name():
return a ** 2
b = method_name()
</code></pre>
<p>What I would expect and wish is the following:</p>
<pre><code>a = 2
def method_name(a): #a should be a parameter...
return a ** 2
b = method_name(a) #...and should be passed here
</code></pre>
<p>Is there a way to somehow force PyCharm to recognize parameters? Or is it just how this feature works? In that case is there any other way to accomplish this in PyCharm?</p>
| <python><pycharm> | 2023-10-30 17:23:20 | 0 | 390 | algebruh |
77,390,669 | 8,389,618 | Azure function not able to read the data from CosmosDB using Python | <p>I am trying to deploy the Azure function inside the function app and trying to read the data in the Azure function from CosmosDB.</p>
<p>The code is not able to fetch the data from CosmosDB in the function deployed via az-cli, but if I run the same function locally it fetches the data properly.</p>
<p>If we deploy the Azure function using the VS code then it works properly.</p>
<p>Code snippet for fetching the data.</p>
<pre><code>def getting_meta_data(nodecellid):
    logging.info('inside getting meta data function')
    logging.info('---------- %s', nodecellid)
    logging.info(type(nodecellid))
    endpoint = 'https://subscription-cosmosdb.documents.azure.com:443/'
    key = 'key'
    client = CosmosClient(endpoint, key)
    # Connect to the database and container
    container_name = 'container-name'
    database_name = 'database-name'
    database = client.get_database_client(database_name)
    container = database.get_container_client(container_name)
    data_dict = {}
    logging.info("&&&&& %s", nodecellid)
    for item in container.query_items(
            query=f"SELECT * FROM r WHERE r['NodeCellId'] = '{nodecellid}'",
            enable_cross_partition_query=True):
        print(item)
        data_dict = json.loads(json.dumps(item, indent=True))
        logging.info("++++++++",json.dumps(item, indent=True))
        logging.info("//////",'meta-data value %s',data_dict)
    logging.info('completed meta-data ')
    if data_dict == {}:
        logging.info('empty meta-data')
        return -1
    return data_dict
nodecellid = 'nodecellid'
getting_meta_data(nodecellid)
</code></pre>
<p>When I fetch the data from CosmosDB using the VS Code-deployed function, it works, even though both deployments contain the same code.</p>
<p>The two deployments differ only in their file layout, which I will show below.</p>
<pre><code>VS code deployed code
**__init.py__** file which contains the code
**function.json** - which contains the binding details to the Azure function.
**sample.dat**
**readme.md**
Azure func core tools deployed code
**function_app.py** file which contains the code
**host.json** - Function App settings
**oryx-manifest.toml**
</code></pre>
<p>I am not sure what I am doing wrong for this.</p>
<p>The same code works locally and in the VS Code-deployed Azure function, but not in the Core Tools-deployed Azure function.</p>
<p>I am attaching the logs for more information.</p>
<pre><code>Result: Failure Exception: TypeError: not all arguments converted during string formatting Stack: File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 493, in _handle__invocation_request call_result = await self._loop.run_in_executor( File "/usr/local/lib/python3.9/concurrent/futures/thread.py", line 58, in run result = self.fn(*self.args, **self.kwargs) File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 762, in _run_sync_func return ExtensionManager.get_sync_invocation_wrapper(context, File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/extension.py", line 215, in _raw_invocation_wrapper result = function(**args) File "/home/site/wwwroot/function_app.py", line 45, in ChronosAzureFunction tagging(blob_client) File "/home/site/wwwroot/function_app.py", line 147, in tagging getting_meta_data(nodecellid_test) File "/home/site/wwwroot/function_app.py", line 369, in getting_meta_data logging.info("++++++++",json.dumps(item, indent=True)) File "/usr/local/lib/python3.9/logging/__init__.py", line 2097, in info root.info(msg, *args, **kwargs) File "/usr/local/lib/python3.9/logging/__init__.py", line 1446, in info self._log(INFO, msg, args, **kwargs) File "/usr/local/lib/python3.9/logging/__init__.py", line 1589, in _log self.handle(record) File "/usr/local/lib/python3.9/logging/__init__.py", line 1599, in handle self.callHandlers(record) File "/usr/local/lib/python3.9/logging/__init__.py", line 1661, in callHandlers hdlr.handle(record) File "/usr/local/lib/python3.9/logging/__init__.py", line 952, in handle self.emit(record) File "/azure-functions-host/workers/python/3.9/LINUX/X64/azure_functions_worker/dispatcher.py", line 829, in emit msg = self.format(record) File "/usr/local/lib/python3.9/logging/__init__.py", line 927, in format return fmt.format(record) File "/usr/local/lib/python3.9/logging/__init__.py", line 663, in format 
record.message = record.getMessage() File "/usr/local/lib/python3.9/logging/__init__.py", line 367, in getMessage msg = msg % self.args
</code></pre>
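<p>For what it's worth, the traceback points at the <code>logging.info("++++++++", json.dumps(...))</code> call: <code>logging</code> applies %-style formatting lazily, so every extra positional argument needs a matching placeholder in the message string. A minimal reproduction of the error, with the corrected form alongside (the <code>item</code> dict here is a made-up stand-in for a CosmosDB document):</p>

```python
import json

item = {"NodeCellId": "node-1"}  # hypothetical document for illustration

# logging.info("msg %s", arg) eventually evaluates msg % (arg,);
# with no placeholder, the same TypeError from the logs is raised:
error = ""
try:
    "++++++++" % (json.dumps(item, indent=True),)
except TypeError as exc:
    error = str(exc)
print(error)  # not all arguments converted during string formatting

# corrected: one %s placeholder per extra argument
fixed = "++++++++ %s" % (json.dumps(item, indent=True),)
```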
| <python><azure><azure-functions><azure-cosmosdb> | 2023-10-30 17:20:28 | 1 | 348 | Ravi kant Gautam |
77,390,647 | 6,458,245 | Alpaca on Google colab: cannot import name 'TypeAliasType' from 'typing_extensions' | <p>I'm trying to use alpaca (the trading platform) on google colab. It works locally on my laptop, but I get the following error:</p>
<pre><code>ImportError Traceback (most recent call last)
<ipython-input-4-791aac88b96e> in <cell line: 16>()
14 # from alpaca.data.historical import CryptoHistoricalDataClient
15
---> 16 from alpaca.data.historical import StockHistoricalDataClient
17
18 import alpaca
9 frames
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_typing_extra.py in <module>
11 from typing import TYPE_CHECKING, Any, ForwardRef
12
---> 13 from typing_extensions import Annotated, Final, Literal, TypeAliasType, TypeGuard, get_args, get_origin
14
15 if TYPE_CHECKING:
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' (/usr/local/lib/python3.10/dist-packages/typing_extensions.py)
</code></pre>
<p>Does anyone know what's causing this?</p>
<p>These are the only lines I have in Colab before I hit this error:</p>
<pre><code>!pip install transformers
!pip install alpaca-py
from alpaca.data.historical import StockHistoricalDataClient
</code></pre>
| <python><python-3.x><google-colaboratory><type-alias><alpaca> | 2023-10-30 17:16:16 | 1 | 2,356 | JobHunter69 |
77,390,425 | 3,505,206 | Polars merging list of lists into a comma separated string while removing duplicates | <p>There's a <a href="https://stackoverflow.com/questions/77053181/python-polars-merge-lists-of-lists">similar question</a> already on this, but the answer does not solve the question.</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({"id": [1, 2, 1], "name": ['jenobi', 'blah', 'jenobi'],
"company": [[['some company 1', 'some company2'], ['some company2']],
[['company 1'], ['company 2', 'company 3']],
[['some company 1'], ['some company2', 'some company 1', 'some company 2']]]
})
</code></pre>
<p>Dataframe follows the schema as above. Want to merge the lists of lists during a groupby and aggregate on the id and name.</p>
<p>Want the result to show a string concatenated value, for example jenobi should show the following company: "some company 1, some company2, some company 2".</p>
<p>Have tried doing a groupby agg on the company and flattening the result however this produces a panic error.</p>
<p>Based on jqurious' comments, I tried a flatten followed by a join: the list does get flattened, but double quotes remain around the flattened sub-lists in the output.</p>
<p><a href="https://i.sstatic.net/fgEnu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fgEnu.png" alt="enter image description here" /></a></p>
<p>This is produced from the following..</p>
<pre class="lang-py prettyprint-override"><code>df.groupby("name").agg(pl.col("company").flatten().list.join(", "))
df.with_columns(pl.col("company").list.unique())
</code></pre>
<p>Ideally, the final result will show..
<a href="https://i.sstatic.net/MyJYB.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MyJYB.png" alt="enter image description here" /></a></p>
<h4>Panic Error</h4>
<pre class="lang-py prettyprint-override"><code>data = (
pl.read_parquet(r"input.parquet")
.select("id", "name", "company")
.groupby("id", "name")
.agg(
pl.col("company").flatten().list.unique()
)
)
</code></pre>
<p><a href="https://i.sstatic.net/h8nRj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/h8nRj.png" alt="enter image description here" /></a></p>
<p>Any suggestions?</p>
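<p>To make the target behaviour concrete, here is the flatten-twice-then-dedupe logic in plain Python. My guess at the Polars form (untested, an assumption) would be <code>pl.col("company").flatten().flatten().unique()</code> inside the agg, followed by <code>.list.join(", ")</code>.</p>

```python
from itertools import chain

# the per-row "company" values for id=1 / name="jenobi" from the example frame
jenobi_rows = [
    [["some company 1", "some company2"], ["some company2"]],
    [["some company 1"], ["some company2", "some company 1", "some company 2"]],
]

def merge_companies(rows):
    # flatten two list levels, then de-duplicate preserving first-seen order
    flat = chain.from_iterable(chain.from_iterable(rows))
    return ", ".join(dict.fromkeys(flat))

print(merge_companies(jenobi_rows))  # some company 1, some company2, some company 2
```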
| <python><python-polars> | 2023-10-30 16:38:40 | 1 | 456 | Jenobi |
77,390,328 | 5,560,091 | Pandas Custom Groupby Shift that skips over horizon | <p>I would like a custom groupby shift function that first skips the first n days before fetching lag 1, 2, 3, and so on. It's important to note that there are missing days; we want to skip over the missing days to fetch the lags.</p>
<p>Here is a sample df:</p>
<pre><code>import pandas as pd
import numpy as np
# Sample data
data = {
'group': ['A', 'A', 'A', 'B', 'B', 'B', 'B', 'C', 'C'],
'date': ['2023-01-01', '2023-01-03', '2023-01-04', '2023-02-01', '2023-02-02', '2023-02-05', '2023-02-06',
'2023-03-02', '2023-03-04'],
'value': [1, 2, 3, 4, 5, 6, 7, 8, 9]
}
horizon = 2
df = pd.DataFrame(data)
df['date'] = pd.to_datetime(df['date'])
display(df)
</code></pre>
<p>Given horizon=2 or in other words skip 1 day before beginning the shift operations, I would like the output to look like:</p>
<p><a href="https://i.sstatic.net/zpnRV.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/zpnRV.png" alt="desired output" /></a></p>
<p>Here is my failed attempt:</p>
<pre><code>def custom_shift(group, lag):
    values = (group
              .reindex(pd.date_range(start=group.index.min(), end=group.index.max()), fill_value=np.nan)
              .shift(horizon-1)
              .dropna()
              .values
              )
    values = np.insert(values, 0, [np.nan]*(len(group.index) - len(values)))
    return pd.Series(values, index=group.index).shift(lag)

df['value_lag1'] = (df
                    .set_index('date')
                    .groupby('group')['value']
                    .transform(custom_shift, lag=1)
                    .reset_index(drop=True)
                    )
df['value_lag2'] = (df
                    .set_index('date')
                    .groupby('group')['value']
                    .transform(custom_shift, lag=2)
                    .reset_index(drop=True)
                    )
display(df)
</code></pre>
<p><a href="https://i.sstatic.net/o9xtp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/o9xtp.png" alt="failed attempt" /></a></p>
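<p>Since the desired output above is an image, here is one possible reading of the requirement as a sketch (an assumption about the intended semantics): lag <code>k</code> with <code>horizon=h</code> means "the value dated <code>h - 1 + k</code> calendar days earlier", implemented by shifting each row's date forward and merging back rather than reindexing.</p>

```python
import pandas as pd

data = {
    "group": ["A", "A", "A", "B", "B", "B", "B", "C", "C"],
    "date": ["2023-01-01", "2023-01-03", "2023-01-04", "2023-02-01", "2023-02-02",
             "2023-02-05", "2023-02-06", "2023-03-02", "2023-03-04"],
    "value": [1, 2, 3, 4, 5, 6, 7, 8, 9],
}
df = pd.DataFrame(data)
df["date"] = pd.to_datetime(df["date"])
horizon = 2

def add_lag(df, lag, horizon):
    # re-date each value `horizon - 1 + lag` days forward, then left-merge so
    # each row picks up the value observed that many calendar days earlier
    # (missing days simply produce NaN instead of shifting the wrong row in)
    shifted = df.assign(date=df["date"] + pd.Timedelta(days=horizon - 1 + lag))
    shifted = shifted.rename(columns={"value": f"value_lag{lag}"})
    return df.merge(shifted[["group", "date", f"value_lag{lag}"]],
                    on=["group", "date"], how="left")

out = add_lag(add_lag(df, 1, horizon), 2, horizon)
print(out)
```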
| <python><pandas> | 2023-10-30 16:23:22 | 2 | 1,057 | nrcjea001 |
77,390,236 | 466,339 | Capturing repeated groups from a single Python regex | <p>I need to parse logs that may include one or more "FAULT" reports per line.
Then I need to extract (to simplify) the file name and the fault code for <em>each</em> occurrence.</p>
<p>I am struggling to find an elegant solution to retrieve all fault captures, though.</p>
<pre class="lang-py prettyprint-override"><code> lines = [
r"arbitrary/path/one.file(285): some unimportant text [FAULT: 1234], [FAULT: 4321]",
]
pattern = (r'^\s*(?P<path>[\w\/\\\.]+)\(\d+\):\s+([^\[]+)'
'((\[\s*FAULT:\s(?P<fault_code>\d{2,4})\])(,\s*)?)+$')
for line in lines:
for match in [m.groupdict() for m in re.finditer(pattern, line)]:
print(f"path: {match['path']}; fault_code: {match['fault_code']}")
</code></pre>
<p>(<a href="https://onecompiler.com/python/3zs23a2b4" rel="nofollow noreferrer">Live</a>)</p>
<p>In the example above I'd expect to have two matches:</p>
<pre><code>path: arbitrary/path/one.file; fault_code: 1234
path: arbitrary/path/one.file; fault_code: 4321
</code></pre>
<p>But I get only the longest-length match:</p>
<pre><code>path: arbitrary/path/one.file; fault_code: 4321
</code></pre>
<p>Would anyone have any good suggestions, please?</p>
<hr />
<p><strong>Note:</strong></p>
<p>In my real problem, log is quite more complicated and there are more fields to extract both in common part and the fault specific part, but I tried to keep the example simple.</p>
<p>Thanks in advance!</p>
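<p>For reference, a two-pass sketch of the kind of fallback I'm considering (match the common prefix once per line, then scan the remainder for each FAULT separately):</p>

```python
import re

lines = [
    r"arbitrary/path/one.file(285): some unimportant text [FAULT: 1234], [FAULT: 4321]",
]

# first pass: the common fields; second pass: one hit per FAULT occurrence
line_re = re.compile(r"^\s*(?P<path>[\w/\\.]+)\(\d+\):\s+(?P<rest>.*)$")
fault_re = re.compile(r"\[\s*FAULT:\s*(?P<fault_code>\d{2,4})\]")

results = []
for line in lines:
    m = line_re.match(line)
    if m:
        for f in fault_re.finditer(m.group("rest")):
            results.append((m.group("path"), f.group("fault_code")))

for path, code in results:
    print(f"path: {path}; fault_code: {code}")
```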
| <python><regex> | 2023-10-30 16:08:15 | 2 | 3,736 | j4x |
77,390,154 | 14,535,309 | Flask-restful: Did not attempt to load JSON data because the request Content-Type was not 'application/json' | <pre><code>Flask==2.3.3
Werkzeug==2.3.7
</code></pre>
<p>I'm trying to get a ms graph subscription going in my app but when I send the subscription request to:</p>
<p><code>https://graph.microsoft.com/v1.0/subscriptions</code></p>
<p>and get the follow-up request from ms grap to my endpoint (ValidateSubscription), flask refuses to parse it because of its incorrect <strong>mediatype</strong> which is <code>text/plain</code>. So far I've tried using <code>flask-accept</code> module to parse the response like this:</p>
<pre><code>import flask
from flask import Response
from flask_accept import accept
from flask_restful import Resource

class ValidateSubscription(Resource):
    @accept('text/plain')
    def post(self):
        if flask.request.args.get("validationToken"):
            token = flask.request.args.get('validationToken')
            return Response(status=200, mimetype='text/plain', response=token)
        else:
            # process notification
            pass
</code></pre>
<p>but it didn't work and I got the same error.</p>
<p>Also I've tried to add an <strong>api representation</strong> to my flask app like this:</p>
<pre><code>@api.representation('text/plain')
def output_text(data, code, headers=None):
resp = flask.make_response(data, code, headers)
resp.headers.extend(headers or {})
return resp
</code></pre>
<p>When I print out <code>api.representations</code> I see:</p>
<pre><code>OrderedDict([('application/json', <function output_json at 0x7f872b021424>), ('text/plain', <function output_text at 0x7f8728d04214>)])
</code></pre>
<p>And I still get the same exact error without any change whatsoever.
Is there a better way to allow flask-restful to accept a <code>text/plain</code> header or am I doing something wrong?</p>
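<p>In case it helps frame the question, this is the plain-Flask sketch I may fall back to, bypassing flask-restful's content negotiation entirely (the route name is made up):</p>

```python
from flask import Flask, Response, request

app = Flask(__name__)

@app.route("/notifications", methods=["POST"])
def notifications():
    # Graph's validation handshake: echo validationToken back as text/plain
    token = request.args.get("validationToken")
    if token:
        return Response(token, status=200, mimetype="text/plain")
    # real notifications: parse JSON even if Content-Type isn't application/json
    payload = request.get_json(force=True, silent=True) or {}
    return Response(status=202)

# quick check with the test client
with app.test_client() as client:
    resp = client.post("/notifications?validationToken=abc123")
    print(resp.status_code, resp.mimetype, resp.get_data(as_text=True))
```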
| <python><flask><microsoft-graph-api><mime-types><flask-restful> | 2023-10-30 15:55:57 | 1 | 2,202 | SLDem |
77,390,134 | 7,615,872 | Construct Pydantic models from json at Runtime | <p>From Pydantic <a href="https://docs.pydantic.dev/latest/integrations/datamodel_code_generator/" rel="nofollow noreferrer">documentation</a>, it's described how to statically create a Pydantic model from a json description using a code generator called <code>datamodel-code-generator</code>.</p>
<p>My question here, is there a way or a workaround to do it dynamically in runtime without using a code generator. So I can construct Pydantic validators and use them when running the application. Something like:</p>
<pre><code>validator_description = {
    "$id": "person.json",
    "title": "Person",
    "type": "object",
    "properties": {
        "first_name": {"type": "string", "description": "The person's first name."},
        "last_name": {"type": "string", "description": "The person's last name."},
        "age": {"description": "Age in years.", "type": "integer", "minimum": 0},
    },
}
Person = Pydantic.from_dict_description(validator_description)
data = {"first_name": "John", "last_name": "Doe", "age": 25}
Person(**data) # accepted
</code></pre>
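<p>If I understand the docs correctly, <code>pydantic.create_model</code> can build a model at runtime; a sketch for flat schemas like the one above (the type mapping is my own, and constraints such as <code>minimum</code> are ignored for brevity):</p>

```python
from pydantic import create_model

# JSON Schema primitive types -> Python types (assumption: flat schemas only)
TYPE_MAP = {"string": str, "integer": int, "number": float, "boolean": bool}

def model_from_schema(schema):
    # each property becomes a (type, default) pair; `...` marks it required
    fields = {
        name: (TYPE_MAP[prop["type"]], ...)
        for name, prop in schema["properties"].items()
    }
    return create_model(schema.get("title", "DynamicModel"), **fields)

Person = model_from_schema({
    "title": "Person",
    "type": "object",
    "properties": {
        "first_name": {"type": "string"},
        "last_name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
})

person = Person(first_name="John", last_name="Doe", age=25)
print(person)
```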
| <python><pydantic> | 2023-10-30 15:53:39 | 1 | 1,085 | Mehdi Ben Hamida |
77,390,094 | 22,371,917 | How to use User Profile With SeleniumBase? | <p>Code:</p>
<pre><code>from seleniumbase import Driver
driver = Driver(uc=True)
driver.get("https://example.com")
driver.click("a")
p_text = driver.find_element("p").text
print(p_text)
</code></pre>
<p>This code works fine, but I want to add a user profile. When I try:</p>
<pre><code>from seleniumbase import Driver
ud = r"C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9"
driver = Driver(uc=True, user_data_dir=ud)
driver.get("https://example.com")
driver.click("a")
p_text = driver.find_element("p").text
print(p_text)
</code></pre>
<p>this makes a profile called "Person 1" that works like a normal user and has everything saved, but what if I want to access a specific profile?</p>
<p>Edit: it goes to the path I give it but appends <code>\Default</code>, so it uses the default profile of that path. So
C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9
would become
C:\Users\USER\AppData\Local\Google\Chrome\User Data\Profile 9\Default</p>
<p>Command Line "C:\Program Files\Google\Chrome\Application\chrome.exe" --window-size=1280,840 --disable-dev-shm-usage --disable-application-cache --disable-browser-side-navigation --disable-save-password-bubble --disable-single-click-autofill --allow-file-access-from-files --disable-prompt-on-repost --dns-prefetch-disable --disable-translate --disable-renderer-backgrounding --disable-backgrounding-occluded-windows --disable-features=OptimizationHintsFetching,OptimizationTargetPrediction --disable-popup-blocking --homepage=chrome://new-tab-page/ --remote-debugging-host=127.0.0.1 --remote-debugging-port=53654 --user-data-dir="C:\Users\fun64\AppData\Local\Google\Chrome\User Data\Profile 9" --lang=en-US --no-default-browser-check --no-first-run --no-service-autorun --password-store=basic --log-level=0 --flag-switches-begin --flag-switches-end --origin-trial-disabled-features=WebGPU
Executable Path C:\Program Files\Google\Chrome\Application\chrome.exe
Profile Path C:\Users\fun64\AppData\Local\Google\Chrome\User Data\Profile 9\Default</p>
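<p>Not an authoritative fix, but a sketch worth trying: point <code>user_data_dir</code> at the <em>root</em> "User Data" folder and select the profile with Chrome's <code>--profile-directory</code> switch. This assumes your SeleniumBase version forwards extra Chromium arguments via a <code>chromium_arg</code> parameter; the helper function below is hypothetical, added only for illustration.</p>
<pre><code>
```python
def chrome_profile_kwargs(user_data_root, profile_name):
    # Hypothetical helper: Chrome resolves --profile-directory relative to
    # the "User Data" root, so the profile name must NOT be part of
    # user_data_dir itself.
    return {
        "uc": True,
        "user_data_dir": user_data_root,
        "chromium_arg": f"--profile-directory={profile_name}",
    }

kwargs = chrome_profile_kwargs(
    r"C:\Users\USER\AppData\Local\Google\Chrome\User Data", "Profile 9"
)
# from seleniumbase import Driver   # needs seleniumbase + Chrome to run
# driver = Driver(**kwargs)
print(kwargs["chromium_arg"])  # --profile-directory=Profile 9
```
</code></pre>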
| <python><python-3.x><google-chrome><selenium-webdriver><seleniumbase> | 2023-10-30 15:47:58 | 1 | 347 | Caiden |
77,390,045 | 764,285 | How do I make database calls asynchronous inside Telegram bot? | <p>I have a Django app that runs a Telegram chatbot script as a command.</p>
<p>I start the Django app with <code>python manage.py runserver</code>.
I start the telegram client with <code>python manage.py bot</code>.</p>
<p>I want to list the entries from the Animal table within the async method that is called when a user types "/animals" in the telegram chat. My code works if I use a hard-coded list or dictionary as a data source. However, I'm not able to get the ORM call to work in async mode.</p>
<p>File structure:</p>
<pre><code>|Accounts---------------------
|------| models.py------------
|Main-------------------------
|------| Management-----------
|---------------| Commands----
|-----------------------bot.py
</code></pre>
<p>Animal model:</p>
<pre><code>class Animal(models.Model):
id = models.AutoField(primary_key=True)
user = models.ForeignKey(User, on_delete=models.CASCADE)
name = models.CharField(max_length=255)
</code></pre>
<p>I removed a lot from the file, leaving only the relevant bits.<br />
bot.py</p>
<pre><code># removed unrelated imports
from asgiref.sync import sync_to_async
from accounts.models import Animal
class Command(BaseCommand):
help = "Starts the telegram bot."
# assume that the token is correct
TOKEN = "abc123"
def handle(self, *args, **options):
async def get_animals():
await Animal.objects.all()
async def animals_command(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
async_db_results = get_animals()
message = ""
counter = 0
for animal in async_db_results:
message += animal.name + "\n"
counter += 1
await update.message.reply_text(message)
application = Application.builder().token(TOKEN).build()
application.add_handler(CommandHandler("animals", animals_command))
application.run_polling(allowed_updates=Update.ALL_TYPES)
</code></pre>
<p>Error message for this code:<br />
TypeError: 'coroutine' object is not iterable</p>
<p>Initially I had <code>Animal.objects.all()</code> in place of <code>async_db_results</code>. The ORM call is not async, so I got this error message:</p>
<pre><code>django.core.exceptions.SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async.
</code></pre>
<p>This is a prototype app, I know I should not be using <code>runserver</code>. And I should also use a webhook instead of long-polling, but I don't think these issues are related to my trouble with async.<br />
The next thing I'm going to try is using asyncio but I have spent a lot of time already, I figured I would ask the question.</p>
<p>I have looked at these resources (and many others):<br />
<a href="https://docs.djangoproject.com/en/4.2/topics/async/#asgiref.sync.sync_to_async" rel="nofollow noreferrer">docs.djangoproject.com: asgiref.sync.sync_to_async</a><br />
<a href="https://stackoverflow.com/questions/61926359/django-synchronousonlyoperation-you-cannot-call-this-from-an-async-context-u">stackoverflow: Django: SynchronousOnlyOperation: You cannot call this from an async context - use a thread or sync_to_async</a><br />
<a href="https://stackoverflow.com/questions/74737310/sync-to-async-django-orm-queryset-foreign-key-property?noredirect=1&lq=1">stackoverflow: Sync to Async Django ORM queryset foreign key property</a></p>
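<p>Two issues appear to combine here: <code>get_animals()</code> is never awaited (hence "'coroutine' object is not iterable"), and even when awaited, the ORM call must run in a synchronous context. The commonly cited Django fix is <code>animals = await sync_to_async(list)(Animal.objects.all())</code>, where <code>list()</code> forces the queryset to evaluate inside the wrapped sync call. Below is a runnable standard-library sketch of the same pattern, with <code>asyncio.to_thread</code> standing in for <code>sync_to_async</code> and a plain function standing in for the ORM call (the animal names are made up).</p>
<pre><code>
```python
import asyncio

def fetch_animals_sync():
    # stand-in for: list(Animal.objects.all())
    # list() forces the queryset to evaluate in the synchronous context
    return ["cat", "dog"]

async def get_animals():
    # asgiref's sync_to_async is essentially this: run the blocking call
    # in a worker thread and await the result
    return await asyncio.to_thread(fetch_animals_sync)

async def main():
    animals = await get_animals()
    return "\n".join(animals)

print(asyncio.run(main()))  # prints "cat" then "dog"
```
</code></pre>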
| <python><django><asynchronous><async-await><python-telegram-bot> | 2023-10-30 15:41:40 | 1 | 5,446 | afaf12 |
77,390,014 | 5,029,101 | ERROR: Could not build wheels for pyminizip, which is required to install pyproject.toml-based projects | <p>I was following the installation process for the openIMIS backend (openimis-be_py), but when I run <code>pip install -r modules-requirements.txt</code> I get the errors below. I have already installed Microsoft Visual C++ Build Tools 2022.</p>
<pre><code> Building wheel for pyminizip (setup.py) ... error
error: subprocess-exited-with-error
× python setup.py bdist_wheel did not run successfully.
│ exit code: 1
╰─> [5 lines of output]
running bdist_wheel
running build
running build_ext
building 'pyminizip' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for pyminizip
Running setup.py clean for pyminizip
Successfully built openimis-be-core openimis-be-individual openimis-be-workflow openimis-be-tasks_management openimis-be-report openimis-be-location openimis-be-medical openimis-be-medical_pricelist openimis-be-product openimis-be-insuree openimis-be-policy openimis-be-contribution openimis-be-payer openimis-be-payment openimis-be-claim openimis-be-claim_batch openimis-be-tools openimis-be-api_fhir_r4 openimis-be-calculation openimis-be-contribution_plan openimis-be-policyholder openimis-be-contract openimis-be-invoice openimis-be-calcrule_contribution_legacy openimis-be-calcrule_third_party_payment openimis-be-calcrule-capitation_payment openimis-be-calcrule_commission openimis-be-calcrule_contribution_income_percentage openimis-be-calcrule_fees openimis-be-calcrule_unconditional_cash_payment openimis-be-im_export openimis-be-dhis2_etl openimis-be-social_protection openimis-be-opensearch_reports openimis-be-payment_cycle openimis-be-calcrule_social_protection openimis-be-payroll
Failed to build pyminizip
ERROR: Could not build wheels for pyminizip, which is required to install pyproject.toml-based projects.
</code></pre>
| <python> | 2023-10-30 15:37:13 | 1 | 461 | Benjamin Ikwuagwu |
77,390,003 | 7,119,501 | How to fix data loss in multi threading with API calls and appending data to a Spark Dataframe? | <p>I have an API call: <code><API_URL></code> that will return a payload. Each API call corresponds to 1 record which should be ingested into a table.</p>
<p>There are 200,000 records that I need to ingest into my table, so I ran them in a loop, ingesting one by one, and it took almost 5 hours. I checked the logs and the time was going into file system updates, snapshots, and log updates. This process is repeated for every insert, i.e. 200,000 times, hence it takes so long to process a small amount of records.</p>
<p>So I created an empty DataFrame and kept appending each API call's output to it, so that I have one single DataFrame where I accumulate all the data, and then write it into a table in one go.
This is how I implemented multithreading in Python.</p>
<pre><code>def prepare_empty_df(schema, spark: SparkSession) -> DataFrame:
empty_rdd = spark.sparkContext.emptyRDD()
empty_df = spark.createDataFrame(empty_rdd, schema)
return empty_df
class RunApiCalls:
def __init__(self, df: DataFrame=None):
self.finalDf = df
def do_some_transformations(df: DataFrame) -> DataFrame:
return do_some_transformation_output_dataframe
def get_json(self, spark, PARAMETER):
try:
token_headers = create_bearer_token()
session = get_session()
api_response = session.get(f'API_URL/?API_PARAMETER={PARAMETER}', headers=token_headers)
print(f'API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: {api_response.status_code}')
api_json_object = json.loads(api_response.text)
string_data = json.dumps(api_json_object)
json_df = spark.createDataFrame([(1, string_data)],["id","value"])
api_dataframe = do_some_transformations(json_df)
self.finalDf = self.finalDf.unionAll(api_dataframe)
except Exception as error:
traceback.print_exc()
def api_main(self, spark, batch_size, state_names) -> DataFrame:
try:
for i in range(0, len(state_names), batch_size):
sub_list = state_names[i:i + batch_size]
threads = []
for index in range(len(sub_list)):
t = threading.Thread(target=self.get_json, name=str(index), args=(spark, sub_list[index]))
threads.append(t)
t.start()
for index, thread in enumerate(threads):
thread.join()
print(f"All Threads completed for this sub_list{i}")
return self.finalDf
except Exception as e:
traceback.print_exc()
if __name__ == "__main__":
spark = SparkSession.builder.appName('SOME_APP_NAME').getOrCreate()
batch_size = 15
empty_df = prepare_empty_df(schema=schema, spark=spark)
print('Created Empty Dataframe')
api_param_list = get_list()
print(f'api param list: {api_param_list}')
api_call = RunApiCalls(df=empty_df)
final_df = api_call.api_main(spark=spark, batch_size=batch_size, state_names=api_param_list)
final_df.write.mode('append').saveAsTable("some_database.some_tablebname")
</code></pre>
<p>When I submit this code, I could see multi threads running in the background and their log as well.
Log:</p>
<pre><code>All Threads completed for this sub_list0
API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: 200
....
API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: 200
All Threads completed for this sub_list15
API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: 200
....
API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: 200
All Threads completed for this sub_list30
API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: 200
..
..
..
API call: API_URL/?API_PARAMETER={PARAMETER} -> Status code: 200
All Threads completed for this sub_list199985
</code></pre>
<p><a href="https://i.sstatic.net/M40hZ.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M40hZ.png" alt="enter image description here" /></a></p>
<p>In <code>do_some_transformations()</code>, I am doing nothing but applying a schema to the JSON output.
There are no errors/failures while I write the data into the table either.
But when I check the table for data, I don't see all of the records:
<code>select count(*) from some_database.some_tablebname</code> only returns <code>1735</code> records (result shown in the screenshot).
This count also varies every time I run all of the threads; sometimes it is <code>5000</code>, sometimes <code>8000</code>, and so on.</p>
<p>All the API calls have a status code of <code>200</code>, and I also printed the content of the API responses once and saw that the calls are indeed returning data.
Could anyone tell me what mistake I am making here, so that I end up with the complete data set, i.e. as many rows as there are in my list?</p>
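<p>Not a definitive diagnosis, but one classic cause fits the symptoms: <code>self.finalDf = self.finalDf.unionAll(api_dataframe)</code> is a read-modify-write on shared state, so two threads can read the same <code>finalDf</code> and one union silently overwrites the other, losing rows non-deterministically. A minimal standard-library sketch of the usual fix: have each thread append its raw result to a lock-guarded list, and build one DataFrame at the very end (e.g. a single <code>spark.createDataFrame(results, schema)</code>). Spark objects are left out so this runs anywhere; the payload dict is a stand-in.</p>
<pre><code>
```python
import threading

lock = threading.Lock()
results = []  # one entry per API call; no shared DataFrame is mutated

def get_json(param):
    record = {"param": param}   # stand-in for the parsed API payload
    with lock:                  # guard the shared accumulator
        results.append(record)

threads = [threading.Thread(target=get_json, args=(i,)) for i in range(200)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(results))  # 200 -- nothing lost
# afterwards, build ONE DataFrame: spark.createDataFrame(results, schema)
```
</code></pre>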
| <python><apache-spark><pyspark><python-multiprocessing><python-multithreading> | 2023-10-30 15:36:05 | 2 | 2,153 | Metadata |
77,389,869 | 12,684,429 | Infill datetime index with all dates | <p>I have a dataframe with various dates and each date's corresponding value.</p>
<p>I would like a dataframe where every day is accounted for, with the missing days filled in from the previous value.</p>
<p>So at present I have</p>
<pre><code> Value
01/01/2013 23
09/01/2013 43
13/01/2013 12
19/01/2013 35
</code></pre>
<p>and I would like:</p>
<pre><code> Value
01/01/2013 23
02/01/2013 23
03/01/2013 23
04/01/2013 23
05/01/2013 23
06/01/2013 23
07/01/2013 23
08/01/2013 23
09/01/2013 43
10/01/2013 43
11/01/2013 43
12/01/2013 43
13/01/2013 12
14/01/2013 12
15/01/2013 12
16/01/2013 12
17/01/2013 12
18/01/2013 12
19/01/2013 35
</code></pre>
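<p>A hedged sketch of one way to do this (assuming the dates are day-first strings to be used as the index): parse them with <code>pd.to_datetime(..., dayfirst=True)</code>, switch to a daily frequency with <code>asfreq('D')</code>, and forward-fill the gaps.</p>
<pre><code>
```python
import pandas as pd

df = pd.DataFrame(
    {"Value": [23, 43, 12, 35]},
    index=pd.to_datetime(
        ["01/01/2013", "09/01/2013", "13/01/2013", "19/01/2013"],
        dayfirst=True,
    ),
)

filled = df.asfreq("D").ffill()   # daily index, gaps filled from above
print(len(filled))                # 19
```
</code></pre>
<p>Note that the column is upcast to float, because <code>asfreq</code> first inserts <code>NaN</code> rows before <code>ffill</code> fills them.</p>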
| <python><pandas><date><datetime> | 2023-10-30 15:18:54 | 2 | 443 | spcol |
77,389,825 | 1,191,058 | Serve file from zipfile using FastAPI | <p>I would like to serve a file from a zip file.</p>
<p>Is there some method to serve files from a zip file that is clean and supports handling exceptions?</p>
<hr />
<p>Here are my experiments</p>
<p>There is the first naive approach but served files can be arbitrarily large so I don't want to load the whole content in memory.</p>
<pre class="lang-py prettyprint-override"><code>import zipfile
from typing import Annotated, Any
from fastapi import FastAPI, Depends
from fastapi.responses import StreamingResponse
app = FastAPI()
zip_file_path = "data.zip"
file_path = "index.html"
@app.get("/zip0")
async def zip0():
with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
return Response(content=zip_file.read(file_path))
</code></pre>
<p>FastAPI/starlette provides <code>StreamingResponse</code>, which should do exactly what I want, but in this case it does not work with zipfile, failing with <code>read from closed file.</code></p>
<pre class="lang-py prettyprint-override"><code>
@app.get("/zip1")
async def zip1():
with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
with zip_file.open(file_path) as file_like:
return StreamingResponse(file_like)
</code></pre>
<p>I can hack around it by moving everything to another function and setting it as a dependency. Now I can use <code>yield</code>, so it correctly streams the content and closes the file after finishing. The problem is that it is an ugly hack, and using <code>Response</code> as a dependency is not supported. Just "fixing" the type annotation from <code>Any</code> to <code>StreamingResponse</code> raises an assertion saying a big no-no to using <code>StreamingResponse</code> for dependency injection.</p>
<pre class="lang-py prettyprint-override"><code>def get_file_stream_from_zip():
with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
with zip_file.open(file_path) as file_like:
yield StreamingResponse(file_like)
@app.get("/zip2")
async def zip2(
streaming_response: Annotated[Any, Depends(get_file_stream_from_zip)],
):
return streaming_response
</code></pre>
<p>I can do something in the middle that seems legit, but it is not possible to handle exceptions: the headers have already been sent by the time the code recognizes that, e.g., the zip file does not exist, which is not a problem with the previous methods.</p>
<pre class="lang-py prettyprint-override"><code>def get_file_from_zip():
with zipfile.ZipFile(zip_file_path, 'r') as zip_file:
with zip_file.open(file_path) as file_like:
yield file_like
@app.get("/zip3")
async def zip3(
file_like: Annotated[BinaryIO, Depends(get_file_from_zip)],
):
return StreamingResponse(file_like)
</code></pre>
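<p>One hedged alternative: put both context managers inside a plain generator, so the archive stays open for the whole stream and is closed when the stream is exhausted, and validate the member name <em>before</em> returning, while a normal error status can still be sent. The FastAPI wiring is shown only in comments so the sketch runs with the standard library alone (demonstrated on an in-memory archive).</p>
<pre><code>
```python
import io
import zipfile

def iter_zip_member(zip_path, member, chunk_size=64 * 1024):
    # Both context managers live inside the generator, so the archive
    # stays open while streaming and is closed when the stream ends.
    with zipfile.ZipFile(zip_path) as zf:
        with zf.open(member) as fh:
            while chunk := fh.read(chunk_size):
                yield chunk

# Sketch of the endpoint wiring (FastAPI objects omitted so this runs):
#   with zipfile.ZipFile(zip_file_path) as zf:
#       if file_path not in zf.namelist():
#           raise HTTPException(status_code=404)  # headers not sent yet
#   return StreamingResponse(iter_zip_member(zip_file_path, file_path))

# self-check with an in-memory archive
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("index.html", "hello")
print(b"".join(iter_zip_member(buf, "index.html")))  # b'hello'
```
</code></pre>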
| <python><zip><fastapi><starlette> | 2023-10-30 15:13:38 | 0 | 3,487 | j123b567 |
77,389,793 | 1,515,891 | How to decode following string (dictionary as a string value to outer dictionary key) in Python as JSON? | <pre><code>import json

t1 = """
{"abc": "{\"version\": \"1\"}"}
"""
ans = json.loads(t1)
</code></pre>
<p>The above code raises an error because the value of the key "abc" is itself a JSON document embedded as a string (and, since <code>t1</code> is not a raw string, Python turns each <code>\"</code> into a plain <code>"</code>, which makes the outer JSON invalid).</p>
<p>Assuming I cannot change the input mentioned above, how can I decode this string as JSON in Python?</p>
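<p>A sketch of one way to handle it: keep the backslashes intact (a raw string here stands in for the real input) and call <code>json.loads</code> twice, once for the outer object and once for the embedded string.</p>
<pre><code>
```python
import json

t1 = r"""
{"abc": "{\"version\": \"1\"}"}
"""

outer = json.loads(t1)            # {'abc': '{"version": "1"}'}
inner = json.loads(outer["abc"])  # {'version': '1'}
print(inner["version"])           # 1
```
</code></pre>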
| <python><json> | 2023-10-30 15:09:40 | 1 | 2,324 | spt025 |
77,389,609 | 6,326,148 | Iteratively join tree branches/nodes that have the same leaf values | <p>Let's say I have a dataframe with features <code>x..</code> and outcomes <code>y</code>:</p>
<pre><code>import pandas as pd
def crossing(df1: pd.DataFrame, df2: pd.DataFrame) -> pd.DataFrame:
return pd.merge(df1.assign(key=1), df2.assign(key=1), on='key').drop(columns='key')
def crossing_many(*args):
from functools import reduce
return reduce(crossing, args)
df = crossing_many(
pd.DataFrame({'x1': ['A', 'B', 'C']}),
pd.DataFrame({'x2': ['X', 'Y', 'Z']}),
pd.DataFrame({'x3': ['xxx', 'yyy', 'zzz']}),
).assign(y = lambda d: np.random.choice([0, 1], size=len(d)))
</code></pre>
<p>I can plot a tree with <code>bigtree</code> package quite simply:</p>
<pre><code>from bigtree import dataframe_to_tree
def view_pydot(pdot):
from IPython.display import Image, display
plt = Image(pdot.create_png())
display(plt)
features = ["x1", "x2", "x3"]  # columns used as tree levels
tree = (
df
.assign(y=lambda d: d['y'].astype('str'))
.assign(root='Everyone')
.assign(path=lambda d: d[['root'] + features + ['y']].agg('/'.join, axis=1))
.pipe(dataframe_to_tree, path_col='path')
)
view_pydot(tree_to_dot(tree))
</code></pre>
<p>I get something like: <a href="https://i.sstatic.net/846Ue.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/846Ue.png" alt="enter image description here" /></a></p>
<p>The tree is far more complex than it needs to be. I want to iteratively "join" branches/nodes that have the same leaf value, on all levels. For example, something like that:</p>
<p><a href="https://i.sstatic.net/846Ue.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/846Ue.png" alt="enter image description here" /></a></p>
<p>Basically I want to create as simple a tree as possible, so that a person can read it in the sense IF x1=A AND x2=X THEN 1 (i.e. come to the decision through the shortest possible path). It would also make sense to remove nodes that cover all possible values of a feature (for example <code>xxx|yyy|zzz</code>). Thanks!</p>
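<p>Not a full solution, but a sketch of one possible pruning pass with pandas (on hypothetical toy data, not the random data above): if every value of the last feature under a branch leads to the same <code>y</code>, and all possible values of that feature are covered, the split is redundant and the branch can be collapsed before building the tree.</p>
<pre><code>
```python
import pandas as pd

# hypothetical toy data: one branch (x1 == 'A') where every x2 value
# leads to the same outcome, one branch (x1 == 'B') where it does not
df = pd.DataFrame({
    "x1": ["A", "A", "A", "B", "B", "B"],
    "x2": ["X", "Y", "Z", "X", "Y", "Z"],
    "y":  [1, 1, 1, 0, 1, 0],
})

n_x2 = df["x2"].nunique()
collapsed = (
    df.groupby("x1")
      # keep only branches where x2 is redundant: a single outcome AND
      # full coverage of all x2 values
      .filter(lambda g: g["y"].nunique() == 1 and len(g) == n_x2)
      .drop_duplicates("x1")[["x1", "y"]]
)
print(collapsed.to_dict("records"))
```
</code></pre>
<p>Applying such a pass per level, from the deepest feature upward, would shorten paths the way the sketch above shortens <code>x1=A</code> to a single leaf.</p>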
| <python><tree> | 2023-10-30 14:44:50 | 2 | 1,417 | mihagazvoda |
77,389,583 | 1,100,107 | Cannot create mpf from array | <p>I am not fluent in Python.</p>
<p>I have this code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pyvista as pv
from mpmath import hyp2f1, gamma, re, exp
k = 5
n = (k - 1)/k
rho = 1.0/np.sqrt(4**n * gamma((3 - n)/2) * gamma(1 + n/2)/(gamma((3 + n)/2)*gamma(1 - n/2)))
def phi(n, z):
return z**(1 + n)*hyp2f1((1 + n)/2, n, (3 + n)/2, z**2)/(1 + n)
def f(r, t):
z = r * exp(1j*t)
return np.array([
float(re(0.5*(phi(n, z)/rho - rho*phi(-n, z)))),
float(re(0.5j * (rho*phi(-n, z) + phi(n, z)/rho))),
float(re(z))
])
</code></pre>
<p>When I run <code>f(1, 1)</code>, it works fine. But when I do</p>
<pre class="lang-py prettyprint-override"><code>r_ = np.linspace(0, 2, 50)
t_ = np.linspace(1e-6, 2*np.pi, 50)
r, t = np.meshgrid(r_, t_)
x, y, z = f(r, t)
</code></pre>
<p>then I get the error <em>Cannot create mpf from array</em>. What's wrong?</p>
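<p>The error happens because mpmath functions such as <code>hyp2f1</code> and <code>exp</code> only accept scalars, while <code>np.meshgrid</code> hands <code>f</code> whole arrays. A hedged sketch of the usual workaround, <code>np.vectorize</code>, using a simplified <code>f_scalar</code> as a stand-in for the mpmath-based <code>f</code> (the real <code>f</code> can be wrapped the same way).</p>
<pre><code>
```python
import cmath
import numpy as np

def f_scalar(r, t):
    # stand-in for the mpmath-based f(r, t): it must receive plain scalars
    z = r * cmath.exp(1j * t)
    return z.real, z.imag, abs(z)

f_vec = np.vectorize(f_scalar)   # applies f_scalar element-wise

r_, t_ = np.linspace(0, 2, 5), np.linspace(1e-6, 2 * np.pi, 5)
r, t = np.meshgrid(r_, t_)
x, y, z = f_vec(r, t)
print(x.shape)  # (5, 5)
```
</code></pre>
<p>Note that <code>np.vectorize</code> is a convenience loop, not a true vectorization, but it is usually fast enough for plotting-sized grids.</p>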
| <python><numpy><mpmath> | 2023-10-30 14:40:05 | 0 | 85,219 | Stéphane Laurent |
77,389,536 | 1,313,789 | User Mailbox usage report from Google Workspace | <p>I am trying to get a basic email usage report. We need to get user names and the size of their email boxes. The code provided below throws an exception</p>
<blockquote>
<p>"<HttpError 400 when requesting <a href="https://admin.googleapis.com/admin/directory/v1/users?alt=json" rel="nofollow noreferrer">https://admin.googleapis.com/admin/directory/v1/users?alt=json</a> returned "Bad Request". Details: "[{'message': 'Bad Request', 'domain': 'global', 'reason': 'badRequest'}]">" on the line</p>
</blockquote>
<pre><code>users = service.users().list().execute()
</code></pre>
<p>The entire code is provided below:</p>
<pre><code>from __future__ import print_function
import os.path
import csv
import io
from google.auth.transport.requests import Request
from google.oauth2.credentials import Credentials
from google_auth_oauthlib.flow import InstalledAppFlow
from googleapiclient.discovery import build
# If modifying these scopes, delete the file token.json.
#SCOPES = ['https://www.googleapis.com/auth/admin.directory.user.readonly']
SCOPES = ['https://admin.googleapis.com/admin/directory/v1/users']
creds = None
# The file token.json stores the user's access and refresh tokens, and is
# created automatically when the authorization flow completes for the first
# time.
if os.path.exists('token.json'):
creds = Credentials.from_authorized_user_file('token.json', SCOPES)
# If there are no (valid) credentials available, let the user log in.
if not creds or not creds.valid:
if creds and creds.expired and creds.refresh_token:
creds.refresh(Request())
else:
flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
# Save the credentials for the next run
with open('token.json', 'w') as token:
token.write(creds.to_json())
service = build('admin', 'directory_v1', credentials=creds)
print('Getting users in the domain')
users = service.users().list().execute()
# Create a dictionary to store the user data.
user_data = {}
# Iterate over the list of users and get their first name, last name, and mailbox size.
for user in users["users"]:
user_data[user["primaryEmail"]] = {
"firstName": user["name"]["givenName"],
"lastName": user["name"]["familyName"],
"mailboxSize": user["storage"]["quotaUsage"],
}
# Open the CSV file for writing.
with open("user_data.csv", "w", newline="") as f:
writer = csv.writer(f)
# Write the header row.
writer.writerow(["email", "firstName", "lastName", "mailboxSize"])
# Iterate over the user data and write each user's data to a new row in the CSV file.
for email, data in user_data.items():
writer.writerow([email, data["firstName"], data["lastName"], data["mailboxSize"]])
# Close the CSV file.
f.close()
</code></pre>
<p>The credentials are correct; I tested them multiple times. As you can see from the sample, I tried using different scopes.
What am I doing wrong here?</p>
| <python><google-api><google-admin-sdk><google-api-python-client><google-directory-api> | 2023-10-30 14:34:25 | 1 | 2,908 | Yuri |
77,389,396 | 7,347,925 | How to check lon/lat polygon pixels over land or ocean quickly? | <p>I have 2d lon/lat arrays and am trying to check the land type like this:</p>
<pre><code>import numpy as np
from shapely.geometry import Polygon
import cartopy.io.shapereader as shpreader
from shapely.ops import unary_union
lon = np.arange(-180, 181, .1)
lat = np.arange(-90, 91, .1)
lons, lats = np.meshgrid(lon, lat)
land_shp_fname = shpreader.natural_earth(resolution='110m',
category='physical', name='land')
land_geom = unary_union(list(shpreader.Reader(land_shp_fname).geometries()))
grid_names = np.empty_like(lons, dtype=int)
for i in range(len(lon)-1):
for j in range(len(lat)-1):
poly = Polygon([(lon[i], lat[j]), (lon[i+1], lat[j]),
(lon[i+1], lat[j+1]), (lon[i], lat[j+1])])
if poly.intersects(land_geom):
grid_names[j,i] = 1 # Land
else:
grid_names[j,i] = 0 # Ocean
</code></pre>
<p>This is slow when creating a high-resolution grid of 1000x1000 pixels. Any suggestions for improvement?</p>
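<p>Two common speedups, sketched here with a stand-in square "land" polygon so it runs without the Natural Earth data: prepare the geometry once with <code>shapely.prepared.prep</code>, which makes repeated <code>intersects</code> calls much cheaper, and build the cell polygons with <code>shapely.geometry.box</code> instead of listing the four corners by hand. (For the full grid, a spatial index such as <code>shapely.STRtree</code> may help further.)</p>
<pre><code>
```python
import numpy as np
from shapely.geometry import box
from shapely.prepared import prep

land = box(0, 0, 10, 10)   # stand-in for the unary_union() land geometry
prepared = prep(land)      # one-time preparation speeds up repeated tests

lon = np.arange(-5.0, 15.0, 1.0)
lat = np.arange(-5.0, 15.0, 1.0)
cells = [box(x, y, x + 1, y + 1) for x in lon for y in lat]
mask = np.fromiter((prepared.intersects(c) for c in cells), dtype=bool)
print(mask.sum(), "of", mask.size, "cells touch land")
```
</code></pre>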
| <python><numpy><shapely><cartopy> | 2023-10-30 14:16:05 | 2 | 1,039 | zxdawn |
77,389,253 | 1,080,189 | Connexion response validation | <p>Using the latest stable version of Connexion (2.14.2) the <a href="https://connexion.readthedocs.io/en/stable/response.html#response-validation" rel="nofollow noreferrer">documentation</a> states that response validation can be used to "validate all the responses using jsonschema and is specially useful during development"</p>
<p>With the following basic app and openapi specification, the validation mechanism doesn't pick up that the response contains a JSON key/value pair that isn't defined in the schema:</p>
<pre class="lang-yaml prettyprint-override"><code>openapi: "3.0.0"
info:
title: Hello World
version: "1.0"
servers:
- url: /openapi
paths:
/version:
get:
operationId: app.version
responses:
200:
description: Version
content:
application/json:
schema:
type: object
properties:
version:
description: Version
example: '1.2.3'
type: string
</code></pre>
<pre class="lang-py prettyprint-override"><code>import connexion
def version() -> dict:
return {
'foo': 'bar',
'version': '1.2.3'
}
app = connexion.FlaskApp(__name__)
app.add_api('openapi.yaml', validate_responses=True)
app.run(port=3000)
</code></pre>
<p>Validation appears to be working generally, by that I mean I can change the definition of the <code>version</code> function to the following and a validation error will be produced:</p>
<pre class="lang-py prettyprint-override"><code>def version() -> dict:
return {
'foo': 'bar',
'version': 5
}
</code></pre>
<pre class="lang-json prettyprint-override"><code>{
"detail": "5 is not of type 'string'\n\nFailed validating 'type' in schema['properties']['version']:\n {'description': 'Version', 'example': '1.2.3', 'type': 'string'}\n\nOn instance['version']:\n 5",
"status": 500,
"title": "Response body does not conform to specification",
"type": "about:blank"
}
</code></pre>
<p>Is there something I'm doing wrong here or does Connexion/jsonschema not perform validation of extraneous (or unknown) key/value pairs?</p>
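<p>For what it's worth, JSON Schema allows unknown object properties by default, so the extra <code>foo</code> key passes validation by design. A sketch of the schema change that should make the validator reject it (assuming strictness is what you want):</p>
<pre><code>
```yaml
content:
  application/json:
    schema:
      type: object
      additionalProperties: false   # reject keys not listed below
      properties:
        version:
          description: Version
          example: '1.2.3'
          type: string
```
</code></pre>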
| <python><validation><openapi><connexion> | 2023-10-30 13:54:57 | 1 | 1,626 | gratz |
77,389,205 | 1,256,496 | Why is my async Python unit test using mock not catching the assertion? | <p>I have a simple Python script <code>check_all_members.py</code> that calls the Microsoft Graph API to check some Entra ID groups and their members.</p>
<pre class="lang-py prettyprint-override"><code>"""Check if a group contains any external users."""
import asyncio
from msgraph import GraphServiceClient
from azure.identity import DefaultAzureCredential
GROUP_OBJECT_ID = "a69bc697-1c38-4c81-be00-b2632e04f477"
credential = DefaultAzureCredential()
client = GraphServiceClient(credential)
async def get_group_members():
"""Get all members of a group and check if there are any external users."""
members = await client.groups.by_group_id(GROUP_OBJECT_ID).members.get()
externals = [
member
for member in members.value
if member.user_principal_name.lower().startswith("x")
]
assert not externals, "Group contains external users"
asyncio.run(get_group_members())
</code></pre>
<p>I'm trying to write a unit test for this function and here is what I've got so far.</p>
<pre class="lang-py prettyprint-override"><code>import unittest
from unittest.mock import patch, AsyncMock
from check_all_members import get_group_members
class TestGetGroupMembers(unittest.IsolatedAsyncioTestCase):
@patch("check_all_members.client")
async def test_get_group_members_no_externals(self, mock_client):
mock_members = AsyncMock()
mock_members.get.return_value = {
"value": [
{"id": "123", "user_principal_name": "user1@example.com"},
{"id": "456", "user_principal_name": "xuser@example.com"},
]
}
mock_client.groups.by_group_id.return_value = mock_members
await get_group_members()
mock_client.groups.by_group_id.assert_called_once_with(
"a69bc297-1c88-4c89-be00-b2622e04f475"
)
</code></pre>
<p>That seems to work, and it also fails the test when I change the last assertion. However, it should also raise an <code>AssertionError</code>, since one "user_principal_name" starts with an "x". Unfortunately it doesn't, and I cannot figure out why :(</p>
<pre class="lang-py prettyprint-override"><code> with self.assertRaises(AssertionError):
await get_group_members()
</code></pre>
<p>I'm getting the error message and it looks like my returned mock object isn't working properly.</p>
<blockquote>
<p>AssertionError: AssertionError not raised</p>
</blockquote>
<p>Do you have any ideas?</p>
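<p>A hedged sketch of what is probably going wrong, reproduced with only the standard library: the real call chain is <code>client.groups.by_group_id(...).members.get()</code>, but the test sets <code>mock_members.get</code> on <code>by_group_id</code>'s return value, so the code actually awaits an auto-created <code>.members.get()</code> mock that never returns the configured data. Also, the function reads <em>attributes</em> (<code>member.user_principal_name</code>), not dict keys. Mirroring the chain and using attribute-style objects makes the assertion fire (a simplified copy of the function, with the client injected, is used so the sketch is self-contained).</p>
<pre><code>
```python
import asyncio
from types import SimpleNamespace
from unittest.mock import AsyncMock, MagicMock

members_response = SimpleNamespace(value=[
    SimpleNamespace(id="123", user_principal_name="user1@example.com"),
    SimpleNamespace(id="456", user_principal_name="xuser@example.com"),
])

mock_client = MagicMock()
# Mirror the real chain: client.groups.by_group_id(...).members.get()
mock_client.groups.by_group_id.return_value.members.get = AsyncMock(
    return_value=members_response
)

async def get_group_members(client):
    # simplified copy of the function under test, with the client injected
    members = await client.groups.by_group_id("gid").members.get()
    externals = [m for m in members.value
                 if m.user_principal_name.lower().startswith("x")]
    assert not externals, "Group contains external users"

try:
    asyncio.run(get_group_members(mock_client))
except AssertionError as exc:
    print(exc)  # Group contains external users
```
</code></pre>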
| <python><unit-testing><python-asyncio><python-unittest> | 2023-10-30 13:49:51 | 0 | 16,425 | zemirco |
77,389,149 | 5,013,084 | Panel dashboard: links in multipage with loop not correct | <p>I am trying to create a multipage dashboard with <code>panel</code>.</p>
<p>My question is based on this <a href="https://stackoverflow.com/a/76222741/5013084">SO answer</a>:</p>
<pre class="lang-py prettyprint-override"><code>import panel as pn
from panel.template import FastListTemplate
pn.extension()
class Page1:
def __init__(self):
self.content = pn.Column("# Page 1", "This is the content of page 1.")
def view(self):
return self.content
class Page2:
def __init__(self):
self.content = pn.Column("# Page 2", "This is the content of page 2.")
def view(self):
return self.content
def show_page(page_instance):
main_area.clear()
main_area.append(page_instance.view())
pages = {
"Page 1": Page1(),
"Page 2": Page2()
}
page_buttons = {}
for page in pages:
page_buttons[page] = pn.widgets.Button(name=page, button_type="primary")
# page_buttons[page].on_click(lambda event: show_page(pages[page]))
page1_button, page2_button = page_buttons.values()
page1_button.on_click(lambda event: show_page(pages["Page 1"]))
page2_button.on_click(lambda event: show_page(pages["Page 2"]))
sidebar = pn.Column(*page_buttons.values())
main_area = pn.Column(pages["Page 1"].view())
template = FastListTemplate(
title="Multi-Page App",
sidebar=[sidebar],
main=[main_area],
)
template.servable()
</code></pre>
<p>If I am uncommenting this line in the <code>for</code> loop</p>
<pre class="lang-py prettyprint-override"><code>page_buttons[page].on_click(lambda event: show_page(pages[page]))
</code></pre>
<p>and remove (or comment as below) the two <code>on_click</code> statements</p>
<pre class="lang-py prettyprint-override"><code># page1_button.on_click(lambda event: show_page(pages["Page 1"]))
# page2_button.on_click(lambda event: show_page(pages["Page 2"]))
</code></pre>
<p>both links on the sidebar point to page 2.</p>
<p>Can somebody explain to me why this is the case and how I can fix this issue?</p>
<p>Note: Of course, for two pages, a for loop is not needed, however in my case, my app will include a few more pages, and I would like to make the code more robust (i.e. to avoid forgetting to add a page or a click event).</p>
<p>Thank you!</p>
<p>Note: <code>page1_button, page2_button = page_buttons.values()</code> is currently only used because my <code>for</code> loop does not work as intended right now.</p>
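<p>This is Python's late-binding closure behaviour, demonstrable without Panel: every <code>lambda</code> created in the loop closes over the <em>variable</em> <code>page</code>, not its value at that iteration, so after the loop all of them see the last value. Binding the current value as a default argument (or using <code>functools.partial</code>) fixes it.</p>
<pre><code>
```python
pages = ["Page 1", "Page 2"]

# late binding: every lambda sees the *final* value of `page`
late = [lambda: page for page in pages]
print([f() for f in late])    # ['Page 2', 'Page 2']

# binding the current value as a default argument fixes it
bound = [lambda page=page: page for page in pages]
print([f() for f in bound])   # ['Page 1', 'Page 2']
```
</code></pre>
<p>So in the loop it would be, e.g., <code>page_buttons[page].on_click(lambda event, page=page: show_page(pages[page]))</code>.</p>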
| <python><panel-pyviz> | 2023-10-30 13:42:15 | 1 | 2,402 | Revan |
77,389,092 | 2,991,243 | Retrieve text from an XML-formatted string in Python | <p>I have a list of strings that follow a relatively similar format. Here are two examples:</p>
<pre><code>text_1 = '<abstract lang="en" source="my_source" format="org"><p id="A-0001" num="none">My text is here </p><img file="Uxx.md" /></abstract>'
text_2 = '<abstract lang="db" source="abs" format="hrw" abstract-source="my_source"><p>Another text.</p></abstract>'
</code></pre>
<p>I can't vouch for other variations since it's an extensive collection of strings, but it's evident that the format is XML, and my sole objective is to retrieve the text from each of these strings. What do you suggest for this?</p>
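<p>Since the strings are XML, the standard library's <code>xml.etree.ElementTree</code> is usually enough; <code>itertext()</code> gathers the text of all nested elements. A sketch on the first example string (assuming the rest of the collection is equally well-formed):</p>
<pre><code>
```python
import xml.etree.ElementTree as ET

text_1 = '<abstract lang="en" source="my_source" format="org"><p id="A-0001" num="none">My text is here </p><img file="Uxx.md" /></abstract>'

root = ET.fromstring(text_1)
text = "".join(root.itertext()).strip()  # concatenates text of all children
print(text)  # My text is here
```
</code></pre>
<p>If some strings turn out not to be well-formed XML, a more forgiving parser (e.g. BeautifulSoup) would be the fallback.</p>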
| <python><xml><nsregularexpression> | 2023-10-30 13:33:44 | 3 | 3,823 | Eghbal |
77,389,023 | 8,126,390 | VS Code (Windows) workspace with multiple editable python packages | <p>I have a VS Code (1.83.1) workspace with multiple folders each containing a python package.</p>
<pre><code>-Package1
--ClassA
-Package2
--ClassB
</code></pre>
<p>Package1 and Package2 are both packages I'm developing, but Package2 uses the modules in Package1, among other modules. I have installed Package1 in the virtual environment for Package2 with the following command (I added the editable_mode recently to see if it helped...it did not).</p>
<pre><code>pip install -e c:\\users\\user\\GitLab\\modules --config-settings editable_mode=strict
</code></pre>
<p>If I'm editing within ClassB, then open ClassA, the Intellisense or syntax highlighting (language server?) immediately stops working and all text goes white for that ClassA package. The functionality seems to continue in Package2 just fine.</p>
<p>If I restart VS Code, everything works until I do the same above actions. If I only ever view Package1 and ClassA or other Package1 entities, it continues to work. It only seems to cause grief for Package1.</p>
<p>I tried looking at the Extension Logs for the Python language server and added the following to my User settings.json to increase verbosity, but nothing jumps out as an error like I was hoping:</p>
<pre><code>"python.analysis.logLevel": "Trace",
</code></pre>
<p>imports "working"</p>
<p><a href="https://i.sstatic.net/bIAFE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bIAFE.png" alt="enter image description here" /></a></p>
<p>imports "broken"</p>
<p><a href="https://i.sstatic.net/FB2Ky.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FB2Ky.png" alt="enter image description here" /></a></p>
<p>If I restart the Python language server:
<a href="https://i.sstatic.net/NnsYr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/NnsYr.png" alt="enter image description here" /></a>
functionality is restored</p>
<p>I noticed that if I use <code>"python.analysis.extraPaths"</code> in Package2, pointing to the directory for Package1, it causes code in Package1 to stop being parsed properly. It seems like perhaps there's a collision of name spaces or something similar.</p>
<p>I created a new workspace, and only added Package1 and Package2 directories as roots. When I did that, I noticed that when I CTRL+clicked a Package1 entity from Package2, it took me to the <code>build</code> sub-folder that was generated during the editable pip install:</p>
<pre><code>C:\Users\user\gitlab\modules\build\__editable__.modules-0.0.5-py3-none-any\...
</code></pre>
<p>and the text was unparsed as I saw earlier in my other workspace.</p>
<p>Back in my original workspace, I still see the issue. If I manually open the proper location from the file explorer, the parsing takes place as expected.</p>
| <python><visual-studio-code> | 2023-10-30 13:23:23 | 2 | 740 | Brian |
77,388,886 | 616,460 | Trouble porting some GL line strips to core profile | <p>I have to port some legacy OpenGL code to the 3.3+ core profile (which I'm only somewhat familiar with) but there's a specific section I'm having some trouble with, because the only way I can think to do it involves a pretty inflated amount of code.</p>
<p>Basically, I have this:</p>
<pre class="lang-py prettyprint-override"><code>def lines (*points):
glBegin(GL_LINE_STRIP)
for p in points:
glVertex3fv(p)
glEnd()
glLineWidth(1)
for camera in self._cameras.values():
for b in camera.cameraPparts.values():
lines(b.head, b.neck, b.center)
lines(b.neck, b.lshoulder, b.lelbow, b.lwrist)
lines(b.neck, b.rshoulder, b.relbow, b.rwrist)
lines(b.center, b.lhip, b.lknee, b.lankle)
lines(b.center, b.rhip, b.rknee, b.rankle)
</code></pre>
<p>That is, I draw a bunch of multi-segment lines, using <code>GL_LINE_STRIP</code>.</p>
<p>Each strip is a separate poly-line, so it's five line strips in total. Also, there are three "cameras" in that loop, so it's really 15 line strips in total.</p>
<p>The only way I know how to do this with what I know so far of the core profile is:</p>
<ol>
<li>Create 15 separate VAO's</li>
<li>Create a VBO for each (15 times)</li>
<li>Load each poly-line into its own VBO (15 times)</li>
<li><code>glDrawArray</code> for each VBO (15 times)</li>
</ol>
<p>Which seems like a ton of code and data management for five lines.</p>
<p>Is there a smoother, less verbose way to make this happen?</p>
<p>I have a similar problem with some <code>GL_LINE_LOOP</code>s elsewhere, too, so anything here can apply to that as well.</p>
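<p>One common core-profile pattern (a sketch under my own assumptions, not taken from the code above) is to concatenate every poly-line into a single VBO bound to one VAO, and issue them all with <code>glMultiDrawArrays(GL_LINE_STRIP, firsts, counts, n)</code>. Only the offset bookkeeping needs new code; the GL calls are indicated as comments:</p>

```python
# Sketch: pack all poly-lines into one vertex list and compute the
# per-strip first/count arrays that glMultiDrawArrays expects.

def build_strips(*strips):
    """Flatten several poly-lines; return (vertices, firsts, counts)."""
    vertices, firsts, counts = [], [], []
    for strip in strips:
        firsts.append(len(vertices))   # index of this strip's first vertex
        counts.append(len(strip))      # number of vertices in this strip
        vertices.extend(strip)
    return vertices, firsts, counts

# Example: the five strips of one body (3 + 4 + 4 + 4 + 4 vertices):
verts, firsts, counts = build_strips(
    [(0, 0, 0)] * 3, [(0, 0, 0)] * 4, [(0, 0, 0)] * 4,
    [(0, 0, 0)] * 4, [(0, 0, 0)] * 4,
)
# Upload `verts` once into a single VBO bound to one VAO, then:
#   glMultiDrawArrays(GL_LINE_STRIP, firsts, counts, len(firsts))
# which draws every strip in one call instead of 15 separate draws.
```

<p>For the <code>GL_LINE_LOOP</code> case, an element buffer plus <code>glPrimitiveRestartIndex</code> (GL 3.1+) is a similar single-draw-call option.</p>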
| <python><opengl><opengl-3> | 2023-10-30 13:05:19 | 0 | 40,602 | Jason C |
77,388,720 | 10,380,409 | Automation testing with Selenium: click doesn't work on new Safari 17, macOS Sonoma 14.1 | <p>Hi everyone.
I would like to describe my problem.
My tests started to fail on Safari since I upgraded to Safari 17 on macOS Sonoma 14.1.
In particular, it is the click event of an element, for example
Element.click() or button.click():</p>
<pre><code> elem = web_driver.find_element(By.XPATH,Mylocator)
self.mouse_over(elem)
elem.click()
</code></pre>
<p>It seems that the click event is not dispatched; only by performing the click with JS can I get it to work:</p>
<pre><code>web_driver.execute_script("arguments[0].click();", elem)
</code></pre>
<p>I want to point out that the button is visible and not hidden behind other elements, and I have no problems with other browsers (Chrome, Firefox, Edge).
I had no problems even with Safari before the upgrade; everything worked fine.</p>
<p>Has anyone had this problem? If so, how did you solve it? I would not like to use JS all the time to perform my click tests.
Any info is appreciated, thank you.</p>
<pre><code>macOS Sonoma 14.1
Safari 17.1
Selenium 4.14.0
</code></pre>
<p>Update:
I found that the problem occurs when some other application (iTerm, Activity Monitor, an alert, etc.) is open on the machine where the test runs; after quitting those applications, the test works normally.
If the Safari window goes into the background, the test fails; or rather, the click is not dispatched and the element is not found.</p>
<p>P.S. The same tests pass without any errors in Chrome, Firefox and Edge.</p>
| <python><selenium-webdriver><safari><automation-testing><macos-sonoma> | 2023-10-30 12:39:25 | 3 | 826 | Angelotti |
77,388,712 | 1,173,629 | Python passing sys.stdout directly and via variable change the printing order | <p>I am trying to capture the <code>print</code> message from a function, so I can use them later. I have already found the following code for it.</p>
<pre><code>import io
import sys
from contextlib import redirect_stdout
f = io.StringIO()
def hello():
print("From function", file=sys.stdout)
with redirect_stdout(f):
hello()
print("Message from main")
print(f.getvalue())
</code></pre>
<p>Its output is the following, which is the expected output.</p>
<pre><code>Message from main
From function
</code></pre>
<p>But I noticed that behaviour changes if the <code>sys.stdout</code> gets assigned to a variable and then gets passed to the <code>print</code> function. Like following:</p>
<pre><code>import io
import sys
from contextlib import redirect_stdout
filedest = sys.stdout
f = io.StringIO()
def hello():
print("From function", file=filedest)
with redirect_stdout(f):
hello()
print("Message from main")
print(f.getvalue())
</code></pre>
<p>The output is the following</p>
<pre><code>From function
Message from main
</code></pre>
<p>We can see the messages have been reversed, so I am confused: what is the difference between passing <code>sys.stdout</code> to <code>print</code> directly versus through a variable, and what's causing this?</p>
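<p>For what it's worth, the difference can be isolated to <em>when</em> <code>sys.stdout</code> is looked up: <code>file=sys.stdout</code> reads the attribute at call time, after <code>redirect_stdout</code> has swapped it, while <code>filedest = sys.stdout</code> captures the original stream object before the swap, so writes through it bypass the redirect and appear immediately. A minimal sketch:</p>

```python
import io
import sys
from contextlib import redirect_stdout

early = sys.stdout                 # grabs the real stream object now
buf = io.StringIO()
with redirect_stdout(buf):
    # `sys.stdout` is looked up at call time -> goes to the StringIO
    print("late binding", file=sys.stdout)
    # `early` is the object captured before the swap -> bypasses the redirect
    print("early binding", file=early)

print(buf.getvalue())  # only the "late binding" line was captured
```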
| <python> | 2023-10-30 12:38:12 | 1 | 1,767 | Gul |
77,388,298 | 4,435,175 | How to download file from https://docs.google.com/spreadsheets with Google API and authentification / service account? | <p>I want to automatically download a file on a daily basis from a <code>https://docs.google.com/spreadsheets</code> account (service account).</p>
<p>I have a cred.json file with:</p>
<pre><code>{
"type": "service_account",
"project_id": "id_1234",
"private_key_id": "12345678901234567890",
"private_key": "-----BEGIN PRIVATE KEY-----\n1234567890\n-----END PRIVATE KEY-----\n",
"client_email": "id_1234@id_1234.iam.gserviceaccount.com",
"client_id": "1234567890",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://oauth2.googleapis.com/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "https://www.googleapis.com/robot/v1/metadata/x509/id_1234%40id_1234.iam.gserviceaccount.com",
"universe_domain": "googleapis.com"
}
</code></pre>
<p>So far I have :</p>
<pre><code>import os
import io
import google.auth
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
from googleapiclient.http import MediaIoBaseDownload
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = ".env"
with build(serviceName="drive", version="v3", credentials=os.environ) as service:
???
</code></pre>
<p>I can't find a complete example in the Google API docs for my use case.</p>
| <python><google-api-python-client> | 2023-10-30 11:33:06 | 1 | 2,980 | Vega |
77,388,075 | 15,456,681 | Large performance difference between indexing and reshape in numba | <p>In numba >= 0.57 we can now add axes to arrays by indexing with <code>None</code> whereas previously this would have raised:</p>
<pre><code>TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function getitem>) found for signature:
>>> getitem(array(float64, 2d, F), Tuple(slice<a:b>, none))
</code></pre>
<p>so I would've used reshape instead, as that was supported. I have noticed, though, that there is a large difference in performance between indexing with <code>None</code> and reshape, and I was wondering what explains this.
Consider the following code, in which the times with pure numpy are the same but the indexing method is ~2x faster than reshape with numba:</p>
<pre><code>import numpy as np
import numba as nb
K = 4000
m = np.random.rand(K, 3)
n = np.random.rand(K, 3)
m, n = m.T, n.T
def func(m, n):
return m[:, None] * m[None, :] - n[:, None] * n[None, :]
def func2(m, n):
assert m.shape == n.shape
x, y = m.shape
return m.reshape(x, 1, y) * m.reshape(1, x, y) - n.reshape(
1, x, y
) * n.reshape(x, 1, y)
func_nb = nb.njit(func)
func2_nb = nb.njit(func2)
assert np.allclose(func(m, n), func_nb(m, n))
assert np.allclose(func(m, n), func2_nb(m, n))
assert np.allclose(func2(m, n), func2_nb(m, n))
%timeit func(m, n)
%timeit func2(m, n)
%timeit func_nb(m, n)
%timeit func2_nb(m, n)
</code></pre>
<p>Output:</p>
<pre><code>227 µs ± 2.58 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
226 µs ± 610 ns per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
49.9 µs ± 268 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
96.6 µs ± 166 ns per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
</code></pre>
| <python><numpy><performance><numba> | 2023-10-30 10:58:15 | 0 | 3,592 | Nin17 |
77,388,062 | 3,138,238 | Activate a Chrome Extension with a "Browser Action" in Selenium with python | <p>For test purposes I need to use a Chrome extension inside my Selenium tests.
In the picture you can see the extensions button (the puzzle piece); clicking it reveals the extension I want to activate.<br><br>
<a href="https://i.sstatic.net/EEENo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EEENo.png" alt="ChromeBar" /></a></p>
<p>Since the code of the extension is available, I have added an <code>_execute_browser_action</code> command to the <code>manifest.json</code> so I can activate the extension easily using a Selenium ActionChains keyboard shortcut.
This is my edited <code>manifest.json</code> file:</p>
<pre><code>{
"manifest_version": 2,
"name": "The Extension Name",
"...", "...",
"...", "...",
"...", "...",
"commands": {
"_execute_browser_action": {
"suggested_key": {
"default": "Ctrl+I",
"mac": "MacCtrl+I"
}
}
}
}
</code></pre>
<p>The problem is that I tried running the code below, but the extension is not triggered (even though I checked the shortcut manually and it works perfectly):</p>
<pre><code>from selenium import webdriver
from selenium.webdriver import ActionChains, Keys
options = webdriver.ChromeOptions()
# options.add_extension('./my-extension.crx')
options.add_argument("--load-extension=./my-extension-chromium")
options.add_argument("--start-maximized")
driver = webdriver.Chrome(options=options)
url = "http://localhost:9000/"
driver.get(url)
ActionChains(driver)\
.key_down(Keys.CONTROL)\
.send_keys("i")\
.key_up(Keys.CONTROL)\
.perform()
</code></pre>
<p>The other solution I tested, also without success, uses <code>pyautogui</code>:</p>
<pre><code>import pyscreeze
import PIL
from selenium import webdriver
import pyautogui
__PIL_TUPLE_VERSION = tuple(int(x) for x in PIL.__version__.split("."))
pyscreeze.PIL__version__ = __PIL_TUPLE_VERSION
options = webdriver.ChromeOptions()
# options.add_extension('./my-extension.crx')
options.add_argument("--load-extension=./my-extension-chromium")
options.add_argument("--start-maximized")
driver = webdriver.Chrome(options=options)
url = "http://localhost:9000/"
driver.get(url)
# Click on extension icon
v = pyautogui.locateOnScreen("./puzzle_piece_icon.png")
print(v)
pyautogui.click(x=v.left, y=v.top, clicks=1, interval=0.0, button="left")
# that click is not working. after that i should click on the extension icon
# ext_icon = pyautogui.locateOnScreen("./extension_icon.png")
# pyautogui.click(x=ext_icon.left, y=ext_icon.top, clicks=1, interval=0.0, button="left")
driver.quit()
</code></pre>
<p>I would prefer to make the first solution work (with the ActionChains trigger) if it were possible... as I wouldn't have to integrate the <code>pyautogui</code> library.
But any other working solution is welcome.</p>
| <python><selenium-webdriver><testing><automated-tests><keyboard-shortcuts> | 2023-10-30 10:56:27 | 0 | 7,311 | madx |
77,388,002 | 848,746 | huggingface embedding large csv in batches | <p>I have a large csv file (35m rows) in the following format:</p>
<pre><code>id, sentence, description
</code></pre>
<p>Normally, in inference mode, I'd like to use the model like so:</p>
<pre><code># load the model once, then encode each row while iterating over the csv
model = SentenceTransformer('flax-sentence-embeddings/some_model_here', device=gpu_id)
for row in iter_through_csv:
    encs = model.encode(row[1], normalize_embeddings=True)
</code></pre>
<p>But since I have GPUs, I'd like to batch it. However, the file is large (35m rows), so I do not want to read it all into memory before batching.</p>
<p>I am struggling to find a template for batching a CSV with Hugging Face.
What is the optimal way to do this?</p>
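<p>A memory-safe pattern (a sketch with a placeholder file name and batch size; only the batching logic is shown, and the commented model call assumes sentence-transformers' list-accepting <code>encode</code>) is to stream the CSV and encode fixed-size chunks:</p>

```python
# Sketch: stream a large CSV and yield fixed-size batches of rows
# without ever loading the whole file into memory.
import csv
from itertools import islice

def iter_batches(path, batch_size=1024):
    """Yield lists of rows (id, sentence, description) lazily."""
    with open(path, newline="") as f:
        reader = csv.reader(f)
        next(reader)  # skip the header row
        while True:
            batch = list(islice(reader, batch_size))
            if not batch:
                break
            yield batch

# for batch in iter_batches("data.csv"):          # placeholder path
#     sentences = [row[1] for row in batch]
#     encs = model.encode(sentences, normalize_embeddings=True,
#                         batch_size=256, show_progress_bar=False)
```

<p>Passing a whole list to <code>encode</code> lets the library do the GPU batching internally, which is usually much faster than one call per row.</p>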
| <python><csv><sentence-transformers> | 2023-10-30 10:47:17 | 2 | 5,913 | AJW |
77,387,881 | 6,221,742 | Timeseries with pycaret hangs in compare models | <p>I am trying to do time-series forecasting with the PyCaret AutoML package, using the data in the following link <a href="https://www.kaggle.com/datasets/koureasstavros/parts-revenue-from-automotive-industry-dealer" rel="nofollow noreferrer">parts_revenue_data</a> in Google Colab. When I try to compare the models and find the best one, the code hangs and stays at 20%.</p>
<p><a href="https://i.sstatic.net/GvcQr.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GvcQr.png" alt="compare_models" /></a></p>
<p>The code can be found in the following</p>
<pre><code># Only enable critical logging (Optional)
import os
os.environ["PYCARET_CUSTOM_LOGGING_LEVEL"] = "CRITICAL"
def what_is_installed():
from pycaret import show_versions
show_versions()
try:
what_is_installed()
except ModuleNotFoundError:
!pip install pycaret
what_is_installed()
import pandas as pd
import numpy as np
import pycaret
pycaret.__version__ # 3.1.0
df = pd.read_csv('parts_revenue.csv', delimiter=';')
from pycaret.utils.time_series import clean_time_index
cleaned = clean_time_index(data=df,
index_col='Posting Date',
freq='D')
# Verify the resulting DataFrame
print(cleaned.head(n=50))
# parts['MA12'] = parts['Parts Revenue'].rolling(12).mean()
# import plotly.express as px
# fig = px.line(parts, x="Posting Date", y=["Parts Revenue",
# "MA12"], template = 'plotly_dark')
# fig.show()
import time
import numpy as np
from pycaret.time_series import *
# We want to forecast the next 12 days of data and we will use 3
# fold cross-validation to test the models.
fh = 12 # or alternately fh = np.arange(1,13)
fold = 3
# Global Figure Settings for notebook ----
# Depending on whether you are using jupyter notebook, jupyter lab,
# Google Colab, you may have to set the renderer appropriately
# NOTE: Setting to a static renderer here so that the notebook
# saved size is reduced.
fig_kwargs = {
# "renderer": "notebook",
"renderer": "png",
"width": 1000,
"height": 600,
}
"""## EDA"""
eda = TSForecastingExperiment()
eda.setup(cleaned,
fh=fh,
numeric_imputation_target = 0,
fig_kwargs=fig_kwargs
)
eda.plot_model()
eda.plot_model(plot="diagnostics",
fig_kwargs={"height": 800, "width": 1000}
)
eda.plot_model(
plot="diff",
data_kwargs={"lags_list": [[1], [1, 7]],
"acf": True,
"pacf": True,
"periodogram": True},
fig_kwargs={"height": 800, "width": 1500} )
"""## Modeling"""
exp = TSForecastingExperiment()
exp.setup(data = cleaned,
fh=fh,
numeric_imputation_target = 0.0,
fig_kwargs=fig_kwargs,
seasonal_period = 5
)
# compare baseline models
best = exp.compare_models(errors = 'raise') # CODE HANGS HERE!
# plot forecast for 36 months in future
plot_model(best,
plot = 'forecast',
data_kwargs = {'fh' : 24}
)
</code></pre>
<p>Is this related to a bug in PyCaret, or is something wrong with my code?</p>
| <python><time-series><google-colaboratory><forecasting><pycaret> | 2023-10-30 10:30:56 | 1 | 339 | AndCh |
77,387,793 | 1,232,660 | Executing python code with '-c' flag triggers bash | <p>I am using Python with the <a href="https://docs.python.org/3/using/cmdline.html#cmdoption-c" rel="nofollow noreferrer">command-line option <code>-c</code></a>. Printing some strings works:</p>
<pre><code>python -c "print('foo')"
foo
</code></pre>
<p>but printing an exclamation mark triggers some bash-related error:</p>
<pre><code>python -c "print('!')"
-bash: !': event not found
</code></pre>
<p>Can anyone explain what I am doing wrong? I cannot understand why Python/bash cannot parse this very simple example.</p>
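<p>The error message points at bash history expansion: in an interactive shell, <code>!</code> inside double quotes is expanded by bash before Python ever runs. Single quotes (or disabling history expansion) let the character reach Python intact; a quick sketch:</p>

```shell
# Single quotes stop bash from applying history expansion to '!':
python3 -c 'print("!")'

# Alternatively, disable history expansion in the interactive session
# (bash only) before using double quotes:
#   set +H
#   python3 -c "print('!')"
```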
| <python><bash> | 2023-10-30 10:17:05 | 0 | 3,558 | Jeyekomon |
77,387,669 | 10,020,441 | How can I concat strings that might be empty? | <p>Currently I am trying to concat three strings in my Ansible playbook. Two of them can be unset (<code>None</code>/<code>null</code>) and I need them to be separated by underscores.</p>
<p>Here is an example: <code>type_mode="A"</code>, <code>level_mode=None</code>, and <code>what_to_run="C"</code> combine to <code>A_C</code>. If both optional values were <code>None</code> it would just be <code>C</code>; if all were set it would be <code>A_B_C</code>.</p>
<p>My idea was to do this bit in Python:</p>
<pre class="lang-yaml prettyprint-override"><code>- name: "Set Name"
set_fact:
name: "{{ ('' if type_mode is None else type_mode + '_') + ('' if level_mode is None else level_mode + '_') + what_to_run }}"
</code></pre>
<p>But then I get this error:</p>
<pre class="lang-json prettyprint-override"><code>{
"msg": "template error while templating string: Could not load \"None\": 'None'. String: {{ ('' if type_mode is None else type_mode + '_') + ('' if level_mode is None else level_mode + '_') + what_to_run }}. Could not load \"None\": 'None'",
"_ansible_no_log": false
}
</code></pre>
<p>Do you have an idea of how I can concatenate the (possibly missing) values with an underscore separator?</p>
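<p>One way to express the underlying logic, dropping the empty/<code>None</code> parts and joining the rest with underscores, is Jinja2's <code>select</code> filter: with no test name it keeps only truthy items, so both <code>None</code> and <code>''</code> are dropped. A sketch, not tested against the original playbook:</p>

```yaml
- name: "Set Name"
  set_fact:
    name: "{{ [type_mode, level_mode, what_to_run] | select | join('_') }}"
```

<p>With the example values this renders <code>A_C</code>. If a variable may be undefined rather than null, add <code>| default('')</code> to each list item first.</p>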
| <python><string><ansible> | 2023-10-30 09:59:07 | 1 | 515 | Someone2 |
77,387,641 | 4,451,521 | Why nan values appear on plotly heatmap? | <p>I have the following script</p>
<pre><code>import plotly.graph_objects as go
X = [0.001872507, 0.001873447, 0.001874379, 0.001875308, 0.001876231, 0.001877156, 0.001878074, 0.001878988, 0.001879891, 0.00188079]
Y = [15.87916667, 15.8375, 15.90277778, 15.92638889, 16.0875, 16.05833333, 16.11527778, 16.04166667, 16.1125, 16.09583333]
error = [-0.442834483, -0.404160852, -0.361559069, -0.319924963, -0.27351035, -0.222068364, -0.16524067, -0.194427625, -0.173130923, -0.13978894]
fig = go.Figure(data=go.Heatmap(
x=X,
y=Y,
z=error,
colorscale='Viridis'))
fig.update_layout(
title='Heatmap',
xaxis_title='X',
yaxis_title='Y'
)
fig.show()
</code></pre>
<p>As a result I have</p>
<p><a href="https://i.sstatic.net/RT1TO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RT1TO.png" alt="enter image description here" /></a></p>
<p>You can see that when I hover over some point, there is the text "X:0.001873447 y: 16.04167,x:NaN".
Why does this appear? Can it be avoided?</p>
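<p>A hedged explanation sketch: with 1-D <code>x</code>/<code>y</code>/<code>z</code>, <code>go.Heatmap</code> builds a dense grid over the unique x and y values and leaves every cell that has no supplied z as NaN, so with 10 distinct x and 10 distinct y values only 10 of the 100 cells carry data. The grid construction can be mimicked in plain Python:</p>

```python
# Mimic what Heatmap does with 1-D x/y/z: a dense grid over the unique
# coordinates, where cells without a matching (x, y) pair stay NaN.
X = [0.001872507, 0.001873447, 0.001874379]
Y = [15.87916667, 15.8375, 15.90277778]
error = [-0.44, -0.40, -0.36]

xs, ys = sorted(set(X)), sorted(set(Y))
grid = [[float("nan")] * len(xs) for _ in ys]
for x, y, z in zip(X, Y, error):
    grid[ys.index(y)][xs.index(x)] = z

nan_cells = sum(v != v for row in grid for v in row)  # NaN != NaN is True
print(nan_cells)  # 3x3 grid with only 3 filled cells -> 6 NaN cells
```

<p>For scattered points like these, <code>go.Scatter</code> with color-mapped markers, or binning the data onto a regular grid before plotting, avoids hovering over NaN cells.</p>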
| <python><plotly> | 2023-10-30 09:55:19 | 1 | 10,576 | KansaiRobot |
77,387,622 | 4,586,008 | Looking for a way to visualize sequence data in Python | <p>Suppose I have a Pandas dataframe which looks as follows:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Event Type</th>
<th>Start Time</th>
<th>End Time</th>
</tr>
</thead>
<tbody>
<tr>
<td>A</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>B</td>
<td>2.5</td>
<td>5</td>
</tr>
<tr>
<td>C</td>
<td>9.5</td>
<td>11</td>
</tr>
<tr>
<td>A</td>
<td>6</td>
<td>9</td>
</tr>
</tbody>
</table>
</div>
<p>I want to draw it like this:</p>
<p><a href="https://i.sstatic.net/M4Uco.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/M4Uco.png" alt="demo viz" /></a></p>
<p>Note that it is possible for events of the same type to overlap each other.</p>
<p>Which package can I use, and how can this be done in Python?</p>
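<p>Whichever package is chosen (matplotlib's <code>broken_barh</code>, <code>plotly.express.timeline</code>, …), overlapping events of the same type first need to be spread across separate lanes. A sketch of that greedy lane assignment (my own helper, not part of any library):</p>

```python
def assign_lanes(intervals):
    """Assign each (start, end) interval to a lane so that bars in one
    lane never overlap. Returns lane indices for the sorted intervals."""
    lanes_end = []                       # current end time of each lane
    lanes = []
    for start, end in sorted(intervals):
        for i, lane_end in enumerate(lanes_end):
            if start >= lane_end:        # fits after the last bar in lane i
                lanes_end[i] = end
                lanes.append(i)
                break
        else:                            # every lane is busy: open a new one
            lanes_end.append(end)
            lanes.append(len(lanes_end) - 1)
    return lanes

print(assign_lanes([(1, 3), (6, 9)]))    # [0, 0] -> no overlap, one lane
print(assign_lanes([(2.5, 5), (1, 3)]))  # [0, 1] -> overlap, two lanes
```

<p>Each event can then be drawn as a horizontal bar at <code>y = type_offset + lane</code>, e.g. with matplotlib's <code>ax.broken_barh([(start, end - start)], (y, 0.8))</code>.</p>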
| <python><pandas><visualization> | 2023-10-30 09:52:29 | 1 | 640 | lpounng |
77,387,509 | 839,733 | Dynamic padding with period using f-string | <p>Given a width <code>n</code> and an index <code>i</code>, how can I generate a string of length <code>n</code> with <code>x</code> at index <code>i</code> and <code>.</code> at the remaining indices?</p>
<p>For example:</p>
<pre><code>Input: n = 4, i = 1
Output: ".x.."
</code></pre>
<p>Currently I'm doing:</p>
<pre><code>"".join("x" if j == i else "." for j in range(n))
</code></pre>
<p>Another option:</p>
<pre><code>("." * i) + "x" + ("." * (n - i - 1))
</code></pre>
<p>I can also do:</p>
<pre><code>f"{('.' * i)}x{('.' * (n - i - 1))}"
</code></pre>
<p>All work, but I'm wondering if there's a way to do this with f-string, perhaps using some form of padding, as shown <a href="https://stackoverflow.com/a/57826337/839733">here</a>?</p>
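<p>One pure f-string variant (a sketch relying only on standard fill/align format specs) builds both halves with <code>.</code>-filled fields: the first right-aligns <code>x</code> in a field of width <code>i + 1</code>, the second pads an empty string out to the remaining width:</p>

```python
n, i = 4, 1
# '.>' right-aligns 'x' with '.' fill; '.<' left-pads '' with '.' fill
s = f"{'x':.>{i + 1}}{'':.<{n - i - 1}}"
print(s)  # .x..
```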
| <python><string><string-formatting><padding><f-string> | 2023-10-30 09:33:33 | 1 | 25,239 | Abhijit Sarkar |
77,387,489 | 1,159,290 | Why can modules be imported again after removing their location from sys.path? | <p>My goal is to import a module from a given path, and let my Python program import other modules with the same module name but from a different location later on.</p>
<p>I thought I'd do that by changing the system path, but I am facing an issue when removing things from it: it does not look like it is really having any effect.</p>
<p>This question may be related to <a href="https://stackoverflow.com/questions/13793921/removing-path-from-python-search-module-path">Removing path from Python search module path</a>, though the question was not really clear there (if related at all) and therefore not fully answered...</p>
<p>Here is a simple test showing my issue:</p>
<p>First, I created a Python module called pttest in /root:</p>
<pre class="lang-none prettyprint-override"><code>:~/bin# cat /root/pttest.py
pttest="/root/"
</code></pre>
<p>Then I just show my current working dir (simply to demonstrate I am elsewhere):</p>
<pre class="lang-none prettyprint-override"><code>:~/bin# pwd
/root/bin
</code></pre>
<p>Then, I start a Python interpreter, and show the default <code>sys.path</code>... all fine...</p>
<pre class="lang-none prettyprint-override"><code>:~/bin# python3
Python 3.10.4 (main, Apr 2 2022, 09:04:19) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import sys
>>> sys.path
['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages']
</code></pre>
<p>From this Python interpreter, I ask for pttest. Since there has been no import, the variable is not found, as expected.</p>
<pre class="lang-none prettyprint-override"><code>>>> pttest
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pttest' is not defined
</code></pre>
<p>Now, I try to import pttest, but since I have never updated the path, it fails, as expected...</p>
<pre class="lang-none prettyprint-override"><code>>>> from pttest import pttest
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ModuleNotFoundError: No module named 'pttest'
</code></pre>
<p>Now, I add the include dir into the path and show the modified path:</p>
<pre class="lang-none prettyprint-override"><code>>>> sys.path.append("/root")
>>> sys.path
['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages', '/root']
</code></pre>
<p>Trying to refer to pttest still does not work since it hasn't yet been imported:</p>
<pre class="lang-none prettyprint-override"><code>>>> pttest
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pttest' is not defined
</code></pre>
<p>I import my module, and now, since the path contains the correct import directory, it is imported successfully:</p>
<pre class="lang-none prettyprint-override"><code>>>> from pttest import pttest
>>> pttest
'/root/'
</code></pre>
<p>So far so good...
Now I remove the include directory from the system path and show it (clearly showing /root is no longer there):</p>
<pre class="lang-none prettyprint-override"><code>>>> sys.path.remove("/root")
>>> sys.path
['', '/usr/lib/python310.zip', '/usr/lib/python3.10', '/usr/lib/python3.10/lib-dynload', '/usr/local/lib/python3.10/dist-packages', '/usr/lib/python3/dist-packages', '/usr/lib/python3.10/dist-packages']
</code></pre>
<p>I also delete the variable my package defined and showing it no longer exists:</p>
<pre class="lang-none prettyprint-override"><code>>>> del pttest
>>> pttest
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
NameError: name 'pttest' is not defined
</code></pre>
<p><strong>Now, the problem:</strong></p>
<p>I try to re-import my module: since the /root directory which contains this module was removed from the path, I assumed this would not work, but...</p>
<pre class="lang-none prettyprint-override"><code>>>> from pttest import pttest
>>> pttest
'/root/'
</code></pre>
<p>So it looks as if, despite the directory being shown as removed from the system path, internally it is still there somewhere, since the module can still be imported successfully.</p>
<p>That is a problem for me since I need to control where modules (with identical module names) are imported from...</p>
<p>What is the explanation for the above behaviour?</p>
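<p>A hedged pointer at the likely mechanism: a successful import stores the module object in the <code>sys.modules</code> cache, and every later <code>import</code> statement consults that cache before touching <code>sys.path</code> at all. Deleting the name bound by <code>from pttest import pttest</code> removes only the local binding, not the cache entry. A quick demonstration with a stdlib module:</p>

```python
import sys

import json                    # first import: resolved via sys.path, cached

saved = sys.path[:]
sys.path = []                  # empty the search path entirely

import json as json_again      # still succeeds: served from sys.modules
assert json_again is json      # the very same cached module object

sys.path = saved               # restore the path

# To force a genuine re-import from a (new) path, drop the cache entry:
#   del sys.modules["pttest"]
#   import pttest              # now searches sys.path again
```

<p>For loading same-named modules from different directories, deleting the <code>sys.modules</code> entry before each import works, but loading each file under a distinct name with <code>importlib.util.spec_from_file_location</code> is the more controlled route.</p>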
| <python><import><module><path> | 2023-10-30 09:28:58 | 2 | 1,003 | user1159290 |
77,387,338 | 13,518,907 | RetrievalQAWithSourcesChain - NotImplementedError: Saving not supported for this chain type | <p>I built a RAG pipeline and now want to save the model/pipeline locally. However when I try to save it I get an error message. Here is the code and the error output:</p>
<pre><code>prompt_template = """ You are a chatbot having a conversation with a human. You can only answer in the German language.
Do not put English language or English translations into your answer.
{summaries}
Human: {question}
Chatbot:
"""
from langchain.chains import RetrievalQA
from langchain.chains import RetrievalQAWithSourcesChain
prompt = PromptTemplate(input_variables=["summaries","question"],
template=prompt_template)
chain_type_kwargs = {"prompt": prompt}
#search_kwargs={'k': 7} -> The more the better, but at some point the context limit is reached
rag_pipeline = RetrievalQAWithSourcesChain.from_chain_type(
llm=model, chain_type='stuff',
retriever=vectordb.as_retriever(),
chain_type_kwargs=chain_type_kwargs,
)
rag_pipeline.save("llama_rag_modell.json")
</code></pre>
<p>Results in:</p>
<pre><code>NotImplementedError: Saving not supported for this chain type.
</code></pre>
<p>So how can I save my pipeline?</p>
| <python><langchain><large-language-model> | 2023-10-30 09:04:48 | 1 | 565 | Maxl Gemeinderat |
77,387,302 | 2,919,052 | Send email with Python in Windows 11 with "Outlook (New)" | <p>I am trying to send an email in a Windows 11 machine, with a Python code that was previously working on a different Windows 10 machine with Outlook installed.</p>
<p>It seems like, for some reason, in Windows 11, the "real" Outlook desktop app is not installed...instead, it installs some sort of (webview maybe??) version of Outlook that it calls "Outlook New".</p>
<p>Anyway, the issue is that the original code below, which was working before, no longer works.</p>
<pre><code>import win32com.client as win32
outlook = win32.Dispatch("Outlook.Application")
mail = outlook.CreateItem(0)
mail.Subject = "Email Subject"
mail.Body = "Email body,Python win32com and Outlook."
mail.To = "valid@email.here"
mail.Send()
</code></pre>
<p>This fails with traceback</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\dynamic.py", line 84, in _GetGoodDispatch
IDispatch = pythoncom.connect(IDispatch)
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File ".\test_mail.py", line 4, in <module>
outlook = win32.Dispatch("Outlook.Application")
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\__init__.py", line 118, in Dispatch
dispatch, userName = dynamic._GetGoodDispatchAndUserName(dispatch, userName, clsctx)
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\dynamic.py", line 104, in _GetGoodDispatchAndUserName
return (_GetGoodDispatch(IDispatch, clsctx), userName)
File "C:\Users\admin\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.7_qbz5n2kfra8p0\LocalCache\local-packages\Python37\site-packages\win32com\client\dynamic.py", line 87, in _GetGoodDispatch
IDispatch, None, clsctx, pythoncom.IID_IDispatch
pywintypes.com_error: (-2147221005, 'Invalid class string', None, None)
</code></pre>
<p>This suggests that there is a problem finding the "real" Outlook app.</p>
<p>Can anyone suggest a fix for this in Windows 11?</p>
| <python><email><outlook><win32com><office-automation> | 2023-10-30 08:57:38 | 1 | 5,778 | codeKiller |
77,387,280 | 20,266,647 | Performance issue with low microservice utilization in K8s (impact to development and devops also) | <p>When I designed a microservice and deployed it to K8s, I saw that I had a problem getting higher utilization for my microservices (max. utilization was only 0.1-0.3 CPU). Do you have best practices for how we can increase microservice CPU utilization?</p>
<p>Let me describe the LAB environment:</p>
<ul>
<li>K8s with 5 nodes
<ul>
<li>each node with 14 CPU and 128 GB RAM (nodes are built on virtual machines with VMware)</li>
<li>K8s with nginx, with full logging enabled, etc.</li>
</ul>
</li>
<li>Microservice
<ul>
<li>Written in Python (the GIL limits processing within one process, which means max. 1 CPU of utilization)</li>
<li>I used three pods</li>
<li>Interface REST request/response (without additional I/O operations)</li>
<li>The processing time per one call is ~100ms</li>
</ul>
</li>
</ul>
<p>We made performance tests, and you can see these outputs:</p>
<ul>
<li>Microservice utilization max. 0.1-0.3 CPU in each pod</li>
</ul>
<p>I suspect the issue is that K8s management overhead (routing, logging, …) generates higher resource utilization and cannot provide enough throughput to keep our microservices busy. I think the best practices for higher utilization of microservices could be:</p>
<p><strong>1] Increase amount of pods</strong></p>
<ul>
<li>Pros: we will get higher total microservice utilization, but the number of pods per K8s node is limited</li>
<li>Cons: the utilization of each microservice pod will still be the same</li>
</ul>
<p><strong>2] Use micro batch processing</strong></p>
<ul>
<li>Pros: we can bundle calls (per e.g. one or two seconds), and in that case the processing time per call on the microservice side will be higher</li>
<li>Cons: we will increase end-to-end latency because of bundling (not ideal for real-time processing)</li>
</ul>
<p><strong>3] K8s change log level</strong></p>
<ul>
<li>Pros: we can decrease the log level in nginx, etc. to error</li>
<li>Cons: possible problems with detailed issue tracking</li>
</ul>
<p><strong>4] Use K8s nodes with physical HW (not VMware)</strong></p>
<ul>
<li>Pros: better performance</li>
<li>Cons: this change can generate additional costs (new HW) and maintenance</li>
</ul>
<p>Do you use other best practices or ideas for high microservice utilization in K8s (my aim is to get 0.8-1 CPU per pod for this Python code)?</p>
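<p>As a sketch of raising utilization <em>inside</em> each pod rather than only adding pods, a GIL-bound Python service is usually run with several worker processes per container (all names and numbers below are placeholders, not from the LAB setup):</p>

```yaml
# Placeholder pod spec fragment: 4 gunicorn worker processes let one pod
# use up to ~4 CPUs despite the per-process GIL limit.
containers:
  - name: my-microservice
    image: registry.example.com/my-microservice:latest
    command: ["gunicorn", "-w", "4", "-b", "0.0.0.0:8000", "app:app"]
    resources:
      requests:
        cpu: "2"
      limits:
        cpu: "4"
```

<p>The alternative is to keep one worker per pod and let a HorizontalPodAutoscaler scale the replica count on CPU; either way, each Python process stays capped at ~1 CPU.</p>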
| <python><performance><microservices><real-time> | 2023-10-30 08:52:41 | 1 | 1,390 | JIST |
77,387,252 | 11,082,866 | Convert a Column to datetime where some values are not dates | <p>I have a dataframe in which there is a column <code>grn_date</code> which consists of some datetime values and some "-" values, because earlier I filled it like this:</p>
<pre><code> df['grn_date'].fillna('-', inplace=True)
</code></pre>
<p>Earlier I was doing this for the whole column:</p>
<pre><code> # Format the 'grn_date' column to match the rest of the dates
df['grn_date'] = df['grn_date'].dt.strftime('%Y-%m-%d %H:%M:%S.%f+00:00')
# Calculate the "Cycle time" by subtracting 'allot_date' from 'grn_date'
df['allot_date'] = pd.to_datetime(df['allot_date']) # Ensure 'allot_date' is in datetime format
df['grn_date'] = pd.to_datetime(df['grn_date']) # Ensure 'grn_date' is in datetime format
# Calculate the difference and store it in a new column 'Cycle time'
df['Cycle time'] = (df['grn_date'] - df['allot_date']).dt.days
df['grn_date'] = pd.to_datetime(df['grn_date']).dt.strftime('%d-%m-%Y')
df['allot_date'] = pd.to_datetime(df['allot_date']).dt.strftime('%d-%m-%Y')
</code></pre>
<p>But now it'll give the error <code>AttributeError: Can only use .dt accessor with datetimelike values</code>. Do I have to loop over the column with a try/except, or is there another way to do this?</p>
<p>Error:</p>
<pre><code>Traceback (most recent call last):
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/exception.py", line 34, in inner
response = get_response(request)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 115, in _get_response
response = self.process_exception_by_middleware(e, request)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/core/handlers/base.py", line 113, in _get_response
response = wrapped_callback(request, *callback_args, **callback_kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/views/decorators/csrf.py", line 54, in wrapped_view
return view_func(*args, **kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/django/views/generic/base.py", line 71, in view
return self.dispatch(request, *args, **kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 505, in dispatch
response = self.handle_exception(exc)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 465, in handle_exception
self.raise_uncaught_exception(exc)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 476, in raise_uncaught_exception
raise exc
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/rest_framework/views.py", line 502, in dispatch
response = handler(request, *args, **kwargs)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend copy/reports/views.py", line 1421, in get
df['grn_date'] = df['grn_date'].dt.strftime('%Y-%m-%d %H:%M:%S.%f+00:00')
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/generic.py", line 5575, in __getattr__
return object.__getattribute__(self, name)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/accessor.py", line 182, in __get__
accessor_obj = self._accessor(obj)
File "/Users/rahulsharma/PycharmProjects/Trakkia-Backend/venv/lib/python3.8/site-packages/pandas/core/indexes/accessors.py", line 509, in __new__
raise AttributeError("Can only use .dt accessor with datetimelike values")
AttributeError: Can only use .dt accessor with datetimelike values
[30/Oct/2023 09:09:16] "GET /rfid-reportsdownload-1/ HTTP/1.1" 500 17740
</code></pre>
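<p>A loop with try/except is not needed: <code>pd.to_datetime</code> can absorb the <code>"-"</code> placeholders directly. A minimal sketch with toy values (not the real data), using <code>errors="coerce"</code> so unparseable entries become <code>NaT</code>:</p>

```python
import pandas as pd

# Toy frame mimicking the question: 'grn_date' mixes timestamps
# with "-" placeholders that were used to fill NaNs earlier.
df = pd.DataFrame({
    "allot_date": ["2023-10-01 10:00:00", "2023-10-02 09:30:00"],
    "grn_date": ["2023-10-05 08:00:00", "-"],
})

# errors="coerce" converts anything unparseable (like "-") to NaT
# instead of raising, so the .dt accessor works afterwards.
df["grn_date"] = pd.to_datetime(df["grn_date"], errors="coerce")
df["allot_date"] = pd.to_datetime(df["allot_date"])

df["Cycle time"] = (df["grn_date"] - df["allot_date"]).dt.days
print(df["Cycle time"].tolist())  # → [3.0, nan]
```

<p>Rows left as <code>NaT</code> simply produce <code>NaN</code> cycle times; after formatting with <code>.dt.strftime(...)</code> they can be re-filled with <code>"-"</code> via <code>.fillna('-')</code>.</p>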
| <python><pandas> | 2023-10-30 08:47:15 | 1 | 2,506 | Rahul Sharma |
77,387,131 | 13,238,846 | Where can I see Python terminal outputs | <p>I have deployed a Python web app on Azure Web App Service that prints output to the terminal. Where can I see that output without attaching a debugger? My app uses this startup command:</p>
<pre><code>python3.10 -m aiohttp.web -H 0.0.0.0 -P 8000 app:init_func
</code></pre>
<p>I just want to see the outputs of that.</p>
| <python><azure><terminal><azure-web-app-service> | 2023-10-30 08:22:44 | 1 | 427 | Axen_Rangs |
77,387,099 | 3,685,918 | How to fill the rightmost column with values in pandas | <p>There is an unknown number of columns, and each row has exactly one value.</p>
<p>However, I cannot tell which column the number is in.</p>
<p>I would like to know how to fill one value from each row into the rightmost column.</p>
<p>The example below consists of three columns, but I don't know how many there actually are.</p>
<pre><code>import pandas as pd
import io
temp = u"""
col1,col2,col3
nan,nan,3
nan,4,nan
1,nan,nan
"""
data = pd.read_csv(io.StringIO(temp), sep=",")
# data
# Out[31]:
# col1 col2 col3
# 0 NaN NaN 3.0
# 1 NaN 4.0 NaN
# 2 1.0 NaN NaN
What I want:
# data2
# col3
# 0 3.0
# 1 4.0
# 2 1.0
</code></pre>
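<p>One possible approach (a sketch): forward-fill along each row so the single value per row propagates into the rightmost column, then keep only that column.</p>

```python
import io
import pandas as pd

temp = u"""
col1,col2,col3
nan,nan,3
nan,4,nan
1,nan,nan
"""
data = pd.read_csv(io.StringIO(temp), sep=",")

# ffill(axis=1) copies each row's value rightwards across however many
# columns there happen to be; the last column then holds every value.
data2 = data.ffill(axis=1)[[data.columns[-1]]]
print(data2)
```

<p>Since each row has exactly one value, <code>data.max(axis=1)</code> would give the same numbers as a Series, without needing to know the column count.</p>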
| <python><python-3.x><pandas><ffill> | 2023-10-30 08:16:11 | 4 | 427 | user3685918 |
77,387,006 | 19,694,624 | ModuleNotFoundError when importing my own python module | <p>so the problem is that when I try to import something from the module I created, I run into ModuleNotFoundError: No module named '<my_module>'.</p>
<p>My project structure is just like this one:</p>
<pre><code>├── first.py
└── some_dir
├── second.py
└── third.py
</code></pre>
<p>You can replicate the problem with these 3 files:</p>
<p><strong>third.py</strong></p>
<pre><code>""" This is the third file and we store some variable here
that will be imported to the second """
a = 69
</code></pre>
<p><strong>second.py</strong></p>
<pre><code>"""This is the second file. We import
a variable from the third and calculate
the sum of a and b"""
from third import a
b = 10
c = a + b
</code></pre>
<p><strong>first.py</strong></p>
<pre><code>"""This is the first and final file
where everything comes together"""
from some_dir.second import c
print(c)
</code></pre>
<p>And when I run <strong>first.py</strong> I get error:</p>
<pre><code>Traceback (most recent call last):
File "/home/username/moo/goo/foo/first.py", line 3, in <module>
from some_dir.second import c
File "/home/username/moo/goo/foo/first.py", line 5, in <module>
from third import a
ModuleNotFoundError: No module named 'third'
</code></pre>
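<p>The import in <code>second.py</code> fails because, when <code>first.py</code> is run, Python's import root is the directory of <code>first.py</code>, so there is no top-level module named <code>third</code>, only <code>some_dir.third</code>. One common fix is to make <code>second.py</code> import via the package path; the sketch below rebuilds the layout in a temporary directory to demonstrate it end to end:</p>

```python
import subprocess
import sys
import tempfile
from pathlib import Path

# Rebuild the layout from the question in a temp dir, but with second.py
# importing third via the package path visible from first.py's directory.
root = Path(tempfile.mkdtemp())
pkg = root / "some_dir"
pkg.mkdir()
(pkg / "__init__.py").write_text("")       # optional on Python 3, kept for clarity
(pkg / "third.py").write_text("a = 69\n")
(pkg / "second.py").write_text(
    "from some_dir.third import a\n"       # was: from third import a
    "b = 10\n"
    "c = a + b\n"
)
(root / "first.py").write_text("from some_dir.second import c\nprint(c)\n")

out = subprocess.run(
    [sys.executable, "first.py"], cwd=root, capture_output=True, text=True
)
print(out.stdout.strip())  # → 79
```

<p>If <code>some_dir</code> should stay runnable on its own, a relative import (<code>from .third import a</code>) together with <code>python -m</code> also works; manipulating <code>sys.path</code> is the usual last resort.</p>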
| <python><python-3.x><python-import><importerror><python-module> | 2023-10-30 07:55:41 | 1 | 303 | syrok |
77,386,715 | 5,832,540 | Recommended way to approach computed field using I/O operations | <p>I have a deeply nested model which on several levels uses this model for an object stored in an S3 bucket:</p>
<pre class="lang-py prettyprint-override"><code>class S3Object(BaseModel):
bucket: str
key: str
</code></pre>
<p>During serialization I would like to create a <a href="https://docs.aws.amazon.com/AmazonS3/latest/userguide/using-presigned-url.html" rel="nofollow noreferrer">pre-signed URL</a> for each object. I was thinking about simply enhancing the S3 object model using a <a href="https://docs.pydantic.dev/latest/api/fields/#pydantic.fields.computed_field" rel="nofollow noreferrer">computed field</a> like this:</p>
<pre class="lang-py prettyprint-override"><code>class S3Object(BaseModel):
bucket: str
key: str
@computed_field
@property
def url(self) -> str:
# I would actually use a global client.
s3 = boto3.client("s3")
        return s3.generate_presigned_url(
"get_object",
Params={"Bucket": self.bucket, "Key": self.key},
ExpiresIn=3600
)
</code></pre>
<p>I'm convinced that this would work quite well, but I've found <a href="https://github.com/pydantic/pydantic/issues/1227" rel="nofollow noreferrer">this issue</a> that generally discourages using I/O operations during validation. I'm not exactly sure, though, if this also extends to the computed fields which are on the serialization side. And if yes, what would be another way to achieve the goal.</p>
<p>What I would like to avoid in the first place is to think about the structure of the parent model and to generate pre-signed URLs for all the deeply nested S3 objects after dumping the parent model. If the computed fields approach is really not appropriate, maybe there would be a way to traverse the data and act only on the S3 objects?</p>
| <python><pydantic> | 2023-10-30 07:04:37 | 1 | 10,230 | Tomáš Linhart |
77,386,669 | 4,451,521 | Why my heatmap is plotting nans if I expressly extracted the nans? | <p>I have this piece of code</p>
<pre><code> print(abs(data.left_error))
print(data['left_error'].isna().sum())
# Remove rows with NaN values in the 'left_error' column
left_no_nan = data.dropna(subset=['left_error'])
print(abs(left_no_nan.left_error))
print(left_no_nan['left_error'].isna().sum())
fig1 = go.Figure(data=[
go.Heatmap(
z=abs(left_no_nan.left_error),
x=left_no_nan.somedata_left,
y=left_no_nan['velocity'],
colorscale='Viridis'
)])
</code></pre>
<p><code>data</code> has some Nan in the <code>left_error</code> column. The print output is this</p>
<blockquote>
<p>0 0.442834
1 0.404161<br />
2 0.361559<br />
3 0.319925<br />
4 0.273510<br />
...<br />
25405 NaN<br />
25406 NaN<br />
25407 NaN<br />
25408 NaN<br />
25409 NaN<br />
Name: left_error, Length: 25410, dtype: float64<br />
10797<br />
0 0.442834<br />
1 0.404161<br />
2 0.361559<br />
3 0.319925<br />
4 0.273510<br />
...<br />
25322 0.171343<br />
25323 0.347305<br />
25324 0.279475<br />
25325 0.224612<br />
25326 0.299578<br />
Name: left_error, Length: 14613, dtype: float64<br />
0</p>
</blockquote>
<p>So there are no Nan anymore in <code>left_no_nan</code></p>
<p>However, when I run the script I get a heatmap that shows a lot of NaN values when I hover over it.
Why could this be happening?</p>
| <python><pandas><heatmap><plotly> | 2023-10-30 06:56:07 | 1 | 10,576 | KansaiRobot |
77,386,631 | 12,519,954 | how to trigger s3 in aws lambda locally | <p>I was testing an AWS Lambda Python function locally with Docker. My purpose is to trigger a Lambda function when a JSON file is uploaded to an S3 bucket, and I want to test it locally, for which I need the event data.</p>
<p>I have done almost everything with Docker: pushed this Docker image to ECR and then deployed it to AWS Lambda. But how can I get the <strong>event</strong> data when a file is uploaded to the S3 bucket?</p>
<p>Dockerfile</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.10-x86_64
# Copy requirements.txt
COPY requirements.txt ${LAMBDA_TASK_ROOT}
# Install the specified packages
RUN pip3 install -r requirements.txt --target "${LAMBDA_TASK_ROOT}"
# Copy function code
COPY check_engine_function.py ${LAMBDA_TASK_ROOT}
# Set the CMD to your handler (could also be done as a parameter override outside of the Dockerfile)
CMD [ "check_engine_function.handler" ]
</code></pre>
<p>Lambda handler, <code>check_engine_function.py</code>:</p>
<pre><code>import json

def handler(event, context):
bucket = event['Records'][0]['s3']['bucket']['name']
key = event['Records'][0]['s3']['object']['key']
print(bucket)
return {
'statusCode': 200,
'body': json.dumps('TTS processing initiated.')
}
</code></pre>
<p>How can I get the bucket and key when I'm running the Lambda locally? I am
following this article: <a href="https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-instructions" rel="nofollow noreferrer">https://docs.aws.amazon.com/lambda/latest/dg/python-image.html#python-image-instructions</a></p>
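<p>When no real S3 trigger is available, the handler can be exercised with a hand-crafted S3 event shaped like what S3 actually delivers. A sketch (the bucket name and key here are made-up local-testing values):</p>

```python
import json

def handler(event, context):
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = event["Records"][0]["s3"]["object"]["key"]
    print(bucket, key)
    return {"statusCode": 200, "body": json.dumps("TTS processing initiated.")}

# Hand-crafted S3 "ObjectCreated:Put" test event, shaped like what S3
# actually delivers; bucket and key are made-up local-testing values.
sample_event = {
    "Records": [
        {
            "eventSource": "aws:s3",
            "eventName": "ObjectCreated:Put",
            "s3": {
                "bucket": {"name": "my-test-bucket"},
                "object": {"key": "uploads/data.json"},
            },
        }
    ]
}

response = handler(sample_event, None)
print(response["statusCode"])  # → 200
```

<p>With the container image running locally, the same JSON can be POSTed to the Runtime Interface Emulator endpoint, e.g. <code>curl -d @event.json http://localhost:9000/2015-03-31/functions/function/invocations</code>.</p>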
| <python><amazon-web-services><docker><amazon-s3><aws-lambda> | 2023-10-30 06:48:44 | 1 | 308 | Mahfujul Hasan |
77,386,620 | 2,142,577 | Is it possible to call Pyright from code (as an API)? | <p>It seems that <a href="https://github.com/microsoft/pyright" rel="nofollow noreferrer">Pyright</a> (the Python type checker made by Microsoft) can only be used as a command line tool or from VS Code. But is it possible to call pyright from code (as an API)?</p>
<p>For example, mypy <a href="https://mypy.readthedocs.io/en/stable/extending_mypy.html#integrating-mypy-into-another-python-application" rel="nofollow noreferrer">supports</a> usage like:</p>
<pre class="lang-py prettyprint-override"><code>import sys
from mypy import api
result = api.run("your code")
</code></pre>
| <python><pyright> | 2023-10-30 06:46:06 | 3 | 19,596 | laike9m |
77,386,515 | 880,783 | Can I skip a test (or mark it as inconclusive) in pytest while it is running? | <p>I am aware of <code>pytest</code>'s decorators to mark tests as to be skipped (conditionally). However, all those are evaluated before the test starts.</p>
<p>I have a couple of tests that require a user interaction (those are not run on <a href="https://en.wikipedia.org/wiki/Continuous_integration" rel="nofollow noreferrer">CI</a> obviously), and if that interaction is not provided, I would like to mark these tests as, well, anything but "pass" or "fail". "Skipped" would be fine, as would be "inconclusive" or "canceled" or whatever.</p>
<p>Is that possible?</p>
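<p>pytest does support imperative skipping from inside a test that has already started, via <code>pytest.skip(...)</code>. A sketch, with a stand-in for the real user-interaction check:</p>

```python
import pytest

def user_responded() -> bool:
    # Stand-in for the real interaction check; here nobody answers.
    return False

def test_needs_user_interaction():
    if not user_responded():
        # Imperative skip: raises pytest's internal Skipped exception,
        # so the test is reported as skipped even though it already started.
        pytest.skip("no user interaction provided")
    assert True  # the real assertions would run here
```

<p><code>pytest.xfail("reason")</code> similarly marks the running test; there is no built-in "inconclusive" outcome, so a runtime skip is the usual stand-in.</p>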
| <python><pytest> | 2023-10-30 06:19:23 | 1 | 6,279 | bers |
77,386,480 | 2,998,077 | Python to loop through .py scripts for key-word | <p>A file directory (Windows 10) is used to store many "*.py" scripts.</p>
<p>I want to loop through the .py scripts to find out, which of them contains the specific key-word.</p>
<p>What would be a better way than the one below? It often encounters an error such as:</p>
<blockquote>
<p>UnicodeDecodeError: 'utf-8' codec can't decode bytes in position xxxx: invalid continuation byte</p>
</blockquote>
<p>(I've also tried to use 'latin-1' encoding, or read the scripts in 'rb' way)</p>
<pre><code>import os, re
# Define the directory to search in and the keyword to look for
directory = '/path_to_directory'
keyword = 'the_keyword'
# Regular expression pattern to match the keyword
pattern = re.compile(r'\b{}\b'.format(re.escape(keyword)))
# Function to search for the keyword in a file
def search_in_file(file_path):
with open(file_path, 'r', encoding='utf-8') as file:
for line_number, line in enumerate(file, start=1):
if pattern.search(line):
print(f'Found keyword "{keyword}" in {file_path} at line {line_number}:')
print(line.strip())
# Loop through the directory and its subdirectories
for root, dirs, files in os.walk(directory):
for file in files:
if file.endswith('.py'):
file_path = os.path.join(root, file)
search_in_file(file_path)
</code></pre>
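<p>One pragmatic option (a sketch): open the files with <code>errors="replace"</code> so undecodable bytes become U+FFFD instead of raising <code>UnicodeDecodeError</code>; a keyword search does not need the damaged characters to be exact.</p>

```python
import os
import re
import tempfile

keyword = "the_keyword"
pattern = re.compile(r"\b{}\b".format(re.escape(keyword)))

def search_in_file(file_path):
    hits = []
    # errors="replace" turns undecodable bytes into U+FFFD instead of
    # raising UnicodeDecodeError, so one bad byte can't abort the scan.
    with open(file_path, "r", encoding="utf-8", errors="replace") as file:
        for line_number, line in enumerate(file, start=1):
            if pattern.search(line):
                hits.append((line_number, line.strip()))
    return hits

# Demo file containing an invalid UTF-8 byte plus the keyword.
path = os.path.join(tempfile.mkdtemp(), "demo.py")
with open(path, "wb") as f:
    f.write(b"# caf\xe9 comment (a latin-1 byte)\nx = the_keyword\n")
print(search_in_file(path))  # → [(2, 'x = the_keyword')]
```

<p><code>errors="surrogateescape"</code> is an alternative when the original bytes must be recoverable later.</p>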
| <python> | 2023-10-30 06:08:34 | 2 | 9,496 | Mark K |
77,385,954 | 10,987,285 | python re(regular expression) replace all occurrence of a pattern but exclude a scenario | <p>I have a tricky scenario using regular expressions in Python.</p>
<p>Here is the example--</p>
<p>Input:</p>
<pre><code>text = """4056,"Wholesale, Operations","Performed some ""activities"", at 10 am. ",19/12/2022,,"a,B" """
</code></pre>
<p>The Output expecting is:</p>
<pre><code>4056,DUMMY,DUMMY,19/12/2022,,DUMMY
</code></pre>
<p>Basically I want to replace everything in the <code>" "</code> to <code>DUMMY</code>. However, I'd like to escape text wrapped by <code>"" ""</code>. As you can see from the example, <code>"Performed some ""activities"", at 10 am. "</code> should be replaced as a whole.</p>
<p>Any tips? Much appreciated!</p>
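<p>A pattern that understands CSV-style doubled quotes can do this in one pass; the following is a sketch (for production data the <code>csv</code> module is usually the safer parser, and the trailing space of the original string is dropped here for clarity):</p>

```python
import re

text = '4056,"Wholesale, Operations","Performed some ""activities"", at 10 am. ",19/12/2022,,"a,B"'

# A quoted CSV field: an opening quote, then any run of non-quote
# characters or doubled quotes ("" is the escape for a literal quote),
# then the closing quote. Replacing whole matches keeps the escaped
# quotes inside one field from splitting it in two.
quoted_field = re.compile(r'"(?:[^"]|"")*"')

result = quoted_field.sub("DUMMY", text)
print(result)  # → 4056,DUMMY,DUMMY,19/12/2022,,DUMMY
```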
| <python><regex> | 2023-10-30 02:34:16 | 0 | 1,615 | QPeiran |
77,385,932 | 7,578,494 | numpy conditioning values by the index within an axis | <pre><code>ar = np.arange(2*3*3).reshape(2, 3, 3)
</code></pre>
<p>I want to initialize a boolean numpy array filled with <code>True</code>, but only give <code>False</code> for elements of the second position of the axis 2. The desired array is</p>
<pre><code>array([[[ True, False, True],
[ True, False, True],
[ True, False, True]],
[[ True, False, True],
[ True, False, True],
[ True, False, True]]])
</code></pre>
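<p>A minimal sketch: start from an all-<code>True</code> array and switch off index 1 along the last axis with basic slicing.</p>

```python
import numpy as np

# Start all-True, then flip position 1 of the last axis to False
# for every (i, j) pair at once.
mask = np.ones((2, 3, 3), dtype=bool)
mask[:, :, 1] = False
print(mask)
```

<p>Equivalently, <code>np.broadcast_to(np.arange(3) != 1, (2, 3, 3))</code> produces the same pattern without a mutation step.</p>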
| <python><numpy> | 2023-10-30 02:22:00 | 1 | 343 | hlee |
77,385,911 | 18,108,767 | How to reshape a xarray Dataset? | <p>I have a <code>xarray.Dataset</code> named <code>seisnc_2d</code> with dimensions <code>'x': 240200, 'y': 2001</code> these are 200 seismic shots, each seismic shot has a dimension <code>(1201,2001)</code>, which means that the <code>n</code> shot will be <code>seisnc_2d.isel(x=slice((n-1)*1201, n*1201))</code>.</p>
<p>What I want is to somehow reshape this dataset with a new dimension <code>shot</code>, leaving the new <code>seisnc_2d</code> dimensions as <code>'shot':200, 'x': 1201, 'y': 2001</code> so instead of doing the <code>x</code> slice every time I need a seismic shot data, just refer to it as, I guess something like <code>seisnc_2d.isel(shot=n)</code></p>
| <python><python-xarray> | 2023-10-30 02:10:37 | 1 | 351 | John |
77,385,841 | 7,155,684 | How to limit sklearn GridSearchCV cpu usage? | <p>I use <code>GridSearchCV</code> as follows:</p>
<pre><code>gsearch_lgb = GridSearchCV(
model(**self.model_params),
param_grid=self.model_params,
n_jobs=2,
verbose=99,
scoring=self.cv_scoring,
cv=4,
)
</code></pre>
<p>But joblib still uses my all cores:</p>
<p><a href="https://i.sstatic.net/qRCxa.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/qRCxa.jpg" alt="enter image description here" /></a></p>
<p>I also tried <code>n_jobs=-19</code>, since the sklearn documentation says:
"For n_jobs below -1, (n_cpus + 1 + n_jobs) are used".</p>
<p>But it is still not working; all my CPUs are used.</p>
<p>How should I modify my code to reduce CPU usage?</p>
| <python><scikit-learn><parallel-processing><joblib> | 2023-10-30 01:35:25 | 1 | 3,869 | Jim Chen |
77,385,642 | 1,621,041 | How to sympy.nsolve small values? | <pre class="lang-py prettyprint-override"><code>from sympy import *
x = symbols("x")
y = pi*(7.92e-24*x*(1 - exp(2.47376311844078e-6/x)) + 3.91844077961019e-30*exp(2.47376311844078e-6/x))/(x**7*(1 - exp(2.47376311844078e-6/x))**2)
interval = 0, 1e-6
plot(y, (x, *interval))
</code></pre>
<p><a href="https://i.sstatic.net/XjwPn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XjwPn.png" alt="plot" /></a></p>
<p>So <code>y</code> obviously has a zero close to <code>x=0.5e-6</code>.</p>
<p>At first, I tried <code>solve(y, x)</code> but that just kept running seemingly forever. Fair enough, I don't need a symbolic solution - I'm fine with solving it numerically.</p>
<p>So I tried <code>nsolve(y, x, 0.5e-6)</code> next, but that results in <code>0.250000500000000</code>. Again, fair enough, that's even mentioned in the SymPy docs:</p>
<blockquote>
<p>It is not guaranteed that nsolve() will find the root closest to the initial point</p>
</blockquote>
<p>Source: <a href="https://docs.sympy.org/latest/guides/solving/solve-numerically.html#ensure-the-root-found-is-in-a-given-interval" rel="nofollow noreferrer">https://docs.sympy.org/latest/guides/solving/solve-numerically.html#ensure-the-root-found-is-in-a-given-interval</a></p>
<p>Accordings to the docs, I should specify an interval and <code>solver="bisect"</code> so I tried with the interval that I also used for the plot above:</p>
<pre><code>nsolve(y, x, interval, solver="bisect")
</code></pre>
<p>Which results in <code>ZeroDivisionError</code>. Probably because the interval starts at zero? Never mind, instead of using the whole plotting interval, I can be more specific:</p>
<pre><code>nsolve(y, x, (0.4e-6, 0.6e-6), solver="bisect")
</code></pre>
<p>But even that doesn't succeed:</p>
<blockquote>
<p><code>ValueError: Could not find root within given tolerance. (0.297846609020060650238 > 2.16840434497100886801e-19)</code></p>
</blockquote>
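<p>If the sympy routes keep stalling, one workaround is to drop to plain floating-point bisection on the same expression (a sketch using only <code>math</code>; the constants are copied from the question's formula, and the bracket [0.4e-6, 0.6e-6] is read off the plot):</p>

```python
import math

C = 2.47376311844078e-6  # constant copied from the question's expression

def y(x):
    e = math.exp(C / x)
    return math.pi * (7.92e-24 * x * (1 - e) + 3.91844077961019e-30 * e) / (
        x**7 * (1 - e) ** 2
    )

def bisect(f, lo, hi, iters=100):
    """Plain bisection; assumes f(lo) and f(hi) have opposite signs."""
    flo = f(lo)
    for _ in range(iters):
        mid = (lo + hi) / 2
        fmid = f(mid)
        if flo * fmid <= 0:
            hi = mid
        else:
            lo, flo = mid, fmid
    return (lo + hi) / 2

root = bisect(y, 0.4e-6, 0.6e-6)
print(root)  # ≈ 4.98e-07
```

<p>Rescaling the variable before calling <code>nsolve</code> (e.g. substituting x = u*1e-6 so the root sits near u = 0.5) or raising the working precision may also help, though that is untested here.</p>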
| <python><sympy> | 2023-10-30 00:08:23 | 1 | 11,522 | finefoot |
77,385,599 | 1,444,231 | Pyspark aggregation with random number of row selection by using time column and a duration variable | <p>In pyspark, I have a dataframe like the following sample:</p>
<pre><code>id, execution_time, sym, qty
========================================
1, 2023-10-27 15:01:24.2200, aa1, 100
2, 2023-10-27 15:15:21.2200, aa1, 250
3, 2023-10-27 15:27:24.2200, aa2, 350
4, 2023-10-27 15:35:25.2200, aa3, 400
5, 2023-10-27 16:00:25.2200, aa3, 500
6, 2023-10-27 16:15:24.2200, aa4, 100
7, 2023-10-27 16:55:24.2200, aa1, 100
8, 2023-10-27 16:50:24.2200, aa2, 100
========================================
</code></pre>
<p>My requirement is this:
I have a 'duration' variable whose value is 30 (in minutes).
Starting from the first row, I need to apply the value of the duration variable and then group rows as follows.
In this sample data, after applying the 'duration' variable, I should be able to group up to the third row, because the time of the 4th row is greater than the time of the first row + duration (we applied the duration to the first row).</p>
<p>Then I need to start again from the 4th row and apply the duration variable; this time we should group only rows 4 and 5, because the time of the 6th row is greater than the 4th row + duration (we applied the duration to the 4th row).</p>
<p>Then I need to start again from the 6th row and apply the duration variable; this time we should group only the 6th row,
because the time of the 7th row is greater than the 6th row + duration (we applied the duration to the 6th row).</p>
<p>In other words:
after applying the duration to a row's time column (let's call the result the window end),
we group all upcoming rows whose time is not greater than the window end; the first row beyond it becomes the next anchor row, and we apply the duration again.</p>
<p>Is it possible to mark all the rows that fall under the above condition and store this in a new column?
I need this because later I have to do the aggregation.</p>
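<p>The rule described above is a "fixed anchor" window: a window opens at an anchor row, absorbs every row within anchor + duration, and the first row beyond that bound becomes the next anchor. Because each anchor depends on the previous one, the logic is inherently sequential; here it is sketched in plain Python on the sample timestamps (in pyspark the same loop could be applied per group via a pandas UDF, though that part is not shown):</p>

```python
from datetime import datetime, timedelta

def assign_windows(times, duration):
    """Assign a window id per timestamp: a window starts at an anchor row
    and holds every row within anchor + duration; the first row beyond
    that bound becomes the next anchor."""
    window_ids, anchor, wid = [], None, -1
    for t in sorted(times):
        if anchor is None or t > anchor + duration:
            anchor, wid = t, wid + 1
        window_ids.append(wid)
    return window_ids

# The sample execution times (seconds/fractions dropped for brevity).
times = [datetime(2023, 10, 27, h, m) for h, m in
         [(15, 1), (15, 15), (15, 27), (15, 35), (16, 0), (16, 15), (16, 50), (16, 55)]]
print(assign_windows(times, timedelta(minutes=30)))  # → [0, 0, 0, 1, 1, 2, 3, 3]
```

<p>The resulting window id can then serve as the grouping key for the aggregation step.</p>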
| <python><pyspark> | 2023-10-29 23:50:57 | 2 | 361 | yogendra |
77,385,587 | 2,299,692 | Persist ParentDocumentRetriever of langchain | <p>I am using ParentDocumentRetriever of langchain.
Using mostly the code from their <a href="https://python.langchain.com/docs/modules/data_connection/retrievers/parent_document_retriever" rel="noreferrer">webpage</a> I managed to create an instance of ParentDocumentRetriever using bge_large embeddings, NLTK text splitter and chromadb.</p>
<pre class="lang-py prettyprint-override"><code>embedding_function = HuggingFaceEmbeddings(model_name='BAAI/bge-large-en-v1.5', cache_folder=hf_embed_path)
# This text splitter is used to create the child documents
child_splitter = NLTKTextSplitter(chunk_size=400)
# The vectorstore to use to index the child chunks
vectorstore = Chroma(
collection_name="full_documents",
embedding_function=embedding_function,
persist_directory="./chroma_db_child"
)
# The storage layer for the parent documents
store = InMemoryStore()
retriever = ParentDocumentRetriever(
vectorstore=vectorstore,
docstore=store,
child_splitter=child_splitter,
)
retriever.add_documents(docs, ids=None)
</code></pre>
<p>I added documents to it, so that I can query using the small chunks to match but to return the full document: <code>matching_docs = retriever.get_relevant_documents(query_text)</code>
Chromadb collection 'full_documents' was stored in /chroma_db_child. I can read the collection and query it. I get back the chunks, which is what is expected:</p>
<pre class="lang-py prettyprint-override"><code>vector_db = Chroma(
collection_name="full_documents",
embedding_function=embedding_function,
persist_directory="./chroma_db_child"
)
matching_doc = vector_db.max_marginal_relevance_search('whatever', 3)
len(matching_doc)
>>3
</code></pre>
<p>One thing I can't figure out is how to persist the whole structure. This code uses <code>store = InMemoryStore()</code>, which means that once I stopped execution, it goes away.</p>
<p>Is there a way, perhaps using something else instead of <code>InMemoryStore()</code>, to create <code>ParentDocumentRetriever</code> and persist both full documents and the chunks, so that I can restore them later without having to go through <code>retriever.add_documents(docs, ids=None) </code> step?</p>
| <python><py-langchain><chromadb><content-based-retrieval> | 2023-10-29 23:47:18 | 2 | 1,938 | David Makovoz |
77,385,467 | 4,281,353 | python - explanation on different timezone string format for the same | <p>Why <code>datetime.datetime</code> created with <code>zoneinfo.ZoneInfo</code> timezone gives different timezone string with its <code>tzinfo</code> attribute?</p>
<p><a href="https://docs.python.org/3/library/zoneinfo.html#using-zoneinfo" rel="nofollow noreferrer">Using ZoneInfo</a> document says:</p>
<blockquote>
<p>ZoneInfo is a concrete implementation of the datetime.tzinfo abstract base class, and is intended to be attached to tzinfo, either via the constructor, the datetime.replace method or datetime.astimezone:</p>
</blockquote>
<p>If <code>ZoneInfo</code> is attached to <code>tzinfo</code> of the datetime instance that has been created with it, then why does creating it with <strong>Australia/Melbourne</strong> give the different name <strong>AEDT</strong>?</p>
<pre><code>datetime.datetime.now(tz=zoneinfo.ZoneInfo("Australia/Melbourne")).astimezone().tzinfo
---
datetime.timezone(datetime.timedelta(seconds=39600), 'AEDT')
</code></pre>
<p>And <strong>AEDT</strong> is not recognized by <code>ZoneInfo</code>.</p>
<pre><code>datetime.datetime.now(tz=zoneinfo.ZoneInfo("AEDT")).astimezone().tzinfo
---
ZoneInfoNotFoundError: 'No time zone found with key AEDT'
</code></pre>
<p>Please help me understand what the difference is here and how to interoperate between the different timezone strings, e.g. <strong>AEDT</strong> and <strong>Australia/Melbourne</strong>.</p>
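<p>In short, <strong>AEDT</strong> is not a timezone key at all: it is the display abbreviation (<code>tzname()</code>) that Australia/Melbourne uses while daylight saving is in force (AEST otherwise). The <code>.astimezone()</code> call with no argument additionally converts to the machine's local zone and returns a fixed-offset <code>datetime.timezone</code> labelled with that abbreviation, which is why the IANA key seems to disappear. A sketch (assumes the system tz database is available):</p>

```python
from datetime import datetime
from zoneinfo import ZoneInfo

mel = ZoneInfo("Australia/Melbourne")   # an IANA key: what ZoneInfo accepts

summer = datetime(2023, 1, 15, 12, 0, tzinfo=mel)  # daylight saving in force
winter = datetime(2023, 7, 15, 12, 0, tzinfo=mel)

# tzname() is a per-instant display abbreviation, not a lookup key,
# so the very same zone yields two different names over the year:
print(summer.tzname(), winter.tzname())  # → AEDT AEST
print(str(summer.tzinfo))                # → Australia/Melbourne
```

<p>There is no reliable reverse mapping from an abbreviation back to a zone (abbreviations collide across regions), so keep the IANA key around rather than the <code>tzname()</code> string.</p>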
| <python><datetime><timezone> | 2023-10-29 22:53:24 | 1 | 22,964 | mon |
77,385,360 | 2,681,662 | Python unittest mock an attribute | <p>I have a device attached to the PC and I wrote a Python wrapper for its API so I can control it. I wrote <code>unittest</code> tests for my code.</p>
<p>Some values are obtained from the device itself and it cannot be changed.</p>
<p>For example, to check if the device is connected I have to read it from an attribute and I cannot change it.</p>
<p>A really simplified version:</p>
<p>Please notice I know there might be some undefined classes or variables, but it is due to code simplification.</p>
<pre><code>from win32com import client
from pythoncom import com_error
class Device:
def __init__(self, port):
try:
self.device = client.Dispatch(port)
except com_error:
raise WrongSelect(
f"No such {self.__class__.__name__} as {port}")
def get_description(self):
if self.device.connected:
return self.device.Description
raise NotConnected("Device is not Connected")
</code></pre>
<p>Here the test code:</p>
<pre><code>import unittest
from unittest.mock import patch
class TestDevice(unittest.TestCase):
def setUp(self):
self.PORT = "A PORT"
self.DEVICE = Device(self.PORT)
def test_get_description(self):
description = self.DEVICE.get_description()
self.assertIsInstance(description, str)
def test_get_description_not_connected(self):
# How to mock self.DEVICE.device.connected
pass
</code></pre>
<p>How to mock <code>connected</code> value of the object?</p>
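<p>Because <code>self.device</code> is a plain attribute, the test can replace the whole COM object with a <code>MagicMock</code> and set <code>connected</code> on it. A self-contained sketch (the COM dispatch is stubbed out here; in the real suite you could instead patch <code>win32com.client.Dispatch</code> or simply assign <code>self.DEVICE.device = MagicMock(connected=False)</code>):</p>

```python
import unittest
from unittest.mock import MagicMock

class NotConnected(Exception):
    pass

class Device:
    def __init__(self, device):
        self.device = device  # normally this is client.Dispatch(port)

    def get_description(self):
        if self.device.connected:
            return self.device.Description
        raise NotConnected("Device is not Connected")

class TestDevice(unittest.TestCase):
    def setUp(self):
        # Stand-in for the real COM object; attributes behave like the
        # properties the wrapper reads.
        self.DEVICE = Device(MagicMock(connected=True, Description="demo"))

    def test_get_description(self):
        self.assertEqual(self.DEVICE.get_description(), "demo")

    def test_get_description_not_connected(self):
        # Flip the attribute on the mock to simulate a disconnected device.
        self.DEVICE.device.connected = False
        with self.assertRaises(NotConnected):
            self.DEVICE.get_description()
```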
| <python><win32com><python-unittest.mock> | 2023-10-29 22:07:10 | 1 | 2,629 | niaei |
77,385,304 | 19,299,757 | How can I set up IAM authentication for my API Gateway using AWS SAM? | <p>I have the below SAM template to create a Lambda function & an API Gateway with <code>AWS_IAM</code> auth (IAM authentication):</p>
<pre class="lang-yaml prettyprint-override"><code>MyDemoLambdaApiFunction:
Type: AWS::Serverless::Function
Properties:
Description: >
Currently does not support S3 upload event.
Handler: app.lambda_handler
Runtime: python3.11
CodeUri: .
MemorySize: 1028
Events:
MyDemoAPI:
Type: Api
Properties:
RestApiId: !Ref api
Path: /gtl
Method: GET
MyDemoLambdaInvokePermission:
Type: 'AWS::Lambda::Permission'
Properties:
FunctionName: !GetAtt MyDemoLambdaApiFunction.Arn
Action: 'lambda:InvokeFunction'
Principal: apigateway.amazonaws.com
SourceArn: !Sub 'arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*'
api:
Type: AWS::Serverless::Api
Properties:
Name: "myTestApi"
StageName: api
TracingEnabled: true
OpenApiVersion: 3.0.2
Auth:
DefaultAuthorizer: AWS_IAM
InvokeRole: NONE
ResourcePolicy:
CustomStatements:
- Effect: 'Allow'
Action: 'execute-api:Invoke'
Resource: ['execute-api:/api/gtl']
Principal: "*"
</code></pre>
<p>When I try to invoke the API URL from a browser, I get:</p>
<pre class="lang-json prettyprint-override"><code>{"message":"Missing Authentication Token"}
</code></pre>
<p>When I remove the <code>DefaultAuthorizer</code> section, I am able to invoke the gateway URL from the browser.</p>
<p>I want to only allow API invocation from my AWS account.</p>
<p>I also tried this as the <code>Principal</code> but no luck:</p>
<pre class="lang-yaml prettyprint-override"><code>AWS:
- !Sub 'arn:aws:iam::${AWS::AccountId}:role/myRole'
</code></pre>
<p>When do I use AWS_IAM for auth when creating AWS::Lambda::Permission?</p>
| <python><amazon-web-services><aws-lambda><aws-api-gateway> | 2023-10-29 21:42:50 | 1 | 433 | Ram |
77,385,180 | 10,934,417 | Expand Pandas DataFrame with numerical values? | <p>I would like to expand my DataFrame and I have checked out a similar <a href="https://stackoverflow.com/questions/68354526/how-to-repeat-expand-pandas-data-frame">example</a> on stackoverflow but without any success. Any idea how to fix it?</p>
<p>This is my toy example</p>
<pre><code>import pandas as pd
import numpy as np
id = [100, 101]
nums = ['9, 5', '11, 12, 13']
out = [1, 0]
df = pd.DataFrame({'id': id, 'nums':nums, 'y':out})
df
id nums y
0 100 9, 5 1
1 101 11, 12, 13 0
</code></pre>
<p>I use the similar the code with that example as before</p>
<pre><code># explode nums into a sequential order
df['nums'] = ["".join(i.split()) for i in df['nums']]
df['nums'] = df['nums'].apply(lambda s: [s[:i] for i in range(1, len(s)+1, 2)])
df = df.explode('nums')
df.reset_index(drop=True)
df
</code></pre>
<p>but this is what I got (the last 4 rows are wrong)</p>
<pre><code> id nums y
0 100 9 1
0 100 9,5 1
1 101 1 0
1 101 11, 0
1 101 11,12 0
1 101 11,12,1 0
</code></pre>
<p>The CORRECT one should be like following:</p>
<pre><code> id nums y
0 100 9 1
0 100 9,5 1
1 101 11 0
1 101 11,12 0
1 101 11,12,13 0
</code></pre>
<p>any idea? many thanks</p>
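<p>The stray rows come from slicing the joined string in steps of two characters, which breaks as soon as a number has two digits. Splitting on the comma first and joining cumulative prefixes avoids it; a sketch:</p>

```python
import pandas as pd

df = pd.DataFrame({'id': [100, 101], 'nums': ['9, 5', '11, 12, 13'], 'y': [1, 0]})

def prefixes(s):
    # Split into number tokens, then build "1st", "1st,2nd", ... prefixes.
    parts = [p.strip() for p in s.split(',')]
    return [','.join(parts[:i]) for i in range(1, len(parts) + 1)]

df['nums'] = df['nums'].apply(prefixes)
df = df.explode('nums').reset_index(drop=True)
print(df)
```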
| <python><pandas><numpy> | 2023-10-29 20:58:39 | 1 | 641 | DaCard |
77,385,156 | 1,117,119 | Tracing the references for a specific object in Python (memory leak) | <p>I have a 3rd party library creating huge memory leaks. I've called <code>gc.collect</code>, and have no references to any objects created by this library, but it still leaks.</p>
<p>Using <code>gc.get_objects()</code> I've identified numerous objects which should not be alive, as they are specific to this library.</p>
<p>How can I trace which objects are keeping these leaked objects alive (the goal being to trace it to a global variable/list, which I can reset to fix the leak)?</p>
<p>Files and line numbers would be nice, but even having the instances of the holder objects would help a great deal.</p>
| <python><memory-leaks> | 2023-10-29 20:50:11 | 1 | 2,333 | yeerk |
77,385,142 | 3,887,338 | Using a pipe symbol in typing.Literal string | <p>I have a function that accepts certain literals for a specific argument:</p>
<pre><code>from typing import Literal
def fn(x: Literal["foo", "bar", "foo|bar"]) -> None:
reveal_type(x)
</code></pre>
<p>The third contains a pipe symbol (<code>|</code>), <code>"foo|bar"</code>. This is interpreted by <code>mypy</code> as an error, as the name <code>foo</code> is not defined.</p>
<p>I guess this happens due to how forward references are evaluated? I use Python 3.8 with:</p>
<pre><code>from __future__ import annotations
</code></pre>
<p>Is there a way to make this work? I can not change the string due to breaking backward compatibility, but currently, the whole annotation is revealed as <code>Any</code>, i.e. it holds no value.</p>
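<p>For what it's worth, at runtime the pipe inside the string is harmless: it stays an ordinary literal value, as <code>typing.get_args</code> shows, so the complaint is purely about how the checker re-parses the stringified annotation. A sketch (moving the Literal into a named alias, which sometimes sidesteps such parsing quirks, though that is not guaranteed to satisfy every mypy version):</p>

```python
from typing import Literal, get_args

# Named alias for the accepted values; the pipe is just a character here.
FnArg = Literal["foo", "bar", "foo|bar"]

def fn(x: FnArg) -> None:
    pass

print(get_args(FnArg))  # → ('foo', 'bar', 'foo|bar')
```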
| <python><mypy><python-typing> | 2023-10-29 20:47:45 | 1 | 1,202 | Håkon T. |
77,385,112 | 6,197,439 | Pandas str.replace with regex doubles results? | <p>Let's say I have this pandas Series:</p>
<pre class="lang-none prettyprint-override"><code>$ python3 -c 'import pandas as pd; print(pd.Series(["1","2","3","4"]))'
0 1
1 2
2 3
3 4
dtype: object
</code></pre>
<p>I'd like to "wrap" the strings "1","2","3","4" so they are prefixed with "a" and suffixed with "b" -> that is, I want to get "a1b","a2b","a3b","a4b". So I try <a href="https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html" rel="nofollow noreferrer">https://pandas.pydata.org/docs/reference/api/pandas.Series.str.replace.html</a></p>
<pre class="lang-none prettyprint-override"><code>$ python3 -c 'import pandas as pd; print(pd.Series(["1","2","3","4"]).str.replace("(.*)", r"a\1b", regex=True))'
0 a1bab
1 a2bab
2 a3bab
3 a4bab
dtype: object
</code></pre>
<p>So - I did get a "wrap" of "1" into "a1b" -> but then "ab" is repeated one more time?</p>
<p>(Trying this regex in regex101.com, I've noticed I get the same "ghost copies" of "ab" at end if the <code>g</code> flag is enabled; so maybe Pandas <code>.str.replace</code> somehow enables it? But then, default is <code>flags=0</code> for Pandas <code>.str.replace</code> as per docs ?!)</p>
<p>How can I get the entire contents of a column cell "wrapped" in only those characters that I want?</p>
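<p>I can reproduce the doubling with the stdlib <code>re</code> module alone, so I suspect it isn't pandas-specific: since Python 3.7, <code>re.sub</code> also substitutes the zero-width match that <code>(.*)</code> finds at the end of the string. Anchoring the pattern seems to avoid it:</p>

```python
import re

# the unanchored pattern matches "1" and then an empty string at the end,
# so the replacement template is applied twice
print(re.sub(r"(.*)", r"a\1b", "1"))    # a1bab

# anchoring with ^...$ (without re.MULTILINE) allows only a single match
print(re.sub(r"^(.*)$", r"a\1b", "1"))  # a1b
```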
| <python><pandas><regex> | 2023-10-29 20:37:40 | 2 | 5,938 | sdbbs |
77,385,005 | 13,184,183 | Is pd.groupby a generator? | <p>What is underneath the following loop?</p>
<pre class="lang-py prettyprint-override"><code>for name, group in df.groupby('some_col'):
pass
</code></pre>
<p>Is it a generator, or are all groups computed at once and stored in memory?</p>
| <python><pandas> | 2023-10-29 20:02:45 | 1 | 956 | Nourless |
77,384,995 | 13,184,183 | How to expand group for window function in pyspark? | <p>I have a dataframe with the columns <code>id</code>, <code>place</code>, <code>date</code>, <code>value</code>. There is also a list of dates, <code>dates</code>; the dates are the last days of months. Say <code>value</code> can take the values <code>core</code> and <code>not core</code>. I want to create a new column <code>status</code> the following way: if, for the same <code>id</code> and <code>place</code>, the <code>value</code> in the previous month was <code>core</code> and now it is not <code>core</code>, then <code>status</code> is <code>0</code>, else <code>1</code>.</p>
<p>The problem is that for some groups some dates may be missing. For example, given this df</p>
<pre><code>id place value date
1 A core 2023-08-31
1 A not core 2023-09-30
1 A core 2023-11-30
2 A core 2023-10-31
2 A core 2023-11-30
2 B not core 2023-07-31
2 B core 2023-10-31
</code></pre>
<p>and there is list of dates</p>
<pre><code>['2023-07-31', '2023-08-31', '2023-09-30', '2023-10-31', '2023-11-30']
</code></pre>
<p>I expect to get the following output</p>
<pre><code>id place value date prev_month_value status
1 A NONE 2023-07-31 NONE 1
1 A core 2023-08-31 NONE 1
1 A not core 2023-09-30 core 0
1 A NONE 2023-10-31 not core 1
1 A core 2023-11-30 NONE 1
2 A NONE 2023-07-31 NONE 1
2 A NONE 2023-08-31 NONE 1
2 A NONE 2023-09-30 NONE 1
2 A core 2023-10-31 NONE 1
2 A core 2023-11-30 core 1
2 B not core 2023-07-31 NONE 1
2 B NONE 2023-08-31 not core 1
2 B NONE 2023-09-30 NONE 1
2 B core 2023-10-31 NONE 1
2 B NONE 2023-11-30 core 0
</code></pre>
<p>So far I come up with the following solution:</p>
<pre><code>dates_df = pd.DataFrame({'date': dates})
datas = []
for name, group in df.groupby(['id', 'place']):
    local_df = dates_df.assign(id=group['id'].iloc[0], place=group['place'].iloc[0])
    data = group.merge(local_df, on=['id', 'place', 'date'], how='right').sort_values('date')
    data['prev_month_value'] = data['value'].shift(1)
    data['status'] = data.apply(
        lambda x: 0 if x['prev_month_value'] == 'core' and x['value'] != 'core' else 1,
        axis=1
    )
    datas.append(data)
result_df = pd.concat(datas)
</code></pre>
<p>I realise that it could be done via <code>distinct</code>, but it seems to be very inefficient.</p>
<p>So my questions are:</p>
<ol>
<li>Can it be done faster/more efficient in terms of speed and memory?</li>
<li>How can it be done in pyspark? It does not support iterating over groups, and a distinct filter with a subsequent outer join seems very slow. However, as of now I do not see another way.</li>
</ol>
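<p>For what it's worth, here is how far I've gotten vectorising the pandas side without the Python-level group loop (a sketch of the cross-join idea on the sample data above, not necessarily optimal):</p>

```python
import pandas as pd

dates = ['2023-07-31', '2023-08-31', '2023-09-30', '2023-10-31', '2023-11-30']
df = pd.DataFrame({
    'id':    [1, 1, 1, 2, 2, 2, 2],
    'place': ['A', 'A', 'A', 'A', 'A', 'B', 'B'],
    'value': ['core', 'not core', 'core', 'core', 'core', 'not core', 'core'],
    'date':  ['2023-08-31', '2023-09-30', '2023-11-30',
              '2023-10-31', '2023-11-30', '2023-07-31', '2023-10-31'],
})

# cross join every (id, place) pair with every date, then left-join the values
grid = (df[['id', 'place']].drop_duplicates()
          .merge(pd.DataFrame({'date': dates}), how='cross'))
out = grid.merge(df, on=['id', 'place', 'date'], how='left')
out = out.sort_values(['id', 'place', 'date'])

# previous month's value within each (id, place) group
out['prev_month_value'] = out.groupby(['id', 'place'])['value'].shift(1)

# status is 0 only when the previous month was 'core' and this month is not
out['status'] = (~((out['prev_month_value'] == 'core')
                   & (out['value'] != 'core'))).astype(int)
print(out)
```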
| <python><pandas><pyspark> | 2023-10-29 19:58:18 | 1 | 956 | Nourless |
77,384,867 | 6,224,975 | Get the health status for a Google Vertex Endpoint | <p>Say I have an endpoint:</p>
<pre class="lang-py prettyprint-override"><code>from google.cloud.aiplatform import Endpoint
endpoint = Endpoint(endpoint_name="some_id", location="location")
</code></pre>
<p>I can get predictions using <code>endpoint.predict()</code>, but how do I just make a health-check? I would've assumed that <code>endpoint.health()</code> exists but it doesn't and I can't find anything in the <a href="https://cloud.google.com/python/docs/reference/aiplatform/latest/google.cloud.aiplatform.Endpoint#google_cloud_aiplatform_Endpoint_preview" rel="nofollow noreferrer">docs</a>.</p>
| <python><google-cloud-platform><google-cloud-vertex-ai> | 2023-10-29 19:22:31 | 0 | 5,544 | CutePoison |
77,384,427 | 9,494,140 | How to log the user into Directus from a different domain and login form, then authenticate him in the Directus admin page | <p>I have created a <code>Directus</code> application and am using it as a back-end. I let the user log in from an external domain, via a login form in a <code>Django</code> app that sends a POST request to authenticate. How do I stop <code>Directus</code> from asking him to log in again after I redirect him to the Directus admin panel link once login succeeds?</p>
<p>here are some of the codes I have used for the whole process :</p>
<p><strong>docker-compose.yaml</strong> for the <code>Directus</code> part:</p>
<pre class="lang-yaml prettyprint-override"><code>version: "3"
services:
  db:
    image: ${DB_IMAGE}
    container_name: ${DB_CONTAINER_NAME}
    volumes:
      - ${DB_VOLUME}
    ports:
      - '${DB_PORT}:5432'
    restart: always
    environment:
      - POSTGRES_DB=${DB_DATABASE}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
      # - PGDATA=/tmp
  directus:
    image: directus/directus:10.7.1
    container_name: ${WEB_CONTAINER_NAME}
    ports:
      - ${APP_PORT}:8055
    restart: always
    volumes:
      - ./uploads:/directus/uploads
    environment:
      KEY: ${KEY}
      SECRET: ${SECRET}
      ADMIN_EMAIL: ${ADMIN_EMAIL}
      ADMIN_PASSWORD: ${ADMIN_PASSWORD}
      DB_CLIENT: ${DB_CLIENT}
      DB_FILENAME: ${DB_FILENAME}
      DB_HOST: ${DB_HOST}
      DB_PORT: 5432
      DB_DATABASE: ${DB_DATABASE}
      DB_USER: ${DB_USER}
      DB_PASSWORD: ${DB_PASSWORD}
      WEBSOCKETS_ENABLED: true
    depends_on:
      - db
</code></pre>
<p>and my Django logic :</p>
<pre class="lang-py prettyprint-override"><code>def authenticate_user(request):
    print("function called")
    if request.method == 'POST':
        email = request.POST.get('email')
        password = request.POST.get('password')

        # Create a JSON payload
        data = {
            "email": email,
            "password": password
        }

        # Send a POST request to the external API
        url = "http://<my_domain>:<port>/auth/login"
        headers = {'Content-Type': 'application/json'}
        response = requests.post(url, data=json.dumps(data), headers=headers)

        if response.status_code == 200:
            # If the response is successful, print it in the console
            print(response.json())
            # Redirect the user to the given URL
            return redirect("http://<my_domain>:<port>/")
        else:
            print(response.json())
            return redirect('login_page')
    else:
        # no response object exists for non-POST requests
        return redirect('login_page')
</code></pre>
<p>Please note I have replaced <code><my_domain></code> and <code><port></code> with the real data.</p>
| <python><django><authentication><directus> | 2023-10-29 17:31:11 | 0 | 4,483 | Ahmed Wagdi |
77,384,412 | 14,566,295 | Define a custom function on-the-fly without defining it explicitly | <p>I have the code below:</p>
<pre><code>from joblib import Parallel, delayed
def process(i):
    return i * i
results = Parallel(n_jobs=2)(delayed(process)(i) for i in range(10))
</code></pre>
<p>I am wondering whether, instead of explicitly defining my function <code>process</code>, I can define the function on-the-fly within <code>Parallel(n_jobs=2)(delayed(<<here ??>>)(i) for i in range(10))</code>.</p>
<p>In this example, my function <code>process</code> is a one-liner and pretty simple. In reality, however, I have a more complex function that I want to define within the <code>Parallel</code> call (not explicitly beforehand).</p>
<p>One such example of my custom function may be</p>
<pre><code>def process(i):
    def process_1(j):
        return i - j + 12 if j > 100 else i + j

    if i > 123:
        k = process_1(i)
    else:
        k = process_1(i - 23)
    return k * i
</code></pre>
| <python> | 2023-10-29 17:28:06 | 1 | 1,679 | Brian Smith |
77,384,222 | 10,966,677 | Pandas read_html throws ParseError: Document is empty because of emoji | <p>While scraping a web page searching for tables using <code>Pandas.read_html()</code>, I run into this error due to emojis in the HTML source:</p>
<pre><code>lxml.etree.ParserError: Document is empty
</code></pre>
<p>I have tried both reading directly from the html source (as string) and reading the <code><table></code> tag extracted from the html.</p>
<p>For the scraping, I am using Selenium and Beautiful Soup including <code>html5lib</code> and <code>lxml</code> to allow Pandas to interpret html.</p>
<p>Since the page that I am scraping is very large, let me post a reproducible example.</p>
<p>Suppose that you have extracted the <code><table></code> tag from the html source with <code>soup.find_all("table")</code>, so that the string that you want to parse is <code>table_tag</code>:</p>
<pre><code>import pandas as pd
table_tag = """<table>
<tr>
<th>Company</th>
<th>Contact</th>
<th>Country</th>
</tr>
<tr>
<td>Notfall Software 🚑</td>
<td>Mario Müller</td>
<td>Germany</td>
</tr>
<tr>
<td>Centro comercial Pélican</td>
<td>Francisco Villa 😅</td>
<td>Mexico</td>
</tr>
</table>"""
tables = pd.read_html(table_tag, encoding='utf-8') # FAILS HERE
df = tables[0] # extract the 1-st as there is only one table
</code></pre>
<p>If you manually remove the emojis from the string, you get the correct result without errors:</p>
<pre><code>>>> df
Company Contact Country
0 Notfall Software Mario Müller Germany
1 Centro comercial Pélican Francisco Villa Mexico
</code></pre>
<p>However, I cannot manually edit the original large html source, especially when I am scraping about 100 pages.</p>
<p>How do I read the source without errors or how do I remove the emojis as I won't need them anyway?</p>
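<p>One workaround I'm considering is stripping astral-plane characters from the HTML before parsing. This assumes the offending emojis are all outside the Basic Multilingual Plane, which covers the ones above (🚑, 😅) but not every emoji:</p>

```python
import re

# characters outside the BMP (U+10000 and above), where most emoji live
astral = re.compile("[\U00010000-\U0010FFFF]")

html_src = "<td>Notfall Software 🚑</td><td>Francisco Villa 😅</td>"
print(astral.sub("", html_src))  # <td>Notfall Software </td><td>Francisco Villa </td>
```

<p>I'd still prefer a way to make the parser tolerate them directly, since stripping mutates the data.</p>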
<p>A snapshot of my requirements.txt (the essentials for this post):</p>
<pre><code>pandas==2.0.2
selenium==4.12.0
beautifulsoup4==4.12.2
webdriver-manager==3.8.6
lxml==4.9.2
html5lib==1.1
</code></pre>
| <python><pandas><selenium-webdriver><beautifulsoup> | 2023-10-29 16:34:25 | 1 | 459 | Domenico Spidy Tamburro |
77,384,100 | 20,266,647 | Access key must be provided in Client() arguments or in the V3IO_ACCESS_KEY environment variable | <p>I got the error <code>ValueError: Access key must be provided in Client() arguments or in the V3IO_ACCESS_KEY environment variable</code> during data ingest in MLRun CE (version 1.5.0).</p>
<p>I used this code:</p>
<pre><code>import mlrun
import mlrun.feature_store as fstore
import sys, time, pandas, numpy
def test():
    project_name = "my-project"
    feature_name = "fs-01"
    mlrun.set_env_from_file("mlrun-nonprod.env")
    project = mlrun.get_or_create_project(project_name, context='./', user_project=False)
    feature_set = fstore.FeatureSet(feature_name,
                                    entities=[fstore.Entity("fn0"), fstore.Entity("fn1")],
                                    engine="storey")
    feature_set.set_targets(targets=[mlrun.datastore.ParquetTarget()], with_defaults=False)
    feature_set.save()
    dataFrm = pandas.DataFrame(numpy.random.randint(low=0, high=1000, size=(100, 10)),
                               columns=[f"fn{i}" for i in range(10)])
    fstore.ingest(feature_set, dataFrm, overwrite=True)

if __name__ == '__main__':
    test()
</code></pre>
<p>Thanks for help.</p>
| <python><mlops><mlrun><feature-store> | 2023-10-29 16:04:34 | 1 | 1,390 | JIST |
77,383,740 | 13,656,045 | wxPython: Video not showing but I can hear the audio | <p>I'm trying to make a simple program to keep or remove videos from a folder (and its subfolders). However, while I can hear the video, I can't see it.</p>
<pre><code>import os
import wx
import wx.media
from moviepy.editor import VideoFileClip
class VideoSelector(wx.Frame):
    def __init__(self):
        wx.Frame.__init__(self, None, title='Video Selector', size=(800, 600))
        self.panel = wx.Panel(self)
        self.media_ctrl = wx.media.MediaCtrl(self.panel, style=wx.SIMPLE_BORDER)
        self.yes_button = wx.Button(self.panel, label="Yes (Keep)", pos=(10, 540))
        self.no_button = wx.Button(self.panel, label="No (Delete)", pos=(120, 540))
        self.video_list = []
        self.current_video_index = 0
        self.yes_button.Bind(wx.EVT_BUTTON, self.keep_video)
        self.no_button.Bind(wx.EVT_BUTTON, self.delete_video)
        self.Bind(wx.EVT_CLOSE, self.quit)
        self.Show()
        self.load_videos_in_directory()

    def get_absolute_path(self):
        dir_path = os.path.dirname(os.path.realpath(__file__))
        folder_path = os.path.join(dir_path, 'downloads')
        return folder_path

    def play(self, video_path):
        self.media_ctrl.Stop()
        if self.media_ctrl.Load(video_path):
            self.media_ctrl.Play()
        else:
            print("Media not found")
            self.quit(None)

    def quit(self, event):
        self.media_ctrl.Stop()
        self.Destroy()

    def keep_video(self, event):
        if self.current_video_index < len(self.video_list):
            self.check_video_duration(self.video_list[self.current_video_index])
            self.next_video()
        else:
            wx.MessageBox("No more videos in the directory.", "Info", wx.OK | wx.ICON_INFORMATION)

    def delete_video(self, event):
        if self.current_video_index < len(self.video_list):
            os.remove(self.video_list[self.current_video_index])
            self.next_video()
        else:
            wx.MessageBox("No more videos in the directory.", "Info", wx.OK | wx.ICON_INFORMATION)

    def next_video(self):
        self.current_video_index += 1
        if self.current_video_index < len(self.video_list):
            video_path = self.video_list[self.current_video_index]
            self.play(video_path)
        else:
            wx.MessageBox("No more videos in the directory.", "Info", wx.OK | wx.ICON_INFORMATION)
            self.media_ctrl.Stop()

    def check_video_duration(self, video_path):
        clip = VideoFileClip(video_path)
        duration = clip.duration
        if duration > 120:
            os.remove(video_path)

    def load_videos_in_directory(self):
        directory = self.get_absolute_path()
        self.video_list = self.get_video_list(directory)
        if not self.video_list:
            wx.MessageBox("No video files found in the directory.", "Info", wx.OK | wx.ICON_INFORMATION)
        else:
            self.current_video_index = 0
            self.play(self.video_list[self.current_video_index])

    def get_video_list(self, directory):
        video_list = []
        for root, _, files in os.walk(directory):
            for file in files:
                if file.lower().endswith(('.mp4', '.avi', '.mkv', '.mov')):
                    video_list.append(os.path.join(root, file))
        return video_list


if __name__ == '__main__':
    app = wx.App()
    Frame = VideoSelector()
    app.MainLoop()
</code></pre>
| <python><video><wxpython> | 2023-10-29 14:27:31 | 1 | 2,208 | Sy Ker |
77,383,474 | 7,556,522 | How to label a pl.Series using two pl.DataFrame not joined with null values? | <h1>The problem</h1>
<p>I have a pl.DataFrame <code>df_signals</code>:</p>
<pre><code>┌──────────────┬──────┬─────────────────────────┬──────────┬──────────┐
│ series_id ┆ step ┆ timestamp ┆ sig1 ┆ sig2 │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ u32 ┆ datetime[μs, UTC] ┆ f32 ┆ f32 │
╞══════════════╪══════╪═════════════════════════╪══════════╪══════════╡
│ 038441c925bb ┆ 0 ┆ 2018-08-14 19:30:00 UTC ┆ 0.550596 ┆ 0.017739 │
│ f8a8de8bdd00 ┆ 0 ┆ 2018-08-14 19:30:00 UTC ┆ 0.220596 ┆ 0.077739 │
│ … ┆ … ┆ … ┆ … ┆ … │
└──────────────┴──────┴─────────────────────────┴──────────┴──────────┘
</code></pre>
<p>and another pl.DataFrame <code>df_events</code>:</p>
<pre><code>
┌──────────────┬───────┬────────┬───────┬─────────────────────────┬────────────┬─────────────┐
│ series_id ┆ night ┆ event ┆ step ┆ timestamp ┆ onset_step ┆ wakeup_step │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ i64 ┆ str ┆ u32 ┆ datetime[μs, UTC] ┆ u32 ┆ u32 │
╞══════════════╪═══════╪════════╪═══════╪═════════════════════════╪════════════╪═════════════╡
│ 038441c925bb ┆ 1 ┆ onset ┆ 4879 ┆ 2018-08-15 02:26:00 UTC ┆ 4879 ┆ null │
│ 038441c925bb ┆ 1 ┆ wakeup ┆ 10932 ┆ 2018-08-15 10:41:00 UTC ┆ null ┆ 10932 │
│ … ┆ … ┆ … ┆ … ┆ … ┆ … ┆ … │
└──────────────┴───────┴────────┴───────┴─────────────────────────┴────────────┴─────────────┘
</code></pre>
<p>I want to set a new column 'state' which should be :</p>
<ul>
<li>0 if the series_id is awake</li>
<li>1 if the series_id is sleeping</li>
</ul>
<p>It looks easy but I wasn't able to compute it efficiently.</p>
<p>How can I solve this, minimizing computation, as I have a lot of data?</p>
<hr />
<h1>Reproducing</h1>
<p>I made artificial data :</p>
<p>3 series ['A','B','C']</p>
<ul>
<li>A sleeps from</li>
</ul>
<pre><code>2022-01-01 02:00:00 -> 2022-01-01 14:00:00 | [3-15[
2022-01-01 22:00:00 -> 2022-01-02 08:00:00 | [23-33[
</code></pre>
<ul>
<li>B sleeps from</li>
</ul>
<pre><code>2022-01-01 03:00:00 -> 2022-01-01 15:00:00 [4-16[
</code></pre>
<ul>
<li>C
starts sleeping at (but never wakes up => mismatch)</li>
</ul>
<pre><code>2022-01-01 04:00:00 [5-?[
</code></pre>
<pre class="lang-py prettyprint-override"><code>timestamps = ([f"2022-01-01 {hour:02d}:00:00" for hour in range(24)] +
              ["2022-01-02 00:00:00"] +
              [f"2022-01-02 {hour:02d}:00:00" for hour in range(1, 13)])

df_signals = pl.DataFrame({
    "series_id": ["A"] * 37 + ["B"] * 37 + ["C"] * 37,
    "timestamp": timestamps * 3,
    "step": list(range(1, 38)) * 3
})

# Extended events data
df_events = pl.DataFrame({
    "series_id": ["A", "A", "A", "A", "B", "B", "C"],
    "night": [1, 1, 1, 2, 1, 1, 1],
    "event": ["onset", "wakeup", "onset", "wakeup", "onset", "wakeup", "onset"],
    "timestamp": ["2022-01-01 02:00:00", "2022-01-01 14:00:00", "2022-01-01 22:00:00",
                  "2022-01-02 08:00:00", "2022-01-01 03:00:00", "2022-01-01 15:00:00",
                  "2022-01-01 04:00:00"],
    "step": [3, 15, 23, 33, 4, 16, 5]
})
</code></pre>
<p>This is what I tried:</p>
<pre class="lang-py prettyprint-override"><code>df_events = df_events.with_columns(
    onset_step=pl.when(pl.col('event') == 'onset').then(pl.col('step')),
    wakeup_step=pl.when(pl.col('event') == 'wakeup').then(pl.col('step'))
)

df = df_signals.join(df_events, on=['series_id', 'timestamp', 'step'], how='left')
df = df.sort(['series_id', 'step'])

df = df.with_columns(
    onset_step=pl.col('onset_step').forward_fill(),
    wakeup_step=pl.col('wakeup_step').forward_fill()
)

df.with_columns(
    state=(pl.col('step') >= pl.col('onset_step')) & (pl.col('step') <= pl.col('wakeup_step')).fill_null(False)
)
</code></pre>
<p>However, I'm not sure how to treat the edge case...
When I use forward_fill() it breaks at the start and when I use backward_fill() it breaks at the end...</p>
<h1>Expected Result</h1>
<pre><code>series_id,timestamp,step,state,event
A,2022-01-01T00:00:00.000000,1,0,
A,2022-01-01T01:00:00.000000,2,0,
A,2022-01-01T02:00:00.000000,3,1,onset
A,2022-01-01T03:00:00.000000,4,1,
A,2022-01-01T04:00:00.000000,5,1,
A,2022-01-01T05:00:00.000000,6,1,
A,2022-01-01T06:00:00.000000,7,1,
A,2022-01-01T07:00:00.000000,8,1,
A,2022-01-01T08:00:00.000000,9,1,
A,2022-01-01T09:00:00.000000,10,1,
A,2022-01-01T10:00:00.000000,11,1,
A,2022-01-01T11:00:00.000000,12,1,
A,2022-01-01T12:00:00.000000,13,1,
A,2022-01-01T13:00:00.000000,14,1,
A,2022-01-01T14:00:00.000000,15,0,wakeup
A,2022-01-01T15:00:00.000000,16,0,
A,2022-01-01T16:00:00.000000,17,0,
A,2022-01-01T17:00:00.000000,18,0,
A,2022-01-01T18:00:00.000000,19,0,
A,2022-01-01T19:00:00.000000,20,0,
A,2022-01-01T20:00:00.000000,21,0,
A,2022-01-01T21:00:00.000000,22,0,
A,2022-01-01T22:00:00.000000,23,1,onset
A,2022-01-01T23:00:00.000000,24,1,
A,2022-01-02T00:00:00.000000,25,1,
A,2022-01-02T01:00:00.000000,26,1,
A,2022-01-02T02:00:00.000000,27,1,
A,2022-01-02T03:00:00.000000,28,1,
A,2022-01-02T04:00:00.000000,29,1,
A,2022-01-02T05:00:00.000000,30,1,
A,2022-01-02T06:00:00.000000,31,1,
A,2022-01-02T07:00:00.000000,32,1,
A,2022-01-02T08:00:00.000000,33,0,wakeup
A,2022-01-02T09:00:00.000000,34,0,
A,2022-01-02T10:00:00.000000,35,0,
A,2022-01-02T11:00:00.000000,36,0,
A,2022-01-02T12:00:00.000000,37,0,
B,2022-01-01T00:00:00.000000,1,0,
B,2022-01-01T01:00:00.000000,2,0,
B,2022-01-01T02:00:00.000000,3,0,
B,2022-01-01T03:00:00.000000,4,1,onset
B,2022-01-01T04:00:00.000000,5,1,
B,2022-01-01T05:00:00.000000,6,1,
B,2022-01-01T06:00:00.000000,7,1,
B,2022-01-01T07:00:00.000000,8,1,
B,2022-01-01T08:00:00.000000,9,1,
B,2022-01-01T09:00:00.000000,10,1,
B,2022-01-01T10:00:00.000000,11,1,
B,2022-01-01T11:00:00.000000,12,1,
B,2022-01-01T12:00:00.000000,13,1,
B,2022-01-01T13:00:00.000000,14,1,
B,2022-01-01T14:00:00.000000,15,1,
B,2022-01-01T15:00:00.000000,16,0,wakeup
B,2022-01-01T16:00:00.000000,17,0,
B,2022-01-01T17:00:00.000000,18,0,
B,2022-01-01T18:00:00.000000,19,0,
B,2022-01-01T19:00:00.000000,20,0,
B,2022-01-01T20:00:00.000000,21,0,
B,2022-01-01T21:00:00.000000,22,0,
B,2022-01-01T22:00:00.000000,23,0,
B,2022-01-01T23:00:00.000000,24,0,
B,2022-01-02T00:00:00.000000,25,0,
B,2022-01-02T01:00:00.000000,26,0,
B,2022-01-02T02:00:00.000000,27,0,
B,2022-01-02T03:00:00.000000,28,0,
B,2022-01-02T04:00:00.000000,29,0,
B,2022-01-02T05:00:00.000000,30,0,
B,2022-01-02T06:00:00.000000,31,0,
B,2022-01-02T07:00:00.000000,32,0,
B,2022-01-02T08:00:00.000000,33,0,
B,2022-01-02T09:00:00.000000,34,0,
B,2022-01-02T10:00:00.000000,35,0,
B,2022-01-02T11:00:00.000000,36,0,
B,2022-01-02T12:00:00.000000,37,0,
C,2022-01-01T00:00:00.000000,1,0,
C,2022-01-01T01:00:00.000000,2,0,
C,2022-01-01T02:00:00.000000,3,0,
C,2022-01-01T03:00:00.000000,4,0,
C,2022-01-01T04:00:00.000000,5,0,onset
C,2022-01-01T05:00:00.000000,6,0,
C,2022-01-01T06:00:00.000000,7,0,
C,2022-01-01T07:00:00.000000,8,0,
C,2022-01-01T08:00:00.000000,9,0,
C,2022-01-01T09:00:00.000000,10,0,
C,2022-01-01T10:00:00.000000,11,0,
C,2022-01-01T11:00:00.000000,12,0,
C,2022-01-01T12:00:00.000000,13,0,
C,2022-01-01T13:00:00.000000,14,0,
C,2022-01-01T14:00:00.000000,15,0,
C,2022-01-01T15:00:00.000000,16,0,
C,2022-01-01T16:00:00.000000,17,0,
C,2022-01-01T17:00:00.000000,18,0,
C,2022-01-01T18:00:00.000000,19,0,
C,2022-01-01T19:00:00.000000,20,0,
C,2022-01-01T20:00:00.000000,21,0,
C,2022-01-01T21:00:00.000000,22,0,
C,2022-01-01T22:00:00.000000,23,0,
C,2022-01-01T23:00:00.000000,24,0,
C,2022-01-02T00:00:00.000000,25,0,
C,2022-01-02T01:00:00.000000,26,0,
C,2022-01-02T02:00:00.000000,27,0,
C,2022-01-02T03:00:00.000000,28,0,
C,2022-01-02T04:00:00.000000,29,0,
C,2022-01-02T05:00:00.000000,30,0,
C,2022-01-02T06:00:00.000000,31,0,
C,2022-01-02T07:00:00.000000,32,0,
C,2022-01-02T08:00:00.000000,33,0,
C,2022-01-02T09:00:00.000000,34,0,
C,2022-01-02T10:00:00.000000,35,0,
C,2022-01-02T11:00:00.000000,36,0,
C,2022-01-02T12:00:00.000000,37,0,
</code></pre>
| <python><dataframe><datetime><python-polars> | 2023-10-29 13:23:55 | 2 | 968 | Olivier D'Ancona |
77,383,348 | 1,982,032 | How can I rewrite the code with pure Playwright code? | <p>I want to get the <code>href</code> attribute of all the <code>a</code> elements on the webpage <code>https://learningenglish.voanews.com/z/1581</code>:</p>
<pre><code>from lxml import html
from playwright.sync_api import sync_playwright
with sync_playwright() as p:
    browser = p.chromium.launch(channel='chrome', headless=True)
    page = browser.new_page()
    url = "https://learningenglish.voanews.com/z/1581"
    page.goto(url, wait_until="networkidle")
    doc = html.fromstring(page.content())
    elements = doc.xpath('//div[@class="media-block__content"]//a')
    for e in elements:
        print(e.attrib['href'])
</code></pre>
<p>It prints all the <code>a</code> elements' <code>href</code> addresses. I tried to achieve the same with pure Playwright code, but failed:</p>
<pre><code>from playwright.sync_api import sync_playwright
with sync_playwright() as p:
    browser = p.chromium.launch(channel='chrome', headless=True)
    page = browser.new_page()
    url = "https://learningenglish.voanews.com/z/1581"
    page.goto(url, wait_until="networkidle")
    elements = page.locator('//div[@class="media-block__content"]//a')
    for e in elements:
        print(e.get_attribute('href'))
</code></pre>
<p>It encounters an error:</p>
<pre><code>TypeError: 'Locator' object is not iterable
</code></pre>
<p>How can I fix it?</p>
| <python><python-3.x><xpath><playwright-python> | 2023-10-29 12:50:42 | 1 | 355 | showkey |
77,383,220 | 9,962,007 | Get names of all numeric columns in a pandas DataFrame (filter by dtype) | <p>Which <code>pandas</code> methods can be used to get names of the columns of a given <code>DataFrame</code> that have numeric <code>dtypes</code> (of all sizes, such as <code>uint8</code>, and not just 64-bit ones), using a single line of code? Note: in practice there are hundreds of columns, so <code>dtypes</code> (or another vectorized method) should be used for detecting data types.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
test_df = pd.DataFrame(data=[{"str_col": "some string",
                              "int_col": 0,
                              "float_col": 3.1415}])
test_df.dtypes[test_df.dtypes == np.dtype('float')].index.values[0]
# "float_col"
test_df.dtypes[test_df.dtypes == np.dtype('int')].index.values[0]
# "int_col"
# ?
# ["float_col", "int_col"]
</code></pre>
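<p>From skimming the API, <code>DataFrame.select_dtypes</code> looks like it might be the vectorized one-liner I'm after (the <code>"number"</code> alias should cover all sizes, including <code>uint8</code>), though I'm not sure it's the canonical approach:</p>

```python
import pandas as pd

test_df = pd.DataFrame(data=[{"str_col": "some string",
                              "int_col": 0,
                              "float_col": 3.1415}])

# "number" selects every numpy numeric dtype (int*, uint*, float*, complex*)
numeric_cols = test_df.select_dtypes(include="number").columns.tolist()
print(numeric_cols)  # ['int_col', 'float_col']
```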
| <python><pandas><numpy><dtype> | 2023-10-29 12:08:57 | 1 | 7,211 | mirekphd |
77,383,206 | 22,466,650 | How to get the editor elements of regex101? | <p>If we consider <a href="https://regex101.com/r/t1nqwU/1" rel="nofollow noreferrer">https://regex101.com/r/t1nqwU/1</a>, how can I get these elements:</p>
<ul>
<li>REGULAR EXPRESSION</li>
<li>REGEX OPTIONS</li>
<li>TEST STRING</li>
</ul>
<p>When I inspect the page, I understand that I need to query <code>div class="cm-line"</code>, but my code gives an empty list.</p>
<p><a href="https://i.sstatic.net/bbok3.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bbok3.png" alt="enter image description here" /></a></p>
<pre><code>import requests
from bs4 import BeautifulSoup
url = 'https://regex101.com/r/t1nqwU/1'
headers = {'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/118.0.0.0 Safari/537.36'}
request = requests.get(url, headers=headers)
soup = BeautifulSoup(request.text)
print(soup.find_all('div', {'class': 'cm-line'}))
[]
</code></pre>
<p>Can you explain why? My expected output is this:</p>
<pre><code>wanted = {
'regexp': '(\s)(?:line)',
'options': 'gmi',
'string': 'First line\nSecond LINE\nThird line'
}
</code></pre>
| <python><web-scraping><beautifulsoup> | 2023-10-29 12:06:58 | 1 | 1,085 | VERBOSE |
77,383,196 | 4,139,143 | How can I pass a list of numba jitted functions as an argument into another numba jitted function? | <p>This code works fine as expected</p>
<pre><code>import numpy as np
import numba as nb
def f1(x):
    return 2 * x

def f2(x):
    return x - 4

def f(funcs, x):
    out = np.zeros(len(funcs))
    for i in range(len(out)):
        out[i] = funcs[i](x)
    return out

f([f1, f2], 3)
# array([ 6., -1.])
</code></pre>
<p>But if I decorate each function with <code>@nb.njit</code> (shown below), I get the error <code>TypeError: can't unbox heterogeneous list: type(CPUDispatcher(<function f1 at 0x2a10455e0>)) != type(CPUDispatcher(<function f2 at 0x2a5dedc10>))</code> so numba doesn't seem to recognise the types of each function.</p>
<p>What do I need to do to be able to pass a list of jitted functions into another jitted function to get numba to recognise it, compile and run properly?</p>
<p>With numba decorator (doesn't work):</p>
<pre><code>@nb.njit
def f1(x):
    return 2 * x

@nb.njit
def f2(x):
    return x - 4

@nb.njit
def f(funcs, x):
    out = np.zeros(len(funcs))
    for i in range(len(out)):
        out[i] = funcs[i](x)
    return out
</code></pre>
| <python><numba><jit> | 2023-10-29 12:04:27 | 0 | 7,378 | PyRsquared |
77,383,154 | 595,305 | Clear up PyO3 removed Rust module? | <p>I'm a bit puzzled: I had a PyO3 module which I've now removed completely... the Rust module directory and files have been deleted and there is no reference to them in any Cargo.toml files, or anywhere.</p>
<p>And yet, when I run my Python script it is still able to import the old Rust module, and run the PyO3 function which was in there.</p>
<p>I've tried <code>cargo clean</code> but this doesn't seem to remove it. Obviously that's a Cargo-specific command, whereas what I really need is a PyO3-specific "clean" method.</p>
<p>I want Python to complain when I go <code>import my_old_rust_module</code> that the module can't be found.</p>
| <python><rust><pyo3> | 2023-10-29 11:53:58 | 0 | 16,076 | mike rodent |
77,383,027 | 1,367,688 | Hashing with sha1[:10] or MD5 for caching, is MD5 better? | <p>I am trying to find a good hash function that is fast and produces short hashes.</p>
<p>There is a discussion here: <a href="https://stackoverflow.com/questions/4567089/hash-function-that-produces-short-hashes">Hash function that produces short hashes?</a></p>
<p>They recommend to use:</p>
<pre class="lang-py prettyprint-override"><code>>>> import hashlib
>>> hash = hashlib.sha1("my message".encode("UTF-8")).hexdigest()
>>> hash
'104ab42f1193c336aa2cf08a2c946d5c6fd0fcdb'
>>> hash[:10]
'104ab42f11'
</code></pre>
<p>There is a comparison table at this link <a href="https://www.tutorialspoint.com/difference-between-md5-and-sha1" rel="nofollow noreferrer">https://www.tutorialspoint.com/difference-between-md5-and-sha1</a>
that shows MD5 is faster than SHA-1.</p>
<p>Questions are:</p>
<ul>
<li><p>For caching objects (not security purposes) it seems better to use MD5 than SHA-1; am I missing something?</p>
</li>
<li><p>Is there a better hash function that is fast and short?</p>
</li>
</ul>
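<p>For context, the MD5 equivalent of the snippet above would be the following; as far as I understand, the truncation caveats (higher collision risk at 10 hex chars, i.e. 40 bits) apply equally to both:</p>

```python
import hashlib

digest = hashlib.md5("my message".encode("UTF-8")).hexdigest()
short = digest[:10]  # keep 40 of the 128 digest bits
print(short)
```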
| <python><caching><hash><md5><sha1> | 2023-10-29 11:16:17 | 2 | 467 | Yehuda |
77,382,829 | 1,942,868 | Assert when ObjectDoesNotExist is raised | <p>I have this function, which raises <code>ObjectDoesNotExist</code>:</p>
<pre><code>def is_user_login(self, id):
    try:
        u = m.CustomUser.objects.get(id=id)
    except ObjectDoesNotExist as e:
        raise e
</code></pre>
<p>Now I am writing the test script.</p>
<pre><code>try:
    CommonFunc.is_user_login(4)
except Exception as e:
    print(e)
    self.assertEqual(ObjectDoesNotExist, e)
</code></pre>
<p>It doesn't work.</p>
<p>It shows an error like the one below:</p>
<pre><code>AssertionError: <class 'django.core.exceptions.ObjectDoesNotExist'> != DoesNotExist('CustomUser matching query does not exist.')
</code></pre>
<p>How can I assert for <code>ObjectDoesNotExist</code>?</p>
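<p>To show the comparison I'm attempting more concretely, here is a stdlib-only sketch (with a stand-in exception class instead of Django's) using <code>assertRaises</code>, which I believe matches on the exception class rather than the instance. Is this the right direction?</p>

```python
import unittest

class ObjectDoesNotExist(Exception):
    """Stand-in for django.core.exceptions.ObjectDoesNotExist."""

def is_user_login(user_id):
    # mimics the query failing for an unknown user id
    raise ObjectDoesNotExist("CustomUser matching query does not exist.")

class LoginTest(unittest.TestCase):
    def test_user_does_not_exist(self):
        # assertRaises compares exception *classes*, avoiding the
        # class-vs-instance mismatch that assertEqual runs into
        with self.assertRaises(ObjectDoesNotExist):
            is_user_login(4)
```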
| <python><django> | 2023-10-29 10:14:25 | 1 | 12,599 | whitebear |
77,382,814 | 898,042 | sklearn binary classifier for dataset with datetime, categorical values without preprocessing? | <p>I need to predict if signup-driver will actually start driving using some basic classifier.</p>
<pre><code>city_name signup_os signup_channel signup_date bgc_date first_completed_date did_drive
Strark ios web Paid 1/2/16 NaN NaN no
Strark windows Paid 1/21/16 NaN NaN no
</code></pre>
<p>The dataset has some date columns; which sklearn classifier should I use to train a basic model?</p>
<p>It fails with datetime values. All the features are categorical or date values.</p>
<pre><code>from sklearn.model_selection import train_test_split
X = refined_df[['city_name','signup_os','signup_channel','signup_date','bgc_date']]
y = refined_df['did_drive']
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y , test_size=0.25, random_state=0)
models = {}
# Logistic Regression
from sklearn.linear_model import LogisticRegression
models['Logistic Regression'] = LogisticRegression()
# Support Vector Machines
from sklearn.svm import LinearSVC
models['Support Vector Machines'] = LinearSVC()
# Decision Trees
from sklearn.tree import DecisionTreeClassifier
models['Decision Trees'] = DecisionTreeClassifier()
# Random Forest
from sklearn.ensemble import RandomForestClassifier
models['Random Forest'] = RandomForestClassifier()
# Naive Bayes
from sklearn.naive_bayes import GaussianNB
models['Naive Bayes'] = GaussianNB()
# K-Nearest Neighbors
from sklearn.neighbors import KNeighborsClassifier
models['K-Nearest Neighbor'] = KNeighborsClassifier()
from sklearn.metrics import accuracy_score, precision_score, recall_score
accuracy, precision, recall = {}, {}, {}
for key in models.keys():
    # Fit the classifier
    models[key].fit(X_train, y_train)

    # Make predictions
    predictions = models[key].predict(X_test)

    # Calculate metrics
    accuracy[key] = accuracy_score(predictions, y_test)
    precision[key] = precision_score(predictions, y_test)
    recall[key] = recall_score(predictions, y_test)
</code></pre>
<p>I get <code>ValueError: could not convert string to float: 'Berton'</code>; it can't convert the city name to a float. How do I handle this?</p>
<p>Is there a decision tree that accepts datetime values without any additional conversion?</p>
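<p>From what I've read so far, the usual fix is to encode the categorical columns and turn the dates into numeric features first; something like this with <code>pd.get_dummies</code> (on a tiny stand-in for my <code>refined_df</code>), though I'm unsure whether it's the best approach here:</p>

```python
import pandas as pd

# tiny stand-in for refined_df
refined_df = pd.DataFrame({
    'city_name':      ['Strark', 'Berton'],
    'signup_os':      ['ios', 'windows'],
    'signup_channel': ['Paid', 'Paid'],
    'signup_date':    ['1/2/16', '1/21/16'],
    'did_drive':      ['no', 'yes'],
})

# one-hot encode the categorical columns so classifiers get numeric input
X_encoded = pd.get_dummies(refined_df[['city_name', 'signup_os', 'signup_channel']])

# derive numeric features from the date instead of passing it raw
dates = pd.to_datetime(refined_df['signup_date'], format='%m/%d/%y')
X_encoded['signup_month'] = dates.dt.month
X_encoded['signup_dayofweek'] = dates.dt.dayofweek

print(X_encoded.columns.tolist())
```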
| <python><scikit-learn><classification> | 2023-10-29 10:09:01 | 1 | 24,573 | ERJAN |
77,382,634 | 7,055,769 | unable to access object's property despite it being present | <p>My serializer:</p>
<pre><code>class TaskSerializer(serializers.ModelSerializer):
class Meta:
model = Task
fields = "__all__"
def create(self, validated_data):
try:
print(validated_data)
print(validated_data.author)
task = Task.objects.create(**validated_data)
return task
except BaseException as e:
print(e)
raise HTTP_400_BAD_REQUEST
</code></pre>
<p>my view:</p>
<pre><code>class TaskCreateApiView(generics.CreateAPIView):
serializer_class = TaskSerializer
</code></pre>
<p>my model:</p>
<pre><code>from django.db import models
from django.contrib.auth.models import User
class Task(models.Model):
content = models.CharField(
default="",
max_length=255,
)
author = models.ForeignKey(
User,
on_delete=models.CASCADE,
null=True,
)
category = models.CharField(
default="",
max_length=255,
)
def __str__(self):
return str(self.id) + self.content
</code></pre>
<p>My log from serializer:</p>
<blockquote>
<p>{'content': 'test2', 'category': 'test', 'author': <User: zaq1>}</p>
<p>'dict' object has no attribute 'author'
print(validated_data.author)
^^^^^^^^^^^^^^^^^^^^^
AttributeError: 'dict' object has no attribute 'author'</p>
</blockquote>
<p>How can I access <code>author</code>? I see it exists as <code><User:zaq1></code> but can't seem to access it</p>
| <python><django><django-rest-framework> | 2023-10-29 09:07:36 | 1 | 5,089 | Alex Ironside |
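`validated_data` is a plain Python `dict`, not an object, so attribute access fails; use subscripting or `.get()` instead. A standalone sketch (the dict below just mimics what the serializer's `print()` shows):

```python
# validated_data is a dict; this stands in for DRF's validated_data
validated_data = {"content": "test2", "category": "test", "author": "zaq1"}

author = validated_data["author"]       # raises KeyError if the key is absent
author = validated_data.get("author")   # returns None if the key is absent
```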
77,382,541 | 8,253,860 | Can I use sentry to track errors originating from a specific package? | <p>I am using Sentry to track errors and performance in my PyPI package. The problem is that it often captures errors that are not at all related to my package; because my package is imported, those errors are tracked anyway. Sometimes the same type of error floods the dashboard.
I couldn't find anything related to this in the docs or by searching. So what is the standard way to perform error tracking with Sentry?</p>
<p>There is an option to filter these errors using hints through the SDK itself.
Then there's an option to ignore similar errors in the dashboard, but that requires a business plan.
Are these the only ways?</p>
<p>I tried filtering the errors at the origin itself, but that is very hacky, as you have to keep updating the filter with every new error message you find, and the process just continues forever.</p>
| <python><performance><error-handling><sentry> | 2023-10-29 08:33:23 | 0 | 667 | Ayush Chaurasia |
77,382,299 | 6,734,243 | how to accept 2 extensions for a file in python? | <p>I'm discovering files from my template directory; one of them should be <code>copier.yaml</code>. To do so I'm using the following:</p>
<pre class="lang-py prettyprint-override"><code>copier_yaml = template_dir / "copier.yaml" # a pathlib.Path
params = yaml.safe_load(copier_yaml.read_text())
</code></pre>
<p>Now I realize that to be fully compatible with <code>copier</code> I should accept both the <code>.yaml</code> and <code>.yml</code> extensions. Is there a Pythonic way to do so?</p>
| <python> | 2023-10-29 07:07:01 | 1 | 2,670 | Pierrick Rambaud |
77,382,208 | 16,725,431 | Python replace unprintable characters except linebreak | <p>I am trying to write a function that replaces unprintable characters with spaces. It worked well, but it is replacing the line break <code>\n</code> with a space too, and I cannot figure out why.</p>
<p>Test code:</p>
<pre><code>import re
def replace_unknown_characters_with_space(input_string):
# Replace non-printable characters (including escape sequences) with spaces
# According to ChatGPT, \n should not be in this range
cleaned_string = re.sub(r'[^\x20-\x7E]', ' ', input_string)
return cleaned_string
def main():
test_string = "This is a test string with some unprintable characters:\nHello\x85World\x0DThis\x0Ais\x2028a\x2029test."
print("Original String:")
print(test_string)
cleaned_string = replace_unknown_characters_with_space(test_string)
print("\nCleaned String:")
print(cleaned_string)
if __name__ == "__main__":
main()
</code></pre>
<p>Output:</p>
<pre><code>Original String:
This is a test string with some unprintable characters:
Hello
Thisd
is 28a 29test.
Cleaned String:
This is a test string with some unprintable characters: Hello World This is 28a 29test.
</code></pre>
<p>As you can see, the linebreak before Hello World is replaced by space, which is not intended. I tried to get help from ChatGPT but its regex solutions don't work.</p>
<p>My last resort is to use a for loop with Python's built-in <code>isprintable()</code> method to filter the characters out, but this will be much slower than the regex.</p>
| <python><python-3.x><ascii><python-re><non-printing-characters> | 2023-10-29 06:25:34 | 4 | 444 | Electron X |
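The newline `\n` (`\x0A`) is outside the `\x20`-`\x7E` range, so the negated character class matches it and replaces it. The fix is simply to add `\n` to the kept set; a sketch:

```python
import re

def replace_unprintable_with_space(s: str) -> str:
    # keep printable ASCII (\x20-\x7E) plus the newline itself
    return re.sub(r'[^\x20-\x7E\n]', ' ', s)
```

The same idea extends to tab or carriage return by adding `\t` or `\r` inside the class.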
77,382,170 | 13,336,872 | Django - Running pip as the 'root' user can result in broken permissions | <p>I'm deploying my Django backend utilizing the AWS app runner service. The contents of my files are as follows.</p>
<p><strong>apprunner.yaml</strong></p>
<pre><code>version: 1.0
runtime: python3
build:
commands:
build:
- pip install -r requirements.txt
run:
runtime-version: 3.8.16
command: sh startup.sh
network:
port: 8000
</code></pre>
<p><strong>requirements.txt</strong></p>
<pre><code>asttokens==2.2.1
backcall==0.2.0
category-encoders==2.6.0
certifi==2023.7.22
charset-normalizer==3.3.0
colorama>=0.2.5, <0.4.5
comm==0.1.3
contourpy==1.0.7
cycler==0.11.0
debugpy==1.6.7
decorator==5.1.1
distlib==0.3.7
executing==1.2.0
filelock==3.13.0
fonttools==4.40.0
gunicorn==20.1.0
idna==3.4
ipykernel==6.23.2
ipython>=7.0.0, <8.0.0
jedi==0.18.2
joblib==1.2.0
jupyter_client==8.2.0
jupyter_core==5.3.0
kiwisolver==1.4.4
matplotlib==3.7.1
matplotlib-inline==0.1.6
nest-asyncio==1.5.6
numpy==1.24.2
opencv-python==4.7.0.68
packaging==23.0
pandas==1.5.3
parso==0.8.3
patsy==0.5.3
pickleshare==0.7.5
Pillow==9.5.0
pipenv==2023.10.24
platformdirs==3.11.0
prompt-toolkit==3.0.38
psutil==5.9.5
pure-eval==0.2.2
pycodestyle==2.10.0
pygame==2.1.3
Pygments==2.15.1
pyparsing==3.0.9
python-dateutil==2.8.2
pytz==2022.7.1
pyzmq==25.1.0
scikit-learn==1.2.1
scipy==1.10.0
seaborn==0.12.2
six==1.16.0
stack-data==0.6.2
statsmodels==0.13.5
threadpoolctl==3.1.0
tornado==6.3.2
traitlets==5.9.0
urllib3>=1.25.4, <1.27
virtualenv==20.24.6
wcwidth==0.2.6
whitenoise==6.4.0
</code></pre>
<p><strong>startup.sh</strong></p>
<pre><code>#!/bin/bash
python manage.py collectstatic && gunicorn --workers 2 backend.wsgi
</code></pre>
<p>By the way all the packages are installed successfully in the AWS app runner, finally it gives</p>
<pre><code>10-29-2023 02:39:34 AM [Build] [91mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
10-29-2023 02:39:34 AM [Build] [0mRemoving intermediate container 4a0110123b9d
10-29-2023 02:39:34 AM [Build] ---> fca9ca934529
10-29-2023 02:39:34 AM [Build] Step 5/5 : EXPOSE 8000
10-29-2023 02:39:34 AM [Build] ---> Running in c8a4669398b0
10-29-2023 02:39:34 AM [Build] Removing intermediate container c8a4669398b0
10-29-2023 02:39:34 AM [Build] ---> 1f0d784181f3
10-29-2023 02:39:34 AM [Build] Successfully built 1f0d784181f3
10-29-2023 02:39:34 AM [Build] Successfully tagged application-image:latest
10-29-2023 02:42:13 AM [AppRunner] Failed to deploy your application source code.
</code></pre>
<p>However I changed the apprunner.yaml code to include virtualenv as</p>
<pre><code>version: 1.0
runtime: python3
build:
commands:
build:
- pip install pipenv
- pipenv install -r requirements.txt
run:
runtime-version: 3.8.16
command: sh startup.sh
network:
port: 8000
</code></pre>
<p>but then the apprunner fails and gives errors even without installing the packages:</p>
<pre><code>10-29-2023 02:29:06 AM [Build] Warning: the environment variable LANG is not set!
10-29-2023 02:29:06 AM [Build] We recommend setting this in ~/.profile (or equivalent) for proper expected behavior.
10-29-2023 02:29:06 AM [Build] Warning: Python 3.11 was not found on your system...
10-29-2023 02:29:06 AM [Build] Creating a virtualenv for this project...
10-29-2023 02:29:06 AM [Build] Pipfile: /codebuild/output/src2729792062/src/backend/Pipfile
10-29-2023 02:29:06 AM [Build] Using default python from /root/.pyenv/versions/3.9.16/bin/python3.9 (3.9.16) to create virtualenv...
10-29-2023 02:29:06 AM [Build] created virtual environment CPython3.9.16.final.0-64 in 1333ms
10-29-2023 02:29:06 AM [Build] creator CPython3Posix(dest=/root/.local/share/virtualenvs/backend-C4VHmbmy, clear=False, no_vcs_ignore=False, global=False)
10-29-2023 02:29:06 AM [Build] seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv)
10-29-2023 02:29:06 AM [Build] added seed packages: pip==23.2, setuptools==68.0.0, wheel==0.40.0
10-29-2023 02:29:06 AM [Build] activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator
10-29-2023 02:29:06 AM [Build] ✔ Successfully created virtual environment!
10-29-2023 02:29:06 AM [Build] Virtualenv location: /root/.local/share/virtualenvs/backend-C4VHmbmy
10-29-2023 02:29:06 AM [Build] Warning: Your Pipfile requires python_version 3.11, but you are using 3.9.16 (/root/.local/share/v/b/bin/python).
10-29-2023 02:29:06 AM [Build] $ pipenv --rm and rebuilding the virtual environment may resolve the issue.
10-29-2023 02:29:06 AM [Build] Usage: pipenv install [OPTIONS] [PACKAGES]...
10-29-2023 02:29:06 AM [Build] ERROR:: Aborting deploy
10-29-2023 02:29:16 AM [AppRunner] Failed to deploy your application source code.
</code></pre>
<p>so I changed the <code>runtime-version</code> in apprunner.yaml to 3.11.0 then the apprunner gives:</p>
<pre><code>10-29-2023 01:43:46 AM [AppRunner] The specified runtime version is not supported. Refer to the Release information in the App Runner Developer guide for supported runtime versions.
</code></pre>
<p>So I'm confused between installing virtual env and versioning problem in AWS apprunner. However I followed this <a href="https://aws.amazon.com/blogs/containers/deploy-and-scale-django-applications-on-aws-app-runner/" rel="nofollow noreferrer">blog</a> to deploy and scale Django applications on AWS App Runner.</p>
| <python><django><amazon-web-services><virtualenv><version> | 2023-10-29 06:01:54 | 1 | 832 | Damika |
77,381,900 | 9,900,084 | Write a matplotlib figure to a ReportLab PDF file without saving image to disk | <p>I'm trying to find a way to write a matplotlib figure to a PDF via reportlab (4.0.6 open-source version). According to its <a href="https://docs.reportlab.com/reportlab/userguide/ch2_graphics/#image-methods" rel="nofollow noreferrer">doc</a>, it should accept a PIL Image object, but I tried the following and it returned <code>TypeError: expected str, bytes or os.PathLike object, not Image</code>.</p>
<pre class="lang-py prettyprint-override"><code>from reportlab.pdfgen import canvas
from PIL import Image
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.backends.backend_agg import FigureCanvas
c = canvas.Canvas('test-pdf.pdf')
fig, ax = plt.subplots()
ax.plot([1, 2, 4], [3, 4, 6], '-o')
fig_canvas = FigureCanvas(fig)
fig_canvas.draw()
img = Image.fromarray(np.asarray(fig_canvas.buffer_rgba()))
c.drawImage(img, 0, 0)
c.showPage()
c.save()
</code></pre>
<p>I have seen this <a href="https://stackoverflow.com/questions/18897511/how-to-drawimage-a-matplotlib-figure-in-a-reportlab-canvas">solution</a>, but it's very old and uses other dependencies. Is there a way to achieve this just using PIL or numpy or any python3 first-party packages?</p>
| <python><matplotlib><python-imaging-library><reportlab> | 2023-10-29 03:20:54 | 1 | 2,559 | steven |
77,381,869 | 1,173,913 | Convert sklearn random forest model into raw python code | <p>If I have a sklearn random forest model like the one below, how would I convert the tree to raw Python code? I.e. rather than call the predict() function for the classifier, I want the tree in raw code so I can use it in an embedded application. I can generate C++ code like this using MicroMLgen, but I need Python.</p>
<pre><code>import numpy as np
from glob import glob
from os.path import basename
# Import chosen classifier function
# (for alternatives see https://github.com/eloquentarduino/micromlgen)
from sklearn.ensemble import RandomForestClassifier
# For exporting model to Arduino C code
from micromlgen import port
# Load training dataset from csv files
def load_features(folder):
dataset = None
classmap = {}
for class_idx, filename in enumerate(glob('%s/*.csv' % folder)):
class_name = basename(filename)[:-4]
classmap[class_idx] = class_name
samples = np.loadtxt(filename, dtype=float, delimiter=',')
labels = np.ones((len(samples), 1)) * class_idx
samples = np.hstack((samples, labels))
dataset = samples if dataset is None else np.vstack((dataset, samples))
return dataset, classmap
# Load data, apply classifier, output as Arduino format
if __name__ == '__main__':
# Load training data from the specified subfolder
features, classmap = load_features('training_data')
# Create classifier function from feature set
X, y = features[:, :-1], features[:, -1]
classifier = RandomForestClassifier(20, max_depth=10).fit(X, y)
# Use MicroMLgen to port classifier to Arduino C-code
c_code = port(classifier, classmap=classmap)
# Show generated code
print(c_code)
</code></pre>
<p>The output should look someting like this (except in Python)....</p>
<pre><code> int predict(float *x) {
uint8_t votes[26] = { 0 };
// tree #1
if (x[2] <= 25.0) {
if (x[1] <= 39.0) {
if (x[0] <= 268.0) {
votes[4] += 1;
}
else {
votes[19] += 1;
}
}
else {
if (x[0] <= 302.0) {
if (x[0] <= 115.0) {
if (x[0] <= 86.0) {
etc etc...
</code></pre>
| <python><machine-learning><scikit-learn><random-forest> | 2023-10-29 03:00:05 | 0 | 581 | Graham |
77,381,858 | 3,299,432 | Date data type not preserved in Snowflake to Pandas dataframe | <p>I have a Snowflake table that includes a date field.</p>
<p>When I query Snowflake and load the data into a Pandas dataframe the date is always converted to object data type in the Pandas dataframe.</p>
<p>I've tried lots of options, at the moment I'm using this</p>
<pre><code>snowflake_url = f'snowflake://{sf_user}:{sf_password}@{sf_account}/{sf_database}/{sf_schema}?warehouse={sf_warehouse}'
engine = create_engine(snowflake_url)
query = f"SELECT * FROM {sf_table}"
df = pd.read_sql(query, con=engine)
print(df.dtypes)
</code></pre>
<p>The table currently contains object, integer and float data types, which are preserved properly, but dates are being converted to object.</p>
<p>I don't really want to hard-code individual field names, because I'm trying to make the script dynamic across lots of tables, but I will if that's the only solution.</p>
| <python><python-3.x><sqlalchemy><snowflake-cloud-data-platform> | 2023-10-29 02:52:37 | 1 | 547 | cmcau |
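One post-processing option that avoids naming columns is to try parsing every object column and keep the result only when it parses. This is a sketch with a hypothetical `coerce_date_columns` helper; note it guesses by content, so ordinary strings that happen to look like dates would also be converted:

```python
import pandas as pd

def coerce_date_columns(df: pd.DataFrame) -> pd.DataFrame:
    # try to parse each object column as datetime; leave it alone on failure
    df = df.copy()
    for col in df.columns[df.dtypes == object]:
        try:
            df[col] = pd.to_datetime(df[col])
        except (ValueError, TypeError):
            pass
    return df
```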
77,381,775 | 268,581 | Format y-axis as Trillions of U.S. dollars | <p>Here's a Python program which does the following:</p>
<ul>
<li>Makes an API call to treasury.gov to retrieve data</li>
<li>Stores the data in a Pandas dataframe</li>
<li>Plots the data</li>
</ul>
<pre><code>import requests
import pandas as pd
import matplotlib.pyplot as plt
page_size = 10000
url = 'https://api.fiscaldata.treasury.gov/services/api/fiscal_service/v2/accounting/od/debt_to_penny'
url_params = f'?page[size]={page_size}'
response = requests.get(url + url_params)
result_json = response.json()
df = pd.DataFrame(result_json['data'])
df['record_date'] = pd.to_datetime(df['record_date'])
rows = df[df['debt_held_public_amt'] != 'null']
rows['debt_held_public_amt'] = pd.to_numeric(rows['debt_held_public_amt'])
plt.ion()
rows.plot(x='record_date', y='debt_held_public_amt', kind='line', rot=90)
</code></pre>
<p>Here's the resulting chart that I get:</p>
<p><a href="https://i.sstatic.net/cDmdy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cDmdy.png" alt="enter image description here" /></a></p>
<h1>Question</h1>
<p>The <code>debt_held_public_amt</code> is in U.S. dollars:</p>
<p><a href="https://i.sstatic.net/KosPC.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KosPC.png" alt="enter image description here" /></a></p>
<p>What's a good way to format the y-axis as trillions of U.S. dollars?</p>
| <python><pandas><dataframe><matplotlib> | 2023-10-29 01:56:37 | 2 | 9,709 | dharmatech |
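One option is matplotlib's `FuncFormatter`. The sketch below uses a hypothetical `trillions` callback; the `ax` object would come from the `rows.plot(...)` call in the question:

```python
from matplotlib.ticker import FuncFormatter

def trillions(x, pos):
    # pos is required by the FuncFormatter callback signature but unused here
    return f'${x / 1e12:.1f}T'

# ax = rows.plot(x='record_date', y='debt_held_public_amt', kind='line', rot=90)
# ax.yaxis.set_major_formatter(FuncFormatter(trillions))
```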
77,381,677 | 13,078,279 | Why does plotly.Mesh3d refuse to plot a 3D function? | <p>I am attempting to plot the analytical t=0 solution to the 3D wave equation:</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
f = lambda x, y, z: np.sin(x) * np.sin(y) * np.sin(z)
x = np.linspace(-5, 5)
y = np.linspace(-5, 5)
z = np.linspace(-5, 5)
Z = f(x, y, z)
fig = go.Figure(data=[go.Mesh3d(x=x, y=y, z=Z)])
fig.show()
</code></pre>
<p>This, however, outputs an empty graph. I am not sure why.</p>
<p>I originally considered that the issue may simply be that 3D plots of 3D functions are impossible, but I believe that not to be the case, given that many 3D functions can be plotted just fine as a 3D mesh. For instance, the probability density of the hydrogen wavefunction <code>psi_nlm(r, theta, phi)</code> can be plotted without issue using <code>Mesh3d</code>. In addition, <code>Mesh3d()</code> takes in 3 arguments by default, so it is supposed to be used for 3D functions and shouldn't be restricted to 2D functions. So what is the issue?</p>
| <python><plot><plotly> | 2023-10-29 00:43:13 | 0 | 416 | JS4137 |
77,381,592 | 1,081,297 | Local start and end of day in UTC | <p>I would like to find out what the start and end time of a specific day is expressed in UTC and in Python.</p>
<p>For instance:</p>
<ul>
<li>the current date and time is Sun 29 Oct 2023, 01:33:49 CEST (Central European Summer Time),</li>
<li>the day starts at Sun 29 Oct 2023, 00:00:00 CEST,</li>
<li>the day ends at Sun 29 Oct 2023, 23:59:59 CET (NB, the time zone switched from CEST (daylight saving time) to CET (not on daylight saving time))</li>
</ul>
<p>Now I would like to get these times in UTC:</p>
<ul>
<li>Start: Sat 28 Oct 2023, 22:00:00 UTC</li>
<li>End: Sun 29 Oct 2023, 22:59:59 UTC (the day contains 25 hours)</li>
</ul>
<p>I do not want to set the timezone programmatically - I want to get it from my system.</p>
<p>I find this easy to do in Swift, as every date is timezone aware, but I can't get my head around how to do this in Python. The reason I need this is that I want to get all the data within a specific (local) day from my database, which contains UTC timestamps.</p>
<p>I've tried this:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, time
import pytz
start_of_day = datetime.combine(datetime.now(), time.min)
end_of_day = datetime.combine(datetime.now(), time.max)
print(start_of_day)
print(end_of_day)
print(start_of_day.astimezone().tzinfo)
print(end_of_day.astimezone().tzinfo)
start_of_day = pytz.utc.localize(start_of_day)
end_of_day = pytz.utc.localize(end_of_day)
print(start_of_day)
print(end_of_day)
print(start_of_day.astimezone().tzinfo)
print(end_of_day.astimezone().tzinfo)
</code></pre>
<p>which gives the following output:</p>
<pre><code>2023-10-29 00:00:00
2023-10-29 23:59:59.999999
BST
GMT
2023-10-29 00:00:00+00:00
2023-10-29 23:59:59.999999+00:00
BST
GMT
</code></pre>
<p>while I would expect, something like (I guess UTC might also be GMT):</p>
<pre><code>2023-10-29 00:00:00
2023-10-29 23:59:59.999999
CEST
CET
2023-10-28 22:00:00+00:00
2023-10-29 22:59:59.999999+00:00
UTC
UTC
</code></pre>
<p>Not only are the times wrong, but the timezones are also weird.</p>
| <python><datetime><timezone><python-datetime><pytz> | 2023-10-28 23:46:55 | 1 | 581 | Dieudonné |
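On Python 3.9+ the stdlib `zoneinfo` module (instead of `pytz`) handles the DST transition correctly. A sketch with a hypothetical `day_bounds_utc` helper; it returns an exclusive end (the next midnight), which avoids the `23:59:59.999999` edge:

```python
from datetime import date, datetime, time, timedelta
from zoneinfo import ZoneInfo

def day_bounds_utc(day: date, tz: ZoneInfo):
    # start of the local day; zoneinfo picks the correct UTC offset
    start = datetime.combine(day, time.min, tzinfo=tz)
    # adding one wall-clock day lands on the next midnight, even across a DST switch
    end = start + timedelta(days=1)
    return start.astimezone(ZoneInfo("UTC")), end.astimezone(ZoneInfo("UTC"))
```

For the system's local zone, an explicit IANA name (or a library such as `tzlocal`) is more robust than `datetime.now().astimezone().tzinfo`, which yields only a fixed offset.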
77,381,318 | 6,111,772 | python csv writes too many decimals | <p>A well-discussed problem, but I could not find a suitable answer.
I am writing a float list of points (n, 3) of 3D vectors to a file using this code:</p>
<pre><code>...
with open(filename,"w") as f:
csvw=csv.writer(f,delimiter=";",quotechar='"')
for point in points:
csvw.writerow(point)
</code></pre>
<p>gives file content:</p>
<pre><code> 0.04471017781221634;0.0;0.999
-0.05707349435937544;-0.052283996037127314;0.997
0.008731637417420734;0.09949250478307754;0.995
0.0718653614139638;-0.09373563798705578;0.993
...
</code></pre>
<p>For a million points this is a waste of space. A binary encoding is not easy to transfer to other programs, so I would prefer a more compact format like:</p>
<pre><code> 0.044;0.0;0.999
-0.057;-0.052;0.997
0.008;0.099;0.995
0.071;-0.093;0.993
...
</code></pre>
<p>Here the decimals are cut off; rounding is preferred.
How can I change or extend the code? Thanks in advance.</p>
| <python><csv><digits> | 2023-10-28 21:36:43 | 1 | 441 | peets |