QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
77,097,619 | 3,247,006 | save_screenshot() vs get_screenshot_as_file() in Selenium in Python | <p>I took 2 screenshots of Django Admin with <code>save_screenshot()</code> and <code>get_screenshot_as_file()</code> as shown below. *I use <a href="https://github.com/django/django" rel="nofollow noreferrer">Django</a>, <a href="https://github.com/pytest-dev/pytest-django" rel="nofollow noreferrer">pytest-django</a> and <a href="https://github.com/SeleniumHQ/selenium" rel="nofollow noreferrer">Selenium</a>:</p>
<p><code>save_screenshot()</code>:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver

def test_1(live_server):
    driver = webdriver.Chrome()
    driver.get("%s%s" % (live_server.url, "/admin/"))
    driver.save_screenshot("admin.png")  # Here
</code></pre>
<p><code>get_screenshot_as_file()</code>:</p>
<pre class="lang-py prettyprint-override"><code>from selenium import webdriver

def test_1(live_server):
    driver = webdriver.Chrome()
    driver.get("%s%s" % (live_server.url, "/admin/"))
    driver.get_screenshot_as_file("admin.png")  # Here
</code></pre>
<p>Then, I got the same screenshots as shown below:</p>
<p><code>admin.png</code>:</p>
<p><a href="https://i.sstatic.net/UOCPe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UOCPe.png" alt="enter image description here" /></a></p>
<p>So, what is the difference between <code>save_screenshot()</code> and <code>get_screenshot_as_file()</code> in Selenium in Python?</p>
| <python><selenium-webdriver><screenshot><webpage-screenshot> | 2023-09-13 13:38:19 | 1 | 42,516 | Super Kai - Kazuya Ito |
77,097,591 | 1,185,254 | How to create a Mixin Enum | <p>I'm trying to create an enum with a <code>pathlib.Path</code> mixin, MCVE:</p>
<pre><code>import enum, pathlib

class Test(pathlib.Path, enum.Enum):
    path1 = pathlib.Path('/path1')
    path2 = pathlib.Path('/path2')
</code></pre>
<p>which gives a totally cryptic error:</p>
<pre><code> 1 import enum, pathlib
----> 2 class Test(pathlib.Path, enum.Enum):
3 path1 = pathlib.Path('/path1')
4 path2 = pathlib.Path('/path2')
File ~/anaconda3/envs/main/lib/python3.9/enum.py:288, in EnumMeta.__new__(metacls, cls, bases, classdict, **kwds)
286 enum_member._value_ = value
287 else:
--> 288 enum_member = __new__(enum_class, *args)
289 if not hasattr(enum_member, '_value_'):
290 if member_type is object:
File ~/anaconda3/envs/main/lib/python3.9/pathlib.py:1082, in Path.__new__(cls, *args, **kwargs)
1080 if cls is Path:
1081 cls = WindowsPath if os.name == 'nt' else PosixPath
-> 1082 self = cls._from_parts(args, init=False)
1083 if not self._flavour.is_supported:
1084 raise NotImplementedError("cannot instantiate %r on your system"
1085 % (cls.__name__,))
File ~/anaconda3/envs/main/lib/python3.9/pathlib.py:707, in PurePath._from_parts(cls, args, init)
702 @classmethod
703 def _from_parts(cls, args, init=True):
704 # We need to call _parse_args on the instance, so as to get the
705 # right flavour.
706 self = object.__new__(cls)
--> 707 drv, root, parts = self._parse_args(args)
708 self._drv = drv
709 self._root = root
File ~/anaconda3/envs/main/lib/python3.9/pathlib.py:700, in PurePath._parse_args(cls, args)
695 else:
696 raise TypeError(
697 "argument should be a str object or an os.PathLike "
698 "object returning str, not %r"
699 % type(a))
--> 700 return cls._flavour.parse_parts(parts)
File ~/anaconda3/envs/main/lib/python3.9/enum.py:429, in EnumMeta.__getattr__(cls, name)
427 return cls._member_map_[name]
428 except KeyError:
--> 429 raise AttributeError(name) from None
AttributeError: _flavour
</code></pre>
<p>What am I doing wrong?</p>
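For contrast, here is my own simplified check (stdlib only, unrelated to `Path` internals): a mixin type whose constructor accepts the member value directly works fine with the same pattern.

```python
import enum

# str.__new__ happily accepts the member value, so the mixin pattern works here;
# this is only a contrast case, not the Path situation from my question
class Color(str, enum.Enum):
    RED = "red"
    BLUE = "blue"

assert Color.RED.value == "red"
assert isinstance(Color.BLUE, str)
```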
| <python><enums><mixins><pathlib> | 2023-09-13 13:35:17 | 1 | 11,449 | alex |
77,097,497 | 1,548,981 | Deploying AWS lambda from a SAM App with AWS Codepipeline & CodeBuild | <p>I'm doing a cross-account pipeline for a SAM application. The same application uses a lambda function (hello_world) with aws_lambda_powertools and a simple rest API. This is a sample application deployed with "sam init" </p>
<p>The pipeline works and deploys the resources, but the Lambda function is missing the extra modules like requests or aws_lambda_powertools. When testing I get the error <code>"errorMessage": "Unable to import module 'app': No module named 'aws_lambda_powertools'",</code></p>
<p>In the lambda function, I have</p>
<pre><code>from aws_lambda_powertools import Logger
from aws_lambda_powertools import Tracer
from aws_lambda_powertools import Metrics
......
</code></pre>
<p>and there is a <code>hello_world\requirements.txt</code> file with</p>
<pre><code>requests
aws-lambda-powertools[tracer]
</code></pre>
<p>The pipeline has a Build stage made with AWS CodeBuild, where I use a buildspec.yaml file.</p>
<p>This is the code in the build:</p>
<pre><code>- echo "Starting SAM packaging `date` in `pwd`"
- pip install --upgrade pip
- pip install pipenv --user
- pip install awscli aws-sam-cli
- pip install -r requirements.txt
- aws cloudformation package --template-file template.yaml --s3-bucket $ArtifactBucket --output-template-file packaged-template.yml
#- sam package --template-file template.yaml --s3-bucket $ArtifactBucket --output-template-file packaged-template.yml --region $AWS_REGION
</code></pre>
<p>If I use <em>aws cloudformation package</em> ...., the pipeline deploys the code but without the aws-lambda-powertools modules. I guess this is normal, since it is not supposed to package modules.</p>
<p>If I use <em>sam package --template-file template...</em> I get this error: <code>Error: Cannot use both --resolve-s3 and --s3-bucket parameters. Please use only one.</code> This I don't understand, since I do not use --resolve-s3 in my command.</p>
<p>My question is: how do I include the modules from the requirements.txt file in the AWS CodeBuild build process so that it deploys the Lambda with all its dependencies?</p>
| <python><amazon-web-services><aws-lambda><aws-codepipeline><aws-codebuild> | 2023-09-13 13:25:00 | 2 | 1,369 | Crerem |
77,097,490 | 1,652,219 | How to create a cross table in Polars? | <p>I would like to count unique combinations in two Polars columns.</p>
<h1>In R</h1>
<pre class="lang-r prettyprint-override"><code>df <- data.frame(a = c(2,0,1,0,0,0), b = c(1,1,1,0,0,1))
table(df)
0 1
0 2 2
1 0 1
2 0 1
</code></pre>
<h1>In Pandas</h1>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
a = np.array([2,0,1,0,0,0])
b = np.array([1,1,1,0,0,1])
pd.crosstab(a, b)
# col_0 0 1
# row_0
# 0 2 2
# 1 0 1
# 2 0 1
</code></pre>
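For reference, the counts themselves can be reproduced with the standard library alone (my own sketch, just to pin down exactly what the table should contain):

```python
from collections import Counter

a = [2, 0, 1, 0, 0, 0]
b = [1, 1, 1, 0, 0, 1]

# count unique (a, b) combinations, which is all a cross table holds
counts = Counter(zip(a, b))

assert counts[(0, 1)] == 2  # two rows where a == 0 and b == 1
assert counts[(0, 0)] == 2
assert counts[(2, 1)] == 1
```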
<h1>In Polars</h1>
<p>Is this the proper way?</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
df = pl.DataFrame(
{
"a": [2,0,1,0,0,0],
"b": [1,1,1,0,0,1]
}
)
df.pivot(on="a", index="b", values="a", aggregate_function="len").fill_null(0)
</code></pre>
<pre><code>shape: (2, 4)
┌─────┬─────┬─────┬─────┐
│ b ┆ 2 ┆ 0 ┆ 1 │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ u32 ┆ u32 ┆ u32 │
╞═════╪═════╪═════╪═════╡
│ 1 ┆ 1 ┆ 2 ┆ 1 │
│ 0 ┆ 0 ┆ 2 ┆ 0 │
└─────┴─────┴─────┴─────┘
</code></pre>
| <python><dataframe><pivot><python-polars> | 2023-09-13 13:24:32 | 1 | 3,944 | Esben Eickhardt |
77,097,432 | 9,543,830 | How to use default values for empty parameters in FastAPI | <p>I've got a task on the project to update some package versions for security reasons. I updated <code>FastAPI</code> from 0.63.0 to 0.95.2, but one test started to fail.</p>
<p>This test makes an API request in the following format:
<code>https://.../api/v2/list/?limit=&offset=12&...</code>.</p>
<p>And I'm getting the following response:</p>
<pre><code>Status code: 422 Unprocessable Entity
JSON: {'detail': [{'loc': ['query', 'limit'], 'msg': 'value is not a valid integer', 'type': 'type_error.integer'}]}
</code></pre>
<p>The error occurs because of empty parameter <code>limit=</code>.</p>
<p>There is a view:</p>
<pre class="lang-py prettyprint-override"><code>async def get_list(
...
limit: int = Query(10, lt=max_int_value),
...
) -> List[Dict[str, Any]]:
...
</code></pre>
<p><strong>So the question is:</strong> Is there a way to make it use default value (10) instead of raising 422 in this scenario?</p>
| <python><validation><pytest><fastapi> | 2023-09-13 13:16:45 | 1 | 2,685 | voilalex |
77,097,290 | 4,495,790 | How to filter dates of time window relative to column value in Pandas? | <p>I have a Pandas data frame <code>df</code> with columns of <code>ID</code>, <code>DATE</code> (continuous year-month dates of the same six-month-long period) and <code>FIX_DATE</code> (constant year-month date per <code>ID</code>, always falling in the second half of the given <code>DATE</code> period). Multiple <code>ID</code>s span the same <code>DATE</code> periods. Example:</p>
<pre><code>ID DATE FIX_DATE
01 2023-01 2023-05
01 2023-02 2023-05
01 2023-03 2023-05
01 2023-04 2023-05
01 2023-05 2023-05
01 2023-06 2023-05
02 2023-01 2023-04
02 2023-02 2023-04
02 2023-03 2023-04
02 2023-04 2023-04
02 2023-05 2023-04
02 2023-06 2023-04
</code></pre>
<p>I need a query that returns, for each <code>ID</code>, the filtered rows covering the three-month-long window of records ending with that <code>ID</code>'s <code>FIX_DATE</code>. So in case of my example, the result is:</p>
<pre><code>ID DATE FIX_DATE
01 2023-03 2023-05
01 2023-04 2023-05
01 2023-05 2023-05
02 2023-02 2023-04
02 2023-03 2023-04
02 2023-04 2023-04
</code></pre>
<p>How could I get the desired output in Pandas?</p>
| <python><pandas><subset> | 2023-09-13 12:57:58 | 2 | 459 | Fredrik |
77,097,277 | 11,334,608 | Connection timeout to S3 from a python application running in ECS EC2 | <p>I have a python ECS task running via EC2 (not Fargate). In this task, I am setting up a <code>boto3</code> client</p>
<pre><code>cls._instance.client = boto3.client(
    "s3",
    region_name="eu-west-1",
)
</code></pre>
<p>and trying to perform a <code>PutObject</code> operation (<code>client.upload_file_obj</code> or <code>client.put_object</code>).</p>
<p>When I deploy this on ECS and hit the endpoint, my request seems to time-out (or memory leak) and in my logs I see</p>
<pre><code>[2023-09-13 12:37:22 +0000] [1] [ERROR] Worker (pid:40) was sent SIGKILL! Perhaps out of memory?
2023-09-13 12:37:21 +0000] [1] [CRITICAL] WORKER TIMEOUT (pid:40)
</code></pre>
<p>My ECS task has a permission that allows it to <code>PutObject</code> in the specific bucket, my ECS security group allows outbound traffic on both <code>443</code> and <code>80</code>, and I even tried to create a VPC endpoint for all my subnets, but nothing seems to fix that timeout. What could I be missing in my network configuration?</p>
<p>Perhaps it is also important to mention that my task does not have a public IP and is load balanced by an application load balancer.</p>
| <python><amazon-web-services><amazon-s3><boto3><amazon-ecs> | 2023-09-13 12:56:17 | 1 | 759 | Simeon Borisov |
77,097,235 | 7,346,393 | My consumer receives every second message | <p>I'm trying to implement a simple producer and consumer architecture in my application. My base class for all producers is very simple</p>
<pre><code>class ProducerBase(metaclass=abc.ABCMeta):
    QUEUE: str = "default"
    EVENT: str

    def __init__(self) -> None:
        self.channel = None
        self.message = None
        self.logger = logging.getLogger()

    def produce(self, *args, **kwargs):
        self._get_channel()
        self.prepare_message(*args, **kwargs)
        self.update_message()
        self.logger.info(f"Message: {self.message}")
        self.channel.basic_publish(exchange="", routing_key=self.QUEUE, body=json.dumps(self.message))
        self.logger.info(f"Message produced to queue: {self.QUEUE}")

    def _get_channel(self) -> None:
        connection = pika.BlockingConnection(pika.URLParameters(BROKER_URL))
        self.channel = connection.channel()

    @abc.abstractmethod
    def prepare_message(self, *args, **kwargs) -> None:
        raise NotImplementedError

    def update_message(self) -> None:
        self.message.update({"event": self.EVENT})
</code></pre>
<p>I have not noticed any bugs, etc. When I need any instance to produce an event it does it perfectly.</p>
<p>Below you can see my base class for consumers.</p>
<pre><code>class ConsumerBase:
    QUEUE: str = "default"
    HANDLERS: list = []
    DB_INITIALIZED: bool = False

    def __init__(self, db_required: bool = False) -> None:
        self.logger = logging.getLogger()
        self.db_required = db_required

    async def consume(self):
        if self.db_required and self.DB_INITIALIZED is False:
            self.logger.info("Connection to the database is being initialized")
            await self._set_db()
        connection = await aio_pika.connect_robust(BROKER_URL)
        self.logger.info(f"Waiting for messages from queue: {self.QUEUE}")
        async with connection:
            channel = await connection.channel()
            queue = await channel.declare_queue(self.QUEUE, durable=True)
            async with queue.iterator() as messages:
                async for message in messages:
                    await self.callback(message=message)

    @classmethod
    async def _set_db(cls):
        database_config = {
            "connections": {"default": DB_URL},
            "apps": {
                "models": {
                    "models": MODELS,
                    "default_connection": "default",
                },
            },
        }
        await Tortoise.init(config=database_config)
        await Tortoise.generate_schemas()
        cls.DB_INITIALIZED = True

    async def callback(self, message: Coroutine[any, any, AbstractIncomingMessage]):
        async with message.process():
            body = message.body.decode()
            data = json.loads(body)
            self.logger.info(f"Received message from queue: {self.QUEUE}")
            self.logger.info(f"Body: {data}")
            for handler in self.HANDLERS:
                await handler(message=data).handle()

    @classmethod
    def run_consumer(cls, db_required=False):
        loop = asyncio.get_event_loop()
        consumer_instance = cls(db_required=db_required)
        loop.run_until_complete(consumer_instance.consume())
</code></pre>
<p>It works, however it only receives every second message. What am I doing wrong? The consumer is in a separate Docker container. All the connections seem to be alright. If you need any additional files or configuration, just let me know.</p>
<p>P. S.
I'm using Python 3.11 with FastAPI, TortoiseORM and PostgreSQL.</p>
| <python><rabbitmq> | 2023-09-13 12:49:28 | 0 | 869 | Hendrra |
77,096,982 | 13,819,183 | mypy infer paramspec of function based on paramspec of first input | <p>I'm creating a function which will act as a factory for a class, here's how it will look:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Type, TypeVar

T = TypeVar("T")

def app_factory(app: Type[T], *args, **kwargs) -> T:
    ...
    return app(*args, **kwargs)

class App:
    def __init__(self, a: str, b: bool):
        self.a = a
        self.b = b

app_instance = app_factory(app=App, a="string", b=True)
</code></pre>
<p>Would it be possible to dynamically give <code>app_factory</code> the typehints <code>a: str, b: bool</code> based on the first input arg <code>app</code>? In other words, is there any way to dynamically set ParamSpec of a function to be equal to one of the function input callables?</p>
| <python><python-3.x><mypy><typing> | 2023-09-13 12:12:27 | 1 | 1,405 | Steinn Hauser Magnússon |
77,096,971 | 7,064,415 | Cut a figure along the x axis while retaining scale | <p>I need to be able to compare graphs that show the progress of several variables over time. I need to do this for different cases, where the time covered is not always the same: now the data visualised covers 5 seconds long, then 10 seconds, then 30 seconds, etc.</p>
<p>The problem is that Matplotlib uses the number of seconds as a guide to determine the length of the graph -- meaning that a graph covering only 5 seconds will have the same length as one covering 30 seconds. See these two examples (NB: these are separate graphs, not subplots of the same graph):</p>
<p><a href="https://i.sstatic.net/tZ06j.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tZ06j.png" alt="Graph covering 5 seconds" /></a></p>
<p><a href="https://i.sstatic.net/fJQsj.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fJQsj.png" alt="Graph covering 30 seconds" /></a></p>
<p>In the <a href="https://matplotlib.org/stable/api/_as_gen/matplotlib.pyplot.xlim.html" rel="nofollow noreferrer">documentation for xlim</a> I read that I can use it to turn autoscaling off, and I successfully did that. The upper graph now looks like this:</p>
<p><a href="https://i.sstatic.net/2fj7v.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2fj7v.png" alt="Graph covering 5 seconds, scaled" /></a></p>
<p>which is much better.</p>
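Roughly what I did for that step (simplified; the real data, labels and styling are omitted):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2, 3, 4, 5], [0, 1, 0, 1, 0, 1])
ax.set_xlim(0, 30)  # fixed 30-second axis so every case shares the same scale

assert ax.get_xlim() == (0.0, 30.0)
```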
<p>Is it now also possible to cut the graph off after 5 seconds, so that I don't have all that empty space at the end? Like this (NB: I faked this example with a graphics program and it's a little bit too small; it should be exactly 1/6 the size of the one above):</p>
<p><a href="https://i.sstatic.net/65sAut.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65sAut.png" alt="Graph covering 5 seconds, scaled and cut" /></a></p>
| <python><matplotlib><xticks> | 2023-09-13 12:11:10 | 1 | 732 | rdv |
77,096,942 | 20,088,885 | Why does nothing show in my PyCharm terminal when I run my Odoo dev server? | <p>I'm having trouble in my Odoo development right now because, after changing some parts, my terminal doesn't show anything when I run my <code>odoo config</code>. My models show when I re-run the program, but when I edit <code>xml</code> files, I don't know if I have an error or anything because no message is being shown.</p>
<p>I've been following tutorials on the internet but it doesn't work.</p>
<p>So this is what it shows when I run <code>odoo-bin configuration</code>: I get an empty terminal. It works when I browse to <code>localhost:8069</code>, but the problem is I don't know what the error messages are.</p>
<p><a href="https://i.sstatic.net/912YM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/912YM.png" alt="enter image description here" /></a></p>
<p>My configuration is like this.</p>
<p><a href="https://i.sstatic.net/0PZxn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/0PZxn.png" alt="enter image description here" /></a></p>
<p>And this is my directory</p>
<p><a href="https://i.sstatic.net/p21Oe.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/p21Oe.png" alt="enter image description here" /></a></p>
<p>I use <code>venv</code> for installing the dependencies for the Odoo development, stored in <code>requirements.txt</code>.</p>
<p>my python interpreter's path is <code>c:\Odoo\odoo16\venv\Scripts\python.exe</code></p>
<p><strong>EDIT</strong></p>
<p>Hello, an update: I think I found the problem but I don't know how to configure it. How can I show the <code>odoo.log</code> output in my terminal?</p>
| <python><odoo><odoo-10> | 2023-09-13 12:07:46 | 2 | 785 | Stykgwar |
77,096,912 | 1,652,219 | How to add a number of months to a date in Polars? | <p>I need to carry out a very simple operation in Polars, but the documentation and examples I have been finding are super convoluted. I simply have a date, and I would like to create a range running from the first day in the following month until the first day of a month twelve months later.</p>
<p>I have a date:<br />
date = 2023-01-15</p>
<p>I want to find these two dates:<br />
range_start = 2023-02-01<br />
range_end = 2024-02-01</p>
<p>How is this done in Polars?</p>
<h1>In datetime in Python</h1>
<pre><code>from datetime import datetime
from dateutil import relativedelta
my_date = datetime.fromisoformat("2023-01-15")
start = my_date.replace(day=1) + relativedelta.relativedelta(months=1)
end = start + relativedelta.relativedelta(months=12)
</code></pre>
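The same month arithmetic can also be done with the standard library alone (my own helper, shown only to make the two target dates fully explicit):

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    # first day of the month `months` after d's month (hypothetical helper name)
    total = d.year * 12 + (d.month - 1) + months
    return date(total // 12, total % 12 + 1, 1)

my_date = date(2023, 1, 15)
start = add_months(my_date, 1)   # 2023-02-01
end = add_months(my_date, 13)    # 2024-02-01

assert start == date(2023, 2, 1)
assert end == date(2024, 2, 1)
```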
<h1>Polars?</h1>
<pre><code># The format of my date
import polars as pl
my_date_pl = pl.lit(datetime.fromisoformat("2023-01-15"))
????
</code></pre>
| <python><datetime><timedelta><python-polars> | 2023-09-13 12:03:47 | 2 | 3,944 | Esben Eickhardt |
77,096,668 | 9,836,333 | Number of metrics allowed in Google Analytics Data API (GA4) | <p>I'm not able to find any (official) documentation saying what the limit is on the number of metrics that can be queried within the same API call. From the developer tool (<a href="https://ga-dev-tools.google/ga4/query-explorer/" rel="nofollow noreferrer">https://ga-dev-tools.google/ga4/query-explorer/</a>) I can tell that the limit is 10, however I can't tell if this limit is coming from the API (Python) or from the developer interface. Can someone please confirm and share a reference to the appropriate doc, if any? Thanks!</p>
<p><a href="https://i.sstatic.net/InBRd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/InBRd.png" alt="enter image description here" /></a></p>
| <python><google-analytics-4> | 2023-09-13 11:30:19 | 1 | 810 | Mike |
77,096,518 | 194,000 | ModuleNotFoundError after installing dependency | <p>I'm hoping to contribute to <a href="https://github.com/simonw/llm" rel="nofollow noreferrer">this Python library</a> and am following <a href="https://llm.datasette.io/en/stable/contributing.html" rel="nofollow noreferrer">the contribution instructions</a>.</p>
<p>I've followed the instructions to check out the code and make a virtual environment using <code>venv</code>. Then I installed the dependencies, which seems to run fine:</p>
<pre><code>(venv) % pip install -e '.[test]'
...
Building wheels for collected packages: llm
Building editable for llm (pyproject.toml) ... done
Created wheel for llm: filename=llm-0.10-0.editable-py3-none-any.whl size=8542 sha256=991975d8a5f002bd8a3325ee47ef673513538241229ab670fbb267bac187f143
Stored in directory: /private/var/folders/h5/r_p1fbvx13g1hf0c15pjx60h0000gn/T/pip-ephem-wheel-cache-tmra42no/wheels/cb/85/32/0bf57228d9eab277b05cd64c34a0e608ae4cbaebc42529b8ce
Successfully built llm
Installing collected packages: llm
Attempting uninstall: llm
Found existing installation: llm 0.10
Uninstalling llm-0.10:
Successfully uninstalled llm-0.10
Successfully installed llm-0.10
</code></pre>
<p>I'm trying to install the <code>llm</code> library to work on the code, following <a href="https://llm.datasette.io/en/stable/contributing.html" rel="nofollow noreferrer">these instructions</a>.</p>
<p>Now I try to run <code>pytest</code>, but I get this error:</p>
<pre><code>(venv) % pytest
ImportError while loading conftest '/Users/me/llm/tests/conftest.py'.
tests/conftest.py:3: in <module>
import llm
E ModuleNotFoundError: No module named 'llm'
</code></pre>
<p>This is weird since in the previous step, the <code>llm</code> library seemed to have installed OK.</p>
<p>What am I doing wrong?</p>
<p>UPDATE:</p>
<p>Checking python paths and installed packages:</p>
<pre><code>(venv) % which python3
/Users/me/llm/venv/bin/python3
(venv) % python3 -c 'import sys; print(sys.path)'
[...'/Users/me/llm/venv/lib/python3.11/site-packages']
(venv) % ls /Users/me/llm/venv/lib/python3.11/site-packages
... llm-0.10.dist-info
</code></pre>
<p>If Python3 has the virtual environment's <code>site-packages</code> directory in its path, and llm is in there, why can't the test script see llm?</p>
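A small stdlib check I ran to compare interpreters (diagnostic only): if the <code>pytest</code> on my PATH prints a different pair than <code>python3</code> does, it is not the venv's pytest and would not see the editable install.

```python
import sys
import sysconfig

# which interpreter is running, and where it resolves site-packages from
print(sys.executable)
purelib = sysconfig.get_paths()["purelib"]
print(purelib)

assert isinstance(purelib, str) and purelib
```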
| <python><pytest><python-venv> | 2023-09-13 11:08:16 | 0 | 66,078 | Richard |
77,096,464 | 16,092,107 | Why does pandas flatten out arrays? | <p>I have some data that looks like this:</p>
<pre><code> date temperature
0 2021-6-1 [14.1, 16.5]
1 2021-6-10 [19.0, 18.4]
2 2021-6-11 [24.4, 24.8]
3 2021-6-12 [25.4]
</code></pre>
<p>That I built with a groupby:</p>
<pre class="lang-py prettyprint-override"><code>data = data.groupby(['date'])['temperature'].apply(list).reset_index()
</code></pre>
<p>I would like to sort out every element where the size of the list are not 2, this is what I did:</p>
<pre class="lang-py prettyprint-override"><code>data = data[data['temperature'].apply(len) == 2]
</code></pre>
<p>This throws an error, <code>object of type 'float' has no len()</code>, so I tried to see what was going on; it turns out the following:</p>
<pre><code>newdataset = newdataset[data['temperature'].apply(lambda x: print(x))]
</code></pre>
<p>And this just printed out all individual array elements, and not the list.</p>
<p>Why does pandas flatten out such an iteration? How can I:</p>
<ul>
<li>Iterate over my lists?</li>
<li>Keep only lists of length 2?</li>
</ul>
<h4>Edit 1:</h4>
<p>This is the info of the table, after building it with groupby / apply :</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 92 entries, 0 to 91
Data columns (total 2 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 date 92 non-null object
1 temperature 92 non-null object
dtypes: object(2)
memory usage: 1.6+ KB
None
Empty DataFrame
Columns: [date, temperature]
Index: []
[]
</code></pre>
<p>As requested, here is some more info:
<code>df[df['temperature'].str.len().isna()]</code> gives:</p>
<pre><code>Empty DataFrame Columns: [date, temperature] Index: []
</code></pre>
<p>And <code>df.to_dict('list')</code>:</p>
<pre><code>{
'date': ['2021-6-1', '2021-6-10', ...],
'temperature': [array([14.1, 16.5]), array([19. , 18.4]), ...]
}
</code></pre>
| <python><pandas> | 2023-09-13 11:00:37 | 1 | 800 | LucioleMaléfique |
77,095,944 | 498,690 | Why does polars rolling corr produce both null and NaN instead of one or the other? | <p><strong>Update:</strong> Polars no longer produces null in this case. The example now returns 2 NaN values.</p>
<hr />
<pre class="lang-py prettyprint-override"><code>import polars as pl
ts_count_df = pl.DataFrame([pl.Series("ts", [
1687743109438,
1687831935720,
1687958569223,
1687978376064,
1688095371395,
1688213575425,
1688262561054,
1688281964195,
1688362014828,
1688442925287
]), pl.Series("count", [
0,
0,
19301,
67,
0,
3009,
1871,
0,
95,
12,
])]).with_columns(
pl.from_epoch("ts", time_unit="ms")
).sort("ts")
with_corr = ts_count_df.rolling(index_column="ts", period="3d").agg(
pl.corr("count",
"count").alias("corr"),
pl.mean("count").alias("avg"))
print(with_corr.head())
</code></pre>
<p>outputs:</p>
<pre><code>┌─────────────────────────┬────────────────────┬────────────────────┐
│ ts ┆ corr ┆ avg │
│ --- ┆ --- ┆ --- │
│ datetime[ms] ┆ f64 ┆ f64 │
╞═════════════════════════╪════════════════════╪════════════════════╡
│ 2023-06-26 01:31:49.438 ┆ null ┆ 0 │
│ 2023-06-27 02:12:15.720 ┆ NaN ┆ 0 │
│ 2023-06-28 13:22:49.223 ┆ 1.0000000000000002 ┆ 6433.666666666667 │
│ 2023-06-28 18:52:56.064 ┆ 1.0000000000000002 ┆ 4842 │
│ 2023-06-30 03:22:51.395 ┆ 1 ┆ 6456 │
│ 2023-07-01 12:12:55.425 ┆ 1.0000000000000002 ┆ 5594.25 │
│ 2023-07-02 01:49:21.054 ┆ 1 ┆ 1626.6666666666667 │
│ 2023-07-02 07:12:44.195 ┆ 1 ┆ 1220 │
│ 2023-07-03 05:26:54.828 ┆ 1 ┆ 1243.75 │
│ 2023-07-04 03:55:25.287 ┆ 1 ┆ 997.4 │
└─────────────────────────┴────────────────────┴────────────────────┘
</code></pre>
<p>Why are the first two values of corr <code>null</code> and <code>NaN</code>? I would expect the first two days to be both <code>NaN</code> or both <code>null</code>, not one of each.</p>
| <python><correlation><python-polars> | 2023-09-13 09:47:52 | 1 | 3,818 | Capacytron |
77,095,905 | 860,202 | Finding multisets of numbers within a defined range, efficiently | <p>The problem consists of a collection of N arrays of positive real numbers. The total amount of numbers in the collection is K. The arrays can be of variable length. All K numbers in the collection must be divided over subsets, each number is drawn once but numbers in the collection can be duplicated (so we are not looking for sets but multisets). The requirement is that all numbers in a multiset must be within a range "width". The width of a multiset is the distance between its smallest and largest number.</p>
<p>It can be assumed that no such multisets exist within each individual array. The numbers within each array will always be spaced apart further than "width". The arrays can be assumed to be ascending. The size of the multisets is [1, N]; they may not contain more than one number from the same array.</p>
<p>Some example data with N=4 and K=28:
(the real problem is closer to N=1000 and K=10e6)</p>
<pre><code>input = [[484.6, 512.2, 623.4, 784.3, 785.2, 786.1, 812.9],
[452.2, 484.5, 512.3, 623.5, 782.6, 784.1, 786.3, 884.2],
[452.1, 512.1, 623.4, 784.2, 785.2, 812.9],
[484.8, 512.2, 623.3, 684.3, 785.3, 786.2, 812.9]]
width = 0.3
solution = [[452.2, 452.1],
[484.6, 484.5, 484.8],
[512.2, 512.3, 512.1, 512.2],
[623.4, 623.5, 623.4, 623.3],
[684.3],
[782.6],
[784.3, 784.1, 784.2],
[785.2, 785.2, 785.3],
[786.1, 786.3, 786.2],
[812.9, 812.9, 812.9],
[884.2]]
</code></pre>
<p>Finding one solution is ok, there is no need to find all possible solutions. It is also desired that the algorithm would be easily compatible with the ability to have as a solution the coordinates of the numbers as they appeared in the original 2D input, rather than the numbers themselves.</p>
<p>I have the following algorithm implemented in Python:</p>
<p>Initialise the list of sets as the first vector.
For each remaining vector, for each value: check if the value is within range "width" of the mean values of sets. If yes, add that value to the first encountered set that meet the width requirement and update the mean of that set. If no, add that value as a new set. The sets to be found are called features here.</p>
<pre><code>def find_features(data: list[list[float]], width: float = 0.3) -> list[list[float]]:
    if not data:
        return []
    # initialize features list as the first vector
    features = [[x] for x in data[0]]
    features_mean = [x for x in data[0]]
    # loop through the remaining vectors starting from the second
    for vector in data[1:]:
        for number in vector:
            added = False
            for i, feature in enumerate(features_mean):
                # if a value is within tolerance width of a feature, add it to that feature and update the mean of that feature
                if abs(feature - number) < width:
                    features[i].append(number)
                    features_mean[i] = sum(features[i]) / len(features[i])
                    added = True
                    break
            # if not added to any existing feature, add it as a new one
            if not added:
                features.append([number])
                features_mean.append(number)
    return features
</code></pre>
<p>One optimisation I tried was initially sorting the features list, and then instead of iterating through the feature_mean list, using bisect to quickly find the closest feature_mean. The problem with that is that the list doesn't necessarily stay sorted as you are inserting new values and updating the means (it does in this toy example, but may not on big datasets), so you would have to sort it again after updating a mean, which defeats the speed gain from the binary search.
</p>
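The abandoned bisect variant looked roughly like this (simplified sketch with hard-coded means and a hypothetical helper name):

```python
import bisect

# keep the feature means sorted so the nearest one is found in O(log n)
means = [452.15, 484.65, 512.2]

def nearest_index(x: float) -> int:
    i = bisect.bisect_left(means, x)
    # the nearest mean is either the insertion neighbour on the left or right
    candidates = [j for j in (i - 1, i) if 0 <= j < len(means)]
    return min(candidates, key=lambda j: abs(means[j] - x))

assert nearest_index(484.8) == 1
assert nearest_index(451.0) == 0
```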
| <python><algorithm><optimization><multiset> | 2023-09-13 09:41:45 | 1 | 684 | jonas87 |
77,095,777 | 10,687,907 | fill the dataframes with dummies | <p>I have a dataframe that lists clients throughout the years. I'd like to take all customers that ever appear, group by year and week, and make every single one appear; if one is not appearing for a specific week I'd like to add None values to the rest of the row but still create a row for this customer. There are multiple years, so I'd like this done for each week of each year. In the end I want to retrieve the complete database with the added rows.</p>
<p>I tried this but it does not work</p>
<pre><code>import pandas as pd

data = {'years': [2021, 2021, 2022],
        'weeks': [1, 1, 3],
        'customers': ['A', 'B', 'C'],
        'cars': [2, 1, 2],
        'pens': [5, 2, 3],
        'furnitures': [1, 2, 4]}
df = pd.DataFrame(data)

grouped = df.groupby(['years', 'weeks'])
unique_customers = grouped['customers'].unique()

result = pd.DataFrame(columns=['years', 'weeks', 'customers', 'cars', 'pens', 'furnitures'])
for (year, week), customers in unique_customers.iteritems():
    for customer in customers:
        result = result.append({'years': year, 'weeks': week, 'customers': customer}, ignore_index=True)

result['cars'].fillna('None', inplace=True)
result['pens'].fillna('None', inplace=True)
result['furnitures'].fillna('None', inplace=True)

result.sort_values(by=['years', 'weeks', 'customers'], inplace=True)
result.reset_index(drop=True, inplace=True)

# DESIRED OUTPUT
data = {'years': [2021, 2021, 2021, 2022, 2022, 2022],
        'weeks': [1, 1, 1, 3, 3, 3],
        'customers': ['A', 'B', 'C', 'A', 'B', 'C'],
        'cars': [2, 1, None, None, None, 1],
        'pens': [5, 2, None, None, None, 0],
        'furnitures': [1, 2, None, None, None, 1]}
print(pd.DataFrame(data))
</code></pre>
</code></pre>
<p>Dataframe</p>
<p><a href="https://i.sstatic.net/Uyaqi.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Uyaqi.png" alt="enter image description here" /></a></p>
<p>Desired Dataframe</p>
<p><a href="https://i.sstatic.net/nuzRp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nuzRp.png" alt="enter image description here" /></a></p>
| <python><pandas> | 2023-09-13 09:23:00 | 0 | 807 | Achille G |
77,095,776 | 301,513 | FastAPI pydantic data validation for put method if body only contains the updated data | <p>I am learning FastAPI, and I understand that data validation using <strong>pydantic</strong> is one of its key features. But after reading the PUT-method example in <a href="https://fastapi.tiangolo.com/tutorial/body-updates/" rel="nofollow noreferrer">its tutorial</a>, I have a question: if I only want the PUT body to contain the updated data (as the URL already has the id), how do I do that?</p>
<p>Using the sample code from the tutorial as an example of what I mean:</p>
<pre><code>from fastapi import FastAPI
from fastapi.encoders import jsonable_encoder
from pydantic import BaseModel
app = FastAPI()
class Item(BaseModel):
id: str
description: str = "default description"
price: Union[float, None] = None
tax: float = 10.5
tags: list[str] = []
...
@app.put("/items/{item_id}")
#async def update_item(item_id: str, item:Item):
async def update_item(item_id: str, item):
pass
</code></pre>
<p>If I code <code>async def update_item(item_id: str, item:Item)</code>, then the body has to contain the <code>id</code> property, otherwise I will get <code>422 "field required"</code>. But I feel that is unnecessary because the URL <code>/items/{item_id}</code> already contains the <code>id</code>; I just want the body to contain the updated data.</p>
<p>But when I coded <code>async def update_item(item_id: str, item)</code>, to my surprise <code>item</code> became a <strong>required</strong> query parameter!</p>
<p>As its document shows:</p>
<p><a href="https://i.sstatic.net/ikQEI.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ikQEI.png" alt="put with query parameters" /></a></p>
<p>Why does it become a query parameter then? This is my second question.</p>
<p>I feel that is wrong because I prefer query parameters for GET requests only.</p>
<p><strong>--- Update ---</strong></p>
<p>I guess the 2 methods Chris provided are the way FastAPI solves my first question (whether <code>id</code> should be one of Item's properties is another question, e.g. check <a href="https://stackoverflow.com/questions/33452765/what-to-do-when-rest-post-provides-an-id">What to do when REST POST provides an ID?</a>), but I come from a Node.js background, so I would like to provide the NestJS solution in comparison.</p>
<p>Using Nestjs sample code here <a href="https://docs.nestjs.com/controllers#full-resource-sample" rel="nofollow noreferrer">https://docs.nestjs.com/controllers#full-resource-sample</a></p>
<pre><code>@Controller('cats')
export class CatsController {
  @Post()
  create(@Body() createCatDto: CreateCatDto) {
    return 'This action adds a new cat';
  }

  ...

  @Put(':id')
  update(@Param('id') id: string, @Body() updateCatDto: UpdateCatDto) {
    return `This action updates a #${id} cat`;
  }
</code></pre>
<p>As <a href="https://docs.nestjs.com/techniques/validation#mapped-types" rel="nofollow noreferrer">https://docs.nestjs.com/techniques/validation#mapped-types</a> explains "The <code>PartialType()</code> function returns a type (class) with all the properties of the input type set to optional."</p>
<pre><code>export class CreateCatDto {
  name: string;
  age: number;
  breed: string;
}

export class UpdateCatDto extends PartialType(CreateCatDto) {}
</code></pre>
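<p>For comparison on the Python side, a hedged sketch (assuming Pydantic v2 for <code>model_fields</code> and <code>model_dump</code>) of a rough analogue of Nest's <code>PartialType()</code>, built with <code>create_model</code>:</p>

```python
from typing import Optional

from pydantic import BaseModel, create_model


class CreateCatDto(BaseModel):
    name: str
    age: int
    breed: str


# rough analogue of Nest's PartialType(): same fields, all made optional
UpdateCatDto = create_model(
    'UpdateCatDto',
    **{name: (Optional[field.annotation], None)
       for name, field in CreateCatDto.model_fields.items()},
)

assert UpdateCatDto(name='Tom').model_dump() == {
    'name': 'Tom', 'age': None, 'breed': None}
```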
| <python><fastapi><pydantic> | 2023-09-13 09:22:58 | 1 | 12,833 | Qiulang |
77,095,668 | 12,827,931 | Repeating ndarray along axis | <p>Assume an array of size (in general) <code>TxNxM</code></p>
<pre><code>arr = np.array([[[0.83, 0.3 , 0.22],
                 [0.17, 0.33, 0.37]],

                [[0.  , 0.28, 0.09],
                 [0.  , 0.1 , 0.18]],

                [[0.  , 0.  , 0.08],
                 [0.  , 0.  , 0.05]],

                [[0.  , 0.  , 0.  ],
                 [0.  , 0.  , 0.  ]]])
</code></pre>
<p>What I'd like to get is an array of size <code>TxMxNxN</code> that looks like this:</p>
<pre><code>np.stack((np.repeat(arr[0,:,0:T-1], N, axis = 0).T.reshape(T-1, N, N),
          np.repeat(arr[1,:,0:T-1], N, axis = 0).T.reshape(T-1, N, N),
          np.repeat(arr[2,:,0:T-1], N, axis = 0).T.reshape(T-1, N, N),
          np.repeat(arr[3,:,0:T-1], N, axis = 0).T.reshape(T-1, N, N)))
</code></pre>
<p>so the output is</p>
<pre><code>array([[[[0.83, 0.83],
         [0.17, 0.17]],

        [[0.3 , 0.3 ],
         [0.33, 0.33]],

        [[0.22, 0.22],
         [0.37, 0.37]]],


       [[[0. , 0. ],
         [0. , 0. ]],

        [[0.28, 0.28],
         [0.1 , 0.1 ]],

        [[0.09, 0.09],
         [0.18, 0.18]]],


       [[[0. , 0. ],
         [0. , 0. ]],

        [[0. , 0. ],
         [0. , 0. ]],

        [[0.08, 0.08],
         [0.05, 0.05]]],


       [[[0. , 0. ],
         [0. , 0. ]],

        [[0. , 0. ],
         [0. , 0. ]],

        [[0. , 0. ],
         [0. , 0. ]]]])
</code></pre>
<p>But this approach doesn't seem to be efficient code-wise and I feel that the same output is achievable using <code>np.repeat</code> or <code>np.tile</code>. The question is how do I do that?</p>
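<p>For what it's worth, the transformation described above can be written as a transpose plus a repeat along a new trailing axis; a hedged sketch:</p>

```python
import numpy as np

arr = np.array([[[0.83, 0.3 , 0.22],
                 [0.17, 0.33, 0.37]],
                [[0.  , 0.28, 0.09],
                 [0.  , 0.1 , 0.18]],
                [[0.  , 0.  , 0.08],
                 [0.  , 0.  , 0.05]],
                [[0.  , 0.  , 0.  ],
                 [0.  , 0.  , 0.  ]]])
T, N, M = arr.shape

# move M in front of N, then repeat each column N times along a new last axis
out = np.repeat(arr.transpose(0, 2, 1)[..., None], N, axis=-1)
assert out.shape == (T, M, N, N)
```

A broadcast view (<code>np.broadcast_to</code> instead of <code>np.repeat</code>) would avoid the copy entirely if the result is only read.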
| <python><arrays><pandas><numpy> | 2023-09-13 09:09:42 | 3 | 447 | thesecond |
77,095,565 | 2,749,397 | plt.pause(0) never returns, even if I close the window | <p>TL;DR How can I return to the terminal prompt when my script ends with <code>plt.pause(0)</code>?</p>
<hr />
<p>This is my code, a sort of animation without <code>matplotlib.animation</code>:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.lines import Line2D
from sys import exit

def websocket(tmax=1000, dt=0.005):
    from numpy import sin
    from time import sleep
    w1 = 0.3 ; w2 = 0.4 ; slope = 0.0025
    for t in range(0, tmax):
        sleep(dt)
        yield slope*t + 0.5*(sin(w1*t)-sin(w2*t))

t, y = [], []
tmin = 0 ; dtmax = 101

fig, ax = plt.subplots(figsize=(5, 2), dpi=96, layout='tight')
line = ax.plot(0, 0)[0]
ax.set_ylim(-1, 4)
ax.set_xlim(tmin, tmin+dtmax)

for t_, y_ in enumerate(websocket(400, dt=0.002)):
    t.append(t_), y.append(y_)
    tmin = max(tmin, t_-dtmax)
    line.remove()
    ax.set_xlim(tmin, tmin+dtmax)
    line = Line2D(t[tmin:tmin+dtmax],
                  y[tmin:tmin+dtmax])
    ax.add_line(line)
    fig.canvas.draw()
    plt.pause(0.0001)

if True:
    plt.pause(0)
else:
    plt.pause(4)
</code></pre>
<p>When I run with <code>pause(0)</code> it hangs displaying the window, with which I can interact as expected. But when I close the plot window (either via the window manager or using the "q" keybinding), the script hangs forever until I enter <code>^C</code> in the controlling terminal; at that point I get the following backtrace:</p>
<pre><code>^CTraceback (most recent call last):
File "/home/boffi/ppp.py", line 31, in <module>
plt.pause(0)
File "/usr/lib64/python3.11/site-packages/matplotlib/pyplot.py", line 557, in pause
canvas.start_event_loop(interval)
File "/usr/lib64/python3.11/site-packages/matplotlib/backends/backend_qt.py", line 407, in start_event_loop
with _maybe_allow_interrupt(event_loop):
File "/usr/lib64/python3.11/contextlib.py", line 144, in __exit__
next(self.gen)
File "/usr/lib64/python3.11/site-packages/matplotlib/backends/qt_compat.py", line 269, in _maybe_allow_interrupt
old_sigint_handler(*handler_args)
KeyboardInterrupt
</code></pre>
<p>If I run with <code>pause(4)</code> and I close the window immediately after completion, the script stays idle for about 4 seconds and eventually I get the terminal prompt back.</p>
<p>How can I return to the terminal prompt when my script ends with <code>plt.pause(0)</code>?</p>
<hr />
<p>PS I understand that the correct solution probably won't end with <code>plt.pause(0)</code>.</p>
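<p>One hedged alternative for the ending: finish with <code>plt.show()</code> instead of <code>plt.pause(0)</code>; with an interactive backend it blocks and returns as soon as the window is closed (the <code>Agg</code> backend is used in this sketch only so it runs headless):</p>

```python
import matplotlib
matplotlib.use("Agg")        # headless backend here only so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [0, 1])

plt.show(block=True)         # no-op under Agg; blocks until close under Qt/Tk
plt.close(fig)               # after this the script falls through and exits
```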
| <python><matplotlib> | 2023-09-13 08:57:21 | 1 | 25,436 | gboffi |
77,095,370 | 9,414,470 | Django django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet | <p>I have an app for surveys. I am trying to import my model in <code>views.py</code> and it's giving an error.</p>
<p>My code is:</p>
<p><code>settings.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>INSTALLED_APPS = [
'survey.apps.SurveyConfig',
'survey.urls',
'django.contrib.admin',
'django.contrib.auth',
'django.contrib.contenttypes',
'django.contrib.sessions',
'django.contrib.messages',
'django.contrib.staticfiles',
]
</code></pre>
<p><code>urls.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path('admin/', admin.site.urls),
path('survey/', include('survey.urls')),
]
</code></pre>
<p><code>survey/urls.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.urls import path
from . import views
urlpatterns = [
path('', views.list_surveys, name='list-surveys'),
path('create/', views.create_survey, name='list-surveys'),
]
</code></pre>
<p><code>survey/views.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.shortcuts import render
from .models import MultipleChoiceQuestion
surveys = [
{
"id": "S001",
"questions": [
["How much do you like Userfacet?", 1],
["How much do you like frontend?", 0],
["How much do you like backend?", 2],
["How much do you like fullstack?", 0],
["How much do you like Userfacet?", 3],
["How much do you like frontend?", 0],
["How much do you like backend?", 4],
["How much do you like fullstack?", 0],
["How much do you like Userfacet?", 5],
["How much do you like frontend?", 0],
["How much do you like backend?", 6],
["How much do you like fullstack?", 0],
["How much do you like Userfacet?", 7],
["How much do you like frontend?", 0],
["How much do you like backend?", 8],
["How much do you like fullstack?", 0],
["How much do you like Userfacet?", 9],
["How much do you like frontend?", 0],
["How much do you like backend?", 10],
["How much do you like fullstack?", 0]
]
}
]
# Create your views here.
def list_surveys(request):
context = { "surveys": {} }
for question in MultipleChoiceQuestion.objects.all():
try:
context["surveys"][question.survey_id].append(question)
except KeyError:
context["surveys"][question.survey_id] = []
print(context)
return render(request, 'survey/list_surveys.html', context = context)
def create_survey(request):
return render(request, 'survey/create_survey.html')
</code></pre>
<p><code>survey/models.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.db import models
from django.contrib.auth.models import User
# Create your models here.
class MultipleChoiceQuestion(models.Model):
class MultipleChoiceAnswer(models.IntegerChoices):
NONE = 0, 'NONE'
ONE = 1, 'One'
TWO = 2, 'TWO'
THREE = 3, 'THREE'
FOUR = 4, 'FOUR'
FIVE = 5, 'FIVE'
SIX = 6, 'SIX'
SEVEN = 7, 'SEVEN'
EIGHT = 8, 'EIGHT'
NINE = 9, 'NINE'
TEN = 10, 'TEN'
survey_id = models.CharField(max_length=10)
user = models.ForeignKey(User, on_delete=models.CASCADE)
question = models.CharField(max_length=50)
answer = models.IntegerField(choices=MultipleChoiceAnswer.choices, default=MultipleChoiceAnswer.NONE)
def __str__(self):
return 'Question'
class Similarity(models.Model):
survey_id = models.CharField(max_length=10)
user_1 = models.ForeignKey(User, related_name='user_1', on_delete=models.CASCADE)
user_2 = models.ForeignKey(User, related_name='user_2', on_delete=models.CASCADE)
value = models.IntegerField(default=0)
def __str__(self):
return 'Value: ' + str(self.value)
</code></pre>
<p><code>survey/admin.py</code>:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib import admin
from .models import MultipleChoiceQuestion, Similarity
# Register your models here.
admin.site.register(MultipleChoiceQuestion)
admin.site.register(Similarity)
</code></pre>
<hr />
<p>Full error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\jaide\Documents\marks_management_system\django_app\manage.py", line 22, in <module>
main()
File "C:\Users\jaide\Documents\marks_management_system\django_app\manage.py", line 18, in main
execute_from_command_line(sys.argv)
File "C:\Program Files\Python311\Lib\site-packages\django\core\management\__init__.py", line 442, in execute_from_command_line
utility.execute()
File "C:\Program Files\Python311\Lib\site-packages\django\core\management\__init__.py", line 416, in execute
django.setup()
File "C:\Program Files\Python311\Lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Program Files\Python311\Lib\site-packages\django\apps\registry.py", line 91, in populate
app_config = AppConfig.create(entry)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\django\apps\config.py", line 193, in create
import_module(entry)
File "C:\Program Files\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\jaide\Documents\marks_management_system\django_app\survey\urls.py", line 19, in <module>
from . import views
File "C:\Users\jaide\Documents\marks_management_system\django_app\survey\views.py", line 2, in <module>
from .models import MultipleChoiceQuestion
File "C:\Users\jaide\Documents\marks_management_system\django_app\survey\models.py", line 2, in <module>
from django.contrib.auth.models import User
File "C:\Program Files\Python311\Lib\site-packages\django\contrib\auth\models.py", line 3, in <module>
from django.contrib.auth.base_user import AbstractBaseUser, BaseUserManager
File "C:\Program Files\Python311\Lib\site-packages\django\contrib\auth\base_user.py", line 57, in <module>
class AbstractBaseUser(models.Model):
File "C:\Program Files\Python311\Lib\site-packages\django\db\models\base.py", line 129, in __new__
app_config = apps.get_containing_app_config(module)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\django\apps\registry.py", line 260, in get_containing_app_config
self.check_apps_ready()
File "C:\Program Files\Python311\Lib\site-packages\django\apps\registry.py", line 138, in check_apps_ready
raise AppRegistryNotReady("Apps aren't loaded yet.")
django.core.exceptions.AppRegistryNotReady: Apps aren't loaded yet.
</code></pre>
<p>When trying to call <code>django.setup()</code> myself, I get another error:</p>
<pre><code>Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\threading.py", line 1038, in _bootstrap_inner
self.run()
File "C:\Program Files\Python311\Lib\threading.py", line 975, in run
self._target(*self._args, **self._kwargs)
File "C:\Program Files\Python311\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Program Files\Python311\Lib\site-packages\django\core\management\commands\runserver.py", line 125, in inner_run
autoreload.raise_last_exception()
File "C:\Program Files\Python311\Lib\site-packages\django\utils\autoreload.py", line 87, in raise_last_exception
raise _exception[1]
File "C:\Program Files\Python311\Lib\site-packages\django\core\management\__init__.py", line 394, in execute
autoreload.check_errors(django.setup)()
File "C:\Program Files\Python311\Lib\site-packages\django\utils\autoreload.py", line 64, in wrapper
fn(*args, **kwargs)
File "C:\Program Files\Python311\Lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Program Files\Python311\Lib\site-packages\django\apps\registry.py", line 91, in populate
app_config = AppConfig.create(entry)
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\django\apps\config.py", line 193, in create
import_module(entry)
File "C:\Program Files\Python311\Lib\importlib\__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen importlib._bootstrap>", line 1206, in _gcd_import
File "<frozen importlib._bootstrap>", line 1178, in _find_and_load
File "<frozen importlib._bootstrap>", line 1149, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 690, in _load_unlocked
File "<frozen importlib._bootstrap_external>", line 940, in exec_module
File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
File "C:\Users\jaide\Documents\marks_management_system\django_app\survey\urls.py", line 19, in <module>
from . import views
File "C:\Users\jaide\Documents\marks_management_system\django_app\survey\views.py", line 3, in <module>
django.setup()
File "C:\Program Files\Python311\Lib\site-packages\django\__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "C:\Program Files\Python311\Lib\site-packages\django\apps\registry.py", line 83, in populate
raise RuntimeError("populate() isn't reentrant")
RuntimeError: populate() isn't reentrant
</code></pre>
<p>Any help appreciated!</p>
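<p>Not a certain diagnosis, but the first traceback shows the app registry failing while importing the <code>'survey.urls'</code> entry of <code>INSTALLED_APPS</code>, which pulls in <code>views</code> and <code>models</code> before the apps are ready. A hedged sketch of settings that avoid that import path:</p>

```python
# INSTALLED_APPS lists applications, not URLconf modules; the URLconf is
# already wired up via include('survey.urls') in the project-level urls.py
INSTALLED_APPS = [
    'survey.apps.SurveyConfig',
    # 'survey.urls',  # removed: importing a URLconf here loads models too early
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
]

assert 'survey.urls' not in INSTALLED_APPS
```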
| <python><django> | 2023-09-13 08:32:07 | 1 | 896 | Jaideep Shekhar |
77,095,366 | 11,277,108 | Convert CryptoJS to Python | <p>I'm trying to follow some Node.js code and convert it to Python. I'm now left with the decryption part and I'm struggling:</p>
<pre><code>const CryptoJS = require("crypto-js")
const ee = "#3vlsc6mdk2k400$"
const encrypt_data = "hlXzkPyyhwUYql2Nwl/3AAcRSsZHKf5LyqsAHqSWjP+ZHzfdmQ7bG2cOrf3YxwcZFIlsJNLJOSL/dSj/fFtjWHkeQd21inSUPOkbu2hSD2xMxEkyss8rOIVJAx6NmY9sap852VtmTc2CT4TdXXRduEK4fXASReIX3Eb9V+TMs24t5ow6w8aau+GWZLP9b32ALs4IZeea+dE3YcKtYrZOu/bV7ZLSawlontkgGN9s4QSjUhv43ifxkS6oDHGFkh+4pjjqfLDa2c0fA28otRZUF4uz+UvYAW2b9hZxBVJQU0E45Bf/myuQjZ14KtQr0NdxAMq53PZlki2hRVtnCDErA2e26cK9/bkC6Pz/J0N7rosTYw6TtDRGPYeqM3z645Uew3f3vEcSQLkWWxi1txQPxTbn1MT4HzRtnAbGJOF+GeaAKbwtSt2B86iHjkyEJ+ssmIMsARRjUmhdFmsMF6vuqA5pSgxvYTacg/yzZvy6HVhZBqTpPcaRJGt41efib3zQg8u++yKXdz8MnHicuz32w/osWzcMsC3Cwm5/a1tJZ48xFJdu8YgUsFS6ioNaO9V6vWz8imQZiPEZxd1FLfRynjS8LpvY3+83M2h+A0oExmcd4UaEMCqkklM1A7ssOXeDTqKS8UiZVM3zH6lzNI42QOZE+WYcPvwNzVLanJpZcKqlLupGfOiHuUclEwKrBL8h3wHtU6UmU+VoPJQM82b4pv5vJY/qlUgjLnaWk18A5UV9MF2b81iI3T8i4U8KGeovMhVLdq7YRZFdBG9djQgPRzwfofB/LRz5+aTwKwiTTsmvy4DMP/2iCB7Eiqr7OaKtuaj1n6vt2MdIstqTz/nDEkjLcdrspajdqHnTfUYLEVJvns6KPIKQaQ61I71G7vkEG4MtZ3PRgGy7/zR/B2qAzhaJmHYMZtOfE2OPcPXi3wi9tTYObYaGzpQIqkFGUtpa862bq8qMSXVUpfb8dvDTOyuvURD9FmSHeDHiO6DYhqxqQrfw1aRHK0vu6QcSsGF31vYnrRGR48nZgouqyzUv90Nc9hvyXBcEaYZpCG2qbAArBseD+RRtXeWV1yvV+C7oy68JOxgLJaL1AsLPX81WV9maPy2Ns3IJ64iNvKMebWFtETNtDPIs5amm+wFjERiQ85DK70wucEd3lWWQr7UddSO8U72whJXGbtsC2onskI75uLF3n7XX4goaHrj0IVB3kVqc4O1zMXWvCzype2EerR2E9K/qoBWh5PQRc4bPhrNdoYGSAh18AKtzVOqPgNgzXnW591r4pWMrWW8Tww89sayPZUnxOwDIaf6kFP74+34K+ZWKGVJA9YBPpKfGAfMgOYalnB7YMA4Tn4Hmt4OQtPeArwgR4DBW+HiQ+aFNK04="
var nn = CryptoJS.enc.Utf8.parse(ee)
var rr = CryptoJS.enc.Utf8.parse(ee.toUpperCase())
var ii = CryptoJS.AES.decrypt(encrypt_data, nn, {
    iv: rr,
    mode: CryptoJS.mode.CBC,
    padding: CryptoJS.pad.Pkcs7
})
var xx = ii.toString(CryptoJS.enc.Utf8)
var out = JSON.parse(xx);
</code></pre>
<p>I've got as far as:</p>
<pre><code>import json

from Cryptodome.Util.Padding import pad
from Cryptodome.Cipher import AES


def decrypt():
    ee = "#3vlsc6mdk2k400$"
    encrypt_data = "hlXzkPyyhwUYql2Nwl/3AAcRSsZHKf5LyqsAHqSWjP+ZHzfdmQ7bG2cOrf3YxwcZFIlsJNLJOSL/dSj/fFtjWHkeQd21inSUPOkbu2hSD2xMxEkyss8rOIVJAx6NmY9sap852VtmTc2CT4TdXXRduEK4fXASReIX3Eb9V+TMs24t5ow6w8aau+GWZLP9b32ALs4IZeea+dE3YcKtYrZOu/bV7ZLSawlontkgGN9s4QSjUhv43ifxkS6oDHGFkh+4pjjqfLDa2c0fA28otRZUF4uz+UvYAW2b9hZxBVJQU0E45Bf/myuQjZ14KtQr0NdxAMq53PZlki2hRVtnCDErA2e26cK9/bkC6Pz/J0N7rosTYw6TtDRGPYeqM3z645Uew3f3vEcSQLkWWxi1txQPxTbn1MT4HzRtnAbGJOF+GeaAKbwtSt2B86iHjkyEJ+ssmIMsARRjUmhdFmsMF6vuqA5pSgxvYTacg/yzZvy6HVhZBqTpPcaRJGt41efib3zQg8u++yKXdz8MnHicuz32w/osWzcMsC3Cwm5/a1tJZ48xFJdu8YgUsFS6ioNaO9V6vWz8imQZiPEZxd1FLfRynjS8LpvY3+83M2h+A0oExmcd4UaEMCqkklM1A7ssOXeDTqKS8UiZVM3zH6lzNI42QOZE+WYcPvwNzVLanJpZcKqlLupGfOiHuUclEwKrBL8h3wHtU6UmU+VoPJQM82b4pv5vJY/qlUgjLnaWk18A5UV9MF2b81iI3T8i4U8KGeovMhVLdq7YRZFdBG9djQgPRzwfofB/LRz5+aTwKwiTTsmvy4DMP/2iCB7Eiqr7OaKtuaj1n6vt2MdIstqTz/nDEkjLcdrspajdqHnTfUYLEVJvns6KPIKQaQ61I71G7vkEG4MtZ3PRgGy7/zR/B2qAzhaJmHYMZtOfE2OPcPXi3wi9tTYObYaGzpQIqkFGUtpa862bq8qMSXVUpfb8dvDTOyuvURD9FmSHeDHiO6DYhqxqQrfw1aRHK0vu6QcSsGF31vYnrRGR48nZgouqyzUv90Nc9hvyXBcEaYZpCG2qbAArBseD+RRtXeWV1yvV+C7oy68JOxgLJaL1AsLPX81WV9maPy2Ns3IJ64iNvKMebWFtETNtDPIs5amm+wFjERiQ85DK70wucEd3lWWQr7UddSO8U72whJXGbtsC2onskI75uLF3n7XX4goaHrj0IVB3kVqc4O1zMXWvCzype2EerR2E9K/qoBWh5PQRc4bPhrNdoYGSAh18AKtzVOqPgNgzXnW591r4pWMrWW8Tww89sayPZUnxOwDIaf6kFP74+34K+ZWKGVJA9YBPpKfGAfMgOYalnB7YMA4Tn4Hmt4OQtPeArwgR4DBW+HiQ+aFNK04="

    nn = ee.encode("utf-8")
    rr = ee.upper().encode("utf-8")

    cypher = AES.new(key=nn, mode=AES.MODE_CBC, iv=rr)

    encode_data = encrypt_data.encode("utf-8")
    padded_data = pad(
        data_to_pad=encode_data,
        block_size=16,
        style="pkcs7",
    )
    ii = cypher.decrypt(padded_data)
    xx = ii.decode("utf-8")
    out = json.loads(xx)
    return out


if __name__ == "__main__":
    decrypt()
</code></pre>
<p>However, I'm getting the following error:</p>
<pre><code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0xaf in position 0: invalid start byte
</code></pre>
<p>Where am I going wrong?</p>
| <python><node.js><cryptojs><pycryptodome> | 2023-09-13 08:31:24 | 1 | 1,121 | Jossy |
77,095,278 | 9,753,863 | Farneback's optical flow implementation | <p>I want to integrate Farneback's optical flow in my framework and I started with a Python prototype.</p>
<p>I tried to follow the steps described in <a href="https://link.springer.com/content/pdf/10.1007/3-540-45103-X_50.pdf" rel="nofollow noreferrer">this paper</a> and I compared to OpenCV's output, but I get far worse results.</p>
<p>Here is how I proceeded:</p>
<ol>
<li><p>I have a multiscale approach, with an initial guess dx = dy = 0 by default for each pixel,</p>
</li>
<li><p>I estimate the optical flow at the current scale. Just like OpenCV, I perform the estimation several times at each scale. The estimation consists of:</p>
</li>
</ol>
<ul>
<li><p>Second-order polynomial expansion of each image, i.e. locally estimate the parameters A and b in the equation
<img src="https://chart.googleapis.com/chart?cht=tx&chl=x%5ET%20A%20x%20%2B%20b%5ET%20x%20%2B%20c" alt="eq1" /> for each image</p>
</li>
<li><p>Combine the parameters of both images as suggested in the paper:</p>
</li>
</ul>
<p><img src="https://chart.googleapis.com/chart?cht=tx&chl=A(x)%20%3D%20%5Cfrac%7BA_1(x)%20%2B%20A_2(x%2Bd)%7D%7B2%7D%5C%5C%0A%5CDelta%20b(x)%20%3D%20-%20%5Cfrac%7B1%7D%7B2%7D%5Cleft(%20b_2(x%20%2B%20d)%20-%20b_1(x)%20%5Cright)%20%2B%20A(x)%20d(x)" alt="eq2" /></p>
<ol start="3">
<li><p>Compute <img src="https://chart.googleapis.com/chart?cht=tx&chl=A%5ET(x)A(x)" alt="eq3.1" /> and <img src="https://chart.googleapis.com/chart?cht=tx&chl=A%5ET(x)%20%5CDelta%20b(x)" alt="eq3.2" /></p>
</li>
<li><p>Average <img src="https://chart.googleapis.com/chart?cht=tx&chl=A%5ET(x)A(x)" alt="eq3.1" /> and <img src="https://chart.googleapis.com/chart?cht=tx&chl=A%5ET(x)%20%5CDelta%20b(x)" alt="eq3.2" /> in a neighbourhood</p>
</li>
<li><p>Solve the system :
<img src="https://chart.googleapis.com/chart?cht=tx&chl=%5Chat%7Bd%7D(x)%20%3D%20%20%5Cleft(%20A%5ET(x)A(x)%20%5Cright)%5E%7B-1%7D%20A%5ET(x)%20%5CDelta%20b(x)" alt="eq4" /></p>
</li>
</ol>
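<p>A worked sketch of step 5 for a single pixel (the values are illustrative, not taken from real images):</p>

```python
import numpy as np

# G = A^T A and h = A^T * delta_b, already averaged over the neighbourhood
G = np.array([[4.0, 1.0],
              [1.0, 3.0]])
h = np.array([2.0, 1.0])

# d = (A^T A)^{-1} A^T delta_b, i.e. [dx, dy] for this pixel,
# solved directly rather than by explicit matrix inversion
d = np.linalg.solve(G, h)
assert np.allclose(G @ d, h)
```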
<p>Here is my complete code</p>
<pre><code>import numpy as np
import math
import cv2

# --------------------------------------------------------------------------- #
def conjgrad(A, b, x0):
    r = b - np.dot(A, x0)
    p = r
    rsold = np.dot(r.T, r)
    x = x0
    for i in range(len(b)):
        Ap = np.dot(A, p)
        alpha = rsold / (np.dot(p.T, Ap) + 1e-5)
        x = x + alpha * p
        r = r - alpha * Ap
        rsnew = np.dot(r.T, r)
        if math.sqrt(rsnew) < 1e-10:
            break
        p = r + (rsnew / rsold) * p
        rsold = rsnew
    return x

# --------------------------------------------------------------------------- #
def Farneback_estimParams(im, flowX=None, flowY=None):
    sizeY, sizeX = im.shape

    halfSize = 5
    roiSize = 2*halfSize+1
    nbNeighbours = roiSize*roiSize

    A = np.zeros((4, sizeY, sizeX), dtype=np.float32)
    B = np.zeros((2, sizeY, sizeX), dtype=np.float32)

    for y in range(sizeY):
        for x in range(sizeX):
            dx = 0
            dy = 0
            if flowX is not None:
                dx = flowX[y, x]
                dy = flowY[y, x]

            # Build the coordinate matrix:
            # [1, x, y, x*x, y*y, x*y
            #  ... ]
            # For the entire neighbourhood
            a = np.ones((nbNeighbours, 6))
            b = np.ones((nbNeighbours, 1))
            i = 0
            for oy in range(-halfSize, halfSize+1):
                for ox in range(-halfSize, halfSize+1):
                    a[i, 0] = 1
                    a[i, 1] = ox
                    a[i, 2] = oy
                    a[i, 3] = ox*ox
                    a[i, 4] = oy*oy
                    a[i, 5] = ox*oy

                    curX = int(x + ox + dx)
                    curY = int(y + oy + dy)
                    if curX < 0 or curX >= sizeX or curY < 0 or curY >= sizeY:
                        b[i, 0] = 0
                    else:
                        b[i, 0] = im[curY, curX]
                    i += 1

            # Solve a*p = b
            ata = np.dot(a.T, a)
            atb = np.dot(a.T, b)
            p = conjgrad(ata, atb, np.zeros((6, 1)))

            A[0, y, x] = p[3]
            A[1, y, x] = p[4]/2
            A[2, y, x] = p[4]/2
            A[3, y, x] = p[5]

            B[0, y, x] = p[0]
            B[1, y, x] = p[1]

    return A, B

# --------------------------------------------------------------------------- #
def Farneback(im1, im2, dx0, dy0):
    nbIter = 10

    sizeY, sizeX = im1.shape

    # Create the output images
    dx = dx0
    dy = dy0

    for curIter in range(nbIter):
        # Estimate the parameters
        A1, b1 = Farneback_estimParams(im1)
        A2, b2 = Farneback_estimParams(im2, dx, dy)

        # Combine the parameters
        A = (A1+A2) / 0.5

        Ad = np.zeros_like(b1)
        Ad[0, :, :] = A[0, :, :]*dx + A[1, :, :]*dy
        Ad[1, :, :] = A[2, :, :]*dx + A[3, :, :]*dy

        deltaB = (b1-b2)/0.5 + Ad

        # Compute A^T * A
        ATA = np.zeros((4, sizeY, sizeX), dtype=np.float32)
        ATA[0, :, :] = A[0, :, :]*A[0, :, :] + A[1, :, :]*A[2, :, :]
        ATA[1, :, :] = A[0, :, :]*A[1, :, :] + A[1, :, :]*A[3, :, :]
        ATA[2, :, :] = A[2, :, :]*A[0, :, :] + A[3, :, :]*A[2, :, :]
        ATA[3, :, :] = A[2, :, :]*A[1, :, :] + A[3, :, :]*A[3, :, :]

        # Compute A^T * B
        ATB = np.zeros((2, sizeY, sizeX), dtype=np.float32)
        ATB[0, :, :] = A[0, :, :] * deltaB[0, :, :] + A[2, :, :] * deltaB[1, :, :]
        ATB[1, :, :] = A[1, :, :] * deltaB[0, :, :] + A[3, :, :] * deltaB[1, :, :]

        # Smooth ATA and ATB
        winSize = 15
        for i in range(4):
            ATA[i, :, :] = cv2.blur(ATA[i, :, :], (winSize, winSize))
            if i < 2:
                ATB[i, :, :] = cv2.blur(ATB[i, :, :], (winSize, winSize))

        # For each pixel
        for y in range(sizeY):
            for x in range(sizeX):
                # Build the matrix ATA and the vector ATB
                ata = np.zeros((2, 2))
                atb = np.zeros((2, 1))
                ata[0, 0] = ATA[0, y, x]
                ata[0, 1] = ATA[1, y, x]
                ata[1, 0] = ATA[2, y, x]
                ata[1, 1] = ATA[3, y, x]
                atb[0] = ATB[0, y, x]
                atb[1] = ATB[1, y, x]

                # Solve the system
                d = conjgrad(ata, atb, np.zeros((2, 1)))
                dx[y, x] = d[0]
                dy[y, x] = d[1]

    return dx, dy

# --------------------------------------------------------------------------- #
def multiScaleFarneback(im1, im2, dx0=None, dy0=None):
    # Multiscale parameters
    pyrScale = 0.5
    nbLevels = 2

    # Initial displacement
    sizeY, sizeX = im1.shape
    if dx0 is None or dy0 is None:
        dx0 = np.zeros_like(im1).astype(np.float32)
        dy0 = np.zeros_like(im1).astype(np.float32)

    # Create the list of the scales
    scales = [1]
    for i in range(0, nbLevels):
        scales.append(scales[i]*pyrScale)

    # For each level, from coarse to fine
    for curLevel in range(nbLevels, -1, -1):
        curScale = scales[curLevel]
        print("Level", curLevel, ", Scale", curScale)

        # Sub-sample the images
        curSizeX = int(sizeX * curScale)
        curSizeY = int(sizeY * curScale)
        curIm1 = cv2.resize(im1, (curSizeY, curSizeX), interpolation=cv2.INTER_AREA).astype(np.float32)
        curIm2 = cv2.resize(im2, (curSizeY, curSizeX), interpolation=cv2.INTER_AREA).astype(np.float32)

        # Resample the displacement field
        if curLevel == nbLevels:
            dx_prev = cv2.resize(dx0, (curSizeY, curSizeX), interpolation=cv2.INTER_AREA)
            dy_prev = cv2.resize(dy0, (curSizeY, curSizeX), interpolation=cv2.INTER_AREA)
            dx_prev = dx_prev * curScale
            dy_prev = dy_prev * curScale
        else:
            dx_prev = cv2.resize(dx, (curSizeY, curSizeX), interpolation=cv2.INTER_AREA)
            dy_prev = cv2.resize(dy, (curSizeY, curSizeX), interpolation=cv2.INTER_AREA)
            dx_prev = dx_prev / curScale
            dy_prev = dy_prev / curScale

        # Compute the optical flow at the current level, using the previous estimation
        dx, dy = Farneback(curIm1, curIm2, dx_prev, dy_prev)

    return dx, dy
</code></pre>
<p>I know there is a difference in the way the parameters A and b are calculated between my implementation and OpenCV's code. I just do not understand this step, or the explanations in Farneback's PhD thesis, so I coded a simple, naive, non-weighted least-squares calculation.</p>
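<p>For reference, a hedged sketch of what the weighted variant looks like: Farneback weights the neighbours with a Gaussian "applicability", so the normal equations pick up a weight matrix. The sigma of 1.1 is chosen to mirror the <code>poly_sigma</code> used with OpenCV; the patch values are otherwise illustrative.</p>

```python
import numpy as np

# Gaussian applicability over an 11x11 neighbourhood, roughly
# w = exp(-(x^2 + y^2) / (2*sigma^2)); the normal equations then become
# (A^T W A) p = A^T W b instead of the unweighted (A^T A) p = A^T b
sigma, half = 1.1, 5
ys, xs = np.mgrid[-half:half + 1, -half:half + 1]
w = np.exp(-(xs**2 + ys**2) / (2 * sigma**2)).ravel()

# same monomial basis as in the question: [1, x, y, x^2, y^2, x*y]
a = np.column_stack([np.ones(xs.size), xs.ravel(), ys.ravel(),
                     xs.ravel()**2, ys.ravel()**2, (xs * ys).ravel()])
b = np.random.default_rng(0).random(xs.size)   # stand-in image patch

W = np.diag(w)
p = np.linalg.solve(a.T @ W @ a, a.T @ W @ b)  # weighted least squares
assert p.shape == (6,)
```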
<p>Is that so important, or is there another mistake causing the bad estimation?</p>
<p>This is how I set up the parameters in OpenCV:</p>
<pre><code>flow = cv2.calcOpticalFlowFarneback(im1, im2, None, pyr_scale=0.5, levels=2, winsize=15, iterations=10, poly_n=5, poly_sigma=1.1, flags=0)
</code></pre>
<p>Thanks</p>
| <python><image-processing><computer-vision><opticalflow> | 2023-09-13 08:20:11 | 0 | 862 | ractiv |
77,095,143 | 8,413,889 | is there an option to display a selenium session in website | <p>Is it possible to display a Python Selenium session (which runs a flow on some website) on my own website, so that the user can see the session that is running? I'm using the Django library.</p>
| <python><django><selenium-webdriver> | 2023-09-13 07:59:56 | 0 | 441 | YosDos |
77,095,046 | 14,688,566 | Unable to install python 3.11.4 with tkinter via pyenv on MacOS | <p>I'm receiving the following error trying to install python <code>3.11.4</code> via <code>pyenv</code> on <code>MacOS Ventura 13.5.2 (22G91)</code>:</p>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ pyenv install 3.11.4
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.11.4.tar.xz...
-> https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tar.xz
Installing Python-3.11.4...
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/me/.pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 38, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named '_tkinter'
WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit?
Installed Python-3.11.4 to /Users/me/.pyenv/versions/3.11.4
</code></pre>
<p>at the same time, I'm able to install python <code>3.10.4</code> via <code>pyenv</code> just fine:</p>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ pyenv install 3.10.4
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.10.4.tar.xz...
-> https://www.python.org/ftp/python/3.10.4/Python-3.10.4.tar.xz
Installing Python-3.10.4...
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Installed Python-3.10.4 to /Users/me/.pyenv/versions/3.10.4
</code></pre>
<p>While I was searching for an answer, I found this SO question:
<a href="https://stackoverflow.com/a/61879759/14688566">https://stackoverflow.com/a/61879759/14688566</a></p>
<p>I've tried some of the recommended approaches like this one (uninstall, install with some env vars):</p>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ pyenv uninstall 3.11.4
pyenv: remove /Users/me/.pyenv/versions/3.11.4? [y|N] y
pyenv: 3.11.4 uninstalled
foo@bar:~$ env \
PATH="$(brew --prefix tcl-tk)/bin:$PATH" \
LDFLAGS="-L$(brew --prefix tcl-tk)/lib" \
CPPFLAGS="-I$(brew --prefix tcl-tk)/include" \
PKG_CONFIG_PATH="$(brew --prefix tcl-tk)/lib/pkgconfig" \
CFLAGS="-I$(brew --prefix tcl-tk)/include" \
PYTHON_CONFIGURE_OPTS="--with-tcltk-includes='-I$(brew --prefix tcl-tk)/include' --with-tcltk-libs='-L$(brew --prefix tcl-tk)/lib -ltcl8.6.13 -ltk8.6.13'" \
pyenv install 3.11.4
python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.11.4.tar.xz...
-> https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tar.xz
Installing Python-3.11.4...
python-build: use tcl-tk from $PYTHON_CONFIGURE_OPTS
python-build: use readline from homebrew
python-build: use zlib from xcode sdk
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/me/.pyenv/versions/3.11.4/lib/python3.11/tkinter/__init__.py", line 38, in <module>
import _tkinter # If this fails your Python may not be configured for Tk
^^^^^^^^^^^^^^^
ModuleNotFoundError: No module named '_tkinter'
WARNING: The Python tkinter extension was not compiled and GUI subsystem has been detected. Missing the Tk toolkit?
Installed Python-3.11.4 to /Users/me/.pyenv/versions/3.11.4
</code></pre>
<p>But it's not working either.</p>
<p>As far as I understand, <code>pyenv</code> successfully picks up homebrew's <code>tcl-tk</code>, so that's not an issue. According to this <a href="https://stackoverflow.com/a/75804235/14688566">comment</a> it appears <code>tcl-tk</code> is no longer keg-only and, therefore, setting environment variables is no longer needed.
In fact, I don't see any caveats like <em>"tcl-tk is keg-only, which means it was not symlinked into /usr/local"</em> when I'm running <code>brew info tcl-tk</code> (as far as I understand, that's the issue faced by the majority of commenters there, and the fix for it is running <code>pyenv install</code> along with those environment variables).</p>
<hr />
<p>Here are the outputs of <code>brew info</code> for the <code>tcl-tk</code>, <code>python-tk</code> (not sure I even need this one installed tbh) and <code>pyenv</code> packages installed on my machine (all of them are latest versions), along with the <code>brew --version</code> output:</p>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ brew info tcl-tk
==> tcl-tk: stable 8.6.13 (bottled)
Tool Command Language
https://www.tcl-lang.org
Conflicts with:
page (because both install `page` binaries)
/usr/local/Cellar/tcl-tk/8.6.13_5 (3,064 files, 52.8MB) *
Poured from bottle using the formulae.brew.sh API on 2023-09-12 at 20:45:30
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/t/tcl-tk.rb
License: TCL
==> Dependencies
Required: openssl@3 ✔
==> Caveats
The sqlite3_analyzer binary is in the `sqlite-analyzer` formula.
==> Analytics
install: 57,088 (30 days), 173,897 (90 days), 333,561 (365 days)
install-on-request: 27,654 (30 days), 89,791 (90 days), 164,180 (365 days)
build-error: 6 (30 days)
</code></pre>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ brew info python-tk
==> python-tk@3.11: stable 3.11.5 (bottled)
Python interface to Tcl/Tk
https://www.python.org/
/usr/local/Cellar/python-tk@3.11/3.11.5 (5 files, 131.7KB) *
Poured from bottle using the formulae.brew.sh API on 2023-09-12 at 11:11:06
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/p/python-tk@3.11.rb
License: Python-2.0
==> Dependencies
Required: python@3.11 ✔, tcl-tk ✔
==> Analytics
install: 7,261 (30 days), 16,597 (90 days), 32,528 (365 days)
install-on-request: 7,255 (30 days), 16,583 (90 days), 32,505 (365 days)
build-error: 0 (30 days)
</code></pre>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ brew info pyenv
==> pyenv: stable 2.3.26 (bottled), HEAD
Python version management
https://github.com/pyenv/pyenv
/usr/local/Cellar/pyenv/2.3.26 (1,110 files, 3.3MB) *
Poured from bottle using the formulae.brew.sh API on 2023-09-13 at 09:42:44
From: https://github.com/Homebrew/homebrew-core/blob/HEAD/Formula/p/pyenv.rb
License: MIT
==> Dependencies
Required: autoconf ✔, openssl@3 ✔, pkg-config ✔, readline ✔
==> Options
--HEAD
Install HEAD version
==> Analytics
install: 109,431 (30 days), 278,276 (90 days), 531,993 (365 days)
install-on-request: 108,805 (30 days), 276,683 (90 days), 528,966 (365 days)
build-error: 3 (30 days)
</code></pre>
<pre class="lang-console prettyprint-override"><code>foo@bar:~$ brew --version
Homebrew 4.1.11
Homebrew/homebrew-core (git revision 8e188d78fe0; last commit 2023-09-13)
Homebrew/homebrew-cask (git revision 862ffb4b59; last commit 2023-09-13)
</code></pre>
| <python><macos><homebrew><pyenv> | 2023-09-13 07:47:32 | 1 | 446 | absoup |
77,094,999 | 5,507,055 | How to merge models into one BaseClass while keeping them distinguishable | <p>I have several models:</p>
<pre><code>from sqlmodel import SQLModel

class Dog(SQLModel, table=True):
    __tablename__ = "dog"

    name: str
    size: int
    color: str

class Cat(SQLModel, table=True):
    __tablename__ = "cat"

    name: str
    size: int
    color: str
</code></pre>
<p>and then I have a pydantic schema:</p>
<pre><code>from pydantic import BaseModel

class Cat(BaseModel):
    name: str
    size: int
    color: str

class Dog(BaseModel):
    name: str
    size: int
    color: str

class Pets(BaseModel):
    name: str
    size: int
    color: str
</code></pre>
<p>and now I have a FastAPI function:</p>
<pre><code>from app.models import Cat, Dog
from app.schemas import Pets

@router.get("/pets", response_model=List[Pets])
async def get_pets(session: AsyncSession = Depends(get_async_session)):
    pets = []
    query = select(Cat)
    cats = (await session.execute(query)).scalars().fetchall()
    query = select(Dog)
    dogs = (await session.execute(query)).scalars().fetchall()
    pets.extend(cats)
    pets.extend(dogs)
    return pets
</code></pre>
<p>This is the result:</p>
<pre><code> [{'color': 'blue', 'name': 'bob', 'size': 5},
{'color': 'green', 'name': 'tiger', 'size': 5},
{'color': 'purple', 'name': 'rob', 'size': 2}]
</code></pre>
<p>How could I distinguish which model the entry is from? I could add an attribute to each model, but this doesn't seem very smart and wastes space.</p>
<p><code>Update</code>:</p>
<p>I tried this:</p>
<pre><code>from typing import Annotated, Literal, Union

from pydantic import BaseModel, Field

class Cat(BaseModel):
    model_type: Literal["cat"] = "cat"
    name: str
    size: int
    color: str

# Dog is defined analogously with model_type: Literal["dog"]
Pets = Annotated[Union[Cat, Dog], Field(discriminator='model_type')]
</code></pre>
<p>But do I understand it correctly that I still need the attribute on the SQLModel as well? But then each database row would store the string "cat" or "dog".</p>
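For illustration, this is the space-free alternative I'm weighing: tag each record with its type at serialisation time only, so nothing extra is stored in the database row. A plain-Python sketch (dataclasses stand in for my SQLModel classes here, so treat the names as assumptions):

```python
from dataclasses import dataclass, asdict

@dataclass
class Cat:
    name: str
    size: int
    color: str

@dataclass
class Dog:
    name: str
    size: int
    color: str

def to_payload(pet):
    # Tag the record with its class name on the way out only,
    # so no discriminator column has to exist in the database.
    return {"model_type": type(pet).__name__.lower(), **asdict(pet)}

pets = [Cat("tiger", 5, "green"), Dog("bob", 5, "blue")]
print([to_payload(p) for p in pets])
```

The stored rows stay unchanged; the tag exists only in the API response.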
| <python><fastapi><pydantic><sqlmodel> | 2023-09-13 07:40:00 | 0 | 2,845 | ikreb |
77,094,863 | 136,725 | Forward reference to enclosing class in optional return type, without typing.Optional? | <p>Python 3.10 introduced the <code>Foo | None</code> alternative type annotation syntax for <code>Optional[Foo]</code>. This doesn't seem to work when using forward reference to the enclosing type, like <code>'Foo' | None</code>, which instead results in a runtime error:</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
    @staticmethod
    def maybe_create(arg: str) -> 'Foo' | None:
        if len(arg) < 5:
            return Foo()
        else:
            return None

if __name__ == '__main__':
    import sys
    print(Foo.maybe_create(sys.argv[1]))
</code></pre>
<pre><code>Traceback (most recent call last):
File "staticmethod-test.py", line 1, in <module>
class Foo:
File "staticmethod-test.py", line 3, in Foo
def maybe_create(arg: str) -> 'Foo' | None:
~~~~~~^~~~~~
TypeError: unsupported operand type(s) for |: 'str' and 'NoneType'
</code></pre>
<p>Using <code>typing.Optional</code> works as expected, like:</p>
<pre class="lang-py prettyprint-override"><code>from typing import Optional
class Foo:
@staticmethod
def maybe_create(arg: str) -> Optional['Foo']:
if len(arg) < 5:
return Foo()
else:
return None
if __name__ == '__main__':
import sys
print(Foo.maybe_create(sys.argv[1]))
</code></pre>
<p>Is there a way to express this optionality without extra imports?</p>
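For context, one direction I've been experimenting with (my own sketch, based on PEP 563 deferred evaluation) is a <code>from __future__ import annotations</code> import, which stores every annotation as a string and so sidesteps the runtime <code>|</code> evaluation entirely:

```python
from __future__ import annotations  # PEP 563: every annotation is stored as a string

class Foo:
    @staticmethod
    def maybe_create(arg: str) -> Foo | None:  # no quotes needed; never evaluated at runtime
        if len(arg) < 5:
            return Foo()
        return None

print(Foo.maybe_create("ab"), Foo.maybe_create("abcdef"))
```

It does add one import, but at the top of the module rather than per annotation.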
| <python><python-typing> | 2023-09-13 07:21:38 | 1 | 4,117 | Tuure Laurinolli |
77,094,430 | 1,242,865 | How to check if a python package is installed using poetry | <p>I'm using <a href="https://python-poetry.org/" rel="noreferrer">Poetry</a> to manage my Python project, and I would like to check if a package is installed. This means the package is actually fetched from the remote repository and is located in the <code>.venv/</code> folder. I went through the official documentation but didn't find any command that can do that.</p>
<p>Any ideas? Thanks.</p>
<p>Update: I ended up with a solution by running the following command and parsing its output.</p>
<p>Thanks all for the help here!</p>
<p><code>poetry install --dry-run --sync --no-ansi</code></p>
| <python><python-poetry> | 2023-09-13 06:08:53 | 3 | 676 | guojiubo |
77,094,251 | 4,902,679 | Set two different logging levels | <p>I am using a basic logging config where all messages of level <code>INFO</code> are getting displayed to stdout, but I'm looking to set log level of <code>WARNING</code> only for the Kafka related messages, the rest can be at <code>INFO</code> level.</p>
<p>I'm not sure how to set two different logging levels with minimum settings.</p>
<p>Current configuration looks like this;</p>
<pre><code>import logging
logging.basicConfig(format='[%(asctime)s] [%(levelname)s] - %(message)s', level=logging.INFO)
...
</code></pre>
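For reference, the direction I've been exploring is setting a stricter level only on the library's named logger, while the root logger stays at <code>INFO</code>. A sketch (the logger name <code>"kafka"</code> is an assumption about what the Kafka client library actually uses):

```python
import logging

logging.basicConfig(
    format='[%(asctime)s] [%(levelname)s] - %(message)s',
    level=logging.INFO,  # default level for everything
)

# Quieten only the Kafka logger hierarchy; child loggers such as
# "kafka.conn" inherit this level from their parent "kafka" logger.
logging.getLogger("kafka").setLevel(logging.WARNING)

logging.getLogger(__name__).info("still shown at INFO")
logging.getLogger("kafka.conn").info("suppressed: inherits WARNING from 'kafka'")
```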
| <python><python-3.x> | 2023-09-13 05:20:56 | 1 | 544 | Goku |
77,093,865 | 1,639,594 | Cannot run MetaAI's llama2 due to "No module named 'fire'" error | <p>I am trying to run the llama2 model locally on my MacOS (M1 chip).</p>
<p>In the <code>example_chat_completion.py</code> file there is an import for the fire module:</p>
<pre><code># Copyright (c) Meta Platforms, Inc. and affiliates.
# This software may be used and distributed according to the terms of the Llama 2 Community License Agreement.
from typing import List, Optional
import fire
from llama import Llama, Dialog
...
</code></pre>
<p>But when attempting to run the model as instructed from the <a href="https://github.com/facebookresearch/llama" rel="nofollow noreferrer">llama2 repo</a>:</p>
<pre><code>torchrun --nproc_per_node 1 example_chat_completion.py \
--ckpt_dir llama-2-7b-chat/ \
--tokenizer_path tokenizer.model \
--max_seq_len 512 --max_batch_size 6
</code></pre>
<p>...I get the following error:</p>
<pre><code>ModuleNotFoundError: No module named 'fire'
</code></pre>
<p>I am running from <code>llama</code> directory, and have tried <code>pip install fire</code>, <code>pip3 install fire</code>, <code>conda install fire</code> and <code> python -m pip install fire</code>. All of these methods show "Requirement already satisfied."</p>
<p>Note that I am running in the <a href="https://developer.apple.com/metal/tensorflow-plugin/" rel="nofollow noreferrer">venv-metal</a> virtual environment.</p>
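One check I've been using while debugging (my own sketch): compare the interpreter that <code>pip</code> installed into with the one <code>torchrun</code> actually launches — a mismatch would explain "Requirement already satisfied" appearing alongside the import error:

```python
import importlib.util
import sys

# Which interpreter is executing this script?
print(sys.executable)

# Is the module importable from *this* interpreter?  None means it is not.
spec = importlib.util.find_spec("fire")
print("fire found at:", spec.origin if spec else None)
```

Running this both directly and via <code>torchrun</code> should show whether they resolve to the same environment.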
| <python><import><pytorch><large-language-model><llama> | 2023-09-13 03:27:40 | 1 | 13,424 | Cybernetic |
77,093,716 | 15,587,184 | Data Wrangling in Python in Chaining Style from R | <p>I'm new to Python and I come from the R environment. One thing that I love in R is the ability to write down code that will make some many transformations on the data in one readable chunk of code</p>
<p>But it is very difficult for me to find code in that style in Python and I wonder if some of you can guide as to where to find resources and references on that particular style and the functions that it allows.</p>
<p>For instance I want to transform this code of R:</p>
<pre><code>library(dplyr)

iris %>%
  select(-Petal.Width) %>% # drops column Petal.Width
  filter(Petal.Length > 2 | Sepal.Width > 3.1) %>% # keeps only rows where the criteria are met
  filter(Species %in% c('setosa', 'virginica')) %>% # keeps only the selected species
  mutate_if(is.numeric, scale) %>% # numerical columns are scaled into z-scores
  mutate(item = rep(1:3, length.out = n())) %>% # a new col item carries the sequence 1,2,3 until the end of the dataset
  group_by(Species) %>% # groups by species
  summarise(n = n(), # summarises the size of each group
            n_sepal_over_1z = sum(Sepal.Width > 1), # counts the obs where Sepal.Width is over 1 z-score
            nunique_item_petal_over_2z = n_distinct(item[Petal.Length > 1]))
            # counts the unique elements in col item where Petal.Length is over 1 z-score
</code></pre>
<p>That little piece of code was able to do everything I wanted but if I want to write that in Python I can't seem to find a way to replicate that style of coding. The closest I get is this:</p>
<pre><code>import pandas as pd
from sklearn.preprocessing import StandardScaler

# Load the Iris dataset
iris = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data",
                   header=None, names=["Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width", "Species"])

# Filter and manipulate the data
filtered_data = iris[(iris["Petal.Length"] > 2) | (iris["Sepal.Width"] > 3.1)]
filtered_data = filtered_data[filtered_data["Species"].isin(["setosa", "virginica"])]

# Scale numeric columns using StandardScaler
numeric_columns = filtered_data.select_dtypes(include=[float])
scaler = StandardScaler()
scaled_data = pd.DataFrame(scaler.fit_transform(numeric_columns), columns=numeric_columns.columns)
scaled_data["Species"] = filtered_data["Species"].values  # re-attach the grouping column lost in scaling

# Add the "item" column
scaled_data["item"] = list(range(1, 4)) * (len(scaled_data) // 3)

# Group by "Species" and calculate summary statistics
summary_stats = scaled_data.groupby("Species").agg(
    n=pd.NamedAgg(column="Sepal.Length", aggfunc="size"),
    n_sepal_over_1z=pd.NamedAgg(column="Sepal.Width", aggfunc=lambda x: (x > 1).sum()),
    nunique_item_petal_over_2z=pd.NamedAgg(column="item", aggfunc=lambda x: x[scaled_data["Petal.Length"] > 1].nunique())
).reset_index()

print(summary_stats)
</code></pre>
<p>As you can see, this is way more code. How can I achieve my transformations in just one chunk of code in Python, with as little code as possible? I'm new, so my intention is NOT to compare both programming languages — they are awesome in their own right — I just want to see whether Python can be as flexible and diverse in the chaining or pipeline style as R.</p>
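For illustration, this is the chained style I'm hoping for in pandas — a sketch on a toy frame of my own making (so the exact methods used here are my assumptions, not necessarily the idiomatic best):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "species": ["setosa", "setosa", "virginica", "virginica", "versicolor"],
    "petal_length": [1.4, 1.3, 6.0, 5.1, 4.5],
    "sepal_width": [3.5, 3.2, 3.0, 2.7, 3.1],
})

# One readable chain: filter -> filter -> derive -> group -> summarise,
# roughly mirroring a dplyr pipeline.
summary = (
    df
    .query("petal_length > 2 or sepal_width > 3.1")
    .loc[lambda d: d["species"].isin(["setosa", "virginica"])]
    .assign(item=lambda d: np.arange(len(d)) % 3 + 1)
    .groupby("species", as_index=False)
    .agg(n=("species", "size"), mean_petal=("petal_length", "mean"))
)
print(summary)
```

<code>query</code>, <code>loc</code> with a lambda, and <code>assign</code> play roughly the roles of <code>filter</code> and <code>mutate</code>.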
| <python><r><chaining><method-chaining> | 2023-09-13 02:40:42 | 2 | 809 | R_Student |
77,093,549 | 12,684,429 | How to label outputs of looping | <p>EDITED to focus on outputs rather than the concat:</p>
<p>I am scraping data through an API which requires different codes to scrape various datasets (for example, country codes) - I obviously don't want to copy and paste the same few lines of code to pull the data and manipulate it, so I was hoping to loop through a list of country codes and do it this way.</p>
<pre><code>thislist = ["ES", "EE", "LT"]

for x in thislist:
    headers = api.box(id=x, meta=True)
    countrydf = api.box(id=x)
    countrydf.index = countrydf['Date']
    countrydf.index = pd.to_datetime(countrydf.index)
    countrydf = countrydf.iloc[:, 2:]
    print(countrydf)
</code></pre>
<p>The issue is that this gives me only the last dataframe (LT) - how do I change this so that I keep all the dataframes (ES, EE, LT, etc.) and can then concat them together?</p>
<p>Any help much appreciated!</p>
<p>Thanks</p>
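For illustration, this is the pattern I'm trying to get to: collect each iteration's frame in a dict keyed by the country code, then concatenate once at the end. (<code>fetch_country</code> below is a stand-in for my real <code>api.box</code> call, so its shape is an assumption.)

```python
import pandas as pd

def fetch_country(code):
    # Stand-in for the real API call (api.box in my code); it just
    # returns a tiny two-row frame for every country code.
    return pd.DataFrame({"Date": ["2023-01-01", "2023-01-02"], "value": [1, 2]})

thislist = ["ES", "EE", "LT"]

frames = {}
for x in thislist:
    df = fetch_country(x)
    df.index = pd.to_datetime(df["Date"])
    frames[x] = df.drop(columns="Date")

# One combined frame; the country code becomes an extra index level.
combined = pd.concat(frames)
combined.index.names = ["country", "Date"]
print(combined)
```

The dict keeps every per-country frame addressable (<code>frames["ES"]</code>) instead of being overwritten each loop.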
| <python><pandas><list><loops> | 2023-09-13 01:33:51 | 3 | 443 | spcol |
77,093,546 | 12,427,876 | Update InputText in PySimpleGUI | <p>I'm trying to disable & delete all texts in 2 certain inputtext fields (<code>A1</code>, <code>A2</code>), if some fields (<code>B1</code>, <code>B2</code>) have value:</p>
<pre><code>import PySimpleGUI as sg

def gen_layout() -> list:
    layout = [
        [sg.Text('A1', size=(20, 1)), sg.InputText(key='-INPUT-A1-', enable_events=True)],
        [sg.Text('A2', size=(20, 1)), sg.InputText(key='-INPUT-A2-', enable_events=True)],
        [sg.Text('B1', size=(20, 1)), sg.InputText(key='-INPUT-B1-', enable_events=True)],
        [sg.Text('B2', size=(20, 1)), sg.InputText(key='-INPUT-B2-', enable_events=True)],
    ]
    return layout

def main():
    """ Main function
    """
    # Theme color
    sg.theme('BlueMono')
    # Layout
    layout = gen_layout()
    # Create the window
    window = sg.Window(title="Test", layout=layout, resizable=True)
    # Create an event loop
    while True:
        # Read window
        window_read = window.read()
        if window_read is None:
            break
        event, values = window_read
        # Window closed
        if event == sg.WIN_CLOSED:
            break
        print(values)
        if (values['-INPUT-B1-'] != "" or values['-INPUT-B2-'] != "") and ('-INPUT-B1-' in event or '-INPUT-B2-' in event):
            # Disable
            window['-INPUT-A1-'].update(disabled=True)
            window['-INPUT-A2-'].update(disabled=True)
            # Delete text
            window['-INPUT-A1-'].update("")
            window['-INPUT-A2-'].update("")
        elif (values['-INPUT-B1-'] == "" and values['-INPUT-B2-'] == "") and ('-INPUT-B1-' in event or '-INPUT-B2-' in event):
            # Enable
            window['-INPUT-A1-'].update(disabled=False)
            window['-INPUT-A2-'].update(disabled=False)
    window.close()

main()
</code></pre>
<p>The code above <strong>appears to</strong> work (i.e., GUI part is working correctly), but as you can see from <code>print(values)</code>: if I input <code>1</code>, <code>2</code>, <code>3</code>, <code>4</code> to <code>A1</code>, <code>A2</code>, <code>B1</code>, <code>B2</code> respectively, I hope the <code>values</code> printouts should be:</p>
<pre><code>{'-INPUT-A1-': '1', '-INPUT-A2-': '', '-INPUT-B1-': '', '-INPUT-B2-': ''}
{'-INPUT-A1-': '1', '-INPUT-A2-': '2', '-INPUT-B1-': '', '-INPUT-B2-': ''}
{'-INPUT-A1-': '', '-INPUT-A2-': '', '-INPUT-B1-': '3', '-INPUT-B2-': ''}
{'-INPUT-A1-': '', '-INPUT-A2-': '', '-INPUT-B1-': '3', '-INPUT-B2-': '4'}
</code></pre>
<p>However, in fact, it is:</p>
<pre><code>{'-INPUT-A1-': '1', '-INPUT-A2-': '', '-INPUT-B1-': '', '-INPUT-B2-': ''}
{'-INPUT-A1-': '1', '-INPUT-A2-': '2', '-INPUT-B1-': '', '-INPUT-B2-': ''}
{'-INPUT-A1-': '1', '-INPUT-A2-': '2', '-INPUT-B1-': '3', '-INPUT-B2-': ''} <-- notice this
{'-INPUT-A1-': '', '-INPUT-A2-': '', '-INPUT-B1-': '3', '-INPUT-B2-': '4'}
</code></pre>
<p>Why is this program working like this? Is there something wrong with my <code>.update("")</code> function?</p>
| <python><pysimplegui> | 2023-09-13 01:32:18 | 1 | 411 | TaihouKai |
77,093,515 | 17,653,423 | How to check yaml Schema in multiple levels using Python | <p>If the yaml file is valid, the method <code>yaml.safe_load()</code> will convert the file to a dictionary (in the code is represented by the variable <code>CONFIG</code>).</p>
<p>After the first validation, it should check each key to see if the type matches.</p>
<pre><code>from schema import Use, Schema, SchemaError
import yaml

config_schema = Schema(
    {
        "pipeline": Use(
            str,
            error="Unsupported pipeline name. A string input is expected"
        ),
        "retry_parameters": Use(
            int,
            error="Unsupported retry strategy. An integer input is expected"
        )
    },
    error="A yaml file is expected"
)

CONFIG = """
pipeline: 1
retry_parameters: 'pipe_1'
"""

configuration = yaml.safe_load(CONFIG)

try:
    config_schema.validate(configuration)
    print("Configuration is valid.")
except SchemaError as se:
    for error in se.errors:
        if error:
            print(error)
</code></pre>
<p>In the example above, it's raising and printing three errors.</p>
<pre><code>A yaml file is expected
A yaml file is expected
Unsupported retry strategy. An integer input is expected
</code></pre>
<p>But in that case I was expecting the following:</p>
<pre><code>Unsupported pipeline name. A string input is expected
Unsupported retry strategy. An integer input is expected
</code></pre>
<p>How can I check if the file has a valid yaml format and after that check if each key has the expected type?</p>
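For illustration, this is the two-stage behaviour I'm after, written as a plain-Python sketch with no schema library (my own code, not <code>schema</code>'s API): first check that the parsed YAML is a mapping at all, then run per-key type checks that collect <em>all</em> errors instead of stopping at the first one.

```python
# Expected type and error message per key (mirrors my Schema definition).
EXPECTED_TYPES = {
    "pipeline": (str, "Unsupported pipeline name. A string input is expected"),
    "retry_parameters": (int, "Unsupported retry strategy. An integer input is expected"),
}

def validate(configuration):
    # Stage 1: the document as a whole must parse to a mapping.
    if not isinstance(configuration, dict):
        return ["A yaml file is expected"]
    # Stage 2: check every key, accumulating errors rather than bailing out.
    errors = []
    for key, (expected, message) in EXPECTED_TYPES.items():
        if not isinstance(configuration.get(key), expected):
            errors.append(message)
    return errors

print(validate({"pipeline": 1, "retry_parameters": "pipe_1"}))
```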
| <python><pyyaml> | 2023-09-13 01:20:39 | 1 | 391 | Luiz |
77,093,497 | 11,821,558 | Cannot run langchain module, kernel died while running the script? | <p>I was trying to run a Python script in a Jupyter notebook, as shown below. I am not sure how to troubleshoot this; moreover, the notebook freezes.</p>
<p>Error message:</p>
<p><code>The kernel appears to have died, it will start automatically</code></p>
<pre><code># embeddings using langchain
from langchain.embeddings import SentenceTransformerEmbeddings
embeddings = SentenceTransformerEmbeddings(model_name="all-MiniLM-L6-v2")
# using chromadb as a vector store and storing the docs in it
from langchain.vectorstores import Chroma
db = Chroma.from_documents(docs, embeddings)
# Doing similarity search using query
query = "What are the different kinds of pets people commonly own?"
matching_docs = db.similarity_search(query)
matching_docs[0]
</code></pre>
| <python><jupyter-notebook><langchain> | 2023-09-13 01:12:30 | 1 | 1,073 | cloudcop |
77,093,465 | 2,148,718 | Static type checking with Pydantic data conversion | <p>Pydantic offers the means to transform input data into a final type as part of model initialisation. However, this doesn't integrate nicely with static type checkers. Note, I'm using Pydantic 1.X (currently 1.10.12).</p>
<p>For example, consider this model which stores a list of strings. I want the user to be able to initialise it from a file if they want:</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, validator

class Foo(BaseModel):
    data: list[str]

    @validator("data", pre=True)
    def read_file(cls, v):
        if isinstance(v, str):
            with open(v) as fp:
                return fp.readlines()
        return v

Foo(data = ["a", "b"])
Foo(data = "/etc/hosts")
</code></pre>
<p>This executes correctly, but running <code>pyright</code> on the script gives me:</p>
<pre><code>test.py:14:16 - error: Argument of type "Literal['/etc/hosts']" cannot be assigned to parameter "data" of type "list[str]" in function "__init__"
"Literal['/etc/hosts']" is incompatible with "list[str]" (reportGeneralTypeIssues)
</code></pre>
<p>Since Pydantic is designed to handle this use case, how can I make it compatible with my type checker? One solution is to make a custom constructor with custom argument types, but this is both redundant and unfeasible for a large number of fields.</p>
| <python><pydantic><pyright> | 2023-09-13 00:59:34 | 1 | 20,337 | Migwell |
77,093,447 | 6,387,095 | Django-cms Type error at admin/cms/page/add-plugin? | <p>I have set up an <code>s3 bucket</code> for the media and static files.</p>
<p>Now when I go to edit a page, I can add images, files etc. But when I try to add <code>text</code> I get the following error:</p>
<pre><code>TypeError at /admin/cms/page/add-plugin/
unsupported operand type(s) for +: 'NoneType' and 'str'
Request Method: GET
Request URL: http://my-url.us-west-2.elasticbeanstalk.com/admin/cms/page/add-plugin/?delete-on-cancel&placeholder_id=23&plugin_type=TextPlugin&cms_path=%2Fblog%2Ftest%2F%3Fedit&language=en&plugin_language=en&plugin=46
Django Version: 4.2.3
Exception Type: TypeError
Exception Value:
unsupported operand type(s) for +: 'NoneType' and 'str'
Exception Location: /var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/djangocms_text_ckeditor/widgets.py, line 106, in get_ckeditor_settings
Raised during: cms.admin.placeholderadmin.add_plugin
Python Executable: /var/app/venv/staging-LQM1lest/bin/python3.8
Python Version: 3.8.16
Python Path:
['/var/app/current',
'/var/app/venv/staging-LQM1lest/bin',
'/var/app/venv/staging-LQM1lest/bin',
'/usr/lib64/python38.zip',
'/usr/lib64/python3.8',
'/usr/lib64/python3.8/lib-dynload',
'/var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages',
'/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages']
Error during template rendering
In template /var/app/venv/staging-LQM1lest/lib64/python3.8/site-packages/django/contrib/admin/templates/admin/includes/fieldset.html, error at line 20
unsupported operand type(s) for +: 'NoneType' and 'str'
Line 20: {{ field.field }}
</code></pre>
<p>I cannot think of any reason for the error other than the storage. I am not sure how to proceed.</p>
| <python><django><django-cms><djangocms-text-ckeditor> | 2023-09-13 00:49:23 | 1 | 4,075 | Sid |
77,093,266 | 19,123,103 | How to clear input field after hitting Enter in streamlit? | <p>I have a streamlit app where I want to get user input and use it later. However, I also want to clear the input field as soon as the user hits Enter. I looked online and it seems I need to pass a callback function to <code>text_input</code> but I can't make it work. I tried a couple different versions but neither works as I expect.</p>
<pre class="lang-py prettyprint-override"><code>import streamlit as st

def clear_text():
    st.session_state.my_text = ""

# This version doesn't clear the text after hitting Enter.
my_text = st.text_input("Enter text here", on_change=clear_text)

# This version clears the field but doesn't save the input.
my_text = st.text_input("Enter text here", on_change=clear_text, key='my_text')

st.write(my_text)
</code></pre>
<p>The expectation is to save the input into <code>my_text</code> and clear the field afterwards.</p>
<p>I looked at similar questions about clearing text input <a href="https://stackoverflow.com/q/77016854/19123103">here</a> and <a href="https://stackoverflow.com/q/76891261/19123103">here</a> but they're not relevant for my case because I want the input field to clear automatically while those cases talk about using a separate button. How do I make it work?</p>
| <python><widget><user-input><streamlit> | 2023-09-12 23:48:27 | 1 | 25,331 | cottontail |
77,093,263 | 9,571,463 | Best Practice on Using a Single SQLAlchemy Engine Across Multiple Async Tasks | <p>I am running a concurrent process using <code>asyncio</code>. To do this, I am constructing a number of orchestrator objects which use this same engine. I am debating if I should be passing this engine or referencing it as a class variable.</p>
<p>I understand that the SQLAlchemy engine is thread safe, but if I pull connections out of it using <code>async</code> coroutines is this safe? I am using Snowflake as my database which does not allow direct asyncio integration, so my idea is to run async tasks that involve the data processing (typically API calls, and some minor transformation) and then use the engine to create a connection and run a query.</p>
<pre><code>import asyncio

class Orchestrator:
    def __init__(self, engine) -> None:
        self.engine = engine

    async def run(self) -> None:
        # Do some things
        # Run insert query
        await self.insert_data(data)

    async def insert_data(self, data) -> None:
        with self.engine.connect() as conn:
            conn.execute("INSERT INTO TABLE")

async def main() -> None:
    # Create orchestrator objects
    orchestrators = create_orchestrators()

    # Run orchestrators, which are going to be treated as asynchronous tasks that involve
    # some data processing and an eventual insert into the database.
    # I don't want one orchestrator's activity to block another's.
    tasks: list = [
        asyncio.create_task(o.run())
        for o in orchestrators
    ]
    await asyncio.gather(*tasks)

if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
</code></pre>
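For context, since the Snowflake driver is synchronous, the direction I'm considering is pushing each blocking database call off the event loop with <code>asyncio.to_thread</code>. A stdlib sketch (the <code>blocking_insert</code> function stands in for the real <code>engine.connect()</code>/<code>execute()</code> call, so that part is an assumption):

```python
import asyncio
import time

def blocking_insert(data):
    # Stand-in for the synchronous engine.connect()/conn.execute() call.
    time.sleep(0.05)
    return f"inserted {data}"

async def run_one(name):
    # The blocking call runs in the default thread pool, so one
    # orchestrator's insert doesn't block the others.
    return await asyncio.to_thread(blocking_insert, name)

async def main():
    results = await asyncio.gather(*(run_one(f"orchestrator-{i}") for i in range(3)))
    print(results)
    return results

results = asyncio.run(main())
```

This keeps the engine shared while each connection checkout happens inside its own worker thread.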
| <python><sqlalchemy><python-asyncio> | 2023-09-12 23:46:53 | 0 | 1,767 | Coldchain9 |
77,093,154 | 7,306,999 | Set background color for a specific x-range only | <p>Consider the following example code to generate a simple plot:</p>
<pre><code># Import modules
import numpy as np
import matplotlib.pyplot as plt
# Create 1D array with time values
t = np.linspace(0, 2, 1000)
# Calculate 1D array with acceleration values
a = 10 * np.sin(2 * np.pi * t)
# Create plot
fig, ax = plt.subplots()
ax.plot(t, a)
# Annotate plot
fig.suptitle("acceleration in the time-domain")
ax.set_xlabel("time [s]")
ax.set_ylabel("acceleration [g]")
# Set background color
ax.patch.set_facecolor('red')
ax.patch.set_alpha(0.2)
# Trigger display of plot window
plt.show()
</code></pre>
<p>The axes area of the resulting plot has a light red background color, as shown here:</p>
<p><a href="https://i.sstatic.net/9kTC4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9kTC4.png" alt="matplotlib result" /></a></p>
<p>What I would like to achieve however is a plot in which the background is light red in the x-range 1.00 to 1.50 only (and white in the remaining x-range). Is this possible?</p>
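For context, the closest thing I've found so far is <code>Axes.axvspan</code>, which shades a vertical band over a given x-range rather than recolouring the axes patch itself — a sketch of the effect I mean (my own attempt, not necessarily the intended way):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

t = np.linspace(0, 2, 1000)
a = 10 * np.sin(2 * np.pi * t)

fig, ax = plt.subplots()
ax.plot(t, a)

# Shade only the x-range 1.00-1.50 with a light red vertical band,
# leaving the rest of the axes background white.
ax.axvspan(1.00, 1.50, color='red', alpha=0.2)
```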
| <python><matplotlib> | 2023-09-12 23:13:24 | 1 | 8,674 | Xukrao |
77,093,149 | 21,575,627 | Number of permutation operations to return to original array | <p>I recently solved this in a coding assessment for an application (in C++, Python wasn't allowed).</p>
<p>I've re-done the problem (with the same logic) in Python with a problem statement.</p>
<p>My brute force solution got 6/15 of the test cases (the rest were time limit exceeded).</p>
<p>My optimal got 8/15, with the rest being incorrect. The ones where it was incorrect were hidden test cases, with very large output. I verified the issue was not due to the calculation of the LCM and not due to integer overflow, leaving the logic of the program (the cycle detection).</p>
<p>Problem statement:</p>
<pre><code># return the number of permutation operations required to get an array back to the original (at minimum 1 must be done)
# permutation operation defined as
# tmp[i] = arr[p[i]] for all i
# arr = tmp
# where p is an array of non-zero length n with distinct values 0 - (n - 1) as elements
# array is arbitrary array of length n
</code></pre>
<p>Brute force:</p>
<pre><code># O(n^2) since max cycle is at worst n long
def brute_force_sol(p):
    n = len(p)
    arr = [i for i in range(n)]
    num = 0
    while True:
        tmp = [0] * n
        for i in range(n):
            tmp[i] = arr[p[i]]
        arr = tmp
        num += 1
        if arr == [i for i in range(n)]:
            break
    return num
</code></pre>
<p>Optimal:</p>
<pre><code># idea:
# observe that since 0 - (n - 1) are present, with each being a directed edge,
# they form cycles that may sync up occasionally. we want to know when
# all the cycles sync up since that is the number of ops to get back to orig
# OR we want the LCM of the length of all the cycles present in the graph
# example:
# p = [1, 4, 3, 2, 0]
# cycle one: 0 -> 1 -> 4 -> 0 (len 3)
# cycle two: 2 -> 3 -> 2 (len 2)
# LCM(2, 3) = 6
# p = [0, 1]
# cycle one: 0 -> 0
# cycle two: 1 -> 1
# LCM(1, 1) = 1
# O(n)
def optimal_sol(p):
    n = len(p)
    a = set()
    cycles = []
    for i in range(n):
        if i in a:
            continue
        else:
            # track the cycle len
            j = i
            start = j
            cycleLen = 0
            while True:
                a.add(j)
                j = p[j]
                cycleLen += 1
                if start == j:
                    break
            cycles.append(cycleLen)
    from math import lcm
    return lcm(*cycles)
</code></pre>
<p>I'm hoping for someone to be able to give me an idea of where my algorithm is incorrect. I've written it up in Python for viewing convenience.</p>
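For anyone wanting to reproduce the comparison, here is a quick stdlib harness (Python 3.9+ for <code>math.lcm</code>) that cross-checks condensed versions of the two solutions on random permutations:

```python
import random
from math import lcm

def brute_force_sol(p):
    # Repeatedly apply the permutation until we return to the identity.
    n = len(p)
    arr = list(range(n))
    num = 0
    while True:
        arr = [arr[p[i]] for i in range(n)]
        num += 1
        if arr == list(range(n)):
            return num

def optimal_sol(p):
    # LCM of the cycle lengths of the permutation.
    n = len(p)
    seen = set()
    cycles = []
    for i in range(n):
        if i in seen:
            continue
        j, cycle_len = i, 0
        while True:
            seen.add(j)
            j = p[j]
            cycle_len += 1
            if j == i:
                break
        cycles.append(cycle_len)
    return lcm(*cycles)

random.seed(0)
for _ in range(200):
    n = random.randint(1, 12)
    p = random.sample(range(n), n)
    assert brute_force_sol(p) == optimal_sol(p), p
print("both solutions agree on 200 random permutations")
```

In Python both agree everywhere I've tried, which suggests the logic itself is sound for small inputs.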
| <python><arrays><algorithm><data-structures><graph> | 2023-09-12 23:12:11 | 2 | 1,279 | user129393192 |
77,092,748 | 714,077 | How to Add an Intelligent Title to Pandas DataFrame.plot.hist | <p>I have a simple data set of multiple columns, all of which I want to plot against a "quality" score from 1-10. NOTE: this is a group_by plot, not a simple plot of a single column. It yields a series of histogram bars for all the other columns in the dataframe, one histogram for each of the "quality" values in the data. Since in this case the "quality" scores range from 3 to 9, a total of 7 histograms result.</p>
<pre><code>df.plot.hist(by='quality',figsize=(10,30))
plt.tight_layout()
plt.show()
</code></pre>
<p>This group_by histogram plot works beautifully with one issue: the top title of each individual histogram defaults to the numeric "quality" value, i.e., "3", "4", "5", etc., but does not name the column from which that numeric "quality" value comes.
<a href="https://i.sstatic.net/GPrKg.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/GPrKg.png" alt="example of grouped histogram result" /></a>
What I would like to do is capture that value but add the column name (or some such string) to it like so:</p>
<pre><code>"Quality: " + str(whatever_the_quality_value_variable_is_called)
</code></pre>
<p>Can anyone suggest how to set plt.title for each histogram to something other than a fixed string? I find many pages showing how to set the trivial plt.title("my_title") which is not helpful.</p>
<p>Looking for something like</p>
<pre><code>plt.title("My Useful Title Labeling Variable Value" + {the variable}")
</code></pre>
<p>Thank you in advance for spending time and thought on this.</p>
<p>Here is a sample of the data:
<a href="https://i.sstatic.net/MAzOR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/MAzOR.png" alt="sample data" /></a></p>
<p>RESOLUTION:
Here is the result of Jesse's code. Beautiful.
<a href="https://i.sstatic.net/XMA8F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XMA8F.png" alt="resolution example" /></a></p>
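For completeness, the pattern that worked boils down to relabelling each axes object that the grouped plot returns — a sketch on synthetic data (my own reconstruction, not Jesse's exact code):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for the sketch
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "alcohol": rng.normal(10, 1, 100),
    "pH": rng.normal(3.2, 0.2, 100),
    "quality": rng.integers(3, 10, 100),
})

axes = df.plot.hist(by="quality", figsize=(6, 12))

# Each axes' default title is the bare quality value; prepend a label.
for ax in np.ravel(np.asarray(axes)):
    ax.set_title("Quality: " + ax.get_title())
```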
| <python><pandas><plot><histogram><title> | 2023-09-12 21:20:35 | 1 | 961 | noogrub |
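A hedged sketch of one possible approach (not part of the original question): the group key yielded by `DataFrame.groupby` is exactly the "quality" value the asker wants in the title, so iterating over the groups manually makes the title string trivial to build. The column names and data below are a small stand-in for the wine-quality data shown in the screenshots.

```python
import pandas as pd

# Hypothetical stand-in for the wine-quality data in the question.
df = pd.DataFrame({
    "quality": [3, 5, 5, 7, 7, 7],
    "alcohol": [9.4, 9.8, 10.0, 11.2, 11.0, 12.1],
})

# groupby yields (key, sub-frame) pairs; the key is the "quality" value,
# so it can be formatted into any title string.
titles = [f"Quality: {quality}" for quality, group in df.groupby("quality")]
print(titles)
```

In the actual plotting loop, one would replace `df.plot.hist(by=...)` with a grid of subplots and call something like `ax.hist(group['alcohol'])` followed by `ax.set_title(f"Quality: {quality}")` for each group.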
77,092,684 | 5,231,001 | How does the x,y,z in `plot_surface` work | <p>I am trying to figure out how to work with the input data for pyplot's <code>plot_surface</code>, as it is quite confusing.
For a start, the y-axis does not represent height as I am used to in geometry. There is some documentation for it, but it doesn't make sense to me.</p>
<blockquote>
<p>The general format of <code>ax.plot_surface()</code> is below.</p>
<pre><code>>> ax.plot_surface(X, Y, Z)
</code></pre>
<p>Where X and Y are 2D array of x and y points and Z is a 2D array of
heights.</p>
</blockquote>
<p>I noticed in examples the data is commonly build something like:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
tx = np.linspace(0, 1, nx)
ty = np.linspace(0, 1, ny)
X, Y = np.meshgrid(tx, ty)
Z = ((X**2-1)**2) # or some other pseudo-random generated data
plt.figure(figsize=(10,10))
plt.plot_surface(X, Y, Z)
plt.show()
</code></pre>
<p>So from what I could figure out all the variables are 2D-arrays where:</p>
<ul>
<li>X is an array mapping x-coords against a row of y-coords.</li>
<li>Y is an array mapping y-coords against a column of y-coords.</li>
<li>Z is a 2D-array containing all the height values. But I am not sure if it is x-first or y-first.</li>
</ul>
<p>So I am not sure whether these assumptions are correct.</p>
<p>Does anyone know a place where these things are described in a better understandable way, or could anyone explain/dumb this down for me a bit?</p>
| <python><numpy><matplotlib><surface><matplotlib-3d> | 2023-09-12 21:04:57 | 1 | 1,922 | n247s |
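A short check of the conventions the question asks about (this is standard NumPy behavior, independent of the original post): with the default `indexing='xy'`, `np.meshgrid` returns arrays of shape `(ny, nx)`, so rows index y and columns index x, and `Z[i, j]` is the height at `(x=tx[j], y=ty[i])`.

```python
import numpy as np

tx = np.linspace(0.0, 3.0, 4)   # 4 x-points
ty = np.linspace(0.0, 2.0, 3)   # 3 y-points

# Default 'xy' indexing: shape is (ny, nx) -- rows index y, columns index x.
X, Y = np.meshgrid(tx, ty)
Z = X**2 + Y                    # Z[i, j] is the height at (x=tx[j], y=ty[i])

print(X.shape, Y.shape, Z.shape)
```

So for `plot_surface(X, Y, Z)`, `Z` is "y-first" in the sense that its first axis runs over the `ty` values; `np.meshgrid(tx, ty, indexing='ij')` would flip this to "x-first".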
77,092,571 | 3,611,472 | "granularity argument is invalid " error with Coinbase API | <p>I am writing my own Python script to use the Advanced Trade API of Coinbase.</p>
<p>I am trying to use the <code>/api/v3/brokerage/products/{product_id}/candles</code> endpoint to fetch historical data. Authentication works, but I get the following error when I try to get historical data</p>
<p><code>{"error":"INVALID_ARGUMENT","error_details":"granularity argument is invalid ","message":"granularity argument is invalid "}</code></p>
<p>The base code of my custom client is the following</p>
<pre><code>import hmac, hashlib, requests
import time
import urllib.parse
from requests.auth import AuthBase
import datetime
class CoinbaseAuth(AuthBase):
def __init__(self, api_key, api_secret):
self.api_key = api_key
self.api_secret = api_secret
def __call__(self, request):
timestamp = str(int(time.time()))
message = timestamp + request.method + request.path_url + (request.body or b'').decode()
signature = hmac.new(self.api_secret.encode('utf-8'),
message.encode('utf-8'),
digestmod=hashlib.sha256).digest().hex()
request.headers.update({
'CB-ACCESS-SIGN': signature,
'CB-ACCESS-TIMESTAMP': timestamp,
'CB-ACCESS-KEY': self.api_key,
'Content-Type': 'application/json'
})
return request
class CoinbaseClient:
def __init__(self, auth: CoinbaseAuth):
self.auth = auth
def request(self, http_method: str, endpoint: str, body: dict = None):
if body is None:
body = {}
url = urllib.parse.urljoin('https://api.coinbase.com/', endpoint)
response = requests.request(http_method, url, json=body, auth=self.auth)
return response
</code></pre>
<p>that I can use running the following lines</p>
<pre><code>
config = {'api_key': 'YOUR_API_KEY',
'api_secret': 'YOUR_SECRET_API'}
auth = CoinbaseAuth(api_key=config['api_key'], api_secret=config['api_secret'])
client = CoinbaseClient(auth=auth)
start = str(int(datetime.datetime(2023, 1, 1).timestamp()))
end = str(int(datetime.datetime(2023, 1, 3).timestamp()))
response = client.request(http_method='GET', endpoint='/api/v3/brokerage/products/BTC-USD/candles',
body={"start": start, "end": end, 'granularity': 'ONE_DAY'})
print(response.text)
</code></pre>
<p>These lines return the error <code>{"error":"INVALID_ARGUMENT","error_details":"granularity argument is invalid ","message":"granularity argument is invalid "}</code></p>
<p>Curiously, if I make the same request but without specifying a body request, I get the data I was looking for. But this is not what the documentation of Coinbase says.</p>
<p>To run a test, I define a <code>dummy_request</code> method of the <code>CoinbaseClient</code> class</p>
<pre><code>import hmac, hashlib, requests
import time
import urllib.parse
from requests.auth import AuthBase
import datetime
class CoinbaseAuth(AuthBase):
def __init__(self, api_key, api_secret):
self.api_key = api_key
self.api_secret = api_secret
def __call__(self, request):
timestamp = str(int(time.time()))
message = timestamp + request.method + request.path_url + (request.body or b'').decode()
signature = hmac.new(self.api_secret.encode('utf-8'),
message.encode('utf-8'),
digestmod=hashlib.sha256).digest().hex()
request.headers.update({
'CB-ACCESS-SIGN': signature,
'CB-ACCESS-TIMESTAMP': timestamp,
'CB-ACCESS-KEY': self.api_key,
'Content-Type': 'application/json'
})
return request
def return_headers(self, path_url, body=''):
timestamp = str(int(time.time()))
message = timestamp + 'GET' + path_url + body
signature = hmac.new(self.api_secret.encode('utf-8'),
message.encode('utf-8'),
digestmod=hashlib.sha256).digest().hex()
return {
'CB-ACCESS-SIGN': signature,
'CB-ACCESS-TIMESTAMP': timestamp,
'CB-ACCESS-KEY': self.api_key,
'Content-Type': 'application/json'
}
class CoinbaseClient:
def __init__(self, auth: CoinbaseAuth):
self.auth = auth
def request(self, http_method: str, endpoint: str, body: dict = None):
if body is None:
body = {}
url = urllib.parse.urljoin('https://api.coinbase.com/', endpoint)
response = requests.request(http_method, url, json=body, auth=self.auth)
return response
def dummy_request(self):
url = 'https://api.coinbase.com/api/v3/brokerage/products/BTC-USDC/candles?start=1672527600&end=1672614000&granularity=ONE_DAY'
headers = self.auth.return_headers('/api/v3/brokerage/products/BTC-USDC/candles')
response = requests.get(url, headers=headers)
return response
</code></pre>
<p>By calling this new method, I get the data I was looking for. Running the following lines</p>
<pre><code>response = client.dummy_request()
print(response.text)
</code></pre>
<p>returns <code>{"candles":[{"start":"1672531200", "low":"16490", "high":"16621", "open":"16531.83", "close":"16611.58", "volume":"10668.73697739"}]}</code></p>
<p>Why am I getting the <code>"INVALID_ARGUMENT"</code> error when I specify the body request? Anyone has an idea of what is going on? Is there an error in my code?</p>
| <python><python-requests><coinbase-api> | 2023-09-12 20:41:32 | 1 | 443 | apt45 |
77,092,547 | 1,797,000 | How to solve "OperationalError: database is locked" with simultaneous /admin and management command sessions? | <p>I'm using django with sqlite for a web application which executes a long-running management command to update the database (<code>./manage.sh download_company_data</code>). However, while it's running, I'm unable to create a new user in the Django admin interface. I've already increased the timeout option for the database to 20000:</p>
<pre><code>DATABASES = {
'default': {
'ENGINE': 'django.db.backends.sqlite3',
'NAME': BASE_DIR / 'db.sqlite3',
'OPTIONS': { 'timeout': 20000 },
}
}
</code></pre>
<p>The operational errors are immediate, though, ignoring the configured timeout. They happen after clicking on the save button when trying to add a user to the database.</p>
<p>I'm simultaneous running the database update process and a <code>runserver</code> process for the admin site. The individual updates are simple and fast, don't access the <code>User</code> table and I have also tried to disconnect from the database and sleep to give room for the server process:</p>
<pre><code> for line in c:
line = preprocess(line, prefix)
search = { unique: line[unique] }
for i in range(10):
try:
Model.objects.update_or_create(**search, defaults=line)
break
except OperationalError as e:
info('Operational error. Delayed retry...')
sleep(0.02)
connections.close_all()
sleep(1)
</code></pre>
<p>I don't understand why the admin site is repeatedly giving operational errors mentioning the database is locked, even when there's only 1 connection to the sqlite file most of the time.</p>
<p>Is there any way to solve this without switching sqlite to another database backend ?</p>
| <python><django><sqlite> | 2023-09-12 20:36:38 | 1 | 8,117 | hdante |
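One common mitigation, not mentioned in the post, is to switch SQLite into WAL (write-ahead log) journal mode, which lets readers proceed while a single writer holds the lock and removes many "database is locked" errors. Also worth noting: Django's `'timeout'` option is passed to `sqlite3.connect` in seconds, so `20000` is likely far larger than intended, whereas the `busy_timeout` PRAGMA below is in milliseconds. A minimal stdlib sketch (the path is a hypothetical stand-in for Django's `db.sqlite3`):

```python
import os
import sqlite3
import tempfile

# Hypothetical database path standing in for Django's db.sqlite3.
path = os.path.join(tempfile.mkdtemp(), "db.sqlite3")

con = sqlite3.connect(path)
# WAL mode lets concurrent readers coexist with one writer.
mode = con.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
# busy_timeout is in milliseconds, unlike Django's 'timeout' option (seconds).
con.execute("PRAGMA busy_timeout=20000;")
print(mode)
con.close()
```

In a Django project this PRAGMA is typically issued from a `connection_created` signal handler so every new connection gets WAL mode; the mode itself is persistent in the database file once set.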
77,092,488 | 9,855,588 | python unique set of unsigned digits using uuid | <p>I need a unique number (max 64-bit) that most likely won't be reused in the future, or potentially regenerated in the future.</p>
<p>Could that be achieved using:</p>
<pre><code>import uuid
uuid.uuid4().node
</code></pre>
<p>which would yield the last 48 bits of the UUID? Is there a chance this 48 bit value could get repeated when calling for its value in the future?</p>
| <python><python-3.x> | 2023-09-12 20:24:52 | 3 | 3,221 | dataviews |
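A hedged note beyond the post: `uuid4().node` keeps only 48 of the UUID's 122 random bits, so by the birthday bound a repeat becomes likely (~50%) after only about 2**24 (tens of millions) of values. If the goal is simply "64 random bits", the stdlib `secrets` module gives all 64 directly, pushing the same 50% collision point out to roughly 2**32 values:

```python
import secrets

def random_u64():
    # 64 fully random bits. By the birthday bound a repeat becomes likely
    # (~50%) only after roughly 2**32 values, versus ~2**24 values for
    # the 48-bit uuid4().node field.
    return secrets.randbits(64)

n = random_u64()
print(n)
```

If collisions must be impossible rather than merely unlikely, a monotonically increasing counter (e.g. a database sequence) is the safer design.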
77,092,462 | 5,823,586 | Angular 16 / Flask Browser Refresh - How to Redirect to 'root' | <p>I am new to Angular and have been assigned the development of a SPA UI frontend communicating with a Flask server and everything was going fine until I hit an issue of when the user hits the refresh button.</p>
<p>What we want to do is redirect to the application root which in this case is index.html in the templates directory. One approach I found was to use the Flask errorhandler decorator to do the redirect but this won't work because there isn't anything returned back to the Flask server when I hit the 'Refresh' button.</p>
<p>When I launch the application I can see in my VSCode terminal window the Flask server starting the application container and that the call to render_template is loading all of the Angular assets; however, when I hit the browser refresh button all I see is a 404 error for the page that I was on and there is no call back to the Flask server.</p>
<p>So my question is this if there isn't a call back to the Flask server how can I perform the redirect to the application root. Do I put some sort of javascript handler in the index template that detects the 404 condition and handles reloading itself?</p>
<p>TIA,
Bill Youngman</p>
| <python><angular><flask><browser-refresh> | 2023-09-12 20:20:56 | 1 | 301 | B. Youngman |
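Not from the post, but the usual arrangement for an SPA served by Flask is a catch-all route that returns `index.html` for every non-API path: a browser refresh on a deep client-side route then reloads the Angular app (whose router re-resolves the URL) instead of hitting a 404. A sketch, with the route body as a stand-in for the real `render_template('index.html')`:

```python
from flask import Flask

app = Flask(__name__)

@app.route("/", defaults={"path": ""})
@app.route("/<path:path>")
def catch_all(path):
    # In the real app this would be: return render_template("index.html"),
    # letting the Angular router take over on the client side.
    return "<!-- index.html stand-in -->", 200

client = app.test_client()
print(client.get("/").status_code, client.get("/some/deep/route").status_code)
```

API endpoints should be registered before (or under a distinct prefix from) the catch-all so they are not swallowed by it.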
77,092,392 | 512,480 | Strange version string / number error when using pillow | <p>This simple python program:</p>
<pre><code>import pyautogui
# Get the screen size
screen_size = pyautogui.size()
# Capture the screen and save it to a file
screenshot = pyautogui.screenshot()
screenshot.save("screenshot.png")
</code></pre>
<p>Breaks with a very strange error from pyscreeze:</p>
<pre><code>% python3 screenshot.py
Traceback (most recent call last):
File "/Users/ken/Programs/Neil/screenshot.py", line 7, in <module>
screenshot = pyautogui.screenshot()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/pyscreeze/__init__.py", line 527, in _screenshot_osx
if tuple(PIL__version__) < (6, 2, 1):
TypeError: '<' not supported between instances of 'str' and 'int'
</code></pre>
<p>What I have tried:</p>
<ol>
<li>pip3 install --upgrade pillow
which took it to pillow-10.0.0.</li>
<li>update pyscreeze to the latest
This gives the strange error below but an update to pyautogui then goes without a hitch. None of which resolves the original str / int problem.</li>
</ol>
<p>Please tell me how to address this kind of error.</p>
<pre><code>% pip3 install pyscreeze
Collecting pyscreeze
Using cached PyScreeze-0.1.29-py3-none-any.whl
Collecting pyscreenshot (from pyscreeze)
Using cached pyscreenshot-3.1-py3-none-any.whl (28 kB)
Requirement already satisfied: Pillow>=9.2.0 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pyscreeze) (9.5.0)
Requirement already satisfied: EasyProcess in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pyscreenshot->pyscreeze) (1.1)
Requirement already satisfied: entrypoint2 in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pyscreenshot->pyscreeze) (1.1)
Requirement already satisfied: mss in /Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages (from pyscreenshot->pyscreeze) (9.0.1)
Installing collected packages: pyscreenshot, pyscreeze
**ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.**
pyautogui 0.9.54 requires pygetwindow>=0.0.5, which is not installed.
Successfully installed pyscreenshot-3.1 pyscreeze-0.1.29
</code></pre>
| <python><versioning><dependency-management><pyautogui> | 2023-09-12 20:09:31 | 2 | 1,624 | Joymaker |
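The traceback itself pinpoints the bug: `tuple(PIL__version__)` splits the version *string* into individual characters, which then get compared against integers — upgrading pyscreeze, where this comparison was fixed, is the real remedy. A small reproduction of the broken comparison and its corrected form:

```python
version = "10.0.0"     # what PIL.__version__ looks like

broken = tuple(version)                          # ('1', '0', '.', '0', '.', '0')
fixed = tuple(int(p) for p in version.split("."))  # (10, 0, 0)

# '1' < 6 raises TypeError in Python 3 -- exactly the reported error.
try:
    broken < (6, 2, 1)
    raised = False
except TypeError:
    raised = True

print(raised, fixed >= (6, 2, 1))
```

Note that naive string comparison would also be wrong here ("10.0.0" < "6.2.1" lexicographically), which is why version strings must be parsed into integer tuples before comparing.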
77,092,321 | 7,197,899 | Metaclass isinstance not working as expected | <p>I've got the following class hiearchy:</p>
<pre class="lang-py prettyprint-override"><code>class Acting: pass
class Observing: pass
class AgentMeta(type):
def __instancecheck__(self, instance: Any) -> bool:
return isinstance(instance, Observing) and isinstance(instance, Acting)
class Agent(Acting, Observing, metaclass=AgentMeta): pass
class GridWorld: pass
class GridMoving(Acting, GridWorld): pass
class GridObserving(Observing, GridWorld): pass
class Blocking(GridMoving, GridObserving): pass
class Broadcasting(Agent, GridWorld): pass
</code></pre>
<p>I set it up this way so that I can check this</p>
<pre><code>isinstance(Blocking(), Agent) -> True
</code></pre>
<p>instead of this</p>
<pre><code>isinstance(Blocking(), Observing) and isinstance(Blocking(), Acting)
</code></pre>
<p>However, now I have the undesired side effect that <code>Blocking</code> is an instance of <code>Broadcasting</code>.</p>
<pre><code>isinstance(Blocking(), Broadcasting) -> True
</code></pre>
<p>My understanding is that this is because <code>Braodcasting</code> inherits from <code>Agent</code>, which has the modification on the <code>__instancecheck__</code> from the metaclass. I don't really understand metaclasses well enough to come up with an elegant solution, but I would like to do the following:</p>
<ol>
<li>When I want to check if something is both <code>Acting</code> and <code>Observing</code>, I would like to shortcut this by just checking if it is <code>Agent</code>.</li>
<li>I would like to be able to inherit from <code>Agent</code> but still have the <code>isinstance</code> check the actual class and not use the <code>__instancecheck__</code> from <code>AgentMeta</code>.</li>
</ol>
<p>Is this possible? Are metaclasses overkill here?</p>
| <python><inheritance><metaclass><isinstance> | 2023-09-12 19:56:11 | 1 | 1,173 | KindaTechy |
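One possible fix, sketched here rather than taken from the post: since `isinstance(x, C)` dispatches to `type(C).__instancecheck__`, every class whose metaclass is `AgentMeta` (including `Broadcasting`) inherits the custom check. Restricting the duck-typed check to `Agent` itself and falling back to `type.__instancecheck__` for everything else gives the desired shortcut without the side effect:

```python
class Acting: pass
class Observing: pass

class AgentMeta(type):
    def __instancecheck__(cls, instance):
        # Only Agent itself gets the structural check; classes derived
        # from Agent fall back to the normal MRO-based check.
        if cls is Agent:
            return isinstance(instance, Acting) and isinstance(instance, Observing)
        return type.__instancecheck__(cls, instance)

class Agent(Acting, Observing, metaclass=AgentMeta): pass
class GridWorld: pass
class GridMoving(Acting, GridWorld): pass
class GridObserving(Observing, GridWorld): pass
class Blocking(GridMoving, GridObserving): pass
class Broadcasting(Agent, GridWorld): pass

print(isinstance(Blocking(), Agent), isinstance(Blocking(), Broadcasting))
```

For this kind of structural check, an `abc.ABC` with `__subclasshook__` or a `typing.Protocol` marked `@runtime_checkable` is arguably the more idiomatic route than a hand-rolled metaclass.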
77,092,237 | 5,044,921 | Simulating a vibrating string: is divergence avoidable? | <p>I am trying to simulate a Gaussian wave traveling down a 1-D string, with some wave velocity 1.0, until it hits x=0, where the string changes to have a wave velocity 0.5.</p>
<p>My code currently is as follows:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
# Parameters
N = 301
x_max = 10.0
x_min = -x_max
v_left = 1.0
v_right = 0.5
t0 = -6.0
dt = 0.02
# Initial conditions
x = np.linspace(x_min, x_max, N)
v = (x < 0) * v_left + (x >= 0) * v_right
y = np.exp(-(x - v_left * t0) ** 2)
dy_dt = y * 2 * (x - v_left * t0) * v_left
dx = x[1] - x[0]
# Prepare for iteration
fig = plt.figure()
d2y_dx2 = np.zeros_like(x)
t = t0
iteration = 0
# Iterate
while t <= 4.0:
# Approximate wave equation: d^2/dt^2 y = v^2 * d^2/dx^2
d2y_dx2[1:-1] = np.diff(y, 2) / dx**2
d2y_dt2 = v**2 * d2y_dx2
y += dt * dy_dt + 0.5 * dt**2 * d2y_dt2
dy_dt += dt * d2y_dt2
# Check if we should plot
for test_t in [-4.0, -2.0, -0.5, 0.5, 2.0, 4.0]:
if abs(t - test_t) < (0.1 * dt):
plt.plot(x, y, label=f"t={t}")
# Prepare for next iteration
iteration += 1
t = t0 + (iteration * dt)
# Adjust plot
plt.xlim(-5.0, 5.0)
plt.ylim(-1.1, 1.1)
plt.plot([0.0, 0.0], [-1.1, 1.1], color="black")
plt.legend()
plt.show()
</code></pre>
<p>This algorithm works fine for some time, but it eventually turns crazy. The plot that is generated displays the state of the string at various points in time, and looks like so:
<a href="https://i.sstatic.net/Jf0Hb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Jf0Hb.png" alt="Resulting graph" /></a></p>
<p>My first instinct is that I'm simulating too crudely, but if I increase the density on the x-axis, or decrease <code>dt</code> then the craziness shows up <em>earlier</em>. Is this all just an artefact of the limitations of floats? How can I avoid this from happening? If I can't: how can I maximise the x/t density without before this happens?</p>
| <python><numpy><simulation><physics> | 2023-09-12 19:43:11 | 0 | 4,786 | acdr |
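A hedged diagnosis, not from the post: two standard suspects for this kind of blow-up are the time integrator (the update shown advances both `y` and `dy_dt` with the *old* acceleration, a Taylor/Euler-style step that is not symplectic, so its energy grows without bound for oscillatory systems) and the CFL condition (`v*dt/dx` must stay at or below 1 when the x-grid is refined). The integrator effect can be isolated on a single harmonic oscillator, comparing the question's scheme against velocity Verlet, which advances the velocity with the average of old and new accelerations:

```python
def simulate(step, n=4000, dt=0.05):
    y, v = 1.0, 0.0
    for _ in range(n):
        y, v = step(y, v, dt)
    return 0.5 * (y * y + v * v)   # energy; the exact value is 0.5

def euler_taylor(y, v, dt):
    # The question's update: y and v both use the old acceleration a = -y.
    a = -y
    return y + dt * v + 0.5 * dt * dt * a, v + dt * a

def velocity_verlet(y, v, dt):
    # Symplectic: v is advanced with the average of old and new acceleration.
    a = -y
    y_new = y + dt * v + 0.5 * dt * dt * a
    a_new = -y_new
    return y_new, v + 0.5 * dt * (a + a_new)

print(simulate(euler_taylor), simulate(velocity_verlet))
```

The same per-grid-point fix carries over to the string simulation, together with keeping the Courant number `v*dt/dx <= 1` whenever the mesh is refined.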
77,092,189 | 7,700,802 | Creating a custom desribe summary table for a number of columns | <p>I have this dataset:</p>
<pre><code>d = {'eue_scanner': [1,2,3,4,5],
'eue_thinclient': [12,1221,2123,1231, 123],
'service_model': ['Gold', 'Bronze', 'Bronze', 'Silver', 'Silver']}
df = pd.DataFrame(data=d)
</code></pre>
<p>I want to create a describe table that will look like this (ignore actual numbers):</p>
<p><a href="https://i.sstatic.net/RHSuE.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RHSuE.png" alt="enter image description here" /></a></p>
<pre><code>df['eue_thinclient'].describe().to_csv("my_description.csv")
</code></pre>
<p>but I would want all these summary tables in this formatting for the rest. Here is the code I have:</p>
<pre><code>for smf, df_smf in df_final.groupby('service_model'):
for col in columns_for_stats:
print(smf,"-", col)
print(df_smf[col].describe())
df_smf[col].describe().to_csv("my_description.csv")
break
break
</code></pre>
<p>The <code>column_for_stats</code> list is just this:</p>
<pre><code>column_for_stats = ['eue_scanner', 'eue_thinclient']
</code></pre>
<p>but for all the different combinations. Here is the expected csv output in picture form</p>
<p><a href="https://i.sstatic.net/iu2Az.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/iu2Az.png" alt="enter image description here" /></a></p>
<p>Any help is appreciated.</p>
| <python><pandas><excel><dataframe> | 2023-09-12 19:35:11 | 1 | 480 | Wolfy |
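A hedged sketch of one approach (not from the post): `GroupBy.describe()` already produces MultiIndex columns of `(column, statistic)` pairs, so transposing puts every `(column, statistic)` row against one service model per column — essentially the screenshot-style layout — and the whole thing can be written out in a single `to_csv` call:

```python
import pandas as pd

d = {"eue_scanner": [1, 2, 3, 4, 5],
     "eue_thinclient": [12, 1221, 2123, 1231, 123],
     "service_model": ["Gold", "Bronze", "Bronze", "Silver", "Silver"]}
df = pd.DataFrame(data=d)

column_for_stats = ["eue_scanner", "eue_thinclient"]
# describe() per group gives MultiIndex columns (column, statistic);
# transposing puts those pairs on the rows and one service model per column.
summary = df.groupby("service_model")[column_for_stats].describe().T
summary.to_csv("my_description.csv")
print(summary.shape)
```

With 2 columns and 8 statistics each, `summary` has 16 rows and one column per service model, so the nested loop and per-iteration `to_csv` calls become unnecessary.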
77,092,182 | 3,543,200 | boto3 failing to upload file in subdirectory of existing bucket | <p><code><EDIT></code> Closing this, as uninstalling and reinstalling <code>moto</code> from scratch fixed the issue. <code></EDIT></code></p>
<p>I am using <code>boto3==1.28.45</code> with <code>moto==1.3.14</code> running locally on port 5353.</p>
<p>This has worked for years, but for my machine alone, uploading to subdirs of an existing bucket is now failing with 404.</p>
<p>I can't figure out where to even start debugging this, and would appreciate any ideas of what to think about.</p>
<p><code><EDIT></code> I understand that it's not really a "subdir", it's a key-value store, but it seems anything with slashes is failing <code></EDIT></code></p>
<pre><code>(py-3.8.13) % cat test.txt
test
(py-3.8.13) % python
Python 3.8.13 (default, Aug 26 2022, 14:51:46)
[Clang 12.0.5 (clang-1205.0.22.9)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import boto3
>>> client = boto3.client('s3', region_name='us-east-1', endpoint_url='http://127.0.0.1:5353')
>>> client.create_bucket(Bucket='test')
{'ResponseMetadata': {'HTTPStatusCode': 200, 'HTTPHeaders': {'server': 'Werkzeug/2.3.5 Python/3.8.13', 'date': 'Tue, 12 Sep 2023 19:17:47 GMT', 'content-type': 'text/html; charset=utf-8', 'content-length': '158', 'connection': 'close'}, 'RetryAttempts': 0}}
>>> client.upload_file('test.txt', 'test', 'test.txt')
>>> client.upload_file('test.txt', 'test', 'test_dir/test.txt')
Traceback (most recent call last):
... [ skipped for brevity ]
botocore.exceptions.ClientError: An error occurred (404) when calling the PutObject operation: Not Found
</code></pre>
<p>I've tried the following as well:</p>
<pre><code>>>> resource = boto3.resource('s3', region_name='us-east-1', endpoint_url='http://127.0.0.1:5353')
>>> bucket = resource.Bucket('test')
>>> bucket.upload_file('test.txt', 'test.txt')
>>> bucket.upload_file('test.txt', 'test_dir/test.txt')
Traceback (most recent call last):
...
botocore.exceptions.ClientError: An error occurred (404) when calling the PutObject operation: Not Found
>>> bucket.put_object(Key='test_dir/')
>>> list(bucket.objects.iterator())
[s3.ObjectSummary(bucket_name='test', key='test.txt'), s3.ObjectSummary(bucket_name='test', key='test_dir/')]
>>> bucket.upload_file('test.txt', 'test_dir/test.txt')
Traceback (most recent call last):
...
botocore.exceptions.ClientError: An error occurred (404) when calling the PutObject operation: Not Found
</code></pre>
<p>Moto_server logs are as expected:</p>
<pre><code>127.0.0.1 - - [12/Sep/2023 12:28:15] "PUT /test/test_dir/test.txt HTTP/1.1" 404 -
</code></pre>
| <python><amazon-s3><boto3><moto> | 2023-09-12 19:33:40 | 1 | 997 | gmoss |
77,092,165 | 14,293,020 | Write tifs as a datacube | <p>I have a folder of <code>.tif</code> files, I want to merge them in a datacube. By <em>datacube</em> I mean a <code>netcdf</code> file, or <code>zarr</code> files. The goal is that if I open this datacube in Python, I can access a 3D array representing the stack of <code>tif</code> files.</p>
<p><strong>Context:</strong> The files never overlap spatially, but stitched together they cover one big area. They are named after a date: some tifs have the same date (in which case they are merged with <code>xr.concat(dim='x')</code>).</p>
<p><strong>Goal:</strong> I want to combine all these tifs in one dataset and save this dataset. I want to do that with <code>dask</code> so I don't encounter memory issues.</p>
<p><strong>Questions:</strong></p>
<ul>
<li>How to concatenate and save datasets as zarr or netcdf with <code>dask</code> so we don't overload memory ?</li>
</ul>
<p><strong>Code example:</strong></p>
<pre><code>import xarray as xr
import pandas as pd
from collections import Counter
# List of dates, 3 files share the same date
list_dates = [
pd.Timestamp('2014-10-08 00:00:00'),
pd.Timestamp('2014-10-13 00:00:00'),
pd.Timestamp('2014-10-15 00:00:00'),
pd.Timestamp('2014-10-15 00:00:00'),
pd.Timestamp('2014-10-15 00:00:00')
]
# In the list of files, the last 3 files have the same date
list_files = [
'2014-10-08_0.tif',
'2014-10-13_0.tif',
'2014-10-15_0.tif',
'2014-10-15_1.tif',
'2014-10-15_2.tif'
]
# Count how many times each date appears
date_counts = Counter(list_dates)
# Initialize the list hosting the DataArrays
data_arrays = []
# Initialize the counter
i = 0
# Loop that appends the DataArrays and concatenates them if the files have the same date
while i < len(list_files):
# If the date is unique, append the DataArrays list
if date_counts[list_dates[i]] == 1:
# Open the file as a dataarray (index [0] because it is opened as a 3D array even though it is 2D)
data_array = xr.open_dataarray(list_files[i][0])
# Add the timestamp as a dimension
data_array = data_array.expand_dims({'time': [list_dates[i]]})
data_arrays.append(data_array)
i += 1 # Update the iterator
# If the date has multiple occurrences, concatenate spatially the files sharing the same date
else:
# Open all the files with the same date as dataarrays and store them in a temporary list
ds_temp = xr.concat([xr.open_dataarray(list_files[j])[0] for j in range(i, i + date_counts[list_dates[i]])], dim='x')
# Add a time dimension
ds_temp = ds_temp.expand_dims({'time': [list_dates[i]]})
data_arrays.append(ds_temp)
i += date_counts[list_dates[i]] # Update the iterator by the amount of files that were used in the concatenation
# Concatenate all the DataArrays in the final list along the 'time' dimension
final_dataset = xr.concat(data_arrays, dim='time')
# Print the final dataset
print(final_dataset)
</code></pre>
| <python><pandas><dask><ram><python-xarray> | 2023-09-12 19:31:08 | 1 | 721 | Nihilum |
77,092,114 | 5,551,827 | numba typeerror on higher dimensional structured numpy datatypes | <p>The following code compiles and executes correctly:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import njit
Particle = np.dtype([ ('position', 'f4'), ('velocity', 'f4')])
arr = np.zeros(2, dtype=Particle)
@njit
def f(x):
x[0]['position'] = x[1]['position'] + x[1]['velocity'] * 0.2 + 1.
f(arr)
</code></pre>
<p>However, making the datatype more highly dimensional causes this code to fail when compiling (but works without <code>@njit</code>):</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from numba import njit
Particle = np.dtype([
('position', 'f4', (2,)),
('velocity', 'f4', (2,))
])
arr = np.zeros(2, dtype=Particle)
@njit
def f(x):
x[0]['position'] = x[1]['position'] + x[1]['velocity'] * 0.2 + 1.
f(arr)
</code></pre>
<p>With the following error:</p>
<pre><code>TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<built-in function setitem>) found for signature:
>>> setitem(Record(position[type=nestedarray(float32, (2,));offset=0],velocity[type=nestedarray(float32, (2,));offset=8];16;False), Literal[str](position), array(float64, 1d, C))
There are 16 candidate implementations:
- Of which 16 did not match due to:
Overload of function 'setitem': File: <numerous>: Line N/A.
With argument(s): '(Record(position[type=nestedarray(float32, (2,));offset=0],velocity[type=nestedarray(float32, (2,));offset=8];16;False), unicode_type, array(float64, 1d, C))':
No match.
During: typing of staticsetitem at /tmp/ipykernel_21235/2952285515.py (13)
File "../../../../tmp/ipykernel_21235/2952285515.py", line 13:
<source missing, REPL/exec in use?>
</code></pre>
<p>Any thoughts on how to remedy the latter one? I would like to use more highly dimensional datatypes.</p>
| <python><numba> | 2023-09-12 19:21:32 | 1 | 777 | Darren McAffee |
77,092,010 | 5,032,387 | Understanding options for kwargs in Python functions API | <p>How do I know all the options for arguments of a function when there is a kwargs argument? I've encountered this many times where the documentation isn't specific enough with respect to what they are. I've never been able to figure out whether this is just lackluster documentation or there is a concept that I'm ignorant about.</p>
<p>For example, examine the following code from the HuggingFace <a href="https://huggingface.co/docs/transformers/llm_tutorial" rel="nofollow noreferrer">tutorial</a> on generation:</p>
<pre><code>from transformers import AutoModelForCausalLM
model = AutoModelForCausalLM.from_pretrained(
"openlm-research/open_llama_7b", device_map="auto", load_in_4bit=True
)
</code></pre>
<p>If we navigate to the <a href="https://huggingface.co/docs/transformers/main/en/model_doc/auto#transformers.AutoTokenizer.from_pretrained" rel="nofollow noreferrer">API</a>, we see that there isn't a load_in_4bit argument, so this must be an optional keyword argument. Here are the kwargs details:</p>
<pre><code>kwargs (additional keyword arguments, optional) — Will be passed to the Tokenizer __init__() method. Can be used to set special tokens like bos_token, eos_token, unk_token, sep_token, pad_token, cls_token, mask_token, additional_special_tokens. See parameters in the __init__() for more details.
</code></pre>
<p>If I navigate to the Tokenizer <code>__init__()</code> documentation like the above suggests, then I also see a keyword argument:</p>
<pre><code> | __init__(self, vocab_file=None, merges_file=None, tokenizer_file=None, unk_token='<|endoftext|>', bos_token='<|endoftext|>', eos_token='<|endoftext|>', add_prefix_space=False, **kwargs)
| Initialize self. See help(type(self)) for accurate signature.
</code></pre>
<p>So in this particular case, the only way I would have known about this argument is from the tutorial referenced, which is a very fragile way of understanding how to use a particular function.</p>
| <python><documentation> | 2023-09-12 19:04:06 | 0 | 3,080 | matsuo_basho |
77,091,968 | 15,498,094 | Update Django model field when auth token expires | <p>I'm using JWT auth in Django, I need to set a field in one of my models to false when the access token expires. Is there anyway to monitor tokens like that?</p>
| <python><django><jwt> | 2023-09-12 18:57:57 | 1 | 446 | Vedank Pande |
77,091,861 | 18,739,908 | How can I configure ib_insync to run in a FastAPI app? | <p>I already have IB Gateway setup on my local machine. When I run my app I get the error: "RuntimeError: This event loop is already running". Is it possible to have ib_insync running in a FastAPI app? If so how can I do it? Here is my existing code:</p>
<pre><code>from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware
from .routers import auth, profile
from decouple import config
from .utils.constants import app_name
from ib_insync import IB
FRONTEND_DOMAIN = config("FRONTEND_DOMAIN")
app = FastAPI()
ib = IB()
ib.connect("127.0.0.1", 7497, clientId=1)
app.add_middleware(
CORSMiddleware,
allow_origins=[FRONTEND_DOMAIN],
allow_credentials=True,
allow_methods=["*"],
allow_headers=["*"],
)
app.include_router(auth.router)
app.include_router(profile.router)
@app.get("/")
async def home():
return {"Welcome to the": f"{app_name} API"}
</code></pre>
<p>I also want to be able to access ib from all files in my app. How can I do this?</p>
| <python><fastapi><algorithmic-trading><interactive-brokers><ib-insync> | 2023-09-12 18:40:06 | 1 | 494 | Cole |
77,091,765 | 14,852,106 | How to make field readonly based on group and state with _get_view function. Odoo 16 | <p>I want to use the _get_view function in the 'purchase.order' to set readonly attribute of some customs fields based on group and state.
Here is my code. It works with group condition:</p>
<pre><code>@api.model
def _get_view(self, view_id=None, view_type='form', **options):
arch, view = super()._get_view(view_id, view_type, **options)
rec_list = ['new_field1', 'new_field2', 'new_field3']
for field in arch.xpath("//field"):
if field.get('name') in rec_list:
field.set('readonly', '1') if self.env.user.has_group('base.user_admin') else field.set('readonly', '0')
return arch, view
</code></pre>
<p>My problem is how to get the actual value of the state in order to add the condition based on it.</p>
<p>Any help please.
Thanks.</p>
| <python><xml><odoo><odoo-16> | 2023-09-12 18:24:20 | 1 | 633 | K.ju |
77,091,756 | 22,407,544 | How to get python-magic to identify file type of docx correctly instead of as application/zip? | <p>I'm developing a website which allows users to submit files and then I use Python to detect the file type. I learned that all MS Office files are actually zip files but how do I determine that it contains a single MS Word (or other MS Office) file? I tried getting the first few bytes with:</p>
<pre><code>validator = magic.Magic(uncompress=True, mime=True)
mime_type = validator.from_buffer(open(file_path, 'rb').read(1000))
</code></pre>
<p>But no matter how many bytes I read it still returns application/zip.</p>
<p>I'm now trying:</p>
<pre><code>import zipfile
myzip = zipfile.ZipFile(file_path, 'r')
'[Content_Types].xml' in myzip.namelist()
</code></pre>
<p>to test if the zip file contains an xml file, which returns True.</p>
<p>I also ran <code>myzip.namelist()</code> which returned:</p>
<pre><code>['word/numbering.xml', 'word/settings.xml', 'word/fontTable.xml', 'word/styles.xml', 'word/document.xml', 'word/_rels/document.xml.rels', '_rels/.rels', 'word/theme/theme1.xml', '[Content_Types].xml']
</code></pre>
<p>which returned all the files in the zip file but I'm not sure what next to do to get the mime.</p>
<p>To be sure I ran the same command with a xlsx file as file_path which returned:</p>
<pre><code>['[Content_Types].xml', '_rels/.rels', 'xl/workbook.xml', 'xl/_rels/workbook.xml.rels', 'xl/worksheets/sheet1.xml', 'xl/theme/theme1.xml', 'xl/styles.xml', 'xl/sharedStrings.xml', 'docProps/core.xml', 'docProps/app.xml']
</code></pre>
<p>as a test to ensure the results are different.</p>
<p>I want to ensure that a single file is submitted and to be able to detect the file type, but I'm not sure what to do next to get the correct MIME type.</p>
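Since libmagic stops at the container format, one common workaround (a sketch, not the only way) is to open the zip yourself and look for the marker members each OOXML format contains, e.g. <code>word/document.xml</code> for .docx and <code>xl/workbook.xml</code> for .xlsx. The mapping below is an assumption based on the member lists shown above, and <code>guess_ooxml_mime</code> is a hypothetical name:

```python
import io
import zipfile

# Hypothetical marker -> MIME mapping, inferred from the member lists above.
OOXML_MARKERS = {
    "word/document.xml": "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
    "xl/workbook.xml": "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
    "ppt/presentation.xml": "application/vnd.openxmlformats-officedocument.presentationml.presentation",
}

def guess_ooxml_mime(data: bytes) -> str:
    """Return a more specific MIME type for OOXML zips, else a generic one."""
    try:
        names = set(zipfile.ZipFile(io.BytesIO(data)).namelist())
    except zipfile.BadZipFile:
        return "application/octet-stream"
    for marker, mime in OOXML_MARKERS.items():
        if marker in names:
            return mime
    return "application/zip"
```

A zip that matches none of the markers still reports <code>application/zip</code>, so this only refines, never replaces, the libmagic result.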
| <python><excel><xml><ms-word><mime-types> | 2023-09-12 18:23:08 | 1 | 359 | tthheemmaannii |
77,091,688 | 5,431,734 | relative python imports. What is the difference between single dot and no dot | <p>That's quite basic, but I am still a bit puzzled. I have this toy example where I try to import a function from <code>my_module.py</code>, which looks like:</p>
<pre><code>## my_module.py
def say_hi():
print('Hi!')
</code></pre>
<p>See attached pic for my project tree
<a href="https://i.sstatic.net/CYiCU.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/CYiCU.png" alt="enter image description here" /></a></p>
<p>The import statement <code>from .my_module import say_hi</code> gives the <code>ImportError: attempted relative import with no known parent package</code>. However it works fine as <code>from my_module import say_hi</code>.</p>
<p>I thought these statements were equivalent, especially given the presence of <code>__init__.py</code>. PyCharm too seems to be happy in both cases, ie a mouse-hover and ctrl key combination over the word my_module, highlights the word.</p>
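For context, whether the dotted form works depends on how the file is executed, not on <code>__init__.py</code> alone: <code>from .my_module import ...</code> requires the importing file to be loaded as part of a package (e.g. via <code>python -m pkg.script</code> or an import from outside), while running the file directly makes it the top-level <code>__main__</code> with no parent package. A minimal sketch of the distinction (the <code>pkg</code> directory name is hypothetical):

```python
# pkg/script.py  (sketch; "pkg" is a hypothetical package containing __init__.py)
#
#   from .my_module import say_hi   # works when run as:  python -m pkg.script
#   from my_module import say_hi    # works when run as:  python pkg/script.py,
#                                   # because pkg/ itself lands on sys.path
#
# The dotted form needs a parent package; a directly-run script has none:
pkg = globals().get("__package__")
print(pkg)  # None or "" when executed directly -> nothing for "." to resolve against
```

PyCharm resolves both spellings statically, which is why the editor looks happy even when the runtime import fails.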
| <python> | 2023-09-12 18:12:40 | 1 | 3,725 | Aenaon |
77,091,644 | 16,383,578 | How to convert BGR array to LCh array efficiently? | <p>I have a NumPy three-dimensional array of shape <code>(height, width, 3)</code> with <code>float</code> data type; the values are between 0 and 1, and it represents a BGR image of resolution width*height.</p>
<p>And now I want to convert it to a LCh(ab) array and back. So I spent days researching, I have read the Wikipedia <a href="https://en.wikipedia.org/wiki/SRGB" rel="nofollow noreferrer">article</a>, another Wikipedia <a href="https://en.wikipedia.org/wiki/CIELAB_color_space" rel="nofollow noreferrer">article</a>, <a href="https://gitlab.gnome.org/GNOME/gimp" rel="nofollow noreferrer">GIMP source code</a>, <a href="https://gitlab.gnome.org/GNOME/babl" rel="nofollow noreferrer">babl source code</a>, this <a href="http://dev.vkdev.pro/2013/01/lch-color-model-photoshop-blend-modes.html" rel="nofollow noreferrer">program</a>, this <a href="https://ninedegreesbelow.com/photography/srgb-color-space-to-profile.html" rel="nofollow noreferrer">article</a>, and another <a href="http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html" rel="nofollow noreferrer">one</a>...</p>
<p>I don't like that all the values I find online are of low precision. So I ripped the full precision values from GIMP source code and reverse engineered the whole process to create the matrices, and I got the following:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
from pathlib import Path
from typing import Tuple
def xy_to_XY(x: float, y: float) -> Tuple[float]:
return x / y, (1 - x - y) / y
BRADFORD = np.array(
[
[0.8951000, 0.2664000, -0.1614000],
[-0.7502000, 1.7135000, 0.0367000],
[0.0389000, -0.0685000, 1.0296000],
],
dtype=float,
)
BRADFORD_INV = np.linalg.inv(BRADFORD)
D65 = np.array([0.9504559270516716, 1, 1.0890577507598784], dtype=float)
D50 = np.array([0.96420288, 1, 0.82490540], dtype=float)
CHAD = np.zeros(9)
CHAD[:9:4] = (BRADFORD @ D50) / (BRADFORD @ D65)
CHAD = BRADFORD_INV @ CHAD.reshape((3, 3)) @ BRADFORD
Rx = 0.639998686
Ry = 0.330010138
Gx = 0.300003784
Gy = 0.600003357
Bx = 0.150002046
By = 0.059997204
Xr, Zr = xy_to_XY(Rx, Ry)
Xg, Zg = xy_to_XY(Gx, Gy)
Xb, Zb = xy_to_XY(Bx, By)
INTERIMAT = np.array([[Xr, Xg, Xb], [1, 1, 1], [Zr, Zg, Zb]], dtype=float)
MATINV = np.linalg.inv(INTERIMAT)
D65_Y = MATINV @ D65
D65_XYZ = INTERIMAT * D65_Y
def xyY_to_XYZ(x: float, y: float, Y: float) -> Tuple[float]:
return x * Y / y, Y, (1 - x - y) * Y / y
D50_XYZ = np.vstack(
[
CHAD @ xyY_to_XYZ(Rx, Ry, D65_Y[0]),
CHAD @ xyY_to_XYZ(Gx, Gy, D65_Y[1]),
CHAD @ xyY_to_XYZ(Bx, By, D65_Y[2]),
]
).T
D65_RGB = np.linalg.inv(D65_XYZ)
D50_RGB = np.linalg.inv(D50_XYZ)
for i, c in enumerate("XYZ"):
for j, d in enumerate("rgb"):
print(f"D65_{c}{d} = {D65_XYZ[i, j]}")
print()
for i, c in enumerate("RGB"):
for j, d in enumerate("xyz"):
print(f"D65_{c}{d} = {D65_RGB[i, j]}")
print()
for i, c in enumerate("XYZ"):
for j, d in enumerate("rgb"):
print(f"D50_{c}{d} = {D50_XYZ[i, j]}")
print()
for i, c in enumerate("RGB"):
for j, d in enumerate("xyz"):
print(f"D50_{c}{d} = {D50_RGB[i, j]}")
</code></pre>
<p>I now have the full precision values, so I used them in my completely working code to convert between BGR and LCh:</p>
<pre class="lang-py prettyprint-override"><code>import numba as nb
import numpy as np
from math import atan2, cos, pi, sin
from typing import Callable, Tuple
D65_Xw = 0.9504559270516716
D65_Zw = 1.0890577507598784
D65_Xr = 0.4123835774573348
D65_Xg = 0.35758636076837935
D65_Xb = 0.18048598882595746
D65_Yr = 0.21264225112116675
D65_Yg = 0.7151677022795175
D65_Yb = 0.07219004659931565
D65_Zr = 0.019324834131038457
D65_Zg = 0.11918543851645445
D65_Zb = 0.9505474781123853
D65_Rx = 3.2410639132702483
D65_Ry = -1.5374434989773638
D65_Rz = -0.49863738352233855
D65_Gx = -0.9692888172936756
D65_Gy = 1.875993314670902
D65_Gz = 0.04157078604801982
D65_Bx = 0.05564381729909414
D65_By = -0.20396692403457678
D65_Bz = 1.0569503107394616
LAB_F0 = 216 / 24389
LAB_F1 = 841 / 108
LAB_F2 = 4 / 29
LAB_F3 = LAB_F0 ** (1 / 3)
LAB_F4 = 1 / LAB_F1
LAB_F5 = 1 / 2.4
LAB_F6 = 0.04045 / 12.92
RtD = 180 / pi
DtR = pi / 180
@nb.njit(cache=True, fastmath=True)
def gamma_expand(c: float) -> float:
return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
@nb.njit(cache=True, fastmath=True)
def LABF(f: float) -> float:
return f ** (1 / 3) if f >= LAB_F0 else LAB_F1 * f + LAB_F2
@nb.njit(cache=True, fastmath=True)
def LABINVF(f: float) -> float:
return f**3 if f >= LAB_F3 else LAB_F4 * (f - 4 / 29)
@nb.njit(cache=True, fastmath=True)
def gamma_contract(n: float) -> float:
n = n * 12.92 if n <= LAB_F6 else (1.055 * n**LAB_F5) - 0.055
return 0.0 if n < 0 else (1.0 if n > 1 else n)
@nb.njit(cache=True, fastmath=True)
def BGR_to_LCh_D65(b: float, g: float, r: float) -> Tuple[float]:
b = gamma_expand(b)
g = gamma_expand(g)
r = gamma_expand(r)
x = LABF((D65_Xr * r + D65_Xg * g + D65_Xb * b) / D65_Xw)
y = LABF(D65_Yr * r + D65_Yg * g + D65_Yb * b)
z = LABF((D65_Zr * r + D65_Zg * g + D65_Zb * b) / D65_Zw)
m = 500 * (x - y)
n = 200 * (y - z)
return 116 * y - 16, (m * m + n * n) ** 0.5, (atan2(n, m) * RtD) % 360
@nb.njit(cache=True, fastmath=True)
def LCh_D65_to_BGR(l: float, c: float, h: float) -> Tuple[float]:
h *= DtR
l = (l + 16) / 116
x = D65_Xw * LABINVF(l + c * cos(h) / 500)
y = LABINVF(l)
z = D65_Zw * LABINVF(l - c * sin(h) / 200)
r = D65_Rx * x + D65_Ry * y + D65_Rz * z
g = D65_Gx * x + D65_Gy * y + D65_Gz * z
b = D65_Bx * x + D65_By * y + D65_Bz * z
m = min(r, g, b)
if m < 0:
r -= m
g -= m
b -= m
return gamma_contract(b), gamma_contract(g), gamma_contract(r)
@nb.njit(cache=True, parallel=True)
def loop_LCh(img: np.ndarray, mode: Callable) -> np.ndarray:
height, width = img.shape[:2]
out = np.empty_like(img)
for y in nb.prange(height):
for x in nb.prange(width):
b, g, r = img[y, x]
out[y, x] = mode(b, g, r)
return out
@nb.njit
def IMG_to_LCh_D65(img: np.ndarray) -> np.ndarray:
return loop_LCh(img, BGR_to_LCh_D65)
@nb.njit
def LCh_D65_to_IMG(lch: np.ndarray) -> np.ndarray:
    return loop_LCh(lch, LCh_D65_to_BGR)
</code></pre>
<p>It works, and I have rigorously verified its correctness, but it is very slow.</p>
<p>For reference, below is the code I use to convert from BGR to HSL:</p>
<pre class="lang-py prettyprint-override"><code>@nb.njit(cache=True, fastmath=True)
def extrema(a: float, b: float, c: float) -> Tuple[float]:
i = 2
if b > c:
b, c = c, b
i = 1
if a > b:
a, b = b, a
if b > c:
b, c = c, b
i = 0
return i, a, c
@nb.njit(cache=True, fastmath=True)
def hue(b: float, g: float, r: float, d: float, i: float) -> float:
if i == 2:
h = (g - b) / (6 * d)
elif i:
h = 1 / 3 + (b - r) / (6 * d)
else:
h = 2 / 3 + (r - g) / (6 * d)
return h % 1
@nb.njit(cache=True, fastmath=True)
def HSL_pixel(
b: float, g: float, r: float, i: float, x: float, z: float
) -> Tuple[float]:
s = x + z
d = z - x
avg = s / 2
return (hue(b, g, r, d, i), d / (1 - abs(s - 1)), avg) if d else (0, 0, avg)
@nb.njit(cache=True, parallel=True)
def from_BGR(img: np.ndarray, mode: Callable) -> np.ndarray:
height, width = img.shape[:2]
out = np.empty_like(img)
for y in nb.prange(height):
for x in nb.prange(width):
b, g, r = img[y, x]
i, a, c = extrema(b, g, r)
out[y, x] = mode(b, g, r, i, a, c)
return out
@nb.njit
def BGR_to_HSL(img: np.ndarray) -> np.ndarray:
return from_BGR(img, HSL_pixel)
</code></pre>
<p><code>BGR_to_HSL</code> takes about 20 milliseconds to process an 1920x1080 image, but <code>IMG_to_LCh_D65</code> takes about 256 milliseconds to process the same image, which is over 10 times slower.</p>
<pre class="lang-py prettyprint-override"><code>In [2]: img = np.random.random((1080, 1920, 3))
In [3]: %timeit BGR_to_HSL(img)
19.4 ms ± 889 µs per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [4]: %timeit BGR_to_HSL(img)
22.1 ms ± 1.6 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
In [5]: %timeit IMG_to_LCh_D65(img)
257 ms ± 23.7 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [6]: %timeit IMG_to_LCh_D65(img)
268 ms ± 18.2 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
In [7]: b, g, r = np.random.random(3)
In [8]: %timeit i, a, c = extrema(b, g, r)
479 ns ± 14.2 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [9]: %timeit i, a, c = extrema(b, g, r); HSL_pixel(b, g, r, i, a, c)
1.02 µs ± 4.98 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
In [10]: %timeit BGR_to_LCh_D65(b, g, r)
913 ns ± 19.5 ns per loop (mean ± std. dev. of 7 runs, 1,000,000 loops each)
</code></pre>
<p>These two functions use two very similar structures, they both use a two level nested for loop and process the pixels in parallel, the only difference is the detail of the computation.</p>
<p>As you can clearly see, the per-pixel routine that <code>IMG_to_LCh_D65</code> uses is actually faster than the one <code>BGR_to_HSL</code> uses. So why does the function whose per-pixel processing is faster take much longer to complete overall, and how can I fix that?</p>
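Before reaching for C++: for comparison, the same conversion can be written with no per-pixel Python loop at all, using one matrix multiplication for the whole image. This is a hedged sketch, not the method above: the function name is hypothetical and the rounded sRGB/D65 constants stand in for the full-precision values derived earlier.

```python
import numpy as np

# Rounded sRGB -> XYZ (D65) coefficients; illustrative stand-ins only.
M_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
WHITE = np.array([0.9505, 1.0, 1.0890])

def bgr_to_lch(img):
    """Vectorised BGR (H, W, 3) in [0, 1] -> LCh, no per-pixel Python loop."""
    rgb = img[..., ::-1]                              # BGR -> RGB
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)    # gamma expansion
    xyz = lin @ M_XYZ.T / WHITE                       # one matmul for every pixel
    f = np.where(xyz >= 216 / 24389, np.cbrt(xyz),
                 xyz * (841 / 108) + 4 / 29)          # CIE f() function
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    C = np.hypot(a, b)
    h = np.degrees(np.arctan2(b, a)) % 360
    return np.stack([L, C, h], axis=-1)
```

The branchy <code>np.where</code> calls evaluate both sides, so this trades some redundant arithmetic for the removal of all Python-level dispatch per pixel.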
<p>What's more, people often say Python is slow and C++ is super fast, and I agree with that. So I want to offload the heavy computation to C++, and I tried to port my code to C++ to use it as a shared library.</p>
<p>I have no idea how to do that, I want to compile to .dll but so far I have only managed to compile to .exe files. I am still a beginner in C++, but I have managed to port the code that converts from BGR to HSL to C++. It compiled successfully, but it was extremely inefficient, so I didn't port the LCh code:</p>
<pre class="lang-cpp prettyprint-override"><code>#include <algorithm>
#include <cmath>
#include <cstdlib>
#include <chrono>
#include <functional>
#include <iostream>
#include <string>
#include <tuple>
#include <vector>
constexpr double one_third = 1.0 / 3.0;
constexpr double two_third = 2.0 / 3.0;
using std::tuple;
using std::vector;
using pixel = vector<double>;
using line = vector<pixel>;
using image = vector<line>;
using std::chrono::steady_clock;
using std::chrono::duration;
using std::chrono::duration_cast;
using std::chrono::microseconds;
using std::cout;
using std::function;
double wrap(double d) {
d = fmod(d, 1.0);
return d < 0 ? d + 1.0 : d;
}
void fill(pixel& p) {
p[0] = double(rand()) / RAND_MAX;
p[1] = double(rand()) / RAND_MAX;
p[2] = double(rand()) / RAND_MAX;
}
image random_image(int64_t height, int64_t width) {
image image(height, line(width, pixel(3)));
for (auto& l : image) {
std::for_each(l.begin(), l.end(), fill);
}
return image;
}
tuple<int, double, double> extrema(double a, double b, double c) {
int i = 2;
if (b > c) {
std::swap(b, c);
i = 1;
}
if (a > b) {
std::swap(a, b);
}
if (b > c) {
std::swap(b, c);
i = 0;
}
return { i, a, c };
}
double hue(double b, double g, double r, double d, double i) {
double h;
if (i == 2) {
h = (g - b) / (6 * d);
}
else if (i) {
h = one_third + (b - r) / (6 * d);
}
else {
h = two_third + (r - g) / (6 * d);
}
return wrap(h);
}
pixel HSL_pixel(double b, double g, double r) {
auto [i, x, z] = extrema(b, g, r);
double s = x + z;
double d = z - x;
double avg = s / 2;
return d ? pixel { hue(b, g, r, d, i), d / (1 - abs(s - 1)), avg } : pixel { 0.0, 0.0, avg };
}
image BGR_to_HSL(const image& img) {
size_t height, width;
height = img.size();
width = img[0].size();
image out(height, line(width, pixel(3)));
for (int y = 0; y < height; y++) {
for (int x = 0; x < width; x++) {
pixel px = img[y][x];
out[y][x] = HSL_pixel(px[0], px[1], px[2]);
}
}
return out;
}
double timeit(function<image(image)> func, image img, int runs = 256) {
auto start = steady_clock::now();
for (int64_t i = 0; i < runs; i++) {
func(img);
}
auto end = steady_clock::now();
duration<double, std::nano> time = end - start;
return time.count() / runs / 1000.0;
}
int main()
{
image img = random_image(1080, 1920);
double once = timeit(BGR_to_HSL, img, 16);
cout << "converting 1920x1080 BGR image to HSL vector: " + std::to_string(once) + " microseconds\n";
}
</code></pre>
<pre><code>PS C:\Users\Xeni> C:\Users\Xeni\source\repos\CIELCh\x64\Release\CIELCh.exe
converting 1920x1080 BGR image to HSL vector: 1024189.362500 microseconds
</code></pre>
<p>I don't know if it works, but presumably it does. Why is it so terribly slow, and how can I fix that so that I can use it in Python?</p>
| <python><c++><image><numpy><image-processing> | 2023-09-12 18:04:47 | 1 | 3,930 | Ξένη Γήινος |
77,091,626 | 5,086,100 | Consume data from Yahoo Screener via requests | <p>I ran a query on the Yahoo Screener at:</p>
<p><a href="https://finance.yahoo.com/screener/equity/new" rel="nofollow noreferrer">https://finance.yahoo.com/screener/equity/new</a></p>
<p>DevTools shows that the data came back as JSON via:</p>
<p><a href="https://query2.finance.yahoo.com/v1/finance/screener?crumb=u0eNvTHfT6U&lang=en-US&region=US&formatted=true&corsDomain=finance.yahoo.com" rel="nofollow noreferrer">https://query2.finance.yahoo.com/v1/finance/screener?crumb=u0eNvTHfT6U&lang=en-US&region=US&formatted=true&corsDomain=finance.yahoo.com</a></p>
<p>So I tried to manually request the data with:</p>
<pre><code>import json
import requests
url = "https://query2.finance.yahoo.com/v1/finance/screener"
payload = json.loads('{"size":25,"offset":0,"sortField":"intradaymarketcap","sortType":"DESC","quoteType":"EQUITY","topOperator":"AND","query":{"operator":"AND","operands":[{"operator":"or","operands":[{"operator":"EQ","operands":["region","us"]}]},{"operator":"or","operands":[{"operator":"LT","operands":["intradaymarketcap",2000000000]},{"operator":"BTWN","operands":["intradaymarketcap",2000000000,10000000000]}]}]},"userId":"","userIdType":"guid"}')
header = {
"authority": "query2.finance.yahoo.com",
"method":"POST",
"path":"/v1/finance/screener?crumb=umZV3T8[ETC...]&lang=en-US&region=US&formatted=true&corsDomain=finance.yahoo.com",
"scheme":"https",
"Accept":"*/*",
"Accept-Encoding":"gzip, deflate, br",
"Accept-Language":"en-US,en;q=0.9",
"Access-Control-Request-Headers":"content-type",
"Access-Control-Request-Method":"POST",
"Cache-Control":"no-cache",
"Content-Type":"application/json",
"Cookie":"tbla_id=33c52a3f-2fd9-41[ETC...]",
"Origin":"https://finance.yahoo.com",
"Pragma":"no-cache",
"Referer":"https://finance.yahoo.com/screener/equity/new",
"Sec-Ch-Ua":"\"Chromium\";v=\"116\",\"Google Chrome\";v=\"116\"",
"Sec-Ch-Ua-Platform":"Windows",
"Sec-Fetch-Dest":"empty",
"Sec-Fetch-Mode":"cors",
"Sec-Fetch-Site":"same-site",
"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36"
}
response = requests.post(
url = url,
headers = header,
data = json.dumps(payload),
timeout = 30)
data_json = json.loads(response.content)
</code></pre>
<p>Even if I use the cookie and crumb from the original request header, I get this error:</p>
<p><code>{'code': 'Unauthorized', 'description': 'Invalid Crumb'}</code></p>
<p>Is this even possible via requests?</p>
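It is reportedly possible, but only when the crumb is sent together with the cookie jar it was issued for: a crumb copied out of DevTools paired with a partly redacted cookie string will be rejected as invalid, so both should come from one live session (community libraries such as yfinance fetch them together, an assumption worth verifying). As an offline illustration only (function name hypothetical, endpoint layout copied from the question), here is how the request can be assembled so the crumb travels as a query parameter:

```python
import json
import urllib.parse
import urllib.request

def build_screener_request(crumb: str, cookie_header: str) -> urllib.request.Request:
    """Assemble (but do not send) the screener POST.

    The crumb is only accepted alongside the cookies it was issued with,
    so both must come from the same live session.
    """
    params = urllib.parse.urlencode({
        "crumb": crumb, "lang": "en-US", "region": "US",
        "formatted": "true", "corsDomain": "finance.yahoo.com",
    })
    body = json.dumps({"size": 25, "offset": 0, "quoteType": "EQUITY"}).encode()
    return urllib.request.Request(
        "https://query2.finance.yahoo.com/v1/finance/screener?" + params,
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Cookie": cookie_header,
            "User-Agent": "Mozilla/5.0",
        },
    )
```

With <code>requests</code>, the same idea means using a single <code>requests.Session</code> for the cookie-establishing GET, the crumb fetch, and the screener POST, and passing the crumb via <code>params=</code> rather than only in the path.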
| <python><python-requests><yahoo-finance> | 2023-09-12 18:02:46 | 1 | 2,044 | Alfred Wallace |
77,091,123 | 9,620,095 | How to make field readonly based on group with XML. Odoo 16 | <p>I added a custom field in 'purchase.order' model:</p>
<pre><code>class PurchaseOrder(models.Model):
_inherit = 'purchase.order'
custom_field = fields.Char(string="Custom Field")
</code></pre>
<p>I want to set "readonly" = 0 only for admin group, so I added this form views</p>
<pre><code><record id="purchase_order_form_extend" model="ir.ui.view">
<field name="name">purchase.order.form.extend</field>
<field name="model">purchase.order</field>
<field name="inherit_id" ref="purchase.purchase_order_form"/>
<field name="arch" type="xml">
<field name="partner_ref" position="after">
<field name="custom_field" attrs="{'readonly': 1}"/>
</field>
</field>
</record>
<record id="view_order_form_custom_readonly" model="ir.ui.view">
<field name="name">purchase.order.form.readonly.custom</field>
<field name="model">purchase.order</field>
<field name="inherit_id" ref="purchase_order_form_extend"/>
<field name="mode">primary</field>
<field name="groups_id" eval="[(6, 0, [ref('base.user_admin')])]"/>
<field name="arch" type="xml">
<field name='custom_field' position="attributes">
<attribute name="attrs">{'readonly': 0}</attribute>
</field>
</field>
</record>
</code></pre>
<p>But it's still not working. Any idea what's wrong, please? Thanks.</p>
| <python><xml><odoo><odoo-16> | 2023-09-12 16:39:40 | 1 | 631 | Ing |
77,091,002 | 1,411,376 | TypeError: 'cython_function_or_method' object is not subscriptable when parsing pydantic model | <p>I have a pydantic model (using pydantic 1.10.7):</p>
<pre class="lang-py prettyprint-override"><code>import re
from typing import Literal, Sequence
from pydantic import BaseModel, Field
ColumnTypeLiteral = Literal[
"BOOLEAN",
"DATE",
"FLOAT",
"INTEGER",
"NUMERIC",
"STRING",
]
class Column(BaseModel, frozen=True):
name: str
type: ColumnTypeLiteral
mode: str = "NULLABLE"
fields: Sequence = []
def __hash__(self):
return self.name.__hash__()
@property
def base_name(self) -> str:
m = re.match(".+?(?=__|$)", self.name)
if m:
return m.group()
else:
return self.name
@property
def is_monthly_column(self) -> bool:
pattern = re.compile(".*_{2}\\d{4}_\\d{1,2}")
return True if pattern.match(self.name) else False
@property
def year_and_month(self) -> YearAndMonth | None:
m = re.match(".*_{2}(\\d{4})_(\\d{1,2})", self.name)
if m:
return YearAndMonth(year=int(m.groups()[0]), month=int(m.groups()[1]))
else:
return None
@classmethod
def monthly_value_to_column_name(cls, base_name: str, year: int, month: int):
return f"{base_name}__{year}_{month}"
</code></pre>
<p>This works as expected, and serializes to dictionaries as I'd expect:</p>
<pre class="lang-py prettyprint-override"><code> columns = [
{
"name": "arrived_in_period",
"type": "INTEGER",
"mode": "NULLABLE",
"fields": [],
},
{
"name": "business_exclusion",
"type": "BOOLEAN",
"mode": "NULLABLE",
"fields": [],
},
]
</code></pre>
<p>But when I try to parse the above dictionaries using</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import parse_obj_as
parsed_columns = parse_obj_as[List[Column], columns]
</code></pre>
<p>I get the following exception:</p>
<blockquote>
<p>TypeError: 'cython_function_or_method' object is not subscriptable</p>
</blockquote>
<p>I see a lot of similar questions here on stackoverflow but none about <code>cython_function_or_method</code> specifically. I assume it has something to do with the regex properties but I am not sure.</p>
| <python><pydantic> | 2023-09-12 16:22:26 | 1 | 795 | Max |
77,090,701 | 1,914,781 | add new rate column based on previous row | <p>I would like to add a rate column based on the row before the current row, just like <code>diff()</code>, but I need to do some extra calculation, as shown below:</p>
<pre><code>import pandas as pd
import numpy as np
df = pd.DataFrame(np.random.random((5,2)),columns=['v1','v2'])
print(df)
rate = []
rate.append(0)
prev = 0
for index, row in df.iterrows():
if index == 0:
prev = row
continue
r = (prev['v2'] - row['v2'])/(row['v1'] - prev['v1'])
prev = row
rate.append(r)
df['rate'] = rate
print(df)
</code></pre>
<p>The current logic works; is there a better way to do this in Python?</p>
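For what it's worth, the loop above maps directly onto <code>Series.diff()</code>, since <code>(prev_v2 - v2) / (v1 - prev_v1)</code> is just <code>-v2.diff() / v1.diff()</code>. A sketch of an equivalent vectorised version (<code>add_rate</code> is a hypothetical name):

```python
import numpy as np
import pandas as pd

def add_rate(df: pd.DataFrame) -> pd.DataFrame:
    """Vectorised equivalent of the iterrows() loop: rate[0] is forced to 0."""
    out = df.copy()
    # diff() is (current - previous), so negate the v2 difference
    out["rate"] = (-out["v2"].diff() / out["v1"].diff()).fillna(0)
    return out
```

<code>diff()</code> leaves <code>NaN</code> in the first row, which <code>fillna(0)</code> converts to the same leading 0 the loop appends.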
| <python><pandas> | 2023-09-12 15:44:20 | 3 | 9,011 | lucky1928 |
77,090,564 | 1,203,457 | Using Python xmltree to sort child elements by attribute value containing FLOAT values? | <p>This is and example of the XML for Pioneer Rekordbox DJ Software.</p>
<ul>
<li>Using <strong>Python</strong>, I am trying to re-sort the XML child tags by the <strong>START</strong> attribute for each <strong>POSITION_MARK</strong> tag. The POSITION MARK tag basically represent cue points for the DJ software.</li>
</ul>
<p>I looked at this Stackoverflow question, but they only sort by whole numbers, not floats:</p>
<p><a href="https://stackoverflow.com/questions/47096543/python-sort-xml-elements-by-and-tag-and-attributes-recursively">Python sort XML elements by and tag and attributes recursively</a></p>
<p>But I need to sort by the <strong>float</strong> value in each attribute.</p>
<ul>
<li>Each TRACK tag has POSITION_MARK child tags with a Start attribute <strong>float</strong> value.</li>
<li>How do you re-sort the POSITION_MARK child tags by their float value in the Start attribute?</li>
<li>This way, when the XML is rewritten, each TRACK tag's POSITION_MARK children are sorted in ascending order (smallest to largest) by their Start attribute value, which is a <strong>float</strong> value.</li>
</ul>
<p>How do you use Python's ElementTree with <code>sorted()</code> to re-sort the POSITION_MARK children by their Start attribute, which contains float values?</p>
<pre><code><?xml version="1.0" ?>
<DJ_PLAYLISTS>
<COLLECTION>
<TRACK>
<POSITION_MARK Start="22.093" />
<POSITION_MARK Start="44.162" />
<POSITION_MARK Start="88.300" />
<POSITION_MARK Start="110.369" />
<POSITION_MARK Start="132.438" />
<POSITION_MARK Start="11.059" />
<POSITION_MARK Start="220.714" />
<POSITION_MARK Start="242.783" />
</TRACK>
<TRACK>
<POSITION_MARK Start="0.024" />
<POSITION_MARK Start="30.024" />
<POSITION_MARK Start="60.024" />
<POSITION_MARK Start="90.024" />
<POSITION_MARK Start="120.024" />
<POSITION_MARK Start="150.024" />
<POSITION_MARK Start="180.024" />
</TRACK>
</COLLECTION>
</DJ_PLAYLISTS>
</code></pre>
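One way (a sketch; <code>sort_position_marks</code> is a hypothetical name) is to pass <code>float(...)</code> of the attribute as the <code>key</code> to <code>sorted()</code>, then write the re-ordered children back with slice assignment, which <code>Element</code> supports. Casting to <code>float</code> matters because a plain string sort would put "110.369" before "22.093":

```python
import xml.etree.ElementTree as ET

def sort_position_marks(root: ET.Element) -> ET.Element:
    """Sort each TRACK's POSITION_MARK children in place by float Start."""
    for track in root.iter("TRACK"):
        marks = sorted(track, key=lambda e: float(e.get("Start", "0")))
        track[:] = marks          # Element accepts slice assignment of children
    return root
```

After sorting you can write the tree back out with <code>ET.ElementTree(root).write("out.xml")</code>.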
| <python><xml><sorting><elementtree> | 2023-09-12 15:22:48 | 1 | 7,061 | Jonathan Marzullo |
77,090,439 | 170,243 | Using pip to install a package for a previous version of python | <p>I have python3.10 on my desktop but I need 3.7 for work.</p>
<p>I follow the <a href="https://www.makeuseof.com/install-python-ubuntu/" rel="nofollow noreferrer">instructions here</a> to install python3.7 alongside my 3.10 and they work great.</p>
<p>However, when I run python3.7 on the checked out code I find I need to use pip to install a missing package. However, I only have one pip3 executable which sensibly installs all modules into the 3.10 directory.</p>
<p>How do I get pip to install packages for the 3.7 version of python?</p>
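The general rule: pip installs into whichever interpreter it runs under, so invoking it as <code>python3.7 -m pip install <package></code> targets 3.7 even when the bare <code>pip3</code> command on PATH belongs to 3.10. A small sketch demonstrating that binding, using the interpreter currently running the script:

```python
import subprocess
import sys

# "python3.7 -m pip install <package>" is the reliable form, because pip
# installs into the interpreter that invokes it. Demonstration with the
# current interpreter (sys.executable stands in for python3.7 here):
out = subprocess.run(
    [sys.executable, "-m", "pip", "--version"],
    capture_output=True, text=True, check=True,
)
print(out.stdout.strip())  # reports pip's version and which site-packages it serves
```

If <code>python3.7 -m pip</code> fails with "No module named pip", pip can usually be bootstrapped for that interpreter with <code>python3.7 -m ensurepip --upgrade</code>.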
| <python> | 2023-09-12 15:06:17 | 1 | 4,534 | Joe |
77,090,424 | 3,482,266 | Type hinting for expression of type "Float | ndarray[Unknown, Unknown]" | <p>I'm creating a simple wrapper class for scikit-learn function <code>median_absolute_error</code>.</p>
<pre><code>from sklearn.metrics import median_absolute_error
class MedAbsError:
name:str = "MAE"
def compute(self, predictions: np.ndarray, observations: np.ndarray) -> float | np.ndarray:
return median_absolute_error(y_pred=predictions, y_true=observations)
</code></pre>
<p>However, Pylance is reporting:</p>
<blockquote>
<p>Expression of type "Float | ndarray[Unknown, Unknown]" cannot be
assigned to return type "float | ndarray[Unknown,
Unknown]"</p>
</blockquote>
<p>Why is that? I've searched for Float in numpy datatypes, but I didn't find it in that section.</p>
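One way to satisfy the checker (a sketch, with <code>np.median</code> standing in for the scikit-learn call) is to narrow the annotation to <code>float</code> and cast explicitly: static checkers treat NumPy's <code>Float</code> scalar type as incompatible with the builtin <code>float</code> annotation, and an explicit cast makes the declared return type provably correct:

```python
import numpy as np

def compute(predictions: np.ndarray, observations: np.ndarray) -> float:
    # np.median returns a NumPy scalar ("Float" to the checker);
    # float(...) narrows it to the builtin type the annotation promises.
    return float(np.median(np.abs(predictions - observations)))
```

The same <code>float(...)</code> wrapper around <code>median_absolute_error(...)</code> should silence the Pylance report, assuming the multi-output (array-returning) case is not needed.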
| <python><types> | 2023-09-12 15:03:37 | 0 | 1,608 | An old man in the sea. |
77,090,274 | 15,341,457 | Python Scrapy - (403) status code is not handled or not allowed | <p>I'm trying to scrape reviews from Tripadvisor, more specifically from this <a href="https://www.tripadvisor.it/Restaurant_Review-g187791-d25150423-Reviews-Don_s_Meats_Spirits-Rome_Lazio.html" rel="nofollow noreferrer">address</a>.</p>
<p>I'm currently unable to scrape any data and I'm returned the 403 status code. At first I tried the usual command <code>scrapy crawl reviews</code> without success. I then ran some tests with <code>scrapy shell 'website address'</code> and received the same 403 status. Any <em>extract()</em> attempt returns an empty array.</p>
<p>I've looked up some guides online and installed <a href="https://pypi.org/project/scrapy-user-agents/" rel="nofollow noreferrer">scrapy-user-agents</a> and inserted the correct <em>Downloader Middlewares</em> in the <em>settings.py</em> file as indicated in the linked page. The scraper now tries to crawl the website with a set of fake user-agents but for each one of them I get the error:</p>
<p><code>[scrapy_user_agents.user_agent_picker] WARNING: [UnsupportedBrowserType]</code></p>
<p>or the error:</p>
<p><code>[scrapy_user_agents.user_agent_picker] WARNING: [UnsupportedDeviceType]</code></p>
<p>and 0 pages are crawled.</p>
<p>Anyone with some experience in scraping Tripadvisor has any idea on how to solve this problem?</p>
| <python><web-scraping><scrapy><user-agent><http-status-code-403> | 2023-09-12 14:47:58 | 1 | 332 | Rodolfo |
77,090,120 | 363,949 | Make a single global .so module available in virtualenv | <p>I have a program that needs to use the <code>dbm.gnu</code> module. This requires the external python3-gdbm debian package which is installed globally, but it's not available inside the virtualenv. As far as I can tell, this is not available to install via pip. I tried copying the file from <code>/usr/lib/python3.10/lib-dynload/_gdbm.cpython-310-x86_64-linux-gnu.so</code> to <code>venv/lib/python3.10/lib-dynload/</code>, but the module is still not importable. Is there a way to somehow make this package available inside the virtualenv?</p>
<p>Using the <code>--system-site-packages</code> option of the <code>venv</code> module is not possible, because there are conflicts between some other global packages and the ones that are to be installed in the virtualenv.</p>
| <python><virtualenv><gdbm> | 2023-09-12 14:30:26 | 0 | 4,205 | Elektito |
77,090,038 | 6,068,731 | Python - parallelize sequential composition of functions | <p>I have an array <code>A</code> that has dimension <code>(N, d)</code>. I also have a function <code>f</code> that takes arrays <code>(d,)</code> and returns arrays <code>(d,)</code>. I also have a positive integer <code>M</code>. I want to generate a new array <code>B</code> of dimension <code>(N, M+1, d)</code> as follows:</p>
<blockquote>
<p><code>B[i, j, :] = f^j(A[i, :])</code> for <code>j=0, ..., M</code>. In other words, the <code>(i,j)</code>th point is generated by composing <code>f</code> <code>j</code> times and applying it to <code>A[i,:]</code>. Notice that when <code>j=0</code> then <code>f^0= lambda x: x</code> so we don't do anything.</p>
</blockquote>
<p>It's easy to write this as a for loop, but I want to parallelize this using multiprocessing.</p>
<pre><code>import numpy as np
rng = np.random.default_rng(seed=1234)
N, M, d = 10, 5, 3
A = np.random.randn(N, d)
B = np.zeros((N, M+1, d))
def compose_f(x, j):
"""Applies f j times to x"""
if j == 0:
return x
else:
output = x
for jj in range(1, j+1):
            output = f(output)
return output
for i in range(N):
    for j in range(M + 1):
B[i, j, :] = compose_f(A[i, :], j)
</code></pre>
<h1>Attempt</h1>
<p>First, I have rewritten the <code>compose_f</code> function. This operation is not parallelizable, but we can avoid evaluating the same quantities more than once.</p>
<pre><code>def compute_B_slice(x, M):
"""This version computes a whole slice of B."""
output = np.zeros((M+1, d))
output[0, :] = x
for j in range(1, M+1):
output[j, :] = f(output[j-1, :])
return output
</code></pre>
<p>I have then tried using multiprocessing but I am confused.</p>
<pre><code>from multiprocessing import Pool, cpu_count
def worker(params):
"""Helper function."""
    return compute_B_slice(*params)
pool = Pool(processes=cpu_count())
params = [(row, M) for row in A]
B = np.stack(pool.map(worker, params))
</code></pre>
| <python><arrays><numpy> | 2023-09-12 14:21:25 | 0 | 728 | Physics_Student |
77,089,975 | 9,528,575 | kmeans clustering groups the data vertically rather than horizontally | <p>I have a dataset like this:</p>
<pre><code>coupled_series = [(9.752, 0.0005), (9.9792, 0.0008), (9.8571, 0.0036), (10.5017, 0.0038), (10.4808, 0.0038), (10.6975, 0.003), (12.1378, 0.0008), (12.7328, 0.0005), (14.0357, 0.0035), (11.7431, 0.0039), (10.107, 0.0039), (10.4207, 0.0039), (10.563, 0.003), (11.0856, 0.0009), (11.3304, 0.0005), (11.87, 0.0035), (12.9338, 0.0039), (13.243, 0.0039), (13.4354, 0.0038), (13.14, 0.003), (13.4611, 0.0008), (13.1459, 0.0004), (11.956, 0.0035), (12.4869, 0.0039), (13.2369, 0.004), (13.6368, 0.0039), (14.11, 0.0029), (14.1441, 0.0007), (13.8937, 0.0004), (13.4262, 0.0007)]
</code></pre>
<p>I like to run a kmeans cluster using sklearn using following code:</p>
<pre><code>kmeans = KMeans(n_clusters=2, max_iter=50, n_init="auto", random_state=0, algorithm='lloyd')
kmeans.fit(coupled_series)
x=list(zip(*coupled_series))[0]
y=list(zip(*coupled_series))[1]
plt.scatter(x, y, c=kmeans.labels_)
plt.show()
</code></pre>
<p>the result is the following picture:
<a href="https://i.sstatic.net/uktXk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uktXk.png" alt="enter image description here" /></a></p>
<p>As you can see, it has clustered the data into left and right groups, while just by looking at it one can say it consists of two lines, one on top and one on the bottom. I.e., I would like the points to be clustered into the two groups of red and blue shown in the picture below.</p>
<p><a href="https://i.sstatic.net/68hG0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/68hG0.png" alt="enter image description here" /></a></p>
<p>Is there anything I can do to fix this to cluster the way I like? Are there any other options of types of clustering I can try? Just to mention that I need to repeat this to other groups of data too and they mostly look like this.</p>
| <python><machine-learning><scikit-learn><cluster-analysis><k-means> | 2023-09-12 14:14:44 | 1 | 361 | Novic |
77,089,875 | 10,452,700 | How can I reproduce the animation of backtesting with intermittent refit over time? | <p>I'm experimenting with 1D time-series data and trying to reproduce the following approach, via animation over my own data, in a Google Colab notebook.</p>
<p>It's about reproducing the animation of this approach: <a href="https://skforecast.org/0.10.0/user_guides/backtesting.html#backtesting-with-intermittent-refit" rel="nofollow noreferrer"><em>Backtesting with intermittent refit</em></a> that has been introduced in <a href="/questions/tagged/skforecast" class="post-tag" title="show questions tagged 'skforecast'" aria-label="show questions tagged 'skforecast'" rel="tag" aria-labelledby="tag-skforecast-tooltip-container">skforecast</a> package.</p>
<blockquote>
<p><em><strong>Backtesting with intermittent refit</strong></em></p>
<p><em>The model is retrained every <em><strong>n</strong></em> iterations of prediction. This method is often used when the frequency of retraining and prediction is different. It can be implemented using either a fixed or rolling origin, providing flexibility in adapting the model to new data.</em></p>
</blockquote>
<p>Following is my code:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation
from matplotlib.patches import Rectangle
import pandas as pd
from IPython.display import HTML
# create data
df = pd.DataFrame({
"TS_24hrs": np.arange(0, 274),
"count" : np.abs(np.sin(2 * np.pi * np.arange(0, 274) / 7) + np.random.normal(0, 100.1, size=274)) # generate sesonality
})
# Define the initial width for training and test data
TRAIN_WIDTH = 100
TEST_WIDTH = 1
# Define the delay for refitting the model
REFIT_DELAY = 10
# Define the delay for adding test data to train data
ADD_DELAY = 10
# create plot
plt.style.use("ggplot") # <-- set overall look
fig, ax = plt.subplots( figsize=(10,4))
# plot data
plt.plot(df['TS_24hrs'], df['count'], 'r-', linewidth=0.5, label='data or y')
# make graph beautiful
plt.plot([], [], 'g-', label="Train", linewidth=8, alpha=0.3) # <-- dummy legend entry
plt.plot([], [], 'b-', label="Test", linewidth=8, alpha=0.3) # <-- dummy legend entry
plt.xticks([0, 50, 100, 150, 200, 250, df['TS_24hrs'].iloc[-1]], visible=True, rotation="horizontal")
plt.title('Time-series backtesting with intermittent refit')
plt.ylabel('count', fontsize=15)
plt.xlabel('Timestamp [24hrs]', fontsize=15)
plt.grid(True)
plt.legend(loc="upper left")
fig.tight_layout(pad=1.2)
TRAIN_WIDTH = 25
TEST_WIDTH = 10
Y_LIM = 300 #ax.get_ylim()
def init():
rects = [Rectangle((0, 0), TRAIN_WIDTH, Y_LIM, alpha=0.3, facecolor='green'),
Rectangle((0 + TRAIN_WIDTH, 0), TEST_WIDTH, Y_LIM, alpha=0.3, facecolor='blue')]
patches = []
for rect in rects:
patches.append(ax.add_patch(rect))
return patches
# Initialize the start points for training and test data
train_data_start = 0
test_data_start = TRAIN_WIDTH
# Initialize the counter for refitting the model
refit_counter = REFIT_DELAY
# Initialize the counter for adding test data to train data
add_counter = ADD_DELAY
def update(x_start):
global train_data_start, test_data_start, refit_counter, add_counter, TRAIN_WIDTH
# Check if the model needs to be refitted
if refit_counter == REFIT_DELAY:
# Update the positions of train and test data with refit
patches[0].xy = (x_start + test_data_start - TRAIN_WIDTH , 0)
patches[1].xy = (x_start + test_data_start, 0)
# Reset the counter for refitting the model
refit_counter = 0
else:
# Update the positions of train and test data without refit
        TRAIN_WIDTH += TEST_WIDTH  # Increase the train data width
patches[0].set_width(TRAIN_WIDTH)
patches[0].xy = (x_start + test_data_start - TRAIN_WIDTH - 10 , 0)
patches[1].xy = (x_start + test_data_start, 0)
# Increase the counter for refitting the model
refit_counter += 1
# Check if the test data needs to be added to train data
if add_counter == ADD_DELAY:
# Move the training and test data one step forward
        train_data_start += TEST_WIDTH  # Add the width of the test window to the train data
test_data_start += 1
# Reset the counter for adding test data to train data
add_counter = 0
else:
# Increase the counter for adding test data to train data
add_counter += 1
return patches
# Create "Train" and "Test" areas
patches = init()
ani = FuncAnimation(
fig,
update,
frames=np.arange(0, df.shape[0] - TRAIN_WIDTH - TEST_WIDTH), # All starting points
interval=70,
blit=True
)
HTML(ani.to_html5_video())
</code></pre>
<p>My current output is:</p>
<pre class="lang-py prettyprint-override"><code>from matplotlib.animation import FuncAnimation, PillowWriter
ani.save("TLI.gif", dpi=100, writer=PillowWriter(fps=50))
</code></pre>
<p><img src="https://i.imgur.com/cRIsja3.gif" alt="img" /></p>
<p>Expected output:
<img src="https://d33wubrfki0l68.cloudfront.net/534ef9e96dd557ad9b891171757fd95b77cd3cd8/2c277/images/backtesting_intermittent_refit.gif" alt="gif" /></p>
| <python><matplotlib><time-series><sliding-window><matplotlib-animation> | 2023-09-12 14:03:43 | 1 | 2,056 | Mario |
77,089,826 | 5,547,553 | How to get ascii code of a character in a string column in python polars dataframe? | <p>I'd like to get the ASCII code of a character in a column of a Python polars DataFrame.</p>
<pre><code>import polars as pl
df = pl.DataFrame({'a' : ['apple', 'banana']})
df.select('a').with_columns(pl.col('a').str.slice(0, 1))
</code></pre>
<p>Slicing works fine, but how do I apply the ord() function to it?<br>
ord() works fine in pure Python, like:</p>
<pre><code>print((df['a'][0], ord(df['a'][0][0:1])))
# ('apple', 97)
</code></pre>
| <python> | 2023-09-12 13:57:25 | 1 | 1,174 | lmocsi |
77,089,825 | 822,942 | Python and FastAPI live HTML Interface | <p>I have a Python application that compares two files. If I add something to one file and save it, there is a difference. It's just a small helper I use while developing something else.</p>
<p>I would like to show that difference as quickly as possible in an HTML file that I use as an interface. I tried FastAPI and my template renders correctly; it also takes the variables I'm sending, but how can I trigger an update of only one variable in that HTML file?</p>
<pre><code>@app.get("/", response_class=HTMLResponse)
async def index(request: Request):
return templates.TemplateResponse("index.html", {
"request": request,
"difference": difference,
"other": variable
})
</code></pre>
<p>I would like to avoid polling every second and instead trigger an update only when a change happens.</p>
<p>If I use a websocket, is it possible to wait for another function to send data that will then be "forwarded" to the HTML/JS? The example waits until the JS sends text.</p>
<pre><code> while True:
data = await websocket.receive_text()
await websocket.send_text(f"Data: {data}")
</code></pre>
| <python><fastapi> | 2023-09-12 13:57:20 | 1 | 317 | dichterDichter |
77,089,769 | 11,485,896 | tabula-py: java.lang.ClassNotFoundException: java.lang.ClassNotFoundException: org.apache | <p>I'm unsuccessfully trying to read a PDF in Python using <code>tabula-py</code>. I installed Java (both <code>jre-1.8</code> and <code>jdk-20</code>), set up <code>JAVA_HOME</code> path (<code>C:\Program Files\Java\jre-1.8\bin</code> in my case; I also added it to <code>Path</code>) and still cannot run <code>read_pdf</code> method. Code:</p>
<pre class="lang-py prettyprint-override"><code>import pandas as pd
from tabula import read_pdf
from tabulate import tabulate
file_name: str = "to_tabulate.pdf"
df = read_pdf(file_name)
print(tabulate(df))
</code></pre>
<p>Output (shortened):</p>
<pre class="lang-py prettyprint-override"><code>Exception Traceback (most recent call last)
File org.jpype.JPypeContext.java:-1, in org.jpype.JPypeContext.callMethod()
Exception: Java Exception
The above exception was the direct cause of the following exception:
java.lang.ClassNotFoundException Traceback (most recent call last)
</code></pre>
<p>...</p>
<pre class="lang-py prettyprint-override"><code>java.lang.ClassNotFoundException: java.lang.ClassNotFoundException: org.apache
The above exception was the direct cause of the following exception:
</code></pre>
<p>...</p>
<pre class="lang-py prettyprint-override"><code>ImportError: Failed to import 'org.apache'
</code></pre>
<p>I can't find any solution for this particular problem and have no idea what causes this issue (I don't have any experience with Java and still consider myself a beginner in Python). The only interesting thing I noticed is that the traceback doesn't mention any <code>org.apache</code> subpackage (e.g. <code>commons</code>), just <code>org.apache</code> in general.</p>
<p>Has anyone encountered such problem before? Are there any solutions? Any help would be highly appreciated.</p>
<p>I use Windows 11 Pro 22H2 22621.2134, Python 3.11.1 and <code>tabula-py</code> 2.8.1.</p>
| <python><java><tabula><tabula-py> | 2023-09-12 13:50:42 | 0 | 382 | Soren V. Raben |
77,089,710 | 9,072,753 | How to typing annotate additional obligatory parameter when using ParamSpec? | <p>I want to wrap a function call with some pre action and post action and also make sure that the caller has set the key parameter. I am trying with the following:</p>
<pre><code>from typing import ParamSpec, TypeVar
P = ParamSpec('P')
R = TypeVar('R')
def urlparams_pre(key: str):
return key
def urlparams_post(key: str, ret: R) -> R:
return ret
def urlparams_call(fn: Callable[P, R], *args: P.args, key: str, **kwargs: P.kwargs) -> R:
urlparams_pre(key)
ret = fn(*args, key=key, **kwargs)
urlparams_post(key, ret)
return ret
</code></pre>
<p>However, <code>pyright</code> complains that <code>fn</code> does not take a <code>key=key</code> parameter, and that inside <code>urlparams_call</code> the <code>key</code> parameter can't come after <code>P.args</code>.</p>
<p>How do I fix it?</p>
<p>This will be used to set url query parameters from <code>streamlit.*</code> functions. For example <code>urlparams_call(st.text_input, "input your text", value="initial value", key="input")</code> will add a <code>&input=initial+value</code> to query and properly react to changes.</p>
| <python><python-3.x><typing> | 2023-09-12 13:43:24 | 1 | 145,478 | KamilCuk |
77,089,623 | 7,339,624 | Running independent `nn.Module` instances in `nn.ModuleList` truly in parallel in PyTorch | <p>I have a PyTorch model that consists of multiple independent <code>FullyConnectedNetwork</code> instances stored inside an <code>nn.ModuleList</code>. Here's the code:</p>
<pre><code>import torch.nn as nn
class FullyConnectedNetwork(nn.Module):
def __init__(self):
super(FullyConnectedNetwork, self).__init__()
self.fc1 = nn.Linear(20, 10)
self.fc2 = nn.Linear(10, 1)
def forward(self, x):
x = self.fc1(x)
x = self.fc2(x)
return x
class ParallelFCN(nn.Module):
def __init__(self, n):
super(ParallelFCN, self).__init__()
self.models = nn.ModuleList([FullyConnectedNetwork() for _ in range(n)])
def forward(self, x):
outputs = [model(x[:, i*20:(i+1)*20]) for i, model in enumerate(self.models)]
return torch.cat(outputs, dim=1)
# Example usage:
n = 1000
model = ParallelFCN(n)
print(model)
</code></pre>
<p>Currently, I'm using a for-loop to pass data through each <code>FullyConnectedNetwork</code> instance. However, I realize that this approach is not truly parallel in a software sense.</p>
<p>Given that each <code>FullyConnectedNetwork</code> is independent of the others, is there a way to run them truly in parallel, perhaps using multi-threading, multi-processing, or any other method in PyTorch?</p>
<p>I need it because the number of my modules can get really big, as big as 400, and processing them with a for loop is very slow.</p>
| <python><machine-learning><deep-learning><pytorch><parallel-processing> | 2023-09-12 13:33:34 | 1 | 4,337 | Peyman |
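The usual way to avoid the Python-level loop is to stack the per-model weights and use one batched contraction. Below is a sketch of the idea in numpy, not the author's exact model; in PyTorch the analogous operations are `torch.bmm`, `torch.einsum`, or `torch.vmap` over the module list:

```python
import numpy as np

rng = np.random.default_rng(0)
n, batch, d_in, d_hidden = 4, 8, 20, 10

# n independent layers stored as one stacked weight tensor.
W1 = rng.standard_normal((n, d_in, d_hidden))
x = rng.standard_normal((batch, n, d_in))  # each model sees its own 20-dim slice

# Loop version: one matmul per model.
loop_out = np.stack([x[:, i, :] @ W1[i] for i in range(n)], axis=1)

# Batched version: a single einsum contracts the d_in axis per model.
batched_out = np.einsum("bni,nih->bnh", x, W1)

print(np.allclose(loop_out, batched_out))  # True
```

Per-layer biases stack the same way (shape `(n, d_hidden)`, added with broadcasting), so both linear layers of each `FullyConnectedNetwork` can be fused into two einsums regardless of `n`.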
77,089,531 | 1,142,881 | How can I Case-update based on a multi-index update statement? | <p>Supposing I have a model class:</p>
<pre><code>class A(Model):
key1 = ...
key2 = ...
value1 = ...
</code></pre>
<p>I'd like to:</p>
<pre><code>import pandas as pd
df = pd.DataFrame(data=..., index=['key1', 'key2'], columns=['value1'])
Model.update(value1=Case((A.key1, A.key2)), list(df.items()).where(...).execute()
</code></pre>
<p>but this doesn't work because <code>Case</code> seems to accept only a single key, at least I couldn't find such detail in the documentation.</p>
<p>I need to lookup the error I am getting.</p>
| <python><peewee> | 2023-09-12 13:21:53 | 2 | 14,469 | SkyWalker |
77,089,401 | 2,549,828 | Type hinting with cyclic dependency | <p>I am building a Django/DRF application and have the following models:</p>
<pre class="lang-py prettyprint-override"><code>class Company(models.Model):
some_field = models.TextField()
some_method(self, user):
pass
class User(AbstractUser):
company = models.ForeignKey(Company, on_delete=models.CASCADE)
</code></pre>
<p>The method <code>some_method</code> of the company model uses a user as input. The code itself works fine. However, how can I type hint this parameter to get IDE support / code completion in PyCharm? Declaring it like <code>some_method(user: User):</code> yields an "Unresolved reference" error which makes sense since the user class is declared further down in the file. Is there a workaround or do I have to live without code completion?</p>
| <python><python-3.x><django><pycharm> | 2023-09-12 13:06:01 | 1 | 1,148 | Phocacius |
77,089,361 | 12,705,481 | What do ellipses do when they are the default argument of a function? | <p>In pandas source code, here's a snippet of the <code>to_csv</code> function:</p>
<pre><code>@overload
def to_csv(
self,
path_or_buf: FilePath | WriteBuffer[bytes] | WriteBuffer[str],
sep: str = ...,
na_rep: str = ...,
float_format: str | Callable | None = ...,
</code></pre>
<p>What does the <code>...</code> mean?</p>
<p><strong>EDIT:</strong>
A couple of users have suggested <a href="https://stackoverflow.com/questions/772124/what-does-the-ellipsis-object-do">this answer</a>.
Though appreciated, as per my comment, this answer does not contain a clear explanation of <code>...</code> used as function argument defaults. There is a discussion <a href="https://stackoverflow.com/a/50661182/12705481">over here</a>, but it is not concrete enough for me to consider it satisfactory, and so I rejected the suggestion to close my question as having been answered elsewhere.</p>
| <python> | 2023-09-12 13:00:10 | 1 | 2,628 | Alan |
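In `@overload` signatures and `.pyi` stubs, `= ...` simply means "this parameter has a default, whose value is elided here"; the real default lives in the actual implementation. At runtime `...` is the singleton `Ellipsis`, which also makes it usable as a sentinel that distinguishes "argument omitted" from "None passed explicitly". A small illustration (the function name is made up):

```python
# `...` evaluates to the Ellipsis singleton, so identity checks are reliable.
def fetch(timeout: "float | None" = ...):
    if timeout is ...:
        return "use library default"
    if timeout is None:
        return "wait forever"
    return f"wait {timeout}s"

print(fetch())       # use library default
print(fetch(None))   # wait forever
print(fetch(2.5))    # wait 2.5s
```

In the pandas snippet above, though, the `...` defaults are purely the stub convention: the overloads are only read by type checkers, never executed.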
77,089,348 | 3,980,808 | Mojo: "Error: An error occurred in Python." | <p>I am trying out Mojo by Modular and more specifically having a look at the <code>Mandelbrot.ipynb</code> available <a href="https://github.com/modularml/mojo/blob/main/examples/notebooks/Mandelbrot.ipynb" rel="nofollow noreferrer">here</a>. When running the line <code>np = Python.import_module("numpy")</code> I get the rather odd error <code>Error: An error occurred in Python.</code>. I have had a look around and only found <a href="https://github.com/modularml/mojo/issues/590" rel="nofollow noreferrer">this</a> open issue. Has anyone experienced the same issue previously and reached a solution? Thanks in advance!</p>
| <python><modular><mojolang> | 2023-09-12 12:58:18 | 2 | 973 | Ivar Eriksson |
77,089,228 | 188,331 | Jieba: Import error, cannot find paddle.fluid and jieba.lac_small.predict module | <p>I am trying to use Jieba (a Chinese text segment tool) with Paddle. Here is the code:</p>
<pre><code>import jieba # v0.42.1
import paddle # v2.5.1 paddlepaddle-gpu
paddle.enable_static()
jieba.enable_paddle()
text = '我是一個男生' # means "I am a boy."
seg_list = jieba.cut(text, use_paddle=True)
</code></pre>
<p>In the output, it said:</p>
<pre><code>Import error, cannot find paddle.fluid and jieba.lac_small.predict module. Now, back to jieba basic cut......
[2023-09-12 20:33:33,413] [ DEBUG] _compat.py:50 - Import error, cannot find paddle.fluid and jieba.lac_small.predict module. Now, back to jieba basic cut......
</code></pre>
<p>I'm using the latest versions of <code>paddle</code> and <code>jieba</code>. The <code>jieba</code> library also suggests installing <code>paddlepaddle-tiny==1.6.1</code>, but no suitable candidate can be found for my current setup (Python 3.9.7).</p>
<p>How can I solve this problem?</p>
<hr />
<p><strong>UPDATE</strong> If I remove the 2 <code>enable_</code> lines, the warning is gone.</p>
| <python><paddle-paddle><python-jieba> | 2023-09-12 12:41:32 | 1 | 54,395 | Raptor |
77,089,164 | 1,113,579 | pandas: find invalid records where for a given value in one column, another column contains multiple values | <p>In my DataFrame, for a given value in the <code>id</code> column, there must be a single corresponding value in the <code>doc_no</code> column. In the below example, the records with <code>id-1</code> are invalid, because they have more than one value for the <code>doc_no</code> column.</p>
<pre><code>import pandas as pd
def find_non_unique_values():
print()
df = pd.DataFrame({
'id': ['id-1', 'id-1', 'id-2', 'id-3'],
'doc_no': ['doc-1', 'doc-2', 'doc-3', 'doc-4']
})
pd.set_option('display.max_columns', None)
# default display width is 80
pd.set_option("display.width", 150)
print("dataframe info:")
df.info()
print()
print(df.head(4))
find_non_unique_values()
</code></pre>
<p>How can I find all such invalid records?</p>
<p>Note: I do not want to drop the duplicates. I want to identify the invalid records and notify the user for correction.</p>
| <python><pandas><dataframe> | 2023-09-12 12:34:14 | 2 | 1,276 | AllSolutions |
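One common way to flag such records is to broadcast a per-id distinct count back onto the rows with `GroupBy.transform`; a sketch using the question's data:

```python
import pandas as pd

df = pd.DataFrame({
    "id": ["id-1", "id-1", "id-2", "id-3"],
    "doc_no": ["doc-1", "doc-2", "doc-3", "doc-4"],
})

# transform('nunique') returns one value per ROW (not per group), so the
# boolean mask keeps every row of any id with more than one doc_no.
mask = df.groupby("id")["doc_no"].transform("nunique") > 1
invalid = df[mask]  # rows 0 and 1, both id-1
print(invalid)
```

Because no rows are dropped, the full `invalid` frame can be shown to the user for correction as the question requires.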
77,089,116 | 2,604,247 | How to Get the Value of an HTTPBearer Token Passed as Dependency in FastAPI? | <p>Trying to learn the basics of authorisation using java-web-tokens from this <a href="https://github.com/BekBrace/FASTAPI-and-JWT-Authentication" rel="nofollow noreferrer">github repository</a>. Reproducing <a href="https://github.com/BekBrace/FASTAPI-and-JWT-Authentication/blob/main/main.py#L71" rel="nofollow noreferrer">the relevant part</a> here for easy reference (comments mine).</p>
<pre class="lang-py prettyprint-override"><code>@app.post("/posts", dependencies=[Depends(JWTBearer())], tags=["posts"])
def add_post(post: PostSchema):
# How to access the token string here?
# token:str=JWTBearer() # Will this help?
# I want something like this
# print(f'The supplied token is {token}.')
post.id = len(posts) + 1
posts.append(post.dict())
return {
"data": "post added."
}
</code></pre>
<p>The author is using the <code>JWTBearer</code> subclass to authorise the specific route with a JWT. But is there a simple way to access the value of the token as a local variable inside the handler function <code>add_post</code>?</p>
<p>I am pretty new to the whole authorisation flow, and this repository is simple enough for me to get the essential parts. So I would appreciate it if your answer sticks to the same project structure, just answering the question of how to get the token value <em>inside</em> the function, without introducing too many new concepts or a different library, etc.</p>
<p>My technology stack (if important) is</p>
<ul>
<li>fastapi 0.103.1</li>
<li>Python 3.10</li>
<li>Ubuntu 22.04</li>
</ul>
| <python><jwt><backend><fastapi><bearer-token> | 2023-09-12 12:28:55 | 1 | 1,720 | Della |
77,088,952 | 10,687,907 | Openpyxl vs pd.read_excel | <p>So this morning I came across this article <a href="https://saturncloud.io/blog/faster-way-to-read-excel-files-to-pandas-dataframe/" rel="nofollow noreferrer">https://saturncloud.io/blog/faster-way-to-read-excel-files-to-pandas-dataframe/</a> which claimed openpyxl was up to 10x faster than the regular <code>pd.read_excel</code></p>
<p>This is the code I used</p>
<pre><code># Benchmarking read_excel()
start = time.time()
df = pd.read_excel(myfile, "data")
end = time.time()
print(f"read_excel() took {end - start:.2f} seconds")
# Benchmarking openpyxl and pandas
start = time.time()
workbook = openpyxl.load_workbook(myfile)
worksheet = workbook['data']
rows = []
for row in worksheet.iter_rows(values_only=True):
rows.append(row)
df = pd.DataFrame(rows[1:], columns=rows[0])
end = time.time()
print(f"openpyxl and pandas took {end - start:.2f} seconds")
</code></pre>
<p>and the result seems quite wrong. Any thoughts on why the difference is so big?</p>
<p><a href="https://i.sstatic.net/fI1O0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fI1O0.png" alt="enter image description here" /></a></p>
<p>Note that the file has 3 sheets but the only relevant one which holds a lot of data is the one called "data", the two other ones are small pivot table</p>
| <python><pandas><performance><openpyxl> | 2023-09-12 12:06:43 | 1 | 807 | Achille G |
77,088,914 | 9,501,624 | Pycharm autocomplete for undefined class attributes in debug console? | <p>For a general Hardware class, I made optional placeholder-attributes for physical devices which might be required during runtime. This works well with autocomplete when using Pycharm's IDE. However, the debug console's autocomplete seems to know less than the IDE.</p>
<p>Question: Is there a way to have both defined and undefined attributes available for autocomplete in the debug console, i.e., the same behaviour as in the IDE?</p>
<p>Minimal example:</p>
<pre class="lang-py prettyprint-override"><code>class OptionalAttributes:
my_string: str
my_predefined_str: str = "hello world"
</code></pre>
<ul>
<li><strong>Pycharm IDE:</strong> Both attributes and their type is available for autocomplete:</li>
</ul>
<p><a href="https://i.sstatic.net/ogcpw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ogcpw.png" alt="IDE knows both attributes" /></a> <a href="https://i.sstatic.net/vQqVp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vQqVp.png" alt="IDE knows type of undefined attribute" /></a></p>
<ul>
<li><strong>Pycharm debug console:</strong> Autocomplete is only aware of the predefined attribute. It does not know the attribute my_string and cannot infer its type.</li>
</ul>
<p><a href="https://i.sstatic.net/hRnD4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/hRnD4.png" alt="Debug console does not know undefined attribute" /></a> <a href="https://i.sstatic.net/VxVbs.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/VxVbs.png" alt="Debug console does not know the type of the attribute" /></a></p>
| <python><autocomplete><pycharm> | 2023-09-12 12:01:56 | 1 | 3,844 | Christian Karcher |
77,088,788 | 2,386,113 | How to open the .Result file generated by Spyder Profiler? | <p>I used <strong>Spyder's inbuilt profiler</strong> to generate the following profiling results:</p>
<p><a href="https://i.sstatic.net/pK9X6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pK9X6.png" alt="enter image description here" /></a></p>
<p>...I then pressed the <strong>Save Button</strong> to save the profiling results, which were saved to a file with the extension <code>.Result</code>.</p>
<p><strong>Question:</strong> How do I open the <code>.Result</code> file to see my profiling results again? I tried to open it in Spyder and Notepad++, but it did not work.</p>
<p><strong>Version Info:</strong></p>
<p><a href="https://i.sstatic.net/nN8b0.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nN8b0.png" alt="enter image description here" /></a></p>
| <python><profiling><spyder> | 2023-09-12 11:45:35 | 0 | 5,777 | skm |
77,088,660 | 4,198,298 | Update column in rows with variable for each row | <p>I have a database and have added an additional column 'team' so it looks like this</p>
<pre><code>date | forename | surname | team
-----------------------------------------
1/1/01 | james | smith |
2/2/02 | paul | jones |
3/3/03 | steven | bradley |
</code></pre>
<p>I want to add an entry to the team column from each, chosen using the random package from a list of teams</p>
<pre><code>import psycopg2
import random
dbconfig = {
"dbname": "mydb",
"host": "127.0.0.1",
"user": "user",
"password": "password",
"port": 5432,
}
connection = psycopg2.connect(**dbconfig)
connection.autocommit = True
teams = ["liverpool", "chelsea", "arsenal"]
cursor = connection.cursor()
cursor.execute("select * from players")
players = cursor.fetchall()
for player in players:
team = random.choice(teams)
query = f""" update trades set team = '{team}' """
</code></pre>
<p>But when I do this it updates the team for every row rather than just the current one, so I get left with</p>
<pre><code>date | forename | surname | team
-----------------------------------------
1/1/01 | james | smith | chelsea
2/2/02 | paul | jones | chelsea
3/3/03 | steven | bradley | chelsea
</code></pre>
<p>Can I update the rows one by one or in bulk using the random variable?</p>
| <python><postgresql><psycopg2> | 2023-09-12 11:24:20 | 1 | 945 | CEamonn |
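The UPDATE in the question has no WHERE clause, so each execution rewrites every row. The fix is to key each UPDATE on a unique column, using parameter placeholders instead of an f-string. A sketch of that pattern using stdlib sqlite3 so it is runnable here; psycopg2 is analogous with `%s` placeholders, and using `forename` as the unique key is an assumption about the schema:

```python
import random
import sqlite3

teams = ["liverpool", "chelsea", "arsenal"]
random.seed(0)  # for a reproducible demo

con = sqlite3.connect(":memory:")
cur = con.cursor()
cur.execute("create table players (forename text primary key, team text)")
cur.executemany("insert into players values (?, ?)",
                [("james", None), ("paul", None), ("steven", None)])

# One UPDATE per row, restricted by WHERE to just that row.
cur.execute("select forename from players")
for (forename,) in cur.fetchall():
    cur.execute("update players set team = ? where forename = ?",
                (random.choice(teams), forename))

print(cur.execute("select forename, team from players").fetchall())
```

For a bulk version, build the `(team, key)` pairs first and pass them to `cursor.executemany(...)` in one call.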
77,088,537 | 13,854,431 | I am trying to push an index to Azurecognitive search but this gives an error ServiceRequestError: EOF occurred in violation of protocol (_ssl.c:2427) | <p>I am trying to push a index (with embeddings) to Azure cognitive search. The following code is what pushes the index to cognitive search:</p>
<pre><code> #Upload some documents to the index
with open('index.json', 'r') as file:
documents = json.load(file)
search_client = SearchClient(endpoint=service_endpoint, index_name=index_name, credential=credential)
result = search_client.upload_documents(documents, timeout = 50)
print(f"Uploaded {len(documents)} documents")
</code></pre>
<p>The code works whenever 'index.json' is small (I have tried it, and it successfully pushes the data to Azure Cognitive Search), but it does not work when 'index.json' is large. Right now I am working with an 'index.json' of 69 MB.</p>
<p>I receive the following error when running the code:</p>
<pre><code>ServiceRequestError Traceback (most recent call last)
Cell In[21], line 5
3 documents = json.load(file)
4 search_client = SearchClient(endpoint=service_endpoint, index_name=index_name, credential=credential)
----> 5 result = search_client.upload_documents(documents, timeout = 50)
6 print(f"Uploaded {len(documents)} documents")
File /usr/local/lib/python3.11/site-packages/azure/search/documents/_search_client.py:543, in SearchClient.upload_documents(self, documents, **kwargs)
540 batch.add_upload_actions(documents)
542 kwargs["headers"] = self._merge_client_headers(kwargs.get("headers"))
--> 543 results = self.index_documents(batch, **kwargs)
544 return cast(List[IndexingResult], results)
File /usr/local/lib/python3.11/site-packages/azure/core/tracing/decorator.py:78, in distributed_trace..decorator..wrapper_use_tracer(*args, **kwargs)
76 span_impl_type = settings.tracing_implementation()
77 if span_impl_type is None:
---> 78 return func(*args, **kwargs)
80 # Merge span is parameter is set, but only if no explicit parent are passed
81 if merge_span and not passed_in_parent:
File /usr/local/lib/python3.11/site-packages/azure/search/documents/_search_client.py:641, in SearchClient.index_documents(self, batch, **kwargs)
631 @distributed_trace
632 def index_documents(self, batch: IndexDocumentsBatch, **kwargs: Any) -> List[IndexingResult]:
633 """Specify a document operations to perform as a batch.
...
--> 381 raise error
382 if _is_rest(request):
383 from azure.core.rest._requests_basic import RestRequestsTransportResponse
ServiceRequestError: EOF occurred in violation of protocol (_ssl.c:2427)
</code></pre>
<p>Does anyone know how to fix this error so the code pushes the data to Azure Cognitive Search?</p>
| <python><azure><azure-cognitive-search> | 2023-09-12 11:05:34 | 1 | 457 | Herwini |
77,088,510 | 1,470,314 | Anaconda and pip | <p>Configuring a <a href="https://conda.io/projects/conda/en/latest/user-guide/tasks/manage-environments.html#creating-an-environment-from-an-environment-yml-file" rel="nofollow noreferrer">conda environment.yaml</a>, I can set certain packages to be installed by pip. Are these installations also localized to that conda environment somehow, or is that guarantee only given to packages installed directly?</p>
| <python><pip><conda> | 2023-09-12 11:00:53 | 1 | 1,012 | yuvalm2 |
77,088,414 | 1,305,688 | Shift+Enter error at running Selection/Line in python terminal using | <p>After upgrading to VS Code version 1.81.0, VS Code seems to activate Python only after sending the code with Shift+Enter. Sending 1+1:</p>
<pre><code>Microsoft Windows [Version 10.0.19044.3324]
(c) Microsoft Corporation. All rights reserved.
C:\Users\fail\gitprojects\proj_x>activate
(base) C:\Users\fail\gitprojects\proj_x>conda activate env_proj_x
(env_proj_x) C:\Users\fail\gitprojects\proj_x>1+1
'1+1' is not recognized as an internal or external command,
operable program or batch file.
(env_proj_x) C:\Users\fail\gitprojects\proj_x>C:/Users/fail/.conda/envs/env_proj_x/python.exe
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)] :: Anaconda, Inc. on win32
Type "help", "copyright", "credits" or "license" for more information.
>>>
</code></pre>
<p>After that, sending the code with Shift+Enter works as it should, since Python is activated.</p>
<p>How can I fix this?</p>
| <python><visual-studio-code> | 2023-09-12 10:48:39 | 1 | 8,018 | Eric Fail |
77,088,237 | 13,506,329 | Split a NumPy array of strings into 2 new NumPy arrays based on index positions provided by another NumPy array | <p>This is my NumPy array:</p>
<pre><code>og_arr = [['5mm', '45"', '300 mm WT', 'Nan'], ['50mm', '3/5"', 'Nan', 'Nan']]
</code></pre>
<p>I have written some logic that is able to identify at which index position the <code>mm/inch</code> string starts and this results in the following array.</p>
<pre><code> index_arr = [[1, 2, 4, -1], [2, 3, -1, -1]]
</code></pre>
<p>I would like to split <code>og_arr</code> into 2 arrays called <code>values</code> and <code>units</code> based on the <code>index_arr</code> so that I get the following.</p>
<pre><code># perform some sort of indexing + splitting operation involving og_arr and index_arr
values = [['5', '45', '300', 'Nan'], ['50', '3/5', 'Nan', 'Nan']]
units = [['mm', '"', 'mm WT', ''], ['mm', '"', '', '']]
</code></pre>
<p>I have a solution to this problem using a <code>for/while</code> loop, however, I am more interested in finding out if a pure vectorized solution exists for this sort of problem.</p>
| <python><arrays><numpy><numpy-ndarray> | 2023-09-12 10:22:59 | 1 | 388 | Lihka_nonem |
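One loop-free way to express the split is `np.frompyfunc`, which broadcasts an element-wise Python function over both arrays at once. It is vectorized in form more than in speed, since the per-element work is still Python under the hood, but it matches the stated index convention (-1 meaning "no unit"):

```python
import numpy as np

og_arr = np.array([['5mm', '45"', '300 mm WT', 'Nan'],
                   ['50mm', '3/5"', 'Nan', 'Nan']])
index_arr = np.array([[1, 2, 4, -1], [2, 3, -1, -1]])

# frompyfunc(func, nin, nout) turns a scalar function into a broadcasting ufunc.
# rstrip() drops the trailing space in cases like '300 mm WT' split at index 4.
take_value = np.frompyfunc(lambda s, i: s[:i].rstrip() if i >= 0 else s, 2, 1)
take_unit = np.frompyfunc(lambda s, i: s[i:] if i >= 0 else '', 2, 1)

values = take_value(og_arr, index_arr).astype(str)
units = take_unit(og_arr, index_arr).astype(str)
print(values.tolist())  # [['5', '45', '300', 'Nan'], ['50', '3/5', 'Nan', 'Nan']]
print(units.tolist())   # [['mm', '"', 'mm WT', ''], ['mm', '"', '', '']]
```

`frompyfunc` returns `object`-dtype arrays, hence the final `.astype(str)` casts.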
77,087,702 | 738,017 | How to find the actual tag of a "latest" Docker image in Docker hub, starting from a container image built months ago | <p>I have a local Docker image built months ago, using a Dockerfile starting with <code>FROM python</code></p>
<p>Now that image is still on a server that I can access, and I need to create a new Dockerfile to generate a container starting from EXACTLY THE SAME Python image used when the image was originally built.</p>
<p>If I use <code>FROM python</code> in my new Dockerfile, I will get a different python version, and a different OS within the new container. Instead, I would like to understand which python tag corresponds to the "latest" tag used months ago to originally build that image.</p>
<p>How can I do that?</p>
| <python><docker><dockerfile> | 2023-09-12 09:17:25 | 0 | 14,614 | Vito Gentile |
77,087,620 | 8,726,315 | Can't get element with Python Selenium find_element(By.CSS_SELECTOR) | <p>I'm having trouble getting a specific element on the page I want to scrape. The content is wrapped in a weird tag, and I'm not sure whether it's an iframe. I also tried CSS_SELECTOR, but anything coming after that tag throws an error.</p>
<p>The page I want to scrape:
<a href="https://connect.echobotsales.de/#/company/YwOwE8kYmN" rel="nofollow noreferrer">https://connect.echobotsales.de/#/company/YwOwE8kYmN</a></p>
<p>what I'm doing:</p>
<pre><code>try:
telephone_element = driver.find_element(By.XPATH, "//dynamicelements/div[2]/div[2]/div[2]/div[2]/span/a").text
print(telephone_element)
except Exception as e:
print("error", e)
</code></pre>
<p>The error I get:</p>
<pre><code>error Message: no such element: Unable to locate element: {"method":"xpath","selector":"//dynamicelements/div[2]/div[2]/div[2]/div[2]/span/a"}
(Session info: headless chrome=113.0.5672.63); For documentation on this error, please visit: https://www.selenium.dev/documentation/webdriver/troubleshooting/errors#no-such-element-exception
</code></pre>
| <python><selenium-webdriver><web-scraping><xpath><css-selectors> | 2023-09-12 09:06:18 | 2 | 341 | Eric Lehmann |
77,087,440 | 7,677,894 | What happens if __getitem__ return None to dataloader in Pytorch? | <p>What happens if <code>__getitem__</code> returns None to the DataLoader in PyTorch?</p>
<p>My guess is that the batch size will be reduced and training will continue. Is that right?</p>
| <python><pytorch><nlp><pytorch-dataloader> | 2023-09-12 08:46:30 | 1 | 983 | Ink |
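With the default collate function, a None returned by `__getitem__` is not silently dropped: `default_collate` raises a `TypeError` when it encounters None. The usual remedy is a custom `collate_fn` that filters None out, so that batch simply shrinks. The filtering step is sketched below without torch; in practice the last line would call `torch.utils.data.default_collate` on the surviving samples:

```python
# Custom collate_fn: drop None samples before collating, so a batch of size B
# with k bad samples becomes a batch of size B - k instead of raising.
def collate_skip_none(batch):
    batch = [sample for sample in batch if sample is not None]
    return batch  # real code: return default_collate(batch)

print(collate_skip_none([1, None, 3]))  # [1, 3]
```

Pass it as `DataLoader(dataset, batch_size=..., collate_fn=collate_skip_none)`; an all-None batch would come back empty, which the training loop should guard against.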
77,087,407 | 5,057,360 | SharePoint long path not working in Python | <p>My Organization recently rearranged their SharePoint folder and now my script is not working.</p>
<p>I have a Python script that is reading an Excel file from SharePoint. The old path used to look like this:</p>
<pre><code>FILE_PATH = "https://company.sharepoint.com/sites/Folder1/Shared%20Documents/Project%20Files/0.%20Databases%202022/Previous%20DB%20versions%20(Not-used)/Some_Excel_File.xlsx"
</code></pre>
<p>And it was working fine. Now the path looks like this:</p>
<pre><code>FILE_PATH = "https://company.sharepoint.com/sites/MainMission/Shared%20Documents/5.%20Programme%20(projects-Files-TU)%20Department/DLLY052%20-%20EX%20Projects%20II/9.%20042%20Operational%20Documents/0.%20Databases%202022/Previous%20DB%20versions%20(Not-used)/Some_Excel_File.xlsx"
</code></pre>
<p>and I get an error when I try to open it.</p>
<pre><code>Error: (-2147352567, 'Exception occurred.', (0, 'Microsoft Excel', 'Open method of Workbooks class failed', 'xlmain11.chm', 0, -2146827284), None)
</code></pre>
<p>I know the issue is with the path, because I copied the same file to a simpler SharePoint path and it worked. I tried escaping and formatting using:</p>
<pre><code>FILE_PATH = r'{0}'.format(FILE_PATH)
FILE_PATH = os.path.expandvars(FILE_PATH)
FILE_PATH = FILE_PATH.replace("%", "%%")
</code></pre>
<p>None of the above worked for me.
Thanks.</p>
| <python><sharepoint><path> | 2023-09-12 08:42:18 | 0 | 317 | ybloodz |
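One plausible cause (an assumption, not confirmed by the question) is path length: Excel's `Workbooks.Open` documents a limit of roughly 218 characters for the full workbook path, and the decoded new path appears to exceed it while the old one did not. A quick standard-library check:

```python
from urllib.parse import unquote

FILE_PATH = ("https://company.sharepoint.com/sites/MainMission/Shared%20Documents/"
             "5.%20Programme%20(projects-Files-TU)%20Department/"
             "DLLY052%20-%20EX%20Projects%20II/9.%20042%20Operational%20Documents/"
             "0.%20Databases%202022/Previous%20DB%20versions%20(Not-used)/"
             "Some_Excel_File.xlsx")

decoded = unquote(FILE_PATH)  # turn %20 back into spaces before measuring
print(len(decoded))           # compare against Excel's ~218-character limit
```

If the length is the problem, no amount of escaping will help; shortening the folder names, mapping the library to a drive letter, or downloading the file locally before opening it would be the usual ways out.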
77,087,356 | 5,684,214 | Debug / log output from dbt python models | <p>Is there any way to view variables / output from python models in dbt? Currently my workaround involves replicating the python model in a jupyter notebook.</p>
<p>I found this in the <a href="https://docs.getdbt.com/docs/build/python-models#limitations" rel="nofollow noreferrer">docs</a>:</p>
<blockquote>
<p><strong>Lack of print() support.</strong> The data platform runs and compiles your Python model without dbt's oversight. This means it doesn't display the output of commands such as print() in dbt's logs.</p>
</blockquote>
<p>I tried adding print statements and python logging.</p>
| <python><snowflake-cloud-data-platform><dbt> | 2023-09-12 08:35:45 | 2 | 953 | DannyDannyDanny |
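Since `print()` output is swallowed, a common workaround (the helper names here are illustrative, not a dbt API) is to materialize debug information as data: collect messages in Python, then build a dataframe from them and write or return it so the messages land in a warehouse table you can query. A dbt-free sketch of the collection pattern:

```python
from datetime import datetime, timezone

debug_rows = []


def log_debug(message):
    """Record a debug message as a row instead of printing it."""
    debug_rows.append({
        "logged_at": datetime.now(timezone.utc).isoformat(),
        "message": message,
    })


# Inside a dbt Python model you would call log_debug(...) at the points of
# interest, then turn debug_rows into a DataFrame and persist it alongside
# (or instead of) the model's result so the log is queryable.
log_debug("loaded source table")
log_debug("applied filter: rows_kept=42")
print([row["message"] for row in debug_rows])
# ['loaded source table', 'applied filter: rows_kept=42']
```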
77,087,197 | 14,705,072 | Is there a way to group_by in polars while keeping other columns? | <p>I am currently trying to achieve a polars group_by while keeping other columns than the ones in the <code>group_by</code> function.</p>
<p>Here is an example of an input data frame that I have.</p>
<pre class="lang-py prettyprint-override"><code>df = pl.from_repr("""
┌─────┬─────┬─────┬─────┐
│ SRC ┆ TGT ┆ IT ┆ Cd │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ f64 │
╞═════╪═════╪═════╪═════╡
│ 1 ┆ 1 ┆ 2 ┆ 3.0 │
│ 2 ┆ 1 ┆ 2 ┆ 4.0 │
│ 3 ┆ 1 ┆ 2 ┆ 3.0 │
│ 3 ┆ 2 ┆ 1 ┆ 8.0 │
└─────┴─────┴─────┴─────┘
""")
</code></pre>
<p>I want to group by <code>['TGT', 'IT']</code> using <code>min('Cd')</code>, which is the following code :</p>
<p><code>df.group_by('TGT', 'IT').agg(pl.col('Cd').min())</code></p>
<p>With this code line, I obtain the following dataframe.</p>
<pre><code>┌─────┬─────┬─────┐
│ TGT ┆ IT ┆ Cd │
│ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ f64 │
╞═════╪═════╪═════╡
│ 1 ┆ 2 ┆ 3.0 │
│ 2 ┆ 1 ┆ 8.0 │
└─────┴─────┴─────┘
</code></pre>
<p>And here is the dataframe I would rather want</p>
<pre><code>┌─────┬─────┬─────┬─────┐
│ SRC ┆ TGT ┆ IT ┆ Cd │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ f64 │
╞═════╪═════╪═════╪═════╡
│ 1 ┆ 1 ┆ 2 ┆ 3.0 │
│ 3 ┆ 2 ┆ 1 ┆ 8.0 │
└─────┴─────┴─────┴─────┘
</code></pre>
<p>I think I could achieve this by joining the first dataframe onto the grouped one using <code>['TGT', 'IT', 'Cd']</code> and then dropping the duplicated rows, as I only want one (and any) <code>'SRC'</code> for each <code>('TGT', 'IT')</code> couple. But I wanted to know if there is a more straightforward way to do it, especially one that keeps the <code>'SRC'</code> column during the <code>group_by</code>.</p>
<p>Thanks in advance</p>
| <python><python-polars> | 2023-09-12 08:13:23 | 1 | 319 | Haeden |
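In polars this is usually done either with `df.group_by('TGT', 'IT').agg(pl.all().sort_by('Cd').first())` (one row per group) or with a window filter, `df.filter(pl.col('Cd') == pl.col('Cd').min().over('TGT', 'IT'))` (which keeps every tied row) — both preserve the other columns. To make the underlying logic concrete without requiring polars installed, here is the equivalent pure-Python pass over the sample rows:

```python
rows = [
    {"SRC": 1, "TGT": 1, "IT": 2, "Cd": 3.0},
    {"SRC": 2, "TGT": 1, "IT": 2, "Cd": 4.0},
    {"SRC": 3, "TGT": 1, "IT": 2, "Cd": 3.0},
    {"SRC": 3, "TGT": 2, "IT": 1, "Cd": 8.0},
]

best = {}
for row in rows:
    key = (row["TGT"], row["IT"])
    # keep the first row seen with the minimal Cd for each (TGT, IT) group,
    # carrying the whole row (including SRC) along
    if key not in best or row["Cd"] < best[key]["Cd"]:
        best[key] = row

print(list(best.values()))
# [{'SRC': 1, 'TGT': 1, 'IT': 2, 'Cd': 3.0}, {'SRC': 3, 'TGT': 2, 'IT': 1, 'Cd': 8.0}]
```

Note the tie at `Cd = 3.0` for `(TGT=1, IT=2)`: this sketch (like the `sort_by(...).first()` idiom) keeps one arbitrary winner, which matches the "one (and any) SRC" requirement.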