| QuestionId (int64, 74.8M-79.8M) | UserId (int64, 56-29.4M) | QuestionTitle (string, 15-150 chars) | QuestionBody (string, 40-40.3k chars) | Tags (string, 8-101 chars) | CreationDate (2022-12-10 09:42:47 to 2025-11-01 19:08:18) | AnswerCount (int64, 0-44) | UserExpertiseLevel (int64, 301-888k) | UserDisplayName (string, 3-30 chars) |
|---|---|---|---|---|---|---|---|---|
75,221,271
| 16,169,533
|
Django image doesn't show when searching?
|
<p>I have an inventory app with products, each with a name and a photo.</p>
<p>I query the records in an HTML page and everything works fine: the images show.</p>
<p>But when I try to search, the results come back without the photo.</p>
<p>View:</p>
<pre><code>def inventory_search_view(request):
    query = request.GET.get('q')
    product_search = Inventory.objects.filter(name__icontains=query).values()
    print(product_search)
    context = {'object_list': product_search}
    return render(request, 'inventory_search.html', context=context)
</code></pre>
<p>HTML:</p>
<pre><code>{% for object in object_list %}
<div class="product-image">
    <img src="{{ object.image }}" alt="{{ object.name }}" />
    <div class="info">
        <h2>Description</h2>
        <ul>
            {{ object.description }}
        </ul>
    </div>
</div>
</div>
{% endfor %}
</code></pre>
<p>search form:</p>
<pre><code> <form action="search/">
<input type="text" name="q">
<input type="submit" value="Search">
</form>
</code></pre>
<p>Thanks in advance.</p>
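A likely cause (an assumption, since the model isn't shown): `QuerySet.values()` returns plain dicts, so `{{ object.image }}` renders the stored relative path rather than the media URL that `{{ object.image.url }}` gives on a model instance. The snippet below is a framework-free illustration; `FakeFieldFile` is a hypothetical stand-in for Django's `FieldFile`:

```python
# Hypothetical stand-in for Django's FieldFile, to show the difference between
# iterating model instances and iterating the dicts that .values() returns.
class FakeFieldFile:
    def __init__(self, name, media_url="/media/"):
        self.name = name
        self.url = media_url + name  # what <img src="..."> actually needs

# with model instances, the template can use {{ object.image.url }}
instance_row = {"name": "chair", "image": FakeFieldFile("products/chair.jpg")}
print(instance_row["image"].url)   # /media/products/chair.jpg

# with .values(), "image" is just the stored relative path, so the <img> breaks
values_row = {"name": "chair", "image": "products/chair.jpg"}
print(values_row["image"])         # products/chair.jpg
```

So a minimal fix is to drop `.values()` in the view (`Inventory.objects.filter(name__icontains=query)`) and render `{{ object.image.url }}` in the template.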
|
<python><django><orm>
|
2023-01-24 12:09:20
| 1
| 424
|
Yussef Raouf Abdelmisih
|
75,221,252
| 10,714,156
|
PyTorch: Compute Hessian matrix of the model
|
<p>Say that, for some reason, I want to fit a linear regression using PyTorch, as illustrated below.</p>
<p>How could I compute the <strong>Hessian matrix</strong> of the model to, ultimately, compute the standard error for my parameter estimates?</p>
<pre><code>import torch
import torch.nn as nn
# set seed
torch.manual_seed(42)
# define the model
class OLS_pytorch(nn.Module):
    def __init__(self, X, Y):
        super(OLS_pytorch, self).__init__()
        self.X = X
        self.Y = Y
        self.beta = nn.Parameter(torch.ones(X.shape[1], 1, requires_grad=True))
        self.intercept = nn.Parameter(torch.ones(1, requires_grad=True))
        self.loss = nn.MSELoss()

    def forward(self):
        return self.X @ self.beta + self.intercept

    def fit(self, lr=0.01, epochs=1000):
        optimizer = torch.optim.Adam(self.parameters(), lr=lr)
        for epoch in range(epochs):
            optimizer.zero_grad()
            loss = self.loss(self.forward(), self.Y)
            loss.backward()
            optimizer.step()
            if epoch % 10 == 0:
                print(f"Epoch {epoch} loss: {loss.item()}")
        return self
</code></pre>
<p>Generating some data and using the model</p>
<pre><code># Generate some data
X = torch.randn(100, 1)
Y = 2 * X + 3 + torch.randn(100, 1)
# fit the model
model = OLS_pytorch(X, Y)
model.fit()
#extract parameters
model.beta, model.intercept
#Epoch 980 loss: 0.7803605794906616
#Epoch 990 loss: 0.7803605794906616
#(Parameter containing:
# tensor([[2.0118]], requires_grad=True),
# Parameter containing:
# tensor([3.0357], requires_grad=True))
</code></pre>
<hr />
<p>For instance, in R, using the same data and the <code>lm()</code> function, I recover the same parameters, but I am also able to recover the Hessian matrix, and then I am able to compute standard errors.</p>
<pre><code>ols <- lm(Y ~ X, data = xy)
ols$coefficients
#(Intercept) X
# 3.035674 2.011811
vcov(ols)
# (Intercept) X
# (Intercept) 0.0079923921 -0.0004940884
# X -0.0004940884 0.0082671053
summary(ols)
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 3.03567 0.08940 33.96 <2e-16 ***
# X 2.01181 0.09092 22.13 <2e-16 ***
# ---
# Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
</code></pre>
<hr />
<p><strong>UPDATE</strong>: Using the answer from <a href="https://stackoverflow.com/a/75221312/10714156">@cherrywoods</a></p>
<p>here is how you would match the standard errors produced by <code>lm()</code> in R</p>
<pre><code># predict
y_pred = model.X @ model.beta + model.intercept
N = model.X.shape[0]  # number of observations (was undefined in the snippet)
sigma_hat = torch.sum((y_pred - model.Y)**2) / (N - 2)  # 2 is the number of estimated parameters
from torch.autograd.functional import hessian
def loss(beta, intercept):
    y_pred = model.X @ beta + intercept
    return model.loss(y_pred, model.Y)
H = torch.Tensor(hessian(loss, (model.beta, model.intercept)))
vcov = torch.sqrt(torch.diag(sigma_hat*torch.inverse(H/2)) )
print(vcov)
#tensor([0.9092, 0.8940], grad_fn=<SqrtBackward0>)
</code></pre>
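As a cross-check on those numbers (using my own simulated data, not the asker's tensors): for OLS with a design matrix A that includes an intercept column, the Hessian of the MSE loss is H = (2/N)·AᵀA, so vcov = σ̂²·(AᵀA)⁻¹ = σ̂²·inverse(N·H/2). Worth noting as an observation: the torch output above (0.9092, 0.8940) is exactly √N = 10 times R's standard errors (0.09092, 0.08940), which suggests `inverse(H/2)` is missing the factor N. A NumPy sketch of the closed form:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100
X = rng.normal(size=(n, 1))
Y = 2 * X + 3 + rng.normal(size=(n, 1))

# design matrix with an intercept column; solve the normal equations
A = np.hstack([np.ones((n, 1)), X])
beta_hat = np.linalg.solve(A.T @ A, A.T @ Y)

# unbiased residual variance (2 estimated parameters), then vcov and SEs;
# A'A equals (N/2) * H for the MSE-loss Hessian H
resid = Y - A @ beta_hat
sigma2 = float((resid ** 2).sum() / (n - 2))
vcov = sigma2 * np.linalg.inv(A.T @ A)
se = np.sqrt(np.diag(vcov))
print("coefficients:", beta_hat.ravel())
print("std errors:  ", se)
```

With this data the coefficients land near (3, 2) and the standard errors near 0.1, matching the scale of the `lm()` output above.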
|
<python><pytorch><regression><standard-error><hessian-matrix>
|
2023-01-24 12:07:29
| 1
| 1,966
|
Γlvaro A. GutiΓ©rrez-Vargas
|
75,221,234
| 11,023,647
|
Why the logs sent to syslog-ng are not saved?
|
<p>I have an application which I run with <code>docker-compose</code>. Now I'd like to add logging to my application so I added this image to my compose -file:</p>
<pre><code>syslog-ng:
  image: lscr.io/linuxserver/syslog-ng:latest
  container_name: syslog-ng
  environment:
    - PUID=1000
    - PGID=1000
    - TZ=Europe/London
  volumes:
    - ./syslog-ng.conf:/config/syslog-ng.conf
    - /var/log/test_logs:/var/log
  ports:
    - 514:5514/udp
    - 601:6601/tcp
    - 6514:6514/tcp
  restart: unless-stopped
</code></pre>
<p>The <code>syslog-ng.conf</code> is located in the root of my repo, just like the <code>docker-compose.yaml</code>. This is the contents of the conf file, it is copied from <a href="https://github.com/linuxserver/docker-syslog-ng/blob/main/root/defaults/syslog-ng.conf" rel="nofollow noreferrer">here</a>:</p>
<pre><code>#############################################################################
# Default syslog-ng.conf file which collects all local logs into a
# single file called /var/log/booth_logs tailored to container usage.
@version: 3.35
@include "scl.conf"
source s_local {
    internal();
};
source s_network_tcp {
    syslog(transport(tcp) port(6601));
};
source s_network_udp {
    syslog(transport(udp) port(5514));
};
destination d_local {
    file("/var/log/test_logs");
    file("/var/log/test_logs-kv.log" template("$ISODATE $HOST $(format-welf --scope all-nv-pairs)\n") frac-digits(3));
};
log {
    source(s_local);
    source(s_network_tcp);
    source(s_network_udp);
    destination(d_local);
};
</code></pre>
<p>This is the service where the file <code>test_script.py</code> is located from where I try to send logs:</p>
<pre><code>test_service:
  image: "another_service:latest"
  depends_on:
    - another_service
    - syslog-ng
  container_name: another_service
  volumes:
    - ./syslog-ng.conf:/config/syslog-ng.conf
  command: python app/test_script.py
</code></pre>
<p>And this is how I've been trying to test this in <code>test_script.py</code>:</p>
<pre><code>import logging
import logging.handlers
# configure the syslog handler
syslog = logging.handlers.SysLogHandler(address=('syslog-ng', 514))
# create a logger
logger = logging.getLogger('my_logger')
logger.setLevel(logging.DEBUG)
# add the syslog handler to the logger
logger.addHandler(syslog)
# use the logger
logger.debug('This is a debug message')
logger.info('This is an info message')
logger.warning('This is a warning message')
logger.error('This is an error message')
logger.critical('This is a critical message')
</code></pre>
<p>I don't get any errors and all the services are starting normally but no logs are saved. I've checked the container <code>/var/log</code> and my computers <code>/var/log/test_logs</code>. Any ideas what I'm missing here?</p>
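Before debugging syslog-ng itself, it is worth confirming that the Python side emits datagrams at all. This stdlib-only sketch points `SysLogHandler` at a throwaway local UDP socket (standing in for syslog-ng) and reads the frame back:

```python
import logging
import logging.handlers
import socket

# throwaway UDP "server" on an ephemeral port, standing in for syslog-ng
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
server.settimeout(5)

handler = logging.handlers.SysLogHandler(address=("127.0.0.1", port))
logger = logging.getLogger("probe")
logger.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.warning("hello syslog")

frame, _ = server.recvfrom(2048)
print(frame)  # priority <12> is facility user, severity warning
handler.close()
server.close()
```

If this works but nothing reaches the container, two things stand out (observations, not certainties): inside the compose network the host port mapping `514:5514/udp` does not apply, so the handler should target the container port directly, `SysLogHandler(address=('syslog-ng', 5514))`; and since the compose file mounts a host directory at the container's `/var/log`, the `d_local` file `/var/log/test_logs` would appear on the host as `/var/log/test_logs/test_logs`.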
|
<python><docker><logging><python-logging><syslog-ng>
|
2023-01-24 12:05:16
| 0
| 379
|
lr_optim
|
75,221,086
| 1,540,456
|
List of values to list of dictionaries
|
<p>I have a list with some string values:</p>
<pre><code>['red', 'blue']
</code></pre>
<p>And I want to use those strings as values in single, constant keyed dictionaries. So the output would be:</p>
<pre><code>[{'color': 'red'}, {'color': 'blue'}]
</code></pre>
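A list comprehension maps each value into its own single-key dict, with `'color'` as the constant key from the question:

```python
values = ['red', 'blue']

# one dict per value, all sharing the same constant key
result = [{'color': v} for v in values]
print(result)  # [{'color': 'red'}, {'color': 'blue'}]
```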
|
<python><list><dictionary>
|
2023-01-24 11:52:16
| 2
| 5,914
|
Atlas91
|
75,220,524
| 2,179,057
|
Python CLI Menu with Arrow Keys on Windows
|
<p>The terminal I use on Windows is Mingw-w64 (Git Bash). I am trying to find or create a CLI menu with Python that I can navigate with arrow keys, however nothing I find works.</p>
<p>The Python library, <code>simple-term-menu</code>, doesn't work on Windows. <code>console-menu</code> doesn't use arrow keys but it just throws an error when I import it anyway. After importing <code>windows-curses</code>, I was able to get it working in CMD but not Git Bash (it says, "Redirection is not supported.")</p>
<p>I know for a fact that what I'm after is possible. The JavaScript framework, Adonis, is capable of it with their create command (<code>yarn create adonis-ts-app hello-world</code>). The NPM one doesn't work but Yarn does. Given this, it's obviously possible, but how?</p>
<p>Given all of this, how can I get the CLI menu I want in Git Bash, or how can I get windows-curses to work?</p>
|
<python><command-line-interface><python-curses>
|
2023-01-24 10:59:57
| 3
| 4,510
|
Spedwards
|
75,220,492
| 11,710,304
|
poetry does not upgrade polars 15.16
|
<p>I have a repository which consists mainly of Python and polars code. Every time I want to update or add libraries with poetry I get this error. Up to version 0.15.15 I had no problems with polars. I have tried updating polars within the IDE (PyCharm) and also with Brew. Is it something to do with the latest polars release, or is it on the system (Mac M1) side? Unfortunately I am not getting anywhere at this point. Can someone help me?</p>
<pre><code>Package operations: 0 installs, 1 update, 0 removals
• Updating polars (0.15.15 -> 0.15.16): Failed
CalledProcessError
Command '['/Users/PATH/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/PATH/.venv', '--upgrade', '--no-deps', '/Users/PATH/Library/Caches/pypoetry/artifacts/58/97/2d/cb5f20eacd75bb88a57321c5a81a2b591330ccb0ae2fc786fffbe500eb/polars-0.15.16.tar.gz']' returned non-zero exit status 1.
at /opt/homebrew/Cellar/python@3.11/3.11.1/Frameworks/Python.framework/Versions/3.11/lib/python3.11/subprocess.py:571 in run
567│ # We don't call process.wait() as .__exit__ does that for us.
568│ raise
569│ retcode = process.poll()
570│ if check and retcode:
→ 571│ raise CalledProcessError(retcode, process.args,
572│ output=stdout, stderr=stderr)
573│ return CompletedProcess(process.args, retcode, stdout, stderr)
574│
575│
The following error occurred when trying to handle this error:
EnvCommandError
Command ['/Users/PATH/.venv/bin/python', '-m', 'pip', 'install', '--use-pep517', '--disable-pip-version-check', '--isolated', '--no-input', '--prefix', '/Users/PATH/.venv', '--upgrade', '--no-deps', '/Users/PATH/Library/Caches/pypoetry/artifacts/58/97/2d/cb5f20eacd75bb88a57321c5a81a2b591330ccb0ae2fc786fffbe500eb/polars-0.15.16.tar.gz'] errored with the following return code 1, and output:
Processing /Users/PATH/Library/Caches/pypoetry/artifacts/58/97/2d/cb5f20eacd75bb88a57321c5a81a2b591330ccb0ae2fc786fffbe500eb/polars-0.15.16.tar.gz
Installing build dependencies: started
Installing build dependencies: finished with status 'done'
Getting requirements to build wheel: started
Getting requirements to build wheel: finished with status 'done'
Preparing metadata (pyproject.toml): started
Preparing metadata (pyproject.toml): finished with status 'error'
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [6 lines of output]
Cargo, the Rust package manager, is not installed or is not on PATH.
This package requires Rust and Cargo to compile extensions. Install it through
the system's package manager or via https://rustup.rs/
Checking for Rust toolchain....
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
at /opt/homebrew/Cellar/poetry/1.3.2/libexec/lib/python3.11/site-packages/poetry/utils/env.py:1540 in _run
1536│ output = subprocess.check_output(
1537│ command, stderr=subprocess.STDOUT, env=env, **kwargs
1538│ )
1539│ except CalledProcessError as e:
→ 1540│ raise EnvCommandError(e, input=input_)
1541│
1542│ return decode(output)
1543│
1544│ def execute(self, bin: str, *args: str, **kwargs: Any) -> int:
The following error occurred when trying to handle this error:
PoetryException
Failed to install /Users/PATH/Library/Caches/pypoetry/artifacts/58/97/2d/cb5f20eacd75bb88a57321c5a81a2b591330ccb0ae2fc786fffbe500eb/polars-0.15.16.tar.gz
at /opt/homebrew/Cellar/poetry/1.3.2/libexec/lib/python3.11/site-packages/poetry/utils/pip.py:58 in pip_install
54│
55│ try:
56│ return environment.run_pip(*args)
57│ except EnvCommandError as e:
→ 58│ raise PoetryException(f"Failed to install {path.as_posix()}") from e
59│
</code></pre>
|
<python><python-poetry><python-polars>
|
2023-01-24 10:56:03
| 2
| 437
|
Horseman
|
75,220,272
| 2,919,585
|
Why do I get a SettingWithCopyWarning when using a MultiIndex (but not with a simple index)?
|
<p>The following piece of code works as expected, with no warnings. I create a dataframe, create two sub-dataframes from it using <code>.loc</code>, give them the same index and then assign to a column of one of them.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
df = pd.DataFrame(np.random.randn(20, 4),
index=pd.Index(range(20)),
columns=['one', 'two', 'three', 'four'])
d1 = df.loc[[2, 4, 6], :]
d2 = df.loc[[3, 5, 7], :]
idx = pd.Index(list('abc'), name='foo')
d1.index = idx
d2.index = idx
d1['one'] = d1['one'] - d2['two']
</code></pre>
<p>However, if I do exactly the same thing except with a multi-indexed dataframe, I get a <code>SettingWithCopyWarning</code>.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd
arrays = [
np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
np.array(["one", "two", "one", "two", "one", "two", "one", "two"]),
]
df = pd.DataFrame(np.random.randn(8, 4), index=arrays, columns=['one', 'two', 'three', 'four'])
d1 = df.loc[(['bar', 'qux', 'foo'], 'one'), :]
d2 = df.loc[(['bar', 'qux', 'foo'], 'two'), :]
idx = pd.Index(list('abc'), name='foo')
d1.index = idx
d2.index = idx
d1['one'] = d1['one'] - d2['two']
</code></pre>
<p>I know that I can avoid this warning by using <code>.copy()</code> during the creation of <code>df1</code> and <code>df2</code>, but I struggle to understand why this is necessary in the second case but not in the first. The chained indexing is equally present in both cases, isn't it? Also, the operation works in both cases (i.e. <code>d1</code> is modified but <code>df</code> is not). So, what's the difference?</p>
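Whether the view-versus-copy difference here is intended is hard to pin down: whether `.loc` returns a view or a copy is an implementation detail pandas explicitly does not guarantee, and the warning machinery disappears entirely under pandas 3.0's copy-on-write. The reliable route is the explicit `.copy()`, which both silences the warning and makes the independence of `d1` from `df` checkable:

```python
import numpy as np
import pandas as pd

arrays = [
    np.array(["bar", "bar", "baz", "baz", "foo", "foo", "qux", "qux"]),
    np.array(["one", "two", "one", "two", "one", "two", "one", "two"]),
]
df = pd.DataFrame(np.random.randn(8, 4), index=arrays,
                  columns=['one', 'two', 'three', 'four'])

# explicit copies: no SettingWithCopyWarning in either case
d1 = df.loc[(['bar', 'qux', 'foo'], 'one'), :].copy()
d2 = df.loc[(['bar', 'qux', 'foo'], 'two'), :].copy()
idx = pd.Index(list('abc'), name='foo')
d1.index = idx
d2.index = idx

original = df.copy()
d1['one'] = d1['one'] - d2['two']
assert df.equals(original)  # df is provably untouched
```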
|
<python><pandas><dataframe><multi-index><chained-assignment>
|
2023-01-24 10:36:11
| 2
| 571
|
schtandard
|
75,220,251
| 4,753,897
|
hex to int python but get a letter in response
|
<p>This is pretty well documented but I keep getting output that doesn't make sense. I have a hex value that looks like</p>
<pre><code>\x00\x00\x00\x00\x00\x01\x86\xa0
</code></pre>
<p>but I get</p>
<pre><code> >>> b'\x00\x00\x00\x00\x00\x01\x86\xa0'.hex()
'00000000000186a0'
</code></pre>
<p>I am expecting an int, or at least a readable number. I assume I am using the wrong function.</p>
<p>Advice?</p>
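`bytes.hex()` only re-renders the same bytes as a hex string; to interpret them as a number, `int.from_bytes` (or `int(..., 16)` on the hex string) is the tool. With the exact bytes from the question:

```python
raw = b'\x00\x00\x00\x00\x00\x01\x86\xa0'

# interpret the 8 bytes as one big-endian unsigned integer
n = int.from_bytes(raw, byteorder='big')
print(n)  # 100000

# equivalent: parse the hex string that .hex() produced
print(int(raw.hex(), 16))  # 100000
```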
|
<python><python-3.x>
|
2023-01-24 10:34:05
| 2
| 12,145
|
Mike3355
|
75,220,145
| 5,928,682
|
props.source.bindAsNotificationRuleSource is not a function in aws cdk python
|
<p>I am trying to set up notifications for my CodePipeline in AWS.
I have been following <a href="https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_codestarnotifications/README.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_codestarnotifications/README.html</a></p>
<pre><code>pipeline = CodePipeline(
    self,
    id,
    pipeline_name=id,
    synth=synth_step,
    cross_account_keys=True,
    code_build_defaults=pipelines.CodeBuildOptions(
        build_environment=BuildEnvironment(
            build_image=aws_codebuild.LinuxBuildImage.STANDARD_5_0,
            privileged=True,
        )
    ),
)
</code></pre>
<p>After creating my CodePipeline within the stack, I create a notification rule.</p>
<pre><code>rule = aws_codestarnotifications.NotificationRule(self, "NotificationRule",
    source=pipeline,
    events=["codepipeline-pipeline-pipeline-execution-failed",
            "codepipeline-pipeline-pipeline-execution-succeeded"],
    targets=[sns_topic]
)
</code></pre>
<p>But I am getting <code>RuntimeError: props.source.bindAsNotificationRuleSource is not a function</code>.</p>
<p>I also tried the solution mentioned here, but it didn't work out:</p>
<p><a href="https://github.com/aws/aws-cdk/issues/9710" rel="nofollow noreferrer">https://github.com/aws/aws-cdk/issues/9710</a></p>
<p>Does anyone have an idea where I am going wrong?</p>
|
<python><amazon-web-services><aws-codepipeline><aws-cdk>
|
2023-01-24 10:24:45
| 2
| 677
|
Sumanth Shetty
|
75,220,079
| 16,395,449
|
CSV to Xls conversion using Pandas script
|
<p>I am trying to convert CSV to XLSX using a Python script. I have a sample CSV file, test_report.csv, with the values below:</p>
<pre><code>COL1 COL2 COL3
A 1
B 2
C 5
</code></pre>
<p>COL3 has empty or NULL values.</p>
<p>When I try to convert the CSV with the Python script, COL3 does not appear in the converted file's header because it has no data. If there is data in COL3, I can see the header and the corresponding values without any issue.</p>
<p>I have tried the below script. Not sure what mistake I'm making.</p>
<pre><code>import os
os.chdir("/dev/test/test01/sub/subdir01/")
# Reading the csv file
import pandas as pd
print(pd.__file__)
col_names=["COL1","COL2","COL3"]
df_new = pd.read_csv("test_report.csv", quotechar='"', names=col_names, sep="|",skiprows=1, low_memory=False,error_bad_lines=False,header=None).dropna(axis=1, how="all")
# Saving xlsx file
file = f"test_report{pd.Timestamp('now').strftime('%Y%m%d_%I%M')}.xlsx"
df_new.to_excel(file, index=False)
</code></pre>
<p>Need guidance on fixing this issue.</p>
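The header disappears because of the trailing `.dropna(axis=1, how="all")`, which removes any column whose values are all missing, exactly COL3's situation. A minimal sketch without it (inline sample data stands in for test_report.csv, and the `error_bad_lines` argument is dropped since pandas 2.0 removed it):

```python
import io
import pandas as pd

# stand-in for test_report.csv (pipe-separated, as in the original read_csv call)
csv_data = "COL1|COL2|COL3\nA|1|\nB|2|\nC|5|\n"
col_names = ["COL1", "COL2", "COL3"]

# no .dropna(axis=1, how="all"): the all-empty COL3 survives
df_new = pd.read_csv(io.StringIO(csv_data), sep="|", names=col_names,
                     skiprows=1, header=None)
# reindex makes the full header explicit even if a column is missing entirely
df_new = df_new.reindex(columns=col_names)
print(df_new.columns.tolist())  # ['COL1', 'COL2', 'COL3']
# df_new.to_excel("out.xlsx", index=False)  # COL3 header now appears in the file
```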
|
<python><python-3.x><pandas><dataframe>
|
2023-01-24 10:18:59
| 0
| 369
|
Jenifer
|
75,220,007
| 806,160
|
Running delta lake in python and Debian as standalone spark
|
<p>I want to use a delta lake in Python. I installed Spark standalone and Anaconda on Debian 11.6.</p>
<p>The code that I try to run delta lake is:</p>
<pre><code>import pyspark
from delta import *
builder = pyspark.sql.SparkSession.builder.appName("MyApp") \
.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension") \
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
spark = configure_spark_with_delta_pip(builder).getOrCreate()
</code></pre>
<p>But the above code raises this error:</p>
<pre><code>:: loading settings :: url = jar:file:/usr/bin/spark-3.3.1-bin-hadoop3/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /home/boss/.ivy2/cache
The jars for the packages stored in: /home/boss/.ivy2/jars
io.delta#delta-core_2.12 added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-290d27e6-7e29-475f-81b5-1ab1331508fc;1.0
confs: [default]
found io.delta#delta-core_2.12;2.2.0 in central
found io.delta#delta-storage;2.2.0 in central
found org.antlr#antlr4-runtime;4.8 in central
:: resolution report :: resolve 272ms :: artifacts dl 10ms
:: modules in use:
io.delta#delta-core_2.12;2.2.0 from central in [default]
io.delta#delta-storage;2.2.0 from central in [default]
org.antlr#antlr4-runtime;4.8 from central in [default]
---------------------------------------------------------------------
| | modules || artifacts |
| conf | number| search|dwnlded|evicted|| number|dwnlded|
---------------------------------------------------------------------
| default | 3 | 0 | 0 | 0 || 3 | 0 |
---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-290d27e6-7e29-475f-81b5-1ab1331508fc
confs: [default]
0 artifacts copied, 3 already retrieved (0kB/11ms)
23/01/24 04:10:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
</code></pre>
<p>How I can solve this problem?</p>
|
<python><pyspark><delta-lake>
|
2023-01-24 10:12:30
| 1
| 1,423
|
Tavakoli
|
75,219,852
| 6,941,145
|
Changing task_runner in prefect deployment
|
<p>Is there a way to change the <code>task_runner</code> within a prefect deployment? I would like to have possibility to have for a single flow a deployment with say <code>ConcurrentTaskRunner</code> and <code>DaskTaskRunner</code> (local or remote).</p>
<p>The only way I have found so far is to create within deployment:</p>
<pre><code>infra_overrides:
env:
dask_server: True
</code></pre>
<p>And on the flow level something like:</p>
<pre><code>def determine_runner():
    return DaskTaskRunner if os.environ.get("dask_server") == "True" else ConcurrentTaskRunner

@flow(task_runner=determine_runner())
def my_flow():
    pass
</code></pre>
<p>This works: in a normal run the <code>dask_server</code> variable is absent, and in the special deployment run where I set it, the agent starts each run in a clean environment with the variable set. But my guess is that <strong>there must be a better way</strong>. If there was a solution at the deployment level, I could have a single function <a href="https://docs.prefect.io/api-ref/prefect/deployments/#prefect.deployments.Deployment.build_from_flow" rel="nofollow noreferrer">building from flows</a> instead of adding a <code>determine_runner</code> function to each flow.</p>
<p>Of course it would be best if there was possibility to do something like:</p>
<pre><code>Deployment.build_from_flow(
...
task_runner=my_preferred_runner,
)
</code></pre>
<p>Which is not implemented.</p>
|
<python><prefect>
|
2023-01-24 09:58:04
| 1
| 317
|
Piotr Siejda
|
75,219,678
| 14,860,526
|
Check if a type is Union type in Python
|
<p>I have defined a dataclass:</p>
<pre><code>import dataclasses
@dataclasses.dataclass
class MyClass:
    attr1: int | None
    attr2: str | None
</code></pre>
<p>I can loop through the types of my attributes with:</p>
<pre><code>for field in dataclasses.fields(MyClass):
    fieldname = field.name
    fieldtype = field.type
</code></pre>
<p>But how can I check if type 'str' is in 'fieldtype' or get the list of types inside the union type?</p>
|
<python><python-typing>
|
2023-01-24 09:42:41
| 1
| 642
|
Alberto B
|
75,219,448
| 19,580,067
|
Read outlook Email from one of my 2 email addresses using win32 python
|
<p>I have 2 Outlook email accounts opened in the app, and I am trying to read the emails of one particular account using Python. I have tried a few steps, but they didn't work. Any suggestions on how I can do that?
I know how to read emails if I have only one account, but I am not sure how to do that with 2.</p>
<pre><code>code below:
outlook = win32com.client.Dispatch("Outlook.Application").GetNamespace("MAPI")
wich_accnt = outlook.Folders
#try the restrict method!
for i in wich_accnt:
    if (i.Name == 'autoenquirytest@robo.com'):
        outbox = outlook.GetDefaultFolder(6)
        messages = outbox.Items
        print(messages[1].SenderName)
</code></pre>
<p>The code passes through the <code>if</code> condition, but then how can I read that particular account's inbox?</p>
<pre><code>outbox = outlook.GetDefaultFolder(6)
messages = outbox.Items
print(messages[1].SenderName)
</code></pre>
<p>When I run the line <code>outbox = outlook.GetDefaultFolder(6)</code> I get this error:</p>
<pre><code>AttributeError: '<win32com.gen_py.Microsoft Outlook 16.0 Object Library.MAPIFolder instance at 0x1560956608624>' object has no attribute 'GetDefaultFolder'
</code></pre>
|
<python><jupyter-notebook><outlook><pywin32><win32com>
|
2023-01-24 09:16:31
| 1
| 359
|
Pravin
|
75,219,255
| 16,250,224
|
Reallocate the fraction of weights above threshold to the other weights while maintaining the sum per group
|
<p>I have a dataframe <code>df1</code> with <code>Date</code> and <code>ID</code> as index and a <code>Weight</code> column. I want to set an upper weight limit (30%) on the weights per date. The weights on each day add up to 100%, and if I cap a weight at the limit, it can happen that the next-biggest weight then ends up above the 30% limit itself. Is there a way to account for that without doing several iterations? The remaining weights that are not at the cap should add up to 100% minus 30% for each weight at the cap.</p>
<pre><code>df1:
Date ID Weight
2023-01-30 A 0.45 <-- over max weight of 30%
2023-01-30 B 0.25
2023-01-30 C 0.15
2023-01-30 D 0.10
2023-01-30 E 0.05
2023-01-31 A 0.55
2023-01-31 B 0.25
2023-01-31 C 0.20
2023-01-31 D 0.00
2023-01-31 E 0.00
df1:
Date ID Weight Weight_upper
2023-01-30 A 0.45 0.300 <-- set to max weight
2023-01-30 B 0.25 0.318 <-- bigger than max weight
2023-01-30 C 0.15 0.191
2023-01-30 D 0.10 0.127 (ex calculation: 0.1 * (1 - 0.3)/(0.25+0.15+0.1+0.05)
2023-01-30 E 0.05 0.060
2023-01-31 A 0.55 0.300
2023-01-31 B 0.25 0.389
2023-01-31 C 0.20 0.311
2023-01-31 D 0.00 0.000
2023-01-31 E 0.00 0.000
</code></pre>
<p>For reproducibility:</p>
<pre><code>df = pd.DataFrame({
'Date':['2023-01-30', '2023-01-30', '2023-01-30', '2023-01-30', '2023-01-30', '2023-01-31', '2023-01-31', '2023-01-31', '2023-01-31', '2023-01-31'],
'ID':['A', 'B', 'C', 'D', 'E', 'A', 'B', 'C', 'D', 'E'],
'Weight':[0.45, 0.25, 0.15, 0.1, 0.05, 0.55, 0.25, 0.2, 0, 0]})
df.set_index('Date')
</code></pre>
<p>Many thanks for your help!</p>
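Fully avoiding iteration is hard, because each capping round can push another weight over the limit (exactly the `B = 0.318` case shown above). A compact sketch is to iterate the cap-and-redistribute step inside one `groupby().transform()`; it matches the question's single-pass formula and then repeats it until nothing exceeds the cap. When too few positive weights remain to absorb the excess (the 2023-01-31 group: three nonzero weights at 30% cover only 90%), the group cannot sum to 1 under the cap, and this sketch simply leaves it at the capped values:

```python
import pandas as pd

df = pd.DataFrame({
    'Date': ['2023-01-30'] * 5 + ['2023-01-31'] * 5,
    'ID': ['A', 'B', 'C', 'D', 'E'] * 2,
    'Weight': [0.45, 0.25, 0.15, 0.1, 0.05, 0.55, 0.25, 0.2, 0, 0]})

def cap_and_redistribute(w, cap=0.3, tol=1e-12):
    """Cap weights at `cap`, handing the excess proportionally to the
    uncapped positive weights; repeat until nothing exceeds the cap."""
    w = w.copy()
    while True:
        over = w > cap + tol
        if not over.any():
            return w
        excess = (w[over] - cap).sum()
        w[over] = cap
        under = (~over) & (w > tol) & (w < cap - tol)
        if not under.any():
            return w  # cap infeasible for this group; leave weights capped
        w[under] += excess * w[under] / w[under].sum()

df['Weight_upper'] = df.groupby('Date')['Weight'].transform(cap_and_redistribute)
print(df)
```

For 2023-01-30 this converges in two passes (A capped, then B), and the group still sums to 1 with every weight at or below 30%.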
|
<python><pandas>
|
2023-01-24 08:54:05
| 1
| 793
|
fjurt
|
75,219,242
| 1,415,325
|
How to return value from a javascript call in Jupyterlab?
|
<p>I am struggling to read the clipboard from <strong>jupyterlab</strong>.
My Jupyter server runs on a remote instance using Docker, so pyperclip and other similar tricks do not work. The idea is to use JavaScript, but I have very limited experience with it. I am able to get the clipboard value and print it to the log console, but not to get the value back to Python.</p>
<p>The usual trick using <code>ipython.notebook.kernel.execute</code> is not working.</p>
<p>See below the working example. Note that this has to be run in Chrome as afaik Firefox is blocking this feature.</p>
<p>Any suggestion is more than welcome :-)</p>
<p>Patrick</p>
<pre class="lang-py prettyprint-override"><code>my_js = """
async function paste(input) {
    const text = await navigator.clipboard.readText();
    return text;
}
paste().then((value) => console.log(value));
"""

import ipywidgets as widgets
from IPython.display import HTML, Javascript

button = widgets.Button(
    description='Button',
    disabled=False,
    button_style='',
    tooltip='Button',
    icon='check'
)
output = widgets.Output()

@output.capture(clear_output=False)
def on_button_clicked(b):
    display(Javascript(my_js))

button.on_click(on_button_clicked)
display(button, output)

# if you don't want to use ipywidgets, this would also work:
get_ipython().run_cell_magic("javascript", "", my_js)
</code></pre>
|
<javascript><python><jupyter-lab>
|
2023-01-24 08:52:37
| 0
| 1,429
|
sweetdream
|
75,218,903
| 19,238,204
|
How to Close the Surface of this Half Cylinder with Python Matplotlib?
|
<p>I have this half cylinder plot, but it is not closed on the surface. How can I close it?</p>
<p>Is it possible to plot a cylinder from vertices and sides, with 2 vertices becoming an arc?</p>
<pre><code>from matplotlib import pyplot as plt
from mpl_toolkits.mplot3d.art3d import Poly3DCollection, Line3DCollection
import numpy as np
def data_for_cylinder_along_z(center_x, center_y, radius, height_z):
    z = np.linspace(0, height_z, 50)
    theta = np.linspace(0, 1*np.pi, 50)
    theta_grid, z_grid = np.meshgrid(theta, z)
    x_grid = radius*np.cos(theta_grid) + center_x
    y_grid = radius*np.sin(theta_grid) + center_y
    return x_grid, y_grid, z_grid
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
Xc,Yc,Zc = data_for_cylinder_along_z(0.2,0.2,0.05,0.1)
ax.plot_surface(Xc, Yc, Zc, alpha=0.5)
# Annotation
ax.set_title("Half Cylinder")
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/6EK6W.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/6EK6W.png" alt="1" /></a></p>
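One way to close the shape is to add the flat rectangular face across the diameter, plus the two half-disc end caps, as extra `plot_surface` calls. A sketch reusing the question's geometry (center (0.2, 0.2), radius 0.05, height 0.1); the `Agg` backend is used here only so the sketch also runs headless, drop that line for interactive use:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; remove for an interactive window
import matplotlib.pyplot as plt
import numpy as np

def data_for_cylinder_along_z(cx, cy, r, h):
    z = np.linspace(0, h, 50)
    theta = np.linspace(0, np.pi, 50)
    tg, zg = np.meshgrid(theta, z)
    return r*np.cos(tg) + cx, r*np.sin(tg) + cy, zg

fig = plt.figure()
ax = fig.add_subplot(111, projection="3d")
Xc, Yc, Zc = data_for_cylinder_along_z(0.2, 0.2, 0.05, 0.1)
ax.plot_surface(Xc, Yc, Zc, alpha=0.5)

# flat back face: x spans the diameter, y fixed at center_y, z spans the height
xs = np.linspace(0.2 - 0.05, 0.2 + 0.05, 2)
zs = np.linspace(0, 0.1, 2)
Xf, Zf = np.meshgrid(xs, zs)
Yf = np.full_like(Xf, 0.2)
ax.plot_surface(Xf, Yf, Zf, alpha=0.5)

# half-disc end caps at z=0 and z=h
r = np.linspace(0, 0.05, 10)
theta = np.linspace(0, np.pi, 50)
Rg, Tg = np.meshgrid(r, theta)
Xd = Rg*np.cos(Tg) + 0.2
Yd = Rg*np.sin(Tg) + 0.2
for zcap in (0.0, 0.1):
    ax.plot_surface(Xd, Yd, np.full_like(Xd, zcap), alpha=0.5)

ax.set_title("Half Cylinder")
fig.savefig("half_cylinder.png")
```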
|
<python><matplotlib>
|
2023-01-24 08:16:07
| 1
| 435
|
Freya the Goddess
|
75,218,781
| 1,581,090
|
How to read UDP data from a given port with python in Windows?
|
<p>On Windows 10 I want to read data from UDP port 9001. I have created the following script which does not give any output (python 3.10.9):</p>
<pre><code>import socket
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 9001))
while True:
    data, addr = sock.recv(1024)
    print(f"received message: {data.decode()} from {addr}")
</code></pre>
<p>I checked that a device is sending UDP data on port 9001 using <strong>wireshark</strong>. But the above code just "runs" on powershell without any output (and without any errors).</p>
<p>Any ideas how to fix this?</p>
<p>I found <a href="https://cloudbrothers.info/en/test-udp-connection-powershell/" rel="noreferrer">this page</a> with a <strong>powershell</strong> script that is supposed to listen to a UDP port. So I tried this and created a file <code>Start-UDPServer.ps1</code> with the content as described in that page as follows:</p>
<pre><code>function Start-UDPServer {
    [CmdletBinding()]
    param (
        # Parameter help description
        [Parameter(Mandatory = $false)]
        $Port = 10000
    )
    # Create an endpoint that represents the remote host from which the data was sent.
    $RemoteComputer = New-Object System.Net.IPEndPoint([System.Net.IPAddress]::Any, 0)
    Write-Host "Server is waiting for connections - $($UdpObject.Client.LocalEndPoint)"
    Write-Host "Stop with CTRL + C"
    # Loop de Loop
    do {
        # Create a UDP listener on Port $Port
        $UdpObject = New-Object System.Net.Sockets.UdpClient($Port)
        # Return the UDP datagram that was sent by the remote host
        $ReceiveBytes = $UdpObject.Receive([ref]$RemoteComputer)
        # Close UDP connection
        $UdpObject.Close()
        # Convert received UDP datagram from Bytes to String
        $ASCIIEncoding = New-Object System.Text.ASCIIEncoding
        [string]$ReturnString = $ASCIIEncoding.GetString($ReceiveBytes)
        # Output information
        [PSCustomObject]@{
            LocalDateTime = $(Get-Date -UFormat "%Y-%m-%d %T")
            SourceIP      = $RemoteComputer.address.ToString()
            SourcePort    = $RemoteComputer.Port.ToString()
            Payload       = $ReturnString
        }
    } while (1)
}
</code></pre>
<p>and started it in an <strong>Powershell</strong> terminal (as admin) as</p>
<pre><code>.\Start-UDPServer.ps1 -Port 9001
</code></pre>
<p>and it returned to the Powershell immediately without ANY output (or error message). Maybe windows is broken?</p>
<p>If there is a solution to finally listen to UDP port 9001, I still strongly prefer a <strong>python</strong> solution!</p>
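One concrete bug in the Python script, separate from any firewall or PowerShell question: `sock.recv()` returns only the payload bytes, so `data, addr = sock.recv(1024)` tries to unpack the bytes themselves and fails (or silently misbehaves) when a datagram arrives. `recvfrom()` is the call that returns the `(payload, address)` pair. A self-contained sketch that receives one datagram:

```python
import socket

def receive_one(port=9001, timeout=None):
    """Receive one UDP datagram on `port` and return (text, sender_address)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("", port))
    sock.settimeout(timeout)
    try:
        data, addr = sock.recvfrom(1024)  # recvfrom, not recv
        return data.decode(), addr
    finally:
        sock.close()

# loop version of the original script:
# while True:
#     print("received message: %s from %s" % receive_one())
```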
|
<python><windows><powershell>
|
2023-01-24 08:00:24
| 1
| 45,023
|
Alex
|
75,218,686
| 3,467,698
|
How do I view raw SQL of the query generated by asyncpg?
|
<p>As it is said in <a href="https://magicstack.github.io/asyncpg/current/usage.html" rel="nofollow noreferrer">asyncpg Usage</a>, I can use <code>$n</code> pattern for the arguments and execute a query this way, for example:</p>
<pre><code>result = await conn.fetchval("SELECT $1", 42)
</code></pre>
<p>In this case the raw SQL would be <code>SELECT 42</code>. How do I get this raw text with an asyncpg function before execution? I am asking because I want to log queries in my project before they are applied.</p>
<pre><code>query_tpl = "SELECT $1"
values = (42,)
sql = what_is_here(query_tpl, *values) # <- ???
print(sql) # Must be "SELECT 42"
result = await conn.fetchval(query_tpl, *values)
</code></pre>
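As far as I know (worth hedging), asyncpg never builds that string: `$n` parameters are sent to the server separately through the extended query protocol, so there is no client-side "raw SQL" to extract. For logging you can render an approximation yourself; `render_for_log` below is a hypothetical helper for display only, not safely escaped and not to be executed as SQL:

```python
def render_for_log(query_tpl: str, *values) -> str:
    """Substitute $1, $2, ... with printable values, purely for log output.
    Highest placeholders are handled first, so $10 is not clobbered by $1."""
    sql = query_tpl
    for i in range(len(values), 0, -1):
        v = values[i - 1]
        rendered = f"'{v}'" if isinstance(v, str) else str(v)
        sql = sql.replace(f"${i}", rendered)
    return sql

print(render_for_log("SELECT $1", 42))          # SELECT 42
print(render_for_log("SELECT $1, $2", "a", 7))  # SELECT 'a', 7
```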
|
<python><asyncpg>
|
2023-01-24 07:46:06
| 0
| 9,971
|
Fomalhaut
|
75,218,667
| 4,519,018
|
Python mysql connector rollback not working
|
<p>I am writing a python script that does the following as a part a transaction.</p>
<ol>
<li>Creates a new database.</li>
<li>Creates new tables using a schema.sql file.</li>
<li>Copies the data from the master DB to this new DB using SQL statements like <code>insert into ... select * from master.table_name</code>.</li>
<li>Commits the txn, in the <code>else</code> block, if everything goes right. Rollback the txn, in the <code>except</code> block, if something goes wrong.</li>
<li>Close the connection in the <code>finally</code> block.</li>
</ol>
<p>However, while testing, I found out that rollback isn't working. If an exception is raised after the DB is created, the DB still exists even after rollback. If an exception is raised after inserting data into a few tables, with some tables remaining, calling rollback in the <code>except</code> block does not revert the inserted data. The script looks like this:</p>
<pre><code>import mysql.connector

try:
    conn = mysql.connector.connect(host='localhost', port=3306,
                                   user=USERNAME, password=PASSWORD,
                                   autocommit=False)
    cursor = conn.cursor()
    cursor.execute("START TRANSACTION;")
    cursor.execute(f"DROP DATABASE IF EXISTS {target_db_name};")
    cursor.execute(f"CREATE DATABASE {target_db_name};")
    cursor.execute(f"USE {target_db_name};")
    with open(SCHEMA_LOCATION) as f:
        schema_query = f.read()
    commands = schema_query.split(";")
    for command in commands:
        cursor.execute(command)
    for query in QUERIES:
        cursor.execute(f"{query}{org_Id};")
except Exception as error:
    conn.rollback()
else:
    conn.commit()  # cursor.execute("COMMIT")
finally:
    if conn.is_connected():
        cursor.close()
        conn.close()
</code></pre>
<p>Below are the details of the setup</p>
<ol>
<li>Python3</li>
<li>mysql-connector-python==8.0.32</li>
<li>MySQL 5.7</li>
<li>Storage Engine: InnoDB</li>
</ol>
|
<python><mysql><innodb>
|
2023-01-24 07:44:00
| 1
| 808
|
vvs14
|
75,218,609
| 13,359,498
|
Validation acc is very high in each fold but Test acc is very low
|
<p>I am trying to implement a neural network, using a CNN model for classification. First I split the dataset into train and test sets.</p>
<p>Code Snippet:</p>
<p><code>X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.2, random_state=42, shuffle=True, stratify=Y)</code></p>
<p>Then I built a CNN model and used stratified cross-validation to fit it.</p>
<p>Code Snippet:</p>
<pre><code>from statistics import mean, stdev

# Loop through the splits
lst_accu_stratified = []
for train_index, val_index in skf.split(X_train, y_train):
    X_train_fold, X_val_fold = X_train[train_index], X_train[val_index]
    y_train_fold, y_val_fold = y_train[train_index], y_train[val_index]
    # print('Fold :')
    ResNet50 = model.fit(X_train_fold, y_train_fold, batch_size=16, epochs=20, verbose=1)
    val_loss, val_acc = model.evaluate(X_val_fold, y_val_fold, verbose=0)
    print("Validation Loss: ", val_loss, "Validation Accuracy: ", val_acc)
    lst_accu_stratified.append(val_acc)

# Print the output.
print('List of possible accuracy:', lst_accu_stratified)
print('\nMaximum Accuracy That can be obtained from this model is:',
      max(lst_accu_stratified)*100, '%')
print('\nMinimum Accuracy:',
      min(lst_accu_stratified)*100, '%')
print('\nOverall Accuracy:',
      mean(lst_accu_stratified)*100, '%')
print('\nStandard Deviation is:', stdev(lst_accu_stratified))
</code></pre>
<p>Output:</p>
<pre><code>Epoch 1/20
30/30 [==============================] - 9s 102ms/step - loss: 1.3490 - accuracy: 0.5756
Epoch 2/20
30/30 [==============================] - 2s 71ms/step - loss: 0.4620 - accuracy: 0.8466
Epoch 3/20
30/30 [==============================] - 2s 71ms/step - loss: 0.1818 - accuracy: 0.9412
Epoch 4/20
30/30 [==============================] - 2s 71ms/step - loss: 0.1106 - accuracy: 0.9727
Epoch 5/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0643 - accuracy: 0.9811
Epoch 6/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0438 - accuracy: 0.9895
Epoch 7/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0371 - accuracy: 0.9916
Epoch 8/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0212 - accuracy: 0.9958
Epoch 9/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0143 - accuracy: 1.0000
Epoch 10/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0149 - accuracy: 0.9958
Epoch 11/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0158 - accuracy: 0.9958
Epoch 12/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0134 - accuracy: 0.9958
Epoch 13/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0072 - accuracy: 1.0000
Epoch 14/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0031 - accuracy: 1.0000
Epoch 15/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0024 - accuracy: 1.0000
Epoch 16/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0016 - accuracy: 1.0000
Epoch 17/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0016 - accuracy: 1.0000
Epoch 18/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0019 - accuracy: 1.0000
Epoch 19/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0088 - accuracy: 0.9979
Epoch 20/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0031 - accuracy: 1.0000
Validation Loss: 0.8360670208930969 Validation Accuracy: 0.800000011920929
Epoch 1/20
30/30 [==============================] - 3s 106ms/step - loss: 0.5129 - accuracy: 0.8700
Epoch 2/20
30/30 [==============================] - 2s 71ms/step - loss: 0.4789 - accuracy: 0.8784
Epoch 3/20
30/30 [==============================] - 2s 71ms/step - loss: 0.2724 - accuracy: 0.9224
Epoch 4/20
30/30 [==============================] - 2s 72ms/step - loss: 0.2108 - accuracy: 0.9308
Epoch 5/20
30/30 [==============================] - 2s 71ms/step - loss: 0.1081 - accuracy: 0.9706
Epoch 6/20
30/30 [==============================] - 2s 71ms/step - loss: 0.1010 - accuracy: 0.9748
Epoch 7/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0481 - accuracy: 0.9895
Epoch 8/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0316 - accuracy: 0.9874
Epoch 9/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0483 - accuracy: 0.9811
Epoch 10/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0167 - accuracy: 0.9937
Epoch 11/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0129 - accuracy: 0.9937
Epoch 12/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0023 - accuracy: 1.0000
Epoch 13/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0024 - accuracy: 1.0000
Epoch 14/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0093 - accuracy: 0.9979
Epoch 15/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0389 - accuracy: 0.9895
Epoch 16/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0293 - accuracy: 0.9895
Epoch 17/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0016 - accuracy: 1.0000
Epoch 18/20
30/30 [==============================] - 2s 71ms/step - loss: 6.7058e-04 - accuracy: 1.0000
Epoch 19/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0011 - accuracy: 1.0000
Epoch 20/20
30/30 [==============================] - 2s 71ms/step - loss: 6.7595e-04 - accuracy: 1.0000
Validation Loss: 0.5674645304679871 Validation Accuracy: 0.8571428656578064
Epoch 1/20
30/30 [==============================] - 2s 71ms/step - loss: 0.1533 - accuracy: 0.9518
Epoch 2/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0978 - accuracy: 0.9686
Epoch 3/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0702 - accuracy: 0.9790
Epoch 4/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0754 - accuracy: 0.9811
Epoch 5/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0362 - accuracy: 0.9874
Epoch 6/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0174 - accuracy: 0.9916
Epoch 7/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0144 - accuracy: 0.9916
Epoch 8/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0089 - accuracy: 0.9958
Epoch 9/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0017 - accuracy: 1.0000
Epoch 10/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0044 - accuracy: 0.9979
Epoch 11/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0033 - accuracy: 1.0000
Epoch 12/20
30/30 [==============================] - 2s 73ms/step - loss: 5.9884e-04 - accuracy: 1.0000
Epoch 13/20
30/30 [==============================] - 2s 73ms/step - loss: 3.7875e-04 - accuracy: 1.0000
Epoch 14/20
30/30 [==============================] - 2s 73ms/step - loss: 4.7657e-04 - accuracy: 1.0000
Epoch 15/20
30/30 [==============================] - 2s 73ms/step - loss: 2.8062e-04 - accuracy: 1.0000
Epoch 16/20
30/30 [==============================] - 2s 73ms/step - loss: 4.5594e-04 - accuracy: 1.0000
Epoch 17/20
30/30 [==============================] - 2s 72ms/step - loss: 2.3471e-04 - accuracy: 1.0000
Epoch 18/20
30/30 [==============================] - 2s 72ms/step - loss: 2.5190e-04 - accuracy: 1.0000
Epoch 19/20
30/30 [==============================] - 2s 72ms/step - loss: 1.5143e-04 - accuracy: 1.0000
Epoch 20/20
30/30 [==============================] - 2s 72ms/step - loss: 2.4174e-04 - accuracy: 1.0000
Validation Loss: 0.002929181093350053 Validation Accuracy: 1.0
Epoch 1/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0035 - accuracy: 1.0000
Epoch 2/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0048 - accuracy: 0.9979
Epoch 3/20
30/30 [==============================] - 2s 71ms/step - loss: 7.1234e-04 - accuracy: 1.0000
Epoch 4/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0100 - accuracy: 0.9937
Epoch 5/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0041 - accuracy: 1.0000
Epoch 6/20
30/30 [==============================] - 2s 71ms/step - loss: 0.0016 - accuracy: 1.0000
Epoch 7/20
30/30 [==============================] - 2s 71ms/step - loss: 6.2473e-04 - accuracy: 1.0000
Epoch 8/20
30/30 [==============================] - 2s 72ms/step - loss: 4.5511e-04 - accuracy: 1.0000
Epoch 9/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0015 - accuracy: 1.0000
Epoch 10/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0132 - accuracy: 0.9979
Epoch 11/20
30/30 [==============================] - 2s 72ms/step - loss: 0.0106 - accuracy: 0.9958
Epoch 12/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0032 - accuracy: 0.9979
Epoch 13/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0022 - accuracy: 0.9979
Epoch 14/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0039 - accuracy: 0.9979
Epoch 15/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0023 - accuracy: 1.0000
Epoch 16/20
30/30 [==============================] - 2s 73ms/step - loss: 2.7678e-04 - accuracy: 1.0000
Epoch 17/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0022 - accuracy: 1.0000
Epoch 18/20
30/30 [==============================] - 2s 73ms/step - loss: 0.0034 - accuracy: 0.9979
Epoch 19/20
30/30 [==============================] - 2s 73ms/step - loss: 4.1879e-04 - accuracy: 1.0000
Epoch 20/20
30/30 [==============================] - 2s 72ms/step - loss: 4.0388e-04 - accuracy: 1.0000
Validation Loss: 0.003368004923686385 Validation Accuracy: 1.0
Epoch 1/20
30/30 [==============================] - 2s 72ms/step - loss: 5.1283e-04 - accuracy: 1.0000
Epoch 2/20
30/30 [==============================] - 2s 72ms/step - loss: 8.4923e-04 - accuracy: 1.0000
Epoch 3/20
30/30 [==============================] - 2s 72ms/step - loss: 3.2774e-04 - accuracy: 1.0000
Epoch 4/20
30/30 [==============================] - 2s 72ms/step - loss: 1.3468e-04 - accuracy: 1.0000
Epoch 5/20
30/30 [==============================] - 2s 72ms/step - loss: 1.4561e-04 - accuracy: 1.0000
Epoch 6/20
30/30 [==============================] - 2s 72ms/step - loss: 1.6656e-04 - accuracy: 1.0000
Epoch 7/20
30/30 [==============================] - 2s 71ms/step - loss: 1.2794e-04 - accuracy: 1.0000
Epoch 8/20
30/30 [==============================] - 2s 71ms/step - loss: 6.7647e-05 - accuracy: 1.0000
Epoch 9/20
30/30 [==============================] - 2s 71ms/step - loss: 1.7325e-04 - accuracy: 1.0000
Epoch 10/20
30/30 [==============================] - 2s 72ms/step - loss: 6.5071e-05 - accuracy: 1.0000
Epoch 11/20
30/30 [==============================] - 2s 72ms/step - loss: 6.1966e-05 - accuracy: 1.0000
Epoch 12/20
30/30 [==============================] - 2s 71ms/step - loss: 5.9293e-05 - accuracy: 1.0000
Epoch 13/20
30/30 [==============================] - 2s 71ms/step - loss: 3.1360e-04 - accuracy: 1.0000
Epoch 14/20
30/30 [==============================] - 2s 71ms/step - loss: 1.0051e-04 - accuracy: 1.0000
Epoch 15/20
30/30 [==============================] - 2s 71ms/step - loss: 1.7242e-04 - accuracy: 1.0000
Epoch 16/20
30/30 [==============================] - 2s 71ms/step - loss: 5.6384e-05 - accuracy: 1.0000
Epoch 17/20
30/30 [==============================] - 2s 71ms/step - loss: 8.4639e-05 - accuracy: 1.0000
Epoch 18/20
30/30 [==============================] - 2s 71ms/step - loss: 6.7929e-04 - accuracy: 1.0000
Epoch 19/20
30/30 [==============================] - 2s 71ms/step - loss: 1.6557e-04 - accuracy: 1.0000
Epoch 20/20
30/30 [==============================] - 2s 71ms/step - loss: 4.6414e-04 - accuracy: 1.0000
Validation Loss: 8.931908087106422e-05 Validation Accuracy: 1.0
List of possible accuracy: [0.800000011920929, 0.8571428656578064, 1.0, 1.0, 1.0]
Maximum Accuracy That can be obtained from this model is: 100.0 %
Minimum Accuracy: 80.0000011920929 %
Overall Accuracy: 93.1428575515747 %
Standard Deviation is: 0.09604420178372833
</code></pre>
<p>Here the validation accuracy of each fold is pretty high, but when I test the model with the test dataset, the accuracy is very low.</p>
<p>Code snippet:</p>
<pre><code>model.evaluate(X_test, y_test,batch_size=32)
</code></pre>
<p>output:</p>
<pre><code>5/5 [==============================] - 1s 222ms/step - loss: 2.3315 - accuracy: 0.6913
[2.3314528465270996, 0.6912751793861389]
</code></pre>
<p>My question is,</p>
<ol>
<li>Is my method correct?</li>
<li>What can be the reason for low test accuracy?</li>
</ol>
|
<python><tensorflow><keras><deep-learning><conv-neural-network>
|
2023-01-24 07:36:52
| 2
| 578
|
Rezuana Haque
|
75,218,486
| 1,436,800
|
How to make an attribute read-only in serializers in DRF?
|
<p>I have a serializer.</p>
<pre><code>class MySerializer(serializers.ModelSerializer):
    class Meta:
        model = models.MyClass
</code></pre>
<p>My model class is:</p>
<pre><code>class MyClass(models.Model):
    employee = models.ForeignKey("Employee", on_delete=models.CASCADE)
    work_done = models.TextField(blank=True, null=True)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)
</code></pre>
<p>I want the <code>employee</code> attribute to be read-only, and its field should only show this value:</p>
<pre><code>employee = Employee.objects.get(user=self.request.user)
</code></pre>
<p>How can I do this in serializers?</p>
|
<python><django><django-rest-framework><django-views><django-serializer>
|
2023-01-24 07:22:04
| 2
| 315
|
Waleed Farrukh
|
75,218,300
| 7,177,478
|
How to initialize database with some data when app startup? (fastapi, sqlalchemy)
|
<p>I'm new to FastAPI and SQLAlchemy, and I'm trying to initialize some data when my app starts up. Here is what I'm thinking of:</p>
<pre><code>@app.on_event("startup")
async def startup_event():
    with SessionLocal() as session:
        country_dataframe = pd.read_csv('./initialize_data/country.csv')
        for index, row in country_dataframe.iterrows():
            session.add(models.Country(row.to_dict()))
        session.commit()
</code></pre>
<p>But I can't get the db session when I start it, it shows an error code:</p>
<blockquote>
<p>ERROR: Traceback (most recent call last): File
"C:\Users\newia\Miniconda3\envs\fastapi\lib\site-packages\starlette\routing.py",
line 540, in lifespan
async for item in self.lifespan_context(app): File "C:\Users\newia\Miniconda3\envs\fastapi\lib\site-packages\starlette\routing.py",
line 481, in default_lifespan
await self.startup() File "C:\Users\newia\Miniconda3\envs\fastapi\lib\site-packages\starlette\routing.py",
line 516, in startup
await handler() File "D:\Software Projects\PythonProjects\LanguageExchange\app.py", line 27, in
startup_event
with SessionLocal() as session: AttributeError: <strong>enter</strong></p>
<p>ERROR: Application startup failed. Exiting.</p>
</blockquote>
<p>Is there any design pattern to do this? Any advice would be grateful.</p>
|
<python><sqlalchemy><fastapi>
|
2023-01-24 06:58:28
| 0
| 420
|
Ian
|
75,218,218
| 20,599,682
|
How to solve expressions from a list?
|
<p>I have a list of expressions (+ - *):
<code>["2 + 3", "5 - 1", "3 * 4", ...]</code>
and I need to convert every expression to <code>expression = answer</code>, like this: <code>2 + 3 = 5</code>.</p>
<p>I tried just doing <code>print(listt[0])</code> but it outputs <code>2 + 3</code>, not <code>5</code>. So how do I get the answer of this expression? I know there is a long way by doing <code>.split()</code> on every expression, but is there any faster way of doing this?</p>
<p>UPD: I need to use only built-in functions</p>
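<p>Since only <code>+</code>, <code>-</code> and <code>*</code> occur and built-ins are allowed, <code>eval()</code> can do the arithmetic directly — fine for trusted input like generated exercises, but dangerous for untrusted user input:</p>

```python
listt = ["2 + 3", "5 - 1", "3 * 4"]

# eval() is a built-in; each string is evaluated as a Python expression
results = [f"{expr} = {eval(expr)}" for expr in listt]
print(results)  # ['2 + 3 = 5', '5 - 1 = 4', '3 * 4 = 12']
```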
|
<python>
|
2023-01-24 06:46:09
| 3
| 328
|
FoxFil
|
75,218,205
| 1,245,659
|
ipython security lib missing
|
<p>Per instructions for IPython, I am supposed to be able to run this import when coding:</p>
<pre><code>from IPython.lib.security import passwd_check
</code></pre>
<p>I have IPython 8.8.0 installed as part of a Jupyter install.</p>
<p>however, when I attempt to run this library, it reports:</p>
<pre><code>ModuleNotFoundError: No module named 'IPython.lib.security'
</code></pre>
<p>what am I missing?</p>
|
<python><python-3.x><jupyter-notebook><ipython>
|
2023-01-24 06:43:40
| 3
| 305
|
arcee123
|
75,218,126
| 12,242,085
|
How to replace values in columns of a DataFrame, keeping commas where they exist, in Python Pandas?
|
<p>I have Pandas DataFrame like below:</p>
<pre><code>df = pd.DataFrame()
df["COL1"] = [111,222,333]
df["COL2"] = ["CV_COUNT_ABC_XM_BF, CV_COUNT_DEF_XM_BF", "CV_COUNT_DEF_XM_ BF", "LACK"]
df["COL3"] = ["LACK", "CV_COUNT_ABC_XM_BF, CV_COUNT_DEF_XM_BF", "CV_COUNT_DEF_XM_ BF xx"]
</code></pre>
<p>df:</p>
<pre><code>COL1 | COL2 | COL3
-------|------------------------------------------|---------
111 | CV_COUNT_ABC_XM_BF, CV_COUNT_DEF_XM_BF | LACK
222 | CV_COUNT_DEF_XM_ BF | CV_COUNT_ABC_XM_BF, CV_COUNT_DEF_XM_BF
333 | LACK | CV_COUNT_DEF_XM_ BF xx
... | ... | ...
</code></pre>
<p>And I need to:</p>
<ul>
<li>if there is only "LACK" in COL2 or COL3, keep it</li>
<li>if COL2 or COL3 contains "ABC" or "DEF", change the value so that only "ABC" or "DEF" stays; if a value containing "ABC" or "DEF" appears after a comma, the replacement also has to appear after a comma</li>
<li>delete any other values in the columns (like the "xx" for ID=333 in COL3) except "ABC", "DEF", commas and "LACK"</li>
</ul>
<p>So, as a result I need something like below:</p>
<pre><code>COL1 | COL2 | COL3
-------|------------------------------------------|---------
111 | ABC, DEF | LACK
222 | DEF | ABC, DEF
333 | LACK | DEF
... | ... | ...
</code></pre>
<p>How can I do that in Python Pandas?</p>
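<p>One way to express the three rules, assuming the interesting keywords are known up front (here "ABC" and "DEF"; adjust the tuple as needed):</p>

```python
import pandas as pd

df = pd.DataFrame({
    "COL1": [111, 222, 333],
    "COL2": ["CV_COUNT_ABC_XM_BF, CV_COUNT_DEF_XM_BF", "CV_COUNT_DEF_XM_ BF", "LACK"],
    "COL3": ["LACK", "CV_COUNT_ABC_XM_BF, CV_COUNT_DEF_XM_BF", "CV_COUNT_DEF_XM_ BF xx"],
})

KEYWORDS = ("ABC", "DEF")

def keep_keywords(value):
    if value == "LACK":          # rule 1: keep "LACK" as-is
        return value
    # rules 2 and 3: per comma-separated part, keep only the keyword it contains
    parts = [kw for part in value.split(",") for kw in KEYWORDS if kw in part]
    return ", ".join(parts)

for col in ("COL2", "COL3"):
    df[col] = df[col].map(keep_keywords)
print(df)
```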
|
<python><pandas><dataframe><if-statement><replace>
|
2023-01-24 06:32:21
| 1
| 2,350
|
dingaro
|
75,218,000
| 597,858
|
Remove duplicate pages from a PDF
|
<p>I have a pdf file which has lots duplicate pages which I want to remove. This is my code:</p>
<pre><code>pdf_reader = PyPDF2.PdfFileReader(filename_path)
print(pdf_reader.getNumPages())
pdf_writer = PyPDF2.PdfFileWriter()
last_page_n = pdf_reader.getNumPages() - 1
megalist1 = []
for i in range(last_page_n):
    current_page = pdf_reader.getPage(i)
    megalist1.append(current_page)
res = []
[res.append(x) for x in megalist1 if x not in res]
print(len(megalist1))
</code></pre>
<p>It doesn't generate any error, but it doesn't work either. What is it that I am doing wrong?</p>
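<p>Two likely problems with the snippet: comparing PyPDF2 page objects with <code>not in</code> rarely behaves as hoped, the filtered <code>res</code> is never written anywhere, and <code>range(last_page_n)</code> skips the final page. A sketch of the dedup logic itself, keyed on something comparable — for a PDF that key could be each page's extracted text:</p>

```python
def dedupe_keep_order(items, key=lambda x: x):
    """Return items with duplicates (by key) removed, preserving first occurrence."""
    seen = set()
    result = []
    for item in items:
        k = key(item)
        if k not in seen:
            seen.add(k)
            result.append(item)
    return result

# stand-in for per-page text; real code would use page.extract_text()
pages = ["intro", "chapter 1", "intro", "chapter 2", "chapter 1"]
print(dedupe_keep_order(pages))  # ['intro', 'chapter 1', 'chapter 2']
```

<p>With PyPDF2 the key would be something like <code>lambda p: p.extract_text()</code>, and each kept page would then go to the writer via <code>pdf_writer.add_page(page)</code> (modern API names; identical text on genuinely different pages would collapse, so hash the raw content stream if that matters).</p>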
|
<python>
|
2023-01-24 06:13:56
| 2
| 10,020
|
KawaiKx
|
75,217,883
| 2,993,106
|
python datefinder.find_dates does not work if only years are present e.g '2014 - 2018'
|
<p>I am trying to extract dates from a string if only years are present, e.g the following string:</p>
<pre><code>'2014 - 2018'
</code></pre>
<p>should return the following dates:</p>
<pre><code>2014/01/01
2018/01/01
</code></pre>
<p>I am using the python library datefinder and it's brilliant when other element like a month is specified but fails when only years are present in a date.</p>
<p>I need to recognise all sort of incomplete and complete dates:</p>
<pre><code>2014
May 2014
08/2014
03/10/2018
01 March 2013
</code></pre>
<p>Any idea how to recognise date in a string when only the year is present?</p>
<p>Thank you</p>
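<p>As a fallback when <code>datefinder</code> finds nothing, a year-only token scan needs just the standard library (the 1900–2099 window is an assumption; widen it if needed):</p>

```python
import datetime
import re

def year_only_dates(text):
    """Map standalone 4-digit years to January 1 of that year."""
    return [
        datetime.date(int(tok), 1, 1)
        for tok in re.findall(r"\b(\d{4})\b", text)
        if 1900 <= int(tok) <= 2099
    ]

print(year_only_dates("2014 - 2018"))
# [datetime.date(2014, 1, 1), datetime.date(2018, 1, 1)]
```

<p>Running this only on strings where <code>datefinder.find_dates</code> returned nothing keeps its behaviour for the richer formats like "May 2014" or "03/10/2018".</p>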
|
<python><date><datefinder>
|
2023-01-24 05:56:27
| 1
| 1,347
|
Dino
|
75,217,828
| 13,849,446
|
selenium.common.exceptions.InsecureCertificateException: probably due to HSTS policy
|
<p>I am trying to open Microsoft Outlook using Selenium with Firefox, but when the URL opens I get this message: "outlook.live.com has a security policy called HTTP Strict Transport Security (HSTS), which means that Firefox can only connect to it securely." The error I get in the terminal is:</p>
<pre><code>Traceback (most recent call last):
File "d:\Code\Python\plugins-olek-test\initializer.py", line 313, in start_driver
driver.get(url)
File "C:\Users\IT PLANET\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 449, in get
self.execute(Command.GET, {"url": url})
File "C:\Users\IT PLANET\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\webdriver.py", line 440, in execute
self.error_handler.check_response(response)
File "C:\Users\IT PLANET\AppData\Local\Programs\Python\Python39\lib\site-packages\selenium\webdriver\remote\errorhandler.py", line 245, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.InsecureCertificateException: Message:
Stacktrace:
RemoteError@chrome://remote/content/shared/RemoteError.sys.mjs:8:8
WebDriverError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:180:5
InsecureCertificateError@chrome://remote/content/shared/webdriver/Errors.sys.mjs:301:5
checkReadyState@chrome://remote/content/marionette/navigate.sys.mjs:58:24
onNavigation@chrome://remote/content/marionette/navigate.sys.mjs:329:39
emit@resource://gre/modules/EventEmitter.sys.mjs:154:20
receiveMessage@chrome://remote/content/marionette/actors/MarionetteEventsParent.sys.mjs:35:25
</code></pre>
<p>I have tried everything I could find on internet but none solved my problem. The code is</p>
<pre><code>from seleniumwire import webdriver
from selenium.webdriver.chrome.service import Service
from webdriver_manager.firefox import GeckoDriverManager
from webdriver_manager import utils
import sys
URL = 'https://outlook.live.com/'
firefox_options = webdriver.FirefoxOptions()
# firefox_options.add_argument('--no-sandbox')
path_to_firefox_profile = "output_files\\firefox\\xlycfcyp.default-release"
profile = webdriver.FirefoxProfile(path_to_firefox_profile)
profile.set_preference("dom.webdriver.enabled", False)
profile.set_preference('useAutomationExtension', False)
profile.update_preferences()
firefox_options.set_preference("general.useragent.override", 'user-agent=Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:89.0) Gecko/20100101 Firefox/89.0')
firefox_options.add_argument("--width=1400")
firefox_options.add_argument("--height=1000")
driver_installation = GeckoDriverManager().install()
service = Service(driver_installation)
if sys.platform == 'win32':
    from subprocess import CREATE_NO_WINDOW
    service.creationflags = CREATE_NO_WINDOW
driver = webdriver.Firefox(options=firefox_options, firefox_profile=profile,
                           service=service)
driver.get(URL)
</code></pre>
<p>I have tried my best, I hope someone helps me.
<strong>Note:</strong>
The same thing works for gmail. It also has HSTS policy but it is working fine.
Thanks in advance</p>
|
<python><selenium><selenium-webdriver><firefox><seleniumwire>
|
2023-01-24 05:46:54
| 2
| 1,146
|
farhan jatt
|
75,217,780
| 7,177,478
|
Sqlalchemy autoincrement not working while insert new data
|
<p>Hi, I'm new to SQLAlchemy, and I have a table like this:</p>
<pre><code>class Block(Base):
    __tablename__ = "block"

    id = Column(Integer, nullable=False, autoincrement=True)
    blocker = Column(Integer, ForeignKey("user.id"), nullable=False, primary_key=True)
    blocked = Column(Integer, ForeignKey("user.id"), nullable=False, primary_key=True)
</code></pre>
<p>When I try to insert new data, it gives me an error:</p>
<blockquote>
<p>sqlalchemy.exc.IntegrityError: (sqlite3.IntegrityError) NOT NULL
constraint failed: block.id [SQL: INSERT INTO block (id, blocker,
blocked) VALUES (?, ?, ?)] [parameters: (None, 1, 2)]</p>
</blockquote>
<p>It seems like the autoincrement is not working. Could anyone give me some advice?</p>
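<p>The likely cause: SQLite only auto-generates a value for a column declared as the single <code>INTEGER PRIMARY KEY</code>. With <code>blocker</code>/<code>blocked</code> as a composite primary key, <code>id</code> is just a NOT NULL column that never receives a value. The underlying behaviour, shown with the stdlib <code>sqlite3</code> module (the <code>UNIQUE</code> constraint stands in for the composite key):</p>

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# "id INTEGER PRIMARY KEY" aliases SQLite's rowid, so it auto-generates
conn.execute(
    "CREATE TABLE block ("
    " id INTEGER PRIMARY KEY,"
    " blocker INTEGER NOT NULL,"
    " blocked INTEGER NOT NULL,"
    " UNIQUE (blocker, blocked))"
)
conn.execute("INSERT INTO block (blocker, blocked) VALUES (?, ?)", (1, 2))
row = conn.execute("SELECT id, blocker, blocked FROM block").fetchone()
print(row)  # (1, 1, 2)
```

<p>In SQLAlchemy terms that means making <code>id</code> the sole <code>primary_key=True</code> column and enforcing the pair with a <code>UniqueConstraint('blocker', 'blocked')</code> instead — a sketch, not tested against the question's models.</p>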
|
<python><sqlalchemy><fastapi>
|
2023-01-24 05:40:30
| 1
| 420
|
Ian
|
75,217,779
| 4,732,139
|
Pandas weekly resampling
|
<p>I have a dataframe with daily market data (OHLCV) and am resampling it to weekly.</p>
<p>My specific requirement is that the weekly dataframe's index labels must be the index labels of the <strong>first day of that week</strong>, whose data is <strong>present in</strong> the daily dataframe.</p>
<p>For example, in July 2022, the trading week beginning 4th July (for US stocks) should be labelled 5th July, since 4th July was a holiday and not found in the daily dataframe, and the first date in that week found in the daily dataframe is 5th July.</p>
<p>The usual weekly resampling <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#offset-aliases" rel="nofollow noreferrer">offset aliases</a> and <a href="https://pandas.pydata.org/pandas-docs/stable/user_guide/timeseries.html#anchored-offsets" rel="nofollow noreferrer">anchored offsets</a> do not seem to have such an option.</p>
<p>I can achieve my requirement specifically for US stocks by importing <code>USFederalHolidayCalendar</code> from <code>pandas.tseries.holiday</code> and then using</p>
<pre><code>bday_us = pd.offsets.CustomBusinessDay(calendar=USFederalHolidayCalendar())
dfw.index = dfw.index.map(lambda idx: bday_us.rollforward(idx))
</code></pre>
<p>where <code>dfw</code> is the already resampled weekly dataframe with <code>W-MON</code> as option.</p>
<p>However, this would mean that I'd have to use different trading calendars for each different exchange/market, which I'd very much like to avoid.</p>
<p>Any pointers on how to do this simply so that the index label in the weekly dataframe is the index label of the first day of that week <strong>available in</strong> the daily dataframe would be much appreciated.</p>
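<p>One calendar-agnostic sketch: group by calendar week period and relabel each group with the earliest date actually present in the daily frame — no holiday calendar involved. Shown with a single <code>close</code> column and made-up dates; extend the <code>agg</code> for full OHLCV:</p>

```python
import pandas as pd

# 2022-07-04 is deliberately absent, as in the US-holiday example
idx = pd.to_datetime(["2022-07-01", "2022-07-05", "2022-07-06", "2022-07-08"])
df = pd.DataFrame({"close": [1.0, 2.0, 3.0, 4.0]}, index=idx)

grouper = df.index.to_period("W")            # calendar weeks, no exchange calendar
dfw = df.groupby(grouper).agg({"close": "last"})
# relabel each week with the first daily index value present in it
dfw.index = pd.DatetimeIndex(df.index.to_series().groupby(grouper).min().values)
print(dfw)   # rows labelled 2022-07-01 and 2022-07-05
```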
|
<python><pandas><group-by><resampling>
|
2023-01-24 05:40:26
| 1
| 373
|
Py_Dream
|
75,217,558
| 5,527,786
|
Leetcode 3 sum different answer python vs javascript
|
<p>The <a href="https://leetcode.com/problems/3sum" rel="nofollow noreferrer">3 sum problem</a> is a well known coding interview problem. It states the following:</p>
<blockquote>
<p>Given an integer array nums, return all the triplets [nums[i], nums[j], nums[k]] such that i != j, i != k, and j != k, and nums[i] + nums[j] + nums[k] == 0.</p>
</blockquote>
<p>I understand the algorithm and have coded the solution in python, but I recently tried to translate the python code below into javascript and I'm getting an incorrect answer.</p>
<p>This is very strange to me, because the code is <strong>completely translated</strong>, line by line. Unless JavaScript's recursion works differently from Python's under the hood (unlikely). Is there something more to why the code gives different results on <code>[-1,0,1,2,-1,-4]</code>?</p>
<pre class="lang-py prettyprint-override"><code>class Solution:
    def threeSum(self, nums: List[int]) -> List[List[int]]:
        def kSum(nums: List[int], target: int, k: int, startIndex: int) -> List[List[int]]:
            if k == 2:
                ts = twoSum(nums, target, startIndex)
                return ts
            res = []
            for i in range(startIndex, len(nums)):
                currNum = nums[i]
                takeCurr = target - currNum
                if i == 0 or nums[i - 1] != nums[i]:
                    found = kSum(nums, target=takeCurr, k=k-1, startIndex=i+1)
                    for subset in found:
                        temp = [currNum] + subset
                        res.append(temp)
            return res

        def twoSum(nums: List[int], target: int, startIndex: int) -> List[List[int]]:
            res = []
            lo = startIndex
            hi = len(nums)-1
            while (lo < hi):
                curr_sum = nums[lo] + nums[hi]
                if curr_sum < target:
                    lo += 1
                elif curr_sum > target:
                    hi -= 1
                else:
                    res.append([nums[lo], nums[hi]])
                    lo += 1
                    while (lo < len(nums) and nums[lo] == nums[lo-1]):
                        lo += 1
            return res

        nums.sort()
        return kSum(nums, target=0, k=3, startIndex=0)
</code></pre>
<pre class="lang-js prettyprint-override"><code>function threeSum (nums) {
    function kSum(nums, target, k, startIndex) {
        if (k===2) {
            let ts = twoSum(nums, target, startIndex);
            return ts;
        }
        let res = [];
        for (let i = startIndex; i < nums.length; ++i) {
            const currNum = nums[i];
            const takeCurr = target - currNum;
            if (i === 0 || nums[i-1] != nums[i]) {
                let found = kSum(nums, target=takeCurr, k=k-1, startIndex=i+1);
                for (const subset of found) {
                    let temp = [currNum].concat(subset);
                    res.push(temp);
                }
            }
        }
        return res;
    }
    function twoSum(nums, target, startIndex) {
        let res = [];
        let lo = startIndex;
        let hi = nums.length-1;
        while (lo < hi) {
            const curr_sum = nums[lo] + nums[hi];
            if (curr_sum < target) {
                lo++;
            }
            else if (curr_sum > target) {
                hi--;
            }
            else {
                res.push([nums[lo], nums[hi]]);
                lo++;
                while (lo < nums.length && nums[lo] === nums[lo-1]) {
                    lo++;
                }
            }
        }
        return res;
    }
    nums.sort(function(a,b) { return a-b;});
    return kSum(nums, target=0, k=3, startIndex=0);
}
</code></pre>
|
<javascript><python>
|
2023-01-24 04:57:41
| 1
| 561
|
nodel
|
75,217,503
| 1,436,800
|
Custom Validate function is not being called inside perform_create function in DRF
|
<p>This is my code.</p>
<pre><code>class MyViewSet(ModelViewSet):
    serializer_class = MySerializer
    queryset = MyClass.objects.all()

    def get_serializer_class(self):
        if request.user.is_superuser:
            return self.serializer_class
        else:
            return OtherSerializer

    def perform_create(self, serializer):
        if request.user.is_superuser:
            if serializer.is_valid():
                serializer.save(organization=self.request.user.organization)
        else:
            employee = Employee.objects.get(user=self.request.user)
            serializer.save(employee=employee, organization=self.request.user.organization)
</code></pre>
<p>This is my Serializer:</p>
<pre><code>class MySerializer(serializers.ModelSerializer):
    class Meta:
        model = models.MyClass

    def validate(self, data):
        employee = data.get('employee')
        members = Team.objects.get(id=team.id.members.all())
        if employee not in members:
            raise serializers.ValidationError('Invalid')
        return data
</code></pre>
<p>The issue is, my custom validate function is not being called when I call it inside perform_create() in my ViewSet.</p>
<p>What might be the issue?</p>
|
<python><django><django-rest-framework><django-serializer><django-validation>
|
2023-01-24 04:47:11
| 1
| 315
|
Waleed Farrukh
|
75,217,411
| 12,242,085
|
How to modify a DataFrame so as to take values between marker strings in a column in Python Pandas?
|
<p>I have DataFrame in Python Pandas like below:</p>
<pre><code>COL_1 | COL_2 | COL_3
------|---------------------|---------
111 | CV_COUNT_ABC_XM_BF | CV_SUM_ABC_XM_BF
222 | CV_COUNT_DEF_XM_BF | CV_SUM_CC_XM_BF
333 | CV_COUNT_CC_XM_BF | LACK
444 | LACK | CV_SUM_DEF_XM_BF
... | ... | ...
</code></pre>
<p>And I need to modify above DataFrame to have in COL_2 and COL_3 values like:</p>
<ul>
<li><p>if there is "LACK" in COL_2 or COL_3, keep it</p>
</li>
<li><p>if there is something other than "LACK", take the value:</p>
<pre><code>between "CV_COUNT_" and "_XM_BF"
or
between "CV_SUM_" and "_XM_BF"
</code></pre>
</li>
</ul>
<p>So, as a result I need something like below:</p>
<pre><code>COL_1 | COL_2 | COL_3
------|-------------------|---------
111 | ABC | ABC
222 | DEF | CC
333 | CC | LACK
444 | LACK | DEF
... | ... | ...
</code></pre>
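For reference, a sketch of one direction using <code>str.extract</code> (untested beyond this toy frame; it assumes the prefixes are exactly <code>CV_COUNT_</code>/<code>CV_SUM_</code> and the suffix is <code>_XM_BF</code>, and keeps non-matching cells such as <code>LACK</code> via <code>fillna</code>):

```python
import pandas as pd

df = pd.DataFrame({
    "COL_1": [111, 222, 333, 444],
    "COL_2": ["CV_COUNT_ABC_XM_BF", "CV_COUNT_DEF_XM_BF", "CV_COUNT_CC_XM_BF", "LACK"],
    "COL_3": ["CV_SUM_ABC_XM_BF", "CV_SUM_CC_XM_BF", "LACK", "CV_SUM_DEF_XM_BF"],
})

# Capture the token between CV_COUNT_/CV_SUM_ and _XM_BF; rows that do not
# match (e.g. "LACK") extract as NaN and keep their original value via fillna.
pattern = r"^CV_(?:COUNT|SUM)_(.+)_XM_BF$"
for col in ["COL_2", "COL_3"]:
    df[col] = df[col].str.extract(pattern)[0].fillna(df[col])

print(df)
```

Because the fallback is <code>fillna</code> rather than an explicit check for the string <code>"LACK"</code>, any other non-matching value would also be kept as-is.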
|
<python><pandas><dataframe><numpy><if-statement>
|
2023-01-24 04:30:45
| 2
| 2,350
|
dingaro
|
75,217,264
| 9,727,704
|
Getting list elements satisfying some criteria using a comprehension
|
<p>I have a list like this:</p>
<pre><code>foolist = ['foo bar', 'foo', 'bar', 'foo bar']
</code></pre>
<p>I want the number of <code>foo</code>s in that list, without using regex. Is there a simpler way to do it than below, using one line?</p>
<pre><code>print(len([i for i in [ 'foo' in line for line in foolist ] if i == True]))
</code></pre>
<p>The above is interesting, but it also makes my skin crawl in the way that nested ternary operators would.</p>
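A more direct one-liner, relying on the fact that <code>bool</code> is a subclass of <code>int</code>, so the <code>True</code> values sum to the match count:

```python
foolist = ['foo bar', 'foo', 'bar', 'foo bar']

# Each 'foo' in line evaluates to True/False; sum() adds them as 1/0.
count = sum('foo' in line for line in foolist)
print(count)  # 3
```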
|
<python><list-comprehension>
|
2023-01-24 03:58:44
| 1
| 765
|
Lucky
|
75,217,081
| 6,494,707
|
Tensor of Lists: how to convert to tensor of one list?
|
<p>I have a tensor like this</p>
<pre><code>tensor([[4.],[1.]], device='cuda:0')
</code></pre>
<p>I want to change it to the following:</p>
<pre><code>tensor([4.,1.], device='cuda:0')
</code></pre>
<p>How to do that?</p>
<p>I am not sure if this is the reason for the error:</p>
<pre><code>loss = nn.MSELoss(y_tilde,y)
RuntimeError: Boolean value of Tensor with more than one value is ambiguous
</code></pre>
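A sketch covering both points (assuming PyTorch is installed). Note that <code>nn.MSELoss</code> is a module class, so it has to be instantiated before being called, which is a plausible cause of the ambiguous-boolean error:

```python
import torch
import torch.nn as nn

t = torch.tensor([[4.], [1.]])

# Any of flatten(), squeeze(1), or view(-1) collapses the extra dimension.
flat = t.flatten()  # tensor([4., 1.])

# nn.MSELoss is a class: instantiate it first, then call it on the tensors.
y_tilde = torch.tensor([4., 1.])
y = torch.tensor([3., 2.])
loss = nn.MSELoss()(y_tilde, y)  # or: torch.nn.functional.mse_loss(y_tilde, y)
```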
|
<python><pytorch><tensor><torch>
|
2023-01-24 03:11:11
| 1
| 2,236
|
S.EB
|
75,217,063
| 192,221
|
Polars table convert a list column to separate rows i.e. unnest a list column to multiple rows
|
<p>I have a Polars dataframe in the form:</p>
<pre><code>df = pl.DataFrame({'a':[1,2,3], 'b':[['a','b'],['a'],['c','d']]})
</code></pre>
<pre><code>βββββββ¬βββββββββββββ
β a β b β
β --- β --- β
β i64 β list[str] β
βββββββͺβββββββββββββ‘
β 1 β ["a", "b"] β
β 2 β ["a"] β
β 3 β ["c", "d"] β
βββββββ΄βββββββββββββ
</code></pre>
<p>I want to convert it to the following form. I plan to save to a parquet file, and query the file (with sql).</p>
<pre><code>βββββββ¬ββββββ
β a β b β
β --- β --- β
β i64 β str β
βββββββͺββββββ‘
β 1 β "a" β
β 1 β "b" β
β 2 β "a" β
β 3 β "c" β
β 3 β "d" β
βββββββ΄ββββββ
</code></pre>
<p>I have seen an <a href="https://stackoverflow.com/questions/74638704/polars-unnesting-columns-algorithmically-without-a-for-loop">answer that works on struct columns</a>, but <code>df.unnest('b')</code> on my data results in the error:</p>
<pre><code>SchemaError: Series of dtype: List(Utf8) != Struct
</code></pre>
<p>I also found <a href="https://github.com/pola-rs/polars/issues/3282" rel="noreferrer">a github issue</a> that shows list can be converted to a struct, but I can't work out how to do that, or if it applies here.</p>
|
<python><dataframe><python-polars>
|
2023-01-24 03:08:49
| 1
| 6,017
|
kristianp
|
75,216,980
| 9,521,905
|
Tensorflow object detection from a video
|
<p>I am trying to use one of TensorFlow's available pretrained models and see objects get identified in a video. Here is what I've tried.</p>
<p><div class="snippet" data-lang="js" data-hide="false" data-console="true" data-babel="false">
<div class="snippet-code">
<pre class="snippet-code-html lang-html prettyprint-override"><code>import numpy as np
import tensorflow as tf
import cv2
# Load pretrained model
model = tf.compat.v2.saved_model.load(r'C:\Users\g\Downloads\faster_rcnn_openimages_v4_inception_resnet_v2_1')
# Open a video file
cap = cv2.VideoCapture(r'C:\Users\g\Desktop\1\training_videos\11.avi')
while True:
# Read a frame from the video
ret, frame = cap.read()
if not ret:
break
# Run the frame through the model
inputs = tf.constant(frame[np.newaxis, ...])
outputs = model(inputs)
    # Get the object detection results
boxes, scores, classes, num = outputs["detection_boxes"], outputs["detection_scores"], outputs["detection_classes"], \
outputs["num_detections"]
    # Draw the detection boxes on the frame
for i in range(num.numpy()[0]):
if scores.numpy()[0, i] > 0.5:
box = boxes.numpy()[0, i]
x1, y1, x2, y2 = box[1] * frame.shape[1], box[0] * frame.shape[0], box[3] * frame.shape[1], box[2] * \
frame.shape[0]
cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 0, 255), 2)
cv2.putText(frame, '{}'.format(classes.numpy()[0, i]), (int(x1), int(y1)), cv2.FONT_HERSHEY_SIMPLEX, 1,
(255, 0, 0), 2)
# Show the frame
cv2.imshow('Video', frame)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
# Release the video file and close the window
cap.release()
cv2.destroyAllWindows()</code></pre>
</div>
</div>
</p>
<p>But the issue I am facing is that when I run this in PyCharm, it says:</p>
<pre><code>line 19, in <module>
outputs = model(inputs)
TypeError: 'AutoTrackable' object is not callable
</code></pre>
<p>Any input is appreciated.</p>
|
<python><tensorflow>
|
2023-01-24 02:46:57
| 1
| 677
|
Berglund
|
75,216,548
| 11,413,442
|
AWS SAM CLI throws error: Error building docker image
|
<p>I am trying to use the SAM CLI on my M1 Mac.</p>
<p>I followed the steps outlined in <a href="https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-getting-started-hello-world.html" rel="noreferrer">these docs</a>:</p>
<pre><code>sam init
cd sam-app
sam build
sam deploy --guided
</code></pre>
<p>I did not modify the code or the yaml files.
I can start the local Lambda function as expected:</p>
<pre><code>β sam-app sam local start-api
Mounting HelloWorldFunction at http://127.0.0.1:3000/hello [GET]
You can now browse to the above endpoints to invoke your functions. You do not need to restart/reload SAM CLI while working on your functions, changes will be reflected instantly/automatically. If you used sam build before running local commands, you will need to re-run sam build for the changes to be picked up. You only need to restart SAM CLI if you update your AWS SAM template
2023-01-23 17:54:06 * Running on http://127.0.0.1:3000/ (Press CTRL+C to quit)
</code></pre>
<p>But as soon as I hit the endpoint by doing:</p>
<pre><code>curl http://localhost:3000/hello
</code></pre>
<p>The Lambda RIE starts throwing errors and returns a 502.</p>
<pre><code>Invoking app.lambda_handler (python3.9)
Image was not found.
Removing rapid images for repo public.ecr.aws/sam/emulation-python3.9
Building image...................
Failed to build Docker Image
NoneType: None
Exception on /hello [GET]
Traceback (most recent call last):
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 2073, in wsgi_app
response = self.full_dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1518, in full_dispatch_request
rv = self.handle_user_exception(e)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1516, in full_dispatch_request
rv = self.dispatch_request()
^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/flask/app.py", line 1502, in dispatch_request
return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/apigw/local_apigw_service.py", line 361, in _request_handler
self.lambda_runner.invoke(route.function_name, event, stdout=stdout_stream_writer, stderr=self.stderr)
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/commands/local/lib/local_lambda.py", line 137, in invoke
self.local_runtime.invoke(
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/lib/telemetry/metric.py", line 315, in wrapped_func
return_value = func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/lambdafn/runtime.py", line 177, in invoke
container = self.create(function_config, debug_context, container_host, container_host_interface)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/lambdafn/runtime.py", line 73, in create
container = LambdaContainer(
^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_container.py", line 93, in __init__
image = LambdaContainer._get_image(
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_container.py", line 236, in _get_image
return lambda_image.build(runtime, packagetype, image, layers, architecture, function_name=function_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 164, in build
self._build_image(
File "/opt/homebrew/Cellar/aws-sam-cli/1.70.0/libexec/lib/python3.11/site-packages/samcli/local/docker/lambda_image.py", line 279, in _build_image
raise ImageBuildException("Error building docker image: {}".format(log["error"]))
samcli.commands.local.cli_common.user_exceptions.ImageBuildException: Error building docker image: The command '/bin/sh -c mv /var/rapid/aws-lambda-rie-x86_64 /var/rapid/aws-lambda-rie && chmod +x /var/rapid/aws-lambda-rie' returned a non-zero code: 1
</code></pre>
<p>I found this <a href="https://github.com/aws/aws-sam-cli/issues/3169#issuecomment-906729604" rel="noreferrer">Github issue</a> where someone recommended to do the following:</p>
<pre><code> docker run --rm --privileged multiarch/qemu-user-static --reset -p yes
</code></pre>
<p>But it yielded no results.</p>
<p>Does anyone know what I'm doing wrong, or how to resolve this issue so the docker container can build correctly? Thanks.</p>
|
<python><docker><aws-lambda><serverless><aws-sam-cli>
|
2023-01-24 01:00:53
| 1
| 372
|
Mathis Van Eetvelde
|
75,216,547
| 7,530,975
|
How can I use matplotlib to animate several graphs each with an advancing window of real-time data and utilizing blitting?
|
<p>I currently have a working python program that simultaneously animates one or more graphs each with an advancing window of real-time data. The program utilizes FuncAnimation and replots each graph using the axes plot routine. It is desired to show updates every second in the animation and the program is able to perform as expected when animating a few graphs. However, matplotlib cannot complete updates in a 1 second timeframe when attempting to animate several (>5) graphs.</p>
<p>Understanding that updating graphs in their entirety takes time, I am attempting to employ blitting to the animation process.</p>
<p>I have attempted to simplify and comment the code for an easier understanding. The data I work with is a binary stream from a file. Frames of data within the stream are identified and marked prior to running the code below. Within each frame of data resides electronic signal values that are to be plotted. Each electronic signal has one or more data points within a single binary data frame. The ability to see up to a dozen plotted signals at the same time is desired. The code follows and is commented.</p>
<p>I employ a python deque to mimic a data window of 10 seconds. For each call to the FuncAnimation routine, 1 second of data is placed into the deque and then the deque in processed to produce a xValues and yValues array of data points.
At the bottom of the code is the FuncAnimation routine that is called every 1 second (DisplayAnimatedData). Within that routine are 2 print statements that I have used to determine that the data within the xValues and yValues arrays from the deque are correct and that the plot set_xlim for each graph is correctly being changed in a way to advance the window of animated data.</p>
<p>The plotting is partially working. However, the x-axis tick values are not updated after the initial set of tick values is applied correctly using calls to set_xlim. I also expected the y-axis limits to automatically scale to the data, but they do not. How do I get the x-axis tick values to advance as the data window advances? How do I get the y-axis tick values to display correctly?
Finally, you will notice that the code hides the x-axis of all graphs except the last one. I designed this thinking that, although set_xlim is called on each graph for every pass through FuncAnimation, time would be spent redrawing only one x-axis. I hope this will improve performance.
Your insight would be appreciated.</p>
<pre><code>from matplotlib import animation
from matplotlib.backends.backend_qt5agg import FigureCanvasQTAgg as FigureCanvas
from matplotlib.figure import Figure
from collections import deque
from PyQt5 import QtCore, QtGui, QtWidgets
#PlotsUI is code created via Qt Designer
class PlotsUI(object):
def setupUi(self, PlotsUI):
PlotsUI.setObjectName("PlotsUI")
PlotsUI.setWindowModality(QtCore.Qt.NonModal)
PlotsUI.resize(1041, 799)
sizePolicy = QtWidgets.QSizePolicy(QtWidgets.QSizePolicy.Expanding,
QtWidgets.QSizePolicy.Expanding)
sizePolicy.setHorizontalStretch(0)
sizePolicy.setVerticalStretch(0)
sizePolicy.setHeightForWidth(PlotsUI.sizePolicy().hasHeightForWidth())
PlotsUI.setSizePolicy(sizePolicy)
self.gridLayout_2 = QtWidgets.QGridLayout(PlotsUI)
self.gridLayout_2.setObjectName("gridLayout_2")
self.plotLayout = QtWidgets.QVBoxLayout()
self.plotLayout.setObjectName("plotLayout")
self.gridLayout_2.addLayout(self.plotLayout, 0, 0, 1, 1)
self.retranslateUi(PlotsUI)
QtCore.QMetaObject.connectSlotsByName(PlotsUI)
def retranslateUi(self, PlotsUI):
_translate = QtCore.QCoreApplication.translate
PlotsUI.setWindowTitle(_translate("PlotsUI", "Plots"))
#DataSeriesMgr is given a collection of values for a user selected electronic signal
#found in the stream of binary data frames. One instance of this class is dedicated to
#manage the values of one electronic signal.
class DataSeriesMgr:
def __init__(self, frameMultiple, timeRange, dataSeries):
self._dataSeries = dataSeries
#frame multiple will typically be number of binary data frames required
#for 1 second of data (default 100 frames)
self._frameMultiple = frameMultiple
#create a data deque to support the windowing of animated data
#timeRange is the number of framesMultiples(seconds) of data stored in deque
self._dataDeque = deque(maxlen=timeRange)
self._timeRange = timeRange
#index into dataSeries
#keep track of what data has been processed
self._xValueIndex = 0 #byte number in buffer from binary file
self._dataSeriesSz = len(dataSeries)
#get the first available xvalue and yvalue arrays to help facilitate
#the calculation of x axis limits (by default 100 frames of data at a time)
self._nextXValues, self._nextYValues = self.XYDataSetsForAnimation()
if self._nextXValues is not None:
self._nextXLimits = (self._nextXValues[0], self._nextXValues[0] +
self._timeRange)
else:
self._nextXLimits = (None, None)
@property
def DataDeque(self):
return self._dataDeque
@property
def TimeRange(self):
return self._timeRange
@property
def NextXValues(self):
return self._nextXValues
def GetXYValueArrays(self):
allXValues = []
allYValues = []
#xyDataDeque is a collection of x values, y values tuples each 1 sec in duration
#convert what's in the deque to arrays of x and y values
xyDataArray = list(self._dataDeque)
for dataSet in xyDataArray:
for xval in dataSet[0]:
allXValues.append(xval)
for yval in dataSet[1]:
allYValues.append(yval)
#and set the data for the plot line
#print(f'{key}-NumOfX: {len(allXValues)}\n\r')
return allXValues,allYValues
def GatherFrameData(self, dataSubSet):
consolidatedXData = []
consolidatedYData = []
for frameData in dataSubSet: # each frame of data subset will have one or more data points
for dataPointTuple in frameData: # (unimportantValue, x, y) values
if dataPointTuple[0] is None: #no data in this frame
continue
consolidatedXData.append(dataPointTuple[1])
consolidatedYData.append(dataPointTuple[2])
return consolidatedXData,consolidatedYData
def XYDataSetsForAnimation(self):
index = self._xValueIndex #the current location in the data array for animation
nextIndex = index + self._frameMultiple
if nextIndex > self._dataSeriesSz: #we are beyond the number of frames
#there are no more data points to plot for this specific signal
return None, None
dataSubset = self._dataSeries[index:nextIndex]
self._xValueIndex = nextIndex #prepare index for next subset of data to be animated
#gather data points from data subset
xyDataSet = self.GatherFrameData(dataSubset)
#add it to the deque
# the deque holds a window of a number of seconds of data
self._dataDeque.append(xyDataSet)
#convert the deque to arrays of x and y values
xValues, yValues = self.GetXYValueArrays()
return xValues, yValues
def NextXYDataSets(self):
xValues = self._nextXValues
yValues = self._nextYValues
xlimits = self._nextXLimits
self._nextXValues, self._nextYValues = self.XYDataSetsForAnimation()
if self._nextXValues is not None:
self._nextXLimits = (self._nextXValues[0], self._nextXValues[0] +
self._timeRange)
else:
self._nextXLimits = (None, None)
return xValues, yValues, xlimits
class Graph:
def __init__(self, title, dataSeriesMgr):
self._title = title
self._ax = None
self._line2d = None
self._xlimits = None
self._dataSeriesMgr = dataSeriesMgr
@property
def DataSeriesMgr(self):
return self._dataSeriesMgr
@DataSeriesMgr.setter
def DataSeriesMgr(self, val):
self._dataSeriesMgr = val
@property
def AX(self):
return self._ax
@AX.setter
def AX(self, ax):
self._ax = ax
line2d, = self._ax.plot([], [], animated=True)
self._line2d = line2d
self._ax.set_title(self._title, fontweight='bold', size=10)
@property
def Line2D(self):
return self._line2d
@Line2D.setter
def Line2D(self,val):
self._line2d = val
@property
def Title(self):
return self._title
@property
def ShowXAxis(self):
return self._showXAxis
@ShowXAxis.setter
def ShowXAxis(self, val):
self._showXAxis = val
self._ax.xaxis.set_visible(val)
@property
def XLimits(self):
return self._xlimits
@XLimits.setter
def XLimits(self, tup):
self._xlimits = tup
self._ax.set_xlim(tup[0], tup[1])
class Plotter(QtWidgets.QDialog):
def __init__(self, parentWindow):
super(Plotter, self).__init__()
self._parentWindow = parentWindow
#Matplotlib Figure
self._figure = Figure()
self._frameMultiple = 100 #there are 100 frames of data per second
self._xaxisRange = 10 #make the graphs have a 10 second xaxis range
self._animationInterval = 1000 #one second
#PyQt5 UI
#add the canvas to the UI
self.ui = PlotsUI()
self.ui.setupUi(self)
self._canvas = FigureCanvas(self._figure)
self.ui.plotLayout.addWidget(self._canvas)
self.show()
def PlaceGraph(self,aGraph,rows,cols,pos):
ax = self._figure.add_subplot(rows,cols,pos)
aGraph.AX = ax
def Plot(self, dataSeriesDict):
self._dataSeriesDict = {}
self._graphs = {}
#for this example, simplify the structure of the data to be plotted
for binaryFileAlias, dataType, dataCode, dataSpec, dataTupleArray in dataSeriesDict.YieldAliasTypeCodeAndData():
self._dataSeriesDict[dataCode] = DataSeriesMgr(self._frameMultiple, self._xaxisRange, dataTupleArray)
self._numberOfGraphs = len(self._dataSeriesDict.keys())
#prepare for blitting
pos = 1
self._lines = []
lastKey = None
for k,v in self._dataSeriesDict.items():
#create a graph for each series of data
aGraph = Graph(k,v)
self._graphs[k] = aGraph
#the last graph will show animated x axis
lastKey = k
#and place it in the layout
self.PlaceGraph(aGraph, self._numberOfGraphs, 1, pos)
aGraph.ShowXAxis = False
#collect lines from graphs
self._lines.append(aGraph.Line2D)
pos += 1
#show the x axis of the last graph
lastGraph = self._graphs[lastKey]
lastGraph.ShowXAxis = True
#Animate
self._animation = animation.FuncAnimation(self._figure, self.DisplayAnimatedData,
None, interval=self._animationInterval, blit=True)
def DisplayAnimatedData(self,i):
indx = 0
haveData = False
for key, graph in self._graphs.items():
allXValues, allYValues, xlimits = graph.DataSeriesMgr.NextXYDataSets()
if allXValues is None: #no more data
continue
# print(f'{key}-NumOfX:{len(allXValues)}')
# print(f'{key}-XLimits: {xlimits[0]}, {xlimits[1]}')
self._lines[indx].set_data(allXValues, allYValues)
#call set_xlim on the graph.
graph.XLimits = xlimits
haveData = True
indx += 1
if not haveData: #no data ??
self._animation.event_source.stop()
return self._lines
</code></pre>
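Regarding the stale ticks and limits: with <code>blit=True</code>, only the returned artists are redrawn; tick labels live in the cached background, so <code>set_xlim</code> alone never becomes visible. Below is a minimal sketch of the rescaling calls I believe are needed per frame, shown outside the animation for brevity (the <code>Agg</code> backend is used only to keep the sketch headless; note that changing limits generally requires a full redraw, i.e. <code>blit=False</code> or an explicit re-capture of the background):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
line, = ax.plot([], [])

# Simulate one animation frame: push new data, then rescale both axes.
line.set_data([0, 1, 2], [0.0, 10.0, 20.0])
ax.relim()            # recompute the data limits from the line artists
ax.autoscale_view()   # apply them to the view (x and y limits)
fig.canvas.draw()     # with blit=True this full redraw of the ticks never happens

print(ax.get_xlim(), ax.get_ylim())
```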
|
<python><matplotlib><animation>
|
2023-01-24 01:00:38
| 1
| 341
|
GAF
|
75,216,543
| 18,091,372
|
Using geopandas to explore a geojson LineString
|
<p>I have the following python code:</p>
<pre><code>import geopandas
data = {
"type": "FeatureCollection",
"features": [
{
"type": "Feature",
"geometry": {
"type": "GeometryCollection",
"geometries": [
{
"type": "LineString",
"coordinates": [
[-118, 32], [-119, 33], [-120, 34], [-121, 35], [-122, 36], [-123, 37], [-124, 38]
]
}
]
},
"properties": {
"provider": "MyProvider"
}
}
]
}
gdf = geopandas.GeoDataFrame.from_features(data)
gdf.explore()
</code></pre>
<p>when I run this, it generates the warning:</p>
<blockquote>
<p>UserWarning: GeoJsonTooltip is not configured to render for GeoJson
GeometryCollection geometries. Please consider reworking these
features: [{'provider': 'MyProvider'}] to MultiPolygon for full
functionality.</p>
</blockquote>
<p>And the tiles on the map do not load, although the line defined by the GeoJSON data does show up.</p>
<p><a href="https://i.sstatic.net/e6zhA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/e6zhA.png" alt="explore result" /></a></p>
<p>If I use just gdf.plot(), I get the expected image:</p>
<p><a href="https://i.sstatic.net/9QEkp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9QEkp.png" alt="plot result" /></a></p>
<p>But I want the interactive map tiles that <code>.explore()</code> provides.</p>
<p>What does the warning mean exactly? How does my data need to change so this will work?</p>
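One workaround consistent with the warning is to unwrap each single-member <code>GeometryCollection</code> into its inner <code>LineString</code> before calling <code>from_features</code>. This is plain dict manipulation, so it should be version-independent; sketched here on a trimmed copy of the data:

```python
data = {
    "type": "FeatureCollection",
    "features": [{
        "type": "Feature",
        "geometry": {
            "type": "GeometryCollection",
            "geometries": [{
                "type": "LineString",
                "coordinates": [[-118, 32], [-119, 33]],
            }],
        },
        "properties": {"provider": "MyProvider"},
    }],
}

# Unwrap single-geometry GeometryCollections into their inner geometry so
# that GeoJsonTooltip (used by gdf.explore()) can render and style them.
for feature in data["features"]:
    geom = feature["geometry"]
    if geom["type"] == "GeometryCollection" and len(geom["geometries"]) == 1:
        feature["geometry"] = geom["geometries"][0]

print(data["features"][0]["geometry"]["type"])  # LineString
```

After this, <code>geopandas.GeoDataFrame.from_features(data)</code> should produce plain <code>LineString</code> geometries that <code>explore()</code> can handle.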
|
<python><geojson><geopandas>
|
2023-01-24 01:00:10
| 1
| 796
|
Eric G
|
75,216,507
| 3,380,902
|
Plotly Dash: Uncaught (in promise) Error after upgrading dash to 2.7.1
|
<p>I have a Dash App running inside a Flask App. I am seeing a bunch of errors in the console after upgrading dash to <code>2.7.1</code></p>
<pre><code>Uncaught (in promise) Error: A listener indicated an asynchronous response by returning true, but the message channel closed before a response was received
Uncaught (in promise) Error: There is already a source with this ID
at r.addSource (async-plotlyjs.js:2:1020888)
at i.addSource (async-plotlyjs.js:2:1219363)
at l.addSource (async-plotlyjs.js:2:2988732)
at async-plotlyjs.js:2:2989736
at h (async-plotlyjs.js:2:2989770)
at l.update (async-plotlyjs.js:2:2990100)
at b.updateData (async-plotlyjs.js:2:2338377)
at async-plotlyjs.js:2:2336961
</code></pre>
<p><a href="https://i.sstatic.net/BpoKW.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/BpoKW.png" alt="error-ss" /></a></p>
<p>I do not see any errors in the application logs. What is causing the error? Any suggestions on where to look or how to troubleshoot?</p>
<p>Application code:</p>
<p>tab1.py</p>
<pre><code>import dash
from dash import dcc, html
from dash.dependencies import Input, Output
import pandas as pd
df = pd.DataFrame({
'x': [1, 2, 3],
'Lat': [37.774322, 37.777035, 37.773033],
'Long': [-122.489761, -122.485555, -122.491220]
})
layout = html.Div([
    dcc.Graph(id="map"),
    dcc.Input(id="inp")
])
@app.callback(
Output('map','figure'),
Input('inp','value')
)
def fin(val):
# do something
data = []
data.append({
"type": "scattermapbox",
"lat": df["Lat"],
"lon": df["Long"],
"name": "Location",
"showlegend": False,
"hoverinfo": "text",
"mode": "markers",
"clickmode": "event+select",
"customdata": df.loc[:,cd_cols].values,
"marker": {
"symbol": "circle",
"size": 8,
"opacity": 0.7,
"color": "black"
}
}
)
layout = {
"autosize": True,
"hovermode": "closest",
"mapbox": {
"accesstoken": MAPBOX_KEY,
"bearing": 0,
"center": {
"lat": xxx,
"lon": xxx
},
"pitch": 0,
"zoom": zoom,
"style": "satellite-streets",
},
}
return ({'data': data, 'layout': layout})
</code></pre>
<p>application.py</p>
<pre><code>import dash
import flask
from dash import dcc, html
import dash_bootstrap_components as dbc
import os
# External stylesheets
SANDSTONE = "xxxxx"
external_stylesheets = [
SANDSTONE,
{
'href': 'custom.css',
'rel': 'stylesheet'
},
{
'href': 'https://use.fontawesome.com/releases/v5.10.2/css/all.css',
'rel': 'stylesheet'
}
]
application = dash.Dash(__name__,
requests_pathname_prefix='/dashboard/',
#serve_locally = False,
suppress_callback_exceptions = True,
meta_tags=[
{"name": "viewport", "content": "width=device-width, initial-scale=1"}
],
external_stylesheets=external_stylesheets,
)
server = application.server
# Title the app.
application.title = "Stroom - Platform Demo"
</code></pre>
<p>index.py</p>
<pre><code># In[32]:
import pandas as pd
import dash
from dash.dependencies import Input, Output, State
from dash import dcc
import dash_bootstrap_components as dbc
from dash import html
#import dash_design_kit as ddk
import plotly as py
from plotly import graph_objs as go
from plotly.graph_objs import *
import flask
from application import application
import os
from tabs import comps, analysis, deals, returns
from pages import home
import traceback
# In[8]:
server = application.server
# App Layout
application.layout = html.Div([
# header
html.Div([
html.Div(
html.Img(src='https://ss.s3.us-west-1.amazonaws.com/logo-black.png',height="100%"),
style={"float":"right",
"width":"170px",
"height":"100px",
"margin-top":"-84px"
}
),
html.Div(
[
html.H4("Market Intelligence", style={"textAlign":"center"}),
html.Hr(),
dbc.Nav(
[
dbc.NavLink("Tab1", href="/tab1", active="partial"),
dbc.NavLink("Tab2", href="/tab2", active="partial"),
],
vertical=True,
fill=True,
pills=True,
),
],
style = {
"position": "fixed",
"top": 0,
"left": 0,
"bottom": 0,
"width": "10rem",
"padding": "1rem 1rem",
"background-color": "#f8f9fa",
},
),
dcc.Location(id='url'),
html.Div(id='page-content'),
# Store component
dcc.Store(id="comps-store", storage_type="local"),
# Store component for graphs
dcc.Store(id="modal-store", storage_type="local"),
],
)
])
# Render page content
@application.callback(Output("page-content", "children"),
[
Input('url', 'pathname')
]
)
def display_content(pathname):
print(pathname)
if pathname in ["/","/dashboard/","/dashboard2"]:
return tab1.layout
elif pathname == "/comps":
return comps.layout
else:
return dash.no_update
</code></pre>
<p><strong>init</strong>.py</p>
<pre><code>from flask import Flask, redirect
from flask_sqlalchemy import SQLAlchemy
from flask_login import LoginManager, login_required
import sys
import os
sys.path.append("..") # Adds higher directory to python modules path.
from werkzeug.middleware.dispatcher import DispatcherMiddleware
from werkzeug.serving import run_simple
from index import application as dashApp
import pymysql
from sqlalchemy import create_engine
database = 'login'
server_auth = Flask(__name__, instance_relative_config=False)
server_auth.config['SECRET_KEY'] = os.environ["pwd"]
server_auth.config['SQLALCHEMY_DATABASE_URI'] = "mysql+pymysql://{}:{}@{}/{}".format(os.environ["user"],os.environ["pwd"],os.environ["host"], database)
# Update this for Production
server_auth.config['TESTING'] = True
# init SQLAlchemy so we can use it later in our models
db = SQLAlchemy(server_auth)
db.init_app(server_auth)
login_manager = LoginManager()
login_manager.login_view = 'auth.login'
login_manager.init_app(server_auth)
from .models import users, init_db
init_db() # created mysql tables
@login_manager.user_loader
def load_user(user_id):
# since the user_id is just the primary key of our user table, use it in the query for the user
return users.query.get(int(user_id))
# blueprint for auth routes in our app
from .auth import auth as auth_blueprint
server_auth.register_blueprint(auth_blueprint)
# blueprint for non-auth parts of app
from .main import main as main_blueprint
server_auth.register_blueprint(main_blueprint)
# from .app import appdash as dash_blueprint
# app.register_blueprint(dash_blueprint)
# return server_auth
@server_auth.route('/dashboard')
@login_required
def dashboard():
return redirect('/dashboard')
app = DispatcherMiddleware(server_auth,
{'/dashboard': dashApp.server})
if __name__ == '__main__':
run_simple('0.0.0.0', 80, app, use_reloader=True, use_debugger=True)
</code></pre>
|
<javascript><python><plotly-dash>
|
2023-01-24 00:51:46
| 1
| 2,022
|
kms
|
75,216,496
| 2,747,734
|
How to match between the nth occurrence and nth +1 occurrence using regex
|
<p>I have the following text:</p>
<pre><code>aBcD-19/WES/VA-MKL-2217223/2020
</code></pre>
<p>I would like to extract what is between the 2nd occurrence of / and the 3rd occurrence of /. Based on the text, the following will be extracted</p>
<pre><code>VA-MKL-2217223
</code></pre>
<p>So far I came up with the pattern <code>\S+?/</code>, which gives me 3 matches: that is, the text before each <code>/</code>. I only want to depend on the slashes.</p>
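A sketch of two slash-only approaches: the regex skips two <code>/</code>-terminated fields and captures up to the next <code>/</code>, while <code>split</code> avoids regex entirely:

```python
import re

s = "aBcD-19/WES/VA-MKL-2217223/2020"

# (?:[^/]*/){2} consumes "aBcD-19/" and "WES/"; ([^/]*) captures up to the 3rd "/".
m = re.search(r"^(?:[^/]*/){2}([^/]*)/", s)
print(m.group(1))  # VA-MKL-2217223

# Without regex at all, index the 3rd field after splitting on "/".
print(s.split("/")[2])  # VA-MKL-2217223
```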
|
<python><regex>
|
2023-01-24 00:48:59
| 1
| 1,313
|
Sarah cartenz
|
75,216,470
| 1,887,261
|
How to get data from both sides of a many to many join in django
|
<p>Let's say I have the following models:</p>
<pre><code>class Well(TimeStampMixin, models.Model):
plate = models.ForeignKey(Plate, on_delete=models.CASCADE, related_name="wells")
row = models.TextField(null=False)
column = models.TextField(null=False)
class Meta:
unique_together = [["plate", "row", "column"]]
class Antibiotic(TimeStampMixin, models.Model):
name = models.TextField(null=True, default=None)
class WellConditionAntibiotic(TimeStampMixin, models.Model):
wells = models.ManyToManyField(Well, related_name="well_condition_antibiotics")
volume = models.IntegerField(null=True, default=None)
stock_concentration = models.IntegerField(null=True, default=None)
dosage = models.FloatField(null=True, default=None)
antibiotic = models.ForeignKey(
Antibiotic, on_delete=models.RESTRICT, related_name="antibiotics"
)
</code></pre>
<p>In plain English: there is a set of wells, and each well can contain multiple antibiotics of many different types.</p>
<p>I'm trying to fetch the data of a given well and all of the antibiotics contained inside it.</p>
<p>I've tried <code>WellConditionAntibiotic.objects.filter(wells__id=1).select_related('antibiotic')</code></p>
<p>which gives me this query:</p>
<pre><code>SELECT
"kingdom_wellconditionantibiotic"."id",
"kingdom_wellconditionantibiotic"."created_at",
"kingdom_wellconditionantibiotic"."updated_at",
"kingdom_wellconditionantibiotic"."volume",
"kingdom_wellconditionantibiotic"."stock_concentration",
"kingdom_wellconditionantibiotic"."dosage",
"kingdom_wellconditionantibiotic"."antibiotic_id",
"kingdom_antibiotic"."id",
"kingdom_antibiotic"."created_at",
"kingdom_antibiotic"."updated_at",
"kingdom_antibiotic"."name"
FROM
"kingdom_wellconditionantibiotic"
INNER JOIN "kingdom_wellconditionantibiotic_wells" ON (
"kingdom_wellconditionantibiotic"."id" = "kingdom_wellconditionantibiotic_wells"."wellconditionantibiotic_id"
)
INNER JOIN "kingdom_antibiotic" ON (
"kingdom_wellconditionantibiotic"."antibiotic_id" = "kingdom_antibiotic"."id"
)
WHERE
"kingdom_wellconditionantibiotic_wells"."well_id" = 1
</code></pre>
<p>This gives me all of the antibiotic data, <strong>but none of the well data</strong>. So I tried
<code>Well.objects.filter(pk=1).select_related(['well_condition_antibiotics', 'antibiotic']).query</code>, which raised an error.</p>
<p>How can I generate a django query to include all well data and all well antibiotic data?</p>
|
<python><django><django-models><django-queryset>
|
2023-01-24 00:43:52
| 1
| 12,666
|
metersk
|
75,216,176
| 5,807,808
|
json normalize to Dataframe for nested objects, Python
|
<p>I am trying to convert JSON to a DataFrame using <code>json_normalize</code>.
This is the JSON I am working with:</p>
<pre><code>data = {
"Parent":
[
{
"Attributes":
[
{
"Values": [{
"Month": "Jan",
"Value": "100"
}],
"Id": "90",
"CustId": "3"
},
{
"Values": [{
"Month": "Jan",
"Value": "101"
}],
"Id": "88"
},
{
"Values": [{
"Month": "Jan",
"Value": "102"
}],
"Id": "89"
}
],
"DId": "1234"
},
{
"Attributes":
[
{
"Values": [{
"Month": "Jan",
"Value": "200"
}],
"Id": "90",
"CustId": "3"
},
{
"Values": [{
"Month": "Jan",
"Value": "201"
}],
"Id": "88"
},
{
"Values": [{
"Month": "Jan",
"Value": "202"
}],
"Id": "89"
}
],
"DId": "5678"
}
]
}
</code></pre>
<p>and this is what i have tried</p>
<pre><code>print(type(data))
result = pd.json_normalize(data, record_path=['Parent',['Attributes']], max_level=2)
print(result.to_string())
</code></pre>
<p>It gives a result, but the DId column is missing and the Values column is still a list of dicts:
<a href="https://i.sstatic.net/oDtZ9.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oDtZ9.png" alt="enter image description here" /></a></p>
<p>And this is what i want to achieve</p>
<p><a href="https://i.sstatic.net/1GaLm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1GaLm.png" alt="enter image description here" /></a></p>
<p>Any guidance how to accomplish it would be highly appreciated.</p>
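One possible approach, sketched below with a trimmed copy of the sample data: pass a multi-level <code>record_path</code> and pull <code>DId</code>, <code>Id</code> and <code>CustId</code> in via <code>meta</code>, with <code>errors="ignore"</code> since <code>CustId</code> is not present on every attribute. This is a sketch, not a verified answer:

```python
import pandas as pd

# Trimmed copy of the sample JSON from the question
data = {"Parent": [
    {"DId": "1234", "Attributes": [
        {"Id": "90", "CustId": "3", "Values": [{"Month": "Jan", "Value": "100"}]},
        {"Id": "88", "Values": [{"Month": "Jan", "Value": "101"}]},
    ]},
]}

# Walk two list levels (Attributes -> Values) and carry the parent fields
# along as metadata; errors="ignore" leaves NaN where CustId is absent
result = pd.json_normalize(
    data["Parent"],
    record_path=["Attributes", "Values"],
    meta=["DId", ["Attributes", "Id"], ["Attributes", "CustId"]],
    errors="ignore",
)
print(result)
```

The meta columns come out as <code>DId</code>, <code>Attributes.Id</code> and <code>Attributes.CustId</code>; they can be renamed afterwards if the dotted names are unwanted.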
|
<python><pandas><dataframe>
|
2023-01-23 23:42:35
| 2
| 401
|
peter
|
75,216,174
| 8,783,155
|
Torchserve streaming of inference responses with gRPC
|
<p>I am trying to send a single request to a Torchserve server and retrieve a stream of responses. The processing of the request takes some time and I would like to receive intermediate updates over the course of the run. I am quite new to Torchserve and especially gRPC, but I assume that I either need to write a custom endpoint plugin for Torchserve or alter the source code directly, since the current proto files of Torchserve support unary gRPC calls.</p>
<p>I have found examples of near real-time video which implemented a version of client-side streaming via request batching; however, that is not what I need.</p>
<p>Question: Is there a way to implement server-side response streaming in the latest Torchserve version? Or would I need to change the proto files and the Java source in order to allow for it?</p>
|
<python><pytorch><grpc><torchserve>
|
2023-01-23 23:42:22
| 1
| 781
|
P_Andre
|
75,216,089
| 6,205,382
|
How to calculate percentage of column using pandas pivot_table()
|
<p>I am attempting to get the frequency of objects per version expressed as a percentage.</p>
<p>Input <code>dfsof = pd.read_clipboard()</code></p>
<pre><code>file version object
path1 1.0 name
path1 1.0 session
path1 1.0 sequence
path2 2.01 name
path2 2.01 session
path2 2.01 sequence
path3 2.01 name
path3 2.01 session
path3 2.01 earthworm
</code></pre>
<p>Using the following, I am able to get frequency of each <code>file</code>.</p>
<p><code>dfsof.pivot_table(index=['object'], values=['file'], columns=['version'], aggfunc=len, fill_value=0, margins=True)</code></p>
<pre><code> file
version 1.0 2.01 All
object
earthworm 0 1 1
name 1 2 3
sequence 1 1 2
session 1 2 3
All 3 6 9
</code></pre>
<p>I want to divide each count per object/version by the total number of distinct files for that version. Using the expected return table as an example, earthworms shows up in the input only once for <code>version 2.01</code>, so I expect 0% for <code>version 1.0</code> and 50% for <code>version 2.01</code> since only one of the files has that value.</p>
<p>Using <code>dfsof.groupby('version')['file'].nunique()</code> returns the frequency of files per version, which is the denominator for each of object/version in the table above. What I am struggling with is how to apply the denominator values to the pivot_table. I have seen examples of this using grand totals and subtotals but I can't seem to find nor figure out how to divide by the unique number of files per version. Any help would be greatly appreciated.</p>
<pre><code>version
1.00 1
2.01 2
</code></pre>
<p>Expected return</p>
<pre><code> path
version 1.0 2.01 All
object
earthworm 0% 50% 1
name 100% 100% 3
sequence 100% 50% 2
session 100% 100% 3
All 3 6 9
</code></pre>
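For what it's worth, one way to get there (a sketch using the sample data above, leaving the margins out): build the count table, then divide column-wise by the per-version distinct file counts with <code>DataFrame.div(..., axis=1)</code>, which aligns on the version columns:

```python
import pandas as pd

dfsof = pd.DataFrame({
    "file": ["path1"] * 3 + ["path2"] * 3 + ["path3"] * 3,
    "version": [1.0] * 3 + [2.01] * 6,
    "object": ["name", "session", "sequence",
               "name", "session", "sequence",
               "name", "session", "earthworm"],
})

counts = dfsof.pivot_table(index="object", columns="version",
                           values="file", aggfunc=len, fill_value=0)
denom = dfsof.groupby("version")["file"].nunique()  # distinct files per version
pct = counts.div(denom, axis=1)  # divide each version column by its file count
print(pct * 100)
```

The margins row/column can be re-attached afterwards from <code>counts</code> if the totals should stay as raw counts.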
|
<python><pandas><group-by><pivot-table>
|
2023-01-23 23:26:08
| 1
| 2,239
|
CandleWax
|
75,215,883
| 17,696,880
|
Why does this capture group capture a single character but not everything that the capture group covers?
|
<pre class="lang-py prettyprint-override"><code>import re
#input_example
capture_where_capsule = "((PL_ADVB='la gran biblioteca rápidamente y luego llegamos allí')hacía)"
list_all_adverbs_of_place = ["de allí", "de alli", "allí", "alli", "de allá", "de alla", "allá", "alla", "arriba", "abajo", "a dentro", "adentro", "dentro", "a fuera", "afuera", "fuera", "hacía", "hacia", "encíma de", "encima de", "por sobre", "sobre"]
place_reference = r"((?i:\w\s*)+)"
pattern = re.compile(r"\(\(PL_ADVB='" + place_reference + r"'\)" + rf"{'|'.join(list_all_adverbs_of_place)}" + r"\)", re.IGNORECASE)
m1 = re.search(pattern, capture_where_capsule)
if m1:
place_reference_string = m1.group()[1]
print(repr(place_reference_string))
</code></pre>
<p>Why does this capturing group fail to capture all this substring?</p>
<p>The parentheses of the capturing group enclose the entire pattern that should be responsible for capturing the text that matches.</p>
<p>The substring that should capture would be this substring (and not other):</p>
<pre><code>'la gran biblioteca rΓ‘pidamente y luego llegamos allΓ'
</code></pre>
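As a side note on the retrieval itself: <code>m1.group()[1]</code> indexes the second <em>character</em> of the whole match string, while <code>m1.group(1)</code> returns the first capturing group. Also, the joined adverb alternation should be wrapped in a non-capturing group <code>(?:...)</code> so the trailing <code>\)</code> applies to every alternative. A minimal sketch of the group difference, using a simplified pattern rather than the full alternation:

```python
import re

capsule = "((PL_ADVB='la gran biblioteca rápidamente y luego llegamos allí')hacía)"

# Simplified pattern: capture whatever sits between the quotes
pattern = re.compile(r"\(\(PL_ADVB='(.*?)'\)", re.IGNORECASE)
m = pattern.search(capsule)

print(repr(m.group()[1]))  # second character of the whole match -> '('
print(repr(m.group(1)))    # the whole captured substring
```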
|
<python><python-3.x><regex><string><regex-group>
|
2023-01-23 22:54:24
| 1
| 875
|
Matt095
|
75,215,850
| 3,582,831
|
How should I split the dataset in the Keras data generator train-valid-test?
|
<p>How should I split the dataset in the Keras data generator <code>train-valid-test</code>?
Should I load the train folder as a train-valid subset and load the test folder as the test set to use with <code>model.predict(...)</code>, or use <code>valid_generator</code> again?</p>
<pre><code>train_datagen = ImageDataGenerator(rescale=1./255)
valid_datagen = ImageDataGenerator(rescale=1./255)
train_generator = train_datagen.flow_from_directory(
train_dir,
target_size=(224, 224),
class_mode='categorical')
valid_generator = valid_datagen.flow_from_directory(
test_dir,
target_size=(224, 224),
class_mode='categorical')
history = model.fit(train_generator,
epochs=10,
validation_data=valid_generator,
verbose=1)
</code></pre>
<p>Here is the predict step:</p>
<pre><code>pred = model.predict(valid_generator)
</code></pre>
|
<python><tensorflow><keras>
|
2023-01-23 22:50:18
| 1
| 4,867
|
coder
|
75,215,847
| 2,130,515
|
How to fix not found element on Selenium
|
<p>I want to run some query on <a href="https://www.scribbr.com/paraphrasing-tool/" rel="nofollow noreferrer">here</a></p>
<pre><code>from time import sleep
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
driver.get("https://www.scribbr.com/paraphrasing-tool/")
WebDriverWait(driver, 30).until(
EC.visibility_of_element_located((By.TAG_NAME, "iframe")))
frames = driver.find_elements(by=By.TAG_NAME, value='iframe')
for f in frames:
print("Frame src: ", f.get_attribute('src'))
# Frame src: https://quillbot.scribbr.com/?
# independentTool=true&language=EN&partnerCompany=scribbr.com
# Frame src:
# Frame src: https://consentcdn.cookiebot.com/sdk/bc-v4.min.html
driver.switch_to.frame(0)
print(driver.page_source)
# '<html><head></head><body></body></html>' !!!!
input_text_area = driver.find_element(By.ID, 'paraphraser-input-box') # `NoSuchElementException`
</code></pre>
|
<python><selenium>
|
2023-01-23 22:49:18
| 2
| 1,790
|
LearnToGrow
|
75,215,801
| 499,699
|
PyTorch Transformer Encoder masking sequence values
|
<p>It was my understanding that in the PyTorch <code>TransformerEncoder</code> I can pass a mask which would then stop certain features being attended to. By doing so, the model learns to "fill in the gaps" and becomes more resilient to noise and overfitting.</p>
<p>In the documentation for <code>TransformerEncoder</code> which I'm using, it is described as a <a href="https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html" rel="nofollow noreferrer"><code>mask</code></a> without any details. The other class I use <a href="https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html" rel="nofollow noreferrer">TransformerEncoderLayer</a> also doesn't describe it in detail, only the page about <a href="https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html" rel="nofollow noreferrer">torch.nn.Transformer</a> actually goes into some detail.</p>
<p>If I pass a mask which is <code>[B, S]</code> where <code>B</code> is batch, and <code>S</code> is the sequence, I'd be masking <em>which</em> part of the sequence to mask.</p>
<p>Alas, trying that ends in an error saying the mask needs to be <code>[S,S]</code>, which I don't understand. Searching the internet, this doesn't seem to be asked much, and only this <a href="https://stats.stackexchange.com/questions/598239/how-is-padding-masking-considered-in-the-attention-head-of-a-transformer">question on Cross Validated</a> gets into it, but without the information I needed.</p>
<p>Aside from using the <code>padding</code> mask (which I'd normally use to avoid attending to padded or null values of my Sequence) how can I realistically pass a mask which will tell the model to ignore certain parts of the input sequence (per Batch or per Sequence)?</p>
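For reference, the <code>[S, S]</code> argument is the attention mask (<code>mask</code>/<code>src_mask</code>), which is shared across the batch; a per-batch <code>[B, S]</code> mask is passed separately as <code>src_key_padding_mask</code>, and it can mark arbitrary positions, not just padding. A minimal sketch of hiding chosen positions per sequence:

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=16, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=1)

x = torch.randn(2, 5, 16)                     # [B, S, E]
ignore = torch.zeros(2, 5, dtype=torch.bool)  # [B, S]; True = do not attend to
ignore[0, 3:] = True                          # hide last two steps of sequence 0
ignore[1, 1] = True                           # hide one step of sequence 1

out = encoder(x, src_key_padding_mask=ignore)
print(out.shape)
```

Masked positions are excluded as attention keys, so the remaining positions never "see" them, which is the fill-in-the-gaps behaviour described above.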
|
<python><pytorch><transformer-model>
|
2023-01-23 22:43:01
| 1
| 14,954
|
Γlex
|
75,215,780
| 6,077,239
|
Implement qcut functionality using polars
|
<p>I have been using polars, but it seems to lack the qcut functionality that pandas has.</p>
<p>I am not sure about the reason, but is it possible to achieve the same effect as pandas qcut using the currently available polars functionality?</p>
<p>The following shows an example about what I can do with pandas qcut.</p>
<pre><code>import pandas as pd
data = pd.Series([11, 1, 2, 2, 3, 4, 5, 1, 2, 3, 4, 5])
pd.qcut(data, [0, 0.2, 0.4, 0.6, 0.8, 1], labels=['q1', 'q2', 'q3', 'q4', 'q5'])
</code></pre>
<p>The results are as follows:</p>
<pre><code>0 q5
1 q1
2 q1
3 q1
4 q3
5 q4
6 q5
7 q1
8 q1
9 q3
10 q4
11 q5
dtype: category
</code></pre>
<p>So, I am curious how can I get the same result by using polars?</p>
<p>Thanks for your help.</p>
|
<python><python-polars>
|
2023-01-23 22:39:52
| 1
| 1,153
|
lebesgue
|
75,215,442
| 1,115,833
|
efficient parsing of large nested json into a file
|
<p>I would like to parse this json <code>Affiliations</code> into a text file of entries without the email addresses:</p>
<pre><code># example of one entry: total entries: ~1million
{
"_index": "group",
"_type": "_doc",
"_id": "9890798789",
"_score": 1,
"_source": {
"Bibtex": {
"Article": {
"AuthorList": [
{
"Affiliation": {
"Affiliation": "Some departmentA, some university, city, country. Electronic address: gh@example.com."
}
},
{
"Affiliation": {
"Affiliation": "Some departmentB, some university, city, country. Electronic address: jh@example.com."
}
},
{
"Affiliation": {
"Affiliation": "Some departmentA, some university, city, country; Institute, Sydney, country. Electronic address: yu@example.com."
}
},
{
"Affiliation": {
"Affiliation": "Some department some university, city, country. Electronic address: nj@example.com."
}
},
{
"Affiliation": {
"Affiliation": "department, university, Sydney, country. Electronic address: bg@a.b.au."
}
},
{
"Affiliation": {
"Affiliation": "Some departmentA, some university, city, country. Electronic address: we@example.com."
}
}
]
}
}
}
}
</code></pre>
<p>output: text file with the following for one entry:</p>
<pre><code>Some departmentA, some university, city, country; Institute, Sydney, country.
Some departmentB, some university, city, country.
Some department some university, city, country.
department, university, Sydney, country.
....
more entries from other nested jsons
</code></pre>
<p>I have about 1 million entries of varying lengths, so I am unsure how to parallelise this (say with GNU parallel or Python multiprocessing) and still end up with unique entries with no email addresses.</p>
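Setting parallelisation aside for a moment, the per-record transformation itself can be sketched like this, using the structure of the sample record above; uniqueness can then be handled by accumulating into a <code>set</code> before writing:

```python
import re

def clean_affiliations(record: dict) -> list:
    """Return the affiliation strings of one record, emails stripped."""
    authors = record["_source"]["Bibtex"]["Article"]["AuthorList"]
    cleaned = []
    for author in authors:
        text = author["Affiliation"]["Affiliation"]
        # Drop the trailing "Electronic address: ..." clause
        text = re.sub(r"\s*Electronic address:.*$", "", text).strip()
        cleaned.append(text)
    return cleaned

record = {"_source": {"Bibtex": {"Article": {"AuthorList": [
    {"Affiliation": {"Affiliation":
        "Some departmentA, some university, city, country. "
        "Electronic address: gh@example.com."}},
]}}}}
print(clean_affiliations(record))
```

Because each record is independent, this function maps cleanly onto <code>multiprocessing.Pool.imap</code> over the records, with deduplication done once in the parent process.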
|
<python><python-3.x><bash>
|
2023-01-23 21:54:51
| 1
| 7,096
|
JohnJ
|
75,215,428
| 17,945,841
|
After applying PCA ,K-means assigns a center that is far from the cluster it self
|
<p>My data contains about 1200 observations and 25 features. I performed dimensionality reduction with PCA, and then used k-means.</p>
<p>It seems I'm getting random centers, as they are far from the cluster itself. Look at the yellow cluster for example, where is its center?</p>
<p>Notice that the clusters didn't move at all when I changed the k value, and the output I'm getting is just wrong. Have a look, this is for k = 3:</p>
<p><a href="https://i.sstatic.net/IZ2Vb.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/IZ2Vb.png" alt="enter image description here" /></a></p>
<p>and this is for k = 2, the <code>kmeans.cluster_centers_</code> is:</p>
<pre><code>array([[-0.10621571, -0.33357351, -0.31757225, -0.41859302, -0.34180062,
-0.40582209, -0.39882614, -0.41326583, -0.40258176, -0.27391731,
-0.38387384, -0.373283 , 0.30265487, -0.16514534, -0.38956931,
-0.18097048, -0.38672061, -0.38919503, -0.3935108 , -0.26672966,
-0.3337329 , -0.27723583, -0.29186106, -0.32065177, -0.29334765],
[ 0.47938423, 1.50551998, 1.43330135, 1.88923921, 1.54265145,
1.8316001 , 1.80002522, 1.86519597, 1.81697549, 1.23627317,
1.73254086, 1.68474116, -1.36597464, 0.74535178, 1.75824629,
0.8167755 , 1.74538923, 1.75655704, 1.77603542, 1.20383309,
1.50623935, 1.25125068, 1.31725887, 1.44720019, 1.3239683 ]])
</code></pre>
<p>and it looks like this:</p>
<p><a href="https://i.sstatic.net/20d38.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/20d38.png" alt="enter image description here" /></a></p>
<h4>Here is the code:</h4>
<pre><code>kmeans = KMeans(n_clusters=2, random_state=0).fit(scores_data)
metadata['cluster'] = kmeans.labels_
# Apply PCA to the data to visualize the clusters in 2D
pca = PCA(n_components=2)
pca_result = pca.fit_transform(scores_data)
plt.scatter(pca_result[:, 0], pca_result[:, 1], c=metadata['cluster'], s=50)
cluster_centers = kmeans.cluster_centers_
plt.scatter(cluster_centers[:, 0], cluster_centers[:, 1], c='red', marker='o')
# Add labels to the cluster centers
for i, center in enumerate(cluster_centers):
plt.annotate(f"Cluster {i}", (center[0], center[1]),
textcoords="offset points",
xytext=(0,10), ha='center', fontsize=20)
plt.rcParams["figure.figsize"] = (16,11)
plt.show()
</code></pre>
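One likely culprit, offered as a guess based on the plotting code: <code>kmeans.cluster_centers_</code> lives in the original 25-dimensional feature space, but it is being scattered directly onto the 2-D PCA axes, so only its first two raw coordinates are drawn. Projecting the centers through the same fitted PCA puts them on top of the clusters. A self-contained sketch with synthetic data:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
# Two well-separated blobs in 25 dimensions, mimicking the setup
X = np.vstack([rng.normal(0, 1, (100, 25)), rng.normal(5, 1, (100, 25))])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
pca = PCA(n_components=2).fit(X)

points_2d = pca.transform(X)
# Key step: project the 25-D centers with the SAME fitted PCA
centers_2d = pca.transform(kmeans.cluster_centers_)
print(centers_2d.shape)
```

In the original code this amounts to replacing <code>cluster_centers</code> with <code>pca.transform(kmeans.cluster_centers_)</code> before plotting.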
|
<python><machine-learning><cluster-analysis><k-means>
|
2023-01-23 21:53:34
| 0
| 1,352
|
Programming Noob
|
75,215,389
| 13,142,245
|
Pandas pivot to explode columns and fill values?
|
<p>I have a Pandas dataframe like so</p>
<pre><code>movie, week, sales
-----, ----, ------
1, 1, 100
...
1, 52, 200
2, 1, 500,
...
</code></pre>
<p>What I actually want it to look like is</p>
<pre><code>movie, week_1, ... week_52,
1, 1, 200
2, 1, 500
</code></pre>
<p>So what's effectively happened is there is now one row per movie and one column per week and the value of <code>df[movie, week]</code> equals the sales of that movie on that week.</p>
<p>I've tried <code>df.transpose()</code> and <code>df.pivot(columns=['week', 'sales'])</code> but neither is accomplishing the intended effect.</p>
<p>What's the Pandas way to do this?</p>
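For reference, a sketch with a small stand-in for the data: <code>pivot</code> with <code>index</code>, <code>columns</code> and <code>values</code> all specified (the call above omits <code>index</code> and <code>values</code>), then prefix the week columns:

```python
import pandas as pd

df = pd.DataFrame({"movie": [1, 1, 2], "week": [1, 52, 1],
                   "sales": [100, 200, 500]})

wide = (
    df.pivot(index="movie", columns="week", values="sales")
    .add_prefix("week_")   # 1 -> "week_1", 52 -> "week_52"
    .reset_index()
)
print(wide)
```

Weeks with no sales for a movie come out as NaN; <code>fillna(0)</code> can be chained in if zeros are wanted instead.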
|
<python><pandas><dataframe><pivot>
|
2023-01-23 21:48:22
| 0
| 1,238
|
jbuddy_13
|
75,215,360
| 11,370,582
|
Pass Pandas Dataframe between functions using Upload component in Plotly Dash
|
<p>I am working with an Excel workbook in plotly dash and I need to access the dataframe it returns so I can use it as an input to another function, I'm following this tutorial - <a href="https://dash.plotly.com/dash-core-components/upload" rel="nofollow noreferrer">https://dash.plotly.com/dash-core-components/upload</a></p>
<p>I've tried a couple of approaches, per this solution here - <a href="https://stackoverflow.com/questions/68181416/is-it-possible-to-upload-a-csv-file-in-dash-and-also-store-it-as-a-pandas-datafr">Is it possible to upload a csv file in Dash and also store it as a pandas DataFrame?</a></p>
<p>but neither is working. When I set <code>df</code> as a global variable, which I also know is not good practice, I get an error in the app that it is not defined: <code>NameError: name 'df' is not defined</code></p>
<p>I've also tried to pass the <code>df</code> variable between the functions but am unclear on how to access it when the inputs to the function <code>parse_contents</code> are all coming from the dash html component.</p>
<p>Here is my current code, you should be able to execute it with any excel workbook.</p>
<pre><code>import base64
import datetime
import io
import dash
from dash.dependencies import Input, Output, State
from dash import dcc, html, dash_table
import pandas as pd
external_stylesheets = ['https://codepen.io/chriddyp/pen/bWLwgP.css']
app = dash.Dash(__name__, external_stylesheets=external_stylesheets)
app.layout = html.Div([
dcc.Upload(
id='upload-data',
children=html.Div([
'Drag and Drop or ',
html.A('Select Files')
]),
style={
'width': '100%',
'height': '60px',
'lineHeight': '60px',
'borderWidth': '1px',
'borderStyle': 'dashed',
'borderRadius': '5px',
'textAlign': 'center',
'margin': '10px'
},
# Allow multiple files to be uploaded
multiple=True
),
html.Div(id='output-data-upload'),
])
def parse_contents(contents, filename, date):
content_type, content_string = contents.split(',')
global df #define data frame as global
decoded = base64.b64decode(content_string)
try:
if 'csv' in filename:
# Assume that the user uploaded a CSV file
df = pd.read_csv(
io.StringIO(decoded.decode('utf-8')))
elif 'xls' in filename:
# Assume that the user uploaded an excel file
print(io.BytesIO(decoded))
workbook_xl = pd.ExcelFile(io.BytesIO(decoded))
df = pd.read_excel(workbook_xl, sheet_name=0)
# print(df)
except Exception as e:
print(e)
return html.Div([
'There was an error processing this file.'
])
return html.Div([
html.H5(filename),
html.H6(datetime.datetime.fromtimestamp(date)),
dash_table.DataTable(
df.to_dict('records'),
[{'name': i, 'id': i} for i in df.columns]
),
html.Hr(), # horizontal line
# For debugging, display the raw contents provided by the web browser
html.Div('Raw Content'),
html.Pre(contents[0:200] + '...', style={
'whiteSpace': 'pre-wrap',
'wordBreak': 'break-all'
})
]), df
@app.callback(Output('output-data-upload', 'children'),
Input('upload-data', 'contents'),
State('upload-data', 'filename'),
State('upload-data', 'last_modified'))
def update_output(list_of_contents, list_of_names, list_of_dates):
print(df)
if list_of_contents is not None:
children = [
parse_contents(c, n, d) for c, n, d in
zip(list_of_contents, list_of_names, list_of_dates)]
return children
if __name__ == '__main__':
app.run_server(debug=True)
</code></pre>
|
<python><pandas><function><parameter-passing><plotly-dash>
|
2023-01-23 21:44:44
| 1
| 904
|
John Conor
|
75,215,318
| 4,106,634
|
Combine multiple table schemas in one separate table in databricks
|
<p>I am learning Databricks and going through an exploration and research phase. I found various tools while researching Python syntax, i.e. <strong>DataFrames</strong> with <strong>PySpark</strong>, the <strong>Bamboo</strong> library, the <strong>Apache Spark</strong> library to read SQL objects, <strong>pandas</strong>, etc.</p>
<p>But somehow I am mixing up the usage of all these libraries.</p>
<p>I am exploring these alternatives to achieve one task. How to combine or merge multiple table schemas in one table.</p>
<p>For an instance, if I have 20 tables. Table1, Table2, Table3, ... , Table20.</p>
<h2>Table1 has 3 columns.</h2>
<p>Col1 | Col2 | Col3 |</p>
<h2>Table2 has 4 columns.</h2>
<p>Col4 | Col5 | Col6 | Col7</p>
<p>and that way all 20 table has such columns.</p>
<p>Can the community provide some insight to approach this implementation?</p>
<p>This is greatly appreciated.</p>
<p><strong>Troubleshooting</strong></p>
<pre><code>schema1 = "database string, table string"
schema2 = "table string, column string, datatype string"
tbl_df = spark.createDataFrame([],schema1)
tbl_df3 = spark.createDataFrame([],schema2)
db_list = [x[0] for x in spark.sql("SHOW DATABASES").rdd.collect()]
for db in db_list:
# Get list of tables from each the database
db_tables = spark.sql(f"SHOW TABLES in {db}").rdd.map(lambda row: row.tableName).collect()
# For each table, get list of columns
for table in db_tables:
#initialize the database
spark.sql(f"use {db}")
df = spark.createDataFrame([(db, table.strip())], schema=['database', 'table'])
tbl_df = tbl_df.union(df)
</code></pre>
<p>The above code works fine and gives me a list of all databases and their associated tables. The next thing I am trying to achieve is <code>schema2</code>.
Based on the list of tables, I managed to retrieve the list of columns from all tables, but I believe it is returned in the form of <code>Row</code> objects.
For example, when I iterate with a for loop over <code>db_tables</code> as below,</p>
<pre><code>columns = spark.sql(f"DESCRIBE TABLE {table}").rdd.collect()
</code></pre>
<p>this gives me below result.</p>
<pre><code>[Row(col_name='Col1', data_type='timestamp', comment=None), Row(col_name='Col2', data_type='string', comment=None), Row(col_name='Col3', data_type='string', comment=None)]
[Row(col_name='Col4', data_type='timestamp', comment=None), Row(col_name='Col5', data_type='timestamp', comment=None), Row(col_name='Col6', data_type='timestamp', comment=None), Row(col_name='Col7', data_type='timestamp', comment=None)]
</code></pre>
<p>This is my real challenge now. I am trying to figure out how to access the <code>Row</code> format above and transform it into the tabular outcome below.</p>
<pre><code>Table | Column | Datatype
-------------------------
Table1| Col1 | Timestamp
Table1| Col2 | string
Table1| Col3 | string
Table2| Col4 | Timestamp
Table2| Col5 | string
Table2| Col6 | string
Table2| Col7 | string
</code></pre>
<p>Finally I will merge or join 2 dataframes based on table name (taking it as key) and generate final outcome like below.</p>
<pre><code>Database| Table | Column | Datatype
------------------------------------
Db1 | Table1| Col1 | Timestamp
Db1 | Table1| Col2 | string
Db1 | Table1| Col3 | string
Db1 | Table2| Col4 | Timestamp
Db1 | Table2| Col5 | string
Db1 | Table2| Col6 | string
Db1 | Table2| Col7 | string
</code></pre>
|
<python><azure-databricks>
|
2023-01-23 21:39:50
| 1
| 422
|
adventureworks
|
75,215,099
| 1,098,792
|
Why am I only able to scrape 100 tweets with snscrape?
|
<p>So granted, I'm not technical or experienced with Python, but I need to scrape maybe 5,000 tweets from the Twitter account of a particular user. I got it working, but I can only scrape 100 tweets; I'm not sure what I'm doing wrong.</p>
<p>Here's the code I use:</p>
<pre><code>snscrape --jsonl --progress --max-results 100 twitter-search "from:jack" > user-tweets.json
tweets_df = pd.read_json('user-tweets.json', lines=True)
tweets_df
</code></pre>
<p>And then it throws a message saying there's invalid syntax at the "100" line, but it still returns 100 tweets anyway. If I try and increase that number to, say, 1000 tweets, it just doesn't work at all. How do I need to fix this so I can retrieve more tweets?</p>
|
<python><web-scraping><twitter>
|
2023-01-23 21:12:25
| 1
| 9,623
|
Tom Maxwell
|
75,215,078
| 2,925,387
|
aioredis - how to process redis messages asynchronously?
|
<p>I have to process every message from redis asynchronously.
Here is my attempt with aioredis:</p>
<pre class="lang-py prettyprint-override"><code>import asyncio
import aioredis
async def reader(channel: aioredis.client.PubSub):
while True:
data = None
try:
message = await channel.get_message(ignore_subscribe_messages=True)
if message is not None:
print(f"(Reader) Message Received: {message}")
data = message["data"]
except asyncio.TimeoutError:
pass
if data is not None:
await process_message(data)
async def process_message(message):
print(f"start process {message=}")
await asyncio.sleep(10)
print(f"+processed {message=}")
async def publish(redis, channel, message):
print(f"-->publish {message=} to {channel=}")
result = await redis.publish(channel, message)
print(" +published")
return result
async def main():
redis = aioredis.from_url("redis://localhost")
pubsub = redis.pubsub()
await pubsub.subscribe("channel:1", "channel:2")
future = asyncio.create_task(reader(pubsub))
await publish(redis, "channel:1", "Hello")
await publish(redis, "channel:2", "World")
await future
if __name__ == "__main__":
asyncio.run(main())
</code></pre>
<p>The problem is that the reader does not call <code>get_message</code> again until the previous message has been processed, so the messages are handled one by one.</p>
<p>How to solve that issue?</p>
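One common pattern, shown here as a redis-free sketch of the idea so it runs standalone: instead of <code>await process_message(data)</code> inside the reader loop, schedule it with <code>asyncio.create_task(process_message(data))</code>, so the loop can fetch the next message immediately while earlier ones are still being processed:

```python
import asyncio
import time

async def process_message(message):
    await asyncio.sleep(0.2)  # stand-in for slow per-message work
    return message

async def reader(messages):
    tasks = []
    for data in messages:
        # Schedule instead of awaiting inline: the loop keeps reading
        tasks.append(asyncio.create_task(process_message(data)))
    return await asyncio.gather(*tasks)

start = time.perf_counter()
results = asyncio.run(reader(["Hello", "World"]))
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")  # both finish concurrently
```

In the real reader, keep references to the scheduled tasks (e.g. in a set, discarding them on completion) so they are not garbage-collected mid-flight.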
|
<python><redis><aioredis>
|
2023-01-23 21:09:46
| 1
| 789
|
SKulibin
|
75,215,046
| 8,065,797
|
Replace incorrect occurrences of commas
|
<p>I would need to replace all incorrect occurrences of commas in the text</p>
<pre><code>jlkj,jlkj
jlkj, jlkj
jlkj,jlkj
jlkj , jlkj
</code></pre>
<p>with <code>, </code>, meaning the result would be <code>jlkj, jlkj</code> in all of the cases above.</p>
<p>The pattern I came up with does not work: <code>(.*),(.*)</code></p>
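A sketch of one way to do it with <code>re.sub</code>, normalising any whitespace around a comma rather than trying to capture the surrounding text:

```python
import re

samples = ["jlkj,jlkj", "jlkj, jlkj", "jlkj , jlkj"]
# \s*,\s* matches a comma with any surrounding whitespace
fixed = [re.sub(r"\s*,\s*", ", ", s) for s in samples]
print(fixed)  # every variant normalises to 'jlkj, jlkj'
```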
|
<python><regex>
|
2023-01-23 21:05:49
| 1
| 529
|
JanFi86
|
75,215,044
| 402,649
|
Python WSGI ignoring python-home virtual environment?
|
<p>Using WSGI with Python 3.6 on RHEL 8 to run Flask on Apache.</p>
<p>In the WSGI configuration, we have the following line:</p>
<pre><code>WSGIDaemonProcess myapp user=apache group=apache threads=5 python-home=/var/www/FLASKAPPS/myapp/venv
</code></pre>
<p>When activating the virtual environment and running the Flask application from the command line, the application works fine in the test server.</p>
<p>When connecting through Apache, the error_log records that modules that are in the venv can't be loaded.</p>
<p>NOTE: I have seen some other questions on this topic, but they all seem to be about using a different version of Python than the version used to set up WSGI. In my case, the WSGI module is for Python 3.6 as far as I can tell.</p>
|
<python><python-3.6><wsgi>
|
2023-01-23 21:05:32
| 1
| 3,948
|
Wige
|
75,214,968
| 8,964,393
|
How to create lists from pandas columns
|
<p>I have created a pandas dataframe using this code:</p>
<pre><code>import numpy as np
import pandas as pd
ds = {'col1': [1,2,3,3,3,6,7,8,9,10]}
df = pd.DataFrame(data=ds)
</code></pre>
<p>The dataframe looks like this:</p>
<pre><code>print(df)
col1
0 1
1 2
2 3
3 3
4 3
5 6
6 7
7 8
8 9
9 10
</code></pre>
<p>I need to create a column called <code>col2</code> that contains, for each record, a list of the last 3 elements of <code>col1</code> seen so far. So, the resulting dataframe would look like this:</p>
<p><a href="https://i.sstatic.net/vNvld.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vNvld.png" alt="enter image description here" /></a></p>
<p>Does anyone know how to do it by any chance?</p>
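A sketch of one approach, assuming each list should hold the current value plus up to the two preceding ones (the expected output above is only shown as an image, so adjust the window if the intent differs):

```python
import pandas as pd

df = pd.DataFrame({"col1": [1, 2, 3, 3, 3, 6, 7, 8, 9, 10]})

# For row i, slice the last 3 values of col1 up to and including row i
df["col2"] = [df["col1"].iloc[max(0, i - 2): i + 1].tolist()
              for i in range(len(df))]
print(df)
```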
|
<python><pandas><list><iterator><field>
|
2023-01-23 20:56:40
| 4
| 1,762
|
Giampaolo Levorato
|
75,214,941
| 817,630
|
How should I type-hint a generic class method that calls another class method to construct an instance of the class?
|
<p>I'm trying to figure out the type hinting for a couple of abstract classes that I want to use as base classes for classes that have a <code>create</code> function. Specifically, this is for deserialization typing.</p>
<p>My simple example looks like this</p>
<pre class="lang-py prettyprint-override"><code>from abc import ABC, abstractmethod
from typing import Type, TypeVar
T = TypeVar("T", bound="A")
class A(ABC):
@classmethod
@abstractmethod
def create(cls: Type[T]) -> T:
pass
class B(A, ABC):
@classmethod
@abstractmethod
def create_b(cls: Type[T]) -> T:
pass
@classmethod
def create(cls) -> T:
return cls.create_b()
</code></pre>
<p>When I run Mypy against this I get</p>
<blockquote>
<p>error: Incompatible return value type (got "B", expected "T")</p>
</blockquote>
<p>I'm confused by this because <code>B</code> inherits from <code>A</code>, and I thought that <code>T</code> more or less represented "any <code>A</code>".</p>
<p>I can change the penultimate line to</p>
<pre class="lang-py prettyprint-override"><code> def create(cls: Type[T]) -> T:
</code></pre>
<p>but then I get</p>
<blockquote>
<p>error: "Type[T]" has no attribute "create_b"</p>
</blockquote>
<p>What should I be doing to get mypy to pass?</p>
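One fix that I believe satisfies mypy (a sketch, not verified against every mypy version): introduce a second type variable bound to <code>B</code>, so that inside <code>B.create</code> the class is known to have <code>create_b</code>. On Python 3.11+, <code>typing.Self</code> expresses the same thing more directly.

```python
from abc import ABC, abstractmethod
from typing import Type, TypeVar

T = TypeVar("T", bound="A")
TB = TypeVar("TB", bound="B")  # bound to B, so Type[TB] has create_b

class A(ABC):
    @classmethod
    @abstractmethod
    def create(cls: Type[T]) -> T: ...

class B(A, ABC):
    @classmethod
    @abstractmethod
    def create_b(cls: Type[TB]) -> TB: ...

    @classmethod
    def create(cls: Type[TB]) -> TB:
        return cls.create_b()

# Hypothetical concrete subclass to show the inferred return type
class C(B):
    @classmethod
    def create_b(cls):
        return cls()

print(type(C.create()).__name__)  # C
```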
|
<python><mypy><typing><type-variables>
|
2023-01-23 20:54:08
| 1
| 5,912
|
Kris Harper
|
75,214,612
| 1,664,944
|
Nested JSON to Pandas Data frame
|
<p><strong>Input JSON</strong></p>
<pre><code>{
"tables": {
"Cloc": {
"MINT": {
"CANDY": {
"Mob": [{
"loc": "AA",
"loc2": ["AAA"]
},
{
"loc": "AA",
"loc2": ["AAA"]
}
]
}
}
},
"T1": {
"MINT": {
"T2": {
"T3": [{
"loc": "AAA",
"loc2": ["AAA"]
}]
}
}
}
}
}
</code></pre>
<p><strong>Expected Output</strong></p>
<p><a href="https://i.sstatic.net/ZBeZo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ZBeZo.png" alt="enter image description here" /></a></p>
<p>I have tried processing this nested JSON using pd.json_normalize()</p>
<pre><code>data = pd.DataFrame(nested_json['tables']['Cloc']['MINT']['CANDY']['Mob'])
</code></pre>
<p>I have no clue how to proceed, any help or guidance is highly appreciated.</p>
<p>Many Thanks!!</p>
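Since the nesting depth varies between branches (<code>CANDY/Mob</code> vs <code>T2/T3</code>), a small recursive walk may be easier than <code>json_normalize</code> here. A sketch that yields one row per leaf record, carrying the key path along (the column names are guesses at what the pictured output wants):

```python
import pandas as pd

nested_json = {"tables": {
    "Cloc": {"MINT": {"CANDY": {"Mob": [
        {"loc": "AA", "loc2": ["AAA"]},
        {"loc": "AA", "loc2": ["AAA"]},
    ]}}},
    "T1": {"MINT": {"T2": {"T3": [
        {"loc": "AAA", "loc2": ["AAA"]},
    ]}}},
}}

def walk(node, path=()):
    # Lists hold the leaf records; dicts are intermediate levels
    if isinstance(node, list):
        for rec in node:
            yield {"path": "/".join(path), **rec}
    elif isinstance(node, dict):
        for key, child in node.items():
            yield from walk(child, path + (key,))

df = pd.DataFrame(walk(nested_json["tables"]))
print(df)
```

The <code>path</code> column can then be split into separate level columns with <code>df["path"].str.split("/", expand=True)</code> if needed.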
|
<python><json><pandas><nested-json>
|
2023-01-23 20:16:06
| 1
| 706
|
Alind Billore
|
75,214,218
| 372,429
|
Looking for python equivalent of powershell code
|
<p>I have some PowerShell code that I want to replicate in Python:</p>
<pre><code>$secretARgs = @{ fileName = "ssltls.crt"
>> fileAttachment = [IO.File]::ReadAllBytes("c:\users\myuser\git\myrepos\ssltls.crt")
>> } | ConvertTo-Json
</code></pre>
<p>It uses this in a REST API call to store a file in an application called securevault.<br />
I am not sure what the Python equivalent of this ReadAllBytes-then-convert-to-JSON would be.</p>
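For what it's worth, <code>ConvertTo-Json</code> serialises a <code>byte[]</code> as a JSON array of integers, so a close Python equivalent would be the sketch below (the certificate path is a placeholder; a throwaway file stands in for it):

```python
import json
import tempfile
from pathlib import Path

def file_payload(path) -> str:
    raw = Path(path).read_bytes()  # equivalent of [IO.File]::ReadAllBytes(...)
    return json.dumps({
        "fileName": Path(path).name,
        # ConvertTo-Json renders a byte[] as a JSON array of integers
        "fileAttachment": list(raw),
    })

# Demo with a throwaway file instead of the real certificate path
with tempfile.TemporaryDirectory() as tmp:
    cert = Path(tmp) / "ssltls.crt"
    cert.write_bytes(b"abc")
    payload = file_payload(cert)
print(payload)
```

If the API actually expects base64 rather than an integer array, swap in <code>base64.b64encode(raw).decode("ascii")</code> for the attachment value.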
|
<python><powershell>
|
2023-01-23 19:33:54
| 1
| 479
|
archcutbank
|
75,214,198
| 20,589,275
|
How to call a function in django view.py
|
<p>I need to call the function get_context_data in my VacanciesView.
Code views.py:</p>
<pre><code>def VacanciesView(request):
navigate_results = Navigate.objects.all()
context_vac = { 'navigate_results': navigate_results}
get_context_data(self, **kwargs)
return render(request, 'main/vacancies.html', context_vac)
def get_context_data(self, **kwargs):
context = super(VacanciesView, self).get_context_data(**kwargs)
context['vacancies'] = sorted(get_vacancies(), key=lambda item: item["published_at"][:10])
return context
</code></pre>
<p>I tried calling it with <code>get_context_data(self, **kwargs)</code>, but it raises: <code>NameError: name 'self' is not defined</code></p>
|
<python><django>
|
2023-01-23 19:32:00
| 1
| 650
|
Proger228
|
75,214,006
| 391,104
|
Use Python to split a string and output each token on a separate line?
|
<pre><code>$ echo '"a1","a2","a3"'|python3 -c "import sys; print('\n'.join(sys.stdin.read().splitlines()), sep='\n');"
"a1","a2","a3"
$ echo '"a1","a2","a3"'|python3 -c "import sys; [print(a, sep='\n') for a in sys.stdin.read().splitlines()];"
"a1","a2","a3"
$ echo '"a1","a2","a3"'|python3 -c "import sys,pprint; pprint.pprint('\n'.join(sys.stdin.read().splitlines()));"
'"a1","a2","a3"'
</code></pre>
<p>I have tried many different methods, but none of them work for me.
I would like to print each token on a separate line.</p>
<p><strong>Question</strong>> How can I get the following results?</p>
<pre><code>"a1"
"a2"
"a3"
</code></pre>
<p>Thank you</p>
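The pipeline delivers a single line, and <code>splitlines()</code> only splits on newline characters, which that line does not contain; splitting on the comma is what produces one token per line. A sketch of the core idea:

```python
line = '"a1","a2","a3"'
print("\n".join(line.split(",")))
```

In the shell pipeline form, the same idea would read stdin, strip the trailing newline, and <code>split(',')</code> before joining with newlines.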
|
<python>
|
2023-01-23 19:14:05
| 1
| 36,152
|
q0987
|
75,213,916
| 9,582,542
|
Return Dynamic MSSQL query from python
|
<p>I have this script below. I left out the connection details for security purposes, but the code executes without error in Python and in MS SQL 2019.</p>
<pre><code>import pandas as pd
import pyodbc
sqlInsertScript = """
SELECT 'INSERT INTO dbo.table(' +
STUFF ((
SELECT ', [' + name + ']'
FROM syscolumns
WHERE id = OBJECT_ID('dbo.table') AND
name <> 'me'
FOR XML PATH('')), 1, 1, '') +
')
Select ' +
STUFF ((
SELECT ', [' + name + ']'
FROM syscolumns
WHERE id = OBJECT_ID('dbo.table') AND
name <> 'me'
FOR XML PATH('')), 1, 1, '') + '
From dbo.QueryPerformance
where EntryID > Number'
"""
insertquery = pd.read_sql_query(sqlInsertScript,cnxn1)
</code></pre>
<p>My issue is that this query returns <code>0 None</code> in Python. I need it to return the string I am creating, because I intend to use that query going forward. I know the query works: it returns the correct text when run from MS SQL SSMS.</p>
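<p>One likely culprit worth checking: in T-SQL, concatenating <code>NULL</code> yields <code>NULL</code>, so if <code>OBJECT_ID('dbo.table')</code> resolves to <code>NULL</code> in the database the connection actually points at, the whole generated string becomes <code>NULL</code> (shown by pandas as <code>None</code>). A sketch of fetching the scalar through a cursor instead — <code>sqlite3</code> stands in here purely to make the example runnable; with pyodbc the cursor API is the same:</p>

```python
import sqlite3

# sqlite3 is a stand-in for the asker's pyodbc connection (cnxn1).
cnxn = sqlite3.connect(":memory:")
cur = cnxn.cursor()

# Stand-in for the SELECT that assembles the dynamic INSERT statement.
cur.execute("SELECT 'INSERT INTO dbo.t(a) SELECT a FROM src'")
generated_sql = cur.fetchone()[0]

if generated_sql is None:
    # In T-SQL this happens when OBJECT_ID('dbo.table') is NULL,
    # i.e. the table is not visible to the connected database/user.
    raise ValueError("dynamic SQL came back NULL")
```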
|
<python>
|
2023-01-23 19:03:36
| 1
| 690
|
Leo Torres
|
75,213,815
| 5,266,176
|
is it possible to run jupyter nbconvert in colab to execute on ipynb file that mounts to google drive?
|
<p>I am trying to execute a series of ipynb notebooks sequentially from another notebook, in Colab.</p>
<pre><code>from google.colab import drive
drive.mount('/content/gdrive')
!jupyter nbconvert --to notebook --ExecutePreprocessor.timeout=600 --execute "/content/drive/My Drive/Mike Education/Code/Extract/Reading Levels/Read in Cleaned Reading Levels Data.ipynb"
</code></pre>
<p>the request times out, even though if I run the code manually it works fine in well under 10 minutes.</p>
<p>Executing with debug output shows it times out at the mount step:</p>
<pre><code>[NbConvertApp] Searching ['/root/.jupyter', '/root/.local/etc/jupyter', '/usr/etc/jupyter', '/usr/local/etc/jupyter', '/etc/jupyter'] for config files
[NbConvertApp] Looking for jupyter_config in /etc/jupyter
[NbConvertApp] Looking for jupyter_config in /usr/local/etc/jupyter
[NbConvertApp] Looking for jupyter_config in /usr/etc/jupyter
[NbConvertApp] Looking for jupyter_config in /root/.local/etc/jupyter
[NbConvertApp] Looking for jupyter_config in /root/.jupyter
[NbConvertApp] Looking for jupyter_nbconvert_config in /etc/jupyter
[NbConvertApp] Looking for jupyter_nbconvert_config in /usr/local/etc/jupyter
[NbConvertApp] Looking for jupyter_nbconvert_config in /usr/etc/jupyter
[NbConvertApp] Looking for jupyter_nbconvert_config in /root/.local/etc/jupyter
[NbConvertApp] Looking for jupyter_nbconvert_config in /root/.jupyter
[NbConvertApp] Converting notebook /content/drive/My Drive/Mike Education/Code/Extract/Reading Levels/Read in Cleaned Reading Levels Data.ipynb to notebook
[NbConvertApp] Notebook name is 'Read in Cleaned Reading Levels Data'
[NbConvertApp] Applying preprocessor: ExecutePreprocessor
[NbConvertApp] Starting kernel: ['/usr/bin/python3', '-m', 'ipykernel_launcher', '-f', '/tmp/tmp1x8jnmqk.json', '--HistoryManager.hist_file=:memory:']
[NbConvertApp] Connecting to: tcp://127.0.0.1:47431
[NbConvertApp] connecting iopub channel to tcp://127.0.0.1:45195
[NbConvertApp] Connecting to: tcp://127.0.0.1:45195
[NbConvertApp] connecting shell channel to tcp://127.0.0.1:44753
[NbConvertApp] Connecting to: tcp://127.0.0.1:44753
[NbConvertApp] connecting stdin channel to tcp://127.0.0.1:55067
[NbConvertApp] Connecting to: tcp://127.0.0.1:55067
[NbConvertApp] connecting heartbeat channel to tcp://127.0.0.1:55431
[NbConvertApp] connecting control channel to tcp://127.0.0.1:47431
[NbConvertApp] Connecting to: tcp://127.0.0.1:47431
[NbConvertApp] Executing notebook with kernel: python3
[NbConvertApp] Executing cell:
#imports
import pandas as pd
import numpy as np
from google.colab import drive
from google.colab import files
import gspread
import time
import os
from google.colab import auth
auth.authenticate_user()
import gspread
from google.auth import default
creds, _ = default()
gc = gspread.authorize(creds)
from oauth2client.client import GoogleCredentials
drive.mount('/content/drive')
[NbConvertApp] msg_type: status
[NbConvertApp] content: {'execution_state': 'busy'}
[NbConvertApp] msg_type: execute_input
[NbConvertApp] content: {'code': "#imports\nimport pandas as pd\nimport numpy as np\nfrom google.colab import drive\nfrom google.colab import files\nimport gspread\nimport time\nimport os\nfrom google.colab import auth\nauth.authenticate_user()\n\nimport gspread\nfrom google.auth import default\ncreds, _ = default()\n\ngc = gspread.authorize(creds)\nfrom oauth2client.client import GoogleCredentials\n\ndrive.mount('/content/drive')\n", 'execution_count': 1}
[NbConvertApp] ERROR | Timeout waiting for execute reply (600s).
[NbConvertApp] Kernel is taking too long to finish, terminating
Traceback (most recent call last):
File "/usr/local/bin/jupyter-nbconvert", line 8, in <module>
sys.exit(main())
File "/usr/local/lib/python3.8/dist-packages/jupyter_core/application.py", line 277, in launch_instance
return super().launch_instance(argv=argv, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/traitlets/config/application.py", line 992, in launch_instance
app.start()
File "/usr/local/lib/python3.8/dist-packages/nbconvert/nbconvertapp.py", line 340, in start
self.convert_notebooks()
File "/usr/local/lib/python3.8/dist-packages/nbconvert/nbconvertapp.py", line 510, in convert_notebooks
self.convert_single_notebook(notebook_filename)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/nbconvertapp.py", line 481, in convert_single_notebook
output, resources = self.export_single_notebook(notebook_filename, resources, input_buffer=input_buffer)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/nbconvertapp.py", line 410, in export_single_notebook
output, resources = self.exporter.from_filename(notebook_filename, resources=resources)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/exporters/exporter.py", line 179, in from_filename
return self.from_file(f, resources=resources, **kw)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/exporters/exporter.py", line 197, in from_file
return self.from_notebook_node(nbformat.read(file_stream, as_version=4), resources=resources, **kw)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/exporters/notebook.py", line 32, in from_notebook_node
nb_copy, resources = super(NotebookExporter, self).from_notebook_node(nb, resources, **kw)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/exporters/exporter.py", line 139, in from_notebook_node
nb_copy, resources = self._preprocess(nb_copy, resources)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/exporters/exporter.py", line 316, in _preprocess
nbc, resc = preprocessor(nbc, resc)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/base.py", line 47, in __call__
return self.preprocess(nb, resources)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/execute.py", line 405, in preprocess
nb, resources = super(ExecutePreprocessor, self).preprocess(nb, resources)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/base.py", line 69, in preprocess
nb.cells[index], resources = self.preprocess_cell(cell, resources, index)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/execute.py", line 438, in preprocess_cell
reply, outputs = self.run_cell(cell, cell_index, store_history)
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/execute.py", line 571, in run_cell
if self._passed_deadline(deadline):
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/execute.py", line 541, in _passed_deadline
self._handle_timeout()
File "/usr/local/lib/python3.8/dist-packages/nbconvert/preprocessors/execute.py", line 504, in _handle_timeout
raise TimeoutError("Cell execution timed out")
TimeoutError: Cell execution timed out
</code></pre>
<p>Other things I have made sure of:<br />
the file can be accessed by anyone who has the link<br />
the file runs fine when I run it manually<br />
I am the only user and this is done on my drive<br />
I believe the kernels are the same (python3), though I am not 100% sure how to check</p>
|
<python><jupyter-notebook><google-colaboratory>
|
2023-01-23 18:53:45
| 0
| 371
|
elcunyado
|
75,213,272
| 1,472,474
|
limited resource for ProcessPoolExecutor (for example tcp port)
|
<h2>Motivation:</h2>
<p>I want to run some function (<code>myfunc</code> in the following example) for a large
number of values, and this function needs a free TCP port, and because the
number of TCP ports is limited I want to have only as many TCP ports as is the
number of worker processes.</p>
<blockquote>
<p>In reality, it runs a complex integration test which runs for many seconds
or minutes for each input value, it creates multiple processes which
communicate using this port - that's why the port is necessary.
For the purpose of this question, it's a limited resource which needs to be
bound to a specific worker.
I've simplified it to the following code which really doesn't need any TCP ports but
my real code does and it is the core of my problem.</p>
</blockquote>
<h2>Example:</h2>
<p>If I had unlimited TCP ports, it could look like this:</p>
<pre><code>import concurrent.futures
from typing import List, Tuple
PORT_BASE = 8000
def myfunc(param: Tuple[int, int]) -> str:
port_offset, data = param
port = PORT_BASE + port_offset
return 'executing %d on port %d' % (data, port)
def main() -> None:
n = 2
values = range(20)
with concurrent.futures.ProcessPoolExecutor(n) as exe:
print('\n'.join(exe.map(myfunc, enumerate(values))))
if __name__ == '__main__':
main()
</code></pre>
<p>But since I <em>do</em> have a limit on TCP ports I would like to "bind" a port to a
worker - but I don't know how to do that using <code>ProcessPoolExecutor</code>.</p>
<p>I would like to have (in context of the example above) only 2 TCP ports which
would be permanenly bound to 2 workers.</p>
<p>The main reason I'm using <code>ProcessPoolExecutor</code> is to avoid creating
<code>multiprocessing.Process</code>es, passing data using <code>multiprocessing.Queue</code>s,
joining processes, making sure all <code>Queue</code>s returned data, etc.</p>
<p>In fact I had exactly that and it was not reliable and it was a lot of code
(i.e. - a lot of bugs), so I would really, really like to switch to
<code>ProcessPoolExecutor</code> which in my experience "just works" with minimal code and is almost maintenance-free.</p>
<p>The only thing that prevents me from using <code>ProcessPoolExecutor</code> here is the need for limited resource - TCP ports.</p>
<h2>What have I tried:</h2>
<ol>
<li><p>using some "TCP port manager" which "lends" ports and then "takes them
back" to be available for "lending" again, but this involves a lot of shared
objects and communication and synchronization, and in the end it's as
complex as the original "Processes+Queues" solution and probably wouldn't be very
reliable</p>
</li>
<li><p>since python 3.7, <code>ProcessPoolExecutor</code> has <code>initializer</code> parameter, I
thought could be used to set some "worker-specific" data, but as far as I
understand, it can only modify global variables which is not useful for this case</p>
</li>
</ol>
<h2>Question:</h2>
<p>Is there any other way to set some "worker-specific" data and pass them to
<code>myfunc</code> using <code>ProcessPoolExecutor</code>?</p>
<p>Or is my only hope going back to manually creating <code>Process</code>es and gluing them with <code>Queue</code>s?</p>
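<p>For what it's worth, attempt 2 can be pushed further: globals set by the <code>initializer</code> are per-<em>process</em>, so each worker can pop its own dedicated port from a manager queue exactly once and keep it for its lifetime. A sketch (assuming a POSIX host, hence the explicit <code>fork</code> context):</p>

```python
import concurrent.futures
import multiprocessing

PORT_BASE = 8000
_PORT = None  # set once per worker process by the initializer


def _claim_port(port_queue):
    # Runs once in each worker process; the global is per-process,
    # so every worker ends up permanently bound to one port.
    global _PORT
    _PORT = port_queue.get()


def myfunc(data):
    return 'executing %d on port %d' % (data, _PORT)


def main(n=2, values=range(6)):
    ctx = multiprocessing.get_context("fork")
    # Manager queues are picklable, so they can be passed via initargs
    # (a plain multiprocessing.Queue cannot).
    port_queue = ctx.Manager().Queue()
    for offset in range(n):
        port_queue.put(PORT_BASE + offset)
    with concurrent.futures.ProcessPoolExecutor(
            n, mp_context=ctx,
            initializer=_claim_port, initargs=(port_queue,)) as exe:
        return list(exe.map(myfunc, values))


if __name__ == '__main__':
    print('\n'.join(main()))
```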
|
<python><python-multiprocessing><concurrent.futures>
|
2023-01-23 17:55:40
| 2
| 5,587
|
Jan Spurny
|
75,213,238
| 11,168,443
|
Trying to find the difference between 2 datetime objects and getting only Hours, Minutes, and Seconds
|
<p>I am pulling an ending time from a json api response. Then I am trying to calculate the time remaining, before the end time, to call a function after it ends.</p>
<pre><code>end_time_string = data['endTime'] # Get the end time in a string in weird format
date_format = "%Y%m%dT%H%M%S.%fZ" # Specify the date format for the datetime parser
end_time = datetime.strptime(end_time_string, date_format) # turn the date time string into datetime object
current_time = datetime.utcnow() # Get current time in UTC time zone, which is what CoC uses.
time_remaining = end_time - current_time # Get the time remaining till end of war
</code></pre>
<p>My end_time is a datetime object. My current_time is a datetime object. But time_remaining is a timedelta object. I am able to pull the hours, minutes and seconds from the object using:</p>
<pre><code>hours, minutes, seconds = map(float, str(time_remaining).split(':'))
</code></pre>
<p>But the problem is that sometimes the time_remaining has days in it, and sometimes it doesn't.</p>
<pre><code>1 day, 4:55:22.761359
-1 days, 23:59:08.45766
</code></pre>
<p>When there are days involved, specifically when the timedelta object goes negative, my script fails.</p>
<p>What is the best way to find the amount of time between my two datetime objects in ONLY hours, minutes, and seconds, without days included?</p>
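<p>Rather than parsing <code>str(time_remaining)</code>, it is more robust to start from <code>timedelta.total_seconds()</code>, which is a single signed number regardless of whether the delta spans days or is negative. A sketch:</p>

```python
from datetime import timedelta

def to_hms(td):
    """Split a timedelta into signed (hours, minutes, seconds), folding days into hours."""
    total = td.total_seconds()           # negative when the end time has passed
    sign = -1 if total < 0 else 1
    hours, rem = divmod(int(abs(total)), 3600)
    minutes, seconds = divmod(rem, 60)
    return sign * hours, sign * minutes, sign * seconds

# 1 day, 4:55:22 -> 28 hours, 55 minutes, 22 seconds
print(to_hms(timedelta(days=1, hours=4, minutes=55, seconds=22)))  # (28, 55, 22)
```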
|
<python><python-3.x><datetime>
|
2023-01-23 17:52:07
| 2
| 965
|
Lzypenguin
|
75,213,207
| 18,029,950
|
pip returns error: CB_ISSUER_CHECK at startup
|
<p>Pip was running just fine yesterday. Today it's angry. I've attempted the following command:
<code>sudo apt reinstall python3-pip</code></p>
<p>The only thing I've done on this server since yesterday is run "sudo apt update"</p>
<p>What should I do to fix this?</p>
<p>Here is the error:</p>
<pre><code>Traceback (most recent call last):
File "/usr/bin/pip", line 11, in <module>
load_entry_point('pip==20.0.2', 'console_scripts', 'pip')()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 490, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2854, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2445, in load
return self.resolve()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2451, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 24, in <module>
from pip._internal.exceptions import CommandError
File "/usr/lib/python3/dist-packages/pip/_internal/exceptions.py", line 10, in <module>
from pip._vendor.six import iteritems
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 65, in <module>
vendored("cachecontrol")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored
__import__(modulename, globals(), locals(), level=0)
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module>
File "/usr/lib/python3/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1553, in <module>
class X509StoreFlags(object):
File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1573, in X509StoreFlags
CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
Error in sys.excepthook:
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/apport_python_hook.py", line 72, in apport_excepthook
from apport.fileutils import likely_packaged, get_recent_crashes
File "/usr/lib/python3/dist-packages/apport/__init__.py", line 5, in <module>
from apport.report import Report
File "/usr/lib/python3/dist-packages/apport/report.py", line 32, in <module>
import apport.fileutils
File "/usr/lib/python3/dist-packages/apport/fileutils.py", line 12, in <module>
import os, glob, subprocess, os.path, time, pwd, sys, requests_unixsocket
File "/usr/lib/python3/dist-packages/requests_unixsocket/__init__.py", line 1, in <module>
import requests
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module>
File "/usr/lib/python3/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1553, in <module>
class X509StoreFlags(object):
File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1573, in X509StoreFlags
CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
Original exception was:
Traceback (most recent call last):
File "/usr/bin/pip", line 11, in <module>
load_entry_point('pip==20.0.2', 'console_scripts', 'pip')()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 490, in load_entry_point
return get_distribution(dist).load_entry_point(group, name)
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2854, in load_entry_point
return ep.load()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2445, in load
return self.resolve()
File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2451, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main.py", line 10, in <module>
from pip._internal.cli.autocompletion import autocomplete
File "/usr/lib/python3/dist-packages/pip/_internal/cli/autocompletion.py", line 9, in <module>
from pip._internal.cli.main_parser import create_main_parser
File "/usr/lib/python3/dist-packages/pip/_internal/cli/main_parser.py", line 7, in <module>
from pip._internal.cli import cmdoptions
File "/usr/lib/python3/dist-packages/pip/_internal/cli/cmdoptions.py", line 24, in <module>
from pip._internal.exceptions import CommandError
File "/usr/lib/python3/dist-packages/pip/_internal/exceptions.py", line 10, in <module>
from pip._vendor.six import iteritems
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 65, in <module>
vendored("cachecontrol")
File "/usr/lib/python3/dist-packages/pip/_vendor/__init__.py", line 36, in vendored
__import__(modulename, globals(), locals(), level=0)
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/__init__.py", line 9, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/wrapper.py", line 1, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/CacheControl-0.12.6-py2.py3-none-any.whl/cachecontrol/adapter.py", line 5, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/requests-2.22.0-py2.py3-none-any.whl/requests/__init__.py", line 95, in <module>
File "<frozen importlib._bootstrap>", line 991, in _find_and_load
File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
File "<frozen importlib._bootstrap>", line 655, in _load_unlocked
File "<frozen importlib._bootstrap>", line 618, in _load_backward_compatible
File "<frozen zipimport>", line 259, in load_module
File "/usr/share/python-wheels/urllib3-1.25.8-py2.py3-none-any.whl/urllib3/contrib/pyopenssl.py", line 46, in <module>
File "/usr/lib/python3/dist-packages/OpenSSL/__init__.py", line 8, in <module>
from OpenSSL import crypto, SSL
File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1553, in <module>
class X509StoreFlags(object):
File "/usr/lib/python3/dist-packages/OpenSSL/crypto.py", line 1573, in X509StoreFlags
CB_ISSUER_CHECK = _lib.X509_V_FLAG_CB_ISSUER_CHECK
AttributeError: module 'lib' has no attribute 'X509_V_FLAG_CB_ISSUER_CHECK'
root@localhost:~#
</code></pre>
<p>SO won't let me post this because my text is mostly code, and I'm not sure what to do besides adding in additional word salad. There are only so many ways to express "I'm getting error X."</p>
|
<python><python-3.x><pip>
|
2023-01-23 17:48:39
| 1
| 308
|
TxTechnician
|
75,213,102
| 12,906,445
|
Can't train model from checkpoint on Google Colab as session expires
|
<p>I'm using Google Colab for finetuning a pre-trained model.</p>
<p>I successfully preprocessed a dataset and created an instance of the Seq2SeqTrainer class:</p>
<pre class="lang-py prettyprint-override"><code>trainer = Seq2SeqTrainer(
model,
args,
train_dataset=tokenized_datasets["train"],
eval_dataset=tokenized_datasets["validation"],
data_collator=data_collator,
tokenizer=tokenizer,
compute_metrics=compute_metrics
)
</code></pre>
<p>The problem is resuming training from the last checkpoint after the session is over.</p>
<p>If I run <code>trainer.train()</code>, it runs correctly. As it takes a long time, I sometimes come back to the Colab tab after a few hours, and I know that if the session has crashed I can continue training from the last checkpoint like this: <code>trainer.train("checkpoint-5500")</code></p>
<p>The checkpoint data no longer exists on Google Colab if I come back too late, so even though I know the point the training has reached, I will have to start all over again.</p>
<p>Is there any way to solve this problem? i.e. extend the session?</p>
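<p>One common workaround (a sketch, and the Drive path below is only illustrative): point <code>output_dir</code> in the training arguments at a folder on the mounted Drive, so checkpoints survive the session, and resume from the newest <code>checkpoint-N</code> directory. The helper only touches the filesystem, so it is shown stand-alone; with <code>transformers</code> you would then call <code>trainer.train(resume_from_checkpoint=latest_checkpoint(ckpt_root))</code>:</p>

```python
import os
import re

def latest_checkpoint(ckpt_root):
    """Return the 'checkpoint-N' subdirectory with the highest N, or None."""
    best, best_step = None, -1
    for name in os.listdir(ckpt_root):
        m = re.fullmatch(r"checkpoint-(\d+)", name)
        if m and int(m.group(1)) > best_step:
            best_step = int(m.group(1))
            best = os.path.join(ckpt_root, name)
    return best

# e.g. ckpt_root = "/content/gdrive/MyDrive/finetune-checkpoints"  (hypothetical path)
```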
|
<python><machine-learning><google-colaboratory>
|
2023-01-23 17:38:51
| 1
| 1,002
|
Seungjun
|
75,213,049
| 15,283,859
|
Using np.where to access values in a dictionary - vectorization
|
<p>Given a dictionary</p>
<pre><code>coaching_hours_per_level = {1:30, 2: 55, 3:80, 4:115}
coaching_hours_per_level
</code></pre>
<p>and a dataframe:</p>
<pre><code>df1 = {'skill_0': {'jay': 1, 'roy': 4, 'axel': 5, 'billy': 1, 'charlie': 2},
'skill_1': {'jay': 5, 'roy': 3, 'axel': 2, 'billy': 5, 'charlie': 1},
'skill_2': {'jay': 4, 'roy': 1, 'axel': 2, 'billy': 1, 'charlie': 4},
'skill_3': {'jay': 1, 'roy': 3, 'axel': 5, 'billy': 4, 'charlie': 3},
'skill_4': {'jay': 3, 'roy': 4, 'axel': 2, 'billy': 3, 'charlie': 4},
'skill_5': {'jay': 5, 'roy': 2, 'axel': 4, 'billy': 2, 'charlie': 4},
'skill_6': {'jay': 5, 'roy': 5, 'axel': 2, 'billy': 5, 'charlie': 1},
'skill_7': {'jay': 3, 'roy': 3, 'axel': 4, 'billy': 2, 'charlie': 1},
'skill_8': {'jay': 1, 'roy': 4, 'axel': 2, 'billy': 1, 'charlie': 2},
'skill_9': {'jay': 4, 'roy': 3, 'axel': 4, 'billy': 2, 'charlie': 1}}
</code></pre>
<p>My target is:</p>
<pre><code>target = {'skill_0': {'jim': 3},
'skill_1': {'jim': 5},
'skill_2': {'jim': 1},
'skill_3': {'jim': 2},
'skill_4': {'jim': 1},
'skill_5': {'jim': 2},
'skill_6': {'jim': 3},
'skill_7': {'jim': 5},
'skill_8': {'jim': 3},
'skill_9': {'jim': 3}}
</code></pre>
<p>What I want to do is understand how many hours of coaching a person might need to catch up to a level of a certain skill. E.g., for Jay in skill_0, Jay has to upskill 2 levels (which is 30 + 55, a total of 85h). If the skill is already at the same level or above, it should be 0.</p>
<p>I've tried with <code>np.where</code> as below, and it works to obtain just the level difference</p>
<pre><code>np.where(df1>=target.values, 0, target.values-df1)
</code></pre>
<p>But when I try to access the dictionary to get the sum of coaching hours needed, it is as if np.where doesn't vectorize anymore, even if I simply try to access a value in the dict</p>
<pre><code>
np.where(df1>=target.values, 0, coaching_hours_per_level[target.values+1])
</code></pre>
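<p>The dict lookup fails because a plain <code>dict</code> cannot be indexed with an array key. One fully vectorised sketch: turn the per-level hours into a cumulative array, so climbing from level <code>L</code> to level <code>T</code> costs <code>cum[T-1] - cum[L-1]</code>, and index that array with the whole level matrices at once (shown on a two-person slice of the data):</p>

```python
import numpy as np
import pandas as pd

coaching_hours_per_level = {1: 30, 2: 55, 3: 80, 4: 115}

df1 = pd.DataFrame({'skill_0': {'jay': 1, 'roy': 4}})
target = pd.DataFrame({'skill_0': {'jim': 3}})

# cum[k] = total hours to climb from level 1 up to level k+1
levels = sorted(coaching_hours_per_level)
cum = np.concatenate(([0], np.cumsum([coaching_hours_per_level[l] for l in levels])))

cur = df1.to_numpy()      # shape (people, skills)
tgt = target.to_numpy()   # shape (1, skills), broadcasts over people
hours_needed = np.where(cur >= tgt, 0, cum[tgt - 1] - cum[cur - 1])
# jay: level 1 -> 3 costs 30 + 55 = 85; roy (4 >= 3) costs 0
```

<p>Note that <code>np.where</code> evaluates both branches, so the discarded negative differences are computed too — harmless here, unlike the failing dict lookup.</p>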
|
<python><pandas><numpy>
|
2023-01-23 17:33:01
| 1
| 895
|
Yolao_21
|
75,213,013
| 9,173,710
|
How to fill array by keypoints
|
<p>I have an array like this:<br />
<code>arr = [[180, 210, 240, 270, 300],[38.7, 38.4, 38.2, 37.9,37.7]]</code><br />
It contains frame numbers from a video and the value of a sensor recorded during that frame.</p>
<p>I have a program to evaluate the video material and need to know the value of the sensor for each frame. However, the data is only recorded in these large steps not to overwhelm the sensor.</p>
<p>How would I go about creating a computationally cheap function that returns the sensor value for frames which are not listed? The video is not evaluated from the first frame but from some unknown offset.</p>
<p>The data should be filled halfway up and down to the next listed frame, e.g.
Frames 195 - 225 would all use the value recorded for frame 210.
The step is always constant for one video but could vary between videos.<br />
The recording of the sensor starts some time into the video, here 3 s in. I want to also use that value for all frames before, and similarly for the end.
So f(0) == f(180) and f(350) == f(300) for this example.</p>
<p>I don't want to do a binary search through the array every time I need a value.<br />
I also thought about filling a second array with a single step in a for loop at the beginning of the program, but the array is much larger than the example above. I am worried about memory consumption and, again, lookup performance.</p>
<p>This is my try at filling the array at the beginning:</p>
<pre class="lang-py prettyprint-override"><code>sparse_data = [[180, 210, 240, 270, 300],[38.7, 38.4, 38.2, 37.9,37.7]]
delta = sparse_data[0][1] - sparse_data[0][0]
fr_max = sparse_data[0][-1] + delta
fr_min = sparse_data[0][0]
cur_sparse_idx = 1
self.magn_data = np.zeros(fr_max, dtype=np.float32)
for i in range(fr_max):
if i <= (fr_min + delta//2):
self.magn_data[i] = sparse_data[1][0]
elif i > fr_max:
self.magn_data[i] = sparse_data[1][-1]
else:
if (i+delta//2) % delta == 0: cur_sparse_idx += 1
self.magn_data[i] = sparse_data[1][cur_sparse_idx]
</code></pre>
<p>Is there a better way?</p>
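<p>Because the step is constant, the keyframe index can be computed with plain arithmetic — no binary search per lookup and no filled array. A sketch (the half-step offset implements the "halfway up and down" rounding, and clipping makes frames before the first and after the last recording reuse the edge values):</p>

```python
import numpy as np

sparse_data = [[180, 210, 240, 270, 300],
               [38.7, 38.4, 38.2, 37.9, 37.7]]
frames = np.asarray(sparse_data[0])
values = np.asarray(sparse_data[1], dtype=np.float32)
step = frames[1] - frames[0]

def sensor_at(frame):
    # Nearest-keyframe index via floor division; clip handles frames
    # outside the recorded range.
    idx = (frame - frames[0] + step // 2) // step
    return values[int(np.clip(idx, 0, len(values) - 1))]
```

<p>The same arithmetic also vectorises over a whole array of frame numbers, should all values be needed at once.</p>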
|
<python><arrays><interpolation>
|
2023-01-23 17:28:26
| 2
| 1,215
|
Raphael
|
75,213,001
| 9,781,480
|
How to install package dependencies for a custom Airbyte connector?
|
<p>I'm developing a custom connector for Airbyte, and it involves extracting files from different compressed formats, like <code>.zip</code> or <code>.7z</code>. My plan was to use <a href="http://wummel.github.io/patool/" rel="nofollow noreferrer">patool</a> for this, and indeed it works in local tests, running:</p>
<pre class="lang-bash prettyprint-override"><code>python main.py read --config valid_config.json --catalog configured_catalog_old.json
</code></pre>
<p>However, since Airbyte runs in docker containers, I need those containers to have packages like <code>p7zip</code> installed. So my question is, what is the proper way to do that?</p>
<p>I just downloaded and deployed Airbyte Open Source in my own machine using the recommended commands listed on <a href="https://docs.airbyte.com/quickstart/deploy-airbyte" rel="nofollow noreferrer">Airbyte documentation</a>:</p>
<pre class="lang-bash prettyprint-override"><code>git clone https://github.com/airbytehq/airbyte.git
cd airbyte
docker compose up
</code></pre>
<p>I tried using <code>docker exec -it CONTAINER_ID bash</code> into <code>airbyte/worker</code> and <code>airbyte/connector-builder-server</code>, to install <code>p7zip</code> directly, but it's not working yet. My connector calls patoolib from a Python script, but it is unable to process the given file, because it fails to find a program to extract it. This is the log output:</p>
<pre><code>> patool: Extracting /tmp/tmpan2mjkmn ...
> unknown archive format for file `/tmp/tmpan2mjkmn'
</code></pre>
|
<python><docker><docker-compose>
|
2023-01-23 17:27:37
| 1
| 621
|
PiFace
|
75,212,965
| 5,507,389
|
Class not directly inheriting but can use method from another class
|
<p>I came across a piece of legacy Python code at work and couldn't understand how it could work without errors. Obviously I can't write the exact code here, but here is a minimal working example:</p>
<pre><code>class ClassB:
    def func(self, txt: str):
        return self.str_to_uppercase(txt)

class ClassA(ClassB):
    def str_to_uppercase(self, txt: str):
        return txt.upper()

if __name__ == "__main__":
    my_instance = ClassA()
    print(my_instance.func("Hello, World!"))

# stdout: HELLO, WORLD!
</code></pre>
<p>What's strange to me is that, although <code>ClassB</code> does not inherit from <code>ClassA</code>, where the instance method <code>str_to_uppercase()</code> is defined, <code>ClassB</code> is still able to call this method. I should also note that my linter (pylint) is complaining that <code>str_to_uppercase()</code> is not defined in <code>ClassB</code>. So I'm struggling to understand the mechanics of the code here regarding inheritance.</p>
<p>Secondly, this code looks strange to me. It doesn't seem very "Pythonic". So, as a second question, I was wondering in which usecases such code is useful?</p>
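A minimal illustration of the mechanics (a sketch, not the original code): Python resolves `self.str_to_uppercase` at call time against `type(self)`, so the base-class method works whenever `self` is an instance of a subclass that defines the attribute, and fails on a plain base-class instance, which is exactly what the linter is warning about:

```python
class Base:
    def func(self, txt: str) -> str:
        # looked up on type(self) at call time, not on Base itself
        return self.str_to_uppercase(txt)

class Child(Base):
    def str_to_uppercase(self, txt: str) -> str:
        return txt.upper()

print(Child().func("Hello, World!"))   # HELLO, WORLD!

try:
    Base().func("oops")                # no str_to_uppercase anywhere in the MRO
except AttributeError as exc:
    print("fails on Base:", exc)
```

This pattern is essentially an implicit "template method": the base class assumes subclasses supply the helper. The more Pythonic spelling makes the contract explicit, e.g. via `abc.abstractmethod`.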
|
<python><class><inheritance><subclass>
|
2023-01-23 17:24:41
| 2
| 679
|
glpsx
|
75,212,902
| 5,437,090
|
webscraping an image with highlighted text
|
<p>I am doing web scraping on <a href="https://digi.kansalliskirjasto.fi/sanomalehti/binding/761979?term=Katri&term=Katrina&term=Ikonen&page=12" rel="nofollow noreferrer">this URL</a>, which is a newspaper image with highlighted words. My purpose is to retrieve all the words highlighted in red. Inspecting the page gives the class <code>image-overlay hit-rect ng-star-inserted</code>, from which the <code>title</code> attribute must be extracted:</p>
<p><a href="https://i.sstatic.net/tHIeP.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tHIeP.jpg" alt="enter image description here" /></a>
Using the following code snippet with <code>BeautifulSoup</code>:</p>
<pre><code>from bs4 import BeautifulSoup
pg_snippet_highlighted_words = soup.find_all("div", class_="image-overlay hit-rect ng-star-inserted")
print(pg_snippet_highlighted_words) # returns nothing: []
print(pg_snippet_highlighted_words.get("title")) # AttributeError: ("'NoneType' object has no attribute 'get'",) when soup.find() is executed!
</code></pre>
<p>However, I get <code>[]</code> as a result!</p>
<p>My expected result is a list with <code>length of 17</code> in this specific example, containing all the highlighted words in this page, e.g., the ones identified with <code>title</code> attribute in inspect as follows:</p>
<pre><code>EXPECTED_RESULT = ["Katri", "Katrina", "Katri", "Katri", "Katri", "Katri", "Katri", "Katri", "Ikonen.", "Katrina", "Katri", "Ikonen.", "Katri", "Katrina", "Katri", "Katri", "Katri"]
</code></pre>
<p>Is BeautifulSoup a correct tool to extract information when dealing with dynamic content?</p>
<p>Cheers,</p>
|
<python><web-scraping><beautifulsoup>
|
2023-01-23 17:17:55
| 1
| 1,621
|
farid
|
75,212,894
| 6,534,818
|
Translating Python loop to Javascript
|
<p>How can I yield the same output in Javascript, given this Python example?</p>
<p>I want to loop over an array and check each value; if a condition is met, store it.</p>
<pre><code>arr1 = [ 0, 0, 0, 0, 0, 0, 0, 0, 0, 7, 7, 0, 0, 0, 0, 0, 0, 0,
13, 13, 0, 0, 0, 0, 0, 0, 0, 0, 13, 13, 0, 0, 0, 0, 0, 0,
0, 0, 0, 15, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 14, 0, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0,
0, 0]
new_labels = []
previous = None
for l in arr1:
    if l != previous:
        new_labels.append(l)
    previous = l

# expected:
# [0, 7, 0, 13, 0, 13, 0, 15, 0, 14, 0, 2, 0]
</code></pre>
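A sketch of a direct JavaScript translation, using `null` as the initial "no previous value" marker (safe here because the data are numbers):

```javascript
function dedupeConsecutive(arr) {
  const newLabels = [];
  let previous = null;
  for (const l of arr) {
    if (l !== previous) {
      newLabels.push(l);
    }
    previous = l;
  }
  return newLabels;
}

const arr1 = [0, 0, 7, 7, 0, 13, 13, 0, 15, 0, 14, 0, 2, 0];
console.log(dedupeConsecutive(arr1)); // [ 0, 7, 0, 13, 0, 15, 0, 14, 0, 2, 0 ]
```

`for...of` plays the role of Python's `for l in arr1`, and `Array.prototype.push` that of `list.append`.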
|
<javascript><python>
|
2023-01-23 17:16:57
| 1
| 1,859
|
John Stud
|
75,212,878
| 10,967,961
|
Finding similarity in a pandas variable
|
<p>I have a dataset with company names as follows:</p>
<pre><code>{0: 'SEEO INC',
1: 'BOSCH GMBH ROBERT',
2: 'SAMSUNG SDI CO LTD',
12: 'NAGAI TAKAYUKI',
21: 'WESTPORT POWER INC',
26: 'SAMSUNG ELECTRONICS CO LTD',
27: 'SATO TOSHIO',
28: 'SUMITOMO ELECTRIC INDUSTRIES',
31: 'TOSHIBA KK',
35: 'TEIKOKU SEIYAKU KK',
46: 'MITSUBISHI ELECTRIC CORP',
47: 'IHI CORP',
49: 'WEI XI',
53: 'SIEMENS AG',
56: 'HYUNDAI MOTOR CO LTD',
57: 'COOPER TECHNOLOGIES CO',
58: 'TSUI CHENG-WEN',
64: 'UCHICAGO ARGONNE LLC',
68: 'BAYERISCHE MOTOREN WERKE AG',
70: 'YAMAWA MFG CO LTD',
71: 'YAMAWA MFG. CO., LTD.'}
</code></pre>
<p>The problem is that some of those names refer to the exact same firm but are written differently (e.g. with special symbols as in 70 and 71, or with LIMITED rather than LTD, and many other variants that I am not able to check, as there are 170,000 firms). Now I would of course like to call all of them the same way and thought about this strategy:</p>
<ol>
<li>check the similarities of the variable firms (the one displayed) maybe with Louvain similarity;</li>
<li>Give the name of the firm to the most similar strings</li>
</ol>
<p>However, I am not aware of any pandas instrument to perform 1. and am not sure of how to catch the name of the firm in 2. (e.g. YAMAWA in the example above) if not by taking the first word and hoping that this is actually the name of the firm.</p>
<p>Could you please advise me on how to perform step 1? Is there a way to deal with situations like mine?</p>
<p>Thank you</p>
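Pandas has no built-in string-similarity clustering, but a greedy grouping can be sketched with the standard library alone; the 0.8 similarity threshold below is a guess that would need tuning on the real data, and each group's first-seen name acts as the representative:

```python
from difflib import SequenceMatcher

names = ['YAMAWA MFG CO LTD', 'YAMAWA MFG. CO., LTD.',
         'SIEMENS AG', 'SAMSUNG SDI CO LTD']

def similarity(a: str, b: str) -> float:
    # ratio of matching characters, in [0, 1]
    return SequenceMatcher(None, a, b).ratio()

# greedy grouping: each name joins the first group whose representative
# is similar enough, otherwise it starts a new group
groups = {}
for name in names:
    for rep in groups:
        if similarity(name, rep) > 0.8:
            groups[rep].append(name)
            break
    else:
        groups[name] = [name]

print(groups)
```

For 170,000 names this all-pairs scan is slow; blocking on the first token (as step 2 suggests) or a dedicated library such as recordlinkage would scale better.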
|
<python><pandas><string><similarity>
|
2023-01-23 17:15:50
| 1
| 653
|
Lusian
|
75,212,866
| 9,373,756
|
Can I change "import" (english) to "importar" (portuguese) in python?
|
<p>To change a function in python I know I can simply assign to a new variable, like:</p>
<pre><code>imprimir = print
imprimir("test") # it prints "test"
</code></pre>
<p>However, I can't do that with <code>import</code>, e.g. <code>importar = import</code>.
I would like to do the following:</p>
<pre><code>importar = import
como = as
importar pandas como pd
</code></pre>
<p>I know that many people think I should do this in English, but I'm writing an introductory paper that absolutely needs to be in my native language.</p>
<p>Any tips on how to achieve this?</p>
|
<python><built-in>
|
2023-01-23 17:14:32
| 1
| 725
|
Artur
|
75,212,830
| 9,786,534
|
xarray: Combine data variables with discrete observations in a new continuous dimension
|
<p>I am working with a crop calendar that records the day of the year (doy) at which a given phenological state occurs - here the mean planting (<code>plant</code>) and harvest (<code>harvest</code>) seasons (note that the <code>nan</code> printed below are pixels on oceans, the other values contain <code>int</code>):</p>
<pre><code><xarray.Dataset>
Dimensions: (y: 2160, x: 4320)
Coordinates:
* x (x) float64 -180.0 -179.9 -179.8 -179.7 ... 179.7 179.8 179.9 180.0
* y (y) float64 89.96 89.88 89.79 89.71 ... -89.71 -89.79 -89.88 -89.96
Data variables:
plant (y, x) float32 nan nan nan nan nan nan ... nan nan nan nan nan nan
harvest (y, x) float32 nan nan nan nan nan nan ... nan nan nan nan nan nan
</code></pre>
<p>I need to combine the two variables in a dataarray of dimension (doy: 365, y: 2160, x: 4320) in order to track, for each pixel, the phenological state as a function of the doy. Conceptually, the steps I identified so far are:</p>
<ol>
<li>assign a numerical value to each state, e.g., <code>off=0</code>, <code>plant=1</code>, <code>harvest=2</code></li>
<li>use the doy as an index to the corresponding day in the <code>doy</code> dimension of the new dataarray and assign the numerical value corresponding to the state</li>
<li>complete the values in between using something similar to <code>pandas.DataFrame.fillna</code> with <code>method='ffill'</code></li>
</ol>
<p>I went through the <a href="https://docs.xarray.dev/en/stable/user-guide/reshaping.html" rel="nofollow noreferrer">Reshaping and reorganizing data</a> and the <a href="https://docs.xarray.dev/en/stable/user-guide/combining.html" rel="nofollow noreferrer">Combining Data</a> pages, but with my current understanding of xarray I honestly don't know where to start.</p>
<p>Can anyone point me in a direction? Is what I am trying to do even achievable using only matrix operations or do I have to introduce loops?</p>
<p>PS: Apologies for the confusing formulation of the question itself. I guess that only reflects something fundamental that I am still missing.</p>
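A numpy-only sketch of the three steps on a tiny 2x2 grid, assuming states off=0, plant=1, harvest=2 (wrapping the result as `xr.DataArray(state, dims=('doy', 'y', 'x'))` would restore the xarray view). Instead of scattering single values at the plant/harvest doy and forward-filling, whole intervals are marked with vectorised comparisons, which gives the same result without loops:

```python
import numpy as np

ndoy = 365
# doy of planting / harvest per pixel on a tiny 2x2 grid (nan = ocean)
plant = np.array([[30.0, 60.0], [90.0, np.nan]])
harvest = np.array([[200.0, 230.0], [260.0, np.nan]])

state = np.zeros((ndoy,) + plant.shape)        # step 1: off = 0 everywhere

doy = np.arange(1, ndoy + 1)[:, None, None]    # broadcastable over (doy, y, x)
valid = ~np.isnan(plant)

# steps 2+3 merged: comparisons broadcast to shape (doy, y, x)
state[(doy >= plant) & (doy < harvest) & valid] = 1   # planted, not yet harvested
state[(doy >= harvest) & valid] = 2                   # harvested
state[:, ~valid] = np.nan                             # keep ocean pixels as nan

print(state[100, 0, 0], state[250, 0, 0])  # 1.0 2.0
```

On the full 2160 x 4320 grid the boolean masks are large (about 3.4 billion elements per mask), so chunking with dask-backed xarray may be needed in practice.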
|
<python><python-xarray>
|
2023-01-23 17:10:17
| 1
| 324
|
e5k
|
75,212,810
| 458,274
|
Prepopulate a DateTimeField in Django Rest Framework
|
<p>I'm using a Django Rest Framework <a href="https://www.django-rest-framework.org/api-guide/serializers/" rel="nofollow noreferrer">Serializer</a>. Fields allow the <a href="https://www.django-rest-framework.org/api-guide/fields/#initial" rel="nofollow noreferrer">initial</a> parameter to be passed, which prepopulates a values in the browsable API. In the docs, the <code>DateField</code> is used as an example with an initial value of <code>datetime.date.today</code>.</p>
<p>I would like to prepopulate a <code>DateTimeField</code>. However, the initial value is being ignored and I see <code>mm/dd/yyyy, --:-- --</code> as a default value.</p>
<pre><code>import datetime

class MySerializer(serializers.Serializer):
    # DateField initial works
    my_datefield = serializers.DateField(initial=datetime.date.today)

    # DateTimeField initial does *NOT* work
    my_datetimefield = serializers.DateTimeField(initial=datetime.datetime.now)
</code></pre>
<p><a href="https://i.sstatic.net/tMGxw.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tMGxw.png" alt="Browseable API, DateTimeField initial is not rendered" /></a></p>
<p>Why is the initial value for a <code>DateTimeField</code> not set? How can I prepopulate the field?</p>
|
<python><datetime><django-rest-framework>
|
2023-01-23 17:09:17
| 1
| 6,931
|
soerface
|
75,212,803
| 7,658,051
|
How can I call python directly inside Ansible playbook, in a for loop?
|
<p>I have developed a playbook that sends a mail indicating a list of 3-zero-padded numbers.</p>
<p>I want to make the padding in the for loop which prints the information in the mail body.</p>
<p>The playbook takes the number to pad from variable <code>hostvars[host]['number']</code>.</p>
<p>The padding is made via the Ansible syntax</p>
<pre><code>myhost-{{ '%03d' % hostvars[host]['number']|int }}
</code></pre>
<p>as you can see in the following task.</p>
<p>Now, I would like to do the padding via Python, because later I want to make more formatting operations on each element printed by the loop.</p>
<p>So I have tried to substitute the upper line with</p>
<pre><code>{{ "myhost-" + str(hostvars[host]['number']).zfill(3) }}
</code></pre>
<p>but I am getting this error:</p>
<pre><code>Sending e-mail report...
localhost failed | msg: The task includes an option with an undefined variable. The error was: 'str' is undefined
</code></pre>
<p>So I tried to substitute it (as suggested in the comments of this question) with</p>
<pre><code>myhost-{{ hostvars[host].number.zfill(3) }}
</code></pre>
<p>but now I get</p>
<pre><code>Sending e-mail report...
localhost failed: {
"msg": "The task includes an option with an undefined variable. The error was: 'int object' has no attribute 'zfill'
}
</code></pre>
<p>So, how can I call Python inside the for loop in order to manipulate the information to print with Python instead of Ansible?</p>
<p><strong>Note:</strong> I want to call Python directly in the loop; I don't want to define variables in Python and substitute them into the body of the mail.</p>
<p>the task of my playbook:</p>
<pre class="lang-yaml prettyprint-override"><code> - name: Sending e-mail report
community.general.mail:
host: myhost
port: 25
sender: mymail
to:
- people@mail.it
subject: "mail of day {{ current_date }}"
subtype: html
body: |
<br>title
<br>text
{% for host in mylist %}
myhost-{{ '%03d' % hostvars[host]['number']|int }}
<br>
{% endfor %}
<br>text
<br>closure
</code></pre>
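The underlying issue is plain Python, independent of Ansible: `zfill` is a `str` method and the variable is an `int`, so a conversion has to happen first. In a Jinja2 template that conversion is spelled with the `string` filter (e.g. `hostvars[host].number|string`) rather than `str()`, which is not defined inside templates, exactly as the first error message said. A plain-Python sketch of the crux:

```python
number = 7   # what hostvars[host]['number'] holds: an int

try:
    number.zfill(3)                    # reproduces Ansible's second error
except AttributeError as exc:
    print(exc)                         # 'int' object has no attribute 'zfill'

padded = "myhost-" + str(number).zfill(3)
print(padded)                          # myhost-007
```

In the template, `{{ 'myhost-' ~ (hostvars[host].number|string).zfill(3) }}` should therefore behave like the Python line above.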
|
<python><for-loop><ansible>
|
2023-01-23 17:08:50
| 1
| 4,389
|
Tms91
|
75,212,761
| 12,860,924
|
Segment image into sub-images components using python
|
<p>I am working on medical images that contain sub-images. I want to segment each image into its component sub-images. I have tried a lot of segmentation code, but nothing segments the image into its sub-image parts. Please help me to do that.</p>
<p><strong>Original image and the sub-images that should be segmented</strong></p>
<p><a href="https://i.sstatic.net/R64yT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/R64yT.jpg" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/3CdW0.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3CdW0.jpg" alt="enter image description here" /></a></p>
<p><strong>Trial code</strong></p>
<pre><code>import os
from itertools import product

from PIL import Image

def tile(filename, dir_in, dir_out, d):
    name, ext = os.path.splitext(filename)
    img = Image.open(os.path.join(dir_in, filename))
    w, h = img.size
    grid = product(range(0, h - h % d, d), range(0, w - w % d, d))
    for i, j in grid:
        box = (j, i, j + d, i + d)
        out = os.path.join(dir_out, f'{name}_{i}_{j}{ext}')
        img.crop(box).save(out)
dir_in="/content/drive/MyDrive/Images"
dir_out="/content/drive/MyDrive/output_segments"
filename='image 2.jpg'
d=100
output=tile(filename,dir_in,dir_out,d)
</code></pre>
<p>Any help would be appreciated.</p>
|
<python><image><image-processing><python-imaging-library><image-segmentation>
|
2023-01-23 17:04:54
| 0
| 685
|
Eda
|
75,212,757
| 6,543,183
|
compare several columns grouped by a list pandas
|
<p>I have the following pandas dataframe:</p>
<pre><code> A0 B0 C0 D0 E0 F0 G0 A1 B1 C1 D1 E1 F1 G1 label
0 1.3 2.4 1.5 2.0 2.0 3.2 5 1.3 2.4 1.5 2.0 2.0 3.2 5 True
1 1.3 2.4 1.5 2.0 2.0 3.2 7 1.3 2.4 1.5 2.0 2.0 3.2 5 False
2 1.3 2.4 1.5 2.0 2.0 3.2 7 1.3 2.4 1.5 2.0 2.0 3.2 7 True
3 1.3 2.4 1.5 2.0 2.0 3.2 6 1.3 2.4 1.5 2.0 2.0 3.2 6 True
4 1.3 2.4 1.5 2.0 2.0 3.2 5 1.3 2.4 1.5 2.0 2.0 3.2 6 False
</code></pre>
<p>I want to compare only df['G0'] and df['G1'],</p>
<p>but only for rows where all the preceding column pairs
(A0 vs A1, B0 vs B1, C0 vs C1, etc.)
are already equal.</p>
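One way to sketch this in pandas (assuming "compare" means producing a boolean like the existing label column, and showing only two of the seven pairs for brevity): build a mask that is True where every earlier pair matches, then AND it with the G comparison:

```python
import pandas as pd

df = pd.DataFrame({
    'A0': [1.3] * 5, 'A1': [1.3] * 5,
    'G0': [5, 7, 7, 6, 5], 'G1': [5, 5, 7, 6, 6],
})

pairs = ['A']                 # in the real frame: ['A', 'B', 'C', 'D', 'E', 'F']
mask = pd.Series(True, index=df.index)
for c in pairs:
    mask &= df[f'{c}0'].eq(df[f'{c}1'])

# compare G0 vs G1 only where every earlier pair matched
df['label'] = mask & df['G0'].eq(df['G1'])
print(df['label'].tolist())  # [True, False, True, True, False]
```

If the earlier pairs really are always equal (as in the sample), `mask` is all-True and the result reduces to `df['G0'].eq(df['G1'])`.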
|
<python><pandas><numpy>
|
2023-01-23 17:04:36
| 1
| 880
|
rnv86
|
75,212,737
| 278,383
|
Validating "Parallel" JSON Arrays
|
<p>I am trying to use pydantic to validate JSON that is being returned in a "parallel" array format. Namely, there is an array defining the column names/types followed by an array of "rows" (this is similar to how pandas handles <code>df.to_json(orient='split')</code> seen <a href="https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.to_json.html" rel="nofollow noreferrer">here</a>)</p>
<pre class="lang-json prettyprint-override"><code>{
"columns": [
"sensor",
"value",
"range"
],
"data": [
[
"a",
1,
{"low": 1, "high": 2}
],
[
"b",
2,
{"low": 0, "high": 2}
]
]
}
</code></pre>
<p>I know that I could do this:</p>
<pre class="lang-py prettyprint-override"><code>class ValueRange(BaseModel):
    low: int
    high: int

class Response(BaseModel):
    columns: Tuple[Literal['sensor'], Literal['value'], Literal['range']]
    data: List[Tuple[str, int, ValueRange]]
</code></pre>
<p>But this has a few downsides:</p>
<ul>
<li>After parsing, it doesn't allow for an association of the data with the column names. So, you have to do everything by index. Ideally, I would like to parse a response into a <code>List[Row]</code> and then be able to do things like <code>response.data[0].sensor</code>.</li>
<li>It hardcodes the column order.</li>
<li>It doesn't allow for responses that have variable columns in the responses. For example, the same endpoint could also return the following:</li>
</ul>
<pre><code>{
"columns": ["sensor", "value"],
"data": [
["a", 1],
["b", 2]
]
}
</code></pre>
<p>At first I thought that I could use pydantic's <a href="https://docs.pydantic.dev/usage/types/#discriminated-unions-aka-tagged-unions" rel="nofollow noreferrer">discriminated unions</a>, but I'm not seeing how to do this across arrays.</p>
<p>Anyone know of the best approach for validating this type of data? (I'm currently using pydantic, but am open to other libraries if it makes sense).</p>
<p>Thanks!</p>
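Independent of the validation library, the reshaping itself is a `zip` of the column names against each row; a pre-processing sketch (in pydantic this same step could live in a root/model validator that runs before field validation, so the model itself only ever sees named rows):

```python
payload = {
    "columns": ["sensor", "value"],
    "data": [["a", 1], ["b", 2]],
}

# pair each row's values with the column names declared once up front
rows = [dict(zip(payload["columns"], row)) for row in payload["data"]]

print(rows)               # [{'sensor': 'a', 'value': 1}, {'sensor': 'b', 'value': 2}]
print(rows[0]["sensor"])  # a
```

Once rows are dicts, a single `Row` model with optional fields (or a union of row models) can validate them regardless of column order or which columns a given response includes.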
|
<python><json><pydantic>
|
2023-01-23 17:02:30
| 1
| 663
|
JKD
|
75,212,700
| 5,908,629
|
Run Apache-beam pipeline job on existing google cloud VM
|
<p>I am creating a Python apache-beam pipeline that ingests from Google Cloud SQL. When I deploy the pipeline, a new VM is created automatically which has no access to my Google Cloud SQL instance, so my job fails each time, showing the error log below in the job logs:</p>
<p><a href="https://i.sstatic.net/tIWDP.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tIWDP.png" alt="enter image description here" /></a></p>
<p>So I am looking for an apache-beam pipeline option for Python that lets me pass a worker (VM) name or the public IP of an existing VM, so that the job automatically runs on the existing VM that has access to my Google Cloud SQL instance.</p>
<p>So far I have checked this <a href="https://cloud.google.com/dataflow/docs/reference/pipeline-options" rel="nofollow noreferrer">https://cloud.google.com/dataflow/docs/reference/pipeline-options</a> it does not have any worker name argument</p>
<p>Please help me out</p>
<p>Thanks</p>
|
<python><google-cloud-platform><google-cloud-dataflow><apache-beam><apache-beam-io>
|
2023-01-23 16:59:29
| 1
| 2,551
|
lazyCoder
|
75,212,567
| 6,367,971
|
Concatenating CSVs into dataframe with filename column
|
<p>I am trying to concat multiple CSVs that live in subfolders of my parent directory into a data frame, while also adding a new filename column.</p>
<pre><code>/ParentDirectory
β
β
ββββSubFolder 1
β test1.csv
β
ββββSubFolder 2
β test2.csv
β
ββββSubFolder 3
β test3.csv
β test4.csv
β
ββββSubFolder 4
β test5.csv
</code></pre>
<p>I can do something like this to concat all the CSVs into a single data frame</p>
<pre><code>import pandas as pd
import glob
files = glob.glob('/ParentDirectory/**/*.csv', recursive=True)
df = pd.concat([pd.read_csv(fp) for fp in files], ignore_index=True)
</code></pre>
<p>But is there a way to also add the filename of each file as a column to the final data frame, or do I have to loop through each individual file first <em>before</em> concatenating the final data frame? Output should look like:</p>
<pre><code> Col1 Col2 file_name
0 AAAA XYZ test1.csv
1 BBBB XYZ test1.csv
2 CCCC RST test1.csv
3 DDDD XYZ test2.csv
4 AAAA WXY test3.csv
5 CCCC RST test4.csv
6 DDDD XTZ test4.csv
7 AAAA TTT test4.csv
8 CCCC RRR test4.csv
9 AAAA QQQ test4.csv
</code></pre>
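No separate loop is needed: `assign` can add the filename column to each frame inside the same comprehension, before concatenation. A self-contained sketch that builds a tiny directory tree to demonstrate (paths are throwaway temp directories):

```python
import glob
import os
import tempfile

import pandas as pd

# build a tiny directory tree for demonstration
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, 'sub1'))
pd.DataFrame({'Col1': ['AAAA'], 'Col2': ['XYZ']}).to_csv(
    os.path.join(root, 'sub1', 'test1.csv'), index=False)

files = glob.glob(os.path.join(root, '**', '*.csv'), recursive=True)

# assign() tags each frame with its source filename before concat
df = pd.concat(
    [pd.read_csv(fp).assign(file_name=os.path.basename(fp)) for fp in files],
    ignore_index=True,
)
print(df)
```

Dropping `os.path.basename` keeps the full path instead, which may be useful when filenames repeat across subfolders.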
|
<python><pandas><concatenation><glob>
|
2023-01-23 16:46:37
| 2
| 978
|
user53526356
|
75,212,387
| 3,057,900
|
grpc server handles multiple requests in parallel
|
<p>I have a question regarding how a grpc server handles multiple requests in parallel. I have a grpc server that provides an endpoint to handle client requests, and there are multiple clients sending requests to the same endpoint.</p>
<p>When different clients send multiple requests to the server at the same time, how does the server handle those simultaneous requests? Will each request be handled by its own thread in parallel, or will the requests be queued and handled one by one?</p>
<p>Thanks!</p>
|
<python><grpc>
|
2023-01-23 16:30:16
| 1
| 1,681
|
ratzip
|
75,212,260
| 14,353,779
|
Pandas: Changing date to a format where I can use `map`
|
<p>I'm trying to get the current date as a string in <code>(Month-Date-Year)</code> format.</p>
<p><code>def user_defined_function(date_str):</code> I have a user-defined function which takes an input date (<code>refresh_date</code>) in the above-mentioned format. So I need to change today's date to the required format and map it.</p>
<p>What I Tried :</p>
<pre><code> import datetime as dt
from datetime import date
refresh_date=date.today().strftime('%m-%d-%Y')
x=refresh_date.map(user_defined_function).values[0]
</code></pre>
<p>I am however getting an error using the above code <code>AttributeError: 'str' object has no attribute 'map'</code></p>
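The error comes from the fact that `strftime` returns a plain `str`, while `.map` is a pandas `Series` method. Either call the function directly, or wrap the value in a one-element Series first if the `.map(...).values[0]` style must be kept. A sketch with a stand-in for the real user-defined function:

```python
from datetime import date

import pandas as pd

def user_defined_function(date_str):   # stand-in for the real function
    return 'refreshed on ' + date_str

refresh_date = date.today().strftime('%m-%d-%Y')

x = user_defined_function(refresh_date)            # simplest: just call it

# equivalent, keeping the .map style by wrapping the str in a Series
x2 = pd.Series([refresh_date]).map(user_defined_function).values[0]
print(x == x2)  # True
```

For a single value the direct call is the idiomatic choice; `.map` only pays off when applying the function over a whole column.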
|
<python><pandas>
|
2023-01-23 16:20:33
| 1
| 789
|
Scope
|
75,212,158
| 19,155,645
|
VScode order of python imports: how to force tensorflow to import before keras?
|
<p>I am importing several libraries in a .py file using VScode.</p>
<p>somehow it always orders the imports when I am saving the file.
It is important for me that a certain order is maintained, for example:</p>
<pre><code>import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"
</code></pre>
<p>should be before:</p>
<pre><code>import tensorflow.compat.v1 as tf
</code></pre>
<p>which in turn should be before</p>
<pre><code>import keras.backend as K
import keras
</code></pre>
<p>but even if I press <code>option</code>+<code>shift</code>+<code>o</code>, this order is lost after saving.</p>
<p>How can I force the order I need in this case, while generally keeping VS Code's alphabetical ordering?</p>
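Assuming the reordering comes from an organize-imports code action (isort or Pylance) running on save, one option is to disable that action for Python files in `settings.json` (a sketch; the exact value syntax has shifted between VS Code versions, with older releases using booleans instead of `"never"`):

```json
{
    "[python]": {
        "editor.codeActionsOnSave": {
            "source.organizeImports": "never"
        }
    }
}
```

Alternatively, keep organize-imports enabled globally and protect only the order-sensitive region with isort's directives: wrap the lines in `# isort: off` / `# isort: on` comments, or append `# isort: skip` to individual import lines.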
|
<python><tensorflow><visual-studio-code><keras>
|
2023-01-23 16:11:41
| 1
| 512
|
ArieAI
|
75,212,147
| 10,901,843
|
How would I make clusters from a Levenshtein similarity matrix?
|
<p>I have a similarity matrix of words and would like to apply an algorithm that can put the words in clusters.</p>
<p>Here's the example I have so far:</p>
<pre><code>from Levenshtein import distance
import numpy as np
words = ['The Bachelor','The Bachelorette','The Bachelor Special','SportsCenter',
'SportsCenter 8 PM','SportsCenter Sunday']
list1 = words
list2 = words
matrix = np.zeros((len(list1), len(list2)), dtype=int)  # np.int was removed in NumPy 1.24

for i in range(0, len(list1)):
    for j in range(0, len(list2)):
        matrix[i, j] = distance(list1[i], list2[j])
</code></pre>
<p>Obviously this is a very simple dummy example, but what I would expect the output to be is 2 clusters, one with 'The Bachelor','The Bachelorette','The Bachelor Special', and the other with 'SportsCenter','SportsCenter 8 PM','SportsCenter Sunday'.</p>
<p>Can anyone help me with this?</p>
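One threshold-based sketch that needs no extra libraries: treat two titles as connected when their length-normalised edit distance is below a cutoff, then grow clusters greedily. The 0.5 cutoff is a guess that would need tuning, and the small `levenshtein` helper stands in for the `Levenshtein` package so the snippet is self-contained:

```python
def levenshtein(a, b):
    # classic dynamic-programming edit distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

words = ['The Bachelor', 'The Bachelorette', 'The Bachelor Special',
         'SportsCenter', 'SportsCenter 8 PM', 'SportsCenter Sunday']

# greedy clustering over "similar enough" pairs
clusters = []
for w in words:
    for cluster in clusters:
        if any(levenshtein(w, v) / max(len(w), len(v)) < 0.5 for v in cluster):
            cluster.append(w)
            break
    else:
        clusters.append([w])

print(clusters)
```

For larger vocabularies, feeding the precomputed matrix into `scipy.cluster.hierarchy` or `sklearn.cluster.AgglomerativeClustering` with `metric='precomputed'` would be the more principled route.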
|
<python><nlp><cluster-analysis><similarity><levenshtein-distance>
|
2023-01-23 16:11:12
| 1
| 407
|
AI92
|
75,211,952
| 3,692,004
|
Test fails in Foundry when using asterisk (*) for unpacking when creating a dataframe
|
<p>I want to create a DataFrame in a fixture using the following code:</p>
<pre><code>@pytest.fixture
def my_fun(spark_session):
    return spark_session.createDataFrame(
        [
            (*['test', 'testy'])
        ],
        T.StructType([
            T.StructField('mytest', T.StringType()),
            T.StructField('mytest2', T.StringType())
        ])
    )

def test_something(my_fun):
    return
</code></pre>
<p>However, this fails with the following error:</p>
<pre><code>TypeError: StructType can not accept object 'test' in type <class 'str'>
</code></pre>
<p>If I use <code>('test', 'testy')</code> instead of <code>(*['test', 'testy'])</code>, it works. <strong>But shouldn't this be synonymous?</strong></p>
<p>(I'm using Python 3.8.13, pytest-7.0.1)</p>
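A hedged illustration of the tuple-literal subtlety: parentheses alone never make a tuple, the comma does, so `(*lst,)` is the spelling that actually unpacks a list into a tuple. One plausible explanation of the observed `TypeError` is that what actually executed behaved like `[*lst]`, which splices the strings into the outer list, handing Spark two one-string "rows" instead of one two-column row:

```python
lst = ['test', 'testy']

print(('test'))      # parentheses only group: this is just the str 'test'
print(('test',))     # the comma makes the tuple: ('test',)
print((*lst,))       # unpacking into a tuple literal: ('test', 'testy')
print([*lst])        # splices into a list: ['test', 'testy']

# without the trailing comma, a parenthesised starred expression
# is rejected by the parser outright:
try:
    compile("(*lst)", "<example>", "eval")
except SyntaxError as exc:
    print("SyntaxError:", exc.msg)
```

So `(*['test', 'testy'],)` and `('test', 'testy')` are synonymous; `(*['test', 'testy'])` is not.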
|
<python><apache-spark-sql><tuples><pytest><palantir-foundry>
|
2023-01-23 15:57:58
| 2
| 619
|
Benji
|
75,211,934
| 11,710,304
|
How can I use when, then and otherwise with multiple conditions in polars?
|
<p>I have a data set with three columns. Column A is to be checked for strings. If the string matches <code>foo</code> or <code>spam</code>, the values in the same row for the other two columns <code>L</code> and <code>G</code> should be changed to <code>XX</code>. For this I have tried the following.</p>
<pre><code>df = pl.DataFrame(
    {
        "A": ["foo", "ham", "spam", "egg"],
        "L": ["A54", "A12", "B84", "C12"],
        "G": ["X34", "C84", "G96", "L6"],
    }
)
print(df)
shape: (4, 3)
ββββββββ¬ββββββ¬ββββββ
β A β L β G β
β --- β --- β --- β
β str β str β str β
ββββββββͺββββββͺββββββ‘
β foo β A54 β X34 β
β ham β A12 β C84 β
β spam β B84 β G96 β
β egg β C12 β L6 β
ββββββββ΄ββββββ΄ββββββ
</code></pre>
<p>expected outcome</p>
<pre><code>shape: (4, 3)
ββββββββ¬ββββββ¬ββββββ
β A β L β G β
β --- β --- β --- β
β str β str β str β
ββββββββͺββββββͺββββββ‘
β foo β XX β XX β
β ham β A12 β C84 β
β spam β XX β XX β
β egg β C12 β L6 β
ββββββββ΄ββββββ΄ββββββ
</code></pre>
<p>I tried this</p>
<pre><code>df = df.with_columns(
pl.when((pl.col("A") == "foo") | (pl.col("A") == "spam"))
.then((pl.col("L")= "XX") & (pl.col( "G")= "XX"))
.otherwise((pl.col("L"))&(pl.col( "G")))
)
</code></pre>
<p>However, this does not work. Can someone help me with this?</p>
|
<python><dataframe><python-polars>
|
2023-01-23 15:57:03
| 1
| 437
|
Horseman
|
75,211,830
| 15,414,112
|
How to install Python modules when pypi.org is not accessible from Iran?
|
<p>So the problem is that pypi.org has been filtered by the Iranian government (<strong>yes, I know it's ridiculous!</strong>). I tried to install some Python modules from files downloaded from GitHub:
<code>pip install moduleName</code>
but every module has its own dependencies and tries to connect to pypi.org to reach them, so there will be an error during installation.
Is there any solution?
Your help will be much appreciated.</p>
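One hedged sketch using pip's own offline features: on any machine that can reach pypi.org (or a mirror), download a package together with all of its dependencies, copy the directory over, and install without touching the network. `pkgs/` and `requests` below are placeholders for a directory name and the package actually needed:

```shell
# on a machine with access: fetch the wheel and all its dependencies
pip download requests -d pkgs/

# on the blocked machine, after copying pkgs/ over:
pip install --no-index --find-links pkgs/ requests
```

If a reachable PyPI mirror exists, pointing pip at it permanently with `pip config set global.index-url <mirror-url>` avoids the copy step entirely.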
|
<python><installation><pip><module><pypi>
|
2023-01-23 15:49:32
| 6
| 500
|
nariman zaeim
|
75,211,676
| 11,999,452
|
For loop breaks for no apparent reason
|
<p>If I run the following code, I get i = 42598.</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
all_sets = []
working_sets = []
numbers = np.arange(10)
operators = ['+', '-', '*', '/']
i = 0
expression = ['', '', '', '', '', '', '']
for n1 in numbers:
    expression[0] = str(n1)
    for o1 in operators:
        expression[1] = o1
        for n2 in numbers:
            expression[2] = str(n2)
            for o2 in operators:
                expression[3] = o2
                for n3 in numbers:
                    expression[4] = str(n3)
                    for o3 in operators:
                        expression[5] = o3
                        for n4 in numbers:
                            expression[6] = str(n4)
                            i += 1

                            # get all sets for comparison
                            numbers = sorted([expression[0], expression[2], expression[4], expression[6]])
                            if not (numbers in all_sets):
                                all_sets.append(numbers)

print(i)
</code></pre>
<p>But if I comment out this bit</p>
<pre><code># get all sets for comparison
numbers = sorted([expression[0], expression[2], expression[4], expression[6]])
if not (numbers in all_sets):
all_sets.append(numbers)
</code></pre>
<p>I get i = 640000. WHY? What is breaking my for loop?</p>
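What breaks the loop is the rebinding of `numbers`: each inner `for ... in numbers` statement re-evaluates the name every time that loop restarts, so after the first innermost pass the inner loops iterate over the 4-element sorted list instead of the 10 original numbers. A minimal reproduction and the fix (give the sorted copy its own name):

```python
# broken: rebinding `numbers` changes what later inner loops iterate over
numbers = [0, 1, 2]
broken_count = 0
for a in numbers:            # iterator built once from the original list: 3 passes
    for b in numbers:        # `numbers` re-evaluated each time this loop starts
        broken_count += 1
        numbers = [9]        # the rebinding that shadows the original
print(broken_count)          # 5, not 9

# fixed: the sorted copy no longer clobbers `numbers`
numbers = [0, 1, 2]
fixed_count = 0
for a in numbers:
    for b in numbers:
        fixed_count += 1
        current_set = sorted([a, b])
print(fixed_count)           # 9
```

In the original code the same rename (e.g. `current_set = sorted([...])`) restores the full 640000 iterations.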
|
<python>
|
2023-01-23 15:34:22
| 1
| 400
|
Akut Luna
|
75,211,551
| 10,413,428
|
Pyside6 QDoubleValidator - decimal places are not working as expected
|
<p>I have written the following PySide6 program and would like to limit the decimal places of the inputs.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PySide6 import QtWidgets
from PySide6.QtCore import QSize, QLocale
from PySide6.QtGui import QDoubleValidator
from PySide6.QtWidgets import QMainWindow, QLineEdit, QPushButton, QFormLayout, QWidget
class MainWindow(QMainWindow):
    def __init__(self):
        QMainWindow.__init__(self)
        self.setMinimumSize(QSize(420, 100))

        self.line_edit_2 = QLineEdit()
        self.line_edit_4 = QLineEdit()

        self.validator_2 = QDoubleValidator(0, 10, 2)
        self.validator_2.setNotation(QDoubleValidator.StandardNotation)
        self.line_edit_2.setValidator(self.validator_2)

        self.validator_4 = QDoubleValidator(0, 10, 4)
        self.validator_4.setNotation(QDoubleValidator.StandardNotation)
        self.line_edit_4.setValidator(self.validator_4)

        self.form_layout = QFormLayout()
        self.form_layout.addRow("Double 2", self.line_edit_2)
        self.form_layout.addRow("Double 4", self.line_edit_4)

        self.button = QPushButton('Validate')
        self.form_layout.addRow("", self.button)
        self.button.clicked.connect(
            lambda: {print(self.line_edit_2.hasAcceptableInput(), self.line_edit_4.hasAcceptableInput())})

        self.main_widget = QWidget()
        self.main_widget.setLayout(self.form_layout)
        self.setCentralWidget(self.main_widget)

if __name__ == "__main__":
    app = QtWidgets.QApplication(sys.argv)
    mainWin = MainWindow()
    mainWin.show()
    sys.exit(app.exec_())
</code></pre>
<p>But when I run it, no matter what I define in the third parameter (which, according to <a href="https://doc.qt.io/qt-6/qdoublevalidator.html#decimals-prop" rel="nofollow noreferrer">here</a>, controls the digits), I am not able to enter any doubles with decimal places greater than 2.</p>
<p><a href="https://i.sstatic.net/WfrHS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WfrHS.png" alt="non working validator example" /></a></p>
|
<python><pyqt><pyside><pyside6><pyqt6>
|
2023-01-23 15:22:25
| 1
| 405
|
sebwr
|
75,211,499
| 56
|
How do I analyze what is hanging my Flask application
|
<p>I have a Python Flask web application, which uses a Postgresql database.</p>
<p>When I put a load on my application, it stops to respond. This only happens when I request pages which uses the database.</p>
<p>My setup:</p>
<ul>
<li>nginx frontend (although in my test environment, skipping this tier doesn't make a difference), connecting via UNIX socket to:</li>
<li>gunicorn application server with 3 child processes, connecting via UNIX socket to:</li>
<li>pgbouncer, connection pooler for PostgreSQL, connecting via TCP/IP to:
<ul>
<li>I need pgbouncer, because SQLAlchemy has connection pooling <strong>per process</strong>. If I don't use pgbouncer, my database gets overloaded with connection requests very quickly.</li>
</ul>
</li>
<li>postgresql 13, the database server.</li>
</ul>
<p>I have a test environment on Debian Linux (with nginx) and on my iMac, and the application hang occurs on both machines.</p>
<p>I put load on the application with <a href="https://github.com/rakyll/hey" rel="nofollow noreferrer">hey</a>, a http load generator. I use the default, which generates 200 requests with 50 workers. The test-page issues two queries to the database.</p>
<p>When I run my load test, I see <strong>gunicorn</strong> getting worker timeouts. It's killing the timed-out processes and starting up new ones. Eventually (after a lot of timeouts) everything is fine again. For this, I lowered the <em>statement timeout</em> setting of Postgresql. First it was 30 and later I set it to 15 seconds. Gunicorn's worker timeouts happened more quickly then. (I don't understand this behaviour; why would gunicorn recycle a worker when a query times out?)</p>
<p>When I look at <strong>pgbouncer</strong>, with the <code>show clients;</code> command I see some waiting clients. I think this is a hint of the problem. My Web application is waiting on pgbouncer, and pgbouncer seems to be waiting for Postgres. When the waiting lines are gone, the application behaves normally again (trying a few requests). Also, when I restart the <em>gunicorn</em> process, everything goes back to normal.</p>
<p>But with my application under stress, when I look at <strong>postgresql</strong> (querying with a direct connection, by-passing pgbouncer), I can't see anything wrong, or waiting, or whatever. When I query <code>pg_stat_activity</code>, all I see are idle connections (except for the connection I use to query the view).</p>
<p>How do I debug this? I'm a bit stuck. <code>pg_stat_activity</code> should show queries running, but this doesn't seem to be the case. Is there something else wrong? How do I get my application to work under load, and how to analyze this.</p>
|
<python><postgresql><flask><gunicorn><pgbouncer>
|
2023-01-23 15:19:07
| 1
| 19,370
|
doekman
|
75,211,253
| 12,065,399
|
Aggregate Different String Columns on Pandas - List of Unique Values
|
<p>I am struggling to get the <strong>unique values</strong> of two different columns <strong>ordered A-Z</strong> and <strong>ignoring <code>nan</code> values</strong>.<br />
I was trying to create an ordered list of unique values and then remove the brackets, but without success.<br />
Can someone please help me with this?</p>
<h3>Columns to order:</h3>
<ul>
<li><code>df['A']</code></li>
<li><code>df['B']</code></li>
</ul>
<p>Thanks in advance.</p>
<h4>Sample Data</h4>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import pandas as pd

data = {'A': ['1', '2', '8', '4', np.nan],
        'B': ['6', '2', '3', '9', '10']}
df = pd.DataFrame(data)
</code></pre>
<h4>Desired Output</h4>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th></th>
<th>A</th>
<th>B</th>
<th>expected_col</th>
</tr>
</thead>
<tbody>
<tr>
<td>0</td>
<td>1</td>
<td>6</td>
<td>1, 6</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>2</td>
<td>2</td>
</tr>
<tr>
<td>2</td>
<td>8</td>
<td>3</td>
<td>3, 8</td>
</tr>
<tr>
<td>3</td>
<td>4</td>
<td>9</td>
<td>4, 9</td>
</tr>
<tr>
<td>4</td>
<td>nan</td>
<td>10</td>
<td>10</td>
</tr>
</tbody>
</table>
</div><h4>Solution given by user @somedude and @rawson</h4>
<pre class="lang-py prettyprint-override"><code>df.apply(lambda x: ', '.join(x.drop_duplicates().dropna()), axis=1)
</code></pre>
<h4>Solution given by @mozway</h4>
<pre class="lang-py prettyprint-override"><code>df.apply(lambda x: ', '.join(x.dropna().unique()), axis=1)
</code></pre>
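Since the question asks for A-Z ordering, a sorted variant of the same approach may be worth noting: the one-liners above preserve row order, so row 2 would come out as "8, 3" rather than the desired "3, 8". Adding `sorted` fixes that:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'A': ['1', '2', '8', '4', np.nan],
                   'B': ['6', '2', '3', '9', '10']})

# per row: drop nan, deduplicate, sort A-Z, then join
df['expected_col'] = df.apply(
    lambda r: ', '.join(sorted(r.dropna().unique())), axis=1)
print(df['expected_col'].tolist())
```

Note the sort here is lexicographic on strings, which matches the sample data; numeric columns would need a key or a cast first.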
|
<python><pandas>
|
2023-01-23 14:57:29
| 0
| 741
|
Andre Nevares
|