QuestionId int64 74.8M 79.8M | UserId int64 56 29.4M | QuestionTitle stringlengths 15 150 | QuestionBody stringlengths 40 40.3k | Tags stringlengths 8 101 | CreationDate stringdate 2022-12-10 09:42:47 2025-11-01 19:08:18 | AnswerCount int64 0 44 | UserExpertiseLevel int64 301 888k | UserDisplayName stringlengths 3 30 ⌀ |
|---|---|---|---|---|---|---|---|---|
76,789,552 | 22,213,065 | Move even lines to the end of odd lines, except the last line | <p>I have some txt files in the <code>E:\Desktop\prog\OCR</code> directory, each of which has a format like the following:</p>
<pre><code>Fytytyotyrtyttyran
57.338
CtyOtyBtyOtyL
13.318
AytLtGtyOtyL
10.254
Ayttssemtybtyly
5.33
BtyAtySItyC
2.061
AytryPL
1.53
Lirtysyrtyp
1.466
Ctry
0
Patretsyttrcal
0
1965 Q2
</code></pre>
<p>Now I want to convert the above list to the following format:</p>
<pre><code>Fytytyotyrtyttyran;57.338
CtyOtyBtyOtyL;13.318
AytLtGtyOtyL;10.254
Ayttssemtybtyly;5.33
BtyAtySItyC;2.061
AytryPL;1.53
Lirtysyrtyp;1.466
Ctry;0
Patretsyttrcal;0
1965 Q2
</code></pre>
<p>Note that the last line of each file doesn't need any change.<br />
<strong>I wrote the following Python script for this:</strong></p>
<pre><code>import os

input_directory = r'E:\Desktop\prog\OCR'
output_directory = r'E:\Desktop\prog\OCR\output'

def merge_even_odd_lines(input_path, output_path):
    with open(input_path, 'r', encoding='utf-8') as infile:
        lines = infile.readlines()
    merged_lines = []
    for i in range(0, len(lines), 2):
        if i + 1 < len(lines):
            odd_line = lines[i].strip()
            even_line = lines[i + 1].strip()
            merged_lines.append(f"{odd_line};{even_line}")
        else:
            merged_lines.append(lines[i].strip())
    with open(output_path, 'w', encoding='utf-8') as outfile:
        outfile.write('\n'.join(merged_lines))

def process_files(directory_path):
    if not os.path.exists(output_directory):
        os.makedirs(output_directory)
    for root, _, files in os.walk(directory_path):
        for file in files:
            if file.endswith('.txt'):
                input_file_path = os.path.join(root, file)
                output_file_path = os.path.join(output_directory, file)
                merge_even_odd_lines(input_file_path, output_file_path)

if __name__ == "__main__":
    process_files(input_directory)
    print("Conversion completed successfully.")
</code></pre>
<p><strong>But my script converts my files to the following format:</strong></p>
<pre><code>Fytytyotyrtyttyran;57.338;CtyOtyBtyOtyL;13.318
AytLtGtyOtyL;10.254;Ayttssemtybtyly;5.33
BtyAtySItyC;2.061;AytryPL;1.53
Lirtysyrtyp;1.466;Ctry;0
Patretsyttrcal;0;1965 Q2
</code></pre>
<p><strong>Where is the problem in my script?</strong></p>
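For comparison, here is a self-contained sketch of the intended pairing, independent of the script above (the helper name and sample data are made up for illustration):

```python
# A sketch of the target transformation: join line 1 with line 2,
# line 3 with line 4, and so on, leaving a trailing unpaired line
# untouched.
from itertools import zip_longest

def merge_pairs(lines):
    # zip_longest pairs (0,1), (2,3), ...; the sentinel marks the
    # missing partner when the input has an odd number of lines.
    sentinel = object()
    out = []
    for a, b in zip_longest(lines[::2], lines[1::2], fillvalue=sentinel):
        out.append(a if b is sentinel else f"{a};{b}")
    return out

print(merge_pairs(["Ctry", "0", "Patretsyttrcal", "0", "1965 Q2"]))
# → ['Ctry;0', 'Patretsyttrcal;0', '1965 Q2']
```

Running this against a file's stripped lines gives a reference output to diff against the script's actual output.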
| <python><python-3.x> | 2023-07-28 16:25:57 | 2 | 781 | Pubg Mobile |
76,789,504 | 1,199,464 | Mounting a certificate for using the GKE Kubernetes API | <p>I'm porting a django management command to a new [private] GKE cluster configured with service accounts & workload identity. This command uses the kubernetes API to change settings on the autoscaler for the cluster.</p>
<p>It looks like the API connection requires a token and a certificate. These are bundled up to create the configuration;</p>
<pre><code>configuration = kubernetes.client.Configuration()
configuration.api_key["authorization"] = token
configuration.api_key_prefix["authorization"] = "Bearer"
configuration.host = server
configuration.ssl_ca_cert = cert

api = kubernetes.client.AutoscalingV1Api(
    kubernetes.client.ApiClient(configuration)
)
</code></pre>
<p>The existing project that I'm porting this command from uses defaults for token and certificate which are defined as;</p>
<pre><code>parser.add_argument(
    "--cert",
    action="store",
    dest="cert",
    default="/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
    help="File containing valid certificate to make request",
)
parser.add_argument(
    "--token",
    action="store",
    dest="token",
    type=argparse.FileType("r"),
    default="/var/run/secrets/kubernetes.io/serviceaccount/token",
    help="File containing token to make request",
)
</code></pre>
<p>I've noticed that these aren't added by GKE by default. And looking at the pods for the existing project, I can see that <code>/var/run/secrets</code> doesn't exist. So I think that cluster is able to use this API via its default service account, whereas this new cluster doesn't use that SA.</p>
<p>The error I see when attempting to run this command points at the missing certificate:</p>
<blockquote>
<p>HTTPSConnectionPool(host='10.255.240.1', port=443): Max retries exceeded with url: /apis/autoscaling/v1/namespaces/staging/horizontalpodautoscalers/draft-nginx (Caused by SSLError(FileNotFoundError(2, 'No such file or directory')))</p>
</blockquote>
<p>I found the google <a href="https://cloud.google.com/kubernetes-engine/docs/how-to/kubernetes-service-accounts#assigning_a_kubernetes_service_account_to_a_pod" rel="nofollow noreferrer">docs</a> on how I can mount a token. So the helm for that is in my templates and I've verified the token in a pod;</p>
<pre><code>containers:
  - name: scale-workloads
    image: {{ .Values.gke_registry }}/base_python:{{ .Values.global.build }}
    imagePullPolicy: Always
    command:
      - python -m django
    args:
      - scale_workloads
      - --namespace={{ .Release.Namespace }}
      - --appserver={{ .Values.pods.appserver.minReplicas | default 1 }}
      - --nginx={{ .Values.pods.nginx.minReplicas | default 1 }}
    env:
      {{- include "proj.sharedEnv" $ | nindent 16 }}
      - name: DJANGO_SETTINGS_MODULE
        value: {{ .Values.django_settings_module }}
    resources:
      requests:
        cpu: 1000m
        memory: 500Mi
    volumeMounts:
      - mountPath: /etc/config
        name: configs
      - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        name: ksa-token
volumes:
  - name: configs
    projected:
      defaultMode: 420
      sources:
        - secret:
            name: proj-secrets
  - name: ksa-token
    projected:
      sources:
        - serviceAccountToken:
            path: ksa-token
            expirationSeconds: 86400
            audience: some-oidc-audience
</code></pre>
<p>But can't find any similar docs on mounting a certificate that the cluster either is, or could, be using.</p>
<p>The stacktrace from manually running this management command shows the following;</p>
<pre><code>File "/usr/src/app/drafty/core/management/commands/scale_workloads.py", line 198, in scale_pods
api.patch_namespaced_horizontal_pod_autoscaler(
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api/autoscaling_v1_api.py", line 983, in patch_namespaced_horizontal_pod_autoscaler
return self.patch_namespaced_horizontal_pod_autoscaler_with_http_info(name, namespace, body, **kwargs) # noqa: E501
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api/autoscaling_v1_api.py", line 1098, in patch_namespaced_horizontal_pod_autoscaler_with_http_info
return self.api_client.call_api(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 348, in call_api
return self.__call_api(resource_path, method,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
response_data = self.request(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/api_client.py", line 407, in request
return self.rest_client.PATCH(url,
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/rest.py", line 296, in PATCH
return self.request("PATCH", url,
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/kubernetes/client/rest.py", line 169, in request
r = self.pool_manager.request(
^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/request.py", line 78, in request
return self.request_encode_body(
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/request.py", line 170, in request_encode_body
return self.urlopen(method, url, **extra_kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/poolmanager.py", line 376, in urlopen
response = conn.urlopen(method, u.request_uri, **kw)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 826, in urlopen
return self.urlopen(
^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/connectionpool.py", line 798, in urlopen
retries = retries.increment(
^^^^^^^^^^^^^^^^^^
File "/usr/local/lib/python3.11/site-packages/urllib3/util/retry.py", line 592, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='10.255.240.1', port=443): Max retries exceeded with url: /apis/autoscaling/v1/namespaces/staging/horizontalpodautoscalers/draft-nginx (Caused by SSLError(FileNotFoundError(2, 'No such file or directory')))
</code></pre>
| <python><kubernetes><google-kubernetes-engine><kubernetes-helm> | 2023-07-28 16:18:05 | 1 | 12,944 | markwalker_ |
76,789,314 | 7,486,210 | Python parse a graphqls file | <p>Is there a lib/tool for Python to automatically parse a graphqls file? If not, what is a good way to parse the file?</p>
<p>I have a Python project that is part of a larger Java project that contains a <code>.graphqls</code> file with the <code>enum</code>, <code>input</code>, and <code>type</code> definitions.</p>
<p>For example, the file has:</p>
<pre><code>enum Version {
  A
  B
}

input ProjInput {
  name: String!
  term: String
  version: Version
}

type ProjType {
  name: String
  term: String
  version: Version
}
</code></pre>
<p>The file also contains the queries and mutations:</p>
<pre><code>type Query {
  queryProj(projName: String!): ProjType
}

type Mutation {
  addProj(projInput: ProjInput): ProjType
}
</code></pre>
<p>I looked around but I can't find a tool to parse this data into a dict (or some other type) in Python. It would be nice to just do something like parsing json with <code>json.loads()</code>.
Does such a lib/tool exist? And if not, how can I parse this file myself?</p>
<p>I can open and read the file, I am just not sure if just reading it line-by-line is a good method, or if I can automatically read <code>enum</code> types, for example.</p>
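If a full parser is overkill, a minimal hand-rolled sketch can pull the enum blocks out with a regex. The function below is purely illustrative, not a robust GraphQL parser; a dedicated SDL parser (for example the graphql-core package) is worth evaluating for real schema files:

```python
# Extract enum definitions from a GraphQL SDL string into a dict.
# This only handles the simple "enum Name { A B ... }" shape and
# ignores comments/directives -- a sketch, not a full parser.
import re

def parse_enums(schema_text):
    enums = {}
    for match in re.finditer(r"enum\s+(\w+)\s*\{([^}]*)\}", schema_text):
        name, body = match.groups()
        enums[name] = body.split()
    return enums

schema = """
enum Version {
  A
  B
}
"""
print(parse_enums(schema))  # → {'Version': ['A', 'B']}
```

The same `re.finditer` pattern can be extended with separate regexes for `input` and `type` blocks if a dict of fields is all that's needed.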
| <python><python-3.x><graphql><graphql-schema> | 2023-07-28 15:48:05 | 0 | 428 | Ebad |
76,789,301 | 3,685,918 | How can I change some, but not all, column names when using pd.read_excel? | <p>When importing <code>.xlsx</code> using <a href="https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.read_excel.html" rel="nofollow noreferrer">pd.read_excel()</a>,
how can I change the column names partially?</p>
<p>For example,
Excel document <em>data.xlsx</em> consists of 99 columns, like <code>col_1</code>, <code>col_2</code>, <code>col_3</code> .... <code>col_99</code>.</p>
<p>I'd like to only rename like the dictionary <code>rename = {'col_1' : 'ID', 'col_2' : 'name', 'col_3' : 'score'}</code></p>
<p>As for the other columns <code>col_4</code> ~ <code>col_99</code>, there isn't any need to rename them.</p>
<p><code>pd.read_excel('data.xlsx')</code> has the option <code>names = []</code>, but it needs the entire column names to be overwritten.</p>
<p>Is there another way to only change some of the column names, when using <code>pd.read_excel()</code>?</p>
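A hedged sketch of one common approach: read the sheet as-is, then rename only the mapped columns afterwards — <code>DataFrame.rename</code> leaves any labels it does not find untouched. The small DataFrame below stands in for the real <em>data.xlsx</em>:

```python
# Rename only a subset of columns after reading; rename() ignores
# labels that are not present in the mapping.
import pandas as pd

rename = {"col_1": "ID", "col_2": "name", "col_3": "score"}

# With a real file this would be, e.g.:
#   df = pd.read_excel("data.xlsx").rename(columns=rename)
df = pd.DataFrame(columns=["col_1", "col_2", "col_3", "col_4"])
df = df.rename(columns=rename)

print(list(df.columns))  # → ['ID', 'name', 'score', 'col_4']
```

This keeps `col_4` through `col_99` untouched without having to spell out all 99 names.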
| <python><pandas><excel><dataframe> | 2023-07-28 15:46:21 | 1 | 427 | user3685918 |
76,789,219 | 2,983,568 | What is the impact of having freq=None instead of a specific value as a DatetimeIndex attribute? | <p>These 2 ways of creating a date column and setting it as the index of the dataframe have <em>almost</em> the same result, except that the <code>freq</code> attribute of the index is <code>None</code> in the first case and <code>M</code> in the second one:</p>
<pre class="lang-python prettyprint-override"><code># This sets the index frequency to None
df["Date"] = pd.date_range("1864-jan", periods=df.shape[0], freq="M")
df.set_index("Date", inplace=True)
# This sets the index frequency to M
df.set_index(pd.date_range("1864-jan", periods=df.shape[0], freq="M"), inplace=True)
df.index.name = "Date"
</code></pre>
<ol>
<li>I assume this is because in the first case we first create a <em>column</em> and the freq attribute provided in the <code>date_range</code> function is lost in the <code>set_index</code> operation. Is this correct?</li>
<li>What is the impact/consequence of an index that has no <code>freq</code> defined? Are some operations dependent on this attribute and hence going to fail?</li>
</ol>
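A small sketch reproducing the difference the question describes (it uses <code>freq="D"</code> rather than <code>"M"</code> to sidestep the deprecation of the <code>"M"</code> alias in newer pandas — an assumption about the pandas version):

```python
# Route 1 stores the DatetimeIndex in a column first; the freq
# attribute does not survive the round-trip through a Series.
# Route 2 sets the DatetimeIndex directly and keeps freq.
import pandas as pd

df = pd.DataFrame({"v": range(3)})

a = df.copy()
a["Date"] = pd.date_range("1864-01-01", periods=3, freq="D")
a = a.set_index("Date")

b = df.set_index(pd.date_range("1864-01-01", periods=3, freq="D"))

print(a.index.freq, b.index.freq)
```

As for the impact: a `freq` of `None` mainly bites in frequency-dependent operations (e.g. shifting the index without an explicit `freq`, or anything that requires a regular frequency); it can often be restored afterwards with `df.index.freq = df.index.inferred_freq`.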
| <python><pandas><datetime><indexing> | 2023-07-28 15:34:56 | 0 | 4,665 | evilmandarine |
76,789,166 | 5,722,359 | What is the correct value to use for the `default` option in `tkinter.messagebox`? | <p>I cannot get the <code>default</code> option of <code>tkinter.messagebox</code> to work. May I know what is the correct value to use for the <code>default</code> option?</p>
<p>Sample code:</p>
<pre><code>import tkinter as tk
import tkinter.messagebox as messagebox

root = tk.Tk()

def confirmExit():
    # mbox = messagebox.askokcancel("Quit", "Shut down?", icon="question",
    #                               default="yes")  # don't work
    mbox = messagebox.askokcancel("Quit", "Shut down?", icon="question",
                                  default=tk.YES)  # don't work
    if mbox:
        root.destroy()

root.protocol('WM_DELETE_WINDOW', confirmExit)
root.mainloop()
</code></pre>
<p>Error msg 1:</p>
<pre><code> File "/usr/lib/python3.10/tkinter/messagebox.py", line 76, in _show
res = Message(**options).show()
File "/usr/lib/python3.10/tkinter/commondialog.py", line 45, in show
s = master.tk.call(self.command, *master._options(self.options))
_tkinter.TclError: bad -default value "yes": must be abort, retry, ignore, ok, cancel, no, or yes
</code></pre>
<p>Error msg 2:</p>
<pre><code>File "/usr/lib/python3.10/tkinter/messagebox.py", line 76, in _show
res = Message(**options).show()
File "/usr/lib/python3.10/tkinter/commondialog.py", line 45, in show
s = master.tk.call(self.command, *master._options(self.options))
_tkinter.TclError: bad -default value "1": must be abort, retry, ignore, ok, cancel, no, or yes
</code></pre>
| <python><tkinter><tkmessagebox> | 2023-07-28 15:27:05 | 1 | 8,499 | Sun Bear |
76,788,983 | 1,333,294 | Limit field permissions on Graphene Django | <p>Lets say I got a user Type</p>
<pre><code>class UserType(DjangoObjectType):
    class Meta:
        model = User
        fields = [
            "fieldA",
            "fieldB",
            "relationshipA",
            "relationshipB",
        ]
</code></pre>
<p>And I want <code>fieldB</code> and <code>relationshipB</code> to be visible only to the owner (user).<br />
What is the best strategy to do this?<br />
Initially I've created a <code>PublicUserType</code> excluding the private fields, but I quickly realized that it might not be scalable: not only will I have to create a private representation of <code>UserType</code>, I might also have to create private representations of every relationship (<code>relationshipA</code>, etc.), plus the proper resolvers, duplicated fragments, etc.<br />
Is there a best practice here?</p>
| <python><django><graphene-django> | 2023-07-28 15:02:01 | 1 | 989 | ItayAmza |
76,788,820 | 3,734,914 | Add `activate.sh` and `deactivate.sh` to Conda Environment Created From YAML FIle | <p>I want to include <code>activate.sh</code> and <code>deactivate.sh</code> scripts in a Conda environment that is created for users when they do <code>conda env create -f environment.yaml</code>. Is there a way to have these scripts automatically copied into the <code>$CONDA_PREFIX/etc/conda</code> directory when it's created? Or do I need to write a script that creates the environment, and then copies the files manually?</p>
| <python><conda><miniconda><mamba> | 2023-07-28 14:42:00 | 1 | 9,017 | Batman |
76,788,727 | 1,668,622 | How can I change the debug-level and format for the Quart (i.e. hypercorn) logger? | <p>I'm trying to set the <code>level</code> and <code>format</code> for the loggers used by the Quart module the way I did it successfully for other 'foreign' loggers:</p>
<ul>
<li>by running <code>basicConfig</code> and implicitly setting up the root-logger or later</li>
<li>by running <code>logging.getLogger("urllib3.connectionpool").setLevel(logging.INFO)</code> to get
and modify an existing logger</li>
</ul>
<p>However those approaches don't work for the loggers spawned by Quart. Neither are those affected by <code>basicConfig</code> nor can I set the level. Output will always look like this:</p>
<pre><code>[2023-07-28 16:17:12 +0200] [1254610] [INFO] Running on http://0.0.0.0:5432 (CTRL + C to quit)
</code></pre>
<p>Setting breakpoints in <code>logging/__init__.py</code> let the program break on log messages by <code>hypercorn.error</code> (so it seems to use the same module), but setting the level like this</p>
<pre><code>logging.getLogger("hypercorn.error").setLevel(logging.WARNING)
</code></pre>
<p>doesn't have any effect.</p>
<p>The <a href="https://quart.palletsprojects.com/en/latest/how_to_guides/logging.html" rel="noreferrer">doc</a> says I should use <code>dictConfig</code>, so I've added</p>
<pre><code>dictConfig({
    'version': 1,
    'loggers': {
        'quart.app': {'level': 'ERROR'},
        'hypercorn.error': {'level': 'ERROR'},
    },
})
</code></pre>
<p>.. no effect</p>
<p>I found <a href="https://github.com/pgjones/hypercorn/issues/120" rel="noreferrer">https://github.com/pgjones/hypercorn/issues/120</a>, and tried</p>
<pre><code>logger = logging.getLogger("hypercorn.error")
logger.addHandler(my_own_handler)
logger.setLevel(logging.WARNING)
logger.propagate = False
</code></pre>
<p>but also without effect.</p>
<p>What else can I try?</p>
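For reference, a stdlib-only sketch of a <code>dictConfig</code> that attaches a handler as well as a level. A loggers-only config like the one above changes levels but emits nothing if no handler is ever attached, and a library that runs its own logging setup <em>after</em> this code can still override it — whether hypercorn does so depends on its version:

```python
# Configure the hypercorn.error logger with an explicit handler,
# formatter and level, and keep pre-existing loggers enabled.
import logging
from logging.config import dictConfig

dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "plain": {"format": "%(asctime)s %(levelname)s %(name)s %(message)s"},
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "plain"},
    },
    "loggers": {
        "hypercorn.error": {"level": "WARNING", "handlers": ["console"],
                            "propagate": False},
    },
})

print(logging.getLogger("hypercorn.error").level)  # → 30 (WARNING)
```

Hypercorn's own `Config` object also accepts logging configuration (a `logconfig_dict`-style setting in recent versions), which may be the more reliable hook since it is applied when the server starts — worth checking against the hypercorn docs for the installed version.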
| <python><logging><quart><hypercorn> | 2023-07-28 14:28:11 | 2 | 9,958 | frans |
76,788,556 | 8,948,544 | Flask_SQLAlchemy can't read structure of a table created in SQLAlchemy | <p>I created a table using SQLAlchemy:</p>
<pre><code>engine = create_engine('sqlite:///sd.sqlite')

images = Table(
    'images', meta,
    Column('id', Integer, primary_key=True),
    Column('url', String),
    ...
)

meta.create_all(engine)
</code></pre>
<p>This works, I have a database and can fill it with data. I want to create a Flask application to access this data using Flask_SQLAlchemy, but can't get it to work:</p>
<pre><code>from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////sd.sqlite'
db = SQLAlchemy(app)
Base = db.declarative_base()

class Images(db.Model):
    with app.app_context():  # needed to fix application context error
        __table__ = db.Table('images', Base.metadata),
    """ with app.app_context():
        db.Model.metadata.reflect(db.engine) """

@app.route("/")
def hello_world():
    images = Images.query.all()
    return images

app.run(port=8131)
</code></pre>
<p>This gives the following error:</p>
<blockquote>
<p>Mapper Mapper[Images(images)] could not assemble any primary key columns for mapped table 'images'</p>
</blockquote>
<p>But I specified a primary key. What am I doing wrong?</p>
| <python><sqlite><flask-sqlalchemy> | 2023-07-28 14:05:24 | 1 | 331 | Karthik Sankaran |
76,788,293 | 1,915,846 | How to include the inputs of the first chain to the second chain in LangChain's SequentialChain? | <p>I receive a list of inputs <code>['a', 'b']</code> for the first chain in a <code>SequentialChain</code>, and then I have a second chain that receives as input <code>['a', 'c']</code>, where <code>c</code> is the output of the first one. I see that <code>SequentialChain</code> passes only the outputs of the first chain (<code>c</code>) to the second one. Is there currently a way to do this, without having to implement my own <code>SequentialChain</code> equivalent?</p>
| <python><langchain><py-langchain> | 2023-07-28 13:32:14 | 2 | 912 | cserpell |
76,788,090 | 3,121,975 | Creating generic type argument bound to Exception in Python | <p>I'm trying to create a generic type for an exception, to be sent to a method. This is what I have, currently:</p>
<pre><code>E = TypeVar("E", bound=Exception)
</code></pre>
<p>I would annotate the argument like this:</p>
<pre><code>def parse_td_value(
    self,
    data: Tag,
    name: str,
    parser: Callable[[str], T],
    exc_type: Type[E] = ValueError,
) -> T:
    ...
</code></pre>
<p>However, when I do this mypy gives me an error:</p>
<blockquote>
<p>Incompatible default for argument "exc_type" (default has type "Type[ValueError]", argument has type "Type[E]")</p>
</blockquote>
<p>I have attempted to resolve this issue using overloads, as described <a href="https://stackoverflow.com/questions/75058589/annotating-function-with-typevar-and-default-value-results-in-union-type">here</a>, but to no avail. I'm assuming this has something to do with the fact that the bound condition of <code>E</code> is different than the type of the default argument for the method, <code>ValueError</code>.</p>
<p>To my mind, this doesn't make sense because <code>ValueError</code> should be a <code>E</code>. So, why is mypy disagreeing here and how can I make this annotation work?</p>
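The wrinkle is that the <em>caller</em>, not the function, chooses what <code>E</code> binds to, so a default of <code>Type[ValueError]</code> is not valid for every possible <code>E</code> (a caller could legitimately request <code>E = KeyError</code>). One hedged workaround — names simplified from the original signature — is a <code>None</code> sentinel that is narrowed inside the function:

```python
# Use None as the sentinel so the annotated default no longer has to
# unify ValueError with an arbitrary caller-chosen E.
from typing import Callable, Optional, Type, TypeVar

T = TypeVar("T")
E = TypeVar("E", bound=Exception)

def parse_value(raw: str, parser: Callable[[str], T],
                exc_type: Optional[Type[E]] = None) -> T:
    exc: Type[Exception] = exc_type if exc_type is not None else ValueError
    try:
        return parser(raw)
    except Exception as e:
        raise exc(str(e)) from e

print(parse_value("42", int))  # → 42
```

The alternative is an `@overload` pair (one signature without `exc_type`, one with), which keeps the `Type[E]` return relationship precise at the cost of more boilerplate.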
| <python><exception><mypy><python-typing> | 2023-07-28 13:04:45 | 0 | 8,192 | Woody1193 |
76,788,026 | 1,732,969 | Python Wand auto rotate and deskew not working | <p>I have a medical form page that is not properly oriented (it's rotated 90 degrees clockwise) and it has text skew. I want to use Wand methods to auto orient the page to be human legible, and also to de-skew the text.
So far, I've tested and I think that de-skew works. Here are my questions:</p>
<ol>
<li><p>I don't understand the threshold parameter in the <a href="https://docs.wand-py.org/en/0.6.11/wand/image.html#wand.image.BaseImage.deskew" rel="nofollow noreferrer">de-skew method</a>. What is the meaning of that "limit"? How can I parametrize that value to not be hardcoded?</p>
</li>
<li><p><a href="https://docs.wand-py.org/en/0.6.11/wand/image.html#wand.image.BaseImage.auto_orient" rel="nofollow noreferrer">Auto orient method</a> is not working at all. And it's not returning any value or log to know why.</p>
</li>
</ol>
<p>This is the code that I'm using with the threshold hardcoded as 0.9 and attached is the sample file.</p>
<p><strong>Anyone knows why auto_orient is not working?</strong></p>
<pre><code>from wand.image import Image

with Image(filename='dummy_edu_form.jpeg') as img:
    img.auto_orient()
    img.deskew(0.9 * img.quantum_range)
    img.save(filename='wand_output.jpeg')
</code></pre>
<p>OS and library information:</p>
<ul>
<li>Wand==0.6.11</li>
<li>Version: ImageMagick 6.9.11-60 Q16 x86_64 2021-01-25 <a href="https://imagemagick.org" rel="nofollow noreferrer">https://imagemagick.org</a></li>
<li>Python: 3.9.17</li>
<li>OS: Operating System: Linux Mint 21, Kernel: Linux 5.15.0-76-generic, Architecture: x86-64. This information should not be relevant because this code is intended to work in an AWS Lambda.</li>
</ul>
<p><a href="https://i.sstatic.net/7qabQ.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7qabQ.jpg" alt="dummy_edu_form.jpeg" /></a></p>
<p>Edit: I've also added this picture, taken in landscape. The result is the same.
<a href="https://i.sstatic.net/WL9vD.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WL9vD.jpg" alt="enter image description here" /></a></p>
| <python><python-3.x><image-processing><imagemagick><wand> | 2023-07-28 12:57:06 | 0 | 1,593 | eduardosufan |
76,788,010 | 466,844 | Twitter API V2 Follows lookup | <p>I am trying to retrieve a list of who a Twitter account follows using Python.</p>
<p>After realizing that the free-tier API access did not provide this endpoint, I upgraded my developer account to the Basic plan (for $100 a month), as it clearly states that once signed up, you can retrieve an account's followers or following.</p>
<p>I have created a script that should retrieve the followers of an account based on a user id -</p>
<pre><code>import requests

def get_followers(user_id, bearer_token):
    url = f'https://api.twitter.com/2/users/{user_id}/followers'
    headers = {'Authorization': f'Bearer {bearer_token}'}
    all_followers = []
    while True:
        response = requests.get(url, headers=headers)
        if response.status_code == 200:
            result_data = response.json()
            all_followers.extend(result_data['data'])
            if 'next_token' in result_data['meta']:
                next_token = result_data['meta']['next_token']
                url = f'https://api.twitter.com/2/users/{user_id}/followers?pagination_token={next_token}'
            else:
                break
        else:
            print(f"Failed to get followers for user ID '{user_id}' with status code: {response.status_code}")
            print(response.json())
            return None
    return all_followers
</code></pre>
<p>However I am getting the following (quite common it would seem) error response -</p>
<pre><code>{
"client_id":"my-id",
"detail":"When authenticating requests to the Twitter API v2 endpoints, you must use keys and tokens from a Twitter developer App that is attached to a Project. You can create a project via the developer portal.",
"registration_url":"https://developer.twitter.com/en/docs/projects/overview",
"title":"Client Forbidden",
"required_enrollment":"Appropriate Level of API Access",
"reason":"client-not-enrolled",
"type":"https://api.twitter.com/2/problems/client-forbidden"
}
</code></pre>
<p>I have made sure that my application is located within a project that has the <code>V2 ACCESS</code> tag associated to it.</p>
<p>I also tried using <a href="https://www.tweepy.org/" rel="nofollow noreferrer">Tweepy</a> however was met with the same error response.</p>
<p>Also on reading the specific page in the <a href="https://developer.twitter.com/en/docs/twitter-api/users/follows/introduction" rel="nofollow noreferrer">Twitter docs</a>, the quick start guide AND the API explorer buttons both leads to broken links!</p>
| <python><twitter><twitter-api-v2> | 2023-07-28 12:55:10 | 1 | 2,484 | Ebikeneser |
76,787,930 | 13,706,389 | Detect downhill segments | <p>I have elevation data along a specific path. I'd like to detect downhill segments on this path.
I managed to find the location of local peaks and valleys through the <code>scipy.signal</code> function <code>find_peaks</code>.
And then used the following code to plot the data and add the downhill segments in red:</p>
<pre class="lang-py prettyprint-override"><code>from scipy.signal import find_peaks
import matplotlib.pyplot as plt
import numpy as np
elev = np.array([1601.9, 1584.7, 1583.3, 1587.4, 1608.4, 1608.2, 1608.6, 1604.1, 1607.1, 1619.1, 1616.1, 1615.6, 1619.3, 1611.4, 1602.0, 1597.1, 1587.5, 1588.6, 1571.3, 1556.2, 1516.4, 1497.2, 1497.5, 1493.5, 1490.0, 1495.4, 1507.4, 1482.1, 1481.1, 1480.2, 1472.2, 1471.8, 1453.9, 1456.2, 1446.1, 1457.5, 1447.8, 1447.0, 1447.6, 1443.3, 1440.5, 1434.4, 1433.9, 1434.5, 1435.7, 1437.4, 1436.3, 1426.9, 1451.7, 1471.0, 1474.2, 1480.8, 1482.0, 1480.8, 1475.3, 1474.9, 1474.8, 1476.7, 1476.6, 1477.4, 1475.9, 1478.2, 1491.6, 1495.2, 1496.9, 1495.1, 1495.3, 1487.5, 1487.7, 1484.6, 1483.4, 1482.7, 1481.5, 1481.9, 1481.6, 1487.7, 1494.5, 1495.0, 1494.9, 1492.9, 1487.2, 1476.4, 1475.8, 1476.4, 1476.6, 1448.0, 1478.4, 1464.6, 1492.3, 1466.2, 1435.1, 1454.3, 1455.2, 1452.3, 1451.8, 1451.7, 1451.3, 1452.1, 1454.9, 1454.6, 1454.2, 1439.9, 1464.1, 1484.4, 1490.4, 1471.3, 1480.6, 1447.9, 1474.8, 1469.3, 1470.4, 1470.0, 1469.2, 1473.6, 1473.8, 1490.0, 1490.8, 1491.2, 1490.0, 1487.6, 1486.4, 1486.6, 1491.5, 1495.2, 1496.8, 1495.0, 1495.6, 1487.4, 1487.4, 1484.6, 1483.4, 1482.0, 1482.7, 1482.4, 1481.6, 1481.9, 1481.6, 1487.4, 1494.5, 1494.7, 1495.1, 1494.1, 1492.3, 1492.9, 1487.3, 1476.3, 1476.0, 1476.6, 1476.6, 1431.7, 1479.9, 1466.1, 1492.0, 1472.1, 1465.5, 1437.7, 1448.2, 1462.7, 1463.1, 1500.5, 1493.0, 1461.8, 1483.7, 1462.0, 1451.8, 1431.0, 1418.8, 1405.5, 1403.7, 1372.9, 1380.6, 1385.0, 1388.1, 1392.9, 1413.8])
peaks,_ = find_peaks(elev)
valleys, _ = find_peaks(-elev)
plt.plot(elev)
plt.plot(peaks, elev[peaks], 'x')
plt.plot(valleys, elev[valleys], 'x')
if peaks[0] < valleys[0]:
    for i in range(len(peaks)-1):
        plt.plot(range(peaks[i], valleys[i]+1), elev[peaks[i]:valleys[i]+1], 'r')
else:
    for i in range(len(peaks)):
        plt.plot(range(peaks[i], valleys[i+1]+1), elev[peaks[i]:valleys[i+1]+1], 'r')
plt.show()
</code></pre>
<p>This works and gives the following result:<a href="https://i.sstatic.net/bTUpk.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bTUpk.png" alt="enter image description here" /></a></p>
<p>However, I don't want every little peak to show up so I added the prominence parameter to 10:</p>
<pre class="lang-py prettyprint-override"><code>peaks,_ = find_peaks(elev, prominence=10)
valleys, _ = find_peaks(-elev, prominence=10)
plt.plot(elev)
plt.plot(peaks, elev[peaks], 'x')
plt.plot(valleys, elev[valleys], 'x')
if peaks[0] < valleys[0]:
    for i in range(len(peaks)-1):
        plt.plot(range(peaks[i], valleys[i]+1), elev[peaks[i]:valleys[i]+1], 'r')
else:
    for i in range(len(peaks)):
        plt.plot(range(peaks[i], valleys[i+1]+1), elev[peaks[i]:valleys[i+1]+1], 'r')
plt.show()
</code></pre>
<p>Which gives: <a href="https://i.sstatic.net/frrO8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/frrO8.png" alt="enter image description here" /></a>
This also works fine until around index 135, where there are two valleys without a peak in between.
Is there a way to solve this?</p>
<p>TL;DR: I'm looking for a way to indicate valleys and peaks with a given prominence in a signal where the valleys and peaks alternate.</p>
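One possible direction (a sketch, not a drop-in fix): merge the two index lists and collapse consecutive same-type extrema, keeping the highest peak / lowest valley of each run, so that peaks and valleys strictly alternate before any pairwise plotting:

```python
# Enforce alternation on the outputs of find_peaks: merge peak and
# valley indices, then within each run of the same type keep only
# the most extreme point.
def alternate(elev, peaks, valleys):
    marked = sorted([(i, "p") for i in peaks] + [(i, "v") for i in valleys])
    out = []
    for i, kind in marked:
        if out and out[-1][1] == kind:
            j, _ = out[-1]
            better = (elev[i] > elev[j]) if kind == "p" else (elev[i] < elev[j])
            if better:
                out[-1] = (i, kind)
        else:
            out.append((i, kind))
    return out

elev = [0, 5, 2, 6, 1, 4, 0]
# pretend prominence filtering left two peaks in a row, no valley kept
print(alternate(elev, peaks=[1, 3], valleys=[4]))
# → [(3, 'p'), (4, 'v')]
```

The alternating `(index, kind)` pairs can then be walked directly to draw each peak-to-valley stretch in red, with no assumption about which type comes first.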
| <python><scipy> | 2023-07-28 12:44:36 | 1 | 684 | debsim |
76,787,817 | 3,130,747 | How to parameterise multiple tuples using psycopg2, without using an f-string | <p>I'm wondering how I would parameterise the following Postgres statement:</p>
<pre class="lang-sql prettyprint-override"><code>UPDATE your_table as t
SET val = x.val_to
FROM ( VALUES
(1, 20),
(3, 44)
) as x (val_from, val_to)
WHERE t.val = x.val_from
AND
your_table.other_id = %(other_id_value)
;
</code></pre>
<p>where <code>(1, 20), (3, 44)</code> would be parameterised using psycopg2.</p>
<p>Using :</p>
<pre class="lang-py prettyprint-override"><code>cur.execute("""
UPDATE your_table as t
SET val = x.val_to
FROM ( VALUES
$(vals)s
) as x (val_from, val_to)
WHERE t.val = x.val_from
AND
your_table.other_id = %(other_id_value)s
;
""", {
    'other_id_value': 3843,
    'vals': [(1, 20), (3, 44)]
})
</code></pre>
<p>Doesn't work.</p>
<p>In psycopg2 <a href="https://www.psycopg.org/docs/extras.html" rel="nofollow noreferrer">https://www.psycopg.org/docs/extras.html</a> they have:</p>
<pre class="lang-py prettyprint-override"><code>>>> execute_values(cur,
... """UPDATE test SET v1 = data.v1 FROM (VALUES %s) AS data (id, v1)
... WHERE test.id = data.id""",
... [(1, 20), (4, 50)])
</code></pre>
<p>But I need to parameterise the <code>data.id</code> there as well, not just the <code>VALUES</code>.</p>
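A hedged sketch (no live database required) of one way around this: generate one <code>(%s, %s)</code> placeholder group per tuple and flatten the parameters, so the tuples and <code>other_id</code> all travel through ordinary parameterisation. <code>build_update</code> is an illustrative helper, not psycopg2 API:

```python
# Build the SQL with one "(%s, %s)" per value pair plus a trailing
# %s for other_id, and a flat parameter list to match.
def build_update(vals, other_id):
    rows_sql = ", ".join(["(%s, %s)"] * len(vals))
    sql = (
        "UPDATE your_table AS t SET val = x.val_to "
        f"FROM (VALUES {rows_sql}) AS x (val_from, val_to) "
        "WHERE t.val = x.val_from AND t.other_id = %s"
    )
    params = [n for pair in vals for n in pair] + [other_id]
    return sql, params

sql, params = build_update([(1, 20), (3, 44)], 3843)
print(params)  # → [1, 20, 3, 44, 3843]
```

The pair would then be executed as `cur.execute(sql, params)`, letting psycopg2 do all the quoting; only the placeholder *count* is interpolated, never the values themselves.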
| <python><postgresql><psycopg2> | 2023-07-28 12:28:32 | 3 | 4,944 | baxx |
76,787,791 | 5,379,182 | Can uuid4 collide when generated by python processes on different kubernetes pods | <p>When having replicas of the same pod, each with a single container running a single Python process on Kubernetes, will each Python process still generate <code>uuid.uuid4()</code> IDs that are practically unique across all pods, or can it happen that two Python processes use the same seed when calling the <code>os.urandom</code> function?</p>
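For background: CPython's <code>uuid4()</code> builds each UUID from 16 fresh bytes of <code>os.urandom()</code> on every call — there is no process-level seed two pods could share — so the collision question reduces to the 122 random bits per UUID, not to seeding. A quick sketch:

```python
# Each uuid4() call draws independent bytes from the OS CSPRNG, so
# generating many in one process (or across processes/pods) should
# never repeat in practice.
import uuid

ids = {uuid.uuid4() for _ in range(100_000)}
print(len(ids))  # collisions are astronomically unlikely at this scale
```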
| <python><random><uuid> | 2023-07-28 12:24:56 | 2 | 3,003 | tenticon |
76,787,676 | 12,052,180 | Unable to pyenv install python version on macOS Ventura 13.5 | <p>I am trying to install a python version using <code>pyenv</code>, e.g.</p>
<pre><code>pyenv install 3.11.4
</code></pre>
<p>but I am always getting this message</p>
<pre><code>python-build: use openssl@1.1 from homebrew
python-build: use readline from homebrew
Downloading Python-3.11.4.tar.xz...
-> https://www.python.org/ftp/python/3.11.4/Python-3.11.4.tar.xz
Installing Python-3.11.4...
python-build: use tcl-tk from homebrew
python-build: use readline from homebrew
BUILD FAILED (OS X 13.5 using python-build 20180424)
Inspect or clean up the working tree at /var/folders/kv/l0jzxgbj1kggff_kd35bfzkw0000gn/T/python-build.20230728132126.31601
Results logged to /var/folders/kv/l0jzxgbj1kggff_kd35bfzkw0000gn/T/python-build.20230728132126.31601.log
Last 10 log lines:
checking pkg-config is at least version 0.9.0... yes
checking for --enable-universalsdk... no
checking for --with-universal-archs... no
checking MACHDEP... "darwin"
checking for gcc... x86_64-apple-darwin13.4.0-clang
checking whether the C compiler works... no
configure: error: in `/var/folders/kv/l0jzxgbj1kggff_kd35bfzkw0000gn/T/python-build.20230728132126.31601/Python-3.11.4':
configure: error: C compiler cannot create executables
See `config.log' for more details
make: *** No targets specified and no makefile found. Stop.
</code></pre>
<ul>
<li>macOS Ventura 13.5</li>
<li>CommandLineTools version: 14.3.1.0.1.1683849156</li>
</ul>
<p>I found and tried <a href="https://github.com/pyenv/pyenv/issues/2143" rel="nofollow noreferrer">this</a> and <a href="https://stackoverflow.com/questions/62169855/homebrew-pyenv-cant-install-python-3-8-3-despite-i-already-have-it-installe">this</a> but both did not work for me. I am still getting the same error.</p>
<p>Can someone assist me with this?</p>
| <python><xcode><macos><pyenv> | 2023-07-28 12:09:30 | 0 | 802 | PeeteKeesel |
76,787,668 | 8,000,016 | Could not build wheels for xmlsec, which is required to install pyproject.toml-based projects | <p>I'm trying to install a requirements.txt file on a virtual env created from pyenv on a Mac:</p>
<pre><code>arch -x86_64 pyenv install 3.10.11
pyenv virtualenv 3.10.11 venv_academy_olf
pyenv activate venv_academy_olf
</code></pre>
<p>This is the traceback error:</p>
<pre><code>Building wheel for xmlsec (pyproject.toml) ... error
error: subprocess-exited-with-error
× Building wheel for xmlsec (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [65 lines of output]
running bdist_wheel
running build
running build_py
creating build
creating build/lib.macosx-13.4-x86_64-cpython-310
creating build/lib.macosx-13.4-x86_64-cpython-310/xmlsec
copying src/xmlsec/py.typed -> build/lib.macosx-13.4-x86_64-cpython-310/xmlsec
copying src/xmlsec/tree.pyi -> build/lib.macosx-13.4-x86_64-cpython-310/xmlsec
copying src/xmlsec/__init__.pyi -> build/lib.macosx-13.4-x86_64-cpython-310/xmlsec
copying src/xmlsec/constants.pyi -> build/lib.macosx-13.4-x86_64-cpython-310/xmlsec
copying src/xmlsec/template.pyi -> build/lib.macosx-13.4-x86_64-cpython-310/xmlsec
running build_ext
building 'xmlsec' extension
creating build/temp.macosx-13.4-x86_64-cpython-310
creating build/temp.macosx-13.4-x86_64-cpython-310/private
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9
creating build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src
clang -Wno-unused-result -Wsign-compare -Wunreachable-code -DNDEBUG -g -fwrapv -O3 -Wall -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX.sdk/usr/include -I/usr/local/opt/libxml2/include -D__XMLSEC_FUNCTION__=__func__ -DXMLSEC_NO_FTP=1 -DXMLSEC_NO_MD5=1 -DXMLSEC_NO_GOST=1 -DXMLSEC_NO_GOST2012=1 -DXMLSEC_NO_CRYPTO_DYNAMIC_LOADING=1 -DXMLSEC_CRYPTO_OPENSSL=1 -DMODULE_NAME=xmlsec -DMODULE_VERSION=1.3.13 -I/usr/local/Cellar/libxml2/2.11.4_1/include/libxml2 -I/usr/local/Cellar/libxmlsec1/1.3.1_1/include/xmlsec1 -I/usr/local/opt/openssl@3/include -I/usr/local/opt/openssl@3/include/openssl -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml/includes -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml/includes/libxml -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml/includes/libxslt -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml/includes/libexslt -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml/includes/extlibs -I/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-build-env-1vqledsx/overlay/lib/python3.10/site-packages/lxml/includes/__pycache__ -I/Users/albertosanmartinmartinez/.pyenv/versions/3.10.11/envs/venv_academy_olf/include -I/Users/albertosanmartinmartinez/.pyenv/versions/3.10.11/include/python3.10 -c 
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/constants.c -o build/temp.macosx-13.4-x86_64-cpython-310/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/constants.o -g -std=c99 -fPIC -fno-strict-aliasing -Wno-error=declaration-after-statement -Werror=implicit-function-declaration -Os
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/constants.c:319:5: error: use of undeclared identifier 'xmlSecSoap11Ns'
PYXMLSEC_ADD_NS_CONSTANT(Soap11Ns, "SOAP11");
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/constants.c:304:46: note: expanded from macro 'PYXMLSEC_ADD_NS_CONSTANT'
tmp = PyUnicode_FromString((const char*)(JOIN(xmlSec, name))); \
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/common.h:19:19: note: expanded from macro 'JOIN'
#define JOIN(X,Y) DO_JOIN1(X,Y)
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/common.h:20:23: note: expanded from macro 'DO_JOIN1'
#define DO_JOIN1(X,Y) DO_JOIN2(X,Y)
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/common.h:21:23: note: expanded from macro 'DO_JOIN2'
#define DO_JOIN2(X,Y) X##Y
^
<scratch space>:81:1: note: expanded from here
xmlSecSoap11Ns
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/constants.c:320:5: error: use of undeclared identifier 'xmlSecSoap12Ns'; did you mean 'xmlSecXPath2Ns'?
PYXMLSEC_ADD_NS_CONSTANT(Soap12Ns, "SOAP12");
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/constants.c:304:46: note: expanded from macro 'PYXMLSEC_ADD_NS_CONSTANT'
tmp = PyUnicode_FromString((const char*)(JOIN(xmlSec, name))); \
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/common.h:19:19: note: expanded from macro 'JOIN'
#define JOIN(X,Y) DO_JOIN1(X,Y)
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/common.h:20:23: note: expanded from macro 'DO_JOIN1'
#define DO_JOIN1(X,Y) DO_JOIN2(X,Y)
^
/private/var/folders/yv/mbl800td04xb4pd9hctdshd00000gn/T/pip-install-mfm_1tzg/xmlsec_8952d61e0e764874bfd13520e20f21f9/src/common.h:21:23: note: expanded from macro 'DO_JOIN2'
#define DO_JOIN2(X,Y) X##Y
^
<scratch space>:83:1: note: expanded from here
xmlSecSoap12Ns
^
/usr/local/Cellar/libxmlsec1/1.3.1_1/include/xmlsec1/xmlsec/strings.h:34:33: note: 'xmlSecXPath2Ns' declared here
XMLSEC_EXPORT_VAR const xmlChar xmlSecXPath2Ns[];
^
2 errors generated.
error: command '/usr/bin/clang' failed with exit code 1
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for xmlsec
Successfully built autobahn cairocffi case-conversion django-fixture-magic django-ipware django-log-formatter-json django-positions fixtures future jwcrypto MarkupPy maxminddb msgpack-python neovim odfpy pendulum py3-validate-email pycairo pyjsparser pynvim PyVimeo rcssmin rjsmin sgmllib3k simplegeneric tuspy unicodecsv
Failed to build xmlsec
ERROR: Could not build wheels for xmlsec, which is required to install pyproject.toml-based projects
</code></pre>
<p>I tried installing the libs, but the error persists:</p>
<pre><code>brew install libxml2 libxmlsec1 pkg-config
</code></pre>
<p>Any idea how to solve it?</p>
| <python><macos> | 2023-07-28 12:08:10 | 2 | 1,264 | Alberto Sanmartin Martinez |
76,787,652 | 15,678,119 | FastAPI: pass websocket to an endpoint with other parameters | <p>I have a FastAPI app with many endpoints, and in one of them I want to pass a websocket and communicate with it. The problem is that all the working examples online pass the socket to <code>localhost:8000/ws</code>, serving no purpose other than communicating through the socket. What I'm trying to do is pass the socket to a function with other parameters and send data through it, but that's where I get an error.</p>
<p><strong>main.py</strong></p>
<pre><code>@app.post("/file_upload")
async def file_upload(
    data: UploadFile, name: str, user: Annotated[User, Depends(get_authenticated_user)], ws: WebSocket
):
    """ docs for swagger ... """
    try:
        await ws.accept()
        # in save_file I'll be uploading or downloading the file locally
        # and also sending progress to JS through the websocket
        path, size = await utils.save_file(data, ws)
    except Exception as e:
        logging.error("Could not create file: %s", e)
        raise HTTPException(
            status_code=status.HTTP_500_INTERNAL_SERVER_ERROR,
            detail="There was an error uploading the file",
        )
    return {"file_path": path}
</code></pre>
<p>As you can see, my end goal is to send progress throughout the upload and display it in JS.</p>
<p><strong>utils.save_file()</strong></p>
<pre><code>async def save_file(data: UploadFile, ws: WebSocket = None) -> Tuple[str, int]:
    """Save a zip file."""
    local_filepath = os.path.join(DATA_DIR, f"data_{gen_random_filename()}.zip")
    # upload a zip file
    total_size = data.size
    size = 0
    CHUNK = 64 * 1024 * 1024  # 64MB
    try:
        async with aiofiles.open(local_filepath, "wb") as f:
            while contents := await data.read(CHUNK):
                await f.write(contents)
                size += len(contents)
                print(f'{size} / {total_size}')
                global progress
                progress = {"progress": int(size / total_size * 100)}
                await ws.send_text(json.dumps(progress))
    except Exception as e:
        logging.error("Could not save file: %s", e)
        raise Exception("Could not write data to the local file")
    finally:
        await data.close()
    return local_filepath, size
</code></pre>
<p>The logic in this file is correct and it's not causing any errors.</p>
<p><strong>index.js</strong></p>
<pre><code> var ws = new WebSocket("ws://localhost:8000/file_upload");
ws.onmessage = function(event) {
console.log(event.data)
};
</code></pre>
<p>So I guess the issue is first caught here: JS tries to open a connection with the endpoint <code>file_upload</code>, but its function takes more than just a socket, and so it fails.</p>
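<p>Independent of the transport problem, the chunked-progress arithmetic in <code>save_file</code> can be exercised on its own. A minimal sketch with an in-memory stream standing in for the upload (names are illustrative, no FastAPI involved):</p>

```python
import io
import json

def progress_updates(stream, total_size, chunk=4):
    # Mirror of the save_file loop: read fixed-size chunks, accumulate the
    # byte count, and yield the JSON messages that would go to the websocket.
    size = 0
    while contents := stream.read(chunk):
        size += len(contents)
        yield json.dumps({"progress": int(size / total_size * 100)})

data = b"0123456789"  # 10 bytes, read in 4-byte chunks
messages = list(progress_updates(io.BytesIO(data), len(data)))
print(messages)  # ['{"progress": 40}', '{"progress": 80}', '{"progress": 100}']
```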
| <python><websocket><fastapi> | 2023-07-28 12:05:15 | 0 | 958 | Hannon qaoud |
76,787,643 | 515,976 | PySpark: Group by and aggregation of matching values | <p>I have a big table like the one below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: center;">status</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">Start</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Start</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">Start</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">Start</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">End</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: center;">End</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">In Progress</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: center;">End</td>
</tr>
</tbody>
</table>
</div>
<p>In PySpark, using functions, I want to derive <em>id-wise counts of each status</em> in separate columns, like below:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th style="text-align: left;">id</th>
<th style="text-align: left;">start-count</th>
<th style="text-align: left;">in-progress-count</th>
<th style="text-align: left;">end-count</th>
</tr>
</thead>
<tbody>
<tr>
<td style="text-align: left;">1</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">5</td>
<td style="text-align: left;">2</td>
</tr>
<tr>
<td style="text-align: left;">2</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">0</td>
</tr>
<tr>
<td style="text-align: left;">3</td>
<td style="text-align: left;">1</td>
<td style="text-align: left;">2</td>
<td style="text-align: left;">1</td>
</tr>
</tbody>
</table>
</div>
<p>Can I make the PartitionBy work in this use-case? If not, what would be the best approach?</p>
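<p>Since this is a plain per-id aggregation rather than a windowed one, the usual fit is <code>groupBy</code> with <code>pivot</code>, roughly <code>df.groupBy("id").pivot("status").count().fillna(0)</code> (a sketch, assuming a DataFrame <code>df</code> with columns <code>id</code> and <code>status</code>). The expected table can be sanity-checked without Spark by counting the sample rows directly:</p>

```python
from collections import Counter

rows = [(1, "In Progress"), (1, "In Progress"), (2, "In Progress"), (2, "Start"),
        (1, "In Progress"), (1, "Start"), (3, "In Progress"), (3, "Start"),
        (1, "In Progress"), (1, "Start"), (1, "End"), (1, "In Progress"),
        (1, "End"), (2, "In Progress"), (3, "In Progress"), (3, "End")]

# Count (id, status) pairs, then lay them out as one row per id,
# in the column order (start-count, in-progress-count, end-count).
counts = Counter(rows)
table = {
    i: tuple(counts[(i, s)] for s in ("Start", "In Progress", "End"))
    for i in sorted({i for i, _ in rows})
}
print(table)  # {1: (2, 5, 2), 2: (1, 2, 0), 3: (1, 2, 1)}
```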
| <python><sql><pyspark><group-by><window-functions> | 2023-07-28 12:04:37 | 2 | 8,329 | Avisek Chakraborty |
76,787,456 | 13,734,323 | Working with Langchain I get nltk errors telling me: Package "tokenizers" not found in index and Package "taggers" not found in index | <p>I'm trying to load some documents, PowerPoints and text to train my custom LLM using Langchain.</p>
<p>When I run it, I get a weird error message telling me I don't have the "tokenizers" and "taggers" packages (folders).</p>
<p>I've read the docs, asked the Langchain chatbot, pip installed nltk, uninstalled it, pip installed nltk without dependencies, and added the data with nltk.download(), nltk.download("punkt"), nltk.download("all"), ... I also manually set the path, nltk.data.path = ['C:\Users\zaesa\AppData\Roaming\nltk_data'], and added all the folders, including the tokenizers and taggers folders from the GitHub repo: <a href="https://github.com/nltk/nltk_data/tree/gh-pages/packages" rel="nofollow noreferrer">https://github.com/nltk/nltk_data/tree/gh-pages/packages</a>. Everything. I also asked on the GitHub repo. Nothing, no success.</p>
<p>Here is the code of the file I try to run:</p>
<pre><code>from nltk.tokenize import sent_tokenize
from langchain.document_loaders import UnstructuredPowerPointLoader, TextLoader, UnstructuredWordDocumentLoader
from dotenv import load_dotenv, find_dotenv
import os
import openai
import sys
import nltk

nltk.data.path = ['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']
nltk.download(
    'punkt', download_dir='C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data')
sys.path.append('../..')

_ = load_dotenv(find_dotenv())  # read local .env file
openai.api_key = os.environ['OPENAI_API_KEY']

# Replace with the actual folder paths
folder_path_docx = "DB\\ DB VARIADO\\DOCS"
folder_path_txt = "DB\\BLOG-POSTS"
folder_path_pptx_1 = "DB\\PPT DAY JUNIO"
folder_path_pptx_2 = "DB\\DB VARIADO\\PPTX"

# Create a list to store the loaded content
loaded_content = []

# Load and process DOCX files
for file in os.listdir(folder_path_docx):
    if file.endswith(".docx"):
        file_path = os.path.join(folder_path_docx, file)
        loader = UnstructuredWordDocumentLoader(file_path)
        docx = loader.load()
        loaded_content.extend(docx)

# Load and process TXT files
for file in os.listdir(folder_path_txt):
    if file.endswith(".txt"):
        file_path = os.path.join(folder_path_txt, file)
        loader = TextLoader(file_path, encoding='utf-8')
        text = loader.load()
        loaded_content.extend(text)

# Load and process PPTX files from folder 1
for file in os.listdir(folder_path_pptx_1):
    if file.endswith(".pptx"):
        file_path = os.path.join(folder_path_pptx_1, file)
        loader = UnstructuredPowerPointLoader(file_path)
        slides_1 = loader.load()
        loaded_content.extend(slides_1)

# Load and process PPTX files from folder 2
for file in os.listdir(folder_path_pptx_2):
    if file.endswith(".pptx"):
        file_path = os.path.join(folder_path_pptx_2, file)
        loader = UnstructuredPowerPointLoader(file_path)
        slides_2 = loader.load()
        loaded_content.extend(slides_2)

# Process the loaded content as needed
# for content in loaded_content:
#     # Process the content
#     pass

# print the first 500 characters of the first document
print(loaded_content[0].page_content)
print(nltk.data.path)

# Get the list of installed packages
installed_packages = nltk.downloader.Downloader(
    download_dir='C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data').packages()

# Print the list of installed packages
print(installed_packages)

sent_tokenize("Hello. How are you? I'm well.")
</code></pre>
<p>When running the file I get:</p>
<pre><code>[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading tokenizers: Package 'tokenizers' not found
[nltk_data] in index
[nltk_data] Error loading taggers: Package 'taggers' not found in
[nltk_data] index
- HERE SOME TEXT -
['C:\\Users\\zaesa\\AppData\\Roaming\\nltk_data']
dict_values([<Package perluniprops>, <Package mwa_ppdb>, <Package punkt>, <Package rslp>, <Package porter_test>, <Package snowball_data>, <Package maxent_ne_chunker>, <Package moses_sample>, <Package bllip_wsj_no_aux>, <Package word2vec_sample>, <Package wmt15_eval>, <Package spanish_grammars>, <Package sample_grammars>, <Package large_grammars>, <Package book_grammars>, <Package basque_grammars>, <Package maxent_treebank_pos_tagger>, <Package averaged_perceptron_tagger>, <Package averaged_perceptron_tagger_ru>, <Package universal_tagset>, <Package vader_lexicon>, <Package lin_thesaurus>, <Package movie_reviews>, <Package problem_reports>, <Package pros_cons>, <Package masc_tagged>, <Package sentence_polarity>, <Package webtext>, <Package nps_chat>, <Package city_database>, <Package europarl_raw>, <Package biocreative_ppi>, <Package verbnet3>, <Package pe08>, <Package pil>, <Package crubadan>, <Package gutenberg>, <Package propbank>, <Package machado>, <Package state_union>, <Package twitter_samples>, <Package semcor>, <Package wordnet31>, <Package extended_omw>, <Package names>, <Package ptb>, <Package nombank.1.0>, <Package floresta>, <Package comtrans>, <Package knbc>, <Package mac_morpho>, <Package swadesh>, <Package rte>, <Package toolbox>, <Package jeita>, <Package product_reviews_1>, <Package omw>, <Package wordnet2022>, <Package sentiwordnet>, <Package product_reviews_2>, <Package abc>, <Package wordnet2021>, <Package udhr2>, <Package senseval>, <Package words>, <Package framenet_v15>, <Package unicode_samples>, <Package kimmo>, <Package framenet_v17>, <Package chat80>, <Package qc>, <Package inaugural>, <Package wordnet>, <Package stopwords>, <Package verbnet>, <Package shakespeare>, <Package ycoe>, <Package ieer>, <Package cess_cat>, <Package switchboard>, <Package comparative_sentences>, <Package subjectivity>, <Package udhr>, <Package pl196x>, <Package paradigms>, <Package gazetteers>, <Package timit>, <Package treebank>, <Package sinica_treebank>, 
<Package opinion_lexicon>, <Package ppattach>, <Package dependency_treebank>, <Package reuters>, <Package genesis>, <Package cess_esp>, <Package conll2007>, <Package nonbreaking_prefixes>, <Package dolch>, <Package smultron>, <Package alpino>, <Package wordnet_ic>, <Package brown>, <Package bcp47>, <Package panlex_swadesh>, <Package conll2000>, <Package universal_treebanks_v20>, <Package brown_tei>, <Package cmudict>, <Package omw-1.4>, <Package mte_teip5>, <Package indian>, <Package conll2002>, <Package tagsets>])
</code></pre>
<p>And here is how my folders structure from nltk_data looks like:</p>
<p><a href="https://i.sstatic.net/ecJ6n.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ecJ6n.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/10uYO.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/10uYO.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/bTj2f.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bTj2f.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/Da3N4.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/Da3N4.png" alt="enter image description here" /></a></p>
<p>Any help will be really appreciated, because this is serious work that I am committed to achieving.</p>
<p>I will be fully available for solving this issue. Let me know if you need anything else!</p>
| <python><nltk><langchain> | 2023-07-28 11:37:56 | 1 | 662 | Zaesar |
76,787,193 | 10,037,034 | How to remove repeated characters but keep one? Regex Python | <p>I want to remove repeating characters in a word while keeping one of them, for consonant letters.</p>
<p>I wrote the following code, but it deletes all of them.</p>
<pre><code>import re
text = "myyyy nname isss ssevvaall"
new = re.sub(r"([a-z])(\1{2,})","",text)
</code></pre>
<p>The result should be:</p>
<blockquote>
<p>my name is sevvaal</p>
</blockquote>
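<p>For reference, the usual fix for "it deletes all of them" is to put a backreference in the replacement so one copy of the matched character is kept. A minimal sketch (note it collapses every repeated run, vowels included, so it differs slightly from the sample output above):</p>

```python
import re

text = "myyyy nname isss ssevvaall"
# \1 in the replacement keeps a single copy of the repeated character.
new = re.sub(r"([a-z])\1+", r"\1", text)
print(new)  # my name is seval
```

Keeping doubled consonants like "vv" while collapsing others would need an explicit rule for which letters may stay doubled, since the regex alone cannot tell them apart.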
| <python><regex> | 2023-07-28 11:01:50 | 2 | 1,311 | Sevval Kahraman |
76,787,108 | 17,082,611 | Change Conv2DTranspose output shape from (None,28,28,1) to (None,32,32,1) | <p>I am trying to implement a Decoder whose output shape is <code>(None,32,32,1)</code>. The snippet below implements a decoder whose output shape is <code>(None,28,28,1)</code> instead:</p>
<pre><code># Decoder
latent_dim = 2
latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(latent_inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
</code></pre>
<p>Summary is:</p>
<pre><code>Model: "decoder"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_2 (InputLayer) [(None, 2)] 0
dense_1 (Dense) (None, 3136) 9408
reshape (Reshape) (None, 7, 7, 64) 0
conv2d_transpose (Conv2DTr (None, 14, 14, 64) 36928
anspose)
conv2d_transpose_1 (Conv2D (None, 28, 28, 32) 18464
Transpose)
conv2d_transpose_2 (Conv2D (None, 28, 28, 1) 289
Transpose)
=================================================================
Total params: 65089 (254.25 KB)
Trainable params: 65089 (254.25 KB)
Non-trainable params: 0 (0.00 Byte)
_________________________________________________________________
</code></pre>
<ul>
<li>How can I achieve that? Can you help me?</li>
</ul>
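<p>With <code>padding="same"</code> and <code>strides=2</code>, each <code>Conv2DTranspose</code> doubles the spatial size, so the output is fixed by the shape you <code>Reshape</code> to: 7 → 14 → 28. Starting from 8×8 instead (i.e. <code>Dense(8 * 8 * 64)</code> and <code>Reshape((8, 8, 64))</code>) should give 8 → 16 → 32. A quick check of the arithmetic (plain Python, no TensorFlow):</p>

```python
def transpose_conv_size(size, stride, n_layers):
    # With padding="same", each Conv2DTranspose multiplies the size by its stride.
    for _ in range(n_layers):
        size *= stride
    return size

print(transpose_conv_size(7, 2, 2))  # 28 -> the shape in the question
print(transpose_conv_size(8, 2, 2))  # 32 -> the desired shape
```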
| <python><tensorflow><machine-learning><keras><tensorflow2.0> | 2023-07-28 10:48:21 | 1 | 481 | tail |
76,787,107 | 12,769,783 | Type hinting mypy in list comprehension over duck-typed content | <p>I have a (nested) list comprehension, and receive an error from mypy.</p>
<p>In my list comprehension, I am iterating over a list which is guaranteed to contain only instances of two types (that don't implement the same base; in the example below called <code>A</code> and <code>B</code>).
The instances of both types are guaranteed to have the same attribute <code>name</code>, which I utilize in the list comprehension. Mypy cannot automatically infer the type of the list, considers the content of the list mere <code>objects</code> (only common base), and thus warns that the attribute <code>name</code> is not defined in instances of <code>object</code>.</p>
<p>I broke down my code into the following example:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Union, Set, List

@dataclass
class A:
    name: str

@dataclass
class B:
    name: str

# Approach 1:
c: Union[A, B]
f: Set[str] = {c.name for c in [A('1'), B('2')]}  # error: "object" has no attribute "name" [attr-defined]

# Approach 2:
d: List[Union[A, B]] = [A('1'), B('2')]
g: Set[str] = {c.name for c in d}  # No issue!
</code></pre>
<p>In approach 2, mypy is not reporting an error. Why is this the case, and why is the type hint for <code>c</code> in approach 1 not sufficient?</p>
<p>Sadly, in my real use-case, the list <code>d</code> is only intermediate (does not have a name) and I cannot type hint like in approach 2. Is there a way to hint mypy that accessing <code>name</code> is actually viable without needing to transform the list comprehension into a loop?</p>
<p>I am using python3.8.</p>
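<p>One workaround (a sketch, not necessarily the only option) is to annotate the intermediate list inline with <code>typing.cast</code>, which at runtime simply returns its argument, so the comprehension stays a comprehension:</p>

```python
from dataclasses import dataclass
from typing import List, Set, Union, cast

@dataclass
class A:
    name: str

@dataclass
class B:
    name: str

# cast() tells mypy the element type without needing to name the list.
f: Set[str] = {c.name for c in cast(List[Union[A, B]], [A('1'), B('2')])}
print(f == {'1', '2'})  # True
```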
| <python><python-typing><mypy> | 2023-07-28 10:48:18 | 0 | 1,596 | mutableVoid |
76,786,890 | 18,482,459 | Matplotlib xaxis with ticks on empty dates | <p>I want to display a pandas DataFrame without the built-in plot function and struggle to get only the dates of the DataFrame on the x-axis. Matplotlib on its own displays empty bins/dates which I would like to avoid.</p>
<p>Example:</p>
<pre><code>import matplotlib.pyplot as plt
import pandas as pd
dates = [pd.to_datetime(x) for x in ['2023-07-27', '2023-07-28', '2023-07-31']]
data = pd.DataFrame([3,2,4], index=dates, columns=['val'])
fig, ax = plt.subplots()
data.plot(kind='bar', ax=ax)
labels = [item.get_text()[:10] for item in ax.get_xticklabels()]
ax.set_xticklabels(labels)
plt.show()
</code></pre>
<p>This gives the following figure:
<a href="https://i.sstatic.net/3hCF8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/3hCF8.png" alt="pandas plot" /></a></p>
<p>Trying to replicate this plot without pandas plot looks like this, but the figure has multiple empty ticks.</p>
<pre><code>fig, ax = plt.subplots()
ax.set_xticks(data.index)
ax.set_xticklabels(data.index.strftime("%Y-%m-%d"))
ax.bar(data.index, data.val)
plt.xticks(rotation=90)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/pOzFp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pOzFp.png" alt="enter image description here" /></a></p>
<p>How to not plot the empty bins in matplotlib? The <code>set_xticks</code> approach above did not work as expected. If you comment both <code>ax.set_xtick*</code> out, you get the labeled ticks in between as well.</p>
<p>Edit: It can happen that there are other dataframes that will be plotted in the same graphic with other dates. The example above just showcases the gap and I would want a method that tells matplotlib to not display all the empty bins, <strong>ideally independent from the input dataset</strong>. Matplotlib has all the information, it should be able to delete empty bins.</p>
<pre><code>#additional example
data2 = pd.DataFrame([1,3], index=[pd.to_datetime(x) for x in ['2023-08-01', '2023-08-02']], columns=['val2'])
fig, ax = plt.subplots()
ax.bar(data.index, data.val)
ax.bar(data2.index, data2.val2)
### add something here to delete empty bins
plt.xticks(rotation=90)
plt.show()
</code></pre>
<p>If you call <code>ax.xaxis.get_majorticklocs()</code> you will see that it created major ticks at positions, where no datapoint was recorded. As stated in a comment I believe this is due to the conversion of date -> integer number of days after 1970. Therefore, I would like to have the option to tell matplotlib to ignore empty bins or a function to manually remove them.</p>
| <python><pandas><matplotlib> | 2023-07-28 10:16:12 | 1 | 405 | Firefighting Physicist |
76,786,682 | 5,218,240 | inspect getsource didn't respond with annotation on a class | <p>For example, I have a class in Python:</p>
<pre class="lang-py prettyprint-override"><code>def dec(clz):
    return clz

@dec
class Test:
    pass
</code></pre>
<p>When I use <code>inspect.getsource(Test)</code> in Python 3.8, the source code has no <code>@dec</code>,
but the source code does contain <code>@dec</code> when I use Python 3.10.</p>
<p>Any information on where this difference comes from?
If I have to support Python 3.8, what function do I need to use to get the decorator along with the source code?</p>
<p>Thanks</p>
| <python><decorator><inspect> | 2023-07-28 09:49:27 | 0 | 1,215 | cinqS |
76,786,673 | 1,039,302 | Python Plotly: annotation deletes trace | <p>The code below displays this:
<a href="https://i.sstatic.net/X5ERG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/X5ERG.png" alt="enter image description here" /></a></p>
<p>If I uncommented <code>fig.update_layout</code>, it would display this: <strong>the added annotation deleted the trace and axis.</strong></p>
<p>Any idea? Thank you.</p>
<p><a href="https://i.sstatic.net/KFZF5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KFZF5.png" alt="enter image description here" /></a></p>
<pre><code>import plotly.graph_objs as go
x_coor=[21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6, 21.6,
21.6, 21.6, 21.6]
y_coor=[0.0, 0.0, None, 70.0, 70.0, None, 0.0, 3.984799575740227, None, 3.984799575740227, 11.091689619072922, None, 11.091689619072922, 19.061288770553375, None, 19.061288770553375, 27.030887922033827, None, 27.030887922033827, 35.0, None, 70.0, 66.01520042425977, None, 66.01520042425977, 58.90831038092708, None, 58.90831038092708, 50.938711229446625, None, 50.938711229446625, 42.96911207796617, None, 42.96911207796617, 35.0, None, 35.0, 35.0, None, 82.0, 82.0, None, 82.0, 77.89068291331529, None, 77.89068291331529, 70.0, None]
z_coor=[0.0, 8.0, None, 0.0, 8.0, None, 8.0, 8.348385334336147, None, 8.348385334336147, 8.969730578124661, None, 8.969730578124661, 9.666501246796951, None, 9.666501246796951, 10.363271915469245, None, 10.363271915469245, 11.06, None, 8.0, 8.348385334336147, None, 8.348385334336147, 8.969730578124661, None, 8.969730578124661, 9.666501246796951, None, 9.666501246796951, 10.363271915469245, None, 10.363271915469245, 11.06, None, 0.0, 11.06, None, 0.0, 5.0, None, 5.0, 5.684886181114119, None, 5.684886181114119, 7.0, None]
annos=[{'x': 21.6, 'y': 0.0, 'z': 4.0, 'text': 'BB_350x200x8: HA350'}, {'x': 21.6, 'y': 70.0, 'z': 4.0, 'text': 'BB_350x200x8: HA350'}, {'x': 21.6, 'y': 1.9923997878701134, 'z': 8.174192667168073, 'text': 'BB_1100x200x8: HA350'}, {'x': 21.600000000000005, 'y': 7.538244597406575, 'z': 8.659057956230404, 'text': 'BB_800x200x8: HA350'}, {'x': 21.6, 'y': 15.07648919481315, 'z': 9.318115912460806, 'text': 'BB_800x200x8: HA350'}, {'x': 21.6, 'y': 23.0460883462936, 'z': 10.014886581133098, 'text': 'BB_800x200x8: HA350'}, {'x': 21.6, 'y': 31.015443961016913, 'z': 10.711635957734622, 'text': 'BB_800x200x8: HA350'}, {'x': 21.6, 'y': 68.00760021212989, 'z': 8.174192667168073, 'text': 'BB_800x200x8: HA350'}, {'x': 21.600000000000005, 'y': 62.461755402593425, 'z': 8.659057956230404, 'text': 'BB_800x200x8: HA350'}, {'x':21.6, 'y': 54.92351080518685, 'z': 9.318115912460806, 'text': 'BB_800x200x8: HA350'}, {'x': 21.6, 'y': 46.9539116537064, 'z': 10.014886581133098, 'text': 'BB_800x200x8: HA350'}, {'x': 21.6, 'y': 38.98455603898309, 'z': 10.711635957734622, 'text': 'BB_800x200x8: HA350'},{'x': 21.6, 'y': 35.0, 'z': 5.53, 'text': 'BB_450x150x6: HA350'}, {'x': 21.6, 'y': 82.0, 'z': 2.5, 'text': 'BB_300x150x6: HA350'}, {'x': 21.6, 'y': 79.94534145665764, 'z': 5.34244309055706, 'text': 'BB_900x150x8: HA350'}, {'x': 21.6, 'y': 73.94534145665764, 'z': 6.34244309055706, 'text': 'BB_900x150x8: HA350'}]
x_coor = [x_coor[0]]*len(x_coor)
trace_members = go.Scatter3d(
    x=x_coor,
    y=y_coor,
    z=z_coor,
    mode='lines+text',  # draw 'line'
    name='members',     # legend name
    line=dict(
        color='black',
        width=8,
        dash='solid',
    ),
)
fig = go.Figure(data=[trace_members])
# fig.update_layout(scene=dict(annotations=annos),)
camera = dict(
    up=dict(x=0, y=0, z=1),
    center=dict(x=0, y=0, z=0),
    eye=dict(x=-2, y=0, z=0)  # project to yz plane
)
fig.update_layout(scene_camera=camera)
fig.show()
</code></pre>
| <python><plotly><trace> | 2023-07-28 09:48:16 | 0 | 1,713 | warem |
76,786,670 | 3,828,640 | What is the difference between threshold and prominence in 'scipy.signal.find_peaks'? | <p>I'm trying to find the peaks of a noisy signal using <a href="https://docs.scipy.org/doc/scipy/reference/generated/scipy.signal.find_peaks.html#scipy.signal.find_peaks" rel="nofollow noreferrer">scipy.signal.find_peaks</a> and I realised that I don't fully understand the difference between the threshold and prominence arguments.</p>
<p>I understand that prominence is equivalent to topographical prominence, i.e. the height of a peak relative to the surrounding terrain. However, I don't quite understand in which ways the threshold argument is different from this. From the above link both arguments seem equivalent to me. What is exactly the difference between threshold and prominence in this case?</p>
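<p>One way to see the difference is a peak that rises in small steps: <code>threshold</code> compares a peak only with its two immediate neighbouring samples, while <code>prominence</code> measures the rise relative to the surrounding valleys. A small sketch:</p>

```python
import numpy as np
from scipy.signal import find_peaks

# A gradual peak: large overall rise, but each step is only 1 unit.
x = np.array([0, 1, 2, 3, 2, 1, 0], dtype=float)

# threshold=2 requires the peak to exceed BOTH direct neighbours by 2,
# so the peak at index 3 (neighbours are 2 and 2) is rejected ...
peaks_t, _ = find_peaks(x, threshold=2)

# ... while prominence=2 looks at the drop to the surrounding baseline,
# which here is 3, so the same peak is accepted.
peaks_p, _ = find_peaks(x, prominence=2)

print(peaks_t.tolist())  # []
print(peaks_p.tolist())  # [3]
```

<p>In short: <code>threshold</code> is a purely local comparison against the two adjacent samples; <code>prominence</code> is topographic, measured against the surrounding terrain.</p>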
| <python><scipy> | 2023-07-28 09:48:11 | 1 | 428 | S - |
76,786,657 | 1,763,250 | pd.Series.agg showing weird behaviour with geometric mean | <p>I was trying to find the geometric and harmonic mean of a Pandas series using pandas when I came across some odd behavior. I initially defined a couple of simple functions and tested it on a dummy series. I got the error below:</p>
<pre><code>import numpy as np
import pandas as pd

def gmean(x):
    a = np.log(x)
    return np.exp(a.mean())

def hmean(x):
    return 1/(1/x).mean()

s = pd.Series([1, 3, 9])
s.agg([hmean, gmean])
</code></pre>
<p>This gave the following error:</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/apply.py:430, in Apply.agg_list_like(self)
429 try:
--> 430 concatenated = concat(results, keys=keys, axis=1, sort=False)
431 except TypeError as err:
432 # we are concatting non-NDFrame objects,
433 # e.g. a list of scalars
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/util/_decorators.py:331, in deprecate_nonkeyword_arguments.<locals>.decorate.<locals>.wrapper(*args, **kwargs)
326 warnings.warn(
327 msg.format(arguments=_format_argument_list(allow_args)),
328 FutureWarning,
329 stacklevel=find_stack_level(),
330 )
--> 331 return func(*args, **kwargs)
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/reshape/concat.py:368, in concat(objs, axis, join, ignore_index, keys, levels, names, verify_integrity, sort, copy)
159 """
160 Concatenate pandas objects along a particular axis.
161
(...)
366 1 3 4
367 """
--> 368 op = _Concatenator(
369 objs,
370 axis=axis,
371 ignore_index=ignore_index,
372 join=join,
373 keys=keys,
374 levels=levels,
375 names=names,
376 verify_integrity=verify_integrity,
377 copy=copy,
378 sort=sort,
379 )
381 return op.get_result()
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/reshape/concat.py:458, in _Concatenator.__init__(self, objs, axis, join, keys, levels, names, ignore_index, verify_integrity, copy, sort)
454 msg = (
455 f"cannot concatenate object of type '{type(obj)}'; "
456 "only Series and DataFrame objs are valid"
457 )
--> 458 raise TypeError(msg)
460 ndims.add(obj.ndim)
TypeError: cannot concatenate object of type '<class 'numpy.float64'>'; only Series and DataFrame objs are valid
The above exception was the direct cause of the following exception:
ValueError Traceback (most recent call last)
Cell In[122], line 1
----> 1 s.agg([gmean, hmean])
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/series.py:4605, in Series.aggregate(self, func, axis, *args, **kwargs)
4602 func = dict(kwargs.items())
4604 op = SeriesApply(self, func, convert_dtype=False, args=args, kwargs=kwargs)
-> 4605 result = op.agg()
4606 return result
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/apply.py:1108, in SeriesApply.agg(self)
1107 def agg(self):
-> 1108 result = super().agg()
1109 if result is None:
1110 f = self.f
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/apply.py:172, in Apply.agg(self)
169 return self.agg_dict_like()
170 elif is_list_like(arg):
171 # we require a list, but not a 'str'
--> 172 return self.agg_list_like()
174 if callable(arg):
175 f = com.get_cython_func(arg)
File ~/miniconda3/envs/dsbasic/lib/python3.8/site-packages/pandas/core/apply.py:438, in Apply.agg_list_like(self)
436 result = Series(results, index=keys, name=obj.name)
437 if is_nested_object(result):
--> 438 raise ValueError(
439 "cannot combine transform and aggregation operations"
440 ) from err
441 return result
442 else:
443 # Concat uses the first index to determine the final indexing order.
444 # The union of a shorter first index with the other indices causes
445 # the index sorting to be different from the order of the aggregating
446 # functions. Reindex if this is the case.
ValueError: cannot combine transform and aggregation operations
</code></pre>
<p>After a little digging, I saw that when using the gmean function with <code>pd.Series.agg</code>, it was returning the original series back, even though calling it directly seemed to work:</p>
<pre><code>s.agg([gmean])
# gmean
# 0 1.0
# 1 3.0
# 2 9.0
gmean(s)
# 3.0000000000000004
</code></pre>
<p>I thought there might be some bug in my code, so I found a scipy function that computes the geometric mean (it even has the same name). That function also had the same issue:</p>
<pre><code>from scipy.stats import gmean
s.agg([gmean])
# gmean
# 0 1.0
# 1 3.0
# 2 9.0
</code></pre>
<p>Funnily enough, <code>DataFrame.agg</code> and <code>DataFrameGroupBy.agg</code> don't have the same issues:</p>
<pre><code>test_df = pd.DataFrame(dict(
a = ['alpha','beta'] * 5,
x = np.random.rand(10)
))
test_df[['x']].agg(gmean)
# x 0.378187
test_df.groupby('a')['x'].agg(gmean)
# a
# alpha 0.498097
# beta 0.287143
# Name: x, dtype: float64
</code></pre>
<p>Am I missing something here? I can use the DataFrame version to solve my issue, but I want to know why it is breaking like this.</p>
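<p>As a workaround sketch (not an explanation of the internals): applying each reducer yourself sidesteps <code>Series.agg</code>'s transform detection entirely. The function names below match the question's:</p>

```python
import numpy as np
import pandas as pd

def gmean(x):
    return np.exp(np.log(x).mean())

def hmean(x):
    return 1 / (1 / x).mean()

s = pd.Series([1, 3, 9])

# Collect the scalar results directly instead of going through Series.agg.
result = pd.Series({f.__name__: f(s) for f in (gmean, hmean)})
print(result["gmean"])  # ~3.0
print(result["hmean"])  # ~2.0769 (27/13)
```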
| <python><pandas><numpy><scipy> | 2023-07-28 09:46:00 | 1 | 2,017 | Rohit |
76,786,640 | 10,375,073 | TypeError: Cannot set a Categorical with another, without identical categories | <p>I have an empty pd.DataFrame <code>df</code> with <code>col1</code> set as <code>category</code></p>
<pre><code>df = pd.DataFrame({"col1": []})
df["col1"] = df["col1"].astype("category")
</code></pre>
<p>I have also a pd.Series with one value set as a category</p>
<pre><code>s = pd.Series(["MP1"])
s = s.astype("category")
</code></pre>
<p>When I try the following</p>
<pre><code>df["col1"] = df["col1"].combine_first(s)
</code></pre>
<p>I got this error</p>
<pre><code>TypeError: Cannot set a Categorical with another, without identical categories
</code></pre>
<p><strong>What I've tried</strong><br>
Adding the category from the pd.Series into my empty DataFrame</p>
<pre><code>df["col1"].cat.add_categories(s.cat.categories.to_list())
</code></pre>
<p>But it didn't seem to work; I got the same error, and when I output the categories it looks like it didn't add anything</p>
<pre><code>[in]-> df["col1"].cat.categories
[out]-> Float64Index([], dtype='float64')
</code></pre>
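<p>One thing worth checking (a sketch, not a guaranteed fix): <code>.cat.add_categories</code> returns a new Series rather than modifying in place, which would explain the categories appearing unchanged:</p>

```python
import pandas as pd

df = pd.DataFrame({"col1": []})
df["col1"] = df["col1"].astype("category")
s = pd.Series(["MP1"]).astype("category")

# Without assigning the result back, the column keeps its old (empty)
# set of categories; with the assignment the new category sticks.
df["col1"] = df["col1"].cat.add_categories(s.cat.categories.to_list())
print(list(df["col1"].cat.categories))  # ['MP1']
```

<p>With identical categories on both sides, <code>combine_first</code> may then stop raising, though that part is not verified here.</p>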
| <python><pandas><dataframe> | 2023-07-28 09:44:17 | 1 | 405 | RaphWork |
76,786,606 | 5,378,132 | Running Ray on top of AWS Batch multi-node? | <p>I am interested in running Ray on AWS Batch multi-node. This is a pattern that hasn't been done before on Ray, and thus, there's no documentation on it. But, I'd really like to try it since Ray can be installed on-premise as well.</p>
<p>I stood up the AWS Batch multi-node gang-scheduled cluster and ran the following commands:</p>
<ol>
<li>For the head node:</li>
</ol>
<pre><code>subprocess.Popen(f"ray start --head --node-ip-address {current.parallel.main_ip} --port {master_port} --block", shell=True).wait()
</code></pre>
<ol start="2">
<li>For the worker nodes:</li>
</ol>
<pre><code>import ray
node_ip_address = ray._private.services.get_node_ip_address()
subprocess.Popen(f"ray start --node-ip-address {node_ip_address} --address {current.parallel.main_ip}:{master_port} --block", shell=True).wait()
</code></pre>
<p>The head node seems to be working, but there's some issue with the worker nodes not syncing with the head node.</p>
<p>I get the following output in <code>stderr</code>:</p>
<pre><code>[2023-07-28 09:25:55,500 I 427 427] global_state_accessor.cc:356: This node has an IP address of 10.14.52.21, but we cannot find a local Raylet with the same address. This can happen when you connect to the Ray cluster with a different IP address or when connecting to a container.
</code></pre>
<p>Any insight on how I can get Ray working on AWS Batch multi-node would be much appreciated!</p>
| <python><distributed-computing><ray><aws-batch><ray-train> | 2023-07-28 09:39:21 | 2 | 2,831 | Riley Hun |
76,786,521 | 6,385,925 | Parse number with thousands separator | <p>I want different number formats parsed properly. The strings could use commas or dots, so this is what I'd like:</p>
<pre><code>"-1526" → -1526.0
"15 000" → 15000.0
"15.000,00" → 15000.0
"15,000.00" → 15000.0
"15,000,000" → 15000000.0
"15,000.000" → 15000.0
</code></pre>
<p>Strings like these should fail:
"157023,12.5323", "15,000,00", "15.12,000,000"</p>
| <python><parsing><numbers> | 2023-07-28 09:30:29 | 2 | 1,303 | fer0n |
76,786,429 | 22,221,987 | Python's CSV module writes header in first row instead of the column names | <p>I'm trying to write a dict to a CSV file. I add specific column names, but they are written in the first row, not in the header (tried opening via LibreOffice).
Here is the code:</p>
<pre><code>import csv

def write_csv():
    with open(f'RTD {1}.csv', 'w') as csvfile:
        writer = csv.DictWriter(csvfile, delimiter=',', fieldnames=['one', 'two', 'three'])
        writer.writeheader()
        writer.writerow({'one': 1, 'two': 2, 'three': 3})
</code></pre>
<p>Here is the output:</p>
<p><a href="https://i.sstatic.net/SF6KX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SF6KX.png" alt="enter image description here" /></a></p>
| <python><python-3.x><csv> | 2023-07-28 09:20:03 | 1 | 309 | Mika |
76,786,352 | 6,523,288 | How to create a "fat virtual environment" for python applications | <p>I need to deploy a python application to a machine where python is not installed, and cannot be installed in the traditional sense. I cannot use containerization technologies such as docker. The only guarantees about the target I have are</p>
<ul>
<li>Glibc is present</li>
<li>Some linux kernel is present</li>
<li>gunzip/unzip/another archiving tool is present</li>
</ul>
<p>In the Java world I would package the entire development kit into an archive (since there's no separate runtime environment anymore), write a crude shell script that sets JAVA_HOME (the development kit location), and add all the dependencies to the "classpath/modulepath" (the modules that should be loaded at runtime). Such deployments are resilient, as they do not clash with whatever is on the system (there may be another Java installation on the system), and they're self-contained: they have <strong>everything</strong> that is needed to run the application.</p>
<p>Looking at python equivalent solutions I'm seeing virtual environments, but I fail to grasp how to build such equivalent environment for python as it keeps symlinking to my currently installed python version on the system, which I cannot provide as an archive. Is there an equivalent way to "download the SDK, set PYTHON_HOME, add loadable modules, run entrypoint" with virtual environments?</p>
| <python><virtualenv> | 2023-07-28 09:09:09 | 1 | 1,281 | Dragas |
76,786,295 | 5,850,635 | Twilio Chatbot with python not responding for message | <p>I am trying to create a Twilio chatbot:</p>
<pre><code>from flask import Flask, request
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)

@app.route('/sms', methods=['POST'])
def sms():
    from_number = request.form['From']
    to_number = request.form['To']
    body = request.form['Body']
    resp = MessagingResponse()
    resp.message("The Robots are coming! Head for the hills!")
    return str(resp)

app.run()
</code></pre>
<p>My POST request is working fine, as I can see a 200 status code for each POST request:</p>
<p><a href="https://i.sstatic.net/ybavy.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ybavy.png" alt="enter image description here" /></a></p>
<p>This is my ngrok terminal. Although my request goes through, my response message is not reflected in the Twilio sandbox. Nothing arrives in my WhatsApp.
When I monitor the error log it shows the following; I'm not sure why this is happening.</p>
<pre><code>WARNING - 12200
Schema validation warning
The provided XML does not conform to the Twilio Markup XML schema. Please refer to the specific error and correct the problem. ## Warning - 12200
</code></pre>
<p><a href="https://i.sstatic.net/1Ouq5.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/1Ouq5.png" alt="enter image description here" /></a>
http://localhost:4040/inspect/http</p>
<p><a href="https://i.sstatic.net/OtPFo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/OtPFo.png" alt="enter image description here" /></a>
I am new to this; please help me understand why it's not responding with my message.</p>
<p>Thanks!</p>
| <python><twilio><sms><chatbot><messaging> | 2023-07-28 09:00:26 | 0 | 1,032 | Kavya Shree |
76,786,143 | 662,285 | Merge "n" number json in for loop using python | <p>I am trying to merge "n" number of json files but i need to first load json from filepath and then use it. I referred below link but they already have json data available but in my case i want to first load json data in loop and then merge it. I am looking for generic way of doing it.</p>
<p><a href="https://stackoverflow.com/questions/3494906/how-do-i-merge-a-list-of-dicts-into-a-single-dict">How do I merge a list of dicts into a single dict?</a></p>
<p>My code: I want to first load the JSON data for all the files and then merge it. I am trying something like the code below, but it does not work and gives an error: <code>TypeError: expected str, bytes or os.PathLike object, not tuple</code></p>
<pre><code>def merge_JsonFiles(*file):
    result = {}
    for f in file:
        json_data = loadjson(file)
        result = {**json_data}
    return result

merge_JsonFiles(file1, file2, file3)

def loadjson(file):
    with open(file, 'r') as fh:
        return json.load(fh)

# Merge logic for two files = {**file1, **file2}
</code></pre>
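<p>For comparison, a corrected sketch of the merge: the loop must pass each individual path <code>f</code> (the posted code passes the whole <code>file</code> tuple, which is what raises the <code>os.PathLike</code> TypeError), and each loaded dict should be merged in rather than replacing <code>result</code>:</p>

```python
import json

def load_json(path):
    with open(path, "r") as fh:
        return json.load(fh)

def merge_json_files(*paths):
    result = {}
    for p in paths:                   # p is ONE path, not the whole tuple
        result.update(load_json(p))   # merge, don't overwrite
    return result
```

<p>Later files win on duplicate keys, matching the <code>{**a, **b}</code> semantics from the linked answer.</p>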
| <python><json> | 2023-07-28 08:42:41 | 2 | 4,564 | Bokambo |
76,786,111 | 1,227,922 | Adding fields to ModelForm programmatically not showing in django admin | <p>I have a use case where I want to add fields to a ModelForm in Django admin, but when trying to add them in the <code>__init__</code> function like this:</p>
<pre><code>class PackageFileInlineForm(forms.ModelForm):
    class Meta:
        model = File
        fields = '__all__'

    def __init__(self, *args, **kwargs) -> None:
        super().__init__(*args, **kwargs)
        self.fields['test'] = forms.CharField(max_length=100)
</code></pre>
<p>Nothing shows up in the form. However if I do the form like this:</p>
<pre><code>class PackageFileInlineForm(forms.ModelForm):
    test = forms.CharField(max_length=100)

    class Meta:
        model = File
        fields = '__all__'
</code></pre>
<p>Why does the <code>__init__</code> version not work?</p>
| <python><django> | 2023-07-28 08:39:14 | 0 | 1,489 | Andreas |
76,786,042 | 16,010,394 | How to seek for partial data with BeautifulSoup | <p>I'm scraping a website that use data attributes putting the name of the data inside the value along the value itself.</p>
<pre class="lang-html prettyprint-override"><code><div data-title="Subscribers: 4,471"></div>
</code></pre>
<p>I would like to know how to grab this <code>div</code> based on a partial value: the div should contain "Subscribers", something like this (the * represents the pattern to seek for):</p>
<pre class="lang-py prettyprint-override"><code>test = soup.find_all("div", {"data-title": "Subscribers"*})
</code></pre>
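<p>BeautifulSoup accepts a compiled regular expression, or any callable, as an attribute filter, which covers this starts-with match. A sketch, with a second div assumed for contrast:</p>

```python
import re
from bs4 import BeautifulSoup

html = '<div data-title="Subscribers: 4,471"></div><div data-title="Views: 9"></div>'
soup = BeautifulSoup(html, "html.parser")

# Match the attribute value against a regex ...
by_regex = soup.find_all("div", {"data-title": re.compile(r"^Subscribers")})
# ... or against any callable that receives the attribute value.
by_call = soup.find_all(
    "div", attrs={"data-title": lambda v: v and v.startswith("Subscribers")}
)

print(by_regex[0]["data-title"])  # Subscribers: 4,471
```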
| <python><web-scraping><beautifulsoup> | 2023-07-28 08:29:19 | 1 | 616 | ethicnology |
76,785,608 | 9,412,288 | pipreqs generate requirements.txt with conflict versions | <p>I use pipreqs to generate requirements.txt, and the generated requirements.txt lists the library <code>GitPython</code> with two versions, <code>3.1.31</code> and <code>3.1.32</code>.</p>
<p>my command is <code>python3 -m pipreqs.pipreqs $BASEDIR --force </code></p>
<p>log is:</p>
<pre><code>WARNING: Import named "GitPython" not found locally. Trying to resolve it at the PyPI server.
WARNING: Import named "GitPython" was resolved to "GitPython:3.1.32" package (https://pypi.org/project/GitPython/).
Please, verify manually the final list of requirements.txt to avoid possible dependency confusions.
WARNING: Import named "Requests" not found locally. Trying to resolve it at the PyPI server.
WARNING: Import named "Requests" was resolved to "requests:2.31.0" package (https://pypi.org/project/requests/).
Please, verify manually the final list of requirements.txt to avoid possible dependency confusions.
INFO: Successfully saved requirements file in ./requirements.txt
</code></pre>
<p>generate file is:</p>
<pre><code>GitPython==3.1.31
GitPython==3.1.32
oss2==2.17.0
Requests==2.31.0
</code></pre>
<p>when i run install command <code>pip3 install -r ./requirements.txt </code></p>
<p>i got this error message:</p>
<pre><code>Defaulting to user installation because normal site-packages is not writeable
Requirement already satisfied: GitPython==3.1.31 in /Users/tian/Library/Python/3.8/lib/python/site-packages (from -r ./requirements.txt (line 1)) (3.1.31)
ERROR: Cannot install GitPython==3.1.31 and GitPython==3.1.32 because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested GitPython==3.1.31
The user requested GitPython==3.1.32
To fix this you could try to:
1. loosen the range of package versions you've specified
2. remove package versions to allow pip attempt to solve the dependency conflict
ERROR: ResolutionImpossible: for help visit https://pip.pypa.io/en/latest/topics/dependency-resolution/#dealing-with-dependency-conflicts
</code></pre>
<p>Is there a way to generate requirements.txt without conflicting versions? If not, I'll have to remove the conflicting version manually.</p>
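<p>Until the duplicate pins are resolved upstream, one hedged post-processing sketch is to keep only the first pin per package name (pip compares package names case-insensitively):</p>

```python
def dedupe_requirements(lines):
    """Keep only the first pin seen for each package name."""
    seen = set()
    out = []
    for line in lines:
        name = line.split("==")[0].strip().lower()
        if name not in seen:
            seen.add(name)
            out.append(line)
    return out

reqs = ["GitPython==3.1.31", "GitPython==3.1.32", "oss2==2.17.0", "Requests==2.31.0"]
print(dedupe_requirements(reqs))
# ['GitPython==3.1.31', 'oss2==2.17.0', 'Requests==2.31.0']
```

<p>Which of the two GitPython pins to keep is a judgement call; keeping the first matches what pip reported as already installed here.</p>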
| <python><pip><requirements.txt> | 2023-07-28 07:21:44 | 1 | 716 | Jade |
76,785,474 | 5,457,202 | Featuretools failed to load plugin tsfresh from library featuretools_tsfresh_primitives.__init__ | <p>I'm trying to make featuretools and featuretools_tsfresh_primitives in my Jupyter notebook environment.</p>
<p>I installed both library using conda</p>
<pre class="lang-bash prettyprint-override"><code> conda install -c conda-forge featuretools
conda install -c conda-forge featuretools-tsfresh-primitives
</code></pre>
<p>However, when I tried using them, first I get a warning</p>
<pre><code>>>> import featuretools
2023-07-28 08:48:31,448 featuretools - WARNING Featuretools failed to load plugin tsfresh from library featuretools_tsfresh_primitives.__init__. For a full stack trace, set logging to debug.
</code></pre>
<p>I ran the import again inside a file after setting the logging level to DEBUG, but it didn't throw anything useful.</p>
<p>The script:</p>
<pre><code>import logging
logging.basicConfig()
logging.getLogger().setLevel(logging.DEBUG)
import featuretools
</code></pre>
<p>The output:</p>
<pre><code>INFO:numexpr.utils:NumExpr defaulting to 8 threads.
2023-07-28 08:53:19,424 featuretools - WARNING Featuretools failed to load plugin tsfresh from library featuretools_tsfresh_primitives.__init__. For a full stack trace, set logging to debug.
</code></pre>
<p>However, I can import the libraries on its own apparently.</p>
<pre><code>>>> import featuretools as ft
2023-07-28 08:56:11,150 featuretools - WARNING Featuretools failed to load plugin tsfresh from library featuretools_tsfresh_primitives.__init__. For a full stack trace, set logging to debug.
>>> ft.__version__
'1.27.0'
>>> import featuretools_tsfresh_primitives as fttp
>>> fttp.__version__
'1.0.2'
>>> ts.__version__
'0.20.1'
</code></pre>
<p>I'm very confused because I installed these same libraries in a different environment (it was Python 3.11 while this one is 3.10) and there is no issue with that installation. What could be wrong here?</p>
| <python><deep-learning><feature-engineering><featuretools> | 2023-07-28 06:59:01 | 1 | 436 | J. Maria |
76,785,405 | 3,043,636 | How use a Blob Container folder in Azure ML Notebook | <p>For a Deep Learning (Computer Vision based), I have imported a folder in an Azure Blob Container. The folder itself contains two different folders "Train" and "Test". In both the "Train" and "Test" folders I have a set of different folders according to the 9 classes of images I would like to classify.</p>
<p>Now, I would like to train a deep learning algorithm to obtain a model able to classify those 9 classes, and I want to use an Azure Machine Learning notebook with the proper compute instance.</p>
<p>Now, when I try recalling the folder path in Blob Storage with the code:</p>
<pre><code>train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    r'path',  # here I have to specify the path
    shuffle=True,
    image_size=(img_height, img_width),
    batch_size=batch_size  # 32
)
</code></pre>
<p>I have to specify the path. I tried a lot of different approaches, passing directly the paths the service provides me, but none of them work:</p>
<ol>
<li>Take the link directly from Blob Storage and pass it.</li>
<li>Create an Azure datastore and pass the Folder_URI.</li>
<li>Create an Azure data asset and pass it.</li>
</ol>
<p>The error is for instance:</p>
<pre><code> Could not find directory azureml://subscriptions/ data asset/images/Train
</code></pre>
<p>How can I pass the proper link to the folder to read the data and start training the algorithm in Azure Machine Learning Notebook?</p>
| <python><azure><azure-machine-learning-service> | 2023-07-28 06:46:03 | 0 | 579 | user3043636 |
76,785,306 | 2,898,713 | Plot label at each vertex and intersection of a hexagram in matplotlib | <p>I'm trying to produce the below picture of "A hexagram or 6-pointed star polygon in which numbers are placed at each of the six vertices and six intersections".</p>
<p><a href="https://i.sstatic.net/lod8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lod8J.png" alt="enter image description here" /></a></p>
<p>Below is my code so far. I'm stuck trying to find the vertices of the inner hexagon (the intersections) at which to plot the numbers.</p>
<pre><code>import matplotlib.pyplot as plt
import math

def plot_hexagram_with_numbers(side_length):
    # Calculate the coordinates for the vertices of the hexagram
    vertices = []
    mu_hexagon = 360 / 6
    for i in range(6):
        angle_rad = math.radians(i * mu_hexagon)
        x = side_length * math.cos(angle_rad)
        y = side_length * math.sin(angle_rad)
        vertices.append((x, y))

    # Create a list of triangles' vertices for the hexagram
    triangles = []
    for i in range(6):
        triangle = [vertices[i], vertices[(i + 2) % 6], vertices[(i + 4) % 6]]
        triangles.append(triangle)

    # Plot the hexagram's triangles
    for triangle in triangles:
        x_coords, y_coords = zip(*triangle)
        plt.plot(x_coords, y_coords, 'b-')

    # Add numbers at each vertex and intersection of the hexagram
    label_distance = side_length / 20  # Adjust this value as needed
    for i in range(6):
        x, y = vertices[i]
        # can't figure out how to find the edges/intersections :(

    # Set axis equal and show the plot
    plt.axis('equal')
    plt.show()
</code></pre>
<p>This is the image the above code produces:</p>
<p><a href="https://i.sstatic.net/SgXnH.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/SgXnH.png" alt="enter image description here" /></a></p>
| <python><matplotlib><geometry> | 2023-07-28 06:23:43 | 3 | 1,403 | Reed Jones |
76,785,260 | 10,374,485 | python component import multi component | <p>I want to import</p>
<p><a href="https://i.sstatic.net/fiJSu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/fiJSu.png" alt="enter image description here" /></a></p>
<p>license.py and profile.py</p>
<pre><code>from scenario.myinfo.license import license
from scenario.myinfo.profile import profile
</code></pre>
<p>is working but</p>
<pre><code>from scenario.myinfo import license, profile
</code></pre>
<p>is not working.
How can I import multiple components in one statement?</p>
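<p>This may be because <code>from scenario.myinfo import license, profile</code> binds the submodules themselves, while the longer form binds the objects inside them. Re-exporting the objects in the package's <code>__init__.py</code> makes the short form work. A self-contained sketch that builds a throwaway copy of the layout (file contents assumed from the question's screenshot):</p>

```python
import os
import sys
import tempfile

# Recreate the question's package layout in a temp directory.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "scenario", "myinfo")
os.makedirs(pkg)
open(os.path.join(root, "scenario", "__init__.py"), "w").close()
with open(os.path.join(pkg, "license.py"), "w") as f:
    f.write("license = 'license-object'\n")
with open(os.path.join(pkg, "profile.py"), "w") as f:
    f.write("profile = 'profile-object'\n")
# The fix: re-export the inner objects from the package's __init__.py.
with open(os.path.join(pkg, "__init__.py"), "w") as f:
    f.write("from scenario.myinfo.license import license\n"
            "from scenario.myinfo.profile import profile\n")
sys.path.insert(0, root)

# With the re-export in place, the short form now binds the objects,
# not the submodules:
from scenario.myinfo import license, profile
print(license, profile)  # license-object profile-object
```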
| <python> | 2023-07-28 06:15:06 | 0 | 373 | Jeong hyunseok |
76,785,192 | 1,897,151 | missing value # in fastapi | <p>I have a query string that sends something like search=abc, and the funny thing is that this works, but when I send:</p>
<pre><code>search=####    <--- the search parameter arrives empty in FastAPI's debugger, something like the empty string ''
search=@$%#$$  <--- everything after # goes missing and FastAPI only gets @$%
</code></pre>
<p>My frontend is passing the # encoded, something like this:</p>
<pre><code>search=%23%23%23%23%23%23
</code></pre>
<p>i cant find any reason why fastapi is filtering out the # in this case also i am going through 2 services before hitting my last services part that shows empty ''. my first services interacts with the frontend and i received the # text and first services passing to 2nd services with <code>request.query_string</code> where this seems to be making it missing</p>
| <python><fastapi> | 2023-07-28 06:01:57 | 0 | 503 | user1897151 |
76,785,180 | 9,055,450 | Python3 OpenCV on Raspberry Pi Zero: VideoCapture for USB cam on /dev/video0 returns always False but camera works correctly if I access it via motion | <p>I try to use OpenCV in Python 3 on a Raspberry Pi Zero to capture a video frame from a USB camera identified as <code>/dev/video0</code>. Here is my code:</p>
<pre><code>import cv2
...

cam = cv2.VideoCapture(0)
if cam.isOpened():
    print("Camera successfully opened")
else:
    print("Camera could not be opened")

while True:
    success, image = cam.read()
    print("success in taking image? ", success)
    if success and image is not None:
        image = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
        # further code to be executed to process the image
    else:
        print("problem capturing image")
</code></pre>
<p>The output is</p>
<pre><code>Camera successfully opened
success in taking image? False
problem capturing image
success in taking image? False
problem capturing image
success in taking image? False
problem capturing image
...
</code></pre>
<p>It will never take an image successfully. It worked once (<code>success = True</code>) for a single iteration the very first time I executed my script but since then it is not working any more. So my first assumption was that the camera is broken. But if I use motion to access the camera, it works flawlessly. So the camera is connected correctly.</p>
<p>Any clue on what I am doing wrong?</p>
| <python><opencv><camera><raspberry-pi-zero> | 2023-07-28 05:58:02 | 1 | 544 | the_smart_home_maker |
76,785,143 | 1,581,090 | How to fix this python telnet code to send a command to a device a second time? | <p>Under Windows 10, using Python 3.10.11, I am trying to run a command on a telnet-connected device twice. The problem is that it only works the first time; the second time, the command is only half(?) transmitted to the device.</p>
<p>First, here is the complete code:</p>
<pre><code>import time
import telnetlib

def read(session, timeout=1):
    received = b""
    t0 = time.time()
    while time.time() - t0 < timeout:
        data = session.read_very_eager()
        if data:
            received += data
            t0 = time.time()
    return received.decode()

def write_and_read(session, command):
    time.sleep(1)
    session.write(command.encode() + b"\r\n")
    time.sleep(1)
    reply = read(session)
    print(reply)

session = telnetlib.Telnet("192.168.200.10", 9000, 20)
command = "$SENSOR,STATUS,192.168.200.53"
write_and_read(session, command)
write_and_read(session, command)
</code></pre>
<p>As you see the same command ("$SENSOR,STATUS,192.168.200.53") is executed twice. But only the first time it works and I get the expected output.</p>
<p>The output I get is</p>
<pre><code>Requesting radar report from 192.168.200.53...
Failed to query sensor. Is the IP correct? Is the sensor powered?
$SENSOR,STATUS,192.168
</code></pre>
<p>The first two lines are the correct response from the device, and the last line shows what I get when the command is executed again.</p>
<p>I already tried different sleeps, different reads, different timing; nothing helped. The second command just does not get through (?). The response I get varies, however. It can be:</p>
<pre><code>$SENSOR,STATUS,192.168
$SENSOR,STATUS,192.168.2
$SENSOR,STATUS,192.168.
</code></pre>
<p>etc. It is always some random number of characters missing.</p>
<p>Is there any way that issue can be fixed? Might it be related to Windows 10? A colleague with the same device using Windows 11 does not get this issue at all...</p>
<p>Here is the complete communication when I use a slightly altered code as follows:</p>
<pre><code>def write_and_read(session, command, a, b):
    time.sleep(a)
    to_write = command.encode() + b"\r\n"
    print("WRITE: ", to_write)
    session.write(to_write)
    time.sleep(b)
    reply = session.read_until(b"\n")
    print("READ: ", reply)
    reply = session.read_until(b"\n")
    print("READ: ", reply)
    return reply

command = "$SENSOR,STATUS,192.168.200.53"
with telnetlib.Telnet("192.168.200.10", 9000, 10) as session:
    write_and_read(session, command, 0, 2)
    write_and_read(session, command, 0, 2)
    session.close()
</code></pre>
<p>then the output is</p>
<pre><code>WRITE: b'$SENSOR,STATUS,192.168.200.53\r\n'
READ: b'\x1b[0GRequesting radar report from 192.168.200.53...\r\n'
READ: b'Failed to query sensor. Is the IP correct? Is the sensor powered?\r\n'
WRITE: b'$SENSOR,STATUS,192.168.200.53\r\n'
</code></pre>
<p>and then it blocks indefinitely.</p>
<p>On windows it did work when windows was using a different network adapter ("Realtek USB GbE Family Controller #4") instead of the network adapter "Realtek USB GbE Family Controller" (which is the exact same hardware device with the same parameters).</p>
<p>Debug output (note: using a different command, but does not matter):</p>
<pre><code>WRITE: b'$CONFIG,SET,IMU,DESTINATION_IP,192.168.200.5\r\n'
Telnet(192.168.200.10,9000): send b'$CONFIG,SET,IMU,DESTINATION_IP,192.168.200.5\r\n'
Telnet(192.168.200.10,9000): recv b'\xff\xfb\x01\xff\xfb\x03\x1b[0GDESTINATION_IP = 192.168.200.5\r\n'
Telnet(192.168.200.10,9000): IAC WILL 1
Telnet(192.168.200.10,9000): IAC WILL 3
Telnet(192.168.200.10,9000): recv b'\x1b'
Telnet(192.168.200.10,9000): recv b'[0G\xff\xfd\x18\xff\xfa\x18\x01\xff\xf0'
Telnet(192.168.200.10,9000): IAC DO 24
Telnet(192.168.200.10,9000): IAC 250 not recognized
Telnet(192.168.200.10,9000): IAC 240 not recognized
DESTINATION_IP = 192.168.200.5
READ:
WRITE: b'$CONFIG,SET,IMU,DESTINATION_IP,192.168.200.5\r\n'
Telnet(192.168.200.10,9000): send b'$CONFIG,SET,IMU,DESTINATION_IP,192.168.200.5\r\n'
Telnet(192.168.200.10,9000): recv b'$\x1b[1D \x1b[1D$C\x1b[2D \x1b[2D$CO\x1b[3D \x1b[3D$CON\x1b[4D \x1b['
Telnet(192.168.200.10,9000): recv b'4D$CONF\x1b[5D \x1b[5D$CONFI\x1b[6D \x1b[6D$CONFIG\x1b[7'
Telnet(192.168.200.10,9000): recv b'D \x1b[7D$CONFIG,\x1b[8D \x1b[8D$CONFIG,S\x1b[9D '
Telnet(192.168.200.10,9000): recv b' \x1b[9D$CONFIG,SE\x1b[10D \x1b[10D$CONFIG,'
Telnet(192.168.200.10,9000): recv b'SET\x1b[11D \x1b[11D$CONFIG,SET,\x1b[12D '
Telnet(192.168.200.10,9000): recv b' \x1b[12D$CONFIG,SET,I\x1b[13D \x1b[13D$CONFI'
Telnet(192.168.200.10,9000): recv b'G,SET,IM\x1b[14D \x1b[14D$CONFIG,SET,IMU\x1b[1'
Telnet(192.168.200.10,9000): recv b'5D \x1b[15D$CONFIG,SET,IMU,\x1b[16D '
Telnet(192.168.200.10,9000): recv b' \x1b[16D$CONFIG,SET,IMU,D\x1b[17D '
Telnet(192.168.200.10,9000): recv b' \x1b[17D$CONFIG,SET,IMU,DE\x1b[18D \x1b'
Telnet(192.168.200.10,9000): recv b'[18D$CONFIG,SET,IMU,DES\x1b[19D \x1b[1'
Telnet(192.168.200.10,9000): recv b'9D$CONFIG,SET,IMU,DEST\x1b[20D \x1b[2'
Telnet(192.168.200.10,9000): recv b'0D$CONFIG,SET,IMU,DESTI\x1b[21D \x1b'
Telnet(192.168.200.10,9000): recv b'[21D$CONFIG,SET,IMU,DESTIN\x1b[22D '
Telnet(192.168.200.10,9000): recv b' \x1b[22D$CONFIG,SET,IMU,DESTINA\x1b[23D '
Telnet(192.168.200.10,9000): recv b' \x1b[23D$CONFIG,SET,IMU,DESTINAT\x1b[24D '
Telnet(192.168.200.10,9000): recv b' \x1b[24D$CONFIG,SET,IMU,DESTINATI'
READ: $CONFIG,SET,IMU,DESTINATI
READ:
</code></pre>
<p>The first write/read is correct; the second write/read again does not work.
This also matches what I see in Wireshark.</p>
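<p>The debug output shows the device negotiating telnet options right after the first command (IAC WILL 1 is ECHO, IAC WILL 3 is SUPPRESS-GO-AHEAD), and the second command comes back as a character-by-character echo. A sketch worth trying (not a confirmed fix; host and port are taken from the question) is to install an option-negotiation callback that refuses everything before any traffic is exchanged, so the session stays in plain line mode. Note that <code>telnetlib</code> is deprecated since Python 3.11 and removed in 3.13, hence the local import:</p>

```python
# Telnet protocol bytes (RFC 854), spelled out so the helper does not
# depend on the deprecated telnetlib constants
IAC = bytes([255])
DONT = bytes([254])
DO = bytes([253])
WONT = bytes([252])
WILL = bytes([251])

def refuse_options(sock, cmd, opt):
    # Decline every option the peer proposes (WILL/WONT) or requests (DO/DONT)
    if cmd in (WILL, WONT):
        sock.sendall(IAC + DONT + opt)
    elif cmd in (DO, DONT):
        sock.sendall(IAC + WONT + opt)

def open_session(host="192.168.200.10", port=9000, timeout=20):
    import telnetlib  # deprecated in 3.11, removed in 3.13
    session = telnetlib.Telnet(host, port, timeout)
    # Must be installed before the first read so IAC bytes are intercepted
    session.set_option_negotiation_callback(refuse_options)
    return session
```

<p>If the callback fires and the second command still echoes back, the device itself is switching modes and the fix would have to happen on its side.</p>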
| <python><telnet> | 2023-07-28 05:50:50 | 4 | 45,023 | Alex |
76,784,693 | 20,771,478 | Access Excel on SharePoint via Rest API and copy data to MS SQL | <p>We have the following use case:<br />
Users are not able to input all the data they need into our ERP system, so they maintain Excel files with that data. To do planning activities they want to combine their Excel data with ERP data (from the database) in yet another Excel file. They want to make comments in this file, and they want it to be updated every day using the most recent data from the ERP system.</p>
<p>The solution that I envision is to use a Python script that gathers the Excel data at night, loads it to MS SQL, transforms it over there and then loads it back to SharePoint, replacing the old file if the process is successful.</p>
<p>The difficult part is accessing the Excel file on SharePoint. I wanted to use the Microsoft Graph API but was instead asked to use a Python library called <a href="https://pypi.org/project/Office365-REST-Python-Client/" rel="nofollow noreferrer">Office365</a>. Now I don't get along with the documentation too well so I will describe what I need in pseudo-code below and hope for your help.</p>
<p>Do you know a way in Python to achieve roughly the following?</p>
<pre><code>import MagicLibrary as MagL
import pandas as pd
SHAREPOINT_USERNAME = "Username@company.com"
SHAREPOINT_PASSWORD = "Password"
SHAREPOINT_URL = "Sharepoint URL"
SHAREPOINT_PATH = "Path to Document"
FILE_NAME = "NeededData.xls"
SHEET_NAME = "Sheet 1"
RANGE = "A1:D20"
SharePoint_Connection = MagL.Connection(SHAREPOINT_URL, SHAREPOINT_USERNAME, SHAREPOINT_PASSWORD)
Excel_Range = SharePoint_Connection.Path(SHAREPOINT_PATH).File(FILE_NAME).Sheets(SHEET_NAME ).Range(RANGE ).As_Pandas_Usable_Array
df = pd.to_dataframe(Excel_Range)
#Use SQL Alchemy below to write dataframe to MS SQL Server.
#Code
</code></pre>
<p>Alternatively, if the API is only able to get into SharePoint, but not into Excel, I would also be fine with loading the Excel file to the server that executes the script and then accessing the file contents there, using Pandas.</p>
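<p>A hedged sketch of the "download, then read locally" route with Office365-REST-Python-Client, following the library's documented <code>get_file_by_server_relative_url(...).download(...)</code> pattern. The function names are my own, username/password auth can be blocked by MFA in some tenants, and reading the sheet/range is then plain pandas:</p>

```python
from urllib.parse import urlparse

def server_relative_url(site_url: str, folder_path: str, file_name: str) -> str:
    # SharePoint's file API wants a server-relative path such as
    # "/sites/Team/Shared Documents/NeededData.xls"
    site_part = urlparse(site_url).path.rstrip("/")
    return f"{site_part}/{folder_path.strip('/')}/{file_name}"

def fetch_excel(site_url, username, password, folder_path, file_name, sheet_name):
    # Requires: pip install Office365-REST-Python-Client pandas xlrd
    from office365.sharepoint.client_context import ClientContext
    from office365.runtime.auth.user_credential import UserCredential
    import pandas as pd

    ctx = ClientContext(site_url).with_credentials(UserCredential(username, password))
    url = server_relative_url(site_url, folder_path, file_name)
    with open(file_name, "wb") as fh:
        ctx.web.get_file_by_server_relative_url(url).download(fh).execute_query()
    # .xls needs the xlrd engine; .xlsx uses openpyxl
    return pd.read_excel(file_name, sheet_name=sheet_name)
```

<p>The returned DataFrame can then be written to MS SQL with SQLAlchemy as planned; restricting to a range like A1:D20 can be done after the fact with <code>df.iloc</code>.</p>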
| <python><rest><sharepoint> | 2023-07-28 03:33:48 | 1 | 458 | Merlin Nestler |
76,784,634 | 19,157,137 | Jupyter Notebook Not Updating When Using Docker with Volumes | <p>I have set up a Docker environment to run Jupyter Notebook using a Dockerfile and a bash script to start the notebook. Additionally, I have created a <code>.gitignore</code> file to ignore certain elements. The issue I am facing is that Jupyter Notebook is not updating when I make changes to the notebooks in the <code>notebooks</code> directory on my host machine.</p>
<p>Directory Structure on My Host Machine:</p>
<pre><code>my_project/
│
├── Dockerfile
├── start_notebook.sh
├── .gitignore
├── requirements.txt
└── notebooks/
    ├── notebook1.ipynb
    ├── notebook2.ipynb
    ├── notebook3.ipynb
    └── ...
</code></pre>
<p>Here's the content of my Dockerfile:</p>
<pre><code>FROM jupyter/base-notebook:python-3.11
# Copy the requirements.txt file to the container
COPY requirements.txt /app/requirements.txt
# Install the dependencies from the requirements.txt file
RUN pip install -r /app/requirements.txt
# Set working directory
WORKDIR /app
# Expose Jupyter Notebook port
EXPOSE 8888
# Copy the notebooks directory to the container's /app directory
COPY notebooks /app/notebooks
# Run Jupyter Notebook on container startup
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--port=8888", "--no-browser", "--allow-root"]
</code></pre>
<p>And here's the content of my bash script (<code>start_notebook.sh</code>):</p>
<pre class="lang-bash prettyprint-override"><code>#!/bin/bash
# Get the current directory path
CURRENT_DIR=$(pwd)
# Build the Docker image using the Dockerfile in the current directory
docker build -t my_jupyter_image .
# Create a Docker volume for the ipynb files in the current directory
docker volume create --name notebook_volume --opt type=none --opt device="$CURRENT_DIR/notebooks" --opt o=bind
# Run the Docker container with the Jupyter Notebook image and disable login
docker run -d -p 8888:8888 \
    --name my_jupyter_container \
    -v notebook_volume:/app/notebooks \
    my_jupyter_image \
    start-notebook.sh --NotebookApp.token=''
</code></pre>
<p>And my <code>.gitignore</code> file contains the following entries:</p>
<pre><code># Ignore __pycache__ folder
./notebooks/__pycache__/
# Ignore .ipynb_checkpoints folder
.ipynb_checkpoints/
# Readme file
README.md
</code></pre>
<p>Even though I make changes to the notebooks in the <code>notebooks</code> directory on my host machine, the changes are not reflected when I access Jupyter Notebook through the browser at <code>http://localhost:8888</code>. The notebooks appear to be stuck in their previous state.</p>
<p>How can I resolve this issue and make sure that Jupyter Notebook updates properly when I modify the notebooks in the <code>notebooks</code> directory on my host machine? Is there something I am missing or need to configure differently in my Docker setup?</p>
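<p>One thing worth checking (a sketch, not a confirmed diagnosis): a named volume created with <code>type=none</code>/<code>o=bind</code> options only binds if it was created correctly, and if it silently fails the container falls back to the notebooks that were <code>COPY</code>-ed into the image at build time, which would look exactly like "stuck in their previous state". A plain bind mount removes the named-volume step entirely (container and image names taken from the question); note that an already-created container keeps its old mounts, so it must be removed with <code>docker rm -f my_jupyter_container</code> before re-running:</p>

```
docker run -d -p 8888:8888 \
    --name my_jupyter_container \
    -v "$CURRENT_DIR/notebooks":/app/notebooks \
    my_jupyter_image \
    start-notebook.sh --NotebookApp.token=''
```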
| <python><bash><docker><jupyter-notebook><jupyter> | 2023-07-28 03:15:32 | 1 | 363 | Bosser445 |
76,784,615 | 2,148,718 | Avoiding pip's backtracking when installing a package I created | <p>I have a package which has a complex web of dependencies. I need to publish this on PyPI; it's not designed as an end-user application. However, when I install the package, I get messages such as <code>INFO: This is taking longer than usual. You might need to provide the dependency resolver with stricter constraints to reduce runtime. See https://pip.pypa.io/warnings/backtracking for guidance. If you want to abort this run, press Ctrl + C.</code>. Often this goes on for longer than an hour, and I have to kill the installation process.</p>
<p>However, none of this happens if I <code>pip install my_package --use-deprecated=legacy-resolver</code>. This is fine for development, but once I publish on PyPI, my users will have to use <code>--use-deprecated=legacy-resolver</code> if they want it to ever install.</p>
<p>I could introduce arbitrary version constraints to help the resolver as suggested by Pip, but I don't actually want to constrain versions unnecessarily, because this will limit the compatibility of my package. And in any case I have tried doing this and limiting the dependency ranges of a few packages doesn't help, because most of the problematic dependencies are transitive dependencies - dependencies of dependencies. I don't want to declare these in my <code>pyproject.toml</code> because I don't actually use them myself.</p>
<p>Is there an official way to declare that I want to use a different dependency resolver for my package, such as the legacy one? Does PEP 517 make this possible? Or is this a core limitation of the Python ecosystem? Can I use a lockfile or constraint file for a published package to hint to the resolver which versions to use without restricting it?</p>
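<p>As far as I know there is no packaging-metadata way for a published package to select a resolver, but pip does support constraints files at install time: a constraints file pins versions used during resolution without adding any new requirements, which is exactly the "hint without restricting" behaviour described. A hedged sketch of what the install instructions could say (the file name is hypothetical, and it would have to be shipped in the repo or docs rather than in the package metadata):</p>

```
pip install my_package -c constraints.txt
```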
| <python><pip><setuptools> | 2023-07-28 03:09:12 | 0 | 20,337 | Migwell |
76,784,602 | 2,148,718 | Declaring dependencies only used for type checking in Python | <p>I often have packages which might be used with my library, but which I don't use any functionality from. However I also want to register all of this with the type system.</p>
<p>For example let's say that my package <code>a</code> has a function that can accept <code>b.B</code>:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from typing import TYPE_CHECKING

if TYPE_CHECKING:
    from b import B

def do_a(b: B):
    pass
</code></pre>
<p>This works well at runtime and also for type checking, but it's not clear how <code>b</code> should be treated in the package metadata. Is it a dependency that should go into <code>dependencies</code> in <code>pyproject.toml</code> because it's being imported (albeit conditionally) in the project? Automated tools like <a href="https://github.com/tweag/FawltyDeps" rel="nofollow noreferrer">https://github.com/tweag/FawltyDeps</a> will flag this package if it's not declared at all. Should it be declared as an optional extra? Or not declared at all?</p>
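<p>Since <code>b</code> is only imported under <code>TYPE_CHECKING</code>, it is not needed at runtime. One common convention (a sketch; the extra name <code>typing</code> is arbitrary) is to declare it as an optional extra in <code>pyproject.toml</code>, so type-checking users and CI can opt in with <code>pip install a[typing]</code> while runtime installs stay lean:</p>

```
[project.optional-dependencies]
typing = ["b"]
```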
| <python><setuptools><typing><pyproject.toml> | 2023-07-28 03:02:17 | 0 | 20,337 | Migwell |
76,784,564 | 8,968,910 | Python: 'OptionEngine' object has no attribute 'execute' | <p>This is my Code, I ran it on my vscode:</p>
<pre><code>engine_sql = sqlalchemy.create_engine(
    'mssql+pyodbc://abcd:xxxx@tt.t.t.tt/fruit?driver=SQL+Server+Native+Client+8.0',
    fast_executemany=True)
sql_query = f'''
SELECT TOP(10)
A.[Fruit_Name]
,B.[Store_Location]
FROM ['fruit'].[dbo].['info'] A
LEFT JOIN ['fruit'].[dbo].['shop'] B
ON A.fruit_ID = B.fruit_ID
'''
df = pd.read_sql_query(sql_query, engine_sql)
</code></pre>
<p>error:</p>
<pre><code>'OptionEngine' object has no attribute 'execute'
</code></pre>
<p>I've checked other solutions. When I use text():</p>
<pre><code>engine_sql = sqlalchemy.create_engine(
    'mssql+pyodbc://abcd:xxxx@tt.t.t.tt/fruit?driver=SQL+Server+Native+Client+8.0',
    fast_executemany=True)
sql_query = text(f'''
SELECT TOP(10)
A.[Fruit_Name]
,B.[Store_Location]
FROM ['fruit'].[dbo].['info'] A
LEFT JOIN ['fruit'].[dbo].['shop'] B
ON A.fruit_ID = B.fruit_ID
''')
df = pd.read_sql_query(sql_query, engine_sql)
</code></pre>
<p>error:</p>
<pre><code>UnicodeDecodeError: 'cp950' codec can't decode byte 0xe5 in position 527: illegal multibyte sequence
</code></pre>
<p>I still cannot figure out how to fix it. This is just part of the code. If I run the whole code, there is no error. That's pretty strange to me.</p>
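<p>These look like two separate problems. The <code>'OptionEngine' object has no attribute 'execute'</code> error is the SQLAlchemy 2.0 change (pandas can no longer call <code>engine.execute()</code> implicitly); the usual fix is to wrap the SQL in <code>text()</code> and pass an explicit connection. The <code>cp950</code> error is a decoding issue coming back from the ODBC driver and is unrelated to <code>text()</code>. A runnable sketch of the working pattern, with an in-memory SQLite database standing in for the <code>mssql+pyodbc</code> engine so it can be executed anywhere:</p>

```python
import pandas as pd
from sqlalchemy import create_engine, text

# In-memory SQLite stands in for the mssql+pyodbc engine from the
# question; the text()/connection handling is the same for either backend
engine = create_engine("sqlite://")
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE info (fruit_ID INTEGER, Fruit_Name TEXT)"))
    conn.execute(text("INSERT INTO info VALUES (1, 'apple'), (2, 'pear')"))

# pandas 2.x + SQLAlchemy 2.x: wrap the SQL in text() and hand
# read_sql_query an explicit connection instead of the raw engine
with engine.connect() as conn:
    df = pd.read_sql_query(text("SELECT Fruit_Name FROM info"), conn)
print(df["Fruit_Name"].tolist())
```

<p>If the pattern above still raises the <code>cp950</code> error against the real server, the bytes coming from the driver are not valid in the client's code page, which points at the driver/connection encoding rather than at pandas.</p>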
| <python><sqlalchemy> | 2023-07-28 02:49:06 | 0 | 699 | Lara19 |
76,784,521 | 23,512,643 | R/Python: Extracting Information from Google Maps | <p>I am working with the R and Python languages.</p>
<p>Suppose I search for the following Canadian Postal Code (M5V 3L9) on Google Maps:</p>
<p><a href="https://www.google.com/maps/place/Toronto,+ON+M5V+3L9/@43.642566,-79.3875851,18z/data=!4m6!3m5!1s0x882b34d436f9c825:0x9e9c6195e38030f2!8m2!3d43.6429129!4d-79.3853443!16s%2Fg%2F1tvq4rqd?entry=ttu" rel="nofollow noreferrer">https://www.google.com/maps/place/Toronto,+ON+M5V+3L9/@43.642566,-79.3875851,18z/data=!4m6!3m5!1s0x882b34d436f9c825:0x9e9c6195e38030f2!8m2!3d43.6429129!4d-79.3853443!16s%2Fg%2F1tvq4rqd?entry=ttu</a></p>
<p>When I search for this, I can see that the "perimeter" of this Postal Code is highlighted in red:</p>
<p><a href="https://i.sstatic.net/EKM4P.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EKM4P.png" alt="enter image description here" /></a></p>
<p><strong>My Question:</strong> (Using Selenium via R/Python) From an HTML/CSS/XML perspective - I am trying to get a list of all coordinates that make up the boundary of this perimeter.</p>
<p>I have been trying to explore the source code that is generated from this website to try and see if there is something I can do to see where the source code of this perimeter (e.g. in JSON) is being stored - but so far, I can't find anything:</p>
<p><a href="https://i.sstatic.net/2ugSu.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2ugSu.png" alt="enter image description here" /></a></p>
<p>I was hoping that perhaps there might be something which would allow me to use Selenium to repeatedly click around this perimeter and extract the longitude/latitude points - but so far, I can not find anything.</p>
<p>Can someone please show me how to do this?</p>
<p>Thanks!</p>
<p><strong>Note</strong>: Generic Selenium Code:</p>
<pre><code>library(RSelenium)
library(wdman)
library(netstat)
selenium()
selenium_object <- selenium(retcommand = T, check = F)
remote_driver <- rsDriver(browser = "chrome", chromever = "114.0.5735.90", verbose = F, port = free_port())
remDr<- remote_driver$client
remDr$navigate("https://www.google.com/maps/place/Toronto,+ON+M5V+3L9/@43.642566,-79.3875851,18z/data=!4m6!3m5!1s0x882b34d436f9c825:0x9e9c6195e38030f2!8m2!3d43.6429129!4d-79.3853443!16s%2Fg%2F1tvq4rqd?entry=ttu")
</code></pre>
| <python><html><r><json><xml> | 2023-07-28 02:35:58 | 3 | 6,799 | stats_noob |
76,784,420 | 1,982,032 | Why can't the Python script be executed during reboot? | <p>The simple Python script <code>/home/debian/project/del_video.py</code> works fine:</p>
<pre><code>#!/usr/bin/env python3
# -*- coding:utf-8 -*-
import os
import time

today = time.strftime("%w", time.localtime())
video_dir = '/home/ftpuser/ftp_dir/upload/6J078DEGAG56788'
if today == '5':
    for item in os.listdir(video_dir):
        if item != 'DVRWorkDirectory':
            del_dir = os.path.join(video_dir, item)
            os.system('sudo /usr/bin/rm -rf {} '.format(del_dir))
</code></pre>
<p>Today is Friday (as I post this), and <code>/usr/bin/python3 /home/debian/project/del_video.py</code> deletes files in <code>/home/ftpuser/ftp_dir/upload/6J078DEGAG56788</code>. I log in to my <a href="https://en.wikipedia.org/wiki/IP_camera" rel="nofollow noreferrer">IP camera</a>'s web UI to set the storage, and create a new file in <code>/home/ftpuser/ftp_dir/upload/6J078DEGAG56788</code>. I add the script to <a href="https://en.wikipedia.org/wiki/Cron#Overview" rel="nofollow noreferrer">crontab</a>:</p>
<pre class="lang-none prettyprint-override"><code>crontab -e
</code></pre>
<p>with this entry:</p>
<pre class="lang-none prettyprint-override"><code>@reboot /usr/bin/python3 /home/debian/project/del_video.py
</code></pre>
<p>Reboot, no file deleted, no error information in <code>cron.log</code>:</p>
<pre class="lang-none prettyprint-override"><code>Jul 28 09:46:29 debian cron[756]: (CRON) INFO (Running @reboot jobs)
Jul 28 09:46:31 debian CRON[825]: (debian) CMD (/usr/bin/python3 /home/debian/project/del_video.py)
Jul 28 09:49:06 debian crontab[2218]: (debian) BEGIN EDIT (debian)
Jul 28 09:50:38 debian crontab[2218]: (debian) END EDIT (debian)
Jul 28 09:53:35 debian crontab[2694]: (debian) BEGIN EDIT (debian)
Jul 28 09:55:01 debian CRON[2739]: (root) CMD (command -v debian-sa1 > /dev/null && debian-sa1 1 1)
Jul 28 09:57:56 debian crontab[2694]: (debian) END EDIT (debian)
</code></pre>
<p>How can I fix it then?</p>
<p>I can write the same function with Bash. In file <em>project/del_video.sh</em>:</p>
<pre class="lang-none prettyprint-override"><code>dw=$(date +%w)
video_dir='/home/ftpuser/ftp_dir/upload/6J078DEGAG56788'
if [[ $dw = 5 ]]; then
    for item in $(ls "$video_dir"); do
        if [[ $item != DVRWorkDirectory ]]; then
            sudo /usr/bin/rm -rf "$video_dir"/"$item"
        fi
    done
fi
</code></pre>
<p>I verified that <code>del_video.sh</code> works fine. How can I fix the Python script?</p>
<p>The permissions are already set for user <code>debian</code> in file <em>/etc/sudoers</em>:</p>
<pre class="lang-none prettyprint-override"><code>debian ALL=(ALL:ALL) NOPASSWD:ALL
</code></pre>
<p>Please replace the value (day of the week) according to the real value when you try.</p>
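<p>Cron's <code>@reboot</code> environment differs from an interactive shell (minimal <code>PATH</code>, no TTY, and at reboot the FTP directory may not even be mounted yet when the job fires), and the script as written fails silently in all of those cases. A hedged sketch that removes the <code>sudo</code>/<code>os.system</code> dependency entirely and prints what happened so cron's redirected output can be inspected; the <code>weekday</code> parameter mirrors the original Friday check:</p>

```python
#!/usr/bin/env python3
import os
import shutil
import time

VIDEO_DIR = '/home/ftpuser/ftp_dir/upload/6J078DEGAG56788'

def purge(video_dir, keep='DVRWorkDirectory', weekday='5'):
    """Delete everything in video_dir except `keep`, Fridays only."""
    removed = []
    if time.strftime('%w', time.localtime()) != weekday:
        return removed
    for item in os.listdir(video_dir):
        if item == keep:
            continue
        target = os.path.join(video_dir, item)
        # shutil.rmtree/os.remove avoid depending on sudo and $PATH,
        # both of which behave differently under cron's minimal environment
        if os.path.isdir(target):
            shutil.rmtree(target, ignore_errors=True)
        else:
            os.remove(target)
        removed.append(item)
    return removed

if __name__ == '__main__':
    try:
        print('removed:', purge(VIDEO_DIR))
    except OSError as exc:
        # there is no terminal under cron; redirect stdout/stderr in the
        # crontab line (>> /home/debian/project/del_video.log 2>&1) to see this
        print('error:', repr(exc))
```

<p>If the directory is mounted late, a short <code>sleep</code> in the crontab line before the script (or a systemd unit ordered after the mount) would also be worth trying.</p>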
| <python><cron><reboot> | 2023-07-28 02:00:48 | 1 | 355 | showkey |
76,784,316 | 1,956,069 | How to sample efficiently from a large Pandas Dataframe? | <p>I have a dataframe called X_it with shape (2667913, 42)</p>
<p>I'm trying to sample from that dataframe by using the below code:</p>
<pre><code>import numpy as np
np.random.seed(42)
sel_idx = X_it.sample(frac=0.1).index
X = X_it.loc[sel_idx]
</code></pre>
<p>The final line of code hangs indefinitely. Is there any better way of doing it?</p>
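<p>A duplicated index is the usual cause of <code>.loc</code> hanging here: with repeated labels, <code>.loc</code> returns every matching row for every label, which can grow combinatorially. <code>sample()</code> already returns the sampled rows, so the index round-trip can be dropped entirely. A sketch with synthetic data standing in for <code>X_it</code>:</p>

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
X_it = pd.DataFrame(rng.random((100_000, 5)))

# One step instead of sample -> index -> loc; random_state replaces
# the global np.random.seed call for reproducibility
X = X_it.sample(frac=0.1, random_state=42)
print(X.shape)
```

<p>If the intermediate index is genuinely needed, <code>X_it.reset_index(drop=True)</code> first would make the labels unique and <code>.loc</code> fast again.</p>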
| <python><pandas><dataframe><numpy> | 2023-07-28 01:17:18 | 2 | 391 | procrastinationmonkey |
76,784,207 | 2,037,637 | Limit number of choices shown in argparse | <p>I have a command-line option with a huge number of choices (hundreds) and it clutters up the <code>--help</code> output. Is there a way to truncate the <em>displayed</em> choices while still allowing the full range to be used?</p>
<p>Tiny example:</p>
<pre class="lang-py prettyprint-override"><code>parser.add_argument('foo', default=42, choices=[x for x in range(1000)])
</code></pre>
| <python><argparse> | 2023-07-28 00:37:31 | 1 | 3,849 | Alex Shroyer |
76,784,143 | 4,953,820 | Different ffmpeg result after saving to png | <p>Saving images to PNG first seems to produce different ffmpeg encodes. Running this test code</p>
<pre><code>from PIL import Image
import cv2
import ffmpeg
import hashlib
ffmpeg.input('test.jpg').output('testff.png').run()
cv2.imwrite('testcv.png',cv2.imread('test.jpg'))
Image.open('test.jpg').save('testpil.png')
hashes = []
for suf in ['.jpg', 'ff.png', 'cv.png', 'pil.png']:
    dest = 'test' + suf.replace('.', '') + '.mp4'
    ffmpeg.input('test' + suf).output(dest).run()
    hashes.append(hashlib.file_digest(open(dest, 'rb'), 'md5').hexdigest())
print(hashes)
</code></pre>
<p>I get <br/>
['a5b744a8ac0f6de9ec4de43ff737c46e'<br/>
,'ab62474f2160899e064ba24890047372'<br/>
,'baa788d5e4ef212ab610b8b5cf7772cb'<br/>
,'baa788d5e4ef212ab610b8b5cf7772cb']</p>
<p>As you can see, the only two that match are the cv2 and pillow conversions, and none of them match the original. In terms of file size, the results that passed to png first seem to be about 10% smaller than the direct-from-jpg result.</p>
<p>Why is this happening and how can I avoid changing image data until I'm ready to encode?</p>
| <python><image><video><ffmpeg><file-format> | 2023-07-28 00:13:38 | 0 | 617 | Kalev Maricq |
76,784,090 | 6,118,986 | Override default True == 1, False == 0 behavior | <p>I have dataframes that can contain a mix of booleans and integers, and I'd like to be able to do things like <code>df_1 == df_2[0][0]</code>, and guarantee that if <code>df_2[0][0]</code> is 1 that it won't match <code>True</code> values in <code>df_1</code>.</p>
| <python><pandas><dataframe> | 2023-07-27 23:58:57 | 2 | 403 | user6118986 |
76,784,012 | 398,670 | pip install -e . fails with "Expected end or semicolon" for git+https in requirements, but pip install -r succeeds | <p>I have a <code>requirements.txt</code> that uses <code>git+https</code> URLs like</p>
<pre><code>git+https://github.com/myorg/myrepo@v1.0.0#egg=mypackage
</code></pre>
<p>This installs fine with <code>pip install -r requirements.txt</code> .</p>
<p>But if I reference it in my <code>setup.cfg</code>:</p>
<pre><code>[options]
install_requires = file: requirements.txt
</code></pre>
<p>and <code>pip install -e .</code> my package as an editable install from a local dir for development, <code>pip</code> fails with:</p>
<pre><code> Getting requirements to build editable ... error
error: subprocess-exited-with-error
× Getting requirements to build editable did not run successfully.
│ exit code: 1
[...snip...]
setuptools.extern.packaging._tokenizer.ParserSyntaxError: Expected end or semicolon (after name and no valid version specifier)
git+https://github.com/myorg/myrepo@v1.0.0#egg=mypackage
</code></pre>
<p>despite the same <code>requirements.txt</code> being valid for <code>-r</code>. What's up?</p>
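<p>The cause is that setuptools parses <code>install_requires</code> entries as PEP 508 requirements, while the bare <code>git+https://...#egg=...</code> form is pip requirements-file syntax; <code>pip install -r</code> accepts it, the build backend does not. The PEP 508 direct-reference spelling of the same dependency looks like this (same repo and tag as above; note that PyPI rejects uploads whose dependencies use direct references, so this only works for packages distributed outside PyPI):</p>

```
mypackage @ git+https://github.com/myorg/myrepo@v1.0.0
```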
| <python><pip> | 2023-07-27 23:32:33 | 1 | 328,701 | Craig Ringer |
76,783,984 | 3,540,528 | Force annotation line to connect xy and xytext coordinates | <p>As described in the title, I would like to draw an annotation line that connects the annotation point (<code>xy</code>) with the text anchor point (<code>xytext</code>).</p>
<p>When the text is horizontally aligned to the center, I get the desired behavior. For instance, consider the following code:</p>
<pre><code>import matplotlib.pyplot as plt
plt.annotate('some text', xy=(5, 0), xytext=(5, -3), arrowprops=dict(arrowstyle="-"),
             fontsize=6, va='top', ha='center', xycoords='data', textcoords='data')
# Additional code for illustration purposes
plt.plot([5, 5], [0, -3], 'r--')
plt.scatter([5], [0], color='red', label='xy point')
plt.scatter([5], [-3], color='blue', label='xytext point')
plt.legend()
plt.grid()
plt.show()
</code></pre>
<p>This code produces the following image:</p>
<p><a href="https://i.sstatic.net/2Okya.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/2Okya.png" alt="Example ok" /></a></p>
<p>As soon as the horizontal alignment changes, for instance:</p>
<pre><code>plt.annotate('some text', xy=(5, 0), xytext=(5, -3), arrowprops=dict(arrowstyle="-"),
             fontsize=6, va='top', ha='right', xycoords='data', textcoords='data')
</code></pre>
<p>I get the following undesired result (I would like the annotation line to remain vertical):</p>
<p><a href="https://i.sstatic.net/yCKvS.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yCKvS.png" alt="Example wrong" /></a></p>
<p>Looking at the documentation of the <code>plt.annotate</code> function, I could not find any option that allows the desired result to be achieved. Is there any way to achieve the desired behavior other than plotting the line and the text separately?</p>
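<p>The connector is drawn from the edge of the text box, and where it attaches is controlled by the <code>relpos</code> key of <code>arrowprops</code>, given as fractions of the text bounding box. With <code>ha='right'</code> and <code>va='top'</code>, the corner of the text box that coincides with <code>xytext</code> is the top-right one, i.e. <code>relpos=(1, 1)</code>, which should keep the line on the <code>xy</code> to <code>xytext</code> segment. A sketch (the Agg backend is assumed so it runs headless):</p>

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# relpos picks the attachment point on the text bounding box as (x, y)
# fractions; (1, 1) is the top-right corner, which sits at xytext when
# ha='right' and va='top'
ann = ax.annotate('some text', xy=(5, 0), xytext=(5, -3),
                  arrowprops=dict(arrowstyle='-', relpos=(1, 1)),
                  fontsize=6, va='top', ha='right')
ax.plot([5, 5], [0, -3], 'r--')
ax.set_xlim(0, 10)
ax.set_ylim(-5, 1)
fig.savefig('annotate_demo.png')
```

<p>The same idea generalizes: <code>relpos=(0, 1)</code> for <code>ha='left'</code>, and so on for each alignment combination.</p>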
| <python><matplotlib> | 2023-07-27 23:24:53 | 1 | 467 | Davide |
76,783,828 | 1,673,776 | One or more OTLP metric data point(s) were dropped because New Relic does not fully support cumulative metrics over GRPC for FedRAMP customers | <p>First time user of New Relic here. Trying to figure out what I am doing wrong about sending OTLP data to New Relic. My Python code is:</p>
<pre class="lang-py prettyprint-override"><code>otlp_metric_reader = PeriodicExportingMetricReader(
    OTLPMetricExporter(
        endpoint="https://otlp.eu01.nr-data.net:4317",
        headers={"api-key": "..."},
    )
)
provider = MeterProvider(metric_readers=[otlp_metric_reader])
metrics.set_meter_provider(provider)
meter = metrics.get_meter("my_service")
</code></pre>
<p>The metric I configured is:</p>
<pre class="lang-py prettyprint-override"><code>my_histogram = meter.create_histogram(
    "my_histogram",
    unit="ms",
    description="...",
)
</code></pre>
<p>I am emitting the metric as follows:</p>
<pre class="lang-py prettyprint-override"><code>start_time = perf_counter_ns()
try:
    await foo()
finally:
    my_histogram.record((perf_counter_ns() - start_time) / 1_000_000, {"foo": "bar"})
</code></pre>
<p>Querying for</p>
<pre><code>SELECT message, metricName, name
FROM NrIntegrationError
WHERE newRelicFeature = 'Metrics'
SINCE 5 HOURS AGO
LIMIT 100
</code></pre>
<p>Shows:</p>
<blockquote>
<p>One or more OTLP metric data point(s) were dropped because New Relic does not fully support cumulative metrics over GRPC for FedRAMP customers. Histograms will be dropped and Sums will be ingested as Gauges. Try sending over HTTP instead.</p>
</blockquote>
<p>Now, as you can see, I am using the Europe endpoint for OTLP. It is true that I am using the gRPC endpoint, but this definitely has nothing to do with FedRAMP.</p>
<p>Am I hitting some weird issue with New Relic? Or am I doing something wrong myself?</p>
| <python><newrelic><open-telemetry> | 2023-07-27 22:40:11 | 1 | 14,690 | Victor |
76,783,751 | 9,983,652 | how to assign same color to marker and marker border in px.scatter() plot? | <p>Hi I am using below code from official website.</p>
<p><a href="https://plotly.com/python/marker-style/" rel="nofollow noreferrer">https://plotly.com/python/marker-style/</a></p>
<p>The marker border color is set to a single color, 'DarkSlateGrey', for all markers. I am wondering if it is possible to make the border color match the marker color of each trace instead?
Thanks</p>
<pre><code>import plotly.express as px
df = px.data.iris()
fig = px.scatter(df, x="sepal_width", y="sepal_length", color="species",color_discrete_sequence=['blue','red','green'])
fig.update_traces(marker=dict(size=12,
                              line=dict(width=2,
                                        color='DarkSlateGrey')),
                  selector=dict(mode='markers'))
fig.show()
</code></pre>
| <python><plotly> | 2023-07-27 22:21:07 | 1 | 4,338 | roudan |
76,783,648 | 2,735,009 | Mapreduce job not working for fetching data | <p>I have a map reduce job that I'd like to use to fetch some data from the <a href="https://pypi.org/project/scholarly/" rel="nofollow noreferrer">scholarly</a> package. Here's the code I've edited from the documentation:</p>
<pre><code>from multiprocessing import Pool
from scholarly import scholarly
def GetFirstAuthor(author_name):
    search_query = scholarly.search_author(author_name)
    first_author_result = next(search_query)
    return first_author_result

def RetrieveAuthorDetails(first_author_result):
    author = scholarly.fill(first_author_result)
    return author

def RetrievePublicationDetails(author):
    author_pub = []
    for pub in author['publications']:
        title = ''
        pub_year = 0
        author = ''
        journal = ''
        citation = ''
        if 'title' in pub['bib']:
            title = pub['bib']['title']
        if 'pub_year' in pub['bib']:
            pub_year = pub['bib']['pub_year']
        if 'author' in pub['bib']:
            author = pub['bib']['author']
        if 'journal' in pub['bib']:
            journal = pub['bib']['journal']
        if 'citation' in pub['bib']:
            citation = pub['bib']['citation']
        author_pub.append([title, pub_year, author, journal, citation])
    return author_pub

def mapper(user_name):
    try:
        first_author_result = GetFirstAuthor(user_name)
    except StopIteration:
        author_pub_details.append([user_name, 'Record not found for the author'])
    author = RetrieveAuthorDetails(first_author_result)
    return author, user_name

def reducer(author, user_name):
    author_pub = RetrievePublicationDetails(author)
    author_pub_details.append([user_name, author_pub])

author_pub_details = []
user_names = ['Marc Mertens','Katharina Breininger','Marie Caroline Guzian','James M. Shwayder','M. Narasimha','Brian W. Miller','Peter C. Y. Chen','Maxim N. Peshkov']

with Pool() as pool:
    author, user_name = pool.map(mapper, user_names)
    reducer(author, user_name)
</code></pre>
<p>This code works well without <code>mapreduce</code> but it seems to be stuck with it. I'm trying to run it for a small sample of 10 users and even that doesn't finish. What am I doing wrong here? It'd also be ideal to show a progress bar as the <code>mapreduce</code> job progresses.</p>
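<p>Two structural bugs stand out, independent of <code>scholarly</code> itself (which also rate-limits parallel requests, a likely cause of the hang). First, <code>pool.map</code> returns a <em>list</em> of <code>(author, user_name)</code> tuples, so <code>author, user_name = pool.map(...)</code> only unpacks by accident. Second, worker processes each get their own copy of <code>author_pub_details</code>, so appends inside <code>mapper</code> never reach the parent. A hedged sketch of the shape that works, with the scholarly lookup replaced by a stand-in so the structure is runnable:</p>

```python
from multiprocessing import Pool

def mapper(user_name):
    # Stand-in for the scholarly lookup; the real mapper must RETURN its
    # result rather than append to a shared list, because each worker
    # process has its own copy of module-level globals
    return user_name, len(user_name)

def reducer(results):
    # Runs in the parent process, so accumulating into a list is safe here
    return [[name, detail] for name, detail in results]

if __name__ == '__main__':
    user_names = ['Marc Mertens', 'Katharina Breininger']
    results = []
    with Pool() as pool:
        # imap yields results as they finish, which doubles as a
        # simple progress indicator
        for i, res in enumerate(pool.imap(mapper, user_names), 1):
            print(f'{i}/{len(user_names)} done')
            results.append(res)
    print(reducer(results))
```

<p>Since the work is network-bound, <code>multiprocessing.pool.ThreadPool</code> (same API) may be the better fit, and keeping the worker count low reduces the chance of Google Scholar blocking the requests.</p>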
| <python><parallel-processing><mapreduce> | 2023-07-27 21:57:30 | 0 | 4,797 | Patthebug |
76,783,550 | 825,227 | Improve program/system performance of program run in parallel | <p>I have a Python script I'm running to retrieve data from an external API.</p>
<p>The script takes an arbitrary ID number and a product name as inputs, and prints and saves to file data that is returned by the API. Data throughput varies by product name, but is returned at roughly the rate of 1 data row per second (slower at different times in the day).</p>
<p>I'm able to run the script in parallel for multiple products but notice CPU load creeps up as the programs continue to run over time (with limited load from other software). I have seen CPU load go as high as 4 on an 8-core machine. I am currently running Ubuntu 22.04 on an i7-8650 ThinkPad.</p>
<p>I wonder if anyone can shed some light on how to improve the performance of the code below. I imagine the write to disk (e.g., <code>df.to_csv(...</code>) is a primary bottleneck, but I am not sure how to modify the code so the data is written to file as it is streamed continuously.</p>
<pre><code>from client import *
from wrapper import *
from contract import *
import time
import pandas as pd
import datetime
import threading
import sys
import os

global cl
cl = sys.argv[1]
global sym
sym = sys.argv[2]
global df
df = pd.DataFrame()

class TestApp(EClient, EWrapper):
    def __init__(self):
        EClient.__init__(self, wrapper=self)

    def reqIds(self, numIds: int):
        return super().reqIds(numIds)

    def updateMktDepth(self, reqId: TickerId, position: int, operation: int, side: int, price: float, size: int):
        global df
        super().updateMktDepth(reqId, position, operation, side, price, size)
        time = datetime.datetime.now()
        cols = ['Time','Symbol','Position','Operation','Side','Price','Size']
        data = [time,sym,position,operation,side,price,size]
        print(f'{time}: ReqId: {reqId} Sym: {sym} Position: {position} Operation: {operation} Side: {side}, Price: {price} Size {size}')
        d2 = pd.DataFrame(data, cols)
        d2 = d2.T
        df = pd.concat([df, d2])  # df.concat(d2)
        df.to_csv(f'~/data/{sym}_l2.csv')

def main():
    try:
        app = TestApp()
        app.connect("127.0.0.1", 7496, cl)
        print('Connection successful')

        t = threading.Thread(name="API_worker", target=app.run)
        t.start()
        print("Returned from run()")

        c = Contract()
        c.localSymbol = sym
        c.secType = 'FUT'
        c.exchange = 'CME'
        c.currency = 'USD'
        time.sleep(1)

        # clean up prior to call
        f = f'/home/chris/data/{sym}_l2'
        t = datetime.datetime.now().strftime('%Y%m%d%H%M')
        if os.path.isfile(f+'.csv'):
            os.rename(f+'.csv', f'{f}_{t}.csv')

        app.reqMktDepth(cl, c, 20, 0, [])
    except KeyboardInterrupt:
        print('Keyboard interrupt, processing ended')
        app.disconnect()

if __name__ == "__main__":
    main()
</code></pre>
<p><strong>Edit:</strong></p>
<p>Amended version per comment below:</p>
<pre><code>class TestApp(EClient, EWrapper):
    def __init__(self):
        EClient.__init__(self, wrapper=self)

    def reqIds(self, numIds: int):
        return super().reqIds(numIds)

    def contractDetails(self, reqId: int, contractDetails: ContractDetails):
        super().contractDetails(reqId, contractDetails)
        print(f"contract details: {contractDetails}")
        # return super().contractDetails(reqId, contractDetails)

    def contractDetailsEnd(self, reqId: int):
        print("End of contract details")
        self.disconnect()

    def updateMktDepth(self, reqId: TickerId, position: int, operation: int, side: int, price: float, size: int):
        super().updateMktDepth(reqId, position, operation, side, price, size)
        time = datetime.datetime.now()
        data = [time,sym,position,operation,side,price,size]
        print(f'{time}: ReqId: {reqId} Sym: {sym} Position: {position} Operation: {operation} Side: {side}, Price: {price} Size {size}')
        with open(f'/home/chris/data/{sym}_l2.csv', mode='a', newline='') as file:
            wr = csv.writer(file)
            wr.writerow(data)

def main():
    try:
        app = TestApp()
        app.connect("127.0.0.1", 7496, cl)
        print('Connection successful')

        t = threading.Thread(name=f'API_worker_{sym}', target=app.run)
        t.start()
        # app.run()
        print("Returned from run()")

        c = Contract()
        c.localSymbol = sym
        c.secType = 'FUT'
        c.exchange = 'CME'
        c.currency = 'USD'
        time.sleep(1)

        # clean up prior to call
        f = f'/home/chris/data/{sym}_l2'
        t = datetime.datetime.now().strftime('%Y%m%d%H%M')
        cols = ['Time','Symbol','Position','Operation','Side','Price','Size']
        # if the file exists, rename it and create a new file with headers
        if os.path.isfile(f+'.csv'):
            os.rename(f+'.csv', f'{f}_{t}.csv')
            with open(f+'.csv', 'w', newline='') as file:
                wr = csv.writer(file)
                wr.writerow(cols)
        # else just create a new file with headers
        else:
            with open(f+'.csv', 'w', newline='') as file:
                wr = csv.writer(file)
                wr.writerow(cols)

        app.reqMktDepth(cl, c, 20, 0, [])
</code></pre>
| <python><pandas><tws> | 2023-07-27 21:38:52 | 0 | 1,702 | Chris |
76,783,488 | 12,870,515 | PySide2: qt.qpa.plugin: Could not load the Qt platform plugin "windows" in "" even though it was found | <p>I just installed PySide2 with pip in my base environment and, when I tried to run an application, I got the following error:</p>
<pre><code>qt.qpa.plugin: Could not load the Qt platform plugin "windows" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: direct2d, minimal, offscreen, webgl, windows.
</code></pre>
<p>I then created a venv and installed PySide2 and the application runs, but I'd like to understand what causes this error and how I can fix it.</p>
<p>What I tried to run is a very simple window with no widgets yet.</p>
<pre class="lang-py prettyprint-override"><code>from PySide2.QtWidgets import (QApplication, QLabel, QVBoxLayout, QPushButton,
                               QWidget, QGridLayout, QTabWidget, QListWidget,
                               QListWidgetItem, QCheckBox, QHBoxLayout,
                               QHBoxLayout, QLineEdit, QDialog, QMessageBox,
                               QStyle, QStyleOption, QStylePainter,
                               QStackedWidget, QScrollArea, QComboBox,
                               QDoubleSpinBox)
from PySide2 import QtCore, QtWidgets
from PySide2.QtCore import QRect, QPoint, QSettings, QSize, QProcess, Qt


class Window(QWidget):
    def __init__(self):
        super().__init__()
        self.setStyleSheet('''
        ''')
        self.setWindowTitle('Some title')
        self.screen = QApplication.primaryScreen()
        rect = QRect(QPoint(), self.screen.size() * 0.9)
        rect.moveCenter(self.screen.geometry().center())
        self.setGeometry(rect)


if __name__ == '__main__':
    import sys
    app = QApplication(sys.argv)
    window = Window()
    window.show()
    sys.exit(app.exec_())
</code></pre>
| <python><pyqt><pyqt5><pyside2> | 2023-07-27 21:25:02 | 0 | 358 | TheSprinter |
76,783,485 | 15,452,898 | PySpark filtering on multiple criteria | <p>My example (dataframe) is like this:</p>
<pre><code>Name  ID   ContractDate  LoanSum  ClosingDate
A     ID1  2022-10-10    10       2022-10-16
A     ID1  2022-10-10    15       2022-10-18
A     ID1  2022-10-20    20       2022-10-31
A     ID1  2022-10-20    20       2022-10-30
A     ID1  2022-11-10    14       2022-11-22
A     ID1  2022-11-10    15       2022-11-22
B     ID2  2022-11-11    15       2022-11-15
B     ID2  2022-11-11    30       2022-11-18
B     ID2  2022-11-17    35       2022-11-22
B     ID2  2022-11-17    35       2022-11-24
C     ID3  2022-12-19    19       2022-11-10
</code></pre>
<p>My goal is to create a new dataframe that contains all loans issued to specific borrowers (group by unique ID) given the following conditions:</p>
<ul>
<li>two loans should be granted on the same day;</li>
<li>the next two loans should also be granted on the same day, but the time difference between the ClosingDate of any of the previously issued loans and the newly issued two loans should not exceed 5 days.</li>
</ul>
<p>In other words my desirable outcome is like this:</p>
<pre><code>Name  ID   ContractDate  LoanSum  ClosingDate
A     ID1  2022-10-10    10       2022-10-16
A     ID1  2022-10-10    15       2022-10-18
A     ID1  2022-10-20    20       2022-10-31
A     ID1  2022-10-20    20       2022-10-30
</code></pre>
<p>(the difference between the first ClosingDate and the next ContractDate in this example is 4 days (2022-10-20 minus 2022-10-16))</p>
<p>I've performed the following code that gives me an opportunity to get two or more loans issued to a specific borrower at the same date (grouped by ID):</p>
<pre><code>from pyspark.sql import functions as f
from pyspark.sql import Window

df = spark.createDataFrame(data).toDF('Name','ID','ContractDate','LoanSum','ClosingDate')
df.show()

cols = df.columns

w = Window.partitionBy('ID').orderBy('ContractDate')
df.withColumn('PreviousContractDate', f.lag('ContractDate').over(w)) \
  .withColumn('Target', f.expr('datediff(ContractDate, PreviousContractDate) == 0')) \
  .withColumn('Target', f.col('Target') | f.lead('Target').over(w)) \
  .filter('Target == True')
</code></pre>
<p>But I am stuck on how to apply the filtering.</p>
<p>Any help is highly appreciated!</p>
| <python><dataframe><pyspark><filtering> | 2023-07-27 21:24:17 | 1 | 333 | lenpyspanacb |
76,783,459 | 1,457,672 | python pandas: Generate (three) cells from one cell | <p>I have a simple dataframe consisting of some metadata in a few columns and then a column with a sentence in it. I would like to use textacy's SVO extractor to generate three new columns, one each for the subject, verb, and object. I am trying to do this in as pandas a way as possible:</p>
<pre><code>metadata  sentence
1-0       Thank you so much, Chris.
1-1       And it's truly a great honor to be here.
1-2       I have been blown away by this conference.
1-3       And I say that sincerely.
</code></pre>
<p>To which I tried this:</p>
<pre class="lang-py prettyprint-override"><code>def svo(text):
    svotriple = textacy.extract.triples.subject_verb_object_triples(nlp(text))
    for item in svotriple:
        df['subject'] = str(item[0][-1])
        df['verb'] = str(item[1][-1])
        df['object'] = str(item[2])

df.apply(svo(df['sentence'].values[0]))
</code></pre>
<p>I've tried to get just the sentence as a string out of the sentence column a couple of ways. Most of them returned the fact that I was actually getting a series. I want this to work row-by-row. My impulse was to go with a <code>for</code> loop, but I really want to try to do this the pandas way. (Not that my for loops were working terribly well.)</p>
| <python><pandas><nlp><spacy> | 2023-07-27 21:17:49 | 1 | 407 | John Laudun |
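Not part of the original question — a sketch of one row-wise pattern that may help here: if the function passed to `apply` returns a `pd.Series`, pandas expands it into one column per key. The `extract_svo` function below is a hypothetical stand-in for the textacy call (it just splits words), so only the `apply` mechanics are illustrated:

```python
import pandas as pd

df = pd.DataFrame({"sentence": ["Thank you so much, Chris.",
                                "And I say that sincerely."]})

# hypothetical stand-in for textacy's SVO extraction -- illustration only
def extract_svo(text):
    words = text.split()
    return pd.Series({"subject": words[0], "verb": words[1], "object": words[-1]})

# a Series-returning function applied to a column expands into one column per key
df[["subject", "verb", "object"]] = df["sentence"].apply(extract_svo)
print(df[["subject", "verb", "object"]])
```

The real `subject_verb_object_triples` call would slot in where `extract_svo` splits words, returning the three strings as a `pd.Series` per row instead of assigning whole columns inside the loop.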
76,783,437 | 552,683 | How best to implement a child property that is a list of integers | <p>I have a 1-N relationship where the child property is just a list of integers. Something like:</p>
<pre><code>class LotteryDraw(Base):
    __tablename__ = "draw"

    id: Mapped[int] = mapped_column(primary_key=True)
    draw_time: Mapped[dt.datetime]
    numbers: Mapped[List["DrawNumber"]] = relationship(
        back_populates="draw", cascade="all, delete-orphan"
    )

    def __repr__(self):
        return f"Draw(id={self.id!r}, draw_time={self.draw_time})"


class DrawNumber(Base):
    __tablename__ = "draw_numbers"

    id: Mapped[int] = mapped_column(primary_key=True)
    number: Mapped[int]
    draw_id: Mapped[int] = mapped_column(ForeignKey("draw.id"))
    draw: Mapped["LotteryDraw"] = relationship(back_populates="numbers")

    def __repr__(self):
        return f"{self.number}"


with Session(engine) as session:
    draw = LotteryDraw(
        draw_time=dt.datetime.now(),
        numbers=[DrawNumber(number=1), DrawNumber(number=39), DrawNumber(number=45), DrawNumber(number=46)],
        # I would like to be able to do this instead: numbers=[1, 39, 45, 46],
    )
    session.add_all([draw])
    session.commit()
</code></pre>
<p>As you can see, working with the <code>numbers</code> property of a LotteryDraw requires spelling out the full instantiation: <code>DrawNumber(number=N)</code>. Is it possible to pretend in my code that they are just plain integers, both for getting and setting <code>LotteryDraw.numbers</code>? (I don't mind having them as <code>DrawNumber</code>s when working just with the <code>draw_numbers</code> table directly, I'm mainly after simplifying working with LotteryDraw.)</p>
| <python><sqlalchemy> | 2023-07-27 21:12:16 | 1 | 1,140 | Davor Cubranic |
76,783,432 | 7,169,895 | Issues with PySide6 running | <p>I recently installed Python on a Windows machine.</p>
<p>I installed PySide6 with no errors.
Reinstalling it shows that it is looking in a different install folder than my base Python install.</p>
<pre><code>PS C:\Users\Owner\PyCharmProjects\InvestmentHunter> pip install PySide6
Requirement already satisfied: PySide6 in c:\python\lib\site-packages (6.5.2)
Requirement already satisfied: shiboken6==6.5.2 in c:\users\owner\appdata\roaming\python\python311\site-packages (from PySide6) (6.5.2)
Requirement already satisfied: PySide6-Essentials==6.5.2 in c:\users\owner\appdata\roaming\python\python311\site-packages (from PySide6) (6.5.2)
Requirement already satisfied: PySide6-Addons==6.5.2 in c:\users\owner\appdata\roaming\python\python311\site-packages (from PySide6) (6.5.2)
</code></pre>
<p>Notice the install location.</p>
<p>Running a simple GUI program with PySide6 results in</p>
<pre><code>Traceback (most recent call last):
File "C:\Users\Owner\PyCharmProjects\InvestmentHunter\main.py", line 2, in <module>
from gui import app
File "C:\Users\Owner\PyCharmProjects\InvestmentHunter\gui.py", line 1, in <module>
from PySide6.QWidgets import QApplication, QTableWidget,QTableWidgetItem
File "C:\Python\Lib\site-packages\PySide6\__init__.py", line 124, in <module>
_setupQtDirectories()
File "C:\Python\Lib\site-packages\PySide6\__init__.py", line 58, in _setupQtDirectories
for dir in _additional_dll_directories(pyside_package_dir):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Python\Lib\site-packages\PySide6\__init__.py", line 27, in _additional_dll_directories
raise ImportError(str(shiboken6) + ' does not exist')
ImportError: C:\Python\Lib\shiboken6\libshiboken does not exist
</code></pre>
<p><code>pip install shiboken6</code></p>
<p>results in</p>
<pre><code>pip install shiboken6
Requirement already satisfied: shiboken6 in c:\users\owner\appdata\roaming\python\python311\site-packages (6.5.2)
</code></pre>
<p>My code is</p>
<pre><code>from PySide6.QWidgets import QApplication, QTableWidget,QTableWidgetItem
from PySide6.QGui import QColor
app = QApplication()
table = QTableWidget()
table.setRowCount(2)
table.setColumnCount(3)
table.show()
</code></pre>
<p>I do not get an error from a plain PySide6 import, but it does not detect QtWidgets, so I get an error that the module is not found.
I am stumped as I have tried a reinstall, adding PATH variables, and a lot of other stuff listed on StackOverflow. Any help getting this running is appreciated.</p>
| <python><pyside6> | 2023-07-27 21:10:49 | 0 | 786 | David Frick |
76,783,322 | 3,566,313 | Has anyone tried putting a cell in Jupyter 7 that can handle streaming data? | <p><strong>Please note I am asking if this CAN be done, NOT how to do it.</strong></p>
<p>I am about to try Jupyter 7, and I think it should be able to handle streaming data that updates a cell. I would set up a stream of prices from a data source that updates a data structure in the cell once every second. There might be 20 symbols in the data structure. The feed would come in as a structured ZMQ message that would be parsed and used to apply the change inside the cell. So the structure might look like this:</p>
<pre><code>symbol  price
AAPL    100
DE      232
TSLA    1000
</code></pre>
<p>a new ZMQ message would come in reflecting a change in DE's price</p>
<pre><code>DE 250
</code></pre>
<p>the cell structure would then look like this (auto-refreshed):</p>
<pre><code>symbol  price
AAPL    100
DE      250
TSLA    1000
</code></pre>
<p>My understanding is that Jupyter uses ZMQ under the hood, and Jupyter 7 has a collaboration mode that might help me here.</p>
| <python><jupyter-notebook><pyzmq><data-stream> | 2023-07-27 20:48:06 | 0 | 546 | theakson |
76,783,294 | 8,519,380 | Intelligent distribution of tasks among workers in Celery | <p>After a week of trying and searching, I didn't get any results and I would appreciate your help.</p>
<p><strong>Summary:</strong><br>
I have 4 workers, and there is an app.task inside each worker.<br>
Every day, these 4 workers have to do nearly a thousand tasks.<br>
The problem is how to intelligently divide the tasks among these 4 workers.</p>
<p><strong>More details:</strong><br>
My current code divides the 1,000 tasks by 4, so each worker is given 250 tasks. Why do I split them up? Because I have to apply_async the tasks at the beginning of the work (so each worker has a separate queue).<br>
The workers execute their tasks without any problems, but a challenge arises when some workers finish their tasks faster and end up with nothing to do, while other workers may keep executing their tasks for hours.</p>
<p><strong>What am I looking for?</strong><br>
I am looking for a way to keep all 1,000 tasks in one queue without dividing them, and have the 4 workers automatically take tasks from that single queue and execute them; that way, the workers will finish their tasks at almost the same time.</p>
<p><strong>My code is in 4 files:</strong></p>
<p>celery.py:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import absolute_import, unicode_literals
from celery import Celery

app = Celery('celery_app')
app.config_from_object('celery_app.celeryconfig')

if __name__ == '__main__':
    app.start()
</code></pre>
<p>celeryconfig.py:</p>
<pre class="lang-py prettyprint-override"><code>broker_url='amqp://guest@localhost//'
result_backend='rpc://'
include=['celery_app.tasks']
worker_prefetch_multiplier = 1

task_routes={
    'celery_app.tasks.app_1000':{'queue':'q_1000'},
    'celery_app.tasks.app_1002':{'queue':'q_1002'},
    'celery_app.tasks.app_1004':{'queue':'q_1004'},
    'celery_app.tasks.app_1006':{'queue':'q_1006'},
    'celery_app.tasks.app_timeout':{'queue':'q_timeout'},
}
</code></pre>
<p>tasks.py:<br>
There is a lot of code in the tasks.py file; see <a href="https://raw.githubusercontent.com/arezooebrahimi/celery_distribution_tasks/main/celery_app/tasks.py" rel="nofollow noreferrer">this link on GitHub</a>.</p>
<p>api.py:<br>
This is a simulated API: for example, if I send the number 1000, it is as if 1,000 tasks have to be done, and those thousand tasks are divided between the 4 apps.</p>
<pre class="lang-py prettyprint-override"><code>from fastapi import FastAPI, HTTPException, Request
from celery_app.tasks import app_1000, app_1002, app_1004, app_1006, get_active_queue
import random

app = FastAPI()


@app.get("/run_tasks")
async def run_tasks(num_of_tasks: int):
    try:
        app_list = [app_1000, app_1002, app_1004, app_1006]
        for i in range(0, num_of_tasks, 4):
            app_list[0].apply_async()
            app_list[1].apply_async()
            app_list[2].apply_async()
            app_list[3].apply_async()
        return 'ok'
    except Exception as e:
        raise HTTPException(status_code=500, detail=str(e))


@app.get("/get_active_queue")
async def get_active_q():
    res = get_active_queue()
    print(res)
    return 'ok'
</code></pre>
<p>Please guide me on how to do this? If something was unclear, comment so I can explain more.</p>
<p>My code is in the following link: <a href="https://github.com/arezooebrahimi/celery_distribution_tasks" rel="nofollow noreferrer">https://github.com/arezooebrahimi/celery_distribution_tasks</a></p>
| <python><rabbitmq><celery><distributed-computing> | 2023-07-27 20:43:03 | 1 | 778 | Sardar |
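Not from the original post — one common approach to this problem (sketched below; the single task name `app_task` and the queue name `q_shared` are hypothetical replacements for the four per-worker tasks) is to route everything to one shared queue and start all four workers on that queue, e.g. `celery -A celery_app worker -Q q_shared`. With prefetching limited to one and late acknowledgements, each worker reserves only one task at a time, so faster workers naturally take a larger share of the 1,000:

```python
# celeryconfig.py (sketch): one shared queue instead of four per-worker queues
broker_url = 'amqp://guest@localhost//'
result_backend = 'rpc://'
include = ['celery_app.tasks']

worker_prefetch_multiplier = 1   # each worker reserves only one task at a time
task_acks_late = True            # tasks are re-queued if a worker dies mid-task

task_routes = {
    # 'app_task' is hypothetical: a single task consumed by all four workers
    'celery_app.tasks.app_task': {'queue': 'q_shared'},
    'celery_app.tasks.app_timeout': {'queue': 'q_timeout'},
}
```

The producer side then calls `app_task.apply_async()` 1,000 times into `q_shared`, with no splitting by worker.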
76,783,224 | 880,783 | Order of entries in a dictionary composed by comprehension | <p>I read that Python's <code>dict</code>s are ordered by insertion order. However, the following example code prints <code>dict</code> with seemingly random orders varying from run to run:</p>
<pre class="lang-py prettyprint-override"><code>DICT = {"1D Hist": 1, "1D Plot": 2}
print(DICT)
dict = {entry for entry in DICT}
print(dict)
</code></pre>
<p>Why is that? Is that intended?</p>
<pre><code>C:\Code>python bug.py
{'1D Hist': 1, '1D Plot': 2}
{'1D Plot', '1D Hist'}
C:\Code>python bug.py
{'1D Hist': 1, '1D Plot': 2}
{'1D Hist', '1D Plot'}
</code></pre>
<p>I am using Pyton 3.11.4.</p>
| <python><dictionary> | 2023-07-27 20:31:56 | 1 | 6,279 | bers |
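For reference (not part of the original post): `{entry for entry in DICT}` is a set comprehension over the keys, not a dict comprehension, and sets have no defined order — a minimal sketch of the difference:

```python
DICT = {"1D Hist": 1, "1D Plot": 2}

keys_set = {entry for entry in DICT}    # a SET of the keys -- unordered
copy = {k: v for k, v in DICT.items()}  # a dict -- insertion order preserved

print(type(keys_set))  # <class 'set'>
print(copy)            # {'1D Hist': 1, '1D Plot': 2}
```

Dicts have guaranteed insertion order since Python 3.7; the varying output between runs comes from set iteration order, which depends on string hash randomization.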
76,783,153 | 4,658,078 | Python: pool.map_async with multiple parameters | <p>I need some help with python pool.</p>
<pre><code>def read_values(entry, second):
    ....

async_output = pool.map_async(partial(read_values, second='second'), string_array)
output_array = async_output.get()
</code></pre>
<p>The above code is working. I really want the below:</p>
<pre><code>logfile = open("./async.log", "a+", 1)

def read_values(entry, second):
    ....

async_output = pool.map_async(partial(read_values, second=logfile), string_array)
output_array = async_output.get()
</code></pre>
<p>This is not working!</p>
<p>Is it possible to pass an open file reference to a pool?</p>
| <python><pool><filereference> | 2023-07-27 20:20:13 | 0 | 563 | ScubaInstructor |
76,783,085 | 10,969,942 | Python: How to sort a list of custom objects by multiple attributes by different order? | <p>For example, if I have a <code>Person</code> class</p>
<pre class="lang-py prettyprint-override"><code>class Person:
    def __init__(self, name: str, age: int):
        self.name = name
        self.age = age

    def __repr__(self) -> str:
        return f"({self.name}, {self.age})"
</code></pre>
<p>and a List of <code>Person</code></p>
<pre class="lang-py prettyprint-override"><code>persons = [
    Person("Bob", 25),
    Person("Alice", 25),
    Person("Charlie", 23),
    Person("Dave", 25),
]
</code></pre>
<p>I can sort the list by <code>age</code> in ascending order, and in case of a tie, sort by <code>name</code> in ascending order using the following method:</p>
<pre><code>sorted_persons = sorted(persons, key=lambda p: (p.age, p.name))
</code></pre>
<p>Question:</p>
<p>However, I'm looking for a way to sort the list by <code>age</code> in ascending order and, in the event of a tie in age, sort by <code>name</code> in descending order. How could I achieve this in Python?</p>
<p>I've come up with one solution, as shown below, but it seems a bit inelegant. Is there a more succinct way to write a string comparison method that can handle all three cases (i.e., less than, equal to, and greater than)? For instance, Java has a <code>s1.compareTo(s2)</code> method that makes such comparisons straightforward.</p>
<p>Here's the solution I'm currently working with:</p>
<pre class="lang-py prettyprint-override"><code>from functools import cmp_to_key

def compare(p1, p2):
    cmp = p1.age - p2.age
    if cmp != 0:
        return cmp
    if p1.name < p2.name:
        return 1
    elif p1.name > p2.name:
        return -1
    return 0

sorted_persons = sorted(persons, key=cmp_to_key(compare))
</code></pre>
<p>This code correctly sorts the <code>persons</code> list first by <code>age</code> in ascending order, and then by <code>name</code> in descending order when the ages are equal. However, I feel there should be a cleaner, more Pythonic way to handle this. Any suggestions?</p>
| <python><sorting><comparator><string-comparison> | 2023-07-27 20:06:11 | 5 | 1,795 | maplemaple |
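Not part of the original question — a sketch of a common alternative that avoids `cmp_to_key` entirely: Python's sort is stable, so sorting by the secondary key first (name, descending) and then by the primary key (age, ascending) yields ascending ages with ties broken by descending name:

```python
class Person:
    def __init__(self, name, age):
        self.name = name
        self.age = age

persons = [Person("Bob", 25), Person("Alice", 25),
           Person("Charlie", 23), Person("Dave", 25)]

# secondary key first: name descending
persons.sort(key=lambda p: p.name, reverse=True)
# primary key last: age ascending; stability keeps the name order within ties
persons.sort(key=lambda p: p.age)

print([(p.name, p.age) for p in persons])
# [('Charlie', 23), ('Dave', 25), ('Bob', 25), ('Alice', 25)]
```

This two-pass pattern generalizes to any mix of ascending/descending keys, including non-numeric ones where the negation trick does not apply.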
76,782,898 | 19,130,803 | Sklearn: Return custom transformer output | <p>I am working on an ML project. During feature engineering, I created a custom datetime transformer to extract new fields like day, month, etc.</p>
<pre><code>from sklearn.base import BaseEstimator
from sklearn.base import TransformerMixin
from sklearn.compose import ColumnTransformer
import pandas as pd


class DateTimeTransformer(BaseEstimator, TransformerMixin):
    format = {
        "A": "d_m_y_h_m_s",  # new columns: 6
        "B": "d_m_y"         # new columns: 3
    }

    def __init__(self) -> None:
        super().__init__()
        self.count_ = 0
        self.format_ = DateTimeTransformer.format

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        X_temp = pd.DataFrame()
        for column in X.columns.tolist():
            X_ = pd.DataFrame()
            X[column] = pd.to_datetime(X[column], dayfirst=True)
            format = self.format_[column]
            match format:
                case "d_m_y_h_m_s":
                    X_["day"] = X[column].dt.day
                    X_["month"] = X[column].dt.month
                    X_["year"] = X[column].dt.year
                    X_["hour"] = X[column].dt.hour
                    X_["minute"] = X[column].dt.minute
                    X_["second"] = X[column].dt.second
                    self.count_ += 6
                case "d_m_y":
                    X_["d"] = X[column].dt.day
                    X_["m"] = X[column].dt.month
                    X_["y"] = X[column].dt.year
                    self.count_ += 3
                case "h_m_s":
                    X_["h"] = X[column].dt.hour
                    X_["m"] = X[column].dt.minute
                    X_["s"] = X[column].dt.second
                    self.count_ += 3
            X_temp = pd.concat([X_temp, X_], axis=1, ignore_index=True)
        return X_temp

    def __str__(self) -> str:
        return f"New columns created {self.count_}"
</code></pre>
<p><strong>Input</strong></p>
<pre><code>d = {
    "A": ["10/4/2023 4:4:4", "11/4/2023 3:3:3", "12/4/2023 2:2:2", "13/4/2023 1:1:1"],
    "B": ["15/4/2023", "16/4/2023", "17/4/2023", "18/4/2023"],
}
df = pd.DataFrame(d)

ct = ColumnTransformer(
    [("datetime", DateTimeTransformer(), ["A", "B"])],
)
ct.fit(df)
new_df = ct.transform(df)
print(new_df)

for n, t, c in ct.transformers_:
    print(t)
</code></pre>
<p>While performing the transformations, I am trying to count the number of newly created columns; for the above example:</p>
<pre><code>for `d_m_y_h_m_s` new columns 6
for `d_m_y` new columns 3
so total new columns as 9
</code></pre>
<p>But getting wrong output as double the value i.e 18</p>
<p><strong>Current output</strong></p>
<pre><code>[[ 10 4 2023 4 4 4 15 4 2023]
[ 11 4 2023 3 3 3 16 4 2023]
[ 12 4 2023 2 2 2 17 4 2023]
[ 13 4 2023 1 1 1 18 4 2023]]
New columns created 18 # Wrong count should be 9
</code></pre>
<p>What I am missing?</p>
| <python><pandas><scikit-learn> | 2023-07-27 19:35:07 | 0 | 962 | winter |
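A possible explanation (not part of the original post — a minimal sketch): `ColumnTransformer.fit` internally calls `fit_transform`, so a custom `transform` runs once during `fit(df)` and a second time during the explicit `transform(df)` call. Any counter kept on the transformer is therefore incremented twice, which would turn 9 into 18:

```python
import pandas as pd
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.compose import ColumnTransformer

class CountingTransformer(BaseEstimator, TransformerMixin):
    def __init__(self):
        self.calls_ = 0

    def fit(self, X, y=None):
        return self

    def transform(self, X, y=None):
        self.calls_ += 1  # counts every transform() invocation
        return X

df = pd.DataFrame({"A": [1, 2]})
ct = ColumnTransformer([("count", CountingTransformer(), ["A"])])
ct.fit(df)        # fit() runs fit_transform() under the hood -> transform runs once
ct.transform(df)  # transform runs again on the same fitted clone
print(ct.transformers_[0][1].calls_)
```

Counting inside `transform` is therefore fragile; deriving the count from the shape of the returned frame (e.g. `new_df.shape[1]`) avoids the state entirely.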
76,782,739 | 22,213,065 | Why is Firefox selenium webdriver not processing more than 20 tabs? | <p>I am using the following script to upload my PNG images to a secret website:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
import time
import os


def select_image_for_tab(driver, image_file):
    # Find and click on "Browse for a file" button
    browse_button = driver.find_element(By.XPATH, "//button[text()='Browse for a file']")
    browse_button.click()

    # Wait for the file input to be visible and interactable
    time.sleep(0.5)

    # Handle the file upload dialog using Selenium
    file_input = driver.find_element(By.XPATH, "//input[@type='file']")
    file_input.send_keys(image_file)

    # Wait for the file to be uploaded and the file selection dialog to close
    WebDriverWait(driver, 20).until(EC.invisibility_of_element_located((By.XPATH, "//input[@type='file']")))


if __name__ == "__main__":
    image_files_directory = r"E:\Desktop\social\Output_folder\folder 20"

    # Prompt to focus on Firefox
    print("Please focus on Firefox. The script will start in 5 seconds...")
    time.sleep(5)

    # Set up the Firefox driver
    driver = webdriver.Firefox()

    try:
        # Open the website in a new tab
        driver.get("UPLOAD_URL.com")

        # Wait for the website to load and the target element to be clickable
        WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "(//div[@class='ms-Stack css-147']//div)[1]")))

        # Click on the target element
        target_element = driver.find_element(By.XPATH, "(//div[@class='ms-Stack css-147']//div)[1]")
        target_element.click()

        # Get a list of all PNG image files in the directory
        image_files = [f for f in os.listdir(image_files_directory) if f.lower().endswith('.png')]

        # Select image files for each tab
        for image_file in image_files:
            select_image_for_tab(driver, os.path.join(image_files_directory, image_file))
            time.sleep(0.5)  # Adjust this delay if needed

            # Open a new tab and load the website in it
            driver.execute_script("window.open('about:blank');")
            driver.switch_to.window(driver.window_handles[-1])
            driver.get("UPLOAD_URL.com")

            # Wait for the website to load in the new tab and the target element to be clickable
            WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.XPATH, "(//div[@class='ms-Stack css-147']//div)[1]")))
            time.sleep(4)  # Additional wait time if needed

        print("File selection completed for all tabs.")
    except Exception as e:
        print("An error occurred:", e)
    finally:
        # The script will not close the browser or any tabs after completion
        pass
</code></pre>
<p>But every time, I have to divide my files into groups of 20.<br />
Firefox is not processing more than 20 tabs. How can I bypass this limit in remote-controlled Firefox?</p>
<p><strong>Note: Please don't ask about secret URL or UPLOAD_URL.com</strong></p>
| <python><selenium-webdriver><selenium-firefoxdriver> | 2023-07-27 19:06:42 | 1 | 781 | Pubg Mobile |
76,782,715 | 2,623,630 | How to use twilio and twiml to record an incoming call in Python | <p>Very similar to this question:
<a href="https://stackoverflow.com/questions/58551641/how-to-use-twilio-and-twiml-to-record-a-call-and-redirect-the-call-or-join-someo">How to use twilio and twiml to record a call and redirect the call or join someone else to the call</a></p>
<p>I'm trying to record an incoming call. The solution in the post above is written in JavaScript. How do you write the same solution in Python? There is no method in twilio's Python library like <code>client.calls(callSid).recordings.create()</code>. I'm trying the following:</p>
<pre><code>client = Client(os.getenv("TWILIO_ACCOUNT_SID"), os.getenv("TWILIO_AUTH_TOKEN"))
callCon = client.calls(request.form["CallSid"])
recordings = callCon.recordings(request.form["CallSid"]).create() # this line errors out
</code></pre>
<p>I was expecting twilio to start recording the call. Instead I get an error saying no <code>create()</code> method exists.</p>
| <python><twilio><twilio-twiml> | 2023-07-27 19:02:34 | 1 | 516 | Nick Garyu |
76,782,554 | 8,807,152 | ANTLR4 Python failed to install and cannot establish a connection | <p>I am trying to install the requirements of the Apache AGE Python driver. Whenever it gets to installing antlr4-python3-runtime, it fails.</p>
<p>I install the packages through:</p>
<pre class="lang-bash prettyprint-override"><code>pip3 install -r requirements.txt
</code></pre>
<p>This gets the following:</p>
<pre class="lang-bash prettyprint-override"><code>Collecting antlr4-python3-runtime==4.11.1
WARNING: Retrying (Retry(total=4, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f8ce27dfd90>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /packages/e0/64/c548678120cccf784f555972ed37cedcf4f026abeec30ab0340c3af4ea07/antlr4-python3-runtime-4.11.1.tar.gz
WARNING: Retrying (Retry(total=3, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f8ce27df2e0>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /packages/e0/64/c548678120cccf784f555972ed37cedcf4f026abeec30ab0340c3af4ea07/antlr4-python3-runtime-4.11.1.tar.gz
WARNING: Retrying (Retry(total=2, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f8ce27dee90>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /packages/e0/64/c548678120cccf784f555972ed37cedcf4f026abeec30ab0340c3af4ea07/antlr4-python3-runtime-4.11.1.tar.gz
WARNING: Retrying (Retry(total=1, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f8ce27def50>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /packages/e0/64/c548678120cccf784f555972ed37cedcf4f026abeec30ab0340c3af4ea07/antlr4-python3-runtime-4.11.1.tar.gz
WARNING: Retrying (Retry(total=0, connect=None, read=None, redirect=None, status=None)) after connection broken by 'NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f8ce27df0d0>: Failed to establish a new connection: [Errno 101] Network is unreachable')': /packages/e0/64/c548678120cccf784f555972ed37cedcf4f026abeec30ab0340c3af4ea07/antlr4-python3-runtime-4.11.1.tar.gz
ERROR: Could not install packages due to an OSError: HTTPSConnectionPool(host='files.pythonhosted.org', port=443): Max retries exceeded with url: /packages/e0/64/c548678120cccf784f555972ed37cedcf4f026abeec30ab0340c3af4ea07/antlr4-python3-runtime-4.11.1.tar.gz (Caused by NewConnectionError('<pip._vendor.urllib3.connection.HTTPSConnection object at 0x7f8ce2818190>: Failed to establish a new connection: [Errno 101] Network is unreachable'))
</code></pre>
<ul>
<li>OS: Ubunutu 22.04</li>
<li>Python version: 3.10.6</li>
</ul>
<p>To reproduce:</p>
<pre><code># update and install
sudo apt-get update
sudo apt-get install python3-dev libpq-dev
# get apache age
git clone https://github.com/apache/age
# move to the drivers
cd age/drivers/python
# create virtual environment
virtualenv venv
# active
source venv/bin/activate
# install
pip3 install -r requirements.txt
</code></pre>
<p>P.S. When I tried to download the package file manually from <a href="https://pypi.org/project/antlr4-python3-runtime/#files" rel="nofollow noreferrer">https://pypi.org/project/antlr4-python3-runtime/#files</a>,
it downloaded successfully.</p>
<p>Has anyone got a similar issue and/or a fix?</p>
| <python><python-3.x><pip><apache-age> | 2023-07-27 18:35:06 | 4 | 1,263 | rrrokhtar |
76,782,535 | 6,402,231 | How to create a submit button with multiple options that can alter a csv file and post it back | <p>I'm trying to create a submit button, one per file listed for a particular user, with an action associated with each drop-down option. I can download the CSV file, alter it, and save it as a new file, but I cannot seem to repost the file and add it to the list.</p>
<p>I am stuck on how to add the newFile.csv to the DocumentForm(request.POST, request.FILES) call.</p>
<p>models.py</p>
<pre><code>class Document(models.Model):
    user = models.ForeignKey(User, blank=True, null=True, on_delete=models.CASCADE)
    description = models.CharField(max_length=255, blank=False)
    document = models.FileField(upload_to=fileLocation)
    uploaded_at = models.DateTimeField(auto_now_add=True)
</code></pre>
<p>forms.py</p>
<pre><code>class DocumentForm(forms.ModelForm):
    class Meta:
        model = Document
        fields = ('description', 'document')
</code></pre>
<p>html</p>
<pre><code>{% for file in files %}
    <form id="file" action="{% url 'process' %}" method="post" enctype="multipart/form-data">
        {% csrf_token %}
        <select name="selectedOption">
            <option value="1">1</option>
            <option value="2">2</option>
            <option value="3">3</option>
        </select>
        <input type="submit" value="Submit" class="btn btn-sm">
    </form>
{% endfor %}
</code></pre>
<p>views.py</p>
<pre><code>def process(request):
    selected_option = request.POST.get('selectedOption')
    form = DocumentForm(request.POST, request.FILES)
    current_client = request.user
    files = Document.objects.filter(user=current_client)

    fileResponse = s3_client.get_object(Bucket=settings.AWS_STORAGE_BUCKET_NAME, Key=document)
    df = pd.read_csv(fileResponse.get("Body"), encoding='utf-8', dtype=str)
    df.to_csv('temp/temp.csv', index=False, header=True, encoding='utf-8-sig')
    newFile = 'temp/temp.csv'

    if request.method == 'POST':
        form = DocumentForm(request.POST, request.FILES)
        if form.is_valid():
            instance = form.save(commit=False)
            instance.user = request.user
            instance.save()
            return redirect('/filelist')
    else:
        form = DocumentForm()

    return render(request, 'accounts/filelist.html', {'selected_option': selected_option, 'form': form, 'files': files})
</code></pre>
<p><a href="https://i.sstatic.net/PX51M.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PX51M.png" alt="enter image description here" /></a></p>
| <python><django> | 2023-07-27 18:31:03 | 1 | 441 | leo |
76,782,430 | 4,391,249 | How to create database-like relations between Python objects | <p>Say I have a type of object <code>Foo</code> and another called <code>Bar</code>. Say <code>Foo</code>s can "have" 0 or more <code>Bar</code>s, and that all <code>Bar</code>s must have a single <code>Foo</code>. Something like this doesn't feel right because there's no straight-forward way of enforcing consistency:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Foo:
    some_field: object
    bars: Optional[list[Bar]] = None

@dataclass
class Bar:
    a_field: object
    foo: Foo
</code></pre>
<p>To be clear, by "enforcing consistency" what I mean is that if a <code>Foo</code> claims to have a list of <code>Bar</code>s then those <code>Bar</code>s should all claim to have that <code>Foo</code>.</p>
<p>I could instead use mappings. But now I need to create IDs which maybe I don't need for any other purpose, and consistency is still not straight-forward to enforce.</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Foo:
some_field: object
@dataclass
class Bar:
a_field: object
id_to_foo = {0: a_foo, 1: another_foo, ...}
id_to_bar = {0: a_bar, 1: another_bar, ...}
foo_to_bar = {0: [0, 1], 1: [1, 2, 3], ...}
</code></pre>
<p>I could make the relation explicit one way, and implicit the other way like so:</p>
<pre class="lang-py prettyprint-override"><code>@dataclass
class Foo:
some_field: object
bars: Optional[list[Bar]] = None
@dataclass
class Bar:
a_field: object
</code></pre>
<p>This would partially solve consistency (although I still am not enforcing that each <code>Bar</code> has exactly one <code>Foo</code>). But then every time I want to know which <code>Foo</code> a <code>Bar</code> belongs to, I'd need to round up all my <code>Foo</code>s and run through their lists checking for an <code>id</code> match.</p>
<p>What's the simplest way to achieve this many-to-one relation in Python, without writing a bunch of boiler-plate machinery myself?</p>
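A hedged sketch of one minimal approach: route assignment through a property so the back-reference on <code>Foo</code> is maintained automatically. ORMs such as SQLAlchemy do essentially this generically (e.g. via <code>relationship(back_populates=...)</code>); the names below are illustrative only:

```python
class Foo:
    def __init__(self, some_field):
        self.some_field = some_field
        self.bars = []          # maintained by Bar.foo's setter


class Bar:
    def __init__(self, a_field, foo):
        self.a_field = a_field
        self._foo = None
        self.foo = foo          # goes through the setter below

    @property
    def foo(self):
        return self._foo

    @foo.setter
    def foo(self, new_foo):
        # keep both sides of the relation consistent on every assignment
        if self._foo is not None:
            self._foo.bars.remove(self)
        self._foo = new_foo
        if new_foo is not None:
            new_foo.bars.append(self)


f1, f2 = Foo(1), Foo(2)
b = Bar("x", f1)
assert b in f1.bars
b.foo = f2                      # reassigning moves the back-reference
assert b not in f1.bars and b in f2.bars
```

This enforces "every Bar has exactly one Foo" and keeps <code>Foo.bars</code> consistent, at the cost of one property per relation.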
| <python><relationship> | 2023-07-27 18:13:57 | 0 | 3,347 | Alexander Soare |
76,782,328 | 5,134,817 | Python unittest C extension call to exit() | <h3>TLDR</h3>
<p>I can't get unittest to run a test where I am trying to check that my Python C extensions calls <code>exit(1)</code> from <code>stdlib.h</code>.</p>
<h3>The setup</h3>
<p>I have a Python unit test and C extension which looks like the following:</p>
<h4>The Python test</h4>
<pre class="lang-py prettyprint-override"><code>import unittest
import binding
class TestBindings(unittest.TestCase):
def test_fail(self):
self.assertRaises(SystemExit, binding.fail)
if __name__ == '__main__':
unittest.main()
</code></pre>
<h3>The C extension</h3>
<p><code>core.h</code></p>
<pre class="lang-c prettyprint-override"><code>// The main core C library declarations.
void fail(void);
</code></pre>
<p><code>core.c</code></p>
<pre class="lang-c prettyprint-override"><code>// The main core C library implementations.
void fail(void)
{
fprintf(stderr, "We are failing now.\n");
exit(1);
}
</code></pre>
<p><code>bindings.c</code></p>
<pre class="lang-c prettyprint-override"><code>// The main core C library bindings.
#include <Python.h>
#include <stdio.h>
#include <stdlib.h>
#include "core.h"
PyObject *_fail(PyObject *self, PyObject *args, PyObject *kwargs)
{
fail();
Py_RETURN_NONE;
}
static PyMethodDef binding_methods[] = {
{"fail", PyFunc(_fail), METH_VARARGS | METH_KEYWORDS,
"Fails and calls exit()."},
{NULL, NULL, 0, NULL} /* Sentinel */
};
static struct PyModuleDef binding_module = {
PyModuleDef_HEAD_INIT,
"binding",
"A simple module to demonstrate C bindings.",
-1,
binding_methods};
PyMODINIT_FUNC
PyInit_binding(void)
{
return PyModule_Create(&binding_module);
}
</code></pre>
<h2>The tests don't run</h2>
<p>The Python test always exits, and does not run the remaining tests nor report the results. I have tried to find ways to capture this exiting behaviour, but can't find anything that works.</p>
<p>My thoughts are to either:</p>
<ul>
<li>Add a more robust capture in the unit test.</li>
<li>Add/declare some form of error handler in the module containing all the python bindings.</li>
<li>Change the way the core library exits when something goes wrong and replace all my calls to <code>exit(1)</code>. <strong>I want to avoid any changes to the <code>core*</code> files.</strong></li>
</ul>
<h2>EDIT</h2>
<p>After some more digging around, there were two routes that looked promising:</p>
<ol>
<li><p>Trying to add functionality when <code>exit</code> is called (without trying to mock/stub the C standard library <code>exit</code>). This led me to functions such as <code>atexit</code> (and the even nicer GNU extension <code>on_exit</code>). However, I couldn't figure out a way to add my custom exit without calling another <code>exit</code> from within <code>exit</code>, and calling <code>exit</code> twice is undefined behaviour.</p>
</li>
<li><p>Launching the test in its own process which is allowed to die, and then checking the return code for this other process. This is the closest to the requirements I had, and seems to solve my problem with minimal code changes in the core C library, and has <a href="https://stackoverflow.com/a/73070027/5134817">a solution outlined here</a>.</p>
</li>
</ol>
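Route 2 can be exercised with only the standard library. In this sketch, <code>sys.exit(1)</code> in a child interpreter stands in for the extension's call to <code>exit(1)</code>; in the real test the <code>-c</code> snippet would instead import <code>binding</code> and call <code>binding.fail()</code>:

```python
import subprocess
import sys

# Run the exiting code in a child interpreter so the parent test
# process survives, then assert on the child's return code.
proc = subprocess.run(
    [sys.executable, "-c", "import sys; sys.exit(1)"],
    capture_output=True,
    text=True,
)
assert proc.returncode == 1
```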
| <python><c><python-unittest><exit><python-c-api> | 2023-07-27 17:56:11 | 1 | 1,987 | oliversm |
76,782,041 | 2,930,793 | Install pydantic using poetry and pip throws ERROR | <p>I install pydantic with poetry. It works fine, but when I try to use pydantic after installing, it throws:</p>
<pre><code>E ImportError: dlopen(/Users/sazzad/Library/Caches/pypoetry/virtualenvs/my-test-project-qHFD2Grb-py3.9/lib/python3.9/site-packages/pydantic/__init__.cpython-39-darwin.so, 0x0002): tried: '/Users/sazzad/Library/Caches/pypoetry/virtualenvs/my-test-project-qHFD2Grb-py3.9/lib/python3.9/site-packages/pydantic/__init__.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64')), '/System/Volumes/Preboot/Cryptexes/OS/Users/sazzad/Library/Caches/pypoetry/virtualenvs/my-test-project-qHFD2Grb-py3.9/lib/python3.9/site-packages/pydantic/__init__.cpython-39-darwin.so' (no such file), '/Users/sazzad/Library/Caches/pypoetry/virtualenvs/my-test-project-qHFD2Grb-py3.9/lib/python3.9/site-packages/pydantic/__init__.cpython-39-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64'))
</code></pre>
<p>Then I tried this
<code>pip uninstall pydantic && python3.9 -m pip install --no-binary :all: pydantic --no-cache-dir</code> from the poetry venv.</p>
<p>I get this error</p>
<pre><code> creating build/lib.macosx-10.9-universal2-cpython-39/maturin
copying maturin/__init__.py -> build/lib.macosx-10.9-universal2-cpython-39/maturin
copying maturin/import_hook.py -> build/lib.macosx-10.9-universal2-cpython-39/maturin
copying maturin/__main__.py -> build/lib.macosx-10.9-universal2-cpython-39/maturin
running egg_info
creating maturin.egg-info
writing maturin.egg-info/PKG-INFO
writing dependency_links to maturin.egg-info/dependency_links.txt
writing requirements to maturin.egg-info/requires.txt
writing top-level names to maturin.egg-info/top_level.txt
writing manifest file 'maturin.egg-info/SOURCES.txt'
reading manifest file 'maturin.egg-info/SOURCES.txt'
reading manifest template 'MANIFEST.in'
warning: no files found matching '*.json' under directory 'src/python_interpreter'
writing manifest file 'maturin.egg-info/SOURCES.txt'
running build_ext
running build_rust
error: can't find Rust compiler
If you are using an outdated pip version, it is possible a prebuilt wheel is available for this package but pip is not able to install from it. Installing from the wheel would avoid the need for a Rust compiler.
To update pip, run:
pip install --upgrade pip
and then retry package installation.
If you did intend to build this package from source, try installing a Rust compiler from your system package manager and ensure it is on the PATH during installation. Alternatively, rustup (available at https://rustup.rs) is the recommended way to download and update the Rust compiler toolchain.
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for maturin
Failed to build maturin
ERROR: Could not build wheels for maturin, which is required to install pyproject.toml-based projects
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× pip subprocess to install build dependencies did not run successfully.
│ exit code: 1
╰─> See above for output.
note: This error originates from a subprocess, and is likely not a problem with pip.
</code></pre>
<p>Note: I am using macos which arm64 not x86 as you already noticed in the log.</p>
<p>Any clue? Thanks in advance.</p>
| <python><macos><pip><pydantic><python-poetry> | 2023-07-27 17:08:32 | 1 | 903 | Sazzad |
76,781,980 | 3,121,975 | Mocking an SSLError in Python | <p>I've been dealing with the <code>ssl.SSLError: [SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] unsafe legacy renegotiation disabled</code> issue described <a href="https://stackoverflow.com/questions/71603314/ssl-error-unsafe-legacy-renegotiation-disabled">here</a>. As part of the fix, I'm attempting to write in exception-handling for the exception:</p>
<pre><code>try:
return req()
except (ssl.SSLError, RSSLError) as ssl_err:
print(ssl_err)
if "UNSAFE_LEGACY_RENEGOTIATION_DISABLED" in str(ssl_err):
ctx = ssl.create_default_context(ssl.Purpose.SERVER_AUTH)
ctx.options |= 0x4
self._sess.mount("https://", CustomHttpAdapter(ctx))
return req()
raise
</code></pre>
<p>The issue I'm having is testing it. I've tried doing this:</p>
<pre><code>err = SSLError()
err.reason = "UNSAFE_LEGACY_RENEGOTIATION_DISABLED"
</code></pre>
<p>but this prints as <code>()</code>. How do I create a mock <code>SSLError</code> that I can use to test this code?</p>
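One hedged sketch: <code>ssl.SSLError</code> subclasses <code>OSError</code>, and <code>str()</code> renders the constructor arguments, not the <code>.reason</code> attribute (which the C layer normally fills in). So putting the reason text into the arguments makes the <code>in str(ssl_err)</code> check succeed:

```python
import ssl

# str() of an OSError subclass is built from its args, so setting
# err.reason after construction does not change what gets printed.
err = ssl.SSLError(
    1,
    "[SSL: UNSAFE_LEGACY_RENEGOTIATION_DISABLED] "
    "unsafe legacy renegotiation disabled",
)
assert "UNSAFE_LEGACY_RENEGOTIATION_DISABLED" in str(err)
```

In a test you could then raise this instance from a mocked <code>req()</code> (e.g. with <code>unittest.mock.Mock(side_effect=err)</code>) to drive the retry branch.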
| <python><ssl><testing> | 2023-07-27 17:01:29 | 1 | 8,192 | Woody1193 |
76,781,968 | 6,088,984 | Method shapely.intersection returns a wrong answer | <p>I am trying to find the intersection of two edges in 2D using <code>shapely.intersection</code> and get a wrong answer.
Shapely version: 2.0.1</p>
<pre><code>from shapely.geometry import LineString
a = LineString([[30.0,0.0],[36.0,30.0]])
b = LineString([[32.8,14.0],[35.2,26.0]])
intersection = a.intersection(b)
</code></pre>
<p>It is easy to check that edge <code>b</code> lies entirely on edge <code>a</code>, so the intersection should equal <code>b</code>,
but the code returns a single point (the midpoint of edge <code>b</code>):
<code>Point(34,20)</code></p>
| <python><geometry><shapely> | 2023-07-27 16:59:15 | 1 | 681 | Grag2015 |
76,781,887 | 11,028,689 | Using unsqueeze to get a 2-D torch tensor size for labels with a class instance? | <p>I am struggling to get the correct shape for my y (labels for a multiclassification problem) which should be of torch.Size([37715, 1]).
I have tried <code>.unsqueeze(0)</code>, <code>.unsqueeze(1)</code> and <code>np.reshape</code>, yet I am still getting torch.Size([1]) as in the code below. With <code>.unsqueeze(-2)</code> I am getting torch.Size([171812]).</p>
<pre><code>import numpy as np
import torch
...
from torch.utils.data import Dataset
from torch.utils.data import DataLoader
from torchvision.transforms import ToTensor, Lambda
..
# defining the class
class EmbeddingDataset(Dataset):
def __init__(self, embedding_fp, transform=None, target_transform=None):
with open(embedding_fp, "rb") as fIn:
stored_data = pickle.load(fIn)
stored_labels = stored_data['labels']
stored_embeddings = stored_data['embeddings']
self.X = stored_embeddings
self.X = torch.tensor(self.X).float().unsqueeze(0)
self.y = stored_labels.to_numpy()
# self.y = np.reshape(self.y, (len(self.y), -1)) , alternatively
self.y= torch.tensor(self.y).float().unsqueeze(-1)
def __len__(self):
return len(self.X)
def __getitem__(self, idx):
X = self.X[idx]
y = self.y[idx]
return X, y
# creating a class instance for my training data.
train_data = EmbeddingDataset('embeddings_db_train.pkl')
X_train, y_train = train_data[0]
X_train.shape
torch.Size([171812, 384])
y_train.shape
torch.Size([1])
</code></pre>
<p>Can anyone advise how I can get shape ([37715, 1]) for my y?
Any suggestions on how to define my class in a better way would also be welcome.</p>
| <python><numpy><pytorch><torchvision> | 2023-07-27 16:47:30 | 1 | 1,299 | Bluetail |
76,781,817 | 10,133,797 | `torch.conj_physical` faster than `torch.conj` if full output is used? | <p><a href="https://pytorch.org/docs/stable/generated/torch.conj.html" rel="nofollow noreferrer">conj</a> docs</p>
<blockquote>
<p><code>torch.conj()</code> performs a lazy conjugation, but the actual conjugated tensor can be materialized at any time using <code>torch.resolve_conj()</code></p>
</blockquote>
<p><a href="https://pytorch.org/docs/stable/generated/torch.conj_physical.html" rel="nofollow noreferrer">conj_physical</a> docs</p>
<blockquote>
<p>This performs the conjugate operation regardless of the fact conjugate bit is set or not.</p>
</blockquote>
<p>So I figure, if we're guaranteed to access/modify the entire output, then latter can't be any slower than former, and should sometimes be faster. In benchmarking, I define</p>
<pre class="lang-py prettyprint-override"><code>def fn0(x):
o = torch.conj(x)
o += 1j
def fn1(x):
o = torch.conj_physical(x)
    o += 1j
</code></pre>
<p>and here <code>fn1</code> is definitively faster, 5-50% across CPU & GPU. But replacing the second line with <code>torch.mean(o)</code> or other common operations, the difference is insignificant (<code>fn1</code> is still a little ahead). Replacing with <code>o *= x</code> still reproduces the speedup, so it seems to concern in-place operations.</p>
<p>Is <code>torch.conj</code> ever faster than <code>conj_physical</code> if we're accessing/modifying the entire output? And optionally, why the difference with in-place vs not? <em>(torch 2.0.1, Python 3.11.4, Windows 11)</em></p>
<h3>Full bench script</h3>
<pre class="lang-py prettyprint-override"><code># -*- coding: utf-8 -*-
import torch
import torch.utils.benchmark as benchmark
# Define funcs ###############################################################
def fn0(x):
o = torch.conj(x)
o += 1j
def fn1(x):
o = torch.conj_physical(x)
o += 1j
# Make bench funcs ###########################################################
names = ('fn0', 'fn1')
n_iters = 2000
n_iters_gpu = n_iters * 100
got_gpu = bool(torch.cuda.is_available())
#%% Benchmark ################################################################
times = {}
for N in (10000, 100000, 1000000):
x = torch.randn(N, dtype=torch.complex64)
x_gpu = x.cuda()
times[N] = {}
for name in names:
common = dict(stmt=f'{name}(x)', setup=f'from __main__ import {name}')
# cpu ----------------------------------------------------------------
key = name
bench_fn = benchmark.Timer(**common, globals={'x': x})
# warmup
_ = bench_fn.timeit(3)
# bench
times[N][key] = bench_fn.timeit(n_iters).mean
# gpu ----------------------------------------------------------------
if got_gpu:
key = name + '-gpu'
# warmup
bench_fn = benchmark.Timer(**common, globals={'x': x_gpu})
# warmup
_ = bench_fn.timeit(300)
# bench
times[N][key] = bench_fn.timeit(n_iters_gpu).mean
# "progress bar"
print(end='.', flush=True)
#%% Print results ############################################################
print()
for N in times:
print(f"N={N}")
for name in names:
print(name + '-cpu', "%.3g" % times[N][name])
if got_gpu:
for name in names:
print(name + '-gpu', "%.3g" % times[N][name + '-gpu'])
print()
</code></pre>
| <python><performance><pytorch><lazy-evaluation> | 2023-07-27 16:37:57 | 0 | 19,954 | OverLordGoldDragon |
76,781,760 | 4,354,822 | Can a python requests.Session break? If yes, what should I do? | <p>I have Python workers. They use code similar to the one below:</p>
<pre><code>import requests
class Requester:
def __init__(self):
self.session = requests.Session()
def get(self):
return self.session.get("https://...", params={"a": "b"})
</code></pre>
<p>My goal is to save ~50ms latency with the help of a session to perform multiple independant calls.</p>
<p>I have no authentication nor cookies involved.</p>
<p>The question is: should I fear the session to "break" and my subsequent <code>get</code> calls to fail? Will the requests library try to open a new one under the hood if such a situation happens?</p>
<p>If the session might break, how should I handle such cases? With a try/catch block and try to replace the old one with a new one?</p>
| <python><http><python-requests><httpsession> | 2023-07-27 16:30:27 | 0 | 653 | Jbb |
76,781,565 | 5,463,883 | Error in running Pyomo with CBC solver on Docker | <p>I am trying to run a python application using <code>pyomo</code> with <code>cbc</code> solver.</p>
<p>When I run it in a docker container it is giving the following error:</p>
<blockquote>
<p>AttributeError: 'numpy.float64' object has no attribute 'polynomial_degree'</p>
</blockquote>
<p>I am inclined to believe that it is linked with Python not being able to find the <code>cbc</code> executable.</p>
<p>Here is my <code>Dockerfile</code> file:</p>
<pre><code>FROM python:3.9-slim
WORKDIR /app
COPY . .
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get install -y \
build-essential \
gcc \
libglpk-dev \
glpk-utils \
curl \
git \
software-properties-common \
&& rm -rf /var/lib/apt/lists/*
RUN python3 -m venv venv \
&& . venv/bin/activate \
&& python3 -m pip install --upgrade pip \
&& pip3 install -r requirements.txt
RUN /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" \
&& (echo; echo 'eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)"') >> /root/.profile \
&& eval "$(/home/linuxbrew/.linuxbrew/bin/brew shellenv)" \
&& brew install glpk \
&& brew install ipopt \
&& brew install cbc
ENV PATH="/opt/homebrew/opt/cbc/bin:$PATH"
ENV LDFLAGS="-L/opt/homebrew/opt/cbc/lib"
ENV CPPFLAGS="-I/opt/homebrew/opt/cbc/include"
ENV PKG_CONFIG_PATH="/opt/homebrew/opt/cbc/lib/pkgconfig"
ENV PATH="${PATH}:/app/venv/bin:/home/linuxbrew/.linuxbrew/bin:/usr/local/opt/cbc/bin"
EXPOSE 8501
HEALTHCHECK CMD curl --fail http://localhost:8501/_stcore/health
ENTRYPOINT ["streamlit", "run", "app.py", "--server.port=8501", "--server.address=0.0.0.0"]
</code></pre>
<p>And here is the <code>requirements.txt</code> file:</p>
<pre><code>numpy==1.24.3
pandas==2.0.2
pyomo==6.6.1
streamlit==1.23.1
streamlit-aggrid==0.3.4.post3
</code></pre>
<p>Please let me know how to resolve this.</p>
<p>I have tried executing <code>which cbc</code> from the Python code and it finds the <code>cbc</code> executable.</p>
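An untested sketch of an alternative: Debian's repositories ship CBC directly, so the Homebrew layer (and its PATH juggling) may be droppable entirely. The package name <code>coinor-cbc</code> is an assumption here; it installs a <code>cbc</code> executable on the default PATH:

```dockerfile
# Hypothetical replacement for the Homebrew RUN block:
# install CBC straight from apt instead of building Homebrew in the image.
RUN apt-get update && apt-get install -y --no-install-recommends \
        coinor-cbc \
    && rm -rf /var/lib/apt/lists/*
```

With this, the four <code>ENV</code> lines pointing at <code>/opt/homebrew/opt/cbc</code> should no longer be needed.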
| <python><docker><dockerfile><pyomo><coin-or-cbc> | 2023-07-27 16:03:08 | 1 | 1,463 | Ahmed Akhtar |
76,781,491 | 2,868,299 | Polars: Count number of rows where two or more columns are null | <p>I'd like to find the percentage of rows where two specific columns are both null or both populated.</p>
<p>I'd like to accomplish something similar to this SQL:</p>
<pre><code>SELECT
Field_A
,COUNT(*) as Row_Count
,SUM(CASE WHEN FIELD_B IS NULL AND FIELD_C IS NULL THEN 1 ELSE 0 END) as Both_Null_Count
</code></pre>
<p>I tried this:</p>
<pre><code>df_all_cols = df_all_cols.group_by(lineage_fields)\
.agg(
pl.len().alias('Row_Count'),
(pl.col('TRACKING').is_null() & pl.col('ORDER NUMBER').is_null()).count().alias('Null_Order_And_Tracking'),
(pl.col('TRACKING').is_not_null() & pl.col('ORDER NUMBER').is_not_null()).count().alias('NotNull_Order_And_Tracking')
)
</code></pre>
<p>However, all three columns return the same output for each row.</p>
<p>How can I do this with Polars?</p>
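A likely culprit is <code>.count()</code>: it counts the rows the expression produces regardless of whether the boolean is True, so all three aggregates agree. Summing the booleans gives the <code>CASE WHEN ... THEN 1 ELSE 0</code> total instead; in Polars that would be <code>.sum()</code> on the same expression (assumed fix, e.g. <code>(pl.col('TRACKING').is_null() & pl.col('ORDER NUMBER').is_null()).sum()</code>). The underlying idea in plain Python:

```python
# Toy rows standing in for the DataFrame columns.
rows = [
    {"tracking": None, "order": None},   # both null
    {"tracking": "t1", "order": "o1"},   # both populated
    {"tracking": None, "order": "o2"},   # mixed
]

# Counting booleans tallies every row; summing them tallies only the True ones.
both_null = sum(r["tracking"] is None and r["order"] is None for r in rows)
both_set = sum(r["tracking"] is not None and r["order"] is not None for r in rows)
assert both_null == 1
assert both_set == 1
```

Dividing such a sum by <code>pl.len()</code> would then give the percentage per group.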
| <python><dataframe><python-polars> | 2023-07-27 15:51:57 | 1 | 494 | DixieFlatline |
76,781,485 | 11,065,874 | fastapi: how to cache the value of fastapi dependency in a class-based (callable) dependency? | <p>I have a small fastapi example here</p>
<pre><code># run.py
from functools import cache
import uvicorn
from fastapi import FastAPI, Depends
app = FastAPI()
class Dep:
def __init__(self, inp):
self.inp = inp
@cache
def __call__(self):
print(self.inp * 100)
return self.inp
@app.get("/a")
def a(v: str = Depends(Dep("a"))):
return v
@app.get("/a/a")
def aa(v: str = Depends(Dep("a"))):
return v
@app.get("/b")
def b(v: str = Depends(Dep("b"))):
return v
@app.get("/b/b")
def bb(v: str = Depends(Dep("b"))):
return v
def main():
uvicorn.run(
"run:app",
host="0.0.0.0",
reload=True,
port=8000,
workers=1
)
if __name__ == "__main__":
main()
</code></pre>
<p>I run <code>python run.py</code> and the application spins up.</p>
<p><strong>What I expect is that:</strong></p>
<p>the first time I hit the <code>/a</code> or <code>/a/a</code> endpoints, it shows logs and prints 100 "a"s for me; the next times, no logging happens because of the <code>@cache</code> decorator,</p>
<p>and</p>
<p>the first time I hit the <code>/b</code> or <code>/b/b</code> endpoints, it shows logs and prints 100 "b"s for me; the next times, no logging happens because of the <code>@cache</code> decorator.</p>
<p><strong>What happens</strong></p>
<p>The first time I hit <code>/b</code>, it shows logs; the next times, no log.
The first time I hit <code>/b/b</code>, it shows logs; the next times, no log.
The first time I hit <code>/a</code>, it shows logs; the next times, no log.
The first time I hit <code>/a/a</code>, it shows logs; the next times, no log.</p>
<hr />
<p>The reason is that each time I pass a value to the Dep class, a new object is created, so the <code>__call__</code> method belongs to a different object each time. That is why the caching is not working properly.</p>
<p>and the <a href="https://fastapi.tiangolo.com/tutorial/dependencies/sub-dependencies/#using-the-same-dependency-multiple-times" rel="nofollow noreferrer">caching is being done by fastapi dependency</a> . That is why when I call an endpoint for the next times, I do not see new logs</p>
<p><strong>But the question is how do I</strong></p>
<ul>
<li>pass arguments to the callable dependency (a class)</li>
<li>and also cache the value the function returns</li>
</ul>
<hr />
<p>Extra information</p>
<p>I tried to achieve the same using the approach below:</p>
<pre><code># run.py
from functools import cache
import uvicorn
from fastapi import FastAPI, Depends, Request
app = FastAPI()
@cache
def dep(inp):
@cache
def sub_dep():
print(inp*100)
return inp
return sub_dep
@app.get("/a")
def a(v: str = Depends(dep("a"))):
return v
@app.get("/a/a")
def aa(v: str = Depends(dep("a"))):
return v
@app.get("/b")
def b(v: str = Depends(dep("b"))):
return v
@app.get("/b/b")
def bb(v: str = Depends(dep("b"))):
return v
def main():
uvicorn.run(
"run:app",
host="0.0.0.0",
reload=True,
port=8000,
workers=1
)
if __name__ == "__main__":
main()
</code></pre>
<p>and it is working as expected.</p>
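For the class-based version, one sketch (independent of FastAPI's own per-request dependency cache) is to make instances with equal <code>inp</code> hash and compare equal, so <code>functools.cache</code> on <code>__call__</code> treats <code>Dep("a")</code> created in different routes as the same cache key:

```python
from functools import cache

CALLS = []  # records actual (non-cached) invocations


class Dep:
    def __init__(self, inp):
        self.inp = inp

    # Instances with the same inp compare equal and hash equal, so the
    # cache keyed on (self,) is shared across separately-created instances.
    def __eq__(self, other):
        return isinstance(other, Dep) and other.inp == self.inp

    def __hash__(self):
        return hash(self.inp)

    @cache
    def __call__(self):
        CALLS.append(self.inp)
        return self.inp


assert Dep("a")() == "a"
assert Dep("a")() == "a"   # different instance, same key: cache hit
assert Dep("b")() == "b"
assert CALLS == ["a", "b"]
```

One caveat: <code>functools.cache</code> on a method keeps a reference to every instance used as a key, which is fine for a handful of long-lived dependencies but worth knowing about.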
| <python><fastapi> | 2023-07-27 15:51:10 | 1 | 2,555 | Amin Ba |
76,781,298 | 14,222,845 | How would I style a column's elements based on another column's elements in a pandas data frame? | <p>Say, for example, I have a pandas data frame like so:</p>
<pre><code>import pandas as pd
# Sample DataFrame
data = {
'Greek letters': ["Alpha", "Beta", "Gamma", "Omega", "Delta"],
'English letters': ["A", "B", "C", "D", "E"],
'Greek Letter score': [5, 10, 15, 20, 25],
'English Letter score': [3, 11, 12, 18, 25]
}
df = pd.DataFrame(data)
</code></pre>
<p>What I want to do is apply specific background colors only to the elements in the columns <code>Greek letters</code> and <code>English letters</code> based on their respective scores (so, based on the elements in the <code>Greek Letter score</code> and <code>English Letter score</code> columns respectively).</p>
<pre><code>def highlight_letter(value):
# How would I use the letter element to obtain its corresponding score?
# score = some technique to obtain the letter's score
if score <= 10:
return 'background-color: lightgreen'
elif score <= 20:
return 'background-color: yellow'
else:
return 'background-color: blue'
styled_df = df.style.applymap(highlight_letter, subset=['Greek letters', 'English letters'])
</code></pre>
<p>This is what the expected output should look like:</p>
<p><a href="https://i.sstatic.net/yJJWD.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/yJJWD.png" alt="enter image description here" /></a></p>
| <python><pandas><background-color> | 2023-07-27 15:27:43 | 1 | 330 | Diamoniner12345 |
76,781,222 | 3,685,918 | how to calculate value using previous other columns' value in python | <p>I'd like to calculate a rate of return from the DataFrame below,
using the previous row of other columns, grouped by <code>id</code>.
To be specific:</p>
<p>From</p>
<pre><code>>>> df = pd.DataFrame({'id': ['Blue', 'Blue','Blue','Red','Red'],
'a':[100,200,300,1,2],
'b':[10,20,15,3,2],
'c':[1,2,3,4,5]})
>>> df
id a b c
0 Blue 100 10 1
1 Blue 200 20 2
2 Blue 300 15 3
3 Red 1 3 4
4 Red 2 2 5
</code></pre>
<p>I want to make following.</p>
<p><code>df['new_col'] = a / (previous a + previous b - previous c)</code></p>
<p>I think <code>pct_change()</code> doesn't help since it only works within the same column.</p>
<pre><code>
>>> df
id a b c new_col
0 Blue 100 10 1 -
1 Blue 200 20 2 = 200 / (100 + 10 - 1)
2 Blue 300 15 3 = 300 / (200 + 20 - 2)
3 Red 1 3 4 -
4 Red 2 2 5 = 2 / (1 + 3 - 4)
</code></pre>
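Assuming the intended formula (per the expected output) is <code>a</code> divided by the previous row's <code>a + b - c</code> within each <code>id</code>, <code>groupby(...).shift(1)</code> gives those previous-row values, aligned to the current rows:

```python
import pandas as pd

df = pd.DataFrame({'id': ['Blue', 'Blue', 'Blue', 'Red', 'Red'],
                   'a': [100, 200, 300, 1, 2],
                   'b': [10, 20, 15, 3, 2],
                   'c': [1, 2, 3, 4, 5]})

# shift(1) within each id group yields each row's previous a, b, c
prev = df.groupby('id')[['a', 'b', 'c']].shift(1)
df['new_col'] = df['a'] / (prev['a'] + prev['b'] - prev['c'])

# the first row of each group has no previous row, so it stays NaN
assert pd.isna(df.loc[0, 'new_col']) and pd.isna(df.loc[3, 'new_col'])
assert abs(df.loc[1, 'new_col'] - 200 / 109) < 1e-12
```

Note the last Red row divides by (1 + 3 - 4) = 0, which pandas renders as <code>inf</code> rather than raising.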
| <python><row> | 2023-07-27 15:18:26 | 1 | 427 | user3685918 |
76,781,109 | 11,693,768 | Converting aiohttp script to asyncio + requests (aiohttp not working on ubuntu while asyncio + requests works) | <p>I am using the following script to do queries on a website.</p>
<p>It works on macOS, but it does not work on Ubuntu. I have tried plain requests and it works; I also tried a simple asyncio + requests script against the website and it works.</p>
<p>I need help converting it to using requests instead of aiohttp. Any help would be appreciated.</p>
<p>This is my original code.</p>
<pre><code>import aiohttp
import asyncio
async def get_data(session, x):
while True:
try:
async with session.get(url=f'https://api.abc.com/{x}') as response:
if response.status == 200:
data = await response.json()
try:
data = float(data)
return data
except ValueError:
print("Data is not a valid float. Retrying...")
else:
print("Received non-200 status code. Retrying...")
await asyncio.sleep(1) # Wait for 1 second before retrying
except Exception as e:
print("Unable to get url {} due to {}. Retrying...".format(x, e.__class__))
await asyncio.sleep(1) # Wait for 1 second before retrying
async def main(datas):
async with aiohttp.ClientSession() as session:
ret = await asyncio.gather(*[get_data(session, data) for data in datas])
return {datas[i]: ret[i] for i in range(len(datas))} # Return the results as a dictionary
datas = ['x1', 'x2', 'x3', 'x4']
results = asyncio.run(main(datas))
</code></pre>
<p>Here is my code</p>
<pre><code>
import asyncio
import requests
async def get_data(x):
while True:
try:
response = requests.get(url=f'https://api.abc.com/{x}')
if response.status_code == 200:
try:
data = float(response.json())
return data
except ValueError:
print("Data is not a valid float. Retrying...")
else:
print("Received non-200 status code. Retrying...")
await asyncio.sleep(1) # Wait for 1 second before retrying
except Exception as e:
print("Unable to get url {} due to {}. Retrying...".format(x, e.__class__))
await asyncio.sleep(1) # Wait for 1 second before retrying
async def main(datas):
tasks = [get_data(data) for data in datas]
ret = await asyncio.gather(*tasks)
return {datas[i]: ret[i] for i in range(len(datas))} # Return the results as a dictionary
datas = ['x1', 'x2', 'x3', 'x4']
results = asyncio.run(main(datas))
print(results)
</code></pre>
<p>and the error I am getting so far,</p>
<pre><code>Unable to get data for x1 due to <class 'TypeError'>. Retrying...
Unable to get data for x2 due to <class 'TypeError'>. Retrying...
Unable to get data for x3 due to <class 'TypeError'>. Retrying...
Unable to get data for x4 due to <class 'TypeError'>. Retrying...
Unable to get data for x1 due to <class 'TypeError'>. Retrying...
...
</code></pre>
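Since <code>requests</code> is blocking, wrapping it in an <code>async def</code> does not make the calls concurrent (every <code>requests.get</code> stalls the event loop). The usual pattern is to push each blocking call into a thread with <code>run_in_executor</code>. A stdlib-only sketch of that pattern, with a stand-in for <code>requests.get</code>:

```python
import asyncio
import concurrent.futures
import functools


# Stand-in for the blocking requests.get call; any blocking callable
# works the same way.
def blocking_fetch(x):
    return f"data-{x}"


async def get_data(loop, pool, x):
    # run the blocking call in a worker thread so the event loop
    # stays free to drive the other tasks concurrently
    return await loop.run_in_executor(pool, functools.partial(blocking_fetch, x))


async def main(datas):
    loop = asyncio.get_running_loop()
    with concurrent.futures.ThreadPoolExecutor() as pool:
        ret = await asyncio.gather(*[get_data(loop, pool, d) for d in datas])
    return dict(zip(datas, ret))


results = asyncio.run(main(['x1', 'x2']))
assert results == {'x1': 'data-x1', 'x2': 'data-x2'}
```

The retry/validation logic from the original loop would go inside <code>blocking_fetch</code> (or around the <code>await</code>), unchanged.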
| <python><python-requests><python-asyncio><aiohttp><code-conversion> | 2023-07-27 15:07:14 | 2 | 5,234 | anarchy |
76,781,052 | 8,075,540 | Subclassing typing.Any | <p>The <a href="https://docs.python.org/3/library/typing.html?highlight=typing%20textio#typing.Any" rel="nofollow noreferrer">documentation for <code>typing.Any</code></a> says</p>
<blockquote>
<p>Changed in version 3.11: Any can now be used as a base class. This can be useful for avoiding type checker errors with classes that can duck type anywhere or are highly dynamic.</p>
</blockquote>
<p>What kind of errors is this trying to avoid? At first, I thought the situation in mind was</p>
<pre class="lang-py prettyprint-override"><code>class Foo:
def __getattr__(self, attr: str) -> typing.Any:
if attr == 'bar':
return 5
raise AttributeError(attr)
def print_bar(foo: Foo) -> None:
print(foo.bar)
</code></pre>
<p>However, mypy 0.971 with Python 3.10.6 doesn't produce any errors.</p>
| <python><python-typing><python-3.11> | 2023-07-27 15:01:01 | 0 | 6,906 | Daniel Walker |
76,780,886 | 1,763,955 | Is there already something in python or numpy to determine a number's format? | <p>I need to determine if a string is a plain int, a plain float, a float using <code>e</code>, or not parsable as a number. Here's what I came up with, but this feels like something that probably already exists, perhaps in numpy? I did a brief scan of the libraries and google and didn't see anything, is this already a thing and I'm just not seeing it?</p>
<pre><code>PLAIN_INT, PLAIN_FLOAT, E_FLOAT, STRING = range(4)
# should be just optionally - then numbers
sample_plain_ints = ['1', '0', '-5', '333333333']
# need to contain a dot
plain_floats = ['1.0', '-5.0', '-33.212', '0.0', '-1.', '-3.']
# do not need to contain a dot
e_floats = ['1.3e5', '-1.2e5', '0.0e0', '5e-3', '3e23', '3E5', '-3E-12']
# other
strings = ['aether', '1ee3', 'buzz', 'eeep', '121212beep']
def determine_str_type(item):
try:
float(item)
try:
int(item)
return PLAIN_INT
except ValueError:
return E_FLOAT if 'E' in item.upper() else PLAIN_FLOAT
except ValueError:
return STRING
assert all([determine_str_type(item) == PLAIN_INT for item in sample_plain_ints])
assert all([determine_str_type(item) == PLAIN_FLOAT for item in plain_floats])
assert all([determine_str_type(item) == E_FLOAT for item in e_floats])
assert all([determine_str_type(item) == STRING for item in strings])
</code></pre>
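I'm not aware of a ready-made classifier for this in numpy. If avoiding exception-driven control flow is the goal, a regex-based sketch works too; note it deliberately skips <code>float()</code> corner cases such as <code>inf</code>, <code>nan</code>, and digit underscores, so it is not a drop-in replacement:

```python
import re

PLAIN_INT, PLAIN_FLOAT, E_FLOAT, STRING = range(4)

INT_RE = re.compile(r'[+-]?\d+\Z')
FLOAT_RE = re.compile(r'[+-]?(\d+\.\d*|\.\d+)\Z')        # must contain a dot
E_RE = re.compile(r'[+-]?(\d+\.?\d*|\.\d+)[eE][+-]?\d+\Z')  # dot optional


def classify(s):
    if INT_RE.match(s):
        return PLAIN_INT
    if FLOAT_RE.match(s):
        return PLAIN_FLOAT
    if E_RE.match(s):
        return E_FLOAT
    return STRING


assert classify('-5') == PLAIN_INT
assert classify('-3.') == PLAIN_FLOAT
assert classify('5e-3') == E_FLOAT
assert classify('1ee3') == STRING
```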
| <python><numpy> | 2023-07-27 14:42:30 | 1 | 3,993 | Carbon |
76,780,845 | 10,007,302 | Writing to Excel data validation from Python in a .xslm file | <p>I have an Excel file that has to be in a macro-enabled workbook (.xlsm) that has a named range called <code>names_to_match</code>. The output of a different function will take those company names, match them against a database and return the top 5 matches along with each one's match score, match name, and match index.</p>
<p>The goal would be go name by name in the .xlsm file, and if the highest match score is 100 (Assume the name is column A). column B would then be the highest match name and column C would be the match score of 100.</p>
<p>The tricky part is if the match score is less than 100, column b should be a data validation list of all the match names plus one entry called "create new entry".</p>
<p>The problem I'm having is after struggling to get this to work using <code>openpyxl</code> for days, I was finally able to use <code>xlsxwriter</code> to get it to work in my sample file. However, as soon as I tried to use it on my actual file, it didn't work b/c xlsxwriter doesn't support .xlsm extensions.</p>
<p>Here is the code I've attempted in <code>openpyxl</code>. Is it possible to do this in <code>openpyxl</code>? If not, is there another workaround for the .xlsm file extension?</p>
<pre><code>from sqlupdate import data_frame_from_xlsx_range
import pandas as pd
from openpyxl import Workbook


def read_data():
    # Read in the data from the two Excel files
    df_names = data_frame_from_xlsx_range('excel_write_test.xlsm', 'names_to_match')
    df_db_pull = pd.read_excel('test_with_db_pull.xlsx')
    return df_names, df_db_pull


def process_row(row, df_db_pull):
    # Find the corresponding name in the test_with_db_pull dataframe
    match_row = df_db_pull.loc[df_db_pull['original_name'] == row['Tracker_Name'], :]
    # Check if the score is 100
    if match_row['score_0'].values[0] == 100:
        # Set the value in column B to the match name
        row['Possible Matches'] = [match_row['match_name_0'].values[0]]
        # Set the value in column C to the score
        row['Match Score'] = match_row['score_0'].values[0]
    else:
        # Get the unique values in the match name columns
        match_names = set(
            match_row[['match_name_0', 'match_name_1', 'match_name_2',
                       'match_name_3', 'match_name_4']].values.ravel())
        # Remove any NaN values
        match_names.discard(float('nan'))
        match_names = {x for x in match_names if pd.notnull(x)}
        match_names = list(match_names) + ['Create New Database Entry']
        # Set the value in column B to a list of the match names
        row['Possible Matches'] = match_names
        # Set the value in column C to the highest score
        row['Match Score'] = match_row[['score_0', 'score_1', 'score_2', 'score_3', 'score_4']].max().values[0]
    return row


def create_dropdowns(df):
    # Get the workbook and the worksheet
    workbook = Workbook()
    worksheet = workbook.active
    # Write the dataframe to the worksheet
    df.to_excel(worksheet, index=False)
    # Loop over the dataframe's 'Possible Matches' column and create a data validation for each row
    for idx, item in enumerate(df['Possible Matches'], start=2):
        # Prepare a list of options
        options = item
        # Create a data validation object with the list of options
        dv = {'validate': 'list',
              'source': options}
        # Add data validation to the corresponding cell in the worksheet
        worksheet.cell(row=idx, column=1).data_validation = dv
    # Save the workbook
    workbook.save_as('excel_write_test.xlsm')


def main():
    df_names, df_db_pull = read_data()
    # Process each row
    df_names['Possible Matches'] = None
    df_names['Match Score'] = None
    df_names = df_names.apply(process_row, df_db_pull=df_db_pull, axis=1)
    # Create the dropdowns in the Excel file
    create_dropdowns(df_names)


if __name__ == '__main__':
    main()
</code></pre>
| <python><pandas><excel><dataframe><openpyxl> | 2023-07-27 14:38:23 | 0 | 1,281 | novawaly |
76,780,803 | 14,114,654 | Clean duplicates based on multiple conditions | <p>I have a df of fruit purchases sorted by date. I want to drop duplicates by fruit, but the way to drop duplicates depends on the column. The solution needs to generalise to more columns, though the three types of operations remain the same:</p>
<p>For each fruit:</p>
<ol>
<li>price column should be the highest sold price</li>
<li>date, place and colour columns should be the most recent value that isn't NaN</li>
<li>qty should be the average number sold</li>
</ol>
<pre><code>df
   fruit        date  price    place  colour  qty
0  Apple  25-12-2023      4      NaN   Green    5
1  Apple  22-11-2023      5   London     Red    6
2  Apple  20-10-2023      6    Paris     NaN    8
3  Pear   19-10-2023      4   Sweden     Red    8
4  Pear   18-10-2023      5   London   Green    8
5  Pear   17-10-2023     10    Paris  Purple    9
</code></pre>
<p>Expected Output:</p>
<pre><code>  fruit        date  price   place  colour   qty
  Apple  25-12-2023      6  London   Green  6.33   ((5+6+8)/3)
  Pear   19-10-2023     10  Sweden     Red  8.33   ((8+8+9)/3)
</code></pre>
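One way to express all three rules in pandas is a single `groupby(...).agg(...)` call: because the frame is already sorted newest-first, "most recent non-NaN" is simply the first non-null value in each group. A sketch over the sample data above (one possible approach, not the only one):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "fruit":  ["Apple", "Apple", "Apple", "Pear", "Pear", "Pear"],
    "date":   ["25-12-2023", "22-11-2023", "20-10-2023",
               "19-10-2023", "18-10-2023", "17-10-2023"],
    "price":  [4, 5, 6, 4, 5, 10],
    "place":  [np.nan, "London", "Paris", "Sweden", "London", "Paris"],
    "colour": ["Green", "Red", np.nan, "Red", "Green", "Purple"],
    "qty":    [5, 6, 8, 8, 8, 9],
})

def first_valid(s):
    # Rows are sorted newest-first, so the first non-null value
    # is the most recent one that isn't NaN.
    non_null = s.dropna()
    return non_null.iloc[0] if len(non_null) else np.nan

out = df.groupby("fruit", as_index=False).agg(
    date=("date", first_valid),
    price=("price", "max"),      # highest sold price
    place=("place", first_valid),
    colour=("colour", first_valid),
    qty=("qty", "mean"),         # average number sold
)
```

Named aggregation keeps one rule per output column, so adding another column later only means adding one more keyword argument.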
| <python><pandas><drop-duplicates> | 2023-07-27 14:34:28 | 0 | 1,309 | asd |
76,780,759 | 2,868,299 | Dynamic Aggregation in Polars | <p>I'd like to check the percentage of nulls for a set of columns in my dataframe, grouped by a different set of columns.</p>
<p>The columns in the dataframe may change, so I'd like to be able to pass in a list of columns to group by and a list of columns to count nulls for.</p>
<p>Grouping easily takes a list of column names, but I can't find a way to do this for the counts. I can only do it by writing a line for each column I want to count nulls for:</p>
<pre><code>df = df.group_by(list_of_grouping_fields)\
       .agg(
           pl.len().alias('Row_Count'),
           pl.col("INVOICE NUMBER").null_count().alias('INVOICENUMBER_nullcount'),
           pl.col("ORDER NUMBER").null_count().alias('ORDERNUMBER_nullcount')
       )
</code></pre>
<p>This seems inefficient and would require me to come back and edit the code if the columns changed.</p>
<p>Ideally I'd like to do something like this:</p>
<pre><code>df = df.group_by(list_of_grouping_fields)\
       .agg(
           pl.len().alias('Row_Count'),
           pl.col([list_of_agg_fields]).null_count().alias([list_of_aliases])
       )
</code></pre>
<p>Is there a way to do this with Polars?</p>
| <python><dataframe><python-polars> | 2023-07-27 14:30:16 | 1 | 494 | DixieFlatline |
76,780,741 | 2,241,241 | How to read zipfile from stdin | <p>I'm trying to solve reading a zipfile from <code>stdin</code> in python, but I keep getting issues. What I want is to be able to run <code>cat test.xlsx | python3 test.py</code> and create a valid <code>zipfile.ZipFile</code> object without first writing a temporary file if possible.</p>
<p>My initial approach was this, but <code>ZipFile</code> complained the file is not seekable,</p>
<pre><code>import sys
import zipfile

zipfile.ZipFile(sys.stdin)
</code></pre>
<p>so I changed it around, but now it complains that this is not a valid zip file:</p>
<pre><code>import io
import sys
import zipfile
zipfile.ZipFile(io.StringIO(sys.stdin.read()))
</code></pre>
<p>Can this be solved without writing the zip to a temporary file?</p>
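It can: a zip archive is binary, so the bytes should come from `sys.stdin.buffer` (the text-mode `sys.stdin.read()` returns `str`, which is why `StringIO` was rejected), and wrapping them in `io.BytesIO` gives `ZipFile` the seekable object it needs. A sketch, using an in-memory archive as a stand-in for piped stdin:

```python
import io
import zipfile

def zip_from_stream(stream):
    # Buffer the whole (non-seekable) stream into a seekable BytesIO.
    # For the real pipeline: zip_from_stream(sys.stdin.buffer)
    return zipfile.ZipFile(io.BytesIO(stream.read()))

# Build a small archive in memory to stand in for stdin.
raw = io.BytesIO()
with zipfile.ZipFile(raw, "w") as zf:
    zf.writestr("hello.txt", "hi")
raw.seek(0)

archive = zip_from_stream(raw)
print(archive.namelist())  # → ['hello.txt']
```

The trade-off is that the entire archive is held in memory; for very large files a temporary file would still be the safer route.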
| <python><zip><stdin> | 2023-07-27 14:28:28 | 1 | 2,183 | fbence |
76,780,688 | 11,793,491 | Running a Flask App in docker doesn't show a page in browser | <p>I created the following Flask app:</p>
<p>app.py</p>
<pre><code>from flask import Flask, render_template

app = Flask(__name__)

@app.route('/')
def inicio():
    return render_template('index.html')

if __name__ == '__main__':
    app.run(debug=True)
</code></pre>
<p>index.html</p>
<pre><code><html>
<head>
</head>
<body>
<p>This is a beautiful world indeed</p>
</body>
</html>
</code></pre>
<p>Dockerfile</p>
<pre><code>FROM python:3.9-alpine
COPY . app
COPY ./requirements.txt /app/requirements.txt
WORKDIR app
EXPOSE 5000:5000
RUN pip install -r requirements.txt
CMD [ "python", "app.py" ]
</code></pre>
<p>Then I created the image and run it:</p>
<pre><code>docker build -t myimage .
docker run -t -i myimage
</code></pre>
<p>When the container starts I see <code>Running on http://127.0.0.1:5000</code>, but when I open that link in a browser, nothing displays. Is there anything I am doing wrong?</p>
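For context (assuming the image itself builds and starts fine): Flask's default bind address is 127.0.0.1, which inside a container is loopback for the container only, and `EXPOSE` merely documents the port — it does not publish it to the host. The usual fixes are:

```
# In app.py, bind to all interfaces:
#     app.run(host='0.0.0.0', port=5000, debug=True)

# When starting the container, publish the port with -p:
docker run -p 5000:5000 myimage

# then browse to http://localhost:5000
```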
| <python><docker><flask> | 2023-07-27 14:21:38 | 2 | 2,304 | Alexis |