| QuestionId | UserId | QuestionTitle | QuestionBody | Tags | CreationDate | AnswerCount | UserExpertiseLevel | UserDisplayName |
|---|---|---|---|---|---|---|---|---|
75,977,408
| 12,403,888
|
Options request throws AssertionError in Django REST Framework
|
<p>I have the following Viewset:</p>
<pre><code>class My_ViewSet(viewsets.ModelViewSet):
serializer_class = My_Serializer
queryset = My_Object.objects.all()
def list(self, request):
# The code here is irrelevant
return Response()
def retrieve(self, request, pk=None):
# The code here is irrelevant
my_object = get_object_or_404(My_Object, id=pk)
return Response(my_object.id)
</code></pre>
<p>urls.py sample:</p>
<pre><code>urlpatterns = [
path('api/my_object/', My_ViewSet.as_view({'get': 'list'})),
path('api/my_object/<int:pk>/', My_ViewSet.as_view({'get': 'retrieve'})),
]
</code></pre>
<p>When I try to make an OPTIONS request on <strong>api/my_object/</strong> I get the following error:</p>
<p><strong>AssertionError: Expected view My_ViewSet to be called with a URL keyword argument named "pk". Fix your URL conf, or set the <code>.lookup_field</code> attribute on the view correctly.</strong></p>
|
<python><django><django-rest-framework><http-method><http-options-method>
|
2023-04-10 13:07:54
| 1
| 443
|
SWater
|
75,977,389
| 1,624,552
|
TypeError: a bytes-like object is required, not 'str' when trying to iterate over a list of running processes
|
<p>I am using Python on Windows. I am trying to kill a running Windows process if it is already running, but I get the error below:</p>
<blockquote>
<p>TypeError: a bytes-like object is required, not 'str'</p>
</blockquote>
<p>I import the following modules:</p>
<pre><code>import os
import subprocess
from time import sleep
</code></pre>
<p>Then below my code:</p>
<pre><code>s = subprocess.check_output('tasklist', shell=True)
if "myProcess.exe" in s:
print('myProcess.exe is currently running. Killing...')
os.system("taskkill /f /im myProcess.exe")
sleep(0.5)
</code></pre>
<p>The error happens in the conditional, when checking whether the process myProcess.exe is in the output <code>s</code>.</p>
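A minimal, stand-alone illustration of the mismatch (not the asker's full script): on Python 3, `check_output` returns `bytes`, so either decode the output or compare against a bytes literal.

```python
# check_output returns bytes on Python 3; decode before comparing with a str,
# or use a bytes literal on the left-hand side of `in`.
s = b"myProcess.exe    1234 Console"  # stand-in for subprocess.check_output('tasklist', shell=True)

assert "myProcess.exe" in s.decode(errors="replace")  # decode bytes -> str
assert b"myProcess.exe" in s                          # or compare bytes with bytes
```

Alternatively, passing `text=True` to `check_output` makes it return `str` directly.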
|
<python><windows><taskkill>
|
2023-04-10 13:05:18
| 1
| 10,752
|
Willy
|
75,977,327
| 6,117,017
|
Deploy .zip archive (Python code) to Azure Function using Terraform
|
<p>I am having issues uploading Python code to a Function App: the function app is created, but the code ends up in the "App files" section rather than in the "Functions" section (there is no function under "Functions" in the Azure Portal).</p>
<p>Here is my Terraform code:</p>
<pre><code>provider "azurerm" {
features {}
}
data "archive_file" "function_zip" {
type = "zip"
source_dir = "../${path.module}/blob_storage_trigger"
output_path = "../${path.module}/blob_storage_trigger.zip"
depends_on = [null_resource.pip]
}
resource "null_resource" "pip" {
triggers = {
requirements_md5 = "${filemd5("../${path.module}/blob_storage_trigger/requirements.txt")}"
}
provisioner "local-exec" {
command = "pip install --target='.python_packages/lib/site-packages' -r requirements.txt"
working_dir = "../${path.module}/blob_storage_trigger"
}
}
#Uploading to the blob container
resource "azurerm_storage_blob" "storage_blob_function" {
name = "my-function.zip"
storage_account_name = "***"
storage_container_name = "***"
type = "Block"
source = "../${path.module}/blob_storage_trigger.zip"
}
# Create Azure App Service Plan using Consumption pricing
resource azurerm_service_plan "app_service_plan" {
name = "app_service_plan"
location = "West Europe"
resource_group_name = "***"
os_type = "Linux"
sku_name = "Y1"
}
resource "azurerm_function_app" "blurring_fn_app" {
name = "calinfntest-app-terraform"
location = "West Europe"
resource_group_name = "***"
storage_account_name = "***"
storage_account_access_key = "***"
app_service_plan_id = azurerm_service_plan.app_service_plan.id
app_settings = {
"WEBSITE_RUN_FROM_PACKAGE" = azurerm_storage_blob.storage_blob_function.url
}
site_config {}
}
</code></pre>
<p>Observations:</p>
<p>The function app is successfully created, the pip packages are successfully installed inside the <code>.zip</code> package that is uploaded to a blob container within a storage account (I double-checked this step, even in the Terraform logs), and the <code>.zip</code> is successfully uploaded to the blob container specified in the <code>azurerm_function_app</code> resource.</p>
<p>The <code>terraform apply</code> runs successfully, but when I open the Azure Portal and go to the function app, there is no actual function deployed (no functions available).</p>
<p>However, I can see my code appearing in the "App files" section as shown below, but not in the Functions section.</p>
<p><a href="https://i.sstatic.net/kHV9x.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kHV9x.png" alt="enter image description here" /></a></p>
<p>How can I solve this problem?</p>
|
<python><azure><terraform><azure-functions>
|
2023-04-10 12:55:57
| 1
| 15,173
|
Timbus Calin
|
75,977,316
| 8,391,698
|
How to use output from T5 model to replace masked tokens in input sequence
|
<p>I'm working with the T5 model from the Hugging Face Transformers library and I have an input sequence with masked tokens that I want to replace with the output generated by the model. Here's the <a href="https://huggingface.co/docs/transformers/model_doc/t5#inference" rel="nofollow noreferrer">code</a>.</p>
<pre><code>from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")
input_data = "The <extra_id_0> walks in <extra_id_1> park"
input_ids = tokenizer(input_data, return_tensors="pt").input_ids
sequence_ids = model.generate(input_ids)
output_sequences = tokenizer.batch_decode(sequence_ids)
output_sequences
</code></pre>
<p>This code produces the following output:</p>
<pre><code>['<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>']
</code></pre>
<p>What I want to do is replace the masked tokens <code><extra_id_0></code> and <code><extra_id_1></code> in the input sequence with the corresponding output tokens from the model, so that the final output is:</p>
<pre><code>The park offers walks in the park.
</code></pre>
<p>I'm hoping someone can help me with the code to achieve this.</p>
<p>Notice that this is the correspondence:</p>
<pre><code>mask in input_data -> answer in output_sequences
<extra_id_0> -> <extra_id_0> park offers (so we extract 'park offers' only)
<extra_id_1> -> <extra_id_1> the (so we extract 'the' only)
</code></pre>
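Independently of the model call, the splice itself can be done on the decoded string. A sketch, assuming the sentinel tokens <code>&lt;extra_id_N&gt;</code> appear in increasing order in the output, as in the example above:

```python
import re

input_data = "The <extra_id_0> walks in <extra_id_1> park"
decoded = "<pad><extra_id_0> park offers<extra_id_1> the<extra_id_2> park.</s>"

# strip the special tokens, then split on the sentinels to recover each answer span
cleaned = re.sub(r"</?s>|<pad>", "", decoded)
spans = re.split(r"<extra_id_\d+>", cleaned)[1:]

# substitute each span back into the masked input
result = input_data
for i, span in enumerate(spans):
    result = result.replace(f"<extra_id_{i}>", span.strip(), 1)

assert result == "The park offers walks in the park"
```

Sentinels present only in the output (like `<extra_id_2>` here, which closes the last span) are simply ignored because they never occur in the input.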
|
<python><nlp><huggingface-transformers><generative-pretrained-transformer>
|
2023-04-10 12:54:14
| 1
| 5,189
|
littleworth
|
75,977,087
| 10,551,444
|
How to fix "sys.stderr is None" after compiled a script with Pyinstaller --noconsole?
|
<p><strong>ENVIRONMENT</strong></p>
<p>Windows 10</p>
<p>Python 3.10</p>
<p>Pyinstaller 5.9</p>
<p><strong>PROBLEM:</strong></p>
<p>I am trying to compile a script that needs to run some command lines in the console in order to get some values from the system.</p>
<pre><code>if __name__ == '__main__':
faulthandler.enable()
try:
proc = subprocess.Popen(p_command, shell=True, stdout=subprocess.PIPE, stdin=subprocess.PIPE,
stderr=subprocess.STDOUT, close_fds=True)
output, errors = proc.communicate()
except:
pass
</code></pre>
<p>You can use this code above to reproduce the error.</p>
<p>When I compile with the console, the "exe" works fine.
When I compile with "--noconsole", the "exe" fails with this error message:</p>
<pre><code>Traceback (most recent call last):
File "script.py", line 5432, in <module>
faulthandler.enable()
RuntimeError: sys.stderr is None
</code></pre>
<p>After a few hours of investigation, I've understood that the stdin and stdout arguments of <code>subprocess.Popen</code> can't be used with the "--noconsole" compilation mode.
I tried this solution:</p>
<pre><code>stream = io.StringIO()
sys.stdout = stream
sys.stderr = stream
</code></pre>
<p>But it still doesn't work.
I verified the doc:
<a href="https://readthedocs.org/projects/pyinstaller/downloads/pdf/stable/" rel="nofollow noreferrer">https://readthedocs.org/projects/pyinstaller/downloads/pdf/stable/</a>
It clearly states:</p>
<blockquote>
<p>(Windows) In windowed/noconsole mode, stop setting sys.stdout and
sys.stderr to custom NullWriter object, and instead leave them at
None. This matches the behavior of windowed python interpreter
(pythonw. exe) and prevents interoperability issues with code that
(rightfully) expects the streams to be either None or objects that are
fully compatible with io.IOBase. (#3503)</p>
</blockquote>
<p>So the solution N°1 seems correct, doesn't it?</p>
<p>I didn't have this issue last year when I compiled the previous version. I don't understand why now there is this issue.</p>
<p>Does anyone have any solution to this issue PLEASE?</p>
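A hedged sketch of a guard, under my assumption about the cause: `faulthandler` writes through a real file descriptor, so an `io.StringIO` replacement is not enough (it has no `fileno()`); a log file is one fallback. The log path below is illustrative, not from the question.

```python
import faulthandler
import os
import sys
import tempfile

def safe_enable_faulthandler():
    """Enable faulthandler even when sys.stderr is None (--noconsole build).

    faulthandler needs a stream with a real file descriptor, which is
    likely why the io.StringIO replacement did not work; fall back to a
    log file (path here is illustrative).
    """
    if sys.stderr is not None:
        faulthandler.enable()
        return sys.stderr
    log = open(os.path.join(tempfile.gettempdir(), "app_stderr.log"), "w")
    faulthandler.enable(file=log)
    return log

stream = safe_enable_faulthandler()
```

For the `subprocess.Popen` call itself, a windowed build similarly has no console handles, so explicitly passing `stdin=subprocess.DEVNULL` (instead of a pipe) is a commonly suggested companion change.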
|
<python><python-3.x><pyinstaller>
|
2023-04-10 12:22:04
| 2
| 1,223
|
Gauthier Buttez
|
75,977,012
| 10,048,799
|
Save and return to cursor in CLI not working
|
<p>I am attempting a type ahead / predictive typing effect in a python cli. I am referencing the following list of ANSI escape codes: <a href="https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797" rel="nofollow noreferrer">https://gist.github.com/fnky/458719343aabd01cfb17a3a4f7296797</a></p>
<p>Looking at the "Cursor Controls" section, there are two commands for saving and returning to a cursor position. I want to couple that with deleting, so that typing adds the character to the rest and clears the type-ahead effect. So I currently have:</p>
<pre class="lang-py prettyprint-override"><code>import readchar
import sys
print("Reading input: ", end="")
sys.stdout.flush()
s = ""
while True:
# Goto saved position
print("\x1B 8", end="")
sys.stdout.flush()
char = readchar.readchar()
# remove to end / clear effect
print("\x1B[0K", end="")
sys.stdout.flush()
if char == "\r":
break
print(char, end="")
# save the cursor position
ahead_text = "#"
print("\x1B 7" + ahead_text, end="")
sys.stdout.flush()
s += char
print("\nresult was", s)
sys.stdout.flush()
</code></pre>
<p>However, on Windows Terminal it is not clearing the temporary ahead-of-time text <code>#</code>:
<a href="https://i.sstatic.net/4wrlf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4wrlf.png" alt="enter image description here" /></a></p>
<p>Do these commands work? Is there an alternative ANSI sequence?</p>
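For reference, the DEC save/restore sequences are ESC immediately followed by 7 or 8, with no space between the bytes, and many terminals (Windows Terminal included) also accept the CSI forms ESC[s / ESC[u. A minimal sketch of the save / show-prediction / clear cycle:

```python
import sys

SAVE_DEC, RESTORE_DEC = "\x1b7", "\x1b8"    # DECSC / DECRC: ESC then the digit, no space
SAVE_CSI, RESTORE_CSI = "\x1b[s", "\x1b[u"  # CSI variants
CLEAR_TO_EOL = "\x1b[0K"                    # erase from cursor to end of line

def print_ahead(text):
    # save the cursor, show the prediction, jump back to the saved spot
    sys.stdout.write(SAVE_CSI + text + RESTORE_CSI)
    sys.stdout.flush()

def accept_char(ch):
    # clear the prediction to end of line, then echo the typed character
    sys.stdout.write(CLEAR_TO_EOL + ch)
    sys.stdout.flush()

assert RESTORE_DEC == "\x1b" + "8"  # note: no space between ESC and the digit
```

The code in the question writes <code>"\x1B 8"</code> and <code>"\x1B 7"</code> with a space between ESC and the digit; whether that is the whole story on Windows Terminal I can't say, but the space does change the byte sequence sent to the terminal.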
|
<python><ansi-escape>
|
2023-04-10 12:11:44
| 0
| 3,723
|
Ben
|
75,976,883
| 2,825,403
|
Fill in the previous value from specific column based on a condition
|
<p>I have a Polars DataFrame that looks something like so:</p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame(
{
'val': [9, 7, 9, 11, 2, 5],
'count': [1, 2, 1, 2, 1, 2],
'id': [1, 1, 2, 2, 3, 3],
}
)
</code></pre>
<p>What I need is to create a new column <code>'prev_val'</code> which will contain values for the same unique id taken from a row where the value in <code>'count'</code> column is smaller by one, i.e. looking something like:</p>
<pre><code>┌─────┬───────┬─────┬──────────┐
│ val ┆ count ┆ id ┆ prev_val │
│ --- ┆ --- ┆ --- ┆ --- │
│ i64 ┆ i64 ┆ i64 ┆ i64 │
╞═════╪═══════╪═════╪══════════╡
│ 9 ┆ 1 ┆ 1 ┆ null │
│ 7 ┆ 2 ┆ 1 ┆ 9 │
│ 9 ┆ 1 ┆ 2 ┆ null │
│ 11 ┆ 2 ┆ 2 ┆ 9 │
│ 2 ┆ 1 ┆ 3 ┆ null │
│ 5 ┆ 2 ┆ 3 ┆ 2 │
└─────┴───────┴─────┴──────────┘
</code></pre>
<p>I couldn't figure out a way of using native expressions, so I tried doing this using <code>map_rows</code> and a UDF, even though the Polars guide discourages the use of UDFs.</p>
<p>Is there maybe a native way to do this?</p>
|
<python><python-polars>
|
2023-04-10 11:56:09
| 1
| 4,474
|
NotAName
|
75,976,820
| 913,749
|
Glue PySpark kernel not showing in VS Code
|
<p>I have a Windows 10 VDI and have set up an AWS Glue interactive session using <a href="https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions.html" rel="nofollow noreferrer">this link</a>. The Python version set up for this is Python 3.9.7.</p>
<p>When I start the notebook from command prompt, I see the various kernels in the browser</p>
<p><a href="https://i.sstatic.net/ssE44.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ssE44.png" alt="enter image description here" /></a></p>
<p>I have VSCode (updated to v1.77) and have installed the Python and Jupyter extensions as well, and I am trying to set up VSCode to use the Glue interactive sessions using <a href="https://docs.aws.amazon.com/glue/latest/dg/interactive-sessions-vscode.html" rel="nofollow noreferrer">this</a>.</p>
<p>In VSCode, I do not see Glue PySpark as a kernel option, though I do see Glue Spark. I have also added the python path to kernel.json as described <a href="https://stackoverflow.com/questions/73994337/cannot-find-pyspark-kernel-aws-glue-interactive-sessions-with-vscode">here</a>.</p>
<p>But I still do not see Glue PySpark as an option in VSCode. Am I missing any setup?</p>
<p><a href="https://i.sstatic.net/AkhbG.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AkhbG.png" alt="enter image description here" /></a></p>
|
<python><visual-studio-code><jupyter-notebook><kernel><aws-glue>
|
2023-04-10 11:46:22
| 2
| 2,259
|
adbdkb
|
75,976,755
| 7,225,171
|
Credentials object from google service-account has no attribute `to_json`
|
<p>In the official <a href="https://developers.google.com/sheets/api/quickstart/python#configure_the_sample" rel="nofollow noreferrer">Google Python client library docs</a> it is suggested to create credentials using an OAuth2 client id:</p>
<pre><code>flow = InstalledAppFlow.from_client_secrets_file(
'credentials.json', SCOPES)
creds = flow.run_local_server(port=0)
</code></pre>
<p>and save them using <code>to_json</code>:</p>
<pre><code>token.write(creds.to_json())
</code></pre>
<p>But if I try to use service-account credentials as suggested in <a href="https://stackoverflow.com/a/64509140/7225171">this answer</a> or in <a href="https://developers.google.com/identity/protocols/oauth2/service-account#authorizingrequests" rel="nofollow noreferrer">the following doc</a> like this:</p>
<pre><code>creds2 = service_account.Credentials.from_service_account_file(
SERVICE_ACCOUNT_FILE, SCOPES)
</code></pre>
<p>The types of <strong><code>creds2</code></strong> and <strong><code>creds</code></strong> are different, and <strong><code>creds2</code></strong> (from service_account) does not have a <code>to_json</code> method. It does not even have <code>items()</code>, so I cannot call <code>json.dumps({k: v for (k, v) in creds.items()})</code> directly.</p>
<p>How can this type of credentials be saved into json file?</p>
<p>Libs versions:</p>
<pre><code>google-api-python-client==2.84.0
google-auth==2.17.2
</code></pre>
|
<python><google-api><google-oauth><google-api-python-client>
|
2023-04-10 11:38:01
| 1
| 1,091
|
Serob
|
75,976,716
| 8,406,122
|
Extracting i-th column from a tab separated text file in python
|
<p>Say, I have a text file like this</p>
<pre><code>John 32 Britain Marry Sunny
Britney 21 India Angel Cloudy
Jack 22 UK Becky Snowy
Jill 43 United States of America Bill Cloudy
Amy 31 South Africa Claudey Sunny
</code></pre>
<p>The data is tab separated. I want to extract the 3rd column, which has the country names, into a text file:</p>
<pre><code>Britain
India
UK
United States of America
South Africa
</code></pre>
<p>How can I do this using Python? Any help will be great. :)</p>
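One stdlib approach (a sketch; the csv module splits on the tab delimiter, which keeps multi-word fields like "United States of America" intact):

```python
import csv
import io

# io.StringIO stands in for open("data.txt", newline="") on the real file
data = (
    "John\t32\tBritain\tMarry\tSunny\n"
    "Britney\t21\tIndia\tAngel\tCloudy\n"
    "Jill\t43\tUnited States of America\tBill\tCloudy\n"
)
reader = csv.reader(io.StringIO(data), delimiter="\t")
countries = [row[2] for row in reader]  # third column is index 2

assert countries == ["Britain", "India", "United States of America"]
```

Writing the result out is then just `"\n".join(countries)` into the target file.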
|
<python>
|
2023-04-10 11:32:07
| 1
| 377
|
Turing101
|
75,976,629
| 331,174
|
Web Socket terminating prematurely
|
<p>I have been trying to connect to the <code>skinport</code> WebSocket using Python. The socket documentation is at the bottom of <a href="https://docs.skinport.com/?javascript--nodejs#salefeed" rel="nofollow noreferrer">this link</a>. I am using <code>websocket-client</code> module for making a connection but the connection closes prematurely. Below is my code so far.</p>
<pre><code>import websocket
def on_message(ws, message):
print(message)
def on_error(ws, error):
print(error)
def on_close(ws, b, c):
print("Connection closed")
def on_open(ws):
print("Connection opened")
ws.send("Hello, Server!")
if __name__ == "__main__":
websocket.enableTrace(True)
ws = websocket.WebSocketApp("wss://skinport.com/saleFeed?app_id=730&currency=EUR",
on_message = on_message,
on_error = on_error,
on_close = on_close)
ws.on_open = on_open
ws.run_forever()
</code></pre>
<p>I get the following traces/messages on the console</p>
<blockquote>
<p>--- request header --- GET /saleFeed?app_id=730&currency=EUR HTTP/1.1 Upgrade: websocket Host: skinport.com Origin: <a href="https://skinport.com" rel="nofollow noreferrer">https://skinport.com</a>
Sec-WebSocket-Key: G9bSTZyU7TP2SQIOZgbbgg== Sec-WebSocket-Version: 13
Connection: Upgrade</p>
<hr />
<p>--- response header --- HTTP/1.1 200 OK Date: Mon, 10 Apr 2023 11:16:12 GMT Content-Type: text/html; charset=utf-8 Transfer-Encoding:
chunked Connection: keep-alive CF-Ray: 7b5a9387bebd6bfd-SIN
Cache-Control: no-cache, no-store, must-revalidate, max-age=0 Expires:
-1 Strict-Transport-Security: max-age=31536000; includeSubDomains; preload Via: 1.1 vegur CF-Cache-Status: DYNAMIC
Content-Security-Policy: script-src 'self' 'unsafe-eval'
'unsafe-inline' <a href="https://platform.twitter.com" rel="nofollow noreferrer">https://platform.twitter.com</a>
<a href="https://syndication.twitter.com" rel="nofollow noreferrer">https://syndication.twitter.com</a> <a href="https://www.googletagmanager.com" rel="nofollow noreferrer">https://www.googletagmanager.com</a>
<a href="https://cdn.syndication.twimg.com" rel="nofollow noreferrer">https://cdn.syndication.twimg.com</a> <a href="https://www.google.com" rel="nofollow noreferrer">https://www.google.com</a>
<a href="https://www.google-analytics.com" rel="nofollow noreferrer">https://www.google-analytics.com</a> <a href="https://maps.googleapis.com" rel="nofollow noreferrer">https://maps.googleapis.com</a>
<a href="https://apis.google.com" rel="nofollow noreferrer">https://apis.google.com</a> <a href="https://connect.facebook.net" rel="nofollow noreferrer">https://connect.facebook.net</a>
<a href="https://challenges.cloudflare.com" rel="nofollow noreferrer">https://challenges.cloudflare.com</a>
<a href="https://static.cloudflareinsights.com" rel="nofollow noreferrer">https://static.cloudflareinsights.com</a> https://bat.bing.com;report-uri
<a href="https://o298045.ingest.sentry.io/api/5193335/security/?sentry_key=98577efcbca24e6daef4a099b6611076" rel="nofollow noreferrer">https://o298045.ingest.sentry.io/api/5193335/security/?sentry_key=98577efcbca24e6daef4a099b6611076</a>
Expect-Ct: max-age=0 Origin-Agent-Cluster: ?1 Pragma: no-cache
Referrer-Policy: strict-origin-when-cross-origin
X-Content-Type-Options: nosniff X-Dns-Prefetch-Control: off
X-Download-Options: noopen X-Frame-Options: SAMEORIGIN
X-Permitted-Cross-Domain-Policies: none X-Xss-Protection: 0
Set-Cookie:
__cf_bm=kin41Kk7yGk9NxwTBggN4_Ga4scFC2391h_Ggv5rVeQ-1681125372-0-Ae23hoR65z9NdcPVYOlSk2y5PTPe9WtzqBTlWG1NvjVHoD5bac7WaPISitKfUEnLizFf45QQFxKbMurf47/I/rs=;
path=/; expires=Mon, 10-Apr-23 11:46:12 GMT; domain=.skinport.com;
HttpOnly; Secure Server: cloudflare
----------------------- Handshake status 200 OK - goodbye Handshake status 200 OK Connection closed</p>
</blockquote>
<p>I don't understand why the connection is closed without the messages being exchanged. The <code>on_message</code> function is never called during the process.</p>
|
<python><python-3.x><websocket>
|
2023-04-10 11:20:43
| 1
| 17,844
|
Muhammad Adeel Zahid
|
75,976,617
| 1,867,328
|
Definition of custom class and different methods
|
<p>I have the definition below:</p>
<pre><code>def fun1(x) :
return x + 2
class my_class :
def fun1(x) :
return x + 22
def fun2(x) :
return fun1(x) + 33
print(my_class.fun2(10))
</code></pre>
<p>However this returns 45, whereas I am expecting 65 (10 + 22 + 33).</p>
<p>Where am I making a mistake?</p>
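The key point: a method body does not search the enclosing class scope at call time, so the bare `fun1(x)` inside `fun2` finds the module-level `fun1`, giving 10 + 2 + 33 = 45. Qualifying the call reaches the class's version; a sketch (names mirror the question):

```python
def fun1(x):
    return x + 2

class MyClass:
    @staticmethod
    def fun1(x):
        return x + 22

    @staticmethod
    def fun2(x):
        # name lookup inside a method skips the class scope, so the call
        # must be qualified to reach the class's own fun1
        return MyClass.fun1(x) + 33

assert MyClass.fun2(10) == 65   # 10 + 22 + 33
assert fun1(10) + 33 == 45      # what the original bare call computed
```

(The `@staticmethod` decorators are added here so the methods can be called without an instance, matching how the question calls them; with instance methods the qualified call would be `self.fun1(x)`.)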
|
<python>
|
2023-04-10 11:19:37
| 1
| 3,832
|
Bogaso
|
75,976,403
| 12,291,110
|
Implementing event bridge rule onto aws CDK
|
<p>I am new to AWS CDK and have an issue implementing in CDK what I was able to implement successfully in the AWS console.</p>
<p>This is the event rule that I have set up:</p>
<pre><code>{
"$or": [{
"detail.distance": [{
"numeric": [">=", 0]
}]
}, {
"detail.duration": [{
"numeric": [">=", 0]
}]
}],
"replay-name": [{
"exists": true
}]
}
</code></pre>
<p>When I try to implement this in CDK,
I can see the limitations it has for setting up the event pattern:</p>
<p><a href="https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_events/EventPattern.html" rel="nofollow noreferrer">https://docs.aws.amazon.com/cdk/api/v1/python/aws_cdk.aws_events/EventPattern.html</a></p>
<p>What I am trying:</p>
<pre><code>cfn_rule = aws_events.CfnRule(
self,
id=f"{stack_id}-rule-to-invoke-lambda",
name="rule-to-invoke-lambda",
event_bus_name=event_bus.event_bus_name,
event_pattern=aws_events.EventPattern(
source=["mts-simulation"], #toDo Have to add the pattern but unable to design it
),
targets=[
aws_events.CfnRule.TargetProperty(
arn=lambda_function.function_arn, #coming in from function parameter
id=f"{stack_id}-target-to-lambda-function",
input_transformer=aws_events.CfnRule.InputTransformerProperty(
input_template='{"uptime":"<detail-uptime>",'
' "distance": "<detail-distance>",'
'"duration": "<detail-duration>","timestamp": "<timestamp>"}',
input_paths_map={
"timestamp": "$.time",
"detail-uptime": "$.detail.system_uptime",
"detail-distance": "$.detail.distance",
"detail-duration": "$.detail.duration",
},
),
)
],
)
</code></pre>
<p>This could be elementary, but I am currently stuck with it.
Could anyone have a bit of patience and help me out?</p>
<p>Edited after implementing the answer from Scott Hsieh:</p>
<p>Missing out on 2 things:</p>
<ol>
<li>associating it to an existing event bus</li>
<li></li>
</ol>
<pre><code>{
"detail": {
"replay-name": [{
"exists": true
}],
"$or": [{
"distance": [{
"numeric": [">=", 0]
}]
}, {
"duration": [{
"numeric": [">=", 0]
}]
}]
},
"source": ["mts-simulation"]
}
</code></pre>
<p>This rule is failing because "replay-name" flows in at the top level of the JSON event, outside the detail part.</p>
<p>for example:
event =</p>
<pre><code>{ "field1":"value",
"field2":"value",
detail:{ "distance":"value",
"duration":"value"
}
}
</code></pre>
|
<python><amazon-web-services><aws-lambda><aws-cdk><aws-event-bridge>
|
2023-04-10 10:44:26
| 2
| 2,191
|
sumanth shetty
|
75,976,334
| 6,560,267
|
Jupyter notebook kernel error exclusively when running it as a systemctl service
|
<p>I have the following service file:</p>
<pre class="lang-ini prettyprint-override"><code>[Unit]
Description=Jupyter-Notebook Daemon
[Service]
Type=simple
ExecStart=/bin/bash -c "~/mambaforge/envs/main/bin/jupyter-notebook --no-browser --notebook-dir=~/Code/notebooks"
User=<my_user>
PIDFile=/run/jupyter-notebook.pid
Restart=on-failure
RestartSec=60s
[Install]
WantedBy=multi-user.target
</code></pre>
<p>(notice I use <strong>mambaforge</strong> instead of miniconda). When I run the command under ExecStart as-is in my command line, the server starts and, upon opening a notebook, the kernel initializes correctly.</p>
<p>However, when it gets launched as a service (e.g. <code>sudo systemctl start jupyter-service</code>), the kernel fails to connect. It gives a very long error message, but the important part is this:</p>
<pre><code>FileNotFoundError: [Errno 2] No such file or directory: '~/miniconda3/envs/main/bin/python'
</code></pre>
<p>Notice how it attempts to run from <em>miniconda3</em> (which doesn't exist)!
Why is this?</p>
<p>For reference when running <code>jupyter kernelspec list</code> I get a correct kernel path:</p>
<pre><code>Available kernels:
  python3    ~/mambaforge/envs/main/share/jupyter/kernels/python3
</code></pre>
<p>and using <code>cat ~/mambaforge/envs/main/share/jupyter/kernels/python3/kernel.json</code> returns:</p>
<pre class="lang-json prettyprint-override"><code>{
"argv": [
"~/mambaforge/envs/main/bin/python",
"-m",
"ipykernel_launcher",
"-f",
"{connection_file}"
],
"display_name": "Python 3 (ipykernel)",
"language": "python",
"metadata": {
"debugger": true
}
}
</code></pre>
<p>I'd like to know where Jupyter is looking for the Python kernel, and why.</p>
|
<python><python-3.x><linux><jupyter-notebook><jupyter>
|
2023-04-10 10:35:58
| 1
| 913
|
Adrian
|
75,976,239
| 5,016,028
|
Sagemath expression does not solve
|
<p>I am trying to implement this equation in SageMath as a function:</p>
<pre><code>def prob_k(p_uv_list, tw):
probs = []
w, k = var('w, k')
# compute probability of generating k entanglements, k=1..w
for tk in range(tw):
pk = binomial(w, k)*(p_uv_list[tk]**k)*(1-p_uv_list[tk])**(w-k).subs({x:tw,k:tk})
probs += [pk]
return probs
</code></pre>
<p>The input is a list of probabilities and an integer. I want the output to be a list of probabilities computed given the formula for pk. What I get is:</p>
<pre><code>[0.500000000000000^k*0.500000000000000^w*binomial(w, k),
0.500000000000000^k*0.500000000000000^(w - 1)*binomial(w, k),
0.500000000000000^k*0.500000000000000^(w - 2)*binomial(w, k)]
</code></pre>
<p>Why does it not substitute w and k with tw and tk?</p>
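Two things in the <code>pk</code> line are worth checking: the dict substitutes <code>x</code> (not <code>w</code>), and a method call binds tighter than <code>**</code>, so <code>.subs</code> applies only to the exponent <code>(w-k)</code>, not to the whole product. Sage isn't needed to illustrate this; SymPy (assumed to behave analogously here) shows the difference between substituting on the whole expression and on the exponent alone:

```python
import sympy as sp

w, k = sp.symbols("w k")
p = sp.Rational(1, 2)

# build the full expression first, then substitute on the whole thing
pk = sp.binomial(w, k) * p**k * (1 - p) ** (w - k)
value = pk.subs({w: 3, k: 1})
assert value == sp.Rational(3, 8)  # C(3,1) * (1/2)^1 * (1/2)^2

# precedence pitfall: .subs on the exponent only touches that one factor,
# leaving w and k free elsewhere -- matching the output in the question
partial = sp.binomial(w, k) * p**k * (1 - p) ** (w - k).subs({w: 3, k: 1})
assert partial.free_symbols == {w, k}
```

In the original function, wrapping the whole product in parentheses before calling `.subs({w: tw, k: tk})` would address both points.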
|
<python><list><sage>
|
2023-04-10 10:20:29
| 1
| 4,373
|
Qubix
|
75,975,877
| 5,672,673
|
error reshaping a Dense layer in Keras functional API
|
<p>Hi, I want to reshape a layer after a Dense layer, but it returns a funny error. Here is the code:</p>
<pre><code>codings_size=10
decoder_inputs = tf.keras.layers.Input(shape=[codings_size])
# x=tf.keras.layers.Flatten(decoder_inputs)
x=tf.keras.layers.Dense(3 * 3 * 16)(decoder_inputs),
x=tf.keras.layers.Reshape((3, 3, 16))(x),
</code></pre>
<p>Here is the error</p>
<pre><code>AttributeError: Exception encountered when calling layer "reshape_28" (type Reshape).
'tuple' object has no attribute 'shape'
Call arguments received by layer "reshape_28" (type Reshape):
• inputs=('tf.Tensor(shape=(None, 144), dtype=float32)',)
</code></pre>
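The <code>inputs=('tf.Tensor(...)',)</code> part of the error hints at the likely cause: the trailing comma at the end of each layer line wraps the tensor in a one-element tuple, and the next layer receives the tuple instead of the tensor. The pitfall in plain Python:

```python
# a trailing comma turns an assignment into a 1-element tuple
x = 5,
assert isinstance(x, tuple) and x == (5,)

y = 5
assert isinstance(y, int)
```

Dropping the commas after the `Dense(...)` and `Reshape(...)` lines should let each layer receive the tensor itself.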
|
<python><tensorflow><keras><deep-learning>
|
2023-04-10 09:29:58
| 1
| 1,177
|
Linh Chi Nguyen
|
75,975,807
| 1,279,355
|
How to stop a loop on shutdown in FastAPI?
|
<p>I have a route <code>/</code> which starts an endless loop (technically it runs until the websocket is disconnected, but in this simplified example it is truly endless).
How do I stop this loop on shutdown?</p>
<pre><code>from fastapi import FastAPI
import asyncio
app = FastAPI()
running = True
@app.on_event("shutdown")
def shutdown_event():
global running
running = False
@app.get("/")
async def index():
while running:
await asyncio.sleep(0.1)
</code></pre>
<p>According to the <a href="https://fastapi.tiangolo.com/de/advanced/events/" rel="noreferrer">docs</a>, <code>@app.on_event("shutdown")</code> should be called during shutdown, but I suspect it is called similarly to the lifespan event, i.e. after everything has finished, which is a deadlock in this situation.</p>
<p>To test:</p>
<ol>
<li>i run it as <code>uvicorn module.filename:app --host 0.0.0.0</code></li>
<li>curl http://ip:port/</li>
<li>then stop the server (pressing <code>CTRL+C</code>)</li>
</ol>
<p>and you see that it hangs forever, since <code>running</code> is never set to false because <code>shutdown_event</code> is not called.
(Yes, you can force shutdown by pressing <code>CTRL+C</code>.)</p>
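Independent of FastAPI, the usual way to express this is an <code>asyncio.Event</code> rather than a bare global flag, so the shutdown code has a single object to signal and the loop can also <code>wait()</code> on it instead of polling. A stand-alone sketch of the mechanism:

```python
import asyncio

async def endless_loop(stop_event):
    # poll until the shutdown signal arrives (mirrors the route's while-loop)
    while not stop_event.is_set():
        await asyncio.sleep(0.01)
    return "stopped"

async def main():
    stop_event = asyncio.Event()
    task = asyncio.create_task(endless_loop(stop_event))
    await asyncio.sleep(0.05)   # the server "runs" for a while
    stop_event.set()            # what the shutdown hook would do
    return await task

result = asyncio.run(main())
assert result == "stopped"
```

Whether uvicorn actually invokes the shutdown hook before or after pending requests finish is the asker's open question; this only shows the signalling pattern the hook would use once it runs.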
|
<python><python-asyncio><fastapi><asgi>
|
2023-04-10 09:19:37
| 3
| 4,420
|
Sir l33tname
|
75,975,641
| 5,672,673
|
shape must be rank 4 but is rank 5 for max pool layer
|
<p>Hi, I want to build a CNN model for RGB images of 32x32x3,
but the max pooling layer returns an error saying:</p>
<pre><code>ValueError: Exception encountered when calling layer "max_pooling2d_11" (type MaxPooling2D).
Shape must be rank 4 but is rank 5 for '{{node max_pooling2d_11/MaxPool}} = MaxPool[T=DT_FLOAT, data_format="NHWC", explicit_paddings=[], ksize=[1, 2, 2, 1], padding="VALID", strides=[1, 2, 2, 1]](max_pooling2d_11/MaxPool/input)' with input shapes: [1,?,32,32,16].
Call arguments received by layer "max_pooling2d_11" (type MaxPooling2D):
• inputs=('tf.Tensor(shape=(None, 32, 32, 16), dtype=float32)',)
</code></pre>
<p>Here is the layers:</p>
<pre class="lang-py prettyprint-override"><code>inputs = tf.keras.layers.Input(shape=(32, 32, 3))
Z = tf.keras.layers.Conv2D(16, 3, padding="same", activation="relu")(inputs),
Z = tf.keras.layers.MaxPool2D(pool_size=(2,2))(Z)
</code></pre>
<p>EDIT: Additional question:</p>
<p>Hi, I want to make a net that outputs an image of 32x32, but it gives an error. Here is the code:</p>
<pre><code>codings_size=10
decoder_inputs = tf.keras.layers.Input(shape=[codings_size])
x=tf.keras.layers.Dense(3 * 3 * 16)(decoder_inputs)
x=tf.keras.layers.Reshape((3, 3, 16))(x)
x=tf.keras.layers.Conv2DTranspose(32, 3, strides=2, activation="relu")(x)
x=tf.keras.layers.Conv2DTranspose(16, 3, strides=2, padding="same",
activation="relu")(x)
x=tf.keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same")(x)
outputs=tf.keras.layers.Reshape([32, 32])(x)
variational_decoder = tf.keras.Model(inputs=[decoder_inputs], outputs=[outputs])
</code></pre>
<p>Here is the error:</p>
<pre><code>ValueError: Exception encountered when calling layer "reshape_13" (type Reshape).
total size of new array must be unchanged, input_shape = [28, 28, 1], output_shape = [32, 32]
Call arguments received by layer "reshape_13" (type Reshape):
• inputs=tf.Tensor(shape=(None, 28, 28, 1), dtype=float32)
</code></pre>
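The first error's <code>inputs=('tf.Tensor(...)',)</code> likely comes from the trailing comma after the <code>Conv2D(...)</code> line, which wraps the tensor in a tuple and adds the extra rank. The second error is arithmetic: under the standard Keras output-size formulas for <code>Conv2DTranspose</code> ((n-1)*stride + kernel for "valid" padding, n*stride for "same"), the three layers grow 3 → 7 → 14 → 28, so the final tensor is 28x28x1 and cannot be reshaped to 32x32. A quick check:

```python
def conv_transpose_out(n, kernel, stride, padding):
    """Spatial output size of a transposed convolution (standard Keras formulas)."""
    return n * stride if padding == "same" else (n - 1) * stride + kernel

n = 3
n = conv_transpose_out(n, kernel=3, stride=2, padding="valid")  # 3 -> 7
n = conv_transpose_out(n, kernel=3, stride=2, padding="same")   # 7 -> 14
n = conv_transpose_out(n, kernel=3, stride=2, padding="same")   # 14 -> 28

assert n == 28
assert n * n != 32 * 32  # 784 elements cannot fill a 32x32 (1024) reshape
```

Adjusting kernel/stride/padding (or the initial Dense/Reshape size) so the chain lands on 32 would be one way to reconcile the shapes.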
|
<python><tensorflow><keras><image-processing><conv-neural-network>
|
2023-04-10 08:54:16
| 1
| 1,177
|
Linh Chi Nguyen
|
75,975,480
| 5,405,813
|
Reg. google cloud vision api
|
<p>I am trying to read the hand-written data on hot-rolled coils using Python and the google-cloud-vision API. What I have observed is that when the data is written in a slightly inclined manner, the Vision API completely fails. For example, if the data is written as <a href="https://i.sstatic.net/EDbRm.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/EDbRm.jpg" alt="inclined" /></a> it fails completely and gives some absurd output, even though the data looks readable.<br/></p>
<p>However, if the data is written as <a href="https://i.sstatic.net/5E3V6.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5E3V6.jpg" alt="straight" /></a> the API reads it correctly.<br/> Should any pre-processing be done on the image so that the API reads it correctly?</p>
|
<python><google-cloud-vision>
|
2023-04-10 08:29:29
| 1
| 455
|
bipin_s
|
75,975,294
| 4,420,797
|
Create labels.csv file ground truth (gt.txt)
|
<p>I have a <code>gt.txt</code> file and I would like to convert it into a <code>labels.csv</code> file. There is a difference between the two files.</p>
<p><strong>gt.txt</strong></p>
<p><a href="https://i.sstatic.net/KOgqY.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/KOgqY.png" alt="enter image description here" /></a></p>
<p><strong>labels.csv</strong> (Require)</p>
<p>The labels.csv file should contain two columns: the first column name is <code>filename</code> and the second column name is <code>words</code>. <strong>How can I do it?</strong></p>
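A sketch with the stdlib csv module, assuming each line of gt.txt holds a filename and its text separated by whitespace (the real separator and columns come from your gt.txt, which is only shown as an image here, so the sample lines below are invented):

```python
import csv
import io

# stand-ins: gt_text for open("gt.txt").read(), out for open("labels.csv", "w", newline="")
gt_text = "img_001.png hello\nimg_002.png world\n"
out = io.StringIO()

writer = csv.writer(out)
writer.writerow(["filename", "words"])          # required header
for line in gt_text.splitlines():
    filename, word = line.split(maxsplit=1)     # split on the first whitespace only
    writer.writerow([filename, word])

lines = out.getvalue().splitlines()
assert lines[0] == "filename,words"
assert lines[1] == "img_001.png,hello"
```

If gt.txt actually uses a different separator (a tab, or a comma), only the `split` call changes.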
|
<python>
|
2023-04-10 07:57:37
| 2
| 2,984
|
Khawar Islam
|
75,974,892
| 13,916,049
|
ValueError: s must be a scalar, or float array-like with the same size as x and y (NetworkX)
|
<p>I want to plot the network for <code>gp.enrichment_map(gsea_res.res2d)</code>.</p>
<pre><code>import gseapy as gp
import networkx as nx
import matplotlib.pyplot as plt
nodes, edges = gp.enrichment_map(gsea_res.res2d)
# build graph
G = nx.from_pandas_edgelist(edges,
                            source='src_idx',
                            target='targ_idx',
                            edge_attr=['jaccard_coef', 'overlap_coef', 'overlap_genes'])

fig, ax = plt.subplots(figsize=(8, 8))

# init node coordinates
pos = nx.layout.spiral_layout(G)
#node_size = nx.get_node_attributes()

# draw node
nx.draw_networkx_nodes(G,
                       pos=pos,
                       cmap=plt.cm.RdYlBu,
                       node_color=list(nodes.NES),
                       node_size=list(nodes.Hits_ratio * 1000))

# draw node label
nx.draw_networkx_labels(G,
                        pos=pos,
                        labels=nodes.Term.to_dict())

# draw edge
edge_weight = nx.get_edge_attributes(G, 'jaccard_coef').values()
nx.draw_networkx_edges(G,
                       pos=pos,
                       width=list(map(lambda x: x*10, edge_weight)),
                       edge_color='#CDDBD4')
plt.show()
</code></pre>
<p>Traceback:</p>
<pre><code>---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Input In [307], in <cell line: 7>()
4 pos=nx.layout.spiral_layout(G)
5 #node_size = nx.get_node_attributes()
6 # draw node
----> 7 nx.draw_networkx_nodes(G,
8 pos=pos,
9 cmap=plt.cm.RdYlBu,
10 node_color=list(nodes.NES),
11 node_size=list(nodes.Hits_ratio *1000))
12 # draw node label
13 nx.draw_networkx_labels(G,
14 pos=pos,
15 labels=nodes.Term.to_dict())
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/networkx/drawing/nx_pylab.py:433, in draw_networkx_nodes(G, pos, nodelist, node_size, node_color, node_shape, alpha, cmap, vmin, vmax, ax, linewidths, edgecolors, label, margins)
430 node_color = apply_alpha(node_color, alpha, nodelist, cmap, vmin, vmax)
431 alpha = None
--> 433 node_collection = ax.scatter(
434 xy[:, 0],
435 xy[:, 1],
436 s=node_size,
437 c=node_color,
438 marker=node_shape,
439 cmap=cmap,
440 vmin=vmin,
441 vmax=vmax,
442 alpha=alpha,
443 linewidths=linewidths,
444 edgecolors=edgecolors,
445 label=label,
446 )
447 ax.tick_params(
448 axis="both",
449 which="both",
(...)
453 labelleft=False,
454 )
456 if margins is not None:
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/matplotlib/__init__.py:1412, in _preprocess_data.<locals>.inner(ax, data, *args, **kwargs)
1409 @functools.wraps(func)
1410 def inner(ax, *args, data=None, **kwargs):
1411 if data is None:
-> 1412 return func(ax, *map(sanitize_sequence, args), **kwargs)
1414 bound = new_sig.bind(ax, *args, **kwargs)
1415 auto_label = (bound.arguments.get(label_namer)
1416 or bound.kwargs.get(label_namer))
File /scg/apps/software/jupyter/python_3.9/lib/python3.9/site-packages/matplotlib/axes/_axes.py:4378, in Axes.scatter(self, x, y, s, c, marker, cmap, norm, vmin, vmax, alpha, linewidths, edgecolors, plotnonfinite, **kwargs)
4374 s = np.ma.ravel(s)
4375 if (len(s) not in (1, x.size) or
4376 (not np.issubdtype(s.dtype, np.floating) and
4377 not np.issubdtype(s.dtype, np.integer))):
-> 4378 raise ValueError(
4379 "s must be a scalar, "
4380 "or float array-like with the same size as x and y")
4382 # get the original edgecolor the user passed before we normalize
4383 orig_edgecolor = edgecolors
ValueError: s must be a scalar, or float array-like with the same size as x and y
</code></pre>
<p>Data:</p>
<p><code>gsea_res.res2d[1:5]</code></p>
<pre><code>pd.DataFrame({'Name': {1: 'gsea', 2: 'gsea', 3: 'gsea', 4: 'gsea'},
'Term': {1: 'Canonical and non-canonical Notch signaling WP3845',
2: 'miR-509-3p alteration of YAP1/ECM axis WP3967',
3: 'Focal Adhesion WP306',
4: 'Striated Muscle Contraction Pathway WP383'},
'ES': {1: 0.6969705540478509,
2: 0.81212075311302,
3: 0.48417248403127733,
4: 0.7654724255001505},
'NES': {1: 2.1730818778829937,
2: 2.1661621833197726,
3: 2.1526281774632117,
4: 2.14929157606087},
'NOM p-val': {1: 0.0, 2: 0.0, 3: 0.0, 4: 0.0},
'FDR q-val': {1: 0.0017047353760445683,
2: 0.0011364902506963788,
3: 0.0014206128133704735,
4: 0.0015910863509749303},
'FWER p-val': {1: 0.003, 2: 0.003, 3: 0.004, 4: 0.006},
'Tag %': {1: '7/23', 2: '8/17', 3: '61/153', 4: '8/21'},
'Gene %': {1: '2.89%', 2: '7.74%', 3: '21.87%', 4: '1.76%'},
'Lead_genes': {1: 'MFAP2;HEY1;DNER;NOTCH2;NOTCH3;JAG1;RRAS',
2: 'COL3A1;FN1;COL1A1;PBX3;SPARC;COL5A1;TEAD1;EDNRA',
3: 'MYL12A;COL1A2;FN1;CAV2;ACTG1;CAV1;MYL9;LAMA2;COL6A2;COL1A1;KDR;ITGAV;CAPN2;EGFR;RHOA;COL5A2;PDGFD;PTEN;CTNNB1;VCL;MAPK10;TLN2;HGF;PARVA;SPP1;TLN1;VEGFC;EGF;THBS3;ACTN1;FLNC;COL4A1;PAK1;COL4A2;FLNB;ILK;SOS1;PDGFC;BAD;HRAS;MET;LAMB2;PIK3R1;CDC42;SRC;RAP1B;BIRC2;PPP1R12A;ITGB4;PPP1CB;MYL12B;ITGB1;PIK3CD;VEGFB;COL5A3;JUN;PIK3R3;DIAPH1;CCND2;ACTB;ACTN4',
4: 'ACTA2;TPM2;ACTG1;MYL9;TPM4;VIM;TPM1;TNNC1'}})
</code></pre>
|
<python><networkx>
|
2023-04-10 06:51:46
| 1
| 1,545
|
Anon
|
75,974,839
| 3,423,825
|
How to fix wrong number of constraints in Django with psql prompt?
|
<p>I'm unable to migrate a model due to the following exception and I'm trying to fix the issue with <code>psql</code>.</p>
<pre><code>ValueError: Found wrong number (2) of constraints for accounting_inventory(trade_id)
</code></pre>
<p><strong>models.py</strong></p>
<pre><code>class Inventory(TimestampedModel):
    class Type(models.TextChoices):
        SNAPSHOT = 0, "Snapshot"
        ASSET = 1, "Asset"
        CONTRACT = 2, "Contract"

    id = models.UUIDField(default=uuid.uuid4, primary_key=True, editable=False)
    account = models.ForeignKey(Account, on_delete=models.CASCADE, null=True)
    trade = models.OneToOneField(Trade, on_delete=models.CASCADE, null=True)
    type = models.CharField(max_length=64, choices=Type.choices)
    datetime = models.DateTimeField()
    assets = models.JSONField(default=dict, null=True)
    contracts = models.JSONField(default=dict, null=True)

    class Meta:
        verbose_name_plural = "Inventory"
        unique_together = [('trade',), ]

    def __str__(self):
        return str(self.id)[-4:]
</code></pre>
<p>Now, Django says there is no migration:</p>
<pre><code>root@1f3de954c6e0:/app# python manage.py makemigrations
No changes detected
</code></pre>
<p>The problem is that when I remove the <code>unique_together</code>, Django generates this migration file, but it can't migrate because of the exception above.</p>
<p><strong>migration file</strong></p>
<pre><code>root@1f3de954c6e0:/app# more accounting/migrations/0022_alter_inventory_unique_together.py
# Generated by Django 4.0.6 on 2023-04-10 06:40
from django.db import migrations
class Migration(migrations.Migration):

    dependencies = [
        ('accounting', '0021_alter_inventory_unique_together_inventory_account_and_more'),
    ]

    operations = [
        migrations.AlterUniqueTogether(
            name='inventory',
            unique_together=set(),
        ),
    ]
</code></pre>
<p>I connected with <code>psql</code> and found this when listing constraints:</p>
<pre><code>defaultdb=> SELECT con.*
defaultdb-> FROM pg_catalog.pg_constraint con
defaultdb-> INNER JOIN pg_catalog.pg_class rel
defaultdb-> ON rel.oid = con.conrelid
defaultdb-> INNER JOIN pg_catalog.pg_namespace nsp
defaultdb-> ON nsp.oid = connamespace
defaultdb-> WHERE nsp.nspname = 'public'
defaultdb-> AND rel.relname = 'accounting_inventory';
oid | conname | connamespace | contype | condeferrable | condeferred | convalidated | conrelid | contypid | conindid | conparentid | confrelid | confupdtype | confdeltype | confmatchtype | conislocal | coninhcount | connoinherit | conkey | confkey | conpfeqop | conppeqop | conffeqop | confdelsetcols | conexclop | conbin
-------+----------------------------------------------------------------+--------------+---------+---------------+-------------+--------------+----------+----------+----------+-------------+-----------+-------------+-------------+---------------+------------+-------------+--------------+--------+---------+-----------+-----------+-----------+----------------+-----------+--------
45847 | accounting_inventory_account_id_d03d3e05_fk_trading_account_id | 2200 | f | t | t | t | 20082 | 0 | 17118 | 0 | 17112 | a | a | s | t | 0 | t | {24} | {1} | {410} | {410} | {410} | | |
20288 | accounting_inventory_trade_id_5cf7c1af_fk_accounting_trade_id | 2200 | f | t | t | t | 20082 | 0 | 20272 | 0 | 20267 | a | a | s | t | 0 | t | {8} | {3} | {2972} | {2972} | {2972} | | |
20106 | accounting_inventory_trade_id_5cf7c1af_uniq | 2200 | u | f | f | t | 20082 | 0 | 20105 | 0 | 0 | | | | t | 0 | t | {8} | | | | | | |
20089 | accounting_inventory_trade_id_key | 2200 | u | f | f | t | 20082 | 0 | 20088 | 0 | 0 | | | | t | 0 | t | {8} | | | | | | |
43164 | accounting_inventory_uuid_110c3781_pk | 2200 | p | f | f | t | 20082 | 0 | 43163 | 0 | 0 | | | | t | 0 | t | {23} | | | | | | |
(5 rows)
</code></pre>
<p>And this:</p>
<pre><code>defaultdb=> \d+ accounting_inventory
Table "public.accounting_inventory"
Column | Type | Collation | Nullable | Default | Storage | Compression | Stats target | Description
-------------+--------------------------+-----------+----------+---------+----------+-------------+--------------+-------------
dt_created | timestamp with time zone | | | | plain | | |
dt_modified | timestamp with time zone | | | | plain | | |
trade_id | uuid | | | | plain | | |
type | character varying(64) | | not null | | extended | | |
datetime | timestamp with time zone | | not null | | plain | | |
assets | jsonb | | | | extended | | |
contracts | jsonb | | | | extended | | |
id | uuid | | not null | | plain | | |
account_id | bigint | | | | plain | | |
Indexes:
"accounting_inventory_uuid_110c3781_pk" PRIMARY KEY, btree (id)
"accounting_inventory_account_id_d03d3e05" btree (account_id)
"accounting_inventory_trade_id_5cf7c1af_uniq" UNIQUE CONSTRAINT, btree (trade_id)
"accounting_inventory_trade_id_key" UNIQUE CONSTRAINT, btree (trade_id)
Foreign-key constraints:
"accounting_inventory_account_id_d03d3e05_fk_trading_account_id" FOREIGN KEY (account_id) REFERENCES trading_account(id) DEFERRABLE INITIALLY DEFERRED
"accounting_inventory_trade_id_5cf7c1af_fk_accounting_trade_id" FOREIGN KEY (trade_id) REFERENCES accounting_trade(id) DEFERRABLE INITIALLY DEFERRED
Access method: heap
</code></pre>
<p>What should I do to fix the problem? I notice there are two unique constraints in the db for <code>trade</code>:</p>
<pre><code>accounting_inventory_trade_id_5cf7c1af_uniq
accounting_inventory_trade_id_key
</code></pre>
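<p>Since Django's schema editor expects exactly one unique constraint on <code>trade_id</code> and there are two, one way out — an assumption, verify on a backup first and make sure nothing else depends on the constraint you drop — is to remove one of the duplicates by hand in psql so Django can find a single match:</p>

```sql
-- Hypothetical fix: drop one of the two duplicate UNIQUE constraints
-- on accounting_inventory(trade_id). The constraint name is taken from
-- the \d+ output above; confirm with \d+ that only one remains afterwards.
ALTER TABLE accounting_inventory
    DROP CONSTRAINT accounting_inventory_trade_id_key;
```

<p>After that, re-running <code>python manage.py migrate</code> should no longer hit the "wrong number (2) of constraints" assertion, since only <code>accounting_inventory_trade_id_5cf7c1af_uniq</code> is left.</p>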
|
<python><django><postgresql><psql>
|
2023-04-10 06:42:08
| 2
| 1,948
|
Florent
|
75,974,811
| 4,420,797
|
Fine-Tune/Train EASY OCR on any language/Korean handwritten dataset
|
<p>I would like to fine-tune the EASY OCR library on the Korean handwritten samples, I am assuming that the pre-trained model is already trained on Korean and English samples.</p>
<p>My idea is to enhance EasyOCR's accuracy on Korean handwriting. How can I achieve this? I know how to train custom models, but due to the large size of the English datasets, I don't want to train on Korean and English from scratch. I already have 10M Korean handwritten images.</p>
<p><strong>Easy OCR Custom Training from Scratch</strong></p>
<pre><code>https://github.com/JaidedAI/EasyOCR/blob/master/custom_model.md
</code></pre>
|
<python><python-3.x><pytorch><ocr><easyocr>
|
2023-04-10 06:37:57
| 1
| 2,984
|
Khawar Islam
|
75,974,776
| 4,343,712
|
getting speed issues with tcp communication and getting data truncated problem with udp communication
|
<p>I need to send a huge number of logs in JSON format to a program that examines them for issues. I tried using UDP first: data is sent and received instantly, but I lose about 30% of it. (I discard any incomplete JSON.)</p>
<p>Then I tried an answer from Stack Overflow:
<a href="https://stackoverflow.com/questions/68425239/how-to-handle-multithreading-with-sockets-in-python/68425926#68425926">How to handle multithreading with sockets in Python?</a></p>
<pre><code>import socket
import threading
import time

def handle_client(conn, addr):
    print("[thread] starting")

    # recv message
    message = conn.recv(1024)
    message = message.decode()
    print("[thread] client:", addr, 'recv:', message)

    # simulate longer work
    time.sleep(5)

    # send answer
    message = "Bye!"
    message = message.encode()
    conn.send(message)
    print("[thread] client:", addr, 'send:', message)

    conn.close()
    print("[thread] ending")

host = '0.0.0.0'
port = 8080

s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)  # solution for "[Error 89] Address already in use". Use before bind()
s.bind((host, port))
s.listen(1)

all_threads = []

try:
    while True:
        print("Waiting for client")
        conn, addr = s.accept()
        print("Client:", addr)

        t = threading.Thread(target=handle_client, args=(conn, addr))
        t.start()

        all_threads.append(t)
except KeyboardInterrupt:
    print("Stopped by Ctrl+C")
finally:
    if s:
        s.close()
    for t in all_threads:
        t.join()
<p>This solves the data loss problem, but it is very slow. I suspect the sending side is the problem, because I keep opening and closing the socket in order to send each row of the CSV.</p>
<pre><code>import csv
import json
import socket
import struct

def send_data(conn, data):
    size = len(data)
    size_in_4_bytes = struct.pack('I', size)
    conn.send(size_in_4_bytes)
    conn.send(data)

def recv_data(conn):
    size_in_4_bytes = conn.recv(4)
    size = struct.unpack('I', size_in_4_bytes)
    size = size[0]
    data = conn.recv(size)
    return data

def logLoop(ip, port, logPath):
    # sock = socket.socket()
    with open(logPath, newline='') as csvFile:
        cReader = csv.reader((line.replace('\0', '') for line in csvFile), delimiter=',')
        next(cReader)
        for r in cReader:
            sent = 0
            while not sent:
                try:
                    sock = socket.socket()
                    # sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                    sock.connect((ip, port))
                    text = json.dumps(r)
                    data = text.encode()
                    send_data(sock, data)
                    sock.close()
                    sent = 1
                except (ConnectionRefusedError, ConnectionResetError, BrokenPipeError) as error:
                    print("failed")
                    pass
    return
</code></pre>
<p>I read some tutorials on socket communication, and that is all I know. Any advice on sending all logs without losing data, at a reasonable speed?</p>
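<p>What usually fixes the speed problem is opening the TCP connection once and sending every row over it with a length prefix, instead of reconnecting per row. The following is a sketch (helper names like <code>send_framed</code> are mine, not from any library), using <code>socket.socketpair()</code> so it runs self-contained:</p>

```python
import json
import socket
import struct

def send_framed(conn, data: bytes) -> None:
    # 4-byte big-endian length prefix, then the payload
    conn.sendall(struct.pack("!I", len(data)) + data)

def recv_exact(conn, n: int) -> bytes:
    # recv() may return fewer bytes than asked for; loop until we have n
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_framed(conn) -> bytes:
    (size,) = struct.unpack("!I", recv_exact(conn, 4))
    return recv_exact(conn, size)

# Demo over an in-process socket pair: one connection, many messages.
a, b = socket.socketpair()
rows = [["log", str(i)] for i in range(3)]
for row in rows:
    send_framed(a, json.dumps(row).encode())
received = [json.loads(recv_framed(b).decode()) for _ in rows]
print(received)
a.close(); b.close()
```

<p>The receive loop matters: a single <code>conn.recv(size)</code>, as in the snippet above, can legally return fewer than <code>size</code> bytes on TCP, which is another source of truncated JSON.</p>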
|
<python><sockets><tcp><udp>
|
2023-04-10 06:31:12
| 0
| 401
|
user4343712
|
75,974,655
| 2,964,927
|
How to reconstruct (changed) sys.argv from argparse
|
<p>I have an argparse parser that, after parsing, does some preprocessing: it changes and removes fields, etc. The result is then passed to a <code>main</code> that expects <code>sys.argv</code>-style arguments. Is there a preferred way to reconstruct a new <code>sys.argv</code> from the adapted fields?</p>
<pre><code>args, unknown = parser.parse_known_args()
args.new = args.old + "new"
main(args.get_sys_argv())
</code></pre>
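<p>There is no built-in <code>args.get_sys_argv()</code>; one common workaround — a sketch, assuming every value is a simple scalar and every dest maps back to a <code>--&lt;dest&gt;</code> option — is to rebuild the flag list from <code>vars(args)</code>:</p>

```python
import argparse

def namespace_to_argv(args: argparse.Namespace) -> list:
    # Rebuild a flat ["--key", "value", ...] list from the (possibly
    # modified) namespace. Assumes each dest corresponds to "--dest".
    argv = []
    for key, value in vars(args).items():
        argv.append(f"--{key}")
        argv.append(str(value))
    return argv

parser = argparse.ArgumentParser()
parser.add_argument("--old", default="base")
args, unknown = parser.parse_known_args([])
args.new = args.old + "new"      # post-parse modification
print(namespace_to_argv(args))
```

<p>Flags without values (store_true), lists, and positionals need special-casing, so this only works cleanly for flat <code>--key value</code> interfaces.</p>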
|
<python><argparse>
|
2023-04-10 06:05:53
| 2
| 26,345
|
borgr
|
75,974,539
| 10,035,190
|
How to fetch all links from HTML without an anchor tag?
|
<p>I want to fetch all links from the page in the code below, and in particular full links like <a href="https://api.somthing.com/v1/companies/" rel="nofollow noreferrer">https://api.somthing.com/v1/companies/</a>. All the regexes I found online only fetch the base URL, like <a href="https://api.somthing.com" rel="nofollow noreferrer">https://api.somthing.com</a>.</p>
<pre><code>import requests
import re
from bs4 import BeautifulSoup
url='https://www.linkdin.com/'
x = requests.get(url)
html_doc=x.text
soup = BeautifulSoup(html_doc,"html.parser" )
print(soup)
</code></pre>
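<p>For what it's worth, a regex that keeps the path part (not just the domain) can be run over the raw HTML, which also catches URLs outside anchor tags. A sketch on a made-up fragment (the pattern may need tuning for the real page):</p>

```python
import re

# Made-up HTML fragment standing in for the fetched page
html_doc = '''
<a href="https://api.somthing.com">home</a>
<script>var u = "https://api.somthing.com/v1/companies/";</script>
'''

# Match http(s) URLs up to the next quote, whitespace, or angle bracket,
# so paths like /v1/companies/ are kept instead of being cut off.
links = re.findall(r'https?://[^\s"\'<>]+', html_doc)
print(links)
```

<p>Running the pattern on <code>x.text</code> (the raw response) rather than on parsed anchor tags is what picks up links embedded in scripts or plain text.</p>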
|
<python><beautifulsoup><html-parsing>
|
2023-04-10 05:40:04
| 1
| 930
|
zircon
|
75,974,202
| 654,019
|
error trying to train a model using ultralytics yolo 8 model
|
<p>I am trying to train Yolo 8 using a sample data set as explained here: <a href="https://learnopencv.com/train-yolov8-on-custom-dataset/#The-Pothole-Detection-Dataset-to-Train-YOLOv8" rel="nofollow noreferrer">https://learnopencv.com/train-yolov8-on-custom-dataset/#The-Pothole-Detection-Dataset-to-Train-YOLOv8</a></p>
<p>The yaml file that used is as follow:</p>
<pre><code>path: d:/MyData/Yolo8_tutorial/pothole_dataset_v8
train: 'train/images'
val: 'valid/images'
# class names
names:
0: 'pothole'
</code></pre>
<p>and the code that I have is:</p>
<pre><code># Load the model.
model = YOLO('yolov8n.pt')

# Training.
results = model.train(
    data=pothole_yaml_file,
    epochs=1,
    name='yolov8n_v8_50e'
)
</code></pre>
<p>I am running this code on a jupyter notebook.</p>
<p>When I run this code, I get the following error after about 7 minutes, so it seems the application does start the training loop:</p>
<pre><code>Validating d:\MyData\Yolo8_tutorial\runs\detect\yolov8n_v8_50e7\weights\best.pt...
Ultralytics YOLOv8.0.72 Python-3.9.0 torch-2.0.0+cu118 CUDA:0 (NVIDIA GeForce GTX 1050, 3072MiB)
Model summary (fused): 168 layers, 3005843 parameters, 0 gradients, 8.1 GFLOPs
FileNotFoundError: Dataset 'pathole_data.ymal' for task=detect not found
</code></pre>
<p>What is the error and how can I fix it?</p>
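<p>Note the error message itself: <code>Dataset 'pathole_data.ymal' for task=detect not found</code> — the name being passed looks misspelled ("pathole", ".ymal"). A generic, YOLO-agnostic sanity check is to verify the path exists before calling <code>model.train</code>; the helper name below is mine:</p>

```python
import os
import tempfile
from pathlib import Path

def check_dataset_file(path):
    # Fail early, with a clear message, if the YAML path is wrong.
    p = Path(path)
    if not p.exists():
        raise FileNotFoundError(f"dataset config not found: {p.resolve()}")
    return p

# Demo with a temporary stand-in for pothole_data.yaml
with tempfile.TemporaryDirectory() as d:
    yaml_path = os.path.join(d, "pothole_data.yaml")
    Path(yaml_path).write_text("path: .\n")
    ok = check_dataset_file(yaml_path)                            # passes
    try:
        check_dataset_file(os.path.join(d, "pathole_data.ymal"))  # misspelled
    except FileNotFoundError as e:
        print("caught:", e)
```

<p>If the check passes for the value of <code>pothole_yaml_file</code>, the remaining suspects are the paths inside the YAML itself.</p>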
|
<python><artificial-intelligence><object-detection><yolo><yolov8>
|
2023-04-10 04:07:02
| 1
| 18,400
|
mans
|
75,974,118
| 3,018,860
|
Arithmetic operation in Python returns weird output
|
<p>I'm getting weird results in Python with a very simple operation. In my case, I'm making a fun terminal game to practice operations with fractions, and the game works fine until Python returns too many decimals. For example:</p>
<pre class="lang-py prettyprint-override"><code>res1 =(3/4)-(6/5)
res2 = -9/20
print(res1)
print(res2)
</code></pre>
<p>If you compute (3/4) - (6/5) by hand, you get -9/20 (as a fraction). So both operations should return the same decimal result. However, as you can see, I get the following:</p>
<pre><code>res1 = -0.44999999999999996
res2 = -0.45
</code></pre>
<p>At the moment I'm using the function <code>round()</code> with 2 decimals. However, I would like to know if there's a better way to fix this.</p>
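<p>Since the game is about fractions anyway, the standard-library <code>fractions.Fraction</code> sidesteps binary floating-point rounding entirely — the arithmetic stays exact, so no <code>round()</code> is needed:</p>

```python
from fractions import Fraction

# Exact rational arithmetic: 3/4 - 6/5 = 15/20 - 24/20 = -9/20
res1 = Fraction(3, 4) - Fraction(6, 5)
res2 = Fraction(-9, 20)

print(res1)          # -9/20
print(res1 == res2)  # True
print(float(res1))   # -0.45
```

<p>Comparisons like <code>res1 == res2</code> are now exact, and converting to <code>float</code> only at display time keeps the internal state clean.</p>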
|
<python><math><floating-point><precision>
|
2023-04-10 03:37:01
| 0
| 2,834
|
Unix
|
75,974,090
| 13,138,455
|
psutil error in building python Dockerfile
|
<p>I built the Dockerfile below, using a Python base image, for my Python-heavy project:</p>
<pre><code>FROM python:3.11-slim-buster
# Update and install system packages
RUN apt-get update -y && \
apt-get install --no-install-recommends -y -q \
git libpq-dev python-dev && \
apt-get clean && \
rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
# Copy the requirements.txt file
COPY requirements.txt .
# Install Python dependencies using pip
RUN python3.11 -m pip install --no-cache-dir --upgrade pip \
&& python3.11 -m pip install --no-cache-dir -r requirements.txt
EXPOSE 9700
WORKDIR /my_app
</code></pre>
<p><strong>requirements.txt</strong> include</p>
<pre><code>snowflake-connector-python==3.0.2
DataProflier==0.8.8
</code></pre>
<p>When I run this Dockerfile I am getting the error:</p>
<pre><code>note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for psutil
Failed to build psutil
ERROR: Could not build wheels for psutil, which is required to install
pyproject.toml-based projects
</code></pre>
<p>Based on other posts, I tried lowering the Python image/pip version, but I still get the same error. I observed that I only get this error while installing certain packages from requirements.txt, like <strong>DataProfiler</strong>.</p>
<p><strong>Edit:</strong></p>
<p>Even after upgrading the setup tools, I am still getting the below error</p>
<pre><code>#0 10.85 error: command 'gcc' failed: No such file or directory
#0 10.85 [end of output]
#0 10.85
#0 10.85 note: This error originates from a subprocess, and is likely not a problem with pip.
#0 10.85 ERROR: Failed building wheel for python-snappy
#0 10.85 Running setup.py clean for python-snappy
#0 11.06 Failed to build psutil python-snappy
#0 11.06 ERROR: Could not build wheels for psutil, which is required to install pyproject.toml-based projects
------
failed to solve: executor failed running [/bin/sh -c python3.11 -m pip install --no-cache-dir --upgrade pip && python3.11 -m pip install -U setuptools && python3.11 -m pip install --no-cache-dir -r requirements.txt]: exit code: 1
</code></pre>
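<p>The second log is explicit about the cause: <code>command 'gcc' failed: No such file or directory</code> — the slim image ships no compiler, and pip falls back to building psutil/python-snappy from source. A sketch of the usual fix is to add the build toolchain to the image; <code>libsnappy-dev</code> is an assumption (python-snappy typically needs it), so trim the package list to what your build actually reports missing:</p>

```dockerfile
FROM python:3.11-slim-buster

# Install the build toolchain so source builds (psutil, python-snappy) succeed.
RUN apt-get update -y && \
    apt-get install --no-install-recommends -y -q \
        git gcc g++ libpq-dev python3-dev libsnappy-dev && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*

COPY requirements.txt .
RUN python3.11 -m pip install --no-cache-dir --upgrade pip setuptools wheel \
    && python3.11 -m pip install --no-cache-dir -r requirements.txt
```

<p>A multi-stage build can keep the final image small by dropping gcc after the wheels are built.</p>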
|
<python><docker>
|
2023-04-10 03:23:42
| 3
| 567
|
R0bert
|
75,974,053
| 8,968,910
|
MessageToDict: How to extract keys in multiple dictionaries?
|
<pre><code>from google.protobuf.json_format import MessageToDict
audio={'results': [{'alternatives':
[{'transcript': 'His name is Charlie', 'confidence': 0.9259988}], 'resultEndTime': '7.700s', 'languageCode': 'en'},
{'alternatives':
[{'transcript': 'and he lives in xxx street', 'confidence': 0.9259988}], 'resultEndTime': '11.900s', 'languageCode': 'en'}
],
'totalBilledTime': '14s',
'requestId': '68687945678899765555'}
</code></pre>
<p>Hi! I want to extract 'transcript' and 'resultEndTime' from my dictionary. When I did</p>
<pre><code>audio['results'][0]['alternatives'][0]['transcript']
</code></pre>
<p>it only prints "His name is Charlie", but what I want is "His name is Charlie and he lives in xxx street". Sometimes there are more than 2 dictionaries so I do not know how to merge them.</p>
<p>I also only want to extract the last 'resultEndTime' which is '11.900s'</p>
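<p>A sketch that loops over all <code>results</code> entries (however many there are), joins every first-alternative transcript, and keeps the last <code>resultEndTime</code>:</p>

```python
audio = {'results': [
    {'alternatives': [{'transcript': 'His name is Charlie', 'confidence': 0.9259988}],
     'resultEndTime': '7.700s', 'languageCode': 'en'},
    {'alternatives': [{'transcript': 'and he lives in xxx street', 'confidence': 0.9259988}],
     'resultEndTime': '11.900s', 'languageCode': 'en'},
]}

# Join the first alternative's transcript from every result
transcript = " ".join(r['alternatives'][0]['transcript'] for r in audio['results'])

# The last result's end time is the overall end time
end_time = audio['results'][-1]['resultEndTime']

print(transcript)  # His name is Charlie and he lives in xxx street
print(end_time)    # 11.900s
```

<p>The generator expression scales to any number of result dictionaries, so no manual merging is needed.</p>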
|
<python><dictionary>
|
2023-04-10 03:13:12
| 1
| 699
|
Lara19
|
75,974,013
| 4,780,574
|
Python Selenium Geckodriver autoinstall: permission denied when getting Firefox version
|
<p>I was running a python script that has been fine on several different distros, but is giving me trouble on Gentoo. The problem command is <code>geckodriver_autoinstaller.install()</code>:</p>
<pre><code>Traceback (most recent call last):
File "/home/james/Gits/News_Scraper/main.py", line 5, in <module>
from main_functions import *
File "/home/james/Gits/News_Scraper/main_functions.py", line 25, in <module>
from scraping_functions import *
File "/home/james/Gits/News_Scraper/scraping_functions.py", line 19, in <module>
geckodriver_autoinstaller.install()
File "/home/james/Venvs/scraper_venv/lib/python3.10/site-packages/geckodriver_autoinstaller/__init__.py", line 15, in install
geckodriver_filepath = utils.download_geckodriver(cwd)
File "/home/james/Venvs/scraper_venv/lib/python3.10/site-packages/geckodriver_autoinstaller/utils.py", line 148, in download_geckodriver
firefox_version = get_firefox_version()
File "/home/james/Venvs/scraper_venv/lib/python3.10/site-packages/geckodriver_autoinstaller/utils.py", line 88, in get_firefox_version
with subprocess.Popen(['firefox', '--version'], stdout=subprocess.PIPE) as proc:
File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: 'firefox'
</code></pre>
<p>Using the Traceback I ran:</p>
<pre><code>>>> subprocess.Popen(['firefox', '--version'], stdout=subprocess.PIPE)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: 'firefox'
</code></pre>
<p>I checked the firefox command in the shell and got a permission denied error. In my system, firefox is called <code>firefox-bin</code>, so I get why it didn't work, but I would have expected the error to be "command not found" instead of "permission denied". I am going on the assumption that this is the same "permission denied" that I get from within Python.</p>
<pre><code>~$ which firefox
firefox not found
~$ which firefox-bin
/usr/bin/firefox-bin
</code></pre>
<p>I thought that maybe I could just alias <code>firefox-bin</code> to <code>firefox</code>, but no luck:</p>
<pre><code>~$ which firefox
firefox: aliased to firefox-bin
</code></pre>
<pre><code>>>> import subprocess
>>> subprocess.Popen(['firefox', '--version'], stdout=subprocess.PIPE)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/lib/python3.10/subprocess.py", line 971, in __init__
self._execute_child(args, executable, preexec_fn, close_fds,
File "/usr/lib/python3.10/subprocess.py", line 1847, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
PermissionError: [Errno 13] Permission denied: 'firefox'
>>>
</code></pre>
<p>Was I on the wrong track in thinking that this "permission denied" error displayed in Python was due to the name of the firefox executable being <code>firefox-bin</code>? Or perhaps setting an alias in the shell was not sufficient?</p>
<p>Edit: If I substitute <code>firefox-bin</code> for <code>firefox</code> in the python command, I get no error, but it also returns None.</p>
<pre><code>>>> subprocess.Popen(['firefox-bin', '--version'], stdout=subprocess.PIPE)
<Popen: returncode: None args: ['firefox-bin', '--version']>
</code></pre>
<p>In any case, the command I am using <code>geckodriver_autoinstaller.install()</code> doesn't seem to have a way to specify an alternate name for the binary, so maybe I should just use a different command.</p>
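<p>A shell alias won't help here, because <code>subprocess</code> does not go through the shell's alias machinery. A stdlib-only way to cope with the binary having a different name — a sketch, since <code>geckodriver_autoinstaller</code> itself hardcodes <code>firefox</code> — is to resolve the executable with <code>shutil.which</code> over candidate names:</p>

```python
import shutil

def find_firefox(candidates=("firefox", "firefox-bin", "firefox-esr")):
    # shutil.which searches PATH the way the shell does for real
    # executables -- but, unlike the shell, it never sees aliases.
    for name in candidates:
        path = shutil.which(name)
        if path:
            return path
    return None

print(find_firefox())  # e.g. /usr/bin/firefox-bin on Gentoo, or None
```

<p>Since the lookup inside <code>geckodriver_autoinstaller</code> can't be renamed, the usual workaround is a symlink on PATH, e.g. <code>ln -s /usr/bin/firefox-bin ~/.local/bin/firefox</code>, so that <code>Popen(['firefox', ...])</code> resolves.</p>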
|
<python><selenium-webdriver><permissions><alias><geckodriver>
|
2023-04-10 02:57:18
| 0
| 814
|
Stonecraft
|
75,973,925
| 7,339,624
|
Can I use pytorch amp functions, GradScaler and autocast, on CPU?
|
<p>I am currently trying to debug my code and would like to run it on the CPU, but I am using <code>torch.cuda.amp.autocast()</code> and <code>torch.cuda.amp.GradScaler()</code>, which are part of the Automatic Mixed Precision package under <code>torch.cuda</code> and therefore run on the GPU.</p>
<p>Is there a way to use these functions on the CPU? If not, what alternative approaches can I use to achieve similar functionality on the CPU? Thank you for your help.</p>
|
<python><pytorch>
|
2023-04-10 02:30:53
| 1
| 4,337
|
Peyman
|
75,973,809
| 3,398,324
|
Save dataframe as txt with specific delimiters
|
<p>I would like to save my dataframe in txt format with specific delimiters (libsvm format), to look like this:</p>
<pre><code>1 qid:0 0:1.465648768921554 1:-0.2257763004865357 2:0.06752820468792384 3:-1.424748186213457 4:-0.5443827245251827
1 qid:0 0:1.465648768921554 1:-0.2257763004865357 2:0.06752820468792384 3:-1.424748186213457 4:-0.5443827245251827
2 qid:0 0:0.7384665799954104 1:0.1713682811899705 2:-0.1156482823882405 3:-0.3011036955892888 4:-1.478521990367427
</code></pre>
<p>Notice that the first 2 columns are separated by a space, and the remaining values are written with colons, where the integer before the colon is an identifier of that column.</p>
<p>This is my current dataset:</p>
<pre><code>data = {'label': [2, 3, 2],
        'qid': ['qid:0', 'qid:1', 'qid:0'],
        '0': [0, 0, 0],
        '0': [0.4967, 0.4967, 0.4967],
        '1': [1, 1, 1],
        '1': [0.4967, 0.4967, 0.4967],
        '2': [2, 2, 2],
        '2': [0.4967, 0.4967, 0.4967],
        '3': [3, 3, 3],
        '2': [0.4967, 0.4967, 0.4967],
        '4': [4, 4, 4]}
df = pd.DataFrame(data)
</code></pre>
<p>Is there a way to save this as txt to match that format exactly?</p>
<p>For context, my machine learning model was trained on a dataset in this specific txt format, and I need to match it to use it for my own dataset.</p>
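<p>A sketch of the formatting step itself, shown on plain lists so each piece is visible; with pandas you would iterate <code>df.itertuples()</code> the same way. The column order and <code>qid</code> handling are assumptions based on the sample above:</p>

```python
# Hypothetical rows: (label, query id, feature values in column order)
rows = [
    (1, 0, [1.465648768921554, -0.2257763004865357, 0.06752820468792384]),
    (2, 0, [0.7384665799954104, 0.1713682811899705, -0.1156482823882405]),
]

lines = []
for label, qid, feats in rows:
    # "<label> qid:<id> 0:<f0> 1:<f1> ..." -- the libsvm/ranking layout
    feat_str = " ".join(f"{i}:{v}" for i, v in enumerate(feats))
    lines.append(f"{label} qid:{qid} {feat_str}")

text = "\n".join(lines)
print(text)
```

<p>Writing <code>text</code> with a plain <code>open("out.txt", "w")</code> finishes the job; <code>DataFrame.to_csv</code> can't produce the per-column <code>index:value</code> pairs, so manual formatting is the usual route.</p>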
|
<python><pandas><libsvm>
|
2023-04-10 01:56:54
| 1
| 1,051
|
Tartaglia
|
75,973,552
| 9,873,381
|
How can I use the odeint function from torchdiffeq to classify two concentric annuli?
|
<p>I have a dataset where the two classes are distributed on two concentric circles with 0 overlap. How can I use a neural ODE to solve this?</p>
<p>I get the error <code>RuntimeError: shape '[1024, 2]' is invalid for input of size 1024</code> when I run the network below.</p>
<pre><code>class ODEFunc(nn.Module):
    def __init__(self):
        super(ODEFunc, self).__init__()
        hdim = 32
        # define your self.net as an MLP
        self.net = nn.Sequential(
            nn.Linear(2, hdim),
            nn.Tanh(),
            nn.Linear(hdim, hdim),
            nn.Tanh(),
            nn.Linear(hdim, 1)
        )

    def forward(self, t, y):
        return self.net(y)

class Model(nn.Module):
    def __init__(self, odefunc, device="cpu"):
        super(Model, self).__init__()
        self.odefunc = odefunc
        # this is to make a prediction
        self.linear_layer = nn.Linear(2, 1)

    def forward(self, y):
        t_span = torch.linspace(0., 1., 2)
        # call odeint with your odefunc, initial value, and t_span
        pred_y = odeint(self.odefunc, y, t_span)
        # transform your output for the binary classification
        yhat = self.linear_layer(pred_y)
        return yhat
</code></pre>
<p>PS: The size of the training data is <code>[1024, 2]</code>.<br />
PPS: 100% accuracy is not important.</p>
|
<python><pytorch><classification><odeint>
|
2023-04-10 00:25:33
| 1
| 672
|
Skywalker
|
75,973,437
| 14,509,604
|
Pandas | Make a summary table with multiple criteria per value
|
<p>I'm trying to make a summary table like this:</p>
<p><a href="https://i.sstatic.net/uquq7.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/uquq7.png" alt="enter image description here" /></a></p>
<p>From this dataset:</p>
<pre class="lang-py prettyprint-override"><code>
pd.DataFrame(data={"grade":[10,5,9,7],
"sex": ["F", "F", "M", "M"],
"pred_1": [1,0,1,1],
"pred_2": [0,0,1,1],
"pred_3": [0,0,0,1]})
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>grade</th>
<th>sex</th>
<th>pred_1</th>
<th>pred_2</th>
<th>pred_3</th>
</tr>
</thead>
<tbody>
<tr>
<td>10</td>
<td>F</td>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>5</td>
<td>F</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>9</td>
<td>M</td>
<td>1</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>7</td>
<td>M</td>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody>
</table>
</div>
<p>I'm not asking for the whole code, but for some help on how to apply different functions to each column while pivoting and grouping. For example:</p>
<pre class="lang-py prettyprint-override"><code>pd.pivot_table(df, values=pred_cols, index=["sex"] )
</code></pre>
<p>Gives gives me the "sex" data that i'm looking for. But how can I concatenate different aggs, crating some "new indices" like the ones I've showed in the picture above ("mean_grade", "min_grade", etc) and generate a single table.</p>
<p>Thank you in adnvance.</p>
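<p>One way to apply a different function per column in a single grouped table is named aggregation — a sketch, where the <code>mean_grade</code>/<code>min_grade</code> labels mirror the picture and using <code>sum</code> for the <code>pred_*</code> columns is an assumption:</p>

```python
import pandas as pd

df = pd.DataFrame(data={"grade": [10, 5, 9, 7],
                        "sex": ["F", "F", "M", "M"],
                        "pred_1": [1, 0, 1, 1],
                        "pred_2": [0, 0, 1, 1],
                        "pred_3": [0, 0, 0, 1]})

# Named aggregation: new_column=(source_column, function)
summary = df.groupby("sex").agg(
    mean_grade=("grade", "mean"),
    min_grade=("grade", "min"),
    max_grade=("grade", "max"),
    pred_1=("pred_1", "sum"),
    pred_2=("pred_2", "sum"),
    pred_3=("pred_3", "sum"),
)
print(summary)
```

<p>If the picture has the statistics as rows rather than columns, <code>summary.T</code> transposes the result.</p>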
|
<python><pandas>
|
2023-04-09 23:44:44
| 1
| 329
|
juanmac
|
75,973,322
| 1,574,054
|
Cannot create slots-dataclass with __dict__
|
<p>In python, it is possible to create slots classes which do have an explicit <code>__dict__</code> available. However, I am struggling to get this working with dataclasses:</p>
<pre><code>from dataclasses import dataclass

@dataclass(slots=True)
class A:
    __dict__: dict[str, object]

A()
</code></pre>
<p>This raises the following on the last line:</p>
<blockquote>
<p>TypeError: __dict__ must be set to a dictionary, not a 'mappingproxy'</p>
</blockquote>
<p>It is not even clear to me where this exception actually occurs since there is no further information provided. There also seems to be no information regarding this problem on google. What is the problem here? How do I create a slots-dataclass with a <code>__dict__</code>?</p>
|
<python><python-dataclasses>
|
2023-04-09 23:09:15
| 0
| 4,589
|
HerpDerpington
|
75,973,296
| 12,300,981
|
Scipy Minimize, can the inverse hessian be used to eliminate multiple solutions with similar "fun"?
|
<p>Let's say you have a minimization problem:</p>
<pre><code>data_set_1=[[24.714, 24.713, 24.605, 24.607], [25.708, 25.59, 25.557, 25.753], [22.713, 22.654, 22.655, 22.812], [24.231, 24.233, 24.289, 24.109], [24.401, 24.385, 24.396, 24.408], [26.124, 26.107, 25.915, 25.847], [15.341, 15.242, 15.049, 14.847], [22.332, 22.282, 22.324, 22.314], [25.843, 25.835, 25.683, 25.983], [24.4, 24.402, 24.522, 24.409], [20.218, 20.23, 20.212, 20.254], [21.927, 21.955, 21.94, 21.917], [20.697, 20.705, 20.731, 20.692], [21.13, 21.134, 21.196, 21.266], [24.619, 24.55, 24.556, 24.784], [25.294, 25.29, 25.322, 25.337], [24.237, 24.124, 24.06, 24.143], [18.533, 18.603, 18.814, 19.229], [26.235, 26.183, 26.226, 26.137], [25.754, 25.72, 25.658, 25.639], [26.786, 26.716, 26.646, 26.668], [23.863, 23.838, 23.946, 24.034], [21.402, 21.395, 21.401, 21.317], [22.382, 22.386, 22.363, 22.244], [22.435, 22.454, 22.434, 22.579], [20.416, 20.444, 20.439, 20.583], [24.929, 24.943, 24.819, 24.916], [25.839, 25.811, 25.716, 25.752]]
data_set_2=[[0.786, 0.781, 0.766, 0.773], [0.78, 0.773, 0.782, 0.798], [0.983, 0.993, 0.991, 1.031], [0.74, 0.741, 0.756, 0.695], [0.614, 0.608, 0.642, 0.646], [0.659, 0.67, 0.666, 0.675], [0.488, 0.456, 0.438, 0.571], [0.99, 0.979, 0.97, 0.959], [0.806, 0.815, 0.801, 0.818], [0.772, 0.772, 0.756, 0.752], [0.364, 0.373, 0.357, 0.394], [0.735, 0.727, 0.729, 0.705], [0.489, 0.484, 0.495, 0.462], [0.803, 0.806, 0.817, 0.831], [1.018, 1.02, 1.021, 0.993], [0.589, 0.606, 0.599, 0.622], [0.61, 0.612, 0.591, 0.615], [0.973, 0.955, 0.94, 0.956], [0.628, 0.634, 0.643, 0.669], [0.64, 0.656, 0.637, 0.619], [0.732, 0.738, 0.742, 0.743], [0.872, 0.865, 0.859, 0.838], [0.199, 0.205, 0.21, 0.221], [0.783, 0.771, 0.775, 0.727], [1.069, 1.075, 1.064, 1.1], [0.707, 0.708, 0.705, 0.741], [1.07, 1.061, 1.058, 1.034], [0.911, 0.908, 0.91, 0.888]]
import numpy as np
from scipy.optimize import minimize
from scipy.sparse.linalg import lsmr

def get_populations(initial,io):
k,k1,x,y=initial[0],initial[1],initial[2],initial[3]
kx,k1x=k*x,k1*x
ky,k1y=k*y,k1*y
kxy,k1xy=k*x*y,k1*x*y
pF=(np.sqrt((k*k1)**2+(8*io*k*k1)+(8*io*k*k1**2))-(k*k1))/(4*(1+k1))
pC=(((4*io)+(4*io*k1))/(4*(1+k1)**2))-(pF/(1+k1))
pO=k1*pC
pF2=(np.sqrt((kx*k1x)**2+(8*io*kx*k1x)+(8*io*kx*k1x**2))-(kx*k1x))/(4*(1+k1x))
pC2=(((4*io)+(4*io*k1x))/(4*(1+k1x)**2))-(pF2/(1+k1x))
pO2=k1x*pC2
pF3=(np.sqrt((ky*k1y)**2+(8*io*ky*k1y)+(8*io*ky*k1y**2))-(ky*k1y))/(4*(1+k1y))
pC3=(((4*io)+(4*io*k1y))/(4*(1+k1y)**2))-(pF3/(1+k1y))
pO3=k1y*pC3
pF4=(np.sqrt((kxy*k1xy)**2+(8*io*kxy*k1xy)+(8*io*kxy*k1xy**2))-(kxy*k1xy))/(4*(1+k1xy))
pC4=(((4*io)+(4*io*k1xy))/(4*(1+k1xy)**2))-(pF4/(1+k1xy))
pO4=k1xy*pC4
local_chi2=0
for data1,data2 in zip(data_set_1,data_set_2):
populations=np.array([[pF,pO,pC],[pF2,pO2,pC2],[pF3,pO3,pC3],[pF4,pO4,pC4]])
least_squared_fit_1=lsmr(populations/io,np.array([data1])/4*800,maxiter=10)
least_squared_fit_2=lsmr(populations/io,np.array([data2])*800,maxiter=10)
local_chi2+=least_squared_fit_1[3]**2+least_squared_fit_2[3]**2
return local_chi2
global_parameter_solution=minimize(get_populations,args=io, x0=np.array([2e3,2e-3,9,2e1]),bounds=np.array([[0,np.inf],[0,np.inf],[0,np.inf],[0,np.inf]]),options={'maxiter':100000})
</code></pre>
<p>Unfortunately, the landscape for this problem is quite flat, so there are multiple solutions. I'm looking at 2 particular solutions using 2 initial guesses.</p>
<p>Solution using 5e2,2e-2,7,3e1</p>
<pre><code>message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
success: True
status: 0
fun: 1568.1943235398385
x: [ 5.000e+02 1.226e-02 6.638e+00 3.039e+01]
nit: 8
jac: [-2.683e-03 -1.183e-01 -2.933e-03 -1.796e-03]
nfev: 50
njev: 10
hess_inv: <4x4 LbfgsInvHessProduct with dtype=float64>
</code></pre>
<p>With inverse hessian diagonal of</p>
<pre><code>[0.99964765 0.00841953 1.41597902 3.25752296]
</code></pre>
<p>Solution 2 using 2e3,2e-3,9,2e1</p>
<pre><code>message: CONVERGENCE: REL_REDUCTION_OF_F_<=_FACTR*EPSMCH
success: True
status: 0
fun: 1568.1943236577647
x: [ 2.000e+03 9.445e-03 6.398e+00 2.838e+01]
nit: 24
jac: [-1.137e-04 1.100e-01 -1.000e-03 -1.523e-03]
nfev: 370
njev: 74
hess_inv: <4x4 LbfgsInvHessProduct with dtype=float64>
</code></pre>
<p>with inverse hessian diagonal of</p>
<pre><code>[1.00008883e+00 2.07424141e-03 5.98621521e+01 6.83255242e+01]
</code></pre>
<p>The "fun", or sum of residuals, for both solutions is essentially identical. However, the variance (the diagonal of the inverse hessian) for one solution is significantly worse than for the other. Can I eliminate solution number 2 on the basis that the variance for one of the parameters is erroneously high, and thus the other solution is the valid one?</p>
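<p>For reference, the diagonal quoted above can be extracted from the <code>LbfgsInvHessProduct</code> that L-BFGS-B returns; a minimal sketch on a toy problem (note the L-BFGS inverse hessian is a low-rank approximation, so treating its diagonal as a parameter variance is only a rough heuristic near a well-conditioned minimum):</p>

```python
import numpy as np
from scipy.optimize import minimize

# Toy L-BFGS-B run; res.hess_inv is an LbfgsInvHessProduct whose diagonal
# is the variance-like quantity compared in the question.
res = minimize(lambda x: (x[0] - 1.0) ** 2 + 10.0 * (x[1] + 2.0) ** 2,
               x0=np.array([0.0, 0.0]), method="L-BFGS-B")
diag = res.hess_inv.todense().diagonal()
```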
|
<python><numpy><scipy><scipy-optimize>
|
2023-04-09 23:00:27
| 1
| 623
|
samman
|
75,973,288
| 480,118
|
regular expression to group matches and non-matches
|
<p>I have the following string. I want to tokenize it so that there are groups for both the curly-brace matches and the non-matches:</p>
<pre class="lang-py prettyprint-override"><code>str='{field1}somestring{field2}somestring2{feild3}<somestring3>'
</code></pre>
<p>I know I can tokenize the curly braces with:</p>
<pre class="lang-py prettyprint-override"><code>re.findall('({[^}]*})', str)
['{field1}', '{field2}', '{feild3}']
</code></pre>
<p>But what I really want is:</p>
<pre class="lang-py prettyprint-override"><code>['{field1}', 'somestring', '{field2}', 'somestring2', '{feild3}', '<somestring3>']
</code></pre>
<p>I am having trouble doing this. The closest I've come so far is the following:</p>
<pre class="lang-py prettyprint-override"><code>re.findall('(.*?)({[^}]*})(.*?)', str)
[('', '{field1}', ''), ('somestring', '{field2}', ''), ('somestring2', '{feild3}', '')]
</code></pre>
<p>But you can see it creates an array of three groups, with empty strings, and the last token <code><somestring3></code> is not getting tokenized.</p>
<p>Any help would be appreciated.</p>
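<p>For illustration, one common approach (a sketch, not necessarily the only way): <code>re.split</code> with a capturing group keeps the delimiters in the result, which yields exactly the desired token list after filtering out the empty boundary strings:</p>

```python
import re

s = '{field1}somestring{field2}somestring2{feild3}<somestring3>'
# Splitting on a *captured* pattern keeps the matched delimiters in the
# output; drop the empty strings produced at the boundaries.
tokens = [t for t in re.split(r'({[^}]*})', s) if t]
```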
|
<python><regex>
|
2023-04-09 22:56:37
| 1
| 6,184
|
mike01010
|
75,973,048
| 20,947,319
|
How to check if a user is subscribed to a product in Django template
|
<p>I am working on a Django project whereby I want to check if a user is subscribed to a product or not. I have created in my models.py several model instances and I am stuck on how to check if the user is subscribed inside the template. Here is my template where I loop through the fetched data:</p>
<pre><code><ul>
{% for content in content %}
{% if content.model.is_premium and user.is_subscribed %}
<li>Premium</li>
{% else %}
<li>Not premium</li>
{% endif %}
{% endfor %}
</ul>
</code></pre>
<p>Here is my views.py :</p>
<pre><code>@login_required(login_url='/user/login')
def homepage(request):
content = ModelContent.objects.all()
categorys = Category.objects.all()
models = MyModels.objects.all()
suggestions = MyModels.objects.all()[:3]
# profiles = Profile.objects.filter(user__is_creator=True)
context = {"categorys": categorys, "models": models, "content":content, "suggestions":suggestions}
return render(request, 'users/home.html', context)
</code></pre>
<p>And here is the models.py:</p>
<pre><code>User = get_user_model()
class MyModels(models.Model):
owner = models.ForeignKey(Profile, on_delete=models.CASCADE, null=False, blank=False)
name = models.CharField(max_length=500, null=False, blank=False)
username = models.CharField(max_length=500, null=False, blank=False, unique=True)
title = models.CharField(max_length=500, null=False, blank=False)
description = models.TextField(max_length=500, null=False, blank=False)
image = models.ImageField(upload_to='img', blank=True, null=True)
placeholder = models.ImageField(upload_to='img', blank=True, null=True)
sex = models.CharField(max_length=50, choices=SEX_CHOICES, default=NONE)
category = models.ForeignKey(Category, on_delete=models.CASCADE, null=False, blank=False)
content_id = models.UUIDField(default=uuid.uuid4, primary_key=True, unique=True, editable=False)
created = models.DateField(auto_now_add=True)
is_popular = models.BooleanField(default=False)
is_premium = models.BooleanField(default=False)
# posted_content = models.ManyToManyField('ModelContent', related_name='model_content')
def __str__(self):
return self.username
class ModelContent(models.Model):
model = models.ForeignKey(MyModels, on_delete=models.CASCADE, null=False, blank=False )
title = models.CharField(max_length=500, null=False, blank=False)
date = models.DateTimeField(auto_now_add=True)
content_id = models.UUIDField(default=uuid.uuid4, primary_key=True, unique=True, editable=False)
class Meta:
ordering = ['-date']
def __str__(self):
return str(self.title)
class Subscription(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE, related_name='subscriptions_to_product')
model = models.ForeignKey(MyModels, on_delete=models.CASCADE, related_name='subscriptions_from_subscriber')
is_subscribed = models.BooleanField(default=False)
def __str__(self):
return f"{self.user.username} subscribed to {self.model.username}"
</code></pre>
<p>The views.py is inside an app called 'users' while the models.py is inside an app called 'posts'.
I have tried doing this:</p>
<pre><code>{% for content in content %}
{% if content.model.is_premium %}
{% with subscription=request.user.subscriptions_to_product.filter(model_id=content.model_id).first %}
{% if subscription and subscription.is_subscribed %}
<li>User Subscribed</li>
{% else %}
<li>User Not Subscribed</li>
{% endif %}
{% endwith %}
{% else %}
<li>Not premium</li>
{% endif %}
{% endfor %}
</code></pre>
<p>But it did not work.</p>
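<p>One likely reason the attempt above fails: Django templates cannot call <code>filter(...)</code> with arguments. A common workaround (a sketch, with assumed names) is to precompute the set of subscribed model ids in the view and only test membership in the template. Illustrated here with plain data standing in for the querysets:</p>

```python
# In the view, something like (assumed ORM call):
#   subscribed_ids = set(Subscription.objects.filter(
#       user=request.user, is_subscribed=True
#   ).values_list("model_id", flat=True))
# Plain-data stand-in showing the membership test the template would do:
subscribed_ids = {3, 7}
contents = [{"model_id": 3, "is_premium": True},
            {"model_id": 5, "is_premium": True}]
for c in contents:
    c["user_subscribed"] = c["model_id"] in subscribed_ids
```

<p>The template then only needs <code>{% if content.model_id in subscribed_ids %}</code>-style checks on precomputed context values.</p>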
|
<python><django><django-templates><subscription>
|
2023-04-09 21:48:04
| 1
| 446
|
victor
|
75,973,040
| 15,724,084
|
attributeError 'tkinter.app' object has no attribute, where I defined method inside classs
|
<p>My code gives me an error of <code> File "C:\pythonPro\...widgets.py", line 109, in openNewWindow self.btn_submit=self.button_creation(self.newWindow) File "C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\tkinter\__init__.py", line 2383, in __getattr__ return getattr(self.tk, attr) AttributeError: '_tkinter.tkapp' object has no attribute 'button_creation'</code></p>
<p>The error occurs where, inside the class <code>CustomWidgets</code>, I call the function <code>button_creation()</code> (which I expected to be resolved as a method, since it is defined inside the class) from another function, <code>openNewWindow</code>.</p>
<pre><code>from tkinter import *
from tkinter import ttk
class CustomWidgets():
def __init__(self):
self.root=Tk()
menubar = MenuBar(self.root)
self.root.config(menu=menubar)
def button_creation(self,window=None):
self.btn_0=Button(master=window)
self.btn_0.pack(expand='YES')
return self.btn_0
def openNewWindow(self,Mainwindow=None):
self.newWindow = Toplevel(Mainwindow)
self.frmbutton = Frame(master=self.newWindow)
self.frmbutton.place(y=75)
self.btn_submit=self.button_creation(self.newWindow) #Error giving line
self.btn_submit.configure(text='SUBMIT')
class MenuBar(Menu):
def __init__(self, parent):
Menu.__init__(self, parent)
fileMenu = Menu(self, tearoff=False)
self.add_cascade(label="Menu",underline=0, menu=fileMenu)
fileMenu.add_command(label="Email Configuration", underline=1, command=lambda : CustomWidgets.openNewWindow(parent))
if __name__=='__main__':
f=CustomWidgets()
f.root.mainloop()
</code></pre>
<p>I want to understand why Python treats the method <code>button_creation</code> as a missing attribute in <code>self.button_creation(self.newWindow)</code>.</p>
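<p>A hedged sketch of what is going on: the lambda calls <code>CustomWidgets.openNewWindow(parent)</code> on the class, passing the Tk root as <code>self</code>, so <code>self.button_creation</code> is looked up on the Tk object, whose <code>__getattr__</code> raises the error. A minimal, tkinter-free reproduction of the same mistake:</p>

```python
class A:
    def helper(self):
        return 42

    def caller(self):
        return self.helper()   # resolved on whatever object is bound to self


class NotA:
    pass


ok = A().caller()              # normal call: self is an A, helper is found
try:
    A.caller(NotA())           # wrong self: NotA has no attribute 'helper'
    raised = False
except AttributeError:
    raised = True
```

<p>Binding the actual <code>CustomWidgets</code> instance in the lambda, rather than the class, would avoid this.</p>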
|
<python><tkinter>
|
2023-04-09 21:46:41
| 1
| 741
|
xlmaster
|
75,972,956
| 3,398,324
|
Replace row values in Panda columns
|
<p>I have a dataframe column which is a date and I would like to replace it like this:</p>
<p>Current DataFrame:</p>
<pre><code>data = {'date': ['1/1/2022', '1/2/2022,1/3/2022]}
df = pd.DataFrame(data)
</code></pre>
<p>Target DataFrame:</p>
<pre><code>data = {'date': ['qid:0', 'qid:1','qid:2']}
df = pd.DataFrame(data)
</code></pre>
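<p>A minimal sketch of one way to produce the target column, replacing each date with <code>"qid:"</code> plus its row position:</p>

```python
import pandas as pd

df = pd.DataFrame({"date": ["1/1/2022", "1/2/2022", "1/3/2022"]})
# Replace each value with "qid:" followed by the positional index.
df["date"] = ["qid:" + str(i) for i in range(len(df))]
```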
|
<python><pandas><date>
|
2023-04-09 21:25:01
| 1
| 1,051
|
Tartaglia
|
75,972,844
| 11,146,276
|
AWS CDK Lambda says "[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda': No module named 'orjson'"
|
<p>I'm writing the code on Windows. This is my project structure:</p>
<pre><code>my-project/
├─ stack/
│ ├─ src/
│ │ ├─ functions/
│ │ │ ├─ lambda.py
│ ├─ __init__.py
│ ├─ my_stack.py
├─ app.py
├─ poetry.lock
├─ pyproject.toml
</code></pre>
<p>My root stack defines a /package endpoint that takes query parameters:</p>
<pre><code>import os

from aws_cdk import Stack, aws_lambda, aws_apigateway as apigw, RemovalPolicy
from constructs import Construct
class MyStack(Stack):
def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
super().__init__(scope, construct_id, **kwargs)
layer = aws_lambda.LayerVersion(
self,
"asset_layer",
code=aws_lambda.Code.from_asset("vendor.zip"),
description="Dependency layer",
compatible_runtimes=[aws_lambda.Runtime.PYTHON_3_9],
removal_policy=RemovalPolicy.DESTROY,
)
lambda_fn = aws_lambda.Function(
self,
"lambda_function",
code=aws_lambda.Code.from_asset(path="my_project/stack/src/functions"),
handler="lambda.handler",
runtime=aws_lambda.Runtime.PYTHON_3_9,
layers=[layer],
environment={
"POSTGRESQL_NAME": os.environ["POSTGRESQL_NAME"],
"POSTGRESQL_USER": os.environ["POSTGRESQL_USER"],
"POSTGRESQL_PASSWORD": os.environ["POSTGRESQL_PASSWORD"],
"POSTGRESQL_HOST": os.environ["POSTGRESQL_HOST"],
"POSTGRESQL_PORT": os.environ["POSTGRESQL_PORT"],
},
)
api = apigw.LambdaRestApi(
self,
"api",
handler=lambda_fn,
)
package = api.root.add_resource("package")
package_integration = apigw.LambdaIntegration(lambda_fn)
package.add_method(
"GET",
package_integration,
request_parameters={
"method.request.querystring.boxId": True,
"method.request.querystring.w": True,
"method.request.querystring.h": True,
"method.request.querystring.d": True,
},
authorization_type=apigw.AuthorizationType.NONE,
api_key_required=False,
)
</code></pre>
<p>My lambda function:</p>
<pre><code>import os

import orjson
def handler(event, context):
print("## ENVIRONMENT VARIABLES")
print(os.environ)
print("## EVENT")
print(event)
box_id = int(event["queryStringParameters"].get("boxId"))
width = int(event["queryStringParameters"].get("w"))
height = int(event["queryStringParameters"].get("h"))
depth = int(event["queryStringParameters"].get("d"))
if box_id:
data = {"width": 1, "height": 2, "depth": 3}
elif width and height and depth:
data = {"boxId": 1}
else:
data = []
response = {
"statusCode": 200,
"headers": {"Content-Type": "application/json"},
"body": orjson.dumps(data),
}
return response
</code></pre>
<p>Because I'm using external dependencies, I used a shell script to export my <code>pyproject.toml</code> into a <code>requirements-poetry.txt</code> and install my dependencies into the <code>vendor</code> directory at the root of the project:</p>
<pre><code>poetry export --without-hashes --format=requirements.txt > requirements-poetry.txt
pip install -r requirements-poetry.txt -t ./vendor
</code></pre>
<p>My <code>requirements-poetry.txt</code>:</p>
<pre><code>attrs==22.2.0 ; python_version >= "3.9" and python_version < "4.0"
cattrs==22.2.0 ; python_version >= "3.9" and python_version < "4.0"
certifi==2022.12.7 ; python_version >= "3.9" and python_version < "4"
charset-normalizer==3.1.0 ; python_version >= "3.9" and python_version < "4"
confuse==2.0.1 ; python_version >= "3.9" and python_version < "4.0"
constructs==10.1.306 ; python_version >= "3.9" and python_version < "4.0"
docker==6.0.1 ; python_version >= "3.9" and python_version < "4.0"
exceptiongroup==1.1.1 ; python_version >= "3.9" and python_version < "3.11"
greenlet==2.0.2 ; python_version >= "3.9" and platform_machine == "aarch64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "ppc64le" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "x86_64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "amd64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "AMD64" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "win32" and python_version < "4.0" or python_version >= "3.9" and platform_machine == "WIN32" and python_version < "4.0"
idna==3.4 ; python_version >= "3.9" and python_version < "4"
importlib-resources==5.12.0 ; python_version >= "3.9" and python_version < "4.0"
jsii==1.80.0 ; python_version >= "3.9" and python_version < "4.0"
orjson==3.8.10 ; python_version >= "3.9" and python_version < "4.0"
packaging==23.0 ; python_version >= "3.9" and python_version < "4.0"
psycopg==3.1.8 ; python_version >= "3.9" and python_version < "4.0"
publication==0.0.3 ; python_version >= "3.9" and python_version < "4.0"
python-dateutil==2.8.2 ; python_version >= "3.9" and python_version < "4.0"
pywin32==306 ; python_version >= "3.9" and python_version < "4.0" and sys_platform == "win32"
pyyaml==6.0 ; python_version >= "3.9" and python_version < "4.0"
requests==2.28.2 ; python_version >= "3.9" and python_version < "4"
six==1.16.0 ; python_version >= "3.9" and python_version < "4.0"
sqlalchemy==2.0.9 ; python_version >= "3.9" and python_version < "4.0"
typeguard==2.13.3 ; python_version >= "3.9" and python_version < "4.0"
typing-extensions==4.5.0 ; python_version >= "3.9" and python_version < "4.0"
tzdata==2023.3 ; python_version >= "3.9" and python_version < "4.0" and sys_platform == "win32"
urllib3==1.26.15 ; python_version >= "3.9" and python_version < "4.0"
websocket-client==1.5.1 ; python_version >= "3.9" and python_version < "4.0"
zipp==3.15.0 ; python_version >= "3.9" and python_version < "3.10"
</code></pre>
<p>and then make a <code>vendor.zip</code> for my layer as you can see in <code>MyStack</code>.</p>
<p>However, if I hit my endpoint, it returns 502 and my log says:</p>
<pre><code>[ERROR] Runtime.ImportModuleError: Unable to import module 'lambda': No module named 'orjson'
</code></pre>
<p>and it probably means that my lambda function does not recognize the dependency package in the layer.</p>
<p>What am I doing wrong here?</p>
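<p>A likely cause (not confirmed by the question): a Python Lambda layer zip must nest its packages under a top-level <code>python/</code> directory, because the runtime only adds <code>/opt/python</code> to <code>sys.path</code>. If <code>vendor.zip</code> has <code>orjson/</code> at its root, it is never importable. A stdlib sketch of building a zip with the required layout (the <code>orjson</code> folder here is just a stand-in marker; the equivalent pip step would be installing into <code>vendor/python</code> before zipping):</p>

```python
import pathlib
import tempfile
import zipfile

root = pathlib.Path(tempfile.mkdtemp())
pkg = root / "python" / "orjson"          # stand-in for the pip-installed package
pkg.mkdir(parents=True)
(pkg / "__init__.py").write_text("")

zip_path = root / "vendor.zip"
with zipfile.ZipFile(zip_path, "w") as zf:
    for p in sorted((root / "python").rglob("*")):
        # Archive names must start with python/ for the layer to work.
        zf.write(p, p.relative_to(root).as_posix())

names = zipfile.ZipFile(zip_path).namelist()
```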
|
<python><amazon-web-services><aws-lambda><aws-cdk>
|
2023-04-09 20:58:34
| 0
| 428
|
Firefly
|
75,972,670
| 357,024
|
Python asyncio "contains"
|
<p>Can the <code>__contains__</code> function be adapted to Python asyncio, where the method becomes a coroutine? For example:</p>
<pre><code>class A():
async def __contains__(self, a):
return True
async def main():
a = A()
print(2 in a)
</code></pre>
<p>Of course this generates the warning</p>
<blockquote>
<p>RuntimeWarning: coroutine 'A.__contains__' was never awaited</p>
</blockquote>
<p>I assume there needs to be some special version of <code>in</code> for asyncio, though I've tried several permutations of using <code>await</code> and <code>in</code> with no luck.</p>
<p>My real use case of <code>contains</code> is a wrapper around a database search to check if there is an entity conflict. One case is to check if a user's screen name already exists in a database table, such as <code>'mike' in Users</code></p>
<p>Of course I could make my own <code>contains</code> and do something like <code>User.contains('mike')</code> but I prefer the beauty of <code>in</code>.</p>
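<p>A short sketch of the usual workaround (not from the original thread): there is no async variant of <code>in</code>; the expression <code>x in a</code> calls <code>__contains__</code> synchronously and coerces the result to <code>bool</code>, so a coroutine object is returned and never awaited. An explicit awaitable method works instead:</p>

```python
import asyncio


class Users:
    def __init__(self, names):
        self._names = set(names)

    async def contains(self, name):
        await asyncio.sleep(0)    # stand-in for the real database lookup
        return name in self._names


async def main():
    users = Users({"mike"})
    return await users.contains("mike")


result = asyncio.run(main())
```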
|
<python><python-asyncio>
|
2023-04-09 20:18:31
| 1
| 61,290
|
Mike
|
75,972,643
| 9,764,940
|
How can i shift a 2d array by an 1d array in numpy? The same for 3d by 2d
|
<p>Say i have an 2d array and an 1d array like this:</p>
<pre><code>a= np.array([[1,1,1,1],[2,2,2,2],[3,3,3,3]])
b= np.array([0,2,1,0])
</code></pre>
<p>And as a result i want this:</p>
<pre><code>out= np.array([[1,2,3,1],[2,3,1,2],[3,1,2,3]])
</code></pre>
<p>Every value in <code>a</code> is shifted along an axis by a value from the other array.</p>
<p>Is there something like this in numpy or any other common library?
I need to do this for a big 3d array (~10000, 10000, 100), so doing it by iterating feels wrong.</p>
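<p>One loop-free sketch (assuming the desired behavior is a per-column <code>np.roll</code> by <code>b[j]</code>, which matches the example): build the source row index for every output cell and use fancy indexing; the same idea extends to 3d by adding an axis:</p>

```python
import numpy as np

a = np.array([[1, 1, 1, 1], [2, 2, 2, 2], [3, 3, 3, 3]])
b = np.array([0, 2, 1, 0])

n, m = a.shape
# Row i of output column j comes from row (i - b[j]) mod n of a,
# i.e. np.roll(a[:, j], b[j]) for every column at once.
rows = (np.arange(n)[:, None] - b) % n
out = a[rows, np.arange(m)]
```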
|
<python><numpy>
|
2023-04-09 20:10:10
| 2
| 723
|
HrkBrkkl
|
75,972,600
| 5,617,608
|
Some CSV values appear in the wrong column (with open python)
|
<p>I'm writing data to a CSV file in Python. The data is UTF-8 encoded. I've written several rows successfully. However, a part of the data from column B is written to column A. The data already includes commas and numbers in English along with Arabic text.</p>
<p>I also checked in Notepad that the text is between double quotation marks, and it seems OK, but in the MS Office and LibreOffice preview it doesn't look OK. When it's opened after the preview in LibreOffice, it looks fine. Here are my trials:</p>
<pre><code>with open(df_path, "w", newline="", encoding="utf-8") as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=["File Name", "Content"], quoting=csv.QUOTE_ALL)
writer.writeheader()
writer.writerow({"File Name": file, "Content": txt})
</code></pre>
<pre><code>with open(df_path, "w", newline="", encoding="utf-8") as csv_file:
writer = csv.writer(csv_file)
writer.writerow(["File Name", "Content"])
writer.writerow([file, '"' + txt + '"'])
</code></pre>
<pre><code>with open(df_path, "w", newline="", encoding="utf-8") as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=["File Name", "Content"])
writer.writeheader()
writer.writerow({"File Name": file, "Content": txt})
</code></pre>
<pre><code>with open(df_path, "w", newline="", encoding="utf-8") as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=["File Name", "Content"], quoting=csv.QUOTE_ALL)
writer.writeheader()
writer.writerow({"File Name": file, "Content": txt})
</code></pre>
<pre><code>with open(df_path, "w", newline="", encoding="utf-8") as csv_file:
writer = csv.DictWriter(csv_file, fieldnames=["File Name", "Content"], delimiter=",")
writer.writeheader()
writer.writerow({"File Name": file, "Content": txt})
</code></pre>
<p>What should I try to keep the data as is in column B? I can't use Pandas because I want to keep the file open instead of writing all data at once. Thanks in advance!</p>
<p><strong>Edit:</strong>
I cannot share the text because when I copy and paste the text, the issue disappears.</p>
<p>What I concluded until now is that when I add certain data to the file and check in the terminal, It shows "Unicode text, UTF-8 text, with CRLF, LF line terminators" instead of "CSV text". The first code snippet is what I use and it's what produced both.</p>
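<p>A hedged observation: a <code>file</code> report of "CRLF, LF line terminators" usually means the written text itself contains bare <code>\n</code> characters. The <code>csv</code> module round-trips such fields correctly as long as they are quoted and the file is opened with <code>newline=""</code> on both sides, so the column shift is often the spreadsheet's import dialog rather than the writer. A self-contained round-trip sketch:</p>

```python
import csv
import io

txt = "Arabic text, with commas, numbers 123\nand an embedded newline"
buf = io.StringIO(newline="")
writer = csv.DictWriter(buf, fieldnames=["File Name", "Content"],
                        quoting=csv.QUOTE_ALL)
writer.writeheader()
writer.writerow({"File Name": "a.txt", "Content": txt})

# Reading back with newline="" keeps the quoted embedded newline intact.
rows = list(csv.DictReader(io.StringIO(buf.getvalue(), newline="")))
```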
|
<python><python-3.x><csv><writer><csvwriter>
|
2023-04-09 20:00:41
| 1
| 1,759
|
Esraa Abdelmaksoud
|
75,972,526
| 13,326,869
|
How to access camera or webcam in kaggle?
|
<p>I tried using a webcam in Kaggle to capture images for a face-detection project, but it's not working.</p>
<blockquote>
<p>The code works well in a Jupyter Notebook on my local system.</p>
</blockquote>
<p>Here is the code that I'm using:</p>
<pre><code>cap = cv2.VideoCapture(0)
for imgnum in range(number_images):
print('Collectiong image {}'.format(imgnum))
ret,frame = cap.read()
imgname = os.path.join(IMAGES_PATH,f'{str(uuid.uuid1())}.jpg')
cv2.imwrite(imgname,frame)
cv2.imshow('frame', frame)
time.sleep(1)
if cv2.waitKey(1) & 0xFF == ord('q'):
break
cap.release()
cv2.destroyAllWindows()
</code></pre>
<p>This is the error that Kaggle shows:
Collectiong image 0</p>
<blockquote>
<p>[ WARN:0] global
/tmp/pip-req-build-jpmv6t9_/opencv/modules/videoio/src/cap_v4l.cpp
(890) open VIDEOIO(V4L2:/dev/video0): can't open camera by index
--------------------------------------------------------------------------- error Traceback (most recent call
last) /tmp/ipykernel_27/1415701618.py in
4 ret,frame = cap.read()
5 imgname = os.path.join(IMAGES_PATH,f'{str(uuid.uuid1())}.jpg')
----> 6 cv2.imwrite(imgname,frame)
7 cv2.imshow('frame', frame)
8 time.sleep(1)</p>
<p>error: OpenCV(4.5.4)
/tmp/pip-req-build-jpmv6t9_/opencv/modules/imgcodecs/src/loadsave.cpp:799:
error: (-215:Assertion failed) !_img.empty() in function 'imwrite'</p>
</blockquote>
<p>Note: I also tried the values 1 and 2 in <code>cap = cv2.VideoCapture(0)</code>, but it is not working.</p>
<p>If anybody knows a way, please share it. Thank you.</p>
|
<python><data-science><webcam><face-detection><kaggle>
|
2023-04-09 19:42:42
| 0
| 651
|
Alireza Atashnejad
|
75,972,508
| 977,345
|
Typing an Optional staticmethod with and without arguments
|
<p>I'm trying to add typing for a method that is None (has the literal value of None) in the base class, but child classes can implement it using a staticmethod. What I have so far is as followed:</p>
<pre class="lang-py prettyprint-override"><code>class BaseClass:
attr: ClassVar[Optional[Callable[[str], None]]] = None
class ChildClass1(BaseClass):
pass
class ChildClass2(BaseClass):
@staticmethod
def attr(s: str):
return "hello"
</code></pre>
<p>This is what I expected to work, but instead mypy is throwing:</p>
<pre class="lang-none prettyprint-override"><code>test.py:14: error: Signature of "attr" incompatible with supertype "BaseClass" [override]
</code></pre>
<p>Interestingly, if the function doesn't have arguments, it works:</p>
<pre class="lang-py prettyprint-override"><code>class BaseClass:
attr: ClassVar[Optional[Callable[[], None]]] = None
class ChildClass2(BaseClass):
@staticmethod
def attr():
return "hello"
</code></pre>
|
<python><mypy><python-typing>
|
2023-04-09 19:39:07
| 1
| 821
|
Ivailo Karamanolev
|
75,972,375
| 8,923,742
|
get a dict with key as column names and value as column type of a pandas dataframe
|
<p>I have a pandas dataframe as shown below:</p>
<pre><code> A B C D
0 56 89 16 b
1 51 41 99 b
2 49 3 72 d
3 15 98 58 c
4 92 55 77 d
</code></pre>
<p>I want to create a dict where the key is the column name and the value is the column data type.</p>
<pre><code>dtypes = df.dtypes.to_dict()
print(dtypes)
{'A': dtype('int64'), 'B': dtype('int64'), 'C': dtype('int64'), 'D': dtype('O')}
</code></pre>
<p>Instead of the above, how do I get the dict in the format below:</p>
<pre><code>{'A': 'int64', 'B': 'int64', 'C': 'int64', 'D': 'object'}
</code></pre>
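<p>A one-line sketch: casting the dtype objects to <code>str</code> before <code>to_dict()</code> gives the desired plain-string mapping:</p>

```python
import pandas as pd

df = pd.DataFrame({"A": [56], "B": [89], "C": [16], "D": ["b"]})
# dtype objects stringify to names like 'int64' and 'object'.
dtypes = df.dtypes.astype(str).to_dict()
```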
|
<python>
|
2023-04-09 19:07:26
| 1
| 1,396
|
itthrill
|
75,972,319
| 799,593
|
Python 3.x functools.partial() function got multiple values for argument parameter
|
<p>I am trying to pass an extra argument to the <code>map</code> function for a Django REST framework API endpoint. The following worked in my test, but I am not sure what I am missing in my Django API.</p>
<pre><code>>>> tt
[(datetime.datetime(2023, 4, 1, 0, 0), datetime.datetime(2023, 4, 10, 0, 0)), (datetime.datetime(2023, 4, 10, 0, 0), datetime.datetime(2023, 4, 1, 0, 0))]
>>> def t(d1: datetime.datetime, d2: datetime.datetime, i: int):
... print(f"First: {d1.strftime('%Y-%m-%d')} & Second: {d2.strftime('%Y-%m-%d')} w/ ID: {i}")
>>> list(map(partial(t, i=10), *tt))
First: 2023-04-01 & Second: 2023-04-10 w/ ID: 10
First: 2023-04-10 & Second: 2023-04-01 w/ ID: 10
[None, None]
</code></pre>
<p>But a similar approach does not seem to work in my API in views.py:</p>
<pre><code>@api_view(["GET"])
def get_registration_counts(request, start: str, stop: str) -> Response:
date_format = "%Y-%m-%d"
# validations to make sure start and stop parameters are valid, else return HTTP 400
datasets = []
def get_counts(d1: datetime.datetime, d2: datetime.datetime, id: int) -> int:
        r = Entity.objects.filter(some_date__gte=d1, some_date__lte=d2, fk_id=id)
c = r.count()
return c
# calculate date range
dates = [(some_start_date1, some_end_date1), (some_start_date2, some_end_date2)]
some_values = OtherEntity.objects.filter(active=1).values("id", "name")
for v in some_values:
        counts = list(map(functools.partial(get_counts, id=v["id"]), *dates))  # error
datasets.append(counts)
...
...
</code></pre>
<p>It returns the following error:</p>
<p><strong>ERROR</strong></p>
<p>Exception Type: TypeError
Exception Value: get_counts() got multiple values for argument 'id'</p>
<p><strong>EDIT:</strong></p>
<p>I do know I could use a list comprehension, but I would like to understand if I am missing something.</p>
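<p>One plausible explanation (hedged, since the full <code>dates</code> list isn't shown): with <code>map(fn, *dates)</code>, each tuple in <code>dates</code> becomes a separate positional iterable, so every call receives <code>len(dates)</code> positional arguments. The REPL test used exactly two tuples, leaving the third parameter free for the keyword; if the view's <code>dates</code> has three or more entries, the third positional argument lands on the parameter that <code>partial</code> already bound. A minimal reproduction:</p>

```python
from functools import partial


def f(a, b, i):
    return (a, b, i)


two = [(1, 2), (3, 4)]            # two iterables -> f gets two positionals
ok = list(map(partial(f, i=10), *two))

three = [(1, 2), (3, 4), (5, 6)]  # three iterables -> third positional hits i
try:
    list(map(partial(f, i=10), *three))
    raised = False
except TypeError:                 # "got multiple values for argument 'i'"
    raised = True
```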
|
<python><python-3.x><functools>
|
2023-04-09 18:55:12
| 0
| 722
|
ablaze
|
75,972,272
| 6,423,456
|
Adding custom expensive field on Django model serializer
|
<p>Normally, in Django, using rest_framework, to add a custom field to a model, you can use <code>SerializerMethodField</code>. From what I understand however, this works great for values that are easy to calculate, but if the value requires database queries to related tables, you're going to be performing these for every item being returned.</p>
<p>If I have a ViewSet like this:</p>
<pre class="lang-py prettyprint-override"><code>class ProductViewSet(ModelViewSet):
queryset = models.Product.objects.all()
serializer_class = serializers.ProductSerializer
...
</code></pre>
<p>And a Serializer like this:</p>
<pre class="lang-py prettyprint-override"><code>class ProductSerializer(ModelSerializer):
class Meta:
model = models.Product
fields = "__all__"
</code></pre>
<p>How do I run a bulk query to get some data from a related table for all the products being serialized, and attach it to a new field?</p>
<p>Ex: if each product is attached to an <code>Order</code>, maybe I want to add an <code>order_number</code> field. I could do this:</p>
<pre class="lang-py prettyprint-override"><code>class ProductSerializer(ModelSerializer):
order_number = SerializerMethodField()
@staticmethod
def get_order_number(obj):
return obj.order.order_number
class Meta:
model = models.Product
fields = "__all__"
</code></pre>
<p>But if the view is returning 100 products, that will be 100 database queries. Is there a more efficient way to do this?</p>
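<p>For illustration (a sketch with assumed names, not the thread's answer): for a plain foreign key like <code>order</code>, setting <code>queryset = models.Product.objects.select_related("order")</code> in the ViewSet removes the per-item queries. For values that need a custom bulk query, a common pattern is to fetch everything once in the view and hand the mapping to the serializer via <code>context</code>; the plain-data equivalent of that single bulk lookup:</p>

```python
# Django sketch (assumed names):
#   order_numbers = {o.product_id: o.order_number
#                    for o in Order.objects.filter(product__in=products)}
#   ProductSerializer(products, many=True,
#                     context={"order_numbers": order_numbers})
# and get_order_number reads self.context["order_numbers"].get(obj.pk).
# Plain-data equivalent: one "query" builds the dict, then per-item
# lookups are cheap in-memory gets instead of database round trips.
order_numbers = {1: "A-100", 2: "A-101"}
products = [{"pk": 1}, {"pk": 2}, {"pk": 3}]
serialized = [{"id": p["pk"], "order_number": order_numbers.get(p["pk"])}
              for p in products]
```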
|
<python><django><django-rest-framework>
|
2023-04-09 18:41:52
| 1
| 2,774
|
John
|
75,972,260
| 5,495,304
|
Cannot install own package with `pip install .`
|
<p>I have a package with a <code>setup.py</code> file importing external packages like <code>numpy</code> and <code>Cython</code> for building the package during installation. Here is how the top of my <code>setup.py</code> file looks like:</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/env python3
import os
import sys
from Cython.Build import cythonize
from Cython.Distutils import build_ext
import numpy as np
from setuptools import find_packages, setup, Extension
</code></pre>
<p>I would like to install my package in my current virtual environment (I am using <code>mamba</code>). To do so, I run</p>
<p><code>pip install -e .</code></p>
<p>However, I get this error:</p>
<pre><code> File "/tmp/pip-build-env-u_xs09i6/overlay/lib/python3.9/site-packages/setuptools/build_meta.py", line 335, in run_setup
exec(code, locals())
File "<string>", line 5, in <module>
ModuleNotFoundError: No module named 'Cython'
</code></pre>
<h2>Things I verified</h2>
<h3>Verified cython is working</h3>
<p>I know for a fact that <code>Cython</code> is installed and working, because I can import it from a python session:</p>
<pre><code>❯ python
Python 3.9.15 | packaged by conda-forge | (main, Nov 22 2022, 15:55:03)
[GCC 10.4.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import Cython
>>>
</code></pre>
<p>And I also verified that the session and <code>pip install</code> are running the same Python by printing the executable path (<code>sys.executable</code>).</p>
<h3>Comment out <code>cython</code> import</h3>
<p>I removed the <code>import Cython</code> statements in setup.py just to see what happens in <code>pip install -e .</code>, but now it cannot find the <code>numpy</code> package, which leads me to think something weird is happening where the environment is not discovered, even if it is running in it...</p>
<h3>Install my package with a different method</h3>
<p>When I install my package using <code>python setup.py develop</code>, it works!</p>
<p>However I need the <code>pip install</code> command to work because the deployment system I am using runs <code>pip</code>.</p>
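<p>A likely cause (hedged): when a <code>pyproject.toml</code> is present, pip builds in an isolated environment containing only what <code>[build-system].requires</code> lists, so the virtual environment's Cython and numpy are invisible to <code>setup.py</code>; <code>python setup.py develop</code> runs in the current environment, which would explain why it works. Declaring the build dependencies usually fixes it (sketch; the exact package list is an assumption):</p>

```toml
[build-system]
requires = ["setuptools", "wheel", "Cython", "numpy"]
build-backend = "setuptools.build_meta"
```

<p>Alternatively, <code>pip install -e . --no-build-isolation</code> skips the isolated build environment entirely.</p>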
|
<python><pip><setuptools>
|
2023-04-09 18:38:21
| 1
| 6,659
|
Gerges
|
75,972,194
| 2,789,334
|
formatting groups of cells in pandas
|
<p>My pandas dataframe is shown below</p>
<pre><code>import pandas as pd
inp_df = pd.DataFrame(
[
["a1", "b1", "c1", "gbt", "auc", 82.5, 80.1, 83.6],
["a1", "b1", "c1", "gbt", "pr@5%", 0.3, 0.2, 0.4],
["a1", "b1", "c1", "gbt", "re@5%", 60.2, 58.1, 61.3],
["a1", "b1", "c1", "rnn", "auc", 84.1, 83.8, 84.5],
["a1", "b1", "c1", "rnn", "pr@5%", 0.5, 0.4, 0.6],
["a1", "b1", "c1", "rnn", "re@5%", 61.5, 61.4, 61.7],
["a1", "b1", "c1", "llm", "auc", 84.3, 84.1, 84.6],
["a1", "b1", "c1", "llm", "pr@5%", 0.8, 0.7, 0.9],
["a1", "b1", "c1", "llm", "re@5%", 61.2, 61.1, 61.3],
["a1", "b1", "c2", "gbt", "auc", 82.5, 80.1, 83.6],
["a1", "b1", "c2", "gbt", "pr@5%", 0.3, 0.2, 0.4],
["a1", "b1", "c2", "gbt", "re@5%", 60.2, 58.1, 61.3],
["a1", "b1", "c2", "llm", "auc", 84.3, 84.1, 84.6],
["a1", "b1", "c2", "llm", "pr@5%", 0.8, 0.7, 0.9],
["a1", "b1", "c2", "llm", "re@5%", 61.2, 61.1, 61.3],
], columns=["A","B","C","model","metric","val","val_lo","val_hi"])
</code></pre>
<p>I would like to display it as shown below <a href="https://i.sstatic.net/eurTN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/eurTN.png" alt="output" /></a></p>
<p>Notes:</p>
<ol>
<li>for each <code>metric</code> (eg <code>auc</code>) use <strong>bold</strong> for model with highest <code>val</code></li>
<li>highlight cells for all models (within that (A,B,C)) with overlapping (val_lo,val_hi) which are the confidence intervals.</li>
<li>draw a line after each set of models</li>
</ol>
<hr />
<p>I came up with a solution which takes me most of the way</p>
<pre><code>cols = ["val","val_lo","val_hi"]
inp_df["value"] = list(inp_df[cols].to_records(index=False))
inp_df.drop(columns=cols, inplace=True)
out_df = inp_df.pivot(index=inp_df.columns[:4], columns="metric", values="value")\
.reset_index().rename_axis(None, axis=1)
out_df
</code></pre>
<pre><code> A B C model auc pr@5% re@5%
0 a1 b1 c1 gbt [82.5, 80.1, 83.6] [0.3, 0.2, 0.4] [60.2, 58.1, 61.3]
1 a1 b1 c1 llm [84.3, 84.1, 84.6] [0.8, 0.7, 0.9] [61.2, 61.1, 61.3]
2 a1 b1 c1 rnn [84.1, 83.8, 84.5] [0.5, 0.4, 0.6] [61.5, 61.4, 61.7]
3 a1 b1 c2 gbt [82.5, 80.1, 83.6] [0.3, 0.2, 0.4] [60.2, 58.1, 61.3]
4 a1 b1 c2 llm [84.3, 84.1, 84.6] [0.8, 0.7, 0.9] [61.2, 61.1, 61.3]
</code></pre>
<pre><code>mets = ["auc","pr@5%","re@5%"]
def flag(block):
out = [block["model"].values.tolist()]
for met in mets:
val,lo,hi = map(np.array, zip(*block[met].values))
maxind = val.argmax()
overlapbool = np.logical_and(hi[maxind]>=lo, lo[maxind]<=hi)
overlapinds = set(np.where(overlapbool)[0]) if overlapbool.sum()>1 else set()
curr = list()
for n,(x,y,z) in enumerate(zip(val,lo,hi)):
cell = f"{x:.1f} ({y:.1f}-{z:.1f})"
if n==maxind: cell += "*"
if n in overlapinds: cell += "†"
curr.append(cell)
out.append(curr)
return pd.DataFrame(zip(*out), columns=["model"]+mets)
out_df.groupby(["A","B","C"]).apply(flag).droplevel(-1).set_index(["model"], append=True)\
.style.applymap(lambda val:"font-weight:bold" if "*" in val else None)\
.applymap(lambda val:f"background-color:beige" if "†" in val else None)
</code></pre>
<p><a href="https://i.sstatic.net/LBYTA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/LBYTA.png" alt="enter image description here" /></a></p>
<p>I hope to be able to resolve the following</p>
<ol>
<li>have the models in the order [gbt, rnn, llm]. Right now, even if I make that column an ordered Categorical, that does not seem to work.</li>
<li>draw a line after each set of models. So, in this figure there would be one line between block (a1,b1,c1) and (a1,b1,c2).</li>
<li>drop characters * and † which I am introducing for cell level formatting but would be great to get rid of.</li>
</ol>
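<p>For point 1, here is a minimal sketch (toy data, not my real frame) of what I believe forces a custom model order: making the column an ordered <code>Categorical</code> only defines the ordering, so an explicit sort still has to be applied.</p>

```python
import pandas as pd

order = ["gbt", "rnn", "llm"]
df = pd.DataFrame({"model": ["llm", "gbt", "rnn"], "auc": [84.3, 82.5, 84.1]})

# An ordered Categorical only defines the order; sort_values applies it.
df["model"] = pd.Categorical(df["model"], categories=order, ordered=True)
df = df.sort_values("model").reset_index(drop=True)
print(df["model"].tolist())  # ['gbt', 'rnn', 'llm']
```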
|
<python><pandas><dataframe>
|
2023-04-09 18:22:39
| 1
| 1,068
|
ironv
|
75,972,052
| 15,322,101
|
Correctly implementing specific formula into python
|
<p>This may be more of a math question than a programming question, but I am absolutely stuck, and the problem occurred to me while implementing it in Python via <code>numpy</code>.
So, in my Machine Learning course the professor constructed an estimator for <code>Linear Regression</code> which resulted into the following equation for finding the <code>weight vector</code> with the labels <code>y</code> and data <code>x</code>:</p>
<img src="https://latex.codecogs.com/svg.image?%5Cwidetilde%7B%5Comega%7D&space;&space;=&space;%5Cleft&space;(&space;&space;%5Csum_%7Bi=1%7D%5E%7Bn%7D%5Cwidetilde%7Bx%7D%5E%7B(i)%7D[%5Cwidetilde%7Bx%7D%5E%7Bi%7D]%5ET&space;%5Cright&space;)%5E%7B-1%7D&space;%5Csum_%7Bi=1%7D%5E%7Bn%7D%5Cwidetilde%7Bx%7D%5E%7B(i)%7Dy%5E%7B(i)%7D" title="https://latex.codecogs.com/svg.image?\widetilde{\omega} = \left ( \sum_{i=1}^{n}\widetilde{x}^{(i)}[\widetilde{x}^{i}]^T \right )^{-1} \sum_{i=1}^{n}\widetilde{x}^{(i)}y^{(i)}" />
<p>As I understood it, for the first part of the equation I should just multiply the <code>vector x</code> with the <code>vector x transposed</code>. This results in a <code>matrix</code>, of which I should take the <code>inverse</code>. Unfortunately, taking the <code>inverse</code> isn't possible because the resulting matrix has a <code>determinant of 0</code>. The way I understand it, no matter how I change the x-vector, the equation will always result in a determinant of zero.</p>
<p>Could it be that I am interpreting the equation in a wrong way? Or should I not do the calculation on the whole vector, but rather run through the equation with every single data point iteratively (IMO it wouldn't make sense to transpose a single x value)? I'm not sure about that.</p>
<p>In Python code I had the equation as follows: <code>a = (np.linalg.inv(x * x.reshape(-1,1)))*(x*y)</code></p>
<p>With x and y being simple <code>numpy arrays</code>, it throws the following Error: <code>LinAlgError: Singular matrix</code>.</p>
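<p>For what it's worth, here is a minimal numpy sketch of how I currently read the summed outer-product formula (synthetic data, so all the names are my own): each data point is augmented with a constant 1, and the outer products summed over all points give a matrix that is generally invertible.</p>

```python
import numpy as np

# Hypothetical 1-D dataset following y = 3*x + 2 exactly (no noise).
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=50)
y = 3.0 * x + 2.0

# Augment each point: x_tilde^(i) = [x^(i), 1], so the bias is learned too.
X = np.column_stack([x, np.ones_like(x)])  # shape (n, 2)
A = X.T @ X                                # sum_i x_tilde^(i) [x_tilde^(i)]^T
b = X.T @ y                                # sum_i x_tilde^(i) y^(i)
w = np.linalg.solve(A, b)                  # solve instead of an explicit inverse
print(w)                                   # ~ [3.0, 2.0]
```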
|
<python><numpy><machine-learning><math><linear-algebra>
|
2023-04-09 17:59:33
| 2
| 347
|
Infomagier
|
75,971,842
| 5,431,132
|
Expose Python API to Kubernetes minikube deployment
|
<p>I want to expose an API written in Python to a Minikube cluster. I have the following directory structure</p>
<pre><code>- Dockerfile
- src
- pyproject.toml
- src
- api.py
- nginx-chart
- templates
- deployment.yaml
- service.yaml
- chart.yaml
- values.yaml
</code></pre>
<p>In api.py I have</p>
<pre class="lang-py prettyprint-override"><code>import uvicorn
from fastapi import FastAPI
app = FastAPI()
@app.get("/root")
def read_root():
return {"Hello": "World"}
def start():
"""Launched with `poetry run start` at root level"""
uvicorn.run("src.api:app", host="0.0.0.0", port=8080, reload=True)
</code></pre>
<p>In pyproject.toml I have</p>
<pre class="lang-ini prettyprint-override"><code>[tool.poetry]
name = "api"
version = "0.1.0"
description = "The API"
readme = "README.md"
authors = ["me"]
[tool.poetry.dependencies]
python = "^3.9"
fastapi = "^0.95.0"
kubernetes = "^26.1.0"
uvicorn = "^0.21.1"
[tool.poetry.group.dev.dependencies]
black = "^23.3.0"
pytest = "^7.3.0"
isort = "^5.12.0"
flake8 = "^6.0.0"
mypy = "^1.2.0"
lxml = "^4.9.2"
[tool.poetry.scripts]
start = "src.api:start"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
</code></pre>
<p>In my Dockerfile I have</p>
<pre><code>FROM python:3.9 as base
COPY src src
WORKDIR /src
RUN dir -s
RUN pip install poetry && poetry config virtualenvs.create false \
&& poetry install
EXPOSE 8080
CMD ["poetry", "run", "start"]
</code></pre>
<p>Deployment.yaml is</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ .Release.Name }}-nginx
labels:
app: nginx
spec:
replicas: {{ .Values.replicaCount }}
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: {{ .Chart.Name }}
image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
imagePullPolicy: {{ .Values.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.service.targetPort }}
protocol: TCP
</code></pre>
<p>service.yaml is</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v1
kind: Service
metadata:
name: {{ .Release.Name }}-service
spec:
selector:
app.kubernetes.io/instance: {{ .Release.Name }}
type: {{ .Values.service.type }}
ports:
- protocol: {{ .Values.service.protocol | default "TCP" }}
port: {{ .Values.service.port }}
targetPort: {{ .Values.service.targetPort }}
</code></pre>
<p>values.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>replicaCount: 2
image:
repository: <my docker>/<my_image>
tag: "v0.0.2"
pullPolicy: Always
service:
type: NodePort
port: 80
targetPort: 8080
</code></pre>
<p>chart.yaml</p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: v2
name: nginx-chart
description: A Helm chart for Kubernetes
type: application
version: 0.1.0
appVersion: "1.16.0"
</code></pre>
<p>I first run <code>minikube start</code> to start minikube. I then deploy with <code>helm upgrade --install service /nginx-chart</code> from the root of the directory.</p>
<p>The docker image builds (and is tagged with <code>v0.0.2</code>), and the helm command succeeds, however, I cannot find my api endpoint in the resulting deployment. Once deployed I run <code>kubectl proxy --port=8080</code> and then open the endpoint in my browser to obtain</p>
<pre><code>{
"paths": [
"/.well-known/openid-configuration",
"/api",
"/api/v1",
"/apis",
"/apis/",
"/apis/admissionregistration.k8s.io",
"/apis/admissionregistration.k8s.io/v1",
"/apis/apiextensions.k8s.io",
"/apis/apiextensions.k8s.io/v1",
"/apis/apiregistration.k8s.io",
"/apis/apiregistration.k8s.io/v1",
"/apis/apps",
"/apis/apps/v1",
"/apis/authentication.k8s.io",
"/apis/authentication.k8s.io/v1",
"/apis/authorization.k8s.io",
"/apis/authorization.k8s.io/v1",
"/apis/autoscaling",
"/apis/autoscaling/v1",
"/apis/autoscaling/v2",
"/apis/batch",
"/apis/batch/v1",
"/apis/certificates.k8s.io",
"/apis/certificates.k8s.io/v1",
"/apis/coordination.k8s.io",
"/apis/coordination.k8s.io/v1",
"/apis/discovery.k8s.io",
"/apis/discovery.k8s.io/v1",
"/apis/events.k8s.io",
"/apis/events.k8s.io/v1",
"/apis/flowcontrol.apiserver.k8s.io",
"/apis/flowcontrol.apiserver.k8s.io/v1beta2",
"/apis/flowcontrol.apiserver.k8s.io/v1beta3",
"/apis/networking.k8s.io",
"/apis/networking.k8s.io/v1",
"/apis/node.k8s.io",
"/apis/node.k8s.io/v1",
"/apis/policy",
"/apis/policy/v1",
"/apis/rbac.authorization.k8s.io",
"/apis/rbac.authorization.k8s.io/v1",
"/apis/scheduling.k8s.io",
"/apis/scheduling.k8s.io/v1",
"/apis/storage.k8s.io",
"/apis/storage.k8s.io/v1",
"/apis/storage.k8s.io/v1beta1",
"/healthz",
"/healthz/autoregister-completion",
"/healthz/etcd",
"/healthz/log",
"/healthz/ping",
"/healthz/poststarthook/aggregator-reload-proxy-client-cert",
"/healthz/poststarthook/apiservice-openapi-controller",
"/healthz/poststarthook/apiservice-openapiv3-controller",
"/healthz/poststarthook/apiservice-registration-controller",
"/healthz/poststarthook/apiservice-status-available-controller",
"/healthz/poststarthook/bootstrap-controller",
"/healthz/poststarthook/crd-informer-synced",
"/healthz/poststarthook/generic-apiserver-start-informers",
"/healthz/poststarthook/kube-apiserver-autoregistration",
"/healthz/poststarthook/priority-and-fairness-config-consumer",
"/healthz/poststarthook/priority-and-fairness-config-producer",
"/healthz/poststarthook/priority-and-fairness-filter",
"/healthz/poststarthook/rbac/bootstrap-roles",
"/healthz/poststarthook/scheduling/bootstrap-system-priority-classes",
"/healthz/poststarthook/start-apiextensions-controllers",
"/healthz/poststarthook/start-apiextensions-informers",
"/healthz/poststarthook/start-cluster-authentication-info-controller",
"/healthz/poststarthook/start-kube-aggregator-informers",
"/healthz/poststarthook/start-kube-apiserver-admission-initializer",
"/healthz/poststarthook/start-kube-apiserver-identity-lease-controller",
"/healthz/poststarthook/start-kube-apiserver-identity-lease-garbage-collector",
"/healthz/poststarthook/start-legacy-token-tracking-controller",
"/healthz/poststarthook/storage-object-count-tracker-hook",
"/livez",
"/livez/autoregister-completion",
"/livez/etcd",
"/livez/log",
"/livez/ping",
"/livez/poststarthook/aggregator-reload-proxy-client-cert",
"/livez/poststarthook/apiservice-openapi-controller",
"/livez/poststarthook/apiservice-openapiv3-controller",
"/livez/poststarthook/apiservice-registration-controller",
"/livez/poststarthook/apiservice-status-available-controller",
"/livez/poststarthook/bootstrap-controller",
"/livez/poststarthook/crd-informer-synced",
"/livez/poststarthook/generic-apiserver-start-informers",
"/livez/poststarthook/kube-apiserver-autoregistration",
"/livez/poststarthook/priority-and-fairness-config-consumer",
"/livez/poststarthook/priority-and-fairness-config-producer",
"/livez/poststarthook/priority-and-fairness-filter",
"/livez/poststarthook/rbac/bootstrap-roles",
"/livez/poststarthook/scheduling/bootstrap-system-priority-classes",
"/livez/poststarthook/start-apiextensions-controllers",
"/livez/poststarthook/start-apiextensions-informers",
"/livez/poststarthook/start-cluster-authentication-info-controller",
"/livez/poststarthook/start-kube-aggregator-informers",
"/livez/poststarthook/start-kube-apiserver-admission-initializer",
"/livez/poststarthook/start-kube-apiserver-identity-lease-controller",
"/livez/poststarthook/start-kube-apiserver-identity-lease-garbage-collector",
"/livez/poststarthook/start-legacy-token-tracking-controller",
"/livez/poststarthook/storage-object-count-tracker-hook",
"/logs",
"/metrics",
"/openapi/v2",
"/openapi/v3",
"/openapi/v3/",
"/openid/v1/jwks",
"/readyz",
"/readyz/autoregister-completion",
"/readyz/etcd",
"/readyz/etcd-readiness",
"/readyz/informer-sync",
"/readyz/log",
"/readyz/ping",
"/readyz/poststarthook/aggregator-reload-proxy-client-cert",
"/readyz/poststarthook/apiservice-openapi-controller",
"/readyz/poststarthook/apiservice-openapiv3-controller",
"/readyz/poststarthook/apiservice-registration-controller",
"/readyz/poststarthook/apiservice-status-available-controller",
"/readyz/poststarthook/bootstrap-controller",
"/readyz/poststarthook/crd-informer-synced",
"/readyz/poststarthook/generic-apiserver-start-informers",
"/readyz/poststarthook/kube-apiserver-autoregistration",
"/readyz/poststarthook/priority-and-fairness-config-consumer",
"/readyz/poststarthook/priority-and-fairness-config-producer",
"/readyz/poststarthook/priority-and-fairness-filter",
"/readyz/poststarthook/rbac/bootstrap-roles",
"/readyz/poststarthook/scheduling/bootstrap-system-priority-classes",
"/readyz/poststarthook/start-apiextensions-controllers",
"/readyz/poststarthook/start-apiextensions-informers",
"/readyz/poststarthook/start-cluster-authentication-info-controller",
"/readyz/poststarthook/start-kube-aggregator-informers",
"/readyz/poststarthook/start-kube-apiserver-admission-initializer",
"/readyz/poststarthook/start-kube-apiserver-identity-lease-controller",
"/readyz/poststarthook/start-kube-apiserver-identity-lease-garbage-collector",
"/readyz/poststarthook/start-legacy-token-tracking-controller",
"/readyz/poststarthook/storage-object-count-tracker-hook",
"/readyz/shutdown",
"/version"
]
}
</code></pre>
<p>One of my pods has the following logs</p>
<pre><code>Skipping virtualenv creation, as specified in config file.
INFO: Will watch for changes in these directories: ['/cakes']
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
INFO: Started reloader process [1] using StatReload
INFO: Started server process [12]
INFO: Waiting for application startup.
INFO: Application startup complete.
</code></pre>
<p>Where is my python api located?</p>
<h1>Edit</h1>
<p>I added the following file <code>nginx-chart/templates/ingress.yaml</code></p>
<pre class="lang-yaml prettyprint-override"><code>apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Release.Name }}-ingress
annotations:
app: nginx
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/rewrite-target: /
spec:
rules:
- host: {{ .Values.ingress.host }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Release.Name }}-service
port:
number: 80
</code></pre>
<p>with the following update to values.</p>
<pre class="lang-yaml prettyprint-override"><code>ingress:
host: hello-world.info
</code></pre>
|
<python><docker><kubernetes><nginx><minikube>
|
2023-04-09 17:04:10
| 1
| 582
|
AngusTheMan
|
75,971,704
| 10,395,139
|
"Zoom" in the most concentrated part of a plot
|
<p>I'm creating a scatter plot in matplotlib where I'm showing the spread of image width and height values. Is there a way to use <code>plt.xlim</code> or another method to "zoom" in on the most concentrated values? Currently I have a plot with many values ranging between 200-500; it would be nice to "zoom" in on them and keep the xticks wider apart for the less populated values.</p>
<p><a href="https://i.sstatic.net/bSHTm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/bSHTm.png" alt="enter image description here" /></a></p>
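<p>To illustrate what I mean, here is a minimal sketch (fake data, my own variable names) of one idea I considered: percentile-based axis limits, so the view centers on the dense cluster and ignores the sparse tail.</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

# Fake widths/heights: a dense cluster plus a sparse tail of large images.
rng = np.random.default_rng(1)
widths = np.concatenate([rng.normal(350, 60, 500), [1200.0, 2000.0]])
heights = np.concatenate([rng.normal(300, 50, 500), [900.0, 1500.0]])

fig, ax = plt.subplots()
ax.scatter(widths, heights, s=8)
# Clip the view to the 1st-99th percentile so outliers don't dominate.
ax.set_xlim(*np.percentile(widths, [1, 99]))
ax.set_ylim(*np.percentile(heights, [1, 99]))
fig.savefig("scatter.png")
```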
|
<python><matplotlib>
|
2023-04-09 16:35:39
| 1
| 579
|
Krullmizter
|
75,971,521
| 7,093,241
|
Is the time complexity the same in finding target value in a 2D matrix with these 2 approaches?
|
<p>The <a href="https://leetcode.com/problems/search-a-2d-matrix/description/" rel="nofollow noreferrer">leetcode question</a> calls for searching for a target value in a 2D matrix. Both approaches use binary search.</p>
<p><strong>My approach:</strong></p>
<p>Consider the matrix as a single array of length <code>rows x columns</code>, then use integer division and modulus to get the <code>row</code> and <code>column</code> indices in the binary search. The time complexity for this is <code>O(log [rows x columns])</code></p>
<pre><code># Binary search on rows and then on columns
# Time : O(log nm) # binary search on total numbers of cells
# Space : O(1) # same number of pointers (left, right, mid) regardless of size of matrix
from typing import List
class Solution:
def searchMatrix(self, matrix: List[List[int]], target: int) -> bool:
ROWS = len(matrix)
COLUMNS = len(matrix[0])
left = 0
right = ROWS * COLUMNS - 1
# run regular binary search as if dealing with just an array
while left <= right: # <= handles case when left, right are the same and value not considered before
mid = left + ( (right - left) // 2 )
row = mid // COLUMNS
col = mid % COLUMNS # has to be COLUMNS and not ROWS.
# Try [[1,1]] with ROWS, will get index-out-of-range error
# print(f"left = {left} right = {right} mid = {mid} row = {row} col = {col}")
if target < matrix[row][col]:
right = mid - 1
elif target > matrix[row][col]:
left = mid + 1
else:
return True
# not found
return False
</code></pre>
<p><strong>Neetcode's approach:</strong></p>
<p>I then bumped into <a href="https://youtu.be/Ber2pi2C0j0" rel="nofollow noreferrer">neetcode's solution video</a>, where he eliminates <code>rows</code> and then focuses on just the <code>row</code> that may contain the target. Because of the elimination of the rows and the subsequent searching of the columns of the candidate row, the time complexity is <code>O(log[rows] + log[columns])</code>.</p>
<pre><code># Binary search on rows and then on columns
# Time : O(log n + log m) # binary search on rows then on columns
# Space : O(1) # same number of pointers (top, bottom, row, left, right) regardless of size of matrix
from typing import List
class Solution:
def searchMatrix(self, matrix: List[List[int]], target: int) -> bool:
ROWS = len(matrix)
COLUMNS = len(matrix[0])
top = 0
bottom = ROWS - 1
while top <= bottom:
row = top + ((bottom-top)//2)
if target < matrix[row][0]:
bottom = row - 1
elif target > matrix[row][-1]:
top = row + 1
else:
# break so you can retain row value and check within the row
break
# ambiguous here: could have ended here because
# of the break in the while loop or the while
# condition failed so check
if top > bottom:
return False
# check within the row. 2nd binary search
left = 0
right = COLUMNS - 1
while left <= right:
# avoid overflow in other languages
mid = left + ((right - left) // 2)
if target < matrix[row][mid]:
right = mid - 1
elif target > matrix[row][mid]:
left = mid + 1
else:
return True
return False
</code></pre>
<p>In trying to figure out what was optimal in big-O notation, it occurred to me that they might be the same because <code>log(xy) = log(x) + log(y)</code> in math. Are they essentially the same?</p>
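<p>A quick numeric check of the identity that would make the two complexities equivalent:</p>

```python
import math

# log(rows * cols) == log(rows) + log(cols) for any positive sizes,
# so O(log(n*m)) and O(log n + log m) describe the same growth rate.
for rows, cols in [(4, 5), (100, 3), (1024, 1024)]:
    assert math.isclose(math.log2(rows * cols),
                        math.log2(rows) + math.log2(cols))
print("log2(n*m) == log2(n) + log2(m) holds for all tested sizes")
```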
|
<python><algorithm><time-complexity>
|
2023-04-09 16:00:46
| 1
| 1,794
|
heretoinfinity
|
75,971,479
| 16,813,096
|
Can we change the height of tkinter menu?
|
<p>I am adding lots of options to the tkinter popup menu widget, but it fills the whole screen height (as shown in the image).
I want to limit the number of displayed options in the tkinter menu, like the <strong>tkinter combobox's listbox</strong> height (which can be changed by passing the <code>height</code> parameter)</p>
<p><strong>Is there any way to do the same with menu widget? Please help!</strong></p>
<p><strong>Example:</strong></p>
<pre class="lang-py prettyprint-override"><code>from tkinter import Tk, Menu
# root window
root = Tk()
# create a menubar
menubar = Menu(root)
root.config(menu=menubar)
# create the file_menu
file_menu = Menu(menubar, tearoff=0, bg="grey50", activebackground="blue")
# add lots of menu items to the File menu
for i in range(1,100):
file_menu.add_command(label=i)
# add the File menu to the menubar
menubar.add_cascade(label="Menu",menu=file_menu)
root.mainloop()
</code></pre>
<p><a href="https://i.sstatic.net/d5U9x.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/d5U9x.jpg" alt="screenshot" /></a></p>
|
<python><python-3.x><tkinter><tkinter-canvas><ttk>
|
2023-04-09 15:50:12
| 0
| 582
|
Akascape
|
75,971,433
| 4,115,378
|
How do I "click" accept using requests in Python
|
<p>Here is the html that I am looking to "click"</p>
<pre><code><input id="ctl00_mainContentPlaceHolder_acceptButton" name="ctl00$mainContentPlaceHolder$acceptButton" type="submit" value="I accept the above terms"/>
</code></pre>
<p>The general template that I am using is</p>
<pre><code>import requests
url = 'SOME WEBSITE'
data = {'button': 'click'}
r = requests.post(url, params=data)
</code></pre>
<p>How do I specifically use requests to click this one? I tried a few combinations, but all failed.
Example: <code>data = {'ctl00$mainContentPlaceHolder$acceptButton': 'submit'}</code>
returns with r.content</p>
<pre><code>b'<html>\r\n<body>\r\n\xc2\xa0\r\n<script type="text/javascript">\r\n\xc2\xa0\xc2\xa0 window.location = "/SecurityResponse/?SupportID=4623646759448296673";\r\n</script>\r\n</body>\r\n</html>'
</code></pre>
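<p>For context, my understanding is that this is an ASP.NET WebForms page, so the POST usually has to include the page's hidden form fields (<code>__VIEWSTATE</code> etc.) plus the clicked button's own name/value pair. A stdlib-only sketch of collecting them (toy markup; the field names are just taken from my snippet above):</p>

```python
from html.parser import HTMLParser

class HiddenInputs(HTMLParser):
    """Collects name/value pairs of hidden <input> elements."""
    def __init__(self):
        super().__init__()
        self.fields = {}

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "input" and a.get("type") == "hidden":
            self.fields[a["name"]] = a.get("value", "")

page = '''<form>
<input type="hidden" name="__VIEWSTATE" value="abc"/>
<input id="ctl00_mainContentPlaceHolder_acceptButton"
       name="ctl00$mainContentPlaceHolder$acceptButton"
       type="submit" value="I accept the above terms"/>
</form>'''

parser = HiddenInputs()
parser.feed(page)
data = dict(parser.fields)
# Include the button being "clicked", exactly as name=value:
data["ctl00$mainContentPlaceHolder$acceptButton"] = "I accept the above terms"
print(sorted(data))
# ...then something like requests.Session().post(url, data=data)
```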
|
<python><python-requests>
|
2023-04-09 15:39:00
| 1
| 1,364
|
A1122
|
75,971,357
| 10,392,393
|
dockerized kafka consumer stuck at "Attaching to..." when having the line for iterating over messages
|
<p>I am very new to Docker and Kafka, and have a simple Kafka Python consumer shown below</p>
<p>The following are in my dockerfile:</p>
<pre><code>FROM python:3.10
WORKDIR /app
COPY . /app
RUN pip install --user pip==23.0.1 && pip install pipenv && pipenv install --system
ENV ENVIRONMENT=production
CMD ["python3", "src/consumer.py"]
</code></pre>
<p>as well as my yaml file for compose:</p>
<pre><code>version: '3'
services:
zookeeper:
image: confluent/zookeeper
container_name: zookeeper
ports:
- "2181:2181"
kafka:
image: confluent/kafka
container_name: kafka
ports:
- "9092:9092"
environment:
KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
publisher:
container_name: publisher
build:
context: ./publisher
dockerfile: Dockerfile
environment:
KAFKA_BOOTSTRAP_SERVERS: kafka:9092
KAFKA_TOPIC: metrics
KAFKA_BROKER: kafka:9092
depends_on:
- kafka
consumer:
container_name: consumer
build:
context: ./consumer
dockerfile: Dockerfile
environment:
KAFKA_BOOTSTRAP_SERVERS: kafka:9092
KAFKA_TOPIC: test_topic
KAFKA_BROKER: kafka:9092
depends_on:
- kafka
</code></pre>
<p>in <code>consumer.py</code> I have:.</p>
<pre><code>print('-- consumer script -- ')
import json
from kafka import KafkaProducer, KafkaConsumer
test_topic = "test_topic"
consumer = KafkaConsumer(auto_offset_reset='earliest', bootstrap_servers=['kafka:9092'],
api_version=(0, 10), consumer_timeout_ms=10000)
consumer.subscribe([test_topic])
for message in consumer: # the line that seems to influence docker attachement
print('-----loop works: ', message)
</code></pre>
<p>In one terminal I first run:</p>
<pre><code>docker-compose -f docker-compose.yml up zookeeper
</code></pre>
<p>and similarly for Kafka:</p>
<pre><code>docker-compose -f docker-compose.yml up kafka
</code></pre>
<p>and then:</p>
<pre><code>docker-compose -f docker-compose.yml up consumer
</code></pre>
<p>But the terminal seems stuck with the following lines:</p>
<pre><code>Starting consumer ... done
Attaching to consumer
</code></pre>
<p>When I remove the line <code>for message in consumer</code>, the code runs without issue</p>
|
<python><docker><apache-kafka><docker-compose><kafka-consumer-api>
|
2023-04-09 15:25:16
| 1
| 979
|
Alejandro
|
75,971,333
| 6,430,403
|
Getting error when calling lambda function running locally on docker through browser
|
<p>I have a docker image that I use to deploy my python code on AWS lambda. Then I enable function url and call it using javascript's fetch written inside an HTML file, which runs in a browser. I enabled CORS setting in the lambda configuration. It is functioning properly.<br />
Here is my lambda function:</p>
<pre><code>import json
import base64
def handler(event, context):
print("EVENT")
print(event)
return {
"statusCode": 200,
"body": json.dumps({
"message": "Cool!!!"
}),
"isBase64Encoded": False
}
</code></pre>
<p>Here is my HTML file:</p>
<pre><code><!DOCTYPE html>
<html>
<head>
<meta charset="UTF-8">
</head>
<body>
<div class="container">
<h1>Testing AWS Lambda</h1>
<button type="submit" id="button">Click to send request</button>
</div>
<script>
const button = document.getElementById('button');
button.addEventListener('click', async (e) => {
// const endpointUrl = 'http://localhost:9000/2015-03-31/functions/function/invocations';
const endpointUrl = "https://<hidden-my-function-url>.lambda-url.ap-south-1.on.aws/"
var myHeaders = new Headers();
myHeaders.append("Content-Type", "application/json");
var raw = JSON.stringify({ "from": "browser" });
var requestOptions = {
method: 'POST',
headers: myHeaders,
body: raw,
};
fetch(endpointUrl, requestOptions)
.then(response => response.text())
.then(result => console.log(result))
.catch(error => console.log('error', error));
});
</script>
</body>
</html>
</code></pre>
<p>Here is my Dockerfile:</p>
<pre><code>FROM public.ecr.aws/lambda/python:3.9
COPY app.py ./
CMD ["app.handler"]
</code></pre>
<p>Here is my docker-compose.yml file:</p>
<pre><code>version: '3'
services:
backtest-report-dashboard:
build: .
ports:
- "9000:9000"
</code></pre>
<p>Now when I run this image in docker locally, I get the following errors:<br />
In browser console:
<a href="https://i.sstatic.net/lyqff.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lyqff.png" alt="enter image description here" /></a>
In docker container log:</p>
<pre><code>2023-04-09 20:36:23 START RequestId: d5663539-81bd-45aa-b385-6e02cb12be46 Version: $LATEST
Traceback (most recent call last):e.UnmarshalError: Unable to unmarshal input: Expecting value: line 1 column 1 (char 0)
2023-04-09 20:36:23 END RequestId: d5663539-81bd-45aa-b385-6e02cb12be46
2023-04-09 20:36:23 REPORT RequestId: d5663539-81bd-45aa-b385-6e02cb12be46 Duration: 15.74 ms Billed Duration: 16 ms Memory Size: 3008 MB Max Memory Used: 3008 MB
</code></pre>
<p>I don't know how to resolve this. I am doing this to quickly develop my application locally and then deploy it on Lambda. Thank you.<br />
One more thing: when I call my API using Postman, it works properly.</p>
|
<javascript><python><amazon-web-services><docker><cors>
|
2023-04-09 15:19:52
| 0
| 401
|
Rishabh Gupta
|
75,971,245
| 2,510,104
|
How to collate several protobuf messages into one object
|
<p>I have a protobuf message in the form of:</p>
<pre><code>message CustomMessage
{
SubMessageTypeA a_msg = 1;
SubMessageTypeB b_msg = 2;
SubMessageTypeC c_msg = 3;
}
</code></pre>
<p>I have several objects (assume 3) of this type that each have one of their respective fields populated.
I want to collate these objects into 1 in Python.
Here is a snippet of code I'm unsuccessfully using. Appreciate your help in finding the right answer:</p>
<pre><code>collated_obj = CustomMessage()
for obj in [obj_1, obj_2, obj_3]:
for desc in obj.DESCRIPTOR.fields:
getattr(collated_obj, desc.name).CopyFrom(getattr(obj, desc.name))
</code></pre>
<p>What I'm doing is very brittle and has not been working. For starters, if the field is a basic type (e.g. uint32), <code>getattr</code> fails.
Is there a way to get a reference to a field of a proto other than using <code>getattr</code>? That seems to be the main part I'm stuck on.
I could convert everything to JSON and have an easier life, but I'm trying to avoid repeated conversion and serialization/deserialization if possible.</p>
|
<python><descriptor><proto3><protobuf-python>
|
2023-04-09 15:02:19
| 1
| 541
|
Amir
|
75,971,147
| 2,304,735
|
Q Learning code error while running how can I fix it?
|
<p>I am trying to write a simple Python program that implements Q-learning on the OpenAI Gym environment Frozen Lake. I found the program code on the DataCamp website; you will find the code and link below:</p>
<p>Link:
<a href="https://www.datacamp.com/tutorial/introduction-q-learning-beginner-tutorial" rel="nofollow noreferrer">Q_Learning_Code</a></p>
<pre><code>import numpy as np
import gym
import random
from tqdm import trange
env = gym.make("FrozenLake-v1", render_mode="rgb_array")
env.reset()
env.render()
print("Observation Space", env.observation_space)
print("Sample Observation", env.observation_space.sample())
print("Action Space Shape", env.action_space.n)
print("Action Space Sample", env.action_space.sample())
state_space = env.observation_space.n
print("There are ", state_space, " possible states")
action_space = env.action_space.n
print("There are ", action_space, " possible actions")
def initialize_q_table(state_space, action_space):
Qtable = np.zeros((state_space, action_space))
return Qtable
Qtable_frozenlake = initialize_q_table(state_space, action_space)
def epsilon_greedy_policy(Qtable, state, epsilon):
random_init = random.uniform(0, 1)
if(random_init > epsilon):
action = np.argmax(Qtable[state])
else:
action = env.action_space.sample()
return action
def greedy_policy(Qtable, state):
action = np.argmax(Qtable[state])
return action
n_training_episodes = 10000
learning_rate = 0.7
n_eval_episodes = 100
env_id = "FrozenLake-v1"
max_steps = 99
gamma = 0.95
eval_seed = []
max_epsilon = 1.0
min_epsilon = 0.05
decay_rate = 0.0005
def train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable):
for episode in trange(n_training_episodes):
epsilon = min_epsilon + (max_epsilon - min_epsilon)*np.exp(-decay_rate*episode)
state = env.reset()
step = 0
done = False
for step in range(max_steps):
action = epsilon_greedy_policy(Qtable, state, epsilon)
new_state, reward, done, trunc, info = env.step(action)
Qtable[state][action] = Qtable[state][action] + learning_rate * (reward + gamma * np.max(Qtable[new_state]) - Qtable[state][action])
if(done):
break
state = new_state
return Qtable
Qtable_frozenlake = train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable_frozenlake)
</code></pre>
<p>When I run the program I get the following error:</p>
<p>Traceback (most recent call last):</p>
<pre><code> File "/tmp/ipykernel_15859/3962363982.py", line 80, in <module>
Qtable_frozenlake = train(n_training_episodes, min_epsilon, max_epsilon, decay_rate, env, max_steps, Qtable_frozenlake)
File "/tmp/ipykernel_15859/3962363982.py", line 71, in train
Qtable[state][action] = Qtable[state][action] + learning_rate * (reward + gamma * np.max(Qtable[new_state]) - Qtable[state][action])
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices
</code></pre>
<p>What does the error mean? How can I fix this error?</p>
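<p>Possibly useful for reproducing: my understanding is that newer gym versions return an <code>(observation, info)</code> tuple from <code>reset()</code>, and indexing the Q-table with that tuple triggers exactly this <code>IndexError</code>. A toy check with plain numpy (no gym needed):</p>

```python
import numpy as np

Qtable = np.zeros((16, 4))

state = (0, {"prob": 1})   # shape of env.reset()'s return in newer gym
try:
    Qtable[state]          # dict is not a valid index
except IndexError as e:
    print("IndexError:", e)

obs, info = state          # unpacking the tuple gives a plain integer index
print(Qtable[obs].shape)   # (4,)
```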
|
<python><numpy><reinforcement-learning><openai-gym><q-learning>
|
2023-04-09 14:42:13
| 1
| 515
|
Mahmoud Abdel-Rahman
|
75,971,144
| 4,451,315
|
How to mimic pandas' duplicated in polars
|
<p>In this example:</p>
<pre class="lang-py prettyprint-override"><code>In [6]: pd.Series([1, 2, 2, 3]).duplicated()
Out[6]:
0 False
1 False
2 True
3 False
dtype: bool
In [7]: pl.Series([1, 2, 2, 3]).is_duplicated()
Out[7]:
shape: (4,)
Series: '' [bool]
[
false
true
true
false
]
</code></pre>
<p>pandas tells me that the second <code>2</code> is the duplicated one. However, polars flags both of them as duplicated.</p>
<p>Is there a way to mimic the pandas <code>duplicated</code> behaviour here?</p>
|
<python><pandas><python-polars>
|
2023-04-09 14:41:59
| 1
| 11,062
|
ignoring_gravity
|
75,971,119
| 20,999,526
|
How to reset permissions granted/denied to a pyinstaller executable?
|
<p>I have created an app using pyinstaller that uses <strong>pywebview</strong> and Zoom Meeting SDK. When the user opens the Zoom Meeting for the first time in my app, it automatically asks for microphone and camera permission. If the user denies the permission, there is no way to enable the permission again. How can I ask for microphone and camera permission again? Or is there any way to clear that data so that the app asks for permission again?</p>
|
<python><pyinstaller><zoom-sdk><pywebview>
|
2023-04-09 14:34:49
| 0
| 337
|
George
|
75,971,106
| 3,413,239
|
Parsing complicated polynomial expressions in python
|
<p>I have complicated polynomial expressions in string format. For example, <code>-1/4 + 1/(2*Sqrt[5]) + u^2</code>, <code>u^2 + Root[1 + 84*#1 + 560*#1^2 + 448*#1^3 & , 2, 0]</code>. Is there a way to parse this using <code>sympy</code> or any other library in Python?</p>
<p>Here <code>Root[..., n, 0]</code> is a mathematical expression, where <code>Root[-1 - 2*#1 + 39*#1^2 - 7*#1^3 - 261*#1^4 + 63*#1^5 + 441*#1^6 & , 1, 0]</code> should mean the first root of the polynomial <code>-1 - 2*x + 39*x^2 - 7*x^3 - 261*x^4 + 63*x^5 + 441*x^6</code>.</p>
<p><strong>EDIT</strong></p>
<p>I have the following code which converts the mathematical expression to sympy module. I am struggling to extract various information about the polynomial itself.</p>
<pre><code>from sympy.parsing import mathematical
from sympy import Function, Max, Min, sympify
# Define the Mathematica expressions as strings
expr1_str = "-1/4 + 1/(2*Sqrt[5]) + u^2"
expr2_str = "u^2 + Root[1 + 84*#1 + 560*#1^2 + 448*#1^3 &, 2, 0]"
x = mathematica.parse_mathematica(expr2_str)
</code></pre>
<p>It returns <code>sympy.core.add.Add</code>. It is not clear to me how to convert this into a polynomial expression. I want to know various things about these polynomials, like, degree, coefficients of each term, whether there is a sqrt involved or not, etc.</p>
|
<python><parsing><sympy><polynomials>
|
2023-04-09 14:32:29
| 1
| 5,248
|
pg2455
|
75,971,046
| 3,579,151
|
XML and probably LXML issue
|
<p>I have many XML files that look like this</p>
<pre><code><?xml version="1.0" encoding="utf-8" standalone="no"?>
<reiXmlPrenos>
<Qfl>1808</Qfl>
<fOVE>13.7</fOVE>
<NetoVolumen>613</NetoVolumen>
<Hv>104.2</Hv>
<energenti>
<energent>
<sifra>energy_e</sifra>
<naziv>EE [kWh]</naziv>
<vrednost>238981</vrednost>
</energent>
<energent>
<sifra>energy_to</sifra>
<naziv>Do</naziv>
<vrednost>16359</vrednost>
</energent>
<energent>
<sifra>energy_en</sifra>
<naziv>En</naziv>
<vrednost>0</vrednost>
</energent>
</energenti>
<rei>
<zavetrovanost>2</zavetrovanost>
<cone>
<cona>
<cona_id>1</cona_id>
<cc_si_cona>1110000</cc_si_cona>
<visina_cone>2.7</visina_cone>
<dolzina_cone>14</dolzina_cone>
</cona>
<cona>
<cona_id>2</cona_id>
<cc_si_cona>120000</cc_si_cona>
</cona>
      </cone>
   </rei>
</reiXmlPrenos>
</code></pre>
<p>I would like to extract certain values from those XML files. So I put together with the help from people here code below that is suppose to work:</p>
<pre><code>import pandas as pd
import glob
import os
from lxml import etree
os.chdir(r'R:\...\XML-1122_test')
dir_path = glob.glob('*.xml')
xmls = dir_path
#note: For simplicity, I'm using the well formed version of the xml strings in your question; you'll have to use actual file names and paths
energies = ["xml", "energy_ge", "energy_en", "energy_dteu", "energy_dtlb"]
#I just made up some names - you'll have to use actual names, of course; the first one is for the file identifier - see below
rows = []
for xml in xmls:
row = []
id = "xml-"+str(xmls.index(xml)+1)
#this creates the file identifier
row.append(id)
root = etree.XML(xml.encode())
#in real life, you'll have to use the parse() method
for energy in energies[1:]:
#the '[1:]' is used to skip the first "energy"; it's only used as the file identifier
target = root.xpath(f'//energent[./sifra[.="{energy}"]]/vrednost/text()')
#note the use of f-strings
row.extend( target if len(target)>0 else "0" )
rows.append(row)
print(pd.DataFrame(rows,columns=energies))
</code></pre>
<p>But in the end I get a warning:</p>
<pre><code>File "<string>", line 1
XMLSyntaxError: Start tag expected, '<' not found, line 1, column 1
</code></pre>
<p>Is this an XML issue? Or maybe lxml issue? Does anyone know how to approach this?</p>
<p>Ideally, the result would look like this</p>
<pre><code>xml energy_e energy_en energy_to
xml-1   238981      0       16359
xml-2 ... .. ..
</code></pre>
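As an aside on the error itself: <code>Start tag expected, '&lt;' not found</code> is what an XML parser raises when it is handed something that is not markup — here, most likely the file <em>name</em> string rather than the file's contents. The stdlib parser fails the same way, which makes the distinction easy to demonstrate (a sketch; with lxml you would use `etree.parse(filename)` for files and `etree.XML(...)` only for in-memory XML strings):

```python
import xml.etree.ElementTree as ET

xml_text = "<energent><sifra>energy_e</sifra><vrednost>238981</vrednost></energent>"

# Parsing actual XML content works:
root = ET.fromstring(xml_text)
print(root.find("vrednost").text)  # 238981

# Feeding the parser a file *name* instead of content fails at
# line 1, column 1, just like the lxml error in the question:
try:
    ET.fromstring("file1.xml")
except ET.ParseError as exc:
    print("ParseError:", exc)
```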
|
<python><xml><dataframe><lxml>
|
2023-04-09 14:17:38
| 2
| 419
|
energyMax
|
75,970,820
| 4,508,962
|
Numba compiler fails with np.sum?
|
<p>I have a function that I want to accelerate using Numba (it computes the log-likelihood sum of residual given cov-var matrices, just for the context but <strong>this is not important for the question</strong>)</p>
<pre><code>@jit(nopython=True)
def log_ll_norm_multivar(sigma, epsilon, mean=None) -> float:
"""
This function computes the log-likelihood of a multivariate normal law applied to t observations of n size
Args:
sigma : the variance-covariance matrix, at each t, or constant. Must be ndarray(n,n) or ndarray(t,n,n)
If it is (n,n), it will be copied at all times to have a (t,n,n)
epsilon : ndarray(t, n) residuals
Returns:
float : Sum of the log likelihood of the residual, given the sigma variance-covariance matrices
"""
t_max, n = epsilon.shape
if sigma.shape == (n, n):
sigma = np.array([sigma for _ in range(0, t_max)])
if sigma.shape != (t_max, n, n):
raise IllegalParameterException("Sigma shape must be t*n*n")
if mean is None:
mean = np.zeros((t_max, n))
if mean.shape != (t_max, n):
raise Exception("If provided, mean must be of shape (T,n)")
epsilon_centered = epsilon - mean
sum_det_sigma = np.sum(np.log(np.linalg.det(sigma)))
inv_sigma = inv(sigma)
third_term = np.array([
epsilon_centered[t].transpose()
.dot(inv_sigma[t])
.dot(epsilon_centered[t])
for t in range(0, t_max)
]).sum()
return -1 / 2 * (t_max * n * log(np.pi * 2) + sum_det_sigma + third_term)
</code></pre>
<p>Numba fails to "compile" line <code>sum_det_sigma = np.sum(np.log(np.linalg.det(sigma)))</code> and says:</p>
<pre><code>numba.core.errors.TypingError: Failed in nopython mode pipeline (step: nopython frontend)
No implementation of function Function(<function sum at 0x10d1dfdc0>) found for signature:
>>> sum(float64)
There are 2 candidate implementations:
- Of which 2 did not match due to:
Overload of function 'sum': File: numba/core/typing/npydecl.py: Line 379.
With argument(s): '(float64)':
No match.
....
def log_ll_norm_multivar(sigma, epsilon, mean=None) -> float:
<source elided>
np.sum(np.log(np.linalg.det(sigma)))
</code></pre>
<p>After I debug the code without <code>@jit(nopython=True)</code>, it appears that <code>np.log(np.linalg.det(sigma))</code> is a np.array of shape(1000,), so we're not in the context of this post : <a href="https://stackoverflow.com/questions/46619855/numba-nopython-error-with-np-sum">Numba nopython error with np.sum</a> where the data on which <code>np.sum</code> was applied was a scalar.</p>
<p>Just to be sure, I tried this code:</p>
<pre><code>@jit(nopython=True)
def test():
arr_log = np.log(np.ones((1000,), dtype=np.float64))
return arr_log.sum()
</code></pre>
<p>And it works perfectly.</p>
<p>What is going on with my code ?</p>
|
<python><numpy><numba>
|
2023-04-09 13:30:09
| 0
| 1,207
|
Jerem Lachkar
|
75,970,819
| 11,925,464
|
Frequency rolling count with groupby, Pandas
|
<p>I'm trying to get the frequency count of a groupby which is grouped by name and date. I am having trouble figuring out how to do a 3 days roll count prior to the current day.</p>
<p>example: on 2022-01-05, John's 3 days range are 2022-01-05 and 2022-01-01 with 2 times each, hence, showing 4 in the '3 days roll count' column. And '3 day completion count' column shows that John complete the work 1 time when working on 2022-01-05 and 2022-01-01. Kindly scroll down 'result trying to get' for df sample.</p>
<p>for clarification, 3 days range is a rolling range counting between 1 to 3, hence, the example shows 2 days. And the data starts on 2022-01-01, therefore, any dates before 2022-01-01 are reflects as 0. Lastly, on 2022-01-08, John's 3 days roll count equals to 4 is because the 3 days are 3rd, 5th, and 7th.</p>
<p>sample df</p>
<pre><code>╔════╦════════════╦══════╦════════════╗
║ ║ date_id ║ name ║ completion ║
╠════╬════════════╬══════╬════════════╣
║ 0 ║ 2022-01-01 ║ John ║ Y ║
║ 1 ║ 2022-01-01 ║ John ║ N ║
║ 2 ║ 2022-01-02 ║ Sam ║ N ║
║ 3 ║ 2022-01-02 ║ Sam ║ N ║
║ 4 ║ 2022-01-03 ║ John ║ N ║
║ 5 ║ 2022-01-03 ║ John ║ N ║
║ 6 ║ 2022-01-03 ║ May ║ Y ║
║ 7 ║ 2022-01-04 ║ Sam ║ Y ║
║ 8 ║ 2022-01-04 ║ Sam ║ Y ║
║ 9 ║ 2022-01-05 ║ John ║ Y ║
║ 10 ║ 2022-01-05 ║ May ║ N ║
║ 11 ║ 2022-01-05 ║ May ║ Y ║
║ 12 ║ 2022-01-06 ║ Sam ║ N ║
║ 13 ║ 2022-01-07 ║ John ║ Y ║
║ 14 ║ 2022-01-08 ║ John ║ N ║
╚════╩════════════╩══════╩════════════╝
</code></pre>
<p>result trying to get</p>
<pre><code>╔════╦════════════╦══════╦════════════╦═══════════════════╦════════════════════════╗
║ ║ date_id ║ name ║ completion ║ 3 days roll count ║ 3 day completion count ║
╠════╬════════════╬══════╬════════════╬═══════════════════╬════════════════════════╣
║ 0 ║ 2022-01-01 ║ John ║ Y ║ 0 ║ 0 ║
║ 1 ║ 2022-01-01 ║ John ║ N ║ 0 ║ 0 ║
║ 2 ║ 2022-01-02 ║ Sam ║ N ║ 0 ║ 0 ║
║ 3 ║ 2022-01-02 ║ Sam ║ N ║ 0 ║ 0 ║
║ 4 ║ 2022-01-03 ║ John ║ N ║ 2 ║ 1 ║
║ 5 ║ 2022-01-03 ║ John ║ N ║ 2 ║ 1 ║
║ 6 ║ 2022-01-03 ║ May ║ Y ║ 0 ║ 0 ║
║ 7 ║ 2022-01-04 ║ Sam ║ Y ║ 2 ║ 0 ║
║ 8 ║ 2022-01-04 ║ Sam ║ Y ║ 2 ║ 0 ║
║ 9 ║ 2022-01-05 ║ John ║ Y ║ 4 ║ 1 ║
║ 10 ║ 2022-01-05 ║ May ║ N ║ 1 ║ 1 ║
║ 11 ║ 2022-01-05 ║ May ║ Y ║ 1 ║ 1 ║
║ 12 ║ 2022-01-06 ║ Sam ║ N ║ 4 ║ 2 ║
║ 13 ║ 2022-01-07 ║ John ║ Y ║ 5 ║ 2 ║
║ 14 ║ 2022-01-08 ║ John ║ N ║ 4 ║ 2 ║
╚════╩════════════╩══════╩════════════╩═══════════════════╩════════════════════════╝
</code></pre>
<p>code for sample df</p>
<pre><code> 'date_id': [
'2022-01-01','2022-01-01','2022-01-02','2022-01-02','2022-01-03',
'2022-01-03','2022-01-03','2022-01-04','2022-01-04','2022-01-05',
'2022-01-05','2022-01-05','2022-01-06','2022-01-07','2022-01-08',
],
'name': [
'John','John','Sam','Sam','John',
'John','May','Sam','Sam','John',
'May','May','Sam','John','John',
],
'completion': [
'Y','N','N','N','N',
'N','Y','Y','Y','Y',
'N','Y','N','Y','N'
],
})
</code></pre>
<p>P.S. I've tried using <code>apply</code>, <code>lambda</code>, and <code>shift</code> with <code>pd.to_timedelta</code> found here to shift the dates, but wasn't able to get the count right. Kindly advise.</p>
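Setting pandas aside for a moment, the windowing being described — count a person's rows dated 1 to 3 days before the current row's date — is easy to pin down in pure Python first. The worked numbers in the question are hard to reproduce exactly, so this sketch implements the literal "1 to 3 days prior" reading on made-up data:

```python
from datetime import date, timedelta

rows = [
    (date(2022, 1, 1), "John"),
    (date(2022, 1, 3), "John"),
    (date(2022, 1, 3), "John"),
    (date(2022, 1, 5), "John"),
]

def prior_3day_count(rows, day, name):
    # Count this person's rows dated between 1 and 3 days before `day`.
    lo, hi = day - timedelta(days=3), day - timedelta(days=1)
    return sum(1 for d, n in rows if n == name and lo <= d <= hi)

print([prior_3day_count(rows, d, n) for d, n in rows])  # [0, 1, 1, 2]
```

The same predicate can then be translated into a pandas groupby-per-name rolling window once the intended day range is settled.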
|
<python><pandas><group-by><count><shift>
|
2023-04-09 13:29:45
| 1
| 597
|
ManOnTheMoon
|
75,970,673
| 653,397
|
Unable to call Language Studio Custom Named Entity Recognition Endpoint
|
<p>I have trained and deployed a <code>Custom Named Entity Recognition</code> model in <code>Language Studio</code>. The model is successfully deployed and I can test it from Language Studio UI, I can see the detected entities. But when I try to access the endpoint from either Postman or Python I get the message <code><Response [202]></code>, below are the configuration I am using for accessing the endpoint from Python or Postman.</p>
<p><strong>Code</strong></p>
<pre><code>import json
import requests
url = "https://<language_service>.cognitiveservices.azure.com/language/analyze-text/jobs?api-version=2022-10-01-preview"
payload = json.dumps({
"tasks": [
{
"kind": "CustomEntityRecognition",
"parameters": {
"projectName": "<project_name>",
"deploymentName": "<deployment_name>",
"stringIndexType": "TextElement_v8"
}
}
],
"displayName": "CustomTextPortal_CustomEntityRecognition",
"analysisInput": {
"documents": [
{
"id": "1",
"text": "<text>",
"language": "en-us"
},
{
"id": "2",
"text": "<text>",
"language": "en-us"
}
]
}
})
headers = {
'Content-Type': 'application/json',
'Ocp-Apim-Subscription-Key': '<key>'
}
response = requests.post(url, headers=headers, data=payload)
print(response)
</code></pre>
<p>Can anyone please tell me what am I missing?</p>
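Worth noting: HTTP 202 is not an error — for this asynchronous analyze-text endpoint it means the job was <em>accepted</em>, and the result has to be polled from the URL returned in the response headers (conventionally `operation-location` for Azure async APIs; verify the exact header name against the service docs). A minimal, offline sketch of that decision logic — no real network call, the header name is the assumption:

```python
def handle_submit_response(status_code, headers):
    """Decide the next step after submitting an analyze-text job."""
    if status_code == 202:
        # Job accepted: poll this URL (sending the same key header)
        # until its status becomes 'succeeded', then read the results.
        return ("poll", headers.get("operation-location"))
    if 200 <= status_code < 300:
        return ("done", None)
    return ("error", status_code)

print(handle_submit_response(202, {"operation-location": "https://example.invalid/jobs/1"}))
```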
|
<python><azure><named-entity-recognition><azure-cognitive-services><language-studio>
|
2023-04-09 13:04:27
| 1
| 1,930
|
Atinesh Singh
|
75,970,490
| 9,494,140
|
how to edit XML file in python
|
<p>I have an XML file that I need to change only 2 attributes inside :</p>
<pre class="lang-xml prettyprint-override"><code><?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2" xmlns:kml="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
<Document>
<name>lines.kmz</name>
<Style id="LineStyle00">
<LabelStyle>
<color>00000000</color>
<scale>0</scale>
</LabelStyle>
..............
</code></pre>
<p>All I need to change is the <code>color</code> and <code>scale</code> inside the <code>LabelStyle</code> tag.</p>
<p>here is what I have tried :</p>
<pre class="lang-py prettyprint-override"><code>import xml.etree.ElementTree as ET
def update_label_style(kml_path, new_color, new_scale):
# parse the KML file
tree = ET.parse(kml_path)
root = tree.getroot()
# define the namespace for KML elements
ns = {'kml': 'http://www.opengis.net/kml/2.2'}
# find all LabelStyle elements and update their color and scale values
for label_style in root.findall('.//kml:LabelStyle', ns):
label_style.find('kml:color', ns).text = new_color
label_style.find('kml:scale', ns).text = new_scale
# write the updated KML file back to disk
tree.write(kml_path, encoding='utf-8', xml_declaration=True)
print("Changed the label style")
</code></pre>
<p>after the edits done the XML file are like :</p>
<pre class="lang-xml prettyprint-override"><code><?xml version='1.0' encoding='utf-8'?>
<ns0:kml xmlns:ns0="http://www.opengis.net/kml/2.2">
<ns0:Document>
<ns0:name>lines.kmz</ns0:name>
<ns0:Style id="LineStyle00">
<ns0:LabelStyle>
</code></pre>
<p>the problem is that it has added <code>ns0</code> before every tag, also it has removed an entire line which is</p>
<pre><code><kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2" xmlns:kml="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom">
</code></pre>
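On the `ns0` prefix specifically: ElementTree invents it whenever it serialises a namespace that was never registered. Registering the KML namespace as the <em>default</em> prefix before writing keeps the original-looking output (the dropped `gx`/`atom` declarations would likewise need re-registering, and are only re-emitted where actually used) — a small sketch:

```python
import xml.etree.ElementTree as ET

xml_text = ('<kml xmlns="http://www.opengis.net/kml/2.2">'
            '<Document><Style id="LineStyle00"><LabelStyle>'
            '<color>00000000</color><scale>0</scale>'
            '</LabelStyle></Style></Document></kml>')

# Map the KML namespace to the default (empty) prefix before serialising:
ET.register_namespace('', 'http://www.opengis.net/kml/2.2')

root = ET.fromstring(xml_text)
out = ET.tostring(root, encoding='unicode')
print('ns0' in out)  # False: tags come out as <kml>, <Document>, ...
```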
|
<python><xml><elementtree><xml-namespaces>
|
2023-04-09 12:22:35
| 0
| 4,483
|
Ahmed Wagdi
|
75,970,233
| 6,057,371
|
pandas dataframe get rows when list values in specific columns meet certain condition
|
<p>I have a dataframe:</p>
<pre><code>df = A B
1 [0.2,0.8]
2 [0.6,0.9]
</code></pre>
<p>I want to get only the rows where all the values of B are >= 0.5.
So here:</p>
<pre><code>new_df = A B
2 [0.6, 0.9]
</code></pre>
<p>What is the best way to do it?</p>
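The predicate itself — keep a row only if every element of its list is >= 0.5 — is just `all()` over the list. A pure-Python sketch (in pandas the same predicate would typically be applied through `df['B'].apply(...)` as a boolean mask):

```python
rows = [(1, [0.2, 0.8]), (2, [0.6, 0.9])]

# Keep only the rows whose entire list satisfies the threshold:
kept = [(a, b) for a, b in rows if all(v >= 0.5 for v in b)]
print(kept)  # [(2, [0.6, 0.9])]
```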
|
<python><pandas><dataframe><data-munging>
|
2023-04-09 11:31:17
| 4
| 2,050
|
Cranjis
|
75,970,153
| 2,707,864
|
Get rid of prefactors, without altering the rest of expression
|
<p>I have this expression in <code>sympy</code></p>
<pre><code>expr = 60378125*139**(13/16)*2143750**(1/8)*x**(9/4)*m**(1/4)/(15420489728*y**(1/4)*z**(3/4))
</code></pre>
<p>(Note: this actually comes from other calculations, which are too long to post here. So I am not sure I can work with anything different, e.g., an alternative expression using <code>Rational</code>).</p>
<p>I want to get rid of all numerical factors, keeping the rest of the expression, as in</p>
<pre><code>expr2 = x**(9/4)*m**(1/4)/(y**(1/4)*z**(3/4))
</code></pre>
<p>How can I easily do that?</p>
|
<python><sympy>
|
2023-04-09 11:10:14
| 3
| 15,820
|
sancho.s ReinstateMonicaCellio
|
75,970,040
| 216,190
|
Workaround Mutable Default Arguments in Python
|
<p>Going through the <a href="https://docs.python.org/3/tutorial/controlflow.html#default-argument-values" rel="nofollow noreferrer">python documentation</a>, I came across below.</p>
<blockquote>
<p><strong>Important warning</strong>: The default value is evaluated only once. This
makes a difference when the default is a mutable object such as a
list, dictionary, or instances of most classes. For example, the
following function accumulates the arguments passed to it on
subsequent calls:</p>
<pre><code>def f(a, L=[]):
L.append(a)
return L
print(f(1))
print(f(2))
print(f(3))
</code></pre>
<p>This will print</p>
<pre><code>[1]
[1, 2]
[1, 2, 3]
</code></pre>
<p>If you don’t want the default to be shared between subsequent calls, you can write the function like this instead:</p>
<pre><code>def f(a, L=None):
if L is None:
L = []
L.append(a)
return L
</code></pre>
</blockquote>
<p>Now as I understand, the first example behaves the way it does, because <code>L</code> binds to an empty list at the definition, and hence any changes to that (on multiple calls) will reflect in the original list.</p>
<p>And hence, in the second example, we don't bind <code>L</code> to anything (i.e. <code>None</code>), and hence on multiple calls, we'll always get it set to <code>None</code>.
So my question is why do we even need the if statement in the second example <code>if L is None</code>, wouldn't it always be true?</p>
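A quick sketch of why the `if L is None` check is not redundant: the default only applies when the caller omits the argument, and a caller-supplied list makes the check false:

```python
def f(a, L=None):
    if L is None:  # True only when the caller didn't pass a list
        L = []
    L.append(a)
    return L

print(f(1))        # [1]
print(f(2))        # [2] -- a fresh list each call, nothing shared
mine = [0]
print(f(3, mine))  # [0, 3] -- here L is mine, so the check is False
```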
|
<python><default-parameters>
|
2023-04-09 10:42:14
| 1
| 24,333
|
Kraken
|
75,969,974
| 1,335,473
|
OpenAI GPT-3 API: Why do I get an unexpected response?
|
<p>I am connecting to the GPT-3 API through a Jupyter Notebook. This is the code:</p>
<pre><code>import openai
import os
# Set up your API key
openai.api_key = os.environ["OPENAI_API_KEY"]
# Choose the API endpoint
model_engine = "davinci"
# Create a prompt
prompt = "Hello, ChatGPT!"
# a temperature of 0.5 returns gibberish
# Generate a response
response = openai.Completion.create(
engine = model_engine,
prompt = prompt,
max_tokens = 1024,
temperature = 0.5,
frequency_penalty = 0.5,
presence_penalty = 0.5
)
# Print the response
print(response.choices[0].text)
</code></pre>
<p>Attempting to debug the code led to me playing around with the <code>temperature</code>, <code>frequency_penalty</code> and <code>presence_penalty</code>. I figure I'm doing something wrong if I can't make it work with such a simple prompt.</p>
<p>If you want an example of the unexpected responses I am getting from the simple prompt above, here is the beginning of a few of them:</p>
<blockquote>
<p>I’m here to review a product that I was sent for free. This is not a
paid advertisement and all opinions are my own. I have been using the
new Bamboo Pen & Touch tablet from Wacom for about a month now and I
have to say that I am very impressed with this product! The Bamboo is
a tablet designed for the everyday user, whether you are an artist or
just someone who likes to sketch on the computer. It’s also great for
people like me who use their tablets primarily for writing.</p>
</blockquote>
<blockquote>
<p>ChatGPT is a chat bot powered by Google Assistant. It can handle up to
10,000 messages per month for free and more if you pay for premium
services. This bot can be used for customer support, sales and
marketing, human resources, and more. In this tutorial, I will show
you how to create a chatbot using ChatGPT with PHP. We will use
Laravel as a backend framework. If you don’t have an account at
ChatGPT yet, sign up here first.</p>
</blockquote>
<p>I was expecting a simple "Hi, how can I assist". What's incorrect here?</p>
|
<python><jupyter-notebook><openai-api><gpt-3>
|
2023-04-09 10:26:58
| 1
| 824
|
JoeyC
|
75,969,972
| 2,744,242
|
How to build Vue to be served by Python?
|
<p>I started a project in Vue 3 (Vite) with the name <code>poc_test</code> and chose everything <code>No</code></p>
<pre><code>npm init vue@latest
✔ Project name: poc_test
✔ Add TypeScript? … No / Yes
✔ Add JSX Support? … No / Yes
✔ Add Vue Router for Single Page Application development? … No / Yes
✔ Add Pinia for state management? … No / Yes
✔ Add Vitest for Unit testing? … No / Yes
✔ Add Cypress for both Unit and End-to-End testing? … No / Yes
✔ Add ESLint for code quality? … No / Yes
✔ Add Prettier for code formatting? … No / Yes
</code></pre>
<p>I executed the commands to install and then build:</p>
<pre><code>cd poc_test
npm install
npm run build
cd dist
</code></pre>
<p>And finally, I wanted a simple web server to serve the <code>poc_test</code> project, so I executed the command.</p>
<pre><code>python -m http.server
</code></pre>
<p>But this error appears in <code>http://localhost:8000/</code>:</p>
<pre><code>Failed to load module script: Expected a JavaScript module script but the server responded with a MIME type of "text/plain". Strict MIME type checking is enforced for module scripts per HTML spec.
</code></pre>
<p>How could I build a Vue 3 project that generates only HTML, JS, and CSS files to be served by <code>python -m http.server</code>?</p>
<p>If I use <code>npm install -g http-server</code> and, inside <code>dist</code>, run <code>http-server -p 8000</code>, it works — but it doesn't work with <code>python -m http.server</code>.</p>
<p><strong>vite.config.js</strong>
this is the configuration file I am using</p>
<pre><code>import { fileURLToPath, URL } from 'node:url'
import { defineConfig } from 'vite'
import vue from '@vitejs/plugin-vue'
// https://vitejs.dev/config/
export default defineConfig({
plugins: [vue()],
resolve: {
alias: {
'@': fileURLToPath(new URL('./src', import.meta.url))
}
}
})
</code></pre>
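One known cause of the `text/plain` MIME type with `python -m http.server` is the OS-level MIME database (the Windows registry in particular) mapping `.js` incorrectly. A small wrapper server that pins the mapping is one workaround — the override below relies on `SimpleHTTPRequestHandler` consulting its `extensions_map` before falling back to `mimetypes`, which holds on current CPython:

```python
import http.server

class ModuleFriendlyHandler(http.server.SimpleHTTPRequestHandler):
    # Force the correct MIME type for ES modules regardless of the
    # OS mimetypes database.
    extensions_map = {
        **http.server.SimpleHTTPRequestHandler.extensions_map,
        ".js": "text/javascript",
        ".mjs": "text/javascript",
    }

# From inside dist/, serve on port 8000 with:
#   http.server.test(HandlerClass=ModuleFriendlyHandler, port=8000)
print(ModuleFriendlyHandler.extensions_map[".js"])  # text/javascript
```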
|
<python><vue.js><vuejs3><vite>
|
2023-04-09 10:26:35
| 0
| 13,406
|
rafaelcb21
|
75,969,938
| 6,543,779
|
Not able to divide forloop.counter in django template?
|
<p>I am using Django 4.1.5, and I am trying to divide the current forloop counter by 3.
The problem is, I have tried so many different syntaxes, but nothing works:</p>
<pre><code><h3> {{ forloop.counter|div:3 }} </h3>
Exception Value:
Invalid filter: 'div'
<h3> {{ forloop.counter|div:"3" }} </h3>
Exception Value:
Invalid filter: 'div'
<h3> {{ forloop.counter//3 }} </h3>
Exception Value:
Could not parse the remainder: '//3' from 'forloop.counter//3'
<h3> {{ forloop.counter/3 }} </h3>
Exception Value:
Could not parse the remainder: '/3' from 'forloop.counter/3'
</code></pre>
<p>how can i get the divided value?</p>
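There is no built-in `div` filter in Django's template language; a custom filter is the usual route. A sketch of one (the `templatetags/math_filters.py` module name is made up; the commented lines are the Django registration boilerplate, left out so the function itself stays runnable on its own):

```python
# templatetags/math_filters.py  (hypothetical module name)
# from django import template
# register = template.Library()

# @register.filter(name="div")
def div(value, arg):
    """Integer-divide a template value by arg; empty string on bad input."""
    try:
        return int(value) // int(arg)
    except (TypeError, ValueError, ZeroDivisionError):
        return ""

print(div(7, 3))  # 2
```

With the filter registered and the library loaded via `{% load math_filters %}`, `{{ forloop.counter|div:3 }}` would then work; Django's built-in `widthratio` tag is another option for simple ratios.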
|
<python><django>
|
2023-04-09 10:20:28
| 2
| 499
|
Tarun K
|
75,969,875
| 11,098,908
|
What is the need to define a function to return a class's attribute?
|
<p>I came across some code like this</p>
<pre><code>class Student(Person):
def __init__(self, age):
self.age = age
def get_age(self):
return self.age
</code></pre>
<p>Could you please explain the purpose of the function <code>get_age()</code>? If we want to know a student's age, can't we simply do this:</p>
<pre><code>student = Student(20)
student_age = student.age
</code></pre>
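For context on the idiom: in Python, plain attribute access is normally preferred, and a getter only earns its keep once it has to <em>do</em> something (validation, computation, a renamed internal field) — at which point `@property` lets the call-site stay `student.age`. A sketch:

```python
class Student:
    def __init__(self, age):
        self._age = age

    @property
    def age(self):
        # Room to add validation or derivation later without changing callers.
        return self._age

student = Student(20)
print(student.age)  # 20 -- attribute syntax, method call under the hood
```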
|
<python><class><oop><instance>
|
2023-04-09 10:06:26
| 1
| 1,306
|
Nemo
|
75,969,842
| 18,157,326
|
is the process still running in docker container
|
<p>When I login into the python container running in Kubernetes and using this command to check the process:</p>
<pre><code>root@chat-server-service-594dc96485-rpphl:~/chat-server# ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 1 0.0 0.2 97084 17560 ? Ssl 17:19 0:01 python3 -m uvicorn main:app --port 8001 --host 0.0.0.0 --reload
root 8 0.0 0.0 12224 5920 ? S 17:19 0:00 /usr/local/bin/python3 -c from multiprocessing.resource_tracker import main;main(4)
root 9 0.0 0.0 0 0 ? Z 17:19 0:01 [python3] <defunct>
root 14 0.0 0.0 4292 1504 pts/0 Ss 17:47 0:00 bash
root 65 0.0 0.0 6752 1536 pts/0 R+ 17:54 0:00 ps aux
</code></pre>
<p>shows the pid 1 are listening on port 8001, but I could not access the 8001 port, shows that connection refused:</p>
<pre><code>root@chat-server-service-594dc96485-rpphl:~/chat-server# curl http://0.0.0.0:8001
curl: (7) Failed to connect to 0.0.0.0 port 8001: Connection refused
root@chat-server-service-594dc96485-rpphl:~/chat-server# curl http://0.0.0.0:8001
curl: (7) Failed to connect to 0.0.0.0 port 8001: Connection refused
root@chat-server-service-594dc96485-rpphl:~/chat-server# curl http://0.0.0.0:8001
</code></pre>
<p>and I still could run this command in terminal:</p>
<pre><code>root@chat-server-service-594dc96485-rpphl:~/chat-server# python3 -m uvicorn main:app --port 8001 --host 0.0.0.0 --reload
INFO: Will watch for changes in these directories: ['/root/chat-server']
INFO: Uvicorn running on http://0.0.0.0:8001 (Press CTRL+C to quit)
INFO: Started reloader process [58] using WatchFiles
^CINFO: Stopping reloader process [58]
</code></pre>
<p>is that 1 process still running? what should I do to make the startup command work. this is my docker file:</p>
<pre><code>FROM python:3.8
LABEL example (example@gmail.com)
ENV LANG=en_US.UTF-8 \
LC_ALL=en_US.UTF-8 \
TZ=Asia/Shanghai
RUN ln -snf /usr/share/zoneinfo/$TZ /etc/localtime \
&& echo $TZ > /etc/timezone \
&& mkdir -p /root/chat-server
ADD . /root/chat-server/
EXPOSE 8001
RUN pip3 install -r /root/chat-server/requirements.txt
WORKDIR /root/chat-server/
ENTRYPOINT exec python3 -m uvicorn main:app --port 8001 --host 0.0.0.0 --reload
</code></pre>
<p>I added <code>lsof</code> package and found the fastapi did not listening on this port:</p>
<pre><code>root@chat-server-service-6f59d77848-wcj6c:~/chat-server# ps aux|grep python
root 1 0.1 0.2 97084 17784 ? Ssl 17:27 0:00 python3 -m uvicorn main:app --port 8001 --host 0.0.0.0 --reload
root 7 0.0 0.0 12224 6852 ? S 17:27 0:00 /usr/local/bin/python3 -c from multiprocessing.resource_tracker import main;main(4)
root 8 0.9 0.0 0 0 ? Z 17:27 0:02 [python3] <defunct>
root 27 0.0 0.0 3236 720 pts/0 S+ 17:31 0:00 grep python
root@chat-server-service-6f59d77848-wcj6c:~/chat-server# lsof -i:8001
root@chat-server-service-6f59d77848-wcj6c:~/chat-server#
</code></pre>
<p>now the problem is: the process running, but did not listening on port 8001. I also tried to pull the image to my local machine, this problem still exists.</p>
|
<python><docker><uvicorn>
|
2023-04-09 09:59:57
| 0
| 1,173
|
spark
|
75,969,694
| 11,092,636
|
cmap not behaving like it should although the values of the array are all in the cmap indices
|
<p>MRE:</p>
<pre class="lang-py prettyprint-override"><code>import numpy as np
import matplotlib.pyplot as plt
# Define a custom colormap (for the masks images)
colors = ['black', 'red', 'blue', 'purple', 'green']
labels = ['a', 'b', 'c', 'd', 'e']
cmap = mcolors.ListedColormap(colors)
# Function to display the legend (for the masks images)
def show_legend(ax):
patches = [mpatches.Patch(color=colors[i], label=labels[i]) for i in range(len(labels))]
ax.legend(handles=patches, bbox_to_anchor=(1.05, 1), loc='upper left', borderaxespad=0.)
arr = np.array([[4., 4., 4., 4., 4., 4., 4., 4., 5., 5.],
[4., 4., 4., 4., 4., 4., 4., 4., 5., 5.],
[4., 4., 4., 4., 4., 4., 4., 4., 5., 5.],
[4., 4., 4., 4., 4., 4., 5., 5., 5., 5.],
[4., 4., 4., 4., 4., 4., 5., 5., 5., 5.],
[4., 4., 4., 4., 4., 4., 5., 5., 5., 5.],
[4., 4., 4., 4., 4., 4., 5., 5., 5., 5.],
[5., 5., 4., 4., 5., 5., 5., 5., 5., 5.],
[5., 5., 4., 4., 5., 5., 5., 5., 5., 5.],
[5., 5., 4., 4., 5., 5., 5., 5., 5., 5.]]
)
plt.imshow(arr, cmap=cmap)
show_legend(plt.gca())
</code></pre>
<p>The image I get is:
<a href="https://i.sstatic.net/nR40N.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/nR40N.png" alt="enter image description here" /></a></p>
<p>From my understanding, I don't need to use <code>norm</code> because the mask values (my array values) are already within the range of the colormap indices.</p>
<p>However, I should get purple and green.</p>
<p>My intuition is that because there is no other values than 4 and 5, it maps the 4 to 0? But I don't understand why it would work like that? And how to fix it easily?</p>
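The intuition is right: without an explicit norm (or `vmin`/`vmax`), `imshow` rescales the data range — here just {4, 5} — onto [0, 1] before the colormap lookup, so 4 lands on the first listed color and 5 on the last. A pure-Python sketch of roughly what that lookup does; pinning `vmin`/`vmax` to the full label range (or using a `BoundaryNorm`) restores value-as-index behaviour:

```python
colors = ['black', 'red', 'blue', 'purple', 'green']

def listed_lookup(value, vmin, vmax, colors):
    # Roughly what imshow + ListedColormap do: normalise to [0, 1],
    # then scale into the N discrete entries (clipped at the ends).
    frac = (value - vmin) / (vmax - vmin)
    idx = max(0, min(int(frac * len(colors)), len(colors) - 1))
    return colors[idx]

# Autoscaled to the data range {4, 5}: 4 -> first color, 5 -> last.
print(listed_lookup(4, 4, 5, colors))  # black
print(listed_lookup(5, 4, 5, colors))  # green
```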
|
<python><matplotlib>
|
2023-04-09 09:25:31
| 1
| 720
|
FluidMechanics Potential Flows
|
75,969,529
| 11,246,330
|
`pip install --upgrade ` behaves differently for pyproject.toml and requirements.txt
|
<p>If I have a <code>venv</code> with <code>black==22.12.0</code> installed and I run <code>pip install -U black</code> then <code>pip</code> will update to the newest version of black. I will see the same behavior if <code>black</code> is listed in a requirements.txt and I run <code>pip install -U -r requirements.txt</code>. In this case pip eagerly updates if a new version is available.</p>
<p>Now suppose I have a <code>venv</code> with an editable install created with <code>pip install -e .</code> from a <code>pyproject.toml</code> (example below). If this has <code>black==22.12.0</code> installed and I then run <code>pip install -U -e .</code>, black is not updated to the latest version. If I pin the version of black in <code>pyproject.toml</code> to, say, <code>black>=23</code>, then <code>pip install -U -e .</code> will update black. In this case pip only updates if the current version does not satisfy the requirements.</p>
<p>I cannot find anything in the docs describing this behavior. Is it intentional or a fault that <code>pip install -U</code> behaves differently in these two contexts?</p>
<p>Tests were made with pip version 23.0.1 and Python 3.10.10.</p>
<pre class="lang-ini prettyprint-override"><code>[build-system]
requires = ["setuptools>=65.5"]
build-backend = "setuptools.build_meta"
[project]
name = "packaging_example"
version = "0.0.7"
description = "python packaging example"
requires-python = "==3.10"
dependencies = [
"black"
]
[tool.setuptools.packages.find]
where = ["src"]
</code></pre>
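This difference matches pip's documented `--upgrade-strategy` (default `only-if-needed`): requirements you <em>name</em> — `black` on the command line, or each line of a requirements.txt — are always upgraded by `-U`, while mere dependencies of the named thing (`black` under `-e .`) are only upgraded when the installed version no longer satisfies the specifier. The eager behaviour can be opted into explicitly:

```shell
# Upgrade the project *and* all of its dependencies eagerly:
pip install -U --upgrade-strategy eager -e .
```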
|
<python><pip><pyproject.toml>
|
2023-04-09 08:43:38
| 1
| 370
|
PerJensen
|
75,969,388
| 4,399,016
|
Using Python APIs to download data and make pandas
|
<p>I want to download data from <a href="https://tablebuilder.singstat.gov.sg/table/TS/M250141" rel="nofollow noreferrer">this website</a> using APIs that are made available.</p>
<p>However, the <a href="https://tablebuilder.singstat.gov.sg/view-api/for-developers" rel="nofollow noreferrer">code I have</a> with me only returns metadata.</p>
<pre><code>import requests
import pandas as pd
url_path = "https://tablebuilder.singstat.gov.sg/api/table/tabledata/M250141?seriesNoORrowNo=13.1&between=0%2C4&sortby=value%20asc&offset=1&limit=5&timefilter=2013%201Q&search=land"
response = requests.get(url_path)
if response.ok:
data = response.json()
df_SG_DATA = pd.DataFrame(data['Data'])
print(df_SG_DATA)
else:
print(response)
</code></pre>
|
<python><json><pandas>
|
2023-04-09 07:57:50
| 1
| 680
|
prashanth manohar
|
75,969,386
| 1,949,081
|
Python - PDF to HTML failing with OSError in AWS Lambda
|
<p>I have following code executed in AWS Lambda</p>
<pre><code>def handler(event, context):
s3 = boto3.resource('s3')
env = Environment(loader=FileSystemLoader(searchpath='templates'), trim_blocks=True, lstrip_blocks=True,
autoescape=select_autoescape(['html', 'xml']))
template = env.get_template('dag.template')
res = template.render(headers=headers, args=args)
with open('/tmp/w.html', 'w+') as fp:
fp.write(res)
config = pdfkit.configuration(wkhtmltopdf='/usr/local/bin/wkhtmltopdf')
pdfkit.from_file('/tmp/w.html', '/tmp/out.pdf', configuration=config, options={"enable-local-file-access": True})
</code></pre>
<p>Above code runs through successfully in local docker container test but it failed when executed in AWS Lambda with error message</p>
<pre><code>START RequestId: 4c1b6640-08c5-4f32-a935-bbd03d946385 Version: $LATEST
[ERROR] OSError: wkhtmltopdf exited with non-zero code 1. error:
QPainter::begin(): Returned false
Exit with code 1, due to unknown error.
Traceback (most recent call last):
File "/var/task/wbr.py", line 62, in handler
pdfkit.from_file('/tmp/wbr.html', '/opt/out.pdf', configuration=config, options={"enable-local-file-access": True})
File "/var/task/pdfkit/api.py", line 51, in from_file
return r.to_pdf(output_path)
File "/var/task/pdfkit/pdfkit.py", line 201, in to_pdf
self.handle_error(exit_code, stderr)
File "/var/task/pdfkit/pdfkit.py", line 158, in handle_error
raise IOError("wkhtmltopdf exited with non-zero code {0}. error:\n{1}".format(exit_code, error_msg))
END RequestId: 4c1b6640-08c5-4f32-a935-bbd03d946385
REPORT RequestId: 4c1b6640-08c5-4f32-a935-bbd03d946385 Duration: 580.91 ms Billed Duration: 944 ms Memory Size: 2046 MB Max Memory Used: 101 MB Init Duration: 362.31 ms
</code></pre>
<p>My lambda memory is 2046MB and storage is 2046MB. I am using Python3.8</p>
<p>Thank you</p>
|
<python><amazon-web-services><aws-lambda><pdfkit>
|
2023-04-09 07:57:31
| 1
| 5,528
|
slysid
|
75,969,294
| 3,667,693
|
AssertionError: No inf checks were recorded for this optimizer
|
<p>Code was based off on <a href="https://github.com/tloen/alpaca-lora/blob/main/finetune.py" rel="noreferrer">https://github.com/tloen/alpaca-lora/blob/main/finetune.py</a></p>
<p>My objective of this training was to made use of unsupervised training dataset to get the model to understand how words are written in my domain (basically masked language modelling). Reason i don't use the conventional instructional fine tuning is because there's no such dataset of sufficient quantity available to me.</p>
<p>The 2 main changes i've made are as follows</p>
<ol>
<li>instead of fine tuning from Llama's weights, i'll finetune from a existing alpaca-lora's weight. As such, i've edited the code as follows</li>
</ol>
<pre><code>from peft import (
# LoraConfig,
PeftModel,
get_peft_model,
get_peft_model_state_dict,
prepare_model_for_int8_training,
set_peft_model_state_dict,
)
</code></pre>
<p>as well as</p>
<pre><code># config = LoraConfig(
# r=lora_r,
# lora_alpha=lora_alpha,
# target_modules=lora_target_modules,
# lora_dropout=lora_dropout,
# bias="none",
# task_type="CAUSAL_LM",
# )
# model = get_peft_model(model, config)
# replace with this to load directly from alpaca
LORA_WEIGHTS = "tloen/alpaca-lora-7b"
model = PeftModel.from_pretrained(
model,
LORA_WEIGHTS,
torch_dtype=torch.float16,
)
</code></pre>
<ol start="2">
<li>Replaced the dataset with my own (I am not using the prompt template). My code for generating the dataset is as follows:</li>
</ol>
<pre><code>def chunk_text(data):
concantenated_text = ''
all_result = []
for i in range(data['train'].num_rows):
concantenated_text += data['train']['combined'][i]
tokenized_concantenated_text = tokenizer.encode(concantenated_text)[1:]
tokenized_prompt = tokenizer.encode("### Text: ")[1:]
full_length = len(tokenized_concantenated_text)
for i in range(0, full_length, chunk_size):
text = tokenized_concantenated_text[i: i+chunk_size+overlap_size]
text = tokenized_prompt + text
text = tokenizer.decode(text)
result = tokenizer(text, padding=False)
if result["input_ids"][-1] != tokenizer.eos_token_id:
result["input_ids"].append(tokenizer.eos_token_id)
result["attention_mask"].append(1)
result["labels"] = result["input_ids"].copy()
all_result.append(result)
return all_result
</code></pre>
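<p>Stripped of the tokenizer, the chunking in <code>chunk_text</code> reduces to a sliding window with overlap; a sketch with made-up sizes:</p>

```python
def chunk_with_overlap(tokens, chunk_size, overlap_size):
    """Step through tokens in strides of chunk_size, letting each chunk
    extend overlap_size tokens into the next window."""
    chunks = []
    for i in range(0, len(tokens), chunk_size):
        chunks.append(tokens[i : i + chunk_size + overlap_size])
    return chunks

# hypothetical sizes, for illustration only
print(chunk_with_overlap(list(range(10)), chunk_size=4, overlap_size=2))
# [[0, 1, 2, 3, 4, 5], [4, 5, 6, 7, 8, 9], [8, 9]]
```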
<p>However, I keep facing the following error no matter how I tweak the code. I'd really appreciate any help!</p>
<pre><code>╭─────────────────────────────── Traceback (most recent call last) ────────────────────────────────╮
│ in <cell line: 2>:2 │
│ │
│ /usr/local/lib/python3.9/dist-packages/transformers/trainer.py:1662 in train │
│ │
│ 1659 │ │ inner_training_loop = find_executable_batch_size( │
│ 1660 │ │ │ self._inner_training_loop, self._train_batch_size, args.auto_find_batch_size │
│ 1661 │ │ ) │
│ ❱ 1662 │ │ return inner_training_loop( │
│ 1663 │ │ │ args=args, │
│ 1664 │ │ │ resume_from_checkpoint=resume_from_checkpoint, │
│ 1665 │ │ │ trial=trial, │
│ │
│ /usr/local/lib/python3.9/dist-packages/transformers/trainer.py:1991 in _inner_training_loop │
│ │
│ 1988 │ │ │ │ │ │ │ xm.optimizer_step(self.optimizer) │
│ 1989 │ │ │ │ │ elif self.do_grad_scaling: │
│ 1990 │ │ │ │ │ │ scale_before = self.scaler.get_scale() │
│ ❱ 1991 │ │ │ │ │ │ self.scaler.step(self.optimizer) │
│ 1992 │ │ │ │ │ │ self.scaler.update() │
│ 1993 │ │ │ │ │ │ scale_after = self.scaler.get_scale() │
│ 1994 │ │ │ │ │ │ optimizer_was_run = scale_before <= scale_after │
│ │
│ /usr/local/lib/python3.9/dist-packages/torch/cuda/amp/grad_scaler.py:368 in step │
│ │
│ 365 │ │ if optimizer_state["stage"] is OptState.READY: │
│ 366 │ │ │ self.unscale_(optimizer) │
│ 367 │ │ │
│ ❱ 368 │ │ assert len(optimizer_state["found_inf_per_device"]) > 0, "No inf checks were rec │
│ 369 │ │ │
│ 370 │ │ retval = self._maybe_opt_step(optimizer, optimizer_state, *args, **kwargs) │
│ 371 │
╰──────────────────────────────────────────────────────────────────────────────────────────────────╯
AssertionError: No inf checks were recorded for this optimizer.
</code></pre>
<p>Environment: Python 3.9, CUDA 11.8</p>
|
<python><pytorch><alpaca>
|
2023-04-09 07:31:20
| 4
| 405
|
John Jam
|
75,969,212
| 1,815,710
|
flask db migrate adds project name to generated script
|
<p>When I run <code>flask db upgrade</code>, I am getting this error</p>
<pre><code> File "/Users/user/Development/projects/zen/zen_api/zen_api/migrations/versions/88e9c6a7b904_.py", line 23, in upgrade
sa.Column('first_name', sa.String(length=zen_api.config.Config['NAME_LENGTH']), nullable=False),
^^^^^^^
NameError: name 'zen_api' is not defined
</code></pre>
<p>I'm not sure why, when I run <code>flask db migrate</code>, a migration script is generated with <code>zen_api.config.Config[...]</code>. This causes the following error when <code>flask db upgrade</code> is run</p>
<pre><code> sa.String(length=zen_api.config.Config["NAME_LENGTH"]),
^^^^^^^
NameError: name 'zen_api' is not defined
</code></pre>
<p>migration script generated</p>
<pre><code>def upgrade():
# ### commands auto generated by Alembic - please adjust! ###
op.create_table('user',
sa.Column('id', sa.Integer(), nullable=False),
sa.Column('last_name', sa.String(length=zen_api.config.Config['NAME_LENGTH']), nullable=False),
sa.Column('email', sa.String(length=zen_api.config.Config['EMAIL_LENGTH']), nullable=False),
sa.PrimaryKeyConstraint('id')
</code></pre>
<p>Should I be adding <code>import zen_api</code> to the <code>script.py.mako</code>? I'm new to Flask and I'm not sure if this is the proper way to do things or if there is a better approach.</p>
<p>File structure for my project</p>
<pre><code>/zen_api
/tests
/zen_api
/migrations
/versions
88efjd8f_.py
/models
</code></pre>
|
<python><flask><flask-sqlalchemy><flask-migrate>
|
2023-04-09 07:11:07
| 0
| 16,539
|
Liondancer
|
75,969,176
| 12,667,081
|
Is there a faster way to parse large json list of lists?
|
<p>I am fetching responses from thousands of API calls, each of which returns a new JSON payload, as the API is paginated. The result I get is a list of lists, with each inner list being the parsed JSON of one page. The following code is how I am successfully parsing the data:</p>
<pre><code>import csv
import...
def save(csv_data):
with open(today_date+".csv", "w", newline="", encoding="utf-8") as f:
writer = csv.writer(f)
writer.writerows(csv_data)
for p in range(pet_length):
loop_length = 0
try:
pet_count += len(all_pet_data[p]['leaderboard'])
except Exception as e:
print(f'{e} in outer loop')
print(f"Data is {all_pet_data[p]}")
error_count += 1
try:
loop_length = len(all_pet_data[p]['leaderboard'])
except Exception as e:
print(f'{e} in inner loop length check')
error_count += 1
for i in range(loop_length):
ids.append(all_pet_data[p]['leaderboard'][i]['id'])
csv_data.append([all_pet_data[p]['leaderboard'][i]['id'], all_pet_data[p]['leaderboard'][i]['level'],
all_pet_data[p]['leaderboard'][i]['name']])
# multiple cleaning methods to eliminate any blanks or repeats in the final:
csv_data = [c for c in csv_data if c != []]
for c in range(len(csv_data)):
for s in range(len(csv_data)):
if c != s and csv_data[c][0] == csv_data[s][0]:
csv_data[s] = ['delete', 'delete', 'delete']
csv_data = [c for c in csv_data if c != ['delete', 'delete', 'delete']]
# debugging checks:
print(f'error_count: {error_count}')
print(f'Total pets grabbed: {pet_count}')
print(f'Total ids grabbed: {len(ids)}')
print(f'Unique pets grabbed: {len(csv_data)}')
csv_data = [c for c in csv_data if c != [[]]]
save(csv_data) # saves the data as a CSV
</code></pre>
<p>The issue is, with thousands of pages of the API, and up to 1k "pets" per page, these nested loops are massive. In general, the final result is expected to be around 80k rows. So, roughly, 4k outer loops with around 20-25 inner loops on average. So, it is running very slowly. I believe it's the two nested for loops, but I've included the actual CSV save in case that is poorly written. The calls themselves take a long time and are prone to errors, so I haven't determined exactly where the final slowdown is. Is there something I could be doing better here to speed all of this up?</p>
<p>I should note, I can't figure out any logic to remove duplicates before the csv nested for loop, because the repeated ids could happen anywhere. Plus, I'm using multi-threading to fetch everything, so they are in random orders.</p>
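<p>For reference, the O(n²) duplicate pass can be collapsed into a single pass keyed on the id, since dicts preserve insertion order; a sketch with made-up rows, not the real leaderboard data:</p>

```python
# hypothetical rows; the first element plays the role of the pet id
rows = [[1, 10, "a"], [2, 20, "b"], [1, 10, "a"], [], [3, 30, "c"]]

seen = {}
for row in rows:
    if row and row[0] not in seen:  # skips blanks and repeated ids in one pass
        seen[row[0]] = row
deduped = list(seen.values())
```

<p>Because membership tests on a dict are O(1), this works in any order, so it is unaffected by multi-threaded fetches arriving shuffled.</p>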
|
<python><list><nested-for-loop>
|
2023-04-09 06:58:45
| 1
| 545
|
Brian - RGY Studio
|
75,968,905
| 3,398,324
|
Create txt file from Pandas Dataframe
|
<p>I would like to save my dataframe in a way that matches an existing txt file (I have a trained model based on this txt file and I now want to predict on new data, which needs to match this format).</p>
<p>The target txt file looks like this (first3 rows):</p>
<pre><code>2 qid:0 0:0.4967141530112327 1:-0.1382643011711847 2:0.6476885381006925 3:1.523029856408025 4:-0.234153374723336
1 qid:2 0:1.465648768921554 1:-0.2257763004865357 2:0.06752820468792384 3:-1.424748186213457 4:-0.5443827245251827
2 qid:0 0:0.7384665799954104 1:0.1713682811899705 2:-0.1156482823882405 3:-0.3011036955892888 4:-1.478521990367427
</code></pre>
<p>The first column is just an integer label (here the 2 and the 1).
The qid is always joined to an integer by a colon.
For the rest of the columns, an integer index is followed by a colon and a float.</p>
<p>My dataframe looks like this:</p>
<pre><code>data = {'label': [2,3,2],
'qid': ['qid:0', 'qid:1','qid:0'],
'0': [0.4967, 0.4967,0.4967],
'1': [0.4967, 0.4967,0.4967],
'2': [0.4967, 0.4967,0.4967],
'3': [0.4967, 0.4967,0.4967],
'4': [0.4967, 0.4967,0.4967]}
df = pd.DataFrame(data)
</code></pre>
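<p>A sketch of building the target lines row by row, using a tiny stand-in frame with the same column layout (values are made up):</p>

```python
import pandas as pd

# stand-in frame: label, qid, then feature columns '0', '1', ...
df = pd.DataFrame({'label': [2, 1],
                   'qid': ['qid:0', 'qid:2'],
                   '0': [0.5, 1.4],
                   '1': [-0.1, 0.2]})

# one "label qid feat:val feat:val ..." line per row
lines = [f"{row['label']} {row['qid']} "
         + " ".join(f"{c}:{row[c]}" for c in df.columns[2:])
         for _, row in df.iterrows()]

with open("out.txt", "w") as f:
    f.write("\n".join(lines))
```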
|
<python><pandas><txt>
|
2023-04-09 05:21:55
| 2
| 1,051
|
Tartaglia
|
75,968,903
| 11,720,193
|
Glue Python script to read xml (from SOAP) and write as Parquet to S3
|
<p>I have a requirement to create a Glue Python script that reads XML via a SOAP API and stores the returned XML string in Parquet format on S3.</p>
<p>I got the SOAP communication working fine. Now, for the later part, my intention was to create a <code>Pandas</code> <code>dataframe</code> by using <code>read_xml()</code> method with the returned XML string and then use the <code>df.to_parquet()</code> to store the xml in <code>parquet</code> format on <code>S3</code>.</p>
<p>Unfortunately, I am <strong>unable</strong> to parse the xml with <code>read_xml()</code>. Here is the code that I had created -</p>
<pre><code>import requests
import xmltodict
import pandas as pd
import pyarrow
url = "https://www.w3schools.com/xml/tempconvert.asmx"
payload = """<?xml version="1.0" encoding="utf-8"?>
<soap12:Envelope
xmlns:xsi="http://w3.org/2002/XMLSchema-instance"
xmlns:xsd="http://www.w3.org/2001/XMLSchema"
xmlns:soap12="http://schemas.xmlsoap.org/soap/envelope/">
<soap12:Body>
<CelsiusToFahrenheit xmlns="https://www.w3schools.com/xml/">
<Celsius>20</Celsius>
</CelsiusToFahrenheit>
</soap12:Body>
</soap12:Envelope>"""
headers = {
'Content-Type': 'text/xml; charset=utf-8'
}
response = requests.request("POST", url, headers=headers, data=payload)
result = xmltodict.parse(response.text)['soap:Envelope']['soap:Body']['CelsiusToFahrenheitResponse']['CelsiusToFahrenheitResult']
print(f"The final result is : {result}")
df = pd.read_xml(response.text)
s3_url = 's3://bucket/folder/bucket.parquet'
df.to_parquet(s3_url)
</code></pre>
<p>When I run <code>print(df)</code> the following is returned -</p>
<p>Output:</p>
<pre><code>CelsiusToFahrenheitResponse
0 NaN
</code></pre>
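<p>Since <code>read_xml</code> trips on the SOAP namespaces, the value can be pulled out with explicit namespace handling in the standard library; a sketch against a minimal stand-in response, not the live service:</p>

```python
import xml.etree.ElementTree as ET

# minimal stand-in for the SOAP response body (values are hypothetical)
xml = """<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Body>
    <CelsiusToFahrenheitResponse xmlns="https://www.w3schools.com/xml/">
      <CelsiusToFahrenheitResult>68</CelsiusToFahrenheitResult>
    </CelsiusToFahrenheitResponse>
  </soap:Body>
</soap:Envelope>"""

root = ET.fromstring(xml)
# the result element lives in the default namespace of the response
ns = {"w3": "https://www.w3schools.com/xml/"}
result = root.find(".//w3:CelsiusToFahrenheitResult", ns).text
```

<p>The extracted string can then be placed into a DataFrame directly (e.g. <code>pd.DataFrame([{"CelsiusToFahrenheitResult": result}])</code>) before <code>to_parquet</code>, rather than asking <code>read_xml</code> to untangle the envelope.</p>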
<p>Any help is appreciated.</p>
|
<python><pandas><soap><xml-parsing>
|
2023-04-09 05:20:35
| 1
| 895
|
marie20
|
75,968,750
| 13,916,049
|
How to extract substring from pandas column?
|
<p>I want to retrieve only the first part of the string for the entire column.</p>
<pre><code>meta["Cell_type"].str.rstrip(" ")[0]
</code></pre>
<p>Data:</p>
<p><code>meta.iloc[1:5]</code></p>
<pre><code>pd.DataFrame({'Assay Type': {'SRR9200814': 'RNA-Seq',
'SRR9200815': 'RNA-Seq',
'SRR9200816': 'RNA-Seq',
'SRR9200817': 'RNA-Seq'},
'Cell_type': {'SRR9200814': 'normal neural stem cells',
'SRR9200815': 'normal neural stem cells',
'SRR9200816': 'normal neural stem cells',
'SRR9200817': 'normal neural stem cells'},
'Type': {'SRR9200814': 'diploid',
'SRR9200815': 'diploid',
'SRR9200816': 'diploid',
'SRR9200817': 'diploid'}})
</code></pre>
<p>Current output:</p>
<pre><code>'normal neural stem cells'
</code></pre>
<p>Desired output:</p>
<pre><code>Run
SRR9200813 normal
SRR9200814 glioblastoma
SRR9200815 normal
SRR9200816 normal
SRR9200817 normal
</code></pre>
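<p>To make the desired output concrete, a vectorized sketch on a tiny stand-in Series (values are made up):</p>

```python
import pandas as pd

cell_type = pd.Series({'SRR9200814': 'normal neural stem cells',
                       'SRR9200815': 'glioblastoma stem cells'})

# split on whitespace and keep only the first token of each string
first_word = cell_type.str.split().str[0]
```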
|
<python><pandas>
|
2023-04-09 04:16:53
| 4
| 1,545
|
Anon
|
75,968,724
| 9,565,958
|
how to remove some of one array's elements according to other array's value in numpy
|
<p>In numpy,</p>
<pre><code>a = np.array([[0, 0, 0, 1, 1, 1, 0], [0, 0, 1, 1, 1, 0, 0]])
b = np.array([[5, 2, 0, 1, 0, 2, 3], [0, 0, 0, 1, 5, 3, 0]])
</code></pre>
<p>I want to remove some of <code>b</code>'s elements according to <code>a</code>'s values. I want to keep the elements of <code>b</code> only when the value of <code>a</code>'s matching elements is 1, and discard the other elements of <code>b</code>.<br />
In this example what I want is:</p>
<pre><code>>>> [[1, 0, 2], [0, 1, 5]]
</code></pre>
<p>How can I get this result?</p>
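<p>For illustration, the desired result on this small example can be sketched with a per-row boolean mask; the rows may keep different numbers of elements, hence a list of lists rather than a 2-D array:</p>

```python
import numpy as np

a = np.array([[0, 0, 0, 1, 1, 1, 0], [0, 0, 1, 1, 1, 0, 0]])
b = np.array([[5, 2, 0, 1, 0, 2, 3], [0, 0, 0, 1, 5, 3, 0]])

# keep b's elements wherever the matching element of a equals 1, row by row
result = [row_b[row_a == 1].tolist() for row_a, row_b in zip(a, b)]
```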
|
<python><numpy>
|
2023-04-09 04:03:36
| 3
| 512
|
June Yoon
|
75,968,675
| 3,233,017
|
DistributionNotFound exception in Jupyter but not in standard Python
|
<p>I have a fork of an open-source Python module (CLTK) which is needed for a particular application. To make it easier on people running my code, I specifically modify <code>sys.path</code> before importing it, so that they can have standard CLTK for most applications and my forked CLTK for this one purpose.</p>
<pre><code>import sys
sys.path.insert(1, '/path/to/fork/cltk/')
import cltk
</code></pre>
<p>In standard Python (either through the REPL or in a script), this works fine. It imports my forked version without any issues.</p>
<p>In Jupyter, though, I get the following exception:</p>
<pre><code>/path/to/fork/cltk/__init__.py in <module>
20 __url__ = 'http://cltk.org'
21
---> 22 __version__ = get_distribution('cltk').version # pylint: disable=no-member
23
24 if 'CLTK_DATA' in os.environ:
/usr/lib/python3/dist-packages/pkg_resources/__init__.py in get_distribution(dist)
469 dist = Requirement.parse(dist)
470 if isinstance(dist, Requirement):
--> 471 dist = get_provider(dist)
472 if not isinstance(dist, Distribution):
473 raise TypeError("Expected string, Requirement, or Distribution", dist)
/usr/lib/python3/dist-packages/pkg_resources/__init__.py in get_provider(moduleOrReq)
345 """Return an IResourceProvider for the named module or requirement"""
346 if isinstance(moduleOrReq, Requirement):
--> 347 return working_set.find(moduleOrReq) or require(str(moduleOrReq))[0]
348 try:
349 module = sys.modules[moduleOrReq]
/usr/lib/python3/dist-packages/pkg_resources/__init__.py in require(self, *requirements)
889 included, even if they were already activated in this working set.
890 """
--> 891 needed = self.resolve(parse_requirements(requirements))
892
893 for dist in needed:
/usr/lib/python3/dist-packages/pkg_resources/__init__.py in resolve(self, requirements, env, installer, replace_conflicting, extras)
775 if dist is None:
776 requirers = required_by.get(req, None)
--> 777 raise DistributionNotFound(req, requirers)
778 to_activate.append(dist)
779 if dist not in req:
DistributionNotFound: The 'cltk' distribution was not found and is required by the application
</code></pre>
<p>Now, since this fork is under my control, I can just comment out the offending line, or hardcode a version number, or otherwise remove the call to get_distribution.</p>
<p>But, I'd like to know <em>why</em> this happens specifically in Jupyter and not in standard Python.</p>
<p>What's happening differently in my Jupyter notebook that causes this to break, when it works just fine in the REPL or in a Python script file?</p>
|
<python><jupyter-notebook>
|
2023-04-09 03:45:19
| 0
| 3,547
|
Draconis
|
75,968,672
| 844,936
|
Python LangChain CSV Agent need help in Chart Image saving
|
<p>I am pretty new to LangChain and have been playing with LangChain's CSV Agent. It is giving me the desired result; see the attached image</p>
<p><a href="https://i.sstatic.net/vrDGo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/vrDGo.png" alt="enter image description here" /></a></p>
<p>Now, I want to use the code outside of Google Colab. However, in a plain Python script it gives me the text but, as expected, no figure. How can I extract the figure, if any, from LangChain's CSV Agent? I need your guidance.</p>
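<p>For context on scripts versus notebooks: outside Colab there is no inline renderer, so if the agent's plotting code can be pointed at <code>savefig</code>, the figure can be persisted to disk instead. A sketch; the backend switch and the <code>chart.png</code> path are assumptions, not part of the LangChain API:</p>

```python
import matplotlib
matplotlib.use("Agg")            # headless backend: render without a display
import matplotlib.pyplot as plt
from pathlib import Path

plt.plot([1, 2, 3], [2, 4, 1])   # stand-in for whatever the agent plots
out = Path("chart.png")          # hypothetical output path
plt.savefig(out)                 # write the figure to disk instead of showing it
plt.close()
```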
|
<python><google-colaboratory><langchain>
|
2023-04-09 03:42:14
| 2
| 520
|
LuckyCoder
|
75,968,670
| 20,262,902
|
Change the cell size/width of imshow or similar function
|
<p>I need the first and last cells to be half the width.</p>
<p>My goal is to get something like this:</p>
<p><a href="https://i.sstatic.net/ereWv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ereWv.png" alt="enter image description here" /></a></p>
<p>But I got this:</p>
<p><a href="https://i.sstatic.net/TZrWc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/TZrWc.png" alt="enter image description here" /></a></p>
<p>My code:</p>
<pre><code>import numpy as np
import matplotlib.pyplot as plt
data = np.array([0. , 0.2 , 0.4 , 0.6 , 0.8 , 1. ])
fig, ax = plt.subplots()
matrix = data.reshape(1, -1)
ax.imshow(matrix, cmap='hot')
plt.show()
</code></pre>
<p>Is there an option to do this?</p>
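<p>One possibility: <code>imshow</code> draws uniform cells, but <code>pcolormesh</code> accepts explicit cell edges, so variable widths can be sketched like this (the edge values making the first and last cells half-width are chosen for illustration):</p>

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs without a display
import matplotlib.pyplot as plt
from pathlib import Path

data = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])

# x edges: 0.5-wide first and last cells, 1.0-wide cells in between
x_edges = np.array([0.0, 0.5, 1.5, 2.5, 3.5, 4.5, 5.0])
y_edges = np.array([0.0, 1.0])

fig, ax = plt.subplots()
ax.pcolormesh(x_edges, y_edges, data.reshape(1, -1), cmap='hot')
out = Path("halfcells.png")  # hypothetical output path
fig.savefig(out)
```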
|
<python><matplotlib><plot><imshow>
|
2023-04-09 03:41:14
| 1
| 670
|
Joao_PS
|
75,968,600
| 13,776,631
|
Why does my simple linear model learn a threshold function f(x) = (x > 0) on the dataset g(X) but perform poorly on X?
|
<p>Why does the <em>validation loss</em> increase while <em>train loss</em> decreases when training data is x and validation data is gen(x)?</p>
<p>I made a classifier (<code>SimpleModel</code>).
When I train the classifier with <code>input=torch.randn(100,1)</code> and <code>label=(1 if input>0, else 0)</code>, the classifier trains well, with validation accuracy ~1.</p>
<p><a href="https://i.sstatic.net/lZY2S.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lZY2S.png" alt="enter image description here" /></a></p>
<p>However, when I train the classifier with <code>input=gen(torch.randn(100,1))</code> and <code>label=(1 if input>0, else 0)</code>, where <code>gen()</code> is a model I made, the classifier does not train well. The <em>train loss</em> decreases, but validation accuracy doesn't improve. The final validation accuracy was ~0.5</p>
<p><a href="https://i.sstatic.net/429rx.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/429rx.png" alt="enter image description here" /></a></p>
<p>What's the problem?</p>
<p>The code is:</p>
<pre><code>import torch
from torch import nn
from torch import optim
import matplotlib.pyplot as plt
class SimpleModel(nn.Module):
def __init__(self):
super(SimpleModel, self).__init__()
self.linear1=nn.Linear(1,1)
self.sigmoid=nn.Sigmoid()
def forward(self,x):
x=self.linear1(x)
x=self.sigmoid(x)
return x
class Gen(nn.Module):
def __init__(self):
super(Gen, self).__init__()
self.linear1=nn.Linear(1,2)
self.relu1=nn.ReLU()
self.linear2=nn.Linear(2,1)
def forward(self,x):
x=self.linear1(x)
x=self.relu1(x)
x=self.linear2(x)
return x
model=SimpleModel()
gen=Gen()
criterion = nn.BCELoss()
optimizer = optim.Adam(model.parameters(), lr=0.0001)
loss_history=[]
val_loss_history=[]
acc_history=[]
val_acc_history=[]
for step in range(10000):
z=torch.randn(100,1)
x=gen(z)
label=(x>0).float().reshape(-1,1)
out=model(x)
loss=criterion(out,label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
acc=(out.round()==label).float().mean()
val_x=torch.randn(100,1)
val_label=(val_x>0).float().reshape(-1,1)
val_out=model(val_x)
val_loss=criterion(val_out,val_label)
val_acc=(val_out.round()==val_label).float().mean()
print(loss.item(), acc.item(), val_loss.item(), val_acc.item())
loss_history.append(loss.item())
val_loss_history.append(val_loss.item())
acc_history.append(acc.item())
val_acc_history.append(val_acc.item())
plt.figure()
plt.plot(loss_history, label="train loss")
plt.plot(val_loss_history, label="validation loss")
plt.legend()
</code></pre>
|
<python><deep-learning><pytorch><neural-network>
|
2023-04-09 03:13:11
| 1
| 301
|
beginner
|
75,968,481
| 4,019,495
|
What causes the difference between `f"{a}"` vs `f"{a=}"` in python's f strings?
|
<p>In the following python code, what causes the difference in outputs?</p>
<pre><code>try:
open('nonexistent')
except Exception as err:
print(f'{err=}')
print(f'{err}')
</code></pre>
<p>result:</p>
<pre><code>err=FileNotFoundError(2, 'No such file or directory')
[Errno 2] No such file or directory: 'nonexistent'
</code></pre>
<p>In this instance, the second is more useful, but I'm surprised there's a difference at all. What's the cause of this, and what's the logic behind this design decision?</p>
<p>I'm using Python 3.8.3.</p>
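<p>For context, the <code>=</code> form renders the expression text plus <code>repr()</code> of the value, while plain interpolation uses <code>str()</code>; for exceptions, <code>str()</code> is the formatted message and <code>repr()</code> is the constructor-like form. A minimal sketch with a plain string:</p>

```python
word = "hi"
print(f"{word=}")   # word='hi'  -> the = form appends repr(word)
print(f"{word}")    # hi         -> plain interpolation uses str(word)
print(f"{word!r}")  # 'hi'       -> the same repr, requested explicitly
```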
|
<python><python-3.x><f-string>
|
2023-04-09 02:12:43
| 1
| 835
|
extremeaxe5
|
75,968,471
| 3,725,925
|
Google-Drive-API File not found using FORM_ID
|
<p>I'm a novice Python hobbyist programmer who is trying to automate some vocabulary tests for my students. Everything in the below code works except for sharing it with the list of emails. I know for a fact that the Drive ID is the same as the Forms ID as I have manually checked with testing. I'm able to add all of the questions in the script using <code>FORM_ID</code>, but I'm not able to set permissions for some reason with the same <code>FORM_ID</code>. I always get the below error. I have also tried authenticating with the same email that created the Google Project, but I get the same error, so I don't think it's related to Google Drive permissions for the authenticating email. I have spent so much time on this error, please help.</p>
<pre><code>An error occurred: <HttpError 404 when requesting https://www.googleapis.com/drive/v3/files/FORM_ID/permissions?alt=json returned "File not found: FORM_ID.". Details: "[{'message': 'File not found: FORM_ID.', 'domain': 'global', 'reason': 'notFound', 'location': 'fileId', 'locationType': 'parameter'}]">
</code></pre>
<p>FULL CODE</p>
<pre><code>from __future__ import print_function
from google.oauth2 import service_account
from googleapiclient.discovery import build
from googleapiclient.errors import HttpError
import time
def share_form_with_emails(service_account_file, form_id, emails):
SCOPES_DRIVE = ["https://www.googleapis.com/auth/drive"]
credentials = service_account.Credentials.from_service_account_file(service_account_file, scopes=SCOPES_DRIVE)
drive_service = build('drive', 'v3', credentials=credentials)
def share_form(attempt):
if attempt > 2:
print("Failed to share the form after 5 attempts.")
return
try:
for email in emails:
user_permission = {
'type': 'user',
'role': 'reader', # sufficient to submit and view the form
'emailAddress': email
}
drive_service.permissions().create(fileId=form_id, body=user_permission).execute()
print(f'Successfully shared form with {len(emails)} email addresses.')
except HttpError as error:
print(f'An error occurred: {error}')
if "File not found" in str(error):
print(f"Attempt {attempt}: Retrying in {2 ** attempt} seconds...")
time.sleep(2 ** attempt)
share_form(attempt + 1)
else:
return None
share_form(attempt=1)
def __main__():
import pandas as pd
from apiclient import discovery
from httplib2 import Http
from oauth2client import client, file, tools
EMAILS = ['blah.blah@gmail.com']
SCOPES = "https://www.googleapis.com/auth/forms.body"
DISCOVERY_DOC = "https://forms.googleapis.com/$discovery/rest?version=v1"
SERVICE_ACCOUNT_FILE = r'service_account_keys.json'
store = file.Storage('token.json')
creds = None
if not creds or creds.invalid:
flow = client.flow_from_clientsecrets(r'credentials.json', SCOPES)
creds = tools.run_flow(flow, store)
form_service = discovery.build('forms', 'v1', http=creds.authorize(
Http()), discoveryServiceUrl=DISCOVERY_DOC, static_discovery=False)
file_name = r'output.xlsx'
dfs = pd.read_excel(file_name, sheet_name=None)
for df in dfs:
if df == 'U2W2' or df == 'U2W3': # TODO: Update this each time to avoid duplicates
print(f"Setting up Spelling Test: {df}")
name_title = f"Spelling Test: {df}"
class_name = 'Green Bears'
# TODO: Figure out how to share this automatically with list of emails or something
NEW_FORM = {
"info": {
"title": name_title,
"documentTitle": name_title,
}
}
result = form_service.forms().create(body=NEW_FORM).execute() # Create the new spelling test form
FORM_ID = result['formId']
print("FORM_ID", FORM_ID)
share_form_with_emails(SERVICE_ACCOUNT_FILE, FORM_ID, EMAILS) # Share the newly created forms
NEW_FORM_EDITED_INFO = {
"requests": [
{
"updateFormInfo": {
"info": {
"title": name_title,
"description": f"Type the correct word for the given definition. Good luck, {class_name}!",
},
"updateMask": "*",
}
}
]
}
NEW_FORM_EDITED_SETTINGS = {
"requests": [
{
"updateSettings": {
"settings": {
"quizSettings": {
"isQuiz": True
}
},
"updateMask": "*",
}
}
]
}
print(f"-- Setting up Form Info: {df}")
form_service.forms().batchUpdate(formId=result["formId"], body=NEW_FORM_EDITED_INFO).execute() # Update form info
print(f"-- Setting up Form Settings: {df}")
form_service.forms().batchUpdate(formId=result["formId"], body=NEW_FORM_EDITED_SETTINGS).execute() # Update form settings
# Add definition questions
for index, row in dfs[df].iterrows(): # Iterate over each row as each row is a spelling test question
print(f"---- Adding question {index + 1} / 20")
NEW_QUESTION = {
"requests": [{
"createItem": {
"item": {
"title": f"𝐃𝐞𝐟𝐢𝐧𝐢𝐭𝐢𝐨𝐧: {dfs[df]['DEFINITION'][index]}",
"questionItem": {
"question": {
"required": False, # Students don't have to answer any of the questions
"textQuestion": { # Create text question
"paragraph": False # False means short text
},
"grading": {
"pointValue": 1,
"correctAnswers": {
"answers": [
{
"value": dfs[df]['WORD'][index].lower()
},
{
"value": dfs[df]['WORD'][index].upper()
},
{
"value": dfs[df]['WORD'][index].title()
}
]
}
}
}
},
},
"location": {
"index": index
}
}
}]
}
question_setting = form_service.forms().batchUpdate(formId=result["formId"], body=NEW_QUESTION).execute() # Add the question
# Add paragraph questions
for i in range(5):
print(f"---- Adding sentence question {i + 1} / 5")
NEW_QUESTION_SENTENCE = {
"requests": [{
"createItem": {
"item": {
"title": "Create an original sentence using any of the above vocabulary words.",
"questionItem": {
"question": {
"required": False, # Students don't have to answer any of the questions
"textQuestion": { # Create text question
"paragraph": True # False means short text
},
"grading": {
"pointValue": 1,
}
}
},
},
"location": {
"index": i + 20
}
}
}]
}
question_setting = form_service.forms().batchUpdate(formId=result["formId"], body=NEW_QUESTION_SENTENCE).execute() # Add the question
print("\n\n ~~~~~ FINISHED ~~~~~ \n\n")
__main__()
</code></pre>
|
<python><google-drive-api><google-forms-api>
|
2023-04-09 02:09:27
| 1
| 340
|
Mikitz06
|
75,968,433
| 13,854,064
|
How to access Android notifications with a Toga app
|
<p>I am trying to build an app with Toga/Beeware, mainly for Android but ideally cross-platform. I need the app to send push notifications at user-specified times. I see <a href="https://github.com/beeware/toga/issues/907" rel="nofollow noreferrer">some have previously attempted the same challenge</a>, but <a href="https://github.com/beeware/beeware/issues/163" rel="nofollow noreferrer">there's currently no official documentation</a>. Has anyone accomplished this? And if so, how?</p>
|
<python><android><push-notification><notifications><beeware>
|
2023-04-09 01:48:07
| 1
| 434
|
gimi
|
75,968,376
| 4,930,914
|
Lowercase text with regex pattern
|
<p>I use a regex pattern to protect acronyms while lowercasing text.</p>
<p>The code is</p>
<pre><code># -*- coding: utf-8 -*-
#!/usr/bin/env python
from __future__ import unicode_literals
import codecs
import os
import re
text = "This sentence contains ADS, NASA and K.A. as acronymns."
pattern = r'([A-Z][a-zA-Z]*[A-Z]|(?:[A-Z]\.)+)'
matches = re.findall(pattern, text)
def lowercase_ignore_matches(match):
word = match.group()
if word in matches:
return word
return word.lower()
text2 = re.sub(r"\w+", lowercase_ignore_matches, text)
print(text)
print(text2)
matches = re.findall(pattern, text)
print (matches)
</code></pre>
<p>output is</p>
<pre><code>This sentence contains ADS, NASA and K.A. as acronymns.
this sentence contains ADS, NASA and k.a. as acronymns.
['ADS', 'NASA', 'K.A.']
</code></pre>
<p>The issue is that it lowercases <code>K.A.</code> to <code>k.a.</code> even though the pattern identifies it as an acronym (it appears in <code>matches</code>).</p>
<p>I wish to retain k.a. as K.A.</p>
<p>Kindly help</p>
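<p>For reference, the lowercasing step sees <code>K</code> and <code>A</code> as separate one-letter <code>\w+</code> matches (dots are not word characters), so neither equals the stored <code>'K.A.'</code>. A sketch of a fix that lets the substitution pattern itself match dotted acronyms as whole tokens:</p>

```python
import re

text = "This sentence contains ADS, NASA and K.A. as acronymns."
acronym = r'(?:[A-Z][a-zA-Z]*[A-Z]|(?:[A-Z]\.)+)'

def lower_unless_acronym(m):
    word = m.group()
    # keep the token as-is only when it is itself an acronym
    return word if re.fullmatch(acronym, word) else word.lower()

# try the acronym alternatives first so "K.A." is matched whole, not as "K" and "A"
result = re.sub(acronym + r'|\w+', lower_unless_acronym, text)
print(result)
# this sentence contains ADS, NASA and K.A. as acronymns.
```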
|
<python><regex>
|
2023-04-09 01:20:58
| 1
| 915
|
Programmer_nltk
|
75,968,226
| 274,610
|
How can I install Tensorflow and CUDA drivers?
|
<p>I have a clean install of CUDA drivers and TensorFlow, but I cannot get TensorFlow to use my CUDA drivers.</p>
<p>My version of Windows 11 Pro is 22H2.</p>
<p>I'm running Anaconda Python 3.9.15.</p>
<p>Here is the information on my CUDA drivers (by running nvidia-smi):</p>
<pre><code>Sat Apr 8 16:44:56 2023
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 531.14 Driver Version: 531.14 CUDA Version: 12.1 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA GeForce RTX 2080 S... WDDM | 00000000:01:00.0 On | N/A |
| N/A 48C P8 10W / N/A| 2049MiB / 8192MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
</code></pre>
<p>My installation of TensorFlow showed:</p>
<pre><code>>pip install tensorflow
Requirement already satisfied: tensorflow in h:\anaconda3\lib\site-packages (2.10.0)
Requirement already satisfied: wrapt>=1.11.0 in h:\anaconda3\lib\site-packages (from tensorflow) (1.15.0)
Requirement already satisfied: keras<2.11,>=2.10.0 in h:\anaconda3\lib\site-packages (from tensorflow) (2.10.0)
Requirement already satisfied: flatbuffers>=2.0 in h:\anaconda3\lib\site-packages (from tensorflow) (23.1.21)
Requirement already satisfied: typing-extensions>=3.6.6 in h:\anaconda3\lib\site-packages (from tensorflow) (4.5.0)
Requirement already satisfied: packaging in h:\anaconda3\lib\site-packages (from tensorflow) (23.0)
Requirement already satisfied: absl-py>=1.0.0 in h:\anaconda3\lib\site-packages (from tensorflow) (1.4.0)
Requirement already satisfied: opt-einsum>=2.3.2 in h:\anaconda3\lib\site-packages (from tensorflow) (3.3.0)
Requirement already satisfied: numpy>=1.20 in h:\anaconda3\lib\site-packages (from tensorflow) (1.21.6)
Requirement already satisfied: six>=1.12.0 in h:\anaconda3\lib\site-packages (from tensorflow) (1.16.0)
Requirement already satisfied: libclang>=13.0.0 in h:\anaconda3\lib\site-packages (from tensorflow) (14.0.6)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in h:\anaconda3\lib\site-packages (from tensorflow) (0.27.0)
Requirement already satisfied: keras-preprocessing>=1.1.1 in h:\anaconda3\lib\site-packages (from tensorflow) (1.1.2)
Requirement already satisfied: tensorflow-estimator<2.11,>=2.10.0 in h:\anaconda3\lib\site-packages (from tensorflow) (2.10.0)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in h:\anaconda3\lib\site-packages (from tensorflow) (1.46.0)
Requirement already satisfied: setuptools in h:\anaconda3\lib\site-packages (from tensorflow) (65.6.3)
Requirement already satisfied: astunparse>=1.6.0 in h:\anaconda3\lib\site-packages (from tensorflow) (1.6.3)
Requirement already satisfied: termcolor>=1.1.0 in h:\anaconda3\lib\site-packages (from tensorflow) (2.2.0)
Requirement already satisfied: tensorboard<2.11,>=2.10 in h:\anaconda3\lib\site-packages (from tensorflow) (2.10.1)
Requirement already satisfied: protobuf<3.20,>=3.9.2 in h:\anaconda3\lib\site-packages (from tensorflow) (3.19.6)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in h:\anaconda3\lib\site-packages (from tensorflow) (0.4.0)
Requirement already satisfied: h5py>=2.9.0 in h:\anaconda3\lib\site-packages (from tensorflow) (2.10.0)
Requirement already satisfied: google-pasta>=0.1.1 in h:\anaconda3\lib\site-packages (from tensorflow) (0.2.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in h:\anaconda3\lib\site-packages (from astunparse>=1.6.0->tensorflow) (0.40.0)
Requirement already satisfied: google-auth<3,>=1.6.3 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (2.17.2)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (0.6.1)
Requirement already satisfied: werkzeug>=1.0.1 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (2.2.3)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (0.4.6)
Requirement already satisfied: requests<3,>=2.21.0 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (2.28.2)
Requirement already satisfied: markdown>=2.6.8 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (3.4.3)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in h:\anaconda3\lib\site-packages (from tensorboard<2.11,>=2.10->tensorflow) (1.8.1)
Requirement already satisfied: rsa<5,>=3.1.4 in h:\anaconda3\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.11,>=2.10->tensorflow) (4.9)
Requirement already satisfied: pyasn1-modules>=0.2.1 in h:\anaconda3\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.11,>=2.10->tensorflow) (0.2.7)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in h:\anaconda3\lib\site-packages (from google-auth<3,>=1.6.3->tensorboard<2.11,>=2.10->tensorflow) (5.3.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in h:\anaconda3\lib\site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.11,>=2.10->tensorflow) (1.3.1)
Requirement already satisfied: importlib-metadata>=4.4 in h:\anaconda3\lib\site-packages (from markdown>=2.6.8->tensorboard<2.11,>=2.10->tensorflow) (6.2.0)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in h:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.11,>=2.10->tensorflow) (1.26.15)
Requirement already satisfied: certifi>=2017.4.17 in h:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.11,>=2.10->tensorflow) (2022.12.7)
Requirement already satisfied: idna<4,>=2.5 in h:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.11,>=2.10->tensorflow) (3.4)
Requirement already satisfied: charset-normalizer<4,>=2 in h:\anaconda3\lib\site-packages (from requests<3,>=2.21.0->tensorboard<2.11,>=2.10->tensorflow) (2.1.1)
Requirement already satisfied: MarkupSafe>=2.1.1 in h:\anaconda3\lib\site-packages (from werkzeug>=1.0.1->tensorboard<2.11,>=2.10->tensorflow) (2.1.2)
Requirement already satisfied: zipp>=0.5 in h:\anaconda3\lib\site-packages (from importlib-metadata>=4.4->markdown>=2.6.8->tensorboard<2.11,>=2.10->tensorflow) (3.15.0)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in h:\anaconda3\lib\site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.11,>=2.10->tensorflow) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in h:\anaconda3\lib\site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.11,>=2.10->tensorflow) (3.2.2)
</code></pre>
<p>When I run the following test, it shows that TensorFlow cannot find the CUDA driver:</p>
<pre><code>>python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
[]
</code></pre>
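<p>Not TensorFlow-specific, but one preliminary check (a minimal sketch, assuming the NVIDIA driver ships its <code>nvidia-smi</code> CLI as usual) is whether the driver tooling is even visible on <code>PATH</code>:</p>

```python
import shutil


def nvidia_driver_visible() -> bool:
    """Return True if the NVIDIA driver's CLI (nvidia-smi) is on PATH."""
    return shutil.which("nvidia-smi") is not None


if __name__ == "__main__":
    # If this prints False, TensorFlow has no driver to talk to,
    # regardless of how the TensorFlow wheel itself was built.
    print("nvidia-smi on PATH:", nvidia_driver_visible())
```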
|
<python><tensorflow>
|
2023-04-09 00:13:47
| 2
| 529
|
user274610
|
75,968,070
| 6,423,456
|
How do I fix the return type of a Django model manager's method?
|
<p>I'm using Django 4.1.7 with django-stubs 1.16.0, and mypy 1.1.1.</p>
<p>I have code that looks like this:</p>
<pre class="lang-py prettyprint-override"><code>from django.db.models import Manager, Model, QuerySet


class ProductQuerySet(QuerySet):
    ...


class ProductManager(Manager):
    def create_red_product(self, **kwargs) -> "Product":
        return self.model(color=Product.Color.RED, **kwargs)


_product_manager = ProductManager.from_queryset(ProductQuerySet)


class Product(Model):
    ...
    objects = _product_manager()
</code></pre>
<p>When mypy looks at this, it says:</p>
<pre><code>models/product.py:46: error: Incompatible return value type (got "_T", expected "Product") [return-value]
</code></pre>
<p>It seems like the type of <code>self.model</code> in a model manager method is <code>_T</code>, which from what I understand is a generic type bound to the model, which in my case should be "Product".</p>
<p>Why isn't this working? How can I fix it?</p>
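<p>For context, here is a minimal stdlib-only sketch (not Django code; the names are illustrative) of how a generic manager binds <code>_T</code>: once the base manager is parameterized with the concrete model, <code>self.model</code> narrows from <code>type[_T]</code> to the model class:</p>

```python
from typing import Generic, TypeVar

_T = TypeVar("_T")


class Manager(Generic[_T]):
    model: type[_T]

    def make(self) -> _T:
        return self.model()


class Product:
    pass


class ProductManager(Manager[Product]):
    # Manager[Product] fixes _T = Product, so self.model is type[Product]
    # and a "-> Product" return annotation type-checks here.
    def create_default(self) -> Product:
        return self.model()


manager = ProductManager()
manager.model = Product
print(type(manager.make()).__name__)  # Product
```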
|
<python><django><mypy>
|
2023-04-08 23:01:14
| 1
| 2,774
|
John
|
75,968,037
| 6,423,456
|
How do I type a custom User model manager for mypy in Django?
|
<p>I'm using Django 4.1.7 with django-stubs 1.16.0, and mypy 1.1.1.</p>
<p>I created a custom user manager for my User model like this:</p>
<pre class="lang-py prettyprint-override"><code>from django.contrib.auth.models import AbstractUser, UserManager
from django.db import models


class MyUserManager(UserManager):
    def delete_test_data(self):
        return self.filter(test_data=True).delete()


class User(AbstractUser):
    test_data = models.BooleanField()
    ...
    objects = MyUserManager()
</code></pre>
<p>Everything works as expected, but when I run mypy on this, it complains with:</p>
<pre><code>models.py:32: error: Incompatible types in assignment (expression has type "MyUserManager[User]", base class "AbstractUser" defined the type as "UserManager[AbstractUser]") [assignment]
</code></pre>
<p>Am I doing something wrong? Is this a mypy bug? Bad types defined in django-stubs? Is there a way to type this without forcing mypy to ignore it?</p>
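<p>As a runtime illustration of what the error is about (a stdlib-only sketch, not Django code): the base class declares the attribute's type, and re-annotating it in a subclass with a narrower manager type is what mypy flags, since attribute types are checked invariantly across subclasses; at runtime it works fine:</p>

```python
from typing import Generic, TypeVar

_T = TypeVar("_T")


class BaseManager(Generic[_T]):
    pass


class BaseUser:
    # The base class pins the declared type to BaseManager[BaseUser] ...
    objects: "BaseManager[BaseUser]" = BaseManager()


class MyManager(BaseManager["MyUser"]):
    def delete_test_data(self) -> None:
        pass


class MyUser(BaseUser):
    # ... so re-declaring it as the narrower MyManager on the subclass is
    # the pattern mypy complains about, even though it runs without error.
    objects: "MyManager" = MyManager()


print(type(MyUser.objects).__name__)  # MyManager
```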
|
<python><django><mypy>
|
2023-04-08 22:49:03
| 1
| 2,774
|
John
|