QuestionId (int64) | UserId (int64) | QuestionTitle (string) | QuestionBody (string) | Tags (string) | CreationDate (string) | AnswerCount (int64) | UserExpertiseLevel (int64) | UserDisplayName (string) |
|---|---|---|---|---|---|---|---|---|
75,777,596
| 19,318,120
|
Building a user-based collaborative filtering system in Django
|
<p>I'm trying to build a simple <code>user-based collaborative filtering</code> system in <code>Django</code> for an e-commerce site, using just the purchase history. <br />
Here are the steps I use. I know it needs more improvement, but I have no idea what the next move is.</p>
<p>Here's the product model:</p>
<pre><code>class Product(models.Model):
name = models.CharField(max_length=100)
description = models.TextField()
</code></pre>
<p>Here's the purchase model:</p>
<pre><code>class Purchase(models.Model):
user = models.ForeignKey(User, on_delete=models.CASCADE)
product = models.ForeignKey(Product, on_delete=models.CASCADE)
purchase_date = models.DateTimeField(auto_now_add=True)
</code></pre>
<p>Now, to get similar users:</p>
<pre><code>def find_similar_users(user, k=5):
all_users = User.objects.exclude(id=user.id)
similarities = [(other_user, jaccard_similarity(user, other_user)) for other_user in all_users]
similarities.sort(key=lambda x: x[1], reverse=True)
return [user_similarity[0] for user_similarity in similarities[:k]]
</code></pre>
<p>and to calculate the similarity between each pair:</p>
<pre><code>def jaccard_similarity(user1, user2):
user1_purchases = set(Purchase.objects.filter(user=user1).values_list('product_id', flat=True))
user2_purchases = set(Purchase.objects.filter(user=user2).values_list('product_id', flat=True))
intersection = user1_purchases.intersection(user2_purchases)
union = user1_purchases.union(user2_purchases)
return len(intersection) / len(union) if len(union) > 0 else 0
</code></pre>
<p>Now here's my entry function:</p>
<pre><code>def recommend_products(user, k=5):
similar_users = find_similar_users(user, k)
recommended_products = set()
for similar_user in similar_users:
purchases = Purchase.objects.filter(user=similar_user).exclude(product__in=recommended_products)
for purchase in purchases:
recommended_products.add(purchase.product)
return recommended_products
</code></pre>
<p>Now, obviously that'd be really slow, so I was thinking of keeping a copy of the data in another <code>no-sql</code> database.</p>
<p>If user <code>A</code> purchases something, I copy the data to the other database, do the calculation (using a background service like Celery), and store the returned similar products in the NoSQL database, then just retrieve them later for user <code>A</code> if needed. Is that the right approach?</p>
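Whether or not a second store is needed, the expensive part can already be cut down: load all purchases in one bulk query instead of two queries per user, and compute everything in memory. A sketch of that batch step in plain Python — the dict below stands in for a single `Purchase.objects.values_list('user_id', 'product_id')` query, and a Celery task could run this per purchaser and cache the result. One extra improvement over the code above: products the user already owns are excluded.

```python
def jaccard(a: set, b: set) -> float:
    # similarity of two purchase sets: |intersection| / |union|
    union = a | b
    return len(a & b) / len(union) if union else 0.0

def recommend(purchases: dict, user, k: int = 5) -> set:
    # purchases: user id -> set of product ids, loaded in ONE query
    mine = purchases[user]
    ranked = sorted(
        ((u, jaccard(mine, p)) for u, p in purchases.items() if u != user),
        key=lambda t: t[1], reverse=True)
    recommended = set()
    for u, _ in ranked[:k]:
        recommended |= purchases[u]
    return recommended - mine   # skip products the user already bought

data = {'A': {1, 2, 3}, 'B': {2, 3, 4}, 'C': {9}}
print(recommend(data, 'A', k=1))  # {4}
```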
|
<python><django><recommendation-engine><collaborative-filtering>
|
2023-03-18 17:38:56
| 1
| 484
|
mohamed naser
|
75,777,573
| 15,520,615
|
How to query for the maximum / highest value in a field with PySpark
|
<p>The following dataframe will produce values 0 to 3.</p>
<pre><code>df = DeltaTable.forPath(spark, '/mnt/lake/BASE/SQLClassification/cdcTest/dbo/cdcmergetest/1').history().select(col("version"))
</code></pre>
<p><a href="https://i.sstatic.net/n9Ylv.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/n9Ylv.png" alt="enter image description here" /></a></p>
<p>Can someone show me how to modify the dataframe such that it only provides the maximum value, i.e. 3?</p>
<p>I have tried</p>
<pre><code>df.select("*").max("version")
</code></pre>
<p>And</p>
<pre><code>df.max("version")
</code></pre>
<p>But no luck</p>
<p>Any thoughts?</p>
|
<python><apache-spark><pyspark><apache-spark-sql><databricks>
|
2023-03-18 17:35:08
| 1
| 3,011
|
Patterson
|
75,777,298
| 914,641
|
problem with the romberg method in scipy.integrate
|
<p>I'm running the following script with anaconda (scipy 1.10.0)</p>
<pre><code>from math import cos, pi
from scipy.integrate import romberg
f = lambda x: x**2*cos(x)**2
res = romberg(f, -pi/2, pi/2)
print(res)
res = romberg(f, 0, pi/2)
print(res)
dx = 1e-4
res = romberg(f, -pi/2+dx, pi/2)
print(res)
</code></pre>
<p>It prints the following results:</p>
<pre><code>9.687909744833307e-33
0.25326501581059374
0.5065300316142199
</code></pre>
<p>The result should be 0.5065300316211875.
It seems to me that scipy.integrate.romberg has a problem with the lower integration limit -pi/2.
Any hint would be appreciated.</p>
<p>Kind Regards
Klaus</p>
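A likely explanation (an assumption about scipy's early-exit logic, not a confirmed diagnosis): f(-pi/2), f(0), and f(pi/2) are all exactly zero, so the first rows of the Romberg table are 0 and the convergence test fires immediately on the symmetric interval; shifting the limit by dx breaks the symmetry, which is why that case works. A minimal pure-Python Romberg that builds the table to a fixed depth, with no early exit, recovers the correct value:

```python
import math

def romberg_fixed(f, a, b, levels=12):
    # Romberg table built to a fixed depth -- deliberately no convergence test
    h = b - a
    table = [[0.5 * h * (f(a) + f(b))]]
    for i in range(1, levels):
        h /= 2.0
        # trapezoid refinement: add the midpoints of the previous grid
        s = sum(f(a + (2 * k - 1) * h) for k in range(1, 2 ** (i - 1) + 1))
        row = [0.5 * table[i - 1][0] + h * s]
        # Richardson extrapolation along the row
        for j in range(1, i + 1):
            row.append(row[j - 1] + (row[j - 1] - table[i - 1][j - 1]) / (4 ** j - 1))
        table.append(row)
    return table[-1][-1]

f = lambda x: x ** 2 * math.cos(x) ** 2
print(romberg_fixed(f, -math.pi / 2, math.pi / 2))  # ~0.5065300316211875
```

In practice, `scipy.integrate.quad` is usually the more robust choice for an integral like this.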
|
<python><scipy>
|
2023-03-18 16:47:37
| 1
| 832
|
Klaus Rohe
|
75,777,161
| 5,623,899
|
Python3.10 decorator confusion: how to wrap a class to augment a class's __init__ e.g. to track function calls
|
<p>I'm getting myself confused with Python decorators. Yes, there are a lot of useful resources out there (and I have consulted them prior to posting), e.g.:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/627501/how-can-i-use-named-arguments-in-a-decorator">How can I use named arguments in a decorator?</a></li>
<li><a href="https://stackoverflow.com/questions/9443725/add-method-to-a-class-dynamically-with-decorator">add method to a class dynamically with decorator</a></li>
<li><a href="https://docs.python.org/3/library/functools.html#functools.update_wrapper" rel="nofollow noreferrer">functools.update_wrapper</a></li>
<li><a href="https://code.activestate.com/recipes/577555-object-wrapper-class/" rel="nofollow noreferrer">object wrapper recipe</a></li>
</ul>
<p>I am sure there are better ways to do what I am about to describe; this is just a simple toy example to understand the mechanics, not what I am actually trying to do.</p>
<p>Suppose you have a class</p>
<pre class="lang-py prettyprint-override"><code>class foo:
def __init__(self):
pass
def call_me(self, arg):
return arg
</code></pre>
<p>and I want to extend <code>foo</code>'s constructor to take in two keyword arguments and modify the <code>call_me</code> method e.g. the desired end class would be:</p>
<pre class="lang-py prettyprint-override"><code>class foo:
def __init__(self, keep_track:bool=False, do_print:bool=True):
self.keep_track = keep_track
self.memory = []
...
def call_me(self, arg):
arg = self.old_call_me(arg)
if self.do_print:
print('hi')
if self.keep_track:
self.memory.append(arg)
return arg
def old_call_me(self, arg):
...
return arg
</code></pre>
<p>Ideally, I would like a decorator so that I can wrap a bunch of classes which I am <strong>assuming</strong> have a method <code>call_me</code> like</p>
<pre class="lang-py prettyprint-override"><code>@keep_track(do_print=False) # <--- sets default state
class foo:
...
@keep_track(keep_track=True) # <--- sets default state
class bar:
...
foo_instance = foo(do_print=True) # <-- I can change behavior of instance
</code></pre>
<p>How can I achieve this?</p>
<p>I have tried defining functions inside the decorator and setting them with</p>
<pre class="lang-py prettyprint-override"><code>setattr(cls, 'old_call_me', call_me)
def call_me(self, arg):
# see above
setattr(cls, 'call_me', call_me)
</code></pre>
<p>and using <code>functools.wraps</code>.</p>
<p>I would appreciate the guidance in this matter.</p>
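One workable pattern (a sketch, under the stated assumption that every wrapped class defines `call_me`): a parameterized class decorator that captures the original methods in a closure and reassigns `__init__` and `call_me` on the class. Binding the decorator's arguments as keyword defaults gives the per-instance override shown with `foo(do_print=True)`:

```python
import functools

def keep_track(keep_track=False, do_print=True):
    # class decorator factory: the arguments become per-class defaults
    def decorate(cls):
        orig_init = cls.__init__
        orig_call_me = cls.call_me

        # default values below are captured from the decorator's arguments,
        # so each instance may still override them at construction time
        @functools.wraps(orig_init)
        def __init__(self, *args, keep_track=keep_track, do_print=do_print, **kwargs):
            self.keep_track = keep_track
            self.do_print = do_print
            self.memory = []
            orig_init(self, *args, **kwargs)

        @functools.wraps(orig_call_me)
        def call_me(self, arg):
            arg = orig_call_me(self, arg)
            if self.do_print:
                print('hi')
            if self.keep_track:
                self.memory.append(arg)
            return arg

        cls.__init__ = __init__
        cls.call_me = call_me
        return cls
    return decorate

@keep_track(do_print=False)          # sets default state for the class
class foo:
    def __init__(self):
        pass
    def call_me(self, arg):
        return arg

f = foo(keep_track=True)             # per-instance override of a default
f.call_me(1); f.call_me(2)
print(f.memory)                      # [1, 2]
```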
|
<python><python-decorators>
|
2023-03-18 16:26:42
| 1
| 5,218
|
SumNeuron
|
75,777,110
| 1,204,527
|
Django and Huey task issue, record in DB doesn't exist when the task is run
|
<p>I am testing Huey with Django and found an issue: tasks can't find the record in the DB.</p>
<ul>
<li>Postgres 14</li>
<li>Redis 7.0.5</li>
<li>Django 4</li>
<li>running with Docker</li>
</ul>
<p>Here is the code:</p>
<pre><code># settings.py
USE_HUEY = env.bool("USE_HUEY", False)
HUEY = {
"huey_class": "huey.RedisHuey",
"name": "huey",
"immediate": not USE_HUEY,
"connection": {"url": REDIS_URL},
}
# app/signals.py
@receiver(post_save, sender=Post)
def post_signal(sender, instance, **kwargs):
from app.tasks import create_or_update_related_objects
create_or_update_related_objects(instance.pk)
# app/tasks.py
@db_task()
def create_or_update_related_objects(object_pk):
post = Post.objects.get(pk=object_pk)
...
</code></pre>
<p>This is running an async task but I am getting the error:</p>
<pre><code>app.models.Post.DoesNotExist: Post matching query does not exist.
</code></pre>
<p>This shouldn't happen: the post exists, and this task runs on a <code>post_save</code> signal.</p>
<p>What is weird is that if I do something like this, it works fine:</p>
<pre><code>@db_task()
def create_or_update_related_objects(object_pk):
import time
time.sleep(3)
post = Post.objects.get(pk=object_pk)
...
</code></pre>
<p>What am I doing wrong here?</p>
|
<python><django><redis><python-huey>
|
2023-03-18 16:17:15
| 2
| 4,381
|
Mirza Delic
|
75,777,109
| 1,615,703
|
Unsupported result column Struct()[] for DuckDB 0.7.1 from_json
|
<p>I am trying to get a large set of nested JSON files to load into a table, each file is a single record and there are ~25k files. However when I try to declare the schema it errors out when trying to declare the data type if it is a struct. For reference, I was reading this article from DuckDB as well <a href="https://duckdb.org/2023/03/03/json.html" rel="nofollow noreferrer">https://duckdb.org/2023/03/03/json.html</a>.</p>
<p>The Draft, MatchAwards, Players, and TeamPeriodicXPBreakdown are all arrays of dictionaries. Players has another array called "Talents" that is also an array of dictionaries. The following query will work, but only if I declare those columns as VARCHAR. This is not inherently an issue because I will need to UNNEST them anyway; however, even then I am not able to extract the JSON and use it because of the error.</p>
<pre><code>select *
from read_json(
'D:\parsed_replays\*'
, columns={
RandomValue: 'VARCHAR'
, ReplayFingerPrint: 'VARCHAR'
, BattlegroundName: 'VARCHAR'
, ShortName: 'VARCHAR'
, WinningTeam: 'UBIGINT'
, FirstToTen: 'BIGINT'
, DraftFirstTeam: 'UBIGINT'
, GameType: 'VARCHAR'
, GameLength: 'VARCHAR'
, GameLengthTimestamp: 'UBIGINT'
, GameDate: 'VARCHAR'
, GameDateFormatted: 'VARCHAR'
, VersionBuild: 'VARCHAR'
, VersionMajor: 'VARCHAR'
, Version: 'VARCHAR'
, Draft: 'VARCHAR'
, Players: 'VARCHAR'
, MatchAwards: 'VARCHAR'
, TeamPeriodicXPBreakdown: 'VARCHAR'
, ReplayFileName: 'VARCHAR'
, ReplayFileNameFormatted: 'VARCHAR'
, ReplayFilePath: 'VARCHAR'
}
-- , json_format='auto'
)
</code></pre>
<p>This query will also work to return the json.</p>
<pre><code> select *
FROM read_json_objects('D:\parsed_replays\*.json')
limit 100
</code></pre>
<p>Now an example of a draft looks like this. This will fail with the error <code>Unsupported result column type STRUCT("Draft" STRUCT("ReplayFingerPrint" VARCHAR, "DraftIndex" UBIGINT, "Team" UBIGINT, "Battletag" VARCHAR, "AltName" VARCHAR, "SelectedPlayerSlotId" UBIGINT, "PickType" VARCHAR)[])</code>.</p>
<pre><code> SELECT from_json(
'{
"Draft": [
{
"ReplayFingerPrint": "eba25103-6b64-96de-cab8-8baee0aa349a",
"DraftIndex": 0,
"Team": 1,
"Battletag": null,
"AltName": "FaerieDragon",
"SelectedPlayerSlotId": 2,
"PickType": "Banned"
}]}'
, '{"Draft":[{"ReplayFingerPrint":"VARCHAR","DraftIndex":"UBIGINT","Team":"UBIGINT","Battletag":"VARCHAR","AltName":"VARCHAR","SelectedPlayerSlotId":"UBIGINT","PickType":"VARCHAR"}]}'
)
</code></pre>
<p>I can't seem to figure out what I am missing in order to do this, but it seems like this should work. Essentially I want to flatten the original document, and break it into a few tables to work on, Replay, ReplayPlayers, PlayerTalents, TeamExp, Draft, and MatchAwards.</p>
<p>This is the structure query from <code>json_structure</code>.</p>
<pre><code>STRUCT(
"RandomValue" VARCHAR
, "ReplayFingerPrint" VARCHAR
, "BattlegroundName" VARCHAR
, "ShortName" VARCHAR
, "WinningTeam" UBIGINT
, "FirstToTen" BIGINT
, "DraftFirstTeam" UBIGINT
, "GameType" VARCHAR
, "GameLength" VARCHAR
, "GameLengthTimestamp" UBIGINT
, "GameDate" VARCHAR
, "GameDateFormatted" VARCHAR
, "VersionBuild" VARCHAR
, "VersionMajor" VARCHAR
, "Version" VARCHAR
, "Draft" STRUCT("ReplayFingerPrint" VARCHAR, "DraftIndex" UBIGINT, "Team" UBIGINT, "Battletag" VARCHAR, "AltName" VARCHAR, "SelectedPlayerSlotId" UBIGINT, "PickType" VARCHAR)[]
, "Players" STRUCT("ReplayFingerPrint" VARCHAR, "Battletag" VARCHAR, "HeroId" VARCHAR, "AttributeId" VARCHAR, "Team" UBIGINT, "Party" BIGINT, "IsWinner" BOOLEAN, "Character" VARCHAR, "CharacterLevel" UBIGINT, "AccountLevel" UBIGINT, "FirstToTen" BOOLEAN, "PlayerType" VARCHAR, "Region" UBIGINT, "BlizzardId" UBIGINT, "Level" UBIGINT, "Takedowns" UBIGINT, "SoloKills" UBIGINT, "Assists" UBIGINT, "Deaths" UBIGINT, "HeroDamage" UBIGINT, "SiegeDamage" UBIGINT, "StructureDamage" UBIGINT, "MinionDamage" UBIGINT, "CreepDamage" UBIGINT, "SummonDamage" UBIGINT, "TimeCCdEnemyHeroes" UBIGINT, "Healing" UBIGINT, "SelfHealing" UBIGINT, "RegenGlobes" UBIGINT, "DamageTaken" UBIGINT, "DamageSoaked" UBIGINT, "ExperienceContribution" UBIGINT, "TownKills" UBIGINT, "TimeSpentDead" UBIGINT, "MercCampCaptures" UBIGINT, "WatchTowerCaptures" UBIGINT, "MetaExperience" UBIGINT, "HighestKillStreak" UBIGINT, "ProtectionGivenToAllies" UBIGINT, "TimeSilencingEnemyHeroes" UBIGINT, "TimeRootingEnemyHeroes" UBIGINT, "TimeStunningEnemyHeroes" UBIGINT, "ClutchHealsPerformed" UBIGINT, "EscapesPerformed" UBIGINT, "VengeancesPerformed" UBIGINT, "OutnumberedDeaths" UBIGINT, "TeamfightEscapesPerformed" UBIGINT, "TeamfightHealingDone" UBIGINT, "TeamfightDamageTaken" UBIGINT, "TeamfightHeroDamage" UBIGINT, "Multikill" UBIGINT, "PhysicalDamage" UBIGINT, "SpellDamage" UBIGINT, "OnFireTimeonFire" UBIGINT
, "Talents" STRUCT("ReplayFingerPrint" VARCHAR, "Battletag" VARCHAR, "TalentIndex" UBIGINT, "TalentTier" VARCHAR, "TalentId" UBIGINT, "TalentName" VARCHAR, "TimeSpanSelectedString" VARCHAR, "TimeSpanSelected" UBIGINT)[])[]
, "MatchAwards" STRUCT("ReplayFingerPrint" VARCHAR, "Battletag" VARCHAR, "BlizzardId" UBIGINT, "Award" VARCHAR)[]
, "TeamPeriodicXPBreakdown" STRUCT("ReplayFingerPrint" VARCHAR, "Team" UBIGINT, "TeamLevel" UBIGINT, "TimeSpan" UBIGINT, "MinionXP" UBIGINT, "CreepXP" UBIGINT, "StructureXP" UBIGINT, "HeroXP" UBIGINT, "TrickleXP" UBIGINT, "TotalXP" UBIGINT)[]
, "ReplayFileName" VARCHAR
, "ReplayFileNameFormatted" VARCHAR
, "ReplayFilePath" VARCHAR
)
</code></pre>
<p>And the full columns dict that I am passing where it declares the children.</p>
<pre><code>
columns ={
RandomValue: 'VARCHAR'
, ReplayFingerPrint: 'VARCHAR'
, BattlegroundName: 'VARCHAR'
, ShortName: 'VARCHAR'
, WinningTeam: 'UBIGINT'
, FirstToTen: 'BIGINT'
, DraftFirstTeam: 'UBIGINT'
, GameType: 'VARCHAR'
, GameLength: 'VARCHAR'
, GameLengthTimestamp: 'UBIGINT'
, GameDate: 'VARCHAR'
, GameDateFormatted: 'VARCHAR'
, VersionBuild: 'VARCHAR'
, VersionMajor: 'VARCHAR'
, Version: 'VARCHAR'
, Draft: 'STRUCT(
ReplayFingerPrint VARCHAR
, DraftIndex UBIGINT
, Team UBIGINT
, Battletag VARCHAR
, AltName VARCHAR
, SelectedPlayerSlotId UBIGINT
, PickType VARCHAR
)[]'
, Players: 'STRUCT(
ReplayFingerPrint VARCHAR
, Battletag VARCHAR
, HeroId VARCHAR
, AttributeId VARCHAR
, Team UBIGINT
, Party BIGINT
, IsWinner BOOLEAN
, Character VARCHAR
, CharacterLevel UBIGINT
, AccountLevel UBIGINT
, FirstToTen BOOLEAN
, PlayerType VARCHAR
, Region UBIGINT
, BlizzardId UBIGINT
, Level UBIGINT
, Takedowns UBIGINT
, SoloKills UBIGINT
, Assists UBIGINT
, Deaths UBIGINT
, HeroDamage UBIGINT
, SiegeDamage UBIGINT, StructureDamage UBIGINT, MinionDamage UBIGINT, CreepDamage UBIGINT, SummonDamage UBIGINT, TimeCCdEnemyHeroes UBIGINT
, Healing UBIGINT, SelfHealing UBIGINT, RegenGlobes UBIGINT, DamageTaken UBIGINT, DamageSoaked UBIGINT, ExperienceContribution UBIGINT
, TownKills UBIGINT, TimeSpentDead UBIGINT, MercCampCaptures UBIGINT, WatchTowerCaptures UBIGINT, MetaExperience UBIGINT, HighestKillStreak UBIGINT
, ProtectionGivenToAllies UBIGINT, TimeSilencingEnemyHeroes UBIGINT, TimeRootingEnemyHeroes UBIGINT, TimeStunningEnemyHeroes UBIGINT
, ClutchHealsPerformed UBIGINT, EscapesPerformed UBIGINT, VengeancesPerformed UBIGINT, OutnumberedDeaths UBIGINT, TeamfightEscapesPerformed UBIGINT
, TeamfightHealingDone UBIGINT, TeamfightDamageTaken UBIGINT, TeamfightHeroDamage UBIGINT, Multikill UBIGINT
, PhysicalDamage UBIGINT, SpellDamage UBIGINT, OnFireTimeonFire UBIGINT
, Talents STRUCT(
ReplayFingerPrint VARCHAR
, Battletag VARCHAR
, TalentIndex UBIGINT
, TalentTier VARCHAR
, TalentId UBIGINT
, TalentName VARCHAR
, TimeSpanSelectedString VARCHAR
, TimeSpanSelected UBIGINT
)[]
)[]'
, MatchAwards: 'STRUCT(ReplayFingerPrint VARCHAR, Battletag VARCHAR, BlizzardId UBIGINT, Award VARCHAR)[]'
, TeamPeriodicXPBreakdown: 'STRUCT(ReplayFingerPrint VARCHAR, Team UBIGINT, TeamLevel UBIGINT, TimeSpan UBIGINT, MinionXP UBIGINT, CreepXP UBIGINT, StructureXP UBIGINT, HeroXP UBIGINT, TrickleXP UBIGINT, TotalXP UBIGINT)[]'
, ReplayFileName: 'VARCHAR'
, ReplayFileNameFormatted: 'VARCHAR'
, ReplayFilePath: 'VARCHAR'
}
</code></pre>
|
<python><json><python-3.x><duckdb>
|
2023-03-18 16:17:12
| 0
| 313
|
Mitchell Hamann
|
75,776,827
| 13,122,378
|
Best way to build application triggering reminder at certain time
|
<p>I want to build a Python app that sends reminders at certain times. It would have several subscribers. Each subscriber sets the specific moments when they want to be notified. For example, John wants a reminder every day, whereas Cassandra wants one every month.</p>
<p>I see several ways to do it:</p>
<ul>
<li>Use a script running 24/7 with a while loop that checks whether it's time to send the reminder.</li>
<li>Use a cron job that runs the script every minute to check whether there are reminders to send.</li>
<li>Create a simple API (in Flask, for example). It checks every minute or so whether there is a reminder to send to subscribers, or subscribers even make a request to the API themselves.</li>
</ul>
<p>What is the best way to build such an application in Python, both for a few subscribers (10) and for a larger number (1,000)?</p>
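All three options above can share the same core structure: keep subscriptions ordered by next due time (in a DB table or in memory), so each tick — whether it comes from a loop, a cron run, or a task queue — only touches the reminders that are actually due. That keeps per-tick cost close to the number of due reminders rather than the number of subscribers, which is what matters at 1,000+. A toy in-memory sketch with a min-heap (all names hypothetical):

```python
import heapq
from datetime import datetime, timedelta

def due_reminders(heap, now):
    # pop every entry whose due time has passed, re-queue its next occurrence
    sent = []
    while heap and heap[0][0] <= now:
        next_due, interval, who = heapq.heappop(heap)
        sent.append(who)
        heapq.heappush(heap, (next_due + interval, interval, who))
    return sent

now = datetime(2023, 3, 18, 12, 0)
heap = []
heapq.heappush(heap, (now, timedelta(days=1), "John"))                             # daily
heapq.heappush(heap, (now + timedelta(days=12), timedelta(days=30), "Cassandra"))  # ~monthly
print(due_reminders(heap, now))  # ['John']
```

A cron job running every minute would load the due rows, send them, and update their next-due timestamps; the heap version is the in-process equivalent.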
|
<python><web-applications><architecture>
|
2023-03-18 15:28:41
| 1
| 654
|
Pi-R
|
75,776,824
| 15,776,933
|
Running time in Python
|
<p>I wrote code to find the current time in Python. Its result is 'Time: 20:49:08', my current time. Can I make the time display keep running, so the value of <code>%S</code> increases?</p>
<p>Code:</p>
<pre><code>import time
def current_time():
t = time.localtime()
current_time = time.strftime("%H:%M:%S", t)
print('Time:',current_time)
</code></pre>
<p>I expect a running result without a GUI interface.</p>
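For a console-only "running" clock, the usual trick is to reprint the same line: <code>end='\r'</code> moves the cursor back to the start of the line so the next timestamp overwrites the previous one. A sketch:

```python
import time

def run_clock(seconds=5):
    # overwrite one console line once per second with the current time
    stamp = ''
    for _ in range(seconds):
        stamp = 'Time: ' + time.strftime('%H:%M:%S')
        print(stamp, end='\r', flush=True)
        time.sleep(1)
    print()          # move off the overwritten line when done
    return stamp

run_clock(seconds=3)
```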
|
<python><time>
|
2023-03-18 15:27:56
| 3
| 955
|
Anubhav
|
75,776,701
| 63,898
|
Python flask debug or run keep getting : Unable to find python module message
|
<p>I have a simple Flask server running in VS Code. I am able to debug and run it, but I keep getting the message:</p>
<pre><code>Unable to find python module.
</code></pre>
<p>on each step I take in the IDE:</p>
<pre><code>foo.foo-PC MINGW64 /c/dev/my/microservices_tests/python/b37 -- -m flask run --no-debugger --no-reload ^C
roservices_tests\\python\\broker-service\\app.py \\debugpy\\launcher 63493 -- c:\\dev\\my\\micr
meir.yanovich@MEIRYANO-PC MINGW64 /c/dev/my/microservices_tests/python/broker-service
$ cd c:\\dev\\my\\microservices_tests\\python\\broker-service ; /usr/bin/env c:\\msys64\\mingw64\\bin\\python.exe c:\\Users\\meir.yanovich\\.vscode\\extensions\\ms-python.python-2023.4.1\\pythonFiles\\lib\\python\\debugpy\\adapter/../..\\debugpy\\launcher 63741 -- -m flask run --no-debugger --no-reload
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
Unable to find python module.
* Serving Flask app 'app.py'
* Debug mode: on
WARNING: This is a development server. Do not use it in a production deployment. Use a production WSGI server instead.
* Running on http://127.0.0.1:5000
Press CTRL+C to quit
python -m pip list
Package Version
------------------ -------
click 8.1.3
colorama 0.4.6
distlib 0.3.3
Flask 2.2.3
Flask-Cors 3.0.10
importlib-metadata 6.0.0
itsdangerous 2.1.2
Jinja2 3.1.2
MarkupSafe 2.1.2
pip 22.0.4
PyYAML 6.0
ruamel.yaml 0.17.21
ruamel.yaml.clib 0.2.6
setuptools 59.8.0
six 1.16.0
Werkzeug 2.2.3
zipp 3.15.0
which python
/c/msys64/mingw64/bin/python
$ python --version
Python 3.9.10
</code></pre>
|
<python><visual-studio-code><flask><module>
|
2023-03-18 15:07:41
| 0
| 31,153
|
user63898
|
75,776,672
| 1,806,566
|
Can pip be used with a custom metapath finder?
|
<p>In my environment, I need to manually install python packages in a non-standard location, and I need to archive the source tarballs of the packages I install. I'm using python 3.11. I install packages by downloading tarballs and running:</p>
<p><code>python3 -m pip install --prefix=xxx --no-deps --no-index --ignore-installed .</code></p>
<p>where <code>xxx</code> is the location where I'm installing. I realize that this is tedious and in many cases it would be unnecessary, but in this particular environment, it's what I'm stuck with.</p>
<p>I've written a metapath finder that is able to locate packages where I have them installed (by implementing <code>find_spec()</code>), and it is able to find package metadata (by implementing <code>find_distributions()</code>). Both work fine. I can import modules I've installed, and I can use <code>pip3 list</code> to get a list of the available packages.</p>
<p>The way I load my metapath finder is through <code>usercustomize.py</code>, so the finder is in place before any other module begins and this whole process is transparent to users.</p>
<p>Everything works, except that when I try to install a package which requires another (via <code>requires</code> in the <code>build_system</code> section of <code>pyproject.toml</code>), required packages are not found, even if they are installed and visible via <code>pip3 list</code>.</p>
<p>My question is what does <code>pip3 install</code> do that <code>pip3 list</code> doesn't do in terms of locating dependent packages?</p>
<p><strong>Edit</strong></p>
<p>If I instrument my metapath finder's <code>find_distributions()</code> to print the distributions it finds, I see that it's being called from <code>pip install</code> and it returns all of the distributions. I can print the distribution version numbers, and they are correct, so I know the metadata has been parsed correctly.</p>
<p>Yet, <code>pip install</code> complains with:</p>
<pre><code>ERROR: Could not find a version that satisfies the requirement flit_core>=3.4 (from versions: none)
</code></pre>
<p>Looking through pip's internals, it looks like the <code>versions: none</code> is coming from an exploration of the installable packages. I don't have any installable packages because I'm using <code>--no-index</code>. But why do we need the installable packages if what it's looking for is already installed?</p>
<p><strong>Edit 2</strong></p>
<p>To reproduce the error:</p>
<ol>
<li><p>Start with a new installation of python 3.11.2, and make sure that's the python3/pip3 in the path.</p>
</li>
<li><p>Get <code>pip</code> upgraded and <code>wheel</code> installed:</p>
<pre><code>pip3 install wheel
pip3 install --upgrade pip
</code></pre>
</li>
<li><p>Download <code>flit_core-3.8.0.tar.gz</code> and <code>build-0.10.0.tar.gz</code> from pypi.</p>
</li>
<li><p>Install flit_core in a non-standard place:</p>
<pre><code>mkdir /tmp/nonstandard
tar zxvf flit_core-3.8.0.tar.gz
cd flit_core-3.8.0
python3 -m pip install --prefix=/tmp/nonstandard --no-deps --no-index --ignore-installed .
</code></pre>
</li>
<li><p>Although the installation works, python doesn't know to look in /tmp/nonstandard, so fix that by putting the following in <code>usercustomize.py</code>.</p>
<pre class="lang-py prettyprint-override"><code>import sys
import os
import importlib.abc
import importlib.metadata
import importlib.util
MyPackages = {
'flit_core' :
{
'location':'/tmp/nonstandard/lib/python3.11/site-packages/flit_core',
'metadata':'/tmp/nonstandard/lib/python3.11/site-packages/flit_core-3.8.0.dist-info',
},
}
class MyMetaPathFinder(importlib.abc.MetaPathFinder):
def find_spec(self, module_name, path, target=None):
if module_name in sys.modules:
return sys.modules[module_name].__spec__
if module_name in MyPackages:
module_dir = MyPackages[module_name]['location']
file = os.path.join(module_dir, '__init__.py')
spec = importlib.util.spec_from_file_location(module_name, file,
submodule_search_locations=[])
# Hack so if there are any subsequent submodule imports, they will
# be found through the path
if module_dir in sys.path:
sys.path.remove(module_dir)
sys.path.insert(0, module_dir)
return spec
else:
return None
def find_distributions(self, context=importlib.metadata.DistributionFinder.Context()):
if context.name is None:
distributions = [ importlib.metadata.PathDistribution.at(MyPackages[p]['metadata']) for p in MyPackages ]
else:
try:
distributions = [ importlib.metadata.PathDistribution.at(MyPackages[context.name]['metadata'])]
except KeyError:
distributions = []
return distributions
sys.meta_path.insert(2, MyMetaPathFinder())
</code></pre>
</li>
</ol>
<p>With that in place, we can now import <code>flit_core</code>, and it will show up in <code>pip3 list</code>.</p>
<ol start="6">
<li><p>Now try to install <code>build-0.10.0</code>, which has:</p>
<pre><code>[build-system]
requires = ["flit-core >= 3.4"]
</code></pre>
</li>
</ol>
<p>in its <code>pyproject.toml</code>.</p>
<pre><code>tar zxvf build-0.10.0.tar.gz
cd build-0.10.0/
python3 -m pip install --prefix=/tmp/nonstandard --no-deps --no-index --ignore-installed .
</code></pre>
<p>The installation fails with:</p>
<pre><code> ERROR: Could not find a version that satisfies the requirement flit-core>=3.4 (from versions: none)
ERROR: No matching distribution found for flit-core>=3.4
</code></pre>
<p>even though I can tell by instrumenting <code>find_distributions()</code>, that it is being called and is returning <code>flit_core</code>.</p>
|
<python><pip>
|
2023-03-18 15:02:41
| 1
| 1,241
|
user1806566
|
75,776,610
| 1,763,602
|
Is there a difference between `import a as b` and `import a ; b = a`?
|
<p>It's common for me to write in the REPL, for example:</p>
<pre><code>from datetime import datetime as dt
</code></pre>
<p>Is there a difference between this statement and</p>
<pre><code>from datetime import datetime
dt = datetime
</code></pre>
<p>? For now I can list two:</p>
<ul>
<li>the first statement is more readable</li>
<li>the first statement does not create a <code>datetime</code> variable</li>
</ul>
<p>PS: I'm trying to play with pyPEG for creating a toy parser.</p>
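At runtime the two spellings are equivalent: both bind a name to the same class object, and the only namespace difference is the extra <code>datetime</code> binding the second form leaves behind. This is easy to check:

```python
from datetime import datetime as dt

from datetime import datetime
dt2 = datetime

# all three names refer to the very same object, so behaviour is identical;
# the aliased import simply never creates the intermediate 'datetime' name
assert dt is dt2 is datetime
print(dt is dt2)  # True
```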
|
<python><syntax><include><python-import>
|
2023-03-18 14:51:25
| 1
| 16,005
|
Marco Sulla
|
75,776,368
| 9,601,748
|
How can I separate a column by delimiter and make new columns from the split strings?
|
<p>Here's the dataset I'm currently trying to work with:</p>
<p><a href="https://i.sstatic.net/RY9Aq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/RY9Aq.png" alt="enter image description here" /></a></p>
<p>What I'm trying to do is separate all the 'genres' values by the comma, and then create new columns called 'genres_Comedy', 'genres_Drama', 'genres_Family', etc., where each row will have the value 0 or 1 depending on whether it is that genre. I'm fairly certain it's possible. I know how to separate columns by delimiter, but I don't know how to generate the necessary columns based on the split strings, or how I would then fill in each row's correct value (0 or 1) in the newly generated columns.</p>
<p>I've tried to look for solutions, but my problem is a bit specific and I can't find any applicable solutions, though maybe I'm looking wrong. Does anyone know how I can go about accomplishing this? Please let me know if there's any other info I can provide that could be helpful, thanks for reading.</p>
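pandas has this built in: `Series.str.get_dummies` splits on a separator and one-hot encodes the pieces in one step. A sketch on a made-up frame (the column names are assumed from the screenshot):

```python
import pandas as pd

df = pd.DataFrame({
    "title": ["Movie A", "Movie B"],
    "genres": ["Comedy, Drama", "Drama, Family"],
})

# split on ', ' and one-hot encode each genre as its own 0/1 column
dummies = df["genres"].str.get_dummies(sep=", ").add_prefix("genres_")
out = df.join(dummies)
print(out.columns.tolist())
# ['title', 'genres', 'genres_Comedy', 'genres_Drama', 'genres_Family']
```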
|
<python><pandas><dataframe>
|
2023-03-18 14:03:50
| 1
| 311
|
Marcus
|
75,776,286
| 1,176,573
|
Join or Merge Pandas Dataframes filling null values
|
<p>I have two dataframes, <code>Deployment</code> and <code>HPA</code>, as shown. The two dataframes mostly share values between the columns <code>Deployment-Label</code> and <code>HPA-Label</code>; I need to merge/join them, filling null values into all the other columns wherever a value is missing in either <code>Deployment-Label</code> or <code>HPA-Label</code>.</p>
<p>I tried using <code>pd.merge</code>, but it drops the non-common rows from both dataframes.</p>
<pre class="lang-py prettyprint-override"><code># Merge the two dataframes, using LABEL common col, after renaming
df3 = pd.merge(df1, df2, on = 'LABEL')
# Write it to a new CSV file
df3.to_csv('output/merged.csv')
</code></pre>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Deployment-Label</th>
<th>Deployment Table - Other Columns</th>
</tr>
</thead>
<tbody>
<tr>
<td>accountdataservice</td>
<td>1</td>
</tr>
<tr>
<td>accountservice-grpc</td>
<td>2</td>
</tr>
<tr>
<td>fmsdataservice</td>
<td>3</td>
</tr>
<tr>
<td>fuelbenchmarkservice-grpc</td>
<td>4</td>
</tr>
<tr>
<td>fueldeviationservice-grpc</td>
<td>5</td>
</tr>
<tr>
<td>httpclientservice</td>
<td>6</td>
</tr>
<tr>
<td>packageservice-grpc</td>
<td>7</td>
</tr>
<tr>
<td>provisioningdataservice</td>
<td>8</td>
</tr>
<tr>
<td>traefik-ingress-ilb</td>
<td>9</td>
</tr>
<tr>
<td>translationservice-grpc</td>
<td>10</td>
</tr>
</tbody>
</table>
</div><div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>HPA-Label</th>
<th>HPA Table - Other Columns</th>
</tr>
</thead>
<tbody>
<tr>
<td>accountdataservice</td>
<td>A</td>
</tr>
<tr>
<td>accountservice-grpc</td>
<td>B</td>
</tr>
<tr>
<td>fmsdataservice</td>
<td>C</td>
</tr>
<tr>
<td>fuelbenchmarkservice-grpc</td>
<td>D</td>
</tr>
<tr>
<td>fueldeviationservice-grpc</td>
<td>E</td>
</tr>
<tr>
<td>hangfire</td>
<td>F</td>
</tr>
<tr>
<td>httpclientservice</td>
<td>G</td>
</tr>
<tr>
<td>packageservice-grpc</td>
<td>H</td>
</tr>
<tr>
<td>portalservicerest</td>
<td>I</td>
</tr>
<tr>
<td>provisioningdataservice</td>
<td>J</td>
</tr>
<tr>
<td>schedularservice-grpc</td>
<td>K</td>
</tr>
<tr>
<td>translationservice-grpc</td>
<td>L</td>
</tr>
</tbody>
</table>
</div><h2>Expected Outcome:</h2>
<p><a href="https://i.sstatic.net/rPCUm.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/rPCUm.png" alt="enter image description here" /></a></p>
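`pd.merge` defaults to `how='inner'`, which is exactly what drops the non-common rows; `how='outer'` keeps rows from both sides and fills the gaps with NaN. A minimal sketch on tiny stand-in frames:

```python
import pandas as pd

df1 = pd.DataFrame({"Deployment-Label": ["accountdataservice", "traefik-ingress-ilb"],
                    "Deployment-Other": [1, 9]})
df2 = pd.DataFrame({"HPA-Label": ["accountdataservice", "hangfire"],
                    "HPA-Other": ["A", "F"]})

# outer join keeps non-matching labels from BOTH frames, filling with NaN
merged = pd.merge(df1, df2,
                  left_on="Deployment-Label", right_on="HPA-Label",
                  how="outer")
print(len(merged))  # 3 rows: the shared label plus one unmatched from each side
```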
|
<python><pandas><dataframe>
|
2023-03-18 13:47:06
| 1
| 1,536
|
RSW
|
75,776,250
| 6,455,731
|
How to import a module from a module within a package?
|
<p>I've heard that Python imports can be weird and I feel really stupid for not being able to fix this myself; so please somebody kindly explain how this import situation can possibly be a problem.</p>
<p>Directory structure:</p>
<pre><code>├── main.py
└── sub
├── sub1.py
└── sub2.py
</code></pre>
<p>Say sub1 defines function1:</p>
<pre><code># sub1.py
def function1():
return "function 1 result"
</code></pre>
<p>sub2 imports sub1 and uses function1 in function2:</p>
<pre><code># sub2.py
import sub1
def function2():
return f"function 2 result: {sub1.function1()}"
</code></pre>
<p>Calling function2 from main.py:</p>
<pre><code>from sub.sub2 import function2
print(function2())
</code></pre>
<p>This throws an exception <code>ModuleNotFoundError: No module named 'sub1'</code>.</p>
<p>Why? Shouldn't main.py see sub and sub.sub1 and sub.sub2; and shouldn't sub1 and sub2 see each other?</p>
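The likely cause: when main.py is run, only main.py's directory goes on <code>sys.path</code>, so the bare <code>import sub1</code> inside sub2.py is looked up as a top-level module and not found. Importing it as <code>from sub import sub1</code> (or, inside the package, <code>from . import sub1</code>) fixes it. A self-contained reproduction of the working layout, built in a temp directory:

```python
import os
import sys
import tempfile

# recreate the layout, but with the absolute import 'from sub import sub1'
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"))
with open(os.path.join(root, "sub", "sub1.py"), "w") as f:
    f.write("def function1():\n    return 'function 1 result'\n")
with open(os.path.join(root, "sub", "sub2.py"), "w") as f:
    f.write("from sub import sub1\n"
            "def function2():\n"
            "    return f'function 2 result: {sub1.function1()}'\n")

sys.path.insert(0, root)  # stands in for running main.py from root
from sub.sub2 import function2
print(function2())  # function 2 result: function 1 result
```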
|
<python><import><module><package>
|
2023-03-18 13:39:48
| 1
| 964
|
lupl
|
75,776,198
| 13,448,993
|
How do I refresh a gradio button after pressing submit?
|
<p>I have a simple input/output gradio interface:</p>
<pre><code>import gradio as gr
def generate_output(input_text):
output_text = "Hello, " + input_text + "!"
return output_text
iface = gr.Interface(fn=generate_output,
                     inputs=gr.Textbox(label="Input Text"),
outputs="text",
title="Basic Text Input and Output",
description="Enter some text and get a modified version of it as output")
iface.launch(share=False)
</code></pre>
<p>My objective is using gradio to answer questions in a sequence, like a chatbot.</p>
<p>When I press the submit button, how can I change the label of the input textbox from "input text" to "question 2"?</p>
|
<python><user-interface><gradio>
|
2023-03-18 13:31:39
| 1
| 523
|
ardito.bryan
|
75,776,058
| 8,511,822
|
Python seedir treat zip/gzip files as directories?
|
<p>I am using the <a href="https://github.com/earnestt1234/seedir" rel="nofollow noreferrer">seedir library</a> to handle folder/file STDOUT and it works quite nicely. I wonder if there is an option to treat zip/gzip files as folders and list the individual files they contain?</p>
<p>If this is not built in, would the proper solution be to <a href="https://earnestt1234.github.io/seedir/seedir/index.html#extending-seedir" rel="nofollow noreferrer">extend seedir</a>?</p>
<p>thx</p>
|
<python>
|
2023-03-18 13:01:19
| 1
| 1,642
|
rchitect-of-info
|
75,775,975
| 7,403,752
|
Python requests stop redirects does not work
|
<p>I want to access the content of a web page, but I'm being redirected to another page even though I've set allow_redirects to False in my requests call. Here's an example code snippet:</p>
<pre><code>import requests
from bs4 import BeautifulSoup
headers = {'User-Agent': user_agent} # assume I inserted my user agent here
URL = "https://stackoverflow.com/questions/73909641/program-is-about-space-utilisation-i-am-getting-error-72g-value-too-great-for"
html_content = requests.get(URL, allow_redirects=False, headers = headers)
soup = BeautifulSoup(html_content.content, "html.parser")
</code></pre>
<p>When I run this code, I don't get any content from the web page. However, if I set allow_redirects to True, I'm redirected to this URL: <a href="https://stackoverflow.com/questions/37015073/convert-between-byte-count-and-human-readable-string">Convert between byte count and "human-readable" string</a>.</p>
|
<python><http-redirect><python-requests>
|
2023-03-18 12:45:29
| 1
| 2,326
|
edyvedy13
|
75,775,969
| 1,169,091
|
What is the solid black rectangle adjacent to the decision tree?
|
<p>I adapted this code from <a href="https://www.dasca.org/world-of-big-data/article/know-how-to-create-and-visualize-a-decision-tree-with-python" rel="nofollow noreferrer">https://www.dasca.org/world-of-big-data/article/know-how-to-create-and-visualize-a-decision-tree-with-python</a>.</p>
<p>I removed two arguments to the DecisionTreeClassifier constructor, min_impurity_split=None and presort=False, but otherwise the code is the same as I found it.</p>
<pre><code>import sklearn.datasets as datasets
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
#from sklearn.externals.six import StringIO
from IPython.display import Image
from sklearn.tree import export_graphviz
import pydotplus
from six import StringIO
iris=datasets.load_iris()
df=pd.DataFrame(iris.data, columns=iris.feature_names)
y=iris.target
dtree=DecisionTreeClassifier()
dtree.fit(df,y)
# Limit max depth
model = RandomForestClassifier(max_depth = 3, n_estimators=10)
# Train
model.fit(iris.data, iris.target)
# Extract single tree
estimator_limited = model.estimators_[5]
estimator_limited
# Removed min_impurity_split=None and presort=False because they caused "unexpected keyword argument" errors
DecisionTreeClassifier(class_weight=None, criterion='gini', max_depth=3,
max_features='auto', max_leaf_nodes=None,
min_impurity_decrease=0.0,
min_samples_leaf=1, min_samples_split=2,
min_weight_fraction_leaf=0.0,
random_state=1538259045, splitter='best')
# No max depth
model = RandomForestClassifier(max_depth = None, n_estimators=10)
model.fit(iris.data, iris.target)
estimator_nonlimited = model.estimators_[5]
dot_data = StringIO()
export_graphviz(dtree, out_file=dot_data,
filled=True, rounded=True,
special_characters=True)
graph = pydotplus.graph_from_dot_data(dot_data.getvalue())
Image(graph.create_png())
</code></pre>
<p>The decision tree looks like this:
<a href="https://i.sstatic.net/4rP2G.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/4rP2G.png" alt="enter image description here" /></a></p>
|
<python><graphviz><decision-tree><pydotplus>
|
2023-03-18 12:44:40
| 1
| 4,741
|
nicomp
|
75,775,967
| 6,301,394
|
Passing arguments to apply
|
<p>In Pandas, I currently use groupby apply this way (this is a MRE only):</p>
<pre><code>import pandas as pd
import numpy as np
class ApplyFunction:
def Close(df, *args):
return pd.DataFrame(df['Close'] * args[0])
class Placeholder:
def __init__(self):
self.ind = ApplyFunction.Close
self.args = [2]
call = Placeholder()
date_range = np.arange(np.datetime64("2020-01-01"), np.datetime64("2020-01-03"), np.timedelta64(1, "D"))
idx = pd.MultiIndex.from_product([['AAA', 'BBB'], date_range], names=['Symbol','Date'])
df = pd.DataFrame({"Close": [8,9,10,11]}, idx, ['Close'])
df["Result"] = df.groupby("Symbol", group_keys=False).apply(*[call.ind, *call.args])
>> Close Result
>> Symbol Date
>> AAA 2020-01-01 8 16
>> 2020-01-02 9 18
>> BBB 2020-01-01 10 20
>> 2020-01-02 11 22
</code></pre>
<p>While I can call the ApplyFunction in polars, I cannot figure out how to pass arguments in a similar fashion:</p>
<pre><code>import polars as pl
class ApplyFunction:
def Close(df, *args):
return pl.DataFrame(df['Close'])
df = pl.from_pandas(df.reset_index(drop=False))
df = df.with_columns(
df.groupby("Symbol", maintain_order=True).apply(
function=call.ind,
)
)
shape: (4, 3)
┌────────┬─────────────────────┬───────┐
│ Symbol ┆ Date ┆ Close │
│ --- ┆ --- ┆ --- │
│ str ┆ datetime[ns] ┆ i64 │
╞════════╪═════════════════════╪═══════╡
│ AAA ┆ 2020-01-01 00:00:00 ┆ 8 │
│ AAA ┆ 2020-01-02 00:00:00 ┆ 9 │
│ BBB ┆ 2020-01-01 00:00:00 ┆ 10 │
│ BBB ┆ 2020-01-02 00:00:00 ┆ 11 │
└────────┴─────────────────────┴───────┘
</code></pre>
<p>This is what I'd like to use but it obviously fails:</p>
<pre><code>df = df.with_columns(
df.groupby("Symbol", maintain_order=True).apply(
function=*[call.ind, *call.args],
)
)
>> TypeError: GroupBy.apply() takes 2 positional arguments but 3 were given
</code></pre>
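<p>A common workaround (a sketch, assuming polars' <code>GroupBy.apply</code> accepts only a single one-argument callable) is to bind the extra arguments up front with <code>functools.partial</code> or a lambda. The illustration below uses a stand-in for the apply machinery so it is self-contained; <code>close</code> mirrors the question's <code>ApplyFunction.Close</code>:</p>
<pre><code>from functools import partial

def close(df, factor):
    # Stand-in for ApplyFunction.Close: scale the 'Close' values by factor.
    return {"Close": df["Close"], "Result": [v * factor for v in df["Close"]]}

# Bind the extra argument so the result is a one-argument callable,
# which is all a single-callable apply() signature can accept:
bound = partial(close, factor=2)

group = {"Close": [8, 9]}
print(bound(group))  # {'Close': [8, 9], 'Result': [16, 18]}
</code></pre>
<p>With the question's objects, that would look like <code>df.groupby("Symbol", maintain_order=True).apply(partial(call.ind, *call.args))</code> or <code>apply(lambda d: call.ind(d, *call.args))</code> — an untested sketch against the polars API.</p>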
|
<python><python-polars>
|
2023-03-18 12:44:26
| 1
| 2,613
|
misantroop
|
75,775,941
| 10,441,038
|
Assign to a whole column of a Pandas DataFrame with MultiIndex?
|
<p>I have a DataFrame (called <code>midx_df</code>) with a MultiIndex, and I want to assign values from a whole column of another DataFrame (called <code>sour_df</code>), which has a single-level index, to <code>midx_df</code>.</p>
<p>All index values of <code>sour_df</code> exist in the top-level index of <code>midx_df</code>. I need to specify the level-1 index to add/modify the values of all rows that share the same level-1 index.</p>
<p>For example:</p>
<pre><code>beg_min = pd.to_datetime('2023/03/18 18:50', yearfirst=True)
end_min = pd.to_datetime('2023/03/18 18:53', yearfirst=True)
minutes = pd.date_range(start=beg_min, end=end_min, freq='1min')
actions = ['Buy', 'Sell']
m_index = pd.MultiIndex.from_product([minutes, actions], names=['time', 'action'])
sour_df = pd.DataFrame(index=minutes, columns=['price'])
sour_df.index.rename('time', inplace=True)
sour_df.loc[minutes[0], 'price'] = 'b0'
sour_df.loc[minutes[1], 'price'] = 'b1'
sour_df.loc[minutes[3], 'price'] = 'b2'
midx_df = pd.DataFrame(index=m_index, columns=['price'])
print(midx_df)
midx_df.loc[(beg_min, 'Buy'), 'price'] = 123 # works but only for one row!
midx_df.loc[(end_min, 'Buy')]['price'] = 124 # doesn't work!
print(midx_df)
midx_df.loc[(slice(None), 'Buy'), 'price'] = sour_df # doesn't work!
print(midx_df)
midx_df.loc[(slice(None), 'Buy'), 'price'] = sour_df['price'] # doesn't work!
print(midx_df)
#midx_df.loc[(slice(None), 'Buy')]['price'] = sour_df['price'] # doesn't work!
#print(midx_df)
midx_df.loc[pd.IndexSlice[:, 'Buy'], :] = sour_df # doesn't work!
print(midx_df)
</code></pre>
<p>What is the correct way to do that?</p>
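<p>One reason the assignments fail is that pandas aligns on the full MultiIndex during assignment, and a Series with a single-level index never aligns with it. A hedged workaround — assuming <code>sour_df</code> has exactly one row per minute, in the same order as the sliced rows — is to drop the index with <code>.to_numpy()</code> so the assignment happens positionally:</p>
<pre><code>import pandas as pd

minutes = pd.date_range("2023-03-18 18:50", periods=4, freq="1min")
m_index = pd.MultiIndex.from_product([minutes, ["Buy", "Sell"]],
                                     names=["time", "action"])
midx_df = pd.DataFrame(index=m_index, columns=["price"])
sour_df = pd.DataFrame({"price": ["b0", "b1", "b2", "b3"]}, index=minutes)
sour_df.index.rename("time", inplace=True)

# .to_numpy() drops sour_df's single-level index, so pandas assigns the
# values positionally instead of failing to align it with the MultiIndex.
midx_df.loc[(slice(None), "Buy"), "price"] = sour_df["price"].to_numpy()
print(midx_df.loc[(slice(None), "Buy"), "price"].tolist())
# ['b0', 'b1', 'b2', 'b3']
</code></pre>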
|
<python><pandas><dataframe><slice><multi-index>
|
2023-03-18 12:38:28
| 1
| 2,165
|
Leon
|
75,775,937
| 3,442,409
|
tensorflow.data.Dataset.from_generator() converts string arguments to bytes
|
<p>I have a custom generator which takes two dates in string format as arguments and yields (features, labels) from within the date range. I want to create a dataset out of it, but <code>tf.data.Dataset.from_generator()</code> stubbornly converts date strings to bytes causing the generator function to fail. Let me demonstrate this behaviour:</p>
<pre><code>def some_generator(date1, date2):
print(date1, date2)
yield [1, 2], [3]
feats, label = next(some_generator('2010-01-01', '2017-12-31'))
signature = (tf.type_spec_from_value(tf.convert_to_tensor(feats)),
tf.type_spec_from_value(tf.convert_to_tensor(label)))
ds = Dataset.from_generator(some_generator, args=('2010-01-01', '2017-12-31'), output_signature=signature)
for feats, label in ds.take(1):
print(feats, label)
</code></pre>
<p>The output from this code is:</p>
<pre><code>2010-01-01 2017-12-31
b'2010-01-01' b'2017-12-31'
tf.Tensor([1 2], shape=(2,), dtype=int32) tf.Tensor([3], shape=(1,), dtype=int32)
</code></pre>
<p>The first line is the result of the first call to <code>some_generator</code> where feats and label are generated for the signature, and here the dates are printed out as strings.</p>
<p>The second line is the result of iterating over the dataset in the for loop and there the dates are printed as byte strings. This problem does not occur with integers, but I need strings. If anyone has an idea how to solve this please share.</p>
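<p>As far as I know, <code>from_generator</code> converts <code>args</code> to tensors, and string tensors come back into Python as <code>bytes</code> — hence the <code>b'...'</code> values. A common fix (a sketch, not the only way) is to normalize inside the generator before using the dates:</p>
<pre><code>def as_str(value):
    # tf.data passes string args through as tensors, which hand the
    # generator bytes; normalize back to str before using the dates.
    return value.decode("utf-8") if isinstance(value, bytes) else value

# First lines inside some_generator would then be:
# date1, date2 = as_str(date1), as_str(date2)
print(as_str(b"2010-01-01"), as_str("2017-12-31"))  # 2010-01-01 2017-12-31
</code></pre>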
|
<python><tensorflow><dataset>
|
2023-03-18 12:36:33
| 1
| 2,713
|
mac13k
|
75,775,892
| 12,023,442
|
FlightSql works on Linux but fails on Windows with error
|
<p>The code below works fine on Linux, run from Colab:</p>
<pre><code>from flightsql import FlightSQLClient
from urllib.parse import urlparse
parsed_uri = urlparse('https://us-east-1-1.aws.cloud2.influxdata.com' )
bucket_name="VLBA"
host_name=parsed_uri.hostname
data_frame_measurement_name="UnitPrice"
token = "u10r0Iyf8jnCwaHd5ktpHPZtBAoCyvZtY_8THsW84VMK73GmQ1Md2qOWs5UNaW1uRoLmD7Oqz4AW72WqQQ7cNQ=="
query_client = FlightSQLClient(
host = host_name,
token = token,
metadata={"bucket-name": bucket_name})
query = """SELECT *
FROM '{}'
""".format(data_frame_measurement_name)
info = query_client.execute(query)
</code></pre>
<p>But fails on Windows with the error</p>
<pre><code> raceback (most recent call last):
File "<stdin>", line 1, in <module>
File "C:\Users\arunj\anaconda3\envs\newenv\lib\site-packages\flightsql\client.py", line 123, in execute
return self._get_flight_info(flightsql.CommandStatementQuery(query=query), call_options)
File "C:\Users\arunj\anaconda3\envs\newenv\lib\site-packages\flightsql\util.py", line 8, in g
return f(self, *args, **kwargs)
File "C:\Users\arunj\anaconda3\envs\newenv\lib\site-packages\flightsql\client.py", line 293, in _get_flight_info
return self.client.get_flight_info(flight_descriptor(command), options)
File "pyarrow\_flight.pyx", line 1506, in pyarrow._flight.FlightClient.get_flight_info
File "pyarrow\_flight.pyx", line 71, in pyarrow._flight.check_flight_status
pyarrow._flight.FlightUnavailableError: Flight returned unavailable error, with message: empty address list:
</code></pre>
<p>And then keeps throwing the below error:</p>
<pre><code>Failed to create secure subchannel for secure name 'us-east-1-1.aws.cloud2.influxdata.com:443'; Got args: {grpc.client_channel_factory=0x23d61811d90, grpc.default_authority=us-east-1-1.aws.cloud2.influxdata.com:443, grpc.initial_reconnect_backoff_ms=100, grpc.internal.channel_credentials=0x23d62fed4f0, grpc.internal.subchannel_pool=0x23d62ff06a0, grpc.max_receive_message_length=-1, grpc.primary_user_agent=grpc-c++/1.50.1, grpc.resource_quota=0x23d62ff09d0, grpc.server_uri=dns:///us-east-1-1.aws.cloud2.influxdata.com:443, grpc.use_local_subchannel_pool=1}
</code></pre>
|
<python><influxdb><pyarrow>
|
2023-03-18 12:24:30
| 1
| 2,254
|
ArunJose
|
75,775,880
| 6,245,473
|
How do I get a dataframe to write all data pulled to csv and not skip any rows with missing data?
|
<p>The following is working great for hundreds of stocks. But some stocks have incomplete data. E.g., blank <code>shortRatio</code> amount, etc. When that happens, it skips the entire row instead of just leaving certain fields blank. For example, out of <code>symbols = ['AI', 'AG', 'MA']</code>, only <code>'MA'</code> outputs any data. How do I get it to write to csv whatever data is available and not skip rows just because one field is blank?</p>
<pre><code>import pandas as pd
from yahooquery import Ticker
import csv
symbols = ['AI', 'AG', 'MA']
modules = 'financialData defaultKeyStatistics price summaryDetail earningsTrend'
for tick in symbols:
tickers = Ticker(tick)
try:
df = pd.DataFrame([{'symbol': ticker,
'financialCurrency': data['financialData']['financialCurrency'],
'shortRatio': data['defaultKeyStatistics']['shortRatio'],
'shortPercentOfFloat': data['defaultKeyStatistics']['shortPercentOfFloat'],
'priceToBook': data['defaultKeyStatistics']['priceToBook'],
'earningsQuarterlyGrowth': data['defaultKeyStatistics']['earningsQuarterlyGrowth'],
'marketCap': data['price']['marketCap'],
'shortName': data['price']['shortName'],
'fiftyTwoWeekLow': data['summaryDetail']['fiftyTwoWeekLow'],
'fiftyTwoWeekHigh': data['summaryDetail']['fiftyTwoWeekHigh'],
'growth': data['earningsTrend']['trend'][-1]['growth']}
for ticker, data in tickers.get_modules(modules).items()])
except Exception:
continue
try:
df.to_csv('output.csv', mode='a', index=True, header=False)
except (KeyError, TypeError):
continue
</code></pre>
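<p>The row vanishes because a single missing key raises <code>KeyError</code>, and the bare <code>except</code> then skips the whole ticker. One hedged approach (<code>dig</code> is a hypothetical helper introduced here, not part of yahooquery) is a safe nested lookup that returns a blank instead of raising:</p>
<pre><code>def dig(data, *keys, default=""):
    # Walk nested dicts/lists, returning `default` as soon as a key or
    # index is missing, instead of raising KeyError/IndexError/TypeError.
    for key in keys:
        try:
            data = data[key]
        except (KeyError, IndexError, TypeError):
            return default
    return data

row = {"defaultKeyStatistics": {"priceToBook": 3.1}}  # shortRatio missing
print(dig(row, "defaultKeyStatistics", "priceToBook"))  # 3.1
print(dig(row, "defaultKeyStatistics", "shortRatio"))   # "" (blank, not a skip)
</code></pre>
<p>Each field in the dict comprehension would then read like <code>'shortRatio': dig(data, 'defaultKeyStatistics', 'shortRatio')</code>, so incomplete tickers still produce a row with blanks.</p>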
|
<python><pandas><dataframe><csv><yfinance>
|
2023-03-18 12:22:48
| 1
| 311
|
HTMLHelpMe
|
75,775,682
| 10,765,455
|
Read long single-line text file to JSON
|
<p>I have a long single-line text file looking like this:</p>
<pre><code>{"attributes":{"fieldAttrs":"{\"request_querystring\":{\"count\":6},\"server_duration\":{
...
"version":"iidfwu83"}
</code></pre>
<p>That spans one single line in a text file <code>myjson.json</code>.</p>
<p>I know from a database that this whole object is a legit deeply nested JSON object.</p>
<p>How do I read it into a Python JSON object?</p>
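<p>If the line is valid JSON, its length is irrelevant to the parser — <code>json.load</code> handles it directly. A self-contained sketch (the temp file stands in for <code>myjson.json</code>, and the content is a shortened stand-in for the real object):</p>
<pre><code>import json
import tempfile

# Write a one-line JSON document to simulate myjson.json.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    f.write('{"attributes": {"fieldAttrs": "{\\"count\\": 6}"}, "version": "iidfwu83"}')
    path = f.name

with open(path) as f:
    obj = json.load(f)  # line length does not matter to the parser

print(obj["version"])  # iidfwu83
</code></pre>
<p>Note that <code>fieldAttrs</code> in the sample is itself a JSON-encoded string (the <code>\"</code> escapes), so a second <code>json.loads(obj["attributes"]["fieldAttrs"])</code> is needed to decode that nested level.</p>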
|
<python><json>
|
2023-03-18 11:48:10
| 0
| 1,621
|
Tristan Tran
|
75,775,641
| 5,908,918
|
AWS Lambda error when adding a layer for snowflake-connector-python package
|
<p>I am trying to implement a python lambda function that would connect to a Snowflake database. To do that, I need to create a layer for snowflake-connector-python, so I create a zip package of this layer on my Jenkins pipeline, which runs in an Ubuntu Linux container, python 3.10:</p>
<pre><code>mkdir -p ./python
python -m pip install --upgrade pip
pip install snowflake-connector-python -t ./python
zip -r snowflake-connector-python.zip ./python
</code></pre>
<p>I upload this to my Lambda via terraform:</p>
<pre><code>resource "aws_lambda_layer_version" "lambda_layer" {
filename = "${path.module}/../snowflake-connector-python.zip"
layer_name = "snowflake"
compatible_runtimes = ["python3.8"]
}
</code></pre>
<p>This works and I can see my layer added in my lambda upon deployment:
<a href="https://i.sstatic.net/ktxiq.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/ktxiq.png" alt="enter image description here" /></a></p>
<p>The problem is that when I try to import this library in my lambda with <code>import snowflake.connector</code>, I get the below error:</p>
<blockquote>
<p>"Unable to import module 'lambda_function1': No module named '_cffi_backend'"</p>
</blockquote>
<p>I have verified that if I change my package to <code>requests</code>, it imports successfully, so it seems to be OS-specific C components inside snowflake-connector-python that are causing the issue.</p>
<p>Any ideas on how to make this work?</p>
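<p>A likely cause (a sketch under assumptions, not a verified diagnosis): the layer is built on Ubuntu with Python 3.10 but runs on Lambda's Amazon Linux Python 3.8 runtime, and <code>_cffi_backend</code> is a compiled extension, so the wheels must target Lambda's OS and Python version. pip can fetch foreign-platform wheels with <code>--platform</code>/<code>--python-version</code>/<code>--only-binary</code>; the exact platform tag below is an assumption for x86_64 Lambda:</p>
<pre><code># Build the layer against Lambda's manylinux/python3.8 target instead of
# the Ubuntu/py3.10 build container:
mkdir -p ./python
pip install snowflake-connector-python \
    --platform manylinux2014_x86_64 \
    --python-version 3.8 \
    --only-binary=:all: \
    -t ./python
zip -r snowflake-connector-python.zip ./python
</code></pre>
<p>Building the zip inside an Amazon Linux container (or matching the runtime to the build Python) is an equivalent alternative.</p>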
|
<python><aws-lambda><snowflake-cloud-data-platform><aws-lambda-layers>
|
2023-03-18 11:41:59
| 0
| 1,241
|
Vin Shahrdar
|
75,775,624
| 1,118,780
|
Pydantic behaves abnormally
|
<p>I want to build a Pydantic model in Pydantic version 1.10.2 and python 3.7 with the following fields:</p>
<ul>
<li><code>overWrite</code></li>
<li><code>nas_path</code></li>
<li><code>hdfs_dir</code></li>
<li><code>convert</code></li>
<li><code>file_size_limit</code></li>
</ul>
<p>with these conditions:</p>
<ol>
<li><code>file_size_limit</code> can't be blank or anything other than <code>int</code></li>
<li><code>convert</code> accepts <code>Y</code>/<code>YES</code> or <code>N</code>/<code>NO</code> and is mandatory. When it is <code>YES</code> or <code>Y</code>, the <code>hdfs_dir</code> field must be present; when it is <code>N</code> or <code>NO</code>, <code>nas_path</code> must be present.</li>
<li>Either <code>nas_path</code> or <code>hdfs_dir</code> must be there, and it can't be blank.</li>
<li><code>overWrite</code> must be <code>N</code> or <code>Y</code>; if not passed it should be <code>N</code></li>
</ol>
<p>Based on this I developed the follwong models</p>
<pre class="lang-py prettyprint-override"><code>from pydantic import BaseModel, validator
class FileModel(BaseModel):
overWrite: str = 'N'
nas_path: str = None
hdfs_dir: str = None
convert: str
file_size_limit: int
@validator('file_size_limit', pre=True)
def validate_file_size_limit(cls, value):
if not isinstance(value, str):
raise ValueError('file_size_limit must be an integer')
return value
@validator('convert')
def validate_convert(cls, value, values):
if value.upper() not in ['Y', 'YES', 'N', 'NO']:
raise ValueError('convert must be Y, YES, N, or NO')
if values.get('hdfs_dir') and value.upper() in ['N', 'NO']:
raise ValueError('convert must be Y or YES when hdfs_dir is present')
if values.get('nas_path') and value.upper() in ['Y', 'YES']:
raise ValueError('convert must be N or NO when nas_path is present')
if value.upper() in ['Y', 'YES'] and not values.get('hdfs_dir'):
raise ValueError('hdfs_dir is required when convert is Y or YES')
if value.upper() in ['N', 'NO'] and not values.get('nas_path'):
raise ValueError('nas_path is required when convert is N or NO')
return value.upper() in ['Y', 'YES']
@validator('nas_path', 'hdfs_dir', pre=True, always= True)
def validate_paths(cls, value, values):
if not values.get('nas_path') and not values.get('hdfs_dir'):
raise ValueError('nas_path or hdfs_dir must be provided')
return value
val = FileModel(**{
"convert": 'N',
"overWrite": "Y",
"nas_path": "sdds",
"file_size_limit": "2000",
})
print (val, dir(val))
</code></pre>
<p>But when I call the model with above data it gives me an error:</p>
<pre><code>nas_path
nas_path or hdfs_dir must be provided (type=value_error)
hdfs_dir
nas_path or hdfs_dir must be provided (type=value_error)
convert
nas_path is required when convert is N or NO (type=value_error)
</code></pre>
<p>Ideally it should accept this value as it has <code>convert</code> <code>N</code> and <code>nas_path</code>, both fields are not blank and <code>file_size_limit</code>.</p>
|
<python><pydantic>
|
2023-03-18 11:39:22
| 1
| 2,596
|
burning
|
75,775,587
| 755,371
|
Python poetry Failed to clone verify ref exists on remote
|
<p>I am using poetry 1.4.0 on Ubuntu 22.04, and trying to add a specific git branch to my project:</p>
<pre><code>poetry add git+ssh://git@dev/home/git/projects/jaydebeapi#nanosecond_fix
Failed to clone ssh://git@dev/home/git/projects/jaydebeapi at 'nanosecond_fix', verify ref exists on remote.
</code></pre>
<p>This is strange because a manual git clone works:</p>
<pre><code>git clone -b nanosecond_fix ssh://git@dev/home/git/projects/jaydebeapi
Cloning into 'jaydebeapi'...
remote: Counting objects: 1592, done.
remote: Compressing objects: 100% (572/572), done.
remote: Total 1592 (delta 879), reused 1592 (delta 879)
Receiving objects: 100% (1592/1592), 397.30 KiB | 4.14 MiB/s, done.
Resolving deltas: 100% (879/879), done.
</code></pre>
<p>Any ideas?</p>
<p>NOTE : GIT version is 2.34.1 on both server and client</p>
|
<python><git><python-poetry>
|
2023-03-18 11:31:44
| 2
| 5,139
|
Eric
|
75,775,559
| 7,168,244
|
Include notes in seaborn barplot
|
<p>I have a barplot that I would like to include a note at the bottom.
The current code is shown below</p>
<pre><code>import pandas as pd
import numpy as np
import seaborn as sns
import matplotlib.pyplot as plt
from matplotlib.ticker import PercentFormatter
data = {
'id': [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6, 7, 7, 8, 8, 9, 9, 10, 10, 11, 11, 12, 12, 13, 13, 14, 14, 15, 15, 16, 16, 17, 17, 18, 18, 19, 19, 20, 20, 21, 21, 22, 22],
'survey': ['baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline', 'baseline', 'endline'],
'level': ['low', 'high', 'medium', 'low', 'high', 'medium', 'medium', 'high', 'low', 'low', 'medium', 'high', 'low', 'medium', 'low', 'high', 'low', 'low', 'medium', 'high', 'high', 'high', 'high', 'medium', 'low', 'low', 'medium', 'high', 'low', 'medium', 'high', 'medium', 'low', 'high', 'high', 'medium', 'medium', 'low', 'high', 'low', 'low', 'low', 'low', 'low']
}
df = pd.DataFrame(data)
df_N = df.groupby(['level']).count().sort_index(ascending = True).reset_index()
df_N['%'] = 100 * df_N['id'] / df_N['id'].sum()
sns.set_style('white')
ax = sns.barplot(data=df_N, x='level', y='%', ci=None,
palette="rainbow")
N = df_N['id'].to_numpy()
N_it = '$\it{N}$'
labels=[f'{np.round(perc,1)}% ({N_it} = {n})'
for perc, n in zip(ax.containers[0].datavalues, N)]
ax.bar_label(ax.containers[0], labels = labels, fontsize = 10)
sns.despine(ax=ax, left=True)
ax.grid(True, axis='y')
ax.yaxis.set_major_formatter(PercentFormatter(100))
ax.set_xlabel('')
ax.set_ylabel('')
ax.margins(y=0.15) # optionally some more free space at the top
plt.legend(bbox_to_anchor=(1.02, 1), loc='upper left', borderaxespad=0)
plt.tight_layout()
plt.show()
</code></pre>
<p>I would like to include a note based on disaggregation below</p>
<pre><code>df_N = df.groupby(['survey', 'level']).count().sort_index(ascending = True).reset_index()
df_N
</code></pre>
<p>Specifically:</p>
<p>Note: Baseline: high - 6, low - 10, medium - 6
Endline: high - 8, low - 8, medium - 6</p>
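<p>One way to place such a note under the axes (a sketch — the coordinates, font size, and the plain <code>ax.bar</code> stand-in for the seaborn barplot are assumptions) is to reserve space at the bottom of the figure and draw the text in figure coordinates with <code>fig.text</code>:</p>
<pre><code>import matplotlib
matplotlib.use("Agg")  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.bar(["low", "medium", "high"], [40, 27, 33])  # stand-in for the sns barplot

note = ("Note: Baseline: high - 6, low - 10, medium - 6\n"
        "      Endline:  high - 8, low - 8,  medium - 6")
fig.subplots_adjust(bottom=0.25)          # reserve room below the axes
fig.text(0.01, 0.02, note, ha="left", va="bottom", fontsize=9)
fig.savefig("barplot_with_note.png")
</code></pre>
<p>The note string itself can be built from <code>df.groupby(['survey', 'level']).count()</code> rather than hard-coded, as in the disaggregation shown above.</p>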
|
<python><pandas><matplotlib><seaborn><plot-annotations>
|
2023-03-18 11:27:18
| 2
| 481
|
Stephen Okiya
|
75,775,511
| 10,620,788
|
Get the stardard error from linear regression model in pySpark
|
<p>I have the following code to perform a linear regression on pySpark:</p>
<pre><code>lr = LinearRegression(featuresCol = 'features', labelCol='units_sold', maxIter=10, regParam=0.3, elasticNetParam=0.8)
lr_model = lr.fit(premodel_df)
summary =lr_model.summary
</code></pre>
<p>I am trying to get the standard error from the coefficients, which is supposed to be done with:</p>
<pre><code>summary.coefficientStandardErrors
</code></pre>
<p>But I obtain the folloging error:</p>
<blockquote>
<p>An error occurred while calling o704.coefficientStandardErrors. :
java.lang.UnsupportedOperationException: No Std. Error of coefficients
available for this LinearRegressionModel</p>
</blockquote>
<p>Is there another way to obtain the standard error?</p>
|
<python><apache-spark><pyspark><linear-regression>
|
2023-03-18 11:17:49
| 1
| 363
|
mblume
|
75,775,413
| 2,261,553
|
Slow performance of Python multiprocessing when run under SLURM
|
<p>I am running a Python code that employs <code>multiprocessing</code> for parallel tasks. When I run this on my local machine, everything works as expected, but when I use a cluster of 2x AMD with 64 cores per node, everything slows down significantly. I am using SLURM for the batch execution, <strong>and I want to run the parallel version on a single node</strong>, just employing the total 64x2 cores for that single node.</p>
<p>When no parallelization occurs (serial execution), calling the script is done via:</p>
<pre><code>#!/bin/bash
#SBATCH --mem-per-cpu=2G
#SBATCH --cpus_per_task=64
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --nstaks-per-node=1
module load python
python 0 myscript.py
</code></pre>
<p>The 0 input indicates that the script should run serialized, i.e. there is no call to the multiprocessing module internally. Let's say this takes about ~ 5 minutes.</p>
<p>Now, when I call for a parallel calculation, the sh file is changed to:</p>
<pre><code>#!/bin/bash
#SBATCH --mem-per-cpu=2G
#SBATCH --cpus_per_task=64
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --nstaks-per-node=1
module load python
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export OMP_NUM_THREADS=1
python 1 64 myscript.py
</code></pre>
<p>The 1 input now indicates that the script should call the multiprocessing module for parallel calculation, whereas 64 indicates that a total of 64 processes should be created by the <code>multiprocessing.Pool()</code>. Here, each of the processes created by the multiprocessing module takes about ~30 minutes to complete, whereas on my local machine each process takes around 5 minutes, the same as with the serialized call.</p>
<p>I have also seen that when using <code>sacct</code> to monitor the job, SLURM says the number of CPUS allocated equals 128, and not 64 as specified in the <code>cpus_per_task</code>.</p>
<p>So I am failing to understand this time differences, and I feel this could be on how SLURM is called from the batch, but the resources specifications seem correct to me. It is also strange that in my machine, which has 4x2 cores, the parallel processing works as expected, with each process taking ~ 5 mins.</p>
<p><strong>EDIT</strong>: I have made some tests. I observe that when I keep the MKL_NUM_THREADS=2 and run for 16 processes to complete 32 workers in the parallelization, it takes shorter time than when I run with 32 processes to complete 32 workers in the parallelization. Could this be due to some inter-processing communication taking place? In my parallel function, there is no modified shared data between processes, and being a fork parallelization, I would expect each child process to be independent from the parent one.</p>
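<p>One thing worth checking (a sketch, not a confirmed diagnosis): the sbatch directives as written contain misspellings — <code>--cpus_per_task</code> uses underscores instead of hyphens and <code>--nstaks-per-node</code> is not a real option — and a directive SLURM does not recognize is silently ignored or rejected, which could explain the unexpected 128-CPU allocation (e.g. the whole 2x64-core node). A cleaned header, with the resource sizes carried over from the question, would look like:</p>
<pre><code>#!/bin/bash
#SBATCH --mem-per-cpu=2G
#SBATCH --cpus-per-task=64     # hyphens, not underscores
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --ntasks-per-node=1    # "nstaks" corrected
module load python
export MKL_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export OMP_NUM_THREADS=1
python myscript.py 1 64        # script name usually precedes its arguments
</code></pre>
<p>With 128 CPUs visible but only 64 requested threads, oversubscription and BLAS threading interactions are plausible contributors to the slowdown, which would also fit the MKL_NUM_THREADS observation in the edit.</p>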
|
<python><multiprocessing><slurm>
|
2023-03-18 10:58:20
| 0
| 411
|
Zarathustra
|
75,775,315
| 12,466,687
|
What is the polars equivalent of converting a numeric (Year) column to date in python?
|
<p>I am new to <code>polars</code> and have been trying to convert just the <code>Year</code> column to a <code>date</code> column, but have failed to do so.</p>
<p><strong>Data</strong></p>
<pre><code>import polars as pl
import pandas as pd
import datetime as dt
import plotly.express as px
# creating pandas df
df_gapminder = px.data.gapminder()
# creating polars df
pl_gapminder = pl.DataFrame(df_gapminder)
</code></pre>
<p><strong>Converting to date column</strong>:</p>
<p><code>In pandas</code></p>
<pre><code>df_gapminder['date'] = pd.to_datetime(df_gapminder.year, format="%Y")
df_gapminder.date.head()
############## output ###########
# 0 1952-01-01
# 1 1957-01-01
# 2 1962-01-01
# 3 1967-01-01
# 4 1972-01-01
# Name: date, dtype: datetime64[ns]
</code></pre>
<p><code>In polars</code> - getting error below</p>
<pre><code>pl_gapminder.with_columns(pl.col('year').str.to_date(format='%Y').alias('date'))
</code></pre>
<p><strong>Error</strong> SchemaError: invalid series dtype: expected <code>String</code>, got <code>i64</code></p>
<p><code>In Polars</code> - Getting Wrong Results below</p>
<pre><code>pl_gapminder.with_columns(pl.col('year').cast(pl.Date).alias('date')).head()
</code></pre>
<p>Wrong Output (Instead of increasing <code>Years</code> the <code>days</code> are increasing in <code>date</code> column):</p>
<pre><code>country continent year lifeExp pop gdpPercap iso_alpha iso_num date
str str i64 f64 i64 f64 str i64 date
"Afghanistan" "Asia" 1952 28.801 8425333 779.445314 "AFG" 4 1975-05-07
"Afghanistan" "Asia" 1957 30.332 9240934 820.85303 "AFG" 4 1975-05-12
"Afghanistan" "Asia" 1962 31.997 10267083 853.10071 "AFG" 4 1975-05-17
"Afghanistan" "Asia" 1967 34.02 11537966 836.197138 "AFG" 4 1975-05-22
"Afghanistan" "Asia" 1972 36.088 13079460 739.981106 "AFG" 4 1975-05-27
</code></pre>
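<p>The "wrong" results happen because <code>cast(pl.Date)</code> reinterprets the integer as days since the epoch (1952 days after 1970-01-01 is 1975-05-07). One approach — a sketch, and API details can vary across polars versions — is to build the date from components with <code>pl.date</code>:</p>
<pre><code>import polars as pl

df = pl.DataFrame({"year": [1952, 1957, 1962]})

# Construct January 1st of each year instead of casting the raw integer.
out = df.with_columns(pl.date(pl.col("year"), 1, 1).alias("date"))
print(out["date"].to_list())
# [datetime.date(1952, 1, 1), datetime.date(1957, 1, 1), datetime.date(1962, 1, 1)]
</code></pre>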
|
<python><date><datetime><python-polars>
|
2023-03-18 10:36:57
| 1
| 2,357
|
ViSa
|
75,775,189
| 401,041
|
How to access RDP Active X AdvancedSettings2 using PytQT5 QAxWidget?
|
<p>I am trying to implement a remote desktop client using a PyQt5 QAxWidget to expose an MsRdpClient10NotSafeForScripting ActiveX object.</p>
<p>The connection works, but I can't figure out how to access the AdvancedSettings2 interface.</p>
<pre class="lang-py prettyprint-override"><code>import sys
from PyQt5.QtWidgets import QApplication
from PyQt5.QAxContainer import QAxWidget
def window():
app = QApplication(sys.argv)
RDP_Widget = QAxWidget()
# https://github.com/MicrosoftDocs/win32/blob/docs/desktop-src/TermServ/msrdpclient10notsafeforscripting.md
RDP_Widget.setControl('{A0C63C30-F08D-4AB4-907C-34905D770C7D}')
RDP_Widget.setProperty('Server', '192.168.33.10')
#print(RDP_Widget.AdvancedSettings2()) <- AttributeError: 'QAxWidget' object has no attribute 'AdvancedSettings2'
RDP_Widget.setGeometry(50, 50, 800, 600)
RDP_Widget.show()
RDP_Widget.Connect()
sys.exit(app.exec_())
if __name__ == "__main__":
window()
</code></pre>
|
<python><pyqt5><activex><remote-desktop>
|
2023-03-18 10:13:05
| 1
| 5,647
|
João Pinto
|
75,775,141
| 12,466,687
|
How to mention custom daterange using polars Dataframe in python plotly plots?
|
<p>I am new to python <code>polars</code> and trying to create a <code>line plot</code> using <code>plotly express</code> with <code>custom start date</code> & <code>max date</code> as the x-axis date range of the plot.</p>
<p><strong>Doubt</strong>: Do I need to pass the whole <code>polars dataframe</code> again in <code>fig_pop.update_layout(xaxis_range = )</code> to set the date range, repeating the entire <code>df</code> processing, or is there an alternative, since I already passed the <code>dataframe</code> to the <code>figure object</code>?</p>
<p><strong>My code attempt</strong>:</p>
<p>Data processing</p>
<pre class="lang-py prettyprint-override"><code>import polars as pl
import pandas as pd
import plotly.express as px
import datetime as dt
# creating pandas df
df_gapminder = px.data.gapminder()
df_gapminder['date'] = pd.to_datetime(df_gapminder.year, format="%Y")
# polars dataframe
pl_gapminder = pl.DataFrame(df_gapminder.query('country == "India"'))
pl_gapminder = pl_gapminder.with_columns(pl.col('date').cast(pl.Date))
</code></pre>
<p><strong>Doubt</strong> in <code>fig_pop.update_layout(xaxis_range = )</code> below section of code</p>
<pre class="lang-py prettyprint-override"><code>################# Plotting Below #################
fig_pop = px.line(pl_gapminder.select(pl.col(['country','date','gdpPercap'])
).to_pandas(),
x='date',y='gdpPercap',title="India's GDPPerCap Growth")
fig_pop.update_xaxes(rangeslider_visible = True)
# DOUBT in below line: do we need to pass whole dataset again or there is a better way?
fig_pop.update_layout(xaxis_range = ['1977-01-01',pl_gapminder.select(pl.col('date')).max().item()],
showlegend = False,
title = {'x':0.5},
plot_bgcolor = "rgba(0,0,0,0)",
xaxis = (dict(showgrid = False)),
yaxis = (dict(showgrid = False)),)
fig_pop.show()
</code></pre>
|
<python><plotly><python-polars>
|
2023-03-18 10:04:49
| 1
| 2,357
|
ViSa
|
75,774,873
| 13,266,105
|
OpenAI API error: "This is a chat model and not supported in the v1/completions endpoint"
|
<pre><code>import discord
import openai
import os

openai.api_key = os.environ.get("OPENAI_API_KEY")

# Specify the intents
intents = discord.Intents.default()
intents.members = True

# Create the client
client = discord.Client(intents=intents)

async def generate_response(message):
    prompt = f"{message.author.name}: {message.content}\nAI:"
    response = openai.Completion.create(
        engine="gpt-3.5-turbo",
        prompt=prompt,
        max_tokens=1024,
        n=1,
        stop=None,
        temperature=0.5,
    )
    return response.choices[0].text.strip()

@client.event
async def on_ready():
    print(f"We have logged in as {client.user}")

@client.event
async def on_message(message):
    if message.author == client.user:
        return
    response = await generate_response(message)
    await message.channel.send(response)

discord_token = 'DiscordToken'
client.run(discord_token)  # run() drives the event loop; start() is a coroutine
</code></pre>
</code></pre>
<p>I tried different ways to access the API key, including adding it to the environment variables.</p>
<p>What else can I try, or where am I going wrong? I'm pretty new to programming.
Error message:</p>
<blockquote>
<p>openai.error.AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = ', or you can set the environment variable OPENAI_API_KEY=). If your API key is stored in a file, you can point the openai module at it with 'openai.api_key_path = '. You can generate API keys in the OpenAI web interface. See https://onboard.openai.com for details, or email support@openai.com if you have any questions.</p>
</blockquote>
<hr />
<p><strong>EDIT</strong></p>
<p>I solved "No API key provided" error. Now I get the following error message:</p>
<blockquote>
<p>openai.error.InvalidRequestError: This is a chat model and not
supported in the v1/completions endpoint. Did you mean to use
v1/chat/completions?</p>
</blockquote>
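<p>For reference, the chat endpoint takes a <code>messages</code> list instead of a <code>prompt</code> string, and the reply moves from <code>choices[0].text</code> to <code>choices[0].message.content</code>. A hedged sketch of the request shape (the helper name is mine; with the openai library you would pass these same fields to <code>openai.ChatCompletion.create</code>):</p>

```python
def build_chat_payload(model, author_name, user_text):
    """Shape a request for v1/chat/completions: a list of role-tagged
    messages replaces the single prompt string of v1/completions."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful Discord bot."},
            {"role": "user", "content": f"{author_name}: {user_text}"},
        ],
        "max_tokens": 1024,
        "temperature": 0.5,
    }

payload = build_chat_payload("gpt-3.5-turbo", "RAFA", "hello")
print(sorted(payload))
```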
|
<python><discord><openai-api><chatgpt-api>
|
2023-03-18 09:11:27
| 9
| 527
|
RAFA 04128
|
75,774,838
| 10,284,437
|
How to implement the callback URL for 'Upwork' REST API?
|
<p>To be able to run this code (search jobs from upwork.com):</p>
<pre><code>#!/usr/bin/env python
import upwork
from upwork.routers.jobs import search

# https://developers.upwork.com/?lang=python#authentication_authorization-request
# https://www.upwork.com/developer/keys/apply
config = upwork.Config(
    {
        "client_id": <my_client_id>,
        "client_secret": <my_client_secret>,
        "redirect_uri": <my_redirect_uri>
    }
)

client = upwork.Client(config)

try:
    config.token
except AttributeError:
    authorization_url, state = client.get_authorization_url()
    # cover "state" flow if needed
    authz_code = input(
        "Please enter the full callback URL you get "
        "following this link:\n{0}\n\n> ".format(authorization_url)
    )
    print("Retrieving access and refresh tokens.... ")
    token = client.get_access_token(authz_code)

params = {'q': 'php', 'title': 'Web developer'}
search.Api(client).find(params)
</code></pre>
<p>I need to define a <em>callback URL</em>.</p>
<p>How do I implement it?</p>
<p>Anyone have a <a href="/questions/tagged/php" class="post-tag" title="show questions tagged 'php'" aria-label="show questions tagged 'php'" rel="tag" aria-labelledby="tag-php-tooltip-container">php</a> or <a href="/questions/tagged/python" class="post-tag" title="show questions tagged 'python'" aria-label="show questions tagged 'python'" rel="tag" aria-labelledby="tag-python-tooltip-container">python</a> script to do it?</p>
<p>How do you fetch (client side) the values the API returns to your callback (server side)? <s>Or do you need a web service on your localhost?</s> I can't host a web service on my localhost, because <strong>I don't control the network, so I can't forward a port to my private IP</strong>.</p>
<p><strong>It looks like it's easier to scrape the site than to use the API :/</strong> What do you think?</p>
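<p>For what it's worth, a throwaway redirect handler does not need a framework; a hedged stdlib-only sketch (the port and <code>/callback</code> path are my choices, and the registered <code>redirect_uri</code> would have to match them):</p>

```python
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

captured = {}  # filled in by the one redirect request we expect

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Upwork redirects to e.g. /callback?code=...&state=...
        query = parse_qs(urlparse(self.path).query)
        captured.update({k: v[0] for k, v in query.items()})
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"You can close this tab now.")

    def log_message(self, *args):
        pass  # keep the console quiet

def wait_for_code(port=8080):
    """Serve exactly one request on localhost, then return the code."""
    server = HTTPServer(("127.0.0.1", port), CallbackHandler)
    server.handle_request()
    server.server_close()
    return captured.get("code")
```

<p>If you truly cannot receive the redirect (no port forwarding), the <code>input()</code> flow already in your code is the intended fallback: the redirect target does not have to resolve for you to copy the full callback URL out of the browser's address bar and paste it back.</p>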
|
<python><php><rest><upwork-api>
|
2023-03-18 09:05:11
| 1
| 731
|
Mévatlavé Kraspek
|
75,774,747
| 1,497,720
|
Avro Json for Kafka Provider
|
<p>For the following code, I get an error at the line
<code>value_serializer=lambda m: io.DatumWriter(avro_schema).write(m).bytes()</code>
saying
<code>TypeError: write() missing 1 required positional argument: 'encoder'</code>.
What is the correct syntax for an Avro write?</p>
<pre><code>import os
import json
import time
import random

from kafka import KafkaProducer
from dotenv import load_dotenv
from avro import schema, io
from avro.datafile import DataFileWriter

load_dotenv()

BOOTSTRAP_SERVERS = os.getenv("KAFKA_BOOTSTRAP_SERVERS").split(",")
SSL_TRUSTSTORE = os.getenv("KAFKA_SSL_TRUSTSTORE")
TOPIC_NAME = os.getenv("KAFKA_TOPIC")
LINGER_DURATION = int(os.getenv("LINGER_DURATION"))
MESSAGE_BATCH = int(os.getenv("MESSAGE_BATCH"))

def random_with_N_digits(n):
    range_start = 10 ** (n - 1)
    range_end = (10 ** n) - 1
    return random.randint(range_start, range_end)

def produce(message, avro_schema):
    print(f'Sending {message}')
    producer.send(TOPIC_NAME, value=message)

producer = KafkaProducer(
    security_protocol="SSL",
    ssl_cafile=SSL_TRUSTSTORE,
    ssl_check_hostname=False,
    bootstrap_servers=BOOTSTRAP_SERVERS,
    linger_ms=LINGER_DURATION,
    value_serializer=lambda m: io.DatumWriter(avro_schema).write(m).bytes()
)

# Define the Avro schema for the message
schema_str = """
{
    "namespace": "example.avro",
    "type": "record",
    "name": "Message",
    "fields": [
        {"name": "id", "type": "int"},
        {"name": "name", "type": "string"}
    ]
}
"""
avro_schema = schema.Parse(schema_str)

def startStream():
    msg_count = 0
    try:
        while (1):
            # Create a message using the Avro schema
            message = {"id": random.randint(0, 9999999), "name": str(random_with_N_digits(12))}
            # Publish the message to a Kafka topic
            produce(message, avro_schema)
            msg_count += 1
            if msg_count % MESSAGE_BATCH == 0:
                time.sleep(1)
    except KeyboardInterrupt:
        print('interrupted!')
        print(f"Published {msg_count} messages")

startStream()
</code></pre>
|
<python><apache-kafka><avro>
|
2023-03-18 08:46:56
| 1
| 18,765
|
william007
|
75,774,699
| 10,266,998
|
Problem in skipping a line in a text file
|
<p>I have code that takes backups of my Cisco switches. It reads a text file containing the switches' IPs line by line, connects to each IP, and takes a backup of that switch. Here is the code:</p>
<pre><code>import sys
import time
import paramiko
import os
import cmd
import datetime

now = datetime.datetime.now()

user = input("Enter username:")
password = input("Enter Paswd:")
enable_password = input("Enter enable pswd:")
port = 22

f0 = open('myswitches.txt')

for ip in f0.readlines():
    ip = ip.strip()
    filename_prefix = '/Users/apple/Documents' + ip
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(ip, port, user, password, look_for_keys=False)
    chan = ssh.invoke_shell()
    time.sleep(2)
    chan.send('enable\n')
    chan.send(enable_password + '\n')
    time.sleep(1)
    chan.send('term len 0\n')
    time.sleep(1)
    chan.send('sh run\n')
    time.sleep(20)
    output = chan.recv(999999)
    filename = "%s_%.2i%.2i%i_%.2i%.2i%.2i" % (ip, now.year, now.month, now.day, now.hour, now.minute, now.second)
    f1 = open(filename, 'a')
    f1.write(output.decode("utf-8"))
    f1.close()
    ssh.close()

f0.close()
</code></pre>
<p>The problem is that if the credentials of a switch are different, I receive this error and the program terminates. Here is the error:</p>
<pre><code>paramiko.ssh_exception.AuthenticationException: Authentication failed.
</code></pre>
<p>How can I change this code so that when a switch's credentials are different, it skips that line of the text file and tries the next switch?</p>
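<p>The usual pattern is to wrap the connect call in try/except and <code>continue</code> to the next IP. A generic, hedged sketch of the control flow (in the real script you would catch <code>paramiko.ssh_exception.AuthenticationException</code> around <code>ssh.connect(...)</code>; the function names here are mine):</p>

```python
def backup_all(ips, connect, backup):
    """Try each switch in turn; log and skip any whose login fails
    instead of letting one bad credential kill the whole run."""
    done, skipped = [], []
    for ip in ips:
        try:
            conn = connect(ip)            # stands in for ssh.connect(...)
        except Exception as exc:          # AuthenticationException in practice
            print(f"{ip}: authentication failed, skipping ({exc})")
            skipped.append(ip)
            continue
        backup(conn)                      # the invoke_shell / 'sh run' / write steps
        done.append(ip)
    return done, skipped
```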
|
<python>
|
2023-03-18 08:36:11
| 1
| 535
|
Pablo
|
75,774,683
| 13,670,175
|
"errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'", Python Selenium on AWS Lambda
|
<p>I am trying to run selenium python on AWS Lambda. But when I try to execute it from the console, I get the following error.</p>
<pre><code>{
"errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'",
"errorType": "OSError",
"requestId": "820cf57b-8fd7-4ee4-82b9-6f93178ec148",
"stackTrace": [
" File \"/var/task/handler.py\", line 34, in crawler\n browser = uc.Chrome(options=options, data_dir='/tmp')\n",
" File \"/opt/python/lib/python3.9/site-packages/undetected_chromedriver/__init__.py\", line 237, in __init__\n patcher = Patcher(\n",
" File \"/opt/python/lib/python3.9/site-packages/undetected_chromedriver/patcher.py\", line 66, in __init__\n os.makedirs(self.data_path, exist_ok=True)\n",
" File \"/var/lang/lib/python3.9/os.py\", line 215, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
" File \"/var/lang/lib/python3.9/os.py\", line 215, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
" File \"/var/lang/lib/python3.9/os.py\", line 215, in makedirs\n makedirs(head, exist_ok=exist_ok)\n",
" File \"/var/lang/lib/python3.9/os.py\", line 225, in makedirs\n mkdir(name, mode)\n"
]
}
</code></pre>
<p>I looked up the solution as suggested by Mark B.</p>
<p><a href="https://stackoverflow.com/questions/73394593/aws-lambda-function-returns-errormessage-errno-30-read-only-file-system">AWS Lambda function returns "errorMessage": "[Errno 30] Read-only file system: '/home/sbx_user1051'"</a></p>
<p>Here are the runtime versions I am using</p>
<pre><code>Python: v3.9
Selenium: v4.6.1
undetected-chrome-driver: v3.1.7
</code></pre>
<p>I changed the code in my script file to insert it into the temp folder</p>
<pre><code>options = webdriver.ChromeOptions()
options.headless = True
browser = uc.Chrome(options=options, data_dir='/tmp')
</code></pre>
<p>But I see the same error on the AWS console logs. I was hoping you guys could help me figure out the solution for this. Thanks in advance!</p>
<p>Here is the selenium script that I am trying to run on AWS Lambda.</p>
<pre><code>import json
import undetected_chromedriver as uc
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.common.keys import Keys
from selenium.common.exceptions import NoSuchElementException, TimeoutException
import time
import os

def responseGenerator(body, message, statusCode):
    response = {}
    response['statusCode'] = statusCode
    response['body'] = json.dumps({
        'message': message,
        'body': body
    })
    return response

def crawler(event, context):
    options = webdriver.ChromeOptions()
    options.headless = False
    browser = uc.Chrome(options=options, data_dir='/tmp')
    # ... enterprise application logic that I can't really put here!!
    return responseGenerator("End of crawler execution", None, 205)
</code></pre>
<p>It runs perfectly on my local machine, but fails there!</p>
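<p>For reference, Lambda only allows writes under <code>/tmp</code>, and the patcher in the stack trace derives its data path from the home directory rather than from the argument passed in. One hedged workaround is to point the home-related environment variables at <code>/tmp</code> before the import; note also that, as far as I can tell, the constructor's parameter is <code>user_data_dir</code>, not <code>data_dir</code>:</p>

```python
import os
import posixpath

# Redirect anything derived from the home directory to /tmp BEFORE
# importing undetected_chromedriver, whose patcher calls os.makedirs
# on a path expanded from the user's home.
os.environ["HOME"] = "/tmp"
os.environ["XDG_CONFIG_HOME"] = "/tmp/.config"

# import undetected_chromedriver as uc            # import after the env vars
# browser = uc.Chrome(options=options, user_data_dir="/tmp/uc-profile")

print(posixpath.expanduser("~"))
```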
|
<python><amazon-web-services><selenium-webdriver><aws-lambda>
|
2023-03-18 08:32:37
| 0
| 335
|
Umakanth Pendyala
|
75,776,067
| 9,481,613
|
In Python RSA broadcast attack, why am I using bit length in this binary search for the cube root?
|
<p><a href="https://stackoverflow.com/a/23622115/9481613">https://stackoverflow.com/a/23622115/9481613</a></p>
<p>Shows this function:</p>
<pre><code>def find_cube_root(n):
    lo = 0
    hi = 1 << ((n.bit_length() + 2) // 3)
    while lo < hi:
        mid = (lo + hi) // 2
        if mid**3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo
</code></pre>
<p>My input to the function is <code>m</code><sup>3</sup>, the message cubed, which is why I need this function.
The decimal length <code>len(str(n))</code> turns out to be 454, vs. 1507 for the bit length.</p>
<p>I kept trying to cap <code>high</code> at <code>num // 2</code> or <code>num // 3</code>, which did not work, but this works. Why does bit length work here?</p>
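<p>Bit length works because it yields a guaranteed upper bound: a number with <code>b</code> bits satisfies <code>n &lt; 2**b</code>, so its cube root is below <code>2**(b/3)</code>, and <code>(b + 2) // 3</code> rounds <code>b/3</code> up, keeping <code>hi</code> only a little above the root instead of astronomically larger like <code>n // 2</code> would be. A quick check of both the bound and the search:</p>

```python
def find_cube_root(n):
    lo = 0
    hi = 1 << ((n.bit_length() + 2) // 3)   # 2**ceil(bits/3) is >= the cube root
    while lo < hi:
        mid = (lo + hi) // 2
        if mid ** 3 < n:
            lo = mid + 1
        else:
            hi = mid
    return lo

m = 123456789 ** 5                   # a large stand-in for the message
assert find_cube_root(m ** 3) == m   # recovers m exactly from m**3
hi = 1 << (((m ** 3).bit_length() + 2) // 3)
assert hi ** 3 >= m ** 3             # the starting hi really brackets the root
print("ok")
```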
|
<python><rsa><bisection>
|
2023-03-18 08:31:00
| 1
| 1,171
|
mLstudent33
|
75,774,357
| 1,033,591
|
Can I pass variables to the compare function in sorted method in Python?
|
<p>Here is the code:</p>
<pre><code>if(order=="dsc"):
    return sorted(qs, key=compare_function, reverse=True)
elif (order=="asc"): # asc
    # return sorted(qs, key=lambda obj: obj.teacher.name)
    return sorted(qs, key=compare_function)
else:
    pass

def compare_function(obj):
    if obj == None:
        return "aaaaaaa"
    if obj.teacher == None:
        return "aaaaaaa"
    else:
        test = {"teacher": obj.teacher}
        return test.get("teacher").name.lower()
</code></pre>
<p>What I expect is to pass some additional params to compare_function like below:</p>
<pre><code>if(order=="dsc"):
    return sorted(qs, key=compare_function(cls_name), reverse=True)
elif (order=="asc"): # asc
    # return sorted(qs, key=lambda obj: obj.teacher.name)
    return sorted(qs, key=compare_function(cls_name))
else:
    pass

def compare_function(obj, cls_name):
    if obj == None:
        return "aaaaaaa"
    if obj.teacher == None:
        return "aaaaaaa"
    else:
        test = {"teacher": obj.teacher}
        return test.get("teacher").name.lower()
</code></pre>
<p>But as I did this, no param was passed to compare_function. So the "obj" in compare_func will be None.</p>
<p>Then I tried to changed to</p>
<pre><code>return sorted(qs, key=compare_function(obj, cls_name), reverse=True)
</code></pre>
<p>However, the interpretor complained the obj can't be found this time.</p>
<p>How can I do?</p>
<p>Thanks.:)</p>
<p>p.s. I'm seeking for a general solution. :')</p>
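<p>For reference, <code>key</code> must be a callable of one argument; <code>compare_function(cls_name)</code> calls the function immediately and hands <code>sorted</code> its return value. The general fix is to freeze the extra argument with a <code>lambda</code> or <code>functools.partial</code>. A hedged sketch with stand-in data (dicts instead of model objects):</p>

```python
from functools import partial

def sort_key(obj, cls_name):
    """Key function that also receives extra context (cls_name)."""
    if obj is None:
        return "aaaaaaa"
    return obj.get("teacher", "aaaaaaa").lower()

qs = [{"teacher": "Zoe"}, {"teacher": "Alan"}, None]
cls_name = "math"

# both forms produce a one-argument callable that closes over cls_name
by_lambda = sorted(qs, key=lambda obj: sort_key(obj, cls_name))
by_partial = sorted(qs, key=partial(sort_key, cls_name=cls_name))
assert by_lambda == by_partial
print(by_lambda)
```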
|
<python>
|
2023-03-18 07:08:32
| 1
| 2,147
|
Alston
|
75,774,350
| 6,251,742
|
Special case when += for string concatenation is more efficient than =
|
<p>I have this code using python 3.11:</p>
<pre class="lang-py prettyprint-override"><code>import timeit
code_1 = """
initial_string = ''
for i in range(10000):
    initial_string = initial_string + 'x' + 'y'
"""

code_2 = """
initial_string = ''
for i in range(10000):
    initial_string += 'x' + 'y'
"""
time_1 = timeit.timeit(code_1, number=100)
time_2 = timeit.timeit(code_2, number=100)
print(time_1)
# 0.5770808999950532
print(time_2)
# 0.08363639999879524
</code></pre>
<p>Why is <code>+=</code> more efficient <strong>in this case</strong>?
As far as I know, there is the same number of concatenations, and the order of execution shouldn't change the result.</p>
<p>Since strings are immutable, it's not because of in-place shenanigans, and the only thing I found about string concatenation is about <code>.join</code> efficiency, but I don't want the most efficient approach, I just want to understand why <code>+=</code> seems more efficient than <code>=</code>.</p>
<p>With this code, performances between forms almost equals:</p>
<pre class="lang-py prettyprint-override"><code>import timeit
code_1 = """
initial_string = ''
for i in range(10000):
    initial_string = initial_string + 'x'
"""

code_2 = """
initial_string = ''
for i in range(10000):
    initial_string += 'x'
"""
time_1 = timeit.timeit(code_1, number=100)
time_2 = timeit.timeit(code_2, number=100)
print(time_1)
# 0.07953230000566691
print(time_2)
# 0.08027460001176223
</code></pre>
<hr />
<p>I noticed a difference using different Python version (<code>'x' + 'y'</code> form):</p>
<p>Python 3.7 to 3.9:</p>
<pre class="lang-py prettyprint-override"><code>print(time_1)
# ~0.6
print(time_2)
# ~0.3
</code></pre>
<p>Python 3.10:</p>
<pre class="lang-py prettyprint-override"><code>print(time_1)
# ~1.7
print(time_2)
# ~0.8
</code></pre>
<p>Python 3.11 for comparison:</p>
<pre class="lang-py prettyprint-override"><code>print(time_1)
# ~0.6
print(time_2)
# ~0.1
</code></pre>
<hr />
<p>Similar but not answering the question: <a href="https://stackoverflow.com/questions/69079181/how-is-the-s-sc-string-concat-optimization-decided">How is the s=s+c string concat optimization decided?</a></p>
<blockquote>
<p>If s is a string, then s = s + 'c' might modify the string in place, while t = s + 'c' can't. But how does the operation s + 'c' know which scenario it's in?</p>
</blockquote>
<p>In a nutshell: the optimization occurs with <code>s = s + 'c'</code>, not <code>t = s + 'c'</code>, because Python needs to keep a reference to the first string and can't concatenate in place.</p>
<p>Here, we are always assigning to the original string, using simple or augmented assignment, so in-place concatenation should apply in both cases.</p>
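<p>Part of the gap is visible at compile time: in <code>initial_string += 'x' + 'y'</code> the subexpression <code>'x' + 'y'</code> contains only constants, so the compiler folds it into the single constant <code>'xy'</code> and the loop performs one concatenation per iteration. The left-to-right form <code>initial_string + 'x' + 'y'</code> can never fold, so it performs two, and its intermediate result likely also interacts differently with the in-place resize optimization. The folding is easy to verify:</p>

```python
plain = compile("s = s + 'x' + 'y'", "<test>", "exec")
augmented = compile("s += 'x' + 'y'", "<test>", "exec")

# the left-to-right form keeps two separate string constants...
assert "x" in plain.co_consts and "y" in plain.co_consts
assert "xy" not in plain.co_consts
# ...while 'x' + 'y' alone is folded into one at compile time
assert "xy" in augmented.co_consts
print(plain.co_consts, augmented.co_consts)
```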
|
<python><string><concatenation><string-concatenation><python-3.11>
|
2023-03-18 07:07:24
| 2
| 4,033
|
Dorian Turba
|
75,774,293
| 18,356,681
|
Please set buildPython to your Python executable path. error
|
<p>I am trying to write some Python code and run it from Android Studio, and I got this error. I don't know how to solve it.</p>
<p><strong>My main Activity</strong></p>
<pre><code>class MainActivity : ComponentActivity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContent {
            if (!Python.isStarted()) {
                Python.start(AndroidPlatform(this))
            }
            val python = Python.getInstance()
            val pyObject = python.getModule("myscript")
            Column(modifier = Modifier.fillMaxSize()) {
                var searchPromt by remember {
                    mutableStateOf("")
                }
                var result by remember {
                    mutableStateOf("")
                }
                TextField(value = searchPromt, onValueChange = {
                    searchPromt = it
                })
                Button(onClick = {
                    val obj = pyObject.callAttr("main", searchPromt)
                    result = obj.toString()
                }) {
                    Text(text = "Generate")
                }
                Text(text = result)
            }
        }
    }
}
</code></pre>
<p><strong>My build.gradle</strong></p>
<pre><code>android {
    namespace 'com.example.textdetection'
    compileSdk 33

    defaultConfig {
        applicationId "com.example.textdetection"
        minSdk 24
        targetSdk 33
        versionCode 1
        versionName "1.0"

        testInstrumentationRunner "androidx.test.runner.AndroidJUnitRunner"
        vectorDrawables {
            useSupportLibrary true
        }
        ndk {
            abiFilters "armeabi-v7a", "x86"
        }
        python {
            buildPython "D:/setup/python/Python 2"
        }
        python {
            pip {
                install "transformers"
            }
        }
        sourceSets {
            main {
                python.srcDir "src/main/python"
            }
        }
    }

    buildTypes {
        release {
            minifyEnabled false
            proguardFiles getDefaultProguardFile('proguard-android-optimize.txt'), 'proguard-rules.pro'
        }
    }
    compileOptions {
        sourceCompatibility JavaVersion.VERSION_1_8
        targetCompatibility JavaVersion.VERSION_1_8
    }
    kotlinOptions {
        jvmTarget = '1.8'
    }
    buildFeatures {
        compose true
    }
    composeOptions {
        kotlinCompilerExtensionVersion '1.2.0'
    }
    packagingOptions {
        resources {
            excludes += '/META-INF/{AL2.0,LGPL2.1}'
        }
    }
}
</code></pre>
<p><strong>My Python file</strong></p>
<pre><code>from transformers import pipeline

def main(prompt):
    # Some code
</code></pre>
<p><strong>My Logcat</strong></p>
<pre><code>FATAL EXCEPTION: main
Process: com.example.textdetection, PID: 8647
com.chaquo.python.PyException: ModuleNotFoundError: No module named 'transformers'
</code></pre>
<p><strong>Build error</strong></p>
<pre><code>A problem occurred starting process 'command 'D:/setup/python/Python''. Please set buildPython to your Python executable path. See https://chaquo.com/chaquopy/doc/current/android.html#buildpython.
</code></pre>
<p>I know it probably has to do with the file path and pip, but I don't know how to call the pip block, so please help me out.</p>
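<p>For what it's worth, Chaquopy expects <code>buildPython</code> to name the Python executable file itself (a Python 3 interpreter), not its folder, and <code>pip</code> belongs inside that same <code>python</code> block rather than a second one. A hedged sketch, with the exact path being a guess at your install layout:</p>

```groovy
defaultConfig {
    // a single python block: buildPython points at the executable itself
    python {
        buildPython "D:/setup/python/python.exe"
        pip {
            install "transformers"
        }
    }
}
```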
|
<python><android><android-studio><chaquopy>
|
2023-03-18 06:55:33
| 1
| 338
|
heet kanabar
|
75,774,025
| 3,009,657
|
How to generate custom table using python reportalab
|
<p>I have the below data frame, and I am trying to generate a report in table format from this data.</p>
<pre><code>import pandas as pd
data = {'MonthString': ['January', 'February', 'March'],
'sachin': [98.08, 99.27, 100.00],
'saurav': ['96.77', '99.85', '98.86']}
df = pd.DataFrame(data)
</code></pre>
<p>I want to generate a table in the below format using python's report lab library and save it as a pdf</p>
<pre><code>| Customer | %Uptime                     |
|----------|--------|--------|-----------|
|          | Jan    | Feb    | March     |
| Sachin   | 98.08% | 99.27% | 100.00%   |
| Saurav   | 96.77% | 99.85% | 98.86%    |
</code></pre>
<p>Below is the code I tried</p>
<pre><code>from reportlab.lib.pagesizes import letter
from reportlab.lib.units import inch
from reportlab.pdfgen import canvas
from reportlab.lib import colors
from reportlab.platypus import Table, TableStyle
import pandas as pd
# create the DataFrame
data = {'MonthString': ['January', 'February', 'March'],
'sachin': [98.08, 99.27, 100.00],
'saurav': ['96.77', '99.85', '98.86']}
df = pd.DataFrame(data)
df = df.rename(columns={'MonthString': 'Month'})
df = df.set_index('Month').T.reset_index().rename(columns={'index': 'Customer'})
# create the table
table_data = [list(df.columns)]
for i in range(len(df)):
    table_data.append([df.iloc[i][0], *df.iloc[i][1:]])
table = Table(table_data)
table.setStyle(TableStyle([('BACKGROUND', (0,0), (-1,0), colors.gray),
('TEXTCOLOR',(0,0),(-1,0),colors.whitesmoke),
('ALIGN', (0,0), (-1,-1), 'CENTER'),
('FONTNAME', (0,0), (-1,0), 'Helvetica-Bold'),
('FONTSIZE', (0,0), (-1,0), 14),
('BOTTOMPADDING', (0,0), (-1,0), 12),
('BACKGROUND',(0,1),(-1,-1),colors.beige),
('GRID',(0,0),(-1,-1),1,colors.black)]))
# create the PDF
pdf_file = 'table.pdf'
c = canvas.Canvas(pdf_file, pagesize=letter)
table.wrapOn(c, inch*7, inch*2)
table.drawOn(c, x=50, y=650)
c.save()
</code></pre>
<p>But I can't get the table format correct. Can anyone help?</p>
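<p>The merged header cells are what the layout above needs; reportlab expresses them with the <code>'SPAN'</code> style command over a two-row header. A hedged sketch of just the data and style lists (built as plain Python lists here so the shape is clear; pass them to <code>Table(table_data)</code> and <code>TableStyle(style)</code> exactly as in your code, and the month abbreviations are mine):</p>

```python
# Two header rows: "%Uptime" spans the three month columns in row 0,
# and "Customer" spans down both header rows in column 0.
table_data = [
    ["Customer", "%Uptime", "", ""],
    ["", "Jan", "Feb", "March"],
    ["Sachin", "98.08%", "99.27%", "100.00%"],
    ["Saurav", "96.77%", "99.85%", "98.86%"],
]

style = [
    ("SPAN", (1, 0), (3, 0)),  # merge %Uptime across the month columns
    ("SPAN", (0, 0), (0, 1)),  # merge Customer down the two header rows
    ("ALIGN", (0, 0), (-1, -1), "CENTER"),
    # plus your GRID/BACKGROUND commands with reportlab's colors objects
]

assert len({len(row) for row in table_data}) == 1  # every row the same width
print(table_data[0])
```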
|
<python><python-3.x><reportlab>
|
2023-03-18 05:27:32
| 2
| 987
|
Hound
|
75,773,966
| 17,973,259
|
Code refactoring in a game made with Python and pygame
|
<p>As I kept building my game, I noticed that I was putting all the methods in the main class, and it got too big and hard to follow. I started to refactor it into multiple modules/classes. For example, I created a new module called 'game_collisions' with a class named CollisionManager, into which I moved all the collision-related methods from the main class.
This is the class:</p>
<pre><code>class CollisionManager:
    """The Collision Manager class manages collisions between game entities
    like ships, aliens, bullets, and asteroids."""

    def __init__(self, game):
        self.game = game
        self.stats = game.stats
        self.settings = game.settings
        self.score_board = game.score_board
</code></pre>
<p>And one of the methods for example, is this one:</p>
<pre><code>def check_asteroids_collisions(self, thunderbird_hit, phoenix_hit):
    """Check for collisions between the ships and asteroids."""
    # Loop through each player, check that it's alive, then check for
    # collisions with asteroids and call the hit callback for whichever
    # player collided.
    for ship in self.game.ships:
        if ship.state['alive']:
            if collision := pygame.sprite.spritecollideany(
                ship, self.game.asteroids
            ):
                if ship is self.game.thunderbird_ship:
                    thunderbird_hit()
                else:
                    phoenix_hit()
                collision.kill()
</code></pre>
<p>In the main class I am instantiating the Manager class like this:</p>
<pre><code>self.collision_handler = CollisionManager(self)
</code></pre>
<p>And calling the method like this, passing the appropriate methods as arguments:</p>
<pre><code>self.collision_handler.check_asteroids_collisions(self._thunderbird_ship_hit,
self._phoenix_ship_hit)
</code></pre>
<p>I did this with most of the methods I moved.
And now for my question: is this good practice, defining the method on the CollisionManager like this and calling it with methods from the main class passed as arguments? Can this lead to anything bad? As for readability, it looks good enough to me. I would appreciate any advice.</p>
|
<python><class><oop><methods>
|
2023-03-18 05:01:44
| 1
| 878
|
Alex
|
75,773,844
| 132,438
|
read a .7z file in memory with Python, and process each line as a stream
|
<p>I'm working with a huge .7z file that I need to process line by line.</p>
<p>First I tried <code>py7zr</code>, but it only works by first decompressing the whole file into an object. This runs out of memory.</p>
<p>Then <code>libarchive</code> is able to read block by block, but there's no straightforward way of splitting these binary blocks into lines.</p>
<p>What can I do?</p>
<p>Related questions I researched first:</p>
<ul>
<li><a href="https://stackoverflow.com/questions/32797851/how-to-read-contents-of-7z-file-using-python">How to read contents of 7z file using python</a>: The answers only decompress the whole file.</li>
<li><a href="https://stackoverflow.com/questions/20104460/how-to-read-from-a-text-file-compressed-with-7z">How to read from a text file compressed with 7z?</a>: Seeks Python 2.7 answers.</li>
<li><a href="https://stackoverflow.com/questions/34135592/python-how-can-i-read-a-line-from-a-compressed-7z-file-in-python">Python: How can I read a line from a compressed 7z file in Python?</a>: Focuses on a single line, no accepted answer - only answer posted 7 years ago.</li>
</ul>
<p>I'm looking for ways to improve the temporary solution I built myself - posted as an answer here. Thanks!</p>
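<p>For reference, the buffering pattern that turns byte blocks into lines is small. A hedged stdlib sketch (it assumes newline-delimited UTF-8 text, and you would feed it whatever block iterator your libarchive entry exposes):</p>

```python
def iter_lines(blocks, encoding="utf-8"):
    """Yield decoded lines from an iterable of byte blocks, keeping only
    the unfinished tail of the latest block in memory between steps."""
    tail = b""
    for block in blocks:
        *whole, tail = (tail + block).split(b"\n")
        for line in whole:
            yield line.decode(encoding)
    if tail:                       # the file may not end with a newline
        yield tail.decode(encoding)

# e.g. with python-libarchive-c:
#     for line in iter_lines(entry.get_blocks()):
#         process(line)
```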
|
<python><string><7zip><in-memory><libarchive>
|
2023-03-18 04:18:30
| 1
| 59,753
|
Felipe Hoffa
|
75,773,791
| 10,284,437
|
Selenium: scroll down, then no more links than previous state
|
<p>I have this code, I know it scroll down very well in non headless mode.</p>
<pre><code>last = 0
full_items_list_temp = []

# scroll-down code. Number of links is 25
while True:
    html = self.driver.find_element(By.TAG_NAME, 'html')
    html.send_keys(Keys.END)
    full_items_list_temp = self.driver.find_elements(
        By.XPATH, '//a[contains(@href, "/item/")]')
    length = len(full_items_list_temp)
    if length == last:
        break
    else:
        last = length
    sleep(1)

# Fetch all links, should be > 100
full_items_list = self.driver.find_elements(
    By.XPATH, '//a[contains(@href, "/item/")]')
for item in full_items_list:
    try:
        link = item.get_attribute('href')
        print(link)
    except:
        pass
</code></pre>
<p>But the number of links is not updated. What am I doing wrong?</p>
|
<python><selenium-webdriver><selenium-chromedriver>
|
2023-03-18 04:01:15
| 1
| 731
|
Mévatlavé Kraspek
|
75,773,786
| 9,680,491
|
Why can't I access GPT-4 models via API, although GPT-3.5 models work?
|
<p>I'm able to use the gpt-3.5-turbo-0301 model to access the ChatGPT API, but not any of the gpt-4 models. Here is the code I am using to test this (it excludes my openai API key). The code runs as written, but when I replace "gpt-3.5-turbo-0301" with "gpt-4", "gpt-4-0314", or "gpt-4-32k-0314", it gives me an error</p>
<pre><code>openai.error.InvalidRequestError: The model: `gpt-4` does not exist
</code></pre>
<p>I have a ChatGPT+ subscription, am using my own API key, and can use gpt-4 successfully via OpenAI's own interface.</p>
<p>It's the same error if I use gpt-4-0314 or gpt-4-32k-0314. I've seen a couple of articles claiming this or similar code works with 'gpt-4' as the model specification, and the code I pasted below is from one of them.</p>
<p>Is it possible to access the gpt-4 model via Python + API, and if so, how?</p>
<pre><code>openai_key = "sk..."
openai.api_key = openai_key

system_intel = "You are GPT-4, answer my questions as if you were an expert in the field."
prompt = "Write a blog on how to use GPT-4 with python in a jupyter notebook"

# Function that calls the GPT-4 API
def ask_GPT4(system_intel, prompt):
    result = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0301",
        messages=[{"role": "system", "content": system_intel},
                  {"role": "user", "content": prompt}])
    print(result['choices'][0]['message']['content'])

# Call the function above
ask_GPT4(system_intel, prompt)
</code></pre>
|
<python><openai-api><gpt-4>
|
2023-03-18 03:59:30
| 4
| 403
|
Autodidactyle
|
75,773,580
| 21,370,799
|
Why isn't my if statement functioning properly in Python?
|
<p>So I was doing an experiment where a user enters text and the program responds according to the condition. If the condition returns false, the program prints the message "IDK". But when I ran it, I got the proper response and, below it, the "IDK" message. What should I do to solve it?</p>
<p>My code :</p>
<pre class="lang-py prettyprint-override"><code>#!/usr/bin/python3
print("Hey.\n")

while 1:
    inp = input("Enter text - ")
    if "hey" in inp or "hello" in inp or "hi" in inp or "greetings" in inp:
        print("Greeting\n")
    if "sirwalterraleigh" in inp or "abrahamlincoln" in inp:
        print("History\n")
    else:
        print("IDK.")
</code></pre>
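<p>For reference, the <code>else</code> binds only to the nearest <code>if</code>: the two <code>if</code> statements run independently, so every greeting that contains no history keyword also falls into the "IDK" branch of the second test. Chaining with <code>elif</code> is the usual fix; a small sketch restructured as a function (the function name is mine):</p>

```python
def respond(inp):
    if any(word in inp for word in ("hey", "hello", "hi", "greetings")):
        return "Greeting"
    elif any(word in inp for word in ("sirwalterraleigh", "abrahamlincoln")):
        return "History"
    else:
        return "IDK."

print(respond("hello there"))
```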
|
<python><if-statement><conditional-statements>
|
2023-03-18 02:48:34
| 1
| 441
|
lunix
|
75,773,430
| 37,213
|
Problem With Scikit Learn One Hot and Ordinal Encoders
|
<p>I'm having a problem with Scikit Learn's one-hot and ordinal encoders that I hope someone can explain to me.</p>
<p>I'm following along with a <a href="https://towardsdatascience.com/guide-to-encoding-categorical-features-using-scikit-learn-for-machine-learning-5048997a5c79" rel="nofollow noreferrer">Towards Data Science article</a> that uses a <a href="https://www.kaggle.com/datasets/spscientist/students-performance-in-exams" rel="nofollow noreferrer">Kaggle data set</a> to predict student academic performance. I'm running on a MacBook Pro M1 and Python 3.11 using a virtual environment.</p>
<p>The data set has four nominal independent variables <code>['gender', 'race/ethnicity', 'lunch', 'test preparation course']</code> , one ordinal independent variable <code>['parental level of education']</code>, and aspires to create a regression model that predicts the average of math, reading, and writing scores.</p>
<p>I'd like to set up an SKLearn pipeline that uses their <code>OneHotEncoder</code> and <code>OrdinalEncoder</code>. I am not interested in using Pandas DataFrame <code>get_dummies</code>. My goal is a pure SK Learn pipeline.</p>
<p>I tried to approach the problem in steps. The first attempt uses the encoders without a <code>column_transformer</code>:</p>
<pre><code>ohe = OneHotEncoder(handle_unknown='ignore', sparse_output=False).set_output(transform='pandas')
education_levels = [
"some high school",
"high school",
"some college",
"associate's degree",
"bachelor's degree",
"master's degree",
"doctorate"]
oe = OrdinalEncoder(categories=education_levels)
encoded_data = pd.DataFrame(ohe.fit_transform(df[categorical_attributes]))
df = df.join(encoded_data)
encoded_data = pd.DataFrame(ohe.fit_transform(df[ordinal_attributes]))
df = df.join(encoded_data)
</code></pre>
<p>The data frame <code>df</code> has both ordinal and categorical variables encoded perfectly. It's ready to feed to whatever regression algorithm I want. Note the <code>.join</code> steps that add the encoded data to the data frame.</p>
<p>The second attempt instantiates a column transformer:</p>
<pre><code>column_transformer = make_column_transformer(
(ohe, categorical_attributes),
(oe, ordinal_attributes))
column_transformer.fit_transform(X)
</code></pre>
<p>The data frame <code>X</code> contains only the ordinal and categorical independent variables.</p>
<p>When I run this version I get a stack trace that ends with the following error:</p>
<pre><code>ValueError: Shape mismatch: if categories is an array, it has to be of shape (n_features,).
</code></pre>
<p>Is the error telling me that I need to turn the categories list into an array and transpose it? Why does it work perfectly when I apply <code>.fit_transform</code> for each encoder sequentially without the pipeline?</p>
<p>The code I have appears to be what the article recommends.</p>
<p>I'm troubled by the column transformer. If it applies <code>.fit_transform</code> for each encoder sequentially it won't be doing those intermediate <code>.join</code> steps that my first solution has. Those are key to adding the new columns to the data frame.</p>
<p>I want to make the pipeline work. What am I missing, and how do I fix it?</p>
<p>Edit:</p>
<p>I logged into ChatGPT and asked it for comments on my code. Making this change got me almost there:</p>
<pre><code># Wrapping education_levels with [] did it.
oe = OrdinalEncoder(categories=[education_levels])
</code></pre>
<p>I have just one question remaining. The Pandas DataFrame that I get out has all the right values, but the column names are missing. How do I add them? Why were they not generated for me, as they were in the solution that did not use the pipeline?</p>
|
<python><pandas><machine-learning><scikit-learn><pipeline>
|
2023-03-18 01:56:12
| 2
| 309,526
|
duffymo
|
75,773,377
| 10,107,738
|
How do I crawl a page to the href elements using scrapy?
|
<p>I am trying to crawl a page and print the <code>hrefs</code>; however, I am not getting any response. Here is the spider -</p>
<pre class="lang-py prettyprint-override"><code>import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

class Shoes2Spider(CrawlSpider):
    name = "shoes2"
    allowed_domains = ["stockx.com"]
    start_urls = ["https://www.stockx.com/sneakers"]

    rules = (
        Rule(
            LinkExtractor(restrict_xpaths = "//div[@class='css-pnc6ci']/a"),
            callback = "parse_item",
            follow = True
        ),
    )

    def parse_item(self, response):
        print(response.url)
</code></pre>
<p>Also here is an example of the <code>hrefs</code> I'm trying to extract -
<a href="https://i.sstatic.net/DMbGn.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DMbGn.png" alt="enter image description here" /></a></p>
<p>When I run the spider, I'm expecting to see 40 <code>hrefs</code>, however I'm getting nothing. What am I doing wrong?</p>
<p>Also here is code to create the project in terminal -</p>
<pre class="lang-bash prettyprint-override"><code>scrapy startproject stockx
cd stockx
scrapy genspider -t crawl shoes2 www.stockx.com/sneakers
</code></pre>
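<p>Update: stockx.com appears to be heavily bot-protected and largely JavaScript-rendered, so the spider may be receiving a 403 or markup that does not contain those divs at all; it is probably worth logging <code>response.status</code> first. To sanity-check the selector logic itself separately from any anti-bot issue, here is a stdlib-only sketch against a snippet shaped like the screenshot (the class name is an assumption taken from the image):</p>

```python
from html.parser import HTMLParser

class HrefCollector(HTMLParser):
    """Collect hrefs of anchors nested under <div class='css-pnc6ci'>."""
    def __init__(self):
        super().__init__()
        self.depth = 0   # how many target divs we are currently inside
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and attrs.get("class") == "css-pnc6ci":
            self.depth += 1
        elif tag == "a" and self.depth and "href" in attrs:
            self.hrefs.append(attrs["href"])

    def handle_endtag(self, tag):
        # Simplification: any closing div inside a target div decrements.
        if tag == "div" and self.depth:
            self.depth -= 1

html = '<div class="css-pnc6ci"><a href="/nike-dunk-low">x</a></div>'
parser = HrefCollector()
parser.feed(html)
print(parser.hrefs)  # ['/nike-dunk-low']
```

<p>If the selector works against static markup like this but the spider still returns nothing, the problem is with what the site actually serves the crawler, not with the XPath.</p>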
|
<python><scrapy><web-crawler>
|
2023-03-18 01:34:49
| 2
| 939
|
The Rookie
|
75,773,353
| 19,425,874
|
How to export full Google Sheet tab as PDF and then print to local printer using Python
|
<p>After several weeks, I'm getting very close to solving my overall issue. I'm looking to connect my Google Sheet to a local printer and print the tabs "Label" and "Claims". Label needs to be printed x times, where x is defined by the value in cell F2; Claims should be printed once.</p>
<p>This seems all taken care of, but for some reason the full Google Sheet tab isn't being exported to a PDF correctly.</p>
<p>It seems like the label is printing in portrait, when it should be landscape & the text maybe needs to be larger (image below how it is printing)?</p>
<p><a href="https://i.sstatic.net/UUv4C.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/UUv4C.png" alt="Wrong label output" /></a></p>
<p>I can't seem to figure out how to adjust this. The "Claims" tab just prints blank, but I think it is the same issue for both.</p>
<p>Any thoughts on how I could adjust this code to fully print the tab? Sample URL added:</p>
<p><a href="https://docs.google.com/spreadsheets/d/1MmUb7aPR9JVy-o2erbIDFzoY1iH9M5skcBRYNTuB_Ug/edit#gid=904472643" rel="nofollow noreferrer">https://docs.google.com/spreadsheets/d/1MmUb7aPR9JVy-o2erbIDFzoY1iH9M5skcBRYNTuB_Ug/edit#gid=904472643</a></p>
<pre><code>import os
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import win32print
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import A5
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
# Set up Google Sheets API credentials
scope = ['https://spreadsheets.google.com/feeds',
'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('creds.json', scope)
client = gspread.authorize(creds)
# Set up printer name and font
printer_name = "Rollo Printer"
font_path = "arial.ttf"
# Register Arial font
pdfmetrics.registerFont(TTFont('Arial', font_path))
# Open the Google Sheet and select Label and Claims worksheets
sheet_url = "URL"
label_sheet = client.open_by_url(sheet_url).worksheet("Label")
claims_sheet = client.open_by_url(sheet_url).worksheet("Claims")
# Get the number of label copies to print
num_copies = int(label_sheet.acell("F2").value)
# Set up label filename and PDF canvas
label_file = "label.pdf"
label_canvas = canvas.Canvas(label_file, pagesize=A5)
# Write label text to PDF canvas
label_text = label_sheet.get_all_values()
x_pos = 10 # set initial x position
y_pos = 260 # set initial y position
line_height = 14 # set line height for text
label_canvas.setFont('Arial', 8)
for row in label_text:
    for col in row:
        textobject = label_canvas.beginText()
        textobject.setFont('Arial', 8)
        textobject.setTextOrigin(x_pos, y_pos)
        lines = col.split("\n")
        for line in lines:
            textobject.textLine(line)
            y_pos -= line_height
        label_canvas.drawText(textobject)
        x_pos += 90
        y_pos = 260
    x_pos = 10
# Save the label PDF and print to the printer
label_canvas.save()
for i in range(num_copies):
    os.startfile(label_file, 'printto', printer_name, "", 1)
# Set up claims filename and PDF canvas
claims_file = "claims.pdf"
claims_canvas = canvas.Canvas(claims_file, pagesize=A5)
# Write claims text to PDF canvas
claims_text = claims_sheet.get_all_values()
y_pos = 260 # set initial y position
claims_canvas.setFont('Arial', 8)
for row in claims_text:
    textobject = claims_canvas.beginText()
    textobject.setFont('Arial', 8)
    lines = row[0].split("\n")
    for line in lines:
        textobject.textLine(line)
        y_pos -= line_height
    claims_canvas.drawText(textobject)
    y_pos = 260
# Save the claims PDF and print to the printer
claims_canvas.save()
os.startfile(claims_file, 'printto', printer_name, "", 1)
</code></pre>
<p>I figured out switching to landscape, but the labels still do not look right. Is there a way to designate them as 4x6, or to export from the Google Sheet differently? The issue is that the tabs are not being exported to PDF formatted the way they should be for a 4x6 label.</p>
<pre><code> import os
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import win32print
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import landscape, A5
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
# Set up Google Sheets API credentials
scope = ['https://spreadsheets.google.com/feeds',
'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('creds.json', scope)
client = gspread.authorize(creds)
# Set up printer name and font
printer_name = "Rollo Printer"
font_path = "arial.ttf"
# Register Arial font
pdfmetrics.registerFont(TTFont('Arial', font_path))
# Open the Google Sheet and select Label and Claims worksheets
sheet_url = "https://docs.google.com/spreadsheets/d/1eRO-30eIZamB5sjBp-Mz2QMKCfGxV037MQO8nS7G7AI/edit#gid=1988901382"
label_sheet = client.open_by_url(sheet_url).worksheet("Label")
claims_sheet = client.open_by_url(sheet_url).worksheet("Claims")
# Get the number of label copies to print
num_copies = int(label_sheet.acell("F2").value)
# Set up label filename and PDF canvas
label_file = "label.pdf"
label_canvas = canvas.Canvas(label_file, pagesize=landscape(A5))
# Write label text to PDF canvas
label_text = label_sheet.get_all_values()
x_pos = 10 # set initial x position
y_pos = 260 # set initial y position
line_height = 14 # set line height for text
label_canvas.setFont('Arial', 8)
for row in label_text:
    for col in row:
        textobject = label_canvas.beginText()
        textobject.setFont('Arial', 8)
        textobject.setTextOrigin(x_pos, y_pos)
        lines = col.split("\n")
        for line in lines:
            textobject.textLine(line)
            y_pos -= line_height
        label_canvas.drawText(textobject)
        x_pos += 90
        y_pos = 260
    x_pos = 10
# Save the label PDF and print to the printer
label_canvas.save()
for i in range(num_copies):
    os.startfile(label_file, 'printto', printer_name, "", 1)
# Set up claims filename and PDF canvas
claims_file = "claims.pdf"
claims_canvas = canvas.Canvas(claims_file, pagesize=landscape(A5))
# Write claims text to PDF canvas
claims_text = claims_sheet.get_all_values()
y_pos = 260 # set initial y position
claims_canvas.setFont('Arial', 8)
for row in claims_text:
    textobject = claims_canvas.beginText()
    textobject.setFont('Arial', 8)
    lines = row[0].split("\n")
    for line in lines:
        textobject.textLine(line)
        y_pos -= line_height
    claims_canvas.drawText(textobject)
    y_pos = 260
# Save the claims PDF and print to the printer
claims_canvas.save()
os.startfile(claims_file, 'printto', printer_name, "", 1)
</code></pre>
<p>I am thinking the code needs to look something more like this, but I'm receiving a traceback error (also below).</p>
<p>Code:</p>
<pre><code>import os
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import win32print
from reportlab.pdfgen import canvas
from reportlab.lib.pagesizes import landscape, portrait
from reportlab.lib.units import mm
from reportlab.pdfbase import pdfmetrics
from reportlab.pdfbase.ttfonts import TTFont
# Set up Google Sheets API credentials
scope = ['https://spreadsheets.google.com/feeds',
'https://www.googleapis.com/auth/drive']
creds = ServiceAccountCredentials.from_json_keyfile_name('creds.json', scope)
client = gspread.authorize(creds)
# Set up printer name and font
printer_name = "Rollo Printer"
font_path = "arial.ttf"
# Register Arial font
pdfmetrics.registerFont(TTFont('Arial', font_path))
# Open the Google Sheet and select Label and Claims worksheets
sheet_url = "https://docs.google.com/spreadsheets/d/1eRO-30eIZamB5sjBp-Mz2QMKCfGxV037MQO8nS7G7AI/edit#gid=1988901382"
label_sheet = client.open_by_url(sheet_url).worksheet("Label")
claims_sheet = client.open_by_url(sheet_url).worksheet("Claims")
# Get the number of label copies to print
num_copies = int(label_sheet.acell("F2").value)
# Set up label filename and PDF canvas
label_file = "label.pdf"
label_canvas = canvas.Canvas(label_file, pagesize=landscape((6*72*mm), (4*72*mm)))
# Write label text to PDF canvas
label_text = label_sheet.get_all_values()
x_pos = 10 # set initial x position
y_pos = 260 # set initial y position
line_height = 20 # set line height for text
label_canvas.setFont('Arial', 18)
for row in label_text:
    for col in row:
        textobject = label_canvas.beginText()
        textobject.setFont('Arial', 18)
        textobject.setTextOrigin(x_pos, y_pos)
        lines = col.split("\n")
        for line in lines:
            textobject.textLine(line)
            y_pos -= line_height
        label_canvas.drawText(textobject)
        x_pos += 145
        y_pos = 260
    x_pos = 10
# Save the label PDF and print to the printer
label_canvas.save()
for i in range(num_copies):
    os.startfile(label_file, 'printto', printer_name, "", 1)
# Set up claims filename and PDF canvas
claims_file = "claims.pdf"
claims_canvas = canvas.Canvas(claims_file, pagesize=landscape((6*72*mm), (4*72*mm)))
# Write claims text to PDF canvas
claims_text = claims_sheet.get_all_values()
y_pos = 260 # set initial y position
claims_canvas.setFont('Arial', 18)
for row in claims_text:
    textobject = claims_canvas.beginText()
    textobject.setFont('Arial', 18)
    lines = row[0].split("\n")
    for line in lines:
        textobject.textLine(line)
        y_pos -= line_height
    claims_canvas.drawText(textobject)
    y_pos = 260
# Save the claims PDF and print to the printer
claims_canvas.save()
os.startfile(claims_file, 'printto', printer_name, "", 1)
</code></pre>
<p>Error:</p>
<pre><code>PS C:\Users\amadl\JRPrintFinal> & "C:/Program Files/Python311/python.exe" c:/Users/amadl/JRPrintFinal/printerv7testing.py
Traceback (most recent call last):
File "c:\Users\amadl\JRPrintFinal\printerv7testing.py", line 34, in <module>
label_canvas = canvas.Canvas(label_file, pagesize=landscape((6*72*mm), (4*72*mm)))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: landscape() takes 1 positional argument but 2 were given
</code></pre>
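<p>The traceback happens because ReportLab's <code>landscape()</code> and <code>portrait()</code> each take a single <code>(width, height)</code> tuple, not two arguments, and <code>6*72*mm</code> also mixes units (72 points multiplied by the <code>mm</code> constant again). A 4x6-inch label can be expressed directly in points; only the arithmetic runs here, with the <code>Canvas</code> call shown commented since it needs ReportLab:</p>

```python
# 1 inch = 72 PostScript points, which is what ReportLab page sizes use.
POINTS_PER_INCH = 72
label_size = (6 * POINTS_PER_INCH, 4 * POINTS_PER_INCH)  # (width, height) for a landscape 4x6
print(label_size)  # (432, 288)

# With ReportLab installed (equivalently: from reportlab.lib.units import inch):
# from reportlab.pdfgen import canvas
# label_canvas = canvas.Canvas("label.pdf", pagesize=label_size)
```

<p>Separately, the claims loop never calls <code>setTextOrigin</code> on its text object, which may be why that page comes out blank.</p>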
|
<python><canvas><operating-system><pywin32><reportlab>
|
2023-03-18 01:25:09
| 1
| 393
|
Anthony Madle
|
75,773,259
| 14,729,820
|
How to convert date and time inside data frame to float64 datatype?
|
<p>I have this Excel file <a href="https://github.com/Mohammed20201991/DataSets/blob/main/data.xlsx" rel="nofollow noreferrer">data</a>, shown in the image below: <img src="https://i.sstatic.net/kbI7C.png" alt="enter image description here" /></p>
<p>I am following this <a href="https://kgptalkie.com/multi-step-time-series-predicting-using-rnn-lstm/" rel="nofollow noreferrer">tutorial</a> with the data mentioned, in a Colab notebook, using the code below:</p>
<pre><code>import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from numpy import nan
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense
from sklearn.metrics import mean_squared_error
from sklearn.preprocessing import MinMaxScaler
#Reading the dataset
data_path= "/content/data.xlsx"
data = pd.read_excel(data_path)
data.head()
</code></pre>
<p>When try to check all data columns type by using <code>data.info()</code> I got :</p>
<pre><code><class 'pandas.core.frame.DataFrame'>
RangeIndex: 84960 entries, 0 to 84959
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time 84960 non-null datetime64[ns]
1 Fridge 84960 non-null float64
2 Lights 84960 non-null float64
3 Microwave 84960 non-null float64
4 Pump1 84960 non-null float64
5 Pump2 84960 non-null float64
6 TV 84960 non-null float64
7 Washing Machine 84960 non-null float64
8 Total Load 84960 non-null float64
dtypes: datetime64[ns](1), float64(8)
memory usage: 5.8 MB
</code></pre>
<p>I am trying to convert the <code>Time</code> column from <code>datetime64</code> to <code>float64</code> with</p>
<pre><code># data = data.astype('float')
x = data['Time'].values.astype("float64")
x
</code></pre>
<p>but got this issue :</p>
<pre><code>3632 except TypeError:
3633 # If we have a listlike key, _check_indexing_error will raise
KeyError: 'Time'
</code></pre>
<pre><code>## What I expect:
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 84960 entries, 0 to 84959
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Time 84960 non-null float64
1 Fridge 84960 non-null float64
2 Lights 84960 non-null float64
3 Microwave 84960 non-null float64
4 Pump1 84960 non-null float64
5 Pump2 84960 non-null float64
6 TV 84960 non-null float64
7 Washing Machine 84960 non-null float64
8 Total Load 84960 non-null float64
dtypes: float64(9)
memory usage: 5.8 MB
</code></pre>
<p>The format of Time: dd/mm/yyyy hh:mm
<a href="https://i.sstatic.net/gmzuf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/gmzuf.png" alt="enter image description here" /></a></p>
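<p>For reference, the <code>KeyError: 'Time'</code> usually means the header does not match exactly; a stray space in the Excel header is a common cause. Once the column resolves, <code>datetime64[ns]</code> converts cleanly to epoch nanoseconds. A sketch with fabricated timestamps in the same dd/mm/yyyy hh:mm format:</p>

```python
import pandas as pd

# Fabricated timestamps; note the deliberate trailing space in the header.
df = pd.DataFrame({"Time ": pd.to_datetime(["01/01/2020 00:00",
                                            "01/01/2020 00:15"], dayfirst=True)})

# Strip whitespace from headers first -- this is what turns "Time " into "Time".
df.columns = df.columns.str.strip()

# datetime64[ns] -> nanoseconds since the Unix epoch, as float64.
df["Time"] = df["Time"].astype("int64").astype("float64")
print(df["Time"].dtype)  # float64
```

<p>Whether nanoseconds-since-epoch is the right numeric feature for the LSTM is a separate modelling question; scaling or deriving hour/day features may work better.</p>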
|
<python><pandas><dataframe><data-science>
|
2023-03-18 00:51:01
| 2
| 366
|
Mohammed
|
75,773,245
| 2,024,493
|
Take snapshots of Youtube videos to create a preview GIF programatically
|
<p>I use the Youtube API to integrate content into my site. But, it doesn't provide any good snapshots of the videos. It provides a few very low quality snapshots that are virtually unusable.</p>
<pre><code>https://img.youtube.com/vi/<insert-youtube-video-id-here>/0.jpg
https://img.youtube.com/vi/<insert-youtube-video-id-here>/1.jpg
https://img.youtube.com/vi/<insert-youtube-video-id-here>/2.jpg
https://img.youtube.com/vi/<insert-youtube-video-id-here>/3.jpg
</code></pre>
<p>For example: <code>https://img.youtube.com/vi/XdwaASKJGxo/1.jpg</code> creates the following:</p>
<p><a href="https://i.sstatic.net/oF0AT.jpg" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oF0AT.jpg" alt="enter image description here" /></a></p>
<p><em>This snapshot is such low quality it's unusable. So I need to make my own!</em></p>
<p>My goal is to have a unique thumbnail/preview for each video. I would like to programmatically take 3 snapshots of the video within the middle of the video to avoid intros/outros. Then I would like to take these snapshots and generate a GIF.</p>
<p>I know how to generate a GIF using PHP, but I cannot find a way to take snapshots of a video.</p>
<p>Is it possible to do this using PHP or Python? Or any other code for that matter.</p>
<p><em>Example:</em></p>
<p><a href="https://i.sstatic.net/cRb7i.gif" rel="nofollow noreferrer"><img src="https://i.sstatic.net/cRb7i.gif" alt="enter image description here" /></a></p>
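<p>What I have so far on the Python side is only the timestamp math; the actual frame grabs would shell out to <code>ffmpeg</code> and <code>yt-dlp</code> (both assumed installed, commands hypothetical):</p>

```python
def middle_snapshot_times(duration_s, n=3, spread=0.2):
    """Return n timestamps centered on the video's midpoint, spanning
    `spread` of the total duration, to avoid intros/outros."""
    mid = duration_s / 2
    half = duration_s * spread / 2
    step = (2 * half) / (n - 1)
    return [mid - half + i * step for i in range(n)]

print(middle_snapshot_times(300))  # [120.0, 150.0, 180.0]

# Then, per timestamp t (hypothetical paths, tools assumed installed):
#   yt-dlp -f mp4 -o video.mp4 "https://www.youtube.com/watch?v=<id>"
#   ffmpeg -ss <t> -i video.mp4 -frames:v 1 snap_<i>.jpg
#   ffmpeg -framerate 1 -i snap_%d.jpg preview.gif
```

<p>Note that bulk-downloading videos to snapshot them may run against YouTube's terms of service, so this is sketched only as a technical approach.</p>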
|
<python><php><gif>
|
2023-03-18 00:46:58
| 2
| 2,010
|
Maciek
|
75,773,088
| 1,241,786
|
What can I do to improve the performance of a simple string search and replace script?
|
<p>I have a spreadsheet that contains 2 columns, the 1st is a column of strings that I need to search for, and the 2nd is a column of strings that the 1st column needs to be replaced with. There are close to 4000 rows in this spreadsheet. I have an example of the data shown below.</p>
<p>All of the strings in the "Tag Names" column are unique, however, there are some similarities - for example, <code>e1\di\BC-B29hiTor</code>, <code>e1\di\BC-B29hiTorq</code>, and <code>e1\di\BC-B29hiTorqLim</code>. That is, some strings can be strict subsets of others. I want to avoid inadvertently replacing a shorter version when a longer match is present, and I also want to be able to match these strings in a case-insensitive manner.</p>
<pre class="lang-none prettyprint-override"><code>Tag Name Address
e1\di\BC-B29DisSwt ::[e1]mccE1:I.data[2].28
e1\di\BC-B29hiTor ::[e1]Rack5:3:I.Data.3
e1\di\BC-B29hiTorq ::[e1]Rack5:3:I.Data.4
e1\di\BC-B29hiTorqLim ::[E1]BC_B29HiTorqueLimit
e1\di\BC-B29PlcRem ::[e1]Rack5:3:I.Data.2
e1\di\BC-B29Run ::[e1]Rack5:3:I.Data.0
e1\di\BC-B30DisSwt ::[e1]mccE2:I.data[2].28
e1\di\BC-B30hiTor ::[e1]Rack5:6:I.Data.3
e1\di\BC-B30hiTorq ::[e1]Rack5:6:I.Data.4
e1\di\BC-B30PlcRem ::[e1]Rack5:6:I.Data.2
e1\di\BC-B30Run ::[e1]Rack5:6:I.Data.0
e1\di\BC-B32DisSwt ::[E1]Rack5:1:I.Data.10
e1\di\BC-B32hiTor ::[E1]Rack5:1:I.Data.13
</code></pre>
<p>I also have a little over 600 XML files that will need to be searched for the above strings and replaced with their appropriate replacement.</p>
<p>As a first step, I wrote a little script that would search all of the XML files for all of the strings that I am wanting to replace and am logging the locations of those found strings. My logging script works, but it is horrendously slow (5ish hours to process 100 XML files). Implementing a replace routine would only slow things down further, so I clearly need to rethink how I'm handling this. What can I do to speed things up?</p>
<p>Edit: Another requirement of mine is that the replace routine will need to preserve the capitalization of the remainder of the files that are being searched, so converting everything to lowercase ultimately would not work in my case.</p>
<pre><code># Import required libs
import pandas as pd
import os
import openpyxl
from Trie import Trie
import logging
logging.basicConfig(filename='searchResults.log', level=logging.INFO, format='%(asctime)s %(message)s', datefmt='%m/%d/%Y %I:%M:%S %p')
# Load the hmi tags into a Trie data structure and the addresses into an array.
# The Trie accepts a (key, value) pair, where key is the tag and value is the
# index of the associated array.
df_HMITags = pd.read_excel('Tags.xlsx')
logging.info('Loaded excel file')
HMITags = Trie()
addresses = []
for i in df_HMITags.index:
    HMITags.insert(str(df_HMITags[' Tag Name'][i]).lower(), i)
    addresses.append(str(df_HMITags[' Address'][i]))
# Assign directory
directory = 'Graphics'
# Iterate over the files in the directory
for filename in os.listdir(directory):
    file = os.path.join(directory, filename)
    # Checking if it is a file
    if os.path.isfile(file):
        logging.info('Searching File: ' + str(filename))
        print('Searching File:', filename)
        # Open the file
        with open(file,'r') as fp:
            # Search the file, one line at a time.
            lines = fp.readlines()
            lineNumber = 1
            for line in lines:
                if lineNumber %10 == 0:
                    print('Searching line number:', lineNumber)
                #logging.debug('Searching Line: ' + str(lineNumber))
                #print('Searching Line:', lineNumber)
                # Convert to lower case, as this will simplify searching.
                lineLowered = line.lower()
                # Iterate through the line searching for various tags.
                searchString = ''
                potentialMatchFound = False
                charIndex = 0
                while charIndex < len(lineLowered):
                    #logging.debug('charIndex: ' + str(charIndex))
                    #print('charIndex = ', charIndex, '---------------------------------------')
                    searchString = searchString + lineLowered[charIndex]
                    searchResults = HMITags.query(searchString)
                    #if lineNumber == 2424:
                    ###print('searchString:', searchString)
                    ###print('searchResults length:', len(searchResults))
                    # If the first char being searched does not return any results, move on to the next char.
                    if len(searchResults) > 0:
                        potentialMatchFound = True
                        ###print('Potential Match Found:', potentialMatchFound)
                    elif len(searchResults) == 0 and potentialMatchFound:
                        ###print('Determining if exact match exists')
                        # Remove the last char from the string.
                        searchString = searchString[:-1]
                        searchResults = HMITags.query(searchString)
                        # Determine if an exact match exists in the search results
                        exactMatchFound = False
                        exactMatchIndex = 0
                        while exactMatchIndex < len(searchResults) and not exactMatchFound:
                            if searchString == searchResults[exactMatchIndex][0]:
                                exactMatchFound = True
                            exactMatchIndex = exactMatchIndex + 1
                        if exactMatchFound:
                            logging.info('Match Found! File: ' + str(filename) + ' Line Number: ' + str(lineNumber) + ' Column: ' + str(charIndex - len(searchString) + 1) + ' HMI Tag: ' + searchString)
                            print('Found:', searchString)
                            charIndex = charIndex - 1
                        else:
                            ###print('Not Found:', searchString)
                            charIndex = charIndex - len(searchString)
                        searchString = ''
                        potentialMatchFound = False
                    else:
                        searchString = ''
                    charIndex = charIndex + 1
                lineNumber = lineNumber + 1
</code></pre>
<p>And my Trie implementation:</p>
<pre><code>class TrieNode:
"""A node in the trie structure"""
def __init__(self, char):
# the character stored in this node
self.char = char
# whether this can be the end of a key
self.is_end = False
# The value from the (key, value) pair that is to be stored.
# (if this node's is_end is True)
self.value = 0
# a dictionary of child nodes
# keys are characters, values are nodes
self.children = {}
class Trie(object):
"""The trie object"""
def __init__(self):
"""
The trie has at least the root node.
The root node does not store any character
"""
self.root = TrieNode("")
def insert(self, key, value):
"""Insert a key into the trie"""
node = self.root
# Loop through each character in the key
# Check if there is no child containing the character, create a new child for the current node
for char in key:
if char in node.children:
node = node.children[char]
else:
# If a character is not found,
# create a new node in the trie
new_node = TrieNode(char)
node.children[char] = new_node
node = new_node
# Mark the end of a key
node.is_end = True
# Set the value from the (key, value) pair.
node.value = value
def dfs(self, node, prefix):
"""Depth-first traversal of the trie
Args:
- node: the node to start with
- prefix: the current prefix, for tracing a
key while traversing the trie
"""
if node.is_end:
self.output.append((prefix + node.char, node.value))
for child in node.children.values():
self.dfs(child, prefix + node.char)
def query(self, x):
"""Given an input (a prefix), retrieve all keys stored in
the trie with that prefix, sort the keys by the number of
times they have been inserted
"""
# Use a variable within the class to keep all possible outputs
# As there can be more than one key with such prefix
self.output = []
node = self.root
# Check if the prefix is in the trie
for char in x:
if char in node.children:
node = node.children[char]
else:
# cannot found the prefix, return empty list
return []
# Traverse the trie to get all candidates
self.dfs(node, x[:-1])
# Sort the results in reverse order and return
return sorted(self.output, key = lambda x: x[1], reverse = True)
</code></pre>
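<p>One direction I am considering: a single compiled alternation, with the longest alternatives listed first, turns the per-character trie walk into one linear regex scan per line, keeps matching case-insensitive, and leaves the rest of each line's capitalization untouched. A sketch with three tags from the table above (the ~4000 real tags would be loaded from the spreadsheet instead):</p>

```python
import re

# Longest alternatives must come first so "BC-B29hiTorqLim" wins over
# its prefixes "BC-B29hiTorq" and "BC-B29hiTor".
replacements = {
    r"e1\di\BC-B29hiTor": "::[e1]Rack5:3:I.Data.3",
    r"e1\di\BC-B29hiTorq": "::[e1]Rack5:3:I.Data.4",
    r"e1\di\BC-B29hiTorqLim": "::[E1]BC_B29HiTorqueLimit",
}
lookup = {k.lower(): v for k, v in replacements.items()}
pattern = re.compile(
    "|".join(re.escape(k) for k in sorted(replacements, key=len, reverse=True)),
    re.IGNORECASE,
)

def replace_tags(text):
    # Match case-insensitively, but only the tag itself is replaced, so the
    # surrounding file keeps its original capitalization.
    return pattern.sub(lambda m: lookup[m.group(0).lower()], text)

line = '<tag name="E1\\di\\bc-b29hitorqlim"/>'
print(replace_tags(line))  # <tag name="::[E1]BC_B29HiTorqueLimit"/>
```

<p>Since <code>re</code> compiles the alternation once, each of the 600 files then costs a single pass per line instead of repeated trie queries per character.</p>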
|
<python>
|
2023-03-18 00:00:22
| 1
| 728
|
kubiej21
|
75,773,085
| 4,258,228
|
subprocess.run(["huggingface-cli", "login", "--token", TOKEN]) works on mac but not on Ubuntu
|
<p>I am trying to run <code>subprocess.run(["huggingface-cli", "login", "--token", TOKEN])</code> in a Jupyter notebook, which works on Mac but gets the following error on Ubuntu. I checked that <code>pip install huggingface_hub</code> has been executed.</p>
<p><code>subprocess.run(["git", "lfs", "install"])</code> works.</p>
<p><code>!huggingface-cli login</code> in the jupyter cell gets the error <code>/bin/bash: huggingface-cli: command not found</code></p>
<p>Anyone can help?</p>
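<p>A diagnostic sketch of what I suspect is happening; the commented lines show a PATH-independent alternative using the package's Python API (<code>TOKEN</code> as in the original snippet):</p>

```python
import shutil

# On Ubuntu the console script often lands in ~/.local/bin or a venv's bin/
# directory that the Jupyter kernel's PATH does not include, even though the
# package itself imports fine.
print(shutil.which("huggingface-cli"))  # None when the script is not on this PATH

# If that prints None, calling the API directly avoids PATH entirely:
# from huggingface_hub import login
# login(token=TOKEN)
```
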
|
<python><ubuntu><subprocess>
|
2023-03-18 00:00:03
| 3
| 417
|
dami.max
|
75,773,019
| 743,730
|
What exceptions can be raised by Python HTTPX's json() method?
|
<p>The excellent <a href="https://www.python-httpx.org" rel="nofollow noreferrer">Python HTTPX package</a> has a <a href="https://www.python-httpx.org/quickstart/#json-response-content" rel="nofollow noreferrer"><code>.json()</code></a> method for conveniently decoding responses that are in JSON format. But the documentation does not mention what exceptions can be raised by <code>.json()</code>, and even looking through the code, it is not obvious to me what the possible exceptions might be.</p>
<p>For purposes of writing code like the following,</p>
<pre><code>try:
return response.json()
except EXCEPTION:
print('This is not the JSON you are looking for')
</code></pre>
<p>what exceptions should I test for?</p>
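<p>From my reading of the source, <code>.json()</code> appears to hand the body to the stdlib <code>json</code> module, so this is the exception family I currently plan to test for; a sketch without a live response:</p>

```python
import json

# httpx appears to delegate JSON decoding to the stdlib json module, so a
# malformed body surfaces as json.JSONDecodeError, a subclass of ValueError.
# Catching ValueError is the conservative choice if the httpx version is
# uncertain.
try:
    json.loads("not json")  # stands in for response.json() on a bad body
except ValueError as exc:
    print(type(exc).__name__)  # JSONDecodeError
```
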
|
<python><httpx>
|
2023-03-17 23:43:32
| 1
| 2,489
|
mhucka
|
75,772,871
| 4,790,871
|
Custom endpoint for `ros3` driver in `h5py`
|
<p><code>h5py</code> supports the native S3 driver for HDF5 (the <code>ros3</code> driver). We've enabled this with a local build of <code>HDF5</code>.</p>
<pre><code>>>> import h5py
>>>
>>> print(f'Registered drivers: {h5py.registered_drivers()}')
Registered drivers: frozenset({ 'ros3', 'sec2', 'fileobj', 'core', 'family', 'split', 'stdio', 'mpio'})
</code></pre>
<p>We have a custom endpoint for our S3 service (not AWS). We run a Ceph/S3 service.</p>
<p><strong>Is there a way to specify the S3 endpoint?</strong></p>
<p>The <a href="https://docs.h5py.org/en/stable/high/file.html?highlight=ros3" rel="nofollow noreferrer">documentation here</a> makes no mention of it.</p>
<p>If we attempt to run the following we get a generic error that we presume has to do with the obvious missing endpoint.</p>
<pre><code>>>> h5py.File(
f,
driver='ros3',
aws_region=bytes('unused', 'utf-8'), # unused by our S3 but required
secret_id=bytes(access_key, 'utf-8'),
secret_key=bytes(secret_key, 'utf-8')
)
Traceback (most recent call last):
File "mycode.py", line 32, in <module>
with h5py.File(f, driver='ros3', aws_region=b'west', secret_id=access_key, secret_key=secret_key) as fh5:
File "opt/anaconda3/envs/smart_open_env/lib/python3.9/site-packages/h5py/_hl/files.py", line 567, in __init__
fid = make_fid(name, mode, userblock_size, fapl, fcpl, swmr=swmr)
File "opt/anaconda3/envs/smart_open_env/lib/python3.9/site-packages/h5py/_hl/files.py", line 231, in make_fid
fid = h5f.open(name, flags, fapl=fapl)
File "h5py/_objects.pyx", line 54, in h5py._objects.with_phil.wrapper
File "h5py/_objects.pyx", line 55, in h5py._objects.with_phil.wrapper
File "h5py/h5f.pyx", line 106, in h5py.h5f.open
OSError: Unable to open file (curl cannot perform request)
</code></pre>
<p>I checked that the commonly used environment variables <code>ENDPOINT</code> and <code>ENDPOINT_URL</code> were set, but had no effect.</p>
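<p>Since the <code>ros3</code> fapl that h5py exposes only covers region and credentials, one workaround we are considering is skipping <code>ros3</code> entirely and handing h5py a file-like object from <code>s3fs</code>, which does accept a custom endpoint. A sketch (bucket path, credentials, and endpoint are placeholders; only the options dict executes here):</p>

```python
# Placeholder credentials/endpoint; real values come from the Ceph service.
storage_options = {
    "key": "ACCESS_KEY",
    "secret": "SECRET_KEY",
    "client_kwargs": {"endpoint_url": "https://ceph.example.org"},
}
print(sorted(storage_options))  # ['client_kwargs', 'key', 'secret']

# With s3fs and h5py installed, h5py's fileobj driver takes over:
# import s3fs, h5py
# fs = s3fs.S3FileSystem(**storage_options)
# with fs.open("bucket/file.h5", "rb") as f, h5py.File(f, "r") as h5:
#     print(list(h5.keys()))
```

<p>The trade-off is that reads then go through Python instead of HDF5's native S3 VFD, which may matter for performance on large chunked reads.</p>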
|
<python><amazon-s3><hdf5><h5py><ceph>
|
2023-03-17 23:06:37
| 1
| 32,449
|
David Parks
|
75,772,788
| 9,415,280
|
tensorflow with custom training loop and tf.data.dataset
|
<p>I'm trying to build a TensorFlow model with a custom training loop that feeds the forecast back into the inputs of the next time step. The model has two heads and two input sets. I'm having trouble working out how to pass my two inputs and how to handle the custom call function.</p>
<p>I use this ref to build my code:
<a href="https://www.tensorflow.org/tutorials/structured_data/time_series#advanced_autoregressive_model" rel="nofollow noreferrer">https://www.tensorflow.org/tutorials/structured_data/time_series#advanced_autoregressive_model</a></p>
<p>and here is an example of my data and the way I want to use it in each timestep iteration.</p>
<p>Input #1:</p>
<pre><code>|---input warm up/observed data---|--- forcast---------------------|
t-5 t-4 t-3 t-2 t-1 t-0 t+1 t+2 t+3
var#1 3 2 1 5 4 2 1 3 4
var#2 1 -1 -3 1 0 -5 -11 0 1
var#3 66 67 69 71 73 75 ? ? ?
</code></pre>
<p>Using this example, var1-2 are inputs, while var3 is both an input AND a label. Var3 is known for the first time steps with a lag of 1 day (today we know yesterday's value of var3), which is used to start the model.</p>
<p>In the first iteration I get these inputs and try to forecast a value that will be inserted into var3 at t+1 for the next iteration:</p>
<pre><code> t-5 t-4 t-3 t-2 t-1 t-0 t+1
var#1 3 2 1 5 4 2
var#2 1 -1 -3 1 0 -5
var#3 66 67 69 71 73 75 ?
</code></pre>
<p>If ? = 88, the inputs for the next iteration will look like this:</p>
<pre><code> t-5 t-4 t-3 t-2 t-1 t-0 t+1 t+2
var#1 3 2 1 5 4 2 1
var#2 1 -1 -3 1 0 -5 -11
var#3 66 67 69 71 73 75 ?=88 ?
</code></pre>
<p>Input #2 is just a sequence of 3 values for the second head of the model, which uses only dense layers.</p>
<pre><code>class FeedBack(tf.keras.Model):
def __init__(self, num_timesteps_in, num_timesteps_out, nb_features, nb_attributs,
nb_lstm_units, nb_dense_units):
super(FeedBack, self).__init__()
self.num_timesteps_in = num_timesteps_in
self.num_timesteps_out = num_timesteps_out
self.nb_features = nb_features
self.nb_attributs = nb_attributs
self.nb_lstm_units = nb_lstm_units
self.nb_dense_units = nb_dense_units
self.lstm_cell = tf.keras.layers.LSTMCell(nb_lstm_units)
self.dense = tf.keras.layers.Dense(nb_lstm_units)
def call(self, inputs, training=None):
predictions = []
inputs1 = tf.keras.Input(shape=(self.num_timesteps_in + self.num_timesteps_out, self.nb_features))
inputs2 = tf.keras.Input(shape=(self.nb_attributs))
# Run prediction steps by step with a rolling window on the inputs
for i in range(0, self.num_timesteps_out):
input_chunk = inputs1[i:i + self.num_timesteps_in, :]
# Execute one step.
extraction_info = self.lstm_cell(self.nb_lstm_units,
training=training,
return_sequences=False,
stateful=False)(input_chunk)
extraction_info = tf.keras.layers.Dropout(0.2)(extraction_info)
# -------------- Merge les inputs météo/apports avec les attributs physiographiques --------------------
merged_input = tf.keras.layers.Concatenate(axis=1, name='merged_head')([extraction_info, inputs2])
merged_input = self.Dense(self.nb_dense_units)(merged_input)
merged_input = tf.keras.layers.Dropout(0.2)(merged_input)
prediction = self.Dense(1, activation='linear')(merged_input)
# insert this forecast into the input1 on the next timestep
inputs1[i + self.num_timesteps_in + 1, -1] = prediction
# Add the prediction to the record for extracting at the end.
predictions.append(prediction)
return predictions
</code></pre>
<p>Now format the data to a <code>tf.data.Dataset</code> and train the model:</p>
<pre><code>optimizer = tf.keras.optimizers.Adam(learning_rate=learning_rates[1])
# inputs convertion to tf.data.dataset
inputs_hydro = tf.data.Dataset.from_tensor_slices((X1))
inputs_static = tf.data.Dataset.from_tensor_slices((X2))
output = tf.data.Dataset.from_tensor_slices((y))
combined_dataset = tf.data.Dataset.zip(((inputs_hydro, inputs_static), output))
input_dataset = combined_dataset.batch(5)
base_model = cr.FeedBack(10, 5, 2, 3, 50, 50)
base_model.call = cr.call
model.compile(optimizer=optimizer, loss=tf.keras.losses.MeanSquaredError())
history = model.fit(input_dataset,
                    verbose=0,
                    epochs=10)
</code></pre>
<p>At this point, I have tried a lot of things and read many examples and questions, but I am still unable to make it run. With the current version I get this error:</p>
<blockquote>
<p>File "E:\Anaconda3\envs\tf2.7_bigData\lib\site-packages\keras\layers\rnn\lstm.py", line 190, in build<br />
input_dim = input_shape[-1]</p>
<p>IndexError: tuple index out of range</p>
<p>Call arguments received by layer "feed_back" " f"(type FeedBack):<br />
• inputs=('tf.Tensor(shape=(None, 15, 3), dtype=float32)', 'tf.Tensor(shape=(None, 24), dtype=float32)')<br />
• training=True</p>
</blockquote>
<p>Any help will be really welcome! This is my first use of <code>tf.data.dataset</code> and of a custom layer/model, and it is a hard step in my learning.</p>
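<p>For reference, here is the feedback bookkeeping I am after, stripped of TF entirely (toy predictor and numbers, not the real model). I suspect the <code>tf.keras.Input</code> calls inside <code>call</code> and the in-place tensor assignment are part of the problem, since <code>call</code> should consume the tensors it is given and Keras tensors do not support item assignment:</p>

```python
def autoregress(history, future_exog, predict):
    """history: rows of [var1, var2, var3]; future_exog: rows of [var1, var2]
    whose var3 is unknown; predict(window) -> the next var3 value."""
    window = list(history)
    preds = []
    for v1, v2 in future_exog:
        y = predict(window)                   # forecast var3 for the next step
        preds.append(y)
        window = window[1:] + [[v1, v2, y]]   # slide the window, feed the forecast back
    return preds

# Toy predictor: last var3 plus 2, loosely mimicking the 75 -> 88 example.
preds = autoregress(
    history=[[2, -5, 75]],
    future_exog=[[1, -11], [3, 0]],
    predict=lambda w: w[-1][2] + 2,
)
print(preds)  # [77, 79]
```

<p>In TF terms this would mean collecting per-step predictions in a list and combining them with <code>tf.stack</code> at the end, rather than writing into the input tensor.</p>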
|
<python><tensorflow><tensorflow-datasets>
|
2023-03-17 22:47:17
| 0
| 451
|
Jonathan Roy
|
75,772,718
| 10,646,643
|
Running "python setup.py install" second time fails with [WinError 32] error message
|
<p>Trying to install several well-established Python modules from the source using the "python setup.py install" command on Windows running Python 3.11. Everything works during the first run, with no error messages. However, when running the same command a second time (for no practical reason, just trying to make sure that my client will not be able to crash anything if the command is inadvertently run a second time), I am getting the following error message:</p>
<pre><code>Processing PyYAML-6.0-py3.11-win-amd64.egg
Removing c:\users\wdagutilityaccount\appdata\local\programs\python\python311\lib\site-packages\PyYAML-6.0-py3.11-win-amd64.egg
error: [WinError 32] The process cannot access the file because it is being used by another process: 'c:\\users\\wdagutilityaccount\\appdata\\local\\programs\\python\\python311\\lib\\site-packages\\PyYAML-6.0-py3.11-win-amd64.egg'
</code></pre>
<p>Important:</p>
<ol>
<li>A specific module (PyYAML) is not important here, I get the same message with other modules like graphviz for instance, so the issue is not caused by the module being installed.</li>
<li>The file in question (PyYAML-6.0-py3.11-win-amd64.egg) is actually not locked by any other process right after the script terminates, I can delete it manually later, so either the file lock is transient during the script execution, or something else is causing this error.</li>
<li>Furthermore, if I simply delete this file prior to running the second time (without uninstalling anything else), the second run completes without any error messages.</li>
<li>I tried "python setup.py install --force", but it did not help.</li>
<li>No, I can't install using any other higher-level alternatives like "pip3 install pyyaml" or from the wheel file - the requirement is that I install from a local copy of the source.</li>
</ol>
<p>So, hence the questions:</p>
<ol>
<li>What is a <strong>reliable way to install from the source unlimited number of times?</strong></li>
<li>I would be fine with the <strong>second run terminating with something like "already installed, nothing to do here, all good" type of message</strong>, is there any way to do that?</li>
</ol>
|
<python><setup.py>
|
2023-03-17 22:33:22
| 0
| 648
|
Regus Pregus
|
75,772,707
| 1,074,593
|
How to compare two columns in Excel using Python?
|
<p>I have two Excel files with the following fields:<br />
<strong>file1</strong><br />
col1,col2,col3,col4,col5,col6,col7,col8,col9</p>
<p>server1,java_yes,....<br />
server2,java_no,....<br />
server4,java_no,....<br />
server8,java_no,....</p>
<p><strong>file2</strong><br />
col1,col2,col3,col4,col5,col6,col7,col8,col9</p>
<p>server1,java_yes,....<br />
server3,java_no,....<br />
server4,java_yes,....<br />
server8,java_no,....</p>
<p>I want to<br />
a. Iterate over file1<br />
b. Compare each entry in col1 in file1 against col1 in file2<br />
c. If it exists, I want to see if the value in file1->col2 matches the entry in file2->col2<br />
d. If file1->col2 does not match file2->col2 then I want to update file1->col2 to equal file2->col2</p>
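Assuming the rows have already been read out of the two workbooks (e.g. with openpyxl or pandas — the reading step is not shown here), steps a–d reduce to a dictionary lookup. A minimal pure-Python sketch:

```python
# Sketch of steps a-d on plain lists of rows (assumption: file1_rows and
# file2_rows were already read from the two Excel files, one list per row,
# with col1 at index 0 and col2 at index 1).
file1_rows = [["server1", "java_yes"], ["server2", "java_no"],
              ["server4", "java_no"], ["server8", "java_no"]]
file2_rows = [["server1", "java_yes"], ["server3", "java_no"],
              ["server4", "java_yes"], ["server8", "java_no"]]

file2_lookup = {row[0]: row[1] for row in file2_rows}  # col1 -> col2

for row in file1_rows:                         # a. iterate over file1
    match = file2_lookup.get(row[0])           # b. look up col1 in file2
    if match is not None and row[1] != match:  # c. compare col2 values
        row[1] = match                         # d. update file1->col2

print(file1_rows[2])  # ['server4', 'java_yes']
```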
<p><strong>Update</strong></p>
<p>I am running into a strange issue and providing the details here.
It works fine for most of the entries, but for some entries it displays NaN even though the dataframe has java_yes in both places.
To figure this out, I added a filter and then printed it at various stages.<br />
When I print for df1, df2 and merged, it works fine.<br />
When I print the same at the very end, it displays NaN for certain entries.
Very strange.</p>
<pre>
my_filter = ( df1['col1'] == 'server1' )
print(df1.loc[my_filter, 'col2'])
</pre>
<p>All except the last print returns</p>
<pre>Yes</pre>
<p>The very last print (for df1) returns</p>
<pre>NaN</pre>
|
<python><excel><numpy>
|
2023-03-17 22:31:41
| 2
| 660
|
user1074593
|
75,772,510
| 20,736,637
|
Selenium Stale elements in Python
|
<p>I am trying to complete a problem involved in Angela's '100 Days of Python' course.</p>
<p>It involves building a cookie clicker website that <strong>uses selenium</strong> to <strong>click the cookie</strong> from <a href="http://orteil.dashnet.org/experiments/cookie/" rel="nofollow noreferrer">http://orteil.dashnet.org/experiments/cookie/</a> continually. Every 5 seconds, the program is supposed to check the upgrades (on the right of the cookie site) and click the most expensive one that I can afford, to purchase it.</p>
<p>I have tried this challenge and my code generates the following error at random times (you may need to run my code to reproduce it):</p>
<pre><code>selenium.common.exceptions.StaleElementReferenceException: Message: stale element reference: element is not attached to the page document
</code></pre>
<p>My code follows:</p>
<pre><code>from selenium import webdriver
from selenium.webdriver.chrome.service import Service
import time
service = Service("~/Downloads/chromedriver_mac64/chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("http://orteil.dashnet.org/experiments/cookie/")
def format_items():
"""Format the data in [{name, cost, element (to click)}]"""
store_items = driver.find_elements("css selector", "#store div b")
items_ = []
for store_item in store_items:
if store_item.text != "":
description = store_item.text.split(" - ")
name = description[0]
cost = int(description[1].replace(",", ""))
element = store_item
items_.append({"name": name, "cost": cost, "element": element})
return items_
cookie = driver.find_element("id", "cookie")
def find_most_expensive():
affordable_upgrades = []
for item in items:
cost = item["cost"]
if cost <= cookie_count:
affordable_upgrades.append(item)
expensive_affordable_cost = 0
expensive_affordable_dict = {}
for upgrade in affordable_upgrades:
if upgrade["cost"] > expensive_affordable_cost:
expensive_affordable_cost = upgrade["cost"]
expensive_affordable_dict = upgrade
return expensive_affordable_dict
base = time.time() + 5
while True:
cookie.click()
items = format_items()
if time.time() >= base:
cookie_count = int(driver.find_element("id", "money").text.replace(",", ""))
most_expensive_item = find_most_expensive()
if len(most_expensive_item) != 0:
most_expensive_item["element"].click()
base += 5
driver.quit()
</code></pre>
<p>I have no idea why this happens, because everything should work properly.</p>
<p>I have reviewed other similar articles on Stack Overflow and none seem to solve my issue; many talk about the element being obsolete in the DOM, but I don't see how that can happen here, because <code>items = format_items()</code> runs on every iteration of the while loop.</p>
<p>Would you mind taking a look at my code and telling me why this error is raised?</p>
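Not the asker's code, but one common workaround is to re-locate the element and retry when a click fails. In this sketch the exception class is stood in for by plain <code>Exception</code> so it runs without selenium; in the real script <code>locate</code> would be a lambda that re-runs <code>driver.find_element</code> (or <code>format_items()</code>), and the <code>except</code> clause would catch <code>selenium.common.exceptions.StaleElementReferenceException</code>:

```python
# Retry sketch: re-fetch the element on each attempt so a stale
# reference from a page re-render is replaced by a fresh one.
def click_with_retry(locate, attempts=3):
    last_error = None
    for _ in range(attempts):
        try:
            locate().click()   # locate() returns a freshly-found element
            return True
        except Exception as error:   # selenium: StaleElementReferenceException
            last_error = error
    raise last_error

class FlakyElement:
    """Raises once (as a stale element would), then succeeds."""
    def __init__(self):
        self.calls = 0
    def click(self):
        self.calls += 1
        if self.calls == 1:
            raise Exception("stale element reference")

element = FlakyElement()
print(click_with_retry(lambda: element))  # True
```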
|
<python><selenium-webdriver><staleelementreferenceexception>
|
2023-03-17 21:52:20
| 2
| 636
|
Daniel Crompton
|
75,772,275
| 7,903,749
|
How to suppress the `RemovedInDjango40Warning` warnings in a unit test?
|
<p>We are working on a unit test in PyCharm and the test itself is passing now. However, the terminal output contains warning messages in the <code>RemovedInDjango40Warning</code> category. As we do not plan to upgrade to Django 4.0 for now, these warnings are verbose and distracting.</p>
<p>We tried including the statement <code>warnings.simplefilter("ignore", category=RemovedInDjango40Warning) </code> but it did not suppress the warnings.</p>
<p>So, we wonder how to suppress the <code>RemovedInDjango40Warning</code> warnings.</p>
<h4>Technical Details:</h4>
<h5>Partial source code of the unit test:</h5>
<pre class="lang-py prettyprint-override"><code>from unittest import TestCase
import django
class TestPocFunctional(TestCase):
@classmethod
def setUpClass(cls):
django.setup()
return
def test_all(self):
from application.poc import PocFunctional
import warnings
from django.utils.deprecation import RemovedInDjango40Warning
warnings.simplefilter("ignore", category=RemovedInDjango40Warning)
# ... testing steps
return
</code></pre>
<h5>The terminal output of warnings:</h5>
<pre><code>/data/app-py3/venv3.7/bin/python /var/lib/snapd/snap/pycharm-professional/316/plugins/python/helpers/pycharm/_jb_unittest_runner.py --target test_poc.TestPocFunctional
Testing started at 1:54 PM ...
Launching unittests with arguments python -m unittest test_poc.TestPocFunctional in /data/app-py3/APPLICATION/tests
/data/app-py3/APPLICATION/eav/models.py:84: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
value = models.CharField(_(u"value"), db_index=True,
/data/app-py3/APPLICATION/eav/models.py:100: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
name = models.CharField(_(u"name"), unique=True, max_length=100)
/data/app-py3/APPLICATION/eav/models.py:102: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
enums = models.ManyToManyField(EnumValue, verbose_name=_(u"enum group"))
/data/app-py3/APPLICATION/eav/models.py:173: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_TEXT, _(u"Text")),
/data/app-py3/APPLICATION/eav/models.py:174: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_FLOAT, _(u"Float")),
/data/app-py3/APPLICATION/eav/models.py:175: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_INT, _(u"Integer")),
/data/app-py3/APPLICATION/eav/models.py:176: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_DATE, _(u"Date and Time")),
/data/app-py3/APPLICATION/eav/models.py:177: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_BASICDATE, _(u"Date")),
/data/app-py3/APPLICATION/eav/models.py:178: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_BOOLEAN, _(u"True / False")),
/data/app-py3/APPLICATION/eav/models.py:179: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_OBJECT, _(u"Django Object")),
/data/app-py3/APPLICATION/eav/models.py:180: RemovedInDjango40Warning: django.utils.translation.ugettext_lazy() is deprecated in favor of django.utils.translation.gettext_lazy().
(TYPE_ENUM, _(u"Multiple Choice")),
... more warnings in the same category ...
</code></pre>
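A possible reason (an assumption, not confirmed by the output alone): the <code>RemovedInDjango40Warning</code> instances are emitted at import time of <code>eav/models.py</code>, i.e. before the <code>warnings.simplefilter(...)</code> line inside <code>test_all</code> ever runs, so the filter is installed too late; moving the filter before the offending import would then be the fix. A minimal stdlib sketch of that ordering effect, with a stand-in warning class:

```python
import warnings

class DemoWarning(UserWarning):
    """Stand-in for RemovedInDjango40Warning (assumption: same mechanics)."""

def module_level_code():
    # Stands in for eav/models.py, which warns at import time.
    warnings.warn("ugettext_lazy() is deprecated", DemoWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")                # record everything
    module_level_code()                            # fires BEFORE the ignore filter
    warnings.simplefilter("ignore", category=DemoWarning)
    module_level_code()                            # this one is suppressed
print(len(caught))  # 1 -> only the pre-filter warning got through
```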
|
<python><django><pycharm><warnings>
|
2023-03-17 21:13:19
| 1
| 2,243
|
James
|
75,772,170
| 19,980,284
|
Produce Predictive Margins in Statsmodels Output for Logistic Regression
|
<p>I've used the <code>get_margeff()</code> function to try to generate margins for a logit model I've run. Here is the data and code:</p>
<pre class="lang-py prettyprint-override"><code>model_5a_1 = smf.logit('''fluid ~ C(examq3_n, Treatment(reference = 2.0)) + C(pmhq3_n) + C(fluidq3_n) + C(mapq3_n, Treatment(reference = 3.0)) +
C(examq6_n, Treatment(reference = 2.0)) + C(pmhq6_n) + C(fluidq6_n) + C(mapq6_n, Treatment(reference = 3.0)) +
+ C(case, Treatment(reference = 2))''',
data = case1_2_vars).fit()
print(model_5a_1.summary())
</code></pre>
<p><a href="https://i.sstatic.net/9tPBR.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/9tPBR.png" alt="enter image description here" /></a></p>
<p>I'd like to now calculate the margins, much like you can with the <code>margin</code> command in Stata. This is as far as I'm getting:</p>
<pre><code>margins_5a_1 = model_5a_1.get_margeff(at = 'mean', dummy = True, count = True)
margins_5a_1.summary()
</code></pre>
<p><a href="https://i.sstatic.net/xTECp.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/xTECp.png" alt="enter image description here" /></a></p>
<p>And I'd like to see the actual margins instead of <code>dy/dx</code>. Is there a way to do this in statsmodels?</p>
<p>Here is my data set. In this regression I'm only regressing on <code>fluid</code> and all variables to the left of <code>fluid</code> are the independent variables (those to the right will be used for other logit models).
<a href="https://i.sstatic.net/PUfoa.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/PUfoa.png" alt="enter image description here" /></a></p>
<p>And I've tried <code>get_prediction</code> like so:</p>
<pre><code>model_5a_1.get_prediction(exog = case1_2_vars[['examq3_n', 'pmhq3_n', 'examq3_n', 'pmhq3_n', 'fluidq3_n', 'mapq3_n',
'examq6_n', 'pmhq6_n', 'fluidq6_n', 'mapq6_n', 'fluid']])
</code></pre>
<p>But I get <code>PatsyError: categorical data cannot be >1-dimensional</code></p>
|
<python><stata><logistic-regression><statsmodels>
|
2023-03-17 20:56:56
| 1
| 671
|
hulio_entredas
|
75,772,148
| 5,833,797
|
Python - pip install private repo in remote environment
|
<p>I am using a private Github repo in my project as a library. This works fine in my local environment but when I push this to a remote host, it won't install because the remote host doesn't have access to the repo.</p>
<p>Adding the credentials to the URL like this works fine:<br />
<code>git+https://${GITHUB_USER}:${GITHUB_TOKEN}@github.com/user/project.git@{version}</code></p>
<p>However, my problem is that whenever I run <code>pip freeze > requirements.txt</code> to update my libraries, the repo's URL is updated and the credentials are stripped out.</p>
<p>Is there a way to set my git credentials as an environment variable in my remote environment so that pip can install the library using the "normal" github link?</p>
|
<python><github><pip>
|
2023-03-17 20:54:17
| 1
| 727
|
Dave Cook
|
75,772,015
| 595,305
|
Make use of a patch set up by an autouse fixture?
|
<p>Follows on from <a href="https://stackoverflow.com/a/38763328/595305">this answer</a> about disabling autouse fixtures.</p>
<p>If I have this fixture</p>
<pre><code>@pytest.fixture(autouse=True)
def mock_logger_error(request):
with mock.patch('logging.Logger.error') as mock_error:
yield
</code></pre>
<p>... how would I actually make use of <code>mock_error</code> (e.g. <code>mock_error.assert_called_once_with(...)</code>) in a test?</p>
<pre><code>def test_something_possibly_emitting_error_message():
...
if connection_ok:
mock_error.assert_not_called()
else:
mock_error.assert_called_once_with(AnyStringWithRegex(r'ERROR\..*some error messsage.*'))
</code></pre>
<p>---> "Undefined variable: mock_error". I.e. although the patch is doing its job here, i.e. patching <code>Logger.error</code>, I don't know how to get a reference to it.</p>
<p>As can be seen <a href="https://docs.pytest.org/en/stable/how-to/fixtures.html#using-markers-to-pass-data-to-fixtures" rel="nofollow noreferrer">here</a>, it is possible to pass data from a fixture to a test, but that shows that you have to use <code>return</code> to do that. If you're using <code>yield</code> I don't think you can also use <code>return</code>...</p>
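In fact a yield fixture can yield a value — <code>yield mock_error</code> — and pytest hands the yielded object to any test that requests the fixture by name (standard pytest behaviour, and it works for autouse fixtures too). A self-contained sketch that drives the generator by hand, the way pytest would:

```python
import logging
from unittest import mock

# The fixture yields the mock instead of a bare `yield`; in pytest, a test
# written as `def test_x(mock_logger_error): ...` would receive it.
def mock_logger_error():
    with mock.patch("logging.Logger.error") as mock_error:
        yield mock_error

# Driving the generator by hand, the way pytest drives a yield fixture:
gen = mock_logger_error()
mock_error = next(gen)          # setup: patch applied, mock yielded

logging.getLogger("demo").error("boom")
mock_error.assert_called_once_with("boom")

# teardown: exhausting the generator exits the mock.patch context
try:
    next(gen)
except StopIteration:
    pass
```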
|
<python><mocking><pytest><patch><fixtures>
|
2023-03-17 20:34:58
| 2
| 16,076
|
mike rodent
|
75,771,920
| 9,422,114
|
Python: regex pattern fails after being split into multiple lines
|
<p>I have a regex pattern that works fine if I write it in a single line.</p>
<p>For instance, the pattern work if I do the following:</p>
<pre><code>MAIN_TREE_START_STRING_PATTERN = (
r"\*{3}\s+\bTotal Things\b:\s+(?P<number_of_things>\d+)"
)
compiled_pattern = re.compile(MAIN_TREE_START_STRING_PATTERN, flags=re.X)
match = compiled_pattern.match(string="*** Total Things: 348, abcdefghi ***")
if match:
print("Success")
else:
print("Failed")
</code></pre>
<p>But if I changed the regex pattern to be a multiline string, and using the <code>VERBOSE</code> flag, it doesn't work.</p>
<pre><code>MAIN_TREE_START_STRING_PATTERN = r"""
\*{3}\s+\bTotal Things\b:\s+
(?P<number_of_things>\d+)
"""
compiled_pattern = re.compile(MAIN_TREE_START_STRING_PATTERN)
match = compiled_pattern.match(string="*** Total Things: 348, abcdefghi ***")
if match:
print("Success")
else:
print("Failed")
</code></pre>
<p>I'm not sure what I'm doing wrong during the multiline pattern declaration.</p>
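I can't be certain this is the exact issue, but two things differ between the snippets: the second <code>re.compile</code> drops <code>flags=re.X</code>, and under <code>re.X</code> unescaped whitespace inside the pattern — including the literal space in <code>Total Things</code> — is ignored. A sketch that keeps VERBOSE mode and protects the space with a character class:

```python
import re

# Under re.X, unescaped whitespace and `#` comments are ignored, so the
# literal space is written as [ ] (an escaped "\ " would also work).
pattern = re.compile(r"""
    \*{3}\s+\bTotal[ ]Things\b:\s+   # [ ] keeps the literal space under VERBOSE
    (?P<number_of_things>\d+)
""", flags=re.X)

match = pattern.match("*** Total Things: 348, abcdefghi ***")
print(match.group("number_of_things"))  # 348
```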
|
<python><python-3.x><regex><python-re>
|
2023-03-17 20:22:03
| 2
| 1,401
|
Jacobo
|
75,771,896
| 11,501,370
|
Get Geometric Mean Over Window in Pyspark Dataframe
|
<p>I have the following pyspark dataframe</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>Car</th>
<th>Time</th>
<th>Val1</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>1</td>
<td>3</td>
</tr>
<tr>
<td>2</td>
<td>1</td>
<td>6</td>
</tr>
<tr>
<td>3</td>
<td>1</td>
<td>8</td>
</tr>
<tr>
<td>1</td>
<td>2</td>
<td>10</td>
</tr>
<tr>
<td>2</td>
<td>2</td>
<td>21</td>
</tr>
<tr>
<td>3</td>
<td>2</td>
<td>33</td>
</tr>
</tbody>
</table>
</div>
<p>I want to get the geometric mean of all the cars at each time, resulting df should look like this:</p>
<div class="s-table-container">
<table class="s-table">
<thead>
<tr>
<th>time</th>
<th>geo_mean</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>5.2414827884178</td>
</tr>
<tr>
<td>2</td>
<td>19.065333718304</td>
</tr>
</tbody>
</table>
</div>
<p>I know how to calculate the arithmetic average with the following code:</p>
<pre><code>
from pyspark.sql import functions as F
from pyspark.sql.window import Window as W
df = df.withColumn(
"aritmethic_average",
F.avg(F.col("Val1")).over(W.partitionBy("time"))
)
</code></pre>
<p>But I'm unsure how to accomplish the same thing with geometric means.</p>
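One standard approach (an untested sketch, as an assumption about what fits this pipeline): use the identity geometric_mean = exp(mean(log(x))), which in PySpark would look like <code>F.exp(F.avg(F.log(F.col("Val1"))).over(W.partitionBy("time")))</code>. A plain-Python check of the identity against the expected values in the table above:

```python
import math

# Geometric mean via exp(mean(log(x))) -- only valid for positive values.
def geo_mean(values):
    return math.exp(sum(math.log(v) for v in values) / len(values))

print(geo_mean([3, 6, 8]))     # ~5.2414827884178  (time 1)
print(geo_mean([10, 21, 33]))  # ~19.065333718304  (time 2)
```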
<p>Thanks in advance!</p>
|
<python><pyspark>
|
2023-03-17 20:18:54
| 2
| 369
|
DataScience99
|
75,771,824
| 2,878,290
|
How to Expose Delta Table Data or Synapse Table to Rest API Service using Java/python
|
<p>We would like to expose the data from delta table or synapse table to REST API service which is java based or either python language. kindly provide sample for java or python to kick start my new implementation.</p>
<p>Thanks</p>
|
<python><java><web-services><azure-databricks><delta-lake>
|
2023-03-17 20:09:28
| 1
| 382
|
Developer Rajinikanth
|
75,771,782
| 5,527,646
|
Listing All Lambda Functions using AWS SDK for Python (boto3)
|
<p>I am trying to use <code>boto3</code> to get a list of all the lambdas I have in AWS. I will also need to get metrics from these lambdas (metrics are enabled in AWS console) but presently I just want to list them all. In addition, I just want to get the lambdas based on a filtered tag (ex: get all the lambdas with the tag mygroup:env=dev). This is what I have tried:</p>
<pre><code> import boto3
# Create CloudWatch client
cloudwatch = boto3.client('cloudwatch', region_name='us-west-2')
lambdas = boto3.client('lambda', region_name='us-west-2')
lambda_list = lambdas.list_functions(MasterRegion='us-west-2', FunctionVersion='ALL', MaxItems=123)
</code></pre>
<p>It returns the following, which essentially is an empty list of the Functions:</p>
<pre><code> {'ResponseMetadata': {'RequestId': 'eiopo-12mlk-12312-nm',
'HTTPStatusCode': 200,
'HTTPHeaders': {'date': 'Fri, 17 Mar 2023 19:43:00 GMT',
'content-type': 'application/json',
'content-length': '34',
'connection': 'keep-alive',
'x-amzn-requestid': 'e89098asf-123123-746b-a1d34-easd3213456'},
'RetryAttempts': 0},
'Functions': []
}
</code></pre>
<p>The Lambdas are there; I can see them in my AWS Lambdas dashboard. What am I doing wrong here and how can I apply a filter for my tags?</p>
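One possible cause (an assumption — I can't see the account): <code>MasterRegion</code> only matches Lambda@Edge replicas, so passing it can legitimately return an empty <code>Functions</code> list for ordinary functions; dropping it (and paging with <code>lambda_client.get_paginator("list_functions")</code>) may be enough. For the tag filter, <code>list_functions</code> does not return tags, so each function's tags must be fetched separately, e.g. via <code>lambda_client.list_tags(Resource=arn)</code>. The filtering step itself is plain Python, sketched here with fabricated sample data:

```python
# Sketch (assumption: `functions` comes from paging through
# list_functions, and `tags_by_arn` maps each FunctionArn to the
# dict returned by list_tags for that function).
def filter_by_tag(functions, tags_by_arn, key, value):
    return [f for f in functions
            if tags_by_arn.get(f["FunctionArn"], {}).get(key) == value]

functions = [{"FunctionName": "a", "FunctionArn": "arn:a"},
             {"FunctionName": "b", "FunctionArn": "arn:b"}]
tags_by_arn = {"arn:a": {"env": "dev"}, "arn:b": {"env": "prod"}}

matching = filter_by_tag(functions, tags_by_arn, "env", "dev")
print([f["FunctionName"] for f in matching])  # ['a']
```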
|
<python><amazon-web-services><aws-lambda><boto3>
|
2023-03-17 20:04:51
| 1
| 1,933
|
gwydion93
|
75,771,627
| 10,983,470
|
How to install pytorch for cpu with conda from tar.bz2 sources on Windows?
|
<p>There are many questions related to <code>pytorch</code> installation, I am aware. So far, though, none combines all my requirements, and their solutions don't work in this case:</p>
<p>I need to install torch in an isolated, CPU-only Windows environment that cannot access the internet. Ideally the solution will use <code>conda</code> (or <code>mamba</code> or <code>micromamba</code>). So far I have tried using <code>conda install --offline package_name.tar.bz2</code> after downloading the packages from anaconda.org.</p>
<p>Here is a reproducible failed attempt :</p>
<pre><code>> conda create -n last_test
> conda activate last_test
> conda install -y python==3.10
> conda install -y --offline pytorch-1.13.1-py3.10_cpu_0.tar.bz2 torchvision-0.14.1-py310_cpu.tar.bz2 cpuonly-2.0-0.tar.bz2
> conda -V
conda 22.9.0
> python -V
3.10.0
> python -c "import torch;"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "C:\Users\$user\AppData\Local\mambaforge\envs\pytorch_test\lib\site-packages\torch\__init__.py", line 139, in <module>
raise err
OSError: [WinError 126] Can't find specified module. Error loading "C:\Users\$user\AppData\Local\mambaforge\envs\pytorch_test\lib\site-packages\torch\lib\shm.dll" or one of its dependencies.
</code></pre>
|
<python><pytorch>
|
2023-03-17 19:44:46
| 1
| 1,783
|
cbo
|
75,771,565
| 7,190,421
|
Trying to remove all occurrences of a few patterns from a list
|
<p>So I have a list, something like this:</p>
<p><code>check = ['10.92.145.17/29', '10.92.145.25/29', '10.45.33.109/32', '10.202.1.113/32', '10.202.1.119/32', '10.202.1.122/32', '10.202.1.124/32', '10.202.1.126/32', '10.202.1.130/32', '10.202.1.132/32', '10.202.1.135/32', '10.202.1.136/32', '10.202.1.137/32', '10.202.1.139/32', '10.202.1.140/32', '10.202.1.141/32', '10.202.1.145/32', '10.202.1.146/32', '10.202.1.148/32', '10.202.1.149/32', '10.202.1.150/32', '10.202.1.151/32', '10.202.1.152/32', '10.202.1.154/32', '10.202.1.155/32', '10.202.1.156/32', '10.202.1.157/32', '10.202.1.158/32', '10.202.1.159/32', '10.202.1.160/32', '10.202.1.161/32', '10.202.1.162/32', '10.202.1.164/32', '10.202.1.165/32', '10.202.1.167/32', '10.202.1.168/32', '10.202.1.169/32', '10.202.1.170/32', '10.202.1.171/32', '10.202.1.172/32', '10.202.1.173/32', '10.202.1.174/32', '10.202.1.175/32', '10.202.1.176/32', '10.202.1.177/32', '10.202.1.178/32', '10.202.1.179/32', '10.202.1.180/32', '10.202.1.181/32', '10.202.1.182/32']</code></p>
<p>I'm trying to remove all the elements containing any of the elements of this list:</p>
<p><code>['/30', '/31', '/32']</code></p>
<p>For some reason, the list is being partially trimmed down, and I cannot quite understand why.</p>
<p>My code:</p>
<pre><code>
x = []
y = 30
while y <= 32:
    x.append('/' + str(y))  # To generate the list of elements
    y = y + 1
check = ['10.92.145.17/29', '10.92.145.25/29', '10.45.33.109/32', '10.202.1.113/32', '10.202.1.119/32', '10.202.1.122/32', '10.202.1.124/32', '10.202.1.126/32', '10.202.1.130/32', '10.202.1.132/32', '10.202.1.135/32', '10.202.1.136/32', '10.202.1.137/32', '10.202.1.139/32', '10.202.1.140/32', '10.202.1.141/32', '10.202.1.145/32', '10.202.1.146/32', '10.202.1.148/32', '10.202.1.149/32', '10.202.1.150/32', '10.202.1.151/32', '10.202.1.152/32', '10.202.1.154/32', '10.202.1.155/32', '10.202.1.156/32', '10.202.1.157/32', '10.202.1.158/32', '10.202.1.159/32', '10.202.1.160/32', '10.202.1.161/32', '10.202.1.162/32', '10.202.1.164/32', '10.202.1.165/32', '10.202.1.167/32', '10.202.1.168/32', '10.202.1.169/32', '10.202.1.170/32', '10.202.1.171/32', '10.202.1.172/32', '10.202.1.173/32', '10.202.1.174/32', '10.202.1.175/32', '10.202.1.176/32', '10.202.1.177/32', '10.202.1.178/32', '10.202.1.179/32', '10.202.1.180/32', '10.202.1.181/32', '10.202.1.182/32']
print(len(check))
s = []
for i in x:
for j in check:
if i in j:
print("True")
print(i)
print(j)
check.remove(j)
print(len(check))
print(check)
</code></pre>
<p>Any help is appreciated, since I'm stuck debugging this.</p>
<p>Instead of <code>if i in j</code>, I have also used <code>if i == j[-3:]</code>, also with the same partially trimmed result.</p>
<p>The result, just FYI:</p>
<p>['10.92.145.17/29', '10.92.145.25/29', '10.202.1.113/32', '10.202.1.122/32', '10.202.1.126/32', '10.202.1.132/32', '10.202.1.136/32', '10.202.1.139/32', '10.202.1.141/32', '10.202.1.146/32', '10.202.1.149/32', '10.202.1.151/32', '10.202.1.154/32', '10.202.1.156/32', '10.202.1.158/32', '10.202.1.160/32', '10.202.1.162/32', '10.202.1.165/32', '10.202.1.168/32', '10.202.1.170/32', '10.202.1.172/32', '10.202.1.174/32', '10.202.1.176/32', '10.202.1.178/32', '10.202.1.180/32', '10.202.1.182/32']</p>
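The partial trimming is characteristic of calling <code>remove()</code> on a list while iterating over it — each removal shifts the remaining items left, and the iterator then skips the next element. A sketch that builds a new list instead (<code>str.endswith</code> accepts a tuple of suffixes):

```python
# Build a new list rather than mutating `check` during iteration.
suffixes = tuple('/' + str(y) for y in range(30, 33))  # ('/30', '/31', '/32')

check = ['10.92.145.17/29', '10.45.33.109/32', '10.202.1.113/32']
check = [entry for entry in check if not entry.endswith(suffixes)]
print(check)  # ['10.92.145.17/29']
```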
|
<python>
|
2023-03-17 19:36:15
| 5
| 325
|
Yogesh Sudheer Modak
|
75,771,422
| 1,687,652
|
One entry point in WORKSPACE for managing various python project requirements
|
<p>My Bazel project structure is like this</p>
<pre><code>├── BUILD
├── WORKSPACE
├── example.py
├── reqs.bzl
└── requirements.txt
</code></pre>
<p>I have created my <code>WORKSPACE</code> like this</p>
<pre><code>load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
http_archive(
name = "rules_python",
sha256 = "a30abdfc7126d497a7698c29c46ea9901c6392d6ed315171a6df5ce433aa4502",
strip_prefix = "rules_python-0.6.0",
url = "https://github.com/bazelbuild/rules_python/archive/0.6.0.tar.gz",
)
load("reqs.bzl","load_reqs")
load_reqs()
</code></pre>
<p>I did that as I want to have one entry point in <code>WORKSPACE</code> from where I can manage dependencies for various python based projects. my <code>reqs.bzl</code> looks like this</p>
<pre><code>load("@rules_python//python:pip.bzl", "pip_parse")
load("@rules_python//python:pip.bzl", "compile_pip_requirements")
def load_reqs():
compile_pip_requirements(
name = "pyreqs",
requirements_in = "//:requirements.txt",
        visibility = ["//visibility:public"]
)
</code></pre>
<p>My <code>BUILD</code> file looks like</p>
<pre><code>load("@pyreqs//:requirements.bzl", "requirement")
py_binary(
name="example",
srcs=["example.py"],
deps=[
requirement("absl-py"),
requirement("numpy"),
requirement("scipy"),
requirement("matplotlib")
],
visibility=["//visibility:public"]
)
</code></pre>
<p>This above setup is obviously failing to build my example.</p>
<p>I tried <code>compile_pip_requirements</code> because I couldn't call <code>pip_parse</code>, then load and invoke <code>install_deps()</code>, from the <code>load_reqs()</code> function of <code>reqs.bzl</code>.</p>
<p>Appreciate any input from the community on how this has to be done correctly.</p>
<p>PS: Still climbing that Bazel learning curve and trying to figure out best practices. The primary motivation behind the above project setup is that I have to add a lot of Python targets, and I don't want to have a large <code>WORKSPACE</code> file that I frequently edit.</p>
<p>requirements.txt</p>
<pre><code>absl-py==1.4.0
cycler==0.11.0
kiwisolver==1.4.4
numpy==1.24.2
pillow
matplotlib==3.3.4
python-dateutil==2.8.2
pyparsing==3.0.9
pytz==2022.7.1
scipy==1.10.1
six==1.16.0
requests
</code></pre>
<p><code>bazel build //:example</code> gives error</p>
<pre><code>ERROR: Traceback (most recent call last):
File "/home/ubuntu/bazel-tests/python-ex-2/WORKSPACE", line 11, column 10, in <toplevel>
load_reqs()
File "/home/ubuntu/bazel-tests/python-ex-2/reqs.bzl", line 5, column 29, in load_reqs
compile_pip_requirements(
File "/home/ubuntu/.cache/bazel/_bazel_ubuntu/7d424a81034067ed3c4b3321d48bbdb4/external/rules_python/python/pip_install/requirements.bzl", line 40, column 21, in compile_pip_requirements
native.filegroup(
Error in filegroup: filegroup cannot be in the WORKSPACE file (used by //external:pyreqs)
ERROR: Error computing the main repository mapping: error loading package 'external': Package 'external' contains errors
</code></pre>
|
<python><bazel><bazel-python>
|
2023-03-17 19:20:14
| 1
| 561
|
a k
|
75,771,289
| 726,802
|
Issue while trying to run data script in django
|
<p>I have already checked this post, but it was of no use to me.</p>
<p><a href="https://stackoverflow.com/questions/16230487/module-object-is-not-callable-django">'module' object is not callable Django</a></p>
<pre><code>from auth_app.models import UserModel
from django.contrib.auth.hashers import make_password
def UserData(apps, schema):
UserModel(
username="username",
email_address="user@gmail.com",
is_active=1,
password=make_password("12345aA!")
).save();
</code></pre>
<blockquote>
<p>Error Message: TypeError: 'module' object is not callable</p>
</blockquote>
|
<python><django>
|
2023-03-17 19:02:15
| 2
| 10,163
|
Pankaj
|
75,771,263
| 3,826,115
|
Link DynamicMap to two streams in Holoviews
|
<p>I have the following code that does mostly what I want it to do: make a scatter plot colored by a value, and a linked DynamicMap that shows an associated timeseries with each Point when you tap on it</p>
<pre><code>import pandas as pd
import holoviews as hv
import panel as pn
from holoviews import streams
import numpy as np
hv.extension('bokeh')
df = pd.DataFrame(data = {'id':['a', 'b', 'c', 'd'], 'type':['d', 'd', 'h', 'h'], 'value':[1,2,3,4], 'x':range(4), 'y':range(4)})
points = hv.Points(data=df, kdims=['x', 'y'], vdims = ['id', 'type', 'value']).redim.range(value=(0,None))
options = hv.opts.Points(size = 10, color = 'value', tools = ['hover', 'tap'])
df_a = pd.DataFrame(data = {'id':['a']*5, 'hour':range(5), 'value':np.random.random(5)})
df_b = pd.DataFrame(data = {'id':['b']*5, 'hour':range(5), 'value':np.random.random(5)})
df_c = pd.DataFrame(data = {'id':['c']*10, 'hour':range(10), 'value':np.random.random(10)})
df_d = pd.DataFrame(data = {'id':['d']*10, 'hour':range(10), 'value':np.random.random(10)})
df_ts = pd.concat([df_a, df_b, df_c, df_d])
df_ts = df_ts.set_index(['id', 'hour'])
stream = hv.streams.Selection1D(source=points)
empty = hv.Curve(df_ts.loc['a']).opts(visible = False)
def tap_station(index):
if not index:
return empty
id = df.iloc[index[0]]['id']
return hv.Curve(df_ts.loc[id], label = str(id)).redim.range(value=(0,None))
ts_curve = hv.DynamicMap(tap_station, kdims=[], streams=[stream]).opts(framewise=True, show_grid=True)
pn.Row(points.opts(options), ts_curve)
</code></pre>
<p>However, there is one more thing I want to do: have the 'd' and 'h' Points have different marker shapes.</p>
<p>One thing I can do is change the options line to this:</p>
<pre><code>options = hv.opts.Points(size = 10, color = 'value', tools = ['hover', 'tap'],
marker = hv.dim("type").categorize({'d': 'square', 'h':'circle'}))
</code></pre>
<p>But with this, holoviews doesn't show a legend that distinguishes between the two marker shapes</p>
<p>The other thing I can do is something like this:</p>
<pre><code>df_d = df[df['type'] == 'd']
df_h = df[df['type'] == 'h']
d_points = hv.Points(data=df_d, kdims=['x', 'y'], vdims = ['id', 'type', 'value'], label = 'd').redim.range(value=(0,None)).opts(marker = 's')
h_points = hv.Points(data=df_h, kdims=['x', 'y'], vdims = ['id', 'type', 'value'], label = 'h').redim.range(value=(0,None)).opts(marker = 'o')
d_stream = hv.streams.Selection1D(source=d_points)
h_stream = hv.streams.Selection1D(source=h_points)
</code></pre>
<p>This gets me the legend I want, but then I'm not sure how to make a DynamicMap that is linked to both of those streams, and responds to clicks on both marker shapes.</p>
<p>Again, ultimately what I want is a Point plot with two marker shapes (based on <code>type</code>), colored by <code>value</code>, that responds to clicks by pulling up a timeseries plot to the right.</p>
<p>Thanks for any help!</p>
|
<python><bokeh><holoviews>
|
2023-03-17 18:59:15
| 1
| 1,533
|
hm8
|
75,771,210
| 577,652
|
Create a generic logger for function start and finish time in Python
|
<p>I want to create a logger decorator that will print before and after a function, in addition I want to add some information that if avaiable will appear in each line, for example most of our current logs look like:</p>
<pre><code>START: reading_data product_id=123, message_id=abc, timesteamp=123456789
</code></pre>
<p>Assuming that I want the logger to be generic, I can't assume that I will have all the parameters in the function I'm decorating; I also don't know whether I will get them as <code>args</code> or <code>kwargs</code>.
I know that I can use <code>if 'product_id' in locals()</code>, but that only works inside the <code>reading_data</code> function, not in the decorator. I also thought about adding <code>args</code> and <code>kwargs</code> to the log, but they could contain lots of data (big dictionaries or lists).</p>
<p>I want it to do something like:</p>
<pre class="lang-py prettyprint-override"><code>def log(func):
def decorate(*args, **kwargs):
text = ''
if product_id:
text += f'product_id={product_id}'
if message_id:
text += f'message_id={message_id}'
if timesteamp:
text += f'timesteamp={timesteamp}'
print(f'START: {func.__name__} {text}')
func(*args, **kwargs)
print(f'FINISH {func.__name__} {text}')
return decorate
@log
def reading_data(product_id, message_id, timesteamp=now()):
print('Do work')
</code></pre>
|
<python><python-decorators><python-logging>
|
2023-03-17 18:53:47
| 2
| 726
|
shlomiLan
|
75,771,154
| 19,980,284
|
Is it necessary to add a constant to a logit model run on categorical variables only?
|
<p>I have a dataframe that looks like this:
<img src="https://i.ibb.co/3zvJjnQ/Screen-Shot-2023-03-17-at-2-39-34-PM.png" alt="" /></p>
<p>And am running a logit model on fluid as dependent variable, and excluding <code>vp</code> and <code>perip</code>:</p>
<pre class="lang-py prettyprint-override"><code>model = smf.logit('''fluid ~ C(examq3_n, Treatment(reference = 2.0)) + C(pmhq3_n) + C(fluidq3_n) + C(mapq3_n, Treatment(reference = 3.0)) +
C(examq6_n, Treatment(reference = 2.0)) + C(pmhq6_n) + C(fluidq6_n) + C(mapq6_n, Treatment(reference = 3.0)) +
+ C(case, Treatment(reference = 2))''',
data = case1_2_vars).fit()
print(model.summary())
</code></pre>
<p>I get the following results:
<img src="https://i.ibb.co/3MFND46/Screen-Shot-2023-03-17-at-2-42-47-PM.png" alt="" /></p>
<p>I am wondering if I need to add a constant to the data and if so, how? I've tried adding a column to the dataframe called <code>const</code> which equals <code>1</code>, but when I then add <code>const</code> to the logit equation I get <code>LinAlgError: Singular Matrix,</code> and I don't know how to add it using <code>smf.add_constant()</code> because I have had to specify the categorical variables and their respective reference numbers in the equation, rather than defining <code>x</code> and <code>y</code> separately and simply inputting those into the <code>smf.logit()</code> call.</p>
<p>My questions are: a) do I need to add a constant, and b) how? There are some links online that seem to imply it might not be necessary for a categorical variable-based logit model, but I would rather do it if it's best practice.</p>
<p>I'm also wondering, does statsmodels automatically include a constant? Because <code>Intercept</code> is listed in the results.</p>
|
<python><pandas><logistic-regression><statsmodels>
|
2023-03-17 18:47:07
| 1
| 671
|
hulio_entredas
|
75,771,076
| 553,003
|
Importing script from local package during setuptools CustomInstall
|
<p>I have the following setup:</p>
<pre><code>`- MyLibrary
`- pyproject.toml
`- setup.cfg
`- setup.py
`- Package1
`- ...
`- Package2
`- ...
`- CodeGen
`- __init__.py
`- Generate.py
</code></pre>
<p>and I need the install script to run a <code>generate()</code> function from <code>CodeGen.Generate</code> to include some extra code in the distribution.</p>
<p>I have tried the following:</p>
<pre class="lang-py prettyprint-override"><code>from setuptools import setup, find_packages
from setuptools.command.install import install
from .CodeGen import generate
class CustomInstall(install):
def run(self):
install.run(self)
generate()
setup(
cmdclass={"install": CustomInstall},
name="MyLibrary",
packages=find_packages(),
)
</code></pre>
<p>With setuptools as my build backend. But running <code>pip install .</code> fails with the following error:</p>
<pre><code>ImportError: attempted relative import with no known parent package
</code></pre>
<p>on the line <code>from .CodeGen import generate</code>.</p>
<p>How can I make it so that the <code>CodeGen</code> script is importable in my <code>setup.py</code>? Note that it depends on some package dependencies, which are listed in the <code>[build-system]</code> <code>requires</code> field of my <code>pyproject.toml</code>.</p>
<p>Alternatively, how would I achieve this differently?</p>
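<p>One workaround I'm considering (a sketch, since <code>setup.py</code> is executed as a top-level script and therefore cannot use relative imports): put the project root on <code>sys.path</code> and import absolutely. A self-contained demonstration with a throwaway package standing in for my real <code>CodeGen</code>:</p>

```python
import os
import sys
import tempfile
import textwrap

# Build a throwaway CodeGen package to stand in for the real one.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "CodeGen")
os.makedirs(pkg)
open(os.path.join(pkg, "__init__.py"), "w").close()
with open(os.path.join(pkg, "Generate.py"), "w") as f:
    f.write(textwrap.dedent("""
        def generate():
            return "generated"
    """))

# setup.py runs as a script (no parent package), so relative imports
# fail; an absolute import works once the project root is on sys.path.
sys.path.insert(0, root)
from CodeGen.Generate import generate

print(generate())  # -> generated
```

<p>In the real <code>setup.py</code>, <code>root</code> would be <code>os.path.dirname(os.path.abspath(__file__))</code> and the import would be <code>from CodeGen.Generate import generate</code>. I don't know whether this plays well with build isolation, hence the question.</p>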
|
<python><python-import><setuptools><setup.py><python-packaging>
|
2023-03-17 18:38:19
| 0
| 9,477
|
Ptival
|
75,771,075
| 10,754,437
|
Run Python script in isolation
|
<p>I need to run a Python script from within bash in a way so it does <strong>not</strong> have access to any environment variables or system files/configurations. This is due to an AWS boto3 client that tries to read configuration information from the environment and/or a config file.</p>
<p>Searching only results in how to access environment variables.</p>
<p>Is what I am looking for possible, and if so how?</p>
|
<python><python-3.x><bash>
|
2023-03-17 18:38:15
| 1
| 2,597
|
SecretIndividual
|
75,771,046
| 6,348,055
|
Boto3 Dynamodb list_append Invalid type for parameter ExpressionAttributeNames type: <class 'set'>, valid types: <class 'dict'>
|
<p>Error message is throwing me. I have update_item updating an existing list. When I put everything in the UpdateExpression, it works fine. When I use the equivalent with ExpressionAttributeNames, I get the error in the title. They look equivalent to me, but hopefully someone can point out my error.</p>
<p>This works fine:</p>
<pre><code>update_resp = table.update_item(Key={
'client': client,
'run_date': today
},
UpdateExpression=f"SET results.{test_id}.trial_area = list_append(results.{test_id}.trial_area, :r)",
ExpressionAttributeValues={':r': prior_test_results})
</code></pre>
<p>This gives the error:</p>
<pre><code>update_resp = table.update_item(Key={
'client': client,
'run_date': today
},
UpdateExpression=f"SET #res = list_append(#res, :r)",
ExpressionAttributeNames= {'#res', f'results.{test_id}.trial_area'},
ExpressionAttributeValues={':r': prior_test_results})
</code></pre>
|
<python><amazon-web-services><amazon-dynamodb><boto3>
|
2023-03-17 18:35:37
| 1
| 307
|
Couch
|
75,770,930
| 2,590,824
|
post request to SOLR in json format with facet using python returns response 400 (Unknown top-level key in JSON request : facet.field)
|
<p>I am using python (Django) to post requests to SOLR (9.0.0) in json format (<a href="https://solr.apache.org/guide/8_1/json-request-api.html" rel="nofollow noreferrer">JSON request API</a>) to retrieve the data. When using faceting, I am facing a problem probably due to json format of the request. Can anyone please help?</p>
<p>My aim is to search any two lettered lowercase words (using regex <code>[a-z]{2}</code>) from the field 'sentence' from a core.</p>
<p>This query runs correctly from SOLR admin (URL decoded) and gives the desired result:</p>
<p><code>http://localhost:8983/solr/lexdb_genl93/select?facet.field=sentence&facet.matches=[a-z]{2}&facet.query=sentence:/[a-z]{2}/&facet=true&indent=true&q.op=OR&q=*:*&rows=0&start=0</code></p>
<p>My request json which returns 400 (<strong>Unknown top-level key in JSON request : facet.field</strong>):</p>
<pre class="lang-json prettyprint-override"><code>{
"query":"*:*",
"limit":0,
"facet":"true",
"facet.field":"sentence",
"facet.matches":"[a-z]{2}",
"facet.query":"sentence:/[a-z]{2}/"
}
</code></pre>
<p>The python code:</p>
<pre class="lang-python prettyprint-override"><code>import requests
prerule= '[a-z]{2}'
solr_query= {"query": "*:*", 'limit':0, "facet": "true", "facet.field": "sentence", "facet.matches": prerule, "facet.query": "sentence:/"+prerule+"/"}
solr_headers = {'Content-type': 'application/json'}
resp = requests.post(lexsettings.solr_url+'/query', data=json.dumps(solr_query), headers=solr_headers, auth=(lexsettings.solr_user, lexsettings.solr_pass))
jresp= json.loads(resp.content)
print('solrlist: '+str(jresp))
</code></pre>
<p>Error portion of the output (of <code>print</code> statement):</p>
<pre class="lang-json prettyprint-override"><code>"error":{
"metadata":[
"error-class",
"org.apache.solr.common.SolrException",
"root-error-class",
"org.apache.solr.common.SolrException"
],
"msg":"Unknown top-level key in JSON request : facet.field",
"code":400
}
</code></pre>
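<p>A sketch of one possible fix, based on my reading that the JSON Request API lets traditional parameters it doesn't recognise be passed under a <code>params</code> key (field names match my schema above):</p>

```python
import json

prerule = '[a-z]{2}'

# The JSON Request API only understands its own top-level keys ("query",
# "limit", "facet", ...). My understanding is that traditional parameters
# such as facet.field can instead be passed under the "params" key.
solr_query = {
    "query": "*:*",
    "limit": 0,
    "params": {
        "facet": "true",
        "facet.field": "sentence",
        "facet.matches": prerule,
        "facet.query": f"sentence:/{prerule}/",
    },
}

payload = json.dumps(solr_query)
print(payload)
```

<p>The request itself would then be posted exactly as in my code above, with <code>data=payload</code>. I haven't confirmed this against Solr 9 specifically.</p>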
<p>Thank you for reading this far. Please let me know if you need any further information.</p>
|
<python><json><post><solr><facet>
|
2023-03-17 18:21:00
| 2
| 7,999
|
sariDon
|
75,770,878
| 7,535,556
|
Filter text in PDF by font with Borb using regex
|
<p>I am trying to extract text using Borb from a PDF, and I can see there is a clear example to extract text with font names:</p>
<pre><code> # create FontNameFilter
l0: FontNameFilter = FontNameFilter("Helvetica")
# filtered text just gets passed to SimpleTextExtraction
l1: SimpleTextExtraction = SimpleTextExtraction()
l0.add_listener(l1)
# read the Document
doc: typing.Optional[Document] = None
with open("UMIR-01032023-EN_4.pdf", "rb") as in_file_handle:
    doc = PDF.loads(in_file_handle, [l0])
# check whether we have read a Document
assert doc is not None
# print the names of the Fonts
print(l1.get_text()[0])
</code></pre>
<p>I wanted to know if there is a way to match font names with a regex. For example, if font name one is <code>ABCD-Font</code>
and font name two is <code>ABCD-Bold-Font</code>, how can I extract the text for both?</p>
|
<python><pdf-extraction><borb>
|
2023-03-17 18:13:08
| 1
| 357
|
IamButtman
|
75,770,801
| 1,813,491
|
Pydantic does not validate as expected in a simple model
|
<p>I am learning pydantic for the first time, using the example below, but I don't get any exception message as I expect, and the instance is still created. What's the problem here?
I am using pydantic 0.18.2 on Python 3.8.10 in a notebook in a VS Code environment:</p>
<pre><code>from pydantic import BaseModel, ValidationError
# Now defining the Person class
class Person(BaseModel):
name: str
age: int
is_married: bool
data ={
'name': 32,
'age': True,
'is_married': 'hhh'
}
</code></pre>
<p>Now the try and exception block:</p>
<pre><code>try:
# Now to parse the data and validate it
person = Person(**data)
except ValidationError as e:
print(e)
</code></pre>
<p>Now output. Looking into person:</p>
<pre><code>person
</code></pre>
<p>It shows this result:</p>
<pre><code><Person name='32' age=1 is_married=False>
</code></pre>
<p>What should I fix here? How to prevent automatic change of data types?
Thank you very much for your help.</p>
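<p>As a sketch of what I've been experimenting with (assuming the <code>Strict*</code> field types available in recent pydantic versions, which may not all exist on 0.18.2), strict types appear to opt out of the automatic coercion:</p>

```python
from pydantic import BaseModel, StrictBool, StrictInt, StrictStr, ValidationError

# Strict* field types opt out of pydantic's default type coercion, so
# wrong types raise instead of being silently converted (StrictInt also
# rejects bools, even though bool is a subclass of int).
class Person(BaseModel):
    name: StrictStr
    age: StrictInt
    is_married: StrictBool

try:
    Person(name=32, age=True, is_married='hhh')
except ValidationError as e:
    print(e)  # reports one error per mis-typed field
```

<p>If this is the intended mechanism, it would suggest my original model is working as designed and coercion, not validation, is what I'm seeing.</p>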
|
<python><python-3.x><pydantic>
|
2023-03-17 18:04:50
| 0
| 431
|
BobbyF
|
75,770,780
| 3,123,109
|
Returning queryset that uses a foreign key of a foreign key relationship
|
<p>It sounds simple enough: get all the Users for a specific Company. But it is complicated by a few things:</p>
<ul>
<li>The User model is extended</li>
<li>The model that extends it contains a UUID that uniquely identifies the user throughout various system integrations</li>
<li>This UUID is what is used in the company relationship.</li>
</ul>
<p>So the relationship is <code>user_to_company__user_uuid</code> -> <code>user_extended__user_id</code> -> <code>auth_user</code>, and what I'd like to return is the User model.</p>
<p>What I have:</p>
<pre><code># url
/api/user_to_company/?company_uuid=0450469d-cbb1-4374-a16f-dd72ce15cf67
</code></pre>
<pre><code># views.py
class UserToCompanyViewSet(mixins.ListModelMixin,
mixins.RetrieveModelMixin,
viewsets.GenericViewSet):
filter_backends = [StrictDjangoFilterBackend]
filterset_fields = [
'id',
'user_uuid',
'company_uuid'
]
permission_classes = [CompanyPermissions]
def get_queryset(self):
if self.request.GET['company_uuid']:
queryset = User.objects.filter(
user_to_company__company_uuid=self.request.GET['company_uuid'])
return queryset
def get_serializer_class(self):
if self.request.GET['company_uuid']:
serializer_class = UserSerializer
return serializer_class
</code></pre>
<pre><code># serializers.py
class UserToCompanySerializer(serializers.ModelSerializer):
class Meta:
model = UserToCompany
fields = '__all__'
class UserExtendedSerializer(serializers.ModelSerializer):
class Meta:
model = UserExtended
fields = '__all__'
</code></pre>
<pre><code># models.py
class UserExtended(models.Model):
user_id = models.OneToOneField(
User, on_delete=models.CASCADE, db_column='user_id')
uuid = models.UUIDField(primary_key=True, null=False)
class Meta:
db_table = 'user_extended'
class UserToCompany(models.Model):
user_uuid = models.ForeignKey(
UserExtended, on_delete=models.CASCADE, db_column='user_uuid', related_name='user_to_company')
company_uuid = models.ForeignKey(
Companies, on_delete=models.CASCADE, db_column='company_uuid', related_name='user_to_company')
class Meta:
db_table = 'user_to_company'
unique_together = [['user_uuid', 'company_uuid']]
</code></pre>
<p>Understandably, in this setup <code>User.objects.filter(user_to_company__company_uuid=self.request.GET['company_uuid'])</code> doesn't make sense, and it returns <code>django.core.exceptions.FieldError: Cannot resolve keyword 'user_to_company' into field</code>. There isn't a relationship between User and UserToCompany directly, but there is between User -> UserExtended -> UserToCompany.</p>
<p>I could likely do this by using <code>UserExtended.objects.filter()</code>, but that returns an object like:</p>
<pre><code>[
{
"user_extended_stuff_1": "stuff",
"user_extended_stuff_2": "more stuff",
"auth_user": {
"auth_user_stuff_1": "stuff",
"auth_user_stuff_2": "more stuff"
}
}
]
</code></pre>
<p>But I need an object like:</p>
<pre><code>[
{
"auth_user_stuff_1": "stuff",
"auth_user_stuff_2": "more stuff",
"user_extended": {
"user_extended_stuff_1": "stuff",
"user_extended_stuff_2": "more stuff"
}
}
]
</code></pre>
<p>Is there a way to implement "foreign key of a foreign key" lookup?</p>
<p>I think a workaround would be to get the list of users and then do something like <code>User.objects.filter(user_ext__user_uuid__in=[queryset])</code></p>
|
<python><django><django-models><django-rest-framework><django-viewsets>
|
2023-03-17 18:02:09
| 2
| 9,304
|
cheslijones
|
75,770,564
| 900,078
|
Why "from . import re" fails but "import re; from . import re" succeeds?
|
<p>I'm playing with Python's relative import and the following result (with <strong>Python 3.7.3</strong> on Debian 10.12) surprises me:</p>
<pre class="lang-none prettyprint-override"><code>$ python3
>>> from . import re
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: cannot import name 're' from '__main__' (unknown location)
>>> import re
>>> re
<module 're' from '/usr/lib/python3.7/re.py'>
>>> from . import re
>>> re
<module 're' from '/usr/lib/python3.7/re.py'>
>>>
</code></pre>
<p>Why <code>from . import re</code> succeeds after <code>import re</code>? What's happening?</p>
<hr />
<h4>UPDATE:</h4>
<p><strong>Python 3.8.2</strong> on macOS 13.2 gives different result. Maybe a <strong>Python 3.7</strong> bug?</p>
<pre class="lang-none prettyprint-override"><code>>>> from . import re
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: attempted relative import with no known parent package
>>> import re
>>> re
<module 're' from '/Library/Developer/CommandLineTools/Library/Frameworks/
Python3.framework/Versions/3.8/lib/python3.8/re.py'>
>>> from . import re
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ImportError: attempted relative import with no known parent package
>>> re
<module 're' from '/Library/Developer/CommandLineTools/Library/Frameworks/
Python3.framework/Versions/3.8/lib/python3.8/re.py'>
>>>
</code></pre>
|
<python><python-3.x>
|
2023-03-17 17:33:23
| 1
| 21,248
|
pynexj
|
75,770,303
| 9,226,093
|
Get name of class from class type in python
|
<p>Given a class type, I'd like to get its name and print it to console. I have access to the class type but not an instance.</p>
<pre><code>class Foo:
pass
def print_class_name(klass):
print(klass.__some_dunder?__) # don't know how to do this
print_class_name(Foo) # would like this to print "Foo"
</code></pre>
<p>Bonus:
I would also like to access inherited types and generic type, so, for example, if I have <code>list[Foo]</code> I could access "list" and "Foo" as separate strings.</p>
<p>I can directly print the class like <code>print(Foo)</code> but that gives me <code><class '__main__.Foo'></code> and I'd like to avoid string manipulation if possible.</p>
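<p>A sketch of what I've pieced together so far (the bonus part assumes Python 3.9+ for the <code>list[Foo]</code> syntax):</p>

```python
from typing import get_args, get_origin

class Foo:
    pass

def print_class_name(klass):
    # __name__ is the bare class name; __qualname__ would include nesting.
    print(klass.__name__)

print_class_name(Foo)  # -> Foo

# Bonus: typing.get_origin / get_args split a parameterized generic into
# its container and its arguments.
generic = list[Foo]
origin = get_origin(generic)   # <class 'list'>
args = get_args(generic)       # (<class 'Foo'>,)
print(origin.__name__, [a.__name__ for a in args])  # -> list ['Foo']
```

<p>I don't know whether this generalises to every typing construct (e.g. <code>Optional</code>, unions), which is part of why I'm asking.</p>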
|
<python><python-3.x><typing>
|
2023-03-17 17:01:59
| 0
| 776
|
jacob_g
|
75,770,293
| 1,800,838
|
Python YAML *not* preserving order in lists
|
<p>We're using PyYAML version 5.3.1 under Python 3.7.</p>
<p>We're finding that the order of lists is not being preserved.</p>
<p>For example, assume that in the file <code>example.yaml</code>, we have the following ...</p>
<pre><code>---
data:
- start
- next
...
</code></pre>
<p>And suppose that our Python 3.7 program looks like this:</p>
<pre><code>import yaml
with open('example.yaml', 'r') as f:
input_data = f.read()
datadict = yaml.load(input_data, Loader=yaml.FullLoader)
data = datadict['data']
print(f'{data}')
</code></pre>
<p>When we run this program with the same input data on different machines and in different environments (command line, daemon, REST call, <em>etc</em>.), sometimes it prints out this:</p>
<pre><code>['start', 'next']
</code></pre>
<p>... and sometimes it prints out this:</p>
<pre><code>['next', 'start']
</code></pre>
<p>It's almost as if YAML is initially storing the list elements in a set and then converting that to a list, because element ordering of a set is not guaranteed. Or perhaps YAML sometimes tries to sort the data that goes into a list.</p>
<p>And we get the same behavior with <code>yaml.SafeLoader</code> instead of <code>yaml.FullLoader</code>.</p>
<p>Also, if we put a <code>print(input_data)</code> statement before the <code>yaml.load</code> statement, we <em><strong>always</strong></em> see the data in the correct order in the output of that <code>print(input_data)</code> statement, although the list ordering set by YAML still varies as described above.</p>
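<p>For reference, here is the minimal self-contained check I would expect to always pass with stock PyYAML, since a YAML sequence maps to a plain Python list and lists preserve order:</p>

```python
import yaml

doc = """\
---
data:
  - start
  - next
"""

# Stock PyYAML turns a YAML sequence into a Python list in document
# order; if this ever failed, the yaml module in use would not be the
# stock one.
parsed = yaml.load(doc, Loader=yaml.SafeLoader)
print(parsed["data"])  # -> ['start', 'next']
```

<p>This passes in every environment I control, which is why I suspect something in the REST server's import chain rather than PyYAML itself.</p>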
<p>Has anyone seen this behavior? And if so, what could be causing it, and how can it be corrected so that our list ordering can be maintained?</p>
<p>Thank you in advance.</p>
<p><strong>UPDATE</strong>: Responding to the latest comments ...</p>
<p>I tried <code>assert data[0] == 'start'</code> as suggested, and it indeed fails during those times when the list ordering fails.</p>
<p>I also tried this:</p>
<pre><code>for item in data:
print(item)
</code></pre>
<p>... and it also prints the items in the same incorrect order when the f-string printout shows the same thing.</p>
<p>Regarding the question of where this code is running: it's within the following Redhat linux environment:</p>
<pre><code>% uname -a
Linux [HOSTNAME] 3.10.0-1160.81.1.el7.x86_64 #1 SMP Thu Nov 24 12:21:22 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
</code></pre>
<p>In one case, this python code is running from the command line, and it always works properly.</p>
<p>In the other case, it's running from a REST server which is resident on the same machine. In this REST-server case, the order of the list is changed to what seems to be alphabetical order.</p>
<p>In both cases, it's Python 3.7.5, and in both cases, it's PyYAML 5.3.1. And yes, I now agree that some subsidiary package imported by the REST server's Python module probably is indeed altering the behavior of PyYAML.</p>
<p>But does anyone know what python package could cause the ordering of list elements to be altered? We're using Flask within the REST server, and at first I wondered whether that could be responsible. However, none of the other lists in our software have reordered elements when running within that same module within that same Flask-based REST server.</p>
<p>The large company at which I'm now working has tight controls over the available software libraries that we can use, including python packages. We have to use PyYAML from our company's software repository. And although it's theoretically just an instance of the standard PyYAML 5.3.1 package, perhaps it has been altered in some way by our "Software Security" team. And again, it indeed could be that some subsidiary package used by PyYAML might have been altered at our installation such that it changes lists to ordered lists under certain conditions, or temporarily uses sets to hold list data before converting back to lists.</p>
<p>Anyway, it seems that I'm simply out of luck with the company's PyYAML package, and so I think that my only solution will be to get the source code for ruamel.yaml or some other YAML implementation and include a copy of that source code into the module I'm working on.</p>
<p>Thanks to all of you for all your help and feedback!</p>
<p>I'll keep this question open for a while longer, in case any new information might surface.</p>
<p><strong>PS</strong>: The data that is being read via PyYAML is configuration data for a program. Another solution might be to simply abandon YAML altogether here and switch to JSON or to some configuration-management tool.</p>
|
<python><list><yaml>
|
2023-03-17 17:01:09
| 0
| 2,350
|
HippoMan
|
75,770,281
| 12,042,094
|
Azure DataBricks ImportError: cannot import name dataclass_transform
|
<p>I have a python notebook running the following imports on a DataBricks cluster</p>
<pre><code>%pip install presidio_analyzer
%pip install presidio_anonymizer
import spacy.cli
spacy.cli.download("en_core_web_lg")
nlp = spacy.load("en_core_web_lg")
import csv
import pprint
import collections
from typing import List, Iterable, Optional, Union, Dict
import pandas as pd
from presidio_analyzer import AnalyzerEngine, BatchAnalyzerEngine, RecognizerResult, DictAnalyzerResult
from presidio_anonymizer import AnonymizerEngine
from presidio_anonymizer.entities import EngineResult
</code></pre>
<p>To install and run the Microsoft Presidio library to anonymise data.</p>
<p>The code works fine and runs when called through the Databricks notebooks UI, but when attempting to call this notebook as a step in Azure Data Factory pipelines, it gives the following error:</p>
<pre><code>"runError": "ImportError: cannot import name dataclass_transform"
</code></pre>
<p>From trial and error in the Databricks UI, I can determine that this error was generated due to missing certain parts of the imported libraries but the commands given at the beginning of the code resolved this in DataBricks notebooks.</p>
<p>I cannot reason why this step will not work when called as an ADF step.</p>
|
<python><azure><azure-data-factory><databricks><presidio>
|
2023-03-17 17:00:05
| 2
| 486
|
RAH
|
75,770,248
| 2,373,145
|
Eager map in Python
|
<p>In Python the <code>map</code> function is lazy, but most often I need an eager map.</p>
<p>For example, trying to slice a map object results in an error:</p>
<pre><code>>>>> map(abs, [3, -1, -4, 1])[1:]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'map' object is not subscriptable (key slice(1, None, None))
</code></pre>
<p>I guess I need to implement an eager map myself, so I wonder whether there's a standard way of doing that in Python.</p>
<p>I managed to implement it in a few different ways, but I'm not sure what alternative should be preferred. I'm asking for both CPython and for PyPy3, if the answer varies depending on the Python implementation, I'd prefer to know about all relevant options.</p>
<p>These are my implementations:</p>
<pre><code>def eager_map_impl0(f, *collections):
return list(map(f, *collections))
def eager_map_impl1(f, *collections):
return [x for x in map(f, *collections)]
def eager_map_impl2(f, *collections):
return [*map(f, *collections)]
def eager_map_impl3(f, *collections):
return [f(*x) for x in zip(*collections)]
</code></pre>
<p>Usage example:</p>
<pre><code>>>>> eager_map_impl0(abs, [3, -1, -4, 1])[1:]
[1, 4, 1]
>>>> eager_map_impl1(abs, [3, -1, -4, 1])[1:]
[1, 4, 1]
>>>> eager_map_impl2(abs, [3, -1, -4, 1])[1:]
[1, 4, 1]
>>>> eager_map_impl3(abs, [3, -1, -4, 1])[1:]
[1, 4, 1]
</code></pre>
<p>Regarding the duplicate vote, the linked question and some of its answers are interesting, but not an answer here, I think. I already know I want to use <code>map</code>, not list comprehensions; so I was hoping someone would say what is the most performant implementation in CPython vs Pypy as an answer here.</p>
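<p>In case it helps frame the performance question, here is a rough timing harness with the candidates specialised to a single iterable (absolute numbers vary by machine and interpreter, so treat it as a sketch, not a verdict):</p>

```python
import timeit

data = list(range(100_000))

# The candidate implementations from above, specialised to one iterable
# so the comparison is like-for-like.
candidates = {
    "list(map(f, xs))":        lambda: list(map(abs, data)),
    "[x for x in map(f, xs)]": lambda: [x for x in map(abs, data)],
    "[*map(f, xs)]":           lambda: [*map(abs, data)],
    "[f(x) for x in xs]":      lambda: [abs(x) for x in data],
}

# Sanity check: all four produce the same list.
results = [fn() for fn in candidates.values()]
assert all(r == results[0] for r in results)

for name, fn in candidates.items():
    print(f"{name:26s} {timeit.timeit(fn, number=50):.4f}s")
```

<p>On PyPy the JIT warm-up means <code>timeit</code> numbers need a larger <code>number=</code> to be meaningful, as far as I understand.</p>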
|
<python><python-3.x><functional-programming><lazy-evaluation><lazy-sequences>
|
2023-03-17 16:56:31
| 1
| 363
|
user2373145
|
75,770,169
| 470,062
|
How do I check that an object is a class definition (In Python 3)?
|
<p>The following code prints <code>False</code>:</p>
<pre><code>from enum import Enum
type_ = str
print(issubclass(type_, Enum))
</code></pre>
<p>The following code prints <code>True</code>:</p>
<pre><code>from enum import Enum
type_ = Enum('MyEnum', {'a': 1, 'b': 2})
print(issubclass(type_, Enum))
</code></pre>
<p>The following code throws an error:</p>
<pre><code>from enum import Enum
from typing import List
type_ = List[str]
print(issubclass(type_, Enum))
# TypeError: issubclass() arg 1 must be a class
</code></pre>
<p>Is there a better way than a try / except to check that an object is a class definition? Something like:</p>
<pre><code>print(isclass(type_) and issubclass(type_, Enum))
</code></pre>
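<p>A sketch of the guard I have in mind, using <code>inspect.isclass</code> (which is essentially <code>isinstance(obj, type)</code>):</p>

```python
import inspect
from enum import Enum
from typing import List

def is_enum_class(obj) -> bool:
    # inspect.isclass filters out typing constructs such as List[str],
    # which are not classes and make issubclass() raise TypeError.
    return inspect.isclass(obj) and issubclass(obj, Enum)

MyEnum = Enum('MyEnum', {'a': 1, 'b': 2})
print(is_enum_class(str))        # -> False
print(is_enum_class(MyEnum))     # -> True
print(is_enum_class(List[str]))  # -> False
```

<p>Is this the idiomatic way, or is there a cleaner built-in check I'm missing?</p>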
|
<python><python-3.x>
|
2023-03-17 16:48:15
| 0
| 7,891
|
tofarr
|
75,770,005
| 11,594,202
|
Mapping class methods of python class
|
<p>I created a message class, which stores an arbitrary message like so:</p>
<pre><code>class MessageBase:
"""Wrapper class for messages from different resources"""
def __init__(
self,
subject,
body,
sender,
recipient,
received,
to
):
self.subject = subject
self.body = body
self.sender = sender
self.recipient = recipient
self.received = received
self.to = to
</code></pre>
<p>Messages can come from different sources, and I want my class to be flexible with these resources. I want to create such a syntax:</p>
<pre><code>message1 = Message.from_source(source='outlook', raw_outlook_message)
message2 = Message.from_source(source='sendinblue_email', raw_sendinblue_message)
message1.perform_action()
message2.perform_action()
</code></pre>
<p>To achieve this, I created several class methods, and want to map them as such:</p>
<pre><code> self.sources = {
'outlook': self.from_outlook,
'sendinblue_email': self.from_sendinblue_email,
}
@classmethod
def from_source(cls, source_name, *args):
return cls.sources[source_name.lower()](*args)
@classmethod
def from_outlook(cls, raw_message: dict):
return cls(<parse_outlook_message>)
@classmethod
def from_sendinblue_email(cls, raw_message: dict):
return cls(<parse_outlook_message>)
</code></pre>
<p>I have one issue with this setup and one concern. The issue is that I cannot access the sources dictionary of <code>self.sources</code>, and I do not know how to achieve this functionality. The concern is whether this is a good way to code a class, or whether this goes against some practices.</p>
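<p>A sketch of one variant I've tried, building the mapping inside the classmethod so no instance is needed (the message field names here are hypothetical placeholders for the real parsing):</p>

```python
class Message:
    def __init__(self, subject, body):
        self.subject = subject
        self.body = body

    @classmethod
    def from_source(cls, source_name, raw):
        # Build the dispatch table lazily from classmethods, so it is
        # available without an instance (unlike self.sources).
        sources = {
            'outlook': cls.from_outlook,
            'sendinblue_email': cls.from_sendinblue_email,
        }
        return sources[source_name.lower()](raw)

    @classmethod
    def from_outlook(cls, raw):
        # Hypothetical field names for a raw Outlook message dict.
        return cls(subject=raw['Subject'], body=raw['Body'])

    @classmethod
    def from_sendinblue_email(cls, raw):
        # Hypothetical field names for a raw Sendinblue message dict.
        return cls(subject=raw['subject'], body=raw['html_content'])

msg = Message.from_source('outlook', {'Subject': 'hi', 'Body': 'text'})
print(msg.subject)  # -> hi
```

<p>Rebuilding the dict on each call feels wasteful, though, which is why I'd like to know the idiomatic pattern.</p>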
|
<python><python-class>
|
2023-03-17 16:29:21
| 2
| 920
|
Jeroen Vermunt
|
75,769,948
| 17,741,308
|
Tensorflow-datasets "plant_village" dataset no examples were yield
|
<p>My Environment:</p>
<p>Python 3.11.2, Tensorflow 2.12.0-rc1, Tensorflow-datasets 4.8.3, in a perfectly clean, newly created virtual environment in Visual Studio Code, with the only operations performed being <code>pip install</code> of the two libraries above.</p>
<p>The import statements :</p>
<pre><code>import tensorflow as tf
import tensorflow_datasets as tfds
</code></pre>
<p>runs quickly without bugs. However, the next line:</p>
<pre><code>dataset, info = tfds.load('plant_village', split='train', with_info=True)
</code></pre>
<p>yields the error:</p>
<p><a href="https://i.sstatic.net/7fpWN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/7fpWN.png" alt="enter image description here" /></a></p>
<p>Attempts made to access other datasets, like MNIST, are successful. Is this a bug specific to this dataset? How do I overcome this bug and load the dataset? Note that exactly the same lines of code work perfectly on Google Colab.</p>
|
<python><python-3.x><tensorflow><tensorflow2.0><tensorflow-datasets>
|
2023-03-17 16:23:07
| 2
| 364
|
温泽海
|
75,769,839
| 5,410,188
|
Python - Pillow - Can't save resized image - Error seek of closed file
|
<p>I've been at this for a couple of hours. I'm trying to resize an image via pillow and save it but I continue to get the error <code>Error seek of closed file</code> and I'm not sure why. I've looked at numerous tutorials and it seems to work fine for them. When I run in the ipython shell, with <code>.show()</code> it seems to run fine.</p>
<p>From my research/understanding, it's throwing an error because it believes the file is still open when actions are performed on it, but these actions should be fine since I'm interacting with the image. Any help is appreciated.</p>
<p>I've also tried running with <code>with Image.open(file) as img:</code> and indenting the code below and removing the <code>img.close()</code> line but I still get the same error.</p>
<p>Here is my code:</p>
<pre><code>def resize_image(file):
img = Image.open(file) #opens image
#new dims to resize to
orig_height = int(img.height)
orig_width = int(img.width)
orig_height_to_140_ratio = orig_height / 140
new_height = 140
new_width = int(orig_width / orig_height_to_140_ratio)
resized_img = img.resize( #resize
(new_width, new_height), resample=Image.BICUBIC)
new_image = resized_img.save(new_name) #save
img.close() #close open image
return new_image #returns the image
</code></pre>
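<p>For comparison, here is a variant I've sketched where all work that needs pixel data happens inside the <code>with</code> block (function and parameter names are mine, not from any tutorial):</p>

```python
from PIL import Image

def resize_image(src_path, dst_path, target_height=140):
    # Do everything that needs pixel data inside the with-block; resize()
    # returns a new, independent image, so saving it after the source
    # file is closed should be safe.
    with Image.open(src_path) as img:
        ratio = img.height / target_height
        new_size = (int(img.width / ratio), target_height)
        resized = img.resize(new_size, resample=Image.BICUBIC)
    resized.save(dst_path)
    return resized
```

<p>Note that <code>save()</code> returns <code>None</code>, so in my original code <code>new_image</code> was never the image; returning the resized object itself seems to be the way.</p>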
|
<python><python-imaging-library><image-resizing>
|
2023-03-17 16:10:31
| 1
| 1,542
|
Cflux
|
75,769,820
| 6,734,243
|
How to prevent matplotlib to be shown in popup window?
|
<p>I use matplotlib in my lib to display a legend on an ipyleaflet map. In my CI/CD tests I run several checks on this legend (values displayed, colors, etc...). My problem is that when the tests run on my local computer, matplotlib opens a legend popup window that stops the execution of the tests.</p>
<p>Is it possible to force matplotlib to remain non-interactive when I run my pytest session?</p>
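<p>One approach I've been testing (a sketch): force the non-interactive <code>Agg</code> backend before pyplot is imported, e.g. in a <code>conftest.py</code> or via the <code>MPLBACKEND=Agg</code> environment variable:</p>

```python
import matplotlib
matplotlib.use("Agg")  # must happen before pyplot is imported anywhere
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1], [1, 0], label="legend entry")
ax.legend()
plt.show()   # under Agg this opens no window (at most a warning)
plt.close(fig)
print(matplotlib.get_backend())
```

<p>I'm unsure whether this interferes with ipyleaflet's own rendering, which is why I'm asking for the recommended way.</p>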
|
<python><matplotlib>
|
2023-03-17 16:08:51
| 1
| 2,670
|
Pierrick Rambaud
|
75,769,816
| 10,028,567
|
How do I retrieve a password from Keychain Access using keyring?
|
<p>"Keeping credentials safe in Jupyter Notebooks" (<a href="https://towardsdatascience.com/keeping-credentials-safe-in-jupyter-notebooks-fbd215a8e311" rel="nofollow noreferrer">https://towardsdatascience.com/keeping-credentials-safe-in-jupyter-notebooks-fbd215a8e311</a>) says that "Keyring integrates with your OS keychain manager... You would use it as"</p>
<pre><code>import keyring
keyring.get_keyring()
keyring.get_password("system", "username")
</code></pre>
<p>On Mac Ventura 13.2.1, I open Keychain Access, click "Create a new Keychain item.", enter Keychain Item Name "PostgreSQL_Password", account name "tslever", and password "my_password". I restart Anaconda Navigator, JupyterLab, and a Jupyter Notebook. I enter</p>
<pre><code>import keyring
keyring.get_keyring()
print(keyring.get_password("PostgreSQL_Password", "tslever"))
</code></pre>
<p><code>None</code> is printed.</p>
|
<python><jupyter-lab><keychain><python-keyring>
|
2023-03-17 16:08:22
| 0
| 437
|
Tom Lever
|
75,769,777
| 2,783,767
|
how to save best model in timeseries tsai
|
<p>I am using the tsai <code>TSClassifier</code> to train on my data. However, I don't know how to save the best model across all epochs, like <code>ModelCheckpoint</code> in Keras/TensorFlow; among all epochs, I want to save the model with the best <code>val_loss</code> on the test set.</p>
<p>Below is my code. Can somebody please let me know how to save the best model with the tsai library?</p>
<pre><code>import os
os.chdir(os.path.dirname(os.path.abspath(__file__)))
from pickle import load
from multiprocessing import Process
import numpy as np
from tsai.all import *
import matplotlib.pyplot as plt
from sklearn.metrics import precision_recall_curve
dataset_idx = 0
X_train = load(open(r"X_train_"+str(dataset_idx)+".pkl", 'rb'))
y_train = load(open(r"y_train_"+str(dataset_idx)+".pkl", 'rb'))
X_test = load(open(r"X_test_"+str(dataset_idx)+".pkl", 'rb'))
y_test = load(open(r"y_test_"+str(dataset_idx)+".pkl", 'rb'))
print("dataset loaded")
learn = TSClassifier(X_train, y_train, arch=InceptionTimePlus, arch_config=dict(fc_dropout=0.5))
print("training started")
learn.fit_one_cycle(5, 0.0005)
learn.export("tsai_"+str(dataset_idx)+".pkl")
</code></pre>
|
<python><deep-learning><time-series>
|
2023-03-17 16:04:16
| 1
| 394
|
Granth
|
75,769,692
| 1,627,466
|
Selective replacement of unicode characters in Python using regex
|
<p>There are many answers as to how one can use regex to remove unicode characters in Python.</p>
<p>See <a href="https://stackoverflow.com/questions/44010727/remove-unicode-code-uxxx-in-string-python">Remove Unicode code (\uxxx) in string Python</a> and <a href="https://stackoverflow.com/questions/70377962/python-regex-module-re-match-unicode-characters-with-u">Python regex module "re" match unicode characters with \u</a></p>
<p>However, in my case, I don't want to replace every unicode character but only the ones that are displayed with their \u code, not the ones that are properly shown as characters. I have tried both solutions and they remove both types of unicode characters.</p>
<p><code>\u2002pandemic</code> becomes <code>pandemic</code>
and <code>master’s</code> becomes <code>masters</code></p>
<p>Is there a general solution to removing the first type of Unicode character while keeping the second kind?</p>
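<p>One distinction I could imagine drawing is by Unicode category: \u2002 (EN SPACE) is a space separator (category Zs), while ’ (U+2019) is ordinary punctuation. A sketch using the stdlib <code>unicodedata</code> module (treating Zs as the "first type" is my own assumption):</p>

```python
import unicodedata

s = "\u2002pandemic and master\u2019s"

# Normalize "space-like" code points (category Zs) to a plain space,
# but keep visible punctuation such as the curly apostrophe (Pf).
cleaned = "".join(
    " " if unicodedata.category(ch) == "Zs" else ch
    for ch in s
)
print(cleaned)  # " pandemic and master’s" — apostrophe kept
```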
|
<python><regex><unicode>
|
2023-03-17 15:54:56
| 2
| 423
|
user1627466
|
75,769,639
| 2,482,149
|
Boto3- Type Hint For an S3 Bucket Resource
|
<p>I have instantiated an s3 bucket using boto3 below:</p>
<pre><code>import boto3
session = boto3.Session()
s3 = session.resource('s3')
src_bucket = s3.Bucket('input-bucket')
</code></pre>
<p>Then I created a function passing in said bucket in order to return the number of objects in it:</p>
<pre><code>def get_total_objects(bucket):
count = 0
for i in bucket.objects.all():
count = count + 1
return count
</code></pre>
<p>My question is, I would like to add type hints here. I have tried things like the following:</p>
<pre><code>from boto3.resources import base
from boto3.resources.base import ServiceResource
boto3.resources.model.s3.Bucket
</code></pre>
<p>But none of them seem to work. What is the correct type for <code>bucket</code> in <code>get_total_objects</code>?</p>
|
<python><amazon-s3><boto3>
|
2023-03-17 15:49:12
| 2
| 1,226
|
clattenburg cake
|
75,769,461
| 14,705,072
|
How to fill a polars dataframe from a numpy array in python
|
<p>I am currently working on a dataframe function that assigns values of a numpy array of shape 2 to a given column of a dataframe using the polars library in Python.</p>
<p>I have a dataframe <code>df</code> with the following columns: <code>['HZ', 'FL', 'Q']</code>. The column <code>'HZ'</code> takes values in <code>[0, EC + H - 1]</code> and the column <code>'FL'</code> takes values in <code>[1, F]</code>.</p>
<p>I also have a numpy array <code>q</code> of shape <code>(EC + H, F)</code>, and I want to assign its values to the column <code>'Q'</code> in this way :
if df['HZ'] >= EC, then df['Q'] = q[df['HZ']][df['F'] - 1].</p>
<p>You can find below the pandas instruction that does exactly what I want to do.</p>
<p><code>df.loc[df['HZ'] >= EC, 'Q'] = q[df.loc[df['HZ'] >= EC, 'HZ'], df.loc[df['HZ'] >= EC, 'F'] - 1]</code></p>
<p>Now I want to do it using polars, and I tried to do it this way:</p>
<p><code>df = df.with_columns(pl.when(pl.col('HZ') >= EC).then(q[pl.col('HZ')][pl.col('F') - 1]).otherwise(pl.col('Q')).alias('Q'))</code></p>
<p>And I get the following error :</p>
<p><code>IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis (`None`) and integer or boolean arrays are valid indices</code></p>
<p>I understand that I don't give numpy the good format of indexes to get the corresponding value in the array, but I don't know how to replace it to get the desired behavior.</p>
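<p>To illustrate the kind of lookup I mean, here is a plain-NumPy version using integer-array ("fancy") indexing; the shapes and values are made up for illustration:</p>

```python
import numpy as np

# For each row, pick q[hz, fl - 1] using integer-array indexing.
q = np.arange(12).reshape(4, 3)        # (EC + H, F) = (4, 3)
hz = np.array([0, 2, 3])               # df['HZ'] values
fl = np.array([1, 3, 2])               # df['FL'] values in [1, F]

result = q[hz, fl - 1]
print(result)  # [ 0  8 10]
```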
<p>Thanks in advance.</p>
|
<python><numpy><python-polars>
|
2023-03-17 15:32:38
| 1
| 319
|
Haeden
|
75,769,364
| 3,971,855
|
Doing a groupby only on common column values
|
<p>I have a question on how to do a groupby on the values that are common to the keys we are grouping by. It's very hard for me to describe the exact problem in words.</p>
<p>I have the table below in this format.</p>
<p><a href="https://i.sstatic.net/5J9ZN.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/5J9ZN.png" alt="enter image description here" /></a></p>
<p>Now I want a dataframe which looks like the one below with groupby on code and Name which have common type and alias.</p>
<p><a href="https://i.sstatic.net/XhyZl.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XhyZl.png" alt="enter image description here" /></a></p>
<p>As you can see, the type [Card, Cardio] and the name "cardio" are common to the top three entries; that is how we get the first row in our resultant table. Since "Heart" is common only to types A and C, that is how we get the third row, and so on.</p>
<p>I don't have any code for this and can't think of any solution on how to achieve this result.</p>
<p>Sorry for the phrasing of the question; I would be really grateful for any rephrasing suggestions.</p>
<p><strong>Edit - Code for Type E will be 2 in first table</strong></p>
|
<python><pandas><dataframe><group-by>
|
2023-03-17 15:23:32
| 1
| 309
|
BrownBatman
|
75,769,250
| 19,328,707
|
Python - Webscraping POST login expired
|
<p>I'm trying to download a file from a network device, so I need to be logged in to reach the download endpoint.</p>
<p>I copied the <code>POST</code> curl of the login request to the website and converted it to a python request that looks like so.</p>
<pre><code>import requests
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/111.0',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,*/*;q=0.8',
'Accept-Language': 'de,en-US;q=0.7,en;q=0.3',
# 'Accept-Encoding': 'gzip, deflate',
'Content-Type': 'application/x-www-form-urlencoded',
'Origin': 'http://192.168.18.96',
'DNT': '1',
'Connection': 'keep-alive',
'Referer': 'http://192.168.18.96/v1/base/cheetah_login.html',
'Cookie': 'testcookie; SID=a243eb355e16a079226a443d4a0c5f0ff453f032a99e8752084796c681aff36a66de4e88bb0aad7f272b3be8fe011c9364f4659efcc0d08f2ee98f7180944760; testcookie; Language_In_Use=',
'Upgrade-Insecure-Requests': '1',
}
data = {
'CSRFToken': '1679065361ce165a07724a0553789eee63938e08ab7be0148728a5f79878469cefbb3bfb6938ed526996fd9cee8e8c307ad4177e4abb257f4ae8be878e88a8f40a2ad97e92',
'uname': 'admin',
'pwd': 'password',
'err_flag': '',
'err_msg': '',
'submt': '',
}
response = requests.post('http://192.168.18.96/v1/base/cheetah_login.html', headers=headers, data=data)
print(response.text)
</code></pre>
<p>If I run that <code>POST</code>, I get back an "expired" response that looks like this.</p>
<pre><code><!--
/*********************************************************************
* Copyright 2016-2021 Broadcom.
**********************************************************************
*
* @filename
*
* @purpose Login page
*
* @comments
*
* @create
*
* @author
* @end
*
**********************************************************************/
-->
<html>
<head>
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<LINK REL=stylesheet HREF="/v1/base/style.css" TYPE="text/css">
<link rel=stylesheet href="/v1/base/nanoscroller.css" TYPE="text/css">
<META http-equiv="Pragma" content="no-cache">
<META HTTP-EQUIV="Content-Type" CONTENT="text/html; charset=UTF-8">
<TITLE>NETGEAR M4300-52G</TITLE> <!-- Cheetah Page Title -->
<link rel="shortcut icon" href="/v1/base/favicon.ico"> <!-- fav icon -->
<script type='text/javascript' src='/v1/base/js/jquery-3.6.0.min.js'></script>
<script type='text/javascript' src='/v1/base/js/jquery.cookie.js?v="12.0.17.7"'></script>
<script type='text/javascript' src='/v1/base/js/ng_help.js?v="12.0.17.7"'></script>
<script type='text/javascript' src='/v1/base/js/nav_nls.js?v="12.0.17.7"'></script>
<style>
.loginPage_inlineEr_padding
{
padding-left: 25px;
}
h1
{
font: 14pt Arial;
color: #672E8F;
white-space: nowrap;
padding-left: 25px;
text-align: center;
}
</style>
</head>
<script>
var timeOut = 0;
var timerNum = 0;
var presentHost;
function expiredLoad()
{
timerNum = setInterval(reloginTimeout, 1000);
var head = document.getElementById("expiredmessage");
var timeRemaining = 10-timeOut;
presentHost = top.location.protocol+ "//" +top.location.host;
var tag = "<a href=\""+presentHost+"\">"+js_NLS_here+"</a>";
var msg = js_NLS_valMapSessionExpiredMsg1 + tag + js_NLS_valMapSessionExpiredMsg2 + String(timeRemaining) + js_NLS_seconds;
head.innerHTML = msg;
}
function reloginTimeout()
{
timeOut = timeOut + 1;
if(timeOut == 10)
{
clearInterval(timerNum);
var presentHost;
presentHost = top.location.protocol+ "//" + top.location.host;
top.location.replace(presentHost);
}
var head = document.getElementById("expiredmessage");
var timeRemaining = 10-timeOut;
presentHost = top.location.protocol+ "//" +top.location.host;
var tag = "<a href=\""+presentHost+"\">"+js_NLS_here+"</a>";
var msg = js_NLS_valMapSessionExpiredMsg1 + tag + js_NLS_valMapSessionExpiredMsg2 + String(timeRemaining) + js_NLS_seconds;
head.innerHTML = msg;
}
</script>
<body onLoad="expiredLoad()">
<div class="spacer100Percent">
<div class="loginPage_mainPadding top">
<div class="netgearLogoPadding">
<a href="http://www.netgear.com/">
<img width="138" alt="Netgear" src="/v1/base/images/Netgear-logo.png"/>
</a>
</div>
<div class="separator13"></div>
<div>
<label class="productName">
M4300-52G ProSAFE 48-port 1G and 2-port 10GBASE-T and 2-port 10G SFP+
</label>
</div>
</div>
<h1 id = "expiredmessage"> The login page has expired. Will be redirected to login page Automatically </h1>
</div>
</div>
<div id="footer_image" class="bottom_image" style="height: 408px; padding-top: 40px;"></div>
<div id="branding_text" class="footer paddingLeft25 separator25" style="position: absolute; top: 745px; line-height: 25px; vertical-align: middle;">
&copy; 2022 NETGEAR, Inc. All rights reserved.
</div>
</body>
</html>
</code></pre>
<p>I've already tried working within a <code>requests.Session</code> and scraping the CSRF token, but the result was the same.</p>
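<p>The token-scraping part of what I tried looks roughly like this (the field name comes from the form data above; the sample HTML string is made up, and the real page markup may differ):</p>

```python
import re

# Sketch of pulling the CSRF token out of the login page HTML.
# The attribute layout below is an assumption about the real page.
html = '<input type="hidden" name="CSRFToken" value="abc123def">'

match = re.search(r'name="CSRFToken"\s+value="([^"]+)"', html)
token = match.group(1) if match else None
print(token)  # abc123def
```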
|
<python><web-scraping><request>
|
2023-03-17 15:12:33
| 1
| 326
|
LiiVion
|