<h3>Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.flink.types.Row.getFieldNames(boolean)" because "from" is null</h3>
<p><em>QuestionId: 79,033,762 · UserId: 3,840,940</em></p>
<p>I am trying to implement an Apache Flink JDBC job that reads data from Apache Kafka and inserts it into MySQL. First I wrote a simple version that reads from a single Kafka topic and inserts into a single MySQL table. Here is the code:</p>
<pre><code>from title_flink_stream import TITLE_FLINK_STREAM
from pyflink.common import WatermarkStrategy, Types, SimpleStringSchema, Row
from pyflink.common.typeinfo import RowTypeInfo
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.datastream.connectors.jdbc import JdbcConnectionOptions, JdbcExecutionOptions, JdbcSink
from pyflink.datastream.connectors.kafka import KafkaSource, KafkaOffsetsInitializer
from pyflink.datastream.functions import MapFunction
import configparser
import os

config = configparser.ConfigParser()
path = os.path.dirname(__file__)
os.chdir(path)
config.read('resources/SystemConfig.ini')

env = StreamExecutionEnvironment.get_execution_environment()
env.set_parallelism(1)
env.add_jars('file:///home/joseph/flink/jars/flink-connector-kafka-3.1.0-1.18.jar',
             'file:///home/joseph/flink/jars/kafka-clients-3.8.0.jar',
             'file:///home/joseph/flink/jars/flink-connector-jdbc-3.1.2-1.18.jar',
             'file:///home/joseph/flink/jars/mysql-connector-j-8.3.0.jar')

class CustomCsvMapFunction(MapFunction):
    def map(self, value):
        str_list = value.split(',')
        # return Types.ROW(str_list)
        if str_list[0] != '' and str_list[1] != '':
            return Row(date=str_list[0], value=float(str_list[1]), state=str_list[2],
                       id=str_list[3], title=str_list[4], frequency_short=str_list[5],
                       units_short=str_list[6], seasonal_adjustment_short=str_list[7])

kafka_brokerlist = config['KAFKA_CONFIG']['kafka.brokerlist']
mysql_user = config['MYSQL_CONFIG']['mysql.user']
mysql_password = config['MYSQL_CONFIG']['mysql.password']
mysql_host_url = config['MYSQL_CONFIG']['mysql.host.url']

type_name = ['date', 'value', 'state', 'id', 'title', 'frequency_short', 'units_short', 'seasonal_adjustment_short']
type_schema = [Types.STRING(), Types.FLOAT(), Types.STRING(), Types.STRING(), Types.STRING(), Types.STRING(), Types.STRING(), Types.STRING()]
output_type = Types.ROW_NAMED(type_name, type_schema)
type_info = RowTypeInfo(type_schema, type_name)

source = KafkaSource.builder() \
    .set_bootstrap_servers(kafka_brokerlist) \
    .set_topics('topic_' + 'exportGoods') \
    .set_starting_offsets(KafkaOffsetsInitializer.earliest()) \
    .set_value_only_deserializer(SimpleStringSchema()) \
    .build()

ds = env.from_source(source, WatermarkStrategy.no_watermarks(), "Kafka Source")
csv_ds = ds.filter(lambda line: not line.startswith('date')).map(CustomCsvMapFunction(), output_type=output_type)
# csv_ds.print()

jdbcConnOptions = JdbcConnectionOptions.JdbcConnectionOptionsBuilder() \
    .with_url(mysql_host_url) \
    .with_driver_name('com.mysql.cj.jdbc.Driver') \
    .with_user_name(mysql_user) \
    .with_password(mysql_password) \
    .build()
jdbcExeOptions = JdbcExecutionOptions.builder() \
    .with_batch_interval_ms(1000) \
    .with_batch_size(200) \
    .with_max_retries(5) \
    .build()

csv_ds.add_sink(
    JdbcSink.sink(
        'INSERT INTO ' + 'tbl_' + 'exportGoods' + ' VALUES(?, ?, ?, ?, ?, ?, ?, ?)',
        type_info, jdbcConnOptions, jdbcExeOptions))

env.execute('Flink Save2 MySQL')
env.close()
</code></pre>
<p>This code works without exceptions. However, I have several Kafka topics and correspondingly several MySQL tables, so I wrapped the source and sink setup in a for-loop like this:</p>
<pre><code>for flink_stream in TITLE_FLINK_STREAM:
    source = KafkaSource.builder() \
        .set_bootstrap_servers(kafka_brokerlist) \
        .set_topics('topic_' + flink_stream.suffix) \
        .set_starting_offsets(KafkaOffsetsInitializer.earliest()) \
        .set_value_only_deserializer(SimpleStringSchema()) \
        .build()

    ds = env.from_source(source, WatermarkStrategy.no_watermarks(), "Kafka Source")
    csv_ds = ds.filter(lambda line: not line.startswith('date')).map(CustomCsvMapFunction(), output_type=output_type)
    # csv_ds.print()

    jdbcConnOptions = JdbcConnectionOptions.JdbcConnectionOptionsBuilder() \
        .with_url(mysql_host_url) \
        .with_driver_name('com.mysql.cj.jdbc.Driver') \
        .with_user_name(mysql_user) \
        .with_password(mysql_password) \
        .build()
    jdbcExeOptions = JdbcExecutionOptions.builder() \
        .with_batch_interval_ms(1000) \
        .with_batch_size(200) \
        .with_max_retries(5) \
        .build()

    csv_ds.add_sink(
        JdbcSink.sink(
            'INSERT INTO ' + 'tbl_' + flink_stream.suffix + ' VALUES(?, ?, ?, ?, ?, ?, ?, ?)',
            type_info, jdbcConnOptions, jdbcExeOptions))

env.execute('Flink Save2 MySQL')
env.close()
</code></pre>
<p>Data is inserted successfully for a while, but then the job is interrupted and throws the following exceptions:</p>
<pre><code>Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
File "/usr/lib/python/anaconda3/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/usr/lib/python/anaconda3/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/apache_beam/runners/worker/data_plane.py", line 669, in <lambda>
target=lambda: self._read_inputs(elements_iterator),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/apache_beam/runners/worker/data_plane.py", line 652, in _read_inputs
for elements in elements_iterator:
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/grpc/_channel.py", line 543, in __next__
return self._next()
^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/grpc/_channel.py", line 969, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Multiplexer hanging up"
debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:37507 {created_time:"2024-09-28T16:52:35.651536993+09:00", grpc_status:1, grpc_message:"Multiplexer hanging up"}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
File "/usr/lib/python/anaconda3/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/usr/lib/python/anaconda3/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/apache_beam/runners/worker/data_plane.py", line 669, in <lambda>
target=lambda: self._read_inputs(elements_iterator),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/apache_beam/runners/worker/data_plane.py", line 652, in _read_inputs
for elements in elements_iterator:
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/grpc/_channel.py", line 543, in __next__
return self._next()
^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/grpc/_channel.py", line 969, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Multiplexer hanging up"
debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:46051 {grpc_message:"Multiplexer hanging up", grpc_status:1, created_time:"2024-09-28T16:52:35.83112255+09:00"}"
>
Exception in thread read_grpc_client_inputs:
Traceback (most recent call last):
File "/usr/lib/python/anaconda3/lib/python3.11/threading.py", line 1045, in _bootstrap_inner
self.run()
File "/usr/lib/python/anaconda3/lib/python3.11/threading.py", line 982, in run
self._target(*self._args, **self._kwargs)
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/apache_beam/runners/worker/data_plane.py", line 669, in <lambda>
target=lambda: self._read_inputs(elements_iterator),
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/apache_beam/runners/worker/data_plane.py", line 652, in _read_inputs
for elements in elements_iterator:
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/grpc/_channel.py", line 543, in __next__
return self._next()
^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/grpc/_channel.py", line 969, in _next
raise self
grpc._channel._MultiThreadedRendezvous: <_MultiThreadedRendezvous of RPC that terminated with:
status = StatusCode.CANCELLED
details = "Multiplexer hanging up"
debug_error_string = "UNKNOWN:Error received from peer ipv6:%5B::1%5D:44919 {created_time:"2024-09-28T16:52:35.970579357+09:00", grpc_status:1, grpc_message:"Multiplexer hanging up"}"
>
Traceback (most recent call last):
File "/home/joseph/VSCode_Workspace/etl-stream-python/com/aaa/etl/jdbctest.py", line 116, in <module>
env.execute('Flink Save2 MySQL')
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/pyflink/datastream/stream_execution_environment.py", line 824, in execute
return JobExecutionResult(self._j_stream_execution_environment.execute(j_stream_graph))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/py4j/java_gateway.py", line 1322, in __call__
return_value = get_return_value(
^^^^^^^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/pyflink/util/exceptions.py", line 146, in deco
return f(*a, **kw)
^^^^^^^^^^^
File "/home/joseph/VSCode_Workspace/.venv-etl/lib/python3.11/site-packages/py4j/protocol.py", line 326, in get_return_value
raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o10.execute.
: org.apache.flink.runtime.client.JobExecutionException: Job execution failed.
at org.apache.flink.runtime.jobmaster.JobResult.toJobExecutionResult(JobResult.java:144)
at org.apache.flink.runtime.minicluster.MiniClusterJobClient.lambda$getJobExecutionResult$3(MiniClusterJobClient.java:141)
at java.base/java.util.concurrent.CompletableFuture$UniApply.tryFire(CompletableFuture.java:646)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
at org.apache.flink.runtime.rpc.pekko.PekkoInvocationHandler.lambda$invokeRpc$1(PekkoInvocationHandler.java:268)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
at org.apache.flink.util.concurrent.FutureUtils.doForward(FutureUtils.java:1287)
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$null$1(ClassLoadingUtils.java:93)
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:68)
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.lambda$guardCompletionWithContextClassLoader$2(ClassLoadingUtils.java:92)
at java.base/java.util.concurrent.CompletableFuture.uniWhenComplete(CompletableFuture.java:863)
at java.base/java.util.concurrent.CompletableFuture$UniWhenComplete.tryFire(CompletableFuture.java:841)
at java.base/java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:510)
at java.base/java.util.concurrent.CompletableFuture.complete(CompletableFuture.java:2147)
at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$1.onComplete(ScalaFutureUtils.java:47)
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:310)
at org.apache.pekko.dispatch.OnComplete.internal(Future.scala:307)
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:234)
at org.apache.pekko.dispatch.japi$CallbackBridge.apply(Future.scala:231)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.apache.flink.runtime.concurrent.pekko.ScalaFutureUtils$DirectExecutionContext.execute(ScalaFutureUtils.java:65)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:72)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1(Promise.scala:288)
at scala.concurrent.impl.Promise$DefaultPromise.$anonfun$tryComplete$1$adapted(Promise.scala:288)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:288)
at org.apache.pekko.pattern.PromiseActorRef.$bang(AskSupport.scala:629)
at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:34)
at org.apache.pekko.pattern.PipeToSupport$PipeableFuture$$anonfun$pipeTo$1.applyOrElse(PipeToSupport.scala:33)
at scala.concurrent.Future.$anonfun$andThen$1(Future.scala:536)
at scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:33)
at scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:33)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:64)
at org.apache.pekko.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:73)
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:110)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:85)
at org.apache.pekko.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:110)
at org.apache.pekko.dispatch.TaskInvocation.run(AbstractDispatcher.scala:59)
at org.apache.pekko.dispatch.ForkJoinExecutorConfigurator$PekkoForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:57)
at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:373)
at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1182)
at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1655)
at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1622)
at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:165)
Caused by: org.apache.flink.runtime.JobException: Recovery is suppressed by NoRestartBackoffTimeStrategy
at org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailure(ExecutionFailureHandler.java:219)
at org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.handleFailureAndReport(ExecutionFailureHandler.java:166)
at org.apache.flink.runtime.executiongraph.failover.ExecutionFailureHandler.getFailureHandlingResult(ExecutionFailureHandler.java:121)
at org.apache.flink.runtime.scheduler.DefaultScheduler.recordTaskFailure(DefaultScheduler.java:281)
at org.apache.flink.runtime.scheduler.DefaultScheduler.handleTaskFailure(DefaultScheduler.java:272)
at org.apache.flink.runtime.scheduler.DefaultScheduler.onTaskFailed(DefaultScheduler.java:265)
at org.apache.flink.runtime.scheduler.SchedulerBase.onTaskExecutionStateUpdate(SchedulerBase.java:787)
at org.apache.flink.runtime.scheduler.SchedulerBase.updateTaskExecutionState(SchedulerBase.java:764)
at org.apache.flink.runtime.scheduler.SchedulerNG.updateTaskExecutionState(SchedulerNG.java:83)
at org.apache.flink.runtime.jobmaster.JobMaster.updateTaskExecutionState(JobMaster.java:515)
at jdk.internal.reflect.GeneratedMethodAccessor31.invoke(Unknown Source)
at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.base/java.lang.reflect.Method.invoke(Method.java:569)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.lambda$handleRpcInvocation$1(PekkoRpcActor.java:318)
at org.apache.flink.runtime.concurrent.ClassLoadingUtils.runWithContextClassLoader(ClassLoadingUtils.java:83)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcInvocation(PekkoRpcActor.java:316)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleRpcMessage(PekkoRpcActor.java:229)
at org.apache.flink.runtime.rpc.pekko.FencedPekkoRpcActor.handleRpcMessage(FencedPekkoRpcActor.java:88)
at org.apache.flink.runtime.rpc.pekko.PekkoRpcActor.handleMessage(PekkoRpcActor.java:174)
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:33)
at org.apache.pekko.japi.pf.UnitCaseStatement.apply(CaseStatements.scala:29)
at scala.PartialFunction.applyOrElse(PartialFunction.scala:127)
at scala.PartialFunction.applyOrElse$(PartialFunction.scala:126)
at org.apache.pekko.japi.pf.UnitCaseStatement.applyOrElse(CaseStatements.scala:29)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:175)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at scala.PartialFunction$OrElse.applyOrElse(PartialFunction.scala:176)
at org.apache.pekko.actor.Actor.aroundReceive(Actor.scala:547)
at org.apache.pekko.actor.Actor.aroundReceive$(Actor.scala:545)
at org.apache.pekko.actor.AbstractActor.aroundReceive(AbstractActor.scala:229)
at org.apache.pekko.actor.ActorCell.receiveMessage(ActorCell.scala:590)
at org.apache.pekko.actor.ActorCell.invoke(ActorCell.scala:557)
at org.apache.pekko.dispatch.Mailbox.processMailbox(Mailbox.scala:280)
at org.apache.pekko.dispatch.Mailbox.run(Mailbox.scala:241)
at org.apache.pekko.dispatch.Mailbox.exec(Mailbox.scala:253)
... 5 more
Caused by: java.io.IOException: Failed to deserialize consumer record due to
at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:56)
at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:33)
at org.apache.flink.connector.base.source.reader.SourceReaderBase.pollNext(SourceReaderBase.java:203)
at org.apache.flink.streaming.api.operators.SourceOperator.emitNext(SourceOperator.java:422)
at org.apache.flink.streaming.runtime.io.StreamTaskSourceInput.emitNext(StreamTaskSourceInput.java:68)
at org.apache.flink.streaming.runtime.io.StreamOneInputProcessor.processInput(StreamOneInputProcessor.java:65)
at org.apache.flink.streaming.runtime.tasks.StreamTask.processInput(StreamTask.java:638)
at org.apache.flink.streaming.runtime.tasks.mailbox.MailboxProcessor.runMailboxLoop(MailboxProcessor.java:231)
at org.apache.flink.streaming.runtime.tasks.StreamTask.runMailboxLoop(StreamTask.java:973)
at org.apache.flink.streaming.runtime.tasks.StreamTask.invoke(StreamTask.java:917)
at org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:970)
at org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:949)
at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:763)
at org.apache.flink.runtime.taskmanager.Task.run(Task.java:575)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:92)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:29)
at org.apache.flink.streaming.runtime.tasks.SourceOperatorStreamTask$AsyncDataOutputToOutput.emitRecord(SourceOperatorStreamTask.java:310)
at org.apache.flink.streaming.api.operators.source.SourceOutputWithWatermarks.collect(SourceOutputWithWatermarks.java:110)
at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter$SourceOutputWrapper.collect(KafkaRecordEmitter.java:67)
at org.apache.flink.api.common.serialization.DeserializationSchema.deserialize(DeserializationSchema.java:84)
at org.apache.flink.connector.kafka.source.reader.deserializer.KafkaValueOnlyDeserializationSchemaWrapper.deserialize(KafkaValueOnlyDeserializationSchemaWrapper.java:51)
at org.apache.flink.connector.kafka.source.reader.KafkaRecordEmitter.emitRecord(KafkaRecordEmitter.java:53)
... 14 more
Caused by: org.apache.flink.streaming.runtime.tasks.ExceptionInChainedOperatorException: Could not forward element to next operator
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:92)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.collect(CopyingChainingOutput.java:50)
at org.apache.flink.streaming.runtime.tasks.ChainingOutput.collectAndCheckIfChained(ChainingOutput.java:90)
at org.apache.flink.streaming.runtime.tasks.ChainingOutput.collectAndCheckIfChained(ChainingOutput.java:40)
at org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:83)
at org.apache.flink.streaming.runtime.tasks.BroadcastingOutputCollector.collect(BroadcastingOutputCollector.java:34)
at org.apache.flink.streaming.api.operators.TimestampedCollector.collect(TimestampedCollector.java:52)
at org.apache.flink.streaming.api.operators.python.process.collector.RunnerOutputCollector.collect(RunnerOutputCollector.java:52)
at org.apache.flink.streaming.api.operators.python.process.AbstractExternalOneInputPythonFunctionOperator.emitResult(AbstractExternalOneInputPythonFunctionOperator.java:133)
at org.apache.flink.streaming.api.operators.python.process.AbstractExternalPythonFunctionOperator.emitResults(AbstractExternalPythonFunctionOperator.java:142)
at org.apache.flink.streaming.api.operators.python.process.AbstractExternalPythonFunctionOperator.invokeFinishBundle(AbstractExternalPythonFunctionOperator.java:101)
at org.apache.flink.streaming.api.operators.python.AbstractPythonFunctionOperator.checkInvokeFinishBundleByCount(AbstractPythonFunctionOperator.java:292)
at org.apache.flink.streaming.api.operators.python.process.AbstractExternalOneInputPythonFunctionOperator.processElement(AbstractExternalOneInputPythonFunctionOperator.java:146)
at org.apache.flink.streaming.api.operators.python.process.ExternalPythonProcessOperator.processElement(ExternalPythonProcessOperator.java:112)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:75)
... 22 more
Caused by: java.lang.NullPointerException: Cannot invoke "org.apache.flink.types.Row.getFieldNames(boolean)" because "from" is null
at org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:137)
at org.apache.flink.api.java.typeutils.runtime.RowSerializer.copy(RowSerializer.java:69)
at org.apache.flink.streaming.runtime.tasks.CopyingChainingOutput.pushToOperator(CopyingChainingOutput.java:74)
</code></pre>
<p>Are these exceptions caused by Flink configuration errors? They do not appear to be syntax errors in my code. If so, do I have to change the Flink configuration? For reference, I wrote Flink Java SDK code with a similar architecture, and it runs without exceptions.</p>
<p><strong>Update</strong></p>
<p>I found an interesting fact: the exception above is thrown at exactly 7,000 input lines. Whenever the 7,001st line of CSV data is about to be inserted into MySQL, the error occurs. I suspect some kind of limit prevents inserting more than 7,000 rows, and that this is a configuration matter. I increased the <code>with_batch_size</code> parameter, but the same error was thrown. Any ideas?</p>
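<p>One hypothesis worth checking (not confirmed by anything in the post): the final <code>NullPointerException</code> in the stack trace ("because "from" is null" in <code>RowSerializer.copy</code>) is what Flink raises when a chained operator receives a null record, which happens if the Python <code>MapFunction</code> returns <code>None</code> for some input line. Below is a hypothetical standalone sketch of the CSV parsing done in <code>CustomCsvMapFunction</code> that makes the <code>None</code> case explicit; in the real pipeline one would filter those records out before the JDBC sink.</p>
<pre><code>``​`python
# Hypothetical standalone version of the parsing in CustomCsvMapFunction.
# A MapFunction that implicitly returns None (e.g. when the guard on the
# first two fields fails, or float() raises) emits a null Row downstream,
# which is exactly what RowSerializer.copy rejects ("from" is null).
FIELDS = ['date', 'value', 'state', 'id', 'title',
          'frequency_short', 'units_short', 'seasonal_adjustment_short']

def parse_csv_line(line):
    parts = line.split(',')
    if len(parts) &lt; len(FIELDS) or parts[0] == '' or parts[1] == '':
        return None  # malformed line: must be filtered out, never emitted
    try:
        record = dict(zip(FIELDS, parts))
        record['value'] = float(record['value'])
        return record
    except ValueError:
        return None  # non-numeric value field

# In the actual pipeline, the hypothetical guard would look like:
# csv_ds = ds.map(CustomCsvMapFunction(), output_type=output_type) \
#            .filter(lambda row: row is not None)
``​`</code></pre>
<p>If the 7,001st line of one topic happens to be the first malformed one (empty fields or a non-numeric value), an unguarded mapper would emit <code>None</code> at exactly that point, which would match the observed behavior.</p>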
<p><em>Tags: python, apache-flink, flink-streaming, pyflink · Asked: 2024-09-28 08:35:17 · Answers: 0 · User expertise: 1,441 · Author: Joseph Hwang</em></p>

<h3>Why does ctypes also load standard C library functions and how could I prevent that?</h3>
<p><em>QuestionId: 79,033,456 · UserId: 3,156,085</em></p>
<p>EDIT: The problem was completely misdiagnosed and is related to linking rather than to <code>ctypes</code> itself (libc is implicitly linked by gcc), and I found an actual solution elsewhere. Hence the duplicate flag.</p>
<hr />
<ol>
<li><p>Loading a homemade shared library with <code>ctypes</code> seems to provide access to standard C library functions somehow.</p>
<p>Why?</p>
<p>I would've expected some kind of AttributeError when trying the MRE below.</p>
</li>
<li><p>How could I restrict this behavior (if possible)?</p>
</li>
<li><p>[tiebreaker]: How is a function name resolved when accessing it, and what happens if my homemade library redefines a standard C library function?</p>
</li>
</ol>
<hr />
<p><strong>Context and motive</strong>:</p>
<p>From what I understand of <a href="https://docs.python.org/3/library/ctypes.html#ctypes.CDLL" rel="nofollow noreferrer">the documentation</a> it should return a representation of a loaded shared library. But it looks like it's behaving as something more/else than that.</p>
<p>I encountered this behavior while working on my own implementation of some standard C library functions, which caused unexpected results while trying to test it through python tools using ctypes.</p>
<p>The main problem is that editing my code doesn't have any impact on the behavior of the loaded foreign functions or on the test results. It now looks clear that my tests were actually running the system's functions rather than my own.</p>
<p>While using different symbols (such as prepending a prefix to my homemade function names) is an easy workaround, I would really like to understand why this access to functions I never explicitly used or included is provided by <code>ctypes</code> apparently out of nowhere.</p>
<p>I found no trace of such behavior in the <code>ctypes</code> documentation.</p>
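<p>For illustration (a minimal, Linux-specific sketch): the <code>ldd</code> and <code>readelf</code> output below shows that <code>libfoo.so</code> has a <code>DT_NEEDED</code> entry for <code>libc.so.6</code>, so the handle that <code>CDLL</code> obtains via <code>dlopen()</code> can resolve names through the library's dependency chain with <code>dlsym()</code>, not only through <code>libfoo.so</code>'s own symbol table. The same lookup works when loading libc directly:</p>
<pre><code>``​`python
import ctypes
import ctypes.util

# On Linux, find_library("c") locates the C runtime (e.g. "libc.so.6"),
# the same library that libfoo.so pulls in as a DT_NEEDED dependency.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declaring the prototype avoids relying on ctypes' int-everywhere defaults.
libc.strncmp.restype = ctypes.c_int
libc.strncmp.argtypes = [ctypes.c_char_p, ctypes.c_char_p, ctypes.c_size_t]

print(libc.strncmp(b"abc", b"abd", 2))  # 0: only the first two bytes compared
``​`</code></pre>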
<hr />
<p><strong>MRE:</strong></p>
<ul>
<li><code>test.py</code></li>
</ul>
<pre><code>#!/usr/bin/env python3
import ctypes
import os
my_lib = ctypes.CDLL(os.path.abspath("libfoo.so"))
# I expect this to work (It does).
print(f"{my_lib.foo = }")
# I expect this to fail (It doesn't).
print(f"{my_lib.strncmp = }")
# I expect this to fail (It does).
print(f"{my_lib.bar = }")
</code></pre>
<ul>
<li><code>foo.s</code>:</li>
</ul>
<pre><code> global foo
foo:
ret
</code></pre>
<ul>
<li>shell session:</li>
</ul>
<pre><code>$ make
nasm -f elf64 -o foo.o foo.s
gcc -shared -o libfoo.so foo.o
$ nm -a libfoo.so
0000000000000000 a
w __cxa_finalize@GLIBC_2.2.5
0000000000004000 d __dso_handle
0000000000003e48 d _DYNAMIC
00000000000010f4 t _fini
00000000000010f0 T foo
0000000000000000 a foo.s
0000000000003fe8 d _GLOBAL_OFFSET_TABLE_
w __gmon_start__
0000000000001000 t _init
w _ITM_deregisterTMCloneTable
w _ITM_registerTMCloneTable
0000000000004008 d __TMC_END__
$ ./test.py
my_lib.foo = <_FuncPtr object at 0x75a09cb3e140>
my_lib.strncmp = <_FuncPtr object at 0x75a09cb3e200>
Traceback (most recent call last):
File "/home/vmonteco/code/MREs/MRE_ctypes_give_access_to_stdlib/./test.py", line 13, in <module>
print(f"{my_lib.bar = }")
File "/home/vmonteco/.pyenv/versions/3.10.14/lib/python3.10/ctypes/__init__.py", line 387, in __getattr__
func = self.__getitem__(name)
File "/home/vmonteco/.pyenv/versions/3.10.14/lib/python3.10/ctypes/__init__.py", line 392, in __getitem__
func = self._FuncPtr((name_or_ordinal, self))
AttributeError: /home/vmonteco/code/MREs/MRE_ctypes_give_access_to_stdlib/libfoo.so: undefined symbol: bar
$ ldd libfoo.so
linux-vdso.so.1 (0x00007610a363b000)
libc.so.6 => /usr/lib/libc.so.6 (0x00007610a341d000)
/usr/lib64/ld-linux-x86-64.so.2 (0x00007610a363d000)
$ readelf -ld libfoo.so
Elf file type is DYN (Shared object file)
Entry point 0x0
There are 8 program headers, starting at offset 64
Program Headers:
Type Offset VirtAddr PhysAddr
FileSiz MemSiz Flags Align
LOAD 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000428 0x0000000000000428 R 0x1000
LOAD 0x0000000000001000 0x0000000000001000 0x0000000000001000
0x0000000000000101 0x0000000000000101 R E 0x1000
LOAD 0x0000000000002000 0x0000000000002000 0x0000000000002000
0x0000000000000004 0x0000000000000004 R 0x1000
LOAD 0x0000000000002e38 0x0000000000003e38 0x0000000000003e38
0x00000000000001d0 0x00000000000001d8 RW 0x1000
DYNAMIC 0x0000000000002e48 0x0000000000003e48 0x0000000000003e48
0x0000000000000180 0x0000000000000180 RW 0x8
NOTE 0x0000000000000200 0x0000000000000200 0x0000000000000200
0x0000000000000024 0x0000000000000024 R 0x4
GNU_STACK 0x0000000000000000 0x0000000000000000 0x0000000000000000
0x0000000000000000 0x0000000000000000 RW 0x10
GNU_RELRO 0x0000000000002e38 0x0000000000003e38 0x0000000000003e38
0x00000000000001c8 0x00000000000001c8 R 0x1
Section to Segment mapping:
Segment Sections...
00 .note.gnu.build-id .gnu.hash .dynsym .dynstr .gnu.version .gnu.version_r .rela.dyn
01 .init .text .fini
02 .eh_frame
03 .init_array .fini_array .dynamic .got .got.plt .data .bss
04 .dynamic
05 .note.gnu.build-id
06
07 .init_array .fini_array .dynamic .got .got.plt
Dynamic section at offset 0x2e48 contains 20 entries:
Tag Type Name/Value
0x0000000000000001 (NEEDED) Shared library: [libc.so.6]
0x000000000000000c (INIT) 0x1000
0x000000000000000d (FINI) 0x10f4
0x0000000000000019 (INIT_ARRAY) 0x3e38
0x000000000000001b (INIT_ARRAYSZ) 8 (bytes)
0x000000000000001a (FINI_ARRAY) 0x3e40
0x000000000000001c (FINI_ARRAYSZ) 8 (bytes)
0x000000006ffffef5 (GNU_HASH) 0x228
0x0000000000000005 (STRTAB) 0x2e0
0x0000000000000006 (SYMTAB) 0x250
0x000000000000000a (STRSZ) 111 (bytes)
0x000000000000000b (SYMENT) 24 (bytes)
0x0000000000000007 (RELA) 0x380
0x0000000000000008 (RELASZ) 168 (bytes)
0x0000000000000009 (RELAENT) 24 (bytes)
0x000000006ffffffe (VERNEED) 0x360
0x000000006fffffff (VERNEEDNUM) 1
0x000000006ffffff0 (VERSYM) 0x350
0x000000006ffffff9 (RELACOUNT) 3
0x0000000000000000 (NULL) 0x0
$
</code></pre>
<hr />
<h3>[Addendum:] Versions and system info:</h3>
<ul>
<li><code>uname -a</code>: Archlinux (<code>Linux vmonteco-P15 6.10.10-arch1-1 #1 SMP PREEMPT_DYNAMIC Thu, 12 Sep 2024 17:21:02 +0000 x86_64 GNU/Linux</code>)</li>
<li><code>python</code>: <code>Python 3.10.14</code></li>
<li><code>nasm</code>: <code>NASM version 2.16.03 compiled on May 4 2024</code></li>
<li><code>gcc</code>: <code>gcc (GCC) 14.2.1 20240910</code></li>
<li><code>ld</code>: <code>GNU ld (GNU Binutils) 2.43.0</code></li>
</ul>
<p><em>Tags: python, ctypes · Asked: 2024-09-28 04:45:14 · Answers: 0 · User expertise: 15,848 · Author: vmonteco</em></p>

<h3>What binary format is the binary mesh export and how can it be converted to ASCII with Python?</h3>
<p><em>QuestionId: 79,033,035 · UserId: 12,131,013</em></p>
<p>I have generated some large meshes using Gmsh via the Python API. The ASCII files can be gigabytes in size. Since binary files are smaller and faster to read, I set Gmsh to save the mesh files in binary format. My current issue is figuring out how to read that binary mesh back into Python in an interpretable format.</p>
<p>Further, the code needs to be compatible with my current code base, which expects the ASCII data I get when saving the mesh in ASCII format. I believe that if I can read in the binary file and convert it to ASCII in memory, I can trick the other library I am using into accepting it. Hopefully this will still be faster than reading in ASCII data (though I do not yet know whether that is true).</p>
<p>So, my questions are:</p>
<ol>
<li>What binary encoding does Gmsh use for the save files?</li>
<li>How can the binary mesh file be read back into Python and converted to ASCII?</li>
</ol>
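<p>Regarding question 1, the Gmsh MSH file-format documentation (version 4.1) describes the layout: the file is section-based, section markers such as <code>$MeshFormat</code> remain ASCII even in binary mode, and the numeric payloads of sections like <code>$Nodes</code> are raw native-endian C types (<code>size_t</code>, <code>int</code>, <code>double</code>). A minimal, hypothetical sketch that parses just the <code>$MeshFormat</code> header and detects the byte order (the function name is mine, not part of the Gmsh API):</p>
<pre class="lang-py prettyprint-override"><code>``​`python
import io
import struct

def read_msh_format(f):
    # The $MeshFormat section is plain ASCII even in a binary .msh file:
    # "version file-type data-size", where file-type 1 means binary.
    assert f.readline().strip() == b"$MeshFormat"
    version, file_type, data_size = f.readline().split()
    is_binary = int(file_type) == 1
    byte_order = "="
    if is_binary:
        # Binary files then store the integer 1 in the writer's native
        # byte order, which lets the reader detect endianness.
        one = struct.unpack("&lt;i", f.read(4))[0]
        byte_order = "&lt;" if one == 1 else "&gt;"
        f.readline()  # consume the newline after the binary int
    assert f.readline().strip() == b"$EndMeshFormat"
    return float(version), is_binary, int(data_size), byte_order

ascii_hdr = io.BytesIO(b"$MeshFormat\n4.1 0 8\n$EndMeshFormat\n")
binary_hdr = io.BytesIO(b"$MeshFormat\n4.1 1 8\n" + struct.pack("&lt;i", 1) + b"\n$EndMeshFormat\n")
print(read_msh_format(ascii_hdr))   # (4.1, False, 8, '=')
print(read_msh_format(binary_hdr))  # (4.1, True, 8, '&lt;')
``​`</code></pre>
<p>Because the binary node/element blocks are raw bytes that can contain <code>0x0a</code> anywhere, any line-oriented read-and-decode approach will break partway through the file; the sections have to be parsed with <code>struct</code> according to the documented record layouts.</p>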
<p>Here is a simplified version of the Gmsh tutorial 1 code (<a href="https://gitlab.onelab.info/gmsh/gmsh/blob/gmsh_4_13_1/tutorials/python/t1.py" rel="nofollow noreferrer">ref</a>) modified to save the mesh in either ASCII or binary format.</p>
<pre class="lang-py prettyprint-override"><code>import gmsh
import sys

binary = True
if binary:
    argv = ["", "-bin"]
else:
    argv = []

gmsh.initialize(argv=argv)
gmsh.model.add("t1")

lc = 1e-2
gmsh.model.geo.addPoint(0, 0, 0, lc, 1)
gmsh.model.geo.addPoint(.1, 0, 0, lc, 2)
gmsh.model.geo.addPoint(.1, .3, 0, lc, 3)
p4 = gmsh.model.geo.addPoint(0, .3, 0, lc)

gmsh.model.geo.addLine(1, 2, 1)
gmsh.model.geo.addLine(3, 2, 2)
gmsh.model.geo.addLine(3, p4, 3)
gmsh.model.geo.addLine(4, 1, p4)

gmsh.model.geo.addCurveLoop([4, 1, -2, 3], 1)
gmsh.model.geo.addPlaneSurface([1], 1)
gmsh.model.geo.synchronize()

gmsh.model.addPhysicalGroup(1, [1, 2, 4], 5)
gmsh.model.addPhysicalGroup(2, [1], name="My surface")

gmsh.model.mesh.generate(2)

if binary:
    gmsh.write("t1_binary.msh")
else:
    gmsh.write("t1_ascii.msh")

gmsh.finalize()
</code></pre>
<p>My initial naive approach was to read the file in binary format and try <code>.decode()</code>, but this approach fails once it gets past the <code>$Entities</code> line.</p>
<pre class="lang-py prettyprint-override"><code>mesh_file = []
with open("t1_binary.msh", "rb") as f:
for line in f.readlines():
print(line)
mesh_file.append(line.decode())
</code></pre>
<p>When it fails I get the error <code>UnicodeDecodeError: 'utf-8' codec can't decode byte 0x9a in position 72: invalid start byte</code>.</p>
<p>I'm on Linux and tried running <code>file t1_binary.msh</code> to see if that would give me any useful information, but all I got was <code>t1_binary.msh: data</code> (i.e. it didn't provide any information about the binary format).</p>
<hr />
<p>Addressing comments:</p>
<blockquote>
<p>Have you tried looking at the source code of gmsh.write()?</p>
</blockquote>
<p>Yes, I have tried looking into that code. <code>gmsh.py</code>'s <code>write</code> function (<a href="https://gitlab.onelab.info/gmsh/gmsh/blob/gmsh_4_13_1/api/gmsh.py#L361" rel="nofollow noreferrer">link</a>) calls <code>lib.gmshWrite</code> (I think that's this: <a href="https://gitlab.onelab.info/gmsh/gmsh/-/blob/gmsh_4_13_1/api/gmshc.cpp#L155" rel="nofollow noreferrer">link</a>), which is C++. That C++ function calls <code>gmsh::write</code> (I think that's this: <a href="https://gitlab.onelab.info/gmsh/gmsh/-/blob/gmsh_4_13_1/api/gmsh.h_cwrap#L203" rel="nofollow noreferrer">link</a>) which seems to call <code>gmshWrite</code>, going in a full circle. I'm not sure if I'm looking at the wrong function definitions.</p>
<blockquote>
<p>What kind of ASCII output do you expect? This is binary data, not text, right?</p>
</blockquote>
<p>I expect to be able to recover the ASCII data saved into <code>t1_ascii.msh</code> when I set <code>binary</code> to <code>False</code>.</p>
|
<python><binary-data><gmsh>
|
2024-09-27 22:10:23
| 1
| 9,583
|
jared
|
79,032,931
| 7,693,707
|
SciPy curve_fit covariance of the parameters could not be estimated
|
<p>I am trying to estimate the Schott coefficients of a glass material given only its <code>n_e</code>(refraction index at <code>e</code> line) and <code>V_e</code>(<a href="https://en.wikipedia.org/wiki/Abbe_number" rel="nofollow noreferrer">Abbe number</a> at <code>e</code> line).</p>
<p>Schott is one way to represent the dispersion of a material, i.e. how the index of refraction (RI) varies with wavelength.</p>
<p><a href="https://i.sstatic.net/AMgv7G8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AMgv7G8J.png" alt="Glass dispersion plot" /></a></p>
<p>In the figure above, the horizontal axis is the wavelength (in micrometer) and the vertical axis is index of refraction (This figure is based on the glass type named <code>KZFH1</code>).</p>
<p>Because glass dispersion has a common shape (higher at shorter wavelengths, then tapering down), and the RI at key points (<a href="https://en.wikipedia.org/wiki/Fraunhofer_lines" rel="nofollow noreferrer">Fraunhofer lines</a>) follows a stable relationship, my thought is that I can use the definition of the Abbe number and the general relation between the RIs at different Fraunhofer lines to create some data points, and use them to fit a curve:</p>
<pre><code>import numpy as np
from scipy.optimize import curve_fit
# Definition of the Schott function
def _InvSchott(x, a0, a1, a2, a3, a4, a5):
return np.sqrt(a0 + a1* x**2 + a2 * x**(-2) + a3 * x**(-4) + a4 * x**(-6) + a5 * x**(-8))
# Sample input, material parameter from a Leica Summilux patent
n = 1.7899
V = 48
# 6 wavelengths, Fraunhofer symbols are not used due to there is another version that uses n_d and V_d
shorter = 479.99
short = 486.13
neighbor = 546.07
middle = 587.56
longc = 643.85
longer = 656.27
# Refraction index of the corresponding wavelengths.
# The linear functions are acquired from external regressions from 2000 glass materials
n_long = 0.984 * n + 0.0246 # n_C'
n_shorter = ( (n-1) / V) + n_long # n_F', from the definition of Abbe number
n_short = 1.02 * n -0.0272 # n_F
n_neighbor = n # n_e
n_mid = 1.013 * n - 0.0264 # n_d
n_longer = 0.982 * n + 0.0268 # n_C
# The /1000 is to convert the wavelength from nanometer to micrometers
x_data = np.array([longer, longc, middle, neighbor, short, shorter]) / 1000.0
y_data = np.array([n_longer, n_long, n_mid, n_neighbor, n_short, n_shorter])
# Provided estimate are average value from the 2000 Schott glasses
popt, pcov = curve_fit(_InvSchott, x_data, y_data, [2.75118, -0.01055, 0.02357, 0.00084, -0.00003, 0.00001])
</code></pre>
<p>The <code>x_data</code> and <code>y_data</code> in this case are as follow:</p>
<pre><code>[0.65627 0.64385 0.58756 0.54607 0.48613 0.47999]
[1.7844818 1.7858616 1.7867687 1.7899 1.798498 1.80231785]
</code></pre>
<p>And then I got the warning <code>OptimizeWarning: Covariance of the parameters could not be estimated</code>. The fitted parameters came back as nothing but <code>[inf inf inf inf inf inf]</code>.</p>
<p>I know this question has been asked a lot, but I have not found a solution that works in this case yet. Six data points is certainly sparse, but it does satisfy the minimum, and the Schott function is continuous, so I cannot figure out which part went wrong.</p>
<p>TLDR:</p>
<p>How do I find the coefficients for the function</p>
<pre><code>def _InvSchott(x, a0, a1, a2, a3, a4, a5):
return np.sqrt(a0 + a1* x**2 + a2 * x**(-2) + a3 * x**(-4) + a4 * x**(-6) + a5 * x**(-8))
</code></pre>
<p>that fits the data below:</p>
<pre><code>x: [0.65627 0.64385 0.58756 0.54607 0.48613 0.47999]
y: [1.7844818 1.7858616 1.7867687 1.7899 1.798498 1.80231785]
</code></pre>
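<p>One possible reason for the warning is that <code>curve_fit</code> wanders into parameter regions where the argument of the square root goes negative and produces NaNs. A workaround worth noting (a sketch, not a drop-in fix for your pipeline): the Schott formula is <em>linear</em> in the coefficients once you fit <code>n^2</code> instead of <code>n</code>, so plain linear least squares solves it with no nonlinear solver at all:</p>

```python
import numpy as np

# Sketch: fit n^2 = a0 + a1*x^2 + a2*x^-2 + a3*x^-4 + a4*x^-6 + a5*x^-8,
# which is linear in (a0..a5).
x = np.array([0.65627, 0.64385, 0.58756, 0.54607, 0.48613, 0.47999])
y = np.array([1.7844818, 1.7858616, 1.7867687, 1.7899, 1.798498, 1.80231785])

# Design matrix: one column per basis function of the Schott expansion
A = np.column_stack([np.ones_like(x), x**2, x**-2, x**-4, x**-6, x**-8])
coeffs, residuals, rank, sv = np.linalg.lstsq(A, y**2, rcond=None)

n_fit = np.sqrt(A @ coeffs)  # model prediction at the data wavelengths
```

With six points and six coefficients the system is exactly determined, so the fit reproduces the data; whether the coefficients extrapolate sensibly outside 0.48–0.66 µm is a separate question.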
|
<python><scipy><curve-fitting>
|
2024-09-27 21:16:16
| 2
| 1,090
|
Amarth GΓ»l
|
79,032,873
| 418,413
|
How to arbitrarily nest some data in a django rest framework serializer
|
<p>An existing client is already sending data in a structure likeβ¦</p>
<pre><code>{
"hive_metadata": {"name": "hive name"},
"bees": [{"name": "bee 1", "name": "bee 2", ...}]
}
</code></pre>
<p>For models like:</p>
<pre><code>class Hive(models.Model):
name = models.CharField(max_length=32, help_text="name")
class Bee(models.Model):
name = models.CharField(max_length=32, help_text="name")
hive = models.ForeignKey(
Hive, help_text="The Hive associated with this Bee", on_delete=models.CASCADE
)
</code></pre>
<p>The code that makes this possible manually iterates over the incoming data. I would like to rewrite it using a django rest framework serializer; however, the fact that <code>hive_metadata</code> is nested itself has stumped me so far.</p>
<p>If I write</p>
<pre><code>class BeesSerializer(ModelSerializer):
class Meta:
model = models.Bee
fields = ("name",)
class PopulatedHiveSerializer(ModelSerializer):
bees = BeesSerializer(many=True, source="bee_set")
class Meta:
model = models.Hive
fields = ("name","bees",)
</code></pre>
<p>would produce</p>
<pre><code>{
"name": "hive name",
"bees": [{"name": "bee 1", "name": "bee 2", ...}]
}
</code></pre>
<p>readily enough. I had hoped I could solve it with a reference to a sub-serializer, something like</p>
<pre><code>class HiveMetaDataSerializer(ModelSerializer):
class Meta:
model = models.Hive
fields = ("name",)
class PopulatedHiveSerializer(ModelSerializer):
bees = BeesSerializer(many=True, source="bee_set")
hive_metadata = HiveMetaDataSerializer(source=???)
class Meta:
model = models.Hive
fields = ("hive_metadata","bees",)
</code></pre>
<p>but I can't seem to figure out what to put in the "source" so that the same object is passed through the outer serializer into the inner.</p>
<p>So, is there a way to do this using a django rest framework serializer?</p>
|
<python><django><django-rest-framework><django-serializer>
|
2024-09-27 20:52:56
| 1
| 77,713
|
kojiro
|
79,032,867
| 6,775,670
|
Coroutine function generic type alias
|
<p>I'm trying to achieve a typing shorthand for coroutine functions, analogous to plain callables like <code>Callable[[..args..], ..return value..]</code>.</p>
<p>My idea is to write something like the following, but I cannot understand how the <code>TypeAlias</code> would later know which arguments bind to <code>P</code> and which to <code>RV</code>...</p>
<pre><code>P = ParamSpec('P')
RV = TypeVar('RV')
CoroutineFunction: TypeAlias = Callable[P, Awaitable[RV]]
</code></pre>
<p>Then, I want to use it as follows</p>
<pre><code># this is just sample coroutine
async def my_afunc(a: int, b: str) -> bool:
return bool(a or b)
# this is usage example
def some_decorator(afunc: CoroutineFunction[[int, str], bool]):
...
# this should handle type checking correctly,
# and warn me if I change the return value type for my_afunc
smth = some_decorator(my_afunc)
</code></pre>
<p>I'm using Python 3.11, but 3.12 ideas are also appreciated.</p>
|
<python><pycharm><python-typing>
|
2024-09-27 20:45:14
| 0
| 1,312
|
Nikolay Prokopyev
|
79,032,841
| 825,227
|
Parse dataframe into list of subinterval dataframes for processing
|
<p>I have a dataframe, <strong>esu_tos</strong>:</p>
<pre><code>Time TickType Price Size
2023-08-13 15:10:46.166 1 4487.25 1
2023-08-13 15:10:47.375 1 4487 1
2023-08-13 15:10:54.656 1 4487.25 2
2023-08-13 15:10:57.627 1 4487 1
2023-08-13 15:10:57.628 1 4487 1
2023-08-13 15:10:57.628 1 4487 1
2023-08-13 15:11:00.759 1 4487.25 1
2023-08-13 15:11:00.759 1 4487 3
2023-08-13 15:11:01.415 1 4487 3
2023-08-13 15:11:01.416 1 4487 1
</code></pre>
<p>I'd like to parse it into sub-intervals like below. Is there a better way to do this aside from the loop?</p>
<pre><code>interval = '5s' # 1min / 30s / 10s / 5s
st = datetime(2023,8,13,15,10)
end = datetime(2023,8,14,14,1)
rng = pd.date_range(st, end, freq = interval, inclusive='both')
d = []
d_t = []
for i, k in enumerate(rng):
try:
a = esu_tos[(esu_tos.Time >= k) & (esu_tos.Time < rng[i+1])]
except:
a = esu_tos[(esu_tos.Time >= k) & (esu_tos.Time <= end)]
d.append(a)
d_t.append(k)
</code></pre>
<p><strong>Looking specifically to return a list or other object which will allow me to access the data aggregated to each interval -- not simply as <code>pd.Grouper</code> or <code>groupby</code> object.</strong></p>
<p>For instance, the included code creates a list of dataframes, which I'm subsequently able to access and process individually.</p>
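<p>One possible shape (a sketch on tiny synthetic data, not your <code>esu_tos</code>) is to group once on a fixed-frequency <code>pd.Grouper</code> and then materialize the groups into a plain list, which yields the same list of per-interval DataFrames as the loop without building a boolean mask per interval:</p>

```python
import pandas as pd

# Tiny synthetic stand-in for the tick data
df = pd.DataFrame({
    "Time": pd.to_datetime([
        "2023-08-13 15:10:46.166", "2023-08-13 15:10:47.375",
        "2023-08-13 15:10:54.656", "2023-08-13 15:10:57.627",
    ]),
    "Price": [4487.25, 4487.0, 4487.25, 4487.0],
})

# Materialize each 5-second bucket as its own DataFrame
buckets = [(start, g) for start, g in df.groupby(pd.Grouper(key="Time", freq="5s"))]
d_t = [start for start, _ in buckets]  # interval start times
d = [g for _, g in buckets]            # per-interval DataFrames
```

Note that <code>pd.Grouper</code> bins like <code>resample</code> does, so the bucket edges are aligned to the frequency rather than to your explicit <code>date_range</code>.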
|
<python>
|
2024-09-27 20:36:48
| 1
| 1,702
|
Chris
|
79,032,813
| 718,057
|
How do I tell mypy that except() will bail a pytest test?
|
<p>The following code works, but I feel like I shouldn't have to write it:</p>
<pre class="lang-py prettyprint-override"><code>def test_transaction(self, the_client):
[...]
transaction: Transaction = transactions.filter([...]).first()
# NOTE: this is stupid and unnecessary just to satisfy mypy.
# the next expect() will bail if this test is true,
# so this if statement is completely superfluous
if transaction is None:
raise AssertionError("missing transaction")
# I want to tell mypy this works like the above if statement
expect(transaction).not_to(be_none())
# if the if statement above isn't there, it tells me
# None | Transaction doesn't have the property "client"
expect(transaction.client).to(equal(the_client))
[...]
</code></pre>
<p>Is there an easier way to do this that will satisfy mypy? I have 1000+ tests like this and I don't want to add 2000+ more lines of completely unnecessary, useless code just to make a freaking code checker happy.</p>
<p>I have the django and drf stubs installed.</p>
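<p>For reference, a bare <code>assert</code> is the one-line construct mypy narrows on (sketch with a stand-in function, since your models aren't reproduced here); whether <code>expect(...).not_to(be_none())</code> can ever narrow depends on the expectation library shipping a <code>TypeGuard</code>-style annotation, which most do not:</p>

```python
from typing import Optional

def first_or_none() -> Optional[int]:
    # Stand-in for transactions.filter(...).first()
    return 42

value = first_or_none()
assert value is not None  # mypy narrows Optional[int] to int from here on
doubled = value * 2       # no "None has no attribute ..." complaint
```

Under pytest, a failing <code>assert</code> bails the test just like the <code>expect</code> call would, so the narrowing line and the runtime check collapse into one.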
|
<python><django><testing><python-typing><mypy>
|
2024-09-27 20:26:01
| 0
| 626
|
Skaman Sam
|
79,032,773
| 277,329
|
Stuck on scraping with BeautifulSoup while learning. Need some pointers
|
<p>I started learning screen scraping using BeautifulSoup. To get started I took a wikipedia article in the following format</p>
<pre><code><table class="wikitable sortable jquery-tablesorter">
<caption></caption>
<thead>
<tr>
<th colspan="2" style="width: 6%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Opening</th>
<th style="width: 20%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Title</th>
<th style="width: 10%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Director</th>
<th style="width: 45%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Cast</th>
<th style="width: 30%;" class="headerSort" tabindex="0" role="columnheader button" title="Sort ascending">Production company</th>
<th class="unsortable" style="width: 1%;"><abbr title="Reference(s)">Ref.</abbr></th>
</tr>
</thead>
<tbody>
<tr>
<td rowspan="3" style="text-align: center; background: #77bc83;">
<b>
O<br />
C<br />
T
</b>
</td>
<td rowspan="1" style="text-align: center; background: #77bc83;"><b>11</b></td>
<td style="text-align: center;">
<i><a href="/wiki/Viswam_(film)" title="Viswam (film)">Viswam</a></i>
</td>
<td>Sreenu Vaitla</td>
<td>
<link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" />
<div class="hlist">
<ul>
<li><a href="/wiki/Gopichand_(actor)" title="Gopichand (actor)">Gopichand</a></li>
<li><a href="/wiki/Kavya_Thapar" title="Kavya Thapar">Kavya Thapar</a></li>
<li><a href="/wiki/Vennela_Kishore" title="Vennela Kishore">Vennela Kishore</a></li>
<li><a href="/wiki/Sunil" title="Sunil">Sunil</a></li>
<li><a href="/wiki/Naresh" title="Naresh">Naresh</a></li>
</ul>
</div>
</td>
<td>
Chitralayam Studios<br />
People Media Factory
</td>
<td style="text-align: center;">
<sup id="cite_ref-180" class="reference">
<a href="#cite_note-180"><span class="cite-bracket">[</span>178<span class="cite-bracket">]</span></a>
</sup>
</td>
</tr>
<tr>
<td rowspan="2" style="text-align: center; background: #77bc83;"><b>31</b></td>
<td style="text-align: center;">
<i><a href="/wiki/Lucky_Baskhar" title="Lucky Baskhar">Lucky Baskhar</a></i>
</td>
<td><a href="/wiki/Venky_Atluri" title="Venky Atluri">Venky Atluri</a></td>
<td>
<link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" />
<div class="hlist">
<ul>
<li><a href="/wiki/Dulquer_Salmaan" title="Dulquer Salmaan">Dulquer Salmaan</a></li>
<li><a href="/wiki/Meenakshi_Chaudhary" title="Meenakshi Chaudhary">Meenakshi Chaudhary</a></li>
</ul>
</div>
</td>
<td><a href="/wiki/S._Radha_Krishna" title="S. Radha Krishna">Sithara Entertainments</a></td>
<td style="text-align: center;">
<sup id="cite_ref-181" class="reference">
<a href="#cite_note-181"><span class="cite-bracket">[</span>179<span class="cite-bracket">]</span></a>
</sup>
</td>
</tr>
<tr>
<td style="text-align: center;">
<i><a href="/wiki/Mechanic_Rocky" title="Mechanic Rocky">Mechanic Rocky</a></i>
</td>
<td>Ravi Teja Mullapudi</td>
<td>
<link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" />
<div class="hlist">
<ul>
<li><a href="/wiki/Vishwak_Sen" title="Vishwak Sen">Vishwak Sen</a></li>
<li><a href="/wiki/Meenakshi_Chaudhary" title="Meenakshi Chaudhary">Meenakshi Chaudhary</a></li>
</ul>
</div>
</td>
<td>SRT Entertainments</td>
<td style="text-align: center;">
<sup id="cite_ref-182" class="reference">
<a href="#cite_note-182"><span class="cite-bracket">[</span>180<span class="cite-bracket">]</span></a>
</sup>
</td>
</tr>
<tr>
<td style="text-align: center; background: #77ea83;">
<b>
N<br />
O<br />
V
</b>
</td>
<td style="text-align: center; background: #77ea83;"><b>9</b></td>
<td style="text-align: center;">
<i><a href="/wiki/Telusu_Kada" title="Telusu Kada">Telusu Kada</a></i>
</td>
<td><a href="/wiki/Neeraja_Kona" title="Neeraja Kona">Neeraja Kona</a></td>
<td>
<link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" />
<div class="hlist">
<ul>
<li><a href="/wiki/Siddhu_Jonnalagadda" title="Siddhu Jonnalagadda">Siddhu Jonnalagadda</a></li>
<li><a href="/wiki/Raashii_Khanna" title="Raashii Khanna">Raashii Khanna</a></li>
<li><a href="/wiki/Srinidhi_Shetty" title="Srinidhi Shetty">Srinidhi Shetty</a></li>
</ul>
</div>
</td>
<td>People Media Factory</td>
<td style="text-align: center;">
<sup id="cite_ref-183" class="reference">
<a href="#cite_note-183"><span class="cite-bracket">[</span>181<span class="cite-bracket">]</span></a>
</sup>
</td>
</tr>
<tr>
<td rowspan="2" style="text-align: center; background: #f4ca16; textcolor: #000;">
<b>
D<br />
E<br />
C
</b>
</td>
<td rowspan="1" style="text-align: center; background: #f8de7e;"><b>6</b></td>
<td style="text-align: center;">
<i><a href="/wiki/Pushpa_2:_The_Rule" title="Pushpa 2: The Rule">Pushpa 2: The Rule</a></i>
</td>
<td><a href="/wiki/Sukumar" title="Sukumar">Sukumar</a></td>
<td>
<link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" />
<div class="hlist">
<ul>
<li><a href="/wiki/Allu_Arjun" title="Allu Arjun">Allu Arjun</a></li>
<li><a href="/wiki/Fahadh_Faasil" title="Fahadh Faasil">Fahadh Faasil</a></li>
<li><a href="/wiki/Rashmika_Mandanna" title="Rashmika Mandanna">Rashmika Mandanna</a></li>
</ul>
</div>
</td>
<td><a href="/wiki/Mythri_Movie_Makers" title="Mythri Movie Makers">Mythri Movie Makers</a></td>
<td style="text-align: center;">
<sup id="cite_ref-184" class="reference">
<a href="#cite_note-184"><span class="cite-bracket">[</span>182<span class="cite-bracket">]</span></a>
</sup>
</td>
</tr>
<tr>
<td rowspan="1" style="text-align: center; background: #f8de7e;"><b>20</b></td>
<td style="text-align: center;"><i>Robinhood</i></td>
<td><a href="/wiki/Venky_Kudumula" title="Venky Kudumula">Venky Kudumula</a></td>
<td>
<link rel="mw-deduplicated-inline-style" href="mw-data:TemplateStyles:r1129693374" />
<div class="hlist">
<ul>
<li><a href="/wiki/Nithiin" title="Nithiin">Nithiin</a></li>
<li><a href="/wiki/Sreeleela" title="Sreeleela">Sreeleela</a></li>
</ul>
</div>
</td>
<td><a href="/wiki/Mythri_Movie_Makers" title="Mythri Movie Makers">Mythri Movie Makers</a></td>
<td style="text-align: center;">
<sup id="cite_ref-185" class="reference">
<a href="#cite_note-185"><span class="cite-bracket">[</span>183<span class="cite-bracket">]</span></a>
</sup>
</td>
</tr>
</tbody>
<tfoot></tfoot>
</table>
</code></pre>
<p>This is my python script that I wrote:</p>
<pre><code>soup = BeautifulSoup(html_page, "html.parser")
tables = soup.find_all("table",{"class":"wikitable sortable"})
headers = ['month','day','movie','director','cast','producer','reference']
movie_tables = []
total_movies = 0
for table in tables:
caption = table.find("caption")
if not caption or not caption.get_text().strip():
movie_tables.append(table)
#captions = soup.find_all("caption")
max_columns = len(headers)
# List to store dictionaries
data_dict_list = []
movies= []
for movie_table in movie_tables:
table_rows = movie_table.find("tbody").find_all("tr")[1:]
for table_row in table_rows:
total_movies += 1
columns = table_row.find_all('td')
row_data = [col.get_text(strip=True) for col in columns]
# If the row has fewer columns than the max, pad it with None
if len(row_data) == 6:
row_data.insert(0, None)
elif len(row_data) == 5:
row_data.insert(0, None)
row_data.insert(1, None)
for col in columns:
li_tags = col.find_all('li')
if li_tags:
cast=""
for li in li_tags:
li_values = li.get_text(strip=True)
cast = ', '.join(li_values)
row_data.append(cast)
else:
row_data.append(col.get_text())
# Create a dictionary mapping headers to row data
row_dict = dict(zip(headers, row_data))
# Append the dictionary to the list
data_dict_list.append(row_dict)
# Print the list of dictionaries
for row_dict in data_dict_list:
print(row_dict)
</code></pre>
<p>This is the output I am getting (Just showing a few items here):</p>
<pre><code>{'month': 'OCT', 'day': '11', 'movie': 'Viswam', 'director': 'Sreenu Vaitla', 'cast': 'GopichandKavya ThaparVennela KishoreSunilNaresh', 'producer': 'Chitralayam StudiosPeople Media Factory', 'reference': '[178]'}
{'month': None, 'day': '31', 'movie': 'Lucky Baskhar', 'director': 'Venky Atluri', 'cast': 'Dulquer SalmaanMeenakshi Chaudhary', 'producer': 'Sithara Entertainments', 'reference': '[179]'}
{'month': None, 'day': None, 'movie': 'Mechanic Rocky', 'director': 'Ravi Teja Mullapudi', 'cast': 'Vishwak SenMeenakshi Chaudhary', 'producer': 'SRT Entertainments', 'reference': '[180]'}
{'month': 'NOV', 'day': '9', 'movie': 'Telusu Kada', 'director': 'Neeraja Kona', 'cast': 'Siddhu JonnalagaddaRaashii KhannaSrinidhi Shetty', 'producer': 'People Media Factory', 'reference': '[181]'}
{'month': 'DEC', 'day': '6', 'movie': 'Pushpa 2: The Rule', 'director': 'Sukumar', 'cast': 'Allu ArjunFahadh FaasilRashmika Mandanna', 'producer': 'Mythri Movie Makers', 'reference': '[182]'}
{'month': None, 'day': '20', 'movie': 'Robinhood', 'director': 'Venky Kudumula', 'cast': 'NithiinSreeleela', 'producer': 'Mythri Movie Makers', 'reference': '[183]'}
</code></pre>
<p>This is what I am trying to get(Just showing the last item here):</p>
<pre><code>{'month': 'DEC', 'day': '20', 'movie': 'Robinhood', 'director': 'Venky Kudumula', 'cast': 'Nithiin|Sreeleela', 'producer': 'Mythri Movie Makers', 'reference': '[183]'}
</code></pre>
<p>I've been trying to debug this for the last day or so but I cannot figure out where I went wrong.</p>
<p>I am expecting:</p>
<ol>
<li>the month, day filled up in all the items when those columns span multiple rows and are not represented in all rows.</li>
<li>Next, I want to have a separator between the different cast members so that it will be easy for me to create a graph later.</li>
<li>Also, how do I extract the hyperlinks and store in a separate key in the dictionary while doing all of this?</li>
</ol>
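<p>Two of the three points can be sketched independently of the HTML parsing (illustrative data, not your full scraper): carry the last seen value forward when a rowspan cell is absent from a row, and join the per-<code>&lt;li&gt;</code> strings with an explicit separator; note that <code>', '.join(li_values)</code> joins the <em>characters</em> of a single string:</p>

```python
# Fill-forward for rowspan columns: a None means the cell is covered by a
# rowspan from an earlier row, so reuse the last seen value for that column.
rows = [
    {'month': 'OCT', 'day': '11', 'movie': 'Viswam'},
    {'month': None,  'day': '31', 'movie': 'Lucky Baskhar'},
    {'month': None,  'day': None, 'movie': 'Mechanic Rocky'},
]
last = {}
for row in rows:
    for key in ('month', 'day'):
        if row[key] is None:
            row[key] = last.get(key)
        else:
            last[key] = row[key]

# Joining list items: join the per-<li> strings, not one string's characters
names = ['Nithiin', 'Sreeleela']
cast = '|'.join(names)
```

For the hyperlinks, the same per-cell loop can collect <code>a['href']</code> values into a separate key, but that depends on which columns you want links from.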
|
<python><web-scraping><beautifulsoup>
|
2024-09-27 20:08:15
| 1
| 2,212
|
motiver
|
79,032,746
| 7,387,243
|
no such option: --platform linux-x86-64 when running pip install --platform
|
<p>I get an error:</p>
<blockquote>
<p>no such option: --platform linux-x86-64</p>
</blockquote>
<p>when using:
<code>python -m pip install --platform linux-x86-64</code>.</p>
<p><code>python -m pip --version</code> shows 24.2.</p>
<p><code>python -m pip install --help</code> shows --platform argument in the list.</p>
<p>I can't find literally any results on Google; is there anything I am doing fundamentally wrong?</p>
<p>For context full command looks like this:</p>
<p><code>python -m pip install -r requirements.txt --only-binary=:all: --target ./python_example_app --platform linux-x86_64</code></p>
|
<python><pip>
|
2024-09-27 19:55:06
| 1
| 777
|
Telion
|
79,032,626
| 607,722
|
What does _weakrefset calls in python stack trace mean?
|
<p>I captured stack traces of a running Python process and noticed many <code>_weakrefset.py</code> calls at the end of the trace.</p>
<p>These <code>_remove</code> calls are never explicitly called in the code.</p>
<p>Does anyone know what this is about?</p>
<p>Samples:</p>
<pre><code>/usr/local/lib/python3.11/dataclasses.py, line 1284, in asdict return _asdict_inner(obj, dict_factory)
File: /usr/local/lib/python3.11/dataclasses.py, line 1291, in _asdict_inner value = _asdict_inner(getattr(obj, f.name), dict_factory)
File: /usr/local/lib/python3.11/dataclasses.py, line 1290, in _asdict_inner for f in fields(obj):
File: /usr/local/lib/python3.11/dataclasses.py, line 1248, in fields return tuple(f for f in fields.values() if f._field_type is _FIELD)
File: /usr/local/lib/python3.11/_weakrefset.py, line 39, in _remove def _remove(item, selfref=ref(self))
</code></pre>
<pre><code>File: /usr/local/lib/python3.11/site-packages/ddtrace/internal/compat.py, line 163, in func_wrapper result = await coro(*args, **kwargs)
File: /usr/local/lib/python3.11/site-packages/integrity_rules_engine/core/rule_evaluator.py, line 57, in evaluate_rules rule_results_and_exceptions = await asyncio.gather(
File: /usr/local/lib/python3.11/asyncio/tasks.py, line 817, in gather fut = _ensure_future(arg, loop=loop)
File: /usr/local/lib/python3.11/asyncio/tasks.py, line 670, in _ensure_future return loop.create_task(coro_or_future)
File: /usr/local/lib/python3.11/_weakrefset.py, line 39, in _remove def _remove(item, selfref=ref(self)
</code></pre>
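<p>For context: <code>_remove</code> in <code>_weakrefset.py</code> is the callback a <code>WeakSet</code> registers on every weak reference it stores, so the interpreter invokes it implicitly whenever a member is garbage-collected; it can therefore surface in a stack sample at essentially any point, with no explicit call in user code. A minimal sketch:</p>

```python
import weakref

class Node:
    pass

s = weakref.WeakSet()
n = Node()
s.add(n)

# Dropping the last strong reference triggers WeakSet's internal _remove
# callback (defined in _weakrefset.py); nothing in user code calls it.
del n
```

In CPython the refcount hits zero immediately on <code>del</code>, so the dead entry is purged from the set right away.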
|
<python>
|
2024-09-27 19:05:52
| 1
| 1,686
|
user607722
|
79,032,620
| 279,097
|
Polars: series to column broadcast
|
<p>This code used to work with polars < 1:</p>
<pre><code>df.with_columns(pl.Series('k_list', [0, 1])).explode('k_list')
</code></pre>
<p>But now I have this error:
<code>Series k_list, length 2 doesn't match the dataframe height of ...</code></p>
<p>What is the proper way to add a series for each row of a column?</p>
|
<python><python-polars>
|
2024-09-27 19:01:38
| 1
| 415
|
Mac Fly
|
79,032,568
| 774,575
|
How to force a lambda function to return an object rather than the None value of the last method on the object?
|
<p>Sometimes I cannot use a lambda function on an object because the last method applied to the object returns <code>None</code>. What is the trick to be able to return the object itself instead of the method returned value?</p>
<p>Example, I want to get the updated copy of the dictionary instead of <code>None</code>:</p>
<pre><code>dict_update = lambda d, upd: d.copy().update(upd)
d = dict(lw=3, ls='-')
upd = dict(c='C0', lw=4)
d1 = dict_update(d, upd)
for v in [d, upd, d1]: print(v)
</code></pre>
<hr />
<pre><code>{'lw': 3, 'ls': '-'}
{'c': 'C0', 'lw': 4}
None
</code></pre>
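<p>For comparison, the usual trick is to build the merged dict as a single expression instead of calling the in-place <code>update()</code>, which always returns <code>None</code>:</p>

```python
# Unpack both dicts into a new one; later keys win, the originals are untouched.
dict_update = lambda d, upd: {**d, **upd}  # or, on Python 3.9+: d | upd

d = dict(lw=3, ls='-')
upd = dict(c='C0', lw=4)
d1 = dict_update(d, upd)
```

This sidesteps the general problem: there is no way to make a method that returns <code>None</code> yield the object inside a single lambda, so the idiom is to use an expression that produces the new object directly.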
|
<python><lambda>
|
2024-09-27 18:39:21
| 1
| 7,768
|
mins
|
79,032,509
| 3,655,069
|
png file not creating in visual studio code for mingrammer diagram
|
<p>I am running a sample mingrammer diagram Python code provided <a href="https://diagrams.mingrammer.com/docs/getting-started/examples" rel="nofollow noreferrer">here</a> in Microsoft Visual Studio Code. It should create a <code>.png</code> file as output, but instead I am getting a file with text content, like below:</p>
<p>I am using this python code:</p>
<pre><code># diagram.py
from diagrams import Diagram
from diagrams.aws.compute import EC2
from diagrams.aws.database import RDS
from diagrams.aws.network import ELB
with Diagram("Web Service", show=False):
ELB("lb") >> EC2("web") >> RDS("userdb")
</code></pre>
<p>I should get <code>web_service.png</code> file as per <a href="https://diagrams.mingrammer.com/docs/getting-started/installation#quick-start" rel="nofollow noreferrer">documentation</a>. But I am getting "web_service" file:</p>
<pre><code>digraph "Web Service" {
graph [fontcolor="#2D3436" fontname="Sans-Serif" fontsize=15 label="Web Service" nodesep=0.60 pad=2.0 rankdir=LR ranksep=0.75 splines=ortho]
node [fixedsize=true fontcolor="#2D3436" fontname="Sans-Serif" fontsize=13 height=1.4 imagescale=true labelloc=b shape=box style=rounded width=1.4]
edge [color="#7B8894"]
"55c1dbf2b98947439179362b1fbdbf0b" [label=lb height=1.9 image="C:\Users\ravim\OneDrive\Desktop\mypython\.venv\Lib\site-packages\resources/aws/network\elastic-load-balancing.png" shape=none]
db6288db747346ceaec598588b40c452 [label=web height=1.9 image="C:\Users\ravim\OneDrive\Desktop\mypython\.venv\Lib\site-packages\resources/aws/compute\ec2.png" shape=none]
"55c1dbf2b98947439179362b1fbdbf0b" -> db6288db747346ceaec598588b40c452 [dir=forward fontcolor="#2D3436" fontname="Sans-Serif" fontsize=13]
b60dbfb720a14392b9969c69bb9653f9 [label=userdb height=1.9 image="C:\Users\ravim\OneDrive\Desktop\mypython\.venv\Lib\site-packages\resources/aws/database\rds.png" shape=none]
db6288db747346ceaec598588b40c452 -> b60dbfb720a14392b9969c69bb9653f9 [dir=forward fontcolor="#2D3436" fontname="Sans-Serif" fontsize=13]
}
</code></pre>
|
<python><python-3.x>
|
2024-09-27 18:14:11
| 0
| 3,594
|
Sun
|
79,032,494
| 14,958,374
|
User actions graph layout calculation
|
<p>So, I have a weighted directed graph. Each node represents a page in a web site, each edge represents a user action moving from one page to another, and the weight is the number of users who performed that action. I need to calculate a good layout for this graph that will be informative for visual analysis: I want to see bottlenecks and popular user paths (there are <code>start</code> and <code>end</code> nodes). I want my solution to be in Python. What can I do?</p>
<p>Upd: I've tried the Sugiyama, Kamada-Kawai and Fruchterman-Reingold algorithms, but they do not seem very informative, as my graph is strongly connected and they create a big unreadable mess (even with ~1.5k edges and about 30 nodes).</p>
|
<python><ocr><directed-graph><graph-layout><user-activity>
|
2024-09-27 18:08:08
| 0
| 331
|
Nick Zorander
|
79,032,225
| 562,970
|
Aggregating output from langchain LCEL elements
|
<p>I have two chains, one that generates a document and one that creates a short document resume. I want to chain them, using the output from the first on inside the other one. But I want to get both outputs in the result.</p>
<p>Before LCEL, I could do it using LLMChain's output_key parameter. With LCEL, there seems to be a RunnablePassthrough class, but I don't seem to get how to use it to aggregate the output.
Code example:</p>
<pre><code>generate_document_chain = generate_document_prompt | llm | StrOutputParser()
resume_document_chain = resume_document_prompt | llm | StrOutputParser()
aggregated_chain = generate_document_chain | resume_document_chain
content = aggregated_chain.invoke({"topic": topic})
</code></pre>
|
<python><langchain><py-langchain>
|
2024-09-27 16:30:32
| 1
| 1,546
|
Igor Deruga
|
79,031,970
| 15,520,615
|
Unable to generate ERD diagram with Python code
|
<p>The following Python is designed to generate an ERD using Visual Studio Code.</p>
<p>The chart is to be created locally with matplotlib. The code executes without any errors; however, the ERD diagram shows up blank.</p>
<p>The python code is as follows:</p>
<pre><code>import matplotlib.pyplot as plt
# Define the entities and their attributes for the ERD
entities = {
"Customer": ["CustomerID (PK)", "CustomerName", "ContactInfo"],
"CreditCardAccount": ["AccountID (PK)", "AccountStatus", "Balance", "CustomerID (FK)"],
"CreditCard": ["CardID (PK)", "CardNumber", "ExpiryDate", "AccountID (FK)", "BrandID (FK)"],
"CreditCardBrand": ["BrandID (PK)", "BrandName", "CardType"],
"SecondaryCardHolder": ["SecondaryHolderID (PK)", "HolderName", "RelationToPrimary", "AccountID (FK)"],
"PurchaseTransaction": ["TransactionID (PK)", "TransactionDate", "Amount", "CardID (FK)", "RetailerID (FK)"],
"Retailer": ["RetailerID (PK)", "RetailerName", "Location"],
"MonthlyStatement": ["StatementID (PK)", "StatementDate", "OutstandingBalance", "AccountID (FK)"],
"CustomerServiceInteraction": ["InteractionID (PK)", "InteractionDate", "Notes", "CustomerID (FK)"],
}
# Relationships between entities
relationships = [
("Customer", "CreditCardAccount", "1:M"),
("CreditCardAccount", "CreditCard", "1:M"),
("CreditCard", "CreditCardBrand", "M:1"),
("CreditCardAccount", "SecondaryCardHolder", "1:M"),
("CreditCard", "PurchaseTransaction", "1:M"),
("PurchaseTransaction", "Retailer", "M:1"),
("CreditCardAccount", "MonthlyStatement", "1:M"),
("Customer", "CustomerServiceInteraction", "1:M"),
]
# Plotting the ERD
fig, ax = plt.subplots(figsize=(12, 8))
# Define positions for the entities
positions = {
"Customer": (1, 5),
"CreditCardAccount": (4, 5),
"CreditCard": (7, 5),
"CreditCardBrand": (10, 5),
"SecondaryCardHolder": (4, 3),
"PurchaseTransaction": (7, 3),
"Retailer": (10, 3),
"MonthlyStatement": (4, 1),
"CustomerServiceInteraction": (1, 3),
}
# Draw entities as boxes
for entity, position in positions.items():
plt.text(position[0], position[1], f"{entity}\n" + "\n".join(entities[entity]),
ha='center', va='center', bbox=dict(facecolor='lightblue', edgecolor='black', boxstyle='round,pad=0.5'))
# Draw relationships as lines
for rel in relationships:
start_pos = positions[rel[0]]
end_pos = positions[rel[1]]
ax.annotate("",
xy=end_pos, xycoords='data',
xytext=start_pos, textcoords='data',
arrowprops=dict(arrowstyle="->", lw=1.5, color='black'),
)
# Add cardinality
midpoint = ((start_pos[0] + end_pos[0]) / 2, (start_pos[1] + end_pos[1]) / 2)
ax.text(midpoint[0], midpoint[1], rel[2], ha='center', va='center', fontsize=10)
# Hide axes
ax.set_axis_off()
# Show the ERD diagram
plt.title("Entity Relationship Diagram (ERD) for Credit Card Company", fontsize=16)
plt.show()
</code></pre>
<p>The output is as follows:
<a href="https://i.sstatic.net/8lYMYZTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8lYMYZTK.png" alt="enter image description here" /></a></p>
<p>Can someone let me know why the ERD won't appear?</p>
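<p>One thing worth checking (an assumption on my part, since the snippet never plots any data): <code>plt.text</code> and <code>ax.annotate</code> use data coordinates, and with nothing plotted the axes keep their default 0&ndash;1 limits, so positions like (10, 5) land outside the visible area. A minimal sketch with explicit limits, written against the headless Agg backend so it saves to a file instead of opening a window:</p>

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch; drop this for interactive use
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(12, 8))
# Cover the coordinate ranges used by the entity positions (x: 1..10, y: 1..5)
ax.set_xlim(0, 11)
ax.set_ylim(0, 6)
ax.text(1, 5, "Customer", ha="center", va="center",
        bbox=dict(facecolor="lightblue", edgecolor="black", boxstyle="round,pad=0.5"))
ax.set_axis_off()
fig.savefig("erd_sketch.png")  # inspect the file rather than calling plt.show()
```
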
|
<python><matplotlib>
|
2024-09-27 15:12:46
| 1
| 3,011
|
Patterson
|
79,031,934
| 2,626,865
|
match OptionMenu and Button width (and resizable images in expandable containers)
|
<p>I have an OptionMenu and a Button in adjacent rows of the same column of a grid. I don't want the grid to manage the size of these widgets, because it can make the widgets much too wide. Instead, I would like for the OptionMenu and Button to be a fixed size - the maximum of the Button width, the OptionMenu width, and the widths of all menu entries. But I'm having trouble with this.</p>
<p>The widths and heights of the OptionMenu and Button are initially 1, so I must first bind <code><Configure></code> to let the grid geometry manager inform me of the desired sizes. To prevent an infinite loop, I immediately unbind the event in the handler.</p>
<p>I can then calculate the width of all the widgets in either pixels or text units. If I enforce the maximum width in the handler, I find that the OptionMenu "handle" adds a bit of width, so that the Button and OptionMenu widths are mismatched.</p>
<p>This forces me to nest the OptionMenu and Button widgets within frames of their own (which in turn are within the grid). Then I pack the OptionMenu and Button to expand, but disable pack propagation on the frame. The idea is to set the width of the frame, and have the nested widgets expand to fill.</p>
<p>Unfortunately the grid geometry manager informs my handler that the OptionMenu and Button now request 1x1 pixels of space, since pack propagation is disabled.</p>
<p>Next I started from scratch. Where the OptionMenu and Button would be I placed a Frame with a <code>rowspan</code> of 2; the frame's contents also managed by a (nested) grid geometry manager. The frame is <code>sticky="w"</code>; it would shrink to fit its contents. The contents are <code>sticky="ew"</code>; they would fill to a uniform size.</p>
<p>This works well, except that where text across columns of a single row would share a baseline, the nested grid geometry manager does not share baselines with the parent grid geometry manager. For example, the label text in the first column no longer shares a baseline with the OptionMenu text in the second column. Instead the baseline of the label text in the first column is now aligned with the bottom of the OptionMenu.</p>
<p>How can I fix this?</p>
<p>For example, here I've moved the "Cell (0,0)" label into the inner grid. This allows me to keep the text aligned, but I lose the ability to center the nested columns within the outer columns.</p>
<pre><code>#!/usr/bin/env python3
import tkinter as tk
import tkinter.font
import tkinter.scrolledtext
import PIL.Image
import PIL.ImageTk
class App:
options = ["short", "really really really really long"]
unselected = "<select item>"
def __init__(self, parent, **kwargs):
title = parent.winfo_toplevel().title()
font = tk.font.Font(family="Arial", size=12, weight="bold")
# Create a frame for the entire window
frame = tk.Frame(parent, **kwargs)
frame.pack(fill=tk.BOTH, expand=True)
# Set the title label from the window title
title_label = tk.Label(frame, text=title, font=font)
title_label.grid(row=0, columnspan=2, padx=10, pady=10)
# Load the logos
logo1_image = PIL.Image.new("RGB", (200,200), (0,255,0))
logo2_image = PIL.Image.new("RGB", (200,200), (255,0,0))
logo1_label = tk.Label(frame, borderwidth=0)
logo2_label = tk.Label(frame, borderwidth=0)
# TODO:
# I'm seeing slow-motion resize on large resizes. I believe that
# multiple resizes are being queued, but I can't cancel that queue. I
# need to make this callback schedule a cancellable callback that then
# resizes.
logo1_label.bind\
( "<Configure>"
, lambda e, i=logo1_image, w=logo1_label: self.resizeimage(e, i, w)
)
logo2_label.bind\
( "<Configure>"
, lambda e, i=logo2_image, w=logo2_label: self.resizeimage(e, i, w)
)
logo1_label.grid(row=1, column=0, padx=10, pady=10, sticky="nsew")
logo2_label.grid(row=1, column=1, padx=10, pady=10, sticky="nsew")
# Create a frame for interactive elements
#input_frame = tk.Frame(frame)
input_frame = tk.Frame(frame, bg="red")
input_frame.grid(row=2, columnspan=2)
input_frame.grid_columnconfigure(0, weight=1, uniform="column")
input_frame.grid_columnconfigure(1, weight=1, uniform="column")
# Label for cell (0,0)
c00_label = tk.Label\
(input_frame, font=font, text="cell (0,0)")
c00_label.grid(row=0, column=0, padx=10, pady=10, sticky="ew")
# Create the OptionMenu
options_var = tk.StringVar()
options_var.set(self.unselected)
options = tk.OptionMenu(input_frame, options_var, *self.options)
options.configure(font=font)
options["menu"].configure(font=font)
options.grid(row=0, column=1, padx=10, pady=10, sticky="ew")
# Create the Button
button = tk.Button\
( input_frame
, font=font, text="implement option"
)
button.grid(row=1, column=1, padx=10, pady=(0,0), sticky="ew")
# Create the labels for the 3rd row
c30_label = tk.Label\
(frame, font=font, text="cell (3,0)")
c30_label.grid(row=3, column=0, padx=10, pady=10, sticky="ew")
c31_label = tk.Label\
(frame, font=font, text="cell (3,1)")
c31_label.grid(row=3, column=1, padx=10, pady=10, sticky="ew")
text = tk.scrolledtext.ScrolledText\
(frame, state=tk.DISABLED, font=font)
text.grid(row=4, columnspan=2, padx=10, pady=10, sticky="nsew")
frame.grid_columnconfigure(0, weight=1, minsize=100)
frame.grid_columnconfigure(1, weight=1, minsize=100)
frame.grid_rowconfigure(0, weight=0)
frame.grid_rowconfigure(1, weight=1, minsize=100)
frame.grid_rowconfigure(2, weight=0)
frame.grid_rowconfigure(3, weight=0)
frame.grid_rowconfigure(4, weight=4)
@staticmethod
def resizeimage(event, image, widget):
image = image.copy()
image.thumbnail\
( (event.width, event.height)
, resample=PIL.Image.Resampling.BILINEAR
, reducing_gap=None
)
image = PIL.ImageTk.PhotoImage(image)
widget.config(image=image)
# Images are not referenced by tkinter
widget.image = image
#widget.unbind("<Configure>")
def main():
root = tk.Tk()
root.title("Helpful Title")
app = App(root)
root.mainloop()
if __name__ == "__main__":
main()
</code></pre>
<p><em>Edit</em>: Here is a mostly-working version of the callback variant with frames described above. First you pack into the frame with propagation enabled. In the callback, explicitly set the height to the current height and the width to the maximum width of all the contained widgets, <em>then</em> disable pack propagation.</p>
<p>Unfortunately this version suffers two problems:</p>
<ol>
<li>Measuring the OptionMenu labels is faulty and thus the buttons are sized too small to fit the really long menu entry.</li>
<li>Resizing the images triggers a callback storm; I don't know why!</li>
</ol>
<p>How do I fix these problems?</p>
<pre><code>#!/usr/bin/env python3
import tkinter as tk
import tkinter.font
import tkinter.scrolledtext
import PIL.Image
import PIL.ImageTk
class App:
options = ["short", "really really really really really long"]
unselected = "<select item>"
def __init__(self, parent, **kwargs):
title = parent.winfo_toplevel().title()
font = tk.font.Font(family="Arial", size=12, weight="bold")
# Create a frame for the entire window
frame = tk.Frame(parent, **kwargs)
frame.pack(fill=tk.BOTH, expand=True)
# Set the title label from the window title
title_label = tk.Label(frame, text=title, font=font)
title_label.grid(row=0, columnspan=2, padx=10, pady=10)
# Load the logos
logo1_image = PIL.Image.new("RGB", (200,200), (0,255,0))
logo2_image = PIL.Image.new("RGB", (200,200), (255,0,0))
logo1_label = tk.Label(frame, bg="orange", borderwidth=0)
logo2_label = tk.Label(frame, bg="blue", borderwidth=0)
logo1_label.resize_count = 0
logo2_label.resize_count = 0
logo1_label.resize_id = None
logo2_label.resize_id = None
print("logo1_label: ", logo1_label)
print("logo2_label: ", logo2_label)
# TODO:
# I'm seeing slow-motion resize on large resizes. I believe that
# multiple resizes are being queued, but I can't cancel the queue. I
# need to make this callback schedule a cancellable callback that then
# resizes.
logo1_label.bind\
("<Configure>", lambda e, i=logo1_image: self.onconfigure_image(e, i))
logo2_label.bind\
("<Configure>", lambda e, i=logo2_image: self.onconfigure_image(e, i))
logo1_label.grid(row=1, column=0, padx=0, pady=0, sticky="nsew")
logo2_label.grid(row=1, column=1, padx=0, pady=0, sticky="nsew")
# Label for cell (2,0)
c00_label = tk.Label\
(frame, font=font, text="cell (2,0)")
c00_label.grid(row=2, column=0, padx=10, pady=10, sticky="ew")
# Create the OptionMenu
options_var = tk.StringVar()
options_var.set(self.unselected)
options_frame = tk.Frame(frame, bg="red")
options = tk.OptionMenu(options_frame, options_var, *self.options)
options.configure(font=font)
options["menu"].configure(font=font)
options.pack(fill=tk.BOTH, expand=True)
options_frame.grid(row=2, column=1, padx=10, pady=10)
# Create the Button
button_frame = tk.Frame(frame, bg="red")
button = tk.Button\
(button_frame, font=font, text="implement option")
button.pack(fill=tk.BOTH, expand=True)
button_frame.grid(row=3, column=1, padx=10, pady=(0,0))
button.bind("<Configure>", self.onconfigure_button)
# not sure why I can't get the font from the widget, ie:
# font = tk.font.nametofont(self.options.cget("font"))
# *** _tkinter.TclError: named font font1 does not already exist
self.font = font
self.options_frame = options_frame
self.button_frame = button_frame
self.options = options
self.button = button
# Create the labels for the 4th row
c30_label = tk.Label\
(frame, font=font, text="cell (4,0)")
c30_label.grid(row=4, column=0, padx=10, pady=10, sticky="ew")
c31_label = tk.Label\
(frame, font=font, text="cell (4,1)")
c31_label.grid(row=4, column=1, padx=10, pady=10, sticky="ew")
text = tk.scrolledtext.ScrolledText\
(frame, state=tk.DISABLED, font=font)
text.grid(row=5, columnspan=2, padx=10, pady=10, sticky="nsew")
frame.grid_columnconfigure(0, weight=1, minsize=100, uniform="column")
frame.grid_columnconfigure(1, weight=1, minsize=100, uniform="column")
frame.grid_rowconfigure(0, weight=0)
frame.grid_rowconfigure(1, weight=1, minsize=100)
frame.grid_rowconfigure(2, weight=0)
frame.grid_rowconfigure(3, weight=0)
frame.grid_rowconfigure(4, weight=0)
frame.grid_rowconfigure(5, weight=4)
def onconfigure_button(self, event):
menu = self.options["menu"]
maxw = self.font.measure(self.unselected)
print(maxw)
for i in range(menu.index("end") + 1):
label = menu.entrycget(i, "label")
width = self.font.measure(label)
maxw = max(maxw, width)
print(label, width)
#print(label, label.winfo_width())
print("button width: ", self.button.winfo_width())
maxw = max(maxw, self.button.winfo_width())
print(maxw)
# Wrapping both widgets in frames allowed us to sidestep the padding
# added by the OptionMenu "handle". So while the widgets are now the
# same size, the "handle" width is not accounted for in the width
# calculations and may overlap with wide menu entries!
#
# I need a way to get the width of the menu "handle".
self.options_frame.config(width=maxw)
self.button_frame.config(width=maxw)
self.options_frame.config(height=self.options_frame.winfo_height())
self.button_frame.config(height=self.button_frame.winfo_height())
self.options_frame.pack_propagate(False)
self.button_frame.pack_propagate(False)
event.widget.unbind("<Configure>")
return
# Question
# When I maximize the window, four <Configure> events are generated, and I
# see the images scaled-up in a slow-motion animation. I try to cancel or
# skip older events, but each new <Configure> waits till after the resize
# is complete... This suggests that the resize then affects the widget
# size, leading the callback to be called in a loop... but the showsize()
# callback shows that resizing the image does not change the widget size.
#
# Changing the resize to not preserve aspect ratio makes the problem much
# worse. Clearly the change in height is forcing the geometry manager to
# re-evaluate the layout. How do I prevent this?
def onconfigure_image(self, event, image):
print()
widget = event.widget
print("onconfigure for: ", widget)
#if widget.resize_i is not None:
# print("cancelling previous resize: ", widget.resize_id)
# widget.after_cancel(widget.resize_id)
#widget.resize_id = widget.after_idle(self.resize_image, event, image)
widget.resize_id = widget.after(1, self.resize_image, event, image)
widget.resize_count += 1
print("scheduling resize with id: ", widget.resize_id)
def showsize(self, widget):
print()
print("showsize for: ", widget)
print("widget size: ", widget.winfo_width(), widget.winfo_height())
def resize_image(self, event, image):
widget = event.widget
widget.resize_count -= 1
print()
if widget.resize_count > 0:
print(f"{widget.resize_count} events remaining, skipping resize")
return
print("widget size: ", widget.winfo_width(), widget.winfo_height())
print("resize_image for: ", widget)
print("resize id, resize count: ", (widget.resize_id, widget.resize_count))
widget.resize_id = None
#image = image.copy()
#image.thumbnail\
image = image.resize\
( (event.width, event.height)
, resample=PIL.Image.Resampling.BILINEAR
, reducing_gap=None
)
image = PIL.ImageTk.PhotoImage(image)
widget.config(image=image)
# Images are not referenced by tkinter
widget.image = image
#widget.unbind("<Configure>")
widget.after(1, self.showsize, widget)
def main():
root = tk.Tk()
root.title("Helpful Title")
app = App(root)
root.mainloop()
if __name__ == "__main__":
main()
</code></pre>
<p><em>Edit</em>: This version fixes the first problem (and removes a callback by using the <code>req</code> version of <code>winfo_width()</code>, etc) but I'm still dealing with the second point, the callback storm.</p>
<pre><code>#!/usr/bin/env python3
import math
import tkinter as tk
import tkinter.font
import tkinter.scrolledtext
import PIL.Image
import PIL.ImageTk
class App:
options = ["short", "really really really long"]
unselected = "<select item>"
def __init__(self, parent, **kwargs):
title = parent.winfo_toplevel().title()
font = tk.font.Font(family="Arial", size=12, weight="bold")
# Create a frame for the entire window
frame = tk.Frame(parent, **kwargs)
frame.pack(fill=tk.BOTH, expand=True)
# Set the title label from the window title
title_label = tk.Label(frame, text=title, font=font)
title_label.grid(row=0, columnspan=2, padx=10, pady=10)
# Load the logos
logo1_image = PIL.Image.new("RGB", (200,200), (0,255,0))
logo2_image = PIL.Image.new("RGB", (200,200), (255,0,0))
logo1_label = tk.Label(frame, bg="orange", borderwidth=0, pady=0)
logo2_label = tk.Label(frame, bg="blue", borderwidth=0, pady=0)
logo1_label.resize_count = 0
logo2_label.resize_count = 0
logo1_label.resize_id = None
logo2_label.resize_id = None
print("logo1_label: ", logo1_label)
print("logo2_label: ", logo2_label)
# TODO:
# I'm seeing slow-motion resize on large resizes. I believe that
# multiple resizes are being queued, but I can't cancel the queue. I
# need to make this callback schedule a cancellable callback that then
# resizes.
logo1_label.bind\
("<Configure>", lambda e, i=logo1_image: self.onconfigure_image(e, i))
logo2_label.bind\
("<Configure>", lambda e, i=logo2_image: self.onconfigure_image(e, i))
logo1_label.grid(row=1, column=0, padx=0, pady=0, sticky="nsew")
logo2_label.grid(row=1, column=1, padx=0, pady=0, sticky="nsew")
# Label for cell (2,0)
c00_label = tk.Label\
(frame, font=font, text="cell (2,0)")
c00_label.grid(row=2, column=0, padx=10, pady=10, sticky="ew")
# Create the OptionMenu
options_var = tk.StringVar()
options_var.set(self.unselected)
options_frame = tk.Frame(frame, bg="red")
options = tk.OptionMenu(options_frame, options_var, *self.options)
options.configure(font=font, anchor="c")
options["menu"].configure(font=font)
options.pack(fill=tk.BOTH, expand=True)
options_frame.grid(row=2, column=1, padx=10, pady=10)
# Create the Button
button_frame = tk.Frame(frame, bg="red")
button = tk.Button\
(button_frame, font=font, text="implement option", anchor="c")
button.pack(fill=tk.BOTH, expand=True)
button_frame.grid(row=3, column=1, padx=10, pady=(0,0))
# not sure why I can't get the font from the widget, ie:
# font = tk.font.nametofont(self.options.cget("font"))
# *** _tkinter.TclError: named font font1 does not already exist
self.font = font
self.options_frame = options_frame
self.button_frame = button_frame
self.options = options
self.button = button
#parent.bind("<Configure>", self.onconfigure_button)
self.onconfigure_button(None)
# Create the labels for the 4th row
c30_label = tk.Label\
(frame, font=font, text="cell (4,0)")
c30_label.grid(row=4, column=0, padx=10, pady=10, sticky="ew")
c31_label = tk.Label\
(frame, font=font, text="cell (4,1)")
c31_label.grid(row=4, column=1, padx=10, pady=10, sticky="ew")
text = tk.scrolledtext.ScrolledText\
(frame, state=tk.DISABLED, font=font)
text.grid(row=5, columnspan=2, padx=10, pady=10, sticky="nsew")
frame.grid_columnconfigure(0, weight=1, minsize=100, uniform="column")
frame.grid_columnconfigure(1, weight=1, minsize=100, uniform="column")
frame.grid_rowconfigure(0, weight=0)
frame.grid_rowconfigure(1, weight=1, minsize=100)
frame.grid_rowconfigure(2, weight=0)
frame.grid_rowconfigure(3, weight=0)
frame.grid_rowconfigure(4, weight=0)
frame.grid_rowconfigure(5, weight=4)
def onconfigure_button(self, event):
menu = self.options["menu"]
maxw = self.font.measure(self.unselected)
for i in range(menu.index("end") + 1):
label = menu.entrycget(i, "label")
width = self.font.measure(label)
maxw = max(maxw, width)
maxw = maxw / self.font.measure("0")
maxw = math.ceil(maxw)
# Allow the widget to request its optimal width.
self.options.configure(width=maxw)
maxw = max(self.button.winfo_reqwidth(), self.options.winfo_reqwidth())
self.options_frame.config(width=maxw)
self.button_frame.config(width=maxw)
#print()
#print("button frame act height: ", self.button_frame.winfo_height())
#print("option frame act height: ", self.options_frame.winfo_height())
#print("button frame req height: ", self.button_frame.winfo_reqheight())
#print("option frame req height: ", self.options_frame.winfo_reqheight())
#print("button act height: ", self.button.winfo_height())
#print("option act height: ", self.options.winfo_height())
#print("button req height: ", self.button.winfo_reqheight())
#print("option req height: ", self.options.winfo_reqheight())
self.options_frame.config(height=self.options.winfo_reqheight())
self.button_frame.config(height=self.button.winfo_reqheight())
self.options_frame.pack_propagate(False)
self.button_frame.pack_propagate(False)
#event.widget.unbind("<Configure>")
# Question
# When I maximize the window, four <Configure> events are generated, and I
# see the images scaled-up in a slow-motion animation. I try to cancel or
# skip older events, but each new <Configure> waits till after the resize
# is complete... This suggests that the resize then affects the widget
# size, leading the callback to be called in a loop... but the showsize()
# callback shows that resizing the image does not change the widget size.
#
# Changing the resize to not preserve aspect ratio makes the problem much
# worse. Clearly the change in height is forcing the geometry manager to
# re-evaluate the layout. How do I prevent this?
def onconfigure_image(self, event, image):
print()
widget = event.widget
print("onconfigure for: ", widget)
#if widget.resize_i is not None:
# print("cancelling previous resize: ", widget.resize_id)
# widget.after_cancel(widget.resize_id)
#widget.resize_id = widget.after_idle(self.resize_image, event, image)
widget.resize_id = widget.after(1, self.resize_image, event, image)
widget.resize_count += 1
print("scheduling resize with id: ", widget.resize_id)
def showsize(self, widget):
print()
print("showsize for: ", widget)
print("widget size: ", widget.winfo_width(), widget.winfo_height())
def resize_image(self, event, image):
widget = event.widget
widget.resize_count -= 1
print()
if widget.resize_count > 0:
print(f"{widget.resize_count} events remaining, skipping resize")
return
print("widget size: ", widget.winfo_width(), widget.winfo_height())
print("resize_image for: ", widget)
print("resize id, resize count: ", (widget.resize_id, widget.resize_count))
widget.resize_id = None
#image = image.copy()
#image.thumbnail\
image = image.resize\
( (event.width, event.height)
, resample=PIL.Image.Resampling.BILINEAR
, reducing_gap=None
)
image = PIL.ImageTk.PhotoImage(image)
widget.config(image=image)
# Images are not referenced by tkinter
widget.image = image
#widget.unbind("<Configure>")
widget.after(1, self.showsize, widget)
def main():
root = tk.Tk()
root.title("Helpful Title")
app = App(root)
root.mainloop()
if __name__ == "__main__":
main()
</code></pre>
<p><em>Edit</em>: I assumed the grid geometry manager would keep a strict ratio between rows as per the defined weights. So given weights 1:4, a window resize would preserve a 1:4 ratio between the rows. However, the geometry manager also considers the size requests of its children. If a window is resized to be larger and a child widget requests use of the new space, then the geometry manager will increase the weight of that child, triggering a new resize event. This happens up to a limit, at which the child can consume no further space. This effect even occurs between different (possibly nonadjacent) grid managers that share space.</p>
<p>Examining the events, I could find no difference between window-resize events and grid-resize events. Tkinter does not appear to tag the event source, so it isn't possible to simply filter out grid-resize events. I could also find no method of preventing child size requests from triggering grid-resize events. I thought that disabling propagation might work, but no...</p>
<p>The only solution is to then bind the child resize to events originating further up the hierarchy, such as the root window, which will implicitly filter grid-resize events. This does unfortunately leave a gap as the grid manager then allocates more space for the child, so it is probably better to move affected children to another manager. It is possible to simulate grid-sizing through some non-linear ratio between the window size and the image size.</p>
<p><em>Edit</em>: What I wrote applies not just to grid but to any geometry manager that expands into additional available space, such as pack with <code>expand=True</code>.</p>
<hr />
<p>That said, instead of splitting up my existing grid I decided to try to set that row's weight to zero (disable expansion) and manually resize the cells upon changes to the outer frame... but that's a good way to crash tkinter:</p>
<pre><code>def frame_configure_resize_grid_cell(self, event, image, widget):
gridframe = event.widget
if not gridframe.height:
gridframe.height = event.height
return
print("frame old height, frame new height", gridframe.height, event.height)
aspect = event.height / gridframe.height
gridframe.height = event.height
old_height = widget.winfo_height()
#_,_,_,old_height = frame.grid_bbox(1,0)
new_height = round(old_height * aspect)
print("old height, aspect, new height: ", old_height, aspect, new_height)
widget.config(height=new_height)
</code></pre>
|
<python><tkinter><tkinter-layout>
|
2024-09-27 15:04:31
| 1
| 2,131
|
user19087
|
79,031,876
| 505,328
|
Why does redis report no subscribers to a channel after I subscribe to that channel without errors?
|
<p>I am trying to send realtime feedback over websockets, and I'm doing that by sending data over redis from a rest-api server to a websocket server (both written in python/django). In the websocket server, I subscribe to a channel named "events" like this:</p>
<pre><code>redis_connection = redis.Redis(
host=settings.REDIS_HOST,
port=settings.REDIS_PORT,
db=settings.REDIS_CACHE_DB,
)
pubsub = redis_connection.pubsub()
pubsub.subscribe(events=process_event)
</code></pre>
<p>In the rest-api server, I publish data to the same redis channel like this:</p>
<pre><code>connection.publish("events", json.dumps({"user_email": user.email, "message": message}))
</code></pre>
<p>While trying to figure out why no events ever made it to the "process_event" handler, I found that the publish method returns the number of subscribers to a channel, so I updated that line to this:</p>
<pre><code>count = connection.publish("events", json.dumps({"user_email": user.email, "message": message}))
print(f"Published to events channel with {count} subscribers")
</code></pre>
<p>The result is always 0, even after the subscription above. I went inside the redis-stack Docker container and saw the following when I directly tried to query the subscriber count:</p>
<pre><code># redis-cli
127.0.0.1:6379> PUBSUB NUMSUB events
1) "events"
2) (integer) 0
127.0.0.1:6379>
</code></pre>
<p>The logs of redis-stack itself don't seem to recognize any subscription or publication. However, if I use the command "redis-cli monitor" I see both show up:</p>
<pre><code>1727460056.950779 [0 172.17.0.1:58210] "CLIENT" "SETINFO" "LIB-NAME" "redis-py"
1727460056.952892 [0 172.17.0.1:58210] "CLIENT" "SETINFO" "LIB-VER" "5.0.8"
1727460056.954072 [1 172.17.0.1:58210] "SELECT" "1"
1727460056.956442 [1 172.17.0.1:58210] "SUBSCRIBE" "events"
1727460122.677189 [0 172.17.0.1:45874] "CLIENT" "SETINFO" "LIB-NAME" "redis-py"
1727460122.678973 [0 172.17.0.1:45874] "CLIENT" "SETINFO" "LIB-VER" "5.0.8"
1727460122.681286 [1 172.17.0.1:45874] "SELECT" "1"
1727460122.683120 [1 172.17.0.1:45874] "PUBLISH" "events" "{"user_email": "admin@domain.com", "message": "Message content"}"
</code></pre>
<p>After all that, PUBSUB NUMSUB events still gives the same result.</p>
|
<python><django><redis><publish-subscribe><redis-stack>
|
2024-09-27 14:52:54
| 1
| 379
|
WindowsWeenie
|
79,031,830
| 15,358,800
|
How to convert list of calendar dates into to_datetime in pandas
|
<p>I have a function which returns a list of holidays. The list looks like this:</p>
<pre><code>['30 May 2024','1 May 2024', '29 Aug 2024', '14 Aug 2024', '19 May 2024']
</code></pre>
<p>When I try to do</p>
<pre><code>print(pd.to_datetime(['30 May 2024','1 May 2024', '29 Aug 2024', '14 Aug 2024', '19 May 2024']))
</code></pre>
<p>Error</p>
<pre><code>============================================================================================== RESTART: C:\Users\Bhargav\Downloads\gapi.py =============================================================================================
Traceback (most recent call last):
File "C:\Users\Bhargav\Downloads\gapi.py", line 2, in <module>
print(pd.to_datetime(['30 May 2024','1 May 2024', '29 Aug 2024', '14 Aug 2024', '19 May 2024']))
File "C:\Users\Bhargav\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\tools\datetimes.py", line 1099, in to_datetime
result = convert_listlike(argc, format)
File "C:\Users\Bhargav\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\tools\datetimes.py", line 433, in _convert_listlike_datetimes
return _array_strptime_with_fallback(arg, name, utc, format, exact, errors)
File "C:\Users\Bhargav\AppData\Local\Programs\Python\Python312\Lib\site-packages\pandas\core\tools\datetimes.py", line 467, in _array_strptime_with_fallback
result, tz_out = array_strptime(arg, fmt, exact=exact, errors=errors, utc=utc)
File "strptime.pyx", line 501, in pandas._libs.tslibs.strptime.array_strptime
File "strptime.pyx", line 451, in pandas._libs.tslibs.strptime.array_strptime
File "strptime.pyx", line 583, in pandas._libs.tslibs.strptime._parse_with_format
ValueError: time data "29 Aug 2024" doesn't match format "%d %B %Y", at position 2. You might want to try:
- passing `format` if your strings have a consistent format;
- passing `format='ISO8601'` if your strings are all ISO8601 but not necessarily in exactly the same format;
- passing `format='mixed'`, and the format will be inferred for each element individually. You might want to use `dayfirst` alongside this.
</code></pre>
<p>But it works perfectly with this list:</p>
<pre><code>print(pd.to_datetime(['30 Dec 2024','1 May 2024', '29 Aug 2024', '14 Aug 2024', '19 May 2024']))
</code></pre>
<p>I get</p>
<pre><code>============================================================================================== RESTART: C:\Users\Bhargav\Downloads\gapi.py =============================================================================================
DatetimeIndex(['2024-12-30', '2024-05-01', '2024-08-29', '2024-08-14',
'2024-05-19'],
dtype='datetime64[ns]', freq=None)
</code></pre>
<p>Am I missing anything here?</p>
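<p>For reference, the error message's first suggestion seems applicable here: every entry follows the same <code>%d %b %Y</code> pattern (day, abbreviated month name, year), so passing an explicit format should parse the whole list without pandas having to guess. A sketch:</p>

```python
import pandas as pd

dates = ['30 May 2024', '1 May 2024', '29 Aug 2024', '14 Aug 2024', '19 May 2024']
# %b matches abbreviated month names ("May", "Aug"), avoiding the inferred %B format
idx = pd.to_datetime(dates, format='%d %b %Y')
print(idx)
```
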
|
<python><pandas>
|
2024-09-27 14:43:10
| 1
| 4,891
|
Bhargav
|
79,031,752
| 2,056,452
|
ipywidgets interactive with variable containers of widgets
|
<p>For some interactive data analysis with Jupyter and ipywidgets, I generate a number of widgets depending on the data.</p>
<p>That is, I eventually have a number of GridBoxes with Checkboxes, RangeSliders and ColorPickers, which I lay out in a Tab widget.</p>
<p>Now I tried to use</p>
<pre><code>import ipywidgets as widgets
from IPython.display import display

categories = ["A", "B", "C"] # retrieve from dataset
cbox = widgets.GridBox(children=[ widgets.Checkbox(value=True, description=v) for v in categories ])
rbox = widgets.GridBox(children=[ widgets.IntRangeSlider(value=[1,10], max=20, min=0, description=v) for v in categories ])
def analysis(variables, ranges):
print("Hello. Currently I do nothing with the input!")
display(widgets.Tab(children=[cbox, rbox], titles=('Variables', 'Ranges')))
display(widgets.interactive_output(analysis, {"variables":cbox, "ranges":rbox}))
</code></pre>
<p>Which does not work:</p>
<pre><code>AttributeError: 'GridBox' has no attribute 'value'
</code></pre>
<p>I also tried:</p>
<pre><code>display(widgets.interactive_output(analysis, {"variables":cbox.children, "ranges":rbox.children}))
</code></pre>
<p>which also does not work.</p>
<p>Is it somehow possible to pass a container of some sort to my interactivized function, or do I need to resort to <em>kwargs</em>? And if so, how would you do that efficiently?</p>
<hr />
<p>I use ipywidget version 8.1.5</p>
|
<python><jupyter-notebook><ipywidgets>
|
2024-09-27 14:22:29
| 2
| 13,801
|
derM
|
79,031,616
| 268,847
|
Python request to second endpoint fails after using client certificate to first endpoint
|
<p>I am using Python's <code>requests</code> package to make API calls. I have <em>two</em> different API endpoints I am making calls against. The first is to "https://api.example.com" and the second is to "https://work.myserver.com". The first API endpoint requires the use of a client certificate while the second does not. I find that once I make the call to the endpoint that requires the client certificate subsequent calls to the <em>other</em> endpoint fail with the error "SSLError(SSLError(1, '[SSL: TLSV1_ALERT_UNKNOWN_CA] tlsv1 alert unknown ca".</p>
<p>My guess is that after I have made a call using a client certificate all the calls that come after <em>no matter to which endpoint</em> are made using the client certificate.</p>
<p>How do I tell <code>requests</code> to not use the client certificate for all the other endpoints, and only to use it for the endpoint that requires it?</p>
<p>My code looks like this:</p>
<pre><code>import requests
# Set up the key-pair paths
cert_path = "/etc/application/client.crt"
key_path = "/etc/application/client.key"
keypair_paths = (cert_path, key_path)
response1 = requests.get(
"https://api.example.com",
cert=keypair_paths,
)
response2 = requests.get("https://work.myserver.com") # This endpoint does NOT use client certificates
</code></pre>
<p>I am using version 2.32.3 of the <code>requests</code> package.</p>
<p><strong>UPDATE</strong> At Andrej Kesely's suggestion I tried putting the requests in a requests.Session "with"-context, but that did not fix it. I have worked around the issue for now by using the <code>urllib</code> library for the call that requires Basic auth. However, the original problem remains unresolved.</p>
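One way to guarantee isolation (a sketch, not tested against the failing server): keep two separate <code>Session</code> objects and attach the client certificate to only one of them, so it cannot leak onto calls to the other endpoint:

```python
import requests

# Session that presents the client certificate (paths from the question)
cert_session = requests.Session()
cert_session.cert = ("/etc/application/client.crt", "/etc/application/client.key")

# Plain session with no client certificate configured
plain_session = requests.Session()

# response1 = cert_session.get("https://api.example.com")
# response2 = plain_session.get("https://work.myserver.com")
```

Sessions also reuse connections per host, so this keeps the TLS state of the two endpoints from interacting.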
|
<python><python-requests><client-certificates>
|
2024-09-27 13:42:53
| 0
| 7,795
|
rlandster
|
79,031,470
| 1,914,781
|
python notify thread exit with multiple loops
|
<p>I have code with two nested loops in a thread, and I wish to exit the thread from the main thread when the work is done.</p>
<p>Currently I use a global variable to signal it, but that doesn't look good, since multiple threads access the same variable, and the code is ugly because it has to break out of multiple loops.</p>
<p>What's a graceful way to exit the thread in this case?</p>
<pre><code>#!/usr/bin/env python3
import subprocess
import threading
bexit = False
def threadLoop(idx):
global bexit
while bexit == False:
#do something here
while True:
#do something here
if bexit == True:
break
if bexit == True:
break
bexit = False
return
def main():
global bexit
thread = threading.Thread(target=threadLoop, args=(0, ))
thread.start()
#do something here
bexit = True
thread.join()
return
main()
</code></pre>
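One graceful alternative (a sketch of the standard idiom, not the asker's exact code): share a <code>threading.Event</code> instead of a global bool; it is safe to share between threads, needs no reset bookkeeping, and its <code>wait()</code> doubles as an interruptible sleep:

```python
#!/usr/bin/env python3
import threading

def thread_loop(idx, exit_event):
    # outer loop runs until the main thread signals the event
    while not exit_event.is_set():
        # do something here
        # inner loop: wait() returns True as soon as the event is set,
        # so it both sleeps briefly and checks the exit signal
        while not exit_event.wait(timeout=0.01):
            pass  # do something here

def main():
    exit_event = threading.Event()
    thread = threading.Thread(target=thread_loop, args=(0, exit_event))
    thread.start()
    # do something here
    exit_event.set()   # one call unwinds both loops
    thread.join()

main()
```

Passing the event as an argument (rather than using a global) also makes the function reusable across several worker threads.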
|
<python><python-3.x><multithreading>
|
2024-09-27 12:58:55
| 1
| 9,011
|
lucky1928
|
79,031,454
| 7,307,125
|
How to add vertical line to existing plot in matplotlib
|
<p>I have a large dataset (5M points) to visualise. There are datapoints and time axes.<br />
There are events in time where values change significantly.<br />
I want to add two 'clickable' elements in the form of vertical lines to mark two of the mentioned events; later I want to display their time values and the calculated difference.</p>
<pre><code>import matplotlib.pyplot as plt
time = [1,2,3,4,5,6,7,8,9,10,11,12,13,13,15,16,17,18,19,20]
ch1_scope = [0,0,0,0,5,5,5,5,0,0,0,0,5,5,5,5,0,0,0,0]
ch2_scope = [5,5,5,5,0,0,0,0,5,5,5,5,0,0,0,0,5,5,5,5]
fig, (ax1, ax2) = plt.subplots(2, sharex = True)
ax1.plot(time, ch1_scope, 'tab:red')
ax1.set_title("Vph1")
ax2.plot(time, ch2_scope, 'tab:olive')
ax2.set_title("Vph2")
</code></pre>
<p>This minimal code produces:<br />
<a href="https://i.sstatic.net/tCFZzk7y.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/tCFZzk7y.png" alt="Vin_time" /></a></p>
<p>How can I achieve it with matplotlib? Is it even possible without re-plotting/re-drawing?<br />
A simple script where adding vertical lines is possible would be a good start.</p>
<p>After some tries I added this cursor:</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.widgets import Cursor
time = [1,2,3,4,5,6,7,8,9,10,11,12,13,13,15,16,17,18,19,20]
ch1_scope = [0,0,0,0,5,5,5,5,0,0,0,0,5,5,5,5,0,0,0,0]
ch2_scope = [5,5,5,5,0,0,0,0,5,5,5,5,0,0,0,0,5,5,5,5]
fig, (ax1, ax2) = plt.subplots(2, sharex = True)
ax1.plot(time, ch1_scope, 'tab:red')
ax1.set_title("Vph1")
ax2.plot(time, ch2_scope, 'tab:olive')
ax2.set_title("Vph2")
cursor = Cursor(
ax1, useblit=True, horizOn=False, vertOn=True, color="red", linewidth=0.5)
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/oTaDrPlA.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/oTaDrPlA.png" alt="Cursor" /></a></p>
<p>This is not satisfactory, as it doesn't keep the cursor permanently (zooming kills it). Also, there is no option for dynamically adding annotations to the plot.</p>
<p>I added part of a click event. This brings some information about the current mouse position to the console. How do I transfer it to the plot?</p>
<pre><code>import matplotlib.pyplot as plt
from matplotlib.widgets import Cursor
from matplotlib.backend_bases import MouseButton
time = [1,2,3,4,5,6,7,8,9,10,11,12,13,13,15,16,17,18,19,20]
ch1_scope = [0,0,0,0,5,5,5,5,0,0,0,0,5,5,5,5,0,0,0,0]
ch2_scope = [5,5,5,5,0,0,0,0,5,5,5,5,0,0,0,0,5,5,5,5]
fig, (ax1, ax2) = plt.subplots(2, sharex = True)
ax1.plot(time, ch1_scope, 'tab:red')
ax1.set_title("Vph1")
ax2.plot(time, ch2_scope, 'tab:olive')
ax2.set_title("Vph2")
cursor = Cursor(
ax1, useblit=True, horizOn=False, vertOn=True, color="red", linewidth=0.5
)
def on_move(event):
if event.inaxes:
print(f'data coords {event.xdata} {event.ydata},',
f'pixel coords {event.x} {event.y}')
def on_right_click(event):
if event.button is MouseButton.LEFT:
print(f'Clicked LMB: data coords {event.xdata} {event.ydata}')
if event.button is MouseButton.RIGHT:
print('disconnecting callback')
plt.disconnect(binding_id)
binding_id = plt.connect('motion_notify_event', on_move)
plt.connect('button_press_event', on_right_click)
plt.show()
</code></pre>
|
<python><matplotlib>
|
2024-09-27 12:54:48
| 1
| 351
|
smajli
|
79,031,420
| 8,722,421
|
Holoviews / Bokeh - redim an axis making no difference
|
<p>Trying to resequence the heatmap so that the y-axis actually runs from 0 to -19 <em>downward</em> from the x-axis (as a normal negative range would descend).</p>
<p>Instead, regardless of the sort order passed to <code>redim.values</code>, it "ascends" from 0 to -19 on the y-axis. See the axis encircled in red.</p>
<p><a href="https://i.sstatic.net/pzSpOJxf.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/pzSpOJxf.png" alt="enter image description here" /></a></p>
<pre><code>from pathlib import Path
import holoviews as hv
import pandas as pd
import webbrowser
hv.extension('bokeh')
renderer = hv.renderer('bokeh')
data = [{"x": str(i),
"y": str(-j),
"z": i*j }
for i in range(20)
for j in range(20)]
df = pd.DataFrame(data)
# X works ok... or ok by happenstance
x_vals = list(set(df["x"].tolist()))
x_vals.sort(key=int)
# Y making no difference
y_vals1 = list(set(df["y"].tolist()))
y_vals1.sort(key=int, reverse=True)
y_vals2 = list(set(df["y"].tolist()))
y_vals2.sort(key=int, reverse=False)
hm1 = hv.HeatMap(df,
)
hm1.opts(hv.opts.HeatMap(tools=['hover'],
colorbar=True,
height=800,
width=800,
toolbar='above',
),
)
hm1.redim.values(
#x=x_vals,
y=y_vals1,
)
html = renderer.html(hm1)
Path("test_True.html").write_text(html,
encoding="utf-8")
webbrowser.open("test_True.html")
#-------------------------------------------------------------------------
hm2 = hv.HeatMap(df,
)
hm2.opts(hv.opts.HeatMap(tools=['hover'],
colorbar=True,
height=800,
width=800,
toolbar='above',
),
)
hm2.redim.values(
#x=x_vals,
y=y_vals2,
)
html = renderer.html(hm2)
Path("test_False.html").write_text(html,
encoding="utf-8")
webbrowser.open("test_False.html")
</code></pre>
|
<python><holoviews>
|
2024-09-27 12:47:57
| 1
| 1,285
|
Amiga500
|
79,030,989
| 4,096,572
|
Problems moving f2py-based Python module from numpy.distutils to scikit-build-core
|
<p>I maintain <a href="https://github.com/johncoxon/tsyganenko" rel="nofollow noreferrer">a Python module called Tsyganenko</a> which currently has a <code>setup.py</code> file as below:</p>
<pre><code>from numpy.distutils.core import Extension, setup
ext = Extension('geopack',
sources=['src/tsyganenko/geopack_py.pyf',
'src/tsyganenko/geopack_py.f',
'src/tsyganenko/T96.f',
'src/tsyganenko/T02.f'])
with open("README.md", "r", encoding="utf-8") as f:
long_description = f.read()
setup(author="John Coxon and Sebastien de Larquier",
author_email="work@johncoxon.co.uk",
classifiers=[
"Development Status :: 4 - Beta",
"Intended Audience :: Science/Research",
"Natural Language :: English",
"Programming Language :: Python/Fortran"
],
description="A wrapper to call GEOPACK routines in Python.",
ext_modules=[ext],
install_requires=[
"numpy",
"matplotlib",
"pandas"
],
keywords=['Scientific/Space'],
long_description=long_description,
long_description_content_type="text/markdown",
name="Tsyganenko",
package_dir={"": "src"},
packages=setuptools.find_packages(where="src"),
python_requires=">=3.9",
url="https://github.com/johncoxon/tsyganenko",
version="2020.1",
)
</code></pre>
<p>I am trying to move this over to use <code>scikit-build-core</code> due to <code>numpy.distutils</code> being deprecated. I am trying to follow <a href="https://scikit-build-core.readthedocs.io/en/latest/getting_started.html" rel="nofollow noreferrer">the <code>scikit-build-core</code> documentation</a> to do this, and have arrived at a <code>pyproject.toml</code> that looks like this:</p>
<pre><code>[build-system]
requires = ["scikit-build-core", "numpy"]
build-backend = "scikit_build_core.build"
[project]
name = "Tsyganenko"
version = "2020.2"
dependencies = ["numpy"]
[tool.scikit-build]
ninja.version = ">=1.10"
cmake.version = ">=3.17.2"
[tool.setuptools.packages.find]
where = ["src"]
</code></pre>
<p>and a <code>CMakeLists.txt</code> file which looks like this:</p>
<pre><code>cmake_minimum_required(VERSION 3.17.2...3.29)
project(${SKBUILD_PROJECT_NAME} LANGUAGES C Fortran)
find_package(
Python
COMPONENTS Interpreter Development.Module NumPy
REQUIRED)
# F2PY headers
execute_process(
COMMAND "${PYTHON_EXECUTABLE}" -c
"import numpy.f2py; print(numpy.f2py.get_include())"
OUTPUT_VARIABLE F2PY_INCLUDE_DIR
OUTPUT_STRIP_TRAILING_WHITESPACE)
add_library(fortranobject OBJECT "${F2PY_INCLUDE_DIR}/fortranobject.c")
target_link_libraries(fortranobject PUBLIC Python::NumPy)
target_include_directories(fortranobject PUBLIC "${F2PY_INCLUDE_DIR}")
set_property(TARGET fortranobject PROPERTY POSITION_INDEPENDENT_CODE ON)
add_custom_command(
OUTPUT geopack_pymodule.c geopack_py-f2pywrappers.f
DEPENDS src/tsyganenko/geopack_py.f
VERBATIM
COMMAND "${Python_EXECUTABLE}" -m numpy.f2py
"${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/geopack_py.f" -m geopack_py --lower)
python_add_library(
geopack_py MODULE "${CMAKE_CURRENT_BINARY_DIR}/geopack_pymodule.c"
"${CMAKE_CURRENT_BINARY_DIR}/geopack_py-f2pywrappers.f"
"${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/geopack_py.f" WITH_SOABI)
target_link_libraries(geopack_py PRIVATE fortranobject)
install(TARGETS geopack_py DESTINATION .)
</code></pre>
<p>It successfully installs, allegedly, but when trying to import the module I get the following error. I don't understand the <code>scikit-build-core</code> documentation well enough to be able to debug it. I'm fairly certain that it's a problem that <code>T96.f</code> and <code>T02.f</code> aren't mentioned anywhere in <code>CMakeLists.txt</code> but I'm not sure how to add them, and I'm not sure whether I need to include <code>geopack_py.pyf</code>. Can anyone assist? <a href="https://github.com/johncoxon/tsyganenko/tree/change-setup" rel="nofollow noreferrer">If you want to access the code directly, it's all in the change-setup branch of the module on GitHub.</a></p>
<pre><code>In [1]: import geopack_py
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
Cell In[1], line 1
----> 1 import geopack_py
ImportError: dlopen(/opt/homebrew/Caskroom/mambaforge/base/envs/test/lib/python3.12/site-packages/geopack_py.cpython-312-darwin.so, 0x0002): symbol not found in flat namespace '_t01_01_'
</code></pre>
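The undefined symbol points at Fortran code that was never compiled into the extension; a sketch of a fix (assumption: <code>T96.f</code>/<code>T02.f</code> simply need to be listed as sources of the target, mirroring the old <code>numpy.distutils</code> <code>Extension</code>):

```cmake
python_add_library(
  geopack_py MODULE
  "${CMAKE_CURRENT_BINARY_DIR}/geopack_pymodule.c"
  "${CMAKE_CURRENT_BINARY_DIR}/geopack_py-f2pywrappers.f"
  "${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/geopack_py.f"
  "${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/T96.f"   # extra model sources
  "${CMAKE_CURRENT_SOURCE_DIR}/src/tsyganenko/T02.f"
  WITH_SOABI)
```

If the <code>.pyf</code> signature file matters for the wrapper interface, it would be passed to the <code>numpy.f2py</code> custom command instead of the bare <code>.f</code> file, but that part is untested here.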
|
<python><cmake><f2py><scikit-build>
|
2024-09-27 10:44:31
| 2
| 605
|
John Coxon
|
79,030,984
| 2,515,313
|
How to delete old class file referenced in django migration
|
<pre><code>class Migration(migrations.Migration):
dependencies = [
("buckets", "0001_auto_20230420_2018"),
]
operations = [
# Delete existing tables
migrations.RunSQL(
sql=buckets.my_models.models.RefObjModel.get_create_table_sql(),
reverse_sql=migrations.RunSQL.noop,
),
</code></pre>
<p>Above is a code sample from one of our migration files.</p>
<p>Question: we have to remove <code>RefObjModel</code>; how should we do it, and what is the preferred approach? If we delete the class file/code, the application fails to run, because it tries to run all the migration files and cannot find the referenced class.</p>
<p>If we delete the references from all the migration files manually, does that create any problem in production deployment or for the application?</p>
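One common approach (a sketch, with a placeholder for the real SQL): freeze the output of <code>get_create_table_sql()</code> into the migration as a string literal, so historical migrations stop importing live model code; alternatively, <code>python manage.py squashmigrations</code> can collapse the history so the old reference disappears entirely:

```python
from django.db import migrations

class Migration(migrations.Migration):
    dependencies = [
        ("buckets", "0001_auto_20230420_2018"),
    ]
    operations = [
        migrations.RunSQL(
            # placeholder: paste the SQL that get_create_table_sql() used to
            # generate here, so RefObjModel is no longer imported anywhere
            sql="CREATE TABLE ...;",
            reverse_sql=migrations.RunSQL.noop,
        ),
    ]
```

Because already-applied migrations are only replayed on fresh databases, freezing the SQL changes nothing for existing production deployments as long as the resulting schema is identical.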
|
<python><django><django-models>
|
2024-09-27 10:43:35
| 2
| 954
|
RQube
|
79,030,890
| 19,475,185
|
Cannot schedule Google Colab Pro+ when using Gspread, Selenium
|
<p>I have a Colab notebook which connects to my Drive, authenticates for Gspread, downloads & installs Chromium with Selenium, downloads & uses XVFB and pyvirtualdisplay. The notebook runs several python .py files from my Drive.</p>
<p>When I try to schedule it, it fails to run. The copy of the notebook doesn't show any error.</p>
<pre><code>from google.colab import auth
from google.auth import default
auth.authenticate_user()
import gspread
creds, _ = default()
gc = gspread.authorize(creds)
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
[...]
%run '/content/drive/MyDrive/scheduled_tasks/[...].py'
[...]
</code></pre>
<p>Are there any specific limitations to be able to successfully schedule a Colab Pro+ notebook to run in the background? Should I add something specific to my code?</p>
|
<python><google-colaboratory>
|
2024-09-27 10:17:27
| 1
| 10,508
|
Barry the Platipus
|
79,030,706
| 7,403,431
|
Python, numpy and the cacheline
|
<p>I try to follow <a href="https://igoro.com/archive/gallery-of-processor-cache-effects/" rel="nofollow noreferrer">https://igoro.com/archive/gallery-of-processor-cache-effects/</a> in python using <code>numpy</code>.
It does not work, though, and I don't quite understand why.</p>
<p><code>numpy</code> has fixed size dtypes, such as <code>np.int64</code> which takes up 8 bytes.
So with a cacheline of 64 bytes, 8 array values should be held in cache.</p>
<p>Thus, when doing the timing, I should not see a notable change in required time when accessing values within the cacheline, because the same number of cache line transfers are needed.</p>
<p>Based on <a href="https://stackoverflow.com/a/49786712/7403431">this SO answer</a>, I also tried to disable the garbage collection which didn't change anything.</p>
<pre class="lang-py prettyprint-override"><code># import gc
import time
import numpy as np
def update_kth_entries(arr, k):
arr[k] = 0
start = time.perf_counter_ns()
for idx in range(0, len(arr), k):
arr[idx] = 0
end = time.perf_counter_ns()
print(f"Updated every {k:4} th entry ({len(arr)//k:7} elements) in {(end - start)*1e-9:.5f}s")
return arr
# gc.disable()
arr = np.arange(8*1024*1024, dtype=np.int64)
print(
f"(Data) size of array: {arr.nbytes/1024/1024:.2f} MiB "
f"(based on {arr.dtype})"
)
for k in np.power(2, np.arange(0,11)):
update_kth_entries(arr, k)
# gc.enable()
</code></pre>
<p>This gives something like</p>
<pre><code>(Data) size of array: 64.00 MiB (based on int64)
Updated every 1 th entry (8388608 elements) in 0.72061s
Updated every 2 th entry (4194304 elements) in 0.32783s
Updated every 4 th entry (2097152 elements) in 0.14810s
Updated every 8 th entry (1048576 elements) in 0.07622s
Updated every 16 th entry ( 524288 elements) in 0.04409s
Updated every 32 th entry ( 262144 elements) in 0.01891s
Updated every 64 th entry ( 131072 elements) in 0.00930s
Updated every 128 th entry ( 65536 elements) in 0.00434s
Updated every 256 th entry ( 32768 elements) in 0.00234s
Updated every 512 th entry ( 16384 elements) in 0.00129s
Updated every 1024 th entry ( 8192 elements) in 0.00057s
</code></pre>
<p>Here is the output of <code>lscpu -C</code></p>
<pre><code>NAME ONE-SIZE ALL-SIZE WAYS TYPE LEVEL SETS PHY-LINE COHERENCY-SIZE
L1d 32K 384K 8 Data 1 64 1 64
L1i 32K 384K 8 Instruction 1 64 1 64
L2 256K 3M 4 Unified 2 1024 1 64
L3 16M 16M 16 Unified 3 16384 1 64
</code></pre>
<p>At this point I am quite confused about what I am observing.</p>
<ul>
<li>On the one hand I fail to see the cacheline using above code.</li>
<li>On the other hand I can show some sort of CPU caching effect using something like <a href="https://stackoverflow.com/a/70465213/7403431">in this answer</a> with a large enough 2D array.</li>
</ul>
<p>I did the above tests in a container on a Mac.
A quick test on my Mac shows the same behavior.</p>
<p>Is this odd behavior due to the python interpreter?</p>
<p>What am I missing here?</p>
<hr />
<h1>EDIT</h1>
<p>Based on the answer of @jérôme-richard, I did some more timing using <code>timeit</code>, based on the functions</p>
<pre class="lang-py prettyprint-override"><code>from numba import jit
import numpy as np
def for_loop(arr,k,arr_cnt):
for idx in range(0, arr_cnt, k):
arr[idx] = 0
return arr
def vectorize(arr, k, arr_cnt):
arr[0:arr_cnt:k] = 0
return arr
@jit
def for_loop_numba(arr, k, arr_cnt):
for idx in range(0, arr_cnt, k):
arr[idx] = 0
</code></pre>
<p>Using the same array from above with some more information</p>
<pre class="lang-py prettyprint-override"><code>dtype_size_bytes = 8
arr = np.arange(dtype_size_bytes * 1024 * 1024, dtype=np.int64)
print(
f"(Data) size of array: {arr.nbytes/1024/1024:.2f} MiB "
f"(based on {arr.dtype})"
)
cachline_size_bytes = 64
l1_size_bytes = 32*1024
l2_size_bytes = 256*1024
l3_size_bytes = 3*1024*1024
print(f"Elements in cacheline: {cachline_size_bytes//dtype_size_bytes}")
print(f"Elements in L1: {l1_size_bytes//dtype_size_bytes}")
print(f"Elements in L2: {l2_size_bytes//dtype_size_bytes}")
print(f"Elements in L3: {l3_size_bytes//dtype_size_bytes}")
</code></pre>
<p>which gives</p>
<pre><code>(Data) size of array: 64.00 MiB (based on int64)
Elements in cacheline: 8
Elements in L1: 4096
Elements in L2: 32768
Elements in L3: 393216
</code></pre>
<p>If I now use timeit on above functions for various <code>k</code> (stride length) I get</p>
<pre><code>for loop
stride= 1: total time to traverse = 598 ms ± 18.5 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
stride= 1: time per element = 71.261 nsec +/- 2.208 nsec
stride= 2: total time to traverse = 294 ms ± 3.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)
stride= 2: time per element = 70.197 nsec +/- 0.859 nsec
stride= 4: total time to traverse = 151 ms ± 1.4 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
stride= 4: time per element = 72.178 nsec +/- 0.666 nsec
stride= 8: total time to traverse = 77.2 ms ± 1.55 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)
stride= 8: time per element = 73.579 nsec +/- 1.476 nsec
stride= 16: total time to traverse = 37.6 ms ± 684 μs per loop (mean ± std. dev. of 7 runs, 10 loops each)
stride= 16: time per element = 71.730 nsec +/- 1.305 nsec
stride= 32: total time to traverse = 20 ms ± 1.39 ms per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 32: time per element = 76.468 nsec +/- 5.304 nsec
stride= 64: total time to traverse = 10.8 ms ± 707 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 64: time per element = 82.099 nsec +/- 5.393 nsec
stride= 128: total time to traverse = 5.16 ms ± 225 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 128: time per element = 78.777 nsec +/- 3.426 nsec
stride= 256: total time to traverse = 2.5 ms ± 114 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 256: time per element = 76.383 nsec +/- 3.487 nsec
stride= 512: total time to traverse = 1.31 ms ± 38.7 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 512: time per element = 80.239 nsec +/- 2.361 nsec
stride=1024: total time to traverse = 678 μs ± 36.3 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride=1024: time per element = 82.716 nsec +/- 4.429 nsec
Vectorized
stride= 1: total time to traverse = 6.12 ms ± 708 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 1: time per element = 0.729 nsec +/- 0.084 nsec
stride= 2: total time to traverse = 5.5 ms ± 862 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 2: time per element = 1.311 nsec +/- 0.206 nsec
stride= 4: total time to traverse = 5.73 ms ± 871 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 4: time per element = 2.732 nsec +/- 0.415 nsec
stride= 8: total time to traverse = 5.73 ms ± 401 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 8: time per element = 5.468 nsec +/- 0.382 nsec
stride= 16: total time to traverse = 4.01 ms ± 107 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 16: time per element = 7.644 nsec +/- 0.205 nsec
stride= 32: total time to traverse = 2.35 ms ± 178 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 32: time per element = 8.948 nsec +/- 0.680 nsec
stride= 64: total time to traverse = 1.42 ms ± 74.7 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 64: time per element = 10.809 nsec +/- 0.570 nsec
stride= 128: total time to traverse = 792 μs ± 100 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 128: time per element = 12.089 nsec +/- 1.530 nsec
stride= 256: total time to traverse = 300 μs ± 19.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 256: time per element = 9.153 nsec +/- 0.587 nsec
stride= 512: total time to traverse = 144 μs ± 7.38 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
stride= 512: time per element = 8.780 nsec +/- 0.451 nsec
stride=1024: total time to traverse = 67.8 μs ± 5.67 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
stride=1024: time per element = 8.274 nsec +/- 0.692 nsec
for loop numba
stride= 1: total time to traverse = 6.11 ms ± 316 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 1: time per element = 0.729 nsec +/- 0.038 nsec
stride= 2: total time to traverse = 5.02 ms ± 246 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 2: time per element = 1.197 nsec +/- 0.059 nsec
stride= 4: total time to traverse = 4.93 ms ± 366 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 4: time per element = 2.350 nsec +/- 0.175 nsec
stride= 8: total time to traverse = 5.55 ms ± 500 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 8: time per element = 5.292 nsec +/- 0.476 nsec
stride= 16: total time to traverse = 3.65 ms ± 228 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 16: time per element = 6.969 nsec +/- 0.434 nsec
stride= 32: total time to traverse = 2.13 ms ± 48.8 μs per loop (mean ± std. dev. of 7 runs, 100 loops each)
stride= 32: time per element = 8.133 nsec +/- 0.186 nsec
stride= 64: total time to traverse = 1.48 ms ± 75.2 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 64: time per element = 11.322 nsec +/- 0.574 nsec
stride= 128: total time to traverse = 813 μs ± 84.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 128: time per element = 12.404 nsec +/- 1.283 nsec
stride= 256: total time to traverse = 311 μs ± 14.1 μs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)
stride= 256: time per element = 9.477 nsec +/- 0.430 nsec
stride= 512: total time to traverse = 138 μs ± 7.46 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
stride= 512: time per element = 8.394 nsec +/- 0.455 nsec
stride=1024: total time to traverse = 67.6 μs ± 6.14 μs per loop (mean ± std. dev. of 7 runs, 10,000 loops each)
stride=1024: time per element = 8.253 nsec +/- 0.750 nsec
</code></pre>
<p>As already mentioned by @jérôme-richard, the Python overhead in the standard for loop compared to the numpy vectorization or the numba case is huge, ranging from a factor of 10 to 100.</p>
<p>The numpy vectorization / numba cases are comparable.</p>
|
<python><numpy><cpu-cache>
|
2024-09-27 09:24:17
| 1
| 1,962
|
Stefan
|
79,030,673
| 5,490,316
|
Message: session not created: This version of ChromeDriver only supports Chrome version 114 Current browser version is 129.0.6668.60 with binary path
|
<p>I'm using Selenium and ChromeDriver; I have worked with them several times without errors. Suddenly today I got this warning:</p>
<blockquote>
<p>The chromedriver version (114.0.5735.90) detected in PATH at C:\Work\Scrape\chromedriver.exe might not be compatible with the detected chrome version (129.0.6668.60); currently, chromedriver 129.0.6668.70 is recommended for chrome 129.*, so it is advised to delete the driver in PATH and retry</p>
</blockquote>
<p>and an error message:</p>
<blockquote>
<p>selenium.common.exceptions.SessionNotCreatedException: Message: session not created: This version of ChromeDriver only supports Chrome version 114
Current browser version is 129.0.6668.60 with binary path C:\Program Files\Google\Chrome\Application\chrome.exe</p>
</blockquote>
<p>I read a year-old answered question <a href="https://stackoverflow.com/questions/76913935/selenium-common-exceptions-sessionnotcreatedexception-this-version-of-chromedri">here</a>; I had already been using that code until now. The previous question was about ChromeDriver 114 and Chrome 116; now my problem is Chrome version 129. Should I downgrade my Chrome, and how do I do it? I'm using Selenium 4.20.</p>
|
<python><google-chrome><selenium-webdriver><selenium-chromedriver>
|
2024-09-27 09:15:59
| 1
| 387
|
louislugas
|
79,029,563
| 11,626,909
|
How to scrape all customer reviews?
|
<p>I am trying to scrape all reviews on this website - <a href="https://www.backmarket.com/en-us/r/l/airpods/345c3c05-8a7b-4d4d-ac21-518b12a0ec17" rel="nofollow noreferrer">https://www.backmarket.com/en-us/r/l/airpods/345c3c05-8a7b-4d4d-ac21-518b12a0ec17</a>. The website says there are 753 reviews, but when I try to scrape them, I get only 10. I am not sure how to scrape all 753 reviews from the page. Here is my code:</p>
<pre><code># importing modules
import pandas as pd
from requests import get
from bs4 import BeautifulSoup
# Fetch the web page
url = 'https://www.backmarket.com/en-us/r/l/airpods/345c3c05-8a7b-4d4d-ac21-518b12a0ec17'
response = get(url) # link excludes posts with no pictures
page = response.text
# Parse the HTML content
soup = BeautifulSoup(page, 'html.parser')
# To see different information
## reviewer's name
reviewers_name = soup.find_all('p', class_='body-1-bold')
[x.text for x in reviewers_name]
name = []
for items in reviewers_name:
name.append(items.text if items else None)
## Purchase Data
purchase_date = soup.find_all('p', class_='text-static-default-low body-2')
[x.text for x in purchase_date]
date = []
for items in purchase_date:
date.append(items.text if items else None)
## Country
country_text = soup.find_all('p', class_='text-static-default-low body-2 mt-32')
[x.text for x in country_text]
country = []
for items in country_text:
country.append(items.text if items else None)
## Reviewed Products
products_text = soup.find_all('span', class_= 'rounded-xs inline-block max-w-full truncate body-2-bold px-4 py-0 bg-static-default-mid text-static-default-hi')
[x.text for x in products_text]
products = []
for items in products_text:
products.append(items.text if items else None)
## Actual Reviews
review_text = soup.find_all('p',class_='body-1 block whitespace-pre-line')
[x.text for x in review_text]
review = []
for items in review_text:
review.append(items.text if items else None)
## Review Ratings
review_ratings_value = soup.find_all('span',class_='ml-4 mt-1 md:mt-2 body-2-bold')
[x.text for x in review_ratings_value]
review_ratings = []
for items in review_ratings_value:
review_ratings.append(items.text if items else None)
# Create the Data Frame
pd.DataFrame({
'reviewers_name': name,
'purchase_date': date,
'country': country,
'products': products,
'review': review,
'review_ratings': review_ratings
})
</code></pre>
<p>My question is: how can I scrape all the reviews?</p>
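The likely reason (an assumption based on how such pages usually behave): the server-rendered HTML contains only the first batch, and the remaining reviews arrive through a paginated XHR endpoint as you scroll, which you can find in the browser DevTools Network tab and replay with <code>requests</code>. The loop pattern looks like this sketch, with a stub standing in for the real endpoint (all names hypothetical):

```python
def fetch_page(page):
    # stub for the real paginated API call, e.g.
    # get(api_url, params={"page": page}).json() -- endpoint hypothetical
    start = page * 10
    return [f"review {i}" for i in range(start, min(start + 10, 25))]

def scrape_all():
    reviews, page = [], 0
    while True:
        batch = fetch_page(page)
        if not batch:        # an empty page means we are past the last one
            break
        reviews.extend(batch)
        page += 1
    return reviews

print(len(scrape_all()))  # 25 with this stub
```

If no such endpoint exists, the fallback is a browser driver (Selenium/Playwright) that scrolls until all 753 reviews are loaded before parsing.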
|
<python><beautifulsoup>
|
2024-09-27 01:21:24
| 2
| 401
|
Sharif
|
79,029,557
| 1,890,061
|
Polars dataframe: how to efficiently aggregate over many non-disjoint groups
|
<p>I have a dataframe with columns <code>x</code>, <code>y</code>, <code>c_1</code>, <code>c_2</code>, ..., <code>c_K</code>, where K is somewhat large (K ≈ 1000 or 2000).</p>
<p>Each of the columns <code>c_i</code> is a boolean column, and I'd like to compute an aggregation <code>f(x, y)</code> over the rows where <code>c_i</code> is True. (For example, <code>f(x,y) = x.sum() * y.sum()</code>.)</p>
<p>One way to do this is:</p>
<pre><code>ds.select([
f(pl.col("x").filter(pl.col(f"c_{i+1}")), pl.col("y").filter(pl.col(f"c_{i+1}")))
for i in range(K)
])
</code></pre>
<p>In my problem, the number <code>K</code> is large, and the above query seems somewhat inefficient (filtering is done twice).</p>
<ul>
<li>What is the recommended/most efficient/most elegant way of accomplishing this?</li>
</ul>
<hr />
<p><strong>EDIT.</strong></p>
<p>Here is a runnable example (code at bottom), as well as some timings corresponding to @Hericks's answer below. TLDR: <strong>Method 1</strong> as proposed is the current best.</p>
<div class="s-table-container"><table class="s-table">
<thead>
<tr>
<th></th>
<th></th>
<th>Wall time</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td><strong>repeated filters</strong></td>
<td><strong>409ms</strong></td>
</tr>
<tr>
<td>2</td>
<td><code>pl.concat</code></td>
<td>29.6s (≈70x slower)</td>
</tr>
<tr>
<td>2*</td>
<td><code>pl.concat</code>, lazy</td>
<td>1.27s (3x slower)</td>
</tr>
<tr>
<td>3</td>
<td>unpivot with agg</td>
<td>1min 17s</td>
</tr>
<tr>
<td>3*</td>
<td>unpivot with agg, lazy</td>
<td>1min 17s (same as 3)</td>
</tr>
</tbody>
</table></div>
<pre><code>import polars as pl
import polars.selectors as cs
import numpy as np
rng = np.random.default_rng()
def f(x,y):
return x.sum() * y.sum()
N = 2_000_000
K = 1000
dat = dict()
dat["x"] = np.random.randn(N)
dat["y"] = np.random.randn(N)
for i in range(K):
dat[f"c_{i+1}"] = rng.choice(2, N).astype(np.bool_)
tmpds = pl.DataFrame(dat)
## Method 1
tmpds.select([
f(
pl.col("x").filter(pl.col(f"c_{i+1}")),
pl.col("y").filter(pl.col(f"c_{i+1}")))
.alias(f"f_{i+1}") for i in range(K)
])
## Method 2
pl.concat([
tmpds.filter(pl.col(f"c_{i+1}")).select(f(pl.col("x"), pl.col("y")).alias(f"f_{i+1}"))
for i in range(K)
], how="horizontal")
## Method 2*
pl.concat([
tmpds.lazy().filter(pl.col(f"c_{i+1}")).select(f(pl.col("x"), pl.col("y")).alias(f"f_{i+1}")).collect()
for i in range(K)
], how="horizontal")
## Method 3
(
tmpds
.unpivot(on=cs.starts_with("c"), index=["x", "y"])
.filter("value")
.group_by("variable")
.agg(
f(pl.col("x"), pl.col("y"))
)
)
##Method 3*
(
tmpds
.lazy()
.unpivot(on=cs.starts_with("c"), index=["x", "y"])
.filter("value")
.group_by("variable", maintain_order=True)
.agg(
f(pl.col("x"), pl.col("y"))
)
.collect()
)
</code></pre>
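For this particular separable <code>f(x, y) = x.sum() * y.sum()</code> there is also a linear-algebra shortcut worth noting (a sketch; it does not generalize to arbitrary <code>f</code>): with the boolean columns stacked into an <code>(N, K)</code> mask matrix <code>C</code> (e.g. via <code>df.select(...).to_numpy()</code>), all K masked sums collapse into two matrix-vector products, avoiding every filter:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 8
x = rng.standard_normal(N)
y = rng.standard_normal(N)
C = rng.integers(0, 2, (N, K)).astype(np.float64)  # boolean masks as 0/1 floats

f_vals = (x @ C) * (y @ C)   # K values of masked x-sum times masked y-sum

# cross-check one group against the plain filter formulation
mask = C[:, 0].astype(bool)
assert np.isclose(f_vals[0], x[mask].sum() * y[mask].sum())
```

This works for any aggregation that is built from per-group sums; for non-separable aggregations the filter-based methods above remain necessary.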
|
<python><python-polars>
|
2024-09-27 01:17:40
| 2
| 804
|
Kevin
|
79,029,447
| 1,048,404
|
KeyError: 'Name' in pipenv/vendor/importlib_metadata/_adapters.py:54 using pipenv
|
<p>I got this weird error in my CI/CD pipeline running in GCP Cloud build.</p>
<blockquote>
<p>Package installation failed... KeyError: 'Name' site-packages/pipenv/vendor/importlib_metadata/_adapters.py:54. Everything is working locally when running <code>pipenv install</code>.</p>
</blockquote>
<p>I'm running in <code>python 3.11.4</code>.</p>
<p>Troubleshooting steps that I have followed are:</p>
<ul>
<li>Remove the <code>Pipfile.lock</code> entirely and run again <code>pipenv install</code> didn't work.</li>
<li>Removed the environment entirely with <code>pipenv --rm</code>, removed the cache with <code>pip cache purge</code>, then recreated the env with <code>pipenv shell && pipenv install</code>; this didn't solve the issue either.</li>
</ul>
<h5>Still more on it:</h5>
<ul>
<li>I found this thread: <a href="https://github.com/pypa/twine/issues/1125#issuecomment-2191295673" rel="nofollow noreferrer">https://github.com/pypa/twine/issues/1125#issuecomment-2191295673</a> where the issue is identified as caused by a breaking change in <code>importlib_metadata==8.0.0</code>, and the suggested fix is simply to pin the version <code>importlib-metadata==7.2.1</code>. I went ahead and applied the change, but unfortunately it didn't solve the issue.</li>
<li>I also ran <code>pipenv graph | grep "importlib"</code>, and all the importlib dependencies point to that <code>7.2.1</code> version.</li>
</ul>
|
<python><cicd><google-cloud-build><pipenv><kaniko>
|
2024-09-26 23:52:39
| 1
| 1,772
|
allexiusw
|
79,029,400
| 6,599,898
|
Regular expression for Python to detect a string and find code block corresponding to the string
|
<p>I have the text below:</p>
<pre><code>text = '''
here are some examples
@with some text
@then some more text
@ some more@text @moretext probedetail
how are you
something is going on
}
how has it been
weather is good
'''
</code></pre>
<p>I want to do the following -</p>
<ol>
<li>Find the string 'probedetail'.</li>
<li>Find the @ closest to 'probedetail'. It has to be before 'probedetail', but not on the same line, so we need to search the lines above.</li>
<li>After finding the closest @ in the lines above,</li>
<li>start from that @ and go down until the first } is found.</li>
<li>Extract the text.</li>
<p>So in the above example I am expecting the answer below:</p>
<p>@then some more text \n
@ some more@text @moretext probedetail</p>
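<p>A line-walking sketch of the stated rules (the stopping point of step 5 is ambiguous, so this keeps everything from the anchor <code>@</code> down to, but excluding, the first <code>}</code>; trim after the needle line if that is what step 5 means):</p>

```python
text = '''
here are some examples
@with some text
@then some more text
@ some more@text @moretext probedetail
how are you
something is going on
}
how has it been
weather is good
'''

def extract_block(text, needle="probedetail"):
    lines = text.splitlines()
    # index of the line containing the needle
    probe = next(i for i, ln in enumerate(lines) if needle in ln)
    # closest line strictly above it that contains '@'
    anchor = next(i for i in range(probe - 1, -1, -1) if "@" in lines[i])
    col = lines[anchor].rindex("@")  # rightmost '@' on that line
    out = [lines[anchor][col:]]
    for ln in lines[anchor + 1:]:
        if "}" in ln:  # stop just before the first closing brace
            break
        out.append(ln)
    return "\n".join(out)

print(extract_block(text))
```

<p>A pure-regex version is possible too, but a two-step search (needle first, then backwards for the <code>@</code>) is much easier to keep correct.</p>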
|
<python><regex>
|
2024-09-26 23:26:45
| 1
| 301
|
Harneet.Lamba
|
79,029,223
| 3,879,857
|
Order of system-wide locations and virtual environment in `sys.path`
|
<p>I created a virtual environment using</p>
<pre class="lang-none prettyprint-override"><code>python -m venv venv
</code></pre>
<p>Now I'm opening a Python shell without activating the virtual environment by running</p>
<pre><code>import sys
print(sys.path, sys.prefix)
</code></pre>
<p>I get</p>
<pre class="lang-none prettyprint-override"><code>['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/usr/lib/python3.12/site-packages'] /usr
</code></pre>
<p>which is exactly what I'm expecting.</p>
<p>While, if I activate the environment, the output is</p>
<pre class="lang-none prettyprint-override"><code>['', '/usr/lib/python312.zip', '/usr/lib/python3.12', '/usr/lib/python3.12/lib-dynload', '/home/myname/mypath/venv/lib/python3.12/site-packages'] /home/myname/Projects/pypath/venv
</code></pre>
<p>What upsets me is that even <em>inside</em> the venv the interpreter seems to search for packages first in the system-wide locations, and only then inside the venv directory.</p>
<p>Is this true?</p>
|
<python><python-3.x><python-venv>
|
2024-09-26 21:37:55
| 2
| 887
|
MaPo
|
79,029,220
| 14,250,641
|
Efficient Mapping of Average Scores Across Interval Windows
|
<p>I have a df of genomic intervals with millions of rows for example:</p>
<pre><code>chromosome start end
1 300 500
1 400 600
... ... ...
</code></pre>
<ol>
<li>find the center of each interval (the midpoint of start and end)</li>
</ol>
<pre><code>chromosome start end center
1 300 500 400
1 400 600 500
... ... ...
</code></pre>
<ol start="2">
<li>Create new windows (-/+ 250 around the center)</li>
</ol>
<pre><code>chromosome start end center window_start window_end
1 300 500 400 50 750
1 400 600 500 150 850
... ... ...
</code></pre>
<ol start="3">
<li>get the scores for every single position within these windows (using my own function). You must first recreate the df with only Chr, window_start, and window_end.</li>
</ol>
<pre><code>chromosome start end
1 50 750
1 150 850
... ... ...
</code></pre>
<ol start="4">
<li>Run my score function on the new windows.</li>
</ol>
<p>My score function will output the following df (much bigger, there will be 500 rows for each interval because 1 row=1 position. For this example, it would be 1000 rows because we have 2 intervals). :</p>
<pre><code>chromosome start end score
1 50 50 .8
1 51 51 .2
1 52 52 .12
...
1 750 750 .43
... ... ...
</code></pre>
<ol start="5">
<li>FINAL: I want a plot such that I will be able to have the x axis represent the position relative to the center (-250th position to +250th position) and the y axis will be the average score. So I would take the -250th position across all of my intervals (here I only have 2 intervals) and I would average across those scores.</li>
</ol>
<p>Example of what I want is here:
<a href="https://i.sstatic.net/j5hIGUFd.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/j5hIGUFd.png" alt="enter image description here" /></a></p>
<p>I mostly need help with getting from step 4 to 5 in the most efficient way (step 4 isn't the time-consuming part, step 5 is-- according to the way I'm doing my code right now).</p>
<p>This code here works perfectly, but it is way too slow.</p>
<pre><code>def get_postional_avgs(promoter_df):
promoter_df['center'] = (promoter_df['Start'] + promoter_df['End']) // 2
# Calculate window start and end
promoter_df['window_start'] = promoter_df['center'] - 250
promoter_df['window_end'] = promoter_df['center'] + 250
# Prepare a new DataFrame for conservation score requests
conservation_intervals = promoter_df[['Chromosome', 'window_start', 'window_end']]
conservation_intervals.columns = ['Chromosome', 'Start', 'End'] #create a new start/end
# Retrieve base-level conservation scores
base_level_scores = get_base_scores(conservation_intervals)
# Initialize a DataFrame to hold average scores
average_scores = []
for index, row in promoter_df.iterrows():
center = row['center']
# Extract scores for the current cCRE
scores_for_cCRE = base_level_scores[
(base_level_scores['Chromosome'] == row['Chromosome']) &
(base_level_scores['Start'] >= (center - 250)) &
(base_level_scores['End'] <= (center + 250))
]
# Initialize a list to store average scores for each position
avg_scores = {}
for pos in range(-250, 251):
current_position = center + pos
current_scores = scores_for_cCRE[scores_for_cCRE['Start'] == current_position]
if not current_scores.empty:
avg_scores[pos] = current_scores['Score'].mean()
else:
avg_scores[pos] = np.nan # Handle missing scores
# Append to the results list
average_scores.append(avg_scores)
# Convert average scores to a DataFrame
avg_score_df = pd.DataFrame(average_scores)
</code></pre>
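<p>Not my exact pipeline, but a sketch of one way to vectorise steps 4 to 5, assuming the score frame carries (or can be given) a window-id column so every base maps back to its own center; the toy numbers stand in for the real bigWig scores:</p>

```python
import numpy as np
import pandas as pd

# toy stand-in: two windows of half-width 2 (instead of 250), centered at 10 and 20
half = 2
centers = np.array([10, 20])
rows = []
for win_id, c in enumerate(centers):
    for pos in range(c - half, c + half + 1):
        rows.append((win_id, pos, float(pos - c)))  # score == relative position, easy to verify
scores = pd.DataFrame(rows, columns=["win", "Start", "Score"])

# map every base back to its own window's center, then average per relative position
scores["rel"] = scores["Start"] - scores["win"].map(dict(enumerate(centers)))
profile = scores.groupby("rel")["Score"].mean()  # index runs from -half to +half
```

<p><code>profile.plot()</code> then yields the position-versus-mean-score figure directly; a single <code>groupby</code> replaces the per-interval Python loops.</p>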
<p>If you want to see the code for the get_score():</p>
<pre><code>import pyBigWig
def get_base_level_conservation_scores(df):
"""
Retrieve all conservation scores that occur within a given genomic interval.
The output DataFrame will be expanded to include scores for each base.
Output DataFrame has 1-indexed coordinates.
"""
df = df.reset_index(drop=True)
bw = pyBigWig.open("hg38.phyloP100way.bw")
# Convert to 0-indexing (assuming input is 1-indexed)
df['Start'] = df['Start'] - 1
# Initialize score data list
score_data = {
'Chromosome': [],
'Start': [],
'End': [],
'Score': []
}
# Process each interval
for i in range(len(df)):
chrom = df['Chromosome'][i]
start = df['Start'][i]
end = df['End'][i]
# Check for invalid intervals
if start < 0 or end <= 0 or start >= end:
print(f"Invalid interval for {chrom}: start={start}, end={end}")
continue # Skip this iteration
# Get scores for the interval
try:
scores_per_base = bw.intervals(chrom, start, end)
except RuntimeError as e:
print(f"Error for {chrom}: start={start}, end={end} - {e}")
continue # Skip this iteration if there's an error
if scores_per_base is None or len(scores_per_base) == 0:
# If there are no scores in the specified interval, add NaN entries
for position in range(start, end):
score_data['Chromosome'].append(chrom)
score_data['Start'].append(position)
score_data['End'].append(position)
score_data['Score'].append(np.nan)
else:
for start_pos, end_pos, score in scores_per_base:
for position in range(start_pos, end_pos):
score_data['Chromosome'].append(chrom)
score_data['Start'].append(position)
score_data['End'].append(position)
score_data['Score'].append(score)
# Convert to DataFrame, drop duplicates, and drop NaN scores
score_data_df = pd.DataFrame(score_data)
score_data_df = score_data_df.drop_duplicates(subset=['Chromosome', 'Start', 'End'])
score_data_df = score_data_df.dropna(subset=['Score'])
# Close the bigWig file
bw.close()
return score_data_df
</code></pre>
|
<python><pandas><dataframe><matplotlib><bioinformatics>
|
2024-09-26 21:36:08
| 1
| 514
|
youtube
|
79,029,168
| 3,453,776
|
How to unit test a gRPC client by simulating a grpc.GrpcError in Python
|
<p>I have implemented a gRPC server and a client <a href="https://github.com/avinassh/grpc-errors/blob/master/python/client.py" rel="nofollow noreferrer">following this example</a>.
I'm using the <code>grpc.ServicerContext.abort_with_status()</code> method to raise the exceptions in the server.</p>
<p>The unit tests for the error cases in the server are written the same way I'm calling the server from the client, by catching the <code>grpc.RpcError</code> exception:</p>
<pre><code> try:
result = client.my_rpc(request)
except grpc.RpcError as rpc_error:
status = rpc_status.from_call(rpc_error)
</code></pre>
<p>Now I want to test the client, but I have not found a way to test the cases where this exception occurs. I'm trying to mock the stub method call to raise this exception.</p>
<p>But in the tests, when trying to obtain the status:
<code>status = rpc_status.from_call(rpc_error)</code>
I'm getting the following exception:
<code>AttributeError: 'RpcError' object has no attribute 'trailing_metadata'</code></p>
<p>I have not even tried to add a status to this RpcError, I'm following what was done in <a href="https://stackoverflow.com/a/61729432/3453776">this answer</a>.</p>
<p>Has anyone faced this issue?</p>
<p>I have tried so many things and I can not find a way to make it work.
I implemented a class to act as a stub that uses the <code>grpcio-testing</code> library to make calls to another class which is a servicer class I implemented. Then mocking the stub in the client, but I couldn't make it raise the exception when using the <code>grpc.ServicerContext.abort_with_status()</code>.</p>
<p>Any help is welcome, if there is a better way to test this, please let me know.</p>
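<p>Not a definitive fix, but the AttributeError suggests the mocked exception simply lacks the call interface that <code>rpc_status.from_call</code> reads. A minimal stand-in sketch (all names hypothetical): in real code you would subclass <code>grpc.RpcError</code> and put a serialized <code>google.rpc.Status</code> under the <code>grpc-status-details-bin</code> trailing-metadata key:</p>

```python
from unittest import mock

class FakeRpcError(Exception):
    """Hypothetical stand-in for a grpc.RpcError raised by a stub call.

    rpc_status.from_call only needs code(), details() and trailing_metadata()
    carrying the serialized google.rpc.Status bytes."""

    def __init__(self, code, details, status_bytes):
        self._code = code
        self._details = details
        self._trailing = (("grpc-status-details-bin", status_bytes),)

    def code(self):
        return self._code

    def details(self):
        return self._details

    def trailing_metadata(self):
        return self._trailing

# in a test, the mocked stub method raises it:
stub = mock.Mock()
stub.my_rpc.side_effect = FakeRpcError("INTERNAL", "boom", b"serialized-status")
err = FakeRpcError("INTERNAL", "boom", b"serialized-status")
```

<p>With grpcio-status installed, the status bytes would come from <code>google.rpc.status_pb2.Status(...).SerializeToString()</code>, so <code>from_call</code> can deserialize them.</p>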
|
<python><grpc><grpcio>
|
2024-09-26 21:15:17
| 0
| 571
|
nnov
|
79,029,009
| 12,336,422
|
How to get nice types from python `dataclass` without instantiating the class
|
<p>Let's say I have the following class:</p>
<pre class="lang-py prettyprint-override"><code>from dataclasses import dataclass
from typing import Annotated
@dataclass
class Pizza:
price: Annotated[float, "money"]
size: Annotated[float, "dimension"]
    @dataclass
    class Topping:
        sauce: Annotated[str, "matter"]
</code></pre>
<p>These are obviously data classes (i.e. for storing data) but I would like to use them for type hinting as well. For example, it would be quite straightforward to write functions like:</p>
<pre class="lang-py prettyprint-override"><code>def bake_pizza(pizza: Pizza) -> Pizza:
some_operation
return new_pizza
def check_if_vegetarian(topping: Pizza.Topping) -> bool:
some_operation
return True
</code></pre>
<p>And the type comparison via e.g. <code>isinstance</code> will work without a problem.</p>
<p>Now my problem is that if I'm using the attributes, I can obviously not do the same, so for example I cannot write a function like:</p>
<pre class="lang-py prettyprint-override"><code>def calculate_cost(cost: Pizza.price) -> float:
some_calculation
return cost
</code></pre>
<p>because <code>price</code> is an attribute and not a class. I know that instead I could write:</p>
<pre class="lang-py prettyprint-override"><code>def calculate_cost(cost: Pizza.__annotations__["price"]) -> float:
some_calculation
return cost
</code></pre>
<p>Then I would get what I want, but it's for us a bit of a problem that depending on whether it's an attribute or a class the notation is so different. So far the only one possibility to do it without having to distinguish between attributes and classes was to use strings, e.g. <code>"Pizza.Topping"</code> or <code>"Pizza.price"</code>, but I'm not sure how clean this method is. Does anyone happen to have a better idea?</p>
|
<python><python-typing><python-dataclasses>
|
2024-09-26 20:24:18
| 1
| 733
|
sams-studio
|
79,028,930
| 19,299,757
|
How to maximize browser window in Playwright
|
<p>I have 2 questions related to Python Playwright when launching a browser.
It looks like I can use</p>
<pre><code>def test_has_title(page: Page):
page.goto("https://my-url.com")
</code></pre>
<p>Another way is</p>
<pre><code>with sync_playwright() as p:
browser = p.chromium.launch(headless=False, args=["--start-maximized"])
page = browser.new_page()
page.goto("https://my-url.com")
</code></pre>
<ol>
<li>What is the correct way of launching the browser? When should I use any of the above?</li>
<li>My browser window is not maximized in spite of the --start-maximized argument. What is the way to maximize the browser window?</li>
</ol>
|
<python><playwright><playwright-python>
|
2024-09-26 19:53:26
| 2
| 433
|
Ram
|
79,028,844
| 11,951,910
|
Is it possible to get the owner of a file from S3 bucket
|
<p>I am new to AWS and have been tasked with a few AWS S3 related items. One is: can I get the owner of a file in S3? Multiple people are uploading files to S3, but I would like to get the original owner. After downloading any file locally, it shows me as the owner when I check the properties.
Checking S3, it looks like the only metadata is type, last modified, size, storage class and ETag.</p>
|
<python><python-3.x><amazon-web-services><amazon-s3>
|
2024-09-26 19:23:53
| 0
| 718
|
newdeveloper
|
79,028,838
| 12,162,229
|
Polars Rolling Mean, fill start of window with null instead of shortened window
|
<p>My question is whether there is a way to have null until the full window can be filled at the start of a rolling window in polars. For example:</p>
<pre><code>dates = [
"2020-01-01",
"2020-01-02",
"2020-01-03",
"2020-01-04",
"2020-01-05",
"2020-01-06",
"2020-01-01",
"2020-01-02",
"2020-01-03",
"2020-01-04",
"2020-01-05",
"2020-01-06",
]
df = pl.DataFrame({"dt": dates, "a": [3, 4, 2, 8, 10, 1, 1, 7, 5, 9, 2, 1], "b": ["Yes","Yes","Yes","Yes","Yes", "Yes", "No", "No", "No", "No", "No", "No"]}).with_columns(
pl.col("dt").str.strptime(pl.Date).set_sorted()
)
df = df.sort(by = 'dt')
df.rolling(
index_column="dt", period="2d", group_by = 'b'
).agg(pl.col("a").mean().alias("ma_2d"))
</code></pre>
<p>Result</p>
<pre><code>b dt ma_2d
str date f64
"Yes" 2020-01-01 3.0
"Yes" 2020-01-02 3.5
"Yes" 2020-01-03 3.0
"Yes" 2020-01-04 5.0
"Yes" 2020-01-05 9.0
</code></pre>
<p>My expectation in this case is that the first day should be null because there aren't 2 days to fill the window. But polars seems to just truncate the window to fill the starting days.</p>
|
<python><python-polars>
|
2024-09-26 19:22:12
| 2
| 317
|
AColoredReptile
|
79,028,654
| 7,872,857
|
fmin_slsqp taking too long
|
<p>I am using fmin_slsqp to find the weights that minimize mean squared error. The weights need to be positive. For each pair of X and y, it takes ~10 seconds. (Each X is (10, 1000) and y is (10,)). I have 8000 pairs that need to be calculated:(</p>
<p>Is there any error in the code, or is it just that my data takes too long to converge? Is there any way to make this process more efficient? For example, is there a way to calculate all 8000 pairs together?</p>
<pre><code>import numpy as np
from functools import partial
from scipy.optimize import fmin_slsqp

def loss(W, X, y):
return np.mean((y - X.dot(W))**2)
def get_result(X, y):
w_start = [1/X.shape[1]] * X.shape[1]
    weights = fmin_slsqp(partial(loss, X=X, y=y),
np.array(w_start),
bounds=[(0.0, np.inf)] * X.shape[1],
disp=False)
return weights
</code></pre>
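<p>Not certain this fits every constraint, but the problem as written is exactly non-negative least squares (minimizing the MSE with w &gt;= 0), which <code>scipy.optimize.nnls</code> solves directly and usually far faster than SLSQP. A sketch (<code>get_result_fast</code> is a made-up name):</p>

```python
import numpy as np
from scipy.optimize import nnls

def get_result_fast(X, y):
    # min ||X w - y||_2 subject to w >= 0; same minimiser as the MSE formulation
    w, _residual = nnls(X, y)
    return w

# tiny sanity check with a known answer
X = np.eye(3)
y = np.array([1.0, 0.0, 2.0])
w = get_result_fast(X, y)
```

<p>Since the 8000 pairs are independent, the calls can additionally be spread over cores with <code>multiprocessing.Pool</code>.</p>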
|
<python><scipy><scipy-optimize>
|
2024-09-26 18:13:59
| 2
| 481
|
Anonny
|
79,028,538
| 19,299,757
|
Playwright timeout
|
<p>I am very new to Playwright and I am trying a simple POC where I open a URL, enter a user and password and click the LOGIN button, then check whether a particular title is present or not.
This is my simple code.</p>
<pre><code>import pytest
import asyncio
from playwright.sync_api import Page, expect, async_playwright
@pytest.mark.asyncio
async def test_has_title():
async with async_playwright() as p:
browser = await p.chromium.launch(headless=False)
page = await browser.new_page()
agree_button = "//button[@class='agree-btn'][contains(text(),'AGREE & CONTINUE')]"
user_text_box = "USER"
pwd_text_box = "PASSWORD"
login_button = "//button[contains(text(),'LOGIN')]"
await page.goto("https://my-url.com")
await page.get_by_role("button", name="AGREE & CONTINUE").click()
await page.get_by_placeholder("Username").click()
await page.get_by_placeholder("Username").fill("myuser")
await page.get_by_placeholder("Password").click()
await page.get_by_placeholder("Password").fill("mypassword")
await page.get_by_role("button", name="LOGIN",).click()
await expect(page.locator('title')).to_have_text('Welcome', timeout=50000)
</code></pre>
<p>I run this using this command from Pycharm.</p>
<pre><code>pytest -v -s --capture=tee-sys --html=report.html .\test_login.py --headed
</code></pre>
<p>This is running OK until it encounters the await expect line and throws this error.</p>
<pre><code>await expect(page.locator('title')).to_have_text('Citco Treasury', timeout=50000)
..\venv\Lib\site-packages\playwright\sync_api\__init__.py:142: in __call__
raise ValueError(f"Unsupported type: {type(actual)}")
E ValueError: Unsupported type: <class 'playwright.async_api._generated.Locator'>
</code></pre>
<p>After clicking the LOGIN button, this app takes a few seconds to load before the title is
visible. Is that a reason for this error? Is it unable to find the title just after clicking the LOGIN button?</p>
<p>Any help is much appreciated.
By the way, isn't await supposed to wait before the next action in the test sequence?</p>
|
<python><pytest><playwright><playwright-python>
|
2024-09-26 17:35:41
| 0
| 433
|
Ram
|
79,028,133
| 7,758,213
|
Managing all modules imports in a single file in Python/Django project, while avoiding circular imports
|
<p>I'm working on a Django project that has numerous imported modules across various files, sometimes totaling up to 50 lines of imports per page. To reduce clutter, I created a single file, "imports.py", to centralize my imports.</p>
<p>Here's a brief example of what it looks like:</p>
<pre><code>from datetime import date, datetime
from typing import Any, Callable, Optional, Final, Type, TypeAlias, cast, Iterable
from functools import wraps
from cryptography.fernet import Fernet
from requests.auth import HTTPBasicAuth
from requests.models import Response
from PIL import Image, UnidentifiedImageError
from smtplib import SMTPRecipientsRefused
from dotenv import load_dotenv
from django.db import models, IntegrityError
...
# a lot more
__all__ = [
'datetime', 'Any',
'Callable', 'Optional', 'Final', 'Type', 'TypeAlias', 'cast', 'Iterable', 'wraps',
'Fernet', 'HTTPBasicAuth', 'Response', 'Image', 'UnidentifiedImageError',
'SMTPRecipientsRefused', 'load_dotenv', 'models', 'IntegrityError',......]
</code></pre>
<p>Then, in other files, I import everything from imports.py like this:</p>
<pre><code>from bi_app.py.imports import *
</code></pre>
<p>While I know this might be unconventional, I find it more organized.
This method works well for external modules and also for linters, but when I try to include imports for my own project files, I often run into circular import issues.</p>
<p>My question is: Is there a way to combine all imports from my own files, into a single file without causing circular imports?
Thanks for your help</p>
|
<python><django><python-import>
|
2024-09-26 15:49:51
| 1
| 968
|
Izik
|
79,028,098
| 1,088,979
|
How to detect and properly handle signal.SIGTERM in Windows 11
|
<p>In my Python program, I need to detect and gracefully handle <code>signal.SIGTERM</code></p>
<p>I use the following two small Python programs to send and handle <code>signal.SIGTERM</code>:</p>
<p><strong>Receiver and handler:</strong></p>
<pre><code>import signal, time, os
def handle_signal(signum, frame):
global running
print ("Received signal", signum)
running = False
signal.signal(signal.SIGTERM, handle_signal)
# Write the PID to a file
with open("pid.txt", "w") as f:
f.write(str(os.getpid()))
running = True
while running:
time.sleep(1)
</code></pre>
<p><strong>Sender:</strong></p>
<pre><code>import os, signal, time
# Read PID from file
with open("pid.txt", "r") as f:
pid = int(f.read())
time.sleep(5) # Optional delay
# Send SIGTERM signal
os.kill(pid, signal.SIGTERM)
</code></pre>
<p>As soon as I send <code>signal.SIGTERM</code> to the receiver, it gets terminated, but <code>handle_signal</code> is never called.</p>
<p>Why? How can I make this setup work under Windows 11 or Windows platform?</p>
|
<python>
|
2024-09-26 15:40:05
| 0
| 9,584
|
Allan Xu
|
79,028,082
| 20,591,261
|
Polars Pivot Dataframe an count the cumulative uniques ID
|
<p>I have a polars dataframe that contains an ID, DAY and OS column. For each day I would like to count how many unique IDs there are up to that day.</p>
<pre><code>import polars as pl
df = (
pl.DataFrame(
{
"DAY": [1,1,1,2,2,2,3,3,3],
"OS" : ["A","B","A","B","A","B","A","B","A"],
"ID": ["X","Y","Z","W","X","J","K","L","X"]
}
)
)
</code></pre>
<p>Desired Output:</p>
<pre><code>shape: (3, 3)
βββββββ¬ββββββ¬ββββββ
β DAY β A β B β
β --- β --- β --- β
β i64 β i64 β i64 β
βββββββͺββββββͺββββββ‘
β 1 β 2 β 1 β
β 2 β 2 β 3 β
β 3 β 3 β 4 β
βββββββ΄ββββββ΄ββββββ
</code></pre>
<p>It should look like this because on day 1 there are 3 rows and 3 unique IDs. On day 2 the ID "X" is repeated with the same OS, so column A stays the same, and the other 2 IDs are new, so 2 is added to B. On day 3 the ID "X" is repeated with A, and the other 2 IDs are new, so each column increases by one.</p>
<p>I think it could be solved with an approach like the following:</p>
<pre><code>(
df
.pivot(
index="DAY",
on="OS",
aggregate_function=(pl.col("ID").cum_sum().unique())
)
)
</code></pre>
|
<python><python-polars>
|
2024-09-26 15:37:24
| 1
| 1,195
|
Simon
|
79,027,972
| 477,311
|
Pandas rolling window operation inconsistent results based on the length of a series
|
<p>I stumbled upon weird behaviour in the windowing functionality in pandas: it seems that a rolling sum operation gives different results depending on the length of the series itself.</p>
<p>Given 2 series:</p>
<pre class="lang-py prettyprint-override"><code>s1 = pd.Series(np.arange(5), index=range(5)) # s1 = 1, 2, 3, 4
s2 = pd.Series(np.arange(2, 5), index=range(2, 5)) # s2 = 2, 3, 4, 5
</code></pre>
<p>We apply a rolling sum on both:</p>
<pre class="lang-py prettyprint-override"><code>k = 0.1
r1 = (s1 * k).rolling(2).sum().dropna()  # r1 = 0.1, 0.3, 0.5, 0.7
r2 = (s2 * k).rolling(2).sum().dropna()  # r2 = 0.5, 0.7
# remove values from r1 which are not in r2
r1 = r1[r2.index]  # r1 = 0.5, 0.7
# now r1 should be exactly the same as r2, let's check the indices:
all(r1.index == r2.index) # => true
</code></pre>
<p>However, if we check the values, they are not exactly equal:</p>
<pre><code>r1.iloc[0] == r2.iloc[0] # => false
abs(r1.iloc[0] - r2.iloc[0]) < 0.000000000000001 # => true
abs(r1.iloc[0] - r2.iloc[0]) < 0.0000000000000001 # => false
</code></pre>
<p>I am aware that floating point operations are not exact, and I don't think the observed behaviour is a bug.</p>
<p>However, I would assume, that the same deterministic calculations within the window(s) are applied to both series, so I would expect that the results to be exactly the same.</p>
<p>I am curious as to what exactly is causing this behaviour in the implementation of the window operation.</p>
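<p>A small demonstration of the likely mechanism (hedged, since it depends on pandas internals): <code>rolling(...).sum()</code> maintains a running accumulator, adding the value entering the window and subtracting the one leaving it, so a window's result can differ in the last ulp from the same two elements summed from scratch, and therefore depends on the series' earlier contents:</p>

```python
import numpy as np
import pandas as pd

s = pd.Series(np.arange(5) * 0.1)
roll = s.rolling(2).sum()

# the same two-element windows, each summed from scratch
direct = s + s.shift(1)

# tiny (possibly zero) discrepancy coming from the running accumulator
diff = (roll - direct).abs().max()
```

<p>The discrepancy is bounded by a few ulps, which matches the &lt; 1e-15 tolerance observed in the question; comparing rolling results with a tolerance (e.g. <code>np.isclose</code>) is the practical takeaway.</p>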
|
<python><pandas>
|
2024-09-26 15:12:19
| 1
| 886
|
lukstei
|
79,027,967
| 2,104,933
|
openpyxl.chart is creating a new series for each row
|
<p>I have a simple python script to create an excel spreadsheet with a line chart and single series of data. The data in the spreadsheet is as desired. When opened in MS Excel, the chart contains a separate series for each row in the table. The chart looks fine when opened with Numbers app.</p>
<pre><code>from datetime import date
from openpyxl import Workbook
from openpyxl.chart import LineChart, Reference

wb = Workbook()
ws = wb.active
rows = [
['Date', 'Force'],
[date(2015,9, 1), 4],
[date(2015,9, 2), 9],
[date(2015,9, 3), 11]
]
for row in rows:
ws.append(row) # Add each row to the worksheet
chart = LineChart()
chart.title = "Force Measurements"
chart.y_axis.title = "Force (LBS)"
chart.x_axis.title = "Date"
data = Reference(ws, min_col=2, min_row=1, max_col=2,max_row=4)
chart.add_data(data, titles_from_data=True)
ws.add_chart(chart, "D5")
</code></pre>
<p>The chart worksheet created in Excel ends up with four series:</p>
<p><a href="https://i.sstatic.net/FO1gg1Vo.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/FO1gg1Vo.png" alt="separate series for each row" /></a></p>
<p>But should be just one series like this:</p>
<p><a href="https://i.sstatic.net/DalaDP94.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/DalaDP94.png" alt="desired output" /></a></p>
<p>I have tried to use <code>Series</code> with no effect. I don't know if this is a bug in Excel or an issue with openpyxl. Is there anything that can be done in code to fix this in MS Excel?</p>
|
<python><openpyxl>
|
2024-09-26 15:08:35
| 0
| 2,259
|
lilbiscuit
|
79,027,896
| 461,212
|
How to reroute stdout and stderr to a Frame element (i.e. not to a Multiline element)
|
<p>The <code>Multiline</code> rerouting below works, but it does not match exactly what I want:</p>
<pre><code>output_layout = [
[sg.Text("OUTPUT")],
[sg.Multiline(size=(0,5), font='Courier 8', text_color="#DBFF33",
expand_x=True, expand_y=True, write_only=True, border_width=2,
reroute_stdout=True, reroute_stderr=True, echo_stdout_stderr=True,
autoscroll=True, auto_refresh=True)]]
</code></pre>
<p>I checked similar questions and <a href="https://docs.pysimplegui.com/en/latest/call_reference/tkinter/elements/frame/" rel="nofollow noreferrer">this documentation</a> about the Frame parameters: I could not find any to reroute the standard output into the Frame element.</p>
<p>My current <code>Frame</code> element, "as is" without rerouting yet:</p>
<pre><code># output_layout = [
# sg.Frame(title= frameTitles[12],font=FrameFontCol1, layout=[[sg.Column(cb12,
# size= (0 , size[1]) ) ]], expand_x=True)]
# -> I tried to apply the stdout re-routing args here, but it does not "fly" obv.
</code></pre>
<h1></h1>
<p>Maybe this is not possible or, would there be kind of a clean workaround or embedding/nested solutions?</p>
|
<python><user-interface><pysimplegui>
|
2024-09-26 14:50:44
| 2
| 9,395
|
hornetbzz
|
79,027,767
| 5,868,293
|
Time series classification, using lagged data, and exogenous time series variables for exploratory features
|
<p>I have the following pandas dataframe</p>
<pre><code>import pandas as pd
pd.DataFrame({
'region': [1,1,1,1,2,2,2,2,3,3,3,3],
'week': [1,2,3,4,1,2,3,4,1,2,3,4],
'rain': [1,1,0,1,1,1,1,1,1,0,0,0],
'clouds': [1,1,0,0,0,0,1,0,1,0,0,0]
})
region week rain clouds
0 1 1 1 1
1 1 2 1 1
2 1 3 0 0
3 1 4 1 0
4 2 1 1 0
5 2 2 1 0
6 2 3 1 1
7 2 4 1 0
8 3 1 1 1
9 3 2 0 0
10 3 3 0 0
11 3 4 0 0
</code></pre>
<p>which indicates, for a specific region in a specific week, whether it rained, together with whether there were clouds in that region that week.</p>
<p>In my toy example, I would like to be able to predict if it will rain at week <code>n</code>, taking into account if it rained the previous weeks and also if there were clouds the previous weeks.</p>
<p>How could I achieve that ? What would be an appropriate model for this type of problems ?</p>
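<p>One common starting point is to turn this into ordinary tabular classification by building lagged features per region (the lag depth of 2 here is an arbitrary choice), after which any classifier such as logistic regression or gradient boosting can be fit:</p>

```python
import pandas as pd

df = pd.DataFrame({
    'region': [1,1,1,1,2,2,2,2,3,3,3,3],
    'week':   [1,2,3,4,1,2,3,4,1,2,3,4],
    'rain':   [1,1,0,1,1,1,1,1,1,0,0,0],
    'clouds': [1,1,0,0,0,0,1,0,1,0,0,0],
})

feats = df.sort_values(['region', 'week']).copy()
for lag in (1, 2):  # how many weeks of history to use; tune as needed
    for col in ('rain', 'clouds'):
        # shift within each region so history never leaks across regions
        feats[f'{col}_lag{lag}'] = feats.groupby('region')[col].shift(lag)
feats = feats.dropna()  # the first max-lag weeks per region have no full history

X = feats.drop(columns=['rain'])  # predictors, including the lagged columns
y = feats['rain']                 # target: rain this week
```

<p>With features shaped like this, scikit-learn's <code>LogisticRegression</code> or a tree ensemble is a reasonable first model; time-aware cross-validation (train on earlier weeks, test on later ones) keeps the evaluation honest.</p>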
|
<python><time-series>
|
2024-09-26 14:17:48
| 0
| 4,512
|
quant
|
79,027,616
| 16,869,946
|
Pandas groupby transform mean with date before current row for huge huge dataframe
|
<p>I have a Pandas dataframe that looks like</p>
<pre><code>df = pd.DataFrame([['John', '1/1/2017','10'],
['John', '2/2/2017','15'],
['John', '2/2/2017','20'],
['John', '3/3/2017','30'],
['Sue', '1/1/2017','10'],
['Sue', '2/2/2017','15'],
['Sue', '3/2/2017','20'],
['Sue', '3/3/2017','7'],
['Sue', '4/4/2017','20']],
columns=['Customer', 'Deposit_Date','DPD'])
</code></pre>
<p>And I want to create a new column called <code>PreviousMean</code>. This column is the year-to-date average of DPD for that customer, i.e. it includes all DPDs up to, but not including, rows that match the current deposit date. If no previous records exist then it's null or 0.</p>
<p>So the desired outcome looks like</p>
<pre><code> Customer Deposit_Date DPD PreviousMean
0 John 2017-01-01 10 NaN
1 John 2017-02-02 15 10.0
2 John 2017-02-02 20 10.0
3 John 2017-03-03 30 15.0
4 Sue 2017-01-01 10 NaN
5 Sue 2017-02-02 15 10.0
6 Sue 2017-03-02 20 12.5
7 Sue 2017-03-03 7 15.0
8 Sue 2017-04-04 20 13.0
</code></pre>
<p>And after some researching on the site and internet here is one solution:</p>
<pre><code>df['PreviousMean'] = df.apply(
lambda x: df[(df.Customer == x.Customer) & (df.Deposit_Date < x.Deposit_Date)].DPD.mean(),
axis=1)
</code></pre>
<p>And it works fine. However, my actual dataframe is much larger (~1 million rows) and the above code is very slow. Is there any better way to do it? Thanks</p>
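<p>A vectorised sketch of the same logic (assuming the frame is sorted as below): compute an expanding mean that excludes the current row via cumsum/cumcount, then broadcast the first value within each (Customer, Deposit_Date) group so same-date rows share the strictly-earlier mean:</p>

```python
import pandas as pd

df = pd.DataFrame([['John', '1/1/2017', '10'],
                   ['John', '2/2/2017', '15'],
                   ['John', '2/2/2017', '20'],
                   ['John', '3/3/2017', '30'],
                   ['Sue',  '1/1/2017', '10'],
                   ['Sue',  '2/2/2017', '15'],
                   ['Sue',  '3/2/2017', '20'],
                   ['Sue',  '3/3/2017', '7'],
                   ['Sue',  '4/4/2017', '20']],
                  columns=['Customer', 'Deposit_Date', 'DPD'])
df['Deposit_Date'] = pd.to_datetime(df['Deposit_Date'])
df['DPD'] = df['DPD'].astype(float)
df = df.sort_values(['Customer', 'Deposit_Date'])

g = df.groupby('Customer')['DPD']
# mean of all rows before the current one (cumsum minus current / count of earlier rows)
row_prev_mean = (g.cumsum() - df['DPD']) / g.cumcount()
# same-date rows must all use the value as of the first row of that date
df['PreviousMean'] = row_prev_mean.groupby(
    [df['Customer'], df['Deposit_Date']]
).transform('first')
```

<p>Everything is a single pass over grouped cumulatives, so it scales to millions of rows without the per-row filtering of the <code>apply</code> version.</p>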
|
<python><pandas><dataframe><group-by>
|
2024-09-26 13:41:17
| 3
| 592
|
Ishigami
|
79,027,535
| 2,774,589
|
Rotate view for cartopy
|
<p>I am creating the following map.</p>
<pre class="lang-py prettyprint-override"><code>import matplotlib.pyplot as plt
import cartopy.crs as ccrs
import cartopy.feature as cfeature
# Create a plot using the PlateCarree projection
fig, ax = plt.subplots(figsize=(10, 6), subplot_kw={'projection': ccrs.PlateCarree()})
# Set the extent for the Amazon region in regular latitude/longitude coordinates
ax.set_extent([-74, -32, -5, 15], crs=ccrs.PlateCarree())
# Add features to the map
ax.coastlines()
ax.add_feature(cfeature.BORDERS, linestyle=':')
# Add gridlines with labels
gl = ax.gridlines(draw_labels=True, linestyle='--', color='gray')
gl.top_labels = False # Disable top labels
gl.right_labels = False # Disable right-side labels
gl.xlabel_style = {'size': 10, 'color': 'black'}
gl.ylabel_style = {'size': 10, 'color': 'black'}
# Display the map
plt.show()
</code></pre>
<p><a href="https://i.sstatic.net/65gv4iBM.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/65gv4iBM.png" alt="enter image description here" /></a></p>
<p>but I want to rotate the view to align the x axis with the coast orientation by giving the coordinates of this box.</p>
<p><a href="https://i.sstatic.net/kvxqOYb8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/kvxqOYb8.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><cartopy>
|
2024-09-26 13:24:19
| 1
| 1,660
|
iury simoes-sousa
|
79,027,337
| 14,202,481
|
Polars not using all available CPU
|
<p>I'm running code that performs vectorized calculations using the lazy Polars approach. The code loops approximately 4,000 times and takes about 4 seconds to execute. However, when I check my CPU usage, I see that only 25% of the CPU is being utilized. I would like to increase CPU usage to 80%-100%.</p>
<p>I'm running this on my Windows computer. Below is an example of a condition class where the evaluate function is executed each time in the loop. How can I modify the code or configuration to ensure that more CPU resources are used?</p>
<pre><code>from src.conditions.base_condition import BaseCondition
import polars as pl
from src.utils.types import TimeFrameDataFrames
from src.enums.types_enums import TimeFrame
short_avg_period_options = [20, 30, 40, 50]
long_avg_period_options = [100, 140, 160, 200]
class MovingAverageCondition(BaseCondition):
def __init__(self, short_avg_period: int = 20, long_avg_period: int = 200, bullish: bool = True, timeframe: TimeFrame = TimeFrame.M5):
self.short_avg_period = short_avg_period
self.long_avg_period = long_avg_period
self.bullish = bullish
self.timeframe = timeframe
@property
def name(self) -> str:
return 'MovingAverageCondition'
def get_parameters(self) -> dict:
return {
'short_avg_period': self.short_avg_period,
'long_avg_period': self.long_avg_period,
'bullish': self.bullish,
'timeframe': self.timeframe
}
def required_timeframes(self) -> list:
return [self.timeframe]
def evaluate(self, dfs: TimeFrameDataFrames, **kwargs) -> pl.Series:
"""
Evaluates the condition using Polars DataFrame from the specified timeframe.
Returns a Polars Series indicating where the condition is met.
"""
timeframe_df = dfs.get(self.timeframe)
if timeframe_df is None:
raise ValueError(f"{self.timeframe.value} data is required for MovingAverageCondition.")
# Use lazy evaluation for performance optimization
timeframe_df_lazy = timeframe_df.lazy()
# Calculate short and long moving averages (lazy)
sma_short = pl.col('Close').rolling_mean(window_size=self.short_avg_period)
sma_long = pl.col('Close').rolling_mean(window_size=self.long_avg_period)
# Bullish or bearish condition check (lazy)
if self.bullish:
condition = (pl.col('Close') > sma_short) & (sma_short > sma_long)
else:
condition = (pl.col('Close') < sma_short) & (sma_short < sma_long)
# Collect the result and fill any null values
result_df = timeframe_df_lazy.select(condition.alias("condition")).collect()
return result_df["condition"].fill_null(False)
@staticmethod
def get_instances():
instances = []
for short in short_avg_period_options:
for long in long_avg_period_options:
instances.append(MovingAverageCondition(short_avg_period=short, long_avg_period=long))
return instances
import os
import psutil
import polars as pl
import time
class TrackPerformance:
@staticmethod
def track_cpu_usage(func):
# Measure CPU usage during Polars execution
cpu_usage_before = psutil.cpu_percent(interval=None)
start_time = time.time()
# Execute your Polars operation here (example)
func()
end_time = time.time()
cpu_usage_after = psutil.cpu_percent(interval=None)
print(f"CPU usage before: {cpu_usage_before}%")
print(f"CPU usage after: {cpu_usage_after}%")
print(f"Execution time: {end_time - start_time:.4f} seconds")
</code></pre>
|
<python><python-polars>
|
2024-09-26 12:39:06
| 0
| 557
|
mr_robot
|
79,027,200
| 21,049,944
|
How to change a list element by index in a list column
|
<p>I have a column of lists in my polars dataframe. I would like to access and change a value by list index.</p>
<p><strong>Example input</strong></p>
<pre class="lang-py prettyprint-override"><code>df = pl.DataFrame({
"values": [
[10, 20, 30, 40],
[50, 60, 70, 80],
[90, 100, 110, 120],
],
})
</code></pre>
<p><strong>Pseudocode</strong></p>
<pre class="lang-py prettyprint-override"><code>df = df.with_columns(
pl.col("values").list.eval(pl.element(3) = 1).alias("values2")
)
</code></pre>
<p><strong>Expected outcome</strong></p>
<pre><code>df = pl.DataFrame({
"values": [
[10, 20, 30, 1],
[50, 60, 70, 1],
[90, 100, 110, 1],
],
})
</code></pre>
|
<python><list><python-polars>
|
2024-09-26 12:03:50
| 2
| 388
|
Galedon
|
79,027,048
| 4,984,633
|
Do something different on last row of a tuple
|
<p>I'm manually constructing a JSON file, so I want the last element to not have a <code>,</code> separator before the <code>]</code>.</p>
<p>This is my code. The documentation says len() works for tuples, but I can't get it to work.
The <code>data</code> is a tuple constructed from a SQLite fetchall() and it has been validated.</p>
<pre><code>for row in data:
f.write("{\n")
f.write(f"'PUT':'{row[1]}'\n")
if row == len(data): # I also tried len(data)-1 as suggested by posts
f.write("}\n")
else:
f.write("},\n")
</code></pre>
<p>This code constructs the JSON but all elements including the last have the <code>,</code> separator.</p>
<p>What is the best pythonic way to do this?</p>
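<p>For comparison, a minimal sketch of the enumerate-based approach I've seen suggested, with placeholder data standing in for the fetchall() result:</p>

```python
data = [(0, "alpha"), (1, "beta"), (2, "gamma")]  # stand-in for cursor.fetchall()

lines = []
for i, row in enumerate(data):
    # only the last element gets no trailing comma
    sep = "" if i == len(data) - 1 else ","
    lines.append("{'PUT':'%s'}%s" % (row[1], sep))

text = "\n".join(lines)
```

That said, building a list of dicts and serialising it with <code>json.dumps</code> would avoid hand-writing separators entirely.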
|
<python><sqlite><tuples>
|
2024-09-26 11:22:55
| 2
| 525
|
Ben Jones
|
79,027,016
| 2,546,099
|
Disable python package imports depending on available libraries
|
<p>In my python-based project I am using several functions from <code>PySide6</code>. These functions require libraries such as <code>libGl.so</code> and <code>libglib.so</code>. This is not a problem when using the package as a standalone package.</p>
<p>When packaging it into a docker-image using <code>python:3.12</code> or similar base images, I do, however, have to add</p>
<pre><code>RUN apt-get update && apt-get install ffmpeg libsm6 libxext6 libegl-dev -y
</code></pre>
<p>during build-time, to add these packages. As I try to keep my resulting image as small as possible, I would like to avoid this step.</p>
<p>Is there any way of using something similar to (e.g.) preprocessor directives in <code>C++</code> to selectively disable imports depending on the availability of libraries? I am aware that this might also disable other functions, but for the purpose of this question this is not relevant, as I don't use these functions anyway in the volume-context.</p>
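<p>A minimal sketch of the guarded-import pattern I have in mind, using a deliberately missing module name as a stand-in for PySide6 (a missing shared library also surfaces as an ImportError when the extension module loads):</p>

```python
# Guarded import: fall back to None when the module (or its native
# libraries) cannot be loaded, instead of failing at import time.
try:
    import definitely_missing_gui_toolkit as gui  # stand-in for PySide6
    HAS_GUI = True
except ImportError:
    gui = None
    HAS_GUI = False

def describe():
    # callers branch on availability instead of crashing on import
    return "gui available" if HAS_GUI else "headless mode"
```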
|
<python><docker>
|
2024-09-26 11:12:43
| 1
| 4,156
|
arc_lupus
|
79,026,988
| 13,858,293
|
np.zeros and np.arange results different dtype
|
<p><a href="https://i.sstatic.net/lHbWqx9F.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/lHbWqx9F.png" alt="enter image description here" /></a>
<a href="https://i.sstatic.net/XIllkPlc.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/XIllkPlc.png" alt=", " /></a></p>
<p>Why does a = np.arange(4.) produce a float dtype, while a = np.zeros((4.)) fails with the error "'float' object cannot be interpreted as an integer"?</p>
<p>I understand that the default dtype of np.zeros is float, but why does np.zeros((4.)) expect an integer here?</p>
<p>And a = np.zeros((4,)) works; its dtype is float again.</p>
<p>So the question is: why do both np.arange(4.) and np.zeros((4,)) produce float dtypes, while np.zeros((4.)) does not work?</p>
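<p>For reference, the behaviour can be reproduced without screenshots like this:</p>

```python
import numpy as np

a = np.arange(4.)   # 4. is a *value* bound, so a float64 result is expected
b = np.zeros((4,))  # (4,) is a shape tuple of ints; dtype defaults to float64

# (4.) is NOT a tuple - it is just the float 4.0 - so np.zeros receives
# a float where an integer shape is required
try:
    np.zeros((4.))
    raised = False
except TypeError:
    raised = True
```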
|
<python><numpy>
|
2024-09-26 11:05:33
| 1
| 326
|
Elizabeth S
|
79,026,948
| 2,728,074
|
Using Clarabel as a backend of CVXPY
|
<p>Is this a bug, or is there something fundamental I'm missing with how to use cvxpy?</p>
<p>Consider this code for a least squares optimisation for a 3 variable vector with 3 measurements:</p>
<pre><code>import scipy
import numpy as np
import cvxpy as cvx
import clarabel
A = np.array([[-43.83695965, 0.34990409, -1.32935518],
[-48.86811152, 0.57411847, -1.38219424],
[-49.25621142, 0.75738835, -1.2979702 ]])
b = np.array([ 53897.91898074, -159390.11128713, -62013.59835614])
bounds_arr = np.array([[-1., -1., -1.],
[ 1., 1., 1.]])
#------- Using CLARABEL as cvxpy backend---------------
x = cvx.Variable(A.shape[1])
cost = cvx.sum_squares(A @ x - b)
constraints = []
constraints += [xv >= bnd for (xv, bnd) in zip(x, bounds_arr[0])]
constraints += [xv <= bnd for (xv, bnd) in zip(x, bounds_arr[1])]
constraints += [x[0] + x[1] + x[2] >= -np.sqrt(3)]
# constraints += [cvx.sum(x) >= -np.sqrt(3)] # This form doesn't work either
prob = cvx.Problem(cvx.Minimize(cost), constraints)
prob.solve(solver="CLARABEL", verbose=True) # Says 'PrimalInfeasible'
print("x: ", x.value)
#------- Using CLARABEL directly---------------
P = A.T @ A
P = scipy.sparse.triu(P).tocsc()
q = -b.T @ A
# Note constant term is ignored in this case because it is irrelevant to the solution of the optimisation problem.
Ac = scipy.sparse.vstack([-scipy.sparse.identity(A.shape[1]),
scipy.sparse.identity(A.shape[1]),
np.array([[-1.0, -1.0, -1.0]])]).tocsc()
bc = np.concat([-bounds_arr[0], bounds_arr[1], [np.sqrt(3)]], axis=0)
cones = [clarabel.NonnegativeConeT(A.shape[1]), clarabel.NonnegativeConeT(A.shape[1]), clarabel.NonnegativeConeT(1)]
settings = clarabel.DefaultSettings()
solver = clarabel.DefaultSolver(P, q, Ac, bc, cones, settings) #, A, b, cones, settings
solution = solver.solve() # WORKS
print("x: ", solution.x)
</code></pre>
<p>Most likely, I'm missing something fundamental with cvxpy - that is, why don't both approaches work?</p>
<p>If not, is this a bug in cvxpy?</p>
|
<python><optimization><mathematical-optimization><least-squares><cvxpy>
|
2024-09-26 10:58:13
| 0
| 469
|
Charlie
|
79,026,867
| 9,998,989
|
Highlight certain points in Plotly through dropdown bar
|
<p>I am trying to make an interactive PCA plot with Plotly. I am having trouble highlighting a certain marker that a user wants to see in the plot.</p>
<p>Instead of highlighting a marker, there is a square around it.</p>
<p>My attempt at the code:</p>
<pre><code>from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA  # missing import: PCA is used below
import pandas as pd
import plotly.express as px
X, y = make_blobs(n_samples=30, centers=3, n_features=2,
random_state=0)
ff = PCA(n_components= 2)
clusters = pd.DataFrame(data = ff.fit_transform(X), columns = ['PC1', 'PC2'])
clusters['target'] = y
id = [0, 4, 7]
updatemenus = [dict(
    buttons=list(
        dict(
            label=idd,
            method='relayout',
            args=['shapes', [dict(
                markers=dict(color='Red', size=120),
                type="markers",
                xref='x', yref='y',
                x0=clusters.loc[idd, 'PC1'],
                y0=clusters.loc[idd, 'PC2'],
            )]],
        )
        for idd in id
    )
)]
fig = px.scatter(clusters, x = 'PC1', y = 'PC2', color = 'target', hover_data = ['target'])
fig.update_layout(updatemenus = updatemenus)
fig.show()
</code></pre>
<p><a href="https://i.sstatic.net/WzZukZwX.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/WzZukZwX.png" alt="Example" /></a></p>
<p>What am I doing wrong :)</p>
|
<python><plotly>
|
2024-09-26 10:42:58
| 1
| 752
|
Noob Programmer
|
79,026,693
| 258,414
|
Numpy Error : Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly
|
<p>I'm trying to find a similar vector with spaCy and NumPy. I found the code at the following URL:
<a href="https://stackoverflow.com/questions/54717449/mapping-word-vector-to-the-most-similar-closest-word-using-spacy">Mapping word vector to the most similar/closest word using spaCy</a></p>
<p>But I'm getting a type error:</p>
<pre><code>import numpy as np
your_word = "country"
ms = nlp.vocab.vectors.most_similar(
np.asarray([nlp.vocab.vectors[nlp.vocab.strings[your_word]]]),
n=10, )
words = [nlp.vocab.strings[w] for w in ms[0][0]]
distances = ms[2]
print(words)
</code></pre>
<p>error :</p>
<pre><code>---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
Cell In[139], line 6
1 import numpy as np
3 your_word = "country"
5 ms = nlp.vocab.vectors.most_similar(
----> 6 np.asarray([nlp.vocab.vectors[nlp.vocab.strings[your_word]]]),
7 n=10,
8 )
10 words = [nlp.vocab.strings[w] for w in ms[0][0]]
11 distances = ms[2]
File cupy/_core/core.pyx:1475, in cupy._core.core._ndarray_base.__array__()
TypeError: Implicit conversion to a NumPy array is not allowed. Please use `.get()` to construct a NumPy array explicitly.
</code></pre>
<p>I'm using a GPU; how can I fix this?</p>
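<p>A minimal sketch of the conversion helper I am considering: cupy arrays expose a <code>.get()</code> method that copies device data to host memory, while plain NumPy arrays do not (demonstrated here with NumPy only, since no GPU is assumed):</p>

```python
import numpy as np

def to_numpy(arr):
    # cupy.ndarray has .get() (device -> host copy); numpy arrays pass through
    return arr.get() if hasattr(arr, "get") else np.asarray(arr)

host = to_numpy(np.array([1.0, 2.0, 3.0]))
```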
|
<python><numpy><gpu><spacy>
|
2024-09-26 10:03:53
| 1
| 857
|
onder
|
79,026,491
| 2,685,189
|
generic gdb python helpers to traverse c++ STL container and invoke callback for the elements
|
<p>I want to debug c++ application core and gcore files.<br>
This includes creating reports for relevant data in data repositories of the binary.<br>
Those repositories are based on STL containers (std::map<>, std::set<>, std::vector<>, ...)<br>
For this I use my own "generic" python helpers to automatically traverse STL containers, invoke callback function to handle the data, and create data reports.<br></p>
<p>Here is an example:<br></p>
<p>Test c++ program:</p>
<pre><code>#include <string>
#include <map>
#include <iostream>
struct DemoStruct
{
DemoStruct(int arg1, double arg2, std::string arg3, std::string arg4, char arg5, double arg6)
: wantSeeThis_1(arg1),
wantIgnoreThis_1(arg2),
wantSeeThis_2(arg3),
wantIgnoreThis_2(arg4),
wantIgnoreThis_3(arg5),
wantSeeThis_3(arg6)
{ };
~DemoStruct() = default;
int wantSeeThis_1;
double wantIgnoreThis_1;
std::string wantSeeThis_2;
std::string wantIgnoreThis_2;
char wantIgnoreThis_3;
double wantSeeThis_3;
};
std::map<int, DemoStruct> staticData = {
{ 1, {1, 2.0, "hello", "world", 'a', 3.0 } },
{ 2, {10, 3.2, "hi", "world", 'b', 0.3 } },
{ 3, {50, 4.7, "salute", "world", 'c', 11.5 } },
};
int main(int, char**)
{
// set breakpoint here and inspect data
return 0;
}
</code></pre>
<p>compile it with:</p>
<pre><code>> g++ -std=c++11 -o main -g3 -O0 main.cpp
</code></pre>
<p>Use the following two Python scripts for debugging:</p>
<p>File #1:</p>
<pre><code>> cat stlhelper.py
# -*- coding: utf-8 -*-
import gdb
import sys
import re
import itertools
import six
#
# base class for PrintGenricXY
#
class PrintGenericCommon (gdb.Function):
"""print generic common"""
def __init__ (self):
super (PrintGenericCommon, self).__init__ ("PrintGenericList")
#
# data is hosted by the subject and we don't know their representation
# subject.ElemItemString() is responsible to format properly (might be complex data)
#
def PrintResult(self, subject):
# calculate the max sizes per element of all lines for alignment
sizes = subject.sizesDef
for x in subject.result:
for key, value in six.iteritems(sizes):
valStr = subject.ElemItemString(x[key])
if len(valStr) > value:
sizes[key] = len(valStr)
totalSize = 0
for key, value in six.iteritems(sizes):
sizes[key] += 2
totalSize += sizes[key]
print("-" * totalSize)
for i in range(len(subject.result)):
myStr = ''
for j in range(len(subject.elements)):
setSize=sizes[subject.elements[j]]
if (j == len(subject.elements) - 1):
setSize=0 # don't auto-align the last element, which often can have some large elements in the list
myStr += "%-*s" % (setSize, subject.ElemItemString(subject.result[i][subject.elements[j]]))
print (myStr)
# print optional __VERBOSE data in a next line
if '__VERBOSE' in subject.result[i]:
str = subject.ElemItemString(subject.result[i]['__VERBOSE'])
lines = str.splitlines()
for line in lines:
print ('\t' + line)
if len(str) > 0 and str[-1] == '\n':
print('')
if i == 0:
print ("-" * totalSize)
#
# generic printer for std::map<> and std::set<>
#
class PrintGenericMapOrSet (PrintGenericCommon):
"""print generic map/set"""
regex = re.compile('\$count')
maxNoResults = -1
def __init__ (self):
super (PrintGenericCommon, self).__init__ ("PrintGenericMapOrSet")
def setMaxNoResults(self, maxNoResults):
self.maxNoResults = maxNoResults
def invokeCore(self, headlines, map, valueType, isConst, isPointer, valueHandler):
collectedLines = []
map = map['_M_t']['_M_impl']
count = map['_M_node_count']
for x in headlines:
collectedLines.append(self.regex.sub(str(count), x))
node = map['_M_header']['_M_left']
valuetype = gdb.lookup_type (valueType)
if isConst:
valuetype = valuetype.const()
if isPointer:
valuetype = valuetype.pointer()
nodetype = gdb.lookup_type('std::_Rb_tree_node < %s >' % valuetype)
nodetype = nodetype.pointer()
i = 0
while i < count:
collectedLines.append(valueHandler(i, node.cast (nodetype).dereference()['_M_value_field'], node))
if node.dereference()['_M_right']:
node = node.dereference()['_M_right']
while node.dereference()['_M_left']:
node = node.dereference()['_M_left']
else:
parent = node.dereference()['_M_parent']
while node == parent.dereference()['_M_right']:
node = parent
parent = parent.dereference()['_M_parent']
if node.dereference()['_M_right'] != parent:
node = parent
i += 1
if self.maxNoResults > -1 and i >= self.maxNoResults:
break;
return "\n".join(collectedLines)
def invoke (self, headlines, mapLoc, valueType, valueHandler, typeWrapper):
map = gdb.parse_and_eval (mapLoc)
return self.invokeCore(headlines, map, valueType, valueHandler, typeWrapper);
#
# generic printer for std::map<>
#
class PrintGenericMap (gdb.Function):
"""print generic map"""
handler = PrintGenericMapOrSet()
def __init__ (self):
super (PrintGenericMap, self).__init__ ("PrintGenericMap")
def setMaxNoResults(self, maxNoResults):
return self.handler.setMaxNoResults(maxNoResults)
def invokeCore(self, headlines, map, valueType, valueHandler):
return self.handler.invokeCore(headlines, map, 'std::pair' + valueType, False, False, valueHandler)
def invoke (self, headlines, mapLoc, valueType, valueHandler, maxNoResults):
return self.handler.invoke(headlines, mapLoc, 'std::pair' + valueType, False, False, valueHandler);
def PrintResult(self, subject):
return self.handler.PrintResult(subject)
#
# generic printer for std::set<>
#
class PrintGenericSet (gdb.Function):
"""print generic set"""
handler = PrintGenericMapOrSet()
def __init__ (self):
super (PrintGenericSet, self).__init__ ("PrintGenericSet")
def setMaxNoResults(self, maxNoResults):
return self.handler.setMaxNoResults(maxNoResults)
def invokeCore(self, headlines, map, valueType, isConst, isPointer, valueHandler):
return self.handler.invokeCore(headlines, map, valueType, isConst, isPointer, valueHandler)
def invoke (self, headlines, mapLoc, valueType, isConst, isPointer, valueHandler, maxNoResults):
return self.handler.invoke(headlines, mapLoc, valueType, isConst, isPointer, valueHandler);
def PrintResult(self, subject):
return self.handler.PrintResult(subject)
</code></pre>
<p>File #2:</p>
<pre><code>> cat main.py
# -*- coding: utf-8 -*-
import gdb, platform, six, copy
from stlhelper import PrintGenericMap, PrintGenericSet
class PrintStaticData (gdb.Command):
"""Print STL data demo.\n"""
printer = None
result = []
headline = {}
sizesDef = {}
elements = [
'key',
'see_1',
'see_2',
'see_3',
]
errorEntry = {
'key' : '?',
'see_1' : '?',
'see_2' : '?',
'see_3' : '?',
}
def __init__ (self):
super (PrintStaticData, self).__init__ ("PrintStaticData", gdb.COMMAND_USER)
self.headline = {
'key' : 'Key',
'see_1' : 'Value #1',
'see_2' : 'Value #2',
'see_3' : 'Value #3',
}
def ReInit(self):
if(platform.python_version().startswith("2")):
for key, value in six.iteritems(self.headline):
self.sizesDef[key] = 0
else:
for key in self.headline.keys():
self.sizesDef[key] = 0
self.result = []
self.result.append(self.headline)
def ElemItemString(self, elem):
return elem
def Traverse(self, index, pair, node = None):
first = pair['first']
second = pair['second']
newEntry = copy.deepcopy(self.errorEntry)
newEntry['key'] = str(first)
newEntry['see_1'] = str(second['wantSeeThis_1'])
newEntry['see_2'] = str(second['wantSeeThis_2'])
newEntry['see_3'] = str(second['wantSeeThis_3'])
self.result.append(newEntry)
return ''
def invoke (self, arg, from_tty):
self.ReInit()
self.printer = PrintGenericMap()
data = gdb.parse_and_eval("staticData")
result = self.printer.invokeCore(
[],
data,
'<int const, DemoStruct>',
self.Traverse)
self.printer.PrintResult(self)
PrintStaticData()
</code></pre>
<p>.gdbinit file (adjust for your own paths):</p>
<pre><code>cat $HOME/.gdbinit
set print object
set print pretty
set print static off
set pagination off
set auto-load safe-path /
python
sys.path.insert(0, "/home/me/TEST")
import main
end
set history filename ~/.gdb_history
set history save
</code></pre>
<p>Execute the demo in 'gdb' with e.g. gcc 4.8.5</p>
<pre><code>> gdb main
(gdb) break 35
Breakpoint 1 at 0x400c18: file main.cpp, line 35.
(gdb) r
Starting program: /home/me/TEST/main
Breakpoint 1, main () at main.cpp:35
35 return 0;
(gdb) PrintStaticData
----------------------------------------------
Key Value #1 Value #2 Value #3
----------------------------------------------
1 1 "hello" 3
2 10 "hi" 0.29999999999999999
3 50 "salute" 11.5
</code></pre>
<p>So far so good.<br></p>
<p>The problem now is that it does not work for gcc 9.4.0 anymore. :-( <br>
The reason is that the internal data structure of the STL type std::map<> changed.<br>
My python helpers are not really "generic" - they depend on the concrete STL implementation.<br>
Using gcc 9.4.0 I hit</p>
<pre><code>(gdb) PrintStaticData
Python Exception <class 'gdb.error'> There is no member or method named _M_value_field.:
Error occurred in Python: There is no member or method named _M_value_field.
</code></pre>
<p>There is no _M_value_field anymore - which I use here in stlhelper.py:</p>
<pre><code> collectedLines.append(valueHandler(i, node.cast (nodetype).dereference()['_M_value_field'], node))
</code></pre>
<p>So I <em>could</em> try to update the python code to also handle the new STL implementation.<br>
But I think (hope) that there is already some generic tool available in the gcc python STL toolbox to do the same:<br>
Traverse any STL container - without needing to know about its implementation - and invoke a callback function to handle each element stored in the container.<br></p>
<p>Is something like this available from the gcc python helpers? <br>
Maybe a link to documentation / a howto? <br></p>
|
<python><c++><stl><gdb>
|
2024-09-26 09:20:43
| 1
| 363
|
Frank Bergemann
|
79,026,315
| 5,016,028
|
Using Langchain ChatOpenAI functionality with LiteLLM
|
<p>I have initially written two functions to provide me with a model and function calling directly from OpenAI. Here is the complete code:</p>
<pre><code>from langchain_openai import ChatOpenAI
def get_open_ai(temperature=0, model='gpt-4'):
llm = ChatOpenAI(
model=model,
temperature = temperature,
)
return llm
def get_open_ai_json(temperature=0, model='gpt-4'):
llm = ChatOpenAI(
model=model,
temperature = temperature,
model_kwargs={"response_format": {"type": "json_object"}},
)
return llm
</code></pre>
<p>The problem is I now need to use LiteLLM as a proxy, which I am forwarding to a local port, for example <code>localhost:3005</code>. I know I should have an option somewhere to put an <code>openai_base = "localhost:3005"</code> or something similar, to make my code hit the LiteLLM gateway instead of OpenAI directly, but this does not work in my above code. I also looked at the <a href="https://docs.litellm.ai/docs/proxy/user_keys" rel="nofollow noreferrer">LiteLLM documentation</a>, which gives an example with the OpenAI library, not with the LangChain wrapper. Could anyone please tell me what I must change in my code to make it work with my own LiteLLM server?</p>
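<p>From what I have read so far, the OpenAI-compatible clients accept a <code>base_url</code> (a.k.a. <code>openai_api_base</code>) override. A sketch of the settings I believe I need - the port and key are placeholders, and I have not confirmed this against my proxy:</p>

```python
# Hypothetical settings for pointing the LangChain ChatOpenAI wrapper at a
# local LiteLLM proxy; base_url and api_key values are placeholders.
litellm_settings = dict(
    model="gpt-4",
    temperature=0,
    base_url="http://localhost:3005",  # LiteLLM proxy instead of api.openai.com
    api_key="sk-anything",             # LiteLLM accepts any key unless virtual keys are configured
)
# intended usage (not run here): llm = ChatOpenAI(**litellm_settings)
```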
|
<python><machine-learning><openai-api><large-language-model><py-langchain>
|
2024-09-26 08:44:39
| 1
| 4,373
|
Qubix
|
79,026,305
| 1,406,168
|
Managed Identity on Databricks - DefaultAzureCredential failed to retrieve a token from the included credentials
|
<p>I am trying to send a message to a service bus in azure.</p>
<p>But I get following error:</p>
<pre><code> ServiceBusError: Handler failed: DefaultAzureCredential failed to
retrieve a token from the included credentials.
</code></pre>
<p>This is the line that fails:</p>
<pre><code>credential = DefaultAzureCredential()
</code></pre>
<p>Normally, I would use az login, but I'm not sure how to do this in Databricks.</p>
<pre><code>import nest_asyncio
import asyncio
from azure.servicebus import ServiceBusMessage
from azure.servicebus.aio import ServiceBusClient
from azure.identity.aio import DefaultAzureCredential
nest_asyncio.apply()
local_user = dbutils.notebook.entry_point.getDbutils().notebook().getContext().userName().get()
print(local_user)
FULLY_QUALIFIED_NAMESPACE = "xxx.servicebus.windows.net"
TOPIC_NAME = "xxoutbound"
credential = DefaultAzureCredential()
token = credential.get_token('xxx')
print(token)
async def send_single_message(sender):
# Create a Service Bus message and send it to the queue
message = ServiceBusMessage("Single Message")
await sender.send_messages(message)
print("Sent a single message")
async def run():
# create a Service Bus client using the credential
async with ServiceBusClient(
fully_qualified_namespace=FULLY_QUALIFIED_NAMESPACE,
credential=credential,
logging_enable=True) as servicebus_client:
# get a Queue Sender object to send messages to the queue
sender = servicebus_client.get_topic_sender(topic_name=TOPIC_NAME)
async with sender:
# send one message
await send_single_message(sender)
# Close credential when no longer needed.
await credential.close()
print("dfsdf")
asyncio.run(run())
print("Done sending messages")
print("-----------------------")
</code></pre>
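<p>One direction I am exploring: <code>DefaultAzureCredential</code> includes an <code>EnvironmentCredential</code> in its chain, which reads a service principal from environment variables, so no <code>az login</code> is needed. A sketch with placeholder values (in practice the secret should come from a secret scope, not plain text):</p>

```python
import os

# Placeholder service-principal values; with these set before creating the
# credential, the EnvironmentCredential link in DefaultAzureCredential's
# chain can authenticate non-interactively.
os.environ["AZURE_TENANT_ID"] = "<tenant-id>"
os.environ["AZURE_CLIENT_ID"] = "<application-id>"
os.environ["AZURE_CLIENT_SECRET"] = "<client-secret>"
```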
|
<python><databricks><azure-databricks><azureservicebus>
|
2024-09-26 08:42:41
| 1
| 5,363
|
Thomas Segato
|
79,026,303
| 5,682,416
|
How to use Python argparse to have lazily loaded subcommands
|
<p>I have a python package with a lot of sub commands and sub-sub commands. It is organized somewhat like this:</p>
<p>main.py</p>
<pre><code>import argparse
from sum import prepare_arg_parser as prepare_sum_parser
from sub import prepare_arg_parser as prepare_sub_parser
parser = argparse.ArgumentParser()
sub_parsers = parser.add_subparsers(dest="command", required=True)
prepare_sum_parser(sub_parsers.add_parser("sum"))
prepare_sub_parser(sub_parsers.add_parser("sub"))
args = parser.parse_args()
args.func(args)
</code></pre>
<p>sum.py</p>
<pre><code>import argparse
def prepare_arg_parser(parser):
parser.set_defaults(func=do_sum)
parser.add_argument("a", type=int)
parser.add_argument("b", type=int)
def do_sum(args):
print(f"{args.a} + {args.b} = {args.a + args.b}")
</code></pre>
<p>and so on.</p>
<p>However, as the list of modules grows, the startup time also grows.
I would like to lazily load each sub command, so that only what's needed for the selected command is loaded.</p>
<p>I tried something like this:</p>
<pre><code>import argparse
def do_sum(args):
from sum import prepare_arg_parser
parser = argparse.ArgumentParser()
prepare_arg_parser(parser)
sum_args = parser.parse_args(args)
sum_args.func(sum_args)
parser = argparse.ArgumentParser()
sub_parsers = parser.add_subparsers(dest="command", required=True)
sub_parsers.add_parser("sum").set_defaults(func=do_sum)
args, rest = parser.parse_known_args()
args.func(rest)
</code></pre>
<p>But then the help and error texts are not right.</p>
<p>Another way would be to split each sub command into a "prepare argparse" module which would know all the args but lazily call the do_sum function, for example. But I don't like it, as it separates the argument declarations from their use. Also it would still load a lot of unnecessary files.</p>
<p>Is there a way to have lazily-loaded sub commands?</p>
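<p>For clarity, the variant I am leaning towards keeps the (cheap) argument declarations eager and defers only the heavy imports into the handler. A runnable sketch:</p>

```python
import argparse

def do_sum(args):
    # heavy imports happen only when this command actually runs
    import math  # stand-in for an expensive module such as pandas
    return args.a + args.b

parser = argparse.ArgumentParser()
sub_parsers = parser.add_subparsers(dest="command", required=True)
p = sub_parsers.add_parser("sum")
p.add_argument("a", type=int)
p.add_argument("b", type=int)
p.set_defaults(func=do_sum)

args = parser.parse_args(["sum", "1", "2"])
result = args.func(args)
```

This keeps help and error texts correct, though the argument declarations for every command are still loaded up front.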
|
<python><lazy-loading><argparse>
|
2024-09-26 08:41:46
| 1
| 1,828
|
Hugal31
|
79,026,269
| 17,580,381
|
An apparent contradiction between pylance and mypy
|
<p>Consider the following class:</p>
<pre><code>class CM:
def __init__(self):
...
def __enter__(self) -> CM:
return self
def __exit__(self, *_):
...
</code></pre>
<p>mypy reports no issues with this. However, pylance highlights the type hint and indicates "CM" is not defined.</p>
<p>Is this the correct way to type hint the return from __enter__? If yes then, presumably, pylance is flawed. Or is mypy wrong?</p>
<p>Is there another way to write this type hint that would satisfy both pylance and mypy?</p>
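<p>For reference, quoting the annotation as a string (a standard forward reference) is one variant I tried that runs cleanly:</p>

```python
class CM:
    def __enter__(self) -> "CM":  # string forward reference to the enclosing class
        return self

    def __exit__(self, *_) -> None:
        ...

with CM() as cm:
    ok = isinstance(cm, CM)
```

Adding <code>from __future__ import annotations</code> at the top of the module, or returning <code>typing.Self</code> on Python 3.11+, are alternatives I have seen suggested.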
|
<python>
|
2024-09-26 08:34:20
| 1
| 28,997
|
Ramrab
|
79,026,175
| 1,443,630
|
autoreload python without master in uwsgi + flask
|
<p>I have a Flask server, and for several reasons I need to set <code>master = false</code> in my uwsgi configuration. But due to this, I'm not able to auto reload on file changes anymore.</p>
<p>This is my uwsgi ini</p>
<pre class="lang-ini prettyprint-override"><code>[uwsgi]
master = false
enable-threads = true
wsgi-file = /opt/merlin/main.py
callable = app
protocol = http
http-socket = localhost:5000
daemonize = /var/log/merlin/merlin-app.log
env = FLASK_ENV=development
py-autoreload = 1
</code></pre>
<p>The reason that I need to use <code>master = false</code> is that I'm importing <code>pandas</code> and <code>google.cloud.bigquery</code>, and this causes the errors:</p>
<pre><code>uWSGI listen queue of socket "localhost:5000" (fd: 3) full !!! (2871782829/10922)
</code></pre>
<p>I'm not exactly sure about the issue. Exploring further, I found that it's probably related to PIL, which causes it to not run with master = true.</p>
<p>So, is there any way to run python auto reload in uwsgi with master = false?</p>
|
<python><flask><uwsgi>
|
2024-09-26 08:09:53
| 1
| 2,075
|
Mahesh Bansod
|
79,026,122
| 9,209,203
|
how to run pySpark
|
<p>I am new to Python and trying to run the <a href="https://github.com/krishnaik06/Pyspark-With-Python/blob/main/Tutorial%202-%20PySpark%20DataFrames-%20Part%201.ipynb" rel="nofollow noreferrer">code</a> below in VS. But I keep getting <code>SyntaxError: invalid syntax</code>. How can I get around this?</p>
<pre><code>from pyspark.sql import SparkSession
spark=SparkSession.builder.appName('Dataframe').getOrCreate()
spark
</code></pre>
|
<python><pyspark>
|
2024-09-26 07:58:18
| 0
| 3,031
|
symkly
|
79,025,937
| 565,635
|
Why does converting an old datetime to EST on Windows offset time by an additional ~18 minutes?
|
<p>Consider the following:</p>
<pre class="lang-py prettyprint-override"><code>from datetime import datetime, timezone
from zoneinfo import ZoneInfo
dt = datetime(1677, 9, 22, 5, 0, tzinfo=timezone.utc)
print(dt.astimezone(ZoneInfo("EST")))
</code></pre>
<p>On my MacOS machine this prints <code>1677-09-22 00:00:00-05:00</code> as expected. On my Windows 11 machine it prints <code>1677-09-21 23:41:52-05:18:08</code>.</p>
<p>While I acknowledge that a specific timezone like <code>EST</code> doesn't really make much sense for a datetime that old, I'm still very puzzled as to why the offset has increased by ~18 minutes, and only on Windows.</p>
<p>The exact same datetime in 2024 converts as expected on both platforms.</p>
<p>EDIT: I'm perfectly aware that the systems can have different timezone databases. I'd like to know what caused specifically a non-integer hour offset of ~18 minutes.</p>
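<p>For reference, with the IANA tz database the "EST" zone is a fixed -05:00 offset with no historical transitions, which matches what I see on macOS:</p>

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo

dt = datetime(1677, 9, 22, 5, 0, tzinfo=timezone.utc)
# with IANA data this offset is exactly -5 hours, even for 17th-century dates
offset = dt.astimezone(ZoneInfo("EST")).utcoffset()
```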
|
<python><datetime><timezone>
|
2024-09-26 07:09:10
| 0
| 119,106
|
orlp
|
79,025,822
| 10,451,021
|
Recursively check each test case project/CR Id field value for consistency
|
<p>In Azure DevOps, I am trying to recursively get the project/CR Id field's value from each user story and all of its child items (test cases at any level), and check each test case's project/CR Id value for consistency - i.e. whether they are all present and the same. I am getting the below error using the SDK.</p>
<pre><code>import os
import operator as op
import re
from azure.devops.connection import Connection
from msrest.authentication import BasicAuthentication
# Replace with your actual PAT and organization URL
personal_access_token = 'PAT'
organization_url = 'https://dev.azure.com/**'
work_item_id = 123 # Replace with your actual work item ID
# Create connection to the organization
credentials = BasicAuthentication('', personal_access_token)
connection = Connection(base_url=organization_url, creds=credentials)
# Get work item tracking client for interacting with work items API.
work_item_tracking_client = connection.clients.get_work_item_tracking_client()
def check_project_cr_ids_consistency(work_item):
project_cr_ids_consistent = True
project_cr_id_value = None
def recursive_check(item):
nonlocal project_cr_ids_consistent, project_cr_id_value
if 'Custom.ProjectorCRId' in item.fields:
current_project_cr_id_value = item.fields['Custom.ProjectorCRId']
if current_project_cr_id_value.strip() == "":
project_cr_ids_consistent = False
elif project_cr_id_value is None:
project_cr_id_value = current_project_cr_id_value
elif project_cr_id_value != current_project_cr_id_value:
project_cr_ids_consistent = False
else:
project_cr_ids_consistent = False
if hasattr(item, 'relations'):
for relation in item.relations:
if relation.rel == 'System.LinkTypes.Hierarchy-Forward':
child_id = int(relation.url.split('/')[-1])
child_work_items = work_item_tracking_client.get_work_items(ids=[child_id])
for child_work_item in child_work_items:
recursive_check(child_work_item)
recursive_check(work_item)
return (project_cr_ids_consistent, project_cr_id_value)
try:
# Fetch details of the specific work item by ID, including relations (child items).
work_item = work_item_tracking_client.get_work_item(id=work_item_id, expand='all')
# Print details of the fetched work item.
title=work_item.fields['System.Title']
print(f"Title: {title}")
consistent , cr_or_project_value= check_project_cr_ids_consistency(work_item)
except Exception as e :
print(f"An error occurred: {e}")
print(f"Project/CR IDs Consistent: {consistent}")
if not consistent :
print("Inconsistent Project/CR IDs found among children.")
else :
print(f"Consistent Project/CR ID Value: {cr_or_project_value}")
pattern=re.compile(r'\b' + re.escape(cr_or_project_value) + r'\b')
match=pattern.search(title)
print(bool(match))
</code></pre>
<p><strong>Error:</strong></p>
<pre><code>An error occurred: 'NoneType' object is not iterable
Exception has occurred: NameError
name 'consistent' is not defined
File "C:\Users\Desktop\CRAutomation\projectid.py", line 67, in <module>
print(f"Project/CR IDs Consistent: {consistent}")
NameError: name 'consistent' is not defined
</code></pre>
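<p>As far as I can tell, the NameError is a secondary symptom: the real exception occurs inside the try block before <code>consistent</code> is ever assigned, and the later print then references an undefined name. A minimal sketch of the defensive pattern, with a stub standing in for the real consistency check:</p>

```python
# Give the names defaults up front so a failure inside the try block can
# never leave them undefined; check_stub stands in for the real
# check_project_cr_ids_consistency(work_item) call.
def check_stub():
    return True, "CR-123"  # hypothetical (consistent, value) result

consistent, cr_or_project_value = False, None
try:
    consistent, cr_or_project_value = check_stub()
except Exception as e:
    print(f"An error occurred: {e}")

print(f"Project/CR IDs Consistent: {consistent}")
```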
<p>Complete Error:-</p>
<pre><code>PS C:\Users\Desktop\CRAutomation> c:; cd 'c:\Users\Desktop\repo\CR_Devops_Automate'; & 'c:\Python310\python.exe' 'c:\Users\703301396\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher' '55082' '--' 'C:\Users\703301396\Desktop\CRAutomation\projectid.py'
Title: test 123
An error occurred: 'NoneType' object is not iterable
Traceback (most recent call last):
File "c:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\__main__.py", line 39, in <module>
cli.main()
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "C:\Users\Desktop\CRAutomation\projectid.py", line 67, in <module>
print(f"Project/CR IDs Consistent: {consistent}")
NameError: name 'consistent' is not defined
</code></pre>
<p>After removing try except:-</p>
<pre><code>PS C:\Users\Desktop\repo\CR_Devops_Automate> c:; cd 'c:\Users\Desktop\repo\CR_Devops_Automate'; & 'c:\Python310\python.exe' 'c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\adapter/../..\debugpy\launcher' '56267' '--' 'C:\Users\703301396\Desktop\CRAutomation\projectid.py'
Title: test 123
Traceback (most recent call last):
File "c:\Python310\lib\runpy.py", line 196, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\Python310\lib\runpy.py", line 86, in _run_code
exec(code, run_globals)
File "c:\Users\703301396\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\__main__.py", line 39, in <module>
cli.main()
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy/..\debugpy\server\cli.py", line 430, in main
run()
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy/..\debugpy\server\cli.py", line 284, in run_file
runpy.run_path(target, run_name="__main__")
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 321, in run_path
return _run_module_code(code, init_globals, run_name,
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 135, in _run_module_code
_run_code(code, mod_globals, init_globals,
File "c:\Users\.vscode\extensions\ms-python.debugpy-2024.10.0-win32-x64\bundled\libs\debugpy\_vendored\pydevd\_pydevd_bundle\pydevd_runpy.py", line 124, in _run_code
exec(code, run_globals)
File "C:\Users\Desktop\CRAutomation\projectid.py", line 62, in <module>
consistent = cr_or_project_value= check_project_cr_ids_consistency(work_item)
File "C:\Users\Desktop\CRAutomation\projectid.py", line 50, in check_project_cr_ids_consistency
recursive_check(work_item)
File "C:\Users\Desktop\CRAutomation\projectid.py", line 48, in recursive_check
recursive_check(child_work_item)
File "C:\Users\Desktop\CRAutomation\projectid.py", line 42, in recursive_check
for relation in item.relations:
TypeError: 'NoneType' object is not iterable
</code></pre>
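The last traceback frame shows that <code>item.relations</code> is <code>None</code> for leaf work items. A minimal sketch of the usual guard (the <code>WorkItem</code> class here is a hypothetical stand-in for the Azure DevOps object, not the real SDK type):

```python
class WorkItem:
    """Hypothetical stand-in for the Azure DevOps work item object."""
    def __init__(self, relations=None):
        self.relations = relations


def count_child_links(item):
    # item.relations is None when a work item has no links at all,
    # so fall back to an empty list before iterating
    return sum(1 for relation in (item.relations or []))


leaf = WorkItem()                 # a leaf item: relations is None
print(count_child_links(leaf))    # 0 instead of TypeError
```

Applying the same <code>(item.relations or [])</code> fallback inside <code>recursive_check</code> would avoid the <code>'NoneType' object is not iterable</code> error, which in turn is what leaves <code>consistent</code> undefined once the <code>try</code> block fails.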
|
<python><azure-devops>
|
2024-09-26 06:38:34
| 1
| 1,999
|
Salman
|
79,025,728
| 1,522,776
|
Issue with publishing python function as ontology function
|
<p>Now that Palantir Workshop supports Python repositories alongside TypeScript repositories, I'm trying to create a Python function that makes an API call and returns a string value. The code works as expected in the live preview, but when I deploy it as an ontology function and run it, it throws the error below.</p>
<pre><code>@function
def getProjectRID_Created_for_palantir_response(projectName: String) -> String:
url = "https://domainName/compass/api/search/projects"
headers = {
'Content-type': 'application/json',
'Authorization': 'Bearer XSSSS'
}
payload1 = {"pageSize":400,"query":"",}
response1 = requests.post(url, headers=headers, json=payload1)
if response1.status_code == 200:
projects = response1.json()["values"]
projects = json.dumps(projects)
projects = json.loads(projects)
for project in projects:
if project['resource']['name'].lower() == projectName.lower():
return project['resource']['rid']
return "project not found"
else:
        return f"failed to retrieve projects: {response1.status_code}"
</code></pre>
<p>below is the error what I get</p>
<pre><code>ConnectionError: HTTPSConnectionPool(host='domainName', port=443): Max retries exceeded with url: /compass/api/search/projects (Caused by NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x7fdff4543410>: Failed to resolve 'domainName' ([Errno -2] Name or service not known)")).
Error Parameters: {}
raise NameResolutionError(self.host, self, e) from e
urllib3.exceptions.NameResolutionError: <urllib3.connection.HTTPSConnection object at 0x7fdff4543410>: Failed to resolve 'domainName' ([Errno -2] Name or service not known)
</code></pre>
<p>I believe we have to create an egress policy and use it in the function; can anyone help me with how to use the egress policy tag here?
I've also created the egress policy and imported it into the repository, but I'm not sure how to use it in the function.</p>
|
<python><palantir-foundry><palantir-foundry-api><palantir-foundry-security>
|
2024-09-26 06:05:31
| 0
| 301
|
Arvind
|
79,025,626
| 3,404,377
|
In Sphinx, how can I generate a page with information about all items in a domain?
|
<p>I've written a custom domain in Sphinx. It has directives that register items (which all end up in the <code>data</code> member) then parallel builds get merged together with <code>merge_domaindata</code>.</p>
<p>I want to create an index-like page that lists every item registered in the domain. I want more control than I could get with a regular <code>Index</code> -- in particular, I'd like to generate some custom nodes for each item in my domain's <code>data</code>.</p>
<p>Is there any way to do that? Looking at the <a href="https://www.sphinx-doc.org/en/master/extdev/event_callbacks.html#events" rel="nofollow noreferrer">core events sequence</a>, it looks like I'd have to delay until after all the <code>env-merge-info</code> events finish (because before then, we haven't merged the multiple domain objects together). But I want to be able to use cross-references in my generated nodes, so it should be before the reference resolver post-transform.</p>
<p>Could I maybe register a post-transform to do it? How could I go about that, and what would be the recommended transform priority?</p>
|
<python><documentation><python-sphinx><docutils>
|
2024-09-26 05:27:34
| 1
| 1,131
|
ddulaney
|
79,025,556
| 2,210,825
|
Clustering longitudinal data with labels?
|
<p>I have longitudinal data as follows:</p>
<pre><code>import pandas as pd
# Define the updated data with samples only in 'sample_A' or 'sample_B'
data = {
'gene_id': ['gene_1', 'gene_1', 'gene_1', 'gene_1', 'gene_1',
'gene_1', 'gene_1', 'gene_1', 'gene_1', 'gene_1',
'gene_2', 'gene_2', 'gene_2', 'gene_2', 'gene_2',
'gene_2', 'gene_2', 'gene_2', 'gene_2', 'gene_2',
'gene_3', 'gene_3', 'gene_3', 'gene_3', 'gene_3',
'gene_3', 'gene_3', 'gene_3', 'gene_3', 'gene_3'],
'position': [1, 2, 3, 4, 5,
1, 2, 3, 4, 5,
1, 2, 3, 4, 5,
1, 2, 3, 4, 5,
1, 2, 3, 4, 5,
1, 2, 3, 4, 5],
'value': [5.1, 5.5, 5.7, 6.0, 6.3,
6.3, 6.5, 6.7, 6.8, 5.1,
2.3, 2.5, 2.7, 3.0, 3.1,
3.1, 3.2, 3.3, 3.4, 2.3,
3.7, 3.8, 3.9, 4.0, 4.0,
4.0, 4.1, 4.2, 4.3, 3.7],
'sample': ['sample_A', 'sample_A', 'sample_A', 'sample_A', 'sample_B',
'sample_B', 'sample_B', 'sample_B', 'sample_B', 'sample_A',
'sample_A', 'sample_A', 'sample_A', 'sample_A', 'sample_B',
'sample_B', 'sample_B', 'sample_B', 'sample_B', 'sample_A',
'sample_A', 'sample_A', 'sample_A', 'sample_A', 'sample_B',
'sample_B', 'sample_B', 'sample_B', 'sample_B', 'sample_A']
}
# Create the DataFrame
df = pd.DataFrame(data)
</code></pre>
<p>My goal is to cluster gene value profiles then see how those clusters correspond to samples. So for example here, a profile is defined as follows: take a sample, take a gene_id, now take all (position, value) tuples within the resulting subset.</p>
<p>By clustering here, I am interested in understanding how the shape and amplitudes of the curves plotted by profiles cluster. As a start, a simple KMeans would be fine with me.</p>
<p>After clustering, the idea is to map each profile back to the sample it came from, then plot the cluster space and see how the samples are distributed.</p>
<p>I've seen solutions in R for this, but haven't seen any solutions in python. Any help is appreciated.</p>
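A minimal sketch of that pipeline, assuming a toy frame with the same layout as the example above: pivot each (sample, gene_id) profile into one row of values ordered by position, then run a plain k-means via <code>scipy.cluster.vq.kmeans2</code> (k=2 is an arbitrary choice here):

```python
import pandas as pd
from scipy.cluster.vq import kmeans2

# Toy frame with the same columns as the question's example
rows = [(g, p, v, s)
        for g, s, vals in [
            ('gene_1', 'sample_A', [5.1, 5.5, 5.7, 6.0, 5.1]),
            ('gene_1', 'sample_B', [6.3, 6.5, 6.7, 6.8, 6.3]),
            ('gene_2', 'sample_A', [2.3, 2.5, 2.7, 3.0, 2.3]),
            ('gene_2', 'sample_B', [3.1, 3.2, 3.3, 3.4, 3.1]),
            ('gene_3', 'sample_A', [3.7, 3.8, 3.9, 4.0, 3.7]),
            ('gene_3', 'sample_B', [4.0, 4.1, 4.2, 4.3, 4.0]),
        ]
        for p, v in enumerate(vals, start=1)]
df = pd.DataFrame(rows, columns=['gene_id', 'position', 'value', 'sample'])

# One row per (sample, gene_id) profile, columns ordered by position
profiles = df.pivot_table(index=['sample', 'gene_id'],
                          columns='position', values='value')

# Plain k-means on the profile rows
_, labels = kmeans2(profiles.to_numpy(), 2, minit='++', seed=0)

# Restore the sample each profile came from, alongside its cluster label
clustered = profiles.reset_index()[['sample', 'gene_id']].assign(cluster=labels)
print(clustered)
```

From <code>clustered</code> you can then cross-tabulate <code>sample</code> against <code>cluster</code> to see how the samples distribute over the clusters.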
|
<python><scipy><cluster-analysis><longitudinal>
|
2024-09-26 04:54:57
| 2
| 1,458
|
donkey
|
79,025,526
| 16,382,765
|
Escaping quotes in Python subprocesses for Windows
|
<p>I'm trying to terminate a Python program that uses many threads on its own.</p>
<p>If I'm not mistaken, just <code>sys.exit()</code> works fine.</p>
<p>However, to guard against my many mistakes, including losing references to threads, I tried the following:</p>
<pre><code>subprocess.Popen(['start', 'cmd.exe', '/c', f'timeout 5&taskkill /f /fi "PID eq {os.getppid()}"'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>I thought it was a problem with escaping the quotes, so I tried several things but failed. I gave up and did the following and it worked perfectly.</p>
<pre><code>with open('exit_self.bat', 'w') as file:
file.write(f'timeout 5&taskkill /f /fi "PID eq {os.getppid()}"&del exit_self.bat')
subprocess.Popen(['start', 'cmd.exe', '/c', 'exit_self.bat'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>How can I do it without temp files? What did I miss? For reference, I used <code>/k</code> instead of <code>/c</code> option of <code>cmd.exe</code> to leave the window and check the error message in the window, it is as follows:</p>
<pre><code>Waiting for 0 seconds, press a key to continue ...
ERROR: Invalid argument/option - 'eq'.
Type "TASKKILL /?" for usage.
</code></pre>
<p>I'm not sure if it will help, but I added <code>echo</code> to see the syntax of the command being executed:</p>
<pre><code>subprocess.Popen(['start', 'cmd.exe', '/k', 'echo', f'timeout 5&taskkill /f /fi "PID eq {os.getppid()}"'], shell=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
</code></pre>
<p>The result is:</p>
<pre><code>"timeout 5&taskkill /f /fi \"PID eq 3988\""
</code></pre>
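For what it's worth, that escaped string is exactly what <code>subprocess.list2cmdline</code> produces — the quoting that <code>subprocess</code> applies to each element of an argument list on Windows — which is why the inner quotes reach <code>cmd.exe</code> backslash-escaped:

```python
import subprocess

# Reproduce the quoting subprocess applies to a single list element
arg = 'timeout 5&taskkill /f /fi "PID eq 3988"'
quoted = subprocess.list2cmdline([arg])
print(quoted)  # "timeout 5&taskkill /f /fi \"PID eq 3988\""
```

Passing the whole command line as a single string with <code>shell=True</code> sidesteps this per-element quoting entirely, at the cost of doing any escaping yourself.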
|
<python><windows><batch-file><subprocess>
|
2024-09-26 04:43:24
| 1
| 523
|
enoeht
|
79,025,496
| 4,984,633
|
SQLite3 DB to front end in flask just creates empty tags for each row
|
<p>I've read the docs and looked at two different tutorials, and all do it the same way, but my front end renders empty tags: it creates a tag for each expected item in the database, but the tags have no content.</p>
<p>The desired behaviour is that there is a p tag for each row in the feedback column of the database.</p>
<p>I have a SQLite3 db, that has data in it, table created with this query:
<code>CREATE TABLE feedback(id INTEGER PRIMARY KEY autoincrement,feedback TEXT NOT NULL);</code></p>
<p>This function gets all the data from the database</p>
<pre><code>def listFeedback():
print("hello")
con = sql.connect("databaseFiles/database.db")
cur = con.cursor()
data = cur.execute('SELECT * FROM feedback').fetchall()
con.close()
return data
</code></pre>
<p>This renders the template:
<code>return render_template('/sucess.html', state=True, posts=listFeedback())</code></p>
<p>This is in the template</p>
<pre><code> {% for post in posts %}
<div class='post'>
<p>{{ post['feedback'] }}</p>
</div>
{% endfor %}
</code></pre>
<p>If I have 6 items in the DB, the front end shows 6 empty p tags, even though the feedback column has text content.</p>
<p><code>post.feedback</code> has the same effect, but using just <code>post</code> fills the p tag with all columns from each row, when I only want the one field.</p>
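One likely cause, sketched below: by default the <code>sqlite3</code> cursor returns plain tuples, so <code>post['feedback']</code> is undefined in Jinja and renders as nothing. Setting <code>row_factory = sqlite3.Row</code> on the connection returns rows that support lookup by column name:

```python
import sqlite3

con = sqlite3.connect(':memory:')
con.row_factory = sqlite3.Row    # rows now support row['feedback']
cur = con.cursor()
cur.execute('CREATE TABLE feedback('
            'id INTEGER PRIMARY KEY AUTOINCREMENT, feedback TEXT NOT NULL)')
cur.execute("INSERT INTO feedback(feedback) VALUES ('great site')")
posts = cur.execute('SELECT * FROM feedback').fetchall()
con.close()

print(posts[0]['feedback'])   # great site
```

With this change, both <code>{{ post['feedback'] }}</code> and <code>{{ post.feedback }}</code> should work in the template, since Jinja tries both item and attribute access.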
|
<python><sqlite><flask>
|
2024-09-26 04:23:08
| 1
| 525
|
Ben Jones
|
79,025,343
| 5,105,207
|
Read 16 bit color depth image with Wand
|
<p>I want to process a 24-bit image in 16-bit mode with Wand. This is how I read the image</p>
<pre class="lang-py prettyprint-override"><code>from wand.image import Image as WandImage
source_file='img/in.tif'
with WandImage(filename=source_file, depth=16) as img:
print(img.depth)
print(max(img.export_pixels()))
</code></pre>
<p>the output is 8 and 255, so the color depth is still 8 bit. The <code>depth=16</code> argument did not change anything. What is going wrong? My wand version is</p>
<pre class="lang-none prettyprint-override"><code>Wand 0.6.13
ImageMagick 7.1.1-38 Q16-HDRI x64 b0ab922:20240901 https://imagemagick.org
</code></pre>
|
<python><wand>
|
2024-09-26 02:48:41
| 0
| 1,413
|
Page David
|
79,025,263
| 1,573,761
|
Using TypeGuard on class method
|
<h2>Problem</h2>
<p>I am trying to get something as follows to work:</p>
<pre class="lang-py prettyprint-override"><code>from __future__ import annotations

from abc import ABC, abstractmethod
from typing import TypeGuard


class Foo(ABC):
@property
@abstractmethod
def value(self) -> str | None: ...
def has_value(self) -> TypeGuard[FooWithValue]:
return self.value is not None
class FooWithValue(Foo, ABC):
@property
@abstractmethod
def value(self) -> str: ...
</code></pre>
<p>With the desired use case to be able to have code like:</p>
<pre class="lang-py prettyprint-override"><code>foo = SomeFoo()
print(f"foo has value: {foo.value}") if foo.has_value() else print("foo has no value")
</code></pre>
<p>Unfortunately, this doesn't work because the <code>TypeGuard</code> does not seem compatible with class methods.</p>
<p>EDIT: To be specific, <code>TypeGuard</code> applies to the 2nd argument of a class method. See for example <a href="https://peps.python.org/pep-0742/#specification" rel="nofollow noreferrer">PEP 742</a>:</p>
<blockquote>
<p>If a type narrowing function is implemented as an instance method or class method, the first positional argument maps to the second parameter (after <code>self</code> or <code>cls</code>).</p>
</blockquote>
<p>Is there a way of getting the class method to work with the type guard, or to achieve the same effective result?</p>
<h2>Workaround</h2>
<p>As a workaround, defining a standalone function <em>does</em> work, but I feel it isn't quite as nice:</p>
<pre class="lang-py prettyprint-override"><code>def foo_has_value(foo: Foo) -> TypeGuard[FooWithValue]:
return isinstance(foo, FooWithValue) or foo.value is not None
</code></pre>
<p>I would prefer to avoid this, if possible.</p>
|
<python><python-typing><typeguards>
|
2024-09-26 02:02:58
| 1
| 457
|
JP-Ellis
|
79,025,212
| 96,588
|
Treating a specific argument value format as deprecated in argparse
|
<p>I'm changing some <code>argparse</code> code to no longer require a unit for a numeric value, since the unit is always the same ("m"). I'd like the code to emit a deprecation warning only when called with the "m" suffix on the parameter value. That is, <code>./foo.py --bar=1m</code> should emit a deprecation warning saying the "m" suffix is deprecated, and <code>./foo.py --bar=1</code> should not emit such a warning. How can I do this?</p>
<p>I tried to set <code>type=str_to_bar</code>:</p>
<pre class="lang-py prettyprint-override"><code>def str_to_bar(value: str) -> Decimal:
number_value = value.removesuffix("m")
if number_value != value:
structlog.get_logger().warning(
"Specifying bar with a trailing 'm' character will not be supported in future versions. "
"Please use a plain decimal number like '0.3' instead.",
DeprecationWarning,
)
return Decimal(number_value)
</code></pre>
<p>Unfortunately, <code>argparse</code> treats the <code>DeprecationWarning</code> as fatal, so that's not an option.</p>
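For comparison, a sketch using the stdlib <code>warnings</code> module instead: <code>warnings.warn</code> does not raise (unless warnings are configured as errors), so argparse's type conversion is unaffected and the deprecation is still surfaced:

```python
import argparse
import warnings
from decimal import Decimal


def str_to_bar(value: str) -> Decimal:
    number = value.removesuffix('m')
    if number != value:
        # A real DeprecationWarning; it does not raise, so argparse
        # does not treat it as a type-conversion failure
        warnings.warn("the 'm' suffix on --bar is deprecated; "
                      "use a plain decimal like '0.3'",
                      DeprecationWarning, stacklevel=2)
    return Decimal(number)


parser = argparse.ArgumentParser()
parser.add_argument('--bar', type=str_to_bar)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    args = parser.parse_args(['--bar', '1m'])

print(args.bar, len(caught))  # 1 1
```

Whether the warning is then routed through structlog or the standard <code>logging</code> module is a separate choice (<code>logging.captureWarnings(True)</code> is one stdlib option).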
|
<python>
|
2024-09-26 01:23:36
| 1
| 59,558
|
l0b0
|
79,025,180
| 9,090,039
|
Why does read block after select in Python?
|
<p>It is understood that in exceptional circumstances, a read may block after select(2) declares that data is available for reading on something like a network socket, where checksum failures or other buffer manipulations may cause data to be discarded between <code>select</code> and <code>read</code>.</p>
<p>However, I would not expect that to happen for a Python program when dealing only with standard pipes.</p>
<p>Consider the following:</p>
<pre class="lang-py prettyprint-override"><code>with subprocess.Popen(["openssl", "speed"], text=True, stdout=subprocess.PIPE, stderr=subprocess.PIPE) as process:
# Something weird is going on. `fd.read()` below blocks even if `select` declares
# that there is data available. The `set_blocking` calls below partially help this,
# but now there is a strange delay at program start.
os.set_blocking(process.stdout.fileno(), False)
os.set_blocking(process.stderr.fileno(), False)
buffer = StringIO()
while process.poll() is None:
rfds, _, _ = select.select([process.stdout, process.stderr], [], [])
for fd in rfds:
if fd == process.stdout:
chunk = fd.read()
if DEBUG:
sys.stdout.write(chunk)
buffer.write(chunk)
elif fd == process.stderr:
sys.stdout.write(fd.read())
</code></pre>
<p>I would expect this to work without the <code>set_blocking</code> calls, but it doesn't. I have also tried with standard binary streams (<code>text=False</code>) and the results are the same.</p>
<p>Why is read blocking when <code>select</code> says it shouldn't?</p>
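For reference, the usual explanation is that <code>select</code> reports readiness of the underlying file descriptor, while <code>fd.read()</code> with no size argument on a blocking stream keeps reading until EOF. A POSIX sketch that selects on the raw descriptors and uses <code>os.read</code> with a size, which returns whatever is currently buffered instead of waiting for EOF:

```python
import os
import select
import subprocess
import sys

proc = subprocess.Popen([sys.executable, '-c', 'print("hello")'],
                        stdout=subprocess.PIPE, stderr=subprocess.PIPE)

out = {proc.stdout.fileno(): b'', proc.stderr.fileno(): b''}
pending = set(out)
while pending:
    rfds, _, _ = select.select(list(pending), [], [])
    for fd in rfds:
        data = os.read(fd, 4096)   # returns what is available now
        if data:
            out[fd] += data
        else:                      # empty read means EOF on this pipe
            pending.discard(fd)
proc.wait()

stdout_text = out[proc.stdout.fileno()].decode()
print(stdout_text, end='')   # hello
```

This avoids both the blocking reads and the <code>set_blocking</code> workaround, since <code>os.read</code> on a ready descriptor never waits for more than is buffered.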
|
<python><python-3.x>
|
2024-09-26 01:03:52
| 1
| 950
|
Benjamin Crawford Ctrl-Alt-Tut
|
79,025,101
| 6,449,740
|
which is the best way to convert json into a dataframe?
|
<p>I have a question about the best way to convert this JSON to a DataFrame:</p>
<p>JSON data:</p>
<pre class="lang-json prettyprint-override"><code>{
"myschema": {
"accounts": {
"load_type": "daily",
"fields": {
"id": "nvarchar2",
"isdeleted": "number",
"master": "nvarchar2",
"name": "nvarchar2"
}
},
"customer": {
"load_type": "daily",
"fields": {
"id": "nvarchar2",
"accountid": "nvarchar2",
"usergroupid": "nvarchar2"
}
},
"resources": {
"load_type": "daily",
"fields": {
"id": "nvarchar2",
"isdeleted": "number",
"name": "nvarchar2",
"currency": "nvarchar2"
}
},
....
....
}
}
</code></pre>
<p>The result must be something like this:</p>
<pre class="lang-none prettyprint-override"><code>TABLE |LOAD_TYPE |COLUMN |COLUMN_TYPE |
+-----------+-----------+-----------------+--------------+
| accounts |daily |id |NVARCHAR2 |
| accounts |daily |master |NVARCHAR2 |
| accounts |daily |name |NVARCHAR2 |
| customer |daily |id |NVARCHAR2 |
| customer |daily |accountid |NVARCHAR2 |
| customer |daily |usergroupid |NVARCHAR2 |
| resources |daily |id |NVARCHAR2 |
| resources |daily |name |NVARCHAR2 |
| resources |daily |currency |NVARCHAR2 |
+-----------+-----------+-----------------+--------------+
</code></pre>
<p>I tried the next code:</p>
<pre><code>df2 = spark.read.option("multiLine", "true").json(json_s3_path)
df2.printSchema()
root
|-- mySchema: struct (nullable = true)
| |-- accounts: struct (nullable = true)
| | |-- FIELDS: struct (nullable = true)
.....
.....
</code></pre>
<p>and also the next code:</p>
<pre><code>df3 = spark.read.format("json") \
.option("multiLine", True) \
.option("header",True) \
.option("inferschema",True) \
    .load(json_s3_path)
</code></pre>
<p>and the result for this is:</p>
<pre class="lang-none prettyprint-override"><code>+----------------------------------------------------------------------------------------------------------------------------------------------------+
|mySchema |
+----------------------------------------------------------------------------------------------------------------------------------------------------+
|{{{NVARCHAR2, NUMBER, NVARCHAR2, NVARCHAR2}, Delta}, {{NVARCHAR2, NVARCHAR2, NVARCHAR2}, Delta}, {{NVARCHAR2, NVARCHAR2, NUMBER, NVARCHAR2}, Delta}}|
+----------------------------------------------------------------------------------------------------------------------------------------------------+
</code></pre>
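Since the structure is a plain nested dict, one hedged alternative is to flatten it in ordinary Python first and only then build the DataFrame. The sketch below uses pandas for brevity; the same <code>rows</code> list (or the pandas frame) can feed <code>spark.createDataFrame</code>:

```python
import json

import pandas as pd

# Shortened version of the JSON from the question
raw = '''{
  "myschema": {
    "accounts": {"load_type": "daily",
                 "fields": {"id": "nvarchar2", "isdeleted": "number",
                            "master": "nvarchar2", "name": "nvarchar2"}},
    "customer": {"load_type": "daily",
                 "fields": {"id": "nvarchar2", "accountid": "nvarchar2",
                            "usergroupid": "nvarchar2"}}
  }
}'''

schema = json.loads(raw)['myschema']

# One output row per (table, column) pair
rows = [(table, meta['load_type'], col, ctype.upper())
        for table, meta in schema.items()
        for col, ctype in meta['fields'].items()]

df = pd.DataFrame(rows, columns=['TABLE', 'LOAD_TYPE', 'COLUMN', 'COLUMN_TYPE'])
print(df)
```

This avoids fighting Spark's struct inference entirely, which is often the simplest route when the "schema" of the JSON is really data.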
|
<python><json><dataframe><apache-spark><pyspark>
|
2024-09-26 00:00:19
| 1
| 545
|
Julio
|
79,025,002
| 5,231,990
|
Improving speed and robustness of opencv HoughCircles for only a single circle
|
<p>There are many threads on how to detect multiple circles via OpenCV's <code>HoughCircles</code> function, and I understand that this is a complex task with no single good answer. However, maybe someone can give some hints for parameters for the specific problem of only finding a single circle. I also want to only get an answer when the algorithm is sure enough, so no 'maybe', please. The data sometimes has a background of 100-150 (out of 255) with the signal peaking all the way to >200 for most of the time. Does the function handle offsets well or would it be worth to spend some time to remove the background?</p>
<p>My current code:</p>
<pre><code>im8blur = cv2.blur(im8, (n, n))
</code></pre>
<p>or</p>
<pre><code>im8blur = cv2.GaussianBlur(im8, (n,n), cv2.BORDER_DEFAULT)
</code></pre>
<p>with <code>n</code> in the range of 3-10 for the first case or >30 for the second case.</p>
<p>Then, with these parameters, the results are best, but it takes >20 minutes for 140 images...</p>
<pre><code>detected_circles = cv2.HoughCircles(im8blur,
cv2.HOUGH_GRADIENT, 1,
minDist = 2000,
param1 = 14,
param2 = 4,
minRadius = 400,
maxRadius = 600)
</code></pre>
<p>My circles are all around 520px in radius, but setting the <code>minRadius</code> to 500 or setting the <code>maxRadius</code> to -1 yielded worse results.</p>
<p>For my 1400x1400 pictures, I picked a minDist of 2000 to ensure only one circle is found.</p>
<p>However, I still get mixed results. Only when the circle is super clear do I get a good fit; most of the time a circle is simply placed somewhere arbitrary.</p>
<p>For the case of only one circle, there must be a better solution...?</p>
|
<python><opencv><computer-vision><data-fitting><hough-transform>
|
2024-09-25 22:52:34
| 0
| 360
|
Swift
|
79,024,937
| 6,618,225
|
Applying function to filtered columns in Pandas
|
<p>I have a Pandas Dataframe with 4 different columns: an ID, country, team and a color that is assigned to each player following a specific order.</p>
<p>I want to create a new column with a number, based on the team and the country, that simply counts up following the color order; colors may appear more than once per team.
The "ID" column has to be sorted alphabetically. Then, per country: filter to that country, determine which teams it contains, sort the first team by the color code and number it, then filter to the next team, sort by color again, but CONTINUE the counting until all teams of that country are numbered.
Then the next country gets filtered and the numbering starts again from 1 with the first team of that country.</p>
<p>It sounds complicated and I have an example code here. I apologize, it is not small but I figure it needs to be of a certain size to make the problem more understandable.</p>
<p>I used <code>df = df.sort_values(by='ID')</code> to sort the column ID by alphabet and I sorted the column 'color' by making it categorical using <code>df['Color'] = pd.Categorical(df['Color'], colorcode)</code> (similar to the custom sorting in Excel)</p>
<p>I have added the column Result to the example which shows what I am trying to reach programmatically. It does not matter whether the Result numbers are integers or strings.</p>
<p>Here the example:</p>
<pre><code>import pandas as pd
pd.set_option('display.max_rows', None)
pd.set_option('display.max_columns', None)
colorcode = ['red', 'green', 'blue', 'yellow', 'white', 'grey', 'brown', 'violet', 'turquoise', 'black', 'orange', 'pink', 'red2', 'green2', 'blue2', 'yellow2', 'white2', 'grey2', 'brown2', 'violet2', 'turquoise2', 'black2', 'orange2', 'pink2']
data = {
'ID' : ['12318683-999', '12318683-001', '12318687-999', '12318687-001', '12318684-999', '12318684-001', '12318686-999', '12318686-001', '12318685-999', '12318685-001', '12319256-999', '12319256-004', '12319256-003', '12319256-002', '12319256-001', '12319255-999', '12319255-002', '12319255-001', '12317944-999', '12317944-009', '12317944-008', '12317944-007', '12317944-006', '12317944-005', '12317944-004', '12317944-003', '12317944-002', '12317944-010', '12317944-001', '12317942-006', '12317942-005', '12317942-004', '12317942-003', '12317942-002', '12317942-001', '12317943-006', '12317943-005', '12317943-004', '12317943-003', '12317943-002', '12317943-001', '12317941-999', '12317941-009', '12317941-008', '12317941-007', '12317941-006', '12317941-005', '12317941-004', '12317941-003', '12317941-002', '12317941-001', '12319261-999', '12319261-001', '12319260-999', '12319260-001', '12319259-999', '12319259-001', '12319095-999', '12319095-001', '12319258-999', '12319258-002', '12319258-001', '12319257-999', '12319257-001', '12319262-999', '12319262-003', '12319262-002', '12319262-001', '12319264-006', '12319264-005', '12319264-004', '12319264-003', '12319264-002', '12319264-001', '12319263-006', '12319263-005', '12319263-004', '12319263-003', '12319263-002', '12319263-001', '12318985-009', '12318985-008', '12318985-007', '12318985-006', '12318985-005', '12318985-004', '12318985-003', '12318985-002', '12318985-012', '12318985-011', '12318985-010', '12318985-001', '12318986-999', '12318986-004', '12318986-003', '12318986-002', '12318986-001', '12317719-999', '12317719-003', '12317719-002', '12317719-001', '12319310-999', '12319310-003', '12319310-002', '12319310-001', '12317718-999', '12317718-002', '12317718-001', '12319311-999', '12319311-001', '12317720-999', '12317720-001', '12319319-999', '12319319-008', '12319319-007', '12319319-006', '12319319-005', '12319319-004', '12319319-003', '12319319-002', '12319319-001', '12317721-999', '12317721-001', '12318721-999', 
'12318721-001', '12318716-999', '12318716-001', '12318724-999', '12318724-001', '12318725-999', '12318725-004', '12318725-003', '12318725-002', '12318725-001', '12318726-999', '12318726-001', '12318715-999', '12318715-001', '12318718-999', '12318718-001', '12319123-999', '12319123-003', '12319123-002', '12319123-001', '12318714-999', '12318714-001', '12319118-999', '12319118-002', '12319118-001', '12318713-999', '12318713-001', '12319121-999', '12319121-004', '12319121-003', '12319121-002', '12319121-001', '12318727-999', '12318727-001', '12319116-999', '12319116-003', '12319116-002', '12319116-001', '12319119-999', '12319119-002', '12319119-001', '12319120-999', '12319120-003', '12319120-002', '12319120-001', '12319304-999', '12319304-005', '12319304-004', '12319304-003', '12319304-002', '12319304-001', '12319122-999', '12319122-002', '12319122-001', '12319117-999', '12319117-005', '12319117-004', '12319117-003', '12319117-002', '12319117-001', '12319305-999', '12319305-001', '12319306-999', '12319306-001', '23149872-999', '23149872-002', '23149872-001', '12320092-999', '12320092-002', '12320092-001', '12320093-999', '12320093-002', '12320093-001', '12320095-999', '12320095-001', '12318669-999', '12318669-002', '12318669-001', '12318364-999', '12318364-001', '12318366-999', '12318366-001', '12318365-999', '12318365-001', '12318644-999', '12318644-001'],
'Country': ['UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'Germany', 'Germany', 'Germany', 'Germany', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'USA', 'UK', 'UK', 'UK', 'UK', 'UK', 'UK', 'USA', 'USA'],
'Team' : ['Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team6', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team3', 'Team3', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team3', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team1', 'Team1', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team2', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team3', 'Team2', 'Team2', 'Team2', 'Team2', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team4', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team5', 'Team4', 'Team4'],
'Color' : ['red', 'red', 'green', 'green', 'blue', 'blue', 'yellow', 'yellow', 'white', 'white', 'violet', 'violet', 'violet', 'violet', 'violet', 'brown', 'brown', 'brown', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'yellow', 'blue', 'blue', 'blue', 'blue', 'blue', 'blue', 'green', 'green', 'green', 'green', 'green', 'green', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'pink', 'pink', 'red-2', 'red-2', 'green-2', 'green-2', 'turquoise', 'turquoise', 'blue-2', 'blue-2', 'blue-2', 'yellow-2', 'yellow-2', 'turquoise', 'turquoise', 'turquoise', 'turquoise', 'orange', 'orange', 'orange', 'orange', 'orange', 'orange', 'black', 'black', 'black', 'black', 'black', 'black', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'grey', 'white', 'white', 'white', 'white', 'white', 'grey', 'grey', 'grey', 'grey', 'green', 'green', 'green', 'green', 'white', 'white', 'white', 'brown', 'brown', 'yellow', 'yellow', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'red', 'blue', 'blue', 'grey', 'grey', 'white', 'white', 'yellow', 'yellow', 'blue', 'blue', 'blue', 'blue', 'blue', 'black', 'black', 'turquoise', 'turquoise', 'red', 'red', 'red', 'red', 'red', 'red', 'green', 'green', 'green', 'green', 'green', 'violet', 'violet', 'blue', 'blue', 'blue', 'blue', 'blue', 'brown', 'brown', 'yellow', 'yellow', 'yellow', 'yellow', 'white', 'white', 'white', 'grey', 'grey', 'grey', 'grey', 'black', 'black', 'black', 'black', 'black', 'black', 'brown', 'brown', 'brown', 'violet', 'violet', 'violet', 'violet', 'violet', 'violet', 'turquoise', 'turquoise', 'violet', 'violet', 'grey', 'grey', 'grey', 'white', 'white', 'white', 'yellow', 'yellow', 'yellow', 'blue', 'blue', 'green', 'green', 'green', 'turquoise', 'turquoise', 'violet', 'violet', 'brown', 'brown', 'red', 'red'],
'Result' : ['10', '10', '11', '11', '12', '12', '13', '13', '14', '14', '17', '17', '17', '17', '17', '16', '16', '16', '4', '4', '4', '4', '4', '4', '4', '4', '4', '4', '4', '3', '3', '3', '3', '3', '3', '2', '2', '2', '2', '2', '2', '1', '1', '1', '1', '1', '1', '1', '1', '1', '1', '21', '21', '22', '22', '23', '23', '9', '9', '24', '24', '24', '25', '25', '18', '18', '18', '18', '20', '20', '20', '20', '20', '20', '19', '19', '19', '19', '19', '19', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '6', '5', '5', '5', '5', '5', '16', '16', '16', '16', '12', '12', '12', '12', '15', '15', '15', '17', '17', '14', '14', '11', '11', '11', '11', '11', '11', '11', '11', '11', '13', '13', '6', '6', '5', '5', '4', '4', '3', '3', '3', '3', '3', '10', '10', '9', '9', '1', '1', '1', '1', '1', '1', '2', '2', '2', '2', '2', '8', '8', '3', '3', '3', '3', '3', '7', '7', '4', '4', '4', '4', '5', '5', '5', '6', '6', '6', '6', '20', '20', '20', '20', '20', '20', '7', '7', '7', '8', '8', '8', '8', '8', '8', '19', '19', '18', '18', '15', '15', '15', '14', '14', '14', '13', '13', '13', '12', '12', '11', '11', '11', '9', '9', '8', '8', '7', '7', '10', '10']
}
df = pd.DataFrame(data)
df = df.sort_values(by='ID') # This line sorts the column ID by alphabet
df['Color'] = pd.Categorical(df['Color'], colorcode)
df = pd.DataFrame(data)
print(df)
</code></pre>
<p>My problem is that I cannot figure out how to group the rows (first by Country, then by Team) and then count up according to the color, starting from 1 for red, without starting at 1 again for the next team as long as I am still in the same country.</p>
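One way to sketch this numbering (on a hypothetical miniature frame, since the full data is long): start a new number whenever the color or the country changes, then take a cumulative sum per country so the count continues across teams but restarts at the next country.

```python
import pandas as pd

# Hypothetical miniature of the frame above: two rows per color,
# teams nested inside countries.
df = pd.DataFrame({
    'Country': ['Italy', 'Italy', 'Italy', 'Italy', 'Spain', 'Spain'],
    'Team':    ['Team6', 'Team6', 'Team4', 'Team4', 'Team5', 'Team5'],
    'Color':   ['red',   'red',   'green', 'green', 'red',   'red'],
})

# A new block begins whenever the color or the country changes.
new_block = df['Color'].ne(df['Color'].shift()) | df['Country'].ne(df['Country'].shift())

# Cumulative sum of block starts, restarted per country.
df['Result'] = new_block.astype(int).groupby(df['Country']).cumsum()
```

This assumes the rows are already sorted so that each country's teams and color runs are contiguous, as in the question's data.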
|
<python><pandas><dataframe>
|
2024-09-25 22:11:55
| 2
| 357
|
Kai
|
79,024,518
| 1,954,677
|
how to make jinja2 remove newlines/spaces generated by tags but preserve newlines/spaces generated by static text
|
<p>When using a jinja2 template such as this</p>
<pre><code>A
{%- if flag == "0" %}
X
{%- elif flag == "1" %}
Y
{%- endif %}
B
</code></pre>
<p>my intuitive goal would be, whatever <code>A,B,X,Y</code> are, to simply "insert the contents <code>X,Y</code> between <code>A,B</code>".</p>
<p>This means I'd expect for <code>flag=0,1,2</code> the following results:</p>
<pre><code>A
X
B
</code></pre>
<pre><code>A
Y
B
</code></pre>
<pre><code>A
B
</code></pre>
<p>Note that <code>A,B,X,Y</code> are <em>not</em> jinja2 variables; they are placeholders for static text in the template, which, however,
could also consist of multiple lines.</p>
<p>I am looking for a <em>general</em> syntax that satisfies my above expectation for any <code>A,X,Y,B</code>,
i.e. if I edit the <strong>contents</strong> of the fields <code>A,X,Y,B</code> I do not want to touch the <strong>syntax</strong>.</p>
<p>The syntax given above does not fulfill this requirement, if the fields <code>A,B,X,Y</code> have newlines in their surrounding,
e.g. if I change it to</p>
<pre><code>a
{%- if flag == "0" %}
x
{%- elif flag == "1" %}
y
y
{%- endif %}
b
</code></pre>
<p>(now <code>a,b,x,y</code> are real characters, placeholder <code>Y</code> has multiple lines and ends with an empty line)
it will produce for <code>flag=1</code> the output</p>
<pre><code>a
y
y
b
</code></pre>
<p>while according to my expectation I want this output</p>
<pre><code>a
y
y
b
</code></pre>
<p>As I work with a newline-sensitive target language, I would now have to adapt the template syntax again.</p>
<p>So I seek a <em>general</em> template syntax of the following form</p>
<pre><code>A
{????}
X
{????}
Y
{????}
B
</code></pre>
<p>which works for any placeholders <code>A,B,X,Y</code>. In this form, jinja2 tags and target placeholders are required to be on separate lines.</p>
<p>What I tried so far:</p>
<p>Playing with my initial template (switching 6 minus signs and adding some newlines at 4 positions) I generated <code>2^10 = 1024</code> possible templates.
10 of them work for <code>A,B,X,Y</code> without newlines; with newlines, none of them does.</p>
<p>The problem seems to be that using <code>{%-</code> syntax after <code>Y</code> removes the newline before the <code>{</code> (wanted), but at the same time removes the trailing whitespace of <code>Y</code> (not wanted).</p>
<p>Considering the question title, I know that it is not clearly distinguishable if a newline is "caused" by a tag or static text, but this was the best I could come up with.</p>
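For what it's worth, jinja2's environment-level whitespace options may achieve exactly this without any minus signs: <code>trim_blocks</code> removes only the newline that immediately follows a block tag, so each tag's own line disappears while every newline belonging to the static placeholders survives. A sketch (with <code>keep_trailing_newline</code> set explicitly so the output is predictable):

```python
from jinja2 import Environment

# trim_blocks eats only the newline right after a block tag, so the
# tag lines vanish while the placeholders' own newlines survive.
env = Environment(trim_blocks=True, keep_trailing_newline=True)
template = env.from_string(
    'A\n'
    '{% if flag == "0" %}\n'
    'X\n'
    '{% elif flag == "1" %}\n'
    'Y\n'
    '{% endif %}\n'
    'B\n'
)
print(template.render(flag="0"))
```

Because only the tag-adjacent newline is trimmed, a multi-line placeholder ending in an empty line should keep that empty line, which plain <code>{%-</code> tags destroy.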
|
<python><jinja2>
|
2024-09-25 19:33:37
| 1
| 3,916
|
flonk
|
79,024,508
| 1,641,112
|
How can I change the color of the file name and line number in pytest output?
|
<p><a href="https://i.sstatic.net/8rDrNSTK.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/8rDrNSTK.png" alt="enter image description here" /></a></p>
<p>I have the above output from pytest to make it easier to find info of interest. One thing I'd like to do is make the file name and line number pop out in a different color.</p>
<p>I have a custom log_helper module I'm using:</p>
<pre><code>import logging
import os
import pprint
# Initialize the logger at the module level, so it's only created once
logger = logging.getLogger('debug_logger')
# Set up the logger only if it hasn't been set up already
if not logger.handlers:
log_level = os.environ.get('LOG_LEVEL', 'WARNING').upper()
logger.setLevel(getattr(logging, log_level, logging.WARNING))
formatter = logging.Formatter('%(levelname)s: %(message)s')
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
class RelativePathFormatter(logging.Formatter):
def __init__(self, *args, base_path=None, **kwargs):
super().__init__(*args, **kwargs)
# Define the base path (in your case ~/python)
self.base_path = base_path if base_path else os.path.expanduser('~/python')
def format(self, record):
# Modify the pathname to be relative to the base path
if record.pathname.startswith(self.base_path):
record.pathname = os.path.relpath(record.pathname, self.base_path)
return super().format(record)
# Setup logger using the custom formatter
def get_logger(name='debug_logger'):
logger = logging.getLogger(name)
# Remove existing handlers to avoid conflicts
if logger.hasHandlers():
logger.handlers.clear()
log_level = os.environ.get('LOG_LEVEL', 'DEBUG').upper()
logger.setLevel(getattr(logging, log_level, logging.DEBUG))
# Apply custom formatter
formatter = RelativePathFormatter(
'%(levelname)-8s %(pathname)s:%(lineno)d - %(message)s',
base_path=os.path.expanduser('~/python')
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
logger.addHandler(handler)
return logger
# Debug print function that respects log level and logs caller info
def d(data, depth=None):
"""
Debug print function that logs data in DEBUG mode with optional depth control for complex data structures.
Shows the correct file and line number of the caller, not log_helper.py.
:param data: The data to log (complex or simple).
:param depth: Optional depth to limit the structure's representation, defaults to None.
"""
logger = get_logger() # Get the global logger
# Bail out immediately if the logger is not in DEBUG mode
if not logger.isEnabledFor(logging.DEBUG):
return
# Format the data for debug output
formatted_data = pprint.pformat(data, depth=depth)
# Check if the formatted data has multiple lines
if "\n" in formatted_data:
# Add a newline before the data if it spans multiple lines
formatted_data = "\n" + formatted_data
# Adjust the stack level so the logger reports the caller's file and line number
logger.debug(formatted_data, stacklevel=2)
</code></pre>
<p>I'm also using a pytest.ini file:</p>
<pre><code>[pytest]
# Include numbered files as test files
python_files = test_*.py *_test.py [0-9]*-*.py 00-*_test
log_cli_format = %(levelname)-8s %(pathname)s:%(lineno)d - %(message)s
</code></pre>
<p>Finally, I have this conftest.py file:</p>
<pre><code># File: ~/python/tests/conftest.py
import time
from pathlib import Path
from collections import defaultdict
import pytest
# ANSI color codes
CYAN = '\033[96m'
YELLOW = '\033[93m'
GREEN = '\033[92m'
RED = '\033[91m'
RESET = '\033[0m'
test_results = {}
test_structure = defaultdict(lambda: defaultdict(list))
start_time = None
import pytest # Make sure pytest is imported
import logging
@pytest.hookimpl(trylast=True)
def pytest_configure(config):
logging_plugin = config.pluginmanager.get_plugin("logging-plugin")
# Change color on existing log level
logging_plugin.log_cli_handler.formatter.add_color_level(logging.DEBUG, "blue")
logging_plugin.log_cli_handler.formatter.add_color_level(logging.INFO, "cyan")
def pytest_sessionstart(session):
global start_time
start_time = time.time()
def pytest_collection_modifyitems(session, config, items):
# for item in items:
# path = Path(item.fspath).relative_to(Path(session.config.rootdir))
# dir_path = path.parent
# file_name = path.name
# test_structure[dir_path][file_name].append(item)
# Sort items based on their location in the file
items.sort(key=lambda x: x.location[1])
@pytest.hookimpl(tryfirst=True, hookwrapper=True)
def pytest_runtest_makereport(item, call):
outcome = yield
result = outcome.get_result()
if result.when == 'call':
test_results[item.nodeid] = result.outcome
def pytest_sessionfinish(session, exitstatus):
global start_time
duration = time.time() - start_time
verbosity = session.config.option.verbose
if verbosity > 1:
print("\n")
for dir_path in sorted(test_structure.keys()):
print(f"\n{CYAN}{dir_path}{RESET}")
for file_name in sorted(test_structure[dir_path].keys()):
print(f" {YELLOW}{file_name}{RESET}")
for item in test_structure[dir_path][file_name]:
result = test_results.get(item.nodeid, "UNKNOWN")
if result == "passed":
color = GREEN
elif result == "failed":
color = RED
else:
color = YELLOW
indent = " "
if "::" in item.name:
class_name, test_name = item.name.split("::")
print(f" {class_name}")
indent = " "
else:
test_name = item.name
print(f"{indent}{test_name} {color}{result.upper()}{RESET}")
# Print summary
passed = sum(1 for result in test_results.values() if result == "passed")
failed = sum(1 for result in test_results.values() if result == "failed")
total = len(test_results)
print(f"\n{GREEN if failed == 0 else RED}=== {passed} passed, {failed} failed, {total} total in {duration:.2f}s ==={RESET}\n")
def pytest_terminal_summary(terminalreporter, exitstatus, config):
# Completely override the default summary
pass
</code></pre>
<p>I've tried a few different approaches but haven't gotten anywhere.</p>
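One direction that may work, independent of pytest's own color hooks: embed the ANSI escape codes directly in the formatter's format string, so only the <code>pathname:lineno</code> pair is colorized. A minimal sketch (the logger name and message are placeholders, not from the question's setup):

```python
import io
import logging

CYAN = '\033[96m'
RESET = '\033[0m'

# Put the ANSI codes straight into the format string so the
# pathname:lineno pair renders in cyan and everything else is plain.
formatter = logging.Formatter(
    f'%(levelname)-8s {CYAN}%(pathname)s:%(lineno)d{RESET} - %(message)s'
)

buffer = io.StringIO()
handler = logging.StreamHandler(buffer)
handler.setFormatter(formatter)

logger = logging.getLogger('color_demo')
logger.addHandler(handler)
logger.warning('something happened')

print(buffer.getvalue())
```

The same escape sequences could presumably be pasted into <code>log_cli_format</code> in <code>pytest.ini</code> or combined with the custom <code>RelativePathFormatter</code> above, though whether pytest's capture keeps them depends on terminal color settings.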
|
<python><pytest>
|
2024-09-25 19:27:56
| 1
| 7,553
|
StevieD
|
79,024,431
| 228,014
|
Subclass of pathlib.Path doesn't support "/" operator
|
<p>I'm attempting to create a subclass of <code>pathlib.Path</code> that will do some manipulation to the passed string path value before passing it along to the base class.</p>
<pre><code>class MyPath(Path):
def __init__(self, str_path):
str_path = str_path.upper() # just representative, not what I'm actually doing
super().__init__(str_path)
</code></pre>
<p>However, when I try to use this:</p>
<pre><code>foo = MyPath("/path/to/my/file.txt")
bar = foo / "bar"
</code></pre>
<p>I get the following error: <code>TypeError: unsupported operand type(s) for /: 'MyPath' and 'str'</code></p>
<p>I am using Python 3.12, which I understand to have better support for subclassing <code>Path</code>.</p>
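A likely explanation, sketched below: in 3.12 the <code>/</code> operator rebuilds the path through <code>with_segments()</code>, which calls the subclass constructor with <em>several</em> segment arguments; an <code>__init__</code> that accepts exactly one argument raises <code>TypeError</code>, which <code>__truediv__</code> swallows and turns into <code>NotImplemented</code>. Accepting <code>*args</code> may fix it (the uppercasing is just the question's representative transform):

```python
import sys
from pathlib import Path

class MyPath(Path):
    def __init__(self, *args):
        # "/" rebuilds paths via with_segments(), which calls this
        # constructor with multiple segments, so accept *args.
        args = tuple(
            str(a).upper() if i == 0 else a  # representative transform only
            for i, a in enumerate(args)
        )
        super().__init__(*args)

if sys.version_info >= (3, 12):  # subclassing Path this way needs 3.12+
    foo = MyPath("/path/to/my/file.txt")
    bar = foo / "bar"
    print(bar)
```

Note the transform is applied again on every rebuild, so it should be idempotent (as <code>upper()</code> is); otherwise overriding <code>with_segments()</code> instead may be cleaner.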
|
<python><python-3.x><oop><subclassing><pathlib>
|
2024-09-25 19:02:32
| 1
| 4,396
|
Kyle
|
79,024,371
| 4,449,954
|
Persisting token/auth record cache for DeviceCodeCredentials
|
<p>I have a Python program that submits pipelines to Azure ML. This code typically runs on headless Linux VMs, and authenticates with Azure using the <code>DeviceCodeCredentials</code> flow. I want to cache these credentials so that I can run this script many times and only have to re-authenticate every so often (e.g. every hour).</p>
<p>According to the documentation, <code>TokenCachePersistenceOptions</code> should solve this, however it seems to have no effect beyond writing a useless file. This is what my code looks like:</p>
<pre class="lang-py prettyprint-override"><code>from azure.ai.ml import MLClient
from azure.identity import DeviceCodeCredential, TokenCachePersistenceOptions
from .constants import AML_SUBSCRIPTION_ID, AML_RESOURCE_GROUP, AML_WORKSPACE
cache_path = os.path.expanduser("~/.azure/msal_token_cache.json")
token_cache_options = TokenCachePersistenceOptions(name=cache_path, allow_unencrypted_storage=True)
credential = DeviceCodeCredential(token_cache_persistence_options=token_cache_options)
client = MLClient(
credential=credential,
subscription_id=AML_SUBSCRIPTION_ID,
resource_group_name=AML_RESOURCE_GROUP,
workspace_name=AML_WORKSPACE,
)
# do something useless that requires making requests
jobs = client.jobs.list()
_ = [j.name for j in jobs]
</code></pre>
<p>I would expect that if I run this script, authenticate, and then run it a second time, it would not request authentication again. However, it does.</p>
<p>I have tried different credential scopes, as well as creating and manually serializing and deserializing all manner of "tokens" and "authorization records", and passing those tokens and authorization records into methods that may or may not actually do anything at all with them, but nothing works. I have scoured the documentation for the Python SDK. I contacted Azure, who were useless.</p>
<p>Does anyone have suggestions?</p>
|
<python><azure><azure-identity>
|
2024-09-25 18:44:32
| 1
| 1,080
|
stuart
|
79,024,206
| 6,449,740
|
How to convert a JSON file into a DataFrame with Spark?
|
<p>One of my tasks today is to read a simple JSON file, convert it into a DataFrame, loop over the DataFrame, and do some validations, etc.</p>
<p>This is part of my code:</p>
<pre><code>bucket_name = 'julio-s3'
json_source = 'source/'
file_2 = "tmp.json"
json_s3_path = f"s3://{bucket_name}/{json_source}/{file_2}"
print(json_s3_path)
df = spark.read.json(json_s3_path)
df.printSchema()
df.show()
</code></pre>
<p>And here is the first error:</p>
<pre><code> AnalysisException: Since Spark 2.3, the queries from raw JSON/CSV
files are disallowed when the referenced columns only include the
internal corrupt record column (named _corrupt_record by default). For
example:
spark.read.schema(schema).csv(file).filter($"_corrupt_record".isNotNull).count()
and
spark.read.schema(schema).csv(file).select("_corrupt_record").show().
Instead, you can cache or save the parsed results and then send the
same query. For example, val df =
spark.read.schema(schema).csv(file).cache() and then
df.filter($"_corrupt_record".isNotNull).count().
</code></pre>
<p>So I tested with the following:</p>
<pre><code>multiline_df = spark.read.option("multiline","true").json(json_s3_path)
multiline_df.show(truncate=False)
print(type(multiline_df))
</code></pre>
<p>and this is the result:</p>
<pre><code>+----------------------------------------------------------------------------------------------------------------------------------------------------+
|mySchema |
+----------------------------------------------------------------------------------------------------------------------------------------------------+
|{{{NVARCHAR2, NUMBER, NVARCHAR2, NVARCHAR2}, Delta}, {{NVARCHAR2, NVARCHAR2, NVARCHAR2}, Delta}, {{NVARCHAR2, NVARCHAR2, NUMBER, NVARCHAR2}, Delta}}|
+----------------------------------------------------------------------------------------------------------------------------------------------------+
<class 'pyspark.sql.dataframe.DataFrame'>
</code></pre>
<p>My JSON file is something like this:</p>
<pre><code>{
"myschema": {
"accounts": {
"load_type": "daily",
"fields": {
"id": "nvarchar2",
"isdeleted": "number",
"master": "nvarchar2",
"name": "nvarchar2"
}
},
"customer": {
"load_type": "daily",
"fields": {
"id": "nvarchar2",
"accountid": "nvarchar2",
"usergroupid": "nvarchar2"
}
},
"resources": {
"load_type": "daily",
"fields": {
"id": "nvarchar2",
"isdeleted": "number",
"name": "nvarchar2",
"currency": "nvarchar2"
}
}
}
}
</code></pre>
<p>I need to loop over the FIELDS objects to find which of them are "NVARCHAR2" and print the key and the value, for example to end up with something like this:</p>
<pre><code> TABLE |COLUMN |COLUMN_TYPE |
+-----------+-----------------+--------------+
| accounts |id |NVARCHAR2 |
| accounts |master |NVARCHAR2 |
| accounts |name |NVARCHAR2 |
| customer |id |NVARCHAR2 |
| customer |accountid |NVARCHAR2 |
| customer |usergroupid |NVARCHAR2 |
| resources |id |NVARCHAR2 |
| resources |name |NVARCHAR2 |
| resources |currency |NVARCHAR2 |
+-----------+-----------------+--------------+
</code></pre>
<p>Can somebody help me resolve this problem and read the JSON into a correct structure?</p>
<p>Regards</p>
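Since this is a small config document rather than row-oriented data, one option is to parse it as plain JSON in Python, flatten the nested <code>fields</code> mappings into tuples, and only then hand the rows to Spark. A sketch of the flattening step (the Spark call at the end is left as a comment, as it depends on your session):

```python
import json

# The schema document from the question, inlined for the sketch.
raw = """
{
  "myschema": {
    "accounts":  {"load_type": "daily",
                  "fields": {"id": "nvarchar2", "isdeleted": "number",
                             "master": "nvarchar2", "name": "nvarchar2"}},
    "customer":  {"load_type": "daily",
                  "fields": {"id": "nvarchar2", "accountid": "nvarchar2",
                             "usergroupid": "nvarchar2"}},
    "resources": {"load_type": "daily",
                  "fields": {"id": "nvarchar2", "isdeleted": "number",
                             "name": "nvarchar2", "currency": "nvarchar2"}}
  }
}
"""

doc = json.loads(raw)

# Flatten into (table, column, column_type) rows, keeping only NVARCHAR2.
rows = [
    (table, column, col_type.upper())
    for table, meta in doc["myschema"].items()
    for column, col_type in meta["fields"].items()
    if col_type.lower() == "nvarchar2"
]

for row in rows:
    print(row)

# With a SparkSession in hand, something like:
# df = spark.createDataFrame(rows, ["TABLE", "COLUMN", "COLUMN_TYPE"])
```

When reading from S3 instead of an inline string, the same flattening applies after fetching the object body.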
|
<python><dataframe><apache-spark><pyspark><aws-glue>
|
2024-09-25 17:49:27
| 1
| 545
|
Julio
|
79,024,195
| 785,404
|
Is there an inline way to assert that a value is not None?
|
<p>I have this code</p>
<pre class="lang-py prettyprint-override"><code>if foo:
bar = 1
else:
bar = maybe_return_int(baz)
</code></pre>
<p>The return type of <code>maybe_return_int</code> is <code>Optional[int]</code>, so mypy complains</p>
<pre><code>error: Incompatible types in assignment (expression has type "int | None", variable has type "int") [assignment]
</code></pre>
<p>However, in the context of my code I know that <code>maybe_return_int(baz)</code> will never return <code>None</code>. To get it to type-check, I have to write this tortuous thing:</p>
<pre class="lang-py prettyprint-override"><code>if foo:
bar = 1
else:
bar_maybe_none = maybe_return_int(baz)
assert bar_maybe_none is not None
bar = bar_maybe_none
</code></pre>
<p>Is there a shorter way to write this? I would like it if there were an <code>assert_not_none</code> function that I could use like this:</p>
<pre class="lang-py prettyprint-override"><code>if foo:
bar = 1
else:
bar = assert_not_none(maybe_return_int(baz))
</code></pre>
<p>Could I write such an <code>assert_not_none</code> function?</p>
<p>I could use cast, but I like that assert will actually do the check (at least when assertions aren't disabled) and that with assert I don't have to write the type (in my real code it's a more verbose thing to type than just an int).</p>
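For reference, such a function is straightforward to write with a <code>TypeVar</code>, and mypy narrows the return type as desired; a minimal sketch:

```python
from typing import Optional, TypeVar

T = TypeVar("T")

def assert_not_none(value: Optional[T]) -> T:
    """Narrow Optional[T] to T, failing loudly if the value is None."""
    assert value is not None
    return value
```

Used as <code>bar = assert_not_none(maybe_return_int(baz))</code>, the assignment type-checks as <code>int</code> while keeping the runtime check (modulo <code>-O</code> disabling assertions).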
|
<python><python-typing><mypy>
|
2024-09-25 17:47:30
| 2
| 2,085
|
Kerrick Staley
|
79,024,193
| 4,181,335
|
How to get the pandas version in Python with numpy >= 2.0.0 installed
|
<p>One usually gets the pandas version in Python as follows:</p>
<pre><code>import pandas
print(pandas.__version__)
</code></pre>
<p>However, if numpy version 2.0.0 or higher is installed and the pandas version is < 2.2.2,
it typically crashes as follows on the import statement:</p>
<p><a href="https://i.sstatic.net/H3SRRa2O.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/H3SRRa2O.png" alt="enter image description here" /></a></p>
<p>Is there a clever way to check the pandas version (within Python) in order to warn the user of this incompatibility instead of the user-unfriendly traceback dump?</p>
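One approach that may avoid the crash entirely: read the installed distribution's version from package metadata instead of importing it, via <code>importlib.metadata</code> (Python 3.8+). A sketch:

```python
from importlib.metadata import PackageNotFoundError, version

def installed_version(package: str):
    """Return the installed distribution version string without importing it."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# No `import pandas` happens here, so an incompatible numpy cannot crash us.
print(installed_version("pandas"))
```

The version string can then be compared (e.g. with <code>packaging.version.Version</code>, if available) before attempting the real import, and a friendly warning printed instead of the traceback.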
|
<python><pandas><numpy>
|
2024-09-25 17:47:14
| 1
| 343
|
Aendie
|
79,024,143
| 18,769,241
|
How to turn a tuple of lists into a list?
|
<p>I am using Python 2.7 to turn a tuple of two single-element lists of floats into a list of floats (<code>([1.0],[2.0])</code> => <code>[1.0,2.0]</code>) like the following:</p>
<pre><code>[tuple_lists[0][0],tuple_lists[1][0]]
</code></pre>
<p>Is there a more elegant, Pythonic way to do this in Python 2.7?</p>
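Two idiomatic options, both of which work unchanged on Python 2.7 and 3, and which also generalize to inner lists of any length:

```python
from itertools import chain

tuple_lists = ([1.0], [2.0])

# Either a nested list comprehension...
flat = [x for sub in tuple_lists for x in sub]

# ...or itertools.chain, which lazily concatenates the inner lists.
flat2 = list(chain.from_iterable(tuple_lists))

print(flat)
```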
|
<python><python-2.7>
|
2024-09-25 17:29:42
| 2
| 571
|
Sam
|
79,024,070
| 4,875,641
|
Python Multiprocessing seems to require a seemingly wrong combination of imports
|
<p>I have the following code using the multiprocessing package</p>
<pre><code>print ('cpu count=',multiprocessing.cpu_count())
event = multiprocessing.Event() # Assign an event object
lock = multiprocessing.Lock()
with Pool(processes=2) as pool:
for serverNum in range (2):
pool.apply_async(funcCall, (event, lock, serverNum))
</code></pre>
<p>When the code is preceded by</p>
<pre><code>import multiprocessing
</code></pre>
<p>I get the error "Pool" is not defined</p>
<p>If I include instead</p>
<pre><code>from multiprocessing import Event, Lock, Pool
</code></pre>
<p>I get the error</p>
<pre><code>print ('cpu count=',multiprocessing.cpu_count())
^^^^^^^^^^^^^^^
</code></pre>
<p>NameError: name 'multiprocessing' is not defined. Did you forget to import 'multiprocessing'?</p>
<p>When I include</p>
<pre><code>from multiprocessing import Event, Lock, Pool
import multiprocessing
</code></pre>
<p>It brings in everything I need. But it seems wrong to import the module as well as the names of individual functions from it.</p>
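The mixed style isn't required: a single <code>import multiprocessing</code> suffices if every name is qualified with the module prefix, <code>Pool</code> included. A sketch (the worker function is a placeholder for <code>funcCall</code>):

```python
import multiprocessing

def square(n):
    # stand-in for the real worker (funcCall in the question)
    return n * n

if __name__ == "__main__":
    print("cpu count =", multiprocessing.cpu_count())
    # Qualify Pool with the module name instead of importing it separately.
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.map(square, range(4))
    print(results)
```

The converse also works: <code>from multiprocessing import Pool, cpu_count</code> and then calling <code>cpu_count()</code> unqualified. The original error arose from mixing the two styles, using <code>Pool</code> unqualified while only <code>import multiprocessing</code> was in effect.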
|
<python><import><multiprocessing><nameerror>
|
2024-09-25 17:06:35
| 2
| 377
|
Jay Mosk
|
79,024,010
| 2,893,712
|
Pandas Return Corresponding Column Based on Date Being Between Two Values
|
<p>I have a Pandas dataframe that is setup like so:</p>
<pre><code>Code StartDate EndDate
A 2024-07-01 2024-08-03
B 2024-08-06 2024-08-10
C 2024-08-11 2024-08-31
</code></pre>
<p>I have a part of my code that iterates through each day (starting from 2024-07-01) and I am trying to return the corresponding <code>Code</code> given a date (with a fallback if the date does not fall within any StartDate/EndDate range).</p>
<p>My original idea was to do something like:</p>
<pre><code>DAYS = DAY_DF['Date'].tolist() # Just a list of each day
for DAY in DAYS:
code = False
for i,r in df.iterrows():
if r['StartDate'] <= DAY <= r['EndDate']:
code = r['Code']
break
if not code: # `Code` is still False
code = 'Fallback_Code'
</code></pre>
<p>But this seems very inefficient to iterate over each row in the dataframe especially because I have a lot of records in my dataframe.</p>
<p>Here are some example inputs and the resulting code output:</p>
<pre><code>2024-07-03 -> 'A'
2024-08-04 -> 'Fallback_Code'
2024-08-10 -> 'B'
2024-08-11 -> 'C'
</code></pre>
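Since the ranges are non-overlapping, one vectorized option is a <code>pd.IntervalIndex</code>: <code>get_indexer</code> maps every date to the position of the interval containing it (or <code>-1</code> for no match), replacing the per-row loop. A sketch on the question's sample data:

```python
import pandas as pd

df = pd.DataFrame({
    "Code": ["A", "B", "C"],
    "StartDate": pd.to_datetime(["2024-07-01", "2024-08-06", "2024-08-11"]),
    "EndDate": pd.to_datetime(["2024-08-03", "2024-08-10", "2024-08-31"]),
})

# Non-overlapping closed intervals let get_indexer map each date to a
# row position; -1 means "no interval contains this date".
intervals = pd.IntervalIndex.from_arrays(
    df["StartDate"], df["EndDate"], closed="both"
)

days = pd.to_datetime(["2024-07-03", "2024-08-04", "2024-08-10", "2024-08-11"])
positions = intervals.get_indexer(days)
codes = [df["Code"].iloc[p] if p != -1 else "Fallback_Code" for p in positions]
print(codes)
```

This does one indexed lookup for the whole list of days instead of scanning the frame once per day.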
|
<python><pandas>
|
2024-09-25 16:50:29
| 1
| 8,806
|
Bijan
|
79,024,006
| 6,163,621
|
Determine or indicate if Linux process is started by web user (apache) or cronjob?
|
<p>On my server, python scripts can be started either through a cronjob, or through our website (via apache). Sometimes I want to kill processes started by one, but not the other. When I run <code>htop</code>, it looks like the user is "ubuntu" for both. Is there a way to determine natively which process started the script? Or if not natively, indicate how it was started?</p>
<p>Ideally I'd write another script in python that would find those processes (let's say those started by apache) and kill them while leaving the cronjobs alone. Meaning I can leverage both linux and python tools for this purpose.</p>
<p><strong>UPDATE:</strong> Thanks for all the comments, they've opened up some avenues for me to try (namely the Process ID <em>group</em> and using a different user for the cronjob). Will post an answer when I get it nailed down.</p>
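One Linux-native way to tell the two apart, sketched below: walk the parent-process chain in <code>/proc</code> and look for <code>apache2</code> (or <code>httpd</code>) versus <code>cron</code> among the ancestors. This is a sketch assuming a <code>/proc</code> filesystem; the daemon names to match are assumptions that depend on the distribution.

```python
import os

def parent_chain(pid: int) -> list:
    """Walk /proc upward, collecting process names up to init (pid 1)."""
    names = []
    while pid >= 1:
        try:
            with open("/proc/%d/stat" % pid) as f:
                data = f.read()
        except (FileNotFoundError, ProcessLookupError):
            break
        # The comm field is parenthesized and may itself contain spaces
        # or parentheses, so locate it by the outermost parens.
        lpar, rpar = data.index("("), data.rindex(")")
        names.append(data[lpar + 1:rpar])
        ppid = int(data[rpar + 2:].split()[1])  # field 4 of stat is ppid
        if ppid < 1 or pid == 1:
            break
        pid = ppid
    return names

def started_by(pid: int) -> str:
    chain = parent_chain(pid)
    if any(name in ("apache2", "httpd") for name in chain):
        return "web"
    if "cron" in chain:  # daemon name is distro-dependent (cron/crond)
        return "cron"
    return "other"
```

A kill script could then iterate over candidate PIDs and act only on those whose <code>started_by()</code> is <code>"web"</code>. Note the chain breaks down if the parent has already exited and the process was reparented to init.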
|
<python><linux><apache><cron>
|
2024-09-25 16:49:36
| 0
| 9,134
|
elPastor
|
79,023,989
| 2,127,650
|
Implementing image drag-and-drop functionality with DearPyGui framework challenged me too much
|
<p>A few days ago I started learning the <strong>DearPyGui</strong> framework and got excited by its speed and results. However, I encountered challenges when I tried to implement drag-and-drop functionality for images. The usage seems straightforward, as the methods <code>add_image</code> and <code>add_image_button</code> have <code>drag_callback</code> and <code>drop_callback</code> parameters, but whatever I tried, nothing worked.
I searched for the solution in the <a href="https://dearpygui.readthedocs.io/en/latest/index.html" rel="nofollow noreferrer">official documentation</a> but without success - there is not much written about drag-and-drop. I also looked for ready-made solutions and found only <a href="https://github.com/IvanNazaruk/DearPyGui-DragAndDrop/tree/main" rel="nofollow noreferrer">this</a>, which seems to be for dragging text in from outside the application and doesn't address my issue; I need to drag and drop images between elements within the same window.</p>
<p>I am using the most recent <strong>DearPyGui 1.11.1</strong> on a <strong>Windows machine</strong> with <strong>Python 3.9.13</strong>.
The images are created but I can't fire the <code>drag_callback</code> or <code>drop_callback</code> events. For the test I added a <code>callback=click_callback</code> parameter to the <code>add_image_button</code> function and it worked. It looks like this approach can work; I am just missing something.
Below is a simplified variant of my approach - it automatically creates the images to test the code:</p>
<pre><code>import dearpygui.dearpygui as dpg
def get_image(width, height, red, green, blue, alpha):
"""Generates simple image"""
return [red, green, blue, alpha] * width * height
def drag_callback(sender, app_data):
print(f"Starting drag from {sender}") # Never comes here
def drop_callback(sender, app_data):
print(f"Dropped from {sender}") # Never comes here
def create_image(image_data, width, height, image_tag, texture_tag):
with dpg.texture_registry():
texture_id = dpg.add_static_texture(width, height, image_data, tag=texture_tag)
# Add the image with drag and drop functionality
dpg.add_image_button(texture_id, tag=image_tag, payload_type="image_payload", drag_callback=drag_callback,
drop_callback=drop_callback)
if __name__ == '__main__':
dpg.create_context()
print(f"DPG version {dpg.get_dearpygui_version()}") # DPG version 1.11.1
# Create a window
with dpg.window(label="Image Drag-and-Drop", tag="DragDropWindow", width=800, height=600, no_close=True,
no_collapse=True):
width = 100
height = 100
img1 = get_image(width, height, 1, 0, 1, 1)
img2 = get_image(width, height, 0, 1, 0, 1)
# Create draggable images with unique tags for both group and texture
create_image(img1, width, height, "Image1", "ImageTexture1")
create_image(img2, width, height, "Image2", "ImageTexture2")
# Setup and show viewport
dpg.create_viewport(title="Main", width=800, height=600)
dpg.setup_dearpygui()
dpg.show_viewport()
dpg.start_dearpygui()
# Clean up
dpg.destroy_context()
</code></pre>
|
<python><drag-and-drop><dearpygui>
|
2024-09-25 16:43:41
| 1
| 658
|
zviad
|
79,023,929
| 550,235
|
type specific autocomplete in vscode python
|
<p>Assuming I'm properly typing my python code, is there a way or extension to get vscode to only offer autocomplete options that are appropriate for that parameter?</p>
<p>For example:</p>
<pre><code>from enum import IntEnum
class MyEnum(IntEnum):
OneThing = 1
Another = 2
EvenMore = 3
def my_function(parameter : MyEnum):
print(parameter)
</code></pre>
<p>Now when I start writing my code, I type...</p>
<p><code>my_function(</code></p>
<p>And it completes the <code>()</code> and offers info on the type that needs to go there.</p>
<p>I'd LIKE for me to be able to start typing <code>One</code> and have it have MyEnum.OneThing in its suggestions</p>
<p>If I create a variable at the top scope like...</p>
<pre><code>MyEnum_OneThing = MyEnum.OneThing
</code></pre>
<p>I can then type</p>
<p><code>my_function(One</code> and it will autocomplete the <code>)</code> and suggest <code>MyEnum_OneThing</code></p>
<p>Is there a way to get it to suggest <code>MyEnum.OneThing</code>? We have some objects with hundreds of enumerated values and we'd like autocomplete/intellisense to be a little more helpful.</p>
<p>(Or, I suppose, an alternate solution would be a way to automate the <code>MyEnum_&lt;thing&gt; = MyEnum.&lt;thing&gt;</code> aliases so that the enum with hundreds of items ends up at the top-level scope and intellisense picks it up.)</p>
|
<python><visual-studio-code><autocomplete>
|
2024-09-25 16:29:33
| 1
| 2,730
|
Russ Schultz
|
79,023,865
| 17,721,722
|
How to Handle Multiple Date Formats in a Single Column with PySpark?
|
<p>I am working with a DataFrame in PySpark that contains a column named <code>datdoc</code>, which has multiple date formats as shown below:</p>
<pre><code>datdoc
07-SEP-24
07-SEP-2024
07-SEP-2024
07-SEP-2024
07-SEP-24
07-SEP-24
07-SEP-2024
07-SEP-2024
07-SEP-2024
07-SEP-2024
07-SEP-2024
</code></pre>
<p>I need to parse these dates into a default format. I've tried the following approaches, but I'm running into issues.</p>
<ol>
<li><strong>First Attempt: Using CASE WHEN</strong><br />
I used the following payload to handle multiple date formats:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>columns = {'field_name': 'datdoc', 'current_format': ['dd-MMM-yy', 'dd-MMM-yyyy'], 'data_type': 'Date'}
dateexpression = Column<'CASE WHEN (to_date(datdoc, dd-MMM-yy) IS NOT NULL) THEN to_date(datdoc, dd-MMM-yy) WHEN (to_date(datdoc, dd-MMM-yyyy) IS NOT NULL) THEN to_date(datdoc, dd-MMM-yyyy) ELSE NULL END AS datdoc'>
</code></pre>
<ol start="2">
<li><strong>Second Attempt: Single Format Parsing</strong><br />
I also tried simplifying to a single format:</li>
</ol>
<pre class="lang-py prettyprint-override"><code>columns = {'field_name': 'datdoc', 'current_format': ['dd-MMM-yy'], 'data_type': 'Date'}
date_expression = Column<'to_date(datdoc, dd-MMM-yy) AS datdoc'>
</code></pre>
<h3>Python Function</h3>
<pre class="lang-py prettyprint-override"><code>def change_date_format(self, columns) -> None:
def _convert_date_format(field_name: str, current_format: list, is_timestamp: bool) -> F.Column:
base_function = F.to_timestamp if is_timestamp else F.to_date
expression = None
if len(current_format) == 1:
return base_function(F.col(field_name), current_format[0]).alias(field_name)
else:
for fmt in current_format:
current_expr = base_function(F.col(field_name), fmt)
if expression is None:
expression = F.when(current_expr.isNotNull(), current_expr)
else:
expression = expression.when(current_expr.isNotNull(), current_expr)
return expression.otherwise(F.lit(None)).alias(field_name)
cols = {col["field_name"] for col in columns}
date_expressions = []
for col in columns:
if col["data_type"] in ["DateTime", "Time"]:
date_expressions.append(_convert_date_format(col["field_name"], col["current_format"], True))
elif col["data_type"] == "Date":
date_expressions.append(_convert_date_format(col["field_name"], col["current_format"], False))
expression = [F.col(i) for i in self.df.columns if i not in cols]
self.df = self.df.select(*date_expressions, *expression)
</code></pre>
<p>In both cases, I encountered the following error when trying to parse <code>07-SEP-2024</code> using <code>dd-MMM-yy</code>:</p>
<pre class="lang-bash prettyprint-override"><code>24/09/25 21:10:18 WARN TaskSetManager: Lost task 0.0 in stage 9.0 (TID 7) (rhy-4 executor driver): org.apache.spark.SparkUpgradeException: [INCONSISTENT_BEHAVIOR_CROSS_VERSION.PARSE_DATETIME_BY_NEW_PARSER] You may get a different result due to the upgrading to Spark >= 3.0:
Fail to parse '07-SEP-2024' in the new parser. You can set "spark.sql.legacy.timeParserPolicy" to "LEGACY" to restore the behavior before Spark 3.0, or set to "CORRECTED" and treat it as an invalid datetime string.
</code></pre>
<p><a href="https://pastebin.com/5dYLNkKv" rel="nofollow noreferrer">Click Here to View Whole Error</a></p>
<h3>Question</h3>
<p>Is there a way to ensure that invalid date strings are returned as <code>NULL</code> instead of being incorrectly parsed? One approach I considered is using <code>CASE WHEN</code> with a RegEx pattern in PySpark. However, I would like to explore fixing my current approach first. Any guidance on how to achieve this would be greatly appreciated!</p>
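Independent of Spark's parser policy, the "try each format, else NULL" behavior can be expressed as a plain-Python function and wrapped in a UDF. A sketch of the parsing core (wrapping it with <code>F.udf(parse_or_none, DateType())</code> is an assumption about your setup, and a UDF trades Spark's native parsing speed for predictability):

```python
from datetime import date, datetime

# Two-digit pattern first: %y refuses 4-digit years (it matches exactly
# two digits), whereas %Y would happily read "24" as the year 24.
FORMATS = ("%d-%b-%y", "%d-%b-%Y")

def parse_or_none(value):
    """Return a date for the first matching format, else None (NULL)."""
    if value is None:
        return None
    for fmt in FORMATS:
        try:
            # strptime matches month names like SEP case-insensitively
            return datetime.strptime(value, fmt).date()
        except ValueError:
            continue
    return None
```

With the format order above, <code>07-SEP-24</code> and <code>07-SEP-2024</code> both parse to the same date, and anything unparseable comes back as <code>None</code> rather than raising.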
|
<python><apache-spark><date><pyspark><apache-spark-sql>
|
2024-09-25 16:05:45
| 2
| 501
|
Purushottam Nawale
|
79,023,711
| 16,869,946
|
Python list with NumPy arrays as elements
|
<p>I have a list with NumPy arrays as elements that looks like this:</p>
<pre><code>[array([ 0.2, -2.3, 5.3]),
array([-1.6, -1.7, 0.3]),
array([ 2.4, -0.2, -3.0]),
array([-4.1, -2.3, -2.7])]
</code></pre>
<p>and I want to convert it into 3 lists, each with elements from the columns of the above list. So the desired outcome looks like</p>
<pre><code>list1 = [0.2, -1.6, 2.4, -4.1]
list2 = [-2.3, -1.7, -0.2, -2.3]
list3 = [5.3, 0.3, -3.0, -2.7]
</code></pre>
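One compact way to do this: stack the arrays into a 2-D matrix, transpose it, and convert the rows (the original columns) back to lists:

```python
import numpy as np

arrays = [np.array([0.2, -2.3, 5.3]),
          np.array([-1.6, -1.7, 0.3]),
          np.array([2.4, -0.2, -3.0]),
          np.array([-4.1, -2.3, -2.7])]

# Stack into a 4x3 matrix, transpose to 3x4, then peel off the columns.
list1, list2, list3 = np.stack(arrays).T.tolist()
print(list1)
```

A pure-Python alternative is <code>list1, list2, list3 = (list(t) for t in zip(*arrays))</code>, which avoids building the intermediate matrix.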
|
<python><arrays><list><numpy>
|
2024-09-25 15:25:34
| 5
| 592
|
Ishigami
|
79,023,648
| 8,543,025
|
Setting Axis Range for Subplot in Plotly-Python
|
<p>I am trying to manually set the range of one (shared) y-axis in a plotly multi-plot figure, but for some reason, it also affects the range of the other y-axes.<br />
Take a look at this example. I'll start by creating a 3x2 figure, with a shared y-axis per row.</p>
<pre><code>import numpy as np
import plotly.graph_objects as go
from plotly.subplots import make_subplots
import plotly.io as pio
pio.renderers.default = "browser"
np.random.seed(42)
N = 20
nrows, ncols, ntraces = 3, 2, 3
fig = make_subplots(
rows=nrows, cols=ncols,
shared_xaxes=True, shared_yaxes=True,
)
for r in range(nrows):
scale = 1 / 10 ** r
for c in range(ncols):
for t in range(ntraces):
y = np.random.randn(N) * scale
fig.add_trace(
row=r + 1, col=c + 1,
trace=go.Scatter(y=y, mode="markers+lines", name=f"trace {t}")
)
fig.update_layout(showlegend=False)
fig.show()
</code></pre>
<p>This generates the following figure:
<a href="https://i.sstatic.net/AhG1UR8J.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AhG1UR8J.png" alt="generic example for multi-plot figure" /></a></p>
<p>Now I want to manually set the range only for the first row, so I do:</p>
<pre><code>fig.update_yaxes(range=[-2, 2], row=1, col=1)
fig.show()
</code></pre>
<p>This indeed sets the range as required. Problem is, this upsets all other axes as well, changing their range to some automatic value (<code>[-1, 4]</code>):
<a href="https://i.sstatic.net/AJ0KfUl8.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/AJ0KfUl8.png" alt="Wrong Range" /></a></p>
<p>I tried manually setting the range of the other rows using various combinations of <code>range</code> and <code>rangemode='normal'</code>, for example:</p>
<pre><code>fig.update_yaxes(range=[None, None], row=2, col=1)
fig.update_yaxes(range=None, row=2, col=1)
fig.update_yaxes(rangemode='normal', row=2, col=1)
</code></pre>
<p>Nothing seems to work...<br />
How do I manually set the y-axis range only for one of the axes?</p>
|
<python><python-3.x><plotly>
|
2024-09-25 15:08:58
| 1
| 593
|
Jon Nir
|
79,023,570
| 4,211,279
|
Vertically scroll multiple plots in Tkinter
|
<p>I would like to draw multiple plots with Matplotlib, add them to a Tkinter frame (stack them one below the other), and be able to scroll vertically between the plots.
Each plot should fill the x-direction and have a minimum y-height, so that if the total height of the multiple plots exceeds the height of the window, scrolling is enabled.</p>
<p>I have found a way to scroll a single large figure (made of multiple subplots), but I am constrained to use a library which cannot produce subplots, so my idea was to stack the single plots one below the other, considering them as Tkinter widgets.</p>
<p>So far, I have the following, but as you can see the scrollbar is present but doesn't work.
Any insight would be useful.</p>
<pre><code>from matplotlib.backends.backend_tkagg import FigureCanvasTkAgg
from matplotlib.figure import Figure
f = Figure()
a = f.add_subplot(111)
g = Figure()
b = g.add_subplot(111)
from tkinter import *
root=Tk()
frame=Frame(root)
frame.pack(expand=True, fill=BOTH)
canvas=Canvas(frame,bg='#FFFFFF',scrollregion=(0,0,500,500))
vbar=Scrollbar(frame,orient=VERTICAL)
vbar.pack(side=RIGHT,fill=Y)
vbar.config(command=canvas.yview)
canvas.config()
canvas.config(yscrollcommand=vbar.set)
canvas.pack(side=LEFT,expand=True,fill=BOTH)
middle = Frame(canvas, bg="yellow")
middle.pack(side="bottom", expand=True, fill="both")
canvas_1 = FigureCanvasTkAgg(f, middle)
canvas_1.get_tk_widget().pack(expand=True, fill="both")
canvas_1.draw()
bottom = Frame(canvas, bg="blue")
bottom.pack(side="bottom", expand=True, fill="both")
canvas_2 = FigureCanvasTkAgg(g, bottom)
canvas_2.get_tk_widget().pack(expand=True, fill="both")
canvas_2.draw()
root.mainloop()
</code></pre>
<p>which produces:</p>
<p><a href="https://i.sstatic.net/JfPvMo52.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/JfPvMo52.png" alt="enter image description here" /></a></p>
<p><a href="https://i.sstatic.net/f5YSaSO6.png" rel="nofollow noreferrer"><img src="https://i.sstatic.net/f5YSaSO6.png" alt="enter image description here" /></a></p>
|
<python><matplotlib><tkinter>
|
2024-09-25 14:54:56
| 1
| 930
|
Pier Paolo
|
79,023,460
| 11,450,166
|
Handling Circular Imports in Pydantic models with FastAPI
|
<p>I'm developing a <strong>FastAPI</strong> application organized with the following module structure.</p>
<pre><code>...
β βββ modules
β β βββ box
β β β βββ routes.py
β β β βββ services.py
β β β βββ models.py # the sqlalchemy classes
β β β βββ schemas.py # the pydantic schemas
β β βββ toy
β β β βββ routes.py
β β β βββ services.py
β β β βββ models.py
β β β βββ schemas.py
</code></pre>
<p>Each module contains <strong>SQLAlchemy</strong> models, <strong>Pydantic</strong> models (also called schemas), FastAPI routes, and services that handle the business logic.</p>
<p>In this example, I am using two modules that represent boxes and toys. Each toy is stored in one box, and each box contains multiple toys, following a classic <code>1 x N</code> relationship.</p>
<p>With <strong>SQLAlchemy</strong> everything goes well, defining relationships is straightforward by using <code>TYPE_CHECKING</code> to handle circular dependencies:</p>
<pre class="lang-py prettyprint-override"><code># my_app.modules.box.models.py
from sqlalchemy.orm import Mapped, mapped_column, relationship
if TYPE_CHECKING:
from my_app.modules.toy.models import Toy
class Box(Base):
__tablename__ = "box"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
toys: Mapped[list["Toy"]] = relationship(back_populates="box")
</code></pre>
<pre class="lang-py prettyprint-override"><code># my_app.modules.toy.models.py
from sqlalchemy.orm import Mapped, mapped_column, relationship
if TYPE_CHECKING:
from my_app.modules.box.models import Box
class Toy(Base):
__tablename__ = "toy"
id: Mapped[int] = mapped_column(primary_key=True, autoincrement=True)
box: Mapped["Box"] = relationship(back_populates="toys")
</code></pre>
<p>This setup works perfectly without raising any circular import errors. However, I encounter issues when defining the same relationships between <strong>Pydantic</strong> schemas. If I import the modules directly in my <code>schemas.py</code> files,</p>
<pre class="lang-py prettyprint-override"><code># my_app.modules.box.schemas.py
from my_app.modules.toy.schemas import ToyBase
class BoxBase(BaseModel):
id: int
class BoxResponse(BoxBase):
toys: list[ToyBase]
</code></pre>
<pre class="lang-py prettyprint-override"><code># my_app.modules.toy.schemas.py
from my_app.modules.box.schemas import BoxBase
class ToyBase(BaseModel):
id: int
class ToyResponse(ToyBase):
box: BoxBase
</code></pre>
<p>I receive the circular import error:</p>
<pre><code>ImportError: cannot import name 'ToyBase' from partially initialized module 'my_app.modules.toy.schemas' (most likely due to a circular import)...
</code></pre>
<p>I also tried the <strong>SQLAlchemy</strong> approach of <code>TYPE_CHECKING</code> and string annotations:</p>
<pre class="lang-py prettyprint-override"><code># my_app.modules.box.schemas.py
if TYPE_CHECKING:
from my_app.modules.toy.schemas import ToyBase
class BoxBase(BaseModel):
id: int
class BoxResponse(BoxBase):
toys: list["ToyBase"]
</code></pre>
<pre class="lang-py prettyprint-override"><code># my_app.modules.toy.schemas.py
if TYPE_CHECKING:
from my_app.modules.box.schemas import BoxBase
class ToyBase(BaseModel):
id: int
class ToyResponse(ToyBase):
box: "BoxBase"
</code></pre>
<p>But apparently, pydantic doesn't support this:</p>
<pre><code>raise PydanticUndefinedAnnotation.from_name_error(e) from e
pydantic.errors.PydanticUndefinedAnnotation: name 'ToyBase' is not defined
</code></pre>
<p>(<a href="https://stackoverflow.com/questions/79017748/circular-import-issue-with-fastapi-and-pydantic-models">Some answers</a>) suggest that the issue comes from a poor module organization. (<a href="https://github.com/fastapi/fastapi/issues/153" rel="nofollow noreferrer">Others</a>) suggest, too complex and hard to understand solutions.</p>
<p>Maybe I'm wrong but I consider the relationship between <code>Box</code> and <code>Toy</code> something trivial and fundamental that should be manageable in any moderately complex project. For example, a straightforward use case would be to request a toy along with its containing box and vice versa, a box with all its toys. Aren't they legitimate requests?</p>
<h2>So, my question</h2>
<p>How can I define interrelated <strong>Pydantic</strong> schemas (<code>BoxResponse</code> and <code>ToyResponse</code>) that reference each other without encountering circular import errors? I'm looking for a clear and maintainable solution that preserves the independence of the box and toy modules, similar to how relationships are handled in <strong>SQLAlchemy</strong> models. Any suggestions, or at least an explanation of why this is so difficult to achieve?</p>
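<p>For context, the closest I've come is a forward-reference sketch collapsed into one file (assuming Pydantic v2; the <code>toy_schemas</code> import path is hypothetical). In the real project each block would live in its own module, with <code>model_rebuild()</code> called somewhere both modules are already imported, e.g. in <code>routes.py</code>:</p>

```python
from typing import TYPE_CHECKING

from pydantic import BaseModel

# --- box/schemas.py (sketch) ---
if TYPE_CHECKING:
    from toy_schemas import ToyBase  # hypothetical module path

class BoxBase(BaseModel):
    id: int

class BoxResponse(BoxBase):
    toys: list["ToyBase"]  # deferred: ToyBase is not importable here yet

# --- toy/schemas.py (sketch) ---
class ToyBase(BaseModel):
    id: int

class ToyResponse(ToyBase):
    box: "BoxBase"

# --- a module where both schemas are importable (e.g. routes.py) ---
# Resolve the deferred forward references once both names exist:
BoxResponse.model_rebuild()
ToyResponse.model_rebuild()

box = BoxResponse(id=1, toys=[ToyBase(id=2)])
```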
|
<python><sqlalchemy><fastapi><pydantic><circular-dependency>
|
2024-09-25 14:33:18
| 2
| 311
|
Biowav
|
79,023,426
| 7,971,750
|
Older versions of lxml and pandas on WIndows 10 in 2024
|
<p>I've recently run into an issue where I have to install an older version (4.6.2) of lxml to use an older version of pandas. However, when installing via <code>pip install lxml==4.6.2</code>, the wheel simply won't build due to missing libxml2.</p>
<p><a href="https://www.zlatkovic.com/projects/libxml/index.html" rel="nofollow noreferrer">Someone</a> made a port of libxml2 to Windows, and the instructions essentially boil down to "you're already supposed to know how to install it" (thanks, very useful). Extracting binaries from bin of the linked library ports to a $PATH-included directory did nothing.</p>
<p>The commonly linked in older discussions on libxml2, lxml and various issues regarding its install <a href="https://www.lfd.uci.edu/%7Egohlike/" rel="nofollow noreferrer">storage of pre-build wheels</a>, according to archive.org, either redirects to a github link where only modern versions are pre-built (e. g. 5.2.2 for lxml).</p>
<p>Using the aforementioned archive.org, I've been able to find the list of pre-built wheels and install the one I needed, but otherwise, is that it? Just rely on archive.org?</p>
|
<python><pandas><lxml><libxml2>
|
2024-09-25 14:26:45
| 0
| 322
|
bqback
|