Using PaddleOCR with GPU
Hello,
I have been using PaddleOCR for a short time and am completely new to the PaddlePaddle framework. I like what I am seeing so far, but I am confused about how PaddleOCR works, and I am hoping you can help me out.
When running PaddleOCR on my CPU, even with many threads, it is quite slow: it takes about 10 seconds to produce a full readout of a page that Tesseract handles in 0.9 seconds. I would like to run inference on the GPU, but I cannot get the use_gpu parameter to take effect. I have tried a couple of approaches:
1. Using the command-line argument:
As the attached screenshot showed, args.use_gpu was still False despite my explicitly setting it on the command line.
2. Using the PaddleOCR class, which has a namespace member args:
I tried changing the use_gpu flag after initializing a PaddleOCR object, but this made no measurable difference to the inference time, which is still quite high per page. Specifically, I did this:
pocr = PaddleOCR()
pocr.args.use_gpu = True
This had no effect whatsoever; the inference time remains the same as it was on CPU, right down to the millisecond.
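My guess (an assumption on my part, not something I have confirmed in the PaddleOCR source) is that use_gpu is read once during __init__, when the underlying predictor is built, so flipping the flag on an existing object rewires nothing. A toy sketch of that pattern, with FakeOCR as a hypothetical stand-in for PaddleOCR:

```python
class FakeOCR:
    """Toy stand-in for PaddleOCR: the device is chosen once, at construction."""

    def __init__(self, use_gpu: bool = False):
        self.use_gpu = use_gpu
        # The "predictor" is built here and baked to the device chosen above;
        # later changes to self.use_gpu never reach it.
        self.device = "gpu" if use_gpu else "cpu"


ocr = FakeOCR()
ocr.use_gpu = True   # flipping the flag after construction...
print(ocr.device)    # ...does not rebuild the predictor: still "cpu"

ocr_gpu = FakeOCR(use_gpu=True)
print(ocr_gpu.device)  # passing the flag at construction time: "gpu"
```

If that assumption holds, the flag has to be supplied when the object is created, not patched in afterwards.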
Can you please offer some guidance on how to run PaddleOCR on the GPU?
NOTE:
I am already using paddlepaddle-gpu, along with CUDA 11.4. I know that CUDA works on my system, since I can access it from PyTorch.
Okay, never mind: it turns out that paddlepaddle-gpu was not installed correctly. I reinstalled the entire package stack in a fresh environment and now it works just fine.
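For anyone hitting the same wall: before blaming PaddleOCR, it is worth checking whether the installed Paddle wheel is actually a CUDA build. A small sketch (paddle.is_compiled_with_cuda is a real Paddle API; the import guard is just so the snippet degrades gracefully when paddle is absent):

```python
import importlib.util

# Sanity-check the Paddle install. If the CPU-only "paddlepaddle" wheel ended
# up installed instead of "paddlepaddle-gpu", this reports a non-CUDA build.
if importlib.util.find_spec("paddle") is None:
    print("paddle is not installed")
else:
    import paddle

    # True only for a CUDA-enabled wheel (i.e. paddlepaddle-gpu)
    print("CUDA build:", paddle.is_compiled_with_cuda())
```

In my case this would have shown immediately that the GPU wheel was broken, long before any use_gpu flag could matter.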