| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
JaidedAI/EasyOCR | deep-learning | 1,051 | Invalid SOS parameters for sequential JPEG | Getting the error `Invalid SOS parameters for sequential JPEG` with an image captured using a Samsung device.
this is the image. => https://imgur.com/a/T4ncxjF
| open | 2023-06-14T08:06:17Z | 2023-06-14T08:07:08Z | https://github.com/JaidedAI/EasyOCR/issues/1051 | [] | azuddin | 1 |
ultralytics/ultralytics | computer-vision | 19,199 | YOLOv8 OpenVINO C++ inference: abnormal detection | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
model: yolov8n
system: ubuntu20
question: In OpenVINO inference, the category inference works fine, but the bounding boxes are incorrect.




result:

### Additional
_No response_ | closed | 2025-02-12T08:19:22Z | 2025-02-15T05:21:43Z | https://github.com/ultralytics/ultralytics/issues/19199 | [
"question",
"detect",
"exports"
] | yang5757 | 6 |
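A frequent cause of this symptom (classes correct, boxes wrong) in hand-written OpenVINO post-processing is forgetting to undo the letterbox resize before drawing boxes on the original image. The user's C++ code is not shown, so the sketch below is a hypothetical illustration of the mapping, not their implementation; all names are illustrative.

```python
def scale_box(box, model_size, orig_size):
    """Map a box from letterboxed model-input pixels back to the original image.

    box        -- (x1, y1, x2, y2) in model-input pixels
    model_size -- (w, h) of the network input, e.g. (640, 640)
    orig_size  -- (w, h) of the original image
    """
    mw, mh = model_size
    ow, oh = orig_size
    gain = min(mw / ow, mh / oh)      # scale factor used when letterboxing
    pad_x = (mw - ow * gain) / 2      # horizontal padding added by letterbox
    pad_y = (mh - oh * gain) / 2      # vertical padding added by letterbox
    x1, y1, x2, y2 = box
    return ((x1 - pad_x) / gain, (y1 - pad_y) / gain,
            (x2 - pad_x) / gain, (y2 - pad_y) / gain)
```

If the exported model bakes in a plain resize instead of a letterbox, the padding terms drop out and only the per-axis gain remains.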
FlareSolverr/FlareSolverr | api | 273 | 32 bit ARM docker image fails to start | When starting the docker image using docker-compose, I get the following string of errors:
> flaresolverr exited with code 133
flaresolverr |
flaresolverr |
flaresolverr | #
flaresolverr | # Fatal error in , line 0
flaresolverr | # unreachable code
flaresolverr | #
flaresolverr | #
flaresolverr | #
flaresolverr | #FailureMessage Object: 0x7e89358c | closed | 2021-12-27T16:15:25Z | 2022-01-09T14:11:30Z | https://github.com/FlareSolverr/FlareSolverr/issues/273 | [
"fix available"
] | SmartBoy84 | 2 |
davidsandberg/facenet | computer-vision | 484 | Steps to apply this library | Can someone teach me step by step how to use this library to detect faces from the beginning, using Python and Visual Studio Code? Sorry, I am a beginner and was interested in it. Thank you very much | closed | 2017-10-12T17:10:48Z | 2017-10-21T11:28:33Z | https://github.com/davidsandberg/facenet/issues/484 | [] | sy0209 | 1 |
donnemartin/data-science-ipython-notebooks | machine-learning | 88 | Data Science | open | 2022-07-01T22:25:03Z | 2023-03-16T10:41:22Z | https://github.com/donnemartin/data-science-ipython-notebooks/issues/88 | [
"needs-review"
] | SibiyaS | 1 | |
PokemonGoF/PokemonGo-Bot | automation | 5,518 | Mqtt connection refused - not authorised | ### Expected Behavior
With social enabled, I expect to receive a list of pokemon to snipe.
### Actual Behavior
I receive empty list every time.
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
[config](http://pastebin.com/v1DpQ3Hy)
### Output when issue occurred
when enabling mqtt debug:
> rc: 0
> rc: 5
> on_disconnect
> Unexpected disconnection.
> rc: 0
> rc: 5
> on_disconnect
> Unexpected disconnection.
### Steps to Reproduce
Run dev branch with above config.
### Other Information
OS:
Linux Mint 18
Branch:
dev
Git Commit:
b2eb347847bfd23b7e4089f53729b8287b3ddbbc
Python Version:
Python 2.7.12
Any other relevant files/configs (eg: path files)
It was working until yesterday, and it worked once this afternoon in between non-working runs.
Other functions work fine, only mqtt list of pokemon returns empty.
| closed | 2016-09-17T21:00:00Z | 2016-09-22T09:15:31Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5518 | [] | cowst | 9 |
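The `rc` values in the log above are MQTT 3.1.1 CONNACK return codes, and `rc: 5` is the broker refusing the connection as "not authorised" — a credentials/ACL problem on the broker side rather than a bug in the snipe list itself. A small lookup for decoding them:

```python
# MQTT 3.1.1 CONNACK return codes (spec section 3.2.2.3).
CONNACK_CODES = {
    0: "connection accepted",
    1: "unacceptable protocol version",
    2: "identifier rejected",
    3: "server unavailable",
    4: "bad user name or password",
    5: "not authorised",
}

def explain_rc(rc):
    """Translate a CONNACK return code into its spec-defined meaning."""
    return CONNACK_CODES.get(rc, "reserved/unknown code %d" % rc)
```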
cvat-ai/cvat | tensorflow | 8,817 | Making Nuclio UI available when hosting CVAT | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Is your feature request related to a problem? Please describe.
I would like to make the Nuclio UI available when using CVAT.
### Describe the solution you'd like
Make the Nuclio platform UI available at the myurl/nuclio/ endpoint when hosting CVAT with a TLS cert.
### Describe alternatives you've considered
I have looked at the documentation for both APIs and have set rules to deconflict them; I have also tested the following labels in Docker Compose.
With the settings below, I can access the CVAT UI and the Nuclio UI via localhost:8080 and localhost:8080/nuclio/ respectively. The trouble is that when I use an SSL/TLS cert, the redirect does not happen. I am quite new to Traefik and wonder if I made a mistake somewhere?
```
labels:
- traefik.enable=true
- traefik.http.routers.nuclio.service=nuclio
- traefik.http.services.nuclio.loadbalancer.server.port=8070
- traefik.http.routers.nuclio.rule=Host(`${CVAT_HOST:-localhost}`) && (
PathPrefix(`/api/namespaces`) ||
PathPrefix(`/api/frontend_spec`) ||
PathPrefix(`/api/versions`) ||
(PathPrefix(`/api/projects`) && HeadersRegexp(`x-nuclio-project-namespace`, ".+")) ||
PathPrefix(`/api/functions`) ||
PathPrefix(`/api/function_events`) ||
PathPrefix(`/api/function_templates`) ||
PathPrefix(`/assets/i18n`) ||
PathPrefix(`/dashboard-config.json`) ||
PathPrefix(`/nuclio/`)
)
```
When I access via unsecured HTTP, I add the following router:
```
- traefik.http.routers.nuclio.entrypoints=web
```
When accessing via the secured route, I add the websecure entrypoint. However, this does not seem to redirect me.
```
- traefik.http.routers.nuclio.entrypoints=websecure
- traefik.http.routers.dashboard.tls=true
```
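A hedged guess at the mistake: in the websecure snippet above, the TLS label targets a router named `dashboard`, while every other label targets the `nuclio` router, so the `nuclio` router never actually gets TLS enabled. Assuming `nuclio` is the intended router, the matching pair would look like:

```
- traefik.http.routers.nuclio.entrypoints=websecure
- traefik.http.routers.nuclio.tls=true
```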
### Additional context
_No response_ | open | 2024-12-12T00:43:37Z | 2024-12-12T13:26:12Z | https://github.com/cvat-ai/cvat/issues/8817 | [
"enhancement",
"documentation"
] | marmal88 | 0 |
biolab/orange3 | data-visualization | 6,841 | Edit Domain interpret as time: add more dd-mm options |
**What's your use case?**
<img width="659" alt="image" src="https://github.com/biolab/orange3/assets/55989717/9f1691a4-b4cb-4aa3-870b-e777988a51c3">
Currently, the manual options for converting text to date-time include some formats with the month before the day and others with the day before the month. Some day-before-month formats that are popular in certain European countries are missing, e.g. 25-11-2025 00:00:00, as well as variants with "/" or "." instead of "-" and yy instead of yyyy.
**What's your proposed solution?**
Add the missing options to the drop-down menu
**Are there any alternative solutions?**
The alternative would be to apply string manipulations using Formula before Edit Domain
| open | 2024-06-26T10:05:10Z | 2024-11-23T09:20:35Z | https://github.com/biolab/orange3/issues/6841 | [
"meal"
] | wvdvegte | 2 |
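The Formula workaround mentioned in the issue above amounts to parsing day-first strings by hand. A minimal sketch with Python's `datetime.strptime` — the format strings below are assumptions about the desired inputs, not formats Orange currently offers:

```python
from datetime import datetime

# Day-first format strings the drop-down is said to be missing.
FORMATS = [
    "%d-%m-%Y %H:%M:%S",   # 25-11-2025 00:00:00
    "%d/%m/%Y %H:%M:%S",   # 25/11/2025 00:00:00
    "%d.%m.%y %H:%M:%S",   # 25.11.25 00:00:00
]

def parse_day_first(text):
    """Try each day-first format in turn; raise if none match."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(text, fmt)
        except ValueError:
            continue
    raise ValueError("no day-first format matched: %r" % text)
```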
nonebot/nonebot2 | fastapi | 2,615 | Plugin: Human-friendly data configuration | ### PyPI project name
nonebot-plugin-humanaticstore
### Plugin import package name
nonebot_plugin_humanaticstore
### Tags
[{"label":"config","color":"#ea5252"},{"label":"配置工具","color":"#ea5252"}]
### Plugin configuration options
_No response_ | closed | 2024-03-23T14:53:06Z | 2024-03-24T13:16:38Z | https://github.com/nonebot/nonebot2/issues/2615 | [
"Plugin"
] | QuanhuZeYu | 7 |
streamlit/streamlit | data-visualization | 10,610 | st.navigation() not working on Linux but working on Windows | ### Checklist
- [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar issues.
- [x] I added a very descriptive title to this issue.
- [x] I have provided sufficient information below to help reproduce this issue.
### Summary
I'm trying to create a sidebar nav using st.navigation, but it's not showing up on Linux (Ubuntu 20.04 LTS) while it's working fine on Windows.
### Reproducible Code Example
```Python
import streamlit as st
st.session_state['user_pages'] = {
'Report Section': [st.Page(page='links/page1.py', title='Report Creator', icon=':material/calculate:')],
'View Section':[st.Page(page='links/page2.py', title='Report Viewer', icon=':material/heap_snapshot_thumbnail:')],
}
pg = st.navigation(st.session_state['user_pages'])
pg.run()
```
### Steps To Reproduce
Run the above script on Ubuntu 20.04 LTS.
### Expected Behavior
Side bar with navbar section

### Current Behavior
Sidebar without navbar sections on Ubuntu 20.04 LTS

### Is this a regression?
- [x] Yes, this used to work in a previous version.
### Debug info
- Streamlit version: v1.42.2
- Python version: 3.10.12
- Operating System: Ubuntu 20.04 LTS
- Browser: Edge | closed | 2025-03-03T18:11:52Z | 2025-03-04T08:55:49Z | https://github.com/streamlit/streamlit/issues/10610 | [
"type:bug",
"status:needs-triage"
] | amanchaudhary-95 | 1 |
ray-project/ray | tensorflow | 51,349 | Release test map_groups.many_groups (sort_shuffle_pull_based) failed | Release test **map_groups.many_groups (sort_shuffle_pull_based)** failed. See https://buildkite.com/ray-project/release/builds/35758#0195916e-c154-473b-9806-e922721e0873 for more details.
Managed by OSS Test Policy | closed | 2025-03-13T22:07:28Z | 2025-03-18T16:57:56Z | https://github.com/ray-project/ray/issues/51349 | [
"bug",
"P0",
"triage",
"data",
"release-test",
"jailed-test",
"ray-test-bot",
"weekly-release-blocker",
"stability"
] | can-anyscale | 1 |
collerek/ormar | sqlalchemy | 1,123 | use snake case for table names in the database | as we know if the `tablename` is not mentioned in the `Meta` class, the default table name will be `class.__name__.lower()+'s'`
but if the class name is in *CamelCase*, it will create a total mess in the database. apparently, there is no way to automize this process (converting *CamelCase* into *snake_case*) in the `Meta` class, and table names should be hard coded for each class. | open | 2023-06-28T14:58:13Z | 2023-06-28T15:00:16Z | https://github.com/collerek/ormar/issues/1123 | [
"enhancement"
] | kkasra12 | 0 |
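The conversion the issue asks to automate can be sketched in a few lines; the helper below is illustrative (ormar has no such hook per the report), and one would still assign its result to `tablename` manually in each model's `Meta`:

```python
import re

def to_snake_case(name):
    """Convert a CamelCase class name to snake_case, e.g. UserProfile -> user_profile."""
    # Insert an underscore before a capitalized word preceded by any character...
    s = re.sub(r"(.)([A-Z][a-z]+)", r"\1_\2", name)
    # ...then before a capital that follows a lowercase letter or digit.
    return re.sub(r"([a-z0-9])([A-Z])", r"\1_\2", s).lower()
```

For example, `tablename = to_snake_case(SomeModel.__name__) + "s"` would mirror ormar's default pluralization while keeping snake_case.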
Anjok07/ultimatevocalremovergui | pytorch | 1,316 | UVR error | Every time it gets to around 54 percent it gives an error. Is there any fix for this?
| open | 2024-05-02T02:09:28Z | 2024-05-02T02:09:28Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1316 | [] | nosoul08 | 0 |
TencentARC/GFPGAN | pytorch | 371 | Changes eye color | I see it's already been pointed out that the model is too aggressive, which includes changing eye color - blue to brown. | open | 2023-04-21T21:03:50Z | 2024-07-10T09:39:10Z | https://github.com/TencentARC/GFPGAN/issues/371 | [] | FunctionalHarpy | 1 |
tfranzel/drf-spectacular | rest-api | 1,294 | Specify different security for each endpoint | Hello, I was wondering if there was a way to specify a different `security` for each endpoint, so that some endpoints can have the security look like this:
```json
"security": [
{
"ApiKeyAuth": []
}
]
```
And others like this:
```json
"security": [
{
"ApiKeyAuth": [],
"jwtAuth": []
}
]
```
Thank you! | open | 2024-09-14T10:21:09Z | 2024-09-24T09:10:40Z | https://github.com/tfranzel/drf-spectacular/issues/1294 | [] | stefanofusai | 2 |
Miserlou/Zappa | flask | 1,539 | Import issue with PyTorch | Works fine in the local environment, but on deploying to AWS Lambda it gives an error in the Zappa logs.
`import torch` isn't working.
```
import torch
File "/var/task/torch/__init__.py", line 56, in <module>
from torch._C import *
ModuleNotFoundError: No module named 'torch._C'
```
| open | 2018-06-19T16:51:05Z | 2023-02-13T15:29:29Z | https://github.com/Miserlou/Zappa/issues/1539 | [] | ahwankumar | 4 |
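A likely cause, not confirmed in the thread: torch is far larger than Lambda's deployment package limits, so the compiled `torch._C` extension does not make it into the package intact. Zappa's documented `slim_handler` option, which offloads large dependencies to S3, is a commonly suggested mitigation — sketched below for a hypothetical `dev` stage in `zappa_settings.json`; torch may still exceed the limits even then.

```
{
    "dev": {
        "slim_handler": true
    }
}
```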
ultralytics/yolov5 | pytorch | 12,796 | What does the number of iterations of the c3 module mean in yolov5? | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
What does the number of iterations of the c3 module mean in yolov5?
(To be precise, it seems to be the number of bottlenecks.)

thanks!
### Additional
_No response_ | closed | 2024-03-07T05:21:04Z | 2024-10-20T19:40:57Z | https://github.com/ultralytics/yolov5/issues/12796 | [
"question",
"Stale"
] | ghkdtkddl | 7 |
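To restate the question's premise: the "number" column in the model yaml is indeed the repeat count — for C3, how many bottlenecks are stacked inside the module — and it is scaled by the model's `depth_multiple` when the yaml is parsed. A sketch of the scaling rule as I recall it from yolov5's `parse_model()` (verify against models/yolo.py):

```python
def effective_repeats(n, depth_multiple):
    """Scale the yaml 'number' column by depth_multiple, keeping at least 1 repeat."""
    return max(round(n * depth_multiple), 1) if n > 1 else n
```

So with yolov5n's `depth_multiple` of 0.33, a C3 listed with `number=9` actually runs 3 bottlenecks.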
aiogram/aiogram | asyncio | 1,129 | "Task exception was never retrieved" when calling function "bot.send_photo" (I/O operation on closed file) | ### Checklist
- [X] I am sure the error is coming from aiogram code
- [X] I have searched in the issue tracker for similar bug reports, including closed ones
### Operating system
CentOS Linux release 7.9.2009
### Python version
3.8.13
### aiogram version
2.25.1
### Expected behavior
Execution of function "bot.send_photo" without errors.
### Current behavior
the "bot.send_photo" function gives an error "Task exception was never retrieved" in 50% of cases
### Steps to reproduce
The error is intermittent and repeats in 50% of cases.
### Code example
```python3
async def get_user_photo(user):
photo, log_text = None, f'Member ID:{user.id} profile photo'
try:
upp = await bot.get_user_profile_photos(user.id)
logging.debug('Found %d photos', upp.total_count)
if upp.total_count > 0:
file_id = upp.photos[0][0].file_id
logging.debug('Photo file_id: %s', file_id)
file = await bot.get_file(file_id)
logging.debug('Photo path: %s', file.file_path)
photo = await bot.download_file(file.file_path)
except exceptions.TelegramAPIError as err:
log_error(log_text, err)
except ClientResponseError as err:
log_error(log_text, err)
if photo is None:
log_text += ' is not available'
else:
log_text += ' downloaded successfully'
logging.info(log_text)
return photo
async def send_message(user, chat_type, title):
text = get_chat_title(chat_type, title)
text += get_user_text(user)
photo = await get_user_photo(user)
uids = get_enabled_uids()
for user_id in uids:
if photo is not None:
await bot.send_photo(user_id, photo, text)
else:
await bot.send_message(user_id, text)
logging.debug('Message about new member ID:%d sent to user ID:%d',
user.id, user_id)
```
### Logs
```sh
2023-02-16 12:31:24,611 - root - INFO - New member in channel "Test": {"user": {"id": 1854593435, "is_bot": false, "first_name": "Example", "username": "dvdstore"}, "status": "member"}
2023-02-16 12:31:24,666 - root - DEBUG - Found 1 photos
2023-02-16 12:31:24,667 - root - DEBUG - Photo file_id: AgACAgIAAxUAAWPt9-zQB19Kxtd2somtnQYVDfQxAAK3tzEbBahwS2qkEc-vyAzpAQADAgADYQADLgQ
2023-02-16 12:31:24,719 - root - DEBUG - Photo path: photos/file_33.jpg
2023-02-16 12:31:24,757 - root - INFO - Member ID:1854593435 profile photo downloaded successfully
2023-02-16 12:31:24,980 - root - DEBUG - Message about new member ID:1854593435 sent to user ID:1134534455
2023-02-16 12:31:24,982 - asyncio - ERROR - Task exception was never retrieved
future: <Task finished name='Task-1547' coro=<Dispatcher._process_polling_updates() done, defined at /opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/dispatcher/dispatcher.py:407> exception=ValueError('I/O operation on closed file.')>
Traceback (most recent call last):
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/dispatcher/dispatcher.py", line 415, in _process_polling_updates
for responses in itertools.chain.from_iterable(await self.process_updates(updates, fast)):
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/dispatcher/dispatcher.py", line 235, in process_updates
return await asyncio.gather(*tasks)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/dispatcher/handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/dispatcher/dispatcher.py", line 306, in process_update
return await self.chat_member_handlers.notify(update.chat_member)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/dispatcher/handler.py", line 117, in notify
response = await handler_obj.handler(*args, **partial_data)
File "/opt/nirn2-bot/mod_inspector.py", line 163, in message_chat_member
await send_message(new.user, chat_type, cmu.chat.title)
File "/opt/nirn2-bot/mod_inspector.py", line 138, in send_message
await bot.send_photo(user_id, photo, text)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/bot/bot.py", line 565, in send_photo
result = await self.request(api.Methods.SEND_PHOTO, payload, files)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/bot/base.py", line 236, in request
return await api.make_request(await self.get_session(), self.server, self.__token, method, data, files,
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiogram/bot/api.py", line 139, in make_request
async with session.post(url, data=req, **kwargs) as response:
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/client.py", line 1141, in __aenter__
self._resp = await self._coro
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/client.py", line 508, in _request
req = self._request_class(
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/client_reqrep.py", line 313, in __init__
self.update_body_from_data(data)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/client_reqrep.py", line 505, in update_body_from_data
body = body()
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/formdata.py", line 170, in __call__
return self._gen_form_data()
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/formdata.py", line 163, in _gen_form_data
self._writer.append_payload(part)
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/multipart.py", line 829, in append_payload
size = payload.size
File "/opt/nirn2-bot/telegram/lib64/python3.8/site-packages/aiohttp/payload.py", line 369, in size
position = self._value.tell()
ValueError: I/O operation on closed file.
```
### Additional information
I also found out through the debugger that the "bot.download_file" function returns an object of the type "io.BytesIO", and not a file. So at first glance, everything should be fine. | closed | 2023-02-16T10:01:21Z | 2023-08-06T14:47:44Z | https://github.com/aiogram/aiogram/issues/1129 | [
"bug"
] | rustequal | 1 |
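A hedged reading of the traceback above: the `io.BytesIO` returned by `download_file` is consumed (and apparently closed) after the first `send_photo` in the loop over `uids`, so the second recipient hits `I/O operation on closed file`. One workaround sketch (the helper name is illustrative, not aiogram API): copy the raw bytes out once and hand each send its own fresh buffer.

```python
import io

def fresh_buffers(photo, n):
    """Return n independent, open BytesIO copies of the downloaded photo."""
    data = photo.getvalue()              # extract the raw bytes once
    return [io.BytesIO(data) for _ in range(n)]
```

In `send_message`, that would mean calling `bot.send_photo(user_id, io.BytesIO(data), text)` per recipient instead of re-sending the same object.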
hbldh/bleak | asyncio | 1,090 | Is it possible to call async function more than once at certain time interval? | * bleak version:Latest
* Python version:3.8.10
* Operating System: Linux Mint-64bit
* BlueZ version (`bluetoothctl -v`) in case of Linux: 5.56
### Description
I realized that calling the async function more than once, or creating two separate async functions, gives a runtime error "event loop is already closed". While searching for a solution I found the suggestion to move everything with await into one function. But I may want to create another async function, or call the same function more than once, to see whether the device is discovered or new devices are available.
| closed | 2022-10-24T15:37:30Z | 2022-10-24T15:41:03Z | https://github.com/hbldh/bleak/issues/1090 | [] | ujur007 | 0 |
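A minimal sketch of why this happens and the usual single-entry-point pattern (function bodies here are illustrative stand-ins, not bleak API): `asyncio.run()` closes its event loop when it returns, so objects created under one `run()` cannot be awaited under a second. Awaiting the same coroutine function repeatedly inside one `run()` is fine.

```python
import asyncio

async def scan_once(i):
    await asyncio.sleep(0)        # stand-in for e.g. BleakScanner.discover()
    return f"scan {i} done"

async def main():
    results = []
    for i in range(3):            # repeat at whatever interval you need
        results.append(await scan_once(i))
    return results

# Single asyncio.run(); the loop stays open for every await inside main().
results = asyncio.run(main())
```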
randyzwitch/streamlit-folium | streamlit | 72 | st_folium keeps reloading Streamlit page till page crashes | The Streamlit page keeps reloading every 3 seconds until it crashes. For now I use st_static, which is not recommended?
`st_data = st_folium(m, key='map',width = 650, height = 600)`
Any idea what is happening? | closed | 2022-06-23T13:52:40Z | 2023-07-06T16:47:07Z | https://github.com/randyzwitch/streamlit-folium/issues/72 | [] | GSkrt | 13 |
CorentinJ/Real-Time-Voice-Cloning | tensorflow | 1,298 | Error installing on Win10 x64 | I got this error when I ran the following command on the command line:
```pip install -r requirements.txt```
How do I fix this?
> Preparing metadata (pyproject.toml) ... error
> error: subprocess-exited-with-error
>
> × Preparing metadata (pyproject.toml) did not run successfully.
> │ exit code: 1
> ╰─> [264 lines of output]
> setup.py:66: RuntimeWarning: NumPy 1.20.3 may not yet support Python 3.10.
> warnings.warn(
> Running from numpy source directory.
> setup.py:485: UserWarning: Unrecognized setuptools command, proceeding with generating Cython sources and expanding templates
> run_build = parse_setuppy_commands()
> Processing numpy/random\_bounded_integers.pxd.in
> Processing numpy/random\bit_generator.pyx
> Processing numpy/random\mtrand.pyx
> Processing numpy/random\_bounded_integers.pyx.in
> Processing numpy/random\_common.pyx
> Processing numpy/random\_generator.pyx
> Processing numpy/random\_mt19937.pyx
> Processing numpy/random\_pcg64.pyx
> Processing numpy/random\_philox.pyx
> Processing numpy/random\_sfc64.pyx
> Cythonizing sources
> blas_opt_info:
> blas_mkl_info:
> No module named 'numpy.distutils._msvccompiler' in numpy.distutils; trying from distutils
> customize MSVCCompiler
> libraries mkl_rt not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> blis_info:
> libraries blis not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> openblas_info:
> libraries openblas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> get_default_fcompiler: matching types: '['gnu', 'intelv', 'absoft', 'compaqv', 'intelev', 'gnu95', 'g95', 'intelvem', 'intelem', 'flang']'
> customize GnuFCompiler
> Could not locate executable g77
> Could not locate executable f77
> customize IntelVisualFCompiler
> Could not locate executable ifort
> Could not locate executable ifl
> customize AbsoftFCompiler
> Could not locate executable f90
> customize CompaqVisualFCompiler
> Could not locate executable DF
> customize IntelItaniumVisualFCompiler
> Could not locate executable efl
> customize Gnu95FCompiler
> Could not locate executable gfortran
> Could not locate executable f95
> customize G95FCompiler
> Could not locate executable g95
> customize IntelEM64VisualFCompiler
> customize IntelEM64TFCompiler
> Could not locate executable efort
> Could not locate executable efc
> customize PGroupFlangCompiler
> Could not locate executable flang
> don't know how to compile Fortran code on platform 'nt'
> NOT AVAILABLE
>
> atlas_3_10_blas_threads_info:
> Setting PTATLAS=ATLAS
> libraries tatlas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> atlas_3_10_blas_info:
> libraries satlas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> atlas_blas_threads_info:
> Setting PTATLAS=ATLAS
> libraries ptf77blas,ptcblas,atlas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> atlas_blas_info:
> libraries f77blas,cblas,atlas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\system_info.py:1989: UserWarning:
> Optimized (vendor) Blas libraries are not found.
> Falls back to netlib Blas library which has worse performance.
> A better performance should be easily gained by switching
> Blas library.
> if self._calc_info(blas):
> blas_info:
> libraries blas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\system_info.py:1989: UserWarning:
> Blas (http://www.netlib.org/blas/) libraries not found.
> Directories to search for the libraries can be specified in the
> numpy/distutils/site.cfg file (section [blas]) or by setting
> the BLAS environment variable.
> if self._calc_info(blas):
> blas_src_info:
> NOT AVAILABLE
>
> C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\system_info.py:1989: UserWarning:
> Blas (http://www.netlib.org/blas/) sources not found.
> Directories to search for the sources can be specified in the
> numpy/distutils/site.cfg file (section [blas_src]) or by setting
> the BLAS_SRC environment variable.
> if self._calc_info(blas):
> NOT AVAILABLE
>
> non-existing path in 'numpy\\distutils': 'site.cfg'
> lapack_opt_info:
> lapack_mkl_info:
> libraries mkl_rt not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> openblas_lapack_info:
> libraries openblas not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> openblas_clapack_info:
> libraries openblas,lapack not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> flame_info:
> libraries flame not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> atlas_3_10_threads_info:
> Setting PTATLAS=ATLAS
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries tatlas,tatlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries lapack_atlas not found in C:\
> libraries tatlas,tatlas not found in C:\
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> libraries tatlas,tatlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> <class 'numpy.distutils.system_info.atlas_3_10_threads_info'>
> NOT AVAILABLE
>
> atlas_3_10_info:
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries satlas,satlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries lapack_atlas not found in C:\
> libraries satlas,satlas not found in C:\
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> libraries satlas,satlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> <class 'numpy.distutils.system_info.atlas_3_10_info'>
> NOT AVAILABLE
>
> atlas_threads_info:
> Setting PTATLAS=ATLAS
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries ptf77blas,ptcblas,atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries lapack_atlas not found in C:\
> libraries ptf77blas,ptcblas,atlas not found in C:\
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> libraries ptf77blas,ptcblas,atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> <class 'numpy.distutils.system_info.atlas_threads_info'>
> NOT AVAILABLE
>
> atlas_info:
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries f77blas,cblas,atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib
> libraries lapack_atlas not found in C:\
> libraries f77blas,cblas,atlas not found in C:\
> libraries lapack_atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> libraries f77blas,cblas,atlas not found in C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\libs
> <class 'numpy.distutils.system_info.atlas_info'>
> NOT AVAILABLE
>
> lapack_info:
> libraries lapack not found in ['C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\lib', 'C:\\', 'C:\\Users\\MyUserName\\AppData\\Local\\Programs\\Python\\Python310\\libs']
> NOT AVAILABLE
>
> C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\system_info.py:1849: UserWarning:
> Lapack (http://www.netlib.org/lapack/) libraries not found.
> Directories to search for the libraries can be specified in the
> numpy/distutils/site.cfg file (section [lapack]) or by setting
> the LAPACK environment variable.
> return getattr(self, '_calc_info_{}'.format(name))()
> lapack_src_info:
> NOT AVAILABLE
>
> C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\system_info.py:1849: UserWarning:
> Lapack (http://www.netlib.org/lapack/) sources not found.
> Directories to search for the sources can be specified in the
> numpy/distutils/site.cfg file (section [lapack_src]) or by setting
> the LAPACK_SRC environment variable.
> return getattr(self, '_calc_info_{}'.format(name))()
> NOT AVAILABLE
>
> numpy_linalg_lapack_lite:
> FOUND:
> language = c
> define_macros = [('HAVE_BLAS_ILP64', None), ('BLAS_SYMBOL_SUFFIX', '64_')]
>
> C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\dist.py:275: UserWarning: Unknown distribution option: 'define_macros'
> warnings.warn(msg)
> running dist_info
> running build_src
> build_src
> building py_modules sources
> creating build
> creating build\src.win-amd64-3.10
> creating build\src.win-amd64-3.10\numpy
> creating build\src.win-amd64-3.10\numpy\distutils
> building library "npymath" sources
> Traceback (most recent call last):
> File "C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
> main()
> File "C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
> json_out['return_val'] = hook(**hook_input['kwargs'])
> File "C:\Users\MyUserName\AppData\Local\Programs\Python\Python310\lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 149, in prepare_metadata_for_build_wheel
> return hook(metadata_directory, config_settings)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\build_meta.py", line 157, in prepare_metadata_for_build_wheel
> self.run_setup()
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\build_meta.py", line 248, in run_setup
> super(_BuildMetaLegacyBackend,
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\build_meta.py", line 142, in run_setup
> exec(compile(code, __file__, 'exec'), locals())
> File "setup.py", line 513, in <module>
> setup_package()
> File "setup.py", line 505, in setup_package
> setup(**metadata)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\core.py", line 169, in setup
> return old_setup(**new_attr)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\__init__.py", line 165, in setup
> return distutils.core.setup(**attrs)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\core.py", line 148, in setup
> dist.run_commands()
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 967, in run_commands
> self.run_command(cmd)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
> cmd_obj.run()
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\command\dist_info.py", line 31, in run
> egg_info.run()
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\egg_info.py", line 24, in run
> self.run_command("build_src")
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\cmd.py", line 313, in run_command
> self.distribution.run_command(command)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\dist.py", line 986, in run_command
> cmd_obj.run()
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\build_src.py", line 144, in run
> self.build_sources()
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\build_src.py", line 155, in build_sources
> self.build_library_sources(*libname_info)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\build_src.py", line 288, in build_library_sources
> sources = self.generate_sources(sources, (lib_name, build_info))
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\build_src.py", line 378, in generate_sources
> source = func(extension, build_dir)
> File "numpy\core\setup.py", line 671, in get_mathlib_info
> st = config_cmd.try_link('int main(void) { return 0;}')
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 243, in try_link
> self._link(body, headers, include_dirs,
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\config.py", line 162, in _link
> return self._wrap_method(old_config._link, lang,
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\config.py", line 96, in _wrap_method
> ret = mth(*((self,)+args))
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 137, in _link
> (src, obj) = self._compile(body, headers, include_dirs, lang)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\config.py", line 105, in _compile
> src, obj = self._wrap_method(old_config._compile, lang,
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\command\config.py", line 96, in _wrap_method
> ret = mth(*((self,)+args))
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\command\config.py", line 132, in _compile
> self.compiler.compile([src], include_dirs=include_dirs)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 401, in compile
> self.spawn(args)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-build-env-d7rdrnsm\overlay\Lib\site-packages\setuptools\_distutils\_msvccompiler.py", line 505, in spawn
> return super().spawn(cmd, env=env)
> File "C:\Users\MyUserName\AppData\Local\Temp\pip-install-82eggieg\numpy_518742ed255d4791b03e0e4ce63becbe\numpy\distutils\ccompiler.py", line 90, in <lambda>
> m = lambda self, *args, **kw: func(self, *args, **kw)
> TypeError: CCompiler_spawn() got an unexpected keyword argument 'env' | open | 2024-05-15T18:41:28Z | 2024-05-15T18:41:28Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1298 | [] | oghenez | 0 |
gradio-app/gradio | data-visualization | 9,999 | Error in Browser-Console: heartbeat/nsi77w2je2 net::ERR_INCOMPLETE_CHUNKED_ENCODING 200 | ### Describe the bug
We deploy our Gradio chatbot with AWS App Runner.
On App Runner, there is a 120s timeout:
> There is a total of 120 seconds request timeout limit on the HTTP requests. The 120 seconds include the time the application takes to read the request, including the body, and complete writing the HTTP response.
See https://docs.aws.amazon.com/apprunner/latest/dg/develop.html
The JS-error occurs only on the remote AWS App Runner environment, linked to the App Runner timeout.
When the page loads initially, an initial Server-Sent Events (SSE) connection is opened but remains open without closing. After 120 seconds, the connection hits the App Runner timeout and disconnects, causing a console error (e.g., `net::ERR_INCOMPLETE_CHUNKED_ENCODING` in Chrome or `NS_ERROR_NET_PARTIAL_TRANSFER` in Firefox).
This initial connection is separate from connections opened and closed for each chat message.
My analysis so far:
* This behavior seems to have started since [this commit](https://github.com/gradio-app/gradio/commit/450b8cc898f130f15caa3742f65c17b9f7a8f398#diff-ee40ceafb7d58e3c6c84334218bafe1351f942a6eb124d1cd81bde3ef5448179R151), which introduces repeated calls to the connect method [here](https://github.com/gradio-app/gradio/blob/main/js/spa/src/Index.svelte#L289) for the client.
* It’s unclear if/when this initial connection is closed. It appears that nothing significant is transmitted over this initial SSE connection, which mostly pings the "heartbeat" endpoint to check if the application is alive.
* The heartbeat endpoint returns a regular status (e.g., "ALIVE") but may not do anything with this response. After 120 seconds, the stream times out and triggers the error
There is no observed impact on chatbot functionality, but we don't really want to deploy anything that causes JS-errors.
### Have you searched existing issues? 🔎
- [X] I have searched and found no existing issues
### Reproduction
It is a bit tricky to reproduce this error without having to deploy on AWS App Runner.
However, you can just use this simple demo:
```python
import random
import gradio as gr
def random_response(message, history):
return random.choice(["Yes", "No"])
gr.ChatInterface(random_response, type="messages").launch()
```
And then set a breakpoint under "Sources" in "client/js/src/client.ts" [here](https://github.com/gradio-app/gradio/blob/2e2cdbfb609ca992ccc31bb38589486aaaa14012/client/js/src/client.ts#L220).
Then you can see that an initial connection is opened on the initial page load.
The comment there even says:
`// Just connect to the endpoint without parsing the response. Ref: https://github.com/gradio-app/gradio/pull/7974#discussion_r1557717540`
### Screenshot

### Logs
```shell
dummy/gradio_api/heartbeat/j3sx9yr4dk:1
Failed to load resource: net::ERR_INCOMPLETE_CHUNKED_ENCODING
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Darwin
gradio version: 5.5.0
gradio_client version: 1.4.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.6.2.post1
audioop-lts is not installed.
fastapi: 0.115.4
ffmpy: 0.4.0
gradio-client==1.4.2 is not installed.
httpx: 0.27.2
huggingface-hub: 0.26.2
jinja2: 3.1.4
markupsafe: 2.1.5
numpy: 2.1.3
orjson: 3.10.11
packaging: 24.2
pandas: 2.2.3
pillow: 11.0.0
pydantic: 2.9.2
pydub: 0.25.1
python-multipart==0.0.12 is not installed.
pyyaml: 6.0.2
ruff: 0.7.3
safehttpx: 0.1.1
semantic-version: 2.10.0
starlette: 0.41.2
tomlkit==0.12.0 is not installed.
typer: 0.13.0
typing-extensions: 4.12.2
urllib3: 2.2.3
uvicorn: 0.32.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2024.10.0
httpx: 0.27.2
huggingface-hub: 0.26.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 12.0
```
### Severity
I can work around it | open | 2024-11-19T15:22:38Z | 2025-02-28T14:09:31Z | https://github.com/gradio-app/gradio/issues/9999 | [
"bug",
"cloud"
] | machnicole | 0 |
tortoise/tortoise-orm | asyncio | 1,437 | Cannot drop postgres test db | **Describe the bug**
finalizer() fails with
```
E asyncpg.exceptions.ObjectInUseError: database "test_db" is being accessed by other users
E DETAIL: There is 1 other session using the database.
```
**To Reproduce**
```
# `models` and `test_db_url` are defined elsewhere in the test suite
import asyncio

import pytest
from tortoise.contrib.test import finalizer, initializer


@pytest.fixture(scope="session")
def event_loop():
policy = asyncio.get_event_loop_policy()
loop = policy.new_event_loop()
yield loop
loop.close()
@pytest.fixture(scope="session", autouse=True)
def initialize_tests(request, event_loop):
initializer(models, db_url=test_db_url)
request.addfinalizer(finalizer)
```
**Expected behavior**
I expect the test db to be dropped as finalizer is calling connections.close_all() before trying to delete the db
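Until the lingering connection is tracked down, a possible stop-gap is to force-close other sessions on the test database before dropping it. This is plain Postgres (`pg_stat_activity` / `pg_terminate_backend`, available since 9.2), not part of Tortoise's API; the helper below only builds the SQL and its name is made up for illustration:

```python
def terminate_sessions_sql(db_name: str) -> str:
    """SQL that force-closes all *other* sessions on `db_name` (Postgres 9.2+).

    Run it over any connection to a different database (e.g. `postgres`)
    right before `DROP DATABASE`.
    """
    return (
        "SELECT pg_terminate_backend(pid) FROM pg_stat_activity "
        f"WHERE datname = '{db_name}' AND pid <> pg_backend_pid()"
    )
```

Executing this statement via asyncpg (or psql) before the drop should clear the "database is being accessed by other users" error, at the cost of killing whatever session was still attached.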
| open | 2023-07-25T00:04:44Z | 2023-07-25T00:04:44Z | https://github.com/tortoise/tortoise-orm/issues/1437 | [] | adlmtl | 0 |
clovaai/donut | nlp | 196 | Signature Detection using DONUT Model - again this damn bounding box | # Signature Detection using DONUT Model
Hello everyone!
## Objective
I wanted to discuss a topic related to the DONUT model in the context of signature detection. While DONUT was not originally designed for bounding box detection, I have reviewed the past issues regarding this and found a possible solution mentioned in the paper - incorporating cross-attention.
## Background
My objective is slightly different as I aim to detect signatures on documents. Although I already have a YOLO model that performs well for this task, I am interested in developing a model that can also comprehend the semantic context, enabling it to better understand which pages require signatures.
## Proposed Modifications
To achieve this, I thought about the following modifications to the model's architecture:
1. **Position Embedding**: I plan to change the position embedding from relative to absolute. By using absolute position embedding, the model can gain better spatial information about objects.
2. **Patch Division**: Currently, the Swin transformer divides the image into 4x4 patches and uses patch positions as (x_min, y_min, x_max, y_max) coordinates.
3. **Learning Objective**: Instead of relying solely on cross-entropy loss for fine-tuning tasks, I believe it would be more suitable to adjust the learning objective for detecting bounding boxes. Bounding boxes that deviate significantly from the ground truth should be penalized more than those that are closer.
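To illustrate the third point with numbers: an IoU-style objective naturally penalizes boxes in proportion to how far they deviate from the ground truth. The snippet below is plain Python for illustration only (a real training objective would be implemented in a tensor framework and made differentiable, e.g. as GIoU):

```python
def iou(box_a, box_b):
    # boxes given as (x_min, y_min, x_max, y_max)
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def iou_loss(pred, target):
    # 1 - IoU: 0 for a perfect box, approaching 1 as overlap vanishes
    return 1.0 - iou(pred, target)
```

A slightly shifted prediction incurs a small loss, while a far-off one approaches the maximum, which is exactly the graded penalty that plain token-level cross-entropy does not provide.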
## Pretrained Weights
Considering these modifications, I am unsure whether it would be reasonable to use the pretrained weights (donut/base) or if training from scratch would be more appropriate.
## Alternative Approach
Alternatively, I am also considering a simpler method to explore:
- **Finetuning DONUT on Classification**: I can finetune DONUT on a classification task where documents are labeled as either containing a signature or not. In this case, cross-entropy loss could be used to detect the presence of a signature.
## Feedback
I would greatly appreciate your thoughts and feedback on these ideas.
Please feel free to contribute to the discussion and share your opinions. Let's work together to enhance signature detection using the DONUT model! | open | 2023-05-24T21:06:05Z | 2023-05-25T16:12:28Z | https://github.com/clovaai/donut/issues/196 | [] | AmT42 | 2 |
encode/databases | asyncio | 259 | aiopg reports multiple values for keyword argument "password" when password supplied as kv arg | Hello, I have the following code:
```python
self.database = databases.Database(
    f'{self.db_engine}://{self.db_user}@{self.db_host}:{self.db_port}/{self.db_database}',
    password=os.getenv("DB_PASS"), min_size=1, max_size=100,
)
```

When I execute that, I get the following error in the trace; I am not sure if it is something on my side or an actual issue:

```
  File "/home/vito/project/venv/lib/python3.8/site-packages/databases/backends/aiopg.py", line 73, in connect
    self._pool = await aiopg.create_pool(
TypeError: create_pool() got multiple values for keyword argument 'password'
```
Given that `self.db_engine = 'postgresql+aiopg'`, it seems to me it could be related to how the `databases` module uses the `aiopg` `create_pool()` call and translates the parameters. But I can't be sure, so I am posting here.
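For what it's worth, the `TypeError` pattern itself is reproducible in plain Python: if parsed URL options are forwarded as `**kwargs` while the explicit `password=` keyword is also forwarded, the call collapses exactly like this (a mechanism sketch of how such duplication fails, not the actual `databases` internals):

```python
def create_pool(dsn=None, password=None, **kwargs):
    # stand-in for aiopg.create_pool()
    return {"dsn": dsn, "password": password, **kwargs}

# hypothetical dict of options that already carries a password entry
url_options = {"password": "from-url-or-kwarg-merge"}

try:
    create_pool(dsn="dbname=test", password="explicit", **url_options)
except TypeError as exc:
    print(exc)  # create_pool() got multiple values for keyword argument 'password'
```

So a workaround, if this is indeed the mechanism, is to supply the password in only one place (either in the URL or as a keyword, not both).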
Cheers | open | 2020-10-26T14:11:52Z | 2024-08-20T09:21:02Z | https://github.com/encode/databases/issues/259 | [] | vitodsk | 1 |
HumanSignal/labelImg | deep-learning | 193 | UnicodeDecodeError while installing on windows7 | Unable to install on Windows machine.
- **OS:** Windows 7
- **Qt version:** 5.6.2
- **SIP version:** 4.18
- **PyQt version:** 5.6
```
UnicodeDecodeError: 'charmap' codec can't decode byte 0x90 in position 5059: character maps to <undefined>
```
| closed | 2017-11-07T13:06:49Z | 2018-09-02T09:49:19Z | https://github.com/HumanSignal/labelImg/issues/193 | [] | shravankumar147 | 2 |
BayesWitnesses/m2cgen | scikit-learn | 94 | gcc crashes compiling output | Similar to #88, but in my case the problem isn't the size of the binary but that gcc sometimes runs out of memory building the C output.
Could the output be broken up over multiple files to reduce compiler memory use? | closed | 2019-08-01T14:04:06Z | 2019-08-23T21:06:21Z | https://github.com/BayesWitnesses/m2cgen/issues/94 | [] | beojan | 4 |
ScrapeGraphAI/Scrapegraph-ai | machine-learning | 549 | Unable to join discord on readme |  | closed | 2024-08-14T16:52:05Z | 2024-08-15T02:50:05Z | https://github.com/ScrapeGraphAI/Scrapegraph-ai/issues/549 | [] | angelotc | 1 |
jazzband/django-oauth-toolkit | django | 604 | Is Resource owner flow possible or not and if yes how? | I'm coming from #581 which involves some knowledge and some questions that are kinda off topic there and therefore I made a new issue.
The problem I have is that the documentation is too focused on using some HTML views that ship with this plugin. My goal is to create a single-page application that authorizes itself in a similar way as any other potential client would via OAuth2. For this I basically want to follow the resource owner password credentials flow as described in the RFC here: https://tools.ietf.org/html/rfc6749#section-4.3
So I want to
1. let user enter credentials in a form
2. make a POST request with those credentials
3. receive the oauth token as a response when the credentials are correct
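For reference, step 2 is just a form-encoded POST to the token endpoint per RFC 6749 §4.3. The sketch below only builds that request and does not send it; the `/o/token/` path and the client credentials are assumptions based on DOT's defaults and may differ per setup:

```python
import base64

def build_token_request(username, password, client_id, client_secret,
                        token_url="https://example.com/o/token/"):
    """Build the RFC 6749 §4.3 access-token request (not sent here)."""
    creds = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    return {
        "url": token_url,
        "headers": {
            "Authorization": f"Basic {creds}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "data": {
            "grant_type": "password",
            "username": username,
            "password": password,
        },
    }
```

Posting this payload with any HTTP client should return a JSON access token, provided the registered application has the password grant type enabled.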
Is this even possible with django-oauth-toolkit, or is DOT not meant to be used for this flow? Please explain how, if the answer is that it's possible. I'm happy to PR understandable documentation once things are clear. | closed | 2018-06-13T21:50:54Z | 2018-10-23T16:52:08Z | https://github.com/jazzband/django-oauth-toolkit/issues/604 | [] | ohcibi | 2 |
home-assistant/core | asyncio | 141,172 | [Overkiz] - Battery unknown for all equipments | ### The problem
I see that all my equipment in the Overkiz integration shows an unknown value for the battery sensor.
### What version of Home Assistant Core has the issue?
core-2025.3.4
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
Overkiz
### Link to integration documentation on our website
_No response_
### Diagnostics information
See example in the diagnostic file.
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
[overkiz-c3870f84f152a0449c3a9448c042aa20-Capteur de fumée Étage-015a934295c501564fa0aa50a13a8cba (1).json](https://github.com/user-attachments/files/19407689/overkiz-c3870f84f152a0449c3a9448c042aa20-Capteur.de.fumee.Etage-015a934295c501564fa0aa50a13a8cba.1.json) | open | 2025-03-23T07:59:46Z | 2025-03-23T08:39:06Z | https://github.com/home-assistant/core/issues/141172 | [
"integration: overkiz"
] | alsmaison | 1 |
nolar/kopf | asyncio | 286 | adopt(V1Secret) fails | > <a href="https://github.com/fsniper"><img align="left" height="50" src="https://avatars0.githubusercontent.com/u/803734?v=4"></a> An issue by [fsniper](https://github.com/fsniper) at _2020-01-03 16:24:10+00:00_
> Original URL: https://github.com/zalando-incubator/kopf/issues/286
>
## Long story short
When trying to adopt a V1Secret object, adoption fails with
```AttributeError: 'V1Secret' object has no attribute 'setdefault'```
## Description
<details><summary>The code snippet to reproduce the issue</summary>
```python
metadata = kubernetes.client.V1ObjectMeta(name = meta['name'])
payload = {
'HOST': os.environ['HOST'],
}
data = kubernetes.client.V1Secret(string_data=payload, metadata=metadata)
kopf.adopt(data)
```
</details>
<details><summary>The exact command to reproduce the issue</summary>
```bash
kopf run ...
```
</details>
<details><summary>The full output of the command that failed</summary>
```
[2020-01-03 16:09:15,471] kopf.objects [ERROR ] [test/example-azure-postgresql-database] Handler 'on_create_handler' failed with an exception. Will retry.
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 523, in _execute_handler
lifecycle=lifecycle, # just a default for the sub-handlers, not used directly.
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/handling.py", line 612, in _call_handler
**kwargs,
File "/usr/local/lib/python3.7/site-packages/kopf/reactor/invocation.py", line 115, in invoke
result = await loop.run_in_executor(config.WorkersConfig.get_syn_executor(), real_fn)
File "/usr/local/Cellar/python/3.7.5/Frameworks/Python.framework/Versions/3.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run
result = self.fn(*self.args, **self.kwargs)
File "apg_op.py", line 62, in on_create_handler
create_secret(meta, spec, dbname, roles)
File "apg_op.py", line 28, in create_secret
kopf.adopt(data)
File "/usr/local/lib/python3.7/site-packages/kopf/toolkits/hierarchies.py", line 139, in adopt
append_owner_reference(objs, owner=real_owner)
File "/usr/local/lib/python3.7/site-packages/kopf/toolkits/hierarchies.py", line 28, in append_owner_reference
refs = obj.setdefault('metadata', {}).setdefault('ownerReferences', [])
AttributeError: 'V1Secret' object has no attribute 'setdefault'
```
</details>
## Environment
<!-- The following commands can help:
`kopf --version` or `pip show kopf`
`kubectl version`
`python --version`
-->
* Kopf version: 0.24
* Kubernetes version: v1.13.10
* Python version: 3.7.5
* OS/platform: MacOs Mojave
<details><summary>Python packages installed</summary>
<!-- use `pip freeze --all` -->
```
adal==1.2.0
aiohttp==3.6.2
aiojobs==0.2.2
altgraph==0.16.1
arrow==0.12.1
asn1crypto==0.24.0
astroid==2.2.5
async-timeout==3.0.1
attrs==19.1.0
azure-status==0.0.2
beautifulsoup4==4.6.3
binaryornot==0.4.4
bleach==3.1.0
boto==2.49.0
boto3==1.9.60
botocore==1.12.109
CacheControl==0.12.5
cachetools==3.0.0
cachy==0.2.0
certifi==2018.11.29
cffi==1.11.5
chardet==3.0.4
cleo==0.6.8
Click==7.0
click-default-group==1.2
click-plugins==1.0.4
colorama==0.3.9
colorlog==3.1.4
configparser==3.5.0
cookiecutter==1.6.0
cryptography==2.4.2
dateparser==0.7.0
delegator.py==0.1.1
docutils==0.14
envoy==0.0.3
future==0.17.1
gitdb2==2.0.5
GitPython==2.1.11
google-auth==1.6.1
greenlet==0.4.15
html5lib==1.0.1
humanfriendly==4.17
humanize==0.5.1
hvac==0.9.1
idna==2.7
iso8601==0.1.12
isort==4.3.21
Jinja2==2.10
jinja2-time==0.2.0
jmespath==0.9.3
jsonschema==3.0.2
kopf==0.24
kubernetes==10.0.1
lazy-object-proxy==1.4.2
lockfile==0.12.2
macholib==1.11
marathon==0.10.0
MarkupSafe==1.1.0
maya==0.5.0
mccabe==0.6.1
memoized-property==1.0.3
msgpack==0.6.0
multidict==4.7.2
oauthlib==2.1.0
pastel==0.1.1
pefile==2019.4.18
pendulum==1.5.1
pexpect==4.6.0
pip==19.3.1
pkginfo==1.5.0.1
ply==3.10
poetry==0.12.17
poyo==0.4.2
prometheus-client==0.1.0
ptc==0.1.1
ptc-dcos==0.1.6
ptc-msai==0.2.2
ptc-startproject==0.1.0
ptc-terraform==0.1.6
ptyprocess==0.6.0
py==1.7.0
py-postgresql==1.2.1
pyasn1==0.4.4
pyasn1-modules==0.2.2
pybitbucket==0.12.0
pycparser==2.19
Pygments==2.3.0
pyhcl==0.3.10
PyInstaller==3.4
PyJWT==1.7.0
pykube-ng==19.12.1
pylev==1.3.0
pylint==2.3.1
pynvim==0.3.1
pyparsing==2.4.2
pyrsistent==0.14.11
pyrx-ats==0.3.1
python-dateutil==2.7.5
python-terraform==0.10.0
pytz==2018.7
pytzdata==2018.7
PyYAML==3.13
readme-renderer==24.0
regex==2018.11.22
requests==2.21.0
requests-oauthlib==0.8.0
requests-toolbelt==0.8.0
rsa==3.4.2
s3transfer==0.1.13
setuptools==40.6.3
shellingham==1.3.1
simplejson==3.17.0
six==1.11.0
smmap2==2.0.5
snaptime==0.2.4
tabulate==0.8.2
toml==0.10.0
tomlkit==0.5.5
tqdm==4.29.0
twine==1.12.1
typed-ast==1.4.0
typing-extensions==3.7.4.1
tzlocal==1.5.1
uritemplate==0.6
urllib3==1.25.7
virtualenv==16.2.0
voluptuous==0.11.7
webencodings==0.5.1
websocket-client==0.54.0
wheel==0.32.3
whichcraft==0.5.2
wrapt==1.11.2
xar==19.4.22
yarl==1.4.2
```
</details>
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-07 10:07:46+00:00_
>
Thanks for reporting.
Currently, only raw dicts are supported. The classes of other libraries are not yet supported. Though, it would be a good addition for this specific toolkit — hierarchy management — so that it can also accept `kubernetes`'s and `pykube-ng`'s classes for adopting/labelling/annotating/owning.
---
> <a href="https://github.com/janvdvegt"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/12046878?v=4"></a> Commented by [janvdvegt](https://github.com/janvdvegt) at _2020-01-27 17:29:02+00:00_
>
I'm interested in picking this up, although I only started using kopf yesterday. Is my observation correct that the base kubernetes client is not used anywhere? I'm looking at that part of the code now and I would need some guidance on what would be a clean way to fit it in. Where would the conditions on whether it is a dictionary vs a kubernetes object go?
---
> <a href="https://github.com/nolar"><img align="left" height="30" src="https://avatars0.githubusercontent.com/u/544296?v=4"></a> Commented by [nolar](https://github.com/nolar) at _2020-01-27 21:10:16+00:00_
>
[janvdvegt](https://github.com/janvdvegt) That is correct: Kubernetes API clients are not used in Kopf. Only `aiohttp` is used — for API communication (low-level).
However, Kopf has slightly extended behaviour if `kubernetes` and/or pykube-ng are installed — it will reuse their authentication methods. But it does not require them to be present (effectively limiting the authentication capabilities to simple tokens & username-passwords & ssl certs).
The hierarchy logic is fully located in `kopf.toolkits.hierarchies` — 3rd-party classes support should go there too.
Can you please briefly describe what is your intention — i.e. what are you going to change/implement to fix this issue (conceptually)?
**PS:** Please, be aware of #298 before you start implementing anything — it is a huge shuffle of classes & modules — the logic remains exactly the same though. It does not affect the hierarchies toolkit, luckily.
---
> <a href="https://github.com/janvdvegt"><img align="left" height="30" src="https://avatars3.githubusercontent.com/u/12046878?v=4"></a> Commented by [janvdvegt](https://github.com/janvdvegt) at _2020-01-28 08:32:29+00:00_
>
On a base level it would be the methods in `kopf.toolkits.hierarchies` to also accept the `kubernetes` objects like `V1Secret` so that users can manipulate these objects as well with regards to ownership. This would mean the definition of a K8sObject would include these objects from `kubernetes`. It looks like all the logic can stay inside the `hierarchies` module. | closed | 2020-08-18T20:02:46Z | 2021-02-25T08:44:09Z | https://github.com/nolar/kopf/issues/286 | [
"enhancement",
"archive"
] | kopf-archiver[bot] | 1 |
proplot-dev/proplot | matplotlib | 313 | Improve tight layout algorithm handling of empty gridspec slots | ### Description
When some subplots have colorbar and some don't, the position will be wrong.
### Steps to reproduce
```python
import proplot as pplt
import numpy as np
# Colorbars
fig, axs = pplt.subplots(nrows=2, ncols=2)
state = np.random.RandomState(51423)
m = axs.heatmap(state.rand(10, 10), cmap='dusk')
axs[0].colorbar(m[0], loc='r', label='test')
axs[1].colorbar(m[0], loc='r', label='test')
axs[3].colorbar(m[0], loc='l', label='test')
axs[3].colorbar(m[0], loc='r', label='test')
axs[3].colorbar(m[0], loc='r', label='test')
```
**Actual behavior**:

### Proplot version
Paste the results of `import matplotlib; print(matplotlib.__version__); import proplot; print(proplot.version)`here.
```
3.5.0
0.9.5.post105
```
| closed | 2021-12-23T12:47:43Z | 2022-01-24T09:01:18Z | https://github.com/proplot-dev/proplot/issues/313 | [
"enhancement"
] | zxdawn | 6 |
ray-project/ray | machine-learning | 51,590 | [core] Combine multiple grpc connections into one | ### Description
Based on on-CPU profiling, gRPC accounts for most of the CPU usage in the raylet.
We should be able to save some CPU by combining these connections into one (i.e. no need for so many completion queues and polling threads).
<img width="1196" alt="Image" src="https://github.com/user-attachments/assets/3e935d5d-0ae5-4032-b940-0139ca2a5477" />
### Use case
_No response_ | open | 2025-03-21T10:40:17Z | 2025-03-21T10:40:17Z | https://github.com/ray-project/ray/issues/51590 | [
"enhancement",
"P2",
"core"
] | dentiny | 0 |
zappa/Zappa | flask | 598 | [Migrated] zappa package fails with zappa from git and slim_handler | Originally from: https://github.com/Miserlou/Zappa/issues/1549 by [jneves](https://github.com/jneves)
## Context
If you're trying to run a zappa as an editable from a git repo, it works, except if you have slim_handler enabled.
## Expected Behavior
It should build a zip with zappa inside.
## Actual Behavior
It fails with a traceback like this when building the handler_venv:
```
Traceback (most recent call last):
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 2693, in handle
sys.exit(cli.handle())
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 504, in handle
self.dispatch_command(self.command, stage)
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 543, in dispatch_command
self.package(self.vargs['output'])
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 633, in package
self.create_package(output)
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/cli.py", line 2171, in create_package
venv=self.zappa.create_handler_venv(),
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/core.py", line 403, in create_handler_venv
copytree(os.path.join(current_site_packages_dir, z), os.path.join(venv_site_packages_dir, z))
File "/Users/joao/testes/zappa-1/1/ve/src/zappa/zappa/utilities.py", line 37, in copytree
lst = os.listdir(src)
NotADirectoryError: [Errno 20] Not a directory: '/Users/joao/testes/zappa-1/1/ve/lib/python3.6/site-packages/zappa.egg-link
```
## Possible Fix
Use the same function for copying editables that is used when slim_handler is false.
## Steps to Reproduce
With the following zappa_settings.json:
```
{
"dev": {
"project_name": "1",
"runtime": "python3.6",
"slim_handler": true
}
}
```
Run on a virtualenv: `pip install -e git+https://github.com/miserlou/zappa#egg=zappa`
and then `zappa package`.
## Your Environment
* Zappa version used: 0.46.1 from git
* Operating System and Python version: Mac OS X 10.12.6, python 3.6.4 (from home brew)
* The output of `pip freeze`:
```
argcomplete==1.9.3
base58==1.0.0
boto3==1.7.45
botocore==1.10.45
certifi==2018.4.16
cfn-flip==1.0.3
chardet==3.0.4
click==6.7
docutils==0.14
durationpy==0.5
first==2.0.1
future==0.16.0
hjson==3.0.1
idna==2.7
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.20.0
pip-tools==2.0.2
placebo==0.8.1
python-dateutil==2.6.1
python-slugify==1.2.4
PyYAML==3.12
requests==2.19.1
s3transfer==0.1.13
six==1.11.0
toml==0.9.4
tqdm==4.19.1
troposphere==2.3.0
Unidecode==1.0.22
urllib3==1.23
Werkzeug==0.14.1
wsgi-request-logger==0.4.6
-e git+https://github.com/miserlou/zappa@d99c193e32733946fb52a4f9b2bdfd1d2929ba49#egg=zappa
```
* Your `zappa_settings.py`: already above.
| closed | 2021-02-20T12:26:24Z | 2024-04-13T17:09:58Z | https://github.com/zappa/Zappa/issues/598 | [
"no-activity",
"auto-closed"
] | jneves | 4 |
miguelgrinberg/python-socketio | asyncio | 833 | socketio.exceptions.ConnectionError: Unexpected response from server | 1. I have a Flask-SocketIO app deployed on Heroku. The Flask app serves an index.html page when you call the URL.
Config for the Flask app:

```
Flask==2.0.2
gunicorn==20.1.0
eventlet==0.30.2
gevent-websocket==0.10.1
Flask-SocketIO==4.3.1
python-engineio==3.13.2
python-socketio==4.6.0
```

These are the imports in the Flask server:

```python
from flask import Flask, render_template, make_response, redirect
from flask_socketio import SocketIO, send, emit
import os
```

When I access this server (deployed to Heroku) from an HTML+JS app, it works using the CDN in a script tag (from basically any HTML page). But when I access it from a Python program, I get an "unexpected response" error. Full stack trace:

```
Attempting polling connection to https://my-server-raspi.herokuapp.com/socket.io/?transport=polling&EIO=4
Traceback (most recent call last):
  File "/home/pi/Desktop/test-project/client.py", line 4, in <module>
    sio.connect('https://my-server-raspi.herokuapp.com/')
  File "/home/pi/.local/lib/python3.9/site-packages/socketio/client.py", line 282, in connect
    six.raise_from(exceptions.ConnectionError(exc.args[0]), None)
  File "<string>", line 3, in raise_from
socketio.exceptions.ConnectionError: Unexpected response from server
```
| closed | 2021-12-22T16:14:28Z | 2021-12-22T17:02:01Z | https://github.com/miguelgrinberg/python-socketio/issues/833 | [] | AnanyaCS | 1 |
jupyter/nbgrader | jupyter | 1,397 | Issues using formgrader after Windows "Reset this PC" | ### Windows
### nbgrader version 0.6.1
### Jupyter Notebook version 6.1.4
### Going to Manually grade assignments within the Formgrader
### After having to reset my PC after unrelated technical issues, I can no longer manually grade assignments. I am able to open formgrader, generate new assignments, and autograde them as well as old assignments. However, I cannot manually grade assignments, getting the error message "Sorry, the formgrader encountered an error. Please contact the administrator of the formgrader for further assistance." I know this is related with having to reset my Windows installation and reinstall both Anaconda and nbgrader all over again, but it would be invaluable to recover my existing assignments and grade them for my students.
I tried making a new quickstart course in another directory to test whether I could manually grade anything from a new course and new gradebook.db, but even this doesn't work.
### Windows 'Reset this PC' (reinstalling Windows) and trying to access an existing nbgrader class
| closed | 2020-12-01T19:55:44Z | 2021-03-17T23:23:28Z | https://github.com/jupyter/nbgrader/issues/1397 | [
"question"
] | michaeld32 | 1 |
xonsh/xonsh | data-science | 5,284 | "ValueError: I/O operation on closed file." when using print() with file argument | ## Steps to Reproduce
1. Create bug_xonsh.py and paste the following code:
```python
import datetime
import time
import sys
def now_nice_format(arg_utc: bool = False) -> str:
""" Helper function for timestamped_line() """
dt = datetime.datetime.now(datetime.UTC) if arg_utc else datetime.datetime.now()
res = time.strftime("%Y-%m-%d %H:%M:%S", datetime.datetime.timetuple(dt))
return res
def timestamped_line(arg_str: str = "") -> str:
return f"[{now_nice_format()}] {arg_str}"
def timestamped_print(arg_str: str = "", arg_file = sys.stdout, arg_force_flush: bool = False):
print(timestamped_line(arg_str), file=arg_file, flush=arg_force_flush)
```
In xonsh, run the following code:
```python
import bug_xonsh
bug_xonsh.timestamped_print("test")
xonsh: To log full traceback to a file set: $XONSH_TRACEBACK_LOGFILE = <filename>
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "P:\bug_xonsh.py", line 16, in timestamped_print
print(timestamped_line(arg_str), file=arg_file, flush=arg_force_flush)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "E:\Users\<username>\Appdata\Local\Programs\Python\Python312\Lib\site-packages\xonsh\base_shell.py", line 182, in write
self.mem.write(s)
ValueError: I/O operation on closed file.
```
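The traceback shows `print` writing to a stream that xonsh has already closed. One likely cause (my assumption, not confirmed by the xonsh developers) is that the default argument `arg_file = sys.stdout` is evaluated once at import time, capturing xonsh's stdout wrapper; a common workaround is to resolve the stream at call time instead:

```python
import sys

def timestamped_print(arg_str: str = "", arg_file=None, arg_force_flush: bool = False):
    # Resolve the output stream at call time, not at import time,
    # so we always pick up whatever sys.stdout currently is.
    stream = arg_file if arg_file is not None else sys.stdout
    print(arg_str, file=stream, flush=arg_force_flush)
```

This keeps the original call sites unchanged while avoiding the stale captured stream.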
<details>
```
+------------------+---------------------------+
| xonsh | 0.14.4 |
| Python | 3.12.1 |
| PLY | 3.11 |
| have readline | False |
| prompt toolkit | 3.0.43 |
| shell type | prompt_toolkit |
| history backend | json |
| pygments | 2.17.2 |
| on posix | False |
| on linux | False |
| on darwin | False |
| on windows | True |
| on cygwin | False |
| on msys2 | False |
| is superuser | False |
| default encoding | utf-8 |
| xonsh encoding | utf-8 |
| encoding errors | surrogateescape |
| xontrib 1 | coreutils |
| RC file 1 | C:\Users\<redacted>/.xonshrc |
+------------------+---------------------------+
```
<details>
## For community
⬇️ **Please click the 👍 reaction instead of leaving a `+1` or 👍 comment**
| open | 2024-02-21T21:00:32Z | 2024-05-12T11:26:00Z | https://github.com/xonsh/xonsh/issues/5284 | [
"stdout"
] | Harding-Stardust | 1 |
pallets-eco/flask-wtf | flask | 297 | Exposure of the `csrf_token` field value | Previously Flask-WTF stripped away the `csrf_token` field value when accessing the form data. But in 42befd0420d1f4896a70339de3d474044461a1c9 this was removed.
Is this intentional? Now a form will expose the token as part of the data, even though it's an implicit value not generally useful outside the form.
I realize it is WTForms that implements the general logic for supporting CSRF validation, so maybe this is viewed as the responsibility of WTForms in the same way that `form.populate_obj` explicitly avoids populating the CSRF field value on the object. Sadly WTForms has no such filtering when accessing `form.data`, and the filtering in Flask-WTF was useful as it was.
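In the meantime the filtering can be done by the caller. Assuming `form.data` is a plain dict keyed by field name, a minimal sketch (my own helper, not Flask-WTF's API) would be:

```python
def data_without_csrf(form_data: dict, csrf_field: str = "csrf_token") -> dict:
    """Return a copy of the form data with the implicit CSRF field removed."""
    return {name: value for name, value in form_data.items() if name != csrf_field}
```

Called as `data_without_csrf(form.data)`, this restores the pre-42befd0 behavior at the call site.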
I'll raise an issue with WTForms if that's where you think this should be fixed. | open | 2017-06-16T07:41:23Z | 2021-11-07T15:40:49Z | https://github.com/pallets-eco/flask-wtf/issues/297 | [
"csrf"
] | fdanielsen | 5 |
yt-dlp/yt-dlp | python | 11,714 | Unsupported URL error for a twitter account | ### DO NOT REMOVE OR SKIP THE ISSUE TEMPLATE
- [X] I understand that I will be **blocked** if I *intentionally* remove or skip any mandatory\* field
### Checklist
- [X] I'm reporting that yt-dlp is broken on a **supported** site
- [X] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [X] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [X] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [X] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766) and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=) for similar issues **including closed ones**. DO NOT post duplicates
- [X] I've read the [guidelines for opening an issue](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#opening-an-issue)
- [X] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
Asia
### Provide a description that is worded well enough to be understood
Unable to scrape media from a twitter account. I get the error `ERROR: Unsupported URL: https://x.com/PlayStormgate`
### Provide verbose output that clearly demonstrates the problem
- [X] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [X] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
G:\Kitty> ..\yt-dlp.exe "https://twitter.com/PlayStormgate" --match-filter "is_retweet = false" -vU
[debug] Command-line config: ['https://twitter.com/PlayStormgate', '--match-filter', 'is_retweet = false', '-vU']
[debug] Encodings: locale cp1252, fs utf-8, pref cp1252, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version stable@2024.11.18 from yt-dlp/yt-dlp [7ea278792] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19045-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: none
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2024.08.30, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.2.3, websockets-13.1
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Loaded 1837 extractors
[debug] Fetching release info: https://api.github.com/repos/yt-dlp/yt-dlp/releases/latest
Latest version: stable@2024.11.18 from yt-dlp/yt-dlp
yt-dlp is up to date (stable@2024.11.18 from yt-dlp/yt-dlp)
[generic] Extracting URL: https://twitter.com/PlayStormgate
[generic] PlayStormgate: Downloading webpage
[redirect] Following redirect to https://x.com/PlayStormgate
[generic] Extracting URL: https://x.com/PlayStormgate
[generic] PlayStormgate: Downloading webpage
WARNING: [generic] Falling back on generic information extractor
[generic] PlayStormgate: Extracting information
[debug] Looking for embeds
ERROR: Unsupported URL: https://x.com/PlayStormgate
Traceback (most recent call last):
File "yt_dlp\YoutubeDL.py", line 1624, in wrapper
File "yt_dlp\YoutubeDL.py", line 1759, in __extract_info
File "yt_dlp\extractor\common.py", line 742, in extract
File "yt_dlp\extractor\generic.py", line 2553, in _real_extract
yt_dlp.utils.UnsupportedError: Unsupported URL: https://x.com/PlayStormgate
```
| closed | 2024-12-03T01:33:50Z | 2024-12-03T01:36:21Z | https://github.com/yt-dlp/yt-dlp/issues/11714 | [
"duplicate",
"site-enhancement"
] | HughMungis | 1 |
ultralytics/ultralytics | python | 19,167 | Can NOT get result.boxes with yolo 11 | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
I have a Python code block that checks whether a specified product image is qualified (OK). After labeling the images and training the model with YOLO 11,
I ran the following code on a specific product image (datasets/images/val/image1_ok.png), but got "**No target found**".
It seems that `result.boxes` contains no detections:
```
from ultralytics import YOLO
import cv2
def check_product_quality(image_path):
model = YOLO("best.pt")
image = cv2.imread(image_path)
results = model(image)
result = results[0]
qualified_class_id = 0
if result.boxes is not None and len(result.boxes) > 0:
for box in result.boxes:
class_id = int(box.cls[0])
if class_id != qualified_class_id:
return False
else:
print("No target found")
return False
return True
image_path = "datasets/images/val/image1_ok.png"
is_qualified = check_product_quality(image_path)
if is_qualified:
print("OK")
else:
print("KO")
```
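Since the pass/fail decision only depends on the predicted class ids, it can be factored into a pure function that is testable without the model (the helper name below is my own, not part of Ultralytics):

```python
def is_qualified(class_ids, qualified_class_id: int = 0) -> bool:
    """Return True only if at least one detection exists and every
    detected box has the qualified class id."""
    class_ids = list(class_ids)
    if not class_ids:
        return False  # no detections at all -> treat as not qualified
    return all(cid == qualified_class_id for cid in class_ids)
```

It would be fed with `[int(b.cls[0]) for b in result.boxes]`. Also note that an empty `result.boxes` often just means the default confidence threshold filtered everything out; lowering it, e.g. `model(image, conf=0.1)`, is a quick way to check whether low-confidence detections exist.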
The target is even clearly visible in val_batch0_labels, as shown in the comment below.
Thanks in advance for any comments!
### Additional
_No response_ | open | 2025-02-10T15:04:50Z | 2025-02-15T06:38:53Z | https://github.com/ultralytics/ultralytics/issues/19167 | [
"question",
"detect"
] | LingPiao | 12 |
ultralytics/yolov5 | deep-learning | 12,982 | Confusion Matrix wrong output | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello,
I trained my model on 3000 images containing one class, with backgrounds.
When I run validation, the confusion matrix shows my model predicting objects in 100% of the background images.
However, it doesn't actually do that: when I run inference, it detects nothing in the empty (background) images.
### Additional
_No response_ | closed | 2024-05-04T16:39:13Z | 2024-10-20T19:45:14Z | https://github.com/ultralytics/yolov5/issues/12982 | [
"question",
"Stale"
] | IbrahimAlmasri01 | 3 |
twopirllc/pandas-ta | pandas | 577 | Half trend by everget | # https://www.tradingview.com/script/U1SJ8ubc-HalfTrend/
Please implement the HalfTrend indicator by everget.
"enhancement",
"help wanted",
"good first issue"
] | bibinvargheset | 2 |
vllm-project/vllm | pytorch | 14,426 | [Usage]: LLM.beam_search is much slower in vLLM 0.7.3 compared to 0.5.4 | ### Your current environment
```text
The output of `python collect_env.py`
```
Collecting environment information...
PyTorch version: 2.5.1+cu124
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: Could not collect
CMake version: Could not collect
Libc version: glibc-2.31
Python version: 3.9.21 (main, Dec 11 2024, 16:24:11) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-5.4.0-162-generic-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.1.105
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA RTX A6000
GPU 1: NVIDIA RTX A6000
Nvidia driver version: 535.113.01
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 48 bits physical, 48 bits virtual
CPU(s): 192
On-line CPU(s) list: 0-191
Thread(s) per core: 2
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 25
Model: 1
Model name: AMD EPYC 7643 48-Core Processor
Stepping: 1
Frequency boost: enabled
CPU MHz: 2641.008
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4591.26
Virtualization: AMD-V
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-47,96-143
NUMA node1 CPU(s): 48-95,144-191
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines, IBPB conditional, IBRS_FW, STIBP always-on, RSB filling, PBRSB-eIBRS Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 invpcid_single hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 invpcid cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold v_vmsave_vmload vgif umip pku ospke vaes vpclmulqdq rdpid overflow_recov succor smca
Versions of relevant libraries:
[pip3] gptqmodel==1.7.4+cu121torch2.5
[pip3] numpy==1.26.4
[pip3] nvidia-cublas-cu12==12.4.5.8
[pip3] nvidia-cuda-cupti-cu12==12.4.127
[pip3] nvidia-cuda-nvrtc-cu12==12.4.127
[pip3] nvidia-cuda-runtime-cu12==12.4.127
[pip3] nvidia-cudnn-cu12==9.1.0.70
[pip3] nvidia-cufft-cu12==11.2.1.3
[pip3] nvidia-curand-cu12==10.3.5.147
[pip3] nvidia-cusolver-cu12==11.6.1.9
[pip3] nvidia-cusparse-cu12==12.3.1.170
[pip3] nvidia-nccl-cu12==2.21.5
[pip3] nvidia-nvjitlink-cu12==12.4.127
[pip3] nvidia-nvtx-cu12==12.4.127
[pip3] pyzmq==26.2.1
[pip3] torch==2.5.1
[pip3] torchaudio==2.5.1
[pip3] torchvision==0.20.1
[pip3] transformers==4.47.1
[pip3] triton==3.1.0
[conda] gptqmodel 1.7.4+cu121torch2.5 pypi_0 pypi
[conda] numpy 1.26.4 pypi_0 pypi
[conda] nvidia-cublas-cu12 12.4.5.8 pypi_0 pypi
[conda] nvidia-cuda-cupti-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-nvrtc-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cuda-runtime-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-cudnn-cu12 9.1.0.70 pypi_0 pypi
[conda] nvidia-cufft-cu12 11.2.1.3 pypi_0 pypi
[conda] nvidia-curand-cu12 10.3.5.147 pypi_0 pypi
[conda] nvidia-cusolver-cu12 11.6.1.9 pypi_0 pypi
[conda] nvidia-cusparse-cu12 12.3.1.170 pypi_0 pypi
[conda] nvidia-nccl-cu12 2.21.5 pypi_0 pypi
[conda] nvidia-nvjitlink-cu12 12.4.127 pypi_0 pypi
[conda] nvidia-nvtx-cu12 12.4.127 pypi_0 pypi
[conda] pyzmq 26.2.1 pypi_0 pypi
[conda] torch 2.5.1 pypi_0 pypi
[conda] torchaudio 2.5.1 pypi_0 pypi
[conda] torchvision 0.20.1 pypi_0 pypi
[conda] transformers 4.47.1 pypi_0 pypi
[conda] triton 3.1.0 pypi_0 pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.7.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0 GPU1 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X NODE 48-95,144-191 1 N/A
GPU1 NODE X 48-95,144-191 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NVIDIA_VISIBLE_DEVICES=all
NVIDIA_REQUIRE_CUDA=cuda>=12.1 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=geforce,driver>=470,driver<471 brand=geforcertx,driver>=470,driver<471 brand=quadro,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=titan,driver>=470,driver<471 brand=titanrtx,driver>=470,driver<471 brand=tesla,driver>=525,driver<526 brand=unknown,driver>=525,driver<526 brand=nvidia,driver>=525,driver<526 brand=nvidiartx,driver>=525,driver<526 brand=geforce,driver>=525,driver<526 brand=geforcertx,driver>=525,driver<526 brand=quadro,driver>=525,driver<526 brand=quadrortx,driver>=525,driver<526 brand=titan,driver>=525,driver<526 brand=titanrtx,driver>=525,driver<526
NCCL_VERSION=2.17.1-1
NVIDIA_DRIVER_CAPABILITIES=compute,utility
NVIDIA_PRODUCT_NAME=CUDA
TORCHINDUCTOR_CACHE_DIR=/tmp/torchinductor_user
NVIDIA_CUDA_END_OF_LIFE=1
CUDA_VERSION=12.1.0
CUDAHOME=/usr/local/cuda
TORCHINDUCTOR_COMPILE_THREADS=1
TORCH_NCCL_AVOID_RECORD_STREAMS=1
LD_LIBRARY_PATH=/usr/local/nvidia/lib:/usr/local/nvidia/lib64
CUDA_HOME=/usr/local/cuda
CUDA_HOME=/usr/local/cuda
CUDA_MODULE_LOADING=LAZY
NCCL_CUMEM_ENABLE=0
### How would you like to use vllm
## Issue: Beam Search Performance Regression in vLLM 0.7.3 vs. 0.5.4
I noticed a significant slowdown when using **beam search** in `vllm` **0.7.3** compared to **0.5.4**. With the same model (`google/gemma-2-9b-it`), it takes **~25 seconds per generation in 0.7.3**.
- Is my inference setup correct, or should I change something?
- Is there a known issue with beam search being slower in recent versions?
The code I used for inference in both versions:
### **vLLM 0.5.4 (fast <1 sec)**
```python
from vllm import LLM, SamplingParams
llm = LLM(
model="google/gemma-2-9b-it",
enable_prefix_caching=True,
disable_sliding_window=True,
gpu_memory_utilization=0.8,
swap_space=8,
trust_remote_code=True,
)
sampling_params = SamplingParams(
n=2,
temperature=0.0,
top_p=1.0,
max_tokens=256,
repetition_penalty=1.1,
top_k=-1,
seed=10,
use_beam_search=True
)
outputs = llm.generate(["Hello! My name is "], sampling_params)
```
### **vLLM 0.7.3 (~25 sec)**
```python
from vllm import LLM
from vllm.sampling_params import BeamSearchParams
import torch
model_id="google/gemma-2-9b-it"
prefix_cached_llm = LLM(
model=model_id,
gpu_memory_utilization=0.8,
swap_space=8,
trust_remote_code=True,
tensor_parallel_size=1,
dtype=torch.bfloat16
)
params = BeamSearchParams(beam_width=2, max_tokens=256)
outputs = prefix_cached_llm.beam_search([{"prompt": "Hello! My name is "}], params)
```
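To make the comparison reproducible, the wall-clock numbers can be collected with a small harness; this is plain Python and independent of vLLM (the function name is my own):

```python
import time

def time_call(fn, *args, repeats: int = 3, **kwargs):
    """Run fn repeats times; return (last_result, list_of_elapsed_seconds)."""
    timings = []
    result = None
    for _ in range(repeats):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        timings.append(time.perf_counter() - start)
    return result, timings
```

For example: `_, secs = time_call(prefix_cached_llm.beam_search, [{"prompt": "Hello! My name is "}], params, repeats=3)`, then compare `secs` across the two vLLM versions.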
### Before submitting a new issue...
- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions. | open | 2025-03-07T09:50:36Z | 2025-03-10T04:43:34Z | https://github.com/vllm-project/vllm/issues/14426 | [
"usage"
] | upayuryeva | 3 |
sinaptik-ai/pandas-ai | pandas | 1,032 | Error: pg_config executable not found. | ### System Info
OS version: "22.04.1 LTS (Jammy Jellyfish)"
Python version: Python 3.9.16
The current version of pandasai being used: cloned repo
### 🐛 Describe the bug
When installing dependencies with poetry I see the following error:
```bash
python -m pip install --user pipx
python -m pipx ensurepath
# re-enter the shell to get PATH updated
git clone <my fork of pandas-ai>
cd pandas-ai
pipx install poetry
poetry install --all-extras --with dev
...
- Installing pre-commit (3.5.0)
- Installing psycopg2 (2.9.9): Failed
ChefBuildError
Backend subprocess exited when trying to invoke get_requires_for_build_wheel
running egg_info
writing psycopg2.egg-info/PKG-INFO
writing dependency_links to psycopg2.egg-info/dependency_links.txt
writing top-level names to psycopg2.egg-info/top_level.txt
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:
python setup.py build_ext --pg-config /path/to/pg_config build ...
or with the pg_config option in 'setup.cfg'.
If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.
For further information please check the 'doc/src/install.rst' file (also at
<https://www.psycopg.org/docs/install.html>).
- Installing pymysql (1.1.0)
...
```
Some packages install successfully, while others fail to install because of this error.
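Building `psycopg2` from source requires the PostgreSQL client headers. Based on the psycopg2 install notes quoted in the error above (so this is a guess at the fix, not a confirmed project recommendation), on Ubuntu either install the headers or switch to the prebuilt wheel:

```shell
# Option 1: install the headers so pg_config is on PATH, then retry
sudo apt-get update && sudo apt-get install -y libpq-dev
poetry install --all-extras --with dev

# Option 2: use the prebuilt binary package instead of building from source
pip install psycopg2-binary
```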
Is there something I am doing wrong to install dependencies? | closed | 2024-03-14T14:32:07Z | 2024-03-19T22:14:18Z | https://github.com/sinaptik-ai/pandas-ai/issues/1032 | [] | YarShev | 5 |
google-research/bert | nlp | 859 | tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[2968] = 30523 is not in [0, 30522) | I am trying to train my own data for text classification (multiple classes). I'm trying to run it with the following command:
`python run_classifier.py --task_name=cola --do_train=true --do_eval=true --do_predict=true --data_dir=./data/ --vocab_file=./uncased_L-12_H-768_A-12/vocab.txt --bert_config_file=./uncased_L-12_H-768_A-12/bert_config.json --init_checkpoint=./uncased_L-12_H-768_A-12/bert_model.ckpt --max_seq_length=400 --train_batch_size=8 --learning_rate=2e-5 --num_train_epochs=3.0 --output_dir=./bert_output/ --do_lower_case=True`
I used the following pretrained model:
uncased_L-12_H-768_A-12
Data is the following:
https://bitbucket.org/lyriccoder/bert/downloads/dev.tsv
https://bitbucket.org/lyriccoder/bert/downloads/test.tsv
https://bitbucket.org/lyriccoder/bert/downloads/train.tsv
Here is my run_classifier.py. I changed it since I have several classes:
[run_classifier.zip](https://github.com/google-research/bert/files/3626835/run_classifier.zip)
I've just changed the number of classes for ColaProcessor:
```
def get_labels(self):
"""See base class."""
return ["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
```
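The failure below says an input id of 30523 falls outside the embedding table of size 30522, which usually means the tokenizer's vocabulary is larger than `vocab_size` in `bert_config.json` (for example after tokens were added to `vocab.txt`). A quick sanity check is to compare the two counts (a sketch; the paths and helper name are my own):

```python
import json

def vocab_matches_config(vocab_path: str, config_path: str) -> bool:
    """Compare the line count of vocab.txt with vocab_size in bert_config.json."""
    with open(vocab_path, encoding="utf-8") as f:
        vocab_lines = sum(1 for _ in f)
    with open(config_path, encoding="utf-8") as f:
        config_vocab = json.load(f)["vocab_size"]
    return vocab_lines == config_vocab
```

If the counts differ, the features were likely generated with a vocab that does not match the checkpoint's config.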
I have the following stacktrace:
```
Use tf.where in 2.0, which has the same broadcast rule as np.where
INFO:tensorflow:Done calling model_fn.
I0918 17:04:05.368514 15956 estimator.py:1147] Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
I0918 17:04:05.370477 15956 basic_session_run_hooks.py:541] Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
I0918 17:04:08.241855 15956 monitored_session.py:240] Graph was finalized.
2019-09-18 17:04:08.244092: I tensorflow/core/platform/cpu_feature_guard.cc:142] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
WARNING:tensorflow:From C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
W0918 17:04:08.261773 15956 deprecation.py:323] From C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\saver.py:1276: checkpoint_exists (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file APIs to check for files with this prefix.
INFO:tensorflow:Restoring parameters from ./bert_output/model.ckpt-0
I0918 17:04:08.268754 15956 saver.py:1280] Restoring parameters from ./bert_output/model.ckpt-0
WARNING:tensorflow:From C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\saver.py:1066: get_checkpoint_mtimes (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file utilities to get mtimes.
W0918 17:04:13.132208 15956 deprecation.py:323] From C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\saver.py:1066: get_checkpoint_mtimes (from tensorflow.python.training.checkpoint_management) is deprecated and will be removed in a future version.
Instructions for updating:
Use standard file utilities to get mtimes.
INFO:tensorflow:Running local_init_op.
I0918 17:04:13.602988 15956 session_manager.py:500] Running local_init_op.
INFO:tensorflow:Done running local_init_op.
I0918 17:04:13.804451 15956 session_manager.py:502] Done running local_init_op.
INFO:tensorflow:Saving checkpoints for 0 into ./bert_output/model.ckpt.
I0918 17:04:19.897372 15956 basic_session_run_hooks.py:606] Saving checkpoints for 0 into ./bert_output/model.ckpt.
ERROR:tensorflow:Error recorded from training_loop: indices[2968] = 30523 is not in [0, 30522)
[[node bert/embeddings/GatherV2 (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:419) ]]
Errors may have originated from an input operation.
Input Source operations connected to node bert/embeddings/GatherV2:
bert/embeddings/Reshape (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:414)
bert/embeddings/word_embeddings/read (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:412)
Original stack trace for 'bert/embeddings/GatherV2':
File "run_classifier.py", line 980, in <module>
tf.app.run()
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 879, in main
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2871, in train
saving_listeners=saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1188, in _train_model_default
features, labels, ModeKeys.TRAIN, self.config)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2709, in _call_model_fn
config)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2967, in _model_fn
features, labels, is_export_mode=is_export_mode)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1549, in call_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1867, in _call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "run_classifier.py", line 644, in model_fn
num_labels, use_one_hot_embeddings)
File "run_classifier.py", line 582, in create_model
use_one_hot_embeddings=use_one_hot_embeddings)
File "C:\Users\lyriccoder\PycharmProjects\bert\modeling.py", line 180, in __init__
use_one_hot_embeddings=use_one_hot_embeddings)
File "C:\Users\lyriccoder\PycharmProjects\bert\modeling.py", line 419, in embedding_lookup
output = tf.gather(embedding_table, flat_input_ids)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3475, in gather
return gen_array_ops.gather_v2(params, indices, axis, name=name)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 4835, in gather_v2
batch_dims=batch_dims, name=name)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
op_def=op_def)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
E0918 17:04:33.387931 15956 error_handling.py:70] Error recorded from training_loop: indices[2968] = 30523 is not in [0, 30522)
[[node bert/embeddings/GatherV2 (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:419) ]]
Errors may have originated from an input operation.
Input Source operations connected to node bert/embeddings/GatherV2:
bert/embeddings/Reshape (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:414)
bert/embeddings/word_embeddings/read (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:412)
Original stack trace for 'bert/embeddings/GatherV2':
File "run_classifier.py", line 980, in <module>
tf.app.run()
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 879, in main
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2871, in train
saving_listeners=saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1188, in _train_model_default
features, labels, ModeKeys.TRAIN, self.config)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2709, in _call_model_fn
config)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2967, in _model_fn
features, labels, is_export_mode=is_export_mode)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1549, in call_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1867, in _call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "run_classifier.py", line 644, in model_fn
num_labels, use_one_hot_embeddings)
File "run_classifier.py", line 582, in create_model
use_one_hot_embeddings=use_one_hot_embeddings)
File "C:\Users\lyriccoder\PycharmProjects\bert\modeling.py", line 180, in __init__
use_one_hot_embeddings=use_one_hot_embeddings)
File "C:\Users\lyriccoder\PycharmProjects\bert\modeling.py", line 419, in embedding_lookup
output = tf.gather(embedding_table, flat_input_ids)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3475, in gather
return gen_array_ops.gather_v2(params, indices, axis, name=name)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 4835, in gather_v2
batch_dims=batch_dims, name=name)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
op_def=op_def)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
INFO:tensorflow:training_loop marked as finished
I0918 17:04:33.432810 15956 error_handling.py:96] training_loop marked as finished
WARNING:tensorflow:Reraising captured error
W0918 17:04:33.434805 15956 error_handling.py:130] Reraising captured error
Traceback (most recent call last):
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 1356, in _do_call
return fn(*args)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 1341, in _run_fn
options, feed_dict, fetch_list, target_list, run_metadata)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 1429, in _call_tf_sessionrun
run_metadata)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[2968] = 30523 is not in [0, 30522)
[[{{node bert/embeddings/GatherV2}}]]
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "run_classifier.py", line 980, in <module>
tf.app.run()
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 879, in main
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2876, in train
rendezvous.raise_errors()
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\error_handling.py", line 131, in raise_errors
six.reraise(typ, value, traceback)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\six.py", line 693, in reraise
raise value
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2871, in train
saving_listeners=saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1192, in _train_model_default
saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1484, in _train_with_estimator_spec
_, loss = mon_sess.run([estimator_spec.train_op, estimator_spec.loss])
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\monitored_session.py", line 754, in run
run_metadata=run_metadata)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1252, in run
run_metadata=run_metadata)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1353, in run
raise six.reraise(*original_exc_info)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\six.py", line 693, in reraise
raise value
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1338, in run
return self._sess.run(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1411, in run
run_metadata=run_metadata)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\training\monitored_session.py", line 1169, in run
return self._sess.run(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 950, in run
run_metadata_ptr)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 1173, in _run
feed_dict_tensor, options, run_metadata)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 1350, in _do_run
run_metadata)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\client\session.py", line 1370, in _do_call
raise type(e)(node_def, op, message)
tensorflow.python.framework.errors_impl.InvalidArgumentError: indices[2968] = 30523 is not in [0, 30522)
[[node bert/embeddings/GatherV2 (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:419) ]]
Errors may have originated from an input operation.
Input Source operations connected to node bert/embeddings/GatherV2:
bert/embeddings/Reshape (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:414)
bert/embeddings/word_embeddings/read (defined at C:\Users\lyriccoder\PycharmProjects\bert\modeling.py:412)
Original stack trace for 'bert/embeddings/GatherV2':
File "run_classifier.py", line 980, in <module>
tf.app.run()
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\platform\app.py", line 40, in run
_run(main=main, argv=argv, flags_parser=_parse_flags_tolerate_undef)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 299, in run
_run_main(main, args)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\absl\app.py", line 250, in _run_main
sys.exit(main(argv))
File "run_classifier.py", line 879, in main
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2871, in train
saving_listeners=saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 367, in train
loss = self._train_model(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1158, in _train_model
return self._train_model_default(input_fn, hooks, saving_listeners)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1188, in _train_model_default
features, labels, ModeKeys.TRAIN, self.config)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2709, in _call_model_fn
config)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\estimator.py", line 1146, in _call_model_fn
model_fn_results = self._model_fn(features=features, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 2967, in _model_fn
features, labels, is_export_mode=is_export_mode)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1549, in call_without_tpu
return self._call_model_fn(features, labels, is_export_mode=is_export_mode)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow_estimator\python\estimator\tpu\tpu_estimator.py", line 1867, in _call_model_fn
estimator_spec = self._model_fn(features=features, **kwargs)
File "run_classifier.py", line 644, in model_fn
num_labels, use_one_hot_embeddings)
File "run_classifier.py", line 582, in create_model
use_one_hot_embeddings=use_one_hot_embeddings)
File "C:\Users\lyriccoder\PycharmProjects\bert\modeling.py", line 180, in __init__
use_one_hot_embeddings=use_one_hot_embeddings)
File "C:\Users\lyriccoder\PycharmProjects\bert\modeling.py", line 419, in embedding_lookup
output = tf.gather(embedding_table, flat_input_ids)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\util\dispatch.py", line 180, in wrapper
return target(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\ops\array_ops.py", line 3475, in gather
return gen_array_ops.gather_v2(params, indices, axis, name=name)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 4835, in gather_v2
batch_dims=batch_dims, name=name)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 788, in _apply_op_helper
op_def=op_def)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\util\deprecation.py", line 507, in new_func
return func(*args, **kwargs)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 3616, in create_op
op_def=op_def)
File "C:\Users\lyriccoder\PycharmProjects\bert\venv\lib\site-packages\tensorflow\python\framework\ops.py", line 2005, in __init__
self._traceback = tf_stack.extract_stack()
```
I have googled a lot, and the problem is said to be related to the number of words in the vocabulary. But this number in the config is already large:
```
{
"attention_probs_dropout_prob": 0.1,
"hidden_act": "gelu",
"hidden_dropout_prob": 0.1,
"hidden_size": 768,
"initializer_range": 0.02,
"intermediate_size": 3072,
"max_position_embeddings": 512,
"num_attention_heads": 12,
"num_hidden_layers": 12,
"type_vocab_size": 2,
"vocab_size": 30522
}
```
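The out-of-range index above (30523 vs `vocab_size` 30522) usually means the model is being fed token ids that do not fit the embedding table — for example a `vocab.txt` with more entries than the config declares. A minimal self-contained sanity check (file loading elided; the counts are taken from this report):

```python
def vocab_consistent(num_vocab_entries, config):
    """True when every possible token id (0 .. n-1) fits in the embedding table."""
    return num_vocab_entries <= config["vocab_size"]

config = {"vocab_size": 30522}
print(vocab_consistent(30522, config))  # True  - ids 0..30521 all fit
print(vocab_consistent(30523, config))  # False - the extra entry overflows
```

If the counts disagree, the vocabulary file, the config, and the checkpoint need to be brought back in sync.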
I use only the CPU. Also, some people say that the problem happens on CPU only, not on GPU.
Here is my PC info:
```
OS Name: Microsoft Windows 10 Enterprise
OS Version: 10.0.17763 N/A Build 17763
OS Manufacturer: Microsoft Corporation
OS Configuration: Member Workstation
OS Build Type: Multiprocessor Free
System Manufacturer: LENOVO
System Model: 10T7004KRU
System Type: x64-based PC
Processor(s): 1 Processor(s) Installed.
[01]: Intel64 Family 6 Model 158 Stepping 10 GenuineIntel ~1704 Mhz
BIOS Version: LENOVO M1UKT28A, 18.02.2019
Windows Directory: C:\Windows
System Directory: C:\Windows\system32
Boot Device: \Device\HarddiskVolume2
Total Physical Memory: 8 059 MB
Available Physical Memory: 3 743 MB
Virtual Memory: Max Size: 10 098 MB
Virtual Memory: Available: 2 289 MB
Virtual Memory: In Use: 7 809 MB
Hotfix(s): 6 Hotfix(s) Installed.
[01]: KB4483452
[02]: KB4470788
[03]: KB4489907
[04]: KB4497932
[05]: KB4512937
[06]: KB4511553
Network Card(s): 4 NIC(s) Installed.
[01]: Intel(R) Dual Band Wireless-AC 8265
Connection Name: Wi-Fi
Status: Media disconnected
[02]: Intel(R) Ethernet Connection (7) I219-V
Connection Name: Ethernet
IP address(es)
[03]: Array Networks SSL VPN Adapter
Connection Name: Ethernet 2
Status: Hardware not present
[04]: VirtualBox Host-Only Ethernet Adapter
Connection Name: VirtualBox Host-Only Network
Hyper-V Requirements: VM Monitor Mode Extensions: Yes
Virtualization Enabled In Firmware: Yes
Second Level Address Translation: Yes
Data Execution Prevention Available: Yes
```
Could you please help? | open | 2019-09-18T14:25:09Z | 2019-12-11T17:31:52Z | https://github.com/google-research/bert/issues/859 | [] | lyriccoder | 2 |
aio-libs/aiomysql | asyncio | 559 | SELECT not returning boolean | If you fetch a query that selected a boolean, the boolean value will be missing in the tuple returned. | closed | 2021-01-19T10:26:20Z | 2021-06-29T17:43:40Z | https://github.com/aio-libs/aiomysql/issues/559 | [] | Miolus | 2 |
AutoGPTQ/AutoGPTQ | nlp | 108 | New LLM format: Falcon 40B and 7B - "RWForCausalLM" | There's another new LLM and LLM format available, and apparently it's very good: https://huggingface.co/tiiuae/falcon-40b
Format is `RWForCausalLM`
Example inference code:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import transformers
import torch
model = "tiiuae/falcon-40b"
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
"text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
)
sequences = pipeline(
"Girafatron is obsessed with giraffes, the most glorious animal on the face of this Earth. Giraftron believes all other animals are irrelevant when compared to the glorious majesty of the giraffe.\nDaniel: Hello, Girafatron!\nGirafatron:",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
print(f"Result: {seq['generated_text']}")
```
It'd be awesome if AutoGPTQ could quantise this new model format.
I can look at it myself next week some time, but am working on too many other things to do it immediately. | closed | 2023-05-26T13:51:16Z | 2023-05-27T08:23:31Z | https://github.com/AutoGPTQ/AutoGPTQ/issues/108 | [
"enhancement"
] | TheBloke | 0 |
vaexio/vaex | data-science | 2,379 | Can we use dynamic column names for creating dataframes? | From the documentation,
```python
x = np.arange(2)
y = np.array([10, 20])
z = np.array(['dog', 'cat'])
df_numpy = vaex.from_arrays(x=x, y=y, z=z)
```
What should be done if we only know the column names and the number of columns at runtime?
For example, suppose the column names arrive in a list such as `['x', 'y', 'k']`.
How can a dataframe be created from this?
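For illustration, since `vaex.from_arrays` takes the columns as keyword arguments, a runtime-known name list can be turned into a keyword mapping and unpacked (the vaex call itself is shown as a comment; `vaex.from_dict` is another candidate worth checking in the docs):

```python
# Column names and data known only at runtime:
names = ['x', 'y', 'k']
arrays = [[0, 1], [10, 20], ['dog', 'cat']]

# Build a {name: values} mapping dynamically...
columns = dict(zip(names, arrays))
print(sorted(columns))  # ['k', 'x', 'y']
print(columns['y'])     # [10, 20]

# ...and unpack it into the constructor from the documentation snippet:
#     df = vaex.from_arrays(**columns)
```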
Sorry for the obvious question but couldn't find a solution. | closed | 2023-06-30T13:15:42Z | 2023-06-30T13:19:09Z | https://github.com/vaexio/vaex/issues/2379 | [] | DeveloperVivek9 | 0 |
jupyter-book/jupyter-book | jupyter | 1,763 | Import markdown file into markdown file | ### Context
Hi there,
Is it possible to import (input or embed) a markdown or text file into (one of the main JupyterBook content) markdown files?
In particular, one of the JupyterBook markdown files that I work on is getting really long, and I'd like to split it into several (helper) files that I would import into the main one.
Thank you!
Ana
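For context, the MyST flavour of Markdown that Jupyter Book builds on provides an `{include}` directive (inherited from docutils) for splicing one file into another; a sketch, with a hypothetical helper file name:

````md
Some text in the main file.

```{include} helper_section.md
```
````

Whether this interacts cleanly with the book's table of contents is worth verifying against the MyST/Jupyter Book documentation for the version in use.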
### Proposal
_No response_
### Tasks and updates
_No response_ | closed | 2022-06-21T20:28:28Z | 2022-12-13T07:33:31Z | https://github.com/jupyter-book/jupyter-book/issues/1763 | [
"enhancement"
] | atrisovic | 5 |
SciTools/cartopy | matplotlib | 2,017 | Missing default natural earth shapefiles | ### Description
The cartopy geoaxes `coastlines()` method attempts to download 110m Natural Earth shapefiles (and store them in the local cartopy cache directory). This is not ideal, particularly when a stack is deployed on a machine which is not internet facing and the user has not manually updated the cache folder.
For example the following code will hang if there is no internet connection and the only way out is via keyboard interruption (eg `Ctrl+C`)
#### Code to reproduce
For example, on a Linux OS, remove `~/.local/share/cartopy` and disconnect from the internet and run:
```python
#!/usr/bin/env python
import cartopy.crs as ccrs
import matplotlib.pyplot as plt
proj = ccrs.PlateCarree()
fig = plt.figure(figsize=(6, 4))
ax = fig.add_subplot(111, projection=proj)
ax.set_global()
ax.coastlines()
plt.show()
```
#### Traceback
```
.../lib/python3.6/site-packages/cartopy/io/__init__.py:260: DownloadWarning: Downloading: https://naciscdn.org/naturalearth/110m/physical/ne_110m_coastline.zip
warnings.warn('Downloading: {}'.format(url), DownloadWarning)
^CException in Tkinter callback
Traceback (most recent call last):
File ".../lib/python3.6/tkinter/__init__.py", line 1702, in __call__
return self.func(*args)
File ".../lib/python3.6/tkinter/__init__.py", line 746, in callit
func(*args)
File ".../lib/python3.6/site-packages/matplotlib/backends/_backend_tk.py", line 253, in idle_draw
self.draw()
File ".../lib/python3.6/site-packages/matplotlib/backends/backend_tkagg.py", line 9, in draw
super(FigureCanvasTkAgg, self).draw()
File ".../lib/python3.6/site-packages/matplotlib/backends/backend_agg.py", line 407, in draw
self.figure.draw(self.renderer)
File ".../lib/python3.6/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File ".../lib/python3.6/site-packages/matplotlib/figure.py", line 1864, in draw
renderer, self, artists, self.suppressComposite)
File ".../lib/python3.6/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
a.draw(renderer)
File ".../lib/python3.6/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File ".../lib/python3.6/site-packages/cartopy/mpl/geoaxes.py", line 479, in draw
return matplotlib.axes.Axes.draw(self, renderer=renderer, **kwargs)
File ".../lib/python3.6/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File ".../lib/python3.6/site-packages/matplotlib/cbook/deprecation.py", line 411, in wrapper
return func(*inner_args, **inner_kwargs)
File ".../lib/python3.6/site-packages/matplotlib/axes/_base.py", line 2747, in draw
mimage._draw_list_compositing_images(renderer, self, artists)
File ".../lib/python3.6/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
a.draw(renderer)
File ".../lib/python3.6/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
return draw(artist, renderer, *args, **kwargs)
File ".../lib/python3.6/site-packages/cartopy/mpl/feature_artist.py", line 155, in draw
geoms = self._feature.intersecting_geometries(extent)
File ".../lib/python3.6/site-packages/cartopy/feature/__init__.py", line 302, in intersecting_geometries
return super(NaturalEarthFeature, self).intersecting_geometries(extent)
File ".../lib/python3.6/site-packages/cartopy/feature/__init__.py", line 110, in intersecting_geometries
return (geom for geom in self.geometries() if
File ".../lib/python3.6/site-packages/cartopy/feature/__init__.py", line 286, in geometries
name=self.name)
File ".../lib/python3.6/site-packages/cartopy/io/shapereader.py", line 295, in natural_earth
return ne_downloader.path(format_dict)
File ".../lib/python3.6/site-packages/cartopy/io/__init__.py", line 222, in path
result_path = self.acquire_resource(target_path, format_dict)
File ".../lib/python3.6/site-packages/cartopy/io/shapereader.py", line 350, in acquire_resource
shapefile_online = self._urlopen(url)
File ".../lib/python3.6/site-packages/cartopy/io/__init__.py", line 261, in _urlopen
return urlopen(url)
File ".../lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File ".../lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File ".../lib/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File ".../lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File ".../lib/python3.6/urllib/request.py", line 1361, in https_open
context=self._context, check_hostname=self._check_hostname)
File ".../lib/python3.6/urllib/request.py", line 1318, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File ".../lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File ".../lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File ".../lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File ".../lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File ".../lib/python3.6/http/client.py", line 964, in send
self.connect()
File ".../lib/python3.6/http/client.py", line 1392, in connect
super().connect()
File ".../lib/python3.6/http/client.py", line 936, in connect
(self.host,self.port), self.timeout, self.source_address)
File ".../lib/python3.6/socket.py", line 713, in create_connection
sock.connect(sa)
KeyboardInterrupt
```
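A stopgap for machines that cannot reach the internet is to give sockets a default timeout, so the download attempt raises promptly instead of blocking on `connect` as in the traceback above (standard library only; the cache path is the default Linux location and an assumption for other platforms):

```python
import socket
from pathlib import Path

# Any subsequent urlopen() call now fails after 5 s instead of hanging forever.
socket.setdefaulttimeout(5)

# Check whether the cache has been seeded before attempting an offline draw.
cache = Path.home() / ".local" / "share" / "cartopy"
print(socket.getdefaulttimeout())  # 5.0
print(cache.name)                  # 'cartopy'
```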
### Operating system
Linux
### Cartopy version
0.18.0 onwards
| open | 2022-03-22T18:41:38Z | 2022-03-22T18:41:38Z | https://github.com/SciTools/cartopy/issues/2017 | [] | yaswant | 0 |
Lightning-AI/pytorch-lightning | machine-learning | 20,190 | shortcuts for logging weights and biases norms | ### Description & Motivation
Knowing the norm of weights was necessary to debug float16 training for me.
### Pitch
from lightning.pytorch.utilities import grad_norm
norms = grad_norm(self.layer, norm_type=2)
something like this for weights would be convenient.
### Alternatives
_No response_
### Additional context
_No response_
cc @borda | open | 2024-08-11T23:04:44Z | 2024-08-11T23:05:06Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20190 | [
"feature",
"needs triage"
] | heth27 | 0 |
hbldh/bleak | asyncio | 885 | Wrong bytes received on Windows | I am working with the Arduino BLE board and I just send a simple byte array: `char byte_array[] = {0x00, 0x11, 0x22, 0x11}`. I am able to receive data on my PC but completely different. First two bytes are okay but the rest is not: `bytearray(b'\x00\x11"\x11')`. Do you maybe know what can cause this issue?
| open | 2022-07-13T18:37:37Z | 2022-07-17T20:57:21Z | https://github.com/hbldh/bleak/issues/885 | [
"3rd party issue"
] | ASabovic | 1 |
ageitgey/face_recognition | python | 909 | Error while use batch_face_locations and how to get pixel of face location | * face_recognition version: 1.2.3
* Python version: 3.6.8
* Operating System: ubuntu 18.04
i'm try to use batch_face_locations, this my code :
```
selfieimage = face_recognition.load_image_file("path")
face_locations2 =face_recognition.batch_face_locations(selfieimage,number_of_times_to_upsample=0,batch_size=128)
```
but I got this error:
```
_call__(): incompatible function arguments. The following argument types are supported:
1. (self: dlib.cnn_face_detection_model_v1, imgs: list, upsample_num_times: int=0, batch_size: int=128) -> std::vector<std::vector<dlib::mmod_rect, std::allocator<dlib::mmod_rect> >, std::allocator<std::vector<dlib::mmod_rect, std::allocator<dlib::mmod_rect> > > >
2. (self: dlib.cnn_face_detection_model_v1, img: array, upsample_num_times: int=0) -> std::vector<dlib::mmod_rect, std::allocator<dlib::mmod_rect> >
Invoked with: <dlib.cnn_face_detection_model_v1 object at 0x7f4c901706f8>, array([[[255, 255, 218],
[255, 255, 218],
[255, 255, 218],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 218],
[255, 255, 218],
[255, 255, 218],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
[[255, 255, 218],
[255, 255, 218],
[255, 255, 218],
...,
[255, 255, 255],
[255, 255, 255],
[255, 255, 255]],
...,
[[230, 148, 120],
[230, 148, 120],
[230, 148, 120],
...,
[255, 191, 195],
[255, 191, 195],
[255, 191, 195]],
[[230, 148, 120],
[230, 148, 120],
[230, 148, 120],
...,
[255, 191, 195],
[255, 191, 195],
[255, 191, 195]],
[[230, 148, 120],
[230, 148, 120],
[230, 148, 120],
...,
[255, 191, 195],
[255, 191, 195],
[255, 191, 195]]], dtype=uint8), 1; kwargs: batch_size=128
Did you forget to `#include <pybind11/stl.h>`? Or <pybind11/complex.h>,
<pybind11/functional.h>, <pybind11/chrono.h>, etc. Some automatic
conversions are optional and require extra headers to be included
when compiling your pybind11 module
```
Why does this happen?
Also, how can I get the pixel size of a face? When more than one face is detected in a photo, I want to compare the faces and keep the larger one. What I tried is calculating it from the top, right, bottom, left values, but it is wrong:
```
face_locations1 = face_recognition.face_locations(selfieimage, number_of_times_to_upsample=2, model="cnn")
compare_encode1 = face_recognition.face_encodings(selfieimage, face_locations1, num_jitters=6)
for (top, right, bottom, left), face_encoding in zip(face_locations1, compare_encode1):
    i = 0
    print(face_encoding)
    res = [compare_encode1[i]]
    selfielowest.append(compare_encode1[i])
    i = i + 1
```
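On the second question, the pixel area of each detection can be computed straight from the `(top, right, bottom, left)` tuple; a self-contained sketch for picking the largest face (encodings elided, locations hypothetical):

```python
def face_area(location):
    top, right, bottom, left = location
    return (bottom - top) * (right - left)  # height * width in pixels

# Toy locations in the (top, right, bottom, left) order that
# face_recognition.face_locations returns:
locations = [(10, 60, 70, 20), (5, 205, 155, 55)]
areas = [face_area(loc) for loc in locations]
largest = max(range(len(locations)), key=areas.__getitem__)
print(areas)    # [2400, 22500]
print(largest)  # 1 -> the second face is the bigger one
```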
| open | 2019-08-21T13:04:13Z | 2019-08-21T13:22:23Z | https://github.com/ageitgey/face_recognition/issues/909 | [] | blinkbink | 0 |
miguelgrinberg/Flask-SocketIO | flask | 1,755 | gunicorn + gthread, cannot use thread pool |
1. `async_mode = threading`
2. gunicorn command: `gunicorn -w 1 --threads 5 -b 0.0.0.0:8899 app:app`
3. socketio handler:

```python
@socketio.on('run', namespace="/test")
def test_func(data):
    print(threading.currentThread().name)
```

The result is `Thread-2` instead of `ThreadPoolExecutor-0_2`.
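For reference, the naming difference itself can be reproduced with the standard library alone — plain `Thread` objects get `Thread-N` style names, while pool workers get `ThreadPoolExecutor-…` names (a sketch; whether gunicorn's gthread pool is supposed to run these handlers is the actual question here):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

def worker_name():
    return threading.current_thread().name

results = []
plain = threading.Thread(target=lambda: results.append(worker_name()))
plain.start()
plain.join()

with ThreadPoolExecutor(max_workers=2) as pool:
    pooled = pool.submit(worker_name).result()

print(results[0])  # e.g. 'Thread-1 (<lambda>)'
print(pooled)      # e.g. 'ThreadPoolExecutor-0_0'
```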
| closed | 2021-12-24T15:45:55Z | 2021-12-25T10:16:21Z | https://github.com/miguelgrinberg/Flask-SocketIO/issues/1755 | [
"invalid"
] | door7474 | 3 |
brightmart/text_classification | nlp | 41 | No such file or directory: '../cache_vocabulary_label_pik/hierAtten_word_voabulary.pik' | where is hierAtten_word_voabulary.pik? | closed | 2018-03-28T04:05:22Z | 2018-04-22T04:58:08Z | https://github.com/brightmart/text_classification/issues/41 | [] | parahaoer | 2 |
junyanz/pytorch-CycleGAN-and-pix2pix | pytorch | 1,686 | Epoch Starts from 1 When Using --continue_train | Hello everyone,
I have a question regarding the use of the --continue_train flag in my training process. After applying this flag, the terminal shows that the epoch count starts from 1. This has made me uncertain whether I am actually continuing the previous training or if it’s starting over from scratch.
I had previously saved the training state, but with this command, it seems like the training is restarting. Is this normal behavior? Or might I have made an error somewhere? What settings or files should I check to troubleshoot this?
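One way to see why the counter can legitimately restart at 1: if the loop's starting epoch is a separate option from the loaded weights, resuming restores the parameters but not the displayed epoch number unless that option is also set (pure-Python sketch; the repo's actual option names should be checked in its train options):

```python
def epoch_sequence(start_epoch, last_epoch):
    """Epochs a training loop would iterate over, inclusive."""
    return list(range(start_epoch, last_epoch + 1))

print(epoch_sequence(1, 5))      # fresh run: [1, 2, 3, 4, 5]
print(epoch_sequence(101, 105))  # resumed run that keeps counting from 101
```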
Thank you for your assistance! | open | 2025-01-16T03:58:14Z | 2025-02-14T21:11:06Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1686 | [] | H-skyfxxcker | 1 |
scikit-learn/scikit-learn | python | 30,237 | BUG: Test collection for Transformer fails | ### Describe the bug
On the latest `scientific-python-nightly-wheels` wheel, things were passing yesterday but [we now get the following](https://github.com/mne-tools/mne-python/actions/runs/11720293695/job/32675716039#step:17:68) when [using parametrize_with_checks](https://github.com/mne-tools/mne-python/blob/649857aacb24a0afc3b069f1e75bb3cf843a8766/mne/decoding/tests/test_search_light.py#L346):
```
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/sklearn/utils/estimator_checks.py:269: in _yield_transformer_checks
if tags.transformer_tags.preserves_dtype:
E AttributeError: 'NoneType' object has no attribute 'preserves_dtype'
```
<details>
<summary>Full traceback</summary>
```
Downloading https://pypi.anaconda.org/scientific-python-nightly-wheels/simple/scikit-learn/1.6.dev0/scikit_learn-1.6.dev0-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl (13.1 MB)
...
$ mne sys_info
...
├☑ sklearn 1.6.dev0
...
$ pytest -m 'not (ultraslowtest or pgtest)' --tb=short --cov=mne --cov-report xml --color=yes --junit-xml=junit-results.xml -vv mne/
============================= test session starts ==============================
platform linux -- Python 3.12.7, pytest-8.3.3, pluggy-1.5.0 -- /opt/hostedtoolcache/Python/3.12.7/x64/bin/python
cachedir: .pytest_cache
PyQt6 6.7.1 -- Qt runtime 6.7.3 -- Qt compiled 6.7.1
MNE 1.9.0.dev108+gcc0a15c0b -- /home/runner/work/mne-python/mne-python/mne
rootdir: /home/runner/work/mne-python/mne-python
configfile: pyproject.toml
plugins: timeout-2.3.1, qt-4.4.0, cov-6.0.0
collecting ... collected 4694 items / 1 error / 70 deselected / 5 skipped / 4624 selected
==================================== ERRORS ====================================
___________ ERROR collecting mne/decoding/tests/test_search_light.py ___________
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/pluggy/_hooks.py:513: in __call__
return self._hookexec(self.name, self._hookimpls.copy(), kwargs, firstresult)
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/pluggy/_manager.py:120: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/_pytest/python.py:245: in pytest_pycollect_makeitem
return list(collector._genfunctions(name, obj))
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/_pytest/python.py:462: in _genfunctions
self.ihook.pytest_generate_tests.call_extra(methods, dict(metafunc=metafunc))
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/pluggy/_hooks.py:574: in call_extra
return self._hookexec(self.name, hookimpls, kwargs, firstresult)
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/pluggy/_manager.py:120: in _hookexec
return self._inner_hookexec(hook_name, methods, kwargs, firstresult)
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/_pytest/python.py:115: in pytest_generate_tests
metafunc.parametrize(*marker.args, **marker.kwargs, _param_mark=marker)
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/_pytest/python.py:1206: in parametrize
argnames, parametersets = ParameterSet._for_parametrize(
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/_pytest/mark/structures.py:159: in _for_parametrize
parameters = cls._parse_parametrize_parameters(argvalues, force_tuple)
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/_pytest/mark/structures.py:146: in _parse_parametrize_parameters
ParameterSet.extract_from(x, force_tuple=force_tuple) for x in argvalues
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/sklearn/utils/estimator_checks.py:518: in checks_generator
for check in _yield_all_checks(estimator, legacy=legacy):
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/sklearn/utils/estimator_checks.py:369: in _yield_all_checks
for check in _yield_transformer_checks(estimator):
/opt/hostedtoolcache/Python/3.12.7/x64/lib/python3.12/site-packages/sklearn/utils/estimator_checks.py:269: in _yield_transformer_checks
if tags.transformer_tags.preserves_dtype:
E AttributeError: 'NoneType' object has no attribute 'preserves_dtype'
```
</details>
According to a local `git bisect` this was introduced by #30122. I see some API entries and maybe we need to adjust some code, but it seems like an API change should at least emit a future or deprecation warning rather than fail hard like this.
### Steps/Code to Reproduce
```
$ pytest mne/decoding/tests/test_search_light.py
```
### Expected Results
Tests pass (or at least we get an informative error about our misuse of something!)
### Actual Results
:point_up:
### Versions
```shell
System:
python: 3.12.3 (main, Sep 11 2024, 14:17:37) [GCC 13.2.0]
executable: /home/larsoner/python/virtualenvs/base/bin/python
machine: Linux-6.8.0-48-generic-x86_64-with-glibc2.39
Python dependencies:
sklearn: 1.6.dev0
pip: 24.0
setuptools: 75.3.0
numpy: 2.2.0.dev0
scipy: 1.15.0.dev0
Cython: 3.0.10
pandas: 3.0.0.dev0+1597.g9e10119dc8
matplotlib: 3.10.0.dev491+gf7b3def52b
joblib: 1.4.2
threadpoolctl: 3.5.0
Built with OpenMP: True
threadpoolctl info:
user_api: blas
internal_api: openblas
num_threads: 1
prefix: libscipy_openblas
filepath: /home/larsoner/python/virtualenvs/base/lib/python3.12/site-packages/numpy.libs/libscipy_openblas64_-6bb31eeb.so
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
user_api: blas
internal_api: openblas
num_threads: 1
prefix: libscipy_openblas
filepath: /home/larsoner/python/virtualenvs/base/lib/python3.12/site-packages/scipy.libs/libscipy_openblas-68440149.so
version: 0.3.28
threading_layer: pthreads
architecture: Haswell
user_api: openmp
internal_api: openmp
num_threads: 8
prefix: libgomp
filepath: /usr/lib/x86_64-linux-gnu/libgomp.so.1.0.0
version: None
```
| closed | 2024-11-07T19:25:33Z | 2024-11-12T16:21:22Z | https://github.com/scikit-learn/scikit-learn/issues/30237 | [
"Bug",
"Developer API"
] | larsoner | 10 |
Gerapy/Gerapy | django | 49 | Suggestion: support adding parameters when scheduling a spider | As the title says. | closed | 2018-03-23T01:21:42Z | 2019-01-24T09:43:12Z | https://github.com/Gerapy/Gerapy/issues/49 | [] | hehanlin | 2 |
plotly/jupyter-dash | dash | 83 | ImportError: cannot import name 'get_current_traceback' from 'werkzeug.debug.tbtools' | Running into a similar issue as in plotly/dash#1992:
```
Traceback:
/usr/local/lib/python3.7/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
vizx_dashboards/__init__.py:26: in <module>
from .create_dashboard import main as create_dashboard # noqa: F401
vizx_dashboards/create_dashboard.py:22: in <module>
from jupyter_dash import JupyterDash
../../../brix_venv/vizx_dashboards/lib/python3.7/site-packages/jupyter_dash/__init__.py:2: in <module>
from .jupyter_app import JupyterDash
../../../brix_venv/vizx_dashboards/lib/python3.7/site-packages/jupyter_dash/jupyter_app.py:21: in <module>
from werkzeug.debug.tbtools import get_current_traceback
E ImportError: cannot import name 'get_current_traceback' from 'werkzeug.debug.tbtools' (/home/********/brix_venv/vizx_dashboards/lib/python3.7/site-packages/werkzeug/debug/tbtools.py)
``` | closed | 2022-03-30T11:20:20Z | 2022-03-31T19:45:55Z | https://github.com/plotly/jupyter-dash/issues/83 | [] | deepyaman | 1 |
huggingface/datasets | tensorflow | 7,047 | Save Dataset as Sharded Parquet | ### Feature request
`to_parquet` currently saves the dataset as one massive, monolithic parquet file, rather than as several small parquet files. It should shard large datasets automatically.
### Motivation
This default behavior makes me very sad because a program I ran for 6 hours saved its results using `to_parquet`, putting the entire billion+ row dataset into a 171 GB *single shard parquet file* which pyarrow, apache spark, etc. all cannot work with without completely exhausting the memory of my system. I was previously able to work with larger-than-memory parquet files, but not this one. I *assume* the reason why this is happening is because it is a single shard. Making sharding the default behavior puts datasets in parity with other frameworks, such as spark, which automatically shard when a large dataset is saved as parquet.
### Your contribution
I could change the logic here https://github.com/huggingface/datasets/blob/bf6f41e94d9b2f1c620cf937a2e85e5754a8b960/src/datasets/io/parquet.py#L109-L158
to use `pyarrow.dataset.write_dataset`, which seems to support sharding, or periodically open new files. We would only shard if the user passed in a path rather than file handle. | open | 2024-07-12T23:47:51Z | 2024-07-17T12:07:08Z | https://github.com/huggingface/datasets/issues/7047 | [
"enhancement"
] | tom-p-reichel | 2 |
manrajgrover/halo | jupyter | 10 | Why not use kwargs for options ? | Hi there, this is a question / enhancement proposal:
instead of relying on type checking, why not use keyword arguments to initialize a `Halo` instance? This would be more Pythonic IMO (although I realize filing an issue just to tell you that might make me sound like an a**hole).
This way, you would have more control over accepted arguments (nothing stops someone from creating an `option` dictionary with nonsensical options, or maybe just a typo), and you would also expose the default values of arguments to the documentation / inline help of `Halo`:
```python
class Halo(object):
def __init__(self, text='', color='cyan', interval=-1, enabled=True, stream=sys.stdout):
# ... #
```
Of course, this may cause problems for `interval` since it depends on the chosen spinner, but using `None` or **-1** would probably do the trick.
Once again, this is just a suggestion ! | closed | 2017-10-02T21:00:24Z | 2017-10-04T15:31:52Z | https://github.com/manrajgrover/halo/issues/10 | [
"enhancement"
] | althonos | 4 |
openapi-generators/openapi-python-client | rest-api | 961 | Optional property of type list is initialized with empty list instead of UNSET when using from_dict() | **Describe the bug**
A model with a property of type `Union[Unset, List[SomeEnum]]` **initializes correctly** when using `__init__` initialization:
```python
>>> # with non-empty list
>>> SomeRequestBody(my_optional_list=[SomeEnum.FOO])
SomeRequestBody(my_optional_list=[<SomeEnum.FOO: 'FOO'>], additional_properties={})
>>> # with empty list
>>> SomeRequestBody()
SomeRequestBody(my_optional_list=<fake_spec_client.types.Unset object at 0x7ff2001db9d0>, additional_properties={})
```
BUT, when I initialize it using the `.from_dict()` method, **it unexpectedly provides an empty list as the default value**:
```python
>>> # everything's fine when passing an explicit value
>>> SomeRequestBody.from_dict(dict(my_optional_list=['FOO']))
SomeRequestBody(my_optional_list=[<SomeEnum.FOO: 'FOO'>], additional_properties={})
>>> # but this is not expected. I'd expect the same result as from regular initialization SomeRequestBody()
>>> SomeRequestBody.from_dict(dict())
SomeRequestBody(my_optional_list=[], additional_properties={})
```
From my point of view, `.from_dict()` initialization should stay aligned with the logic of `__init__` initialization.
**OpenAPI Spec File**
```json
{"openapi": "3.0.2", "info": {"title": "fake_spec", "version": "0.0.0"},
"paths": {},
"components": {
"schemas": {
"SomeRequestBody": {
"title": "SomeRequestBody",
"type": "object",
"properties": {
"my_optional_list": {"type": "array", "items": {"$ref": "#/components/schemas/SomeEnum"}}
}
},
"SomeEnum": {
"title": "SomeEnum",
"enum": ["FOO", "BAR"],
"type": "string"
}
}
}
}
```
**Desktop (please complete the following information):**
- OS: macOS 14.1
- Python Version: 3.10
- openapi-python-client version: reproduced on v0.13.4 (which I use because I need Python 3.7 support) and on the latest v0.17.2
| open | 2024-02-14T13:53:36Z | 2024-12-17T17:04:14Z | https://github.com/openapi-generators/openapi-python-client/issues/961 | [] | keshamin | 4 |
deepset-ai/haystack | nlp | 8,265 | docs: model switching in SentenceTransformers and OpenAI Embedders | Introduce the parameter for switching models for the selected embedders in the docs | closed | 2024-08-21T13:35:28Z | 2024-08-22T13:52:39Z | https://github.com/deepset-ai/haystack/issues/8265 | [
"type:documentation"
] | dfokina | 0 |
deepinsight/insightface | pytorch | 2,535 | Issues with reproducing some accuracies illustrated in insightface/model_zoo github | Hello,
We are having issues reproducing the accuracies illustrated in the insightface/model_zoo GitHub page, shown in the following table:
<img width="637" alt="IResnetList" src="https://github.com/deepinsight/insightface/assets/72048938/8b96fb3b-70f6-49c8-a536-02ded0e900a6">
We downloaded the associated .onnx files from the link in the last column of the table, then loaded them in a Python script using both the `load` method of the `onnx` package and the `get_model` method of the `insightface.model_zoo` package. In both cases, we obtained results different from the table on the LFW dataset (80.4% vs 99.55% for VGG2).
Do we need to preprocess the .onnx file in order to get the accuracy you show in the table?
Best regards, | open | 2024-02-27T17:32:29Z | 2024-02-27T17:32:29Z | https://github.com/deepinsight/insightface/issues/2535 | [] | Dena-Kazerani | 0 |
zappa/Zappa | flask | 793 | [Migrated] BadRequestException: CloudWatch Logs role ARN must be set in account settings to enable logging | Originally from: https://github.com/Miserlou/Zappa/issues/1946 by [ebridges](https://github.com/ebridges)
When enabling logging with the setting `cloudwatch_log_level`, an exception is thrown if the [API Gateway Settings](https://console.aws.amazon.com/apigateway/home#/settings) have no ARN configured with permission to write to CloudWatch.
Exception encountered (stack trace below):
```
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the UpdateStage operation: CloudWatch Logs role ARN must be set in account settings to enable logging
```
## Expected Behavior
Deploy should be successful
## Actual Behavior
This exception is thrown:
```
Traceback (most recent call last):
File "/home/elektrum/venv/lib/python3.7/site-packages/zappa/cli.py", line 2779, in handle
sys.exit(cli.handle())
File "/home/elektrum/venv/lib/python3.7/site-packages/zappa/cli.py", line 509, in handle
self.dispatch_command(self.command, stage)
File "/home/elektrum/venv/lib/python3.7/site-packages/zappa/cli.py", line 546, in dispatch_command
self.deploy(self.vargs['zip'])
File "/home/elektrum/venv/lib/python3.7/site-packages/zappa/cli.py", line 830, in deploy
endpoint_url = self.deploy_api_gateway(api_id)
File "/home/elektrum/venv/lib/python3.7/site-packages/zappa/cli.py", line 2675, in deploy_api_gateway
cache_cluster_encrypted=self.stage_config.get('cache_cluster_encrypted', False)
File "/home/elektrum/venv/lib/python3.7/site-packages/zappa/core.py", line 1802, in deploy_api_gateway
self.get_patch_op('caching/dataEncrypted', cache_cluster_encrypted)
File "/home/elektrum/venv/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/home/elektrum/venv/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.errorfactory.BadRequestException: An error occurred (BadRequestException) when calling the UpdateStage operation: CloudWatch Logs role ARN must be set in account settings to enable logging
```
## Possible Fix
**Example of how to set the value using AWS CLI:**
https://docs.aws.amazon.com/cli/latest/reference/apigateway/update-account.html#examples
**Boto method that should be used to do the update to the account:**
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/apigateway.html#APIGateway.Client.update_account
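A minimal boto3 sketch of that one-time account update (the role ARN used in the commented call is a placeholder; the payload shape follows the linked `update_account` reference and should be double-checked there):

```python
def cloudwatch_role_patch(role_arn: str) -> list:
    """PatchOperations payload that sets the account-level CloudWatch role ARN."""
    return [{"op": "replace", "path": "/cloudwatchRoleArn", "value": role_arn}]


def set_cloudwatch_role(role_arn: str, client=None) -> None:
    """Apply the patch through the API Gateway account settings endpoint."""
    import boto3  # deferred so the payload helper above stays importable on its own

    client = client or boto3.client("apigateway")
    client.update_account(patchOperations=cloudwatch_role_patch(role_arn))


# set_cloudwatch_role("arn:aws:iam::123456789012:role/placeholder-apigw-cloudwatch-role")
```

Once the role ARN is set at the account level, a redeploy should get past this particular `BadRequestException`.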
## Steps to Reproduce
1. Clear out the field for Cloudwatch ARN in [API Gateway Account Settings](https://console.aws.amazon.com/apigateway/home#/settings). "Empty" is the default value.
2. Add the setting `cloudwatch_log_level` to `zappa_settings.json`
3. Run `zappa deploy [stage]`
4. Deploy should fail with the above stack trace.
## Your Environment
* Zappa version used: `0.48.2`
* Python version: `3.7`
* Operating System: `Amazon Linux AMI release 2018.03`
<details>
<summary>pip freeze output</summary>
```
argcomplete==1.9.3
boto3==1.9.243
botocore==1.12.243
CacheControl==0.12.5
cachy==0.2.0
certifi==2019.9.11
cfn-flip==1.2.1
chardet==3.0.4
cleo==0.6.8
Click==7.0
defusedxml==0.6.0
Django==2.2.6
django-allauth==0.40.0
django-dotenv==1.4.2
django-sslserver==0.21
django-storages==1.7.2
djangorestframework==3.10.3
docutils==0.15.2
durationpy==0.5
future==0.16.0
gunicorn==19.9.0
hjson==3.0.1
html5lib==1.0.1
idna==2.8
jmespath==0.9.3
jsonschema==3.1.1
kappa==0.6.0
lambda-packages==0.20.0
lockfile==0.12.2
msgpack==0.6.2
oauthlib==3.1.0
pastel==0.1.1
placebo==0.9.0
poetry==0.12.17
psycopg2-binary==2.8.3
pylev==1.3.0
pyrsistent==0.14.11
python-dateutil==2.6.1
python-slugify==1.2.4
python3-openid==3.1.0
pytz==2019.3
PyYAML==5.1.2
requests==2.22.0
requests-oauthlib==1.2.0
s3transfer==0.2.1
shellingham==1.3.1
six==1.12.0
sqlparse==0.3.0
toml==0.10.0
tomlkit==0.5.8
tqdm==4.19.1
troposphere==2.5.2
Unidecode==1.1.1
urllib3==1.25.6
Werkzeug==0.16.0
wsgi-request-logger==0.4.6
zappa==0.48.2
```
</details>
<details>
<summary>Output of `uname -a`</summary>
`Linux cc900f5afd54 4.9.184-linuxkit #1 SMP Tue Jul 2 22:58:16 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux`
</details>
| open | 2021-02-20T12:42:31Z | 2025-03-21T16:33:11Z | https://github.com/zappa/Zappa/issues/793 | [
"needs-review"
] | jneves | 5 |
mljar/mljar-supervised | scikit-learn | 482 | Splitting train/test has off-by-one error | I got this error:
```
The sum of train_size and test_size = 55757, should be smaller than the number of samples 55756. Reduce test_size and/or train_size.
Traceback (most recent call last):
File "C:\Users\off99\anaconda3\lib\site-packages\supervised\base_automl.py", line 1084, in _fit
trained = self.train_model(params)
File "C:\Users\off99\anaconda3\lib\site-packages\supervised\base_automl.py", line 371, in train_model
mf.train(results_path, model_subpath)
File "C:\Users\off99\anaconda3\lib\site-packages\supervised\model_framework.py", line 165, in train
train_data, validation_data = self.validation.get_split(k_fold, repeat)
File "C:\Users\off99\anaconda3\lib\site-packages\supervised\validation\validation_step.py", line 30, in get_split
return self.validator.get_split(k, repeat)
File "C:\Users\off99\anaconda3\lib\site-packages\supervised\validation\validator_split.py", line 76, in get_split
X_train, X_validation, y_train, y_validation = train_test_split(
File "C:\Users\off99\anaconda3\lib\site-packages\sklearn\model_selection\_split.py", line 2175, in train_test_split
n_train, n_test = _validate_shuffle_split(n_samples, test_size, train_size,
File "C:\Users\off99\anaconda3\lib\site-packages\sklearn\model_selection\_split.py", line 1849, in _validate_shuffle_split
raise ValueError('The sum of train_size and test_size = %d, '
ValueError: The sum of train_size and test_size = 55757, should be smaller than the number of samples 55756. Reduce test_size and/or train_size.
```
Here's the code that splits the dataset

This is the code that is used for training:
```py
from supervised.automl import AutoML
automl = AutoML(
total_time_limit=3600,
mode='Perform',
ml_task='binary_classification',
eval_metric='auc',
max_single_prediction_time=None,
golden_features=False,
kmeans_features=False,
train_ensemble=True,
algorithms=[
# 'Baseline',
# 'Linear',
# 'Decision Tree',
'Random Forest',
'Extra Trees',
'LightGBM',
'Xgboost',
'CatBoost',
'Neural Network'
],
validation_strategy={
"validation_type": "split",
"train_ratio": train_ratio,
"shuffle": False,
"stratify": False
},
)
automl.fit(X, y)
```
What is the cause of this off-by-one error? How do I fix it?
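One generic way such an off-by-one arises (purely illustrative, not a claim about mljar's internals) is computing the two split sizes independently, with rounding that can overshoot by one:

```python
import math


def naive_sizes(n: int, train_ratio: float) -> tuple:
    """Rounding each side up independently overshoots whenever n * ratio isn't whole."""
    return math.ceil(n * train_ratio), math.ceil(n * (1 - train_ratio))


def safe_sizes(n: int, train_ratio: float) -> tuple:
    """Derive the test size from the train size so the two always sum to n."""
    train = int(n * train_ratio)
    return train, n - train


n = 55756  # number of samples from the error message above
train, test = naive_sizes(n, 0.8)
overshoot = train + test - n  # 1, matching the "sum ... = 55757" in the report
```

Deriving one side from the other, as in `safe_sizes`, makes the overshoot impossible by construction.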
It seems mljar probably interpreted the ratio wrongly or rounded the number wrongly somehow. I just wanted to feed my own validation set into the training process (no CV, just a simple cut-in-the-middle split) | open | 2021-10-27T12:20:47Z | 2021-10-27T14:11:08Z | https://github.com/mljar/mljar-supervised/issues/482 | [
"bug"
] | offchan42 | 4 |
jowilf/starlette-admin | sqlalchemy | 433 | Enhancement: APScheduler Integration | **Is your feature request related to a problem? Please describe.**
APScheduler is a "Task scheduling library for Python". It allows us to schedule some tasks, mostly based on functions.
The problem is this: ability to view, add, reschedule, cancel or stop a task/job from the Starlette Admin interface.
**Describe the solution you'd like**
I'm working on something.
**Describe alternatives you've considered**
I didn't really consider any, but FastAPI Amis Admin supports it through a third-party package: [amisadmin/fastapi-scheduler: FastAPI-Scheduler is a simple scheduled task management FastAPI extension based on APScheduler.](https://github.com/amisadmin/fastapi-scheduler)
**Additional context**
I believe this could be a separate package like `fastapi-scheduler`, but I still think it should live in this package behind optional dependencies.
"enhancement"
] | hasansezertasan | 1 |
fohrloop/dash-uploader | dash | 105 | Getting a maximum file size with default settings. | I am trying to upload around 900 files a 15MB I get the following popup:
Total file size too large (11547.9 Mb) ! Maximum total filesize is: 5120.0 Mb
However, I did not set a maximum file size in the plugin.
    html.H3("Upload MS-files"),
    html.Div(
        du.Upload(
            id="ms-uploader",
            filetypes=["tar", "zip", "mzxml", "mzml", "mzXML", "mzML"],
            upload_id=uuid.uuid1(),
            max_files=10000,
            pause_button=True,
            cancel_button=True,
            text="Upload mzXML/mzML files.",
        ),
        style={
            "textAlign": "center",
            "width": "100%",
            "padding": "0px",
            "margin-bottom": "20px",
            "display": "inline-block",
        },
    ),
I thought that without that setting the file size would be unlimited? | closed | 2022-10-15T17:10:39Z | 2024-05-23T10:11:42Z | https://github.com/fohrloop/dash-uploader/issues/105 | [] | sorenwacker | 4 |
opengeos/leafmap | streamlit | 157 | CRSError: Invalid projection: epsg:4326 on OSM notebook example | `
CRSError: Invalid projection: epsg:4326: (Internal Proj Error: proj_create: SQLite error on SELECT name, type, coordinate_system_auth_name, coordinate_system_code, datum_auth_name, datum_code, area_of_use_auth_name, area_of_use_code, text_definition, deprecated FROM geodetic_crs WHERE auth_name = ? AND code = ?: no such column: area_of_use_auth_name)
`
### Environment Information
leafmap-0.6.1
Python 3.9.7
windows 10 pro
Could be related to this issue
https://github.com/geopandas/geopandas/issues/1887

| closed | 2021-12-28T00:46:28Z | 2021-12-28T01:47:40Z | https://github.com/opengeos/leafmap/issues/157 | [
"bug"
] | Niko-La | 1 |
pydantic/pydantic | pydantic | 10,866 | ErrorDetail loc does not use alias, which is inconsistent with V1 | ### Initial Checks
- [X] I confirm that I'm using Pydantic V2
### Description
The field alias is not used in `loc` in Pydantic V2. In V1, if we configured the model as below, `loc` was always populated with the aliased name, no matter what.
```
class Config:
alias_generator = to_lower_camel
allow_population_by_field_name = True
```
However, in V2, if the input uses snake_case, `loc` is populated in snake_case; see `first_name` in the example below. Note that the JSON schema is generated consistently with V1 (all properties are converted correctly to camelCase).
Because validation errors will be shown to customers, IMO they should be consistent with the JSON schema.
Should we have a config setting to make `loc` consistent with the JSON schema here?
### Example Code
```Python
import json

from pydantic import BaseModel, ConfigDict, ValidationError
from pydantic.alias_generators import to_camel
class Person(BaseModel):
model_config = ConfigDict(
alias_generator=to_camel,
populate_by_name=True,
)
first_name: str
last_name: str
age: int
print(json.dumps(Person.model_json_schema(by_alias=True), indent=2))
try:
Person.model_validate({"first_name": 1, "last_name": "bar", "age": 34})
except ValidationError as e:
print(e.errors())
"""
{
  "properties": {
    "firstName": {
      "title": "Firstname",
      "type": "string"
    },
    "lastName": {
      "title": "Lastname",
      "type": "string"
    },
    "age": {
      "title": "Age",
      "type": "integer"
    }
  },
  "required": [
    "firstName",
    "lastName",
    "age"
  ],
  "title": "Person",
  "type": "object"
}
[{'type': 'string_type', 'loc': ('first_name',), 'msg': 'Input should be a valid string', 'input': 1, 'url': 'https://errors.pydantic.dev/2.9/v/string_type'}]
"""
```
### Python, Pydantic & OS Version
```Text
pydantic version: 2.9.2
pydantic-core version: 2.23.4
pydantic-core build: profile=release pgo=false
install path: ~/Codes/misc/python/.venv/lib/python3.12/site-packages/pydantic
python version: 3.12.7 (main, Nov 9 2024, 12:30:18) [Clang 16.0.0 (clang-1600.0.26.4)]
platform: macOS-15.1-arm64-arm-64bit
related packages: typing_extensions-4.12.2
commit: unknown
```
| open | 2024-11-18T11:31:45Z | 2024-11-19T10:28:57Z | https://github.com/pydantic/pydantic/issues/10866 | [
"feature request"
] | khaind | 4 |
FlareSolverr/FlareSolverr | api | 937 | Can't add indexer in Jackett | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
- FlareSolverr version:
- Last working FlareSolverr version:
- Operating system:
- Are you using Docker: [yes]
- FlareSolverr User-Agent (see log traces or / endpoint):
- Are you using a VPN: [no]
- Are you using a Proxy: [no]
- Are you using Captcha Solver: [i don't know]
- If using captcha solver, which one:
- URL to test this issue:
```
### Description
My Jackett and FlareSolverr containers run correctly in Docker, but I run into trouble when I try to add the YGG indexer in Jackett.
Jackett Log

Flaresolver Log

Jackett error

### Logged Error Messages
```text
An error occurred while updating this indexer
Error parsing response, check FlareSolverr. Response: <!DOCTYPE html><html lang="en"><head> <meta charset="utf-8"> <title>openmediavault Workbench</title> <meta http-equiv="X-UA-Compatible" content="IE=edge"> <meta name="ROBOTS" content="NOINDEX, NOFOLLOW"> <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1, user-scalable=no"> <meta name="apple-mobile-web-app-capable" content="yes"> <link rel="icon" type="image/x-icon" href="favicon.ico"> <link rel="apple-touch-icon" href="favicon_180x180.png"> <link rel="icon" href="favicon.svg" sizes="any" type="image/svg+xml"> <style> .boot-loader { position: absolute; display: flex; align-items: center; justify-content: center; width: 100%; height: 100%; top: 0; left: 0; cursor: url("data:image/x-icon;base64,AAABAAEAICAAAAEAIACoEAAAFgAAACgAAAAgAAAAQAAAAAEAIAAAAAAAABAAAMIeAADCHgAAAAAAAAAAAAD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8AAAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wAAAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP////8A////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wD///8AAAAA/wAAAP8AAAD/AAAA/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P8AAAD/AAAA/wAAAP8AAAD/AAAA/wAAAP////8A////AP///wD///8A////AP///wD///8A////AP///wD///8A////AP///wAAAAD/AAAA/wAAAP8AAAD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/wAAAP8AAAD/AAAA/wAAAP8AAAD/AAAA/////wD///8A////AP///wD///8A////AP///wD///8A////AP///wAAAAD/AAAA/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/wAAAP8AAAD/AAAA/wAAAP////8A////AP///wD///8A////AP///wD///8A////AAAAAP8AAAD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/8Dg4P/A4OD/wODg/
```
### Screenshots
_No response_ | closed | 2023-11-01T16:43:18Z | 2023-11-07T16:38:37Z | https://github.com/FlareSolverr/FlareSolverr/issues/937 | [
"more information needed"
] | Thorsteiin | 1 |
ultralytics/ultralytics | computer-vision | 18,729 | How to Resume Hyperparameter Tuning in YOLO11 Using Last Saved Parameters? | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/ultralytics/ultralytics/discussions) and found no similar questions.
### Question
I am performing hyperparameter tuning using the YOLO11 model. Unfortunately, the tuning process was interrupted midway due to a power outage. Therefore, I plan to resume tuning from the last progress point.
Based on a previous question https://github.com/orgs/ultralytics/discussions/9536, the following method was suggested:
```python
from ultralytics import YOLO
# Initialize your model
model = YOLO('yolo11n.pt')
# Load best hyperparameters
model.overrides.update('path/to/best_hyperparameters.yaml')
# Start tuning
model.tune(data="coco8.yaml", ...remaining parameters...)
```
Following this approach, I substituted "last_hyperparameter" for "best_hyperparameter", but encountered the following error:
```python
ValueError: dictionary update sequence element #0 has length 1; 2 is required
```
This error seems to be related to the value range described here:
https://docs.ultralytics.com/ko/guides/hyperparameter-tuning/#mutate-hyperparameters.
However, both "last_hyperparameter" and "best_hyperparameter" are defined as single values, not ranges. How should I proceed if I want to continue tuning from the last_hyperparameter?
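For what it's worth, that exact `ValueError` is what `dict.update` raises when handed a plain string (each character becomes a length-1 element), so one plausible reading is that the file path itself reached `overrides.update` rather than a parsed mapping. A sketch of parsing first, assuming the tuner YAML is a flat key-to-value mapping and PyYAML is available (the demo file below is a stand-in, not a real tuner output):

```python
import os
import tempfile

import yaml  # PyYAML


def load_hyp(path: str) -> dict:
    """Parse a tuner hyperparameter YAML into the flat mapping dict.update expects."""
    with open(path) as f:
        hyp = yaml.safe_load(f)
    if not isinstance(hyp, dict):
        raise ValueError(f"{path} did not contain a key/value mapping")
    return hyp


# Self-check with a stand-in file (the real path would be the tuner's output YAML):
fd, demo = tempfile.mkstemp(suffix=".yaml")
with os.fdopen(fd, "w") as f:
    f.write("lr0: 0.01\nmomentum: 0.9\n")
hyp = load_hyp(demo)
os.remove(demo)
# model.overrides.update(hyp)  # instead of passing the path string itself
```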
### Additional
_No response_ | open | 2025-01-17T09:31:12Z | 2025-02-26T23:53:15Z | https://github.com/ultralytics/ultralytics/issues/18729 | [
"question"
] | bjh03205 | 20 |
miguelgrinberg/flasky | flask | 146 | Many_to_Many relationship | I'm trying to make many_to_many relationship as I understood from the book, between two tables
1) Car_Model
2) Company
Each car model is available at more than one company, and each company carries more than one car model. I tried to implement this, but I think my structure is wrong or I'm making a mistake somewhere; I'd appreciate any help with an explanation.
``` python
class Association(db.Model):
__tablename__ = 'Association'
company_id = db.Column(db.Integer, db.ForeignKey('Companies.id'), primary_key=True)
model_id = db.Column(db.Integer, db.ForeignKey('Models.id'), primary_key=True)
class Car_Model(db.Model):
__tablename__ = 'Models'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(10))
# year
brand_id = db.Column(db.Integer, db.ForeignKey('Brands.id'))
company = db.relationship('Association',
foreign_keys=[Association.model_id],
backref=db.backref('model', lazy='joined'),
lazy='dynamic',
cascade='all, delete-orphan')
def __repr__(self):
return '<Model %r>' % self.name
class Company(db.Model):
__tablename__ = 'Companies'
id = db.Column(db.Integer, primary_key=True)
name = db.Column(db.String(32), unique=True)
price = db.Column(db.Integer)
model = db.relationship('Association',
foreign_keys=[Association.company_id],
backref=db.backref('company', lazy='joined'),
lazy='dynamic',
cascade='all, delete-orphan')
def __repr__(self):
return '<Company %r>' % self.name
```
I'm getting this error:
FlushError: Attempting to flush an item of type <class 'main.Car_Model'> as a member of collection "Company.model". Expected an object of type <class 'main.Association'> or a polymorphic subclass of this type.
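For comparison, here is a self-contained plain-SQLAlchemy sketch of the same association-object pattern (SQLAlchemy 1.4+ assumed; the names mirror the models above). The collections hold `Association` rows, so records get linked through an `Association` instance; appending a `Car_Model` directly to `Company.model` is the kind of usage that triggers a FlushError like the one quoted:

```python
from sqlalchemy import Column, ForeignKey, Integer, String, create_engine
from sqlalchemy.orm import Session, declarative_base, relationship

Base = declarative_base()


class Association(Base):
    __tablename__ = "association"
    company_id = Column(ForeignKey("companies.id"), primary_key=True)
    model_id = Column(ForeignKey("models.id"), primary_key=True)
    company = relationship("Company", back_populates="models")
    model = relationship("CarModel", back_populates="companies")


class CarModel(Base):
    __tablename__ = "models"
    id = Column(Integer, primary_key=True)
    name = Column(String(10))
    companies = relationship("Association", back_populates="model")


class Company(Base):
    __tablename__ = "companies"
    id = Column(Integer, primary_key=True)
    name = Column(String(32))
    models = relationship("Association", back_populates="company")


engine = create_engine("sqlite://")
Base.metadata.create_all(engine)

with Session(engine) as session:
    civic, acme = CarModel(name="Civic"), Company(name="Acme Motors")
    # Link through an Association instance; the save-update cascade also
    # persists the related CarModel and Company objects.
    session.add(Association(company=acme, model=civic))
    session.commit()
    n_links = session.query(Association).count()
```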
| closed | 2016-05-18T18:52:28Z | 2016-06-01T16:23:27Z | https://github.com/miguelgrinberg/flasky/issues/146 | [
"question"
] | Mohamad1994HD | 3 |
xuebinqin/U-2-Net | computer-vision | 244 | doubt about training datasets resize method | Hello Mr Xuebin:
About the dataset resize method: my training images are very big (4000*3000), so I need to resize them to 320*320 offline. But I compared the results of two resize methods:
1. Downsampling three times with an image pyramid to 500*375, then resizing to 320*320 by linear interpolation.
2. Resizing directly from 4000*3000 to 320*320 by linear interpolation.
The results are very different, and the second method produces jagged and discontinuous edges, as below:

Do you think the difference will affect the model? Which method is more reasonable? In your paper you mentioned that you resize the image by linear interpolation, but if the original image is too big, that causes the problem I just described.
Thanks
Best Regards
| open | 2021-08-11T07:52:25Z | 2021-08-13T02:45:40Z | https://github.com/xuebinqin/U-2-Net/issues/244 | [] | xiongzhu666 | 3 |
piskvorky/gensim | machine-learning | 3,501 | Docs still reference fasttext.build_vocab sentences parameter | Current version of Fasttext.build_vocab method has a first parameter with two possible names, neither of which is "sentences" despite that 2+ year-old (pre version 4) parameter name being listed on https://radimrehurek.com/gensim/models/fasttext.html
| open | 2023-10-31T04:10:28Z | 2023-11-07T21:19:28Z | https://github.com/piskvorky/gensim/issues/3501 | [] | Jeff-Winchell | 1 |
LibrePhotos/librephotos | django | 1,273 | Use original filename when saving photos or videos | **Describe the enhancement you'd like**
Saving a file should give it its original filename instead of the hash
**Describe why this will benefit the LibrePhotos**
Being able to save an image with its original filename is better in every way.
Saving e.g. `http://localhost:3000/media/photos/316f2524acd7f5841c924fe4f1a422641` should use `IMG_2024_0101_1234.jpg` instead of `316f2524acd7f5841c924fe4f1a422641.jpg`
All that is needed is an additional HTTP header:
`Content-Disposition: inline; filename="IMG_2024_0101_1234.jpg"`
**inline** so it preserves the possibility to view the image in the browser, and at the same time you can save it with its real filename.
| closed | 2024-05-17T22:19:03Z | 2024-05-20T09:12:56Z | https://github.com/LibrePhotos/librephotos/issues/1273 | [
"enhancement",
"good first issue"
] | HMKnapp | 1 |
vitalik/django-ninja | django | 791 | Adding extra Schemas to the openapi_extra method | Hello @vitalik!
Is it possible to include extra schemas (not related to a specific route)? For example, as below?
```python
api = NinjaAPI(
openapi_extra={
"components": {
"schemas": {
"TestSchema": {
"title": "TestSchema",
"type": "object",
"properties": {
"cod_test": {"title": "Cód. test", "type": "integer"},
"test": {
"title": "Test",
"maxLength": 30,
"type": "string",
},
},
"required": ["cod_test", "test"],
},
}
}
}
)
```
Or using Schemas/model Schemas directly:
```python
api = NinjaAPI(
openapi_extra={
"components": {
"schemas": {
"TestSchema": TestSchema.schema(),
}
}
}
)
```
Thanks in advance for the help! | closed | 2023-07-13T15:42:23Z | 2023-07-14T12:44:54Z | https://github.com/vitalik/django-ninja/issues/791 | [] | joaoflaviosantos | 2 |
mwaskom/seaborn | pandas | 3,072 | Bug: StopIteration error in histplot | ```python
import seaborn as sns
penguins = sns.load_dataset("penguins")
sns.histplot(data=penguins, x="flipper_length_mm")
```
gives a `StopIteration` error with matplotlib 3.6.1 (works with 3.6.0). seaborn version 0.12.0, python version 3.10.
Detailed error message:
```
---------------------------------------------------------------------------
StopIteration Traceback (most recent call last)
Input In [2], in <cell line: 2>()
1 penguins = sns.load_dataset("penguins")
----> 2 sns.histplot(data=penguins, x="flipper_length_mm")
File ~/env/env/lib/python3.10/site-packages/seaborn/distributions.py:1418, in histplot(data, x, y, hue, weights, stat, bins, binwidth, binrange, discrete, cumulative, common_bins, common_norm, multiple, element, fill, shrink, kde, kde_kws, line_kws, thresh, pthresh, pmax, cbar, cbar_ax, cbar_kws, palette, hue_order, hue_norm, color, log_scale, legend, ax, **kwargs)
1416 else:
1417 method = ax.plot
-> 1418 color = _default_color(method, hue, color, kwargs)
1420 if not p.has_xy_data:
1421 return ax
File ~/env/env/lib/python3.10/site-packages/seaborn/utils.py:139, in _default_color(method, hue, color, kws)
134 scout.remove()
136 elif method.__name__ == "bar":
137
138 # bar() needs masked, not empty data, to generate a patch
--> 139 scout, = method([np.nan], [np.nan], **kws)
140 color = to_rgb(scout.get_facecolor())
141 scout.remove()
File ~/env/env/lib/python3.10/site-packages/matplotlib/__init__.py:1423, in _preprocess_data.<locals>.inner(ax, data, *args, **kwargs)
1420 @functools.wraps(func)
1421 def inner(ax, *args, data=None, **kwargs):
1422 if data is None:
-> 1423 return func(ax, *map(sanitize_sequence, args), **kwargs)
1425 bound = new_sig.bind(ax, *args, **kwargs)
1426 auto_label = (bound.arguments.get(label_namer)
1427 or bound.kwargs.get(label_namer))
File ~/env/env/lib/python3.10/site-packages/matplotlib/axes/_axes.py:2373, in Axes.bar(self, x, height, width, bottom, align, **kwargs)
2371 x0 = x
2372 x = np.asarray(self.convert_xunits(x))
-> 2373 width = self._convert_dx(width, x0, x, self.convert_xunits)
2374 if xerr is not None:
2375 xerr = self._convert_dx(xerr, x0, x, self.convert_xunits)
File ~/env/env/lib/python3.10/site-packages/matplotlib/axes/_axes.py:2182, in Axes._convert_dx(dx, x0, xconv, convert)
2170 try:
2171 # attempt to add the width to x0; this works for
2172 # datetime+timedelta, for instance
(...)
2179 # removes the units from unit packages like `pint` that
2180 # wrap numpy arrays.
2181 try:
-> 2182 x0 = cbook._safe_first_finite(x0)
2183 except (TypeError, IndexError, KeyError):
2184 pass
File ~/env/env/lib/python3.10/site-packages/matplotlib/cbook/__init__.py:1749, in _safe_first_finite(obj, skip_nonfinite)
1746 raise RuntimeError("matplotlib does not "
1747 "support generators as input")
1748 else:
-> 1749 return next(val for val in obj if safe_isfinite(val))
StopIteration:
``` | closed | 2022-10-10T22:53:33Z | 2022-11-04T10:36:35Z | https://github.com/mwaskom/seaborn/issues/3072 | [
"upstream"
] | yannikschaelte | 3 |
ultralytics/yolov5 | machine-learning | 13,078 | How to combine a yolov8 detector and a classifier | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hi guys, I want to combine a detector trained with YOLOv8 and a classifier built with EfficientNetB3.
My detector has been saved with `model.save(output_model_path)` and so has a `.pt` extension, while the classifier has been saved the same way but has a `.h5` extension.
Now how can I combine these two models and test the system on a custom video? I have tried the code below, but I'm receiving the error `TypeError: 'dict' object is not callable` on `results = detector(frame)` (line 20).
Thanks
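A hedged note on the likely cause of that `TypeError`, before the full script below: `torch.load` on an Ultralytics training checkpoint typically returns a plain `dict` (with keys such as `'model'` and `'epoch'`), and a `dict` is not callable, which is exactly what the error says. A minimal stdlib-only sketch of the failure mode (the checkpoint contents here are a made-up stand-in, not the real file):

```python
# Hypothetical stand-in for what torch.load() returns for a training
# checkpoint: a plain dict, not a callable model object.
checkpoint = {"model": "<nn.Module weights>", "epoch": 100}

try:
    checkpoint("frame")  # calling a dict raises TypeError
except TypeError as exc:
    print(type(exc).__name__)  # -> TypeError
```

If that is the cause, loading through the Ultralytics API (e.g. `from ultralytics import YOLO; detector = YOLO('yolov8m_trained.pt')`) and reading detections from `results[0].boxes` is the usual fix; the exact attribute names depend on your ultralytics version, so treat them as assumptions to verify.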
```python
import torch
from tensorflow.keras.models import load_model
import cv2

detector = torch.load('/myPath/yolov8m_trained.pt')
classifier = load_model('/myPath/efficientnet_model_unfreeze.h5')

video_path = '/myPath//video_test.mp4'
cap = cv2.VideoCapture(video_path)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = detector(frame)
    detections = results.xyxy[0].cpu().numpy()
    for detection in detections:
        x1, y1, x2, y2, conf, cls = detection
        roi = frame[int(y1):int(y2), int(x1):int(x2)]
        roi_resized = cv2.resize(roi, (224, 224))
        roi_resized = roi_resized / 255.0
        roi_resized = roi_resized.reshape(1, 224, 224, 3)
        pred = classifier.predict(roi_resized)
        class_id = pred.argmax(axis=1)[0]
        label = f'Class: {class_id}, Conf: {conf:.2f}'
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, label, (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    cv2.imshow('Video', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

output_path = 'output_video.mp4'
fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter(output_path, fourcc, 30.0, (int(cap.get(3)), int(cap.get(4))))
cap = cv2.VideoCapture(video_path)

while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    results = detector(frame)
    detections = results.xyxy[0].cpu().numpy()
    for detection in detections:
        x1, y1, x2, y2, conf, cls = detection
        roi = frame[int(y1):int(y2), int(x1):int(x2)]
        roi_resized = cv2.resize(roi, (224, 224))
        roi_resized = roi_resized / 255.0
        roi_resized = roi_resized.reshape(1, 224, 224, 3)
        pred = classifier.predict(roi_resized)
        class_id = pred.argmax(axis=1)[0]
        label = f'Class: {class_id}, Conf: {conf:.2f}'
        cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
        cv2.putText(frame, label, (int(x1), int(y1) - 10), cv2.FONT_HERSHEY_SIMPLEX, 0.9, (0, 255, 0), 2)
    out.write(frame)

cap.release()
out.release()
cv2.destroyAllWindows()
```
| closed | 2024-06-10T14:28:51Z | 2024-10-20T19:47:29Z | https://github.com/ultralytics/yolov5/issues/13078 | [
"question"
] | Joeyabuki99 | 4 |
sunscrapers/djoser | rest-api | 788 | VK Oauth2 doesn't work from Djoser Social | Hello everyone who sees my issue. So, I'm trying to implement **VK** authorization, but it doesn't work. However, I'm doing what I should do according to the documentation and it works well for **Mailru** authorization!
First of all, my OAuth endpoint is located at `/api/auth/o/`. I send a GET request to the endpoint plus the provider name (in my case _mailru_, which doesn't matter now since it works, and _vk-oauth2_), and I also need to pass a `redirect_uri`. This returns an authorization link that I send to the user. At the `redirect_uri` I passed, I then receive the query parameters _state_, _redirect_state_ (VK only; this parameter is not mentioned in the djoser documentation) and _code_.
Then I send a POST request to the same endpoint with the data I received (state, code). It is supposed to respond with an access token, which it does for Mailru OAuth, but not for VK.
**I get the following error from djoser:**
```
{
"non_field_errors": [
"Your credentials aren't allowed"
]
}
```
**P.S.**
Here's the documentation https://djoser.readthedocs.io/en/latest/social_endpoints.html | closed | 2024-01-15T16:31:01Z | 2024-03-31T12:10:45Z | https://github.com/sunscrapers/djoser/issues/788 | [
"invalid"
] | va1ngvarr | 1 |
qubvel-org/segmentation_models.pytorch | computer-vision | 332 | How to train a model | Hello everyone I'm trying to train a Unet on my dataset using only pytorch and a library called torch_snippets, so my question is how do you train a model from loading the dataset to prediction could you please provide me with an example code and also I have written the code to get the data
```
class Dummy_Train(Dataset):
    def __init__(self):
        self.items = stems(f'/content/drive/MyDrive/ds/img')[:375]

    def __len__(self):
        return len(self.items)

    def __getitem__(self, ix):
        image = cv2.imread(f'/content/drive/MyDrive/ds/img/{self.items[ix]}.png', 1)
        image = cv2.resize(image, (224, 224))
        image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
        mask = cv2.imread(f'/content/drive/MyDrive/ds/masks_machine/{self.items[ix]}.png')
        mask = cv2.resize(mask, (224, 224))
        mask = cv2.cvtColor(mask, cv2.COLOR_BGR2GRAY)
        mask = np.where(mask == 2, 1, mask)
        return image, mask

    def choose(self):
        return self[randint(len(self))]

    def collate_fn(self, batch):
        ims, masks = list(zip(*batch))
        ims = torch.cat([tfms(im.copy() / 255.)[None] for im in ims]).float().to(device)
        ce_masks = torch.cat([torch.Tensor(mask[None]) for mask in masks]).long().to(device)
        return ims, ce_masks
```
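Since the question asks how to go from dataset to prediction, here is a minimal, framework-agnostic sketch of the shape of a training loop. It is plain Python with a dummy "model" and "loss" standing in for an `smp.Unet` and a segmentation loss, so every numeric detail is a placeholder; in real PyTorch code you would add `loss.backward()` and `optimizer.step()` where indicated:

```python
# Minimal epoch-loop skeleton. The "model" and "loss_fn" below are plain
# functions standing in for a network and a segmentation loss.
def model(x):                       # stand-in for net(batch_images)
    return x * 0.5

def loss_fn(pred, target):          # stand-in for e.g. CrossEntropyLoss
    return abs(pred - target)

def train(data, epochs=2):
    history = []
    for epoch in range(epochs):
        epoch_loss = 0.0
        for x, y in data:           # a DataLoader would yield batches here
            pred = model(x)         # forward pass
            loss = loss_fn(pred, y) # compute loss
            # loss.backward() and optimizer.step() would go here in PyTorch
            epoch_loss += loss
        history.append(epoch_loss / len(data))
    return history

data = [(1.0, 0.4), (2.0, 1.2)]     # toy (input, target) pairs
print(train(data))                  # average loss per epoch
```

With segmentation_models.pytorch you would swap in `smp.Unet(...)` for `model`, a real loss and optimizer, and your `Dummy_Train` dataset behind a `DataLoader`; the loop structure stays the same.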
### Please reply as soon as possible | closed | 2021-01-17T18:06:20Z | 2021-01-25T14:34:26Z | https://github.com/qubvel-org/segmentation_models.pytorch/issues/332 | [] | The-ML-Hero | 0 |
gee-community/geemap | streamlit | 1,212 | Make `out_file_path` parameter optional in `zonal_stats` function if `return_fc` is True
### Environment Information
- geemap version: 0.14.2
- Python version: 3.9.7
- Operating System: Windows 11
### Description
Though this is not exactly a **bug**: I noticed by reading the code of the `zonal_stats` function that if the `return_fc` parameter is set to `True`, the function only returns the result and does not export the data to a file; therefore, in this case `out_file_path` is not needed.
### What I Did
```py
zonal_stats_fc = geemap.zonal_statistics(
in_value_raster=syria_ndvi,
in_zone_vector=syria_adm1_map_feature_collection,
statistics_type="MEAN",
scale=30,
crs="EPSG:4326",
return_fc=True
)
```
In the above code, however, I get the following error:
```
zonal_stats() missing 1 required positional argument: 'out_file_path'
```
I tried to explicitly set `out_file_path` to `None`, then I got:
```
TypeError: _getfullpathname: path should be string, bytes or os.PathLike, not NoneType
```
It **only** works if I set it with a string ending with one of the accepted file types (`shp`, `csv`, etc...). Setting the value to an empty string didn't work.
I think that, at the beginning of the function, there should be a check if `return_fc` is `True`, then no other checks should be performed on `out_file_path`.
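A minimal sketch of that guard (my own illustration, not the actual geemap source; the return values are placeholders):

```python
# Hypothetical signature mirroring the requested behavior: out_file_path
# is only validated when an export is actually going to happen.
def zonal_stats(in_value_raster, in_zone_vector, out_file_path=None,
                return_fc=False, **kwargs):
    if return_fc:
        # No export happens in this branch, so out_file_path is irrelevant.
        return {"type": "FeatureCollection", "source": in_value_raster}
    if out_file_path is None:
        raise ValueError("out_file_path is required when return_fc=False")
    return f"exported to {out_file_path}"

fc = zonal_stats("ndvi", "admin1", return_fc=True)
print(fc["type"])  # -> FeatureCollection
```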
Right now, I want to get the data instead of saving it and reading it again, so I'm forced to give the `out_file_path` a value, which is inconvenient. | closed | 2022-08-13T14:35:25Z | 2022-08-13T14:56:15Z | https://github.com/gee-community/geemap/issues/1212 | [
"bug"
] | Reslan-Tinawi | 1 |
plotly/dash | data-visualization | 2,499 | [BUG] Background callbacks - Error with Pattern Matching in Running option | **Describe your context**
I use a background callback with Celery + Redis. The background-callback example shows a `running` option that toggles components while the callback runs. The problem is that I am using pattern-matching component IDs; if I remove the `running` option, there is no problem.
This is my app.py:

This is my callback:

```
dash 2.9.0
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-mantine-components 0.12.0
dash-table 5.0.0
```
**Describe the bug**
The `running` option does not work in a background callback when it targets pattern-matched components. Also, why does the exception call it a long callback instead of a background callback?
Error:
```
File "/usr/local/lib/python3.10/site-packages/dash/_callback.py", line 167, in callback
validate_long_inputs(x[0] for x in long_spec["running"])
File "/usr/local/lib/python3.10/site-packages/dash/_callback.py", line 209, in validate_long_inputs
raise WildcardInLongCallback(
dash.exceptions.WildcardInLongCallback: long callbacks does not support dependencies with
pattern-matching ids
Received: <Output `{"index":["MATCH"],"type":"main_apm_generate_boxplot_button"}.disabled`>
```
| closed | 2023-04-07T00:34:25Z | 2024-03-28T20:27:59Z | https://github.com/plotly/dash/issues/2499 | [] | slyin87 | 4 |
thtrieu/darkflow | tensorflow | 622 | full list of label names supported in dark flow - tiny yolo | Hi, where can we find a full list of the label names supported in darkflow (tiny YOLO)?
thanks much ! | open | 2018-03-09T17:35:09Z | 2018-03-14T08:14:35Z | https://github.com/thtrieu/darkflow/issues/622 | [] | skylinetech-team | 3 |
zihangdai/xlnet | tensorflow | 263 | Why the max_seq_length = 512 for XLNet? | Hi,
Just a conceptual question:
In the paper, it is mentioned that XLNet derives some parts from Transformer-XL which isn't limited to a fixed context but the hyperparameters section says that the max length is 512.
Can you please help me better understand it?
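A hedged way to reconcile the two statements: 512 appears to be the per-segment length fed to the model at once, while the Transformer-XL-style recurrence caches each segment's hidden states as "memory", so later segments can attend beyond their own 512 tokens. A toy sketch of that segmenting (pure Python, illustrative only; real memories are hidden states, not raw tokens):

```python
def split_into_segments(tokens, seg_len=512):
    """Yield (memory, segment) pairs: each segment sees the cached previous
    segment as extra context, so the effective context exceeds seg_len."""
    segments = [tokens[i:i + seg_len] for i in range(0, len(tokens), seg_len)]
    paired = []
    memory = []                      # cached "hidden states" (toy: raw tokens)
    for seg in segments:
        paired.append((list(memory), seg))
        memory = seg                 # cache this segment for the next one
    return paired

pairs = split_into_segments(list(range(1200)), seg_len=512)
print(len(pairs))        # -> 3 segments
print(len(pairs[1][0]))  # -> 512 tokens of cached memory for segment 2
```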
Thanks! | open | 2020-04-23T12:23:40Z | 2020-10-01T19:19:45Z | https://github.com/zihangdai/xlnet/issues/263 | [] | vr25 | 4 |
plotly/dash | plotly | 2,688 | [Feature Request] Run dash in jupyter without additional server iframe | The problem with the iframe approach is that you need to forward additional ports from your dev environment, which can be hard and causes trouble when you are trying to build, for example, an interactive chart with dash-cytoscape.
However, as I understand it, Jupyter has its own data pipe between server and client.
So maybe we can somehow avoid having `app.run()` create an iframe on 127.0.0.1?
| closed | 2023-11-11T05:25:21Z | 2024-07-25T13:33:48Z | https://github.com/plotly/dash/issues/2688 | [] | zba | 1 |
home-assistant/core | asyncio | 140,862 | HA Problem when saving changes or in an undefined status | ### The problem
Hello everyone,
I am currently somewhat at a loss, desperate and grateful for any tips or information on log files or similar. Unfortunately, I have not yet received any tips in other forums/issues.
I have a running HA system (Home Assistant OS, version 2025.3.3) on a Dell Optiplex thin client with an SSD hard disk. In principle, the system seems to be running properly at first glance (automations are running, sensors are being updated, etc.).
However, I have a problem that points to some kind of writing problem when I make changes. It almost looks like the changes (I'll name a few below) are not being saved. If I make the changes, they are visible during the runtime of HA - but if I do the next HA restart, after the next HA restart everything is as if I had never made the changes! But I don't know of any restore function for HA after a restart? I also have no snapshot or similar on the DELL client PC. Are there any indications of a “frozen state” of HA?
Examples of changes:
- I change a value for a template sensor (Settings-Helpers) e.g. 123 -> 456 - after the next HA restart, the previous value 123 is included again
- I change the input sensor for a consumption counter, e.g. from bicycle to car - after the next HA restart, the previous value bicycle is included again
- I have to re-authenticate the Netatmo integration after every HA restart (Config entry 'Home Assistant Cloud' for netatmo integration could not authenticate: Token not valid, trigger renewal)
- I have to re-authenticate the Daikin/Onecta integration after every HA restart (Config entry 'daikin_onecta' for daikin_onecta integration could not authenticate: Problem refreshing token: 400, message='Bad Request', url='https://idp.onecta.daikineurope.com/v1/oidc/token')
- I activate a deactivated device in the Shelly integration - after the next HA restart it is deactivated again
- I delete a device from the mobile integration - it is available again after the next HA restart
- I delete all devices in the Switchbot Bluetooth integration and thus the entire integration - after the next HA restart, both the integration and the deleted devices are available again
As I said - I would be grateful for any help or tips!
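Since every symptom above looks like config changes not surviving a restart, one hedged first check is whether the config volume is actually writable and whether writes persist. A small stdlib sketch (the path is illustrative; on Home Assistant OS the configuration lives under `/config`, so you would probe that directory from a terminal add-on):

```python
import os
import tempfile

def check_writable(path):
    """Return True if we can create, write, and read back a file in path."""
    if not os.access(path, os.W_OK):
        return False
    try:
        with tempfile.NamedTemporaryFile(dir=path, mode="w+", delete=True) as fh:
            fh.write("probe")
            fh.seek(0)
            return fh.read() == "probe"
    except OSError:
        return False

# On the affected box you would pass "/config"; here we probe a temp dir.
print(check_writable(tempfile.gettempdir()))  # True on a healthy system
```

If this returns False on `/config`, a read-only or failing filesystem would explain the "frozen state" behavior.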
BR,
Ralf
### What version of Home Assistant Core has the issue?
core-2025.3.3
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
_No response_
### Link to integration documentation on our website
_No response_
### Diagnostics information
_No response_
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | closed | 2025-03-18T10:28:43Z | 2025-03-19T05:18:54Z | https://github.com/home-assistant/core/issues/140862 | [] | seiferrtalle | 1 |
feder-cr/Jobs_Applier_AI_Agent_AIHawk | automation | 441 | Auto Jobs Applier AI Hawk Setup Guide | # Auto_Jobs_Applier_AIHawk Installation Guide
This guide will help you set up and run the **Auto_Jobs_Applier_AIHawk** project on a Windows machine using Ubuntu via Windows Subsystem for Linux (WSL). We'll walk through installing Ubuntu, setting up the development environment, installing Chrome and Chromedriver, configuring the project, modifying the code to remove interactive prompts, automating script execution, and running the application.
## Table of Contents
1. [Prerequisites](#prerequisites)
2. [Install Ubuntu via Microsoft Store](#install-ubuntu-via-microsoft-store)
3. [Update and Upgrade Ubuntu](#update-and-upgrade-ubuntu)
4. [Install Visual Studio Code on Windows](#install-visual-studio-code-on-windows)
5. [Clone the Repository](#clone-the-repository)
6. [Set Up Python Virtual Environment](#set-up-python-virtual-environment)
7. [Install Project Dependencies](#install-project-dependencies)
8. [Install Google Chrome](#install-google-chrome)
9. [Install ChromeDriver](#install-chromedriver)
10. [Fixing a Known Bug](#fixing-a-known-bug)
11. [Modify the Code to Remove Interactive Prompt](#modify-the-code-to-remove-interactive-prompt)
12. [Add Your Resume PDF](#add-your-resume-pdf)
13. [Update Configuration Files](#update-configuration-files)
14. [Automate Script Execution with Cron](#automate-script-execution-with-cron)
15. [Run the Application](#run-the-application)
16. [Troubleshooting](#troubleshooting)
17. [Conclusion](#conclusion)
---
## Prerequisites
Before proceeding, ensure you have the following:
- **Windows 10 or higher** with WSL enabled.
- **Administrator privileges** on your Windows machine.
- Basic familiarity with command-line operations.
---
## Install Ubuntu via Microsoft Store
1. **Enable WSL (Windows Subsystem for Linux):**
Open **PowerShell** as an administrator and run:
```powershell
wsl --install
```
This command installs WSL and the latest Ubuntu distribution by default. If you already have WSL enabled, you can skip this step.
2. **Restart Your Computer:**
After enabling WSL, restart your computer if prompted.
3. **Install Ubuntu:**
- Open the **Microsoft Store** app on your Windows machine.
- Search for **Ubuntu**.
- Select the latest version (e.g., **Ubuntu 22.04 LTS**) and click **Install**.
- Once installed, launch Ubuntu from the Start menu.
- Follow the on-screen instructions to complete the initial setup, including creating a username and password.
---
## Update and Upgrade Ubuntu
After installing Ubuntu, update the package lists and upgrade installed packages to ensure you have the latest updates.
1. **Open the Ubuntu Terminal.**
2. **Run the Following Commands:**
```bash
sudo apt update
sudo apt upgrade -y
```
---
## Install Visual Studio Code on Windows
1. **Download Visual Studio Code (VS Code):**
Visit the [Visual Studio Code website](https://code.visualstudio.com/) and download the installer for Windows.
2. **Install VS Code:**
Run the downloaded installer and follow the installation prompts.
3. **Install the Remote - WSL Extension:**
- Open **VS Code**.
- Click on the **Extensions** icon in the Activity Bar on the side or press `Ctrl+Shift+X`.
- Search for **Remote - WSL** and install it.
This extension allows you to work seamlessly with your WSL environment directly from VS Code.
---
## Clone the Repository
1. **Open Visual Studio Code.**
2. **Open the WSL Terminal within VS Code:**
- Press `Ctrl+`` (backtick) or go to **View > Terminal**.
- Ensure you're in the Ubuntu WSL environment.
3. **Clone the Repository:**
```bash
git clone https://github.com/feder-cr/Auto_Jobs_Applier_AIHawk.git
```
4. **Navigate to the Project Directory:**
```bash
cd Auto_Jobs_Applier_AIHawk
```
---
## Set Up Python Virtual Environment
Creating a virtual environment ensures that project dependencies are isolated from other projects.
1. **Create a Virtual Environment Named `virtual`:**
```bash
python3 -m venv virtual
```
2. **Activate the Virtual Environment:**
```bash
source virtual/bin/activate
```
You should see `(virtual)` prefixed in your terminal prompt, indicating that the virtual environment is active.
---
## Install Project Dependencies
With the virtual environment activated, install the required Python packages.
1. **Install Dependencies from `requirements.txt`:**
```bash
pip install -r requirements.txt
```
Ensure that the `requirements.txt` file exists in the project directory and lists all necessary packages.
---
## Install Google Chrome
1. **Download the Google Chrome Debian Package:**
```bash
wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb
```
2. **Install the Package:**
```bash
sudo dpkg -i google-chrome-stable_current_amd64.deb
```
3. **Fix Any Dependency Issues:**
If you encounter dependency errors during installation, run:
```bash
sudo apt-get install -f
```
4. **Verify the Installation:**
```bash
google-chrome --version
```
**Expected Output:**
```
Google Chrome 129.0.6668.70
```
---
## Install ChromeDriver
To ensure compatibility between Chrome and ChromeDriver, download the ChromeDriver version that matches your installed version of Google Chrome.
1. **Check Your Google Chrome Version:**
```bash
google-chrome --version
```
**Example Output:**
```
Google Chrome 129.0.6668.70
```
2. **Download the Corresponding ChromeDriver:**
Visit the [Chrome for Testing](https://googlechromelabs.github.io/chrome-for-testing/) page to find the appropriate ChromeDriver version.
**Example for Version 129.0.6668.70:**
```bash
wget https://chromedriver.storage.googleapis.com/129.0.6668.70/chromedriver_linux64.zip
```
3. **Install Unzip (If Not Already Installed):**
```bash
sudo apt install unzip
```
4. **Extract the Downloaded ZIP File:**
```bash
unzip chromedriver_linux64.zip
```
5. **Move ChromeDriver to a Directory in PATH:**
```bash
sudo mv chromedriver /usr/local/bin/
```
6. **Grant Execute Permissions to ChromeDriver:**
```bash
sudo chmod +x /usr/local/bin/chromedriver
```
7. **Verify ChromeDriver Installation:**
```bash
chromedriver --version
```
**Expected Output:**
```
ChromeDriver 129.0.6668.70 (abcdef1234567890abcdef1234567890abcdef12)
```
8. **Ensure ChromeDriver is Accessible:**
```bash
which chromedriver
```
**Expected Output:**
```
/usr/local/bin/chromedriver
```
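Chrome and ChromeDriver must share the same major version. A quick stdlib sketch of the check (the version strings are hardcoded here for illustration; on a real machine you would capture them from `google-chrome --version` and `chromedriver --version`):

```python
import re

def major_version(version_output):
    """Extract the major version number from a `--version` output line."""
    match = re.search(r"(\d+)\.\d+\.\d+\.\d+", version_output)
    return int(match.group(1)) if match else None

chrome = major_version("Google Chrome 129.0.6668.70")
driver = major_version("ChromeDriver 129.0.6668.70 (abcdef1234567890)")
print(chrome, driver, chrome == driver)  # -> 129 129 True
```

A mismatch here is one of the most common causes of Selenium session-creation errors.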
---
## Fixing a Known Bug
There is a known issue in the project where ChromeDriver is incorrectly referenced with a `.exe` extension in a Linux environment. To fix this:
1. **Locate the `utils.py` File:**
The file path provided seems to be:
```
./virtual/lib/python3.10/site-packages/lib_resume_builder_AIHawk/utils.py
```
Navigate to the directory containing the `utils.py` file:
```bash
cd virtual/lib/python3.10/site-packages/lib_resume_builder_AIHawk/
```
2. **Edit the `manager_facade.py` File:**
Open the file using VS Code or a text editor:
```bash
code utils.py
```
3. **Modify the ChromeService Executable Path**
   Locate the `ChromeService` instantiation and make sure it points to the correct ChromeDriver path, without the `.exe` extension.
   **Before (Original):**
   ```python
   service = ChromeService(executable_path=chromedriver_path)
   ```
   **After (Modified):**
   ```python
   service = ChromeService(executable_path="/path/to/chromedriver")
   ```
4. **Save the Changes:**
- In VS Code, press `Ctrl + S` to save.
- Exit the editor if desired.
---
## Modify the Code to Remove Interactive Prompt
To ensure that the script runs automatically without waiting for user input, we need to modify the `choose_style` function to select a default style programmatically.
1. **Locate the `utils.py` File:**
The file path provided seems to be:
```
./virtual/lib/python3.10/site-packages/lib_resume_builder_AIHawk/manager_facade.py
```
Navigate to the directory containing the `manager_facade.py` file:
```bash
cd virtual/lib/python3.10/site-packages/lib_resume_builder_AIHawk/
```
2. **Edit the `manager_facade.py` File:**
Open the file using VS Code or a text editor:
```bash
code manager_facade.py
```
3. **Modify the `choose_style` function around line 52:**
**Before (Original Function):**
```python
def choose_style(self):
    styles = self.style_manager.get_styles()
    if not styles:
        print("No styles available")
        return None
    final_style_choice = "Create your resume style in CSS"
    formatted_choices = self.style_manager.format_choices(styles)
    formatted_choices.append(final_style_choice)
    selected_choice = self.prompt_user(formatted_choices, "Which style would you like to adopt?")
    if selected_choice == final_style_choice:
        tutorial_url = "https://github.com/feder-cr/lib_resume_builder_AIHawk/blob/main/how_to_contribute/web_designer.md"
        print("\nOpening tutorial in your browser...")
        webbrowser.open(tutorial_url)
        exit()
    else:
        self.selected_style = selected_choice.split(' (')[0]
```
**After (Modified Function) Option1:**
```python
def choose_style(self):
    styles = self.style_manager.get_styles()
    if not styles:
        print("No styles available")
        return None
    final_style_choice = "Create your resume style in CSS"
    formatted_choices = self.style_manager.format_choices(styles)
    formatted_choices.append(final_style_choice)
    # **Automate Style Selection:**
    # Select a default style without prompting the user.
    # **Option 1: Select the First Available Style**
    if formatted_choices:
        selected_choice = formatted_choices[0]  # Automatically select the first style
        print(f"Selected default style: {selected_choice}")
    else:
        selected_choice = final_style_choice
    if selected_choice == final_style_choice:
        tutorial_url = "https://github.com/feder-cr/lib_resume_builder_AIHawk/blob/main/how_to_contribute/web_designer.md"
        print("\nOpening tutorial in your browser...")
        webbrowser.open(tutorial_url)
        exit()
    else:
        self.selected_style = selected_choice.split(' (')[0]
```
**After (Modified Function) Option 2:**
```python
def choose_style(self):
    styles = self.style_manager.get_styles()
    if not styles:
        print("No styles available")
        return None
    final_style_choice = "Create your resume style in CSS"
    formatted_choices = self.style_manager.format_choices(styles)
    formatted_choices.append(final_style_choice)
    # **Automate Style Selection:**
    # Select a default style without prompting the user.
    # **Option 2: Specify a Particular Default Style**
    # Uncomment and modify the following lines if you prefer a specific style.
    # default_style_name = "Default (style author -> https://github.com/krishnavalliappan)"
    # if default_style_name in formatted_choices:
    #     selected_choice = default_style_name
    #     print(f"Selected default style: {selected_choice}")
    # else:
    #     selected_choice = formatted_choices[0]  # Fallback to first style
    #     print(f"Default style not found. Selected first available style: {selected_choice}")
    if selected_choice == final_style_choice:
        tutorial_url = "https://github.com/feder-cr/lib_resume_builder_AIHawk/blob/main/how_to_contribute/web_designer.md"
        print("\nOpening tutorial in your browser...")
        webbrowser.open(tutorial_url)
        exit()
    else:
        self.selected_style = selected_choice.split(' (')[0]
        print(f"Resume will be generated using the '{self.selected_style}' style.")
```
**Explanation of Changes:**
- **Removed Interactive Prompt:**
- The `self.prompt_user` function call has been removed to prevent the script from waiting for user input.
- **Automate Style Selection:**
- **Option 1:** Automatically selects the first available style from the `formatted_choices` list.
- **Option 2 (Commented Out):** Allows you to specify a particular default style by name. Uncomment and modify as needed.
**Note:** Choose either **Option 1** or **Option 2** based on your preference. If you have a preferred default style, **Option 2** offers more control.
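Stripped of the class plumbing, the selection logic of both options boils down to a few lines. A standalone sketch (the style names here are made up):

```python
def pick_style(formatted_choices, preferred=None):
    """Pick `preferred` if present, otherwise fall back to the first choice."""
    if not formatted_choices:
        return None
    if preferred and preferred in formatted_choices:
        return preferred
    return formatted_choices[0]

choices = ["Modern (author A)", "Classic (author B)"]
print(pick_style(choices))                                  # -> Modern (author A)
print(pick_style(choices, preferred="Classic (author B)"))  # -> Classic (author B)
print(pick_style(choices).split(" (")[0])                   # -> Modern
```

The final `split(" (")[0]` mirrors how the real code stores `self.selected_style`: the style name without the author suffix.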
---
## Add Your Resume PDF
1. **Create a Directory for Resumes (If Not Exists):**
```bash
mkdir -p resumes
```
2. **Place Your Resume PDF in the `resumes` Directory:**
```bash
cp /path/to/your_resume.pdf resumes/
```
Replace `/path/to/your_resume.pdf` with the actual path to your resume file.
Or simply move the file with the mouse in VS Code.
---
## Update Configuration Files
Ensure that the configuration files `secrets.yaml`, `config.yaml`, and `plain_text_resume.yaml` are properly configured with your specific details.
1. **Open Each Configuration File in VS Code:**
```bash
code secrets.yaml
code config.yaml
code plain_text_resume.yaml
```
2. **Update the Necessary Fields:**
3. **Save the Changes.**
---
## Automate Script Execution with Cron
To ensure that the **Auto_Jobs_Applier_AIHawk** script runs automatically every hour without manual intervention, we'll set up a **cron job**. This involves creating a shell script to activate the virtual environment and run the Python script, making the script executable, and scheduling it with cron.
### 📋 **Table of Contents**
1. [Create a Shell Script to Run the Python Script](#create-a-shell-script-to-run-the-python-script)
2. [Make the Shell Script Executable](#make-the-shell-script-executable)
3. [Set Up the Cron Job](#set-up-the-cron-job)
4. [Verify the Cron Job](#verify-the-cron-job)
5. [Monitor Cron Job Logs](#monitor-cron-job-logs)
6. [Troubleshooting](#troubleshooting)
---
### 1. Create a Shell Script to Run the Python Script
To ensure that the Python script runs within the correct environment, we'll create a shell script that activates the virtual environment and executes the script.
1. **Navigate to the Project Directory:**
```bash
cd ~/Auto_Jobs_Applier_AIHawk
```
2. **Create a New Shell Script File:**
Let's name the script `run_auto_jobs.sh`.
```bash
nano run_auto_jobs.sh
```
3. **Add the Following Content to the Script:**
Replace `/home/proteusbr/Auto_Jobs_Applier_AIHawk` with the actual path to your project directory if it's different.
```bash
#!/bin/bash
# Navigate to the project directory
cd /home/proteusbr/Auto_Jobs_Applier_AIHawk
# Activate the virtual environment
source virtual/bin/activate
# Run the Python script with the default style
python main.py --resume resume-job.pdf
# Deactivate the virtual environment
deactivate
```
**Explanation:**
- **Shebang (`#!/bin/bash`):** Specifies that the script should be run in the Bash shell.
- **`cd`:** Navigates to the project directory.
- **`source virtual/bin/activate`:** Activates the Python virtual environment.
- **`python main.py --resume resume-job.pdf`:** Executes the Python script with the specified resume.
- **`deactivate`:** Deactivates the virtual environment after the script completes.
4. **Save and Exit the Editor:**
- Press `Ctrl + O`, then `Enter` to save.
- Press `Ctrl + X` to exit.
---
### 2. Make the Shell Script Executable
To allow the script to be run, grant it execute permissions.
```bash
chmod +x run_auto_jobs.sh
```
---
### 3. Set Up the Cron Job
Cron is a time-based job scheduler in Unix-like operating systems. We'll configure it to run the shell script every hour.
1. **Open the Crontab Editor:**
```bash
crontab -e
```
- **First-Time Setup:** If prompted to choose an editor, select your preferred one (e.g., `nano`).
2. **Add the Cron Job Entry:**
At the end of the crontab file, add the following line:
```bash
0 * * * * /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh >> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1
```
**Explanation:**
- `0 * * * *`: Specifies that the job runs at minute `0` of every hour.
- `/home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh`: Full path to the shell script.
- `>> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1`: Redirects both standard output and errors to `cron.log` for logging purposes.
3. **Save and Exit the Crontab Editor:**
- In `nano`, press `Ctrl + O`, `Enter` to save.
- Press `Ctrl + X` to exit.
---
### 4. Verify the Cron Job
To ensure that the cron job has been added successfully:
```bash
crontab -l
```
**Expected Output:**
```
0 * * * * /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh >> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1
```
This confirms that the cron job is scheduled to run hourly.
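To sanity-check the schedule line mechanically, the five cron time fields can be split off before the command. A small stdlib sketch (illustrative only; real cron syntax has ranges, steps, and names that this does not handle):

```python
def parse_cron_line(line):
    """Split a crontab line into its five time fields plus the command."""
    fields = line.split(None, 5)  # minute hour day month weekday command
    keys = ["minute", "hour", "day", "month", "weekday", "command"]
    return dict(zip(keys, fields))

entry = parse_cron_line(
    "0 * * * * /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh"
)
print(entry["minute"], entry["hour"])  # -> 0 *  (minute 0 of every hour)
```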
---
### 5. Monitor Cron Job Logs
Regularly check the `cron.log` file to monitor the execution of your script and identify any potential issues.
1. **View the Last Few Entries:**
```bash
tail -n 20 /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log
```
2. **Follow the Log in Real-Time:**
```bash
tail -f /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log
```
- This command continuously displays new log entries as they are appended.
**Tips:**
- **Successful Execution:** Look for log entries indicating that the script ran without errors.
- **Errors:** If errors are present, they will be logged here. Use these logs to troubleshoot issues.
---
**Optional configuration tweaks** (these speed up runs and enable headless Chrome):

- `./Auto_Jobs_Applier_AIHawk/app_config.py`: set `MINIMUM_WAIT_TIME = 5`
- `./Auto_Jobs_Applier_AIHawk/src/utils.py`, inside `chrome_browser_options()`: add `options.add_argument("--headless")`
- `./Auto_Jobs_Applier_AIHawk/src/aihawk_job_manager.py`: set `timeout=0`
---
### 6. Troubleshooting
If the cron job isn't executing as expected, consider the following troubleshooting steps:
#### a. **Check Cron Service Status**
Ensure that the cron service is running.
```bash
sudo service cron status
```
- **If Not Running:** Start the cron service.
```bash
sudo service cron start
```
#### b. **Verify Script Paths and Permissions**
- **Absolute Paths:** Ensure that all paths in the shell script are absolute.
- **Execute Permissions:** Confirm that the shell script and `chromedriver` have execute permissions.
```bash
ls -l /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh
ls -l /usr/local/bin/chromedriver
```
**Expected Output:**
```
-rwxr-xr-x 1 proteusbr proteusbr ... run_auto_jobs.sh
-rwxr-xr-x 1 root root ... /usr/local/bin/chromedriver
```
#### c. **Environment Variables**
Cron jobs run in a limited environment. Ensure that any necessary environment variables are set within the shell script.
For example, in `run_auto_jobs.sh`, you can set environment variables before activating the virtual environment:
```bash
#!/bin/bash
# Redirect all output to cron.log
exec >> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1
# Insert a timestamp
echo "----- $(date) -----"
# Set the TERM environment variable
export TERM=xterm
# Navigate to the project directory
cd /home/proteusbr/Auto_Jobs_Applier_AIHawk || { echo "Failed to navigate to project directory"; exit 1; }
# Activate the virtual environment
source virtual/bin/activate || { echo "Failed to activate virtual environment"; exit 1; }
# Run the main Python script using its absolute path
/home/proteusbr/Auto_Jobs_Applier_AIHawk/virtual/bin/python main.py --resume resume-job.pdf || { echo "Python main script failed"; exit 1; }
# Deactivate the virtual environment
deactivate
echo "Script executed successfully."
```
#### d. **Script Output**
Review the `cron.log` for any error messages that can provide insights into what might be going wrong.
---
## Modify the Code to Remove Interactive Prompt
To ensure that the script runs automatically without waiting for user input, we need to modify the `choose_style` function to select a default style programmatically.
### 📄 **Original `choose_style` Function:**
Located in `./virtual/lib/python3.10/site-packages/lib_resume_builder_AIHawk/manager_facade.py` at line 52.
```python
def choose_style(self):
styles = self.style_manager.get_styles()
if not styles:
print("No styles available")
return None
final_style_choice = "Create your resume style in CSS"
formatted_choices = self.style_manager.format_choices(styles)
formatted_choices.append(final_style_choice)
selected_choice = self.prompt_user(formatted_choices, "Which style would you like to adopt?")
if selected_choice == final_style_choice:
tutorial_url = "https://github.com/feder-cr/lib_resume_builder_AIHawk/blob/main/how_to_contribute/web_designer.md"
print("\nOpening tutorial in your browser...")
webbrowser.open(tutorial_url)
exit()
else:
self.selected_style = selected_choice.split(' (')[0]
```
### 🛠️ **Modified `choose_style` Function:**
```python
def choose_style(self):
styles = self.style_manager.get_styles()
if not styles:
print("No styles available")
return None
final_style_choice = "Create your resume style in CSS"
formatted_choices = self.style_manager.format_choices(styles)
formatted_choices.append(final_style_choice)
# **Automate Style Selection:**
# Select a default style without prompting the user.
# You can choose to select the first available style or specify a particular one.
# **Option 1: Select the First Available Style**
if formatted_choices:
selected_choice = formatted_choices[0] # Automatically select the first style
print(f"Selected default style: {selected_choice}")
else:
selected_choice = final_style_choice
# **Option 2: Specify a Particular Default Style**
# Uncomment and modify the following lines if you prefer a specific style.
# default_style_name = "Default (style author -> https://github.com/krishnavalliappan)"
# if default_style_name in formatted_choices:
# selected_choice = default_style_name
# print(f"Selected default style: {selected_choice}")
# else:
# selected_choice = formatted_choices[0] # Fallback to first style
# print(f"Default style not found. Selected first available style: {selected_choice}")
if selected_choice == final_style_choice:
tutorial_url = "https://github.com/feder-cr/lib_resume_builder_AIHawk/blob/main/how_to_contribute/web_designer.md"
print("\nOpening tutorial in your browser...")
webbrowser.open(tutorial_url)
exit()
else:
self.selected_style = selected_choice.split(' (')[0]
print(f"Resume will be generated using the '{self.selected_style}' style.")
```
**Explanation of Changes:**
1. **Removed Interactive Prompt:**
- The `self.prompt_user` function call has been removed to prevent the script from waiting for user input.
2. **Automate Style Selection:**
- **Option 1:** Automatically selects the first available style from the `formatted_choices` list.
- **Option 2 (Commented Out):** Allows you to specify a particular default style by name. Uncomment and modify as needed.
3. **Feedback Messages:**
- Added `print` statements to inform which style has been selected automatically. This is useful for logging and debugging purposes.
4. **Maintain Functionality:**
- The rest of the function remains unchanged to ensure that if the `final_style_choice` is selected, it behaves as intended (opens the tutorial and exits).
**Recommendation:** If you have a preferred default style, **Option 2** offers more control and ensures consistency. Otherwise, **Option 1** is sufficient for general automation purposes.
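If you want the control of **Option 2** without hard-coding the style, the same idea can be driven by an environment variable. The sketch below is illustrative only — the `AIHAWK_STYLE` variable name and the `pick_default_style` helper are assumptions of this example, not part of the project:

```python
import os

def pick_default_style(formatted_choices, fallback="Create your resume style in CSS"):
    """Pick a resume style without prompting the user.

    Preference order:
    1. The AIHAWK_STYLE environment variable, if it prefixes a known choice.
    2. The first available style.
    3. The fallback (the interactive/tutorial choice).
    """
    wanted = os.environ.get("AIHAWK_STYLE")
    if wanted:
        for choice in formatted_choices:
            if choice.startswith(wanted):
                return choice
    return formatted_choices[0] if formatted_choices else fallback
```

With this helper, `choose_style` would call `pick_default_style(formatted_choices)` instead of `self.prompt_user(...)`, and you could switch styles per cron run by exporting `AIHAWK_STYLE` in the shell script.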
---
## Automate Script Execution with Cron
To ensure that the **Auto_Jobs_Applier_AIHawk** script runs automatically every hour without manual intervention, we'll set up a **cron job**. This involves creating a shell script to activate the virtual environment and run the Python script, making the script executable, and scheduling it with cron.
### 📋 **Table of Contents**
1. [Create a Shell Script to Run the Python Script](#create-a-shell-script-to-run-the-python-script)
2. [Make the Shell Script Executable](#make-the-shell-script-executable)
3. [Set Up the Cron Job](#set-up-the-cron-job)
4. [Verify the Cron Job](#verify-the-cron-job)
5. [Monitor Cron Job Logs](#monitor-cron-job-logs)
6. [Troubleshooting](#troubleshooting)
---
### 1. Create a Shell Script to Run the Python Script
To ensure that the Python script runs within the correct environment, we'll create a shell script that activates the virtual environment and executes the script.
1. **Navigate to the Project Directory:**
```bash
cd ~/Auto_Jobs_Applier_AIHawk
```
2. **Create a New Shell Script File:**
Let's name the script `run_auto_jobs.sh`.
```bash
nano run_auto_jobs.sh
```
3. **Add the Following Content to the Script:**
Replace `/home/proteusbr/Auto_Jobs_Applier_AIHawk` with the actual path to your project directory if it's different.
```bash
#!/bin/bash
# Navigate to the project directory
cd /home/proteusbr/Auto_Jobs_Applier_AIHawk
# Activate the virtual environment
source virtual/bin/activate
# Run the Python script with the default style
python main.py --resume resume-job.pdf
# Deactivate the virtual environment
deactivate
```
**Explanation:**
- **Shebang (`#!/bin/bash`):** Specifies that the script should be run in the Bash shell.
- **`cd`:** Navigates to the project directory.
- **`source virtual/bin/activate`:** Activates the Python virtual environment.
- **`python main.py --resume resume-job.pdf`:** Executes the Python script with the specified resume.
- **`deactivate`:** Deactivates the virtual environment after the script completes.
4. **Save and Exit the Editor:**
- Press `Ctrl + O`, then `Enter` to save.
- Press `Ctrl + X` to exit.
---
### 2. Make the Shell Script Executable
To allow the script to be run, grant it execute permissions.
```bash
chmod +x run_auto_jobs.sh
```
Test:
```bash
./run_auto_jobs.sh
```
---
### 3. Set Up the Cron Job
Cron is a time-based job scheduler in Unix-like operating systems. We'll configure it to run the shell script every hour.
1. **Open the Crontab Editor:**
```bash
crontab -e
```
- **First-Time Setup:** If prompted to choose an editor, select your preferred one (e.g., `nano`).
2. **Add the Cron Job Entry:**
At the end of the crontab file, add the following line:
```bash
0 * * * * /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh >> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1
```
**Explanation:**
- `0 * * * *`: Specifies that the job runs at minute `0` of every hour.
- `/home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh`: Full path to the shell script.
- `>> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1`: Redirects both standard output and errors to `cron.log` for logging purposes.
3. **Save and Exit the Crontab Editor:**
- In `nano`, press `Ctrl + O`, `Enter` to save.
- Press `Ctrl + X` to exit.
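As a sanity check on what `0 * * * *` means, here is a tiny stdlib-only Python sketch (purely illustrative — cron does this scheduling itself) that computes the next time the hourly entry will fire:

```python
from datetime import datetime, timedelta

def next_hourly_run(now):
    """Next strictly-future firing time of a `0 * * * *` cron entry.

    Cron fires at minute 0 of every hour; if `now` is exactly on the
    hour, cron runs immediately, and this helper returns the following
    top of the hour.
    """
    top_of_hour = now.replace(minute=0, second=0, microsecond=0)
    return top_of_hour + timedelta(hours=1)

print(next_hourly_run(datetime(2024, 1, 1, 9, 30)))  # 2024-01-01 10:00:00
```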
---
### 4. Verify the Cron Job
To ensure that the cron job has been added successfully:
```bash
crontab -l
```
**Expected Output:**
```
0 * * * * /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh >> /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log 2>&1
```
This confirms that the cron job is scheduled to run hourly.
---
### 5. Monitor Cron Job Logs
Regularly check the `cron.log` file to monitor the execution of your script and identify any potential issues.
1. **View the Last Few Entries:**
```bash
tail -n 20 /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log
```
2. **Follow the Log in Real-Time:**
```bash
tail -f /home/proteusbr/Auto_Jobs_Applier_AIHawk/cron.log
```
- This command continuously displays new log entries as they are appended.
**Tips:**
- **Successful Execution:** Look for log entries indicating that the script ran without errors, such as confirmation of the selected style and successful resume generation.
- **Errors:** If errors are present, they will be logged here. Use these logs to troubleshoot issues.
---
### 6. Troubleshooting
If the cron job isn't executing as expected, consider the following troubleshooting steps:
#### a. **Check Cron Service Status**
Ensure that the cron service is running.
```bash
sudo service cron status
```
- **If Not Running:** Start the cron service.
```bash
sudo service cron start
```
#### b. **Verify Script Paths and Permissions**
- **Absolute Paths:** Ensure that all paths in the shell script are absolute.
- **Execute Permissions:** Confirm that the shell script and `chromedriver` have execute permissions.
```bash
ls -l /home/proteusbr/Auto_Jobs_Applier_AIHawk/run_auto_jobs.sh
ls -l /usr/local/bin/chromedriver
```
**Expected Output:**
```
-rwxr-xr-x 1 proteusbr proteusbr ... run_auto_jobs.sh
-rwxr-xr-x 1 root root ... /usr/local/bin/chromedriver
```
#### c. **Environment Variables**
Cron jobs run in a limited environment. Ensure that any necessary environment variables are set within the shell script.
For example, in `run_auto_jobs.sh`, you can set environment variables before activating the virtual environment:
```bash
#!/bin/bash
# Set environment variables if needed
export SOME_ENV_VAR="value"
# Navigate to the project directory
cd /home/proteusbr/Auto_Jobs_Applier_AIHawk
# Activate the virtual environment
source virtual/bin/activate
# Run the Python script
python main.py --resume resume-job.pdf
# Deactivate the virtual environment
deactivate
```
#### d. **Script Output**
Review the `cron.log` for any error messages that can provide insights into what might be going wrong.
---
## Run the Application
With all dependencies installed, configurations set, and automation in place, you can now run the application.
1. **Ensure the Virtual Environment is Activated:**
```bash
source virtual/bin/activate
```
2. **Navigate to the Project Directory (If Not Already There):**
```bash
cd Auto_Jobs_Applier_AIHawk
```
3. **Run the Main Script with Your Resume:**
```bash
python main.py --resume resumes/your_resume.pdf
```
Replace `resumes/your_resume.pdf` with the path to your resume PDF.
**Expected Outcome:**
- The script should execute without asking for style selection.
- The resume should be generated using the selected default style.
- Logs should indicate successful execution.
**Example Output:**
```
Selected default style: Clean Blue (style author -> https://github.com/samodum)
Resume will be generated using the 'Clean Blue' style.
```
---
## Troubleshooting
If you encounter any issues during setup or execution, consider the following troubleshooting steps:
### 1. **Chromedriver Not Found or Incorrect Version**
- **Ensure ChromeDriver Version Matches Chrome Version:**
Verify that the ChromeDriver version matches your installed Google Chrome version.
```bash
google-chrome --version
chromedriver --version
```
- **Re-download ChromeDriver if Versions Mismatch:**
If there's a mismatch, download the correct version from the [Chrome for Testing](https://googlechromelabs.github.io/chrome-for-testing/) site.
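The major-version comparison can be made explicit with a small helper. This is a sketch: the version strings shown are examples, and in practice you would pass in the real `--version` output captured via `subprocess`:

```python
import re

def major_version(version_output):
    """Extract the major version number from `--version` output.

    Works for strings like 'Google Chrome 117.0.5938.92' or
    'ChromeDriver 117.0.5938.92 (...)'.
    """
    match = re.search(r"(\d+)\.\d+\.\d+", version_output)
    if not match:
        raise ValueError(f"no version found in: {version_output!r}")
    return int(match.group(1))

def versions_match(chrome_output, driver_output):
    # Chrome and ChromeDriver are compatible when their major versions agree.
    return major_version(chrome_output) == major_version(driver_output)
```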
### 2. **Permissions Issues**
- **Grant Execute Permissions:**
```bash
sudo chmod +x /usr/local/bin/chromedriver
```
### 3. **Virtual Environment Issues**
- **Activate Virtual Environment:**
```bash
source virtual/bin/activate
```
- **Reinstall Dependencies:**
```bash
pip install -r requirements.txt
```
### 4. **Configuration Errors**
- **Double-Check Configuration Files:**
Ensure that `secrets.yaml`, `config.yaml`, and `plain_text_resume.yaml` have the correct paths and credentials.
### 5. **Selenium Errors**
- **Check Selenium and WebDriver Installation:**
Ensure that both Selenium and ChromeDriver are correctly installed and accessible.
- **Specify ChromeDriver Path Explicitly:**
If Selenium cannot locate ChromeDriver, specify the path explicitly in your scripts as shown in the [Fixing a Known Bug](#fixing-a-known-bug) section.
### 6. **General Debugging**
- **Use Logging Outputs:**
Check the logs for detailed error messages to identify where the issue is occurring.
- **Refer to Official Documentation:**
- [Selenium Documentation](https://www.selenium.dev/documentation/)
- [ChromeDriver Documentation](https://chromedriver.chromium.org/downloads)
---
## Conclusion
By following this step-by-step guide, you have successfully set up and configured the **Auto_Jobs_Applier_AIHawk** project on your Windows machine using Ubuntu via WSL. The modifications to remove interactive prompts and the setup of a cron job ensure that your application runs automatically every hour, streamlining your job application process.
**Summary of Steps:**
1. **Install Ubuntu via WSL and Update It.**
2. **Install Visual Studio Code with the Remote - WSL Extension.**
3. **Clone the Repository and Set Up a Python Virtual Environment.**
4. **Install Project Dependencies, Google Chrome, and ChromeDriver.**
5. **Fix Known Bugs by Modifying the Code to Remove Interactive Prompts.**
6. **Add Your Resume PDF and Update Configuration Files.**
7. **Create a Shell Script and Set Up a Cron Job to Automate Execution.**
8. **Run and Monitor the Application.**
If you continue to experience issues, please refer to the troubleshooting section or reach out to the project maintainers for further assistance.
**Good luck with your automated job applications! 🚀**
---
# Additional Tips
- **Use Environment Variables for Secrets:**
For enhanced security, consider using environment variables to store sensitive information like LinkedIn credentials instead of hardcoding them in `secrets.yaml`.
- **Keep Software Updated:**
Regularly update your system packages, Google Chrome, and ChromeDriver to ensure compatibility and security.
- **Backup Configuration Files:**
Before making significant changes, back up your configuration files to prevent data loss.
- **Virtual Environment Management:**
If you encounter issues with the virtual environment, consider deleting and recreating it to ensure a clean setup.
---
### Why is this change necessary?
_No response_
### Additional context
_No response_ | closed | 2024-09-28T16:59:53Z | 2024-11-08T11:34:46Z | https://github.com/feder-cr/Jobs_Applier_AI_Agent_AIHawk/issues/441 | [
"documentation"
] | proteusbr1 | 2 |
marimo-team/marimo | data-visualization | 4,151 | [ai] Include additional marimo knowledge base in the LLM context | ### Description
Feedback from https://github.com/marimo-team/marimo/issues/4150
> If the knowledge base of marimo isn't used under the hood, that would be a great opportunity. The chatbot seems to now about marimo, so perhaps you already apply a RAG approach with your docs, discord messages and github issues
### Suggested solution
Ideally not RAG.
* Could be including basic API signatures (either all or a subset based off regex).
* Could include the marimo runtime rules (cycles, re-definitions, accessing `.value`)
* Could include popular patterns (run button)
### Alternative
LLMs get better each day, we can just wait until it's good enough without additional context / train on our docs.
### Additional context
_No response_ | open | 2025-03-18T16:15:58Z | 2025-03-18T16:16:29Z | https://github.com/marimo-team/marimo/issues/4151 | [
"enhancement"
] | mscolnick | 0 |
Textualize/rich | python | 3,345 | [REQUEST] Rich Should Accept Highlights as re.compiled re.Patterns and Use them Internally | Rich should take advantage of the potential speed increases through compiled regular expressions in the `re.compile` function in the stdlib `re` module.
I have created a fork here: https://github.com/PyWoody/rich/tree/re_compiled that has the changes in place for demoing.
Using the EmailHighlighter example from the docs, a new Highlighter instance could be created like so
```python3
import re
from rich.console import Console
from rich.highlighter import RegexHighlighter
from rich.theme import Theme
class EmailHighlighter(RegexHighlighter):
"""Apply style to anything that looks like an email."""
base_style = "example."
highlights = [re.compile(r"(?P<email>[\w-]+@([\w-]+\.)+[\w-]+)")]
theme = Theme({"example.email": "bold magenta"})
console = Console(highlighter=EmailHighlighter(), theme=theme)
console.print("Send funds to money@example.org")
```
Note, the above example will already work in the default version because `re.finditer` automatically compiles a `re.Pattern` or string to a `re.Pattern`, as shown here: https://github.com/python/cpython/blob/3.12/Lib/re/__init__.py#L219, but it does not save it for re-use. The `_compile` function in `re` will do some caching automatically, as shown here: https://github.com/python/cpython/blob/3.12/Lib/re/__init__.py#L280, but it will be called every single time `rich.text.Text.highlight_regex` is called versus just saving the compiled version yourself.
The more regular expressions a Highlighter uses the more the `re.Patterns` will be cached, further allowing speed increases. For instance, the `rich.highlighter.ISO8601Highlighter` found updated here: https://github.com/PyWoody/rich/blob/re_compiled/rich/highlighter.py#L144, has a considerable speed increase compared to the default version.
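A quick `timeit` sketch (illustrative; absolute numbers are machine-dependent) shows the two paths being compared — both produce identical matches, so any timing difference is pure `_compile`-lookup overhead:

```python
import re
import timeit

TEXT = "Send funds to money@example.org " * 50
PATTERN_STR = r"(?P<email>[\w-]+@([\w-]+\.)+[\w-]+)"
PATTERN = re.compile(PATTERN_STR)

def scan_with_string():
    # re.finditer goes through the re module's internal cache on every call.
    return sum(1 for _ in re.finditer(PATTERN_STR, TEXT))

def scan_with_compiled():
    # The pre-compiled pattern skips the cache lookup entirely.
    return sum(1 for _ in PATTERN.finditer(TEXT))

if __name__ == "__main__":
    for fn in (scan_with_string, scan_with_compiled):
        print(fn.__name__, timeit.timeit(fn, number=500))
```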
The major caveat will be for custom Highlighters that use strings exclusively. There will be a marginal speed decrease in these situations, as each call's pattern will need to be `isinstance`-checked and `re.compile`d on demand. This is evident in the `highlight_regex` method in the `rich.text.Text` class, found updated here: https://github.com/PyWoody/rich/blob/re_compiled/rich/text.py#L615. In my testing, the decrease was marginal enough to be difficult to separate from the noise.
The net-net: using `re.compile` for the default Highlighters is a free win, people who use `re.compile` in their custom highlighters get the speed boost, and existing Highlighters out in the wild (or people who want to keep using strings exclusively) only see a marginal speed decrease. | open | 2024-04-26T19:44:18Z | 2024-05-16T13:28:26Z | https://github.com/Textualize/rich/issues/3345 | [
"Needs triage"
] | PyWoody | 4 |
itamarst/eliot | numpy | 436 | Intermittent failure from test_omitLoggerFromActionType in 1.7.0 | Seen on my CI:
```
FAIL: test_omitLoggerFromActionType (eliot.tests.test_validation.EndToEndValidationTests)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/tmp/nix-build-python2.7-eliot-1.7.0.drv-0/eliot-1.7.0/eliot/tests/test_validation.py", line 895, in test_omitLoggerFromActionType
self.assertEqual(messages[0]["key"], 123)
AssertionError: 5 != 123
```
This is a somewhat older version so perhaps the problem has been fixed already. I couldn't find any tickets that seemed related, though.
| open | 2019-10-21T13:36:56Z | 2020-04-15T15:57:52Z | https://github.com/itamarst/eliot/issues/436 | [
"bug"
] | exarkun | 6 |
adamerose/PandasGUI | pandas | 235 | PyQtWebEngine fail to install blocking pandasgui | <img width="1049" alt="Screenshot 2023-07-26 at 23 33 56" src="https://github.com/adamerose/PandasGUI/assets/32000608/11f2bf74-bcc4-4e40-a539-8646245c5971">
I am in a venv environment on a fresh Kali (Debian) VM install (the hardware is an M1 Mac), and the install fails every time, the same for a classic "pip install PyQtWebEngine".
The odd thing is that outside the venv the install works.
One version of the VM worked with this fix: https://stackoverflow.com/questions/61254782/importerror-libqt5qmlmodels-so-5-cannot-open-shared-object-file-no-such-file
"sudo apt-get install python3-pyqt5.qtwebengine"
Sadly I can't replicate it, and I am stuck, unable to install and use pandasgui in my venv.
(
I am also aware that pandasgui has a small difference in usage on non-Windows OSes,
just a little
import os
os.environ['APPDATA'] = ""
https://towardsdatascience.com/is-pandas-easy-to-use-try-this-tool-to-make-it-easier-2071eeffe482
) | open | 2023-07-27T01:41:11Z | 2023-09-12T02:22:45Z | https://github.com/adamerose/PandasGUI/issues/235 | [
"bug"
] | morzen | 1 |
mitmproxy/mitmproxy | python | 7,103 | tcp-simple.py example still working? How to intercept raw TCP traffic | #### Problem Description
Is the tcp-simple.py example still correct? I would expect the tcp_message function to be called when sending stuff to mitmdump?
Or did I miss something? Why does it say HTTP(S) proxy listening at ... when I just want to intercept raw TCP
```
mitmdump --tcp-hosts ".*" -s tcp-simple.py --listen-host 10.0.3.100 --listen-port 8083
[17:17:51.247] Loading script tcp-simple.py
[17:17:51.247] HTTP(S) proxy listening at 10.0.3.100:8083.
[17:17:54.905][192.168.13.6:64255] client connect
[17:18:04.904][192.168.13.6:64255] Client closed connection before completing request headers: b'<asd>\n'
[17:18:04.905][192.168.13.6:64255] client disconnect
```
#### Steps to reproduce the behavior:
1. python -m venv venv
2. source venv/bin/activate
3. pip install mitmproxy
4. Take https://docs.mitmproxy.org/stable/addons-examples/#tcp-simple example
5. mitmdump --tcp-hosts ".*" -s tcp-simple.py --listen-host 10.0.3.100 --listen-port 8083
6. nc 10.0.3.100 8083 and type in some stuff, press CTRL-D, CTRL-C
#### System Information
Mitmproxy: 10.4.2
Python: 3.12.0
OpenSSL: OpenSSL 3.3.1 4 Jun 2024
Platform: Linux-6.8.11-amd64-x86_64-with-glibc2.38
| closed | 2024-08-14T15:32:12Z | 2024-08-14T20:03:43Z | https://github.com/mitmproxy/mitmproxy/issues/7103 | [
"kind/triage"
] | osown | 1 |
InstaPy/InstaPy | automation | 6,464 | logging error | Hi.
Out of the blue, nothing foreshadowed trouble.
During the day everything worked, and in the evening it stopped.
It doesn't like posts, it just ends the session. Screenshots and logs are attached.
Unfollowing works without problems.
Updated selenium (21.3.1), instapy (0.6.16).
My guess is that Firefox was updated to version 96; I rolled back to 95, but nothing helps.
Are there any options?
It finds the photo, writes "logging error", then a wall of other output, and after that it closes the session.
Code
```python
session = InstaPy(username=insta_username,
                  password=insta_password,
                  headless_browser=False)

with smart_run(session):
    session.like_by_tags(["#катаемся"], amount=20)
    session.set_dont_like(["#порно", "#продажа", "#продать", "#автомойка", "#сервис"])
```
Link to log screenshots:
https://drive.google.com/drive/folders/1kGBnedVMPpNtPllycQH19wyeuaQJ3ev8?usp=sharing
| closed | 2022-01-20T12:42:02Z | 2022-01-23T17:44:12Z | https://github.com/InstaPy/InstaPy/issues/6464 | [] | alexpzpz | 7 |
tqdm/tqdm | pandas | 1,324 | Progress bar cut-off when using set_postfix | With tqdm==4.64.0, progress bar doesn't show full dictionary.
```python
from tqdm import tqdm
pbar = tqdm(total=self.num_samples + 1, desc=f'Epoch {current_epoch + 1}/{self.total_epoch}', unit='img')
.
.
.
pbar.set_postfix(**dict)
>>>pbar.set_postfix(**dict)
Epoch 1/2000: 0%| | 0/142 [01:39<?, ?img/s, bce=0.581, bin_class=0.773, dsc=0.
>>> dict
{'heat': 0.2678065598011017, 'val_heat': 0, 'offset': 0.12296873331069946, 'val_offset': 0, 'bce': 0.5805851817131042, 'val_bce': 0, 'dsc': 0.12840551137924194, 'val_dsc': 0, 'verse_dsc': 0.12456103732510153, 'val_verse_dsc': 0, 'bin_class': 0.7730317115783691, 'val_bin_class': 0, 'iou': 16.11809539794922, 'val_iou': 0}
```
This was never a problem with older versions of tqdm (4.36.1), so not sure if there was an API change or something internal that I am missing. | open | 2022-05-05T16:00:11Z | 2024-02-10T15:53:47Z | https://github.com/tqdm/tqdm/issues/1324 | [
"question/docs ‽",
"need-feedback 📢"
] | kleingeo | 2 |
matterport/Mask_RCNN | tensorflow | 2,747 | Dataset Format Format for Binary Images | Hi,
I want to train with my own dataset for instance aware semantic segmentation. I've already trained my dataset with UNET model, I want to train with Mask RCNN using the same dataset but here is my question. My dataset mask images are binary images, only 2 class (including bg), how can I use the code with binary images. All the tutorials out there mentioning json files which my dataset does not include. Apart from the binary images my dataset also includes gray images, where each instance corresponds to a different gray level.
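For what it's worth, the conversion you need can be sketched framework-free. This is illustrative only (pure Python lists; a real `load_mask` implementation for Mask R-CNN would return an H×W×N NumPy array plus class IDs): each non-zero gray level becomes one instance mask, and a plain binary mask simply yields a single instance:

```python
def instance_masks_from_gray(gray):
    """Split a gray-level mask into per-instance binary masks.

    `gray` is a 2-D list of ints; 0 is background, and every other
    distinct value is treated as one instance. A binary (0/1) mask
    therefore produces exactly one instance mask.
    """
    levels = sorted({v for row in gray for v in row if v != 0})
    masks = [[[1 if v == level else 0 for v in row] for row in gray]
             for level in levels]
    return levels, masks
```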
Thanks! | open | 2021-12-24T11:40:43Z | 2022-02-21T13:20:34Z | https://github.com/matterport/Mask_RCNN/issues/2747 | [] | DenizRumet | 2 |
lukas-blecher/LaTeX-OCR | pytorch | 368 | latexocr snip window opaque Ubuntu 23.10 | Hi everyone !
I have successfully installed the pix2tex package on conda, but the GUI does not work for me. When running the command "latexocr", the window pops up, but when trying the snip, the snip window is completely opaque (not very convenient, you would agree). Snipping anyway, the code just crashes with the error message
"Sandboxing disabled by user.
This plugin does not support setting window opacity
This plugin does not support setting window opacity
Traceback (most recent call last):
File "/home/mathieu/miniconda3/envs/pix2tex_env/lib/python3.12/site-packages/pix2tex/gui.py", line 334, in mouseReleaseEvent
raise e
File "/home/mathieu/miniconda3/envs/pix2tex_env/lib/python3.12/site-packages/pix2tex/gui.py", line 328, in mouseReleaseEvent
img = ImageGrab.grab(bbox=(x1, y1, x2, y2), all_screens=True)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/mathieu/miniconda3/envs/pix2tex_env/lib/python3.12/site-packages/PIL/ImageGrab.py", line 70, in grab
size, data = Image.core.grabscreen_x11(xdisplay)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
OSError: X get_image failed: error 8 (73, 0, 1173)
Abandon (core dumped)
"
I am using Ubuntu 23.10, and I am not sure whether my gnome_screenshot is setup wrong. Do you have any idea of what might be the problem ?
Thanks ! | open | 2024-03-01T14:33:37Z | 2024-04-05T11:20:04Z | https://github.com/lukas-blecher/LaTeX-OCR/issues/368 | [] | MathieuFerey | 2 |
facebookresearch/fairseq | pytorch | 4,799 | Cannot reproduce the results of BART on ELI5 | I followed https://github.com/facebookresearch/ELI5 and constructed the dataset. Then, I finetuned BART on the dataset.
Could you please provide the finetuning script that the BART paper used? @huihuifan
| open | 2022-10-15T13:49:35Z | 2022-10-15T13:49:35Z | https://github.com/facebookresearch/fairseq/issues/4799 | [
"question",
"needs triage"
] | mengyanggithub | 0 |
graphql-python/graphene-django | django | 835 | Handling database transactions | I noticed that graphene-django doesn't handle database transactions. I believe this is an issue that people should know or be careful about.
The same logic of DRF could be implemented as in the lines below that run in the dispatch method.
https://github.com/encode/django-rest-framework/blob/d985c7cbb999b2bc18a109249c583e91f4c27aec/rest_framework/views.py#L65-L102
Basically, on an exception it runs:
```python
atomic_requests = settings.get('ATOMIC_REQUESTS', False)
if atomic_requests and connection.in_atomic_block:
transaction.set_rollback(True)
```
To implement this in graphene-django, since we do not have the exception object, we could run the rollback in the error checking of the execution result, as in line 171.
https://github.com/graphql-python/graphene-django/blob/968002f1554e3a7a1c0617682be64b67823b2581/graphene_django/views.py#L167-L190
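Just to make the proposed control flow concrete, here is a framework-free sketch — `FakeConnection` stands in for `django.db.connection`, and names like `handle_execution_result` are invented for this example; the real change would call `transaction.set_rollback(True)` in the view's error-checking path:

```python
class FakeConnection:
    """Stand-in for django.db.connection; tracks the rollback flag."""
    def __init__(self, in_atomic_block=True):
        self.in_atomic_block = in_atomic_block
        self.needs_rollback = False

    def set_rollback(self, flag):
        self.needs_rollback = flag

class Result:
    """Stand-in for a graphene ExecutionResult."""
    def __init__(self, errors=None):
        self.errors = errors

def handle_execution_result(result, connection, atomic_requests=True):
    """Mirror DRF's behavior for a GraphQL execution result.

    If the result carries errors and we are inside an atomic block
    opened by ATOMIC_REQUESTS, mark the transaction for rollback so
    partial writes do not commit.
    """
    if getattr(result, "errors", None):
        if atomic_requests and connection.in_atomic_block:
            connection.set_rollback(True)
    return result
```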
What do you think about this implementation? | closed | 2019-12-24T20:13:42Z | 2021-02-16T13:54:53Z | https://github.com/graphql-python/graphene-django/issues/835 | [
"✨enhancement",
"help wanted"
] | brucedesa | 10 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,178 | Warnings, exceptions and random crashes | I would really appreciate some help here. I wanted to use this to help me with my dyslexia, but it's too unreliable.
```
OS: Linux Mint 20.2 x86_64
Host: GA-78LMT-USB3 6.0
Kernel: 5.4.0-144-generic
Uptime: 5 hours, 29 mins
Packages: 2868 (dpkg), 7 (flatpak), 14 (snap)
Shell: bash 5.0.17
Resolution: 1600x900
DE: Cinnamon
WM: Mutter (Muffin)
WM Theme: Adapta-Nokto (Adapta-Nokto)
Icons: Mint-Y-Aqua [GTK2/3]
Terminal: gnome-terminal
CPU: AMD FX-8320 (8) @ 3.500GHz
GPU: NVIDIA GeForce GTX 1060 6GB
Memory: 4272MiB / 7938MiB
```
**python3 demo_toolbox.py**
`/usr/lib/python3/dist-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.26.7) or chardet (3.0.4) doesn't match a supported version!
warnings.warn("urllib3 ({}) or chardet ({}) doesn't match a supported "
Arguments:
datasets_root: None
models_dir: saved_models
cpu: False
seed: None
Warning: you did not pass a root directory for datasets as argument.
The recognized datasets are:
LibriSpeech/dev-clean
LibriSpeech/dev-other
LibriSpeech/test-clean
LibriSpeech/test-other
LibriSpeech/train-clean-100
LibriSpeech/train-clean-360
LibriSpeech/train-other-500
LibriTTS/dev-clean
LibriTTS/dev-other
LibriTTS/test-clean
LibriTTS/test-other
LibriTTS/train-clean-100
LibriTTS/train-clean-360
LibriTTS/train-other-500
LJSpeech-1.1
VoxCeleb1/wav
VoxCeleb1/test_wav
VoxCeleb2/dev/aac
VoxCeleb2/test/aac
VCTK-Corpus/wav48
Feel free to add your own. You can still use the toolbox by recording samples yourself.
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
Expression 'ret' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1736
Expression 'AlsaOpen( hostApi, parameters, streamDir, &pcm )' failed in 'src/hostapi/alsa/pa_linux_alsa.c', line: 1768
ALSA lib pcm_dmix.c:1089:(snd_pcm_dmix_open) unable to open slave
(previous line repeated 62 more times)
`
Random crash with:
```
Traceback (most recent call last):
  File "/home/strings/Real-Time-Voice-Cloning/toolbox/__init__.py", line 170, in record
    self.add_real_utterance(wav, name, speaker_name)
  File "/home/strings/Real-Time-Voice-Cloning/toolbox/__init__.py", line 174, in add_real_utterance
    spec = Synthesizer.make_spectrogram(wav)
  File "/home/strings/Real-Time-Voice-Cloning/synthesizer/inference.py", line 152, in make_spectrogram
    mel_spectrogram = audio.melspectrogram(wav, hparams).astype(np.float32)
  File "/home/strings/Real-Time-Voice-Cloning/synthesizer/audio.py", line 60, in melspectrogram
    D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams)
  File "/home/strings/Real-Time-Voice-Cloning/synthesizer/audio.py", line 121, in _stft
    return librosa.stft(y=y, n_fft=hparams.n_fft, hop_length=get_hop_size(hparams), win_length=hparams.win_size)
  File "/home/strings/.local/lib/python3.8/site-packages/librosa/core/spectrum.py", line 217, in stft
    util.valid_audio(y)
  File "/home/strings/.local/lib/python3.8/site-packages/librosa/util/utils.py", line 310, in valid_audio
    raise ParameterError("Audio buffer is not finite everywhere")
librosa.util.exceptions.ParameterError: Audio buffer is not finite everywhere
Loaded encoder "encoder.pt" trained to step 1564501
python3: src/hostapi/alsa/pa_linux_alsa.c:3641: PaAlsaStreamComponent_BeginPolling: Assertion `ret == self->nfds' failed.
Aborted (core dumped)
```
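The `Audio buffer is not finite everywhere` error in the traceback means the recorded waveform contains NaN or infinite samples, which librosa's `valid_audio` check rejects. A minimal workaround is to sanitize the capture buffer before it reaches the spectrogram code — a sketch only; the `sanitize_wav` helper below is illustrative and not part of the repository:

```python
import numpy as np

def sanitize_wav(wav):
    """Replace non-finite samples so librosa's valid_audio check passes."""
    wav = np.asarray(wav, dtype=np.float32)
    # NaN -> 0.0; +/-Inf -> clipped to the valid [-1, 1] sample range.
    return np.nan_to_num(wav, nan=0.0, posinf=1.0, neginf=-1.0)

# Example: a capture buffer with a dropped (NaN) and a clipped (Inf) sample.
recorded = np.array([0.1, np.nan, np.inf, -0.2], dtype=np.float32)
clean = sanitize_wav(recorded)
print(np.isfinite(clean).all())  # True
```

Such a helper could be applied to `wav` in `add_real_utterance` before `Synthesizer.make_spectrogram` is called; it does not address the separate PortAudio assertion that follows, which points at the ALSA capture device rather than the audio data.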
| open | 2023-03-21T16:35:51Z | 2023-03-22T14:06:14Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1178 | [] | 007Srings | 1 |