| column | dtype | min | max |
|---|---|---|---|
| repo_name | stringlengths | 9 | 75 |
| topic | stringclasses | 30 values | |
| issue_number | int64 | 1 | 203k |
| title | stringlengths | 1 | 976 |
| body | stringlengths | 0 | 254k |
| state | stringclasses | 2 values | |
| created_at | stringlengths | 20 | 20 |
| updated_at | stringlengths | 20 | 20 |
| url | stringlengths | 38 | 105 |
| labels | listlengths | 0 | 9 |
| user_login | stringlengths | 1 | 39 |
| comments_count | int64 | 0 | 452 |
Gurobi/gurobi-logtools
plotly
17
Cut name pattern too restrictive
When extracting statistics about generated cuts, the pattern for matching cut names is too restrictive. It does not match `Relax-and-lift` which is generated for `912-glass4-0.log`. I believe the hyphen is not included in the `[\w ]` pattern for the cut name.
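A minimal sketch of the likely fix (the hypothetical log line and pattern here are assumptions, not gurobi-logtools' actual regex): adding `-` to the character class lets names like `Relax-and-lift` match.

```python
import re

# Hypothetical cut-summary line from a Gurobi log:
line = "  Relax-and-lift: 3"

too_restrictive = re.compile(r"  ([\w ]+): (\d+)")  # no hyphen in the class
widened = re.compile(r"  ([\w -]+): (\d+)")         # hyphen added (literal at end of class)

m = widened.match(line)
print(m.group(1), m.group(2))  # Relax-and-lift 3
```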
closed
2022-01-11T08:03:21Z
2022-01-12T12:48:46Z
https://github.com/Gurobi/gurobi-logtools/issues/17
[]
ronaldvdv
1
openapi-generators/openapi-python-client
fastapi
681
field required (type=value_error.missing)
Hi, I'm sure this is related to my project, not to openapi-python-client, but I need some help. When running the app, I get the following errors:

```
Error(s) encountered while generating, client was not created
Failed to parse OpenAPI document
50 validation errors for OpenAPI
paths -> /auth/login -> post -> requestBody -> content -> application/x-www-form-urlencoded -> schema -> $ref
  field required (type=value_error.missing)
paths -> /auth/login -> post -> requestBody -> content -> application/x-www-form-urlencoded -> schema -> properties -> user -> $ref
  field required (type=value_error.missing)
paths -> /auth/login -> post -> requestBody -> content -> application/x-www-form-urlencoded -> schema -> properties -> user -> required
  value is not a valid list (type=type_error.list)
paths -> /auth/login -> post -> requestBody -> content -> application/x-www-form-urlencoded -> schema -> properties -> password -> $ref
  field required (type=value_error.missing)
paths -> /auth/login -> post -> requestBody -> content -> application/x-www-form-urlencoded -> schema -> properties -> password -> required
....
```

And here's the relevant part of the api.yaml file that I'm using:

```yaml
paths:
  /auth/login:
    post:
      tags:
        - Auth
      summary: Authenticate with API
      description: Authenticate with API getting bearer token
      requestBody:
        description: Request values
        required: true
        content:
          application/x-www-form-urlencoded:
            schema:
              type: object
              properties:
                user:
                  description: API user name
                  type: string
                  required: true
                password:
                  description: API password
                  type: string
                  required: true
              required:
                - user
                - password
```

Any thoughts on what's wrong?
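For what it's worth, the error pattern suggests a likely cause (an assumption, since the full document isn't shown): in an OpenAPI Schema Object, `required` is a list of property names on the schema, not a boolean on each property, so the `required: true` entries under `user` and `password` make the validator reject those property schemas. A sketch of the corrected fragment:

```yaml
# properties carry no `required` flag; the schema-level list already covers it
properties:
  user:
    description: API user name
    type: string
  password:
    description: API password
    type: string
required:
  - user
  - password
```

(The boolean `required: true` directly under `requestBody` is fine; only the per-property booleans are invalid.)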
closed
2022-10-04T22:03:24Z
2024-09-04T00:36:49Z
https://github.com/openapi-generators/openapi-python-client/issues/681
[ "🐞bug" ]
cristiang777
7
suitenumerique/docs
django
403
⚗️AI return with complex data
## Improvement

### Problem:
When we request the AI, we transform the editor data into **markdown**. When the data has a simple structure this works fine, but when it has a complex structure, like a "Table" for example, the data coming back from the AI starts to be very "lossy".

### Tests:
- Try to see if we can send the JSON structure instead, and see if the AI is smart enough to do the actions without negatively impacting the JSON structure.
- Another, probably better, solution: send only the text content of the BlockNote JSON to the AI, bind each text fragment to an ID (it may already be bound to an ID), then replace the text content in the JSON using this ID. By doing so, we keep the complex structure on the front side and replace only the text.

## Code to improve
https://github.com/numerique-gouv/impress/blob/50891afd055b5dada1d34e57ab447638865410af/src/frontend/apps/impress/src/features/docs/doc-editor/components/AIButton.tsx#L284-L305
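A minimal sketch of the second idea, in Python for illustration (the real editor data is BlockNote JSON on the front end; the `id`/`text` field names here are assumptions): extract (id, text) pairs, send only the texts to the AI, then patch the texts back by ID so the surrounding structure is untouched.

```python
def extract_texts(block, out):
    """Collect (id, text) pairs from a nested block structure."""
    if isinstance(block, dict):
        if "id" in block and isinstance(block.get("text"), str):
            out.append((block["id"], block["text"]))
        for value in block.values():
            extract_texts(value, out)
    elif isinstance(block, list):
        for item in block:
            extract_texts(item, out)
    return out

def apply_texts(block, replacements):
    """Write transformed texts back by ID, leaving the structure intact."""
    if isinstance(block, dict):
        if block.get("id") in replacements and isinstance(block.get("text"), str):
            block["text"] = replacements[block["id"]]
        for value in block.values():
            apply_texts(value, replacements)
    elif isinstance(block, list):
        for item in block:
            apply_texts(item, replacements)

doc = {"type": "table", "rows": [{"id": "c1", "text": "hello"},
                                 {"id": "c2", "text": "world"}]}
pairs = extract_texts(doc, [])
ai_output = {i: t.upper() for i, t in pairs}  # stand-in for the AI call
apply_texts(doc, ai_output)
print(doc["rows"][0]["text"])  # HELLO
```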
closed
2024-11-06T06:15:29Z
2025-03-13T15:38:35Z
https://github.com/suitenumerique/docs/issues/403
[ "bug", "enhancement", "frontend", "backend", "editor" ]
AntoLC
0
deepinsight/insightface
pytorch
1,793
How to 3D visualize face alignments?
**Hi**
Is there an example of producing a 3D or even 2D visualization of face alignments, like the 3D mesh and the 2D 68 landmarks in this photo? Is `face.normed_embedding` from `FaceAnalysis` the 3D alignment? Please mention how to get the alignment data and how to save it as an image. Thanks in advance

[https://insightface.ai/assets/img/custom/thumb_retinaface.png](https://insightface.ai/assets/img/custom/thumb_retinaface.png)

![3D Mesh and 2D 68 Landmarks](https://insightface.ai/assets/img/custom/thumb_retinaface.png)
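Not an official answer, but a sketch of the idea: `normed_embedding` is the recognition embedding, not alignment data; the landmark attributes on a detected face (names like `landmark_2d_106` and `landmark_3d_68` are assumptions based on insightface's `FaceAnalysis` output, not verified here) give point arrays that can be rasterized into an image. A stdlib-only rasterizer for illustration:

```python
def save_landmarks_ppm(points, path, size=256):
    """Rasterize (x, y) landmark points into a plain-text PPM image (pure stdlib)."""
    grid = [[(255, 255, 255)] * size for _ in range(size)]
    for x, y in points:
        xi, yi = int(x), int(y)
        if 0 <= xi < size and 0 <= yi < size:
            grid[yi][xi] = (255, 0, 0)  # mark the landmark in red
    with open(path, "w") as f:
        f.write(f"P3\n{size} {size}\n255\n")
        for row in grid:
            f.write(" ".join(f"{r} {g} {b}" for r, g, b in row) + "\n")

# With insightface (assumed API, not verified here):
# from insightface.app import FaceAnalysis
# app = FaceAnalysis()
# app.prepare(ctx_id=0)
# faces = app.get(img)                 # img: BGR numpy array
# pts2d = faces[0].landmark_2d_106     # (106, 2) 2D landmarks
# pts3d = faces[0].landmark_3d_68      # (68, 3) 3D landmarks
# save_landmarks_ppm(pts2d, "landmarks.ppm")

save_landmarks_ppm([(10, 20), (30, 40)], "demo.ppm", size=64)
```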
open
2021-10-19T09:08:36Z
2021-10-21T11:28:15Z
https://github.com/deepinsight/insightface/issues/1793
[]
zerodwide
3
jonaswinkler/paperless-ng
django
495
[BUG] Doesn't remember PDF-viewer setting
Hey, thanks for introducing the setting to switch back to the built-in embedded PDF viewer. However, the system does not remember the setting. If I close the session, it is reset to the default. I am using two containers with a cookie prefix, if that is information you may need. Thank you :)
closed
2021-02-02T15:23:49Z
2021-02-02T16:05:23Z
https://github.com/jonaswinkler/paperless-ng/issues/495
[]
praul
3
tableau/server-client-python
rest-api
1,377
Schedules REST API is encountering an issue with responses: interval details for 'Hourly' and 'Daily' schedules are missing. This information, vital for accurate task execution, is currently unavailable. Are you aware of this issue?
Interval details, specifically the days of the week on which to execute schedules, are missing from API responses for 'Daily' and 'Hourly' schedules; these are crucial for task configuration and execution. 'Monthly' and 'Weekly' schedules do retrieve the necessary details. Requesting assistance in getting the schedule API responses to include the missing interval details, particularly the days of the week for 'Daily' and 'Hourly' schedules. Your insights and suggestions on how best to integrate this information into our API responses would be greatly appreciated. Attached to this message is a PDF containing the current API responses for reference. [Schedule Frequencies.pdf](https://github.com/tableau/server-client-python/files/15398196/Schedule.Frequencies.pdf)
closed
2024-05-22T05:05:22Z
2024-08-21T23:37:15Z
https://github.com/tableau/server-client-python/issues/1377
[ "help wanted", "fixed" ]
Hiraltailor1
3
CorentinJ/Real-Time-Voice-Cloning
pytorch
351
Clone
closed
2020-05-26T09:17:56Z
2020-06-25T07:42:55Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/351
[]
syrakusmrdrte
0
Tinche/aiofiles
asyncio
40
memory leak?
Hi Tinche

First of all, thanks for the great work! Asynchronous file support for asyncio is a great thing to have!

While testing a small project, I noticed a large number of threads and high memory consumption in the Python process. I decided to write a small test script which just writes to a file in a loop and tracks the memory:

```python
#!/usr/bin/python3
import asyncio
import aiofiles
import os
import psutil

async def printMemory():
    for iteration in range(0, 20):
        # grab the memory statistics
        p = psutil.Process(os.getpid())
        vms = p.memory_info().vms / (1024.0 * 1024.0)
        threads = p.num_threads()
        print(f'Iteration {iteration:>2d} - Memory usage (VMS): {vms:>6.1f} Mb; # threads: {threads:>2d}')

        # simple write to a test file
        async with aiofiles.open('test.txt', mode='w') as f:
            await f.write('hello\n')

        # a wait, just for the sake of it
        await asyncio.sleep(1)

loop = asyncio.get_event_loop()
try:
    loop.run_until_complete(printMemory())
finally:
    loop.close()
```

The output shows some worrisome numbers (run with Python 3.6.5 on Debian 8.10 (jessie)):

```
Iteration  0 - Memory usage (VMS):   92.5 Mb; # threads:  1
Iteration  1 - Memory usage (VMS):  308.5 Mb; # threads:  4
Iteration  2 - Memory usage (VMS):  524.6 Mb; # threads:  7
Iteration  3 - Memory usage (VMS):  740.6 Mb; # threads: 10
Iteration  4 - Memory usage (VMS):  956.6 Mb; # threads: 13
Iteration  5 - Memory usage (VMS): 1172.6 Mb; # threads: 16
Iteration  6 - Memory usage (VMS): 1388.7 Mb; # threads: 19
Iteration  7 - Memory usage (VMS): 1604.7 Mb; # threads: 22
Iteration  8 - Memory usage (VMS): 1820.8 Mb; # threads: 25
Iteration  9 - Memory usage (VMS): 2036.8 Mb; # threads: 28
Iteration 10 - Memory usage (VMS): 2252.8 Mb; # threads: 31
Iteration 11 - Memory usage (VMS): 2468.8 Mb; # threads: 34
Iteration 12 - Memory usage (VMS): 2684.8 Mb; # threads: 37
Iteration 13 - Memory usage (VMS): 2900.8 Mb; # threads: 40
Iteration 14 - Memory usage (VMS): 2972.8 Mb; # threads: 41
Iteration 15 - Memory usage (VMS): 2972.8 Mb; # threads: 41
Iteration 16 - Memory usage (VMS): 2972.8 Mb; # threads: 41
Iteration 17 - Memory usage (VMS): 2972.8 Mb; # threads: 41
Iteration 18 - Memory usage (VMS): 2972.8 Mb; # threads: 41
Iteration 19 - Memory usage (VMS): 2972.8 Mb; # threads: 41
```

Any idea where this could come from?
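Not a diagnosis, but a mitigation worth noting: file operations here are delegated to an executor thread pool, so a single bounded `ThreadPoolExecutor` shared across operations caps thread growth (aiofiles accepts an `executor` argument for this, an assumption based on its thread-delegation design). A stdlib-only sketch of the same pattern:

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# One bounded pool shared by all file operations, instead of letting
# the thread count grow with each concurrent operation.
pool = ThreadPoolExecutor(max_workers=2)

def write_line(path, text):
    with open(path, "a") as f:
        f.write(text)

async def main():
    open("test.txt", "w").close()  # start from an empty file
    loop = asyncio.get_running_loop()
    for _ in range(5):
        # aiofiles does the equivalent of this internally; passing a shared
        # executor (e.g. aiofiles.open(..., executor=pool), if your version
        # supports it) pins all file I/O to this bounded pool.
        await loop.run_in_executor(pool, write_line, "test.txt", "hello\n")

asyncio.run(main())
```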
open
2018-05-02T05:24:53Z
2018-05-02T21:41:36Z
https://github.com/Tinche/aiofiles/issues/40
[]
alexlocher
1
junyanz/pytorch-CycleGAN-and-pix2pix
deep-learning
1,027
ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it.
open
2020-05-14T07:45:35Z
2020-06-10T07:14:03Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1027
[]
bu-fan-jun
1
plotly/dash-table
dash
189
data types - decimal formatting options for numerical columns
closed
2018-11-01T11:06:25Z
2019-02-28T15:18:34Z
https://github.com/plotly/dash-table/issues/189
[]
chriddyp
7
pyjanitor-devs/pyjanitor
pandas
784
Add slice feature to `select_columns`
# Brief Description
I love the [select_columns]() method. It is powerful, especially the glob-style name selection (beautiful!). I wonder if it is possible to add a slice option, or if it is just unnecessary.

# Example API
Currently, you pass the selection as a list:

```python
# current implementation
df = pd.DataFrame(...).select_columns(['a', 'b', 'c', 'col_*'], invert=True)
```

I think it would be nice if we could do this:

```python
# proposed addition
df = pd.DataFrame(...).select_columns(['a': 'c', 'col_*'], invert=True)
```

Probably similar to what you do with [np.r_](https://numpy.org/doc/stable/reference/generated/numpy.r_.html) (which is only for indices)
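Since `['a': 'c']` isn't valid Python syntax inside a list, one way to express the idea is with `slice` objects. A minimal sketch of label-based slice plus glob selection (pure Python, independent of pyjanitor's actual implementation; the helper name is hypothetical):

```python
from fnmatch import fnmatch

def select_labels(columns, specs, invert=False):
    """Resolve a mix of exact names, glob patterns, and label slices to column names."""
    chosen = []
    for spec in specs:
        if isinstance(spec, slice):  # label-based slice, e.g. slice('a', 'c')
            start, stop = columns.index(spec.start), columns.index(spec.stop)
            chosen.extend(columns[start:stop + 1])  # inclusive, like label slicing
        else:  # exact name or glob pattern, e.g. 'col_*'
            chosen.extend(c for c in columns if fnmatch(c, spec))
    if invert:
        return [c for c in columns if c not in chosen]
    return [c for c in columns if c in chosen]

cols = ['a', 'b', 'c', 'col_1', 'col_2', 'x']
print(select_labels(cols, [slice('a', 'c'), 'col_*'], invert=True))  # ['x']
```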
closed
2020-12-16T07:19:29Z
2021-01-28T17:02:39Z
https://github.com/pyjanitor-devs/pyjanitor/issues/784
[]
samukweku
2
mars-project/mars
scikit-learn
2,896
[BUG] Deref stage may raises AssertionError: chunk key xxx will have negative ref count
<!-- Thank you for your contribution! Please review https://github.com/mars-project/mars/blob/master/CONTRIBUTING.rst before opening an issue. --> **Describe the bug** A clear and concise description of what the bug is. Try to run the `pytest -s -v --log-cli-level=debug mars/dataframe/groupby/tests/test_groupby_execution.py::test_groupby_sample` in the latest master. The test case will be ***PASSED***, but there is an unretrieved task exception: ```python ERROR asyncio:isolation.py:36 Task exception was never retrieved future: <Task finished coro=<<coroutine without __name__>()> exception=AssertionError('chunk key fba521b5c44e2f43199042f58b5fa971_0 will have negative ref count')> Traceback (most recent call last): File "mars/oscar/core.pyx", line 251, in __pyx_actor_method_wrapper return await result_handler(result) File "mars/oscar/core.pyx", line 385, in _handle_actor_result task_result = await coros[0] File "mars/oscar/core.pyx", line 428, in mars.oscar.core._BaseActor._run_actor_async_generator async with self._lock: File "mars/oscar/core.pyx", line 429, in mars.oscar.core._BaseActor._run_actor_async_generator with debug_async_timeout('actor_lock_timeout', File "mars/oscar/core.pyx", line 434, in mars.oscar.core._BaseActor._run_actor_async_generator res = await gen.athrow(*res) File "/home/admin/mars-ant/mars/services/task/supervisor/processor.py", line 655, in start yield processor.decref_stage.batch(*decrefs) File "mars/oscar/core.pyx", line 439, in mars.oscar.core._BaseActor._run_actor_async_generator res = await self._handle_actor_result(res) File "mars/oscar/core.pyx", line 359, in _handle_actor_result result = await result File "/home/admin/mars-ant/mars/oscar/batch.py", line 148, in _async_batch return [await self._async_call(*d.args, **d.kwargs)] File "/home/admin/mars-ant/mars/oscar/batch.py", line 95, in _async_call return await self.func(*args, **kwargs) File "/home/admin/mars-ant/mars/services/task/supervisor/processor.py", line 67, in inner return 
await func(processor, *args, **kwargs) File "/home/admin/mars-ant/mars/services/task/supervisor/processor.py", line 288, in decref_stage await self._lifecycle_api.decref_chunks(decref_chunk_keys) File "/home/admin/mars-ant/mars/services/lifecycle/api/oscar.py", line 134, in decref_chunks return await self._lifecycle_tracker_ref.decref_chunks(chunk_keys) File "mars/oscar/core.pyx", line 251, in __pyx_actor_method_wrapper return await result_handler(result) File "mars/oscar/core.pyx", line 385, in _handle_actor_result task_result = await coros[0] File "mars/oscar/core.pyx", line 428, in mars.oscar.core._BaseActor._run_actor_async_generator async with self._lock: File "mars/oscar/core.pyx", line 429, in mars.oscar.core._BaseActor._run_actor_async_generator with debug_async_timeout('actor_lock_timeout', File "mars/oscar/core.pyx", line 432, in mars.oscar.core._BaseActor._run_actor_async_generator res = await gen.asend(res) File "/home/admin/mars-ant/mars/services/lifecycle/supervisor/tracker.py", line 94, in decref_chunks to_remove_chunk_keys = self._get_remove_chunk_keys(chunk_keys) File "/home/admin/mars-ant/mars/services/lifecycle/supervisor/tracker.py", line 82, in _get_remove_chunk_keys assert ref_count >= 0, f"chunk key {chunk_key} will have negative ref count" AssertionError: chunk key fba521b5c44e2f43199042f58b5fa971_0 will have negative ref count ``` The exception may be caused by the operand execution error: ```python ERROR mars.services.task.supervisor.stage:stage.py:172 Subtask gTsCHkuFyK77u7ahMDUJzxD0 errored Traceback (most recent call last): File "/home/admin/mars-ant/mars/services/scheduling/worker/execution.py", line 356, in internal_run_subtask subtask, band_name, subtask_api, batch_quota_req File "/home/admin/mars-ant/mars/services/scheduling/worker/execution.py", line 456, in _retry_run_subtask return await _retry_run(subtask, subtask_info, _run_subtask_once) File "/home/admin/mars-ant/mars/services/scheduling/worker/execution.py", line 107, in 
_retry_run raise ex File "/home/admin/mars-ant/mars/services/scheduling/worker/execution.py", line 67, in _retry_run return await target_async_func(*args) File "/home/admin/mars-ant/mars/services/scheduling/worker/execution.py", line 398, in _run_subtask_once return await asyncio.shield(aiotask) File "/home/admin/mars-ant/mars/services/subtask/api.py", line 69, in run_subtask_in_slot subtask File "/home/admin/mars-ant/mars/oscar/backends/context.py", line 190, in send return self._process_result_message(result) File "/home/admin/mars-ant/mars/oscar/backends/context.py", line 70, in _process_result_message raise message.as_instanceof_cause() File "/home/admin/mars-ant/mars/oscar/backends/pool.py", line 541, in send result = await self._run_coro(message.message_id, coro) File "/home/admin/mars-ant/mars/oscar/backends/pool.py", line 333, in _run_coro return await coro File "/home/admin/mars-ant/mars/oscar/api.py", line 120, in __on_receive__ return await super().__on_receive__(message) File "mars/oscar/core.pyx", line 507, in __on_receive__ raise ex File "mars/oscar/core.pyx", line 500, in mars.oscar.core._BaseActor.__on_receive__ return await self._handle_actor_result(result) File "mars/oscar/core.pyx", line 385, in _handle_actor_result task_result = await coros[0] File "mars/oscar/core.pyx", line 428, in mars.oscar.core._BaseActor._run_actor_async_generator async with self._lock: File "mars/oscar/core.pyx", line 429, in mars.oscar.core._BaseActor._run_actor_async_generator with debug_async_timeout('actor_lock_timeout', File "mars/oscar/core.pyx", line 434, in mars.oscar.core._BaseActor._run_actor_async_generator res = await gen.athrow(*res) File "/home/admin/mars-ant/mars/services/subtask/worker/runner.py", line 125, in run_subtask result = yield self._running_processor.run(subtask) File "mars/oscar/core.pyx", line 439, in mars.oscar.core._BaseActor._run_actor_async_generator res = await self._handle_actor_result(res) File "mars/oscar/core.pyx", line 359, in 
_handle_actor_result result = await result File "/home/admin/mars-ant/mars/oscar/backends/context.py", line 190, in send return self._process_result_message(result) File "/home/admin/mars-ant/mars/oscar/backends/context.py", line 70, in _process_result_message raise message.as_instanceof_cause() File "/home/admin/mars-ant/mars/oscar/backends/pool.py", line 541, in send result = await self._run_coro(message.message_id, coro) File "/home/admin/mars-ant/mars/oscar/backends/pool.py", line 333, in _run_coro return await coro File "/home/admin/mars-ant/mars/oscar/api.py", line 120, in __on_receive__ return await super().__on_receive__(message) File "mars/oscar/core.pyx", line 507, in __on_receive__ raise ex File "mars/oscar/core.pyx", line 500, in mars.oscar.core._BaseActor.__on_receive__ return await self._handle_actor_result(result) File "mars/oscar/core.pyx", line 385, in _handle_actor_result task_result = await coros[0] File "mars/oscar/core.pyx", line 428, in mars.oscar.core._BaseActor._run_actor_async_generator async with self._lock: File "mars/oscar/core.pyx", line 429, in mars.oscar.core._BaseActor._run_actor_async_generator with debug_async_timeout('actor_lock_timeout', File "mars/oscar/core.pyx", line 434, in mars.oscar.core._BaseActor._run_actor_async_generator res = await gen.athrow(*res) File "/home/admin/mars-ant/mars/services/subtask/worker/processor.py", line 616, in run result = yield self._running_aio_task File "mars/oscar/core.pyx", line 439, in mars.oscar.core._BaseActor._run_actor_async_generator res = await self._handle_actor_result(res) File "mars/oscar/core.pyx", line 359, in _handle_actor_result result = await result File "/home/admin/mars-ant/mars/services/subtask/worker/processor.py", line 463, in run await self._execute_graph(chunk_graph) File "/home/admin/mars-ant/mars/services/subtask/worker/processor.py", line 223, in _execute_graph await to_wait File "/home/admin/mars-ant/mars/lib/aio/_threads.py", line 36, in to_thread return await 
loop.run_in_executor(None, func_call) File "/home/admin/.pyenv/versions/3.7.7/lib/python3.7/concurrent/futures/thread.py", line 57, in run result = self.fn(*self.args, **self.kwargs) File "/home/admin/mars-ant/mars/services/subtask/worker/tests/subtask_processor.py", line 70, in _execute_operand super()._execute_operand(ctx, op) File "/home/admin/mars-ant/mars/core/mode.py", line 77, in _inner return func(*args, **kwargs) File "/home/admin/mars-ant/mars/services/subtask/worker/processor.py", line 191, in _execute_operand return execute(ctx, op) File "/home/admin/mars-ant/mars/core/operand/core.py", line 485, in execute result = executor(results, op) File "/home/admin/mars-ant/mars/dataframe/groupby/sample.py", line 323, in execute errors=op.errors, File "/home/admin/mars-ant/mars/dataframe/groupby/sample.py", line 314, in <listcomp> sample_pd[iloc_col].to_numpy() File "/home/admin/mars-ant/mars/dataframe/groupby/sample.py", line 69, in _sample_groupby_iter n=n, frac=frac, replace=replace, weights=w, random_state=random_state File "/home/admin/.pyenv/versions/3.7.7/lib/python3.7/site-packages/pandas/core/generic.py", line 5365, in sample locs = rs.choice(axis_length, size=n, replace=replace, p=weights) File "mtrand.pyx", line 965, in numpy.random.mtrand.RandomState.choice ValueError: [address=127.0.0.1:42482, pid=84916] Cannot take a larger sample than population when 'replace=False' ``` **To Reproduce** To help us reproducing this bug, please provide information below: 1. Your Python version `3.7.7` 2. The version of Mars you use `Latest master` 3. Versions of crucial packages, such as numpy, scipy and pandas `numpy==1.21.5 pandas==1.3.5` 4. Full stack of the error. 5. Minimized code to reproduce the error. **Expected behavior** A clear and concise description of what you expected to happen. 
**Additional context**
I added some logs and found that the decref stage does not match the incref stage. The incref stage only increfs `MapReduceOperand`s with `stage == reduce`:

```python
for chunk in subtask.chunk_graph:
    if (
        isinstance(chunk.op, MapReduceOperand)
        and chunk.op.stage == OperandStage.reduce
    ):
        # reducer
        data_keys = chunk.op.get_dependent_data_keys()
        incref_chunk_keys.extend(data_keys)
        # main key incref as well, to ensure existence of meta
        incref_chunk_keys.extend([key[0] for key in data_keys])
```

But the decref stage decrefs all matched `MapReduceOperand`s, regardless of stage:

```python
# if subtask not executed, rollback incref of predecessors
for inp_subtask in subtask_graph.predecessors(subtask):
    for result_chunk in inp_subtask.chunk_graph.results:
        # for reducer chunk, decref mapper chunks
        if isinstance(result_chunk.op, ShuffleProxy):
            for chunk in subtask.chunk_graph:
                if isinstance(chunk.op, MapReduceOperand):
                    data_keys = chunk.op.get_dependent_data_keys()
                    print(">>>>>>>>>", chunk.op, chunk.op.stage, data_keys)
                    decref_chunk_keys.extend(data_keys)
                    decref_chunk_keys.extend(
                        [key[0] for key in data_keys]
                    )
```

The erroring data key in the traceback above belongs to a `DataFrameGroupByOperand` whose stage is `None`:

```python
>>>>>>>>> DataFrameGroupByOperand <key=7bbacc1ca962ead3202f2ee0d4774909> None ['fba521b5c44e2f43199042f58b5fa971_0']
```
closed
2022-04-02T03:35:26Z
2022-04-09T12:01:38Z
https://github.com/mars-project/mars/issues/2896
[ "type: bug", "mod: lifecycle service" ]
fyrestone
1
python-restx/flask-restx
flask
632
How do you set the array of "Server Objects" per the spec
**Ask a question**
The OpenAPI spec includes a [Server Object](https://swagger.io/specification/#server-object). I have used this for localhost, staging, and production URLs for generated clients. Is it possible to set these in restx so that the swagger document includes this object?
closed
2025-01-14T05:53:12Z
2025-01-14T16:29:48Z
https://github.com/python-restx/flask-restx/issues/632
[ "question" ]
rodericj
2
benbusby/whoogle-search
flask
308
[BUG] <brief bug description>
**Describe the bug** A clear and concise description of what the bug is. **To Reproduce** Steps to reproduce the behavior: 1. Go to '...' 2. Click on '....' 3. Scroll down to '....' 4. See error **Deployment Method** - [ ] Heroku (one-click deploy) - [x] Docker - [ ] `run` executable - [ ] pip/pipx - [ ] Other: [describe setup] **Version of Whoogle Search** - [ ] Latest build from [source] (i.e. GitHub, Docker Hub, pip, etc) - [x] Version [0.4.0 - 0.4.1] - [ ] Not sure **Additional context** I have been messing with this for a couple of hours trying to get the latest version or the 0.4.0 version to work, but DuckDuckGo bangs don't work in either (I type them and press enter or search, and nothing happens). Also, the 0.4.1 version seems to have a weird error: when I do a search, it removes the last letter from my search. So right now I am using the 0.3.2 version and would like to have the CSS that 0.4.0 and 0.4.1 have, but can't because of these errors. Thanks for this great project, and for making it easy to use and set up.
closed
2021-05-10T05:39:13Z
2021-05-10T16:13:52Z
https://github.com/benbusby/whoogle-search/issues/308
[ "bug" ]
czadikem
1
jackmpcollins/magentic
pydantic
410
Does magentic support o1 and o1-mini models?
Hi, I was trying to use these models today while generating structured output with `OpenaiChatModel`. But I ran into issues, as these models do not support the same range of parameters as older models like gpt-4, so magentic breaks quite heavily. Notably, the o1 models don't:

* support setting `max_tokens` (aside from setting it to 1)
* allow the `parallel_tool_calls` parameter
* support `tools`
open
2025-01-28T13:35:08Z
2025-02-03T12:36:36Z
https://github.com/jackmpcollins/magentic/issues/410
[]
Lawouach
2
blacklanternsecurity/bbot
automation
1,422
Optimize scan status message
Recently I've noticed that in large scans, it takes a long time (10+ seconds) to calculate the third and final status message. This is a problem, since it significantly increases the duration of the scan.
closed
2024-06-01T20:29:39Z
2024-06-01T21:11:47Z
https://github.com/blacklanternsecurity/bbot/issues/1422
[ "enhancement" ]
TheTechromancer
1
aio-libs/aiopg
sqlalchemy
473
cannot insert in PostgreSQL 8.1
Hi,

Recap: I'm using PostgreSQL version 8.1.23 and I got this error:

```
psycopg2.ProgrammingError: syntax error at or near «RETURNING»
LINE 1: INSERT INTO tbl (val) VALUES (E'abc') RETURNING tbl.id
```

This is the code that runs the insert, from the demo code in the README:

```python
metadata = sa.MetaData()
tbl = sa.Table('tbl', metadata,
               sa.Column('id', sa.Integer, primary_key=True),
               sa.Column('val', sa.String(255)))

await conn.execute(tbl.insert().values(val='abc'))
async for row in conn.execute(tbl.select()):
    print(row.id, row.val)
```

I've created the engine with **enable_hstore** set to False in order to work with this Postgres version. I am already acquiring connections from this engine and running _selects_, and they all work fine. I've also created tables with **implicit_returning=False**, which I found in the source code prevents adding the RETURNING clause, since I've seen that it is not a valid clause in this PostgreSQL version. But now I've seen that if no default is defined on a column, or it is None, awaiting the query raises: `AttributeError: 'NoneType' object has no attribute 'is_callable'` I don't see clearly how I can set a specific dialect or insert items... :( Thank you in advance!
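For reference, a sketch of declaring the table so that SQLAlchemy never appends RETURNING, with an explicit `Sequence` so the primary key has a server-side default instead of relying on the RETURNING-based fetch; `implicit_returning` and `Sequence` are standard SQLAlchemy, but whether this resolves the `is_callable` error on old aiopg versions is untested here:

```python
import sqlalchemy as sa
from sqlalchemy.dialects import postgresql

metadata = sa.MetaData()
tbl = sa.Table(
    'tbl', metadata,
    sa.Column('id', sa.Integer, sa.Sequence('tbl_id_seq'), primary_key=True),
    sa.Column('val', sa.String(255)),
    implicit_returning=False,  # never append RETURNING (invalid before PG 8.2)
)

# Compile standalone to inspect the SQL that would be emitted:
stmt = tbl.insert().values(val='abc')
sql = str(stmt.compile(dialect=postgresql.dialect()))
print(sql)
```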
closed
2018-05-11T13:41:43Z
2020-12-21T05:53:26Z
https://github.com/aio-libs/aiopg/issues/473
[]
Marruixu
1
TencentARC/GFPGAN
deep-learning
95
Possible to use Real-ESRGAN on mac os AMD, and 16-bit images?
Faces come out excellent with GFPGAN, it's pretty amazing, but without CUDA support the backgrounds are left pretty nasty. I'm still amazed at how fast it runs on CPU (only a few seconds per image, even with a scale of 8); granted, it's using the non-color version, but I have colorizing software and do the rest manually. I've been unsuccessful trying to enable color using the Paper model and forcing CPU.

More important than color, since I use other software for colorization, is the background. I need to get the backgrounds restored and not just faces. I've tried setting the `--bg_upsampler realesrgan` flag, and it does not throw an error, but it seems to have no effect on the output image. I do get the warning, though, that Real-ESRGAN is slow and not used on CPU.

Is it possible to enable Real-ESRGAN on macOS so that it uses the AMD GPU and restores the background (I have a desktop with a Pro Vega 64)? I saw the other Real-ESRGAN compiled for mac/AMD; maybe the two can be linked somehow? If it can't use the AMD GPU, can it be forced to use the CPU? I don't care if it's slow, I just need it to work. :) I do a lot of rendering that is slow, because sometimes it's the only way. The main thing is getting it to work.

Also, is it possible to enable the use of 16-bit PNG, TIFF, or cinema DNG? It would be really cool if it could support 32-bit float TIFF or EXR. Thank you
open
2021-11-11T14:41:12Z
2021-11-12T20:06:38Z
https://github.com/TencentARC/GFPGAN/issues/95
[]
KeygenLLC
1
roboflow/supervision
computer-vision
1,567
Bug in git-committers-plugin-2, v2.4.0
At the moment, an error is observed when running the `mkdocs build` action from develop.

```
File "/opt/hostedtoolcache/Python/3.10.15/x64/lib/python3.10/site-packages/mkdocs_git_committers_plugin_2/plugin.py", line 121, in get_contributors_to_file
    'avatar': commit['author']['avatar_url'] if user['avatar_url'] is not None else ''
UnboundLocalError: local variable 'user' referenced before assignment
```

This is due to: https://github.com/ojacques/mkdocs-git-committers-plugin-2/issues/72
closed
2024-10-03T21:29:35Z
2024-10-04T23:41:07Z
https://github.com/roboflow/supervision/issues/1567
[ "bug", "documentation", "github_actions" ]
LinasKo
4
Evil0ctal/Douyin_TikTok_Download_API
fastapi
244
TikTok works fine but Douyin errors out — could this be a server issue? I'm using a Japanese server with wildcard DNS through Cloudflare!
***Platform where the error occurred?***
Such as: Douyin/TikTok

***The endpoint where the error occurred?***
Such as: API-V1/API-V2/Web APP

***Submitted input value?***
Such as: video link

***Have you tried again?***
Such as: Yes, the error still exists after X time after the error occurred.

***Have you checked the readme or interface documentation for this project?***
Such as: Yes, and it is very sure that the problem is caused by the program.
closed
2023-08-18T08:59:49Z
2023-08-28T07:53:55Z
https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/244
[ "BUG", "enhancement" ]
tiermove
7
huggingface/transformers
pytorch
36,532
After tokenizers upgrade, the length of the token does not correspond to the length of the model
### System Info
transformers: 4.48.1
tokenizers: 0.2.1
python: 3.9

### Who can help?
@ArthurZucker @itazap

### Information
- [ ] The official example scripts
- [x] My own modified scripts

### Tasks
- [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [x] My own task or dataset (give details below)

### Reproduction
Code snippet:

```python
tokenizer = PegasusTokenizer.from_pretrained('IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese')
model = AutoModelForSeq2SeqLM.from_pretrained(
    'IDEA-CCNL/Randeng-Pegasus-238M-Summary-Chinese',
    config=config
)
training_args = Seq2SeqTrainingArguments(
    output_dir=config['model_name'],
    evaluation_strategy="epoch",
    # report_to="none",
    save_strategy="epoch",
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    num_train_epochs=4,
    predict_with_generate=True,
    logging_steps=0.1
)
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    tokenizer=tokenizer,
    data_collator=data_collator,
    compute_metrics=compute_metrics
)
```

Error message:

![Image](https://github.com/user-attachments/assets/05a636e0-5cde-47c7-9e31-f0a96b2a7f98)

Trial process: my original setup was transformers 4.29.1 with tokenizers 0.13.3, and the model could run inference and train normally. After upgrading, the above error occurred and normal training was not possible, so I resized the model with `model.resize_token_embeddings(len(tokenizer))`. Original model vocabulary size: 50000; tokenizer length after loading: 50103. The model trained this way produced abnormal inference results.

![Image](https://github.com/user-attachments/assets/67e1454d-1dea-48da-8100-d7f4997d9ee9)

I tried again, keeping tokenizers at 0.13.3 and upgrading transformers to 4.33.3 (1. I need to upgrade because NPU only supports version 4.3.20. 2. This version is the highest compatible with tokenizers). After switching to this version, training and inference are normal. Whenever tokenizers is greater than 0.13.3, the length changes.

### Expected behavior
I expect the tokenizer to be compatible with the original code
closed
2025-03-04T09:58:59Z
2025-03-05T09:49:44Z
https://github.com/huggingface/transformers/issues/36532
[ "bug" ]
CurtainRight
3
xorbitsai/xorbits
numpy
63
Register all members in top-level init file
closed
2022-12-09T04:50:16Z
2022-12-13T04:46:51Z
https://github.com/xorbitsai/xorbits/issues/63
[]
aresnow1
0
skypilot-org/skypilot
data-science
4,625
[k8s] Unexpected error when relaunching an INIT cluster on k8s which failed due to capacity error
To reproduce:

1. Launch a managed job with the controller on k8s with the following `~/.sky/config.yaml`:

```yaml
jobs:
  controller:
    resources:
      cpus: 2
      cloud: kubernetes
```

```console
$ sky jobs launch test.yaml --cloud aws --cpus 2 -n test-mount-bucket
Task from YAML spec: test.yaml
Managed job 'test-mount-bucket' will be launched on (estimated):
Considered resources (1 node):
----------------------------------------------------------------------------------------
 CLOUD   INSTANCE    vCPUs   Mem(GB)   ACCELERATORS   REGION/ZONE   COST ($)   CHOSEN
----------------------------------------------------------------------------------------
 AWS     m6i.large   2       8         -              us-east-1     0.10          ✔
----------------------------------------------------------------------------------------
Launching a managed job 'test-mount-bucket'. Proceed? [Y/n]:
⚙︎ Translating workdir and file_mounts with local source paths to SkyPilot Storage...
  Workdir: 'examples' -> storage: 'skypilot-filemounts-vscode-904d206c'.
  Folder : 'examples' -> storage: 'skypilot-filemounts-vscode-904d206c'.
Created S3 bucket 'skypilot-filemounts-vscode-904d206c' in us-east-1
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-904d206c/
  View logs at: ~/sky_logs/sky-2025-01-30-23-19-02-003572/storage_sync.log
Excluded files to sync to cluster based on .gitignore.
✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-904d206c/
  View logs at: ~/sky_logs/sky-2025-01-30-23-19-09-895566/storage_sync.log
✓ Uploaded local files/folders.
Launching managed job 'test-mount-bucket' from jobs controller...
Warning: Credentials used for [GCP, AWS] may expire. Clusters may be leaked if the credentials expire while jobs are running. It is recommended to use credentials that never expire or a service account.
⚙︎ Launching managed jobs controller on Kubernetes.
W 01-30 23:19:33 instance.py:863] run_instances: Error occurred when creating pods: sky.provision.kubernetes.config.KubernetesError: Insufficient memory capacity on the cluster. Required resources (cpu=4, memory=34359738368) were not found in a single node. Other SkyPilot tasks or pods may be using resources. Check resource usage by running `kubectl describe nodes`.
Full error: 0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
sky.provision.kubernetes.config.KubernetesError: Insufficient memory capacity on the cluster. Required resources (cpu=4, memory=34359738368) were not found in a single node. Other SkyPilot tasks or pods may be using resources. Check resource usage by running `kubectl describe nodes`.
Full error: 0/1 nodes are available: 1 Insufficient memory. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

During handling of the above exception, another exception occurred:

NotImplementedError

The above exception was the direct cause of the following exception:

sky.provision.common.StopFailoverError: During provisioner's failover, stopping 'sky-jobs-controller-11d9a692' failed. We cannot stop the resources launched, as it is not supported by Kubernetes. Please try launching the cluster again, or terminate it with: sky down sky-jobs-controller-11d9a692
```

2.
Launch again: ```console $ sky jobs launch test.yaml --cloud aws --cpus 2 -n test-mount-bucket Task from YAML spec: test.yaml Managed job 'test-mount-bucket' will be launched on (estimated): Considered resources (1 node): ---------------------------------------------------------------------------------------- CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN ---------------------------------------------------------------------------------------- AWS m6i.large 2 8 - us-east-1 0.10 ✔ ---------------------------------------------------------------------------------------- Launching a managed job 'test-mount-bucket'. Proceed? [Y/n]: ⚙︎ Translating workdir and file_mounts with local source paths to SkyPilot Storage... Workdir: 'examples' -> storage: 'skypilot-filemounts-vscode-b7ba6a41'. Folder : 'examples' -> storage: 'skypilot-filemounts-vscode-b7ba6a41'. Created S3 bucket 'skypilot-filemounts-vscode-b7ba6a41' in us-east-1 Excluded files to sync to cluster based on .gitignore. ✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-b7ba6a41/ View logs at: ~/sky_logs/sky-2025-01-30-23-20-51-067815/storage_sync.log Excluded files to sync to cluster based on .gitignore. ✓ Storage synced: examples -> s3://skypilot-filemounts-vscode-b7ba6a41/ View logs at: ~/sky_logs/sky-2025-01-30-23-20-58-164407/storage_sync.log ✓ Uploaded local files/folders. Launching managed job 'test-mount-bucket' from jobs controller... Warning: Credentials used for [AWS, GCP] may expire. Clusters may be leaked if the credentials expire while jobs are running. It is recommended to use credentials that never expire or a service account. Cluster 'sky-jobs-controller-11d9a692' (status: INIT) was previously in Kubernetes (gke_sky-dev-465_us-central1-c_skypilotalpha). Restarting. ⚙︎ Launching managed jobs controller on Kubernetes. ⨯ Failed to set up SkyPilot runtime on cluster. 
View logs at: ~/sky_logs/sky-2025-01-30-23-21-05-243052/provision.log AssertionError: cpu_request should not be None ```
open
2025-01-30T23:24:59Z
2025-01-31T02:32:32Z
https://github.com/skypilot-org/skypilot/issues/4625
[ "good first issue", "good starter issues" ]
Michaelvll
0
coqui-ai/TTS
pytorch
3,371
Windows installation error
### Describe the bug

When installing TTS, pip fails to build the wheels.

![image](https://github.com/coqui-ai/TTS/assets/145563206/5615a8a0-9f39-4bd5-be6b-ca9ce544f686)

### To Reproduce

pip install TTS

### Expected behavior

_No response_

### Logs

_No response_

### Environment

```shell
{
  "CUDA": {
    "GPU": [],
    "available": false,
    "version": null
  },
  "Packages": {
    "PyTorch_debug": false,
    "PyTorch_version": "2.1.1+cpu",
    "TTS": "0.21.3",
    "numpy": "1.26.0"
  },
  "System": {
    "OS": "Windows",
    "architecture": [
      "64bit",
      "WindowsPE"
    ],
    "python": "3.11.5",
    "version": "10.0.22621"
  }
}
```

### Additional context

_No response_
closed
2023-12-06T02:47:52Z
2024-01-14T09:35:19Z
https://github.com/coqui-ai/TTS/issues/3371
[ "bug" ]
yyi016100
7
tableau/server-client-python
rest-api
729
Add Documentation for Prep Flow classes and methods
Hi! Update: per comment below, verified FlowItem class and methods already exist. Updating this request for documentation of these items. Original request: Currently there is no FlowItem class and methods in TSC. Can we get that on the backlog (similar to [DatasourceItem](https://tableau.github.io/server-client-python/docs/api-ref#datasourceitem-class) and [WorkbookItem](https://tableau.github.io/server-client-python/docs/api-ref#workbookitem-class))? For now we have done a workaround using metadata api integration, but the code implementation was lengthy. Cheers! Chadd
open
2020-11-13T16:49:21Z
2022-07-22T19:14:34Z
https://github.com/tableau/server-client-python/issues/729
[ "docs" ]
thechadd
3
tqdm/tqdm
jupyter
646
File stream helpers
- [✓] I have visited the [source website], and in particular read the [known issues]
- [✓] I have searched through the [issue tracker] for duplicates
- [✓] I have mentioned version numbers, operating system and environment, where applicable:

```
>>> print(tqdm.__version__, sys.version, sys.platform)
4.26.0 3.6.6 |Anaconda custom (64-bit)| (default, Jun 28 2018, 11:07:29) [GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)] darwin
```

Would a helper around file streams for `tqdm` be useful? It seems like inputs to `tqdm` are often files. Here is a simple example:

```python3
class ProgressReader():
    def __init__(self, reader, pbar):
        self._pbar = pbar
        self._reader = reader

    # Can also override other read methods
    def read(self, *args, **kwargs):
        prev = self.tell()
        result = self._reader.read(*args, **kwargs)
        self._pbar.update(self.tell() - prev)
        return result

    # Delegate everything else
    def __getattr__(self, attr):
        return getattr(self._reader, attr)

    # Support with blocks
    def __enter__(self):
        self._reader.__enter__()
        return self

    def __exit__(self, *args, **kwargs):
        self._reader.__exit__(*args, **kwargs)


pbar = tqdm(total=os.path.getsize(filename), unit='B',
            unit_scale=True, unit_divisor=1024)
with ProgressReader(open(filename, 'rb'), pbar) as f:
    pass
```

Would welcome comments on the best approach here, e.g. either wrapping a `tqdm` instance (as above) or directly extending `tqdm()` to support file streams by switching on the input type.
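The wrapper idea can be exercised without tqdm at all. Below is a self-contained sketch in which a plain counting callback stands in for `pbar.update`; the names (`CountingReader`, `on_read`) are illustrative and not part of tqdm:

```python
import io

class CountingReader:
    """Wrap a binary stream and report bytes consumed via a callback."""
    def __init__(self, reader, on_read):
        self._reader = reader
        self._on_read = on_read

    def read(self, *args, **kwargs):
        data = self._reader.read(*args, **kwargs)
        self._on_read(len(data))  # report how many bytes this call consumed
        return data

    def __getattr__(self, attr):
        # delegate everything else (tell, seek, close, ...)
        return getattr(self._reader, attr)

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        self._reader.close()

seen = []
with CountingReader(io.BytesIO(b"x" * 1024), seen.append) as f:
    while f.read(256):
        pass
print(sum(seen))  # 1024 bytes reported, in 256-byte chunks
```

In a real integration, `seen.append` would simply be replaced by `pbar.update` on an existing `tqdm` instance.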
open
2018-12-01T23:05:19Z
2018-12-01T23:06:53Z
https://github.com/tqdm/tqdm/issues/646
[]
smallnamespace
0
httpie/cli
api
727
JavaScript is disabled
I use httpie to access hackerone.com, and the response says that JavaScript needs to be enabled:

> [root@localhost ~]# http https://hackerone.com/monero/hacktivity?sort_type=latest_disclosable_activity_at\&filter=type%3Aall%20to%3Amonero&page=1
> It looks like your JavaScript is disabled. To use HackerOne, enable JavaScript in your browser and refresh this page.

How can I fix this?
closed
2018-11-06T05:50:09Z
2018-11-06T05:57:17Z
https://github.com/httpie/cli/issues/727
[]
linkwik
1
DistrictDataLabs/yellowbrick
scikit-learn
337
Create a localization for Turkish language docs
I would like to translate some of the yellowbrick documentation into Turkish. Can you please create a localization for Turkish-language docs.
closed
2018-03-15T01:15:44Z
2018-03-16T02:23:25Z
https://github.com/DistrictDataLabs/yellowbrick/issues/337
[ "level: expert", "type: documentation" ]
Zeynepelabiad
2
jina-ai/clip-as-service
pytorch
691
When executing python -m clip_server, the public address doesn't appear…
![image](https://user-images.githubusercontent.com/100932755/164890354-810fe8f2-193b-4ba3-85ca-1839d15ec460.png) It has stayed like this for a long time; is that normal?
closed
2022-04-23T10:20:23Z
2022-04-25T05:29:43Z
https://github.com/jina-ai/clip-as-service/issues/691
[]
LioHaunt
3
PaddlePaddle/models
nlp
4,863
Question about running inference directly from strings
The official run_ernie_sequence_labeling.py script performs inference by reading data from disk and wrapping it in a data iterator. Is there a way to pass strings directly from memory and run inference on them? My in-memory strings are in list form, e.g. ['string1', 'string2', 'string3'], and I would like to run inference on them directly, segment the words, and return part-of-speech tags. Thanks!!
closed
2020-09-21T04:14:42Z
2020-09-22T07:03:31Z
https://github.com/PaddlePaddle/models/issues/4863
[]
LeeYongchao
1
iperov/DeepFaceLab
deep-learning
5,417
DeepFaceLab VRAM recognition problem
My computer specifications are as follows:
CPU: Ryzen 4800H
GPU: RTX 3060 (notebook), 6 GB
Memory: 16 GB

Task Manager confirms that the video RAM is correctly reported as 6 GB. However, DeepFaceLab recognizes the VRAM as 3.4 GB, resulting in frequent memory errors. If anyone knows the solution, please share it.
open
2021-10-22T00:28:27Z
2023-06-08T22:48:35Z
https://github.com/iperov/DeepFaceLab/issues/5417
[]
Parksehoon1505
5
qubvel-org/segmentation_models.pytorch
computer-vision
657
save model for inference
Thank you for your excellent repos such as this one. I want to save my custom model so that loading it does not raise a "segmentation_models.pytorch not found" error and does not require installing the package. What should I do: save the model class together with the model state_dict, or something else? I was able to do this with the TensorFlow repo by compiling the model with its loss and metrics and then loading it like any TensorFlow model, with nothing installed beyond the TensorFlow framework. How can I do the same with this PyTorch model?
closed
2022-09-21T15:07:10Z
2022-11-29T09:42:25Z
https://github.com/qubvel-org/segmentation_models.pytorch/issues/657
[ "Stale" ]
Mahmuod1
3
Gozargah/Marzban
api
851
WireGuard
Greetings, and thanks for your hard work. If possible, please add support for configuring WireGuard (Xray core) in Marzban's host settings, and also add accounting support for WireGuard, so that each user can be given an account with a defined traffic/time limit, just like the other protocols. Thank you.
closed
2024-03-05T20:46:58Z
2024-03-06T08:36:42Z
https://github.com/Gozargah/Marzban/issues/851
[ "Duplicate" ]
w0l4i
1
paperless-ngx/paperless-ngx
django
8,910
[BUG] Custom fields endpoint does not respect API version
### Description Hey folks, thank you a ton for all the effort you put into Paperless, it's easily one of the best maintained self-hosted projects out there! A fairly recent update (I believe #8299) changed the API response for the custom fields endpoint from ```json "id": 1, "name": "TestSelect", "data_type": "select", "extra_data": { "select_options": [ "A", "B" ], "default_currency": null }, "document_count": 0 ``` to ```json "id": 27, "name": "TestSelect", "data_type": "select", "extra_data": { "select_options": [ { "id": "5t7Ix9oT5zdhPVrV", "label": "A" }, ... ] }, "document_count": 1 ``` but the [API version](https://docs.paperless-ngx.com/api/#api-versioning) wasn't incremented and calls using any of the older versions still receive responses using this new format. ### Steps to reproduce ```bash curl --request GET \ --url 'http://{{server}}/api/custom_fields/' \ --header 'accept: application/json; version=6' \ --header 'content-type: application/json' ``` ### Webserver logs ```bash Not applicable ``` ### Browser logs ```bash ``` ### Paperless-ngx version 2.14.5 ### Host OS Ubuntu 22.04 ### Installation method Docker - official image ### System status ```json ``` ### Browser _No response_ ### Configuration changes _No response_ ### Please confirm the following - [x] I believe this issue is a bug that affects all users of Paperless-ngx, not something specific to my installation. - [x] This issue is not about the OCR or archive creation of a specific file(s). Otherwise, please see above regarding OCR tools. - [x] I have already searched for relevant existing issues and discussions before opening this report. - [x] I have updated the title field above with a concise description.
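For context on the versioning mechanism involved: the pinned version travels as a media-type parameter of the Accept header (the scheme DRF calls `AcceptHeaderVersioning`). A toy parser, purely illustrative and not Paperless's actual code, shows the shape of the value being negotiated:

```python
def accept_version(header: str, default: str = "1") -> str:
    """Extract the `version` parameter from an Accept header value."""
    media_type, _, params = header.partition(";")
    for param in params.split(";"):
        key, _, value = param.strip().partition("=")
        if key == "version":
            return value
    return default

print(accept_version("application/json; version=6"))  # 6
print(accept_version("application/json"))             # 1 (fallback default)
```

The bug report is that the server parses this correctly but then serves the new response shape regardless of the parsed value.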
closed
2025-01-25T23:17:09Z
2025-02-27T03:10:20Z
https://github.com/paperless-ngx/paperless-ngx/issues/8910
[ "bug", "backend" ]
LeoKlaus
4
tensorflow/tensor2tensor
deep-learning
1,306
Transformer model crashing due to memory issues trying to translate garbled text
I am translating a Chinese text file into English using the transformer model. My text file has a lot of garbled text and it causes memory usage to spike. Here is the text: https://gist.github.com/echan00/acde1d7e460cd9e467dfd612ea14ab66 Besides avoiding sending garbled text to the model, how else could I mitigate the problem described?
open
2018-12-16T03:54:45Z
2018-12-21T08:46:48Z
https://github.com/tensorflow/tensor2tensor/issues/1306
[]
echan00
0
StructuredLabs/preswald
data-visualization
520
[FEATURE] Introduce tab() Component for Multi-Section UI Navigation
**Goal** Add a `tab()` component that enables developers to organize UI content into labeled tabs within their Preswald apps—ideal for sectioning long dashboards, multiple views, or split data insights. --- ### 📌 Motivation Many dashboards and tools created with Preswald contain multiple logical sections (e.g., charts, tables, filters). Scrolling through all at once creates cognitive overload. A `tab()` component provides: - Better user experience through organized navigation - A way to switch between related views in-place - Clean separation of logic (e.g., Overview vs. Details) This aligns with the existing layout vision (e.g., `size`, `collapsible()`), and will significantly improve interface design for real-world dashboards. --- ### ✅ Acceptance Criteria - [ ] Add `tab()` component to `preswald/interfaces/components.py` - [ ] Frontend implementation using ShadCN’s `Tabs` from `@/components/ui/tabs.tsx` - [ ] Create `TabWidget.jsx` to render tabs and dynamic child content - [ ] Register component in `DynamicComponents.jsx` - [ ] Tab component should accept: - `label: str` – Label for the tab container - `tabs: list[dict]` – Each item with `title: str`, `components: list` - `size: float = 1.0` – For layout sizing - [ ] Ensure tabs support rendering other registered components (e.g., text, plotly, table) inside them - [ ] Fully documented in SDK --- ### 🛠 Implementation Plan #### 1. **Backend – Component Definition** In `preswald/interfaces/components.py`: ```python def tab( label: str, tabs: list[dict], size: float = 1.0, ) -> None: service = PreswaldService.get_instance() component_id = f"tab-{hashlib.md5(label.encode()).hexdigest()[:8]}" component = { "type": "tab", "id": component_id, "label": label, "size": size, "tabs": tabs, } service.append_component(component) ``` Register it in `__init__.py`: ```python from .components import tab ``` --- #### 2. 
**Frontend – UI Component** Create: `frontend/src/components/widgets/TabWidget.jsx` ```jsx import React from 'react'; import { Tabs, TabsList, TabsTrigger, TabsContent } from '@/components/ui/tabs'; import { Card } from '@/components/ui/card'; const TabWidget = ({ _label, tabs = [] }) => { const [activeTab, setActiveTab] = React.useState(tabs?.[0]?.title || ''); return ( <Card className="mb-4 p-4 rounded-2xl shadow-md"> <h2 className="font-semibold text-lg mb-2">{_label}</h2> <Tabs value={activeTab} onValueChange={setActiveTab}> <TabsList className="mb-2 flex space-x-2"> {tabs.map((tab) => ( <TabsTrigger key={tab.title} value={tab.title}> {tab.title} </TabsTrigger> ))} </TabsList> {tabs.map((tab) => ( <TabsContent key={tab.title} value={tab.title}> {/* Render nested components */} {tab.components?.map((child, i) => ( <div key={i}>{window.renderDynamicComponent(child)}</div> ))} </TabsContent> ))} </Tabs> </Card> ); }; export default TabWidget; ``` --- #### 3. **DynamicComponents.jsx – Register `tab`** ```jsx import TabWidget from '@/components/widgets/TabWidget'; case 'tab': return ( <TabWidget {...commonProps} _label={component.label} tabs={component.tabs} /> ); ``` Ensure `renderDynamicComponent()` is accessible globally or passed through props to render children. 
--- ### 🧪 Testing Plan - Add a test in `examples/iris/hello.py`: ```python from preswald import tab, text, table, get_df, connect connect() df = get_df("sample_csv") tab( label="Data Views", tabs=[ {"title": "Intro", "components": [text("Welcome to the Iris app.")]}, {"title": "Table", "components": [table(df)]}, ] ) ``` - Run with `preswald run` and confirm: - Tab titles appear - Content switches correctly - Styling matches Tailwind theme --- ### 🧾 SDK Usage Example ```python tab( label="Navigation", tabs=[ {"title": "Overview", "components": [text("Summary goes here")]}, {"title": "Data", "components": [table(df)]}, {"title": "Chart", "components": [plotly(fig)]} ] ) ``` --- ### 📚 Docs To Update - [ ] `/docs/sdk/tab.mdx` – Full parameters and example - [ ] `/docs/layout/guide.mdx` – Add layout pattern for `tab()` --- ### 🧩 Files Involved - `preswald/interfaces/components.py` - `frontend/src/components/widgets/TabWidget.jsx` - `frontend/src/components/ui/tabs.tsx` - `DynamicComponents.jsx` --- ### 💡 Future Ideas - Allow dynamic content loading (lazy tabs) - Optional `icon` support per tab - Persist selected tab state across sessions
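As a sanity check on the proposed id scheme (the first 8 hex characters of an MD5 of the label), the following stand-alone snippet, using the same recipe as the `tab()` sketch above, confirms ids are deterministic and well-formed:

```python
import hashlib

def tab_component_id(label: str) -> str:
    # Same recipe as the proposed tab(): md5 of the label, truncated to 8 hex chars
    return f"tab-{hashlib.md5(label.encode()).hexdigest()[:8]}"

a = tab_component_id("Data Views")
b = tab_component_id("Data Views")
assert a == b                      # same label always yields the same id
assert a.startswith("tab-") and len(a) == len("tab-") + 8
print(a == b)  # True
```

Determinism matters here: re-running the script must produce the same component id so that frontend state keyed on it survives reruns. Note this also means two tabs with identical labels would collide.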
open
2025-03-24T06:15:12Z
2025-03-24T06:15:12Z
https://github.com/StructuredLabs/preswald/issues/520
[ "enhancement" ]
amrutha97
0
jpadilla/django-rest-framework-jwt
django
485
Documentation not found
Hi there, the documentation has gone, could anyone find it? THANKS!!
closed
2019-07-09T15:32:47Z
2019-07-09T23:48:24Z
https://github.com/jpadilla/django-rest-framework-jwt/issues/485
[]
YipCyun
2
dpgaspar/Flask-AppBuilder
rest-api
2,090
How to enable LDAP authentication
### Environment

Flask-AppBuilder version: 4.3.4

### Describe the expected results

I am setting the following in `config.py`, but it still doesn't seem to be using LDAP in any way. Anything I can do to check?

```python
AUTH_TYPE = AUTH_LDAP
AUTH_LDAP_SERVER = "ldaps://my-ldap-server.com"
AUTH_LDAP_USE_TLS = False
```

### Describe the actual results

Nothing seems to happen, and checking the underlying database, the user table is not populated (which is something I would have expected).

### Steps to reproduce
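For what it's worth, `AUTH_TYPE = AUTH_LDAP` alone is usually not enough: Flask-AppBuilder also needs to know how to search and bind, and local user rows are only created when self-registration is enabled. A hedged sketch of a fuller `config.py` fragment, with the search base, bind DN, and password as placeholders:

```python
from flask_appbuilder.security.manager import AUTH_LDAP

AUTH_TYPE = AUTH_LDAP
AUTH_LDAP_SERVER = "ldaps://my-ldap-server.com"
AUTH_LDAP_USE_TLS = False

# Where and how to look up users (illustrative placeholder values)
AUTH_LDAP_SEARCH = "ou=people,dc=example,dc=com"
AUTH_LDAP_BIND_USER = "cn=query,dc=example,dc=com"
AUTH_LDAP_BIND_PASSWORD = "query-password"
AUTH_LDAP_UID_FIELD = "uid"

# Create a local user row on first successful LDAP login
AUTH_USER_REGISTRATION = True
AUTH_USER_REGISTRATION_ROLE = "Public"
```

Without `AUTH_USER_REGISTRATION = True`, a successful LDAP bind can still leave the user table empty, which matches the symptom described.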
open
2023-07-21T19:02:18Z
2023-09-25T05:19:55Z
https://github.com/dpgaspar/Flask-AppBuilder/issues/2090
[ "question" ]
jcauserjt
2
proplot-dev/proplot
data-visualization
356
Plotting stuff example gives `ValueError`
I'm learning proplot according to the documents. When trying the [Plotting stuff](https://proplot.readthedocs.io/en/latest/basics.html#Plotting-stuff) example, it gives a `ValueError: DiscreteNorm is not invertible.` My matplotlib and proplot versions are as follows: ```python In [1]: matplotlib.__version__ Out[1]: '3.5.1' In [2]: pplt.__version__ Out[2]: '0.9.5.post301' ``` I'm not sure whether this is related to an incompatibility with matplotlib version 3.5.1.
closed
2022-04-23T14:10:54Z
2023-03-29T05:45:02Z
https://github.com/proplot-dev/proplot/issues/356
[ "bug" ]
chenyulue
1
Lightning-AI/pytorch-lightning
pytorch
19,730
ValueError: dictionary update sequence element #0 has length 1; 2 is required
### Bug description I am trying to train a Lightning model that inherits from pl.LightningModule and implements a simple feed-forward network. The issue is that when I run it, it spits out the below error trace coming from trainer.fit(). I found this [very similar issue](https://github.com/Lightning-AI/pytorch-lightning/issues/9318), where downgrading to `torchmetrics<=0.5.0` fixed the issue, but that is not possible in my case as v2.2.0 of pytorch-lightning is not compatible with such an old version of torchmetrics. I tried downgrading to 0.7., the oldest compatible version, but it led to a different error also in the trainer.fit method. Thanks for your attention and I would appreciate any help with this. ### What version are you seeing the problem on? v2.2 ### How to reproduce the bug ```python Below is the model class definition import pytorch_lightning as pl import torch import numpy as np from torch.nn import MSELoss, L1Loss from torchmetrics import R2Score torch.random.manual_seed(123) class LightningModelSimple(pl.LightningModule): def __init__( self, latent_model, readout_model=None, losses={}, metrics=[], gpu=True, learning_rate=0.001, weight_decay=0.0, ): super().__init__() self.save_hyperparameters() self.latent_model = latent_model if readout_model is None: self.readout_model = torch.nn.Identity() else: self.readout_model = readout_model # losses if "target" in losses: self.loss_target = losses["target"] else: self.loss_target = None if "latent_target" in losses: self.loss_latent_target = losses["latent_target"] self.weight_loss_latent_target = losses["weight_loss_latent_target"] else: self.loss_latent_target = None self.gpu = gpu self.metrics = metrics self.learning_rate = learning_rate self.weight_decay = weight_decay def forward(self, x): x_latent = self.latent_model(x) y = self.readout_model(x_latent) return y def step(self, partition, batch, batch_idx): spectra, target_glucose = batch # get latent predictions self.pred_latent = 
self.latent_model(spectra.float()) # get glucose predictions self.pred_glucose = self.readout_model(self.pred_latent) # compute losses loss = 0 if self.loss_target is not None: loss += self.loss_target(self.pred_glucose, target_glucose) self.log(partition + "_loss_target", loss, on_epoch=True) if self.loss_latent_target is not None: loss_latent_target = ( self.weight_loss_latent_target * self.loss_latent_target(self.pred_latent, target_glucose.unsqueeze(1)) ) self.log( partition + "_loss_latent_target", loss_latent_target, on_epoch=True ) loss += loss_latent_target self.log(partition + "_loss_total", loss, on_epoch=True) for metric_name, metric in self.metrics: self.log( partition + "_" + metric_name, metric(self.pred_glucose, target_glucose), on_epoch=True, ) return loss def training_step(self, batch, batch_idx): return self.step("train", batch, batch_idx) def validation_step(self, batch, batch_idx): return self.step("val", batch, batch_idx) def test_step(self, batch, batch_idx): return self.step("test", batch, batch_idx) def configure_optimizers(self): return torch.optim.Adam( self.parameters(), lr=self.hparams.learning_rate, weight_decay=self.hparams.weight_decay, ) This should go in a different file called helpers.py def log_parameter(params, parser, param_name=""): if isinstance(params, dict): for key in params.keys(): if key == "class_path": parser = log_parameter(params[key], parser, param_name) else: parser = log_parameter(params[key], parser, key) else: parser.add_argument("--" + param_name, type=type(params), default=params) return parser def update(config_data, params): for k, v in params.items(): if isinstance(v, collections.abc.Mapping): config_data[k] = update(config_data.get(k, {}), v) else: config_data[k] = v return config_data def train_model(config_file, **kwargs): loader = yaml.SafeLoader with open(config_file, "r") as stream: config_data = yaml.load(stream, Loader=loader) if "params" in kwargs: config_data = update(config_data, kwargs["params"]) 
if "latent_model" in kwargs: config_data["lightning_model"]["init_args"]["latent_model"] = kwargs[ "latent_model" ] # experiment_name = config_data["experiment_name"] n_epochs = config_data["n_epochs"] pl.seed_everything(1234) # add arguments to parser parser = ArgumentParser(conflict_handler="resolve") parser.add_argument( "--auto-select-gpus", default=True, help="run automatically on GPU if available" ) parser.add_argument("--max-epochs", default=n_epochs, type=int) parser.add_argument("gpus", type=int, default=1) parser = log_parameter(config_data, parser) # parse arguments to trainer args = parser.parse_args() if args.gpus == 1: device = "cuda" elif args.gpus == 0: device = "cpu" # create mlflow experiment if it doesn't yet exist try: current_experiment = dict(mlflow.get_experiment_by_name(args.experiment_name)) experiment_id = current_experiment["experiment_id"] except: print("creating new experiment") experiment_id = mlflow.create_experiment(args.experiment_name) # # start experiment with mlflow.start_run(experiment_id=experiment_id) as run: with open("log.txt", "a") as log_file: log_file.write("'" + str(run.info.run_id) + "'" + ", ") path_mlflow_results = ( "mlruns/" + str(experiment_id) + "/" + str(run.info.run_id) ) path_checkpoints = path_mlflow_results + "/checkpoints" # copy yaml file to mlfow results # TODO: this is a hack for now, this should automatically be logged # with open(path_mlflow_results + "/" + config_file, "w") as f: with open(path_mlflow_results + "/config.yaml", "w") as f: yaml.dump(config_data, f) # initialize dataloader config_data = initialize_datamodule(config_data) datamodule = config_data["datamodule"] # extract key for model selection loss_key = config_data["metric_model_selection"] if ( config_data["datamodule"].split_label_val == "Barcode" and "val_" in loss_key[0] ): raise ValueError( "split_label_val=Barcode with metric_model_selection=", loss_key, " introduces data leakage", ) # initialize lightning model if ( 
config_data["lightning_model"]["class_path"] == "models.lightning_model.LightningModel" ): use_val_test_data_in_train = True elif ( config_data["lightning_model"]["class_path"] == "models.lightning_model.LightningModelSimple" ): use_val_test_data_in_train = False config_data = initialize_modules(config_data) lightning_model = config_data["lightning_model"] print(type(lightning_model)) print(type(datamodule)) # monitor different metrics depending on loss variable checkpoints = [] monitored_metrics = config_data["monitored_metrics"] for i, (me, mo) in enumerate(monitored_metrics): ckpt = pl.callbacks.ModelCheckpoint( monitor=me, mode=mo, dirpath=path_checkpoints, filename="{epoch:02d}-{" + me + ":.4f}", save_top_k=1, ) checkpoints.append(ckpt) # checkpoints.append( # pl.callbacks.ModelCheckpoint( # dirpath=path_checkpoints, # filename="every_n_{epoch:02d}", # every_n_epochs=10, # save_top_k=-1, # <--- this is important! # ) # ) # log all parameter mlflow.pytorch.autolog() for arg in vars(args): mlflow.log_param(arg, getattr(args, arg)) # train model trainer = pl.Trainer(max_epochs=n_epochs, logger=True, callbacks=checkpoints) # TODO: this is very hackey and should be revisited # we create a combined dataloader which is the same for train/validation/test # batching is applied to the train dataloader, thus there will be multiple batches with the batch size defined in config.yaml # the validation and test datloaders only have one batch which has the size of the entire validation/test set # insight the lightning module we read out the validation and test batch at step 0 and save it as a class # attribute such that all validation and test data can be used in all training steps if use_val_test_data_in_train: datamodule.setup(stage="") iterables_train = { "train": datamodule.train_dataloader(), "val": datamodule.val_dataloader(), "test": datamodule.test_dataloader(), } iterables_val = { "train": datamodule.train_dataloader(), "val": datamodule.val_dataloader(), "test": 
datamodule.test_dataloader(), } iterables_test = { "train": datamodule.train_dataloader(), "val": datamodule.val_dataloader(), "test": datamodule.test_dataloader(), } combined_loader_train = CombinedLoader(iterables_train, mode="max_size") combined_loader_val = CombinedLoader(iterables_val, mode="max_size") combined_loader_test = CombinedLoader(iterables_test, mode="max_size") trainer.fit(lightning_model, combined_loader_train, combined_loader_val) else: trainer.fit(lightning_model, datamodule=datamodule) # evaluate tests for all monitored metrics ckpts = glob.glob(path_checkpoints + "/*") for ckpt in ckpts: if loss_key[0] in ckpt: if use_val_test_data_in_train: result = trainer.test( dataloaders=combined_loader_test, ckpt_path=ckpt ) else: result = trainer.test(datamodule=datamodule, ckpt_path=ckpt) print(result) Finally the main file import torch import utils.helpers as helpers torch.random.manual_seed(123) if __name__ == "__main__": # profil data # train_model("config_profil_latent.yaml") # train_model("config_profil_readout.yaml") # train_model("config_profil.yaml") # train_model("config_profil_simple.yaml") for weight_decay in [1.0]: for val_subject in range(0, 14): params = { "datamodule": { "init_args": { "val_index": [val_subject], "test_index": [], } }, "lightning_model": { "init_args": { "weight_decay": weight_decay, } }, } helpers.train_model("config_profil_simple.yaml", params=params) ``` ### Error messages and logs ``` Traceback (most recent call last): File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 44, in _call_and_handle_interrupt return trainer_fn(*args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 579, in _fit_impl self._run(model, ckpt_path=ckpt_path) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 969, in _run _log_hyperparams(self) File 
"/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/loggers/utilities.py", line 95, in _log_hyperparams logger.save() File "/opt/conda/envs/artemis/lib/python3.10/site-packages/lightning_utilities/core/rank_zero.py", line 42, in wrapped_fn return fn(*args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/lightning_fabric/loggers/csv_logs.py", line 157, in save self.experiment.save() File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/loggers/csv_logs.py", line 67, in save save_hparams_to_yaml(hparams_file, self.hparams) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/core/saving.py", line 354, in save_hparams_to_yaml yaml.dump(v) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/__init__.py", line 253, in dump return dump_all([data], stream, Dumper=Dumper, **kwds) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/__init__.py", line 241, in dump_all dumper.represent(data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File 
"/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 356, in represent_object return self.represent_mapping(tag+function_name, value) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 330, in represent_object dictitems = dict(dictitems) ValueError: dictionary update sequence element #0 has length 1; 2 is required During handling of the above exception, another exception occurred: Traceback (most recent call last): File "/home/pap_spiden_com/spiden_ds/experiments/artemis/main.py", line 28, in <module> helpers.train_model("config_profil_simple.yaml", params=params) File "/home/pap_spiden_com/spiden_ds/experiments/artemis/utils/helpers.py", line 191, in train_model trainer.fit(lightning_model, datamodule=datamodule) File 
"/opt/conda/envs/artemis/lib/python3.10/site-packages/mlflow/utils/autologging_utils/safety.py", line 573, in safe_patch_function patch_function(call_original, *args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/mlflow/utils/autologging_utils/safety.py", line 252, in patch_with_managed_run result = patch_function(original, *args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/mlflow/pytorch/_lightning_autolog.py", line 386, in patched_fit result = original(self, *args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/mlflow/utils/autologging_utils/safety.py", line 554, in call_original return call_original_fn_with_event_logging(_original_fn, og_args, og_kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/mlflow/utils/autologging_utils/safety.py", line 489, in call_original_fn_with_event_logging original_fn_result = original_fn(*og_args, **og_kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/mlflow/utils/autologging_utils/safety.py", line 551, in _original_fn original_result = original(*_og_args, **_og_kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/trainer/trainer.py", line 543, in fit call._call_and_handle_interrupt( File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/trainer/call.py", line 67, in _call_and_handle_interrupt logger.finalize("failed") File "/opt/conda/envs/artemis/lib/python3.10/site-packages/lightning_utilities/core/rank_zero.py", line 42, in wrapped_fn return fn(*args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/lightning_fabric/loggers/csv_logs.py", line 166, in finalize self.save() File "/opt/conda/envs/artemis/lib/python3.10/site-packages/lightning_utilities/core/rank_zero.py", line 42, in wrapped_fn return fn(*args, **kwargs) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/lightning_fabric/loggers/csv_logs.py", line 157, in save self.experiment.save() 
File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/loggers/csv_logs.py", line 67, in save save_hparams_to_yaml(hparams_file, self.hparams) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/pytorch_lightning/core/saving.py", line 354, in save_hparams_to_yaml yaml.dump(v) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/__init__.py", line 253, in dump return dump_all([data], stream, Dumper=Dumper, **kwds) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/__init__.py", line 241, in dump_all dumper.represent(data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 27, in represent node = self.represent_data(data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 199, in represent_list return self.represent_sequence('tag:yaml.org,2002:seq', data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 92, in represent_sequence node_item = self.represent_data(item) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 356, in represent_object return 
self.represent_mapping(tag+function_name, value) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 48, in represent_data node = self.yaml_representers[data_types[0]](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 207, in represent_dict return self.represent_mapping('tag:yaml.org,2002:map', data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 118, in represent_mapping node_value = self.represent_data(item_value) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 52, in represent_data node = self.yaml_multi_representers[data_type](self, data) File "/opt/conda/envs/artemis/lib/python3.10/site-packages/yaml/representer.py", line 330, in represent_object dictitems = dict(dictitems) ValueError: dictionary update sequence element #0 has length 1; 2 is required ``` ### Environment <details> <summary>Current environment</summary> ``` #- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow): #- PyTorch Lightning Version (e.g., 1.5.0): #- Lightning App Version (e.g., 0.5.2): #- PyTorch Version (e.g., 2.0): #- Python version (e.g., 3.9): #- OS (e.g., Linux): #- CUDA/cuDNN version: #- GPU models and configuration: #- How you installed Lightning(`conda`, `pip`, source): #- Running environment of LightningApp (e.g. local, cloud): ``` </details> ### More info _No response_
closed
2024-04-03T11:52:24Z
2024-06-13T11:14:04Z
https://github.com/Lightning-AI/pytorch-lightning/issues/19730
[ "bug" ]
pau-altur
3
facebookresearch/fairseq
pytorch
5,552
Error when installing fairseq
Hi everyone, I am running into the following error when trying to install fairseq with `pip install fairseq`. This is the error in question: `FileNotFoundError: [Errno 2] No such file or directory: 'fairseq/version.txt'` Thanks for your help.
open
2024-10-13T21:56:45Z
2025-01-22T03:25:33Z
https://github.com/facebookresearch/fairseq/issues/5552
[ "bug", "needs triage" ]
ChristopheYe
6
serengil/deepface
machine-learning
1,169
File image type issue
Currently, image files allowed to be loaded are constrained by the extensions ["jpg", "jpeg", "png"]. This however does not guarantee that the file contents are actually images of that type, or even images at all (a file extension can easily be changed to whatever the user wants). In my personal version I implemented a check on the file content type, which is more reliable, and as a result, just for your knowledge, I discovered that not all files from `tests\dataset` are loaded: in fact "img47.jpg" is a WebP image (not a JPEG). FYI
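A content-based check like the one described above can be done by sniffing the file's leading magic bytes rather than trusting its extension. The sketch below is illustrative only (deepface does not ship this helper; the function name is made up here):

```python
from typing import Optional


def sniff_image_type(data: bytes) -> Optional[str]:
    """Identify an image format from its leading magic bytes, ignoring the extension."""
    if data.startswith(b"\xff\xd8\xff"):          # JPEG: SOI marker
        return "jpeg"
    if data.startswith(b"\x89PNG\r\n\x1a\n"):     # PNG: 8-byte signature
        return "png"
    if data[:4] == b"RIFF" and data[8:12] == b"WEBP":  # WebP: RIFF container
        return "webp"
    return None                                    # unknown / not an image
```

Applied to the test dataset, a check like this is what reveals that "img47.jpg" is really a WebP file.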
closed
2024-04-06T16:20:19Z
2024-04-07T08:34:31Z
https://github.com/serengil/deepface/issues/1169
[ "enhancement" ]
AndreaLanfranchi
1
aleju/imgaug
deep-learning
849
Adding BlendAlphaSimplexNoise into an augmentation sequence fails to convert keypoints
Imgaug 0.4.0 Python 3.10 `iaa.BlendAlphaSimplexNoise` seems to cause problems when converting keypoints. I have created a sequence of augmentations: ```python seq = iaa.Sequential([ iaa.Affine(rotate=(-25, 25)), iaa.AllChannelsCLAHE(clip_limit=(1, 3), tile_grid_size_px=(10, 25)), iaa.BlendAlphaSimplexNoise(iaa.Multiply(iap.Uniform(0.7, 1.3), per_channel=True), size_px_max=(2, 16), upscale_method="nearest") # iaa.BlendAlphaFrequencyNoise(foreground=iaa.Multiply(iap.Choice([0.8, 1.2]), per_channel=True)) ], random_order=False) ``` When I try to augment an image and the corresponding keypoints with: ```python image_aug, kps_aug = seq(image=image, keypoints=kps_oi) ``` I get the error: ```python File ~/anaconda3/envs/dlc239-gui/lib/python3.10/site-packages/imgaug/augmenters/blend.py:757, in BlendAlphaMask._blend_coordinates(cls, cbaoi, cbaoi_fg, cbaoi_bg, mask_image, mode) 755 subgen = zip(coords, coords_fg, coords_bg) 756 for coord, coord_fg, coord_bg in subgen: --> 757 x_int = int(np.round(coord[0])) 758 y_int = int(np.round(coord[1])) 759 if 0 <= y_int < h_img and 0 <= x_int < w_img: ValueError: cannot convert float NaN to integer ``` My keypoints include some NaN values (as a side note). If I specifically remove `iaa.BlendAlphaSimplexNoise`, there is no error. For example, if I use `iaa.BlendAlphaFrequencyNoise` instead, there is also no error.
open
2024-05-03T12:39:05Z
2024-05-03T12:39:05Z
https://github.com/aleju/imgaug/issues/849
[]
vonaviv
0
taverntesting/tavern
pytest
514
Question - How can we include a yaml file from different directory?
In the same directory, it is possible to include via: ```yaml includes: - !include data.yaml ``` How can we achieve the same if I want to import a file which resides in the outer (root) directory? ```root/common.yaml in -> root/first/test_hola.tavern.yaml```
closed
2020-01-28T13:18:59Z
2020-01-29T09:56:21Z
https://github.com/taverntesting/tavern/issues/514
[]
imkaka
2
pywinauto/pywinauto
automation
1,074
could pywinauto switch an input method?
It seems AutoHotkey could switch an input method based on certain logic: [Change input method from Chinese to English... is this AHK territory?](https://www.reddit.com/r/AutoHotkey/comments/54tg3i/change_input_method_from_chinese_to_english_is/+&cd=1&hl=zh-CN&ct=clnk) Could pywinauto do this too?
closed
2021-05-20T04:05:09Z
2021-05-22T02:36:27Z
https://github.com/pywinauto/pywinauto/issues/1074
[ "invalid", "question" ]
leafonsword
2
chaos-genius/chaos_genius
data-visualization
369
Make multidimensional drill down to be configurable for DeepDrills
Make multi-dimensional drill down configurable for DeepDrills. Disable the multidimensional option in the UI when the feature is disabled.
closed
2021-11-04T02:54:36Z
2021-11-12T12:00:47Z
https://github.com/chaos-genius/chaos_genius/issues/369
[ "🖥️ frontend", "🛠️ backend", "P2" ]
suranah
2
Farama-Foundation/PettingZoo
api
918
[Proposal] Custom board game env creation tutorial
### Proposal I've been thinking about making a custom env tutorial with the game from my personal project here: https://github.com/elliottower/gobblet-rl/blob/main/gobblet_rl/game/gobblet.py ### Motivation There's been a fair number of people trying to implement a custom board game or do action masking more complex than the extremely simple stuff done in the old environment creation tutorial. It would be a good way to illustrate action masking and a non-trivial observation space (I based mine of the Chess example which is based on AlphaZero) as well as interactive pygame rendering (I made pixel art in photoshop based off the connect four pixel art) and in the future RL training libraries (have tianshou and RLlib working, planning to do SB3 once it's compatible as well as CleanRL) ### Pitch I write a tutorial on implementing a custom board game using the code linked above from my personal project. I could also include other tutorial pages about adding rendering with Pygame, interactive/user control, integration with other RL libraries, maybe even WebAssembly (I have it running currently on my personal site, you can watch two simple greedy agents play each other entirely from your own browser: https://elliottower.github.io/gobblet-rl/). I created PRs for working WebAssembly code and an interactive connect four example, but I think either ones seem a bit disjointed and lack context without this. I'm also thinking about making a tutorial using CleanRL to record videos and do something similar to the [wandb Open RL Benchmark](https://wandb.ai/cleanrl/cleanrl.benchmark/reports/Open-RL-Benchmark-new---Vmlldzo0ODA0NjE). Having a full tutorial showing how to make a game from scratch and then add on all of these other features would be really useful for new users I think. 
### Alternatives Alternatively, I could make tutorials on my own project's repo, but I would like to be able to reach a wider audience and help as many people as possible, and it seems a little weird to link people to my side project rather than the official repo tutorials (have linked a few people already when they wanted to see examples using RLlib or doing action masking or other questions). The game could also be added as an official PZ game, but it's pretty simple and doesn't have an existing research base, so based on that I think it probably makes more sense to put it as a tutorial example. I could design another simple board game from scratch as well (was considering [Blokus](https://en.wikipedia.org/wiki/Blokus), but that would be more complicated and require a bunch of code to hard code the piece shapes and such), but this game already works and I've done a lot of work already in making the rendering and integration with other RL training libraries and such. I could also potentially do a tutorial for this other game I implemented: https://github.com/elliottower/cathedral-rl but it's a bit more complicated and the code for checking territory is pretty messy and not ideal. A friend is considering rewriting the code in rust to make it run faster and work nicely with a web interface, if that ends up happening I could make a tutorial about how to wrap a non-python game with PettingZoo (similar to how openspiel or other libraries interface with C++/lua for underlying game envs). ### Additional context The game is basically tic-tac-toe but with 3 dimensions and pieces can move after being placed and larger pieces can "gobble" smaller pieces to capture the space. ### Checklist - [X] I have checked that there is no similar [issue](https://github.com/Farama-Foundation/PettingZoo/issues) in the repo
closed
2023-03-23T21:34:56Z
2023-11-29T15:42:51Z
https://github.com/Farama-Foundation/PettingZoo/issues/918
[ "enhancement" ]
elliottower
0
encode/httpx
asyncio
3,443
from httpx._types import AuthTypes, CertTypes, CookieTypes, HeaderTypes, VerifyTypes ImportError: cannot import name 'VerifyTypes' from 'httpx._types' (/usr/local/lib/python3.11/site-packages/httpx/_types.py)
I get an error when I try to build a Docker image. Can someone please help me? from langserve import add_routes File "/usr/local/lib/python3.11/site-packages/langserve/__init__.py", line 8, in <module> from langserve.client import RemoteRunnable File "/usr/local/lib/python3.11/site-packages/langserve/client.py", line 24, in <module> from httpx._types import AuthTypes, CertTypes, CookieTypes, HeaderTypes, VerifyTypes ImportError: cannot import name 'VerifyTypes' from 'httpx._types' (/usr/local/lib/python3.11/site-packages/httpx/_types.py) langchain-community psycopg2-binary tenacity scikit-learn pydantic fastapi langchain_google_genai httpx==0.27.2 langgraph langserve uvicorn langchain langchain-cli langchain-core langchain-openai==0.2.10
closed
2024-12-03T18:19:40Z
2024-12-03T19:25:13Z
https://github.com/encode/httpx/issues/3443
[]
maryam123errami
3
numpy/numpy
numpy
27,825
CI: circleCI build of NumPy is failing
The cirecleCI build, using their [python3.11.8 image](https://github.com/numpy/numpy/blob/5f70dc85d16454c81b19c02a012ce08cca9fc28e/.circleci/config.yml#L12), is failing to compile NumPy. Here are the relevant bits from the [build log](https://app.circleci.com/pipelines/github/numpy/numpy/29772/workflows/fb91454e-ee15-4217-96c5-ce2d5fc5a09d/jobs/43788). The gcc 11.3.0 compiler is crashing when compiling `loops_unary_fp_le.dispatch.c.src: In function ‘FLOAT_isfinite_SSE41’`: ``` The Meson build system Version: 1.5.2 Source dir: /home/circleci/repo Build dir: /home/circleci/repo/build Build type: native build Project name: NumPy Project version: 2.3.0.dev0+git20241124.8c021fc C compiler for the host machine: cc (gcc 11.3.0 "cc (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0") C linker for the host machine: cc ld.bfd 2.38 C++ compiler for the host machine: c++ (gcc 11.3.0 "c++ (Ubuntu 11.3.0-1ubuntu1~22.04.1) 11.3.0") C++ linker for the host machine: c++ ld.bfd 2.38 Cython compiler for the host machine: cython (cython 3.0.11) Host machine cpu family: x86_64 Host machine cpu: x86_64 ... Configuring npy_cpu_dispatch_config.h using configuration Message: CPU Optimization Options baseline: Requested : min Enabled : SSE SSE2 SSE3 dispatch: Requested : max -xop -fma4 Enabled : SSSE3 SSE41 POPCNT SSE42 AVX F16C FMA3 AVX2 AVX512F AVX512CD AVX512_KNL AVX512_KNM AVX512_SKX AVX512_CLX AVX512_CNL AVX512_ICL ... NOTICE: Future-deprecated features used: * 1.3.0: {'Source file src/umath/svml/linux/avx512/svml_z0_acos_d_la.s in the 'objects' kwarg is not an object.'} NumPy 2.3.0.dev0+git20241124.8c021fc User defined options prefix: /usr Found ninja-1.11.1.git.kitware.jobserver-1 at /home/circleci/repo/venv/bin/ninja ... 
[256/527] Generating 'numpy/_core/_multiarray_umath.cpython-311-x86_64-linux-gnu.so.p/einsum_sumprod.c' [257/527] Compiling C object numpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p/meson-generated_loops_unary_complex.dispatch.c.o FAILED: numpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p/meson-generated_loops_unary_complex.dispatch.c.o cc -Inumpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p -Inumpy/_core -I../numpy/_core -Inumpy/_core/include -I../numpy/_core/include -I../numpy/_core/src/common -I../numpy/_core/src/multiarray -I../numpy/_core/src/npymath -I../numpy/_core/src/umath -I../numpy/_core/src/highway -I/home/circleci/.pyenv/versions/3.11.8/include/python3.11 -I/home/circleci/repo/build/meson_cpu -fdiagnostics-color=always -Wall -Winvalid-pch -std=c11 -O2 -g -fno-strict-aliasing -msse -msse2 -msse3 -fPIC -DNPY_INTERNAL_BUILD -DHAVE_NPY_CONFIG_H -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -O3 -DNPY_HAVE_SSE2 -DNPY_HAVE_SSE -DNPY_HAVE_SSE3 -DNPY_HAVE_SSSE3 -DNPY_HAVE_SSE41 -DNPY_HAVE_POPCNT -DNPY_HAVE_SSE42 -DNPY_HAVE_AVX -DNPY_HAVE_F16C -DNPY_HAVE_FMA3 -DNPY_HAVE_AVX2 -msse -msse2 -msse3 -mssse3 -msse4.1 -mpopcnt -msse4.2 -mavx -mf16c -mfma -mavx2 -maes -mpclmul -mbmi -mbmi2 -DNPY_MTARGETS_CURRENT=AVX2 -MD -MQ numpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p/meson-generated_loops_unary_complex.dispatch.c.o -MF numpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p/meson-generated_loops_unary_complex.dispatch.c.o.d -o numpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p/meson-generated_loops_unary_complex.dispatch.c.o -c numpy/_core/libloops_unary_complex.dispatch.h_AVX2.a.p/loops_unary_complex.dispatch.c during RTL pass: cse2 ../numpy/_core/src/umath/loops_unary_complex.dispatch.c.src: In function ‘CFLOAT_absolute_AVX2’: ../numpy/_core/src/umath/loops_unary_complex.dispatch.c.src:138:1: internal compiler error: Segmentation fault 138 | } | ^ 0xd7403d internal_error(char const*, ...) 
???:0 Please submit a full bug report, with preprocessed source if appropriate. Please include the complete backtrace with any bug report. See <file:///usr/share/doc/gcc-11/README.Bugs> for instructions. [258/527] Compiling C object numpy/_core/libloops_arithm_fp.dispatch.h_baseline.a.p/meson-generated_loops_arithm_fp.dispatch.c.o [259/527] Compiling C object numpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p/meson-generated_loops_unary_fp_le.dispatch.c.o FAILED: numpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p/meson-generated_loops_unary_fp_le.dispatch.c.o cc -Inumpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p -Inumpy/_core -I../numpy/_core -Inumpy/_core/include -I../numpy/_core/include -I../numpy/_core/src/common -I../numpy/_core/src/multiarray -I../numpy/_core/src/npymath -I../numpy/_core/src/umath -I../numpy/_core/src/highway -I/home/circleci/.pyenv/versions/3.11.8/include/python3.11 -I/home/circleci/repo/build/meson_cpu -fdiagnostics-color=always -Wall -Winvalid-pch -std=c11 -O2 -g -fno-strict-aliasing -msse -msse2 -msse3 -fPIC -DNPY_INTERNAL_BUILD -DHAVE_NPY_CONFIG_H -D_FILE_OFFSET_BITS=64 -D_LARGEFILE_SOURCE=1 -D_LARGEFILE64_SOURCE=1 -O3 -DNPY_HAVE_SSE2 -DNPY_HAVE_SSE -DNPY_HAVE_SSE3 -DNPY_HAVE_SSSE3 -DNPY_HAVE_SSE41 -msse -msse2 -msse3 -mssse3 -msse4.1 -DNPY_MTARGETS_CURRENT=SSE41 -MD -MQ numpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p/meson-generated_loops_unary_fp_le.dispatch.c.o -MF numpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p/meson-generated_loops_unary_fp_le.dispatch.c.o.d -o numpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p/meson-generated_loops_unary_fp_le.dispatch.c.o -c numpy/_core/libloops_unary_fp_le.dispatch.h_SSE41.a.p/loops_unary_fp_le.dispatch.c during RTL pass: sched2 ../numpy/_core/src/umath/loops_unary_fp_le.dispatch.c.src: In function ‘FLOAT_isfinite_SSE41’: ../numpy/_core/src/umath/loops_unary_fp_le.dispatch.c.src:560:1: internal compiler error: Segmentation fault 560 | } | ^ ``` Is this a known problem 
with that compiler?
closed
2024-11-24T07:16:03Z
2024-12-17T08:45:35Z
https://github.com/numpy/numpy/issues/27825
[ "component: CI" ]
mattip
4
pyg-team/pytorch_geometric
deep-learning
10,098
Exploding values in random walk function for positional encoding
### 🐛 Describe the bug The issue I am facing is numerical instabilities in the positional encoding based on random walk. The obtained positional encoding should consider the landing probability of a node $i$ to itself. Then a value representing a probability value should always be in 0-1 range. The following code snippet is a way to reproduce the issue: ``` import torch from torch_geometric.transforms import AddRandomWalkPE from torch_geometric.data import Data torch.manual_seed(42) num_nodes = 100 num_edges = 1000 edge_index = torch.randint(0, num_nodes, (2, num_edges)) edge_weight = torch.rand(num_edges) # Random edge weights between 0 and 1 data = Data(x=torch.rand((num_nodes, 3)), edge_index=edge_index, edge_weight=edge_weight) # Random walk positional encoding transform = AddRandomWalkPE(walk_length=20, attr_name='pe') data = transform(data) print(data.pe.max()) # tensor(22252.1230468750) # it should be between 0 and 1 ``` If I print the first row of the starting transition matrix used in the function to compute the random walk embedding I get: ``` tensor([0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.1737565845, 0.1737565845, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.1737565845, 0.0000000000, 
0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.1737565845, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000, 0.0000000000]) ``` The sum of these element is not equal to one and that is why it has unstable behavior. I also don't understand why if I specify different weight for each edge then I end up having the same weight in the transition matrix. The fix I propose is the following, ensuring that the sum of each row of the transition matrix is equal to 1. This also allows a proper customization of edge weights: ``` class AddRandomWalkPE(BaseTransform): r"""Adds the random walk positional encoding from the `"Graph Neural Networks with Learnable Structural and Positional Representations" <https://arxiv.org/abs/2110.07875>`_ paper to the given graph (functional name: :obj:`add_random_walk_pe`). Args: walk_length (int): The number of random walk steps. attr_name (str, optional): The attribute name of the data object to add positional encodings to. If set to :obj:`None`, will be concatenated to :obj:`data.x`. 
(default: :obj:`"random_walk_pe"`) """ def __init__( self, walk_length: int, attr_name: Optional[str] = 'random_walk_pe', ) -> None: self.walk_length = walk_length self.attr_name = attr_name def forward(self, data: Data) -> Data: assert data.edge_index is not None row, col = data.edge_index N = data.num_nodes assert N is not None if N <= 2_000: # Dense code path for faster computation: adj = torch.zeros((N, N), device=row.device) adj[row, col] = data.edge_weight loop_index = torch.arange(N, device=row.device) elif torch_geometric.typing.WITH_WINDOWS: adj = to_torch_coo_tensor(data.edge_index, data.edge_weight, size=data.size()) else: adj = to_torch_csr_tensor(data.edge_index, data.edge_weight, size=data.size()) row_sums = adj.sum(dim=1, keepdim=True) # Sum along rows row_sums = row_sums.clamp(min=1e-8) # Prevent division by zero adj = adj / row_sums # Normalize each row to sum to 1 def get_pe(out: Tensor) -> Tensor: if is_torch_sparse_tensor(out): return get_self_loop_attr(*to_edge_index(out), num_nodes=N) return out[loop_index, loop_index] out = adj pe_list = [get_pe(out)] for _ in range(self.walk_length - 1): out = out @ adj pe_list.append(get_pe(out)) pe = torch.stack(pe_list, dim=-1) data = add_node_attr(data, pe, attr_name=self.attr_name) return data ``` Let me know what you think, I will open a PR in case. ### Versions Collecting environment information... 
PyTorch version: 2.5.1+cu121 Is debug build: False CUDA used to build PyTorch: 12.1 ROCM used to build PyTorch: N/A OS: Microsoft Windows 11 Pro (10.0.22631 64-bit) GCC version: Could not collect Clang version: Could not collect CMake version: Could not collect Libc version: N/A Python version: 3.12.6 | packaged by conda-forge | (main, Sep 22 2024, 14:01:26) [MSC v.1941 64 bit (AMD64)] (64-bit runtime) Python platform: Windows-11-10.0.22631-SP0 Is CUDA available: True CUDA runtime version: Could not collect CUDA_MODULE_LOADING set to: LAZY GPU models and configuration: GPU 0: NVIDIA RTX A4000 Laptop GPU Nvidia driver version: 538.92 cuDNN version: Could not collect HIP runtime version: N/A MIOpen runtime version: N/A Is XNNPACK available: True CPU: Name: 11th Gen Intel(R) Core(TM) i7-11850H @ 2.50GHz Manufacturer: GenuineIntel Family: 198 Architecture: 9 ProcessorType: 3 DeviceID: CPU0 CurrentClockSpeed: 2496 MaxClockSpeed: 2496 L2CacheSize: 10240 L2CacheSpeed: None Revision: None Versions of relevant libraries: [pip3] numpy==1.26.3 [pip3] pytorch-model-summary==0.1.2 [pip3] torch==2.5.1+cu121 [pip3] torch-geometric==2.6.1 [pip3] torchaudio==2.5.1+cu121 [pip3] torchvision==0.20.1+cu121 [conda] numpy 2.2.1 pypi_0 pypi
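The proposed fix boils down to making each row of the weighted transition matrix a probability distribution. A dependency-free sketch of that idea (pure Python, not the actual torch_geometric code) shows that once rows are normalized, the self-return probabilities used by the random-walk PE stay in [0, 1]:

```python
def row_normalize(adj):
    """Divide each row by its sum so it becomes a probability distribution
    (rows summing to 0, i.e. isolated nodes, are left as zeros)."""
    out = []
    for row in adj:
        s = sum(row)
        out.append([v / s if s > 0 else 0.0 for v in row])
    return out


def matmul(a, b):
    """Plain dense matrix product on lists of lists."""
    n, m, p = len(a), len(b), len(b[0])
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]


def random_walk_pe(adj, walk_length):
    """Per node, the self-return probability after 1..walk_length steps
    (the diagonal of T^k for each power k of the transition matrix)."""
    t = row_normalize(adj)
    out, power = [], t
    for _ in range(walk_length):
        out.append([power[i][i] for i in range(len(t))])
        power = matmul(power, t)
    return out
```

With arbitrary positive edge weights, every entry produced by `random_walk_pe` is a genuine probability, which is exactly what the unnormalized code path fails to guarantee.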
open
2025-03-05T15:48:23Z
2025-03-05T15:48:23Z
https://github.com/pyg-team/pytorch_geometric/issues/10098
[ "bug" ]
MatteoMazzonelli
0
unit8co/darts
data-science
2,071
What other variables need to be passed to the TFTExplainer
I am trying to run a TFTExplainer in an Optuna callback. I did this with Shap and the linear regression models and had no issue. I created an Optuna callback like this for Shap: ``` def print_callback(study, trial): trial_value = trial.values mape, backtest_mape = trial_value[0], trial_value[1] model = trial.user_attrs['model'] hf.shap_LR_explainer(model) ``` And shap_LR_explainer is a function that runs the boilerplate shap code I find throughout the docs. However when I do the same thing and pull the model out through a `trial.user_attrs` for TFT: ``` def print_callback(study, trial): trial_value = trial.values mape, backtest_mape = trial_value[0], trial_value[1] model = trial.user_attrs['model'] explainer = TFTExplainer(model) results = explainer.explain() explainer.plot_variable_selection(explainability_result) ``` I get the following error: `AttributeError: 'NoneType' object has no attribute 'set_predict_parameters'` I'm not clear what other attributes I should grab from the study so that they can be passed to the explainer? I have tried this outside the study and the callback in the main workbook and the explainer works fine. I looked at the code for the `set_predict_parameter` but not sure what it is missing. If this is the wrong way to approach let me know. Thank you. Using Darts 0.26 Using Python 3.10.2
closed
2023-11-17T00:16:32Z
2024-01-21T15:22:14Z
https://github.com/unit8co/darts/issues/2071
[ "question", "triage" ]
gvas7
1
seleniumbase/SeleniumBase
pytest
2,357
Question about the "downloaded_files" folder
Why does SeleniumBase automatically delete the downloaded_files folder when a test starts? I use sb.get_browser_downloads_folder(), and because the folder has been deleted, it throws an exception.
closed
2023-12-11T07:27:30Z
2023-12-11T16:04:59Z
https://github.com/seleniumbase/SeleniumBase/issues/2357
[ "duplicate", "question" ]
xiaocf-git
1
miguelgrinberg/python-socketio
asyncio
174
Error happens when using socketio with aiohttp 3.0.9
@miguelgrinberg run `python3 examples/aiohttp/app.py` ``` ======== Running on http://0.0.0.0:8080 ======== (Press CTRL+C to quit) message async handler error Traceback (most recent call last): File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/engineio/asyncio_server.py", line 269, in _trigger_event ret = await self.handlers[event](*args) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/socketio/asyncio_server.py", line 358, in _handle_eio_message pkt = packet.Packet(encoded_packet=data) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/socketio/packet.py", line 43, in __init__ self.attachment_count = self.decode(encoded_packet) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/socketio/packet.py", line 113, in decode self.data = self.json.loads(ep) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/__init__.py", line 354, in loads return _default_decoder.decode(s) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py", line 339, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/json/decoder.py", line 357, in raw_decode raise JSONDecodeError("Expecting value", s, err.value) from None json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0) Client disconnected ``` ## versions ``` python 3.6.2 python-socketio 1.9.0 aiohttp 3.0.9 ```
closed
2018-03-19T04:27:33Z
2018-03-22T06:35:54Z
https://github.com/miguelgrinberg/python-socketio/issues/174
[]
chenray844
3
voxel51/fiftyone
data-science
5,396
[?] How to export empty YOLOv5 labels files for images with no annotations
### Describe the problem when I run split_view.export( export_dir=export_dir, dataset_type=fo.types.YOLOv5Dataset, label_field=label_field, split=split, classes=classes, ) it exports only the images that have annotations, but images with no annotations are not exported! I need my model to train on them to learn them as background; without this the model gives me many false positives on those empty images. ### System information - **OS Platform and Distribution** (Linux Ubuntu 22.04): - **Python version** (`3.10.12`): - **FiftyOne version** (`1.2.0`): - **FiftyOne installed from** (pip): ### Willingness to contribute The FiftyOne Community encourages bug fix contributions. Would you or another member of your organization be willing to contribute a fix for this bug to the FiftyOne codebase? - [x] Yes. I can contribute a fix for this bug independently - [ ] Yes. I would be willing to contribute a fix for this bug with guidance from the FiftyOne community - [ ] No. I cannot contribute a bug fix at this time
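Until empty-label export is supported, one possible workaround is to create empty YOLOv5 label files yourself after the export (the directory layout and helper name below are assumptions, not FiftyOne API; an empty `.txt` is how YOLOv5 marks a background image):

```python
from pathlib import Path


def add_empty_label_files(images_dir, labels_dir, exts=(".jpg", ".jpeg", ".png")):
    """For every image in images_dir without a matching label file in
    labels_dir, create an empty .txt so YOLOv5 treats it as pure background."""
    images_dir, labels_dir = Path(images_dir), Path(labels_dir)
    labels_dir.mkdir(parents=True, exist_ok=True)
    created = []
    for img in sorted(images_dir.iterdir()):
        if img.suffix.lower() not in exts:
            continue  # skip non-image files
        label = labels_dir / (img.stem + ".txt")
        if not label.exists():
            label.touch()  # empty file => no objects => background image
            created.append(label.name)
    return created
```

Note that because the export skips unlabeled samples entirely, you would also need to copy those images into the export's `images/` directory yourself before running this.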
open
2025-01-16T05:39:58Z
2025-01-16T15:46:55Z
https://github.com/voxel51/fiftyone/issues/5396
[ "question" ]
hassanbadawy
1
opengeos/leafmap
plotly
60
Add a colormaps module
This will allow users to easily create colormaps and add them to the map.
closed
2021-06-25T12:50:24Z
2021-06-26T02:33:54Z
https://github.com/opengeos/leafmap/issues/60
[ "Feature Request" ]
giswqs
1
pydantic/FastUI
pydantic
245
Returning a `201` raises an error in the ui
While building a sign-up flow, I think it is common to return a `201` to signal that the user was created. However, the app treats that as an exception and raises an alert toast. Example:

```python
@router.post(
    "/register",
    response_model=user_schema,
    status_code=status.HTTP_201_CREATED,
    name="register:register",
)
async def register():
    ...
```

![image](https://github.com/pydantic/FastUI/assets/86600518/0a3e195b-c2f5-4604-ba54-5ed56add3b27)
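Until the response handling is fixed upstream, one fix on the client side is to treat the whole 2xx range as success rather than comparing against 200 exactly. A minimal, framework-agnostic sketch of that check (`is_success` is a hypothetical helper, not FastUI code):

```python
def is_success(status_code: int) -> bool:
    """HTTP semantics: any 2xx response is a success, including 201 Created."""
    return 200 <= status_code < 300
```

With this predicate, `201 Created` and `204 No Content` no longer trip the error-toast path.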
closed
2024-03-12T15:51:34Z
2024-03-20T10:26:19Z
https://github.com/pydantic/FastUI/issues/245
[]
tim-x-y-z
1
vllm-project/vllm
pytorch
15,228
[Bug]: Out of Memory error for Qwen2.5 in 0.8.0 and 0.8.1. Worked fine in previous versions
### Your current environment

<details>
<summary>The output of `python collect_env.py`</summary>

```text
Your output of `python collect_env.py` here
```

</details>

### 🐛 Describe the bug

I start the inference server with the following command:

`python -m vllm.entrypoints.openai.api_server --dtype auto --api-key token-abc123 --tensor-parallel-size 2 --host 0.0.0.0 --port 8004 --model Qwen2.5-72B-Instruct-GPTQ-Int4 --gpu-memory-utilization 0.9 --max-model-len 16000`

**Here's the log during service start-up with version 0.7.2:**

```
INFO 03-20 16:00:54 __init__.py:190] Automatically detected platform cuda.
INFO 03-20 16:00:54 api_server.py:840] vLLM API server version 0.7.2
INFO 03-20 16:00:54 api_server.py:841] args: Namespace(host='0.0.0.0', port=8004, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key='token-abc123', lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, enable_reasoning=False, reasoning_parser=None, tool_call_parser=None, tool_parser_plugin='', model='Qwen2.5-72B-Instruct-GPTQ-Int4', task='auto', tokenizer=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=16000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None,
enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=0, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, otlp_traces_endpoint=None, collect_detailed_traces=None, disable_async_output_proc=False, scheduling_policy='fcfs', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', generation_config=None, override_generation_config=None, enable_sleep_mode=False, 
calculate_kv_scales=False, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False) INFO 03-20 16:00:54 api_server.py:206] Started engine process with PID 931 INFO 03-20 16:00:58 __init__.py:190] Automatically detected platform cuda. INFO 03-20 16:01:00 config.py:542] This model supports multiple tasks: {'generate', 'reward', 'classify', 'score', 'embed'}. Defaulting to 'generate'. INFO 03-20 16:01:01 gptq_marlin.py:111] The model is convertible to gptq_marlin during runtime. Using gptq_marlin kernel. INFO 03-20 16:01:01 config.py:1401] Defaulting to use mp for distributed inference INFO 03-20 16:01:03 config.py:542] This model supports multiple tasks: {'embed', 'reward', 'classify', 'score', 'generate'}. Defaulting to 'generate'. INFO 03-20 16:01:04 gptq_marlin.py:111] The model is convertible to gptq_marlin during runtime. Using gptq_marlin kernel. INFO 03-20 16:01:04 config.py:1401] Defaulting to use mp for distributed inference INFO 03-20 16:01:05 llm_engine.py:234] Initializing a V0 LLM engine (v0.7.2) with config: model='Qwen2.5-72B-Instruct-GPTQ-Int4', speculative_config=None, tokenizer='Qwen2.5-72B-Instruct-GPTQ-Int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=16000, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=gptq_marlin, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar'), observability_config=ObservabilityConfig(otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=0, served_model_name=Qwen2.5-72B-Instruct-GPTQ-Int4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=False, chunked_prefill_enabled=False, use_async_output_proc=True, 
disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"splitting_ops":[],"compile_sizes":[],"cudagraph_capture_sizes":[256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":256}, use_cached_outputs=True, WARNING 03-20 16:01:05 multiproc_worker_utils.py:300] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed. INFO 03-20 16:01:05 custom_cache_manager.py:19] Setting Triton cache manager to: vllm.triton_utils.custom_cache_manager:CustomCacheManager (VllmWorkerProcess pid=1392) INFO 03-20 16:01:05 multiproc_worker_utils.py:229] Worker ready; awaiting tasks INFO 03-20 16:01:05 cuda.py:230] Using Flash Attention backend. (VllmWorkerProcess pid=1392) INFO 03-20 16:01:05 cuda.py:230] Using Flash Attention backend. INFO 03-20 16:01:06 utils.py:950] Found nccl from library libnccl.so.2 (VllmWorkerProcess pid=1392) INFO 03-20 16:01:06 utils.py:950] Found nccl from library libnccl.so.2 INFO 03-20 16:01:06 pynccl.py:69] vLLM is using nccl==2.21.5 (VllmWorkerProcess pid=1392) INFO 03-20 16:01:06 pynccl.py:69] vLLM is using nccl==2.21.5 INFO 03-20 16:01:06 custom_all_reduce_utils.py:206] generating GPU P2P access cache in /root/.cache/vllm/gpu_p2p_access_cache_for_2,3.json INFO 03-20 16:01:15 custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_2,3.json (VllmWorkerProcess pid=1392) INFO 03-20 16:01:15 custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_2,3.json INFO 03-20 16:01:15 shm_broadcast.py:258] vLLM message queue communication handle: Handle(connect_ip='127.0.0.1', local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_218a6dd5'), local_subscribe_port=49961, remote_subscribe_port=None) INFO 03-20 16:01:15 
model_runner.py:1110] Starting to load model Qwen2.5-72B-Instruct-GPTQ-Int4... (VllmWorkerProcess pid=1392) INFO 03-20 16:01:15 model_runner.py:1110] Starting to load model Qwen2.5-72B-Instruct-GPTQ-Int4... INFO 03-20 16:01:15 gptq_marlin.py:202] Using MarlinLinearKernel for GPTQMarlinLinearMethod (VllmWorkerProcess pid=1392) INFO 03-20 16:01:15 gptq_marlin.py:202] Using MarlinLinearKernel for GPTQMarlinLinearMethod Loading safetensors checkpoint shards: 0% Completed | 0/11 [00:00<?, ?it/s] Loading safetensors checkpoint shards: 9% Completed | 1/11 [00:00<00:07, 1.26it/s] Loading safetensors checkpoint shards: 18% Completed | 2/11 [00:01<00:04, 2.21it/s] Loading safetensors checkpoint shards: 27% Completed | 3/11 [00:01<00:04, 1.75it/s] Loading safetensors checkpoint shards: 36% Completed | 4/11 [00:02<00:03, 1.89it/s] Loading safetensors checkpoint shards: 45% Completed | 5/11 [00:03<00:03, 1.56it/s] Loading safetensors checkpoint shards: 55% Completed | 6/11 [00:03<00:03, 1.43it/s] Loading safetensors checkpoint shards: 64% Completed | 7/11 [00:04<00:03, 1.33it/s] Loading safetensors checkpoint shards: 73% Completed | 8/11 [00:05<00:02, 1.30it/s] Loading safetensors checkpoint shards: 82% Completed | 9/11 [00:06<00:01, 1.28it/s] Loading safetensors checkpoint shards: 91% Completed | 10/11 [00:07<00:00, 1.26it/s] Loading safetensors checkpoint shards: 100% Completed | 11/11 [00:07<00:00, 1.25it/s] Loading safetensors checkpoint shards: 100% Completed | 11/11 [00:07<00:00, 1.38it/s] INFO 03-20 16:01:24 model_runner.py:1115] Loading model weights took 19.2671 GB (VllmWorkerProcess pid=1392) INFO 03-20 16:01:25 model_runner.py:1115] Loading model weights took 19.2671 GB (VllmWorkerProcess pid=1392) INFO 03-20 16:01:35 worker.py:267] Memory profiling takes 10.51 seconds (VllmWorkerProcess pid=1392) INFO 03-20 16:01:35 worker.py:267] the current vLLM instance can use total_gpu_memory (44.53GiB) x gpu_memory_utilization (0.90) = 40.07GiB (VllmWorkerProcess pid=1392) 
INFO 03-20 16:01:35 worker.py:267] model weights take 19.27GiB; non_torch_memory takes 0.30GiB; PyTorch activation peak memory takes 2.78GiB; the rest of the memory reserved for KV Cache is 17.73GiB. INFO 03-20 16:01:35 worker.py:267] Memory profiling takes 10.54 seconds INFO 03-20 16:01:35 worker.py:267] the current vLLM instance can use total_gpu_memory (44.53GiB) x gpu_memory_utilization (0.90) = 40.07GiB INFO 03-20 16:01:35 worker.py:267] model weights take 19.27GiB; non_torch_memory takes 0.30GiB; PyTorch activation peak memory takes 2.78GiB; the rest of the memory reserved for KV Cache is 17.73GiB. INFO 03-20 16:01:36 executor_base.py:110] # CUDA blocks: 7262, # CPU blocks: 1638 INFO 03-20 16:01:36 executor_base.py:115] Maximum concurrency for 16000 tokens per request: 7.26x (VllmWorkerProcess pid=1392) INFO 03-20 16:01:38 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage. INFO 03-20 16:01:38 model_runner.py:1434] Capturing cudagraphs for decoding. This may lead to unexpected consequences if the model is not static. To run the model in eager mode, set 'enforce_eager=True' or use '--enforce-eager' in the CLI. If out-of-memory error occurs during cudagraph capture, consider decreasing `gpu_memory_utilization` or switching to eager mode. You can also reduce the `max_num_seqs` as needed to decrease memory usage. 
INFO 03-20 16:01:58 custom_all_reduce.py:226] Registering 5635 cuda graph addresses (VllmWorkerProcess pid=1392) INFO 03-20 16:01:59 custom_all_reduce.py:226] Registering 5635 cuda graph addresses (VllmWorkerProcess pid=1392) INFO 03-20 16:01:59 model_runner.py:1562] Graph capturing finished in 21 secs, took 2.89 GiB INFO 03-20 16:01:59 model_runner.py:1562] Graph capturing finished in 21 secs, took 2.89 GiB INFO 03-20 16:01:59 llm_engine.py:431] init engine (profile, create kv cache, warmup model) took 34.30 seconds INFO 03-20 16:01:59 api_server.py:756] Using supplied chat template: INFO 03-20 16:01:59 api_server.py:756] None INFO 03-20 16:01:59 launcher.py:21] Available routes are: INFO 03-20 16:01:59 launcher.py:29] Route: /openapi.json, Methods: HEAD, GET INFO 03-20 16:01:59 launcher.py:29] Route: /docs, Methods: HEAD, GET INFO 03-20 16:01:59 launcher.py:29] Route: /docs/oauth2-redirect, Methods: HEAD, GET INFO 03-20 16:01:59 launcher.py:29] Route: /redoc, Methods: HEAD, GET INFO 03-20 16:01:59 launcher.py:29] Route: /health, Methods: GET INFO 03-20 16:01:59 launcher.py:29] Route: /ping, Methods: POST, GET INFO 03-20 16:01:59 launcher.py:29] Route: /tokenize, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /detokenize, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /v1/models, Methods: GET INFO 03-20 16:01:59 launcher.py:29] Route: /version, Methods: GET INFO 03-20 16:01:59 launcher.py:29] Route: /v1/chat/completions, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /v1/completions, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /v1/embeddings, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /pooling, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /score, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /v1/score, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /rerank, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /v1/rerank, Methods: POST INFO 03-20 16:01:59 launcher.py:29] 
Route: /v2/rerank, Methods: POST INFO 03-20 16:01:59 launcher.py:29] Route: /invocations, Methods: POST INFO: Started server process [798] INFO: Waiting for application startup. INFO: Application startup complete. INFO: Uvicorn running on http://0.0.0.0:8004 (Press CTRL+C to quit) ``` But when I do the same with 0.8.1, I get the CUDA Out of Memory error: ``` root@e00d0c989958:/workspace# python -m vllm.entrypoints.openai.api_server --dtype auto --api-key token-abc123 --tensor-parallel-size 2 --host 0.0.0.0 --port 8004 --model Qwen2.5-72B-Instruct-GPTQ-Int4 --gpu-memory-utilization 0.9 --max-model-len 16000 INFO 03-20 16:53:14 [__init__.py:256] Automatically detected platform cuda. INFO 03-20 16:53:14 [api_server.py:977] vLLM API server version 0.8.1 INFO 03-20 16:53:14 [api_server.py:978] args: Namespace(host='0.0.0.0', port=8004, uvicorn_log_level='info', allow_credentials=False, allowed_origins=['*'], allowed_methods=['*'], allowed_headers=['*'], api_key='token-abc123', lora_modules=None, prompt_adapters=None, chat_template=None, chat_template_content_format='auto', response_role='assistant', ssl_keyfile=None, ssl_certfile=None, ssl_ca_certs=None, enable_ssl_refresh=False, ssl_cert_reqs=0, root_path=None, middleware=[], return_tokens_as_token_ids=False, disable_frontend_multiprocessing=False, enable_request_id_headers=False, enable_auto_tool_choice=False, tool_call_parser=None, tool_parser_plugin='', model='Qwen2.5-72B-Instruct-GPTQ-Int4', task='auto', tokenizer=None, hf_config_path=None, skip_tokenizer_init=False, revision=None, code_revision=None, tokenizer_revision=None, tokenizer_mode='auto', trust_remote_code=False, allowed_local_media_path=None, download_dir=None, load_format='auto', config_format=<ConfigFormat.AUTO: 'auto'>, dtype='auto', kv_cache_dtype='auto', max_model_len=16000, guided_decoding_backend='xgrammar', logits_processor_pattern=None, model_impl='auto', distributed_executor_backend=None, pipeline_parallel_size=1, tensor_parallel_size=2, 
enable_expert_parallel=False, max_parallel_loading_workers=None, ray_workers_use_nsight=False, block_size=None, enable_prefix_caching=None, disable_sliding_window=False, use_v2_block_manager=True, num_lookahead_slots=0, seed=None, swap_space=4, cpu_offload_gb=0, gpu_memory_utilization=0.9, num_gpu_blocks_override=None, max_num_batched_tokens=None, max_num_partial_prefills=1, max_long_partial_prefills=1, long_prefill_token_threshold=0, max_num_seqs=None, max_logprobs=20, disable_log_stats=False, quantization=None, rope_scaling=None, rope_theta=None, hf_overrides=None, enforce_eager=False, max_seq_len_to_capture=8192, disable_custom_all_reduce=False, tokenizer_pool_size=0, tokenizer_pool_type='ray', tokenizer_pool_extra_config=None, limit_mm_per_prompt=None, mm_processor_kwargs=None, disable_mm_preprocessor_cache=False, enable_lora=False, enable_lora_bias=False, max_loras=1, max_lora_rank=16, lora_extra_vocab_size=256, lora_dtype='auto', long_lora_scaling_factors=None, max_cpu_loras=None, fully_sharded_loras=False, enable_prompt_adapter=False, max_prompt_adapters=1, max_prompt_adapter_token=0, device='auto', num_scheduler_steps=1, use_tqdm_on_load=True, multi_step_stream_outputs=True, scheduler_delay_factor=0.0, enable_chunked_prefill=None, speculative_model=None, speculative_model_quantization=None, num_speculative_tokens=None, speculative_disable_mqa_scorer=False, speculative_draft_tensor_parallel_size=None, speculative_max_model_len=None, speculative_disable_by_batch_size=None, ngram_prompt_lookup_max=None, ngram_prompt_lookup_min=None, spec_decoding_acceptance_method='rejection_sampler', typical_acceptance_sampler_posterior_threshold=None, typical_acceptance_sampler_posterior_alpha=None, disable_logprobs_during_spec_decoding=None, model_loader_extra_config=None, ignore_patterns=[], preemption_mode=None, served_model_name=None, qlora_adapter_name_or_path=None, show_hidden_metrics_for_version=None, otlp_traces_endpoint=None, collect_detailed_traces=None, 
disable_async_output_proc=False, scheduling_policy='fcfs', scheduler_cls='vllm.core.scheduler.Scheduler', override_neuron_config=None, override_pooler_config=None, compilation_config=None, kv_transfer_config=None, worker_cls='auto', worker_extension_cls='', generation_config='auto', override_generation_config=None, enable_sleep_mode=False, calculate_kv_scales=False, additional_config=None, enable_reasoning=False, reasoning_parser=None, disable_log_requests=False, max_log_len=None, disable_fastapi_docs=False, enable_prompt_tokens_details=False, enable_server_load_tracking=False) INFO 03-20 16:53:20 [config.py:583] This model supports multiple tasks: {'embed', 'reward', 'classify', 'generate', 'score'}. Defaulting to 'generate'. INFO 03-20 16:53:21 [gptq_marlin.py:143] The model is convertible to gptq_marlin during runtime. Using gptq_marlin kernel. INFO 03-20 16:53:21 [config.py:1515] Defaulting to use mp for distributed inference INFO 03-20 16:53:21 [config.py:1693] Chunked prefill is enabled with max_num_batched_tokens=2048. 
INFO 03-20 16:53:21 [core.py:53] Initializing a V1 LLM engine (v0.8.1) with config: model='Qwen2.5-72B-Instruct-GPTQ-Int4', speculative_config=None, tokenizer='Qwen2.5-72B-Instruct-GPTQ-Int4', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, override_neuron_config=None, tokenizer_revision=None, trust_remote_code=False, dtype=torch.float16, max_seq_len=16000, download_dir=None, load_format=auto, tensor_parallel_size=2, pipeline_parallel_size=1, disable_custom_all_reduce=False, quantization=gptq_marlin, enforce_eager=False, kv_cache_dtype=auto, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='xgrammar', reasoning_backend=None), observability_config=ObservabilityConfig(show_hidden_metrics=False, otlp_traces_endpoint=None, collect_model_forward_time=False, collect_model_execute_time=False), seed=None, served_model_name=Qwen2.5-72B-Instruct-GPTQ-Int4, num_scheduler_steps=1, multi_step_stream_outputs=True, enable_prefix_caching=True, chunked_prefill_enabled=True, use_async_output_proc=True, disable_mm_preprocessor_cache=False, mm_processor_kwargs=None, pooler_config=None, compilation_config={"level":3,"custom_ops":["none"],"splitting_ops":["vllm.unified_attention","vllm.unified_attention_with_output"],"use_inductor":true,"compile_sizes":[],"use_cudagraph":true,"cudagraph_num_of_warmups":1,"cudagraph_capture_sizes":[512,504,496,488,480,472,464,456,448,440,432,424,416,408,400,392,384,376,368,360,352,344,336,328,320,312,304,296,288,280,272,264,256,248,240,232,224,216,208,200,192,184,176,168,160,152,144,136,128,120,112,104,96,88,80,72,64,56,48,40,32,24,16,8,4,2,1],"max_capture_size":512} WARNING 03-20 16:53:21 [multiproc_worker_utils.py:306] Reducing Torch parallelism from 64 threads to 1 to avoid unnecessary CPU contention. Set OMP_NUM_THREADS in the external environment to tune this value as needed. 
INFO 03-20 16:53:21 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0, 1], buffer_handle=(2, 10485760, 10, 'psm_c6f9ea62'), local_subscribe_addr='ipc:///tmp/e4cde641-0b61-45c6-8043-c87f708cbb06', remote_subscribe_addr=None, remote_addr_ipv6=False) WARNING 03-20 16:53:22 [utils.py:2282] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f1bb526b7f0> (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:22 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_f52a60bf'), local_subscribe_addr='ipc:///tmp/e0a5ee76-7adf-489a-89e4-182820b0b641', remote_subscribe_addr=None, remote_addr_ipv6=False) WARNING 03-20 16:53:22 [utils.py:2282] Methods determine_num_available_blocks,device_config,get_cache_block_size_bytes,initialize_cache not implemented in <vllm.v1.worker.gpu_worker.Worker object at 0x7f1bb526b1c0> (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:22 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[0], buffer_handle=(1, 10485760, 10, 'psm_d04b4dd2'), local_subscribe_addr='ipc:///tmp/8902b815-5d45-4f9c-a4c3-e863946aeb4d', remote_subscribe_addr=None, remote_addr_ipv6=False) (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [utils.py:925] Found nccl from library libnccl.so.2 (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [pynccl.py:69] vLLM is using nccl==2.21.5 (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [utils.py:925] Found nccl from library libnccl.so.2 (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [pynccl.py:69] vLLM is using nccl==2.21.5 (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [custom_all_reduce_utils.py:244] reading GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_2,3.json (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [custom_all_reduce_utils.py:244] reading 
GPU P2P access cache from /root/.cache/vllm/gpu_p2p_access_cache_for_2,3.json (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [shm_broadcast.py:258] vLLM message queue communication handle: Handle(local_reader_ranks=[1], buffer_handle=(1, 4194304, 6, 'psm_4b189ac3'), local_subscribe_addr='ipc:///tmp/5f351a76-9949-4c64-b19f-d6803a05d430', remote_subscribe_addr=None, remote_addr_ipv6=False) (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [parallel_state.py:967] rank 1 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 1 (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [parallel_state.py:967] rank 0 in world size 2 is assigned as DP rank 0, PP rank 0, TP rank 0 (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [cuda.py:215] Using Flash Attention backend on V1 engine. (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [cuda.py:215] Using Flash Attention backend on V1 engine. (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [gpu_model_runner.py:1164] Starting to load model Qwen2.5-72B-Instruct-GPTQ-Int4... (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [gpu_model_runner.py:1164] Starting to load model Qwen2.5-72B-Instruct-GPTQ-Int4... (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:23 [gptq_marlin.py:235] Using MarlinLinearKernel for GPTQMarlinLinearMethod (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:23 [gptq_marlin.py:235] Using MarlinLinearKernel for GPTQMarlinLinearMethod (VllmWorker rank=0 pid=2764) WARNING 03-20 16:53:23 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer. (VllmWorker rank=1 pid=2775) WARNING 03-20 16:53:23 [topk_topp_sampler.py:63] FlashInfer is not available. Falling back to the PyTorch-native implementation of top-p & top-k sampling. For the best performance, please install FlashInfer. 
Loading safetensors checkpoint shards: 0% Completed | 0/11 [00:00<?, ?it/s] Loading safetensors checkpoint shards: 9% Completed | 1/11 [00:00<00:07, 1.25it/s] Loading safetensors checkpoint shards: 18% Completed | 2/11 [00:01<00:04, 2.10it/s] Loading safetensors checkpoint shards: 27% Completed | 3/11 [00:01<00:04, 1.70it/s] Loading safetensors checkpoint shards: 36% Completed | 4/11 [00:02<00:03, 1.80it/s] Loading safetensors checkpoint shards: 45% Completed | 5/11 [00:03<00:03, 1.51it/s] Loading safetensors checkpoint shards: 55% Completed | 6/11 [00:03<00:03, 1.37it/s] Loading safetensors checkpoint shards: 64% Completed | 7/11 [00:04<00:03, 1.27it/s] Loading safetensors checkpoint shards: 73% Completed | 8/11 [00:05<00:02, 1.22it/s] Loading safetensors checkpoint shards: 82% Completed | 9/11 [00:06<00:01, 1.20it/s] Loading safetensors checkpoint shards: 91% Completed | 10/11 [00:07<00:00, 1.18it/s] (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:31 [loader.py:429] Loading weights took 7.82 seconds (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:31 [gpu_model_runner.py:1176] Model loading took 19.2671 GB and 8.253775 seconds Loading safetensors checkpoint shards: 100% Completed | 11/11 [00:08<00:00, 1.16it/s] Loading safetensors checkpoint shards: 100% Completed | 11/11 [00:08<00:00, 1.31it/s] (VllmWorker rank=0 pid=2764) (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:32 [loader.py:429] Loading weights took 8.52 seconds (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:32 [gpu_model_runner.py:1176] Model loading took 19.2671 GB and 8.943086 seconds (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:48 [backends.py:409] Using cache directory: /root/.cache/vllm/torch_compile_cache/50a2bed080/rank_0_0 for vLLM's torch.compile (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:48 [backends.py:419] Dynamo bytecode transform time: 15.64 s (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:48 [backends.py:409] Using cache directory: /root/.cache/vllm/torch_compile_cache/50a2bed080/rank_1_0 
for vLLM's torch.compile (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:48 [backends.py:419] Dynamo bytecode transform time: 15.67 s (VllmWorker rank=0 pid=2764) INFO 03-20 16:53:49 [backends.py:115] Directly load the compiled graph for shape None from the cache (VllmWorker rank=1 pid=2775) INFO 03-20 16:53:49 [backends.py:115] Directly load the compiled graph for shape None from the cache (VllmWorker rank=0 pid=2764) INFO 03-20 16:54:02 [monitor.py:33] torch.compile takes 15.64 s in total (VllmWorker rank=1 pid=2775) INFO 03-20 16:54:02 [monitor.py:33] torch.compile takes 15.67 s in total INFO 03-20 16:54:04 [kv_cache_utils.py:537] GPU KV cache size: 128,496 tokens INFO 03-20 16:54:04 [kv_cache_utils.py:540] Maximum concurrency for 16,000 tokens per request: 8.03x INFO 03-20 16:54:04 [kv_cache_utils.py:537] GPU KV cache size: 128,496 tokens INFO 03-20 16:54:04 [kv_cache_utils.py:540] Maximum concurrency for 16,000 tokens per request: 8.03x (VllmWorker rank=0 pid=2764) INFO 03-20 16:54:37 [custom_all_reduce.py:229] Registering 8352 cuda graph addresses (VllmWorker rank=1 pid=2775) INFO 03-20 16:54:38 [custom_all_reduce.py:229] Registering 8352 cuda graph addresses (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] WorkerProc hit an exception: %s (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] Traceback (most recent call last): (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/v1/executor/multiproc_executor.py", line 371, in worker_busy_loop (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] output = func(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/v1/worker/gpu_worker.py", line 216, in compile_or_warm_up_model (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] 
self.model_runner.capture_model() (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1492, in capture_model (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] self._dummy_run(num_tokens) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py", line 116, in decorate_context (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return func(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/v1/worker/gpu_model_runner.py", line 1301, in _dummy_run (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] hidden_states = model( (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return forward_call(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 462, in forward (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] hidden_states = self.model(input_ids, positions, intermediate_tensors, (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/compilation/decorators.py", line 
245, in __call__ (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] model_output = self.forward(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/model_executor/models/qwen2.py", line 320, in forward (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] def forward( (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return forward_call(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/_dynamo/eval_frame.py", line 745, in _fn (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return fn(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 822, in call_wrapped (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return self._wrapped_call(self, *args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 400, in __call__ (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] raise e (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File 
"/usr/local/lib/python3.10/dist-packages/torch/fx/graph_module.py", line 387, in __call__ (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return super(self.cls, obj).__call__(*args, **kwargs) # type: ignore[misc] (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1739, in _wrapped_call_impl (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return self._call_impl(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1750, in _call_impl (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] return forward_call(*args, **kwargs) (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "<eval_with_key>.162", line 2660, in forward (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] submod_140 = self.submod_140(getitem_348, s0, l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_qweight_, l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_scales_, l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_qzeros_, l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_g_idx_, l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_g_idx_sort_indices_, l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_quant_method_kernel_workspace, getitem_349, l_self_modules_layers_modules_69_modules_post_attention_layernorm_parameters_weight_, l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_qweight_, l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_scales_, l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_qzeros_, 
l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_g_idx_, l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_g_idx_sort_indices_, l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_quant_method_kernel_workspace, l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_qweight_, l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_scales_, l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_qzeros_, l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_g_idx_, l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_g_idx_sort_indices_, l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_quant_method_kernel_workspace, l_self_modules_layers_modules_70_modules_input_layernorm_parameters_weight_, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_qweight_, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_scales_, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_qzeros_, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_g_idx_, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_g_idx_sort_indices_, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_quant_method_kernel_workspace, l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_bias_, l_positions_, l_self_modules_layers_modules_0_modules_self_attn_modules_rotary_emb_buffers_cos_sin_cache_); getitem_348 = l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_qweight_ = l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_scales_ = l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_qzeros_ = l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_g_idx_ = 
l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_parameters_g_idx_sort_indices_ = l_self_modules_layers_modules_69_modules_self_attn_modules_o_proj_quant_method_kernel_workspace = getitem_349 = l_self_modules_layers_modules_69_modules_post_attention_layernorm_parameters_weight_ = l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_qweight_ = l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_scales_ = l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_qzeros_ = l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_g_idx_ = l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_parameters_g_idx_sort_indices_ = l_self_modules_layers_modules_69_modules_mlp_modules_gate_up_proj_quant_method_kernel_workspace = l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_qweight_ = l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_scales_ = l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_qzeros_ = l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_g_idx_ = l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_parameters_g_idx_sort_indices_ = l_self_modules_layers_modules_69_modules_mlp_modules_down_proj_quant_method_kernel_workspace = l_self_modules_layers_modules_70_modules_input_layernorm_parameters_weight_ = l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_qweight_ = l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_scales_ = l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_qzeros_ = l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_g_idx_ = l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_g_idx_sort_indices_ = 
l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_quant_method_kernel_workspace = l_self_modules_layers_modules_70_modules_self_attn_modules_qkv_proj_parameters_bias_ = None (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/vllm/compilation/backends.py", line 670, in __call__ (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] with torch.cuda.graph(cudagraph, pool=self.graph_pool): (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/cuda/graphs.py", line 186, in __exit__ (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] self.cuda_graph.capture_end() (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] File "/usr/local/lib/python3.10/dist-packages/torch/cuda/graphs.py", line 84, in capture_end (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] super().capture_end() (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] RuntimeError: CUDA error: out of memory (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect. (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] For debugging consider passing CUDA_LAUNCH_BLOCKING=1 (VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions. 
(VllmWorker rank=1 pid=2775) ERROR 03-20 16:54:38 [multiproc_executor.py:375] (VllmWorker rank=0 pid=2764) ERROR 03-20 16:54:38 [multiproc_executor.py:375] WorkerProc hit an exception: %s (VllmWorker rank=0 pid=2764) ERROR 03-20 16:54:38 [multiproc_executor.py:375] Traceback (most recent call last): ``` I set `--max-num-seqs 2` and still got the same error. The visible difference I see is that 0.7.2 uses the V0 engine and 0.8.1 uses the V1 engine. However, it works in 0.8.1 if I set the `--enforce-eager` flag. But my project requires async operations, so I use OpenAIAsync on the client side. Will setting `--enforce-eager` cause any problems for the async process? Another thing to notice: in 0.7.2 I see `INFO 03-20 16:01:36 executor_base.py:115] Maximum concurrency for 16000 tokens per request: 7.26x`, whereas in 0.8.1 with --enforce-eager I see `INFO 03-20 17:00:20 [kv_cache_utils.py:540] Maximum concurrency for 16,000 tokens per request: 5.90x`. Why is that happening? Thanks in advance! 😄 ### Before submitting a new issue... - [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the [documentation page](https://docs.vllm.ai/en/latest/), which can answer lots of frequently asked questions.
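On the last question in this report: to my understanding (an assumption, not confirmed against the vLLM source here), the "Maximum concurrency" log line is simply the number of KV-cache token slots available divided by the per-request token budget, so the drop from 7.26x to 5.90x just reflects a smaller KV cache in the 0.8.1 configuration:

```python
def max_concurrency(kv_cache_tokens: int, tokens_per_request: int) -> float:
    # The log line reports how many full-length requests fit in the KV cache.
    return kv_cache_tokens / tokens_per_request

# Back out the KV-cache sizes implied by the two log lines
# (16,000-token budget per request):
v0_cache = round(16_000 * 7.26)  # about 116160 tokens available in 0.7.2
v1_cache = round(16_000 * 5.90)  # about 94400 tokens in 0.8.1 with --enforce-eager

print(v0_cache, v1_cache)  # 116160 94400
print(round(max_concurrency(v0_cache, 16_000), 2))  # 7.26
```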
open
2025-03-20T15:16:17Z
2025-03-24T10:20:51Z
https://github.com/vllm-project/vllm/issues/15228
[ "bug" ]
venki-lfc
14
Nemo2011/bilibili-api
api
375
[Question] Live stream danmaku messages are getting lost
**Python version:** 3.10.11 **Module version:** 15.5.1 **Environment:** Windows --- The connection and the listener both report success, but a lot of danmaku messages are being lost. ![image](https://github.com/Nemo2011/bilibili-api/assets/40910637/7ab698a9-3ff1-44a4-a0fa-309304e157f8)
closed
2023-07-05T16:25:15Z
2023-07-17T08:01:04Z
https://github.com/Nemo2011/bilibili-api/issues/375
[ "question" ]
Ikaros-521
5
gradio-app/gradio
data-science
10,087
Add the ability to reorder files in `gr.File`
When uploading multiple files, add a sortable function, such as the drag-and-drop method, to sort the files. Sometimes we hope to process the uploaded files or the generated files in a certain order.
closed
2024-12-02T02:13:57Z
2024-12-17T15:03:02Z
https://github.com/gradio-app/gradio/issues/10087
[ "enhancement" ]
ecshoper
0
lux-org/lux
jupyter
60
Running Pandas test suite to ensure Pandas function coverage
Running Pandas test suite to ensure that metadata and recommendations maintained for a variety of Pandas functions.
open
2020-08-11T01:45:58Z
2021-02-19T04:52:25Z
https://github.com/lux-org/lux/issues/60
[ "test" ]
dorisjlee
0
python-gino/gino
sqlalchemy
228
transaction context manager not rolling back on exception
* GINO version: 0.7.3 * Python version: 3.6.3 * Operating System: Ubuntu 17.10 ### Description Transaction context manager doesn't roll back on exception. ### What I Did ```python async with db.transaction(): await instance.update(name='splunge').apply() raise RuntimeError ``` Upon exiting the context manager, the instance row has been modified to the new value.
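The rollback-on-exception semantics this issue expects from `db.transaction()` can be illustrated with the standard library's sqlite3, whose connection context manager behaves that way (a generic sketch, not GINO code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE heroes (name TEXT)")
conn.execute("INSERT INTO heroes VALUES ('rusty')")
conn.commit()

# Expected transaction semantics: any exception raised inside
# the block rolls the pending changes back.
try:
    with conn:  # sqlite3 connections commit on success, roll back on error
        conn.execute("UPDATE heroes SET name = 'splunge'")
        raise RuntimeError("simulated failure inside the transaction")
except RuntimeError:
    pass

name = conn.execute("SELECT name FROM heroes").fetchone()[0]
print(name)  # rusty: the update was rolled back
```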
closed
2018-05-22T14:01:37Z
2018-08-15T10:43:05Z
https://github.com/python-gino/gino/issues/228
[ "enhancement" ]
aglassdarkly
8
ultralytics/ultralytics
machine-learning
19,117
predict on rgb or bgr
### Search before asking - [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question Hello, I have a question and I'm confused. When yolo11_seg predicts, instead of passing a local image path, I call the model.predict() function directly, like this: `results = model.predict(img, imgsz=640, save=True, save_txt=False, conf=0.5)` I need to know whether the image data should be in RGB or BGR channel order. During training I did no additional channel processing, so the default should be RGB; does that mean prediction also expects RGB? ### Additional I tested on my data. The original image is: ![Image](https://github.com/user-attachments/assets/ec1a65d7-f7aa-46e7-a436-203eacd2d4be) When I predict, I read the image with OpenCV, like this: `image = cv2.imread('xxx.jpg')` The result predicted on BGR is: ![Image](https://github.com/user-attachments/assets/700aceb6-ca1f-4cdb-9e3e-4cc1f3dc6850) The result predicted on RGB (`image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)`) is: ![Image](https://github.com/user-attachments/assets/ab007f4a-deaf-46a3-bef2-c060d758dd90) From the results, should I use the BGR channel? Thanks.
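Whichever order the model expects (the Ultralytics docs are the authoritative source for what `predict` assumes for raw arrays), the swap itself is cheap: `cv2.cvtColor(image, cv2.COLOR_BGR2RGB)` is equivalent to reversing the channel axis, as this NumPy-only sketch shows, with no OpenCV required:

```python
import numpy as np

# A 2x2 "image" in BGR order (OpenCV's cv2.imread convention).
bgr = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [10, 20, 30]]], dtype=np.uint8)

# cv2.cvtColor(image, cv2.COLOR_BGR2RGB) amounts to reversing the
# last (channel) axis:
rgb = bgr[..., ::-1]

print(rgb[0, 0].tolist())  # [0, 0, 255]: the blue value moves from first to last
```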
closed
2025-02-07T05:43:29Z
2025-02-13T18:06:01Z
https://github.com/ultralytics/ultralytics/issues/19117
[ "question", "segment" ]
uuu686
4
vipstone/faceai
tensorflow
47
图片上色的教程什么时候能出呢?
作者大大,图片上色是使用的什么模型呢?教程什么时候能出呢?
closed
2019-12-14T04:17:23Z
2019-12-18T12:16:01Z
https://github.com/vipstone/faceai/issues/47
[]
schrodingercatss
0
jowilf/starlette-admin
sqlalchemy
100
Bug: IntegrityError in ModelView.delete method causes PendingRollbackError in some cases
**Describe the bug** I am using `starlette_admin` with sqlalchemy contrib. `DBSessionMiddleware`, as first middleware, creates session and then properly closes it at the end. However, if error occurs during `ModelView.delete` method, the IntegrityError occurs, which should be handled. I have inherited `BaseAdmin` and re-defined exception handlers to handle StarletteAdminException and SQLAlchemy's IntegrityError with the same function `_render_error`, which was used for `HttpException`. However, this function renders template "error.html" by using `AuthProvider.get_admin_user`, which (in my case, at least) internally asks session for user fields. And since session is in invalid state, but not rollbacked **yet** by `DBSessionMiddleware`, there occurs a `sqlalchemy.exc.PendingRollbackError` exception. The traceback is as follows: ``` Traceback (most recent call last): File "D:\Projects\starlette-web\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 403, in run_asgi result = await app( # type: ignore[func-returns-value] File "D:\Projects\starlette-web\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 74, in __call__ return await self.app(scope, receive, send) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__ raise exc File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "D:\Projects\starlette-web\venv\lib\site-packages\sentry_sdk\integrations\asgi.py", line 139, in _run_asgi3 return await self._run_app(scope, lambda: self.app(scope, receive, send)) File "D:\Projects\starlette-web\venv\lib\site-packages\sentry_sdk\integrations\asgi.py", line 186, in _run_app raise exc from None File 
"D:\Projects\starlette-web\venv\lib\site-packages\sentry_sdk\integrations\asgi.py", line 183, in _run_app return await callback() File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\exceptions.py", line 75, in __call__ raise exc File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\exceptions.py", line 64, in __call__ await self.app(scope, receive, sender) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\routing.py", line 687, in __call__ await route.handle(scope, receive, send) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\routing.py", line 436, in handle await self.app(scope, receive, send) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\applications.py", line 112, in __call__ await self.middleware_stack(scope, receive, send) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\errors.py", line 184, in __call__ raise exc File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\errors.py", line 162, in __call__ await self.app(scope, receive, _send) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__ response = await self.dispatch_func(request, call_next) File "D:\Projects\starlette-web\starlette_web\contrib\admin\middleware.py", line 28, in dispatch response = await call_next(request) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next raise app_exc File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\base.py", line 66, in coro await self.app(scope, receive_or_disconnect, send_no_error) File "D:\Projects\starlette-web\starlette_web\contrib\admin\middleware.py", line 75, in __call__ await self.app(scope, receive, send_wrapper) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\base.py", line 104, in __call__ response = await self.dispatch_func(request, call_next) File 
"D:\Projects\starlette-web\venv\lib\site-packages\starlette_admin\auth.py", line 161, in dispatch return await call_next(request) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\base.py", line 80, in call_next raise app_exc File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\base.py", line 66, in coro await self.app(scope, receive_or_disconnect, send_no_error) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\middleware\exceptions.py", line 84, in __call__ response = await handler(request, exc) File "D:\Projects\starlette-web\starlette_web\contrib\admin\admin.py", line 101, in _render_error return self.templates.TemplateResponse( File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\templating.py", line 112, in TemplateResponse return _TemplateResponse( File "D:\Projects\starlette-web\venv\lib\site-packages\starlette\templating.py", line 38, in __init__ content = template.render(context) File "D:\Projects\starlette-web\venv\lib\site-packages\jinja2\environment.py", line 1285, in render self.environment.handle_exception() File "D:\Projects\starlette-web\venv\lib\site-packages\jinja2\environment.py", line 926, in handle_exception raise rewrite_traceback_stack(source=source) File "D:\Projects\starlette-web\venv\lib\site-packages\starlette_admin\templates\error.html", line 1, in top-level template code {% extends "layout.html" %} ``` ``` File "D:\Projects\starlette-web\venv\lib\site-packages\starlette_admin\templates\layout.html", line 3, in top-level template code {% set current_user = (request | get_admin_user) %} File "D:\Projects\starlette-web\starlette_web\contrib\admin\auth_provider.py", line 65, in get_admin_user username = request.scope["user"].email ``` ``` File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\attributes.py", line 478, in __get__ return self.impl.get(state, dict_) File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\attributes.py", 
line 919, in get value = self._fire_loader_callables(state, key, passive) File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\attributes.py", line 946, in _fire_loader_callables return state._load_expired(state, passive) File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\state.py", line 695, in _load_expired self.manager.expired_attribute_loader(self, toload, passive) File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\loading.py", line 1383, in load_scalar_attributes result = load_on_ident( File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\loading.py", line 385, in load_on_ident return load_on_pk_identity( File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\loading.py", line 499, in load_on_pk_identity session.execute( File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\session.py", line 1676, in execute conn = self._connection_for_bind(bind) File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\session.py", line 1529, in _connection_for_bind return self._transaction._connection_for_bind(engine, execution_options) File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\session.py", line 714, in _connection_for_bind self._assert_active() File "D:\Projects\starlette-web\venv\lib\site-packages\sqlalchemy\orm\session.py", line 598, in _assert_active raise sa_exc.PendingRollbackError( sqlalchemy.exc.PendingRollbackError: This Session's transaction has been rolled back due to a previous exception during flush. To begin a new transaction with this Session, first issue Session.rollback(). Original exception was: (sqlalchemy.dialects.postgresql.asyncpg.IntegrityError) <class 'asyncpg.exceptions.ForeignKeyViolationError'>: update or delete on table "auth_users" violates foreign key constraint "auth_sessions_user_id_fkey" on table "auth_sessions" DETAIL: Key (id)=(1) is still referenced from table "auth_sessions".
[SQL: DELETE FROM auth_users WHERE auth_users.id = %s] [parameters: ((1,), (2,))] (Background on this error at: https://sqlalche.me/e/14/gkpj) (Background on this error at: https://sqlalche.me/e/14/7s2a) ``` **Environment (please complete the following information):** - starlette==0.25.0 - starlette-admin==0.5.2 - SQLAlchemy==1.4.46 **Additional context** This seems to be part of a more general problem: whenever an exception occurs in these methods, the session should be rolled back immediately. Luckily, the bug seems to be solvable by simply wrapping these two lines in `async with session.begin_nested()` https://github.com/jowilf/starlette-admin/blob/main/starlette_admin/contrib/sqla/view.py#L378-L379 (in the same way, with a sync session). I haven't studied this behavior on methods other than `delete`.
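The suggested `begin_nested()` fix works because SQLAlchemy maps it to a SQL SAVEPOINT: a failed statement can be rolled back to the savepoint while the outer transaction (and the session) stays usable. The mechanics can be sketched with stdlib sqlite3 (a generic illustration, not the starlette-admin code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage transactions manually
conn.execute("CREATE TABLE t (x INTEGER)")

conn.execute("BEGIN")
conn.execute("INSERT INTO t VALUES (1)")

# Equivalent of `async with session.begin_nested():` -- a SAVEPOINT.
conn.execute("SAVEPOINT sp")
try:
    conn.execute("INSERT INTO t VALUES (2)")
    raise RuntimeError("simulated IntegrityError")
except RuntimeError:
    # Only the work since the savepoint is undone; the outer
    # transaction is still active and usable afterwards.
    conn.execute("ROLLBACK TO sp")
conn.execute("COMMIT")

print([row[0] for row in conn.execute("SELECT x FROM t")])  # [1]
```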
closed
2023-02-25T17:21:07Z
2023-02-26T10:30:42Z
https://github.com/jowilf/starlette-admin/issues/100
[ "bug" ]
dolamroth
2
CorentinJ/Real-Time-Voice-Cloning
deep-learning
792
Datasets_Root
Regarding the instructions to run the preprocessing with the datasets: how do I figure out where the dataset root is? What is the command line needed to do so? I'm using Windows 10, BTW. It launches and works without any dataset_root listed, but I want to make it work better, and without the datasets I can't.
closed
2021-07-08T17:46:29Z
2021-08-25T09:42:20Z
https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/792
[]
C4l1b3r
10
WZMIAOMIAO/deep-learning-for-image-processing
deep-learning
299
Dataset cannot be downloaded
http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar The dataset link cannot be opened.
closed
2021-06-10T01:32:42Z
2021-06-21T09:40:27Z
https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/299
[]
jolt2017
2
docarray/docarray
fastapi
1,496
Implement a DocDict
# Context We want to introduce an in-memory AnyDocArray that can be accessed by id. It would basically be similar to DocList but with the ability to be accessed by id instead of by index. The final design is not defined yet, and we will iterate on this issue to propose a design.
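One possible shape for such a container is a thin mapping keyed by document id. The interface below is hypothetical (the issue explicitly leaves the design open); it only illustrates the access pattern being proposed:

```python
class DocDict:
    """Minimal sketch of an id-addressable document container.

    Hypothetical interface: the real DocDict design is still
    under discussion in the issue.
    """

    def __init__(self, docs=()):
        self._docs = {doc["id"]: doc for doc in docs}

    def __getitem__(self, doc_id):
        # Access by id, in contrast to DocList's access by index.
        return self._docs[doc_id]

    def __setitem__(self, doc_id, doc):
        self._docs[doc_id] = doc

    def __len__(self):
        return len(self._docs)

    def __iter__(self):
        return iter(self._docs.values())


docs = DocDict([{"id": "a", "text": "hello"}, {"id": "b", "text": "world"}])
print(docs["b"]["text"])  # world
```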
open
2023-05-05T09:40:41Z
2023-05-08T11:44:08Z
https://github.com/docarray/docarray/issues/1496
[]
samsja
0
GibbsConsulting/django-plotly-dash
plotly
175
django dash
Please, I'd love to set up a simple Plotly Dash app on Django. Can you help me out?
closed
2019-10-01T16:13:26Z
2020-06-20T19:17:34Z
https://github.com/GibbsConsulting/django-plotly-dash/issues/175
[]
osasblack
1
python-gino/gino
sqlalchemy
213
Add Quart extension
https://gitlab.com/pgjones/quart
closed
2018-04-28T04:20:24Z
2018-06-08T09:26:52Z
https://github.com/python-gino/gino/issues/213
[ "help wanted", "feature request" ]
fantix
0
tensorly/tensorly
numpy
522
svd_interface will throw an error if the number of rows of the matrix is smaller than its number of columns
Let's say, `matrix.shape==(2, 4)` and `n_eigenvecs==3`, `truncated_svd` will output `U.shape ==(2,2)`, `V.shape==(3, 4)` and `S.shape==2`. In this case, `S.diag` in the following line will return a matrix of shape `(2,2)`. The shapes `(2,2), (2,2) and (3,4)` are not compatible for matmul. Ideally, S should be updated as: `tl.fill_diagonal(tl.eye(U.shape[-1], V.shape[-2]), S)` https://github.com/tensorly/tensorly/blob/de05e178850eb2abe43ec1a40f80624ca606807d/tensorly/tenalg/svd.py#L428
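The shape mismatch and the proposed fix can be reproduced with plain NumPy, using the shapes from the example above (a sketch of the arithmetic, not the tensorly internals):

```python
import numpy as np

# Shapes from the example: matrix (2, 4), n_eigenvecs == 3.
U = np.random.rand(2, 2)
S = np.random.rand(2)
V = np.random.rand(3, 4)

# A square diag(S) of shape (2, 2) leaves (2, 2) @ (3, 4),
# which is not a valid matmul:
try:
    U @ np.diag(S) @ V
    ok = True
except ValueError:
    ok = False
print(ok)  # False

# A rectangular diagonal of shape (U.shape[-1], V.shape[-2]) restores
# compatibility, along the lines of the fix proposed above:
S_rect = np.zeros((U.shape[-1], V.shape[-2]))
np.fill_diagonal(S_rect, S)
print((U @ S_rect @ V).shape)  # (2, 4)
```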
closed
2023-08-21T14:12:59Z
2023-09-08T18:10:57Z
https://github.com/tensorly/tensorly/issues/522
[]
hello-fri-end
1
AutoViML/AutoViz
scikit-learn
86
A clean install fails on import
I attempted to install the latest version (`0.1.604`) in a fresh poetry project and the quickstart code fails on import: ```In [1]: from autoviz import AutoViz_Class ...: AV = AutoViz_Class() --------------------------------------------------------------------------- ModuleNotFoundError Traceback (most recent call last) File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/holoviews/plotting/bokeh/annotation.py:17 16 try: ---> 17 from bokeh.models.arrow_heads import TeeHead, NormalHead 18 arrow_start = {'<->': NormalHead, '<|-|>': NormalHead} ModuleNotFoundError: No module named 'bokeh.models.arrow_heads' During handling of the above exception, another exception occurred: ModuleNotFoundError Traceback (most recent call last) Cell In[1], line 1 ----> 1 from autoviz import AutoViz_Class 2 AV = AutoViz_Class() File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/autoviz/__init__.py:3 1 name = "autoviz" 2 from .__version__ import __version__, __holo_version__ ----> 3 from .AutoViz_Class import AutoViz_Class 4 from .classify_method import data_cleaning_suggestions 5 if __name__ == "__main__": File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/autoviz/AutoViz_Class.py:61 59 from sklearn.model_selection import train_test_split 60 ########################################################################################## ---> 61 from autoviz.AutoViz_Holo import AutoViz_Holo 62 from autoviz.AutoViz_Utils import * 63 from autoviz.AutoViz_NLP import draw_word_clouds File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/autoviz/AutoViz_Holo.py:5 3 import pandas as pd 4 ############# Import from autoviz.AutoViz_Class the following libraries ####### ----> 5 from autoviz.AutoViz_Utils import * 6 ############## make sure you use: conda install -c pyviz hvplot ############### 7 import hvplot.pandas 
# noqa File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/autoviz/AutoViz_Utils.py:61 59 from sklearn.model_selection import train_test_split 60 ######## This is where we import HoloViews related libraries ######### ---> 61 import hvplot.pandas 62 import holoviews as hv 63 from holoviews import opts File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/hvplot/__init__.py:69 65 import holoviews as _hv 67 from holoviews import Store, render # noqa ---> 69 from .converter import HoloViewsConverter 70 from .interactive import Interactive 71 from .ui import explorer # noqa File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/hvplot/converter.py:23 16 from holoviews.core.util import max_range 17 from holoviews.element import ( 18 Curve, Scatter, Area, Bars, BoxWhisker, Dataset, Distribution, 19 Table, HeatMap, Image, HexTiles, QuadMesh, Bivariate, Histogram, 20 Violin, Contours, Polygons, Points, Path, Labels, RGB, ErrorBars, 21 VectorField, Rectangles, Segments 22 ) ---> 23 from holoviews.plotting.bokeh import OverlayPlot, colormap_generator 24 from holoviews.plotting.util import process_cmap 25 from holoviews.operation import histogram File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/holoviews/plotting/bokeh/__init__.py:26 23 except: 24 DFrame = None ---> 26 from .annotation import ( 27 TextPlot, LineAnnotationPlot, BoxAnnotationPlot, SplinePlot, ArrowPlot, 28 DivPlot, LabelsPlot, SlopePlot 29 ) 30 from ..plot import PlotSelector 31 from ..util import fire File ~/Library/Caches/pypoetry/virtualenvs/autoviz-trial-x6XO0Xb6-py3.9/lib/python3.9/site-packages/holoviews/plotting/bokeh/annotation.py:22 19 arrow_end = {'->': NormalHead, '-[': TeeHead, '-|>': NormalHead, 20 '-': None} 21 except: ---> 22 from bokeh.models.arrow_heads import OpenHead, NormalHead 23 arrow_start = {'<->': 
NormalHead, '<|-|>': NormalHead} 24 arrow_end = {'->': NormalHead, '-[': OpenHead, '-|>': NormalHead, 25 '-': None} ModuleNotFoundError: No module named 'bokeh.models.arrow_heads' ``` I have tried using Python 3.10 and 3.9 with the same results. I know your setup instructions call for Python 3.7, but this looks more like an incompatibility with recent versions of `bokeh` that could be resolved by constraining the `bokeh` dependency in setup.py. The install works if I peg `bokeh` to 2.4.2
closed
2023-05-25T15:29:37Z
2023-06-07T12:09:02Z
https://github.com/AutoViML/AutoViz/issues/86
[]
ahasha
6
HIT-SCIR/ltp
nlp
672
IndexError: list index out of range
[Input text] 绿色和平年年都在上述高峰会的开会地点游说,以争取太平洋地区的进一步支持,来因应温室效应问题,并为其他环保计划争取更大的协助。 [Error message] out1 = self.ltp.pipeline(sents, tasks = ['cws', 'pos']) File "/home/.venv/a100/lib/python3.8/site-packages/ltp/nerual.py", line 24, in wrapper return func(*args, **kwargs) File "/home/.venv/a100/lib/python3.8/site-packages/ltp/nerual.py", line 122, in pipeline tokenized = self.tokenizer.batch_encode_plus( File "/home/.venv/a100/lib/python3.8/site-packages/transformers/tokenization_utils_base.py", line 2854, in batch_encode_plus return self._batch_encode_plus( File "/home/.venv/a100/lib/python3.8/site-packages/transformers/tokenization_utils_fast.py", line 458, in _batch_encode_plus for key in tokens_and_encodings[0][0].keys(): IndexError: list index out of range
closed
2023-10-20T01:22:41Z
2023-10-20T01:27:19Z
https://github.com/HIT-SCIR/ltp/issues/672
[]
gandolfxu
0
scikit-optimize/scikit-optimize
scikit-learn
230
Unpin PyQT version for CI builds
Undo #229 once ContinuumIO/anaconda-issues#1068 has been fixed. Once the bug in conda has been fixed we should stop pinning the version of pyqt that we use in the CI builds.
closed
2016-09-19T14:14:18Z
2016-11-30T12:28:49Z
https://github.com/scikit-optimize/scikit-optimize/issues/230
[ "Build / CI", "Easy" ]
betatim
1
apache/airflow
machine-learning
47,377
Avoid scheduler crash when executor_config is passed for executors using task sdk
### Body Following on https://github.com/apache/airflow/pull/47375#discussion_r1980895131 The scheduler should never crash as a result of problems with a specific task. It should have protection against such cases. The expected behavior is to log the exception and continue. ### Committer - [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project.
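The "log and continue" behavior asked for here is a generic guard pattern; the sketch below is hypothetical (it is not Airflow scheduler code) and just shows the shape of it: one bad task is logged and skipped, and the loop survives:

```python
import logging

logger = logging.getLogger("scheduler")


def schedule_task(task: dict) -> str:
    # Hypothetical per-task step; a bad executor_config raises here,
    # standing in for the failure mode discussed in the linked PR.
    if task.get("executor_config") == "bad":
        raise ValueError("unsupported executor_config")
    return "scheduled"


def scheduler_loop(tasks: list) -> list:
    outcomes = []
    for task in tasks:
        try:
            outcomes.append(schedule_task(task))
        except Exception:
            # The behavior the issue asks for: log the exception and
            # continue, never let one bad task take down the scheduler.
            logger.exception("failed to schedule task %s", task.get("id"))
            outcomes.append("failed")
    return outcomes


outcome = scheduler_loop([
    {"id": 1},
    {"id": 2, "executor_config": "bad"},
    {"id": 3},
])
print(outcome)  # ['scheduled', 'failed', 'scheduled']
```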
closed
2025-03-05T08:39:12Z
2025-03-09T16:20:28Z
https://github.com/apache/airflow/issues/47377
[ "kind:bug", "area:Scheduler", "priority:medium", "area:core", "affected_version:3.0.0beta" ]
eladkal
1
junyanz/pytorch-CycleGAN-and-pix2pix
computer-vision
812
test.py RuntimeError: Error(s) in loading state_dict for UnetGenerator
I have a saved model, and when I run this command: sudo python3 test.py --dataroot ./datasets/mydataset --model pix2pix --direction BtoA --name test10 I get this error: dataset [AlignedDataset] was created initialize network with normal model [Pix2PixModel] was created loading the model from ./checkpoints/test10/latest_net_G.pth Traceback (most recent call last): File "test.py", line 47, in <module> model.setup(opt) # regular setup: load and print networks; create schedulers File "/usr/myec2/pix2pix/models/base_model.py", line 88, in setup self.load_networks(load_suffix) File "/usr/myec2/pix2pix/models/base_model.py", line 198, in load_networks net.load_state_dict(state_dict) File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 839, in load_state_dict self.__class__.__name__, "\n\t".join(error_msgs))) RuntimeError: Error(s) in loading state_dict for UnetGenerator: Missing key(s) in state_dict: "model.model.1.model.2.weight", "model.model.1.model.2.bias", "model.model.1.model.2.running_mean", "model.model.1.model.2.running_var", "model.model.1.model.3.model.2.weight", "model.model.1.model.3.model.2.bias", "model.model.1.model.3.......
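A state_dict mismatch like this usually means the checkpoint was saved with different generator settings (for example a different `--netG`, `--ngf`, or `--norm`) than the ones test.py reconstructs. Diffing the key sets narrows it down; the key names below are hypothetical stand-ins for `model.state_dict().keys()` and `torch.load(checkpoint_path).keys()`:

```python
# Hypothetical key sets; in practice they come from
# model.state_dict().keys() and torch.load(checkpoint_path).keys().
model_keys = {
    "model.model.0.weight",
    "model.model.1.model.2.weight",
    "model.model.1.model.2.bias",
}
ckpt_keys = {"model.model.0.weight"}

missing = sorted(model_keys - ckpt_keys)     # expected by the model, absent in file
unexpected = sorted(ckpt_keys - model_keys)  # present in file, unknown to the model

print(len(missing), len(unexpected))  # 2 0
```

Many "Missing key(s)" with no "Unexpected key(s)" tends to point at extra layers in the freshly built network, such as norm layers that the checkpointed architecture did not have.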
closed
2019-10-23T03:52:10Z
2019-10-23T05:41:35Z
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/812
[]
Pavm96
3
mljar/mercury
data-visualization
41
Dynamic parameter configuration
It would help to add one preprocess block on top that runs only once, when the notebook is added. In this block we can compute values which can then be used to set the configuration of the parameters. An example: suppose we want to make a "select" whose choices are the names of the files in a folder; we could do something like this. ![image](https://user-images.githubusercontent.com/19734489/153516712-708085bf-d313-4d7c-9c48-0aa26875eba5.png)
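The preprocess step being requested amounts to computing widget choices once, from the filesystem, before the parameters are rendered. A plain-Python sketch of that computation (Mercury's actual widget API is not shown; `choices` is what would be fed into the select parameter):

```python
import tempfile
from pathlib import Path

# One-time "preprocess" step: scan a data folder and derive the choices
# for a select parameter. A temporary folder stands in for the real one.
with tempfile.TemporaryDirectory() as workdir:
    for name in ("a.csv", "b.csv", "notes.txt"):
        (Path(workdir) / name).touch()
    choices = sorted(p.name for p in Path(workdir).glob("*.csv"))

print(choices)  # ['a.csv', 'b.csv']
```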
closed
2022-02-10T23:54:24Z
2023-02-15T10:09:00Z
https://github.com/mljar/mercury/issues/41
[]
shivendra2015iiit
2
fastapi/sqlmodel
fastapi
249
Create Relationships with Unique Fields (UniqueViolationError)
### First Check - [X] I added a very descriptive title to this issue. - [X] I used the GitHub search to find a similar issue and didn't find it. - [X] I searched the SQLModel documentation, with the integrated search. - [X] I already searched in Google "How to X in SQLModel" and didn't find any information. - [X] I already read and followed all the tutorial in the docs and didn't find an answer. - [X] I already checked if it is not related to SQLModel but to [Pydantic](https://github.com/samuelcolvin/pydantic). - [X] I already checked if it is not related to SQLModel but to [SQLAlchemy](https://github.com/sqlalchemy/sqlalchemy). ### Commit to Help - [X] I commit to help with one of those options 👆 ### Example Code ```python # From the SQLModel Tutorial (https://sqlmodel.tiangolo.com/tutorial/relationship-attributes/create-and-update-relationships/) from typing import List, Optional from sqlmodel import Field, Relationship, Session, SQLModel, create_engine class Team(SQLModel, table=True): # id: Optional[int] = Field(default=None, primary_key=True) id: int = Field(default=None, primary_key=True) # NEW name: str = Field(index=True) heroes: List["Hero"] = Relationship(back_populates="team") class Hero(SQLModel, table=True): # id: Optional[int] = Field(default=None, primary_key=True) id: int = Field(default=None, primary_key=True) # NEW name: str = Field(index=True) # team_id: Optional[int] = Field(default=None, foreign_key="team.id") # team: Optional[Team] = Relationship(back_populates="heroes") team_id: int = Field(default=None, foreign_key="team.id") # NEW team: Team = Relationship(back_populates="heroes") # NEW from sqlalchemy.ext.asyncio import AsyncSession # ADDED: 2022_02_24 # def create_heroes(): async def create_heroes(session: AsyncSession, request: Hero): # NEW, EDITED: 2022_02_24 # with Session(engine) as session: # EDITED # team_preventers = Team(name="Preventers", headquarters="Sharp Tower") # REMOVE # team_z_force = Team(name="Z-Force", 
headquarters="Sister Margaret’s Bar") # REMOVE assigned_team = Team(name=request.team_to_assign) new_hero = Hero( name=request.hero_name, team=assigned_team ) session.add(new_hero) await session.commit() # EDITED: 2022_02_24 await session.refresh(new_hero) # EDITED: 2022_02_24 return new_hero # ADDED: 2022_02_24 # Code below omitted 👇 ``` ### Description I'm following the SQLModel tutorial as I implement my own version. I have a model very similar to the above example (derived from the Hero/Team example given in the tutorial on how to implement One-To-Many relationships with SQLModel. When I use this approach, it does write the required Team and Hero objects to my database. However, it does not check the Team table to ensure that the "team_to_assign" from the request object does not already exist. So, if I use the "create_heroes" function (in two separate commits) to create two Heroes who are on the same team, I get two entries for the same team in the Team table. This is not desirable. If the team already exists, the Hero being created should use the id that already exists for that team. When I implement "sa_column_kwargs={"unique": True}" within the "name" Field of the Team table, I can no longer create a new Hero if they are to be connected to a Team that already exists. I get the error: > `sqlalchemy.exc.IntegrityError: (sqlalchemy.dialects.postgresql.asyncpg.IntegrityError) <class 'asyncpg.exceptions.UniqueViolationError'>: duplicate key value violates unique constraint "ix_team_name" > DETAIL: Key (name)=(team_name) already exists. > [SQL: INSERT INTO "team" (name) VALUES (%s) RETURNING "team".id]` I was hoping that would somehow tell SQLModel to skip the insertion of a Team that already exists and get the appropriate Team id instead. Clearly it just stops it from happening. SQLModel doesn't appear to check that a Team already exists before inserting it into the Team table. 
Am I missing something about how to handle this with SQLModel, or am I meant to employ my own logic to check the Team table prior to generating the Hero object to insert? Thanks for your time! ### Operating System Linux ### Operating System Details _No response_ ### SQLModel Version 0.0.6 ### Python Version 3.10 ### Additional Context Using async libraries: SQLAlchemy = {extras = ["asyncio"], version = "^1.4.31"} asyncpg = "^0.25.0"
closed
2022-02-23T17:02:05Z
2022-03-02T14:08:42Z
https://github.com/fastapi/sqlmodel/issues/249
[ "question" ]
njdowdy
11
dask/dask
numpy
11349
map_overlap passes wrong block_info[:]['array-location']
<!-- Please include a self-contained copy-pastable example that generates the issue if possible. Please be concise with code posted. See guidelines below on how to provide a good bug report: - Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports - Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly. --> **Describe the issue**: `dask.array.map_overlap()` reports the wrong position in the full array to the mapping function. It should report the indices of the original array not after the addition of the overlaps. **Minimal Complete Verifiable Example**: ```python >>> import numpy as np >>> from dask import array as da >>> x = da.from_array(np.arange(1000), chunks=100) >>> x.map_overlap(lambda b, block_info=None: np.array([[b[0], b[-1]]]).T, ... depth=5, dtype=int, chunks=(2,1), new_axis=0) dask.array<_trim, shape=(2, 10), dtype=int64, chunksize=(2, 1), chunktype=numpy.ndarray> >>> _.compute() array([[ 0, 95, 195, 295, 395, 495, 595, 695, 795, 895], [104, 204, 304, 404, 504, 604, 704, 804, 904, 999]]) >>> x.map_overlap(lambda b, block_info=None: np.array(block_info[0]['array-location']).T, ... depth=5, dtype=int, chunks=(2,1), new_axis=0) dask.array<_trim, shape=(2, 10), dtype=int64, chunksize=(2, 1), chunktype=numpy.ndarray> >>> _.compute() array([[ 0, 105, 215, 325, 435, 545, 655, 765, 875, 985], [ 105, 215, 325, 435, 545, 655, 765, 875, 985, 1090]]) ``` **Environment**: - Dask version: 2024.8.1 - Python version: 3.11.9 - Operating System: Linux - Install method (conda, pip, source): pip
open
2024-08-27T13:55:32Z
2024-10-11T17:06:18Z
https://github.com/dask/dask/issues/11349
[ "array" ]
bnavigator
2
sepandhaghighi/samila
matplotlib
88
Support random mode
#### Description Support random mode for `color`, `bgcolor` and `projection`
closed
2022-01-18T17:30:19Z
2022-05-04T12:15:56Z
https://github.com/sepandhaghighi/samila/issues/88
[ "enhancement", "new feature" ]
sepandhaghighi
0
dask/dask
scikit-learn
11740
dask.array.pad does not chunk up padded region
<!-- Please include a self-contained copy-pastable example that generates the issue if possible. Please be concise with code posted. See guidelines below on how to provide a good bug report: - Craft Minimal Bug Reports http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports - Minimal Complete Verifiable Examples https://stackoverflow.com/help/mcve Bug reports that follow these guidelines are easier to diagnose, and so are often handled much more quickly. --> **Describe the issue**: **Minimal Complete Verifiable Example**: ```python import dask.array array = dask.array.ones( (61440, 30757), chunks=(4000, 4000)) dask.array.pad(array, ((0, 88970 - 61440), (0,0)), mode="constant", constant_values=0) ``` <img width="449" alt="Image" src="https://github.com/user-attachments/assets/61b8e385-c58d-4068-ad0e-bf04fdd7e469" />
closed
2025-02-13T00:17:07Z
2025-02-13T15:33:53Z
https://github.com/dask/dask/issues/11740
[ "needs triage" ]
dcherian
0
TvoroG/pytest-lazy-fixture
pytest
23
Nested fixtures are not seen by pytest_lazyfixture unless autouse=True
Using modified example from the official Usage instructions: @fixture(params=[0, 1], autouse=True) def zero_one(request): return request.param @fixture(params=[1]) def one(request, zero_one): return zero_one * request.param @fixture(params=[0, 1], autouse=False) def zero_two(request): return request.param @fixture def two(request, zero_two): return zero_two * request.param @fixture(params=[ lazy_fixture('one'), ]) def some_auto(request): return request.param @fixture(params=[ lazy_fixture('two') ]) def some(request): return request.param def test_pytest_can_see_all_fixtures(zero_one, zero_two, one, two): for f in (zero_one, zero_two, one, two): assert isinstance(f, int) def test_func_autouse(some_auto): assert some_auto in [0, 1] def test_func_no_autouse(some): assert some in [0, 1] The last test will fail unexpectedly with error: C:\Anaconda\lib\site-packages\_pytest\python.py:680: ValueError During handling of the above exception, another exception occurred: request = <FixtureRequest for <Function 'test_func_no_autouse[0-some0]'>> def fill(request): item = request._pyfuncitem fixturenames = item.fixturenames argnames = item._fixtureinfo.argnames for fname in fixturenames: if fname not in item.funcargs and fname not in argnames: item.funcargs[fname] = request.getfixturevalue(fname) if hasattr(item, 'callspec'): for param, val in sorted_by_dependency(item.callspec.params, fixturenames): if is_lazy_fixture(val): > item.callspec.params[param] = request.getfixturevalue(val.name) E Failed: The requested fixture has no parameter defined for the current test. E E Requested fixture 'zero_two' defined in: E some_package\tests\test_example.py:47 E E Requested here: E C:\Anaconda\lib\site-packages\_pytest\fixtures.py:523 C:\Anaconda\lib\site-packages\pytest_lazyfixture.py:40: Failed For some reason `pytest_lazyfixture` doesn't see fixtures that depend on other fixtures that `pytest` can see unless `autouse=True` in the lowest level fixture.
open
2018-02-05T14:04:00Z
2018-02-05T14:04:00Z
https://github.com/TvoroG/pytest-lazy-fixture/issues/23
[]
wikiped
0
aminalaee/sqladmin
fastapi
765
TypeError: UUID objects are immutable (fastapi_users)
### Checklist - [X] The bug is reproducible against the latest release or `master`. - [X] There are no similar issues or pull requests to fix it yet. ### Describe the bug Hi there, I am using fastapi_users for authentication. Here is the code, inside app directory app.py ``` from contextlib import asynccontextmanager from fastapi import FastAPI from sqladmin import Admin, ModelView from app.db import User, create_db_and_tables, engine @asynccontextmanager async def lifespan(app: FastAPI): await create_db_and_tables() yield app = FastAPI(lifespan=lifespan) admin = Admin(app, engine) class UserAdmin(ModelView, model=User): ... admin.add_view(UserAdmin) ``` in db.py ``` from typing import AsyncGenerator from fastapi import Depends from fastapi_users.db import SQLAlchemyBaseUserTableUUID, SQLAlchemyUserDatabase from sqlalchemy.ext.asyncio import AsyncSession, async_sessionmaker, create_async_engine from sqlalchemy.orm import DeclarativeBase DATABASE_URL = "sqlite+aiosqlite:///./test.db" class Base(DeclarativeBase): pass class User(SQLAlchemyBaseUserTableUUID, Base): pass engine = create_async_engine(DATABASE_URL) async_session_maker = async_sessionmaker(engine, expire_on_commit=False) async def create_db_and_tables(): async with engine.begin() as conn: await conn.run_sync(Base.metadata.create_all) async def get_async_session() -> AsyncGenerator[AsyncSession, None]: async with async_session_maker() as session: yield session async def get_user_db(session: AsyncSession = Depends(get_async_session)): yield SQLAlchemyUserDatabase(session, User) ``` in main.py ``` import uvicorn if __name__ == "__main__": uvicorn.run("app.app:app", host="0.0.0.0", log_level="info", port=8000, reload=True) ``` ``` directory structure project -app - app.py - db.py -main.py python main.py to run program ``` requirements.txt ``` fastapi fastapi-users[sqlalchemy] uvicorn[standard] aiosqlite sqladmin ``` This line generates the issue:
https://github.com/aminalaee/sqladmin/blob/cdff6b4a641448099133b79f2d941f55f288bf29/sqladmin/helpers.py#L229 ### Steps to reproduce the bug This issue arises when I visit the details and edit pages, for example http://127.0.0.1:8000/admin/user/details/0ab165e9-7408-45ef-8179-c2d5f18f44fb http://127.0.0.1:8000/admin/user/edit/0ab165e9-7408-45ef-8179-c2d5f18f44fb ### Expected behavior _No response_ ### Actual behavior _No response_ ### Debugging material _No response_ ### Environment - Windows 11, Python 3.11 - ### Additional context _No response_
closed
2024-05-13T10:05:09Z
2024-05-13T14:54:10Z
https://github.com/aminalaee/sqladmin/issues/765
[]
BhuwanPandey
4
streamlit/streamlit
data-visualization
10,543
Set `[auth]` secrets config from environment variables
### Checklist - [x] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests. - [x] I added a descriptive title and summary to this issue. ### Summary The docs page for secrets management says: > Existing secrets management tools, such as .... will work fine in Streamlit. We just add native secrets management for times when it's useful. But the docs page for authentication says: > st.login() uses your app's secrets.toml file to configure your connection I would like to use existing secrets management tools to configure st.login secrets. As far as I can tell this is currently only possible through `.streamlit/secrets.toml`. Would it be possible to also allow setting these secrets using environment variables? ### Why? _No response_ ### How? _No response_ ### Additional Context _No response_
open
2025-02-27T15:47:19Z
2025-03-12T07:18:38Z
https://github.com/streamlit/streamlit/issues/10543
[ "type:enhancement", "feature:st.secrets", "feature:authentication" ]
Riezebos
4
google-research/bert
tensorflow
416
Is Abstractive Summarization Possible with an Encoder-Only Model like BERT?
Hello. I read the BERT paper with interest. I am planning a project on abstractive summarization. :D It's super awesome. **Is abstractive summarization possible with an encoder-only model like BERT during fine-tuning?** I think we would need a decoder for the target language for an abstractive summarization task. Could I ask whether BERT uses only an encoder when it is pre-trained? Or is there an additional option not covered in the BERT paper? (This is just my curiosity.)
open
2019-02-05T01:55:32Z
2019-02-05T01:55:32Z
https://github.com/google-research/bert/issues/416
[]
graykode
0
voxel51/fiftyone
data-science
5,334
[FR] custom colors for `color by` in embedding visualizations?
### Proposal Summary Allow to specify a custom color for each sample dot in an embeddings visualization. ### What areas of FiftyOne does this feature affect? - [ ] App: FiftyOne application - [X] Core: Core `fiftyone` Python library - [ ] Server: FiftyOne server ### Details When visualizing embeddings, we can color by filepath, tag and metadata-attributes. However, it seems that other sample fields (either float fields or color string fields such as `#FF0000`) can NOT be selected as `color by` parameter. This would be great to check label consistency (using embeddings and custom colors based on label attributes). //EDIT: It seems that float fields can be selected using [interactive embeddings plots](https://docs.voxel51.com/user_guide/plots.html#visualizing-embeddings). ### Willingness to contribute The FiftyOne Community welcomes contributions! Would you or another member of your organization be willing to contribute an implementation of this feature? - [ ] Yes. I can contribute this feature independently - [X] Yes. I would be willing to contribute this feature with guidance from the FiftyOne community - [ ] No. I cannot contribute this feature at this time
open
2025-01-02T15:28:31Z
2025-01-02T15:46:19Z
https://github.com/voxel51/fiftyone/issues/5334
[ "feature" ]
cgebbe
0
deepspeedai/DeepSpeed
machine-learning
5,793
[BUG] Excessive CPU and GPU Memory Usage with Multi-GPU Inference Using DeepSpeed
I am experiencing excessive CPU and GPU memory usage when running multi-GPU inference with DeepSpeed. Specifically, the memory usage does not scale as expected when increasing the number of GPUs. Below is the code I am using for inference: ```python import os import torch import deepspeed import time from transformers import AutoTokenizer, AutoModelForCausalLM, AutoConfig from deepspeed.runtime.zero.config import DeepSpeedZeroConfig from deepspeed.inference.config import DeepSpeedTPConfig from deepspeed.runtime.utils import see_memory_usage local_rank = int(os.getenv("LOCAL_RANK", "0")) world_size = int(os.getenv("WORLD_SIZE", "1")) model_dir = "/mnt/sgnfsdata/tolo-03-97/pretrained_models/internlm2-chat-20b" trust_remote_code = True tokenizer = AutoTokenizer.from_pretrained(model_dir, trust_remote_code=trust_remote_code) config = AutoConfig.from_pretrained(model_dir, trust_remote_code=trust_remote_code) model = AutoModelForCausalLM.from_pretrained(model_dir, torch_dtype=torch.bfloat16, trust_remote_code=trust_remote_code ) model = model.eval() see_memory_usage("After load model", force=True) tp_config = DeepSpeedTPConfig(tp_size=world_size) zero_config = DeepSpeedZeroConfig(stage=3, model_persistence_threshold=0, max_live_parameters=0, mics_shard_size=world_size ) ds_engine = deepspeed.init_inference(model=model, tensor_parallel=tp_config, dtype=torch.bfloat16, zero=zero_config, max_out_tokens=1024, replace_method="auto", replace_with_kernel_inject=True) see_memory_usage("After DS-inference init", force=True) model = ds_engine.module print("device: ", model.device) prompt = "what is deepspeed?" t0 = time.time() response = model.chat(tokenizer=tokenizer, query=prompt, history=[], max_new_tokens=1024, do_sample=True, temperature=0.8, top_p=0.8 ) t1 = time.time() print(response) print('=' * 100) print("inference time: ", t1 - t0) print('=' * 100) ``` Steps to Reproduce: 1. 
Run the script with 2 GPUs: ```bash deepspeed --num_gpus 2 main.py --ds_inference ``` ![image](https://github.com/user-attachments/assets/4cb9b326-e801-4310-afb8-a649573fa77a) ![image](https://github.com/user-attachments/assets/15931a93-b7fd-4d25-9098-0b96166a5bd4) 2. Run the script with 4 GPUs: ```bash deepspeed --num_gpus 4 main.py --ds_inference ``` ![image](https://github.com/user-attachments/assets/48049eef-f0fb-4262-a489-12e69036328a) ![image](https://github.com/user-attachments/assets/2453b5ea-880d-454c-a34a-bd096c2db24c) Expected Behavior: I expected that using 4 GPUs would reduce the memory usage per GPU, ideally halving the GPU memory usage compared to running with 2 GPUs. Actual Behavior: With 2 GPUs: CPU virtual memory: 92.87GB Each GPU memory: 37.74GB With 4 GPUs: CPU virtual memory: 162.92GB (significantly higher than expected) Each GPU memory: 37.74GB (no reduction) Questions: Why does the CPU virtual memory usage increase significantly when using more GPUs? How can I reduce the memory usage per GPU when scaling up the number of GPUs? System Info: DeepSpeed version: 0.14.4 PyTorch version: 2.3.1 Transformers version: 4.42.3 Python version: 3.10 OS: ubuntu 24.04 Additional Context: Any insights or suggestions on how to optimize the memory usage for multi-GPU inference with DeepSpeed would be greatly appreciated. Thank you!
open
2024-07-23T07:55:48Z
2024-10-10T03:02:31Z
https://github.com/deepspeedai/DeepSpeed/issues/5793
[ "bug", "inference" ]
gawain000000
3
statsmodels/statsmodels
data-science
9,353
ENH: calibration (gof) plots for multiple testing p-value correction
just some references Giai Gianetto, Quentin, Florence Combes, Claire Ramus, Christophe Bruley, Yohann Couté, and Thomas Burger. 2016. “Calibration Plot for Proteomics: A Graphical Tool to Visually Check the Assumptions Underlying FDR Control in Quantitative Experiments.” PROTEOMICS 16 (1): 29–32. https://doi.org/10.1002/pmic.201500189. SCHWEDER, T., and E. SPJØTVOLL. 1982. “Plots of P-Values to Evaluate Many Tests Simultaneously.” Biometrika 69 (3): 493–502. https://doi.org/10.1093/biomet/69.3.493. (I did not see a SUMM or roadmap issue for multiple testing)
open
2024-09-11T20:25:30Z
2024-09-11T20:25:31Z
https://github.com/statsmodels/statsmodels/issues/9353
[ "type-enh", "comp-stats" ]
josef-pkt
0
blockchain-etl/bitcoin-etl
dash
12
Factor out graph_operations.py to a separate lib
https://github.com/blockchain-etl/bitcoin-etl/blob/master/blockchainetl/service/graph_operations.py
open
2019-02-08T14:54:08Z
2020-05-18T03:04:21Z
https://github.com/blockchain-etl/bitcoin-etl/issues/12
[ "enhancement" ]
medvedev1088
0
BeanieODM/beanie
pydantic
1,010
[BUG] FindMany count ignores limit set
**Describe the bug** When getting the count of a query with a limit applied beforehand, the limit gets ignored. That is a problem, because MongoDB then has to find all documents matching the query to count them, instead of stopping after reaching the limit. In big collections (big documents, millions of them) this has a big performance impact. **To Reproduce** ```python count = await MyDocument.find_many(MyDocument.foo == "bar").limit(1000).count() ``` returns the count of all documents with foo == "bar" instead of a maximum of 1000. **Expected behavior** Returns 1000 if more than 1000 documents are found, otherwise the actual count below 1000. **Additional context** We moved from pure pymongo queries to Beanie. With pymongo this worked as I described my expectation here. So it would be nice to have the same behaviour here.
closed
2024-08-29T07:34:56Z
2024-10-08T12:32:24Z
https://github.com/BeanieODM/beanie/issues/1010
[ "bug" ]
pschoen-itsc
1
ultralytics/ultralytics
deep-learning
19,796
Issue with Ground Truth Mapping: Class IDs from a Different Dataset Mapped to COCO Class Names
### Search before asking - [ ] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions. ### Question **When validating a model trained on the COCO dataset using a different dataset, the ground truth bounding box coordinates and class IDs are taken from the label file of the new dataset. However, the class IDs are being mapped to the class names of the COCO dataset instead of the class names from the new dataset. Why is this happening?** Dataset used : https://github.com/ultralytics/ultralytics/blob/main/ultralytics/cfg/datasets/medical-pills.yaml This is the code I have run : !pip install ultralytics from ultralytics import YOLO # Load YOLOv8 pose model model = YOLO("yolov8n.pt") # Validate on the hand keypoints dataset metrics = model.val(data="/content/medical-pills.yaml", save_json=True) # Print results print(metrics) ### Additional _No response_
open
2025-03-20T10:00:47Z
2025-03-21T08:36:31Z
https://github.com/ultralytics/ultralytics/issues/19796
[ "question", "detect" ]
Agh404180
4
google/trax
numpy
1,497
why not use tensorflow keras instead
### Description ... ### Environment information ``` OS: <your answer here> $ pip freeze | grep trax # your output here $ pip freeze | grep tensor # your output here $ pip freeze | grep jax # your output here $ python -V # your output here ``` ### For bugs: reproduction and error logs ``` # Steps to reproduce: ... ``` ``` # Error logs: ... ```
closed
2021-02-24T10:01:01Z
2021-05-22T18:25:05Z
https://github.com/google/trax/issues/1497
[]
lomessa
2
strawberry-graphql/strawberry
graphql
3,605
Federated types aren't federated in field extensions
<!--- Provide a general summary of the changes you want in the title above. --> I'm submitting this as an "other issue" as I'm not sure this really classifies as a bug. This issue is derived from a [question on the Discord server](https://discord.com/channels/689806334337482765/1275412426132815903/1275412426132815903). Original text: Say you have this type in some file: ```py @strawberry.federation.type(keys=["id"]) class User: id: strawberry.ID name: str age: int @classmethod def resolve_reference(cls, id: strawberry.ID): return cls(id=id, name="Rick Astley", age=58) ``` Then this in another file: ```py @strawberry.federation.type(keys=["id"]) class User: id: strawberry.ID ``` If I had an extension like this, which extends a query that returns the minimal `User`: ```py class Ext(FieldExtension): def resolve(self, next_, source, info, **kwargs): result = next_(source, info, **kwargs) print(f"The name of the user is {result.name}!") return result ``` it would throw an error saying `AttributeError: 'User' object has no attribute 'name'`, indicating it's not been federated. We're trying to create a cache that is able to reconstruct Strawberry types from cached information, and while this works for normal types, it doesn't work for Strawberry types. My questions are: 1. Is there a better way of doing this that doesn't require the federated type? 2. If not, how do I get the federated type to be returned from the `next_` callable?
closed
2024-08-26T14:47:30Z
2025-03-20T15:56:50Z
https://github.com/strawberry-graphql/strawberry/issues/3605
[]
parafoxia
2
vitalik/django-ninja
pydantic
421
How can I avoid an error when the header is not set in HttpBearer authentication?
```py class AuthBearer(HttpBearer): def authenticate(self, request: HttpRequest, token: str): token = jwt.decode(token, settings.JWT_SECRET, algorithms="HS256") user = User.objects.filter(id=token["id"]).get() return user ``` https://django-ninja.rest-framework.com/tutorial/authentication/#http-bearer JWT authentication is currently implemented. I would like people who have not set a JWT token in the header to still be able to call the API. However, if the header is not set, a 401 Unauthorized occurs. How can this be implemented? <img width="1360" alt="Screen Shot 2022-04-12 at 9 34 54 AM" src="https://user-images.githubusercontent.com/57439651/162855131-0d7aa9d8-3d2d-467c-91e4-6edd61e02ddf.png">
closed
2022-04-12T00:37:55Z
2022-04-13T08:02:03Z
https://github.com/vitalik/django-ninja/issues/421
[]
HoJin9622
4
s3rius/FastAPI-template
fastapi
191
Loguru startup error
After initializing blank project with loguru as logger `poetry run python -m project_name` it gives an error: ```shell Traceback (most recent call last): File "<frozen runpy>", line 198, in _run_module_as_main File "<frozen runpy>", line 88, in _run_code File "C:\Users\User\Documents\Projects\project_name\project_name\__main__.py", line 4, in <module> import uvicorn File "C:\Users\User\AppData\Local\pypoetry\Cache\virtualenvs\project-name-uwC7DCiD-py3.11\Lib\site-packages\uvicorn\__init__.py", line 1, in <module> from uvicorn.config import Config File "C:\Users\User\AppData\Local\pypoetry\Cache\virtualenvs\project-name-uwC7DCiD-py3.11\Lib\site-packages\uvicorn\config.py", line 1, in <module> import asyncio File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\asyncio\__init__.py", line 8, in <module> from .base_events import * File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\asyncio\base_events.py", line 18, in <module> import concurrent.futures File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\__init__.py", line 8, in <module> from concurrent.futures._base import (FIRST_COMPLETED, File "C:\Users\User\AppData\Local\Programs\Python\Python311\Lib\concurrent\futures\_base.py", line 7, in <module> import logging File "C:\Users\User\Documents\Projects\project_name\project_name\logging.py", line 5, in <module> from loguru import logger File "C:\Users\User\AppData\Local\pypoetry\Cache\virtualenvs\project-name-uwC7DCiD-py3.11\Lib\site-packages\loguru\__init__.py", line 10, in <module> from ._logger import Core as _Core File "C:\Users\User\AppData\Local\pypoetry\Cache\virtualenvs\project-name-uwC7DCiD-py3.11\Lib\site-packages\loguru\_logger.py", line 99, in <module> from . 
import _asyncio_loop, _colorama, _defaults, _filters File "C:\Users\User\AppData\Local\pypoetry\Cache\virtualenvs\project-name-uwC7DCiD-py3.11\Lib\site-packages\loguru\_asyncio_loop.py", line 27, in <module> get_task_loop, get_running_loop = load_loop_functions() ^^^^^^^^^^^^^^^^^^^^^ File "C:\Users\User\AppData\Local\pypoetry\Cache\virtualenvs\project-name-uwC7DCiD-py3.11\Lib\site-packages\loguru\_asyncio_loop.py", line 11, in load_loop_functions get_running_loop = asyncio.get_running_loop ^^^^^^^^^^^^^^^^^^^^^^^^ AttributeError: partially initialized module 'asyncio' has no attribute 'get_running_loop' (most likely due to a circular import) ``` After renaming file `project_name/logging.py` to `project_name/log.py` it works
open
2023-10-02T12:35:53Z
2024-07-12T08:16:43Z
https://github.com/s3rius/FastAPI-template/issues/191
[]
RoyalGoose
13
numpy/numpy
numpy
28578
CI: Make `clang_TSAN` CI job use cpython_sanity docker image
See https://github.com/nascheme/cpython_sanity Should shave anywhere from 5-10 minutes of build time off each run. I'll get to this eventually but if someone else wants to take a crack at it go ahead.
open
2025-03-24T16:12:58Z
2025-03-24T16:22:18Z
https://github.com/numpy/numpy/issues/28578
[ "component: CI", "39 - free-threading" ]
ngoldbaum
0
deepfakes/faceswap
deep-learning
1,279
Error during installation on Windows when building the project for dev
*Note: For general usage questions and help, please use either our [FaceSwap Forum](https://faceswap.dev/forum) or [FaceSwap Discord server](https://discord.gg/FC54sYg). General usage questions are liable to be closed without response.* **Crash reports MUST be included when reporting bugs.** **Describe the bug** Hi, I want to contribute to this repo as an assignment for my college class. So I forked this repo, cloned it locally, and ran setup.py with a Python 3.9 interpreter installed in an Anaconda env folder, and this error occurred: ```bash INFO Running without root/admin privileges INFO The tool provides tips for installation and installs required python packages INFO Setup in Windows 10 INFO Installed Python: 3.9.0 64bit INFO Running in Conda INFO Running in a Virtual Environment INFO Encoding: cp949 INFO Installed pip: 22.2.2 Traceback (most recent call last): File "D:\faceswap\setup.py", line 1400, in <module> ENV = Environment() File "D:\faceswap\setup.py", line 76, in __init__ self.installed_packages.update(self.get_installed_conda_packages()) File "D:\faceswap\setup.py", line 277, in get_installed_conda_packages retval[item[0]] = item[1] IndexError: list index out of range Process finished with exit code 1 ``` So I went down to the code where the error occurs to see what the problem was by debugging. And I found that ![debug](https://user-images.githubusercontent.com/96936113/200165222-576725fe-24c0-4559-83fe-a46c62bf516e.png) the pkg variable should contain a package name and version, but a stray key (\x1b[0m) was inserted without any value. That's why the IndexError occurred, so I wrapped that code in a try-except that does nothing when an exception occurs. Then it works perfectly without any error. So the question I want to ask is: "Is it okay to make a PR about issues similar to this one (related to setup), and will you merge it?" **To Reproduce** Steps to reproduce the behavior: 1. Go to the root of your Faceswap folder 2. Click on setup.py and run it 3.
Scroll down to '....' 4. See error **Expected behavior** Merge the PR **Screenshots** **Desktop (please complete the following information):** - OS: Windows 10 Home 21H2 - Python Version: 3.9 - Conda Version: 22.9.0 - Commit ID: X **Additional context** Add any other context about the problem here. **Crash Report** There is no crash report about this issue
open
2022-11-06T10:57:49Z
2022-11-08T02:41:45Z
https://github.com/deepfakes/faceswap/issues/1279
[]
DonOhhhh
3
python-gitlab/python-gitlab
api
2,555
gitlab project-label output mismatch
Hello. Running `gitlab project-label list`, one node has { "id": 13176, "name": "OriginMR::4782", "description": null, "description_html": "", "text_color": "#FFFFFF", "color": "#84462c", "subscribed": false, "priority": null, "is_project_label": true } but via the API I see { "id": 7, "name": "bug", "color": "#FF0000", "text_color": "#FFFFFF", "description": null, "description_html": null, "open_issues_count": 0, "closed_issues_count": 0, "open_merge_requests_count": 0, "subscribed": false }, so it seems there are missing values.
closed
2023-04-21T12:14:20Z
2024-07-15T01:25:49Z
https://github.com/python-gitlab/python-gitlab/issues/2555
[ "need info", "stale" ]
ramarro123
4
netbox-community/netbox
django
18,982
rack changelog - show changes related to reservations in rack changelog
### NetBox version v4.2.5 ### Feature type Data model extension ### Proposed functionality Show rack reservation changelogs on the related rack changelogs ### Use case When viewing a device changelog you can see when interfaces were created, updated and deleted. The same should be made possible for racks: when viewing a rack object in the UI and you assign, edit or delete reservations related to that rack, those reservation changes should show on the rack changelog. ### Database changes unknown ### External dependencies n/a
open
2025-03-21T17:19:47Z
2025-03-23T23:22:39Z
https://github.com/netbox-community/netbox/issues/18982
[ "type: feature", "status: needs triage" ]
ITJamie
1
MaxHalford/prince
scikit-learn
154
Feature idea: biplot
Have you considered to enhance this package by [biplots](https://en.wikipedia.org/wiki/Biplot), as discussed in [this SO thread](https://stackoverflow.com/questions/39216897)? Also, I see some potential overlap with the [PCA package](https://erdogant.github.io/pca/pages/html/index.html), though prince appears fitter with regard to the methods. Also, I'm not totally convinced by the API design of the PCA package. Still, maybe there's potential to team up. I'm dropping these ideas here for further discussion.
closed
2023-05-31T17:52:12Z
2023-10-11T22:40:04Z
https://github.com/MaxHalford/prince/issues/154
[]
normanius
2
autokey/autokey
automation
82
Apostrophe [ ' ] changed to É
Hello All, When I type an apostrophe [ ' ], it is changed to a É when I use AutoKey. Do you know how I can fix this? Thank you so much! All my best, David
closed
2017-05-19T15:29:02Z
2023-05-19T19:58:20Z
https://github.com/autokey/autokey/issues/82
[]
davduf
5