repo_name stringlengths 9 75 | topic stringclasses 30 values | issue_number int64 1 203k | title stringlengths 1 976 | body stringlengths 0 254k | state stringclasses 2 values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | url stringlengths 38 105 | labels listlengths 0 9 | user_login stringlengths 1 39 | comments_count int64 0 452 |
|---|---|---|---|---|---|---|---|---|---|---|---|
labmlai/annotated_deep_learning_paper_implementations | pytorch | 131 | About DDPM q_sample why is the official implementation different from this code? | Thanks for releasing the detailed implementation.
I checked the official [DDPM](https://github.com/hojonathanho/diffusion/blob/1e0dceb3b3495bbe19116a5e1b3596cd0706c543/diffusion_tf/diffusion_utils.py#L188) and [DDIM](https://github.com/ermongroup/ddim/blob/51cb290f83049e5381b09a4cc0389f16a4a02cc9/functions/denoising.py#L65) implementations.
The q_sample formula was set up as follows:
```
# DDPM (Tensorflow)
model_mean + nonzero_mask * tf.exp(0.5 * model_log_variance) * noise
```
```
# DDIM (PyTorch)
sample = mean + mask * torch.exp(0.5 * logvar) * noise
```
But the implementation in this project is
```
mean + (var ** .5) * eps
```
I would like to know if the two implementations represent the same meaning or different meanings. | closed | 2022-06-30T07:38:58Z | 2022-07-17T04:03:23Z | https://github.com/labmlai/annotated_deep_learning_paper_implementations/issues/131 | [
"question"
] | ryhhtn | 1 |
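The two formulas above are algebraically identical: since `exp(0.5 * log_var)` equals `sqrt(var)`, both scale the noise by the standard deviation. A minimal sketch in plain Python (standing in for the TF/PyTorch tensor ops) confirming the equivalence:

```python
import math

def scale_from_logvar(log_var: float) -> float:
    # DDPM/DDIM style: exp(0.5 * log(variance))
    return math.exp(0.5 * log_var)

def scale_from_var(var: float) -> float:
    # style used in this project: variance ** 0.5
    return var ** 0.5

var = 0.0423  # any positive variance
assert math.isclose(scale_from_logvar(math.log(var)), scale_from_var(var))
```

The only substantive difference is the `nonzero_mask` in the official code, which suppresses the noise term at the final timestep.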
waditu/tushare | pandas | 1,544 | Fund holdings data does not match the data in Tonghuashun (同花顺) | 
Only OTC funds whose codes end in "OF" are shown. The holdings data, shareholding ratios, and corresponding share counts are completely inconsistent with the data in Tonghuashun. Is my calculation method wrong, or is there some other reason? | open | 2021-04-25T07:43:18Z | 2021-05-07T14:27:17Z | https://github.com/waditu/tushare/issues/1544 | [] | congcong009 | 1 |
biolab/orange3 | data-visualization | 6,893 | Add menu option to clean up copied-widget sequence numbers |
**What's your use case?**
I'd like to have a quick way to remove all the numbering added to widgets when they've been copy-pasted, so that all widget names like this
<img width="141" alt="image" src="https://github.com/user-attachments/assets/b9f33800-ae42-4d24-b2db-f61dba741877">
can easily be changed into this with one action:
<img width="90" alt="image" src="https://github.com/user-attachments/assets/e7535aad-de5c-43a9-90dc-adc5cb23b702">
**What's your proposed solution?**
Add a menu item "Remove widget copy counts" or something similar, perhaps under Edit.
**Are there any alternative solutions?**
Manual editing of all concerned widget names
| closed | 2024-09-18T15:50:26Z | 2024-09-28T12:20:54Z | https://github.com/biolab/orange3/issues/6893 | [] | wvdvegte | 2 |
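The renaming itself is mechanically simple; a hypothetical sketch (not Orange's actual code) assuming copies are suffixed in the form ` (N)`:

```python
import re

# Hypothetical helper: strip a trailing copy-count such as "File (1)" -> "File".
# Assumes Orange appends counts as " (<number>)"; names without a suffix
# are returned unchanged.
COPY_COUNT = re.compile(r"\s*\(\d+\)$")

def strip_copy_count(name: str) -> str:
    return COPY_COUNT.sub("", name)

names = ["File (1)", "Preprocess (2)", "Scatter Plot"]
cleaned = [strip_copy_count(n) for n in names]  # ['File', 'Preprocess', 'Scatter Plot']
```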
QingdaoU/OnlineJudge | django | 365 | Presentation error: Judge does not trim or wrap trailing spaces and judges 'Wrong Answer' |
Hi, thanks for your great job.
In my test cases with trailing spaces, the judge does not trim or wrap the input and output.
Therefore it judges the submission as a 'Wrong Answer' if the input has a different number of trailing spaces.
As an example, a submission with the following input and output is judged as a 'Wrong Answer'
1.in: 1 2 3\n4 5 6
1.out: 1 2 3' '\n4 5 6
| open | 2021-04-03T13:40:34Z | 2021-04-03T13:40:34Z | https://github.com/QingdaoU/OnlineJudge/issues/365 | [] | joonion | 0 |
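A common fix for this class of presentation error is to normalize both expected and actual output before comparing — strip trailing whitespace on each line and drop trailing blank lines. A sketch of that idea (not the project's actual judger code):

```python
def normalize(text: str) -> str:
    """Trim trailing spaces on each line and trailing blank lines."""
    lines = [line.rstrip() for line in text.splitlines()]
    while lines and lines[-1] == "":
        lines.pop()
    return "\n".join(lines)

def outputs_match(expected: str, actual: str) -> bool:
    return normalize(expected) == normalize(actual)

# These differ only in trailing spaces, so they should be accepted as equal.
assert outputs_match("1 2 3 \n4 5 6", "1 2 3\n4 5 6")
```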
aio-libs/aiomysql | sqlalchemy | 562 | SQL Execution Creates Implicit Transaction without Rollback. | Hello -
First of all, I want to thank you all for your work maintaining this library.
I'm currently using this library to support some read-only requirements in an application.
Unfortunately, this library implicitly begins transactions, but fails to roll them back when a connection is released back into a connection pool. The problem with this implementation is that even a read-only client needs to be aware of these transactions and choose to either 1) set autocommit, or 2) explicitly rollback after every query.
The aside from the risk of getting increasingly stale data the longer an application runs, this also locks any table accessed within that transaction from access for operations which may change the shape of the table (i.e., adding an index or column). If an alter is attempted on a table with a Read Metadata lock (which transactions acquire), that alter will hang indefinitely and also prevent any subsequent operations on the table in question.
In my experience with other libraries, if a transaction block is begun but not committed, the library will automatically emit a rollback. This can be handled one of two ways:
1. The library requires an explicit transaction context (when not in autocommit mode). When used as a context manager, the library can automatically emit a commit/rollback when exiting the context. (This is how asyncpg has implemented [transactions](https://magicstack.github.io/asyncpg/current/api/index.html#transactions)).
2. The library has an implicit transaction for every connection. It should also have an implicit rollback when that connection is released back to the connection pool (when not in autocommit mode).
A sample of the code which I'm using:
The Connector class:
```python
import asyncio
import contextlib

import aiomysql
from aiomysql.cursors import DictCursor

from app import settings


class MySQLConnector:
    """A simple connector for aiomysql."""

    __slots__ = ("pool",)

    def __init__(self, *, pool: aiomysql.Pool = None):
        self.pool: aiomysql.Pool = pool

    def __repr__(self):
        open = self.pool is not None
        return f"<{self.__class__.__name__} {open=}>"

    async def initialize(self):
        if not self.pool:
            self.pool = await create_pool()

    @contextlib.asynccontextmanager
    async def connection(self, *, c: aiomysql.Connection = None) -> aiomysql.Connection:
        if c:
            yield c
        else:
            async with self.pool.acquire() as conn:
                yield conn

    async def close(self, timeout: int = 10):
        if self.pool:
            self.pool.close()
            await asyncio.wait_for(self.pool.wait_closed(), timeout=timeout)
            self.pool = None

    def __del__(self):
        if not asyncio:  # Python is shutting down.
            return
        try:
            loop = asyncio.get_event_loop()
            if loop.is_running():
                loop.create_task(self.close())
            else:
                loop.run_until_complete(self.close())
        except RuntimeError:
            pass


async def create_pool():
    return await aiomysql.create_pool(
        user=settings.user,
        password=settings.password,
        host=settings.host,
        db=settings.db,
        cursorclass=DictCursor,
        maxsize=200,
    )
```
A sample query client:
```python
from typing import Literal


class QueryClient:
    __slots__ = ("connector",)

    def __init__(self, *, c: MySQLConnector = None):
        self.connector = c or MySQLConnector()

    async def get_the_answer(
        self, *, c: aiomysql.Connection = None
    ) -> Literal[42]:
        async with self.connector.connection(c=c) as c:  # reuse a passed-in connection
            async with c.cursor() as cur:
                await cur.execute("SELECT answer FROM everything;")
                row = await cur.fetchone()  # DictCursor rows are mappings
                return row["answer"]
```
Reproduce the locked state by:
1. Execute QueryClient.get_the_answer()
2. In a separate connection, run an `ALTER` on the table (any will do).
The alter will hang indefinitely. At this point, even disconnecting the client which ran the `ALTER` will leave the query pending server side and block any other access attempts on the table. The only mitigation is to close all connections by the first client, or manually delete the pending queries in MySQL.
You can verify that there is an active transaction held by the first client by running the following commands and cross-referencing thread IDs in the process list with thread IDs listed as active under the `Transactions` section of the engine status.
```sql
SHOW FULL PROCESSLIST;
SHOW ENGINE InnoDB STATUS;
```
Now, this can be mitigated by either running in autocommit mode or explicitly rolling back after every query, but I do think both options are less than ideal and are prone to individuals entering the state which I described above. | closed | 2021-01-26T18:12:30Z | 2022-07-11T01:00:24Z | https://github.com/aio-libs/aiomysql/issues/562 | [
"duplicate",
"enhancement"
] | seandstewart | 1 |
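Option 2 above (an implicit rollback when the connection is released) can be approximated today by wrapping pool acquisition. The sketch below uses a stub object in place of a real `aiomysql.Connection` purely to show the shape of the pattern; the `get_autocommit()`/`rollback()` calls mirror aiomysql's connection API:

```python
import asyncio
import contextlib

class StubConnection:
    """Stands in for aiomysql.Connection; records whether rollback ran."""
    def __init__(self, autocommit=False):
        self.rolled_back = False
        self._autocommit = autocommit

    def get_autocommit(self):
        return self._autocommit

    async def rollback(self):
        self.rolled_back = True

@contextlib.asynccontextmanager
async def rollback_on_release(conn):
    """Yield a connection; roll back any implicit transaction on release."""
    try:
        yield conn
    finally:
        if not conn.get_autocommit():
            await conn.rollback()

async def demo():
    conn = StubConnection()
    async with rollback_on_release(conn):
        pass  # run read-only queries here
    return conn.rolled_back

assert asyncio.run(demo())  # the implicit transaction was rolled back
```

In a real application the wrapper would sit between `pool.acquire()` and the caller, so read-only clients never hold a stale snapshot or a metadata lock after releasing a connection.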
jmcnamara/XlsxWriter | pandas | 811 | Bug cannot put temperature character as sheet name '25°C' | Hi,
I am using Python version 3.8.
I want to create a sheet named '25°C'.
° is an allowed temperature symbol for sheet names, but I get an exception from the library when trying to create such a name.
Expected:
It should be possible to create sheet names with non-breaking special characters.
| closed | 2021-06-14T11:16:44Z | 2021-06-14T12:37:21Z | https://github.com/jmcnamara/XlsxWriter/issues/811 | [] | ievgennaida | 0 |
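For what it's worth, Excel's documented sheet-name restrictions only forbid the characters `[ ] : * ? / \`, names longer than 31 characters, and leading/trailing apostrophes — the degree sign is not among them, so '25°C' should be a legal name and the exception likely points at something else (e.g. string encoding). A rough stdlib check of those rules (not XlsxWriter's own validator):

```python
INVALID_SHEETNAME_CHARS = set('[]:*?/\\')

def is_plausible_sheetname(name: str) -> bool:
    """Rough check against Excel's documented sheet-name rules."""
    return (
        0 < len(name) <= 31
        and not set(name) & INVALID_SHEETNAME_CHARS
        and not name.startswith("'")
        and not name.endswith("'")
    )

assert is_plausible_sheetname('25°C')      # degree sign is allowed
assert not is_plausible_sheetname('25:C')  # colon is not
```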
hack4impact/flask-base | sqlalchemy | 84 | Should we add csslint, eslint? | Used by CodeClimate | closed | 2016-12-11T21:16:09Z | 2021-08-31T04:46:58Z | https://github.com/hack4impact/flask-base/issues/84 | [] | yoninachmany | 2 |
allenai/allennlp | data-science | 4,938 | Demo pages don't scroll right | Since this morning, Jan 28, the Demo pages no longer allow scrolling right when content overflows | closed | 2021-01-28T13:49:14Z | 2021-02-11T16:40:30Z | https://github.com/allenai/allennlp/issues/4938 | [
"bug"
] | lenyabloko | 2 |
graphql-python/graphene | graphql | 1,371 | graphene/docs/execution/queryvalidation.rst missing usage example | From graphene/docs/execution/queryvalidation.rst
> **Usage**
> Here is how you would implement depth-limiting on your schema.
The section is missing a link or example code | closed | 2021-09-22T10:36:46Z | 2021-10-08T16:23:58Z | https://github.com/graphql-python/graphene/issues/1371 | [
"🐛 bug"
] | reubenosborne1 | 6 |
plotly/dash | jupyter | 2,606 | [BUG] Duplicate callback outputs error when background callbacks or long_callbacks share a cancel input | **Describe your context**
Currently I am trying to migrate a Dash application from using versions 2.5.1 to a newer one, and as suggested for Dash >2.5.1 I am moving from using ```long_callbacks``` to using the ```dash.callback``` with ```background=True```, along with moving to using ```background_callback_manager```.
Environment
```
dash >2.5.1
```
**Describe the bug**
Updating to Dash 2.6.0+ causes previously working ```long_callbacks``` to raise a duplicate callback output error, even though there are no duplicate outputs. The error is still present even after switching to ```dash.callback``` with ```background=True``` and moving to using ```background_callback_manager```. This only appears when multiple background callbacks (or long_callbacks) share a cancel input.
Here is a [link](https://github.com/C-C-Shen/dash_background_callback_test) to a test repo that shows this, with a Dash 2.5.1 version that works and a Dash 2.6.1 version that does not.
Here is a code snippet from the example repo:
```
@callback(
    output=Output("paragraph_id_1", "children"),
    inputs=Input("button_id_1", "n_clicks"),
    prevent_initial_call=True,
    background=True,
    running=[
        (Output("button_id_1", "disabled"), True, False),
    ],
    cancel=[Input("cancel_button_id", "n_clicks")],
)
def update_clicks(n_clicks):
    time.sleep(2.0)
    return [f"Clicked Button 1: {n_clicks} times"]


@callback(
    output=Output("paragraph_id_2", "children"),
    inputs=Input("button_id_2", "n_clicks"),
    prevent_initial_call=True,
    background=True,
    running=[
        (Output("button_id_2", "disabled"), True, False),
    ],
    cancel=[Input("cancel_button_id", "n_clicks")],
)
def update_clicks(n_clicks):
    time.sleep(2.0)
    return [f"Clicked Button 2: {n_clicks} times"]
```
**Expected behavior**
Sharing the same cancel parameter between multiple callbacks should work like it does in Dash 2.5.1. Moving to Dash 2.6.0+ should probably not be causing a duplicate callback output error when no outputs are duplicated.
- if this is expected then it should be mentioned in the changelog
**Screenshots**
This an example error that is associated with the ```new.py``` in the example repo linked above

| closed | 2023-07-31T19:02:22Z | 2023-08-01T11:57:05Z | https://github.com/plotly/dash/issues/2606 | [] | C-C-Shen | 1 |
streamlit/streamlit | machine-learning | 10,023 | Add `height` and `key` to `st.columns` | ### Checklist
- [X] I have searched the [existing issues](https://github.com/streamlit/streamlit/issues) for similar feature requests.
- [X] I added a descriptive title and summary to this issue.
### Summary
It would be great if the features of st.columns() and st.container() could be combined in a single API call, say st.columns(). For example, if one wants to create two scrollable fixed-height side-by-side columns one has to first instantiate two columns with cols = st.columns(2), and then create a st.container() within each column for a specified fixed height.
### Why?
Presently, st.columns() allows one to create multiple columns side by side, but one cannot make them fixed-height or scrollable. And with st.container(), one can make the container fixed-height & scrollable, but there is no way to place containers side by side. This is frustrating when writing code since one has to remember these two types of behavior for the two functions when they could easily be rolled into a single function.
### How?
The present st.columns() API call is:
st.columns(spec, *, gap="small", vertical_alignment="top", border=False)
and the st.container() API call is:
st.container(*, height=None, border=None, key=None)
Simply add to st.columns() three of the parameters of st.container(), namely height, border and key. If you can do this, there will be no need for st.container(), which will make it far easier for programmers.
### Additional Context
_No response_ | open | 2024-12-13T23:56:31Z | 2024-12-22T02:18:15Z | https://github.com/streamlit/streamlit/issues/10023 | [
"type:enhancement",
"feature:st.columns"
] | iscoyd | 4 |
yzhao062/pyod | data-science | 367 | Issues with Saving and Loading SUOD - Fails to find bps_prediction.joblib. | I train the SUOD model on a training compute and then deploy it to an inferece compute. When I try loading it on an inference compute, it fails to find a file bps_prediction.joblib.
_**FileNotFoundError: [Errno 2] No such file or directory: '/azureml-envs/azureml_59f19523a58c011168cb1902c0a74dff/lib/python3.6/site-packages/suod/models/saved_models/bps_prediction.joblib'**_
below is my code
x_train, x_test, y_train, y_test = train_test_split(training_data, training_data, test_size=0.1, random_state=42)
# initialized a group of outlier detectors for acceleration
detector_list = [LOF(n_neighbors=15),
COPOD(), IForest(n_estimators=150)]
clf = SUOD(base_estimators=detector_list, n_jobs=2, combination='average',
verbose=False)
clf.fit(x_train)
# get outlier scores
y_train_scores = clf.decision_scores_ # raw outlier scores on the train data
y_test_scores = clf.decision_function(x_test) # predict raw outlier scores on test
decisions= clf.predict(x_test)
os.makedirs('outputs', exist_ok=True)
fname=os.path.join('outputs/', model_name)
##Save the model
joblib.dump(clf, filename=fname, compress=1)
##Load the model on a different compute
loadedmodel= joblib.load(model_path)
Where do I find this file "bps_prediction.joblib" or what is the way around ?
| closed | 2022-01-27T15:28:52Z | 2022-01-27T19:35:24Z | https://github.com/yzhao062/pyod/issues/367 | [] | bhowmiks | 1 |
Miserlou/Zappa | flask | 1,678 | [Feature] add dependency monitoring to repo | Hi,
i don't have an issue (yet) but see lots of open PRs for dependencies created by doppins.
I would suggest adding the repo to a dependency monitoring service such as [requires.io](https://requires.io) that gives an easy overview on outdated packages.
kind regards | open | 2018-10-24T16:18:10Z | 2018-10-24T16:18:10Z | https://github.com/Miserlou/Zappa/issues/1678 | [] | cl1ent | 0 |
JaidedAI/EasyOCR | machine-learning | 1,128 | Process finished with exit code -1073741795 (0xC000001D) | Description of the problem:
EasyOcr is successfully installed, the program starts, but the text from the picture does not output at all.
Code:
import easyocr
Reader = easyocr.Reader(['ru', 'en'])  # this needs to be run only once to load the model into memory
result = Reader.readtext('12.jpg')
Output:
neither CUDA nor MPS are available — the CPU is used by default. Note. This module works much faster with a GPU.
The process is completed with exit code -1073741795 (0xC000001D).
My torment:
I ran into a big problem when using easyOCR. When working on a laptop with an i5-8250U processor, I installed and tested your library for the first time; it started almost immediately without problems and recognized the text from the image, which I was very happy about.
(I was developing a program for classifying PDF files by keywords.) At the end of the practice, I put the virtual environment with this project on a flash drive, then tried running it on an old laptop (i3-2100m, GT-610m) and the library refused to work on it. Then I tried to run it on a PC (i7-4960X, RTX 2060, 64 GB RAM). I spent 10 hours trying to run this library; in the end I didn't succeed. The attempts I made during these 10 hours:
Reinstalling EasyOCR
Reinstalling PIL, V2, Torch
I was poking around in the code, I didn't understand anything
Created a new virtual environment, reinstalled everything, nothing helped
Old Python and other Python versions are installed.
Changed dependency versions randomly
I tried to install it very carefully several times according to the manual:
pip install torch torchvision torchaudio
pip install easyocr
And it didn't help, it outputs "The process is completed with exit code -1073741795 (0xC000001D)
I don't know what to do anymore, but I'm asking for help with this problem. | open | 2023-09-01T21:05:34Z | 2024-11-08T02:52:42Z | https://github.com/JaidedAI/EasyOCR/issues/1128 | [] | Michaelufcb | 6 |
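A note on the exit code: -1073741795 is the signed 32-bit form of the Windows NTSTATUS 0xC000001D (STATUS_ILLEGAL_INSTRUCTION), which is commonly reported when a PyTorch build uses CPU instructions (e.g. AVX/AVX2) that the processor lacks — consistent with the library working on the newer i5-8250U but not on the older machines described above. The conversion itself:

```python
# Windows reports NTSTATUS codes as signed 32-bit exit codes;
# mask to 32 bits to recover the familiar hex form.
exit_code = -1073741795
ntstatus = exit_code & 0xFFFFFFFF
assert hex(ntstatus) == '0xc000001d'  # STATUS_ILLEGAL_INSTRUCTION
```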
anselal/antminer-monitor | dash | 125 | need auto OC | Most antminer machines need to manually adjust the frequency, hoping to increase the automatic overclocking function. | open | 2018-09-22T02:36:28Z | 2018-10-04T07:03:52Z | https://github.com/anselal/antminer-monitor/issues/125 | [
":star: feature request"
] | lhz7797251 | 1 |
python-visualization/folium | data-visualization | 1,997 | Format time to show year only in choropleth time slider | There may already be a way to do this, but I was not able to stumble on it...
I would like to be able to control the date format in the slider bar in a time slider choropleth. In my case (and I believe many others), I am describing annual trends and all that is required in the slider is the year.
I am requesting:
1) Ability to set the time format displayed in the choropleth time slider itself by passing a date format. or setting a "show year only" variable, or if this is already present,
2) Enhanced documentation describing how this can be modified.
| closed | 2024-09-24T15:56:44Z | 2024-10-18T19:20:10Z | https://github.com/python-visualization/folium/issues/1997 | [
"work in progress"
] | TenThumbsWPG | 0 |
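For context, `TimeSliderChoropleth`'s styledict is keyed by epoch seconds expressed as strings, so annual data maps naturally to one key per year; the request above is only about how those timestamps are rendered on the slider. Building the yearly keys with the stdlib (assuming the epoch-seconds convention):

```python
import calendar
from datetime import datetime

def year_to_epoch_key(year: int) -> str:
    """Epoch-second styledict key (UTC midnight, Jan 1) for a yearly value."""
    return str(calendar.timegm(datetime(year, 1, 1).timetuple()))

keys = [year_to_epoch_key(y) for y in (2019, 2020, 2021)]
assert keys[1] == '1577836800'  # 2020-01-01T00:00:00Z
```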
psf/black | python | 3,857 | Indentation of split string in dict would be better align to the value rather than key |
**Describe the bug**
Indentation of split string in dict would be better align to the value rather than key
**To Reproduce**
For example, take this code:
```python
a_dict = {
    "key1": "value1",
    "key2": "long longlonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglong"
    "long long value",
}
```
**Expected behavior**
I think it would be visually more pleasing to have it aligned by value
```python
a_dict = {
    "key1": "value1",
    "key2": "long longlonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglong"
            "long long value",
}
```
currently it aligns it by key
```python
a_dict = {
    "key1": "value1",
    "key2": "long longlonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglonglong"
    "long long value",
}
``` | closed | 2023-08-30T06:03:59Z | 2023-10-28T05:08:21Z | https://github.com/psf/black/issues/3857 | [
"T: bug"
] | masterspelling | 4 |
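For context on why both layouts are valid at all: the split form relies on Python's implicit concatenation of adjacent string literals, so the indentation of the continuation line is purely cosmetic — the resulting value is identical either way:

```python
value = ("long part one "
         "long part two")
assert value == "long part one long part two"

# Indentation of the second literal does not affect the result,
# which is why the disagreement here is purely about style.
also = ("long part one "
"long part two")
assert also == value
```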
pyppeteer/pyppeteer | automation | 373 | Not working | ```
...
DeprecationWarning: There is no current event loop
asyncio.get_event_loop().run_until_complete(run())
```
```python
asyncio.run(run())
```
```
...
await client.send('Page.enable'),
pyppeteer.errors.NetworkError: Protocol error Page.enable: Target closed.
```
```bash
~
➜ python --version
Python 3.10.4
``` | open | 2022-03-30T05:13:17Z | 2023-09-08T14:10:56Z | https://github.com/pyppeteer/pyppeteer/issues/373 | [] | s3rgeym | 6 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 1,135 | Hello, | Hello,
using a new pyenv environment with the following versions and lib installed (after doing the `pip3 install torch torchvision torchaudio`)
```
% python --version
Python 3.10.4
% pyenv --version
pyenv 2.3.0
% pip list
Package Version
------------------ ---------
certifi 2022.9.14
charset-normalizer 2.1.1
idna 3.4
numpy 1.23.3
Pillow 9.2.0
pip 22.2.2
requests 2.28.1
setuptools 58.1.0
torch 1.12.1
torchaudio 0.12.1
torchvision 0.13.1
typing_extensions 4.3.0
urllib3            1.26.12
```

I am having an error when trying to install the requirements `pip3 install -r requirements.txt`:

```
  Getting requirements to build wheel ... done
Preparing metadata (pyproject.toml) ... error
error: subprocess-exited-with-error
× Preparing metadata (pyproject.toml) did not run successfully.
│ exit code: 1
╰─> [29 lines of output]
Traceback (most recent call last):
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 156, in prepare_metadata_for_build_wheel
hook = backend.prepare_metadata_for_build_wheel
AttributeError: module 'sipbuild.api' has no attribute 'prepare_metadata_for_build_wheel'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 363, in <module>
main()
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 345, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
File "/Users/adpablos/.pyenv/versions/3.10.4/envs/real-time-voice-cloning/lib/python3.10/site-packages/pip/_vendor/pep517/in_process/_in_process.py", line 160, in prepare_metadata_for_build_wheel
whl_basename = backend.build_wheel(metadata_directory, config_settings)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/api.py", line 46, in build_wheel
project = AbstractProject.bootstrap('wheel',
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/abstract_project.py", line 87, in bootstrap
project.setup(pyproject, tool, tool_description)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/project.py", line 584, in setup
self.apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-install-eenv0a8p/pyqt5_77eef741f3924b23ad38cc2613c5171c/project.py", line 63, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/pyqtbuild/project.py", line 70, in apply_user_defaults
super().apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/sipbuild/project.py", line 236, in apply_user_defaults
self.builder.apply_user_defaults(tool)
File "/private/var/folders/7t/5snbn06x5j17zqr251ryl7p40000gn/T/pip-build-env-fp3sbooh/overlay/lib/python3.10/site-packages/pyqtbuild/builder.py", line 67, in apply_user_defaults
raise PyProjectOptionException('qmake',
sipbuild.pyproject.PyProjectOptionException
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: metadata-generation-failed
× Encountered error while generating package metadata.
╰─> See above for output.
note: This is an issue with the package mentioned above, not pip.
hint: See above for details.
```
Any idea what I am missing?
Thanks in advance!
__Originally posted by @adpablos in https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1113__ | closed | 2022-11-19T17:25:27Z | 2022-12-02T08:51:51Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1135 | [] | ImanuillKant1 | 0 |
nonebot/nonebot2 | fastapi | 2,542 | Plugin: SakuraFrp | ### PyPI project name
nonebot-plugin-enatfrp
### Plugin import package name
nonebot_plugin_enatfrp
### Tags
[]
### Plugin configuration
_No response_ | closed | 2024-01-20T14:17:23Z | 2024-01-21T11:48:51Z | https://github.com/nonebot/nonebot2/issues/2542 | [
"Plugin"
] | eya46 | 1 |
google/seq2seq | tensorflow | 113 | Character Seq2Seq Attention model | For training a character-level model, do we have to provide the delimiter for training as
```
python -m bin.train \
  .....
  --input_pipeline_train "
    class: ParallelTextInputPipeline
    params:
      source_delimiter: ""
      target_delimiter: ""
  .....
  "
```
Similarly for input_pipeline_dev and also when inferring?
I tried as suggested in https://google.github.io/seq2seq/tools/, passing delimiter flag during training.
```
python -m bin.train \
  --delimiter="" \
  --config_paths="....."
```
but I don't think it is using this parameter (looking at the code as well as the results).
For a character-level seq2seq model, is it necessary to generate the vocabulary using generate_vocab.py as solved by #108, or can we use our own predefined vocabulary? | closed | 2017-03-27T15:23:19Z | 2017-07-09T15:20:19Z | https://github.com/google/seq2seq/issues/113 | [] | shubhamagarwal92 | 2 |
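For a character-level model, an empty delimiter simply means one token per character; a vocabulary can then be counted the same way a word-level one would be, e.g. (illustrative sketch, not the project's `generate_vocab.py`):

```python
from collections import Counter

def char_tokens(line: str) -> list[str]:
    # Empty delimiter == one token per character (spaces kept as tokens too).
    return list(line)

corpus = ["abc cab", "bca"]
vocab = Counter()
for line in corpus:
    vocab.update(char_tokens(line))

assert vocab["a"] == 3 and vocab["b"] == 3 and vocab["c"] == 3 and vocab[" "] == 1
```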
vitalik/django-ninja | rest-api | 1,158 | Exceptions log level | **Is your feature request related to a problem? Please describe.**
_Feature to change log level of exceptions and/or remove the logging._
By default in operation.py any raised exception during endpoint handling inside context manager activates this part of the code regardless of what kind of exception it is.

_By default **django** treats 404 and such kind of errors with **WARNING**._
So even 404 in django extra -> ERROR log.
Cuz of it we can't lets say create custom django log handler that sends real ERROR's to email/messanger and etc.
_Ideally logic should be like that:_
_exception_handlers handled exception? -> WARNING
_exception_handlers not handled exception? -> ERROR
_There is def on_exception() in NinjaAPI for that_
So i am not sure why there is some logging logic in operation.py before we find a handler for an Exception.
**Describe the solution you'd like**
**Add ability to change and/or remove exception logging in operation.py** | closed | 2024-05-09T11:52:05Z | 2024-05-09T11:56:05Z | https://github.com/vitalik/django-ninja/issues/1158 | [] | mrisedev | 1 |
pytest-dev/pytest-django | pytest | 1,134 | Tables in test db are not created for models with app_label | I use the `--no-migrations` option to create a test database by inspecting all models. This usually works well, but I ran into a problem with Django models that have an `app_label` Meta attribute. Such models are ignored and their tables are not created automatically.
Model for example:
```python
class SAPDepartment(models.Model):
    class Meta:
        app_label = 'structure'

    sap_name = models.CharField(max_length=150, null=False, blank=False)
    sap_id = models.IntegerField()
    up_dep = models.ForeignKey('self', related_name='low_dep', on_delete=models.SET_NULL, null=True)
```
Tables list if we do not use `app_label` (`sapdepartment` table exists):
```
test_procurement_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+---------------------------------------+-------+----------
...
public | django_content_type | table | postgres
public | django_session | table | postgres
public | info_db_sapdepartment | table | postgres
public | info_db_sapemployee | table | postgres
public | material_analogue_employee | table | postgres
...
(27 rows)
```
Tables list if we use `app_label` (there is not `sapdepartment` table):
```
test_procurement_db=# \dt
List of relations
Schema | Name | Type | Owner
--------+---------------------------------------+-------+----------
public | auth_group | table | postgres
public | auth_group_permissions | table | postgres
public | auth_permission | table | postgres
public | auth_user | table | postgres
public | auth_user_groups | table | postgres
public | auth_user_user_permissions | table | postgres
public | django_admin_log | table | postgres
public | django_celery_beat_clockedschedule | table | postgres
public | django_celery_beat_crontabschedule | table | postgres
public | django_celery_beat_intervalschedule | table | postgres
public | django_celery_beat_periodictask | table | postgres
public | django_celery_beat_periodictasks | table | postgres
public | django_celery_beat_solarschedule | table | postgres
public | django_content_type | table | postgres
public | django_session | table | postgres
public | material_analogue_employee | table | postgres
public | material_analogue_group | table | postgres
public | material_analogue_group_members | table | postgres
public | material_analogue_lot | table | postgres
public | material_analogue_materialrequest | table | postgres
public | material_analogue_materialrequestfile | table | postgres
public | material_analogue_procurer | table | postgres
public | material_analogue_requeststatus | table | postgres
public | material_analogue_subsection | table | postgres
public | material_analogue_supplier | table | postgres
(25 rows)
| open | 2024-07-29T10:41:31Z | 2024-07-29T12:01:19Z | https://github.com/pytest-dev/pytest-django/issues/1134 | [] | kosdmit | 0 |
quokkaproject/quokka | flask | 609 | cli: deploy scp | Transfer static site via scp
https://github.com/rochacbruno/quokka_ng/issues/62 | open | 2018-02-07T01:46:29Z | 2018-02-07T01:46:29Z | https://github.com/quokkaproject/quokka/issues/609 | [
"1.0.0",
"hacktoberfest"
] | rochacbruno | 0 |
rpicard/explore-flask | flask | 25 | Ch. 8: Tweak explanation of context | From @willkg
> [...] Then you can tweak the rest of the paragraph to talk about the environment context formed by Flask's context processors and also the developer's context processors.
| open | 2014-01-09T18:21:25Z | 2014-01-09T18:21:50Z | https://github.com/rpicard/explore-flask/issues/25 | [
"content suggestion"
] | rpicard | 0 |
piskvorky/gensim | machine-learning | 2,916 | Add script to verify that autogenerated docs are up to date | I see no real alternative either. Doesn't moving parts of our docs + testing elsewhere simply shift the problem around? That new place will still need testing, whenever things change. No difference.
Except code + docs in one place (=now) makes more sense conceptually IMO.
Yes, scripts / hooks that verify docs are in-sync, preferably at PR merge-time, would be nice.
_Originally posted by @piskvorky in https://github.com/RaRe-Technologies/gensim/pull/2907#issuecomment-671317949_ | closed | 2020-08-16T11:01:16Z | 2022-04-22T10:11:47Z | https://github.com/piskvorky/gensim/issues/2916 | [
"documentation",
"housekeeping"
] | mpenkov | 1 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 550 | Scipy distances in Contrastive Loss | Hi!
I would like to know if scipy distance measures are compatible with Contrastive Loss. And if so, would it be possible to see an example of use?
Thank you very much :) | closed | 2022-11-22T07:57:49Z | 2022-11-22T11:14:07Z | https://github.com/KevinMusgrave/pytorch-metric-learning/issues/550 | [
"question"
] | evablanco | 1 |
pytorch/pytorch | deep-learning | 149,627 | DISABLED test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16 (__main__.TestForeachCUDA) | Platforms: linux, slow
This test was disabled because it is failing in CI. See [recent examples](https://hud.pytorch.org/flakytest?name=test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16&suite=TestForeachCUDA&limit=100) and the most recent trunk [workflow logs](https://github.com/pytorch/pytorch/runs/39097709474).
Over the past 3 hours, it has been determined flaky in 3 workflow(s) with 6 failures and 3 successes.
**Debugging instructions (after clicking on the recent samples link):**
DO NOT ASSUME THINGS ARE OKAY IF THE CI IS GREEN. We now shield flaky tests from developers so CI will thus be green but it will be harder to parse the logs.
To find relevant log snippets:
1. Click on the workflow logs linked above
2. Click on the Test step of the job so that it is expanded. Otherwise, the grepping will not work.
3. Grep for `test_binary_op_with_scalar_self_support__foreach_pow_is_fastpath_True_cuda_int16`
4. There should be several instances run (as flaky tests are rerun in CI) from which you can study the logs.
Test file path: `test_foreach.py`
cc @clee2000 @crcrpar @mcarilli @janeyx99 | open | 2025-03-20T15:42:47Z | 2025-03-20T15:42:52Z | https://github.com/pytorch/pytorch/issues/149627 | [
"triaged",
"module: flaky-tests",
"skipped",
"module: mta"
] | pytorch-bot[bot] | 1 |
alteryx/featuretools | data-science | 2,106 | EPIC: Refactor `DeepFeatureSynthesis._build_features` | `DeepFeatureSynthesis._build_features` is in need of a refactor to improve speed, maintainability, and scalability.
There are many optimizations that can be made underneath this function to improve performance while maintaining the API signature. As a rough benchmark, the `get_valid_primitives` function takes 2 hours to run on the retail entityset to produce a little over 5 million feature definitions. This can be optimized to take a much shorter time.
## Functions should be more granular and testable:
For example, one of the most granular functions should take two arguments: a data structure which is a hashmap of features keyed by their ColumnSchema, and an input set (e.g. Numeric, Boolean). It should return a list of lists of all feature combinations that match this input-type signature. This function should be pure, which would improve maintainability by being very readable and testable.
## Optimizations:
### Caching
Using the example above, this function could be wrapped with an LRU cache decorator that would allow primitives that have input signatures matching other primitives to return immediately. Memory issues should be of little concern since these calculations can be performed using small data structures containing logical types only and no data, but this should be measured and tested.
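This caching idea can be sketched concretely (hypothetical names; the real `DeepFeatureSynthesis` internals differ): a pure matcher over hashable arguments that `functools.lru_cache` can short-circuit for repeated input signatures:

```python
from functools import lru_cache
from itertools import product

# Hypothetical sketch: features keyed by a hashable schema tag (e.g.
# "Numeric", "Boolean"); an input signature is a tuple of such tags.
# Both arguments are hashable, so lru_cache can return immediately for
# primitives that share an input signature with an earlier primitive.
@lru_cache(maxsize=None)
def feature_combinations(features_by_schema, input_signature):
    lookup = dict(features_by_schema)  # frozenset of pairs -> dict
    pools = [lookup.get(schema, ()) for schema in input_signature]
    return tuple(product(*pools))

features = frozenset({
    ("Numeric", ("age", "income")),
    ("Boolean", ("is_active",)),
})
combos = feature_combinations(features, ("Numeric", "Boolean"))
# combos == (("age", "is_active"), ("income", "is_active"))
```

A second call with an equal signature is served from the cache without recomputing the combinations.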
### Data Structures
Features and primitives should be hashed by their associated logical types for faster lookup. | open | 2022-06-10T21:02:00Z | 2023-01-17T15:36:28Z | https://github.com/alteryx/featuretools/issues/2106 | [
"enhancement",
"refactor",
"tech debt"
] | dvreed77 | 0 |
ets-labs/python-dependency-injector | asyncio | 643 | unable to install on system with cp949 default locale | When installing `dependency-injector` with `pip` on a Windows system where the default locale is `cp949` ([EUC-KR](https://en.wikipedia.org/wiki/Unified_Hangul_Code)), the setup crashes as follows:
```
File "C:\Users\kang\AppData\Local\Temp\pip-install-s5l83_ea\dependency-injector_1f5595dca62448e69eff82ed9399ee9d\setup.py", line 15, in
description = readme_file.read()
^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'cp949' codec can't decode byte 0xf0 in position 8665: illegal multibyte sequence
```
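For context, the crash comes from `open()` defaulting to the locale's preferred encoding; a hedged sketch of the conventional fix is to read the long description with an explicit encoding (function and file names here are illustrative):

```python
from pathlib import Path

def read_description(path="README.rst"):
    # Read with an explicit encoding instead of the platform default,
    # so non-ASCII bytes in the README don't crash under cp949 locales.
    return Path(path).read_text(encoding="utf-8")
```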
It is unclear whether this happens with other locales. | closed | 2022-12-13T06:06:44Z | 2022-12-19T02:25:02Z | https://github.com/ets-labs/python-dependency-injector/issues/643 | [
"bug"
] | ebr | 2 |
jumpserver/jumpserver | django | 14,211 | [Bug] jumpserver/jms_all:latest (v4.2.0) upgrade fails and the container cannot start | ### Product version
v4.2.0
### Version type
- [X] Community edition
- [ ] Enterprise edition
- [ ] Enterprise trial edition
### Installation method
- [ ] Online installation (one-command install)
- [ ] Offline package installation
- [X] All-in-One
- [ ] 1Panel
- [ ] Kubernetes
- [ ] Source installation
### Environment information
1. Operating system: CentOS 7.9
2. Docker environment
Client: Docker Engine - Community
Version: 26.1.4
API version: 1.45
Go version: go1.21.11
Git commit: 5650f9b
Built: Wed Jun 5 11:32:04 2024
OS/Arch: linux/amd64
Context: default
Server: Docker Engine - Community
Engine:
Version: 26.1.4
API version: 1.45 (minimum version 1.24)
Go version: go1.21.11
Git commit: de5c9cf
Built: Wed Jun 5 11:31:02 2024
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.6.33
GitCommit: d2d58213f83a351ca8f528a95fbd145f5654e957
runc:
Version: 1.1.12
GitCommit: v1.1.12-0-g51d5e94
docker-init:
Version: 0.19.0
GitCommit: de40ad0
3. Persistence uses local directory mapping; data is stored under the /data/jumpserver directory.
### 🐛 Bug description
After restarting the container with the latest image (corresponding to v4.2.0), the database fails to migrate; `docker logs` shows the following error:
```bash
docker logs -f jumpserver
rm: cannot remove '/opt/jumpserver/data': Device or resource busy
rm: cannot remove '/opt/koko/data': Device or resource busy
rm: cannot remove '/opt/lion/data': Device or resource busy
rm: cannot remove '/opt/chen/data': Device or resource busy
rm: cannot remove '/var/log/nginx': Device or resource busy
>> Init database
External database skip start, 172.17.0.1
>> Init nginx
External redis server skip start, 172.17.0.1
Starting periodic command scheduler: cron.
>> Update database structure
Traceback (most recent call last):
File "/opt/jumpserver/./jms", line 21, in <module>
django.setup()
File "/opt/py3/lib/python3.11/site-packages/django/__init__.py", line 24, in setup
apps.populate(settings.INSTALLED_APPS)
File "/opt/py3/lib/python3.11/site-packages/django/apps/registry.py", line 124, in populate
app_config.ready()
File "/opt/jumpserver/apps/common/apps.py", line 23, in ready
django_ready.send(CommonConfig)
File "/opt/py3/lib/python3.11/site-packages/django/dispatch/dispatcher.py", line 176, in send
return [
^
File "/opt/py3/lib/python3.11/site-packages/django/dispatch/dispatcher.py", line 177, in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/jumpserver/apps/orgs/signal_handlers/common.py", line 50, in subscribe_orgs_mapping_expire
orgs_mapping_for_memory_pub_sub.subscribe(
File "/opt/jumpserver/apps/common/utils/connection.py", line 29, in subscribe
ps.subscribe(self.ch)
File "/opt/py3/lib/python3.11/site-packages/redis/client.py", line 927, in subscribe
ret_val = self.execute_command("SUBSCRIBE", *new_channels.keys())
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py3/lib/python3.11/site-packages/redis/client.py", line 756, in execute_command
self.connection = self.connection_pool.get_connection(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py3/lib/python3.11/site-packages/redis/connection.py", line 1104, in get_connection
connection.connect()
File "/opt/py3/lib/python3.11/site-packages/redis/connection.py", line 288, in connect
self.on_connect()
File "/opt/py3/lib/python3.11/site-packages/redis/connection.py", line 354, in on_connect
auth_response = self.read_response()
^^^^^^^^^^^^^^^^^^^^
File "/opt/py3/lib/python3.11/site-packages/redis/connection.py", line 512, in read_response
response = self._parser.read_response(disable_decoding=disable_decoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py3/lib/python3.11/site-packages/redis/_parsers/resp2.py", line 15, in read_response
result = self._read_response(disable_decoding=disable_decoding)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/py3/lib/python3.11/site-packages/redis/_parsers/resp2.py", line 38, in _read_response
raise error
redis.exceptions.AuthenticationError: AUTH <password> called without any password configured for the default user. Are you sure your configuration is correct?
Failed to change the table structure.
```
Moreover, **the data directory gets wiped**.
### Steps to reproduce
1. Use a backed-up JumpServer data directory from version 4.0.2 (important: the v4.2.0 container wipes the directory after it fails to start!!)
2. Start the container with: docker run --name jumpserver --restart=always -d --privileged=true -v /data/jumpserver/core/data:/opt/jumpserver/data -v /data/jumpserver/koko/data:/opt/koko/data -v /data/jumpserver/lion/data:/opt/lion/data -v /data/jumpserver/kael/data:/opt/kael/data -v /data/jumpserver/chen/data:/opt/chen/data -v /data/jumpserver/web/log:/var/log/nginx -p 8088:80 -p 2222:2222 -e SECRET_KEY=xxx -e BOOTSTRAP_TOKEN=xxx -e DB_HOST=xxx -e DB_PORT=3316 -e DB_USER=jumpserver -e DB_PASSWORD=xxx -e DB_NAME=jumpserver -e LOG_LEVEL=INFO -e REDIS_HOST=172.17.0.1 -e REDIS_PORT=6379 -e REDIS_PASSWORD= jumpserver/jms_all
### Expected result
As in previous versions, it should migrate the database on its own and start all services.
### Additional information
_No response_
### Attempted solutions
_No response_ | closed | 2024-09-20T08:35:04Z | 2024-11-26T10:10:38Z | https://github.com/jumpserver/jumpserver/issues/14211 | [
"🐛 Bug",
"💡 FAQ"
] | casualfish | 19 |
falconry/falcon | api | 1,479 | Test client fails when WSGI application is a generator | Here is the test that proves it:
```
from falcon.testing import TestClient


def another_dummy_wsgi_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    yield 'It works!'


def test_falcon_testing_client():
    client = TestClient(another_dummy_wsgi_app)
    client.simulate_get('/nevermind')
```
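For comparison, a hedged sketch (not Falcon's actual implementation) of what a WSGI caller must do to support generator apps like the one above: treat the return value as an iterable of chunks and read the captured status only after iteration has begun.

```python
def call_wsgi(app, environ):
    # Minimal WSGI driver: works for list-returning and generator apps,
    # because status/headers are read only after iteration begins.
    captured = {}

    def start_response(status, headers):
        captured["status"] = status
        captured["headers"] = headers

    chunks = app(environ, start_response)
    try:
        body = b"".join(
            c.encode() if isinstance(c, str) else c for c in chunks
        )
    finally:
        close = getattr(chunks, "close", None)
        if close is not None:
            close()  # PEP 3333: call close() on the iterable if present
    return captured["status"], body


def dummy_generator_app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    yield 'It works!'


print(call_wsgi(dummy_generator_app, {}))  # ('200 OK', b'It works!')
```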
Here is [the link](/tribals/falcon/blob/feature%2Fgenerator-app/tests/test_testing.py) for convenience. | closed | 2019-03-12T09:03:02Z | 2019-04-01T21:11:54Z | https://github.com/falconry/falcon/issues/1479 | [] | tribals | 1 |
xmu-xiaoma666/External-Attention-pytorch | pytorch | 94 | Some modules raise errors when running on GPU (Most modules report errors when using GPU) | The error message is similar to:
RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
or
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument index in method wrapper__index_select)
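These errors typically mean submodules were stored in a plain Python list, which `nn.Module` never registers, so `.to(device)` / `.cuda()` never moves their weights. A toy sketch of the registered version (not the repository's actual code):

```python
import torch.nn as nn

class Mixer(nn.Module):
    def __init__(self, num_blocks=2, dim=8):
        super().__init__()
        # A plain list would hide these from .parameters() and .to(device);
        # nn.ModuleList registers each block as a proper child module.
        self.mlp_blocks = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(num_blocks)
        )

    def forward(self, x):
        for block in self.mlp_blocks:
            x = block(x)
        return x

m = Mixer()
print(sum(1 for _ in m.parameters()))  # 4: weight + bias for each of 2 blocks
```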
Fix:
Change the plain Python list to an nn.ModuleList.
For example, in the model/mlp/mlp_mixer.py file (line 49):
change self.mlp_blocks=[] to self.mlp_blocks=nn.ModuleList([]) | open | 2022-11-20T14:08:44Z | 2023-01-11T12:02:44Z | https://github.com/xmu-xiaoma666/External-Attention-pytorch/issues/94 | [] | ImcwjHere | 2 |
google-deepmind/sonnet | tensorflow | 93 | Too strict sanity check on initializer dict | Dear deepminder:
I constructed a model using snt.Conv2D and snt.SeparableConv2D. When I create an initializer dict and pass it to the constructor, it complains.
The initializer dict I constructed is as follows:
```
initializers = {"w": tf.contrib.layers.xavier_initializer(),
"w_dw": tf.contrib.layers.xavier_initializer(),
"w_pw": tf.contrib.layers.xavier_initializer(),
"b": tf.zeros_initializer()
}
```
And the error message is as follows:
```
File "/home/yuming/tensorflow/lib/python2.7/site-packages/sonnet/python/modules/util.py", line 170, in check_initializers
", ".join("'{}'".format(key) for key in keys)))
KeyError: "Invalid initializer keys 'w_pw', 'w_dw', initializers can only be provided for 'b', 'w'"
SIGCHLD received
```
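A hedged workaround while the check stays this strict: keep one master dict and slice out only the keys each module type accepts (the allowed-key sets below are illustrative, taken from the snippets in this report):

```python
def initializers_for(master, allowed_keys):
    # Pass each Sonnet module only the initializer keys it validates.
    return {k: v for k, v in master.items() if k in allowed_keys}

master = {"w": "xavier", "w_dw": "xavier", "w_pw": "xavier", "b": "zeros"}
conv_init = initializers_for(master, {"w", "b"})             # for snt.Conv2D
sep_init = initializers_for(master, {"w_dw", "w_pw", "b"})   # for snt.SeparableConv2D
# conv_init == {"w": "xavier", "b": "zeros"}
```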
I personally think it is not necessary to construct initializers for snt.Conv2D and snt.SeparableConv2D separately, since that is a little redundant; how about relaxing the sanity check on the keys? | closed | 2018-08-19T06:29:24Z | 2018-10-11T00:45:11Z | https://github.com/google-deepmind/sonnet/issues/93 | [] | mingyr | 2 |
dask/dask | numpy | 11,398 | Boolean index assignment fails for values of `ndim>1` | This works in NumPy, but fails in Dask.
```python
In [1]: import numpy as np
In [2]: import dask.array as da
In [3]: x = np.asarray([[1, 2]])
In [4]: y = np.asarray([[3, 4]])
In [5]: i = np.asarray([True])
In [6]: xd = da.asarray(x)
In [7]: yd = da.asarray(y)
In [8]: id = da.asarray(i)
In [9]: x[i] = y[i]
In [10]: x
Out[10]: array([[3, 4]])
In [11]: xd[id] = yd[id]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[11], line 1
----> 1 xd[id] = yd[id]
File ~/programming/pixi-dev-scipystack/scipy/.pixi/envs/array-api/lib/python3.12/site-packages/dask/array/core.py:1903, in Array.__setitem__(self, key, value)
1900 from dask.array.routines import where
1902 if isinstance(value, Array) and value.ndim > 1:
-> 1903 raise ValueError("boolean index array should have 1 dimension")
1904 try:
1905 y = where(key, value, self)
ValueError: boolean index array should have 1 dimension
In [12]: yd[id].ndim
Out[12]: 2
```
**Anything else we need to know?**:
I think the error message is unclear too. I initially thought that the "boolean index array" referred to `id` here (the index array), as opposed to `yd[id]`, the value array being assigned.
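Since the traceback shows `__setitem__` lowering the assignment to `where(key, value, self)`, a hedged NumPy sketch of the equivalent form is to broadcast the 1-D mask over the trailing axis:

```python
import numpy as np

x = np.asarray([[1, 2]])
y = np.asarray([[3, 4]])
i = np.asarray([True])

# x[i] = y[i] selects whole rows where the 1-D mask is True; as a
# where(), the mask needs an extra axis to broadcast against the rows.
out = np.where(i[:, None], y, x)
print(out)  # [[3 4]]
```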
| open | 2024-09-18T23:19:17Z | 2025-03-10T01:50:59Z | https://github.com/dask/dask/issues/11398 | [
"array",
"needs attention"
] | lucascolley | 7 |
Kanaries/pygwalker | pandas | 187 | Open Source License UNCLEAR for commercial developers ? | Please clarify the licensing terms for the PyGWalker codebase for corporate or commercial developer. It looks like you are charging $10 per month for PyGWalker.
Is PyGWalker completely Free and Open Source? Please clarify the FOSS terms for PyGWalker.
Can it be forever decoupled from your other commercial products?
Please explain your licensing terms for people who want to use or embed PyGWalker codebase:
1) into their own applications, for internal use with PyGWalker code embedded.
2) into their own applications, possibly for resale with PyGWalker code embedded.
On the PyGWalker home page there is a "Plans" button in the upper right hand corner.
https://docs.kanaries.net/pygwalker
When I click on that it says the minimum plan is "Plus", not FREE:
- - Plus
- - $10.00 per month
- - 5-day free trial
- - Kanaries RATH (Augmented Analytics) Plus
- - Dataset Size Up to 200,000 Rows
- - vizGPT: Build Data Visualization with Chat Interface
- - Graphic Walker: Embeddable Visual Analysis
- - PyGWalker: No-code Visual Analysis in Jupyter Notebook
- - Cloud Storage Space Up to 2GB
-
I don't want to use RATH or vizGPT, because these look like proprietary licensed technologies that use specific AI backends.
The "Branding and Attribution" terms are not standard for Apache licenses. The "White Label License" are rather onerous in the PyGWalker "License 2", which was just updated 5 days ago. The cost for white label licenses is missing on your website? I want to see transparency about this problem.
## 2. Branding and Attribution
You are allowed to integrate the Software into Your own software products, provided that:
- You must clearly and prominently declare that Your software product is based on or incorporates the Software, in all user interfaces where such information is normally presented, and in all marketing and promotional materials.
- You must not modify, remove, or obscure any branding information, logos, trademarks, or other proprietary notices in the Software, including but not limited to the "Kanaries" logo. You must keep the "Kanaries" logo intact and unmodified, and it must be clearly visible in all user interfaces where it is normally presented, and in all marketing and promotional materials.
## 3. White Label License
If You wish to use the Software in a "white label" manner, where Your branding replaces the original branding of the Software, You must apply for and obtain a separate White Label License from Graphic-Walker. Until You obtain such a license, You must comply with the branding and attribution requirements described in Section 2.
Please clarify the FOSS terms for PyGWalker. Can it be forever decoupled from your other commercial products? | closed | 2023-08-01T23:05:53Z | 2023-08-09T02:14:16Z | https://github.com/Kanaries/pygwalker/issues/187 | [
"fixed but needs feedback"
] | richlysakowski | 1 |
miguelgrinberg/microblog | flask | 199 | Possible bug in "/user/<username>" route in v0.6 tag corresponding to Part 6 of tutorial | I was trying out the Part 6 of the mega tutorial and found that I can easily replace a known username which is different that the logged in username, and can view his information, which is the function https://github.com/miguelgrinberg/microblog/blob/v0.6/app/routes.py#L75
I think we need to check if the id of the user we get from the URL matches the id of the current user, and if that happens, we show the profile, else we can perhaps redirect him to the `index` page, so something along the lines of
```
@app.route('/user/<username>')
@login_required
def user(username):
user = User.query.filter_by(username=username).first_or_404()
if user.id != current_user.id:
return redirect(url_for('index'))
posts = [
{'author': user, 'body': 'Test post #1'},
{'author': user, 'body': 'Test post #2'}
]
return render_template('user.html', user=user, posts=posts)
```
| closed | 2019-12-31T05:34:04Z | 2019-12-31T08:07:14Z | https://github.com/miguelgrinberg/microblog/issues/199 | [
"question"
] | deveshks | 3 |
ultralytics/ultralytics | machine-learning | 19,219 | Questions about different resolutions | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and [discussions](https://github.com/orgs/ultralytics/discussions) and found no similar questions.
### Question
Hi, I want to use the yolov8n-seg model for my segmentation task. I train the model using 640x640, but I want to deploy it at different resolutions, so I need to convert the .pt model into one with a different input resolution. Is that possible, and how can I do it?
### Additional
_No response_ | open | 2025-02-13T06:29:30Z | 2025-02-13T06:30:19Z | https://github.com/ultralytics/ultralytics/issues/19219 | [
"question",
"segment",
"exports"
] | QiqLiang | 1 |
piskvorky/gensim | data-science | 3,224 | Can't add vector to pretrained fasttext model via .add_vector | <!--
**IMPORTANT**:
- Use the [Gensim mailing list](https://groups.google.com/forum/#!forum/gensim) to ask general or usage questions. Github issues are only for bug reports.
- Check [Recipes&FAQ](https://github.com/RaRe-Technologies/gensim/wiki/Recipes-&-FAQ) first for common answers.
Github bug reports that do not include relevant information and context will be closed without an answer. Thanks!
-->
#### Problem description
I'm trying to add a new vector to a pretrained fasttext model via `.add_vector`. However, it seems like the vector is not added if I check via `.has_index_for`.
#### Steps/code/corpus to reproduce
```
>>> from gensim.models import fasttext
>>> import numpy as np
>>> ft_model = fasttext.load_facebook_vectors("fastText/cc.en.300.bin")
>>> ft_model.has_index_for("testtest")
False
>>> ft_model.add_vector("testtest", np.zeros((300,)))
2000000
>>> ft_model.has_index_for("testtest")
False
>>> ft_model.index_to_key[2000000]
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
IndexError: list index out of range
```
#### Versions
```
Windows-10-10.0.19041-SP0
Python 3.8.5 (default, Sep 3 2020, 21:29:08) [MSC v.1916 64 bit (AMD64)]
Bits 64
NumPy 1.20.3
SciPy 1.6.2
gensim 4.0.1
FAST_VERSION 1
```
| open | 2021-08-31T13:17:12Z | 2022-04-15T10:05:24Z | https://github.com/piskvorky/gensim/issues/3224 | [
"bug",
"impact HIGH",
"reach MEDIUM"
] | om-hb | 8 |
comfyanonymous/ComfyUI | pytorch | 6,850 | apply_preprocessor is not implemented | ### Expected Behavior
ControlNet Preprocessor missing processors
### Actual Behavior

### Steps to Reproduce

### Debug Logs
```powershell
re_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 202, in get_output_data
return_values = _map_node_over_list(obj, input_data_all, obj.FUNCTION, allow_interrupt=True, execution_block_cb=execution_block_cb, pre_execute_cb=pre_execute_cb)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 174, in _map_node_over_list
process_inputs(input_dict, i)
File "S:\Ai\ComfyUI_windows_portable\ComfyUI\execution.py", line 163, in process_inputs
results.append(getattr(obj, func)(**inputs))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture\modules\controlnet\__init__.py", line 104, in detect_controlnet
image = apply_preprocessor(image, preprocessor, resolution=resolution)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "S:\Ai\ComfyUI_windows_portable\ComfyUI\custom_nodes\comfyui-art-venture\modules\controlnet\preprocessor.py", line 56, in apply_preprocessor
raise NotImplementedError("apply_preprocessor is not implemented")
NotImplementedError: apply_preprocessor is not implemented
Prompt executed in 0.01 seconds
```
### Other
Hi there,
I installed the ControlNet Preprocessor, but when I run it in ComfyUI it gives the error "apply_preprocessor is not implemented". It looks like some of the preprocessors are missing. Can someone please help me? Thanks.


| closed | 2025-02-18T00:52:34Z | 2025-02-18T08:52:33Z | https://github.com/comfyanonymous/ComfyUI/issues/6850 | [
"Potential Bug"
] | allpixels63 | 1 |
artefactory/streamlit_prophet | streamlit | 21 | poetry install error | ## 🐛 Bug Report
<!-- A clear and concise description of what the bug is. -->
## 🔬 How To Reproduce
Steps to reproduce the behavior:
1. ...
### Code sample
<!-- If applicable, attach a minimal code sample to reproduce the decried issue. -->
### Environment
* OS: macOS
* Python version: 3.8
```bash
python --version
```
### Screenshots
at ~/miniconda3/lib/python3.8/site-packages/poetry/installation/executor.py:627 in _download_link
623│ )
624│ )
625│
626│ if archive_hashes.isdisjoint(hashes):
→ 627│ raise RuntimeError(
628│ "Invalid hashes ({}) for {} using archive {}. Expected one of {}.".format(
629│ ", ".join(sorted(archive_hashes)),
630│ package,
631│ archive_path.name,

## 📈 Expected behavior
It works as the README shows.
## 📎 Additional context
maybe related to #20
Finally, it works after creating a clean conda py38 environment; then `poetry install` succeeded. The command to run under the project root is
```
poetry run streamlit run streamlit_prophet/app/dashboard.py
```
hope it helps others.

| closed | 2022-05-27T03:10:00Z | 2022-06-01T00:58:17Z | https://github.com/artefactory/streamlit_prophet/issues/21 | [
"bug"
] | yangboz | 2 |
home-assistant/core | python | 140,360 | Matter Aqara Integration does not fetch T1 switches and Wireless Relay Controller switch | ### The problem
I have added two Aqara M2 hubs to HA using the Matter integration, and neither the T1 switch nor the Wireless Relay Controller (a double switch) is added to HA.
Is there anything that can be done, or should I wait for an update of the Matter integration?
### What version of Home Assistant Core has the issue?
core-2025.3.1
### What was the last working version of Home Assistant Core?
_No response_
### What type of installation are you running?
Home Assistant OS
### Integration causing the issue
matter
### Link to integration documentation on our website
_No response_
### Diagnostics information
[matter-01JM5BBTRS2JAA6KFDR4FQTQZ6-Aqara Hub M2-9cace797b16b7373bf1ba1f4e279135c.json](https://github.com/user-attachments/files/19182123/matter-01JM5BBTRS2JAA6KFDR4FQTQZ6-Aqara.Hub.M2-9cace797b16b7373bf1ba1f4e279135c.json)
### Example YAML snippet
```yaml
```
### Anything in the logs that might be useful for us?
```txt
```
### Additional information
_No response_ | open | 2025-03-11T11:39:23Z | 2025-03-13T10:31:24Z | https://github.com/home-assistant/core/issues/140360 | [
"integration: matter"
] | avatar25pl | 2 |
nltk/nltk | nlp | 2,718 | Error when using nltk.stem.arlstem2 module | I was trying to use the stemming function of the ARLSTem2 object and always got an error that there is no attribute called `adject`. I replaced `adject` with `adjective` in the source code, and it worked perfectly.
I just wanted to mention that it works for me :)

| closed | 2021-05-28T06:52:23Z | 2021-05-28T15:50:38Z | https://github.com/nltk/nltk/issues/2718 | [] | LMech | 2 |
darrenburns/posting | rest-api | 175 | Add option to configure both a light and dark theme and support auto switch between them based on OS light/dark mode | Currently the configuration allows setting a single theme using the `theme` key.
It would be great if Posting was able to listen to the event where the OS switches from light to dark mode and vice versa and allow the user to configure both a light and a dark theme accordingly. | open | 2025-02-01T14:21:13Z | 2025-02-01T14:21:13Z | https://github.com/darrenburns/posting/issues/175 | [] | sherif-fanous | 0 |
apachecn/ailearning | python | 501 | 加群失败。 | 781689008,麻烦通过一下,谢谢, | closed | 2019-04-23T03:26:45Z | 2019-04-26T02:03:37Z | https://github.com/apachecn/ailearning/issues/501 | [] | ange521 | 1 |
SYSTRAN/faster-whisper | deep-learning | 899 | French words are split when word_timestamp is enabled | It happens everywhere with default settings (tested with 1.0.2)
Sentence: [0 0.00s -> 4.12s] Merci d'avoir appelé Abyssé Banque, comment puis-je vous aider aujourd'hui ?
[0 0.00s -> 0.24s] Merci
[0 0.24s -> 0.42s] d <- single word
[0 0.42s -> 0.54s] 'avoir <- single word
[0 0.54s -> 1.02s] appelé
[0 1.02s -> 1.34s] Abyssé
[0 1.34s -> 1.80s] Banque,
[0 1.82s -> 2.26s] comment
[0 2.26s -> 2.50s] puis
[0 2.50s -> 2.58s] -je
[0 2.58s -> 2.72s] vous
[0 2.72s -> 2.94s] aider
[0 2.94s -> 3.18s] aujourd <- single word
[0 3.18s -> 3.44s] 'hui <- single word
[0 3.44s -> 4.12s] ?
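A hedged post-processing sketch (not part of faster-whisper) that merges an apostrophe-initial fragment back into the preceding word, keeping the earlier start time and the later end time:

```python
def merge_apostrophe_fragments(words):
    # words: list of (start, end, text) tuples as printed above.
    merged = []
    for start, end, text in words:
        token = text.strip()
        if merged and token.startswith("'"):
            prev_start, _, prev_text = merged.pop()
            merged.append((prev_start, end, prev_text + token))
        else:
            merged.append((start, end, token))
    return merged

words = [(0.24, 0.42, " d"), (0.42, 0.54, "'avoir"), (0.54, 1.02, " appelé")]
print(merge_apostrophe_fragments(words))
# [(0.24, 0.54, "d'avoir"), (0.54, 1.02, 'appelé')]
```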
Sentence: [0 9.00s -> 10.20s] C'est super !
[0 9.00s -> 9.48s] C <- single word
[0 9.48s -> 9.60s] 'est <- single word
[0 9.60s -> 9.86s] super
[0 9.86s -> 10.20s] ! | open | 2024-07-03T14:41:01Z | 2024-07-11T12:31:17Z | https://github.com/SYSTRAN/faster-whisper/issues/899 | [] | vkras | 2 |
mirumee/ariadne | graphql | 214 | Resolver function for union type | I am trying to write a query resolver function for union type in Ariadne. How can I accomplish this?
From what I have read in the documentation, there is a field called `__typename` which helps us resolve the union type, but I am not getting any `__typename` in my resolver function.
**Schema**
```graphql
type User {
username: String!
firstname: String
email: String
}
type UserDuplicate {
username: String!
firstname: String
email: String
}
union UnionTest = User | UserDuplicate
type UnionForCustomTypes {
user: UnionTest
name: String!
}
type Query {
user: String!
unionForCustomTypes: [UnionForCustomTypes]!
}
```
**Ariadne resolver functions**
```python
from ariadne import MutationType, QueryType, UnionType

query = QueryType()
mutation = MutationType()
unionTest = UnionType("UnionTest")
@unionTest.type_resolver
def resolve_union_type(obj, *_):
if obj[0]["__typename"] == "User":
return "User"
if obj[0]["__typename"] == "DuplicateUser":
return "DuplicateUser"
return None
# Query resolvers
@query.field("unionForCustomTypes")
def resolve_union_for_custom_types(_, info):
result = [
{"name": "Manisha Bayya", "user": [{"__typename": "User", "username": "abcd"}]}
]
return result
```
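One detail worth noting in the snippets above: the schema declares the member `UserDuplicate`, while the resolver compares against `"DuplicateUser"`. A framework-free sketch of a resolver that reads the tag off each object instead of hard-coding names (plain function, not wired into Ariadne here):

```python
def resolve_union_type_sketch(obj, *_):
    # GraphQL resolves the union per object; if the resolver is handed a
    # list (as in the data above), inspect the first element's own tag.
    if isinstance(obj, list):
        obj = obj[0]
    return obj.get("__typename")

assert resolve_union_type_sketch([{"__typename": "User"}]) == "User"
assert resolve_union_type_sketch({"__typename": "UserDuplicate"}) == "UserDuplicate"
```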
**Query I am trying**
```graphql
{
unionForCustomTypes {
name
user {
__typename
...on User {
username
firstname
}
}
}
}
```
When I try the query I am getting below error
```
{
"data": null,
"errors": [
{
"message": "Cannot return null for non-nullable field Query.unionForCustomTypes.",
"locations": [
[
2,
3
]
],
"path": [
"unionForCustomTypes"
],
"extensions": {
"exception": {
"stacktrace": [
"Traceback (most recent call last):",
" File \"/root/manisha/prisma/ariadne_envs/lib/python3.6/site-packages/graphql/execution/execute.py\", line 675, in complete_value_catching_error",
" return_type, field_nodes, info, path, result",
" File \"/root/manisha/prisma/ariadne_envs/lib/python3.6/site-packages/graphql/execution/execute.py\", line 754, in complete_value",
" \"Cannot return null for non-nullable field\"",
"TypeError: Cannot return null for non-nullable field Query.unionForCustomTypes."
],
"context": {
"completed": "None",
"result": "None",
"path": "ResponsePath(...rCustomTypes')",
"info": "GraphQLResolv...f04e9c1fc50>})",
"field_nodes": "[FieldNode at 4:135]",
"return_type": "<GraphQLNonNu...ustomTypes'>>>",
"self": "<graphql.exec...x7f04e75677f0>"
}
}
}
}
]
}
``` | closed | 2019-07-16T09:46:32Z | 2019-07-16T14:29:14Z | https://github.com/mirumee/ariadne/issues/214 | [] | ManishaBayya | 4 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,171 | [Bug]: RuntimeError: Couldn't clone Stable Diffusion. <v1.10.0-RC> | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
WebUI failed to install
### Steps to reproduce the problem
Download https://github.com/AUTOMATIC1111/stable-diffusion-webui/tree/v1.10.0-RC
Run WebUI.bat to install
### What should have happened?
The installation should complete and launch the WebUI
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
python launch.py --dump-sysinfo
Traceback (most recent call last):
File "C:\tut\stable-diffusion-webui-1.10.0-RC\launch.py", line 48, in <module>
main()
File "C:\tut\stable-diffusion-webui-1.10.0-RC\launch.py", line 29, in main
filename = launch_utils.dump_sysinfo()
File "C:\tut\stable-diffusion-webui-1.10.0-RC\modules\launch_utils.py", line 473, in dump_sysinfo
from modules import sysinfo
File "C:\tut\stable-diffusion-webui-1.10.0-RC\modules\sysinfo.py", line 8, in <module>
import psutil
ModuleNotFoundError: No module named 'psutil'
### Console logs
```Shell
venv "C:\tut\stable-diffusion-webui-1.10.0-RC\venv\Scripts\Python.exe"
fatal: not a git repository (or any of the parent directories): .git
fatal: not a git repository (or any of the parent directories): .git
Python 3.10.6 (tags/v3.10.6:9c7b4bd, Aug 1 2022, 21:53:49) [MSC v.1932 64 bit (AMD64)]
Version: 1.10.0
Commit hash: <none>
Cloning Stable Diffusion into C:\tut\stable-diffusion-webui-1.10.0-RC\repositories\stable-diffusion-stability-ai...
Cloning into 'C:\tut\stable-diffusion-webui-1.10.0-RC\repositories\stable-diffusion-stability-ai'...
remote: Enumerating objects: 580, done.
remote: Counting objects: 100% (571/571), done.
remote: Compressing objects: 100% (304/304), done.
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8)
error: 11592 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output
Traceback (most recent call last):
File "C:\tut\stable-diffusion-webui-1.10.0-RC\launch.py", line 48, in <module>
main()
File "C:\tut\stable-diffusion-webui-1.10.0-RC\launch.py", line 39, in main
prepare_environment()
File "C:\tut\stable-diffusion-webui-1.10.0-RC\modules\launch_utils.py", line 411, in prepare_environment
git_clone(stable_diffusion_repo, repo_dir('stable-diffusion-stability-ai'), "Stable Diffusion", stable_diffusion_commit_hash)
File "C:\tut\stable-diffusion-webui-1.10.0-RC\modules\launch_utils.py", line 191, in git_clone
run(f'"{git}" clone --config core.filemode=false "{url}" "{dir}"', f"Cloning {name} into {dir}...", f"Couldn't clone {name}", live=True)
File "C:\tut\stable-diffusion-webui-1.10.0-RC\modules\launch_utils.py", line 115, in run
raise RuntimeError("\n".join(error_bits))
RuntimeError: Couldn't clone Stable Diffusion.
Command: "git" clone --config core.filemode=false "https://github.com/Stability-AI/stablediffusion.git" "C:\tut\stable-diffusion-webui-1.10.0-RC\repositories\stable-diffusion-stability-ai"
Error code: 128
Press any key to continue . . .
```
### Additional information
N. A. | open | 2024-07-08T06:20:58Z | 2025-01-18T23:26:41Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16171 | [
"bug-report"
] | nitinmukesh | 6 |
jupyter-book/jupyter-book | jupyter | 1,440 | jupyter-book: command not found | ### Describe the problem
I just switched to Linux (Ubuntu 20.04), and I can install jupyter-book successfully using `pip install jupyter-book`. However, I cannot use the `jupyter-book` command; the error is `command not found`.
If I run `pip list` I get the following output
```
$ pip list -v | grep jupyter-book
jupyter-book 0.11.3 /home/nils/.local/lib/python3.8/site-packages pip
```
I'm guessing I have to add something to my `PATH`, but what?
### Link to your repository or website
_No response_
### Steps to reproduce
```
$ pip install jupyter-book
$ jupyter-book
jupyter-book: command not found
```
### The version of Python you're using
python 3.8
### Your operating system
Ubuntu 20.04
### Versions of your packages
0.11.3
### Additional context
_No response_ | closed | 2021-08-26T08:31:37Z | 2021-08-26T08:34:58Z | https://github.com/jupyter-book/jupyter-book/issues/1440 | [
"bug"
] | ratnanil | 1 |
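A likely cause of the report above (an assumption, since the report does not show the `PATH`): a `pip install` into the per-user site-packages puts console scripts such as `jupyter-book` into the user scripts directory — usually `~/.local/bin` on Ubuntu — which is not always on `PATH`. A quick self-contained check:

```python
import os
import site

# pip's per-user installs place console scripts under <userbase>/bin
# (typically ~/.local/bin on Linux); if that directory is missing from
# PATH, installed commands like `jupyter-book` are reported "not found".
scripts_dir = os.path.join(site.getuserbase(), "bin")
on_path = scripts_dir in os.environ.get("PATH", "").split(os.pathsep)
print(scripts_dir, "is on PATH" if on_path else "is NOT on PATH")
```

If it is not on `PATH`, adding `export PATH="$HOME/.local/bin:$PATH"` to `~/.bashrc` and opening a new shell should make the command available.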
ray-project/ray | deep-learning | 51,445 | [<Ray component: java>] expose ObjectRef in DeploymentResponse class | ### Description
It is possible to convert a DeploymentResponse to an ObjectRef in Python, as described in
https://docs.ray.io/en/master/serve/model_composition.html#advanced-convert-a-deploymentresponse-to-a-ray-objectref. However, Ray Java lacks a similar capability, making certain use cases unsupported. It would be beneficial to expose ObjectRef in DeploymentResponse, similar to the approach in this PR https://github.com/ray-project/ray/pull/51444
### Use case
we build a streaming pipeline by ray java which the architecture as below, the data pass through from **Subscriber** -> **Processor** -> **Publisher**, the current issue is DeploymentResponse unable to pass as remote call parameter, is it able to expose ObjectRef in the DeploymentResponse and then we can used as publisherHandle.method("handle").remote(response.getObjectRef());
```
public class Processor {
public Object handle(Object input) {
Object output = null;
return output;
}
}
public class Publisher {
public void handle(Object input) {
}
}
public class Subscriber {
DeploymentHandle processorHandle;
DeploymentHandle publisherHandle;
public void handle() {
while (true) {
DeploymentResponse response = processorHandle.method("handle").remote("");
publisherHandle.method("handle").remote(response);
}
}
}
```
| open | 2025-03-18T07:15:52Z | 2025-03-24T03:21:35Z | https://github.com/ray-project/ray/issues/51445 | [
"java",
"enhancement",
"triage",
"serve"
] | zhiqiwangebay | 1 |
SYSTRAN/faster-whisper | deep-learning | 757 | Super long video processing failure | I am stuck on a long-video (about 5 hours) processing task: it has kept running for about 24 hours and has not finished yet, which takes much more time than I expected and may fail to produce an ASR result at all. Is there an upper limit on video length for the model? I suspect it fell into an infinite sequence-generation loop. | closed | 2024-03-25T01:18:53Z | 2024-11-19T23:16:09Z | https://github.com/SYSTRAN/faster-whisper/issues/757 | [] | another1s | 7 |
microsoft/nni | tensorflow | 5,687 | Does NNI v3.0 support multi-gpu training for model compression (pruner, distiller, quantizer)? | **Describe the issue**:
I found that multi-GPU training via DataParallel is supported in v1.4; here is the example:
https://github.com/microsoft/nni/blob/v1.4/examples/model_compress/multi_gpu.py
However, I found that this DataParallel-related issue is still open:
https://github.com/microsoft/nni/issues/3626
Can I use DataParallel for pruner or other compressors?
| open | 2023-09-30T09:30:10Z | 2023-09-30T09:30:10Z | https://github.com/microsoft/nni/issues/5687 | [] | DY-ATL | 0 |
InstaPy/InstaPy | automation | 6,052 | Instapy not following people even though the option is enabled. | <!-- Did you know that we have a Discord channel ? Join us: https://discord.gg/FDETsht -->
<!-- Is this a Feature Request ? Please, check out our Wiki first https://github.com/timgrossmann/InstaPy/wiki -->
## Expected Behavior
Instapy should automatically follow people since it is enabled in my config file.
## Current Behavior
It likes posts, but it does not follow people.
## InstaPy configuration
[Here is the config file.](https://github.com/LiterallyJohnny/instabot/blob/main/instabot.py) | open | 2021-01-25T05:04:59Z | 2021-07-21T02:19:01Z | https://github.com/InstaPy/InstaPy/issues/6052 | [
"wontfix"
] | johnnynalley | 3 |
horovod/horovod | tensorflow | 3,103 | Available Tensor Operations do not support NCCL. | I am following the doc (https://github.com/horovod/horovod/blob/master/docs/gpus.rst).
First, I installed NCCL.
Then I built nccl-tests with `make`:
```
Compiling all_reduce.cu > ../build/all_reduce.o
Compiling common.cu > ../build/common.o
Linking ../build/all_reduce.o > ../build/all_reduce_perf
Compiling all_gather.cu > ../build/all_gather.o
Linking ../build/all_gather.o > ../build/all_gather_perf
Compiling broadcast.cu > ../build/broadcast.o
Linking ../build/broadcast.o > ../build/broadcast_perf
Compiling reduce_scatter.cu > ../build/reduce_scatter.o
Linking ../build/reduce_scatter.o > ../build/reduce_scatter_perf
Compiling reduce.cu > ../build/reduce.o
Linking ../build/reduce.o > ../build/reduce_perf
Compiling alltoall.cu > ../build/alltoall.o
Linking ../build/alltoall.o > ../build/alltoall_perf
```
test:
```
# size count type redop time algbw busbw error time algbw busbw error
# (B) (elements) (us) (GB/s) (GB/s) (us) (GB/s) (GB/s)
8 2 float sum 6.13 0.00 0.00 0e+00 0.69 0.01 0.00 0e+00
16 4 float sum 6.11 0.00 0.00 0e+00 0.69 0.02 0.00 0e+00
32 8 float sum 6.33 0.01 0.00 0e+00 0.68 0.05 0.00 0e+00
64 16 float sum 6.18 0.01 0.00 0e+00 0.68 0.09 0.00 0e+00
128 32 float sum 6.17 0.02 0.00 0e+00 0.69 0.19 0.00 0e+00
256 64 float sum 6.11 0.04 0.00 0e+00 0.68 0.37 0.00 0e+00
512 128 float sum 5.97 0.09 0.00 0e+00 0.68 0.75 0.00 0e+00
1024 256 float sum 6.35 0.16 0.00 0e+00 0.68 1.50 0.00 0e+00
2048 512 float sum 6.25 0.33 0.00 0e+00 0.68 2.99 0.00 0e+00
4096 1024 float sum 5.93 0.69 0.00 0e+00 0.68 5.99 0.00 0e+00
8192 2048 float sum 5.97 1.37 0.00 0e+00 0.69 11.95 0.00 0e+00
16384 4096 float sum 6.31 2.60 0.00 0e+00 0.68 23.92 0.00 0e+00
32768 8192 float sum 6.02 5.44 0.00 0e+00 0.68 48.04 0.00 0e+00
65536 16384 float sum 5.99 10.94 0.00 0e+00 0.68 96.04 0.00 0e+00
131072 32768 float sum 6.23 21.03 0.00 0e+00 0.68 192.33 0.00 0e+00
262144 65536 float sum 6.40 40.93 0.00 0e+00 0.68 386.07 0.00 0e+00
524288 131072 float sum 7.14 73.43 0.00 0e+00 0.51 1020.31 0.00 0e+00
1048576 262144 float sum 7.85 133.65 0.00 0e+00 0.62 1695.90 0.00 0e+00
2097152 524288 float sum 12.69 165.23 0.00 0e+00 0.66 3164.80 0.00 0e+00
4194304 1048576 float sum 21.92 191.36 0.00 0e+00 0.67 6302.01 0.00 0e+00
8388608 2097152 float sum 41.21 203.56 0.00 0e+00 0.70 12007.74 0.00 0e+00
16777216 4194304 float sum 79.68 210.55 0.00 0e+00 0.72 23301.69 0.00 0e+00
33554432 8388608 float sum 156.7 214.10 0.00 0e+00 0.99 33998.11 0.00 0e+00
67108864 16777216 float sum 309.3 216.99 0.00 0e+00 0.62 108537.71 0.00 0e+00
134217728 33554432 float sum 614.9 218.27 0.00 0e+00 0.66 204693.81 0.00 0e+00
# Out of bounds values : 0 OK
# Avg bus bandwidth : 0
```
Then ```$ HOROVOD_GPU_OPERATIONS=NCCL pip install --no-cache-dir horovod```
```
Horovod v0.19.5:
Available Frameworks:
[ ] TensorFlow
[X] PyTorch
[X] MXNet
Available Controllers:
[X] MPI
[X] Gloo
Available Tensor Operations:
[ ] NCCL
[ ] DDL
[ ] CCL
[X] MPI
[X] Gloo
```
envs: ubuntu16.04
mxnet: 1.8 | closed | 2021-08-13T07:27:02Z | 2021-08-13T07:51:08Z | https://github.com/horovod/horovod/issues/3103 | [] | powermano | 0 |
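For the build above, a sketch of how the reinstall could be pointed at NCCL explicitly — `HOROVOD_GPU_OPERATIONS` and `HOROVOD_NCCL_HOME` are documented Horovod build variables; the NCCL path below is an assumption and should point at the install prefix that contains `include/nccl.h` and `lib/libnccl.so`:

```python
import os
import subprocess
import sys

# Build the environment for a from-source Horovod reinstall that can find NCCL.
# HOROVOD_NCCL_HOME tells the build where nccl.h/libnccl live; without it the
# build may silently fall back to MPI/Gloo tensor operations.
env = dict(os.environ)
env["HOROVOD_GPU_OPERATIONS"] = "NCCL"
env["HOROVOD_NCCL_HOME"] = "/usr/local/nccl"  # assumption: adjust to your install

cmd = [sys.executable, "-m", "pip", "install", "--no-cache-dir",
       "--force-reinstall", "horovod"]
# subprocess.check_call(cmd, env=env)  # uncomment to actually rebuild
print(env["HOROVOD_GPU_OPERATIONS"], env["HOROVOD_NCCL_HOME"])
```

After the rebuild, `horovodrun --check-build` should list NCCL under Available Tensor Operations.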
skfolio/skfolio | scikit-learn | 108 | [ENH] Convex-solver consuming metadata to define time-dependent constraints | **Is your feature request related to a problem? Please describe.**
The API should support the case where one is interested in working with metadata-dependent constraints. Examples include beta-neutral portfolios in which the beta is a time series, or portfolios in which the group definition is a function of time.
**Describe the solution you'd like**
Metadata routing on the MeanRisk class and a more generic constraints API.
**Describe alternatives you've considered**
While for the latter problem a custom clustering class within the NCO class could work, there is still value in supporting this for convex solvers. | open | 2024-12-23T18:10:04Z | 2025-02-25T07:24:22Z | https://github.com/skfolio/skfolio/issues/108 | [
"enhancement"
] | matteoettam09 | 1 |
pmaji/crypto-whale-watching-app | dash | 44 | Better plot coloring / market price visual | - [x] show logic for margins specification and adding background color
- [ ] need to choose a better background / plot color (just picked one arbitrarily to show proof of concept)
- [x] need to add horizontal line to show market price
- [ ] would love to be able to selectively color the background such that above the market price is a very faint shade of red and below the market price a very faint shade of green (this comes after the horizontal-line implementation) | closed | 2018-02-18T20:53:52Z | 2018-03-28T01:30:24Z | https://github.com/pmaji/crypto-whale-watching-app/issues/44 | [] | pmaji | 35 |
napari/napari | numpy | 7,101 | On contexts, keys and context hierarchies | **Note: I need to expand on these points but just didn't want to lose the stuff I'd already written**
This issue tracks various pain points that need to be resolve or design decisions that need to be made w.r.t. our internal implementation of app contexts and context keys.
## Contexts should live on the `app`
Currently, our application maintains three separate `context` objects, each as an attribute on the internal object we deemed semi-relevant to the context's keys:
- The `ViewerModel` [context](https://github.com/napari/napari/blob/main/napari/components/viewer_model.py#L216) seems to be empty, so it's not clear why it is currently being defined here. Potentially, this should be the root context that we use to define the others?
- The `LayerList` [context](https://github.com/napari/napari/blob/main/napari/components/layerlist.py#L94) contains all keys that maintain information about the layerlist or its current selection
- The `QtMainWindow` [context](https://github.com/napari/napari/blob/main/napari/_qt/qt_main_window.py#L174) which currently contains a single context key related to the `Debug` menu trace functionality
As initially noted by Talley when creating the [`LayerList`](https://github.com/napari/napari/blob/main/napari/components/layerlist.py#L83) and the [`ViewerModel`](https://github.com/napari/napari/blob/main/napari/_qt/qt_main_window.py#L173) contexts, keeping these `contexts` on the individual objects is a leaky abstraction - these objects shouldn't have to know about, track or update contexts. Rather, the `app` should manage these contexts, by connecting to relevant events on the object the context is tracking state for. See #7106 for an example of an issue that may have been caught earlier if the `contexts` lived and died with the `app`.
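As a minimal sketch of that shape (the names below are illustrative stand-ins, not napari's actual event or context API): the app owns the context and subscribes to the model's events, so the model never touches contexts itself.

```python
class Event:
    """Tiny stand-in for a model event emitter."""
    def __init__(self):
        self._callbacks = []
    def connect(self, cb):
        self._callbacks.append(cb)
    def emit(self, *args):
        for cb in self._callbacks:
            cb(*args)

class LayerList:
    """Model object: exposes events, knows nothing about contexts."""
    def __init__(self):
        self.selection_changed = Event()

class App:
    """The app, not the model, owns and updates the context."""
    def __init__(self, layers):
        self.layerlist_context = {}
        layers.selection_changed.connect(self._update)
    def _update(self, selection):
        self.layerlist_context["num_selected_layers"] = len(selection)

layers = LayerList()
app = App(layers)
layers.selection_changed.emit(["points", "image"])
print(app.layerlist_context)  # → {'num_selected_layers': 2}
```

With this shape, tearing down the app tears down the contexts with it, and model classes like `LayerList` need no context-related code at all.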
## Hierarchical contexts
Contexts in `app-model` can be hierarchical. Each of the contexts mentioned above is rooted in a custom `SettingsAwareContext` which is given as a default `root_class` to our `create_context` function. Other than this `root_class`, each of our three contexts are independent from each other, and contain different sets of keys.
This means that when we're updating the state of our application based on the values of specific context keys, e.g. updating the enabled/disabled state of menu items, we have to pass the correct context to the update method, and this context has to contain **all** keys that will be queried.
- We now have three different flavours of context keys but no guiding principles as to when we should use which
- Contexts should exist in a hierarchy that ensures that higher level keys are available to lower level contexts as required | open | 2024-07-17T06:58:43Z | 2024-08-29T05:08:30Z | https://github.com/napari/napari/issues/7101 | [
"task"
] | DragaDoncila | 2 |
pytest-dev/pytest-django | pytest | 388 | Cannot access db in fixtures at 'session' level. | I am trying to auto use a fixture at `session` level. So I kept the fixture at `conftest.py`. The code is as follows.
``` python
import pytest
pytestmark = [
pytest.mark.django_db(transaction = True),]
@pytest.fixture(scope="session", autouse = True)
def afixture():
a_func_which_talks_to_db()
```
Running test throws following error.
`Failed: Database access not allowed, use the "django_db" mark to enable it.`
| closed | 2016-09-05T09:06:49Z | 2019-04-11T21:04:01Z | https://github.com/pytest-dev/pytest-django/issues/388 | [] | ludbek | 3 |
vitalik/django-ninja | django | 963 | [BUG] Issue migrating nested ModelSchema with Schema to ninja v1 | My schemas look like this:
``` python
class CatalogueDetailsOut(ModelSchema):
author: AuthorStampOut = Field(..., alias="user")
class Meta:
model = Catalogue
fields = "__all__"
class CuratedCatalogOut(ModelSchema):
catalogues: list[CatalogueDetailsOut] = Field(...)
class Meta:
model = CuratedCatalog
exclude = ["tags", "tagged_items"]
class ExploreOut(Schema):
curated_catalogs: list[CuratedCatalogOut]
```
ExploreOut is my response schema for the API.
But I get the following error:
``` python
12 validation errors for NinjaResponseSchema
response.curated_catalogs.0.catalogues.0.user.full_name
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc6f80>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.0.user.username
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc6f80>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.0.collection_id
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc4040>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.0.user_id
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc4040>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.1.user.full_name
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc6200>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.1.user.username
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc6200>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.1.collection_id
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc5b40>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.1.user_id
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc5b40>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.2.user.full_name
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc4100>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.2.user.username
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc4100>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.2.collection_id
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc7680>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
response.curated_catalogs.0.catalogues.2.user_id
Field required [type=missing, input_value=<ninja.schema.DjangoGette...bject at 0x7f1e84cc7680>, input_type=DjangoGetter]
For further information visit https://errors.pydantic.dev/2.5/v/missing
```
Notice how collection_id is required for FK parsing, and the errors reference user.field_name instead of author.field_name as specified by the Field alias.
The code worked fine till **Ninja v0.22.2**
Won't be migrating to v1 unless this is fixed.
- Python version: 3.11
- Django version: 4.2.7
- Django-Ninja version: 1.0.1
- Pydantic version: 2.0
| closed | 2023-11-27T15:09:54Z | 2023-12-19T17:33:31Z | https://github.com/vitalik/django-ninja/issues/963 | [] | ganeshprasadrao | 7 |
huggingface/datasets | tensorflow | 7,008 | Support ruff 0.5.0 in CI | Support ruff 0.5.0 in CI.
Also revert:
- #7007 | closed | 2024-06-28T05:11:26Z | 2024-06-28T07:11:18Z | https://github.com/huggingface/datasets/issues/7008 | [
"maintenance"
] | albertvillanova | 0 |
RobertCraigie/prisma-client-py | asyncio | 219 | Provide a JSON Schema generator | ## Problem
This would be useful as a real world example of a custom Prisma generator.
It could also help find any features we could add that would make building custom Prisma generators easier.
## Additional context
This has already been implemented in TypeScript: https://github.com/valentinpalkovic/prisma-json-schema-generator | open | 2022-01-12T00:47:05Z | 2022-02-01T17:51:59Z | https://github.com/RobertCraigie/prisma-client-py/issues/219 | [
"kind/improvement",
"topic: external",
"level/intermediate",
"priority/medium"
] | RobertCraigie | 0 |
keras-rl/keras-rl | tensorflow | 262 | [feature-request] Add the possibility to load and store memory. | In some cases, the memory could be even more important than weights. Also, it is independent with respect to the neural networks involved in the algorithm. I think that it could be good if there is the possibility to load and store memory. In these days I will try to implement it by myself but I don't know if you have some policy about merges for new features.
| closed | 2018-10-28T16:27:45Z | 2019-04-12T14:24:47Z | https://github.com/keras-rl/keras-rl/issues/262 | [
"enhancement",
"wontfix"
] | gvgramazio | 2 |
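In the meantime, since the replay memory is an ordinary Python object decoupled from the networks, it can be persisted independently. A hedged sketch using `pickle` — this is not part of keras-rl's API, and `save_memory`/`load_memory` are names invented here:

```python
import os
import pickle
import tempfile

# Hedged sketch (not part of keras-rl's API): the replay memory is a plain
# Python object, independent of the networks, so it can be pickled as-is.
def save_memory(memory, path):
    with open(path, "wb") as f:
        pickle.dump(memory, f)

def load_memory(path):
    with open(path, "rb") as f:
        return pickle.load(f)

# Demo with a stand-in memory object; a real replay memory pickles the same
# way as long as its contents are picklable.
path = os.path.join(tempfile.mkdtemp(), "memory.pkl")
save_memory({"observations": [1, 2, 3]}, path)
print(load_memory(path))  # → {'observations': [1, 2, 3]}
```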
python-security/pyt | flask | 187 | Use function summaries instead of inlining | We [currently use inlining instead of summaries](https://github.com/python-security/pyt/tree/master/pyt/cfg) for inter-procedural analysis, which makes PyT slower than it needs to be.
Here are some videos; the last one in particular explains function summaries well:
[#57 Call Graphs](https://www.youtube.com/watch?v=giGqdwuZBKQ)
[#58 Interprocedural Data Flow Analysis](https://www.youtube.com/watch?v=TJA7dvAV0ZI)
[#59 Procedure Summaries in Data Flow Analysis](https://www.youtube.com/watch?v=LrbPmaLEbwM) | open | 2018-11-24T22:23:15Z | 2018-11-24T22:23:33Z | https://github.com/python-security/pyt/issues/187 | [
"difficult",
"help wanted"
] | KevinHock | 0 |
horovod/horovod | tensorflow | 3,589 | Can not run `RayExecutor` | Code:
```python
import ray
from horovod.ray import RayExecutor
import horovod.torch as hvd
# Start the Ray cluster or attach to an existing Ray cluster
ray.init()
num_workers = 1
# Start num_workers actors on the cluster
settings = RayExecutor.create_settings(timeout_s=30)
executor = RayExecutor(
settings, num_workers=num_workers,cpus_per_worker=1, use_gpu=True)
# This will launch `num_workers` actors on the Ray Cluster.
executor.start()
# Using the stateless `run` method, a function can take in any args or kwargs
def simple_fn():
hvd.init()
print("hvd rank", hvd.rank())
return hvd.rank()
# Execute the function on all workers at once
result = executor.run(simple_fn)
print(result)
executor.shutdown()
```
Result:
```python
2022-06-28 22:05:33,656 INFO services.py:1477 -- View the Ray dashboard at http://127.0.0.1:8266
(BaseHorovodWorker pid=14896) *** SIGSEGV received at time=1656453935 on cpu 0 ***
(BaseHorovodWorker pid=14896) PC: @ 0x7f80eff99fcc (unknown) horovod::common::(anonymous namespace)::BackgroundThreadLoop()
(BaseHorovodWorker pid=14896) @ 0x7f833f117980 4560 (unknown)
(BaseHorovodWorker pid=14896) @ 0x7f833cbffaa3 24 execute_native_thread_routine
(BaseHorovodWorker pid=14896) @ ... and at least 4 more frames
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,709 E 14896 14934] logging.cc:325: *** SIGSEGV received at time=1656453935 on cpu 0 ***
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,709 E 14896 14934] logging.cc:325: PC: @ 0x7f80eff99fcc (unknown) horovod::common::(anonymous namespace)::BackgroundThreadLoop()
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,710 E 14896 14934] logging.cc:325: @ 0x7f833f117980 4560 (unknown)
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,710 E 14896 14934] logging.cc:325: @ 0x7f833cbffaa3 24 execute_native_thread_routine
(BaseHorovodWorker pid=14896) [2022-06-28 22:05:35,710 E 14896 14934] logging.cc:325: @ ... and at least 4 more frames
(BaseHorovodWorker pid=14896) Fatal Python error: Segmentation fault
(BaseHorovodWorker pid=14896)
2022-06-28 22:05:35,839 WARNING worker.py:1728 -- A worker died or was killed while executing a task by an unexpected system error. To troubleshoot the problem, check the logs for the dead worker. RayTask ID: ffffffffffffffff192c4de9ae2ac15aafc2ab1801000000 Worker ID: 64a4d8791c417ee4cc7d4ea3473dcb4fa6303705700c01ddc8bfd3fa Node ID: 78571af15b4fb6c918265637b8618542b59fd3c7f493a81a0b6b7569 Worker IP address: 10.0.2.180 Worker port: 33549 Worker PID: 14896 Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
Traceback (most recent call last):
File "test_0628.py", line 25, in <module>
result = executor.run(simple_fn)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/horovod/ray/runner.py", line 376, in run
return self._maybe_call_ray(self.adapter.run, **kwargs_)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/horovod/ray/runner.py", line 421, in _maybe_call_ray
return driver_func(**kwargs)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/horovod/ray/runner.py", line 599, in run
return ray.get(self._run_remote(fn=f))
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
return func(*args, **kwargs)
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/ray/_private/worker.py", line 2176, in get
raise value
ray.exceptions.RayActorError: The actor died unexpectedly before finishing this task.
class_name: BaseHorovodWorker
actor_id: 192c4de9ae2ac15aafc2ab1801000000
pid: 14896
namespace: acb48abc-7432-420d-923f-2d19a510800c
ip: 10.0.2.180
The actor is dead because its worker process has died. Worker exit type: SYSTEM_ERROR Worker exit detail: Worker unexpectedly exits with a connection error code 2. End of file. There are some potential root causes. (1) The process is killed by SIGKILL by OOM killer due to high memory usage. (2) ray stop --force is called. (3) The worker is crashed unexpectedly due to SIGSEGV or other unexpected errors.
None
Exception ignored in: <function ActorHandle.__del__ at 0x7f93cce2ef70>
Traceback (most recent call last):
File "/home/ubuntu/anaconda3/envs/hd/lib/python3.8/site-packages/ray/actor.py", line 1029, in __del__
AttributeError: 'NoneType' object has no attribute 'worker'
``` | closed | 2022-06-28T22:41:00Z | 2025-03-20T19:51:17Z | https://github.com/horovod/horovod/issues/3589 | [
"bug"
] | JiahaoYao | 1 |
seleniumbase/SeleniumBase | web-scraping | 2,753 | Interrupting the recorder from outside the terminal | Hi @mdmintz, I am creating a product using Seleniumbase library to record and playback my browser actions. The only problem I am facing is that after I run the command **sbase mkrec my_project.py --url=wikipedia.org** and it starts the recorder, the only way to stop is to type 'c' and press "Enter" in the command line. Once I start the recording at the user end, the user cannot go to the terminal to press 'c' and hit 'enter' as the users don't have the access to the terminal. So I want to kill/interrupt/add breakpoint from outside the terminal either by hitting a shortcut key or calling another script, or I can make changes to the installed seleniumbase script through the python package. Please enlighten me with the solution as it has become a bottleneck in the project. | closed | 2024-05-06T08:14:51Z | 2024-05-06T19:29:03Z | https://github.com/seleniumbase/SeleniumBase/issues/2753 | [
"duplicate"
] | Roboflex30 | 1 |
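One possible workaround for the report above (a sketch, not a SeleniumBase feature): run the recorder as a child process with a pipe on stdin, and write the `c` + Enter it is waiting for when your own stop condition fires. The `sbase mkrec` command is only shown for context; the runnable demo substitutes a stand-in child that also blocks on `input()`.

```python
import subprocess
import sys

# The real command would be something like:
#   recorder_cmd = ["sbase", "mkrec", "my_project.py", "--url=wikipedia.org"]
# Stand-in child that, like the recorder, blocks until it reads a line:
child = subprocess.Popen(
    [sys.executable, "-c", "print(input())"],
    stdin=subprocess.PIPE, stdout=subprocess.PIPE, text=True)

out, _ = child.communicate("c\n")  # programmatic equivalent of typing 'c' + Enter
print(out.strip())  # → c
```

In a GUI product, the shortcut-key handler would call `child.stdin.write("c\n")` (followed by a flush) instead of asking the user to find the terminal.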
AntonOsika/gpt-engineer | python | 251 | OPENAI_API_KEY | Hello, I've been trying to set up gpt-engineer, but I can't get past the OpenAI API key setup.
I have tried:
- SET OPENAI... through the command window
- import openai... key in main.py
- import openai... key in ai.py
None of it worked; I'm on a PC, btw.
Here is the error I get:
-------------------------------------------------------------------------------------------------------------------
┌─────────────────────────────── Traceback (most recent call last) ────────────────────────────────┐
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\gpt_engineer\main.py:44 │
│ in main │
│ │
│ 41 │ │ shutil.rmtree(memory_path, ignore_errors=True) │
│ 42 │ │ shutil.rmtree(workspace_path, ignore_errors=True) │
│ 43 │ │
│ > 44 │ ai = AI( │
│ 45 │ │ model=model, │
│ 46 │ │ temperature=temperature, │
│ 47 │ ) │
│ │
│ ┌─────────────────────────────────────────── locals ───────────────────────────────────────────┐ │
│ │ delete_existing = False │ │
│ │ input_path = WindowsPath('C:/Users/Admin/AI/GptEngineer/gpt-engineer/projects/i2s_micr… │ │
│ │ memory_path = WindowsPath('C:/Users/Admin/AI/GptEngineer/gpt-engineer/projects/i2s_micr… │ │
│ │ model = 'gpt-4' │ │
│ │ project_path = 'projects/i2s_microphone' │ │
│ │ run_prefix = '' │ │
│ │ steps_config = 'default' │ │
│ │ temperature = 0.1 │ │
│ │ verbose = False │ │
│ │ workspace_path = WindowsPath('C:/Users/Admin/AI/GptEngineer/gpt-engineer/projects/i2s_micr… │ │
│ └──────────────────────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\gpt_engineer\ai.py:13 │
│ in __init__ │
│ │
│ 10 │ │ self.temperature = temperature │
│ 11 │ │ │
│ 12 │ │ try: │
│ > 13 │ │ │ openai.Model.retrieve(model) │
│ 14 │ │ │ self.model = model │
│ 15 │ │ except openai.InvalidRequestError: │
│ 16 │ │ │ print( │
│ │
│ ┌──────────────────────────── locals ─────────────────────────────┐ │
│ │ model = 'gpt-4' │ │
│ │ self = <gpt_engineer.ai.AI object at 0x0000024E2D000700> │ │
│ │ temperature = 0.1 │ │
│ └─────────────────────────────────────────────────────────────────┘ │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\ab │
│ stract\api_resource.py:20 in retrieve │
│ │
│ 17 │ │ cls, id, api_key=None, request_id=None, request_timeout=None, **params │
│ 18 │ ): │
│ 19 │ │ instance = cls(id=id, api_key=api_key, **params) │
│ > 20 │ │ instance.refresh(request_id=request_id, request_timeout=request_timeout) │
│ 21 │ │ return instance │
│ 22 │ │
│ 23 │ @classmethod │
│ │
│ ┌─────────────────────────── locals ───────────────────────────┐ │
│ │ api_key = None │ │
│ │ cls = <class 'openai.api_resources.model.Model'> │ │
│ │ id = 'gpt-4' │ │
│ │ instance = <Model id=gpt-4 at 0x24e2cff7d80> JSON: { │ │
│ │ "id": "gpt-4" │ │
│ │ } │ │
│ │ params = {} │ │
│ │ request_id = None │ │
│ │ request_timeout = None │ │
│ └──────────────────────────────────────────────────────────────┘ │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_resources\ab │
│ stract\api_resource.py:32 in refresh │
│ │
│ 29 │ │
│ 30 │ def refresh(self, request_id=None, request_timeout=None): │
│ 31 │ │ self.refresh_from( │
│ > 32 │ │ │ self.request( │
│ 33 │ │ │ │ "get", │
│ 34 │ │ │ │ self.instance_url(), │
│ 35 │ │ │ │ request_id=request_id, │
│ │
│ ┌────────────────────────── locals ───────────────────────────┐ │
│ │ request_id = None │ │
│ │ request_timeout = None │ │
│ │ self = <Model id=gpt-4 at 0x24e2cff7d80> JSON: { │ │
│ │ "id": "gpt-4" │ │
│ │ } │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\openai_object.py │
│ :172 in request │
│ │
│ 169 │ ): │
│ 170 │ │ if params is None: │
│ 171 │ │ │ params = self._retrieve_params │
│ > 172 │ │ requestor = api_requestor.APIRequestor( │
│ 173 │ │ │ key=self.api_key, │
│ 174 │ │ │ api_base=self.api_base_override or self.api_base(), │
│ 175 │ │ │ api_type=self.api_type, │
│ │
│ ┌────────────────────────── locals ───────────────────────────┐ │
│ │ headers = None │ │
│ │ method = 'get' │ │
│ │ params = {} │ │
│ │ plain_old_data = False │ │
│ │ request_id = None │ │
│ │ request_timeout = None │ │
│ │ self = <Model id=gpt-4 at 0x24e2cff7d80> JSON: { │ │
│ │ "id": "gpt-4" │ │
│ │ } │ │
│ │ stream = False │ │
│ │ url = '/models/gpt-4' │ │
│ └─────────────────────────────────────────────────────────────┘ │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\api_requestor.py │
│ :138 in __init__ │
│ │
│ 135 │ │ organization=None, │
│ 136 │ ): │
│ 137 │ │ self.api_base = api_base or openai.api_base │
│ > 138 │ │ self.api_key = key or util.default_api_key() │
│ 139 │ │ self.api_type = ( │
│ 140 │ │ │ ApiType.from_str(api_type) │
│ 141 │ │ │ if api_type │
│ │
│ ┌──────────────────────────────────── locals ─────────────────────────────────────┐ │
│ │ api_base = None │ │
│ │ api_type = None │ │
│ │ api_version = None │ │
│ │ key = None │ │
│ │ organization = None │ │
│ │ self = <openai.api_requestor.APIRequestor object at 0x0000024E2D000940> │ │
│ └─────────────────────────────────────────────────────────────────────────────────┘ │
│ │
│ C:\Users\Admin\AppData\Local\Programs\Python\Python310\lib\site-packages\openai\util.py:186 in │
│ default_api_key │
│ │
│ 183 │ elif openai.api_key is not None: │
│ 184 │ │ return openai.api_key │
│ 185 │ else: │
│ > 186 │ │ raise openai.error.AuthenticationError( │
│ 187 │ │ │ "No API key provided. You can set your API key in code using 'openai.api_key │
│ 188 │ │ ) │
│ 189 │
└──────────────────────────────────────────────────────────────────────────────────────────────────┘
AuthenticationError: No API key provided. You can set your API key in code using 'openai.api_key = <API-KEY>', or you
can set the environment variable OPENAI_API_KEY=<API-KEY>). If your API key is stored in a file, you can point the
openai module at it with 'openai.api_key_path = <PATH>'. You can generate API keys in the OpenAI web interface. See
https://platform.openai.com/account/api-keys for details.
| closed | 2023-06-20T12:38:51Z | 2023-06-20T22:17:21Z | https://github.com/AntonOsika/gpt-engineer/issues/251 | [] | Niutonian | 7 |
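For anyone hitting the same trace: the `openai` package reads the key from the `OPENAI_API_KEY` environment variable of the *running* process, so it must be set in the same shell session that launches gpt-engineer (on Windows, `set` only affects the current cmd window, and `setx` only takes effect in *new* windows). A minimal in-process check — the key value below is a placeholder, not a real key:

```python
import os

# Setting the variable in the parent process before gpt-engineer imports openai
# is equivalent to running `set OPENAI_API_KEY=...` in the same cmd window.
os.environ["OPENAI_API_KEY"] = "sk-placeholder"  # placeholder, not a real key

# The pre-1.0 openai client picks it up via util.default_api_key(), or it can
# be assigned directly:
# import openai
# openai.api_key = os.environ["OPENAI_API_KEY"]
print("OPENAI_API_KEY" in os.environ)  # → True
```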
google-research/bert | nlp | 379 | Where can I find the "model.ckpt" file? | python run_classifier.py \
--task_name=MRPC \
--do_train=true \
--do_eval=true \
--data_dir=$GLUE_DIR/MRPC \
--vocab_file=$BERT_BASE_DIR/vocab.txt \
--bert_config_file=$BERT_BASE_DIR/bert_config.json \
**--init_checkpoint=$BERT_BASE_DIR/bert_model.ckpt** \
--max_seq_length=128 \
--train_batch_size=32 \
--learning_rate=2e-5 \
--num_train_epochs=3.0 \
--output_dir=/tmp/mrpc_output/
I tried running this command, but the "bert_model.ckpt" file does not exist at all; what I actually got is "bert_model.ckpt.data-00000-of-00001", "bert_model.ckpt.index", and "bert_model.ckpt.meta". Did I miss "bert_model.ckpt", or is there another way to run the command without it? | open | 2019-01-19T07:51:40Z | 2019-01-30T03:04:29Z | https://github.com/google-research/bert/issues/379 | [] | a907471325 | 1 |
keras-team/keras | tensorflow | 20,676 | Untracked embedding layer inside Sequential keras 3 model | During variable tracking, there is an issue where an embedding layer in a Keras 3 model is not tracked properly, whereas the same model in Keras 2 is tracked properly.
Here is the error message:
```
AssertionError: Tried to export a function which references an 'untracked' resource. TensorFlow objects (e.g. tf.Variable) captured by functions must be 'tracked' by assigning them to an attribute of a tracked object or assigned to an attribute of the main object directly. See the information below:
Function name = b'__inference_signature_wrapper_604'
Captured Tensor = <ResourceHandle(name="sequential/embedding/embeddings/8", device="/job:localhost/replica:0/task:0/device:CPU:0", container="Anonymous", type="tensorflow::Var", dtype and shapes : "[ DType enum: 1, Shape: [10,5] ]")>
Trackable referencing this tensor = <tf.Variable 'sequential/embedding/embeddings:0' shape=(10, 5) dtype=float32>
Internal Tensor = Tensor("600:0", shape=(), dtype=resource)
```
These are the model functions used for the Keras 2 and Keras 3 models, respectively:
```
import keras      # Keras 3 (import assumed from the issue context)
import tf_keras   # Keras 2, the tf-keras compatibility package (import assumed)

def build_embedding_keras_model(vocab_size=10):
    """Builds a test model with an embedding initialized to one-hot vectors."""
    keras_model = tf_keras.models.Sequential()
    keras_model.add(tf_keras.layers.Embedding(input_dim=vocab_size, output_dim=5,
                                              embeddings_initializer=keras.initializers.RandomUniform(seed=42)))
    keras_model.add(tf_keras.layers.Softmax())
    return keras_model

def build_embedding_keras3_model(vocab_size=10):
    """Builds a test model with an embedding initialized to one-hot vectors."""
    keras_model = keras.models.Sequential()
    keras_model.add(keras.layers.Embedding(input_dim=vocab_size, output_dim=5,
                                           embeddings_initializer=keras.initializers.RandomUniform(seed=42)))
    keras_model.add(keras.layers.Softmax())
    return keras_model
```
The tensor called "handle" inside the embedding does not seem to be tracked properly:

Here is the func graph after tracking:

The error is raised inside the ExportedConcreteFunction call during mapping of captured tensors:
https://github.com/tensorflow/tensorflow/blob/96a931bb3e145719ae111507f004b151a653027d/tensorflow/python/eager/polymorphic_function/saved_model_exported_concrete.py#L45
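The assertion can be illustrated without TensorFlow: the sketch below is a toy object-graph walk in plain Python (all names hypothetical, not TensorFlow internals) showing how a resource that a function captures but that is not reachable from the root object's attributes gets flagged as "untracked":

```python
# Toy illustration of the "untracked resource" check: export is only allowed
# if every resource captured by a function is reachable by walking attributes
# from the root trackable object.
class Resource:
    def __init__(self, name):
        self.name = name

def reachable(root):
    """Collect all Resource objects reachable from root via instance attributes."""
    seen, todo, found = set(), [root], set()
    while todo:
        obj = todo.pop()
        if id(obj) in seen:
            continue
        seen.add(id(obj))
        if isinstance(obj, Resource):
            found.add(obj)
        for value in vars(obj).values():
            if hasattr(value, "__dict__"):
                todo.append(value)
    return found

class Layer: pass
class Model: pass

model = Model()
model.layer = Layer()
model.layer.embeddings = Resource("embedding/embeddings")  # tracked: attached to the model
orphan = Resource("untracked")                             # captured but never attached

captured = {model.layer.embeddings, orphan}
untracked = captured - reachable(model)
print(sorted(r.name for r in untracked))
```

In the real exporter the same mismatch between the function's captures and the trackable object graph is what raises the `AssertionError` quoted above.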
| closed | 2024-12-20T16:55:08Z | 2025-01-25T01:56:57Z | https://github.com/keras-team/keras/issues/20676 | [
"stat:awaiting response from contributor",
"stale",
"type:Bug"
] | markomitos | 4 |
ydataai/ydata-profiling | data-science | 824 | High correlation warning printed multiple times | I get the same warning "High correlation" with the same other column four times in the report.
Looks like a bug where the warning is accidentally generated multiple times or not de-duplicated properly.
Is it easy to spot the issue or reproduce? Or should I try to extract a standalone test case?
This is with pandas 1.3.0 and pandas-profiling 3.0.0.
<img width="572" alt="Screenshot 2021-09-05 at 18 54 44" src="https://user-images.githubusercontent.com/852409/132135015-45c0a273-763a-430e-b12f-d340e79b3ea7.png">
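A fix would presumably de-duplicate messages by a stable key before rendering. Here is a minimal sketch of that idea, using hypothetical `Message` records rather than pandas-profiling's actual internals:

```python
from collections import namedtuple

# Hypothetical message record; pandas-profiling's real message objects differ.
Message = namedtuple("Message", ["column", "kind", "other"])

def dedupe(messages):
    """Keep only the first occurrence of each (column, kind, other) key."""
    seen, out = set(), []
    for m in messages:
        key = (m.column, m.kind, m.other)
        if key not in seen:
            seen.add(key)
            out.append(m)
    return out

# Four identical "high correlation" messages collapse to one.
msgs = [Message("price", "HIGH_CORRELATION", "cost")] * 4
print(len(dedupe(msgs)))
```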
| closed | 2021-09-05T16:58:47Z | 2022-08-23T22:59:23Z | https://github.com/ydataai/ydata-profiling/issues/824 | [
"bug 🐛",
"getting started ☝",
"help wanted 🙋"
] | cdeil | 2 |
Yorko/mlcourse.ai | scikit-learn | 756 | Proofread topic 5 | - Fix issues
- Fix typos
- Correct the translation where needed
- Add images where necessary | closed | 2023-10-24T07:41:21Z | 2024-08-25T07:50:33Z | https://github.com/Yorko/mlcourse.ai/issues/756 | [
"enhancement",
"wontfix",
"articles"
] | Yorko | 1 |
nl8590687/ASRT_SpeechRecognition | tensorflow | 29 | Low accuracy | I tested with my own recordings and the accuracy feels quite low. How can I improve the accuracy? Thanks. | closed | 2018-07-20T06:23:37Z | 2018-08-13T09:41:09Z | https://github.com/nl8590687/ASRT_SpeechRecognition/issues/29 | [] | shikongy | 1 |
vimalloc/flask-jwt-extended | flask | 452 | Supported cryptography version | https://github.com/vimalloc/flask-jwt-extended/blob/6d726bee8ed3fc06d7cf88b034812a192d6ddc10/setup.py#L33
A new versioning scheme is being used by cryptography: https://cryptography.io/en/latest/api-stability/#versioning. | closed | 2021-10-07T17:59:37Z | 2022-02-18T19:06:18Z | https://github.com/vimalloc/flask-jwt-extended/issues/452 | [] | decaz | 4 |
huggingface/datasets | nlp | 7,086 | load_dataset ignores cached datasets and tries to hit HF Hub, resulting in API rate limit errors | ### Describe the bug
I have been running lm-eval-harness a lot, which has resulted in an API rate limit. This seems strange, since all of the data should be cached locally. I have in fact verified this.
### Steps to reproduce the bug
1. Be Me
2. Run `load_dataset("TAUR-Lab/MuSR")`
3. Hit rate limit error
4. Dataset is in .cache/huggingface/datasets
5. ???
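One workaround, assuming the documented `HF_DATASETS_OFFLINE` environment variable is honored by this `datasets` version, is to force offline mode so cached data is served without contacting the Hub:

```python
import os

# Must be set before `datasets` is imported, since the flag is read when the
# library initializes its config.
os.environ["HF_DATASETS_OFFLINE"] = "1"

# Hypothetical usage once the flag is set:
#   from datasets import load_dataset
#   ds = load_dataset("TAUR-Lab/MuSR")   # now served from the local cache

print(os.environ["HF_DATASETS_OFFLINE"])
```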
### Expected behavior
We should not run into API rate limits if we have cached the dataset
### Environment info
datasets 2.16.0
python 3.10.4 | open | 2024-08-02T18:12:23Z | 2024-08-02T18:12:23Z | https://github.com/huggingface/datasets/issues/7086 | [] | tginart | 0 |
google-deepmind/sonnet | tensorflow | 83 | get_variables_in_module() failed to get local variables in module | Dear deepminder:
In some scenarios I intend to wrap tf.metrics into a Sonnet module for convenience. The following is the wrapper class:
```
import tensorflow as tf
import sonnet as snt

class Metrics(snt.AbstractModule):
def __init__(self, indicator, name = "metrics"):
super(Metrics, self).__init__(name = name)
self._indicator = indicator
def _build(self, labels, logits):
if self._indicator == "accuracy":
metric, metric_update = tf.metrics.accuracy(labels, logits)
with tf.control_dependencies([metric_update]):
outputs = tf.identity(metric)
elif self._indicator == "precision":
metric, metric_update = tf.metrics.precision(labels, logits)
with tf.control_dependencies([metric_update]):
outputs = tf.identity(metric)
elif self._indicator == "recall":
metric, metric_update = tf.metrics.recall(labels, logits)
with tf.control_dependencies([metric_update]):
outputs = tf.identity(metric)
elif self._indicator == "f1_score":
metric_recall, metric_update_recall = tf.metrics.recall(labels, logits)
metric_precision, metric_update_precision = tf.metrics.precision(labels, logits)
with tf.control_dependencies([metric_update_recall, metric_update_precision]):
outputs = 2.0 / (1.0 / metric_recall + 1.0 / metric_precision)
else:
raise ValueError("unsupported metrics")
return outputs
```
The test code is as follows:
```
def test2():
import numpy as np
labels = tf.constant([1, 0, 0, 0, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], tf.int32)
logits = tf.constant([1, 0, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0], tf.int32)
metrics = Metrics("accuracy")
accuracy = metrics(labels, logits)
with tf.Session() as sess:
sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()]) # NOT applicable in practice
# variables_names = [v.name for v in snt.get_variables_in_module(metrics)] # Nothing returns
variables_names = [v.name for v in tf.get_collection(tf.GraphKeys.LOCAL_VARIABLES, scope="metrics")] # current work-around
values = sess.run(variables_names)
for k, v in zip(variables_names, values):
print "Variable: ", k
print "Shape: ", v.shape
print v
'''
score = sess.run(accuracy)
print(score)
'''
if __name__ == "__main__":
test2()
```
If I want to inspect the metrics for each batch, I cannot rely on tf.local_variables_initializer(), since other modules might also put variables into the local-variable collection for their own maintenance. So I have to retrieve the variables from the Metrics module for proper management, for example re-initialization.
The problem is that when I call snt.get_variables_in_module(metrics), it returns nothing. I just wonder: is this the behavior by design, or does it need some enhancement?
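The work-around in the test code leans on TF1's graph-collection mechanism. The sketch below is a pure-Python toy of that mechanism (not TensorFlow itself) showing why scope-filtered retrieval finds the tf.metrics locals even when the module wrapper reports nothing:

```python
import re

# Toy model of TF1 graph collections: variable names are scoped with "/" and
# a get_collection(key, scope=...) call filters by a scope-prefix regex, the
# way tf.get_collection does.
collections = {"local_variables": []}

def add_to_collection(key, name):
    collections[key].append(name)

def get_collection(key, scope=None):
    items = collections.get(key, [])
    if scope is None:
        return list(items)
    pattern = re.compile(scope)
    return [n for n in items if pattern.match(n)]

# tf.metrics.accuracy creates its count/total local variables under the
# enclosing module's name scope ("metrics" here).
add_to_collection("local_variables", "metrics/accuracy/count:0")
add_to_collection("local_variables", "metrics/accuracy/total:0")
add_to_collection("local_variables", "other_module/state:0")

print(get_collection("local_variables", scope="metrics"))
```

Scope filtering selects only the module's own locals, leaving other modules' local variables untouched for their own re-initialization.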
Thanks a lot. | closed | 2018-05-17T01:36:37Z | 2018-06-21T11:53:13Z | https://github.com/google-deepmind/sonnet/issues/83 | [] | mingyr | 1 |
Johnserf-Seed/TikTokDownload | api | 771 | [BUG] Fetching TikTok videos fails, keeps spinning | - [ ] I checked the [documentation](https://johnserf-seed.github.io/f2/quick-start.html) and the [closed issues](https://github.com/Johnserf-Seed/f2/issues?q=is%3Aissue+is%3Aclosed) for possible solutions.
- [ ] I could not find my problem in [Frequently Asked Questions and Solutions](https://johnserf-seed.github.io/f2/question-answer/qa.html).
- [ ] ~~*Your issue is public; please remember to remove sensitive personal information before uploading*~~
- [ ] Issues that do not follow the template will not be handled with priority.
- If the bug is visual, please attach a screenshot under **Screenshots**. If you are a developer, please provide a minimal code example demonstrating the problem under **Error reproduction**.
- Identical issues will be marked `duplicate`; if your issue is marked `confirmed`, it will be fixed in a later release, so please keep an eye on it.
- To unsubscribe from email notifications, click `unsubscribe` at the bottom of the email.
**Describe the bug in detail**
I can open TikTok in the browser and watch videos, using a cookie from a logged-out session. Downloading a user's homepage posts just keeps spinning and never downloads. Is this a network problem? I do have a VPN turned on, but still...
**System platform**
<details>
<summary>Click to expand</summary>
Q: Which platform (Win/Linux/Mac) are you running on? Which browser are you using? Which terminal software are you using? Which version of F2 are you using?
A:
- Operating system: [e.g. Win10 x64 22H2 19045.4046]
- Browser: [google]
- Terminal: [e.g. WT 1.18.10301.0]
- F2 version: [e.g. 0.0.1.6]
</details>
**Error reproduction**
<details>
<summary>Click to expand</summary>
Q: Please copy and paste the command and configuration file contents that were running when the error occurred, along with the steps to reproduce the behavior. Providing complete information in one go saves a lot of time in solving the problem.
A:
INFO Custom config path: F:\学习资源\github开源项目\视频去水印\f2\f2\conf\app.yaml
INFO Date filter range: 2024-01-01 00:00:00 to 2025-01-01 23:59:59
INFO [{'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7421595112062323976', 'collectCount': 25400,
'collected': False, 'commentCount': 1902, 'createTime': '2024-10-04 00-59-31', 'desc': '_________2__MEOVV__',
'desc_raw': '집에서 뮤뱅찍기 2 @MEOVV 🤍' , 'diggCount': 379500, 'digged': False, 'duetDisplay': 0,
'duetEnabled': True, 'forFriend': False, 'isPinnedItem': True, 'itemCommentStatus': 0, 'music_album': None,
'music_authorName': 'pinktiez___', 'music_authorName_raw': 'pinktiez🐈\u200d⬛', 'music_coverLarge':
'https://p16-sign-va.tiktokcdn.com/tos-maliva-avt-0068/794082ba080922daea8bf3ede9000b3d~c5_1080x1080.jpeg?lk3s
=a5d48078&nonce=15262&refresh_token=339f1dbecf0db86f0c12e9f8381181f0&x-expires=1733364000&x-signature=tJ9%2B93
DyJlhZ%2FMSp5A%2B4ZKGl0Ek%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 35, 'music_id':
'7411559205082581765', 'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/630429d6a13c491ea3201c3dd17535cf/674e79ad/video/tos/useast2a/tos-useast2a-v-27dcd7
/oIubPA2NbSvC0tjis4dM9XUJBIMZQi6l9lVAI/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C
0&br=250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=OWgzNTM1NjplaTw7aTwzOUBpM3E7eHA5cm5sdTM
zNzU8M0BiNTViLTJgXjYxM14tMV5iYSMzbjRyMmRrX2FgLS1kMTZzcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag
=e00088000&cc=3', 'music_title': 'original_sound', 'music_title_raw': 'original sound', 'nickname':
'____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 3100000, 'privateItem': False, 'secUid':
'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 4897,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 17, 'hashtagName': '', 'isCommerce': False,
'secUid': 'MS4wLjABAAAAPAoZ9652CkIMxk6pWt2fzmkou0cES0v6vgDmsHIlMK0B5fS-mvmnprJGc_ECvtmb', 'start': 11,
'subType': 0, 'type': 0, 'userId': '7180255003483014145', 'userUniqueId': 'meovv_official'}], 'uid':
'77415581122', 'video_bitrate': 2348351, 'video_bitrateInfo': [2348351, 616147], 'video_codecType': 'h264',
'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/5b16d99cb7f348998b5092899b52144f_1727974780?lk3s=81f88
b70&x-expires=1733364000&x-signature=nCB6sL336RpIDqxX5jkex0US2Dw%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 35, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/63e7c4e15ab4473780e8291972f3bb08_1727974780?lk3s=81f88
b70&x-expires=1733364000&x-signature=vy1jOQJ1sDvReRgQ3yvltjO1Wx0%3D&shp=81f88b70&shcp=-', 'video_height':
1280, 'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/okQbPDiznAtPAG1RAgW8fzQMp8eitYZQGR
FfeR/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=4586&bt=2293&cs=0&ds=3&ft=-Csk_mfM
PD12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=O2ZpZDdlZThmaTg3PGZnOUBpanJtNGs5cm9mdTMzODczNEAwL2Ez
MWIwXy8xMS4xL2AuYSNranFxMmQ0bHNgLS1kMTFzcw%3D%3D&btag=e00088000&expire=1733214205&l=20241203022249CF3AAE191D82
C204807B&ply_type=2&policy=2&signature=196bc02b4ea178590f2fe3332dade7af&tk=tt_chain_token', 'video_width':
720}, {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7404458343873465608', 'collectCount': 83900,
'collected': False, 'commentCount': 1561, 'createTime': '2024-08-18 20-40-05', 'desc': '#asap_#newjeans_',
'desc_raw': '#asap #newjeans ', 'diggCount': 831200, 'digged': False, 'duetDisplay': 0, 'duetEnabled': True,
'forFriend': False, 'isPinnedItem': True, 'itemCommentStatus': 0, 'music_album': None, 'music_authorName':
'____Hyemin', 'music_authorName_raw': '김혜민 Hyemin', 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/1e8faa42eb7e094bfe088411ce19003f.jpeg?lk
3s=a5d48078&nonce=50416&refresh_token=5110d31369f43fdda764ab025abbdf66&x-expires=1733364000&x-signature=QWIq15
29m0ehJcUFNiOGY91XiFo%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 20, 'music_id': '7404458371564260113',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/11021f6ef0b530204fd9f4f4428c21f8/674e799e/video/tos/alisg/tos-alisg-v-27dcd7/o4fMW
C7C9opesMdHBoTqQ8RTgrgmNgEtJDyTAe/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=NDxnPDQzNDw0OmQzOzxnOUBpM293OG85cmR4dTMzODU8
NEAxYjBeXzAvXzIxM2JeL141YSNlcGExMmRjay1gLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
b8000&cc=3', 'music_title': '_______________Hyemin', 'music_title_raw': '오리지널 사운드 - 김혜민 Hyemin',
'nickname': '____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 5100000, 'privateItem': False,
'secUid': 'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 11700,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 5, 'hashtagName': 'asap', 'isCommerce': False,
'start': 0, 'subType': 0, 'type': 1}, {'awemeId': '', 'end': 15, 'hashtagName': 'newjeans', 'isCommerce':
False, 'start': 6, 'subType': 0, 'type': 1}], 'uid': '77415581122', 'video_bitrate': 773399,
'video_bitrateInfo': [773399, 677566, 487157, 411524], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/66d32c56e24341e3906c3b658858d016_1723984807?lk3s=81f88
b70&x-expires=1733364000&x-signature=9SzSNuFEjcEexf%2BdFtPLcGiYcXI%3D&shp=81f88b70&shcp=-',
'video_definition': '720p', 'video_duration': 20, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/b28a1dff27784c9e9e028031f21c636a_1723984808?lk3s=81f88
b70&x-expires=1733364000&x-signature=CiSpIKfsj28wgO6fSyfXEo20jE0%3D&shp=81f88b70&shcp=-', 'video_height':
1280, 'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/ooTu8Q4DNC6oEKeFEB5ebAIEIDABfgrA1j
E6bR/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1510&bt=755&cs=0&ds=3&ft=-Csk_mfMP
D12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=ZDo2Z2g2PGRmNDY7OmQzZUBpajc0OW45cjd4dTMzODczNEAwLTZhN
i1eXjQxM2IzLl8wYSM2cGguMmRjay1gLS1kMWBzcw%3D%3D&btag=e000b8000&expire=1733214190&l=20241203022249CF3AAE191D82C
204807B&ply_type=2&policy=2&signature=25cbc1ac9e3d365ec827a91c9d04418d&tk=tt_chain_token', 'video_width':
720}, {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7442713688047357202', 'collectCount': 23800,
'collected': False, 'commentCount': 1082, 'createTime': '2024-11-29 22-50-20', 'desc': '1_____________',
'desc_raw': '1미터 탕후루 도전 ⭐️ ', 'diggCount': 620900, 'digged': False, 'duetDisplay': 0, 'duetEnabled':
True, 'forFriend': False, 'isPinnedItem': None, 'itemCommentStatus': 0, 'music_album': None,
'music_authorName': '____Hyemin', 'music_authorName_raw': '김혜민 Hyemin', 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/1e8faa42eb7e094bfe088411ce19003f.jpeg?lk
3s=a5d48078&nonce=50416&refresh_token=5110d31369f43fdda764ab025abbdf66&x-expires=1733364000&x-signature=QWIq15
29m0ehJcUFNiOGY91XiFo%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 8, 'music_id': '7442713703978453761',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/4aae196eac5e874eefbec1ae09f08178/674e7992/video/tos/alisg/tos-alisg-v-27dcd7/owOSI
QesuDCL17IHXdF6BJnCtU0gSBvSMQftJE/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=aGQ8PDhnaTVnNDs0aDg4NEBpMzw0Onc5cmV3dzMzODU8
NEBfMmAtXjEwXmExXzFhMTFjYSNjaGFfMmRrNjFgLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
b0000&cc=3', 'music_title': '_______________Hyemin', 'music_title_raw': '오리지널 사운드 - 김혜민 Hyemin',
'nickname': '____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 3500000, 'privateItem': False,
'secUid': 'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 75800,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 31, 'hashtagName': 'fyp', 'isCommerce': False,
'start': 27, 'subType': 0, 'type': 1}], 'uid': '77415581122', 'video_bitrate': 1338662, 'video_bitrateInfo':
[1463946, 1338662, 618482], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/os6TSnBoEfIdAFtCWAUIAgBqdnASCODlEQUfR2?lk3s=81f88b70&x
-expires=1733364000&x-signature=ByHk8Qp69OzqDseAyFSKROFzpgw%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 8, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/os6TSnBoEfIdAFtCWAUIAgBqdnASCODlEQUfR2?lk3s=81f88b70&x
-expires=1733364000&x-signature=ByHk8Qp69OzqDseAyFSKROFzpgw%3D&shp=81f88b70&shcp=-', 'video_height': 960,
'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/oc1SUI6EDO0RdB1JgEfLAWQYfFjQBnBEkn
2RWJ/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=2614&bt=1307&cs=0&ds=3&ft=-Csk_mfM
PD12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=cnF8b2hsc2d3SkBwaHIxaDFybndmNTk5aDZoZ2RoNjZlNDM6N0Bp
Mzx3dG05cnh3dzMzODczNEBjRl5Nc3FePmJKYSNvYF90aHFmOiNhLTMzYzMwNjIxNjA1Y2JjYSNvci1sMmRrNTFgLS1kMTFzcw%3D%3D&btag=
e000b0000&expire=1733214178&l=20241203022249CF3AAE191D82C204807B&ply_type=2&policy=2&signature=1d92a5bd99c3955
deb58185860dec434&tk=tt_chain_token', 'video_width': 720}, {'hasMore': True, 'cursor': '1731944251000',
'aweme_id': '7440021447516359944', 'collectCount': 1104, 'collected': False, 'commentCount': 324,
'createTime': '2024-11-22 16-43-05', 'desc': '__________________________#fyp_', 'desc_raw': '들리는 대로
립싱크 해보기 (가사 잘 모름🥲 ) #fyp ', 'diggCount': 24800, 'digged': False, 'duetDisplay': 0, 'duetEnabled':
True, 'forFriend': False, 'isPinnedItem': None, 'itemCommentStatus': 0, 'music_album': None,
'music_authorName': '____Hyemin', 'music_authorName_raw': '김혜민 Hyemin', 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/1e8faa42eb7e094bfe088411ce19003f.jpeg?lk
3s=a5d48078&nonce=50416&refresh_token=5110d31369f43fdda764ab025abbdf66&x-expires=1733364000&x-signature=QWIq15
29m0ehJcUFNiOGY91XiFo%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 19, 'music_id': '7440021468609530640',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/a917ea3771b4e5c7db6efc5b127744e3/674e799d/video/tos/alisg/tos-alisg-v-27dcd7/ooqaE
fDVBANziAlKNAiRCzIBoAoM2uNw93EVIB/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=NGVlaTVmaTQ7O2llZWU8NUBpMzZuOHc5cjk3dzMzODU8
NEBeMi5iXl5eNWExNV4yLS0yYSNrbmBvMmRzNS1gLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
b8000&cc=3', 'music_title': '_______________Hyemin', 'music_title_raw': '오리지널 사운드 - 김혜민 Hyemin',
'nickname': '____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 480300, 'privateItem': False,
'secUid': 'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 82,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 19, 'hashtagName': 'fyp', 'isCommerce': False,
'start': 15, 'subType': 0, 'type': 1}], 'uid': '77415581122', 'video_bitrate': 925416, 'video_bitrateInfo':
[925416, 743644, 398544], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/osIVmBE0YaAJD1ABBhAVIIpa8oPRnziBL1BiZ?lk3s=81f88b70&x-
expires=1733364000&x-signature=KdtCKEZ6bVvsG%2FpOXYAiLZ24Tgs%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 18, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/oER0AYgsZan1BaVBHFmAA5mPEzLDiPBIBB1iI?lk3s=81f88b70&x-
expires=1733364000&x-signature=sLL2W44Ocs4DWxdAN76Vi%2BYhbFI%3D&shp=81f88b70&shcp=-', 'video_height': 1280,
'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/osAPGDBLaABEEYVniCFz1iI1aQC0RiIZAB
mGw/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1806&bt=903&cs=0&ds=3&ft=-Csk_mfMPD
12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=cnF8b2hsc2d3SkBwaHIxaDFybndmNDQ5MzplNDc8Nzs4NWVpM0BpMz
pzd285cm03dzMzODczNEBjRl5Nc3FePmJKYSNvYF90aHFmOiMzLmMwMmAuXi4xYjU0NjY0YSNhaWAyMmQ0NC1gLS1kMTFzcw%3D%3D&btag=e0
00b8000&expire=1733214189&l=20241203022249CF3AAE191D82C204807B&ply_type=2&policy=2&signature=fa5aced4a45c5b8aa
96f66696f3bb389&tk=tt_chain_token', 'video_width': 720}, {'hasMore': True, 'cursor': '1731944251000',
'aweme_id': '7438650119928466696', 'collectCount': 6416, 'collected': False, 'commentCount': 383,
'createTime': '2024-11-19 00-01-42', 'desc': '______________#fyp_', 'desc_raw': '하이디라오 먹으러 옴 😍 #fyp
', 'diggCount': 99900, 'digged': False, 'duetDisplay': 0, 'duetEnabled': True, 'forFriend': False,
'isPinnedItem': None, 'itemCommentStatus': 0, 'music_album': None, 'music_authorName': '__',
'music_authorName_raw': '𝘀𝗰' , 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/ea108801f0eee6b7929de3a2928495d0.jpeg?lk
3s=a5d48078&nonce=79538&refresh_token=942409db0dba1a31bf1e83af9b11d180&x-expires=1733364000&x-signature=G4BfJ5
xSx4LR7Z3rEl4FWoqhAtk%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 46, 'music_id': '7415614508359617297',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/83633f50aedec8801a013d3413fc02a4/674e79b8/video/tos/alisg/tos-alisg-v-27dcd7/oQIT7
0YTfvHGFVGZCAeuL9EgWwQNDge3Dx0NHo/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=NThpO2Q3Ojo2Z2U6OTNnOUBpajlwO3k5cjlydTMzODU8
NEAuYzUyYC82XzMxNjNfNTMvYSNvbW9zMmRzcGhgLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
88000&cc=3', 'music_title': '_____________', 'music_title_raw': '오리지널 사운드 - 𝘀𝗰' , 'nickname':
'____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 1100000, 'privateItem': False, 'secUid':
'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 1831,
'shareEnabled': True, 'textExtra': None, 'uid': '77415581122', 'video_bitrate': 1560947, 'video_bitrateInfo':
[1560947, 1268673, 659120], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/ookdoQvQGiCbsCZxAIeOA4KeOLzMRODIsfCjZ7?lk3s=81f88b70&x
-expires=1733364000&x-signature=jaVWmDRw0LJK54oNSFoaX96r0vs%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 16, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/ooAC7LOvbdzOGZyojeMCs5IfAoIGDgeQxOJiQC?lk3s=81f88b70&x
-expires=1733364000&x-signature=di%2BKE%2BFza1mS1YFe3zn4tmKdzn0%3D&shp=81f88b70&shcp=-', 'video_height': 960,
'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/oE8xIG2IjoMeOb5CyAgsfIIdQAzOifGEpv
dDLC/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=3048&bt=1524&cs=0&ds=3&ft=-Csk_mfM
PD12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=ZzhpOmllNDtpZWgzOjc5O0BpanI3c3k5cndwdjMzODczNEA2YWFe
Ml42NWAxL14yMV4wYSNkYmZwMmRzM3FgLS1kMTFzcw%3D%3D&btag=e000b8000&expire=1733214186&l=20241203022249CF3AAE191D82
C204807B&ply_type=2&policy=2&signature=2994ece4239841888b6133b27761a764&tk=tt_chain_token', 'video_width':
720}]
INFO Date filter range: 2024-01-01 00:00:00 to 2025-01-01 23:59:59
INFO {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7421595112062323976', 'collectCount': 25400,
'collected': False, 'commentCount': 1902, 'createTime': '2024-10-04 00-59-31', 'desc': '_________2__MEOVV__',
'desc_raw': '집에서 뮤뱅찍기 2 @MEOVV 🤍' , 'diggCount': 379500, 'digged': False, 'duetDisplay': 0,
'duetEnabled': True, 'forFriend': False, 'isPinnedItem': True, 'itemCommentStatus': 0, 'music_album': None,
'music_authorName': 'pinktiez___', 'music_authorName_raw': 'pinktiez🐈\u200d⬛', 'music_coverLarge':
'https://p16-sign-va.tiktokcdn.com/tos-maliva-avt-0068/794082ba080922daea8bf3ede9000b3d~c5_1080x1080.jpeg?lk3s
=a5d48078&nonce=15262&refresh_token=339f1dbecf0db86f0c12e9f8381181f0&x-expires=1733364000&x-signature=tJ9%2B93
DyJlhZ%2FMSp5A%2B4ZKGl0Ek%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 35, 'music_id':
'7411559205082581765', 'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/630429d6a13c491ea3201c3dd17535cf/674e79ad/video/tos/useast2a/tos-useast2a-v-27dcd7
/oIubPA2NbSvC0tjis4dM9XUJBIMZQi6l9lVAI/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C
0&br=250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=OWgzNTM1NjplaTw7aTwzOUBpM3E7eHA5cm5sdTM
zNzU8M0BiNTViLTJgXjYxM14tMV5iYSMzbjRyMmRrX2FgLS1kMTZzcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag
=e00088000&cc=3', 'music_title': 'original_sound', 'music_title_raw': 'original sound', 'nickname':
'____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 3100000, 'privateItem': False, 'secUid':
'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 4897,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 17, 'hashtagName': '', 'isCommerce': False,
'secUid': 'MS4wLjABAAAAPAoZ9652CkIMxk6pWt2fzmkou0cES0v6vgDmsHIlMK0B5fS-mvmnprJGc_ECvtmb', 'start': 11,
'subType': 0, 'type': 0, 'userId': '7180255003483014145', 'userUniqueId': 'meovv_official'}], 'uid':
'77415581122', 'video_bitrate': 2348351, 'video_bitrateInfo': [2348351, 616147], 'video_codecType': 'h264',
'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/5b16d99cb7f348998b5092899b52144f_1727974780?lk3s=81f88
b70&x-expires=1733364000&x-signature=nCB6sL336RpIDqxX5jkex0US2Dw%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 35, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/63e7c4e15ab4473780e8291972f3bb08_1727974780?lk3s=81f88
b70&x-expires=1733364000&x-signature=vy1jOQJ1sDvReRgQ3yvltjO1Wx0%3D&shp=81f88b70&shcp=-', 'video_height':
1280, 'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/okQbPDiznAtPAG1RAgW8fzQMp8eitYZQGR
FfeR/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=4586&bt=2293&cs=0&ds=3&ft=-Csk_mfM
PD12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=O2ZpZDdlZThmaTg3PGZnOUBpanJtNGs5cm9mdTMzODczNEAwL2Ez
MWIwXy8xMS4xL2AuYSNranFxMmQ0bHNgLS1kMTFzcw%3D%3D&btag=e00088000&expire=1733214205&l=20241203022249CF3AAE191D82
C204807B&ply_type=2&policy=2&signature=196bc02b4ea178590f2fe3332dade7af&tk=tt_chain_token', 'video_width':
720}
INFO Post publish time is within the specified range: post id 7421595112062323976, publish time 2024-10-04 00-59-31
INFO Date filter range: 2024-01-01 00:00:00 to 2025-01-01 23:59:59
INFO {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7404458343873465608', 'collectCount': 83900,
'collected': False, 'commentCount': 1561, 'createTime': '2024-08-18 20-40-05', 'desc': '#asap_#newjeans_',
'desc_raw': '#asap #newjeans ', 'diggCount': 831200, 'digged': False, 'duetDisplay': 0, 'duetEnabled': True,
'forFriend': False, 'isPinnedItem': True, 'itemCommentStatus': 0, 'music_album': None, 'music_authorName':
'____Hyemin', 'music_authorName_raw': '김혜민 Hyemin', 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/1e8faa42eb7e094bfe088411ce19003f.jpeg?lk
3s=a5d48078&nonce=50416&refresh_token=5110d31369f43fdda764ab025abbdf66&x-expires=1733364000&x-signature=QWIq15
29m0ehJcUFNiOGY91XiFo%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 20, 'music_id': '7404458371564260113',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/11021f6ef0b530204fd9f4f4428c21f8/674e799e/video/tos/alisg/tos-alisg-v-27dcd7/o4fMW
C7C9opesMdHBoTqQ8RTgrgmNgEtJDyTAe/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=NDxnPDQzNDw0OmQzOzxnOUBpM293OG85cmR4dTMzODU8
NEAxYjBeXzAvXzIxM2JeL141YSNlcGExMmRjay1gLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
b8000&cc=3', 'music_title': '_______________Hyemin', 'music_title_raw': '오리지널 사운드 - 김혜민 Hyemin',
'nickname': '____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 5100000, 'privateItem': False,
'secUid': 'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 11700,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 5, 'hashtagName': 'asap', 'isCommerce': False,
'start': 0, 'subType': 0, 'type': 1}, {'awemeId': '', 'end': 15, 'hashtagName': 'newjeans', 'isCommerce':
False, 'start': 6, 'subType': 0, 'type': 1}], 'uid': '77415581122', 'video_bitrate': 773399,
'video_bitrateInfo': [773399, 677566, 487157, 411524], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/66d32c56e24341e3906c3b658858d016_1723984807?lk3s=81f88
b70&x-expires=1733364000&x-signature=9SzSNuFEjcEexf%2BdFtPLcGiYcXI%3D&shp=81f88b70&shcp=-',
'video_definition': '720p', 'video_duration': 20, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/b28a1dff27784c9e9e028031f21c636a_1723984808?lk3s=81f88
b70&x-expires=1733364000&x-signature=CiSpIKfsj28wgO6fSyfXEo20jE0%3D&shp=81f88b70&shcp=-', 'video_height':
1280, 'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/ooTu8Q4DNC6oEKeFEB5ebAIEIDABfgrA1j
E6bR/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1510&bt=755&cs=0&ds=3&ft=-Csk_mfMP
D12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=ZDo2Z2g2PGRmNDY7OmQzZUBpajc0OW45cjd4dTMzODczNEAwLTZhN
i1eXjQxM2IzLl8wYSM2cGguMmRjay1gLS1kMWBzcw%3D%3D&btag=e000b8000&expire=1733214190&l=20241203022249CF3AAE191D82C
204807B&ply_type=2&policy=2&signature=25cbc1ac9e3d365ec827a91c9d04418d&tk=tt_chain_token', 'video_width': 720}
INFO Post publish time is within the specified range: post id 7404458343873465608, publish time 2024-08-18 20-40-05
INFO Date filter range: 2024-01-01 00:00:00 to 2025-01-01 23:59:59
INFO {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7442713688047357202', 'collectCount': 23800,
'collected': False, 'commentCount': 1082, 'createTime': '2024-11-29 22-50-20', 'desc': '1_____________',
'desc_raw': '1미터 탕후루 도전 ⭐️ ', 'diggCount': 620900, 'digged': False, 'duetDisplay': 0, 'duetEnabled':
True, 'forFriend': False, 'isPinnedItem': None, 'itemCommentStatus': 0, 'music_album': None,
'music_authorName': '____Hyemin', 'music_authorName_raw': '김혜민 Hyemin', 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/1e8faa42eb7e094bfe088411ce19003f.jpeg?lk
3s=a5d48078&nonce=50416&refresh_token=5110d31369f43fdda764ab025abbdf66&x-expires=1733364000&x-signature=QWIq15
29m0ehJcUFNiOGY91XiFo%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 8, 'music_id': '7442713703978453761',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/4aae196eac5e874eefbec1ae09f08178/674e7992/video/tos/alisg/tos-alisg-v-27dcd7/owOSI
QesuDCL17IHXdF6BJnCtU0gSBvSMQftJE/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=aGQ8PDhnaTVnNDs0aDg4NEBpMzw0Onc5cmV3dzMzODU8
NEBfMmAtXjEwXmExXzFhMTFjYSNjaGFfMmRrNjFgLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
b0000&cc=3', 'music_title': '_______________Hyemin', 'music_title_raw': '오리지널 사운드 - 김혜민 Hyemin',
'nickname': '____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 3500000, 'privateItem': False,
'secUid': 'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 75800,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 31, 'hashtagName': 'fyp', 'isCommerce': False,
'start': 27, 'subType': 0, 'type': 1}], 'uid': '77415581122', 'video_bitrate': 1338662, 'video_bitrateInfo':
[1463946, 1338662, 618482], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/os6TSnBoEfIdAFtCWAUIAgBqdnASCODlEQUfR2?lk3s=81f88b70&x
-expires=1733364000&x-signature=ByHk8Qp69OzqDseAyFSKROFzpgw%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 8, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/os6TSnBoEfIdAFtCWAUIAgBqdnASCODlEQUfR2?lk3s=81f88b70&x
-expires=1733364000&x-signature=ByHk8Qp69OzqDseAyFSKROFzpgw%3D&shp=81f88b70&shcp=-', 'video_height': 960,
'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/oc1SUI6EDO0RdB1JgEfLAWQYfFjQBnBEkn
2RWJ/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=2614&bt=1307&cs=0&ds=3&ft=-Csk_mfM
PD12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=cnF8b2hsc2d3SkBwaHIxaDFybndmNTk5aDZoZ2RoNjZlNDM6N0Bp
Mzx3dG05cnh3dzMzODczNEBjRl5Nc3FePmJKYSNvYF90aHFmOiNhLTMzYzMwNjIxNjA1Y2JjYSNvci1sMmRrNTFgLS1kMTFzcw%3D%3D&btag=
e000b0000&expire=1733214178&l=20241203022249CF3AAE191D82C204807B&ply_type=2&policy=2&signature=1d92a5bd99c3955
deb58185860dec434&tk=tt_chain_token', 'video_width': 720}
INFO Post publish time is within the specified range: post id 7442713688047357202, publish time 2024-11-29 22-50-20
INFO Filter date range: 2024-01-01 00:00:00 to 2025-01-01 23:59:59
INFO {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7440021447516359944', 'collectCount': 1104,
'collected': False, 'commentCount': 324, 'createTime': '2024-11-22 16-43-05', 'desc':
'__________________________#fyp_', 'desc_raw': '들리는 대로 립싱크 해보기 (가사 잘 모름🥲 ) #fyp ',
'diggCount': 24800, 'digged': False, 'duetDisplay': 0, 'duetEnabled': True, 'forFriend': False,
'isPinnedItem': None, 'itemCommentStatus': 0, 'music_album': None, 'music_authorName': '____Hyemin',
'music_authorName_raw': '김혜민 Hyemin', 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/1e8faa42eb7e094bfe088411ce19003f.jpeg?lk
3s=a5d48078&nonce=50416&refresh_token=5110d31369f43fdda764ab025abbdf66&x-expires=1733364000&x-signature=QWIq15
29m0ehJcUFNiOGY91XiFo%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 19, 'music_id': '7440021468609530640',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/a917ea3771b4e5c7db6efc5b127744e3/674e799d/video/tos/alisg/tos-alisg-v-27dcd7/ooqaE
fDVBANziAlKNAiRCzIBoAoM2uNw93EVIB/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=NGVlaTVmaTQ7O2llZWU8NUBpMzZuOHc5cjk3dzMzODU8
NEBeMi5iXl5eNWExNV4yLS0yYSNrbmBvMmRzNS1gLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
b8000&cc=3', 'music_title': '_______________Hyemin', 'music_title_raw': '오리지널 사운드 - 김혜민 Hyemin',
'nickname': '____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 480300, 'privateItem': False,
'secUid': 'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 82,
'shareEnabled': True, 'textExtra': [{'awemeId': '', 'end': 19, 'hashtagName': 'fyp', 'isCommerce': False,
'start': 15, 'subType': 0, 'type': 1}], 'uid': '77415581122', 'video_bitrate': 925416, 'video_bitrateInfo':
[925416, 743644, 398544], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/osIVmBE0YaAJD1ABBhAVIIpa8oPRnziBL1BiZ?lk3s=81f88b70&x-
expires=1733364000&x-signature=KdtCKEZ6bVvsG%2FpOXYAiLZ24Tgs%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 18, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/oER0AYgsZan1BaVBHFmAA5mPEzLDiPBIBB1iI?lk3s=81f88b70&x-
expires=1733364000&x-signature=sLL2W44Ocs4DWxdAN76Vi%2BYhbFI%3D&shp=81f88b70&shcp=-', 'video_height': 1280,
'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/osAPGDBLaABEEYVniCFz1iI1aQC0RiIZAB
mGw/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=1806&bt=903&cs=0&ds=3&ft=-Csk_mfMPD
12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=cnF8b2hsc2d3SkBwaHIxaDFybndmNDQ5MzplNDc8Nzs4NWVpM0BpMz
pzd285cm03dzMzODczNEBjRl5Nc3FePmJKYSNvYF90aHFmOiMzLmMwMmAuXi4xYjU0NjY0YSNhaWAyMmQ0NC1gLS1kMTFzcw%3D%3D&btag=e0
00b8000&expire=1733214189&l=20241203022249CF3AAE191D82C204807B&ply_type=2&policy=2&signature=fa5aced4a45c5b8aa
96f66696f3bb389&tk=tt_chain_token', 'video_width': 720}
INFO Post publish time is within the specified range: post id 7440021447516359944, publish time 2024-11-22 16-43-05
INFO Filter date range: 2024-01-01 00:00:00 to 2025-01-01 23:59:59
INFO {'hasMore': True, 'cursor': '1731944251000', 'aweme_id': '7438650119928466696', 'collectCount': 6416,
'collected': False, 'commentCount': 383, 'createTime': '2024-11-19 00-01-42', 'desc': '______________#fyp_',
'desc_raw': '하이디라오 먹으러 옴 😍 #fyp ', 'diggCount': 99900, 'digged': False, 'duetDisplay': 0,
'duetEnabled': True, 'forFriend': False, 'isPinnedItem': None, 'itemCommentStatus': 0, 'music_album': None,
'music_authorName': '__', 'music_authorName_raw': '𝘀𝗰' , 'music_coverLarge':
'https://p16-sign-sg.tiktokcdn.com/aweme/1080x1080/tos-alisg-avt-0068/ea108801f0eee6b7929de3a2928495d0.jpeg?lk
3s=a5d48078&nonce=79538&refresh_token=942409db0dba1a31bf1e83af9b11d180&x-expires=1733364000&x-signature=G4BfJ5
xSx4LR7Z3rEl4FWoqhAtk%3D&shp=a5d48078&shcp=81f88b70', 'music_duration': 46, 'music_id': '7415614508359617297',
'music_original': False, 'music_playUrl':
'https://v16m.tiktokcdn.com/83633f50aedec8801a013d3413fc02a4/674e79b8/video/tos/alisg/tos-alisg-v-27dcd7/oQIT7
0YTfvHGFVGZCAeuL9EgWwQNDge3Dx0NHo/?a=1180&bti=ODszNWYuMDE6&ch=0&cr=0&dr=0&er=0&lr=default&cd=0%7C0%7C0%7C0&br=
250&bt=125&ft=.NpOcInz7ThYPbSOXq8Zmo&mime_type=audio_mpeg&qs=6&rc=NThpO2Q3Ojo2Z2U6OTNnOUBpajlwO3k5cjlydTMzODU8
NEAuYzUyYC82XzMxNjNfNTMvYSNvbW9zMmRzcGhgLS1kMS1zcw%3D%3D&vvpl=1&l=20241203022249CF3AAE191D82C204807B&btag=e000
88000&cc=3', 'music_title': '_____________', 'music_title_raw': '오리지널 사운드 - 𝘀𝗰' , 'nickname':
'____Hyemin', 'nickname_raw': '김혜민 Hyemin', 'playCount': 1100000, 'privateItem': False, 'secUid':
'MS4wLjABAAAApsEHZ_bLh4DQFAN03YlXaBIi4RE9Iul6BM2Zn1hSjDo', 'secret': False, 'shareCount': 1831,
'shareEnabled': True, 'textExtra': None, 'uid': '77415581122', 'video_bitrate': 1560947, 'video_bitrateInfo':
[1560947, 1268673, 659120], 'video_codecType': 'h264', 'video_cover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/ookdoQvQGiCbsCZxAIeOA4KeOLzMRODIsfCjZ7?lk3s=81f88b70&x
-expires=1733364000&x-signature=jaVWmDRw0LJK54oNSFoaX96r0vs%3D&shp=81f88b70&shcp=-', 'video_definition':
'720p', 'video_duration': 16, 'video_dynamicCover':
'https://p16-sign-sg.tiktokcdn.com/obj/tos-alisg-p-0037/ooAC7LOvbdzOGZyojeMCs5IfAoIGDgeQxOJiQC?lk3s=81f88b70&x
-expires=1733364000&x-signature=di%2BKE%2BFza1mS1YFe3zn4tmKdzn0%3D&shp=81f88b70&shcp=-', 'video_height': 960,
'video_playAddr':
'https://v16-webapp-prime.tiktok.com/video/tos/alisg/tos-alisg-pve-0037c001/oE8xIG2IjoMeOb5CyAgsfIIdQAzOifGEpv
dDLC/?a=1988&bti=ODszNWYuMDE6&ch=0&cr=3&dr=0&lr=all&cd=0%7C0%7C0%7C&cv=1&br=3048&bt=1524&cs=0&ds=3&ft=-Csk_mfM
PD12N_dzFE-Uxzo2LY6e3wv25pcAp&mime_type=video_mp4&qs=0&rc=ZzhpOmllNDtpZWgzOjc5O0BpanI3c3k5cndwdjMzODczNEA2YWFe
Ml42NWAxL14yMV4wYSNkYmZwMmRzM3FgLS1kMTFzcw%3D%3D&btag=e000b8000&expire=1733214186&l=20241203022249CF3AAE191D82
C204807B&ply_type=2&policy=2&signature=2994ece4239841888b6133b27761a764&tk=tt_chain_token', 'video_width':
720}
INFO Post publish time is within the specified range: post id 7438650119928466696, publish time 2024-11-19 00-01-42
← [ Video ]:2024-10-04 00-59-31_...2__MEOVV___video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
← [ Video ]:2024-08-18 20-40-05_...#newjeans__video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
↖ [ Video ]:2024-10-04 00-59-31_...2__MEOVV___video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
↖ [ Video ]:2024-08-18 20-40-05_...#newjeans__video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
↖ [ Video ]:2024-11-29 22-50-20_1______________video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
↖ [ Video ]:2024-11-22 16-43-05_..._____#fyp__video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
↖ [ Video ]:2024-11-19 00-01-42_..._____#fyp__video.mp4 ----------------------------------- 0.0% • 0/? bytes ? ETA
> Configuration section
tiktok:
headers:
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36
Referer: https://www.tiktok.com/
cookie: tt_csrf_token=mXcPPu5r-FbN_83r8FyvyWU239jpoDGTxoW0; tt_chain_token=.......
max_connections: 5
max_counts: 0
max_retries: 5
max_tasks: 5
mode: post
naming: '{create}_{desc}'
page_counts: 5
path: F:\视频爬取
timeout: 10
# Post time range
interval: 2024-01-01|2025-01-01
url: https://www.tiktok.com/@gawonaa
Q: Please re-run the failing command with the debug option `f2 -d DEBUG` and provide the log file from the log directory.
A:
Q: If you are a developer, please provide a minimal code example
A:
```python
```
</details>
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**

**Log files**
Please attach debug log files to help explain the problem.
**Other**
If applicable, add any other information about the problem.
| open | 2024-12-03T02:34:17Z | 2024-12-03T03:16:56Z | https://github.com/Johnserf-Seed/TikTokDownload/issues/771 | [
"故障(bug)"
] | wangzhuangzhuang-dev | 0 |
hatchet-dev/hatchet | fastapi | 584 | Webhook worker cancellation behavior | see https://github.com/hatchet-dev/hatchet-typescript/pull/106#pullrequestreview-2111032995 for details
from @grutt
> with the current implementation, we'll make a new request to the worker to cancel the running step. with our grpc model, we send that request to the correct worker which has a ref to the running step run to cancel.
with the webhook worker, the request might get picked up by a different lambda which will have no reference to the run...
Also, once the behavior is decided, this needs to be adapted in all SDKs individually | closed | 2024-06-12T23:12:24Z | 2024-12-14T00:20:09Z | https://github.com/hatchet-dev/hatchet/issues/584 | [] | steebchen | 0 |
tox-dev/tox | automation | 3,171 | Support for parameter expansion in --override | ## What's the problem this feature will solve?
Configuration values cannot be overridden (via `-x/--override` or `TOX_OVERRIDE`) without losing the ability to use positional arguments in the value.
For example: `tox -x "testenv.commands=poetry run pylint {posargs}" --`
## Describe the solution you'd like
Value substitution should be performed after overrides have been applied to the configuration.
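A minimal sketch of that ordering (illustrative only; the function names are not tox's internal API): overrides are merged into the raw configuration first, and `{posargs}` expansion runs afterwards, so a value supplied via `-x` or `TOX_OVERRIDE` can still contain placeholders:

```python
def resolve(config, overrides, posargs):
    # Step 1: apply overrides to the raw, still-unexpanded values.
    merged = {**config, **overrides}
    # Step 2: perform value substitution on the merged result.
    return {key: value.replace("{posargs}", " ".join(posargs))
            for key, value in merged.items()}

config = {"testenv.commands": "pytest {posargs}"}
overrides = {"testenv.commands": "poetry run pylint {posargs}"}

resolved = resolve(config, overrides, posargs=["src", "--errors-only"])
assert resolved["testenv.commands"] == "poetry run pylint src --errors-only"
```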
## Alternative Solutions
Positional arguments can be put into the override value directly, but only if they are known in advance. An example of a tool that cannot interpolate arguments into the override value is pre-commit.
Bash could also be used to create the override value, but this requires more setup for Windows development.
| open | 2023-12-12T22:11:07Z | 2024-03-05T22:15:15Z | https://github.com/tox-dev/tox/issues/3171 | [
"help:wanted",
"enhancement"
] | jschwartzentruber | 3 |
joke2k/django-environ | django | 295 | Support for mssql-django | There is an official django-mssql-backend port by Microsoft in the making.
To use it you have to provide "mssql" as the engine.
https://github.com/microsoft/mssql-django#installation
Currently django-environ maps "mssql" to "sql_server.pyodbc".
https://github.com/joke2k/django-environ/pull/157
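The clash can be sketched like this (the mapping constant and the override parameter are illustrative assumptions, not django-environ's actual code):

```python
# django-environ resolves a database-URL scheme to a Django ENGINE path.
ENGINE_SCHEMES = {"mssql": "sql_server.pyodbc"}  # current mapping, per the PR above

def resolve_engine(scheme, override=None):
    # One possible fix: let callers override the resolved engine explicitly.
    return override or ENGINE_SCHEMES.get(scheme, scheme)

assert resolve_engine("mssql") == "sql_server.pyodbc"        # old backend today
assert resolve_engine("mssql", override="mssql") == "mssql"  # what mssql-django needs
```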
Don't know what the solution is, but it would be nice if there's a way to configure mssql-django in the next release. | open | 2021-05-21T11:28:26Z | 2023-04-06T23:22:34Z | https://github.com/joke2k/django-environ/issues/295 | [
"enhancement"
] | valentinschabschneider | 3 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 16,423 | [Bug]: error during generation werfault.exe 0xc000012d | ### Checklist
- [ ] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [X] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I'm using a LoRA to generate an image; 95% of the time the browser crashes and a WerFault.exe error with code 0xc000012d appears.
Lora Everlasting Summer Zhenya by GraffMetal
### Steps to reproduce the problem
1. Use a LoRA
2. Generate with any prompt
### What should have happened?
An image should be produced after generation completes.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
"Platform": "Windows-10-10.0.19045-SP0",
"Python": "3.10.0",
"Version": "v1.10.1",
"Commit": "82a973c04367123ae98bd9abdf80d9eda9b910e2",
"Git status": "On branch master\nYour branch is up to date with 'origin/master'.\n\nChanges not staged for commit:\n (use \"git add <file>...\" to update what will be committed)\n (use \"git restore <file>...\" to discard changes in working directory)\n\tmodified: webui-user.bat\n\nno changes added to commit (use \"git add\" and/or \"git commit -a\")",
"Script path": "C:\\StableDiffusion\\stable-diffusion-webui",
"Data path": "C:\\StableDiffusion\\stable-diffusion-webui",
"Extensions dir": "C:\\StableDiffusion\\stable-diffusion-webui\\extensions",
"Checksum": "b83aeb02f21ad76258564361a836c6e170c9dd80faf9fd6002bf06e61ccae836",
"Commandline": [
"launch.py",
"--xformers",
"--opt-sdp-attention",
"--upcast-sampling",
"--no-hashing",
"--always-batch-cond-uncond",
"--medvram"
],
"Torch env info": {
"torch_version": "2.1.2+cu121",
"is_debug_build": "False",
"cuda_compiled_version": "12.1",
"gcc_version": null,
"clang_version": null,
"cmake_version": null,
"os": "Майкрософт Windows 10 Домашняя",
"libc_version": "N/A",
"python_version": "3.10.0 (tags/v3.10.0:b494f59, Oct 4 2021, 19:00:18) [MSC v.1929 64 bit (AMD64)] (64-bit runtime)",
"python_platform": "Windows-10-10.0.19045-SP0",
"is_cuda_available": "True",
"cuda_runtime_version": null,
"cuda_module_loading": "LAZY",
"nvidia_driver_version": "546.33",
"nvidia_gpu_models": "GPU 0: NVIDIA GeForce GTX 1660 SUPER",
"cudnn_version": null,
"pip_version": "pip3",
"pip_packages": [
"numpy==1.26.2",
"open-clip-torch==2.20.0",
"pytorch-lightning==1.9.4",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.1",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121"
],
"conda_packages": null,
"hip_compiled_version": "N/A",
"hip_runtime_version": "N/A",
"miopen_runtime_version": "N/A",
"caching_allocator_config": "garbage_collection_threshold:0.9,max_split_size_mb:512",
"is_xnnpack_available": "True",
"cpu_info": [
"Architecture=9",
"CurrentClockSpeed=3501",
"DeviceID=CPU0",
"Family=198",
"L2CacheSize=1024",
"L2CacheSpeed=",
"Manufacturer=GenuineIntel",
"MaxClockSpeed=3501",
"Name=Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz",
"ProcessorType=3",
"Revision=14857"
]
},
"Exceptions": [],
"CPU": {
"model": "Intel64 Family 6 Model 58 Stepping 9, GenuineIntel",
"count logical": 8,
"count physical": 4
},
"RAM": {
"total": "8GB",
"used": "5GB",
"free": "3GB"
},
"Extensions": [],
"Inactive extensions": [],
"Environment": {
"COMMANDLINE_ARGS": "--xformers --opt-sdp-attention --upcast-sampling --no-hashing --always-batch-cond-uncond --medvram",
"GRADIO_ANALYTICS_ENABLED": "False"
},
"Config": {
"ldsr_steps": 100,
"ldsr_cached": false,
"SCUNET_tile": 256,
"SCUNET_tile_overlap": 8,
"SWIN_tile": 192,
"SWIN_tile_overlap": 8,
"SWIN_torch_compile": false,
"hypertile_enable_unet": false,
"hypertile_enable_unet_secondpass": false,
"hypertile_max_depth_unet": 3,
"hypertile_max_tile_unet": 256,
"hypertile_swap_size_unet": 3,
"hypertile_enable_vae": false,
"hypertile_max_depth_vae": 3,
"hypertile_max_tile_vae": 128,
"hypertile_swap_size_vae": 3,
"sd_model_checkpoint": "autismmixSDXL_autismmixPony.safetensors [821aa5537f]",
"sd_checkpoint_hash": "821aa5537f8ddafdbf963827551865c31c5bbfab1abe7925cb5f006c8f71e485"
},
"Startup": {
"total": 28.416757822036743,
"records": {
"initial startup": 0.05522894859313965,
"prepare environment/checks": 0.01615309715270996,
"prepare environment/git version info": 0.14559125900268555,
"prepare environment/torch GPU test": 7.662805080413818,
"prepare environment/clone repositores": 0.3027527332305908,
"prepare environment/run extensions installers": 0.0024504661560058594,
"prepare environment": 8.22752571105957,
"launcher": 0.00488591194152832,
"import torch": 7.3306050300598145,
"import gradio": 2.997941493988037,
"setup paths": 3.0106570720672607,
"import ldm": 0.016428232192993164,
"import sgm": 0.0,
"initialize shared": 0.6664214134216309,
"other imports": 1.7067046165466309,
"opts onchange": 0.0,
"setup SD model": 0.0009996891021728516,
"setup codeformer": 0.003007650375366211,
"setup gfpgan": 0.04252219200134277,
"set samplers": 0.0,
"list extensions": 0.004396200180053711,
"restore config state file": 0.0,
"list SD models": 0.05798220634460449,
"list localizations": 0.001999378204345703,
"load scripts/custom_code.py": 0.016994953155517578,
"load scripts/img2imgalt.py": 0.0009992122650146484,
"load scripts/loopback.py": 0.0010013580322265625,
"load scripts/outpainting_mk_2.py": 0.0015349388122558594,
"load scripts/poor_mans_outpainting.py": 0.0010027885437011719,
"load scripts/postprocessing_codeformer.py": 0.0010006427764892578,
"load scripts/postprocessing_gfpgan.py": 0.0011854171752929688,
"load scripts/postprocessing_upscale.py": 0.0008199214935302734,
"load scripts/prompt_matrix.py": 0.0010037422180175781,
"load scripts/prompts_from_file.py": 0.0010025501251220703,
"load scripts/sd_upscale.py": 0.0020008087158203125,
"load scripts/xyz_grid.py": 0.003996610641479492,
"load scripts/ldsr_model.py": 1.4831657409667969,
"load scripts/lora_script.py": 0.3317122459411621,
"load scripts/scunet_model.py": 0.05398273468017578,
"load scripts/swinir_model.py": 0.050981998443603516,
"load scripts/hotkey_config.py": 0.0009419918060302734,
"load scripts/extra_options_section.py": 0.0020029544830322266,
"load scripts/hypertile_script.py": 0.09596848487854004,
"load scripts/postprocessing_autosized_crop.py": 0.0009999275207519531,
"load scripts/postprocessing_caption.py": 0.0009987354278564453,
"load scripts/postprocessing_create_flipped_copies.py": 0.0012624263763427734,
"load scripts/postprocessing_focal_crop.py": 0.004714488983154297,
"load scripts/postprocessing_split_oversized.py": 0.0009984970092773438,
"load scripts/soft_inpainting.py": 0.0009987354278564453,
"load scripts/comments.py": 0.04698753356933594,
"load scripts/refiner.py": 0.0007231235504150391,
"load scripts/sampler.py": 0.001003265380859375,
"load scripts/seed.py": 0.0010006427764892578,
"load scripts": 2.1109864711761475,
"load upscalers": 0.008245229721069336,
"refresh VAE": 0.004211902618408203,
"refresh textual inversion templates": 0.0006375312805175781,
"scripts list_optimizers": 0.0019085407257080078,
"scripts list_unets": 0.0,
"reload hypernetworks": 0.0009999275207519531,
"initialize extra networks": 0.07018423080444336,
"scripts before_ui_callback": 0.014822006225585938,
"create ui": 1.0407750606536865,
"gradio launch": 1.1156854629516602,
"add APIs": 0.01899409294128418,
"app_started_callback/lora_script.py": 0.0009999275207519531,
"app_started_callback": 0.0009999275207519531
}
},
"Packages": [
"accelerate==0.21.0",
"aenum==3.1.15",
"aiofiles==23.2.1",
"aiohappyeyeballs==2.4.0",
"aiohttp==3.10.5",
"aiosignal==1.3.1",
"altair==5.4.0",
"antlr4-python3-runtime==4.9.3",
"anyio==3.7.1",
"async-timeout==4.0.3",
"attrs==24.2.0",
"blendmodes==2022",
"certifi==2024.7.4",
"charset-normalizer==3.3.2",
"clean-fid==0.1.35",
"click==8.1.7",
"clip @ https://github.com/openai/CLIP/archive/d50d76daa670286dd6cacf3bcd80b5e4823fc8e1.zip#sha256=b5842c25da441d6c581b53a5c60e0c2127ebafe0f746f8e15561a006c6c3be6a",
"colorama==0.4.6",
"contourpy==1.2.1",
"cycler==0.12.1",
"deprecation==2.1.0",
"diskcache==5.6.3",
"einops==0.4.1",
"exceptiongroup==1.2.2",
"facexlib==0.3.0",
"fastapi==0.94.0",
"ffmpy==0.4.0",
"filelock==3.15.4",
"filterpy==1.4.5",
"fonttools==4.53.1",
"frozenlist==1.4.1",
"fsspec==2024.6.1",
"ftfy==6.2.3",
"gitdb==4.0.11",
"GitPython==3.1.32",
"gradio==3.41.2",
"gradio_client==0.5.0",
"h11==0.12.0",
"httpcore==0.15.0",
"httpx==0.24.1",
"huggingface-hub==0.24.6",
"idna==3.8",
"imageio==2.35.1",
"importlib_resources==6.4.4",
"inflection==0.5.1",
"Jinja2==3.1.4",
"jsonmerge==1.8.0",
"jsonschema==4.23.0",
"jsonschema-specifications==2023.12.1",
"kiwisolver==1.4.5",
"kornia==0.6.7",
"lark==1.1.2",
"lazy_loader==0.4",
"lightning-utilities==0.11.6",
"llvmlite==0.43.0",
"MarkupSafe==2.1.5",
"matplotlib==3.9.2",
"mpmath==1.3.0",
"multidict==6.0.5",
"narwhals==1.5.5",
"networkx==3.3",
"numba==0.60.0",
"numpy==1.26.2",
"omegaconf==2.2.3",
"open-clip-torch==2.20.0",
"opencv-python==4.10.0.84",
"orjson==3.10.7",
"packaging==24.1",
"pandas==2.2.2",
"piexif==1.1.3",
"Pillow==9.5.0",
"pillow-avif-plugin==1.4.3",
"pip==24.2",
"protobuf==3.20.0",
"psutil==5.9.5",
"pydantic==1.10.18",
"pydub==0.25.1",
"pyparsing==3.1.4",
"python-dateutil==2.9.0.post0",
"python-multipart==0.0.9",
"pytorch-lightning==1.9.4",
"pytz==2024.1",
"PyWavelets==1.7.0",
"PyYAML==6.0.2",
"referencing==0.35.1",
"regex==2024.7.24",
"requests==2.32.3",
"resize-right==0.0.2",
"rpds-py==0.20.0",
"safetensors==0.4.2",
"scikit-image==0.21.0",
"scipy==1.14.1",
"semantic-version==2.10.0",
"sentencepiece==0.2.0",
"setuptools==69.5.1",
"six==1.16.0",
"smmap==5.0.1",
"sniffio==1.3.1",
"spandrel==0.3.4",
"spandrel_extra_arches==0.1.1",
"starlette==0.26.1",
"sympy==1.13.2",
"tifffile==2024.8.10",
"timm==1.0.9",
"tokenizers==0.13.3",
"tomesd==0.1.3",
"torch==2.1.2+cu121",
"torchdiffeq==0.2.3",
"torchmetrics==1.4.1",
"torchsde==0.2.6",
"torchvision==0.16.2+cu121",
"tqdm==4.66.5",
"trampoline==0.1.2",
"transformers==4.30.2",
"typing_extensions==4.12.2",
"tzdata==2024.1",
"urllib3==2.2.2",
"uvicorn==0.30.6",
"wcwidth==0.2.13",
"websockets==11.0.3",
"xformers==0.0.23.post1",
"yarl==1.9.4"
]
}
### Console logs
```Shell
cmd breaks and the Windows system becomes unstable when the WerFault.exe 0xc000012d error occurs
```
### Additional information
_No response_ | closed | 2024-08-25T16:52:46Z | 2024-12-28T07:03:58Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/16423 | [
"bug-report"
] | Kirpich56 | 2 |
gunthercox/ChatterBot | machine-learning | 2,218 | Problem with time.clock in compat.py | I tested the sample code from your "Quick Start Guide" documentation, but I ran into the following error:
----------------------------------------------------------------------------------------------------------
Traceback (most recent call last):
File "D:/2021 2학기/인공지능/Team project/1.py", line 2, in <module>
chatbot = ChatBot("Ron Obvious")
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\chatterbot\chatterbot.py", line 34, in __init__
self.storage = utils.initialize_class(storage_adapter, **kwargs)
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\chatterbot\utils.py", line 54, in initialize_class
return Class(*args, **kwargs)
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\chatterbot\storage\sql_storage.py", line 22, in __init__
from sqlalchemy import create_engine
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\__init__.py", line 8, in <module>
from . import util as _util # noqa
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\util\__init__.py", line 14, in <module>
from ._collections import coerce_generator_arg # noqa
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\util\_collections.py", line 16, in <module>
from .compat import binary_types
File "C:\Users\이동훈\AppData\Local\Programs\Python\Python39\lib\site-packages\sqlalchemy\util\compat.py", line 264, in <module>
time_func = time.clock
AttributeError: module 'time' has no attribute 'clock'
----------------------------------------------------------------------------------------------------------
So I changed 'time.clock' to 'time.time' on line 264 of compat.py, and then it worked.
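A workaround that avoids editing the installed package (a sketch: the root cause is that `time.clock` was removed in Python 3.8, and newer SQLAlchemy releases no longer reference it, so upgrading SQLAlchemy is likely the cleaner fix) is to restore the alias before SQLAlchemy gets imported:

```python
import time

# time.clock was removed in Python 3.8; this old SQLAlchemy build still calls it.
# Restoring the name before `import sqlalchemy` avoids the AttributeError without
# touching site-packages. time.time (as in the edit above) works as well.
if not hasattr(time, "clock"):
    time.clock = time.perf_counter

# ... `from chatterbot import ChatBot` can now import SQLAlchemy safely ...
```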
I'm using Windows 10 64-bit with an x64 processor.
And python version is 3.9.4. | closed | 2021-11-10T07:04:04Z | 2025-02-17T21:31:12Z | https://github.com/gunthercox/ChatterBot/issues/2218 | [] | leedh0209 | 2 |
Lightning-AI/pytorch-lightning | machine-learning | 20,407 | update dataset at "on_train_epoch_start", but "training_step" still get old data | ### Bug description
I use `trainer.fit(model, datamodule=dm)` to start training.
"dm" is an object whose class inherits from `pl.LightningDataModule`, and in that class I override the function:
```python
def train_dataloader(self):
    train_dataset = MixedBatchMultiviewDataset(self.args, self.tokenizer,
                                               known_exs=self.known_train,
                                               unknown_exs=self.unknown_train,
                                               feature=self.args.feature)
    train_dataloader = DataLoader(train_dataset,
                                  batch_size=self.args.train_batch_size,
                                  shuffle=True, num_workers=self.args.num_workers,
                                  pin_memory=True, collate_fn=self.collate_batch_feat)
    return train_dataloader
```
At the model's `on_train_epoch_start` hook, I update the dataset:
```python
train_dl = self.trainer.train_dataloader
train_dl.dataset.update_pseudo_labels(uid2pl)
loop = self.trainer.fit_loop
loop._combined_loader = None
loop.setup_data()
```
In `training_step`, the batch still contains the old data, even though `trainer.train_dataloader.dataset` has been updated:
```python
def training_step(self, batch: List[Dict[str, torch.Tensor]], batch_idx: int):
    self.mv_model._on_train_batch_start()
    logger.info(self.trainer.train_dataloader.dataset.unknown_feats)  # new
    logger.info(batch)  # old
```
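One possible explanation (an assumption on my part, not confirmed from the report): with `num_workers > 0`, each DataLoader worker holds its own copy of the dataset, and batches may already be prefetched when the epoch starts, so in-place updates from the main process never reach the batches that `training_step` receives. A minimal stdlib sketch of the same effect with a plain iterator:

```python
class Dataset:
    def __init__(self, labels):
        self.labels = list(labels)

    def __iter__(self):
        # Like a worker snapshotting/prefetching, copy the data when iteration starts.
        return iter(list(self.labels))

ds = Dataset(["old", "old"])
it = iter(ds)                 # "worker" created here, before the update

ds.labels[0] = "new"          # in-place update, as in on_train_epoch_start

assert next(it) == "old"      # the already-running iterator still yields old data
assert list(ds)[0] == "new"   # a freshly created iterator sees the update
```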
### What version are you seeing the problem on?
v2.3
### How to reproduce the bug
_No response_
### Error messages and logs
```
# Error messages and logs here please
```
### Environment
<details>
<summary>Current environment</summary>
```
#- PyTorch Lightning Version (e.g., 2.4.0):
#- PyTorch Version (e.g., 2.4):
#- Python version (e.g., 3.12):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
```
</details>
### More info
_No response_
cc @justusschock | open | 2024-11-08T16:22:03Z | 2024-11-18T22:48:19Z | https://github.com/Lightning-AI/pytorch-lightning/issues/20407 | [
"bug",
"waiting on author",
"loops"
] | Yak1m4Sg | 1 |
zihangdai/xlnet | nlp | 273 | Removing mem-reuse will not decrease the pretraining model performance for short text? | Am I right? | open | 2020-09-29T09:47:27Z | 2020-09-29T09:47:27Z | https://github.com/zihangdai/xlnet/issues/273 | [] | guotong1988 | 0 |
erdewit/ib_insync | asyncio | 431 | Sanic and ib_insync - issues with timeout "peer closed connection" | Hi @erdewit and all
First, thanks for making it easy to pull IB data with ib_insync!
I'm trying to build a simple web app using the asyncio-friendly web framework Sanic on Windows. The goal is simply to specify a ticker and a date, which then downloads some historical data to an SQLite database.
In short, when I run my program I get the famous "peer closed connection". Is there an update on how to disconnect once and for all, or another workaround?
```
import asyncio
from sanic import Sanic
from sanic import response
from sanic.response import json
from sanic.log import logger
from controller import my_bp
from IBKR_app import *
@app.route("/<company>/<date>/download")
async def runit(request, company, date):
    ib = IB()
    await ib.connectAsync("127.0.0.1", 7497, 2)
    try:
        ibkr = application(company, date)
        res = asyncio.run(ibkr.request_data(ib, 15, '1 D', ibkr.backtest_date_ib))
        return response.html(res)
    except:
        return response.text("No data could be downloaded for " + ibkr.backtest_date_ib)
    ib.disconnect()
    ib.disconnect()

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8000, debug=False, access_log=False)
```
The function that calls `ib.reqHistoricalData` is defined as a method of the class 'application' as follows:
```
def request_data(self, ib, barsize, durationString, endtime):
    mintext = " min"
    if (int(barsize) > 1):
        mintext = " mins"
    self.bars = ib.reqHistoricalData(
        self.stock, endDateTime=endtime, durationStr=durationString,
        barSizeSetting=str(barsize) + mintext, whatToShow='TRADES', useRTH=True)
    # <some code to load to SQLite db>
    message = "Data download for " + self.symbol
    return message
```
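Two details in the snippets above stand out: the `ib.disconnect()` calls sit after `return`, so they never execute, and `asyncio.run()` cannot be called from inside Sanic's already-running event loop (also, `request_data` is a plain `def`, so there is no coroutine to run). A hedged sketch of the usual pattern (the `FakeIB` class below is a stand-in for illustration, not ib_insync's API): await directly and disconnect in a `finally` block so cleanup runs even on early return or error:

```python
import asyncio

class FakeIB:
    """Stand-in for ib_insync.IB, just to make the control flow testable."""
    def __init__(self):
        self.connected = False

    async def connectAsync(self):
        self.connected = True

    def disconnect(self):
        self.connected = False

async def handler(ib):
    await ib.connectAsync()
    try:
        return "ok"         # early return, as in the route above
    finally:
        ib.disconnect()     # unlike code placed after `return`, this always runs

ib = FakeIB()
result = asyncio.run(handler(ib))   # fine here: no event loop is running yet
assert result == "ok"
assert ib.connected is False        # connection closed despite the early return
```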
| closed | 2022-01-25T13:34:35Z | 2022-01-28T16:29:51Z | https://github.com/erdewit/ib_insync/issues/431 | [] | Zeddbits | 1 |
FactoryBoy/factory_boy | sqlalchemy | 895 | Regression on 3.2.1 causes RecursionError with django-dirtyfields | #### Description
A regression introduced in https://github.com/FactoryBoy/factory_boy/pull/799 causes a RecursionError. After upgrading from 3.2.0 to 3.2.1, the error appeared.
To verify that this was really the cause, I installed 3.2.1 into the current directory using `pip install factory-boy -t .`, changed these lines back to what they originally were in 3.2.0, and it worked again.
```diff
- signal.receivers += receivers
+ signal.receivers = receivers
```
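To illustrate why the two restore strategies differ (a simplified sketch: Django's actual `Signal` keeps more state per receiver, and this alone may not be the whole recursion story): with `+=`, any receivers (re)connected while the signal was muted are kept alongside the saved ones, so a receiver can end up registered twice, whereas `=` restores the saved list exactly:

```python
class Signal:
    def __init__(self):
        self.receivers = []

def mute_and_restore(signal, restore_with_plus_eq):
    saved = signal.receivers
    signal.receivers = []                # mute: detach all receivers
    signal.receivers.append("handler")   # something reconnects during the muted block
    if restore_with_plus_eq:
        signal.receivers += saved        # 3.2.1 behaviour: duplicates accumulate
    else:
        signal.receivers = saved         # 3.2.0 behaviour: exact restore

s1 = Signal(); s1.receivers = ["handler"]
mute_and_restore(s1, restore_with_plus_eq=True)
assert s1.receivers == ["handler", "handler"]   # receiver now registered twice

s2 = Signal(); s2.receivers = ["handler"]
mute_and_restore(s2, restore_with_plus_eq=False)
assert s2.receivers == ["handler"]              # restored exactly
```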
#### To Reproduce
##### Model / Factory code
Here is the model. There are also signal receivers, but I won't include the full details.
```python
class Company(DirtyFieldsMixin, models.Model):
    ...

@receiver(post_save, sender=Company)
def my_receiver(instance, raw, **kwargs):
    ...

class CompanyFactory(DjangoModelFactory):  # standard
    class Meta:
        model = Company
```
##### The issue
Below is the error, which happens on `instance.save()`:
```
.venv/lib/python3.9/site-packages/factory/django.py:173: in _after_postgeneration
instance.save()
src/company/models/company.py:607: in save
return super().save(*args, **kwargs)
.venv/lib/python3.9/site-packages/safedelete/models.py:96: in save
super(SafeDeleteModel, self).save(**kwargs)
.venv/lib/python3.9/site-packages/django/db/models/base.py:726: in save
self.save_base(using=using, force_insert=force_insert,
.venv/lib/python3.9/site-packages/django/db/models/base.py:774: in save_base
post_save.send(
.venv/lib/python3.9/site-packages/django/dispatch/dispatcher.py:180: in send
return [
.venv/lib/python3.9/site-packages/django/dispatch/dispatcher.py:181: in <listcomp>
(receiver, receiver(signal=self, sender=sender, **named))
.venv/lib/python3.9/site-packages/dirtyfields/dirtyfields.py:155: in reset_state
new_state = instance._as_dict(check_relationship=True)
.venv/lib/python3.9/site-packages/dirtyfields/dirtyfields.py:94: in _as_dict
all_field[field.name] = deepcopy(field_value)
/usr/local/lib/python3.9/copy.py:172: in deepcopy
y = _reconstruct(x, memo, *rv)
/usr/local/lib/python3.9/copy.py:270: in _reconstruct
state = deepcopy(state, memo)
/usr/local/lib/python3.9/copy.py:146: in deepcopy
y = copier(x, memo)
/usr/local/lib/python3.9/copy.py:230: in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
/usr/local/lib/python3.9/copy.py:172: in deepcopy
y = _reconstruct(x, memo, *rv)
/usr/local/lib/python3.9/copy.py:270: in _reconstruct
state = deepcopy(state, memo)
/usr/local/lib/python3.9/copy.py:146: in deepcopy
y = copier(x, memo)
/usr/local/lib/python3.9/copy.py:230: in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
/usr/local/lib/python3.9/copy.py:146: in deepcopy
y = copier(x, memo)
/usr/local/lib/python3.9/copy.py:230: in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
/usr/local/lib/python3.9/copy.py:172: in deepcopy
y = _reconstruct(x, memo, *rv)
/usr/local/lib/python3.9/copy.py:270: in _reconstruct
state = deepcopy(state, memo)
/usr/local/lib/python3.9/copy.py:146: in deepcopy
y = copier(x, memo)
/usr/local/lib/python3.9/copy.py:230: in _deepcopy_dict
y[deepcopy(key, memo)] = deepcopy(value, memo)
E RecursionError: maximum recursion depth exceeded while calling a Python object
!!! Recursion detected (same locals & position)
```
#### Notes
*Add any notes you feel relevant here :)*
| open | 2021-11-04T09:27:12Z | 2024-06-25T13:56:02Z | https://github.com/FactoryBoy/factory_boy/issues/895 | [] | roniemartinez | 3 |
jordaneremieff/djantic | pydantic | 73 | JetBrains Code Completion | Is there a way to add support for JetBrains code completion?
```
class MyDjangoModel(models.Model):
    name = models.CharField(max_length=200)
class MyDjangoModelSchema(ModelSchema):
class Config:
model = MyDjangoModel
obj = MyDjangoModelSchema.from_orm(MyDjangoModel.objects.first())
obj.name # PyCharm says "Unresolved attribute reference 'name' for class 'MyDjangoModelSchema' "
```
| open | 2023-03-18T15:39:46Z | 2023-03-18T15:40:00Z | https://github.com/jordaneremieff/djantic/issues/73 | [] | bloodwithmilk25 | 0 |
seleniumbase/SeleniumBase | web-scraping | 2,139 | Add option `--driver-version="browser"` (exact match on browser version) | ## Add option `--driver-version="browser"` (exact match on browser version)
This should be similar to https://github.com/seleniumbase/SeleniumBase/issues/2137, but will now use the exact version of the browser, assuming Chrome and CFT (Chrome >= 115), and that the Chrome version fits in `X.X.X.X`.
E.g., if Chrome `117.0.5938.62` is installed, but you currently have chromedriver `117.0.5938.92`, then this should download chromedriver `117.0.5938.62` into the `seleniumbase/drivers` folder for the tests.
(NOTE that for some [Syntax Formats](https://github.com/seleniumbase/SeleniumBase/blob/master/help_docs/syntax_formats.md), the driver version is passed via method arg: `driver_version="VERSION"`)
### Note that this option will ONLY take effect if the browser version is `>=115` (because earlier versions did not necessarily have exact matches) | closed | 2023-09-24T17:02:18Z | 2023-09-26T01:36:08Z | https://github.com/seleniumbase/SeleniumBase/issues/2139 | [
"enhancement"
] | mdmintz | 1 |
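Since Chrome 115, chromedriver binaries are published per exact Chrome-for-Testing version, which is what makes this exact matching feasible. A minimal sketch of the `>=115` gate described above (illustrative only, not SeleniumBase's actual implementation):

```python
def needs_exact_match(browser_version: str) -> bool:
    # Exact chromedriver pinning only makes sense for Chrome >= 115,
    # where drivers are published per exact browser version.
    major = int(browser_version.split(".")[0])
    return major >= 115

# with Chrome 117.0.5938.62 installed, an exact-match driver would be fetched
print(needs_exact_match("117.0.5938.62"))  # True
print(needs_exact_match("114.0.5735.90"))  # False
```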
developmentseed/lonboard | data-visualization | 729 | [BUG] Map height fixed at 24rem | When embedding lonboard in panel (and probably other similar toolkits), the map height is fixed at 24rem, regardless of the parent container's height.
This issue is new since 0.9.3; I believe it was first introduced in 44a45f8. Continuing the discussion from #264.
## Environment
- OS: Fedora 41
- Browser: Chrome/Brave
- Lonboard Version: 0.10.3
## Steps to reproduce the bug
```python
import panel as pn
from lonboard import Map
from lonboard.basemap import CartoBasemap
pn.extension()
title = pn.pane.Markdown('# THIS IS A TITLE')
lonb_map = Map([], basemap_style=CartoBasemap.Positron)
lonb_map_panel_container = pn.pane.IPyWidget(lonb_map, sizing_mode='stretch_both', align='center')
pn.Column(title, lonb_map_panel_container).servable()
```

| open | 2025-01-08T19:36:22Z | 2025-02-25T13:18:11Z | https://github.com/developmentseed/lonboard/issues/729 | [
"bug"
] | wx4stg | 1 |
SYSTRAN/faster-whisper | deep-learning | 976 | Are there any plans to publish the latest code to pypi? | The new features, such as "multi-segment language detection" and "Batched faster-whisper", are not available in the latest version 1.0.3. Do you have any plans to release them? Is there anything I should be aware of if I want to upgrade to use the new features? | closed | 2024-08-22T14:41:39Z | 2024-11-21T17:27:53Z | https://github.com/SYSTRAN/faster-whisper/issues/976 | [] | wildwind0 | 2 |
google/trax | numpy | 1,773 | AttributeError: module 'jax.ops' has no attribute 'index_add' | ### Description
I am trying to do something basic in my code:
```python
import numpy as np # regular ol' numpy
from trax import layers as tl # core building block
from trax import shapes # data signatures: dimensionality and type
from trax import fastmath # uses jax, offers numpy on steroids
```
Upon import, it errors out doing these basics. What am I doing wrong? Should I be pinning a different version of the code?
### Environment information
```
OS: CentOS
lsb_release
LSB Version:	:core-4.1-amd64:core-4.1-ia32:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-ia32:cxx-4.1-noarch:desktop-4.1-amd64:desktop-4.1-ia32:desktop-4.1-noarch:languages-4.1-amd64:languages-4.1-noarch:printing-4.1-amd64:printing-4.1-noarch
$ pip freeze | grep trax
trax==1.3.9
$ pip freeze | grep tensor
mesh-tensorflow==0.1.21
tensorboard==2.11.2
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.11.0
tensorflow-datasets==4.8.2
tensorflow-estimator==2.11.0
tensorflow-hub==0.12.0
tensorflow-io-gcs-filesystem==0.30.0
tensorflow-metadata==1.12.0
tensorflow-text==2.11.0
$ pip freeze | grep jax
jax==0.4.4
jaxlib==0.4.4
$ python -V
Python 3.9.16
```
### For bugs: reproduction and error logs
# Error logs:
```
...
1 # coding=utf-8
2 # Copyright 2021 The Trax Authors.
3 #
(...)
13 # See the License for the specific language governing permissions and
14 # limitations under the License.
16 """Trax top level import."""
---> 18 from trax import data
19 from trax import fastmath
20 from trax import layers
File ./ds_work/miniconda3/envs/coursera-nlp/lib/python3.9/site-packages/trax/data/__init__.py:36, in <module>
16 """Functions and classes for obtaining and preprocesing data.
17
18 The ``trax.data`` module presents a flattened (no subpackages) public API.
(...)
...
217 'vjp': jax.vjp,
218 'vmap': jax.vmap,
219 }
AttributeError: module 'jax.ops' has no attribute 'index_add'
```
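For context (this is the underlying API change, not a trax patch): `jax.ops.index_add(x, idx, y)` was removed from JAX well before 0.4.4 in favor of the functional update syntax `x.at[idx].add(y)`, which is why trax 1.3.9 fails at import time against jax 0.4.4. A plain-Python sketch of the shared semantics (out-of-place update, duplicate indices accumulate):

```python
def index_add(x, idx, y):
    # mirrors jax.ops.index_add / x.at[idx].add(y): returns a new
    # sequence instead of mutating, with repeated indices accumulating
    out = list(x)
    for i in idx:
        out[i] += y
    return out

print(index_add([0.0, 0.0, 0.0, 0.0], [0, 0, 2], 1.0))  # [2.0, 0.0, 1.0, 0.0]
```

So the usual options are pinning an older jax/jaxlib pair that still ships `jax.ops.index_add`, or using a trax build updated for the `.at[]` API; which exact version pairs work together is worth verifying rather than assumed here.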
| open | 2023-02-26T15:29:07Z | 2023-04-10T22:26:34Z | https://github.com/google/trax/issues/1773 | [] | cmosguy | 1 |
Lightning-AI/LitServe | rest-api | 310 | Pyright issues [Name mismatches] | ## 🐛 Bug Report
### Description
When creating a new notebook via Lightning and copying the example code from the README, Pyright shows issues with the abstract methods of the `LitAPI` base class.
The error occurs due to a parameter name mismatch in method overrides:
```
Method "setup" overrides class "LitAPI" in an incompatible manner Parameter 2 name mismatch: base parameter is named "devices", override parameter is named "device" Pyright[reportIncompatibleMethodOverride]
```

### Steps to Reproduce
1. Open a new notebook in Lightning Studio.
2. Copy the example code from the README.
3. Observe the Pyright issues that appear for the `LitAPI` base class methods.
### Expected Behavior
No Pyright issues should be present. Although this won't block users, it can cause confusion, especially for those new to the framework.
### Environment
- Lightning Studio
- Pyright
### Additional Context
I’d be happy to work on resolving this issue (Updating the README should do the job?)
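For anyone hitting the same warning, the general shape of the fix is simply to keep the override's parameter name identical to the base class's. A standalone sketch (`LitAPI` below is a simplified stand-in, and `device` is a placeholder for whichever spelling the installed base class actually uses):

```python
import inspect

class LitAPI:  # simplified stand-in for litserve.LitAPI
    def setup(self, device):
        raise NotImplementedError

class MyAPI(LitAPI):
    # same parameter name as the base class, so Pyright's
    # reportIncompatibleMethodOverride check passes
    def setup(self, device):
        self.model = f"loaded on {device}"

api = MyAPI()
api.setup("cpu")
print(api.model)  # loaded on cpu
```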
| closed | 2024-10-01T07:53:03Z | 2024-10-01T18:28:40Z | https://github.com/Lightning-AI/LitServe/issues/310 | [
"bug",
"help wanted"
] | grumpyp | 2 |
zappa/Zappa | django | 570 | [Migrated] How to know when a lambda fails because timeout and also when it is out of memory | Originally from: https://github.com/Miserlou/Zappa/issues/1494 by [alexanderfanz](https://github.com/alexanderfanz)
I have a process that feeds a Lambda, and sometimes it is overfed (a bug that I cannot prevent :( ). This causes the Lambda to sometimes run out of memory; other times it just keeps running until it dies without finishing the job.
I would like to know if there is a way to detect when either of these things happens.
Thanks in advance
[The question in StackOverFlow](https://stackoverflow.com/q/50181524/4855401) | closed | 2021-02-20T12:22:55Z | 2022-07-16T07:04:13Z | https://github.com/zappa/Zappa/issues/570 | [] | jneves | 1 |
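For context on how these failures surface: the Lambda runtime writes distinctive markers into the function's CloudWatch log stream ("Task timed out after ..." for timeouts; for out-of-memory, an abnormal "Runtime exited" / "Process exited before completing the request" message, with the REPORT line's Max Memory Used at the configured Memory Size). A rough log-line classifier sketch (the marker strings are assumptions drawn from commonly observed Lambda logs; verify against your own):

```python
def classify_lambda_log(line: str) -> str:
    # timeout: the runtime prints this marker when the configured timeout hits
    if "Task timed out after" in line:
        return "timeout"
    # OOM / hard crash: the runtime is killed and reports an abnormal exit
    if ("Runtime exited with error" in line
            or "Process exited before completing the request" in line):
        return "crash (check Max Memory Used vs Memory Size)"
    return "ok"

print(classify_lambda_log("2018-05-04T12:00:00Z Task timed out after 30.03 seconds"))
```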
Evil0ctal/Douyin_TikTok_Download_API | api | 537 | Setting up robots.txt fails | I don't want search engines to crawl and index this site, so I tried modifying /app/app/main.py. I set up a route for get('robot.txt') in it, so that accessing it would return robots.txt. However, no matter whether I place robots.txt in /app/ or /app/app/, I cannot get the correct result; the server responds with a 404 Not Found error. | closed | 2025-01-15T12:02:54Z | 2025-01-16T09:54:44Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/537 | [
"enhancement"
] | hadwinfu | 2 |
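A sketch of one way to wire this up (hedged: the route below is illustrative, not the project's actual code). Two details matter: the route should be registered as `/robots.txt` (leading slash, and note the `s`), and the file path should be resolved relative to the source file rather than the working directory:

```python
from pathlib import Path

# resolve next to this source file, not the process working directory;
# a CWD-relative path is a common cause of a 404 like the one described above
ROBOTS_PATH = Path(__file__).resolve().parent / "robots.txt"

def robots_body() -> str:
    # in FastAPI this function would back a route such as:
    #   @app.get("/robots.txt", response_class=PlainTextResponse)
    #   def robots(): return robots_body()
    if ROBOTS_PATH.is_file():
        return ROBOTS_PATH.read_text()
    return "User-agent: *\nDisallow: /"  # fallback: block all crawlers

print(robots_body())
```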
xlwings/xlwings | automation | 2,416 | enable xlwings Reports with xlwings Server | closed | 2024-03-14T16:13:31Z | 2024-07-19T09:12:29Z | https://github.com/xlwings/xlwings/issues/2416 | [
"Server"
] | fzumstein | 1 | |
django-oscar/django-oscar | django | 4,058 | Upgrade to Django 4.0 | closed | 2023-02-24T09:56:58Z | 2023-06-14T06:27:39Z | https://github.com/django-oscar/django-oscar/issues/4058 | [] | viggo-devries | 1 | |
TencentARC/GFPGAN | deep-learning | 516 | gfpgan file | ```python
def get_face_enhancer() -> Any:
    global FACE_ENHANCER
    with THREAD_LOCK:
        if FACE_ENHANCER is None:
            model_path = resolve_relative_path('/workspace/roop/models/GFPGANv1.4.pth')  # ('../models/GFPGANv1.4.pth')
            FACE_ENHANCER = GFPGANer(model_path=model_path,
                                     upscale=1,
                                     device=get_device())
    return FACE_ENHANCER
```
Where in the code is the GFPGAN model file downloaded? I want to provide it as a local path instead. | closed | 2024-02-14T14:55:49Z | 2024-02-22T06:39:12Z | https://github.com/TencentARC/GFPGAN/issues/516 | [] | MehmetcanTozlu | 0 |
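For what it's worth, `GFPGANer` handles both cases itself: when `model_path` starts with `https://` it downloads the weights (via basicsr's `load_file_from_url` helper), and when it is an existing local file it loads that file directly, so a local `.pth` path as above skips the download. A small path-selection sketch (the fallback URL is an assumption to verify against the GFPGAN releases page):

```python
import os

LOCAL_WEIGHTS = "/workspace/roop/models/GFPGANv1.4.pth"
# assumed upstream release asset; verify before relying on it
FALLBACK_URL = "https://github.com/TencentARC/GFPGAN/releases/download/v1.3.0/GFPGANv1.4.pth"

# prefer the local copy; only fall back to the URL (and thus the
# auto-download path inside GFPGANer) when the file is missing
model_path = LOCAL_WEIGHTS if os.path.isfile(LOCAL_WEIGHTS) else FALLBACK_URL
print(model_path)
```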
amidaware/tacticalrmm | django | 918 | escaping issue with custom field string replacement | **Describe the bug**
Putting a value like `asdf\basdf` into a custom field and passing that custom field to a PowerShell script as a parameter, you get output like:

BUT
If you don't use a custom field and just enter it directly as a parameter to a PowerShell script, the `\b` doesn't corrupt.
I'm thinking this is fixable? Seems like it should be.
Create a simple PowerShell script in Tactical:
```
param ($MYARG)
Write-Host "MYARG PARAM: $MYARG"
```
Call it with args: `-MYARG asdf\basdf`
...which should work.
Put `asdf\basdf` into the custom field.
Call the script with the arg: `-MYARG {{agent.FIELD}}`
You should get the corrupt output.
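One guess at the mechanism, assuming the `{{agent.FIELD}}` substitution happens server-side with something like Python's `re.sub` (an assumption about TRMM internals, not confirmed): in a regex replacement template, `\b` is interpreted as a backspace character (0x08), so the raw field value gets mangled unless the replacement is escaped or supplied via a callable:

```python
import re

template = "Write-Host {{agent.FIELD}}"
value = r"asdf\basdf"  # the custom-field value from the report

# naive substitution: re.sub treats \b in the replacement template
# as a backspace (0x08), corrupting the value
naive = re.sub(r"\{\{agent\.FIELD\}\}", value, template)

# safe substitution: a callable replacement bypasses template escape
# processing entirely, so backslashes pass through untouched
safe = re.sub(r"\{\{agent\.FIELD\}\}", lambda m: value, template)

print(repr(naive))  # 'Write-Host asdf\x08asdf'
print(repr(safe))   # 'Write-Host asdf\\basdf'
```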
| closed | 2022-01-05T23:58:36Z | 2022-08-14T18:20:59Z | https://github.com/amidaware/tacticalrmm/issues/918 | [
"bug",
"dev-triage"
] | bbrendon | 1 |
scrapy/scrapy | python | 6,350 | CachingHostnameResolver with CONCURRENT_REQUESTS_PER_IP fails | Scrapy 2.11.1
lxml 5.2.1.0, libxml2 2.11.7, cssselect 1.2.0, parsel 1.9.1, w3lib 2.1.2, Twisted 24.3.0
I'm using the following settings:
```
'AUTOTHROTTLE_MAX_DELAY': 8,
'AUTOTHROTTLE_START_DELAY': 3,
'AUTOTHROTTLE_TARGET_CONCURRENCY': 3,
'CONCURRENT_REQUESTS': 1,
'CONCURRENT_REQUESTS_PER_DOMAIN': 4,
'CONCURRENT_REQUESTS_PER_IP': 4,
'COOKIES_ENABLED': False,
'DEPTH_PRIORITY': 1,
'DNSCACHE_SIZE': 100000,
'DNS_RESOLVER': 'scrapy.resolver.CachingHostnameResolver',
'DNS_TIMEOUT': 120,
'DOWNLOAD_MAXSIZE': 10000000,
'DOWNLOAD_TIMEOUT': 100,
'HTTPPROXY_ENABLED': False,
'MEMUSAGE_ENABLED': False,
'REACTOR_THREADPOOL_MAXSIZE': 100,
'REFERER_ENABLED': False,
'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
'RETRY_TIMES': 3,
'SCHEDULER_DISK_QUEUE': 'scrapy.squeues.PickleFifoDiskQueue',
'SCHEDULER_MEMORY_QUEUE': 'scrapy.squeues.FifoMemoryQueue',
'SCRAPER_SLOT_MAX_ACTIVE_SIZE': 20000000,
```
When I added the DNS_RESOLVER line, I started getting the following for *every* request except the very first one made by a spider:
```
2024-05-08 22:15:05 [GSGenericSpider] WARNING: Twisted/Scrapy Error Detected: http://localhost:8070/get_page?url=example.com
```
The very first query made by the spider works fine, but 100% of the rest are the above error, at a rate of several hundred per second.
I'm using ProxyMiddleware, so I wonder if it's because all the requests are to localhost. It works absolutely fine with the default resolver.
Edit: Further testing shows it also fails with the proxy off. It seems to work for the first request to any given domain, but all later requests return the `Twisted/Scrapy Error Detected` message. | open | 2024-05-08T21:22:20Z | 2024-05-09T12:57:10Z | https://github.com/scrapy/scrapy/issues/6350 | [
"bug"
] | mohmad-null | 7 |
python-restx/flask-restx | api | 110 | Add support for recursive nested models | **Is your feature request related to a problem? Please describe.**
Currently, `flask-restx` swagger.json generation breaks when one defines some models with recursive/circular references to one another. See noirbizarre/flask-restplus#190
**Describe the solution you'd like**
A PR was made in the `flask-restplus` repo that fixes the issue: noirbizarre/flask-restplus#656
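As background on why recursion needs explicit support: naively inlining nested model definitions into swagger.json never terminates once a model references itself, so the fix is to emit a `$ref` back to the named definition instead. A toy, library-free sketch of that idea (not flask-restx's actual code):

```python
def expand(name, registry, seen=()):
    # inline nested models, but fall back to a $ref once a model
    # re-appears on the current expansion path (the recursion guard)
    if name in seen:
        return {"$ref": f"#/definitions/{name}"}
    out = {}
    for field, ftype in registry[name].items():
        if ftype in registry:
            out[field] = expand(ftype, registry, seen + (name,))
        else:
            out[field] = {"type": ftype}
    return out

registry = {"Node": {"name": "string", "child": "Node"}}
print(expand("Node", registry))
# {'name': {'type': 'string'}, 'child': {'$ref': '#/definitions/Node'}}
```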
| closed | 2020-04-08T09:11:09Z | 2020-07-28T14:14:58Z | https://github.com/python-restx/flask-restx/issues/110 | [
"enhancement"
] | louise-davies | 1 |
ivy-llc/ivy | tensorflow | 28,765 | Fix Frontend Failing Test: tensorflow - tensor.torch.Tensor.new_zeros | To-do List: https://github.com/Transpile-AI/ivy/issues/27499 | open | 2024-06-17T09:46:14Z | 2025-03-18T14:49:08Z | https://github.com/ivy-llc/ivy/issues/28765 | [
"Sub Task"
] | Mubashirshariq | 1 |
dropbox/PyHive | sqlalchemy | 388 | hive create table No result set | Python 3.7
PyHive 0.6.3
After creating Hive tables, `cursor.fetchall()` fails with "No result set",
but SELECT queries run with no error. | open | 2021-04-27T10:23:03Z | 2021-04-27T10:23:03Z | https://github.com/dropbox/PyHive/issues/388 | [] | nmgliangwei | 0
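Most likely this is DB-API semantics rather than a PyHive bug: DDL statements such as CREATE TABLE produce no result set, and PyHive raises "No result set" where some drivers return an empty list instead. The portable guard is `cursor.description`, which is None whenever there is nothing to fetch; demonstrated below with the stdlib sqlite3 driver standing in for PyHive:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

cur.execute("CREATE TABLE t (id INTEGER)")
# DDL produces no result set: description is None, so don't fetch
assert cur.description is None

cur.execute("SELECT id FROM t")
# a SELECT does have a result set, even an empty one
assert cur.description is not None
print(cur.fetchall())  # []
```

With PyHive itself, the equivalent pattern would be to call `fetchall()` only when `cursor.description` is not None after `execute()`.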