| repo_name | topic | issue_number | title | body | state | created_at | updated_at | url | labels | user_login | comments_count |
|---|---|---|---|---|---|---|---|---|---|---|---|
polakowo/vectorbt | data-visualization | 195 | Constant Position Sizing | Hi, great package! I wondered whether it is possible to have a constant position size of $100 to prevent compounding metrics and to interpret total profits in percent? Thanks for any suggestions. | closed | 2021-07-16T01:06:56Z | 2024-03-16T09:29:36Z | https://github.com/polakowo/vectorbt/issues/195 | [] | systats | 4 |
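The non-compounding sizing asked about above can be sketched in plain Python (a hypothetical helper, not vectorbt's API): with a fixed $100 stake per trade, per-trade P&L never compounds, so total profit is directly readable as a percent of the stake.

```python
# Sketch: allocate a fixed $100 stake per trade (no compounding), so total
# profit divides cleanly by the per-trade capital. Hypothetical helper,
# not part of vectorbt itself.
def fixed_size_pnl(trades, stake=100.0):
    """trades: iterable of (entry_price, exit_price); returns (pnl_usd, pnl_pct)."""
    pnl = sum(stake * (exit_price - entry_price) / entry_price
              for entry_price, exit_price in trades)
    return pnl, 100.0 * pnl / stake  # percent of one fixed stake

pnl_usd, pnl_pct = fixed_size_pnl([(10.0, 11.0), (20.0, 19.0)])
```

In vectorbt this is typically approached by passing a fixed cash amount as the order size, where supported by the portfolio constructor being used.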
plotly/plotly.py | plotly | 4,172 | Sunburst figures not aggregating custom_data for parents with >1 leaf | Consider the following Sunburst figure:
```python
import pandas as pd
import plotly.express as px
data = [
["A", "1", 200.0, 20.0],
["A", "2", 100.0, 10.0],
["A", "3", 350.0, 35.0],
["B", "1", 500.0, 50.0],
["C", "1", 20.0, 2.0],
["C", "2", 50.0, 5.0],
]
columns = ["Category Group", "Category", "Value", "Additional Value"]
df = pd.DataFrame(data, columns=columns)
fig = px.sunburst(df, path=['Category Group', 'Category'], values='Value', custom_data=['Additional Value'])
fig.update_traces(textinfo="label+percent entry")
fig.update_traces(hovertemplate='<b>%{label}</b><br>Value: $%{value}<br>Additional Value: $%{customdata[0]}<br>')
```

Parents with >1 leaf do not display the sum of their children for the `custom_data` field. This behavior is not in line with other fields, such as `value` or `color`, which correctly display the sum of their children. | open | 2023-04-22T07:18:28Z | 2024-08-12T20:53:06Z | https://github.com/plotly/plotly.py/issues/4172 | [
"bug",
"P3"
] | DoubleGremlin181 | 0 |
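As a sketch of the aggregation the report above expects (plain Python mirroring the sample rows), the per-parent sums can be computed directly; a pre-aggregation like this can also serve as a workaround for building hover text until the behavior is fixed:

```python
# Sketch: the per-parent sums the issue expects px.sunburst to show for
# custom_data, computed by hand from the example rows.
from collections import defaultdict

rows = [  # (Category Group, Category, Value, Additional Value) as in the example
    ("A", "1", 200.0, 20.0), ("A", "2", 100.0, 10.0), ("A", "3", 350.0, 35.0),
    ("B", "1", 500.0, 50.0), ("C", "1", 20.0, 2.0), ("C", "2", 50.0, 5.0),
]
parent_totals = defaultdict(float)
for group, _category, _value, additional in rows:
    parent_totals[group] += additional  # what the parent's hover ought to display
```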
Sentdex/socialsentiment | plotly | 14 | Sentiment analysis for tweets written in other language | Hi! First of all, thanks for your amazing work!
I would like to share my work based on this great repository: I made sentiment analysis dashboards for the **Portuguese language**, check it out! You are able to change the language.
- https://github.com/Alro10/twitter-sentiment-live
Cheers. | open | 2019-07-30T01:39:49Z | 2019-07-30T01:39:49Z | https://github.com/Sentdex/socialsentiment/issues/14 | [] | Alro10 | 0 |
explosion/spaCy | machine-learning | 12,788 | A | <!-- Describe the problem or suggestion here. If you've found a mistake and you know the answer, feel free to submit a pull request straight away: https://github.com/explosion/spaCy/pulls -->
## Which page or section is this issue related to?
<!-- Please include the URL and/or source. --> | closed | 2023-07-04T15:02:21Z | 2023-08-04T00:02:20Z | https://github.com/explosion/spaCy/issues/12788 | [
"invalid"
] | 4158794031 | 1 |
microsoft/qlib | deep-learning | 1,763 | An error is raised here when updating the signal; with the parameter changed to 0 it can be retrieved | https://github.com/microsoft/qlib/blob/39f88daaa7b78fdb6b2a10ce6518b6bf154f225c/qlib/workflow/online/manager.py#L280 | open | 2024-03-18T04:22:37Z | 2024-05-28T12:37:46Z | https://github.com/microsoft/qlib/issues/1763 | [] | chenli90s | 1 |
StratoDem/sd-material-ui | dash | 47 | Port Popover to Dash component | http://www.material-ui.com/#/components/popover | closed | 2018-01-25T21:46:30Z | 2018-08-30T20:57:51Z | https://github.com/StratoDem/sd-material-ui/issues/47 | [
"Tech: JS",
"Tech: Single Component"
] | mjclawar | 0 |
CorentinJ/Real-Time-Voice-Cloning | python | 1,192 | ost vram | closed | 2023-04-14T22:03:24Z | 2023-04-14T22:03:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/1192 | [] | jhuebner79 | 0 | |
mitmproxy/mitmproxy | python | 6,211 | AssertionError: `self.quic` | ```
[12:13:19.702][[::1]:54450] client connect
[12:13:19.703][[::1]:54450] >> Start({})
[12:13:19.703][[::1]:54450] >> Start({})
[12:13:19.703][[::1]:54450] >> DataReceived(client, b'Z"8X\xd1\xbe\xf3\xdf\xe3it\x82\xef\x89s\xb4\x84\x0b\xe6\xd3\xac]\x97\x9c\x18\xdb\xd6kP\xd9')
[12:13:19.703][[::1]:54450] >> DataReceived(client, b'Z"8X\xd1\xbe\xf3\xdf\xe3it\x82\xef\x89s\xb4\x84\x0b\xe6\xd3\xac]\x97\x9c\x18\xdb\xd6kP\xd9')
[12:13:19.703][[::1]:54450] << NextLayerHook(data=NextLayer:None)
[12:13:19.704][[::1]:54450] << NextLayerHook(data=NextLayer:None)
[12:13:19.704][[::1]:54450] >> DataReceived(client, b'J"8X\xd1\xbe\xf3\xdf\xe3\xcb\x00`5D\xa0\xd4\x1e\x17GS\xad\xa94\xb1Y\xb8\x88\x90\xcb\xa8\x93\x17O\xf2\x1d')
[12:13:19.704][[::1]:54450] >! DataReceived(client, b'J"8X\xd1\xbe\xf3\xdf\xe3\xcb\x00`5D\xa0\xd4\x1e\x17GS\xad\xa94\xb1Y\xb8\x88\x90\xcb\xa8\x93\x17O\xf2\x1d')
[12:13:19.705][[::1]:54450] >> Reply(NextLayerHook(data=NextLayer:ServerQuicLayer(inactive)), None)
[12:13:19.705][[::1]:54450] >> Reply(NextLayerHook(data=NextLayer:ServerQuicLayer(inactive)), None)
[12:13:19.705][[::1]:54450] [nextlayer] ServerQuicLayer(inactive)
[12:13:19.705][[::1]:54450] >> Start({})
[12:13:19.705][[::1]:54450] >> Start({})
[12:13:19.705][[::1]:54450] >> DataReceived(client, b'Z"8X\xd1\xbe\xf3\xdf\xe3it\x82\xef\x89s\xb4\x84\x0b\xe6\xd3\xac]\x97\x9c\x18\xdb\xd6kP\xd9')
[12:13:19.705][[::1]:54450] >> DataReceived(client, b'Z"8X\xd1\xbe\xf3\xdf\xe3it\x82\xef\x89s\xb4\x84\x0b\xe6\xd3\xac]\x97\x9c\x18\xdb\xd6kP\xd9')
[12:13:19.705][[::1]:54450] Client QUIC handshake failed. Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840be6d3ac5d979c18dbd66b50d9)
[12:13:19.705][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840be6d3ac5d979c18dbd66b50d9)', 'tls': True, 'timestamp_start': 1687860799.7021859, 'proxy_mode': ProxyMode.parse('reverse:http3://mitmproxy.org/@443')}), context=Context(
Client({'id': 'à8d63a9', 'address': Nà
[12:13:19.705][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malà
[12:13:19.705][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malà
[12:13:19.705][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malà
[12:13:19.706][[::1]:54450] !> DataReceived(client, b'J"8X\xd1\xbe\xf3\xdf\xe3\xcb\x00`5D\xa0\xd4\x1e\x17GS\xad\xa94\xb1Y\xb8\x88\x90\xcb\xa8\x93\x17O\xf2\x1d')
[12:13:19.706][[::1]:54450] >> DataReceived(client, b'J"8X\xd1\xbe\xf3\xdf\xe3\xcb\x00`5D\xa0\xd4\x1e\x17GS\xad\xa94\xb1Y\xb8\x88\x90\xcb\xa8\x93\x17O\xf2\x1d')
[12:13:19.706][[::1]:54450] >! DataReceived(client, b'J"8X\xd1\xbe\xf3\xdf\xe3\xcb\x00`5D\xa0\xd4\x1e\x17GS\xad\xa94\xb1Y\xb8\x88\x90\xcb\xa8\x93\x17O\xf2\x1d')
[12:13:19.706][[::1]:54450] >> Reply(TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840be6d3ac5d979c18dbd66b50d9)', 'tls': True, 'timestamp_start': 1687860799.7021859, 'proxy_mode': ProxyMode.parse('reverse:http3://mitmproxy.org/@443')}), context=Context(
Client({'id': 'à8d63a9', 'addreà
[12:13:19.708][[::1]:54450] >> Reply(TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC headeà
[12:13:19.708][[::1]:54450] >> Reply(TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC headeà
[12:13:19.708][[::1]:54450] << CloseConnection({'connection': Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840be6d3ac5d979c18dbd66b50d9)', 'tls': True, 'timestamp_start': 1687860799.7021859, 'proxy_mode': ProxyMode.parse('reverse:http3://mitmproxy.org/@443')})})
[12:13:19.708][[::1]:54450] << CloseConnection({'connection': Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed headà
[12:13:19.708][[::1]:54450] << CloseConnection({'connection': Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed headà
[12:13:19.708][[::1]:54450] [quic] Swallowing Start({}) as handshake failed.
[12:13:19.708][[::1]:54450] !> DataReceived(client, b'J"8X\xd1\xbe\xf3\xdf\xe3\xcb\x00`5D\xa0\xd4\x1e\x17GS\xad\xa94\xb1Y\xb8\x88\x90\xcb\xa8\x93\x17O\xf2\x1d')
[12:13:19.709][[::1]:54450] mitmproxy has crashed!
Traceback (most recent call last):
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\server.py", line 359, in server_event
for command in layer_commands:
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layer.py", line 176, in handle_event
command = next(command_generator)
^^^^^^^^^^^^^^^^^^^^^^^
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layer.py", line 265, in handle_event
yield from self._handle(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layer.py", line 176, in handle_event
command = next(command_generator)
^^^^^^^^^^^^^^^^^^^^^^^
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layers\quic.py", line 803, in _handle_event
yield from super()._handle_event(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\tunnel.py", line 98, in _handle_event
yield from self.event_to_child(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layers\quic.py", line 1092, in event_to_child
yield from super().event_to_child(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layers\quic.py", line 808, in event_to_child
yield from super().event_to_child(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\tunnel.py", line 153, in event_to_child
for command in self.child_layer.handle_event(event):
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layer.py", line 137, in handle_event
yield from self.__continue(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layer.py", line 235, in __continue
yield from self.__process(command_generator)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layer.py", line 191, in __process
command = command_generator.send(send)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layers\quic.py", line 803, in _handle_event
yield from super()._handle_event(event)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\tunnel.py", line 83, in _handle_event
yield from self.receive_data(event.data)
File "c:\users\user\git\mitmproxy\mitmproxy\proxy\layers\quic.py", line 966, in receive_data
assert self.quic
AssertionError
[12:13:19.710][[::1]:54450] >> ConnectionClosed(connection=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840be6d3ac5d979c18dbd66b50d9)', 'tls': True, 'timestamp_start': 1687860799.7021859, 'proxy_mode': ProxyMode.parse('reverse:http3://mitmproxy.org/@443')}))
[12:13:19.710][[::1]:54450] >> ConnectionClosed(connection=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840beà
[12:13:19.710][[::1]:54450] >> ConnectionClosed(connection=Client({'id': 'à8d63a9', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (5a223858d1bef3dfe3697482ef8973b4840beà
[12:13:19.712][[::1]:54450] client disconnect
[12:13:25.414][[::1]:54450] client connect
[12:13:25.415][[::1]:54450] >> Start({})
[12:13:25.415][[::1]:54450] >> Start({})
[12:13:25.415][[::1]:54450] >> DataReceived(client, b'A"8X\xd1\xbe\xf3\xdf\xe3-\xc5\xf9\x8fJ@o\x93#\xfbD \x87\xfdY\x04\\\xf4\xa3\xcb\xdc+\xb0B o\x88\x14\x11C\xe4\xcb\x15\xa1*Rw\x8c\xa5\xcb\xbc\x1e\xb5Q3.\x12qG:GeO\xeezV\x9d\xce{\xd0C3\xd8\xf0b\xe4\xbf\xc1\xe4\xb7\xe2"Tn\xb8\x03\xa6\xa0\xf0\xed\x88\x0cj\xaa\x7f-[@\xd0')
[12:13:25.415][[::1]:54450] >> DataReceived(client, b'A"8X\xd1\xbe\xf3\xdf\xe3-\xc5\xf9\x8fJ@o\x93#\xfbD \x87\xfdY\x04\\\xf4\xa3\xcb\xdc+\xb0B o\x88\x14\x11C\xe4\xcb\x15\xa1*Rw\x8c\xa5\xcb\xbc\x1e\xb5Q3.\x12qG:GeO\xeezV\x9d\xce{\xd0C3\xd8\xf0b\xe4\xbf\xc1\xe4\xb7\xe2"Tn\xb8\x03\xa6\xà
[12:13:25.415][[::1]:54450] << NextLayerHook(data=NextLayer:None)
[12:13:25.415][[::1]:54450] << NextLayerHook(data=NextLayer:None)
[12:13:25.416][[::1]:54450] >> Reply(NextLayerHook(data=NextLayer:ServerQuicLayer(inactive)), None)
[12:13:25.416][[::1]:54450] >> Reply(NextLayerHook(data=NextLayer:ServerQuicLayer(inactive)), None)
[12:13:25.416][[::1]:54450] [nextlayer] ServerQuicLayer(inactive)
[12:13:25.416][[::1]:54450] >> Start({})
[12:13:25.416][[::1]:54450] >> Start({})
[12:13:25.416][[::1]:54450] >> DataReceived(client, b'A"8X\xd1\xbe\xf3\xdf\xe3-\xc5\xf9\x8fJ@o\x93#\xfbD \x87\xfdY\x04\\\xf4\xa3\xcb\xdc+\xb0B o\x88\x14\x11C\xe4\xcb\x15\xa1*Rw\x8c\xa5\xcb\xbc\x1e\xb5Q3.\x12qG:GeO\xeezV\x9d\xce{\xd0C3\xd8\xf0b\xe4\xbf\xc1\xe4\xb7\xe2"Tn\xb8\x03\xa6\xa0\xf0\xed\x88\x0cj\xaa\x7f-[@\xd0')
[12:13:25.416][[::1]:54450] >> DataReceived(client, b'A"8X\xd1\xbe\xf3\xdf\xe3-\xc5\xf9\x8fJ@o\x93#\xfbD \x87\xfdY\x04\\\xf4\xa3\xcb\xdc+\xb0B o\x88\x14\x11C\xe4\xcb\x15\xa1*Rw\x8c\xa5\xcb\xbc\x1e\xb5Q3.\x12qG:GeO\xeezV\x9d\xce{\xd0C3\xd8\xf0b\xe4\xbf\xc1\xe4\xb7\xe2"Tn\xb8\x03\xa6\xà
[12:13:25.416][[::1]:54450] Client QUIC handshake failed. Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fb442087fd59045cf4a3cbdc2bb042206f88141143e4cb15a12a52778ca5cbbc1eb551332e1271473a47654fee7a569dce7bd04333d8f062e4bfc1e4b7e222546eb803a6a0f0ed880c6aaa7f2d5b40d0)
[12:13:25.416][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fb442087fd59045cf4a3cbdc2bb042206f88141143e4cb15a12a52778ca5cbbc1eb551332e1271473a47654fee7a569dce7bd04333d8f062e4bfc1e4b7e222546eb803a6a0f0ed880c6aaa7f2d5b40d0)', 'tls': True, 'timestamp_start': 1687860805.à
[12:13:25.416][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malà
[12:13:25.416][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malà
[12:13:25.417][[::1]:54450] << TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malà
[12:13:25.417][[::1]:54450] >> Reply(TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fb442087fd59045cf4a3cbdc2bb042206f88141143e4cb15a12a52778ca5cbbc1eb551332e1271473a47654fee7a569dce7bd04333d8f062e4bfc1e4b7e222546eb803a6a0f0ed880c6aaa7f2d5b40d0)', 'tls': True, 'timestamp_start': 16878à
[12:13:25.417][[::1]:54450] >> Reply(TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC headeà
[12:13:25.418][[::1]:54450] >> Reply(TlsFailedClientHook(data=QuicTlsData(conn=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC headeà
[12:13:25.418][[::1]:54450] << CloseConnection({'connection': Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fb442087fd59045cf4a3cbdc2bb042206f88141143e4cb15a12a52778ca5cbbc1eb551332e1271473a47654fee7a569dce7bd04333d8f062e4bfc1e4b7e222546eb803a6a0f0ed880c6aaa7f2d5b40d0)', 'tls': True, 'timestamp_start': 1687860805.4149044, 'pà
[12:13:25.418][[::1]:54450] << CloseConnection({'connection': Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed headà
[12:13:25.418][[::1]:54450] << CloseConnection({'connection': Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'state': <ConnectionState.OPEN: 3>, 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed headà
[12:13:25.418][[::1]:54450] [quic] Swallowing Start({}) as handshake failed.
[12:13:25.418][[::1]:54450] >> ConnectionClosed(connection=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fb442087fd59045cf4a3cbdc2bb042206f88141143e4cb15a12a52778ca5cbbc1eb551332e1271473a47654fee7a569dce7bd04333d8f062e4bfc1e4b7e222546eb803a6a0f0ed880c6aaa7f2d5b40d0)', 'tls': True, 'timestamp_start': 1687860805.4149044, 'proxy_mode': ProxyMode.parse('reverse:htà
[12:13:25.418][[::1]:54450] >> ConnectionClosed(connection=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fà
[12:13:25.418][[::1]:54450] >> ConnectionClosed(connection=Client({'id': 'à91af08', 'address': None, 'peername': ('::1', 54450, 0, 0), 'sockname': ('::', 443, 0, 0), 'transport_protocol': 'udp', 'error': 'Cannot parse QUIC header: Malformed head (41223858d1bef3dfe32dc5f98f4a406f9323fà
[12:13:25.419][[::1]:54450] client disconnect
``` | closed | 2023-06-27T10:22:34Z | 2025-02-16T20:59:04Z | https://github.com/mitmproxy/mitmproxy/issues/6211 | [
"kind/bug",
"area/protocols",
"nlnet"
] | mhils | 0 |
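The crash above follows a common pattern: an event handler asserts state (`assert self.quic`) that a failed handshake never initialized, so a late datagram after `TlsFailedClientHook` trips the assertion. A generic sketch of guarding instead of asserting (illustrative only, not actual mitmproxy code):

```python
# Generic sketch of the failure mode: receive_data() asserts state that a
# failed handshake never set up. Returning early instead of asserting lets
# late datagrams be dropped safely rather than crashing the proxy.
class QuicLayerSketch:
    def __init__(self):
        self.quic = None  # would be set only after a successful handshake

    def receive_data(self, data: bytes) -> list:
        if self.quic is None:
            # Handshake failed earlier; swallow the datagram instead of
            # tripping `assert self.quic`.
            return []
        return [("forward", data)]
```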
piskvorky/gensim | data-science | 3,371 | Topic coherence decreasing with increasing number of topics | Hello,
I am a bit confused that all of the topic coherence metrics decrease as the number of topics increases, especially since I get the highest coherence scores for only one topic. I have attached a picture of my graph and my code. Any help is appreciated!


| open | 2022-07-27T17:29:11Z | 2022-07-27T17:29:11Z | https://github.com/piskvorky/gensim/issues/3371 | [] | franziskaweindel | 0 |
microsoft/nlp-recipes | nlp | 348 | [BUG] Cannot open notebook "examples/sentence_similarity/gensen_aml_deep_dive.ipynb" | ### Description
When I try to run the notebook "examples/sentence_similarity/gensen_aml_deep_dive.ipynb" on my machine, I get the following error message.

I am using Anaconda and Jupyter Notebook. I also tried opening it in VS Code to review it, but that failed as well.
### How do we replicate the bug?
<!--- Please be specific as possible (use a list if needed). -->
<!--- For example: -->
<!--- * Create a conda environment for gpu -->
<!--- * Run unit test `test_timer.py` -->
<!--- * ... -->
Open the notebook with Jupyter notebook or VS Code.
### Expected behavior (i.e. solution)
<!--- For example: -->
<!--- * The tests for the timer should pass successfully. -->
### Other Comments
| closed | 2019-08-22T13:43:49Z | 2019-08-22T15:38:48Z | https://github.com/microsoft/nlp-recipes/issues/348 | [
"bug"
] | kehuangms | 0 |
tortoise/tortoise-orm | asyncio | 951 | An iso8601.iso8601.ParseError at DatetimeField when the field has no data | ### model
```python
class BaseModel(Model):
    id = fields.BigIntField(pk=True)
    created_at = fields.DatetimeField(auto_now_add=True, null=False)
    updated_at = fields.DatetimeField(auto_now=True, null=False)
    is_delete = fields.BooleanField(null=False, default=False)


class User(BaseModel, ModelMixin):
    cellphone = fields.CharField(max_length=16, null=False)
    name = fields.CharField(max_length=30, null=False)

    admin_user: fields.OneToOneRelation['AdminUser']

    class Meta:
        table = 'users'
        unique_together = (('cellphone',),)


class AdminUser(BaseModel, ModelMixin):
    user: fields.OneToOneRelation['User'] = fields.OneToOneField(
        model_name='models.User', related_name='admin_user', db_constraint=False)
    login_time = fields.DatetimeField(null=True)
    token_expired = fields.DatetimeField(null=True)

    class Meta:
        table = 'admin_users'
### description
```text
1. await User.get_or_none(id=1)      -> ok
2. await AdminUser.get_or_none(id=1) -> error
```
### error description
```text
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/Miniconda3/envs/ftm_dev/lib/python3.8/site-packages/tortoise/queryset.py", line 966, in _execute
instance_list = await self._db.executor_class(
File "/opt/Miniconda3/envs/ftm_dev/lib/python3.8/site-packages/tortoise/backends/base/executor.py", line 137, in execute_select
instance: "Model" = self.model._init_from_db(
File "/opt/Miniconda3/envs/ftm_dev/lib/python3.8/site-packages/tortoise/models.py", line 729, in _init_from_db
setattr(self, model_field, field.to_python_value(kwargs[key]))
File "/opt/Miniconda3/envs/ftm_dev/lib/python3.8/site-packages/tortoise/fields/data.py", line 316, in to_python_value
value = parse_datetime(value)
File "/opt/Miniconda3/envs/ftm_dev/lib/python3.8/site-packages/iso8601/iso8601.py", line 195, in parse_date
raise ParseError("Unable to parse date string %r" % datestring)
iso8601.iso8601.ParseError: Unable to parse date string ''
```
### env
- asyncmy 0.2.1
- tortoise-orm 0.17.8
- pytest 6.2.5
- fastapi 0.68.0
- linux deepin 20.2.4
- mysql 8.0.26
- python 3.8.11
### other
- There may also be an error somewhere in my code
- I will try a different computer to test it
| closed | 2021-10-11T14:57:42Z | 2021-10-11T15:11:14Z | https://github.com/tortoise/tortoise-orm/issues/951 | [] | panla | 2 |
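The failure above boils down to `parse_datetime` being handed an empty string instead of `None` for a nullable column. A defensive `to_python_value`-style guard (a stdlib sketch with `datetime.fromisoformat` standing in for `iso8601.parse_date`; not the actual tortoise-orm code) would map falsy DB values to `None` before parsing:

```python
# Sketch: a nullable DatetimeField should yield None for NULL/empty column
# values rather than pass '' to the ISO-8601 parser, which raises ParseError
# as shown in the traceback above. Not the actual tortoise-orm implementation.
from datetime import datetime
from typing import Optional

def to_python_datetime(value) -> Optional[datetime]:
    if value is None or value == "":
        return None  # empty / NULL database value -> Python None
    if isinstance(value, datetime):
        return value
    return datetime.fromisoformat(value)  # stdlib stand-in for iso8601.parse_date
```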
skforecast/skforecast | scikit-learn | 228 | In `grid_search_forecaster` and `backtesting_forecaster`, does the algorithm use lagged real values or lagged predicted values? | I have split my data into training, validation, and test sets. The documentation says that in recursive multi-step forecasting the lagged values come from predicted values ([link](https://joaquinamatrodrigo.github.io/skforecast/0.4.3/quick-start/introduction-forecasting.html)), not from the actual values in the test set. However, the results of my code make me doubt this:
```python
outer_start_date = '2017-01-07'
outer_end_date = '2021-08-21'
end_train = '2019-06-01'
end_validation = '2020-07-01'
selected_forecaster =
lags_grid = [1, 2, 10, 26, 52, 60, [1, 52]]
autoregressive_n_lag = 104
future_prediction_n_step = 8
forecaster = ForecasterAutoreg(
regressor=XGBRegressor(random_state=123),
lags=autoregressive_n_lag # This value will be replaced in the grid search
)
results_grid = grid_search_forecaster(
forecaster=forecaster,
y=df.loc[:end_validation, 'REPORTED_CASES'],
exog=df.loc[:end_validation],
param_grid=param_grid,
lags_grid=lags_grid,
steps=future_prediction_n_step,
metric=curr_performance_metric,
refit=refit_in_last_backtest,
initial_train_size=len(df[:end_train]),
fixed_train_size=fixed_train_size_in_last_backtest,
return_best=True,
verbose=False
)
metric, predictions = backtesting_forecaster(
forecaster=forecaster,
y=df.REPORTED_CASES,
initial_train_size=len(df.REPORTED_CASES[:end_validation]),
fixed_train_size=fixed_train_size_in_last_backtest,
steps=future_prediction_n_step,
metric='mean_absolute_error',
refit=refit_in_last_backtest,
interval=[5, 95],
n_boot=500,
in_sample_residuals=True,
verbose=False
)
```

The model has no features other than the lags based on `lags_grid=[1, 2, 10, 26, 52, 60, [1, 52]]`. If the assumption is true that lag terms are computed from predicted values in the test set, I shouldn't be seeing this result: the forecaster seems to know about, and closely matches, the drop right before 2020. We would also expect the forecaster to predict a peak in mid/late 2020, which it didn't.
When I changed the lag grid to `lags_grid = [[26, 52]]`, it shows the following which confirms my theory.

Can someone please confirm how lags are chosen during backtesting on the test set? Is there an option to use the predicted values' lags instead of the actual values' lags when forecasting the test set? | closed | 2022-08-26T23:48:56Z | 2022-08-28T05:50:07Z | https://github.com/skforecast/skforecast/issues/228 | [
"question"
] | kaionwong | 2 |
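For reference, recursive multi-step forecasting feeds each prediction back in as the next lag, so within one prediction window the model never uses observed test values. A minimal sketch with a toy one-lag model (illustrative only, not skforecast internals):

```python
# Sketch: recursive multi-step prediction with a single lag. Each step's input
# is the PREVIOUS PREDICTION, not the actual observed test value.
def recursive_forecast(model, last_value, steps):
    predictions = []
    lag = last_value
    for _ in range(steps):
        y_hat = model(lag)       # model maps current lag -> next value
        predictions.append(y_hat)
        lag = y_hat              # feed the prediction back in as the next lag
    return predictions

double = lambda lag: 2 * lag     # toy "model"
```

Note, however, that backtesting re-anchors each fold's lag window on the actual series up to the fold start, which may explain why predictions can still track real drops across folds even though lags within a fold come from predictions.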
xinntao/Real-ESRGAN | pytorch | 743 | ESRGAN model training (RRDB_ESRGAN_x4.pth) | I want to train a model with my own dataset, but I am getting an error.
My options file is attached.
The error I am getting is:
```
Traceback (most recent call last):
  File "C:\Users\CAE\Desktop\MY GAN\Real-ESRGAN-master\realesrgan\train.py", line 11, in <module>
    train_pipeline(root_path)
  File "C:\Users\CAE\Desktop\MY GAN\.venv\lib\site-packages\basicsr-1.4.2-py3.9.egg\basicsr\train.py", line 124, in train_pipeline
    model = build_model(opt)
  File "C:\Users\CAE\Desktop\MY GAN\.venv\lib\site-packages\basicsr-1.4.2-py3.9.egg\basicsr\models\__init__.py", line 26, in build_model
    model = MODEL_REGISTRY.get(opt['model_type'])(opt)
  File "c:\users\cae\desktop\my gan\real-esrgan-master\realesrgan\models\realesrgan_model.py", line 24, in __init__
    super(RealESRGANModel, self).__init__(opt)
  File "C:\Users\CAE\Desktop\MY GAN\.venv\lib\site-packages\basicsr-1.4.2-py3.9.egg\basicsr\models\sr_model.py", line 30, in __init__
    self.load_network(self.net_g, load_path, self.opt['path'].get('strict_load_g', True), param_key)
  File "C:\Users\CAE\Desktop\MY GAN\.venv\lib\site-packages\basicsr-1.4.2-py3.9.egg\basicsr\models\base_model.py", line 295, in load_network
    load_net = load_net[param_key]
KeyError: 'params'
```
Please help me in this regard.
Thanks
[train_test_dataset_4x.zip](https://github.com/xinntao/Real-ESRGAN/files/13997380/train_test_dataset_4x.zip)
| open | 2024-01-20T09:06:45Z | 2024-05-18T16:49:55Z | https://github.com/xinntao/Real-ESRGAN/issues/743 | [] | durrani88 | 2 |
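A common diagnosis for the `KeyError: 'params'` above is that basicsr's `load_network` indexes `load_net[param_key]`, so the checkpoint must store its weights under that top-level key (often `'params'` or `'params_ema'`). A sketch of checking which key the checkpoint actually uses (hypothetical helper, simulated with plain dicts; real checkpoints are loaded with `torch.load`):

```python
# Sketch: pick whichever expected top-level key the checkpoint actually has;
# a KeyError here would mean the .pth stores a raw state dict or uses a
# different key, so param_key in the options YAML must be adjusted to match.
def pick_param_key(checkpoint, preferred=("params", "params_ema")):
    for key in preferred:
        if key in checkpoint:
            return key
    raise KeyError(f"none of {preferred} in checkpoint; found keys: {list(checkpoint)}")
```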
waditu/tushare | pandas | 1,511 | With a 上海中期期货有限公司 (Shanghai CIFCO Futures) account, how can one obtain options and futures tick data? | The futures/options TICK data documentation mentions: the data belongs to 上海中期期货有限公司 (Shanghai CIFCO Futures Co., Ltd.), and only holders of a trading account with that company can obtain it.
How can I use tushare together with the futures-company account to fetch the related tick data? | open | 2021-02-07T09:54:47Z | 2021-02-07T09:54:47Z | https://github.com/waditu/tushare/issues/1511 | [] | DrShuaiguo | 0 |
alyssaq/face_morpher | numpy | 41 | stasm error | Hi Alyssa,
I am trying to use the averager, but I get the following error (macOS 10.13.5 with Python 3.6.3):
```
Traceback (most recent call last):
  File "/Users/orangelewis/Spider/test.py", line 3, in <module>
    import stasm
  File "/Users/orangelewis/djiangoTest/lib/python3.6/site-packages/stasm/__init__.py", line 3, in <module>
    from _stasm import __doc__
ImportError: dlopen(/Users/orangelewis/djiangoTest/lib/python3.6/site-packages/_stasm.cpython-36m-darwin.so, 2): Symbol not found: __ZN2cv17CascadeClassifier16detectMultiScaleERKNS_11_InputArrayERSt6vectorINS_5Rect_IiEESaIS6_EEdiiNS_5Size_IiEESB_
  Referenced from: /Users/orangelewis/djiangoTest/lib/python3.6/site-packages/_stasm.cpython-36m-darwin.so
  Expected in: flat namespace
 in /Users/orangelewis/djiangoTest/lib/python3.6/site-packages/_stasm.cpython-36m-darwin.so
```
Thank you! | open | 2018-06-28T17:26:27Z | 2018-07-10T18:31:51Z | https://github.com/alyssaq/face_morpher/issues/41 | [] | orangelewis | 3 |
dynaconf/dynaconf | flask | 851 | [bug] dynaconf is not installable in venvs without setuptools | ### Bug description
`dynaconf` doesn't declare any dependencies (due to vendoring), but that's not really true, because there is one runtime dependency, `pkg_resources`, which is distributed with `setuptools`:
https://github.com/dynaconf/dynaconf/blob/0439bf836f1a22e96e4c71d388c2e68fd9b70425/dynaconf/contrib/flask_dynaconf.py#L17
How is it possible that it actually works? Only thanks to a "de facto" standard of pre-installing `setuptools`: a) together with the Python interpreter, and b) when [creating virtual environments](https://packaging.python.org/en/latest/tutorials/installing-packages/#creating-virtual-environments):
>[venv](https://docs.python.org/3/library/venv.html) is available by default in Python 3.3 and later, and installs [pip](https://packaging.python.org/en/latest/key_projects/#pip) and [setuptools](https://packaging.python.org/en/latest/key_projects/#setuptools) into created virtual environments in Python 3.4 and later.
>
>[virtualenv](https://packaging.python.org/en/latest/key_projects/#virtualenv) needs to be installed separately, but supports Python 2.7+ and Python 3.3+, and [pip](https://packaging.python.org/en/latest/key_projects/#pip), [setuptools](https://packaging.python.org/en/latest/key_projects/#setuptools) and [wheel](https://packaging.python.org/en/latest/key_projects/#wheel) are always installed into created virtual environments by default (regardless of Python version).
It means `setuptools` was not explicitly declared, but it was assumed it would be there anyway. But as I mentioned, it never was a real standard, and it caused serious issues in the Python build system, as precisely described in [PEP 518](https://peps.python.org/pep-0518/#rationale):
>But when a project chooses to use setuptools, the use of an executable file like setup.py becomes an issue. You can’t execute a setup.py file without knowing its dependencies, but currently there is no standard way to know what those dependencies are in an automated fashion without executing the setup.py file where that information is stored. It’s a catch-22 of a file not being runnable without knowing its own contents which can’t be known programmatically unless you run the file.
And that's why `PEP 518` introduced the [build-system table](https://peps.python.org/pep-0518/#build-system-table) defined in `pyproject.toml`:
```
[build-system]
# Minimum requirements for the build system to execute.
requires = ["setuptools", "wheel"] # PEP 508 specifications.
```
Thanks to that and [PEP 517](https://peps.python.org/pep-0517/) - package managers know what are the actual build dependencies and how they should be handled. What does it imply? No need to blindly install `setuptools` anymore.
Based on that `poetry` introduced a flag [`virtualenvs.options.no-setuptools`](https://python-poetry.org/docs/configuration/#virtualenvsoptionsno-setuptools), which is currently disabled by default, but generally recommended:
<img width="945" alt="image" src="https://user-images.githubusercontent.com/24907857/210487739-354f7b1c-7a70-4853-8c1d-69aab3444644.png">
What's the implication of the above? Well, I guess you already know:
```shell
❯ poetry config --local virtualenvs.options.no-setuptools true
❯ poetry add dynaconf
❯ poetry run python -c "from dynaconf.contrib.flask_dynaconf import DynaconfConfig"
Traceback (most recent call last):
File "<string>", line 1, in <module>
File "/Users/lanskij/Repositories/dyna/.venv/lib/python3.10/site-packages/dynaconf/__init__.py", line 5, in <module>
from dynaconf.contrib import DjangoDynaconf # noqa
File "/Users/lanskij/Repositories/dyna/.venv/lib/python3.10/site-packages/dynaconf/contrib/__init__.py", line 4, in <module>
from dynaconf.contrib.flask_dynaconf import DynaconfConfig # noqa
File "/Users/lanskij/Repositories/dyna/.venv/lib/python3.10/site-packages/dynaconf/contrib/flask_dynaconf.py", line 17, in <module>
import pkg_resources
ModuleNotFoundError: No module named 'pkg_resources'
```
### Required fixes
- **package building**
As showed above - we can't assume anymore `setuptools` would be **for sure** pre-installed in given environment. That's why a proper definition of build dependency should be added to `pyproject.toml`:
```toml
[build-system]
requires = ["setuptools"]
build-backend = "setuptools.build_meta"
```
PS The current usage of `setup_requires` in `setup.py` is redundant, it's even mentioned directly in `PEP 518`:
> Setuptools tried to solve this with a setup_requires argument to its setup() function [[3]](https://peps.python.org/pep-0518/#setup-args). This solution has a number of issues, such as:
> - This cannot include setuptools itself nor can it include a replacement to setuptools, which means that projects such as numpy.distutils are largely incapable of utilizing it and projects cannot take advantage of newer setuptools features until their users naturally upgrade the version of setuptools to a newer one.
- **package runtime**
Here we have 3 solutions:
- specify `setuptools` as a package dependency
- vendor `setuptools`
- get rid of `setuptools`- and `pkg_resources`-related runtime logic at all
Personally - I would vote for the last one. Looking quickly - there's exactly one such line, which is:
https://github.com/dynaconf/dynaconf/blob/0439bf836f1a22e96e4c71d388c2e68fd9b70425/dynaconf/contrib/flask_dynaconf.py#L226
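For reference, a `pkg_resources`-free version of that kind of lookup needs only the stdlib; a sketch (the helper name is made up):

```python
from importlib.metadata import PackageNotFoundError, version


def get_version(dist_name: str, default: str = "unknown") -> str:
    """Illustrative replacement for pkg_resources.get_distribution(...).version."""
    try:
        return version(dist_name)
    except PackageNotFoundError:
        return default


print(get_version("surely-not-a-real-distribution"))  # unknown
```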
That was probably copy-pasted from the `flask` codebase. But `flask` maintainers already fixed that on their side in https://github.com/pallets/flask/issues/4419 and switched to the built-in `importlib.metadata`, so when aligned - there would be also no need for `dynaconf` to depend on `setuptools` in runtime anymore. | closed | 2023-01-04T05:50:32Z | 2023-07-13T19:11:05Z | https://github.com/dynaconf/dynaconf/issues/851 | [
"bug",
"HIGH"
] | jaklan | 2 |
feature-engine/feature_engine | scikit-learn | 252 | 1-2 minute videos showing how to use a class in youtube | Would these be useful? | open | 2021-04-22T09:07:07Z | 2021-05-01T07:34:01Z | https://github.com/feature-engine/feature_engine/issues/252 | [
"docs"
] | solegalli | 0 |
scrapy/scrapy | python | 6,583 | `scrapy.utils.url.parse_url()` duplicates `w3lib.url.parse_url()` | `scrapy.utils.url` imports `w3lib.url.*` but since 2016 (w3lib 1.15.0) both have `parse_url()`, and with the same implementation. So we can drop the Scrapy one now. Related: #2186, #4577 | closed | 2024-12-14T08:24:18Z | 2024-12-18T06:50:45Z | https://github.com/scrapy/scrapy/issues/6583 | [
"enhancement",
"cleanup"
] | wRAR | 0 |
jupyter/nbgrader | jupyter | 1,844 | Student's solution with `input()` passes "Validate", but fails on "Autograde" | ### Steps to reproduce the actual behavior
For example, give students this assignment (I'll omit the description as it is self-explanatory):
```python
def count_digits(n, d):
### BEGIN SOLUTION
n = abs(n)
return str(n).count(str(d))
### END SOLUTION
```
Let's assume just one simple public test case:
```python
assert count_digits(112233, 2) == 2
```
Let's say that a student changes the cell with the assignment as follows:
```python
a = input("Enter an integer")
b = input("Enter a digit")
def count_digits(n, d):
n = abs(n)
return str(n).count(str(d))
s = count_digits(int(a), int(b))
print(s)
```
Now there is an issue: when the student clicks on the "Validate" button in JupyterLab to validate their assignment, nbgrader executes it just fine and says that the solution passes all the tests. However, when the instructor runs autograde, it fails with the following error:
```
StdinNotImplementedError: raw_input was called, but this frontend does not support input requests.
```
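One possible instructor-side workaround (just a sketch, not an nbgrader feature) is to disable `input()` in a hidden setup cell, so that Validate fails the same way Autograde does:

```python
import builtins


def _forbidden_input(prompt=""):
    raise RuntimeError("input() is not allowed in graded notebooks")


# Shadow input() the way the autograde kernel effectively does:
builtins.input = _forbidden_input

try:
    input("Enter an integer")
except RuntimeError as exc:
    print(exc)  # input() is not allowed in graded notebooks
```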
### Expected behavior
Validate and autograde should behave consistently with respect to the `input()` function: either both should succeed, or both should fail with the same error.
### Operating system
Arch Linux
### `nbgrader --version`
```
Python version 3.11.5 (main, Sep 2 2023, 14:16:33) [GCC 13.2.1 20230801]
nbgrader version 0.9.1
```
### `jupyterhub --version` (if used with JupyterHub)
4.0.2
### `jupyter notebook --version`
7.0.6
### `jupyter lab --version`
4.0.7 | open | 2023-11-01T12:35:29Z | 2024-03-25T13:29:57Z | https://github.com/jupyter/nbgrader/issues/1844 | [
"UX/UI"
] | lahwaacz | 1 |
pallets-eco/flask-sqlalchemy | flask | 712 | Should flask_sqlalchemy.Model.__init__ be overrideable? | I have some custom types (json-encoded lists/dicts), but working with newly-created instances of models that include those types can be awkward. I end up needing to pepper code that deals with those models with a bunch of boilerplate like the following:
```
if instance.field is None:
instance.field = []
instance.field.append('thing')
```
I've supplied column defaults, but my understanding is those don't come into play until the instance is attached to a session and flushed. SQLAlchemy's [docs on object initialization](https://docs.sqlalchemy.org/en/latest/orm/constructors.html#constructors-and-object-initialization) suggest that I should be able to override `__init__` on my model to do what I'm trying to do.
To avoid needing to do that for each individual model that uses those types, I wanted to use introspection in my base class. Which, given that I'm working under flask-sqlalchemy, I thought meant doing it on the class I pass in as my `model_class`.
That doesn't seem to be doing what I expected, and I wanted to see if my expectations are off before I start a code dive to figure out why.
Here's a variation on the quickstart sample from the flask-sqlalchemy docs:
```
from flask import Flask
from flask_sqlalchemy import Model, SQLAlchemy
class BaseModel(Model):
def __init__(self, *args, **kwargs):
print('BaseModel.__init__')
super().__init__(*args, **kwargs)
app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:////tmp/test.db'
db = SQLAlchemy(app, model_class=BaseModel)
class User(db.Model):
id = db.Column(db.Integer, primary_key=True)
username = db.Column(db.String(80), unique=True, nullable=False)
email = db.Column(db.String(120), unique=True, nullable=False)
def __repr__(self):
return '<User %r>' % self.username
class Group(db.Model):
id = db.Column(db.Integer, primary_key=True)
groupname = db.Column(db.String(80), unique=True, nullable=False)
def __init__(self, *args, **kwargs):
print('Group.__init__')
super().__init__(*args, **kwargs)
```
When I create a `User` instance, I'd expect to see `BaseModel.__init__` printed. When I create a `Group` instance, I'd expect to see that as well as `Group.__init__` printed. Here's what actually happens (python 3.6.7, flask-sqlalchemy 2.3.2, sqlalchemy 1.3.2):
```
>>> from app import *
>>> db.create_all()
>>> user = User()
>>> group = Group()
Group.__init__
>>>
``` | closed | 2019-04-11T19:11:37Z | 2020-12-05T20:37:20Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/712 | [] | rascalking | 3 |
healthchecks/healthchecks | django | 302 | Feature Request: Add integration to run local shell/python scripts | @cuu508 thank you once again for this amazing tool. I have been playing around with it a lot and am almost ready to deploy the self hosted version to prod.
I love the numerous tools with which we can integrate this app.
In our org though, we use `snmp-traps` for alerting and I was wondering if you can add an integration to run local shell/python scripts for me to accomplish this.
This integration can have the same variables as the ***Webhook*** integration.
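Roughly, I imagine it doing something like this sketch (the variable names just mirror the Webhook ones and are made up):

```python
import os
import subprocess
import sys


def run_on_state_change(command, check):
    """Run a local script with check details exposed as environment variables."""
    env = dict(os.environ, CHECK_NAME=check["name"], CHECK_STATUS=check["status"])
    return subprocess.run(command, env=env, capture_output=True, text=True)


# Stand-in for a user script that would forward an snmp-trap:
result = run_on_state_change(
    [sys.executable, "-c", "import os; print(os.environ['CHECK_STATUS'])"],
    {"name": "db-backup", "status": "down"},
)
print(result.stdout.strip())  # down
```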
Please let me know your thoughts.
Alternatively, can you suggest a workaround to be able to run a shell/python script when the status of an alert changes state. | closed | 2019-11-19T14:46:44Z | 2019-11-22T10:22:00Z | https://github.com/healthchecks/healthchecks/issues/302 | [] | gganeshan | 7 |
holoviz/panel | jupyter | 7,350 | WebLLM source code link not valid | In the app gallery https://panel.holoviz.org/gallery/index.html the WebLLM ChatInterface points to
https://panel.holoviz.org/gallery/index.html#WebLLM
It should point to
https://panel.holoviz.org/gallery/webllm.html
I tried to find the cause but could not. Some code creates this.

| closed | 2024-09-30T16:04:11Z | 2024-10-29T17:01:35Z | https://github.com/holoviz/panel/issues/7350 | [
"type: docs"
] | MarcSkovMadsen | 0 |
open-mmlab/mmdetection | pytorch | 11,740 | EfficientDet as part of the core packages | EfficientDet is currently only [living](https://github.com/open-mmlab/mmdetection/tree/main/projects/EfficientDet) under `projects`. Development is stagnant from last year, with Milestone 3 being the core implementation not tackled yet.
EfficientDet not being part of the core package hinders users who rely on mmdetection as a third-party ML framework from using EfficientDet as one of their architectures, given that anything under `projects` does not appear to be included in the mmdet package when installing it with pip, for example.
Are there any plans to make progress on Milestone 3 for EfficientDet?
Or, am I wrong in saying there is no way to have anything belonging to `projects` and `configs` if we install mmdet with pip? | open | 2024-05-24T09:17:14Z | 2024-05-24T09:17:30Z | https://github.com/open-mmlab/mmdetection/issues/11740 | [] | davideboschetto | 0 |
sktime/sktime | scikit-learn | 7,100 | [BUG] `RecursiveReductionForecaster` fails for hierarchical data | **Describe the bug**
(Still experimental) `RecursiveReductionForecaster` fails for hierarchical (i.e. with instance and time indexes) data with the global approach. Returns: `TypeError: Cannot convert input [('xxx', Period('2017-02', 'M'))] of type <class 'tuple'> to Timestamp`
**To Reproduce**
Run tests for PR https://github.com/sktime/sktime/pull/4806
**Expected behavior**
`RecursiveReductionForecaster` should be able to correctly identify cohorts in the hierarchy to use global approach
**Additional context**
The error seems to arise from `_predict_out_of_sample()`, where we try to convert the index (https://github.com/sktime/sktime/blob/e4d60d1f539cc5f5f4e4054266b9f7db4cf321c1/sktime/forecasting/compose/_reduce.py#L2501), but that obviously fails for a MultiIndex:
```python
if hasattr(self.fh, "freq") and self.fh.freq is not None:
y_plus_preds = y_plus_preds.asfreq(self.fh.freq)
```
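A possible guard (a sketch of the idea, not the actual fix) would be to apply `asfreq` only when the index is a plain time index:

```python
import pandas as pd


def safe_asfreq(y, freq):
    """Skip asfreq for hierarchical frames: MultiIndex rows are
    (instance, period) tuples that pandas cannot turn into timestamps."""
    if isinstance(y.index, pd.MultiIndex):
        return y  # would need per-instance handling, e.g. groupby(level=0)
    return y.asfreq(freq)


idx = pd.period_range("2017-01", periods=3, freq="M")
flat = pd.Series([1.0, 2.0, 3.0], index=idx)
hier = pd.Series([1.0, 2.0], index=pd.MultiIndex.from_product([["xxx"], idx[:2]]))
print(len(safe_asfreq(flat, "M")), len(safe_asfreq(hier, "M")))  # 3 2
```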
See also discord discussion: https://discord.com/channels/1075852648688930887/1277878516490178641
**Versions**
<details>
System:
python: 3.10.12 (main, Jul 29 2024, 16:56:48) [GCC 11.4.0]
executable: .../venv/bin/python
machine: Linux-5.15.153.1-microsoft-standard-WSL2-x86_64-with-glibc2.35
Python dependencies:
pip: 24.2
sktime: 0.32.0
sklearn: 1.2.2
skbase: 0.8.3
numpy: 1.23.5
scipy: 1.14.1
pandas: 1.5.3
matplotlib: 3.9.2
joblib: 1.3.2
numba: 0.60.0
statsmodels: 0.14.2
pmdarima: 2.0.4
statsforecast: None
tsfresh: None
tslearn: None
torch: None
tensorflow: None
tensorflow_probability: None
</details>
| open | 2024-09-10T07:11:10Z | 2025-02-16T16:23:28Z | https://github.com/sktime/sktime/issues/7100 | [
"bug",
"module:forecasting"
] | bastisar | 7 |
coqui-ai/TTS | python | 3,478 | [Feature request] aur package | It would be great to get an AUR package for Arch. I can't install this after 3 hours of running into errors with everything I try. | closed | 2023-12-30T12:57:01Z | 2024-02-10T18:38:06Z | https://github.com/coqui-ai/TTS/issues/3478 | [
"wontfix",
"feature request"
] | ralyodio | 2 |
ultrafunkamsterdam/undetected-chromedriver | automation | 836 | not finding the website html Body | Hey guys,
Not sure it's an issue but I would still love to get some help.
So I was testing a simple find element by xpath on that website: https://2050706v2.onlineleasing.realpage.com/
but the element was never found even though it's a valid one. I then figured out Selenium doesn't have access to that website's body, so I am curious about the reason for that.
side question: why can't I open any website without passing use_subprocess=True to uc.Chrome()? | open | 2022-10-12T12:36:44Z | 2023-12-22T23:45:40Z | https://github.com/ultrafunkamsterdam/undetected-chromedriver/issues/836 | [] | orbar1 | 3 |
vitalik/django-ninja | django | 525 | Inferr Automatic Multipart Data | As per your documentation, I can see that you recommend sending multipart data when uploading files, which I have always found hacky (in Django in general). Given this model and the rest of the code below, in order to do POST requests for creation we need to either separate things as per your docs (sending the data as multipart and putting the normal, non-file fields in a different Schema), OR mark the files as null=True in the model so that we do not need them. Is there a way to avoid code repetition and multiple scattered Schemas in order to do this?
I would like to avoid having to "hack" or compose different POST routes for every single case. Also, I do not want to send my images as Base64 in JSON, because for large files it is overkill, and chunked uploads are always way better.
This works (posting here for others' reference), but I want a more elegant way; the more automatic, the better. I am trying to figure out how to avoid having to assign the images on save after the multipart upload, and to automate that part. If you have any ideas, I am all ears; I am just trying to reduce my code. This is a very simple example, but with many fields and models it becomes tiring and code-consuming to save each ImageField manually. Also, for the "sending part", I guess there is no method other than multipart, encoded like I did in 2 separate entities (data being a stringified JSON object for all the non-file fields, and image for the image).
```python
class MediaIcon(models.Model):
name = models.CharField(max_length=255)
image = models.ImageField(null=True)
class MediaIconSchema(ModelSchema):
class Config:
model = MediaIcon
model_fields = "__all__"
@api.post("/media-icon", response=MediaIconSchema)
def create_media_icon(request, data: MediaIconSchema, image: UploadedFile = None):
obj = MediaIcon.objects.create(**data.dict())
obj.image = image
obj.save()
return obj
```
AND
```html
<div class="container">
<h1>Multipart File Upload</h1>
<form id="form" enctype="multipart/form-data">
<div class="input-group">
<label for="files">Select files</label>
<!--<input id="file" type="file" multiple/>-->
<input id="file" type="file"/>
</div>
<button class="submit-btn" type="submit">Upload</button>
</form>
</div>
<script>
const form = document.getElementById("form");
const inputFile = document.getElementById("file");
const formData = new FormData();
const handleSubmit = (event) => {
event.preventDefault();
console.log(`inputFile-> `, inputFile.files[0]);
const image = inputFile.files[0]
const data = JSON.stringify({
name: image.name,
})
formData.append("data", data);
formData.append("image", image);
fetch("http://127.0.0.1:8000/api/media-icon", {
method: "post",
body: formData,
}).catch((error) => ("Something went wrong!", error));
};
form.addEventListener("submit", handleSubmit);
</script>
```
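What I'm really after is one generic pattern; stripped of Django entirely, the idea is just this (all names here are stand-ins, not django-ninja API):

```python
import json


def build_instance(model_cls, data_json, **files):
    """One POST pattern for every model: a JSON 'data' part for plain
    fields plus any number of named file parts."""
    obj = model_cls(**json.loads(data_json))
    for field_name, uploaded in files.items():
        setattr(obj, field_name, uploaded)  # e.g. obj.image = image
    return obj


class MediaIcon:  # stand-in for the Django model
    def __init__(self, name=None):
        self.name = name
        self.image = None


icon = build_instance(MediaIcon, '{"name": "logo.png"}', image=b"<bytes>")
print(icon.name, icon.image)  # logo.png b'<bytes>'
```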
| open | 2022-08-11T18:19:25Z | 2022-08-14T08:15:39Z | https://github.com/vitalik/django-ninja/issues/525 | [] | martinlombana | 1 |
RobertCraigie/prisma-client-py | pydantic | 218 | Provide performance benchmarks | Currently our performance will not stack up well with other Python ORMs, however we can't work on improving performance without having a baseline to work from.
We should also provide context for these benchmarks by also benchmarking other ORMs. We should include:
- SQLAlchemy
- Django
- Pewee
- Pony ORM
- SQLObject
- Tortoise ORM
- SQLModel
We could also base the benchmarks on these benchmarks: https://github.com/tortoise/orm-benchmarks | open | 2022-01-12T00:31:31Z | 2022-03-10T17:22:53Z | https://github.com/RobertCraigie/prisma-client-py/issues/218 | [
"kind/improvement",
"topic: perf",
"level/advanced",
"priority/medium"
] | RobertCraigie | 3 |
deezer/spleeter | deep-learning | 725 | [Bug] sub processes are kept running even after separating is done and are re-created even after kill | - [X] I didn't find a similar issue already open.
- [X] I read the documentation (README AND Wiki)
- [X] I have installed FFMpeg
- [X] My problem is related to Spleeter only, not a derivative product (such as Webapplication, or GUI provided by others)
## Description
I am trying to use spleeter for vocal extraction in a song, with the following code. I found that it spawns several subprocesses for its work (and it works fine to separate what I wanted), but even after the separation is done, these subprocesses are still kept running.
```python
separator = Separator('spleeter:2stems')
separator.separate_to_file(filepath, self.filedirectory)
separator = None
```
I even tried to kill these sub processes with below code, it seems the sub processes are being re-created after being killed.
```
current_process = psutil.Process()
children = current_process.children(recursive=True)
for child in children:
print('Child pid is {}'.format(child.pid))
os.kill(child.pid, signal.SIGTERM)
```
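(A psutil-native sketch of the same cleanup, in case it behaves differently: terminate, wait, then kill stragglers.)

```python
import subprocess
import sys

import psutil


def kill_children(timeout=3):
    """Terminate all child processes of the current process; kill any
    that survive the timeout. Returns how many were dealt with."""
    children = psutil.Process().children(recursive=True)
    for child in children:
        child.terminate()
    gone, alive = psutil.wait_procs(children, timeout=timeout)
    for child in alive:
        child.kill()
    return len(gone) + len(alive)


# Demo child standing in for a spleeter worker:
subprocess.Popen([sys.executable, "-c", "import time; time.sleep(60)"])
print(kill_children() >= 1)  # True
```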
See below for the debug screenshot before kill (after it was run).
<a href="https://ibb.co/FshqTJV"><img src="https://i.ibb.co/Q8Jjwnd/before-kill.png" alt="before-kill" border="0" /></a>
See below for the debug screenshot after kill.
<a href="https://ibb.co/Pw1fNg9"><img src="https://i.ibb.co/Ln9281J/after-kill.png" alt="after-kill" border="0"></a>
Each sub processes takes around 170MB of memory. See below for the memory usage even after kill (these sub processes are not created elsewhere).
<a href="https://ibb.co/7Gxsdwz"><img src="https://i.ibb.co/hKx3JPM/memory-usage.png" alt="memory-usage" border="0"></a>
## Step to reproduce
<!-- Indicates clearly steps to reproduce the behavior: -->
1. Installed using `pip install spleeter`
2. Run as standalone or using VSC to debug
## Output
Memory is kept high even after use and sub processes are kept running, even after killing they will be re-created.
## Environment
<!-- Fill the following table -->
| | |
| ----------------- | ------------------------------- |
| OS | Windows |
| Installation type | pip |
| RAM available | 8GB |
| Hardware spec | CPU |
## Additional context
<!-- Add any other context about the problem here, references, cites, etc.. -->
| open | 2022-02-06T23:39:45Z | 2022-05-21T11:43:06Z | https://github.com/deezer/spleeter/issues/725 | [
"bug",
"invalid"
] | CoryXie | 3 |
zappa/Zappa | flask | 485 | [Migrated] For Pillow, need to also delete 'PIL' | Originally from: https://github.com/Miserlou/Zappa/issues/1289 by [ceefour](https://github.com/ceefour)
Before you submit this PR, please make sure that you meet these criteria:
(yes, trivial change)
## Description
delete 'PIL' if package is Pillow
## GitHub Issues
See https://github.com/Miserlou/Zappa/issues/1286#issuecomment-350545457
| closed | 2021-02-20T08:35:31Z | 2022-07-16T07:26:22Z | https://github.com/zappa/Zappa/issues/485 | [
"needs-user-testing"
] | jneves | 1 |
floodsung/Deep-Learning-Papers-Reading-Roadmap | deep-learning | 46 | Charmap error while reading README | While opening the README.md, download.py gave me this:
`UnicodeDecodeError: 'charmap' codec can't decode byte 0x9d in position 33476: character maps to <undefined>`
In case someone has the same problem, you may fix it by changing line 73 to:
`with open('README.md', encoding="utf8") as readme:` | open | 2017-03-30T22:10:47Z | 2017-03-30T22:10:47Z | https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap/issues/46 | [] | mendelson | 0 |
ultralytics/yolov5 | machine-learning | 13,265 | A Error which blast my mind.... | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and found no similar bug report.
### YOLOv5 Component
_No response_
### Bug
Tried onnx versions 17 and 12, still the same error

Here is the code
#include <fstream>
#include <opencv2/opencv.hpp>
std::vector<std::string> load_class_list()
{
std::vector<std::string> class_list;
std::ifstream ifs("/home/suman-singh/Downloads/yolov5-master/temp.txt");
std::string line;
while (getline(ifs, line))
{
class_list.push_back(line);
}
return class_list;
}
void load_net(cv::dnn::Net &net, bool is_cuda)
{
auto result = cv::dnn::readNet("/home/suman-singh/Downloads/yolov5-master/yolov5s.onnx");
if (is_cuda)
{
std::cout << "Attempty to use CUDA\n";
result.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
result.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA_FP16);
}
else
{
std::cout << "Running on CPU\n";
result.setPreferableBackend(cv::dnn::DNN_BACKEND_OPENCV);
result.setPreferableTarget(cv::dnn::DNN_TARGET_CPU);
}
net = result;
}
const std::vector<cv::Scalar> colors = {cv::Scalar(255, 255, 0), cv::Scalar(0, 255, 0), cv::Scalar(0, 255, 255), cv::Scalar(255, 0, 0)};
const float INPUT_WIDTH = 640.0;
const float INPUT_HEIGHT = 640.0;
const float SCORE_THRESHOLD = 0.2;
const float NMS_THRESHOLD = 0.4;
const float CONFIDENCE_THRESHOLD = 0.4;
struct Detection
{
int class_id;
float confidence;
cv::Rect box;
};
cv::Mat format_yolov5(const cv::Mat &source) {
int col = source.cols;
int row = source.rows;
int _max = MAX(col, row);
cv::Mat result = cv::Mat::zeros(_max, _max, CV_8UC3);
source.copyTo(result(cv::Rect(0, 0, col, row)));
return result;
}
void detect(cv::Mat &image, cv::dnn::Net &net, std::vector<Detection> &output, const std::vector<std::string> &className) {
cv::Mat blob;
auto input_image = format_yolov5(image);
cv::dnn::blobFromImage(input_image, blob, 1./255., cv::Size(INPUT_WIDTH, INPUT_HEIGHT), cv::Scalar(), true, false);
net.setInput(blob);
std::vector<cv::Mat> outputs;
net.forward(outputs, net.getUnconnectedOutLayersNames());
float x_factor = input_image.cols / INPUT_WIDTH;
float y_factor = input_image.rows / INPUT_HEIGHT;
float *data = (float *)outputs[0].data;
const int dimensions = 85;
const int rows = 25200;
std::vector<int> class_ids;
std::vector<float> confidences;
std::vector<cv::Rect> boxes;
for (int i = 0; i < rows; ++i) {
float confidence = data[4];
if (confidence >= CONFIDENCE_THRESHOLD) {
float * classes_scores = data + 5;
cv::Mat scores(1, className.size(), CV_32FC1, classes_scores);
cv::Point class_id;
double max_class_score;
minMaxLoc(scores, 0, &max_class_score, 0, &class_id);
if (max_class_score > SCORE_THRESHOLD) {
confidences.push_back(confidence);
class_ids.push_back(class_id.x);
float x = data[0];
float y = data[1];
float w = data[2];
float h = data[3];
int left = int((x - 0.5 * w) * x_factor);
int top = int((y - 0.5 * h) * y_factor);
int width = int(w * x_factor);
int height = int(h * y_factor);
boxes.push_back(cv::Rect(left, top, width, height));
}
}
data += 85;
}
std::vector<int> nms_result;
cv::dnn::NMSBoxes(boxes, confidences, SCORE_THRESHOLD, NMS_THRESHOLD, nms_result);
for (int i = 0; i < nms_result.size(); i++) {
int idx = nms_result[i];
Detection result;
result.class_id = class_ids[idx];
result.confidence = confidences[idx];
result.box = boxes[idx];
output.push_back(result);
}
}
int main(int argc, char **argv)
{
std::vector<std::string> class_list = load_class_list();
cv::Mat frame;
cv::VideoCapture capture(0);
cv::dnn::Net net;
load_net(net, true);
while (true)
{
capture.read(frame);
std::vector<Detection> output;
detect(frame, net, output, class_list);
int detections = output.size();
for (int i = 0; i < detections; ++i)
{
auto detection = output[i];
auto box = detection.box;
auto classId = detection.class_id;
const auto color = colors[classId % colors.size()];
cv::rectangle(frame, box, color, 3);
cv::rectangle(frame, cv::Point(box.x, box.y - 20), cv::Point(box.x + box.width, box.y), color, cv::FILLED);
cv::putText(frame, class_list[classId].c_str(), cv::Point(box.x, box.y - 5), cv::FONT_HERSHEY_SIMPLEX, 0.5, cv::Scalar(0, 0, 0));
}
cv::imshow("output", frame);
if (cv::waitKey(1) != -1)
{
capture.release();
std::cout << "finished by user\n";
break;
}
}
return 0;
}
### Environment
_No response_
### Minimal Reproducible Example
I want my webcam to open and take video, but it closes immediately.
### Additional
Plzz help fast
### Are you willing to submit a PR?
- [X] Yes I'd like to help by submitting a PR! | open | 2024-08-18T18:41:32Z | 2024-08-19T06:09:49Z | https://github.com/ultralytics/yolov5/issues/13265 | [
"bug"
] | sumansingh-coder | 1 |
PokeAPI/pokeapi | api | 419 | missing pokemon encounters from headbutt trees | For example, when I request 'location-area/ilex-forest-area', Pineco should obviously be available somewhere from headbutt trees, but it isn't. | closed | 2019-03-01T17:24:22Z | 2019-03-01T21:44:58Z | https://github.com/PokeAPI/pokeapi/issues/419 | [] | jachymb | 1
pydata/pandas-datareader | pandas | 521 | QUANDL: URL format need to be updated. | Hi
Located in quandl.py line 86:
`url = '{url}{dataset}/{symbol}.csv?{params}'`
This line should be changed to:
`url = '{url}{dataset}/{symbol}/data.csv?{params}'`
Otherwise you will get error from quandl:
Response Text:
b'code,message\nQECx02,You have submitted an incorrect Quandl code. Please check your Quandl codes and try again.\n'
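For reference, the corrected template expands like this (the base URL and key are illustrative placeholders):

```python
# Corrected template from above; the pieces are illustrative placeholders.
template = "{url}{dataset}/{symbol}/data.csv?{params}"
built = template.format(
    url="https://www.quandl.com/api/v3/datasets/",
    dataset="WIKI",
    symbol="AAPL",
    params="api_key=YOUR_KEY",
)
print(built)  # https://www.quandl.com/api/v3/datasets/WIKI/AAPL/data.csv?api_key=YOUR_KEY
```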
| closed | 2018-04-15T22:13:52Z | 2018-09-12T07:57:31Z | https://github.com/pydata/pandas-datareader/issues/521 | [] | ghost | 5 |
Anjok07/ultimatevocalremovergui | pytorch | 1,392 | Light Mode / Theme | It would be nice if we got a light color theme for the GUI :)
I'd be happy to take this on if we get enough people wanting this feature | open | 2024-06-05T17:28:02Z | 2024-06-05T17:29:20Z | https://github.com/Anjok07/ultimatevocalremovergui/issues/1392 | [] | karlingen | 0
biolab/orange3 | pandas | 6,626 | How to enable beginner users to use external libraries
| I want to allow beginner users to use a Python script that incorporates an external library. Is it possible to provide this to the users without requiring them to write code?
| closed | 2023-11-04T23:40:19Z | 2023-11-08T09:18:48Z | https://github.com/biolab/orange3/issues/6626 | [] | tatsuya-takakuwa | 5 |
indico/indico | sqlalchemy | 6,255 | It's not possible to insert an image in an event's description using an external url | Hi all, @OmeGak ,
In the new 3.3 version of Indico, the CKEditorWidget has been replaced with TinyMCEWidget.
I noticed that in this new TinyMCEWidget, the constructor sets absolute_urls = False by default.
In an Event's description field it is now not possible to set an absolute path to images attached in the description: if I do, it will get converted to an internal URL.
**To Reproduce**
Steps to reproduce the behavior:
1. Go to an Event
2. Click on Settings
3. Edit the section for "Title, Description, Short URL"
4. In the editor, click the "Insert/Edit image" widget icon
5. Enter an absolute path to an image (e.g from the Customisation -> Images menu, get the link from an existing image)
6. Then check the Source code and you'll see that the image src attribute has been changed to an internal path
**Expected behavior**
We would expect that the image source remains as an external url
**Screenshots**


**Additional context**
I see that the TinyMCEWidget constructor by default sets absolute_urls = False
See src/indico/web/forms/widgets.py
| open | 2024-03-27T20:28:21Z | 2024-03-28T04:40:00Z | https://github.com/indico/indico/issues/6255 | [
"bug"
] | lecabori | 14 |
tqdm/tqdm | pandas | 1,030 | Having tqdm do nothing with a flag | This is an improvement request, if not the behavior is already implemented (I checked the readme and could not find help on this topic, but if I missed something, apologies, and can you show me where I could find doc / example of the feature?).
I quite often end up writing code like this:
```python
use_tqdm = True # here True, but may change to False next time
if use_tqdm:
wrapper = tqdm.tqdm
else:
def nop(it, *a, **k):
return it
wrapper = nop
for i in wrapper(range(10)):
do_something(i)
```
Is there a way (or can a feature be added) so that one can instead do the following, in which case tqdm does nothing (and it would do something without the last arg):
```python
for i in tqdm.tqdm(range(10), do_nothing=True):
do_something(i)
``` | closed | 2020-09-09T13:12:09Z | 2020-09-09T13:39:10Z | https://github.com/tqdm/tqdm/issues/1030 | [
"invalid ⛔",
"question/docs ‽"
] | jerabaul29 | 1 |
davidteather/TikTok-Api | api | 259 | [FEATURE_REQUEST] - Get Comments By Video | closed | 2020-09-12T22:34:24Z | 2021-01-31T15:45:31Z | https://github.com/davidteather/TikTok-Api/issues/259 | [
"feature_request"
] | natallg | 4 | |
graphql-python/graphene | graphql | 773 | Anyway to implement introspection's `__InputValue.description`? | I'm using [graphdoc](https://github.com/2fd/graphdoc) to generate the document. What graphdoc outputs is something like this:
<img width="276" alt="screen shot 2018-06-19 at 02 18 25" src="https://user-images.githubusercontent.com/24715727/41554340-2c9dba20-7367-11e8-9598-37295dd4eef9.png">
graphdoc uses graphql's [introspection](https://graphql.org/learn/introspection/) system, and graphene does support introspection. But in the example above, all arguments are "Not documents" because args.description in introspection is empty:
```python
r = requests.get(
"http://localhost:9100/station/standard_graphql",
{
"query": r"""
{
__schema {
queryType {
name
}
types {
name
description
fields(includeDeprecated: true) {
name
description
args {
...InputValue
}
isDeprecated
deprecationReason
}
}
}
}
fragment InputValue on __InputValue {
name
description
defaultValue
}
"""
},
)
print(json.dumps(r.json(), indent=4))
{
"data": {
"__schema": {
"queryType": {
"name": "Query"
},
"types": [
{
"name": "Query",
"description": null,
"fields": [
{
"name": "user",
"description": null,
"args": [
{
"name": "id",
"description": null, # THIS IS NULL
"defaultValue": null
}
],
"isDeprecated": false,
"deprecationReason": null
},
...............
```
For now, does graphene have some way to add descriptions for arguments? Maybe something like these:
```python
class Query(ObjectType):
user = Field(
NotNone(User),
id=NotNone(Int),
description={"id": "user's id with hex formatting"}
)
def resolve_user(self, info, id):
return loaders.get_user_by_id_loader.load(id)
class Query(ObjectType):
user = Field(
NotNone(User),
id=NotNone(Int, description="user's id with hex formatting"),
)
def resolve_user(self, info, id):
return loaders.get_user_by_id_loader.load(id)
```
| closed | 2018-06-18T18:34:20Z | 2018-06-27T01:07:25Z | https://github.com/graphql-python/graphene/issues/773 | [] | ocavue | 2 |
matplotlib/mplfinance | matplotlib | 454 | Custom Keymap Functions? | In matplotlib there's an rc file with all the keymap functions listed to use keyboard shortcuts for things like home, navigation, pan, zoom. (https://matplotlib.org/stable/tutorials/introductory/customizing.html)
In mplfinance I don't think there is such an rc file, or at least I can't find it; I read the docs and found nothing about an rc file or keymaps. I'm working on a program and would like to be able to use vim-like keys for scrolling back/forth and zooming in/out of plot data, in a different way than the zoom and pan options given, without using the mouse.
I figured I could use the animation to update the plot data based on how I scroll with the keys. I'm just trying to get some advice on the best way to go about changing keymaps with functions besides what's in the navigation bar or if anybody else has tried doing this kind of thing before.
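If it helps, plain matplotlib lets you bind arbitrary keys on the figure canvas with `mpl_connect`, independently of the rc keymaps. A rough sketch of vim-style scroll/zoom state; the actual re-slicing and redraw of the OHLC data is left as a placeholder comment:

```python
import matplotlib
matplotlib.use("Agg")  # headless backend for this sketch
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
view = {"start": 0, "span": 100}   # window into the OHLC data

def on_key(event):
    if event.key == "h":           # scroll back
        view["start"] -= 10
    elif event.key == "l":         # scroll forward
        view["start"] += 10
    elif event.key == "k":         # zoom in
        view["span"] = max(10, view["span"] - 10)
    elif event.key == "j":         # zoom out
        view["span"] += 10
    # re-slice the data to view["start"]:view["start"] + view["span"],
    # update the artists, then call fig.canvas.draw_idle()

cid = fig.canvas.mpl_connect("key_press_event", on_key)
```

From there the animation module is optional: a plain `draw_idle()` after each key press is usually enough for interactive scrolling.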
Thanks have a good day everyone,
-Rick | closed | 2021-10-14T18:39:23Z | 2021-12-15T23:15:27Z | https://github.com/matplotlib/mplfinance/issues/454 | [
"question"
] | rkb97 | 1 |
microsoft/unilm | nlp | 1,420 | The tanh activation function vqkd | **Describe**
Model I am using (BEiTv2):
I have noticed that both `encode_task_layer` and `decode_task_layer` adopt `Tanh()` as the activation function. Why not use another activation function like `GELU()` or `ReLU()`? Is there any rationale behind this?
Besides, is there any thought behind using standard normalization rather than ImageNet normalization for the input images during vqkd training?
cc @pengzhiliang
| open | 2024-01-04T08:28:40Z | 2024-01-05T10:27:44Z | https://github.com/microsoft/unilm/issues/1420 | [] | vateye | 1 |
python-gino/gino | asyncio | 691 | Does this work with postgis version 3? | Does this work with postgis version 3?
`db.gino.create_all()` created the tables but skipped the column `geom_point = db.Column(Geometry('POINT'), nullable=True)`.
I tried again, after dropping and re-creating the database.
Here is the stacktrace:
```
[2020-06-02 01:51:26 +0530] [111467] [INFO] Goin' Fast @ http://0.0.0.0:4500
Executing <Task pending name='Task-1' coro=<init_db() running at app.py:16> wait_for=<Future pending cb=[<TaskWakeupMethWrapper object at 0x7f374d006160>()] created at /usr/lib/python3.8/asyncio/tasks.py:466> cb=[run_until_complete.<locals>.<lambda>()] created at /home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/server.py:648> took 0.272 seconds
[2020-06-02 01:51:27 +0530] [111471] [ERROR] Experienced exception while trying to serve
Traceback (most recent call last):
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/app.py", line 1170, in run
serve(**server_settings)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/server.py", line 832, in serve
trigger_events(before_start, loop)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/server.py", line 648, in trigger_events
loop.run_until_complete(result)
File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
File "app.py", line 17, in init_db
await db.gino.create_all()
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 347, in create_all
await self.create(bind=bind, tables=tables, checkfirst=checkfirst)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 334, in create
await getattr(bind, "_run_visitor")(
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/engine.py", line 859, in _run_visitor
await getattr(conn, "_run_visitor")(*args, **kwargs)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/engine.py", line 530, in _run_visitor
await visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 30, in traverse_single
return await meth(obj, **kw)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 91, in visit_metadata
await self.traverse_single(
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 30, in traverse_single
return await meth(obj, **kw)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 119, in visit_table
await _Async(table.dispatch.before_create)(
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 416, in call
await _call_portable_instancemethod(fn, args, kw)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 402, in _call_portable_instancemethod
m = getattr(fn.target, fn.name + "_async", None)
AttributeError: 'function' object has no attribute 'target'
Traceback (most recent call last):
File "app.py", line 27, in <module>
app.run(host='0.0.0.0', port=4500, debug=True)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/app.py", line 1170, in run
serve(**server_settings)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/server.py", line 832, in serve
trigger_events(before_start, loop)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/sanic/server.py", line 648, in trigger_events
loop.run_until_complete(result)
File "uvloop/loop.pyx", line 1456, in uvloop.loop.Loop.run_until_complete
File "app.py", line 17, in init_db
await db.gino.create_all()
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 347, in create_all
await self.create(bind=bind, tables=tables, checkfirst=checkfirst)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 334, in create
await getattr(bind, "_run_visitor")(
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/engine.py", line 859, in _run_visitor
await getattr(conn, "_run_visitor")(*args, **kwargs)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/engine.py", line 530, in _run_visitor
await visitorcallable(self.dialect, self, **kwargs).traverse_single(element)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 30, in traverse_single
return await meth(obj, **kw)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 91, in visit_metadata
await self.traverse_single(
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 30, in traverse_single
return await meth(obj, **kw)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 119, in visit_table
await _Async(table.dispatch.before_create)(
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 416, in call
await _call_portable_instancemethod(fn, args, kw)
File "/home/disciple/Documents/Code/All/Sanic/venv/lib/python3.8/site-packages/gino/schema.py", line 402, in _call_portable_instancemethod
m = getattr(fn.target, fn.name + "_async", None)
AttributeError: 'function' object has no attribute 'target'
sys:1: RuntimeWarning: coroutine 'Loop.create_server' was never awaited
```
_Originally posted by @diptangsu in https://github.com/python-gino/gino/issues/627#issuecomment-637080601_ | closed | 2020-06-02T16:57:54Z | 2020-06-28T12:00:50Z | https://github.com/python-gino/gino/issues/691 | [
"bug"
] | nikhilpatil02 | 9 |
sinaptik-ai/pandas-ai | pandas | 971 | BigQuery Connector is not found | ### System Info
pandas-ai version 2, Codespaces, Windows, Chrome browser, Python 3.10
### 🐛 Describe the bug
pandas-ai version 2 - you all need to add the following lines of code to the __init__.py in the connectors folder:
```python
"""
Connectors are used to connect to databases, external APIs, and other data sources.
The connectors package contains all the connectors that are used by the application.
"""
from .airtable import AirtableConnector
from .base import BaseConnector
from .pandas import PandasConnector
from .polars import PolarsConnector
from .sql import MySQLConnector, PostgreSQLConnector, SQLConnector, SqliteConnector
from .yahoo_finance import YahooFinanceConnector
from pandasai.ee.connectors.google_big_query import GoogleBigQueryConnector

__all__ = [
    "BaseConnector",
    "SQLConnector",
    "MySQLConnector",
    "PostgreSQLConnector",
    "YahooFinanceConnector",
    "AirtableConnector",
    "SqliteConnector",
    "PandasConnector",
    "PolarsConnector",
    "GoogleBigQueryConnector",
]
```
 | closed | 2024-02-29T21:29:02Z | 2024-06-08T16:04:07Z | https://github.com/sinaptik-ai/pandas-ai/issues/971 | [] | marleyrosario | 0
mljar/mljar-supervised | scikit-learn | 265 | Setting a random_state has no effect since shuffle is False. This will raise an error in 0.24. You should leave random_state to its default (None), or set shuffle=True. | The warning message:
```
Setting a random_state has no effect since shuffle is False.
This will raise an error in 0.24.
You should leave random_state to its default (None), or set shuffle=True.
``` | closed | 2020-12-08T08:20:34Z | 2022-02-24T16:17:17Z | https://github.com/mljar/mljar-supervised/issues/265 | [
"bug"
] | pplonski | 5 |
tensorflow/tensor2tensor | deep-learning | 1,513 | Access intermediate operations In eager mode | ### Description
I am running the translate en-de task in eager mode. I have a trained model. After restoring from a checkpoint, I hope to access the values of individual tensors, for example: **layer_0/self_attention/multihead_attention/combine_heads/combine_last_two_dimensions/Reshape:0**
With the Transformer model function, and using model._eager_var_store._store._vars, I get access to the kernels and the biases (the parameters of the model).
What is the best way for me to persist the individual operations in the Transformer layers, so that I can access them similarly to how I would when using a session?
I have written a simple test code below to save variables to the EagerVariableStore.
**Not sure if I'm missing something super straight forward here**
### Environment information
Mac OS 10.14
$ pip freeze | grep tensor
mesh-tensorflow==0.0.5
tensor2tensor==1.12.0
tensorboard==1.13.0
tensorflow==1.13.1
tensorflow-estimator==1.13.0
tensorflow-metadata==0.12.1
tensorflow-probability==0.6.0
$ python -V
Python 3.6.8 :: Anaconda, Inc.
### Steps to reproduce
```
import tensorflow as tf
import tensorflow.contrib.eager as tfe
tfe.enable_eager_execution()
tf.enable_eager_execution()
assert tf.executing_eagerly() is True
from tensorflow.python.eager import context
from tensorflow.python.ops import variable_scope
with context.eager_mode():
container = variable_scope.EagerVariableStore()
with container.as_default():
initial_value = tf.random_normal([2,3], stddev=0.2)
w = tfe.Variable(initial_value, name='weights')
print(container.variables())
```
**Current Output:** []
**Expected output:** The variable w with name='weights'.
| open | 2019-03-22T05:48:58Z | 2019-03-22T05:55:10Z | https://github.com/tensorflow/tensor2tensor/issues/1513 | [] | ak-7 | 0 |
jmcnamara/XlsxWriter | pandas | 884 | feature request: Meaningful and unique return codes | ### Feature Request
At present the functions (I'm mainly focusing on the write_* methods) return integer values, but there is overlap that does not allow the caller to properly identify the error when the generic write function is used.
This is not evident from the docs, since there is no overlap there, but looking at the source code I found that, for instance, write_formula returns -2 for "Formula can't be None or empty", while write_string uses the same value for "String longer than 32767 characters". write_url and write_rich_string have many codes in common.
It may be better to switch to a list of common Enum values to be assigned:
```
from enum import Enum, auto
class ResultCode(Enum):
SUCCESS = 0
NOT_SUPPORTED_IN_CONSTANT_MEMORY_MODE = auto()
ROW_OR_COLUMN_OUTSIDE_WORKSHEET_BOUNDS = auto()
[...]
UNKNOWN_ERROR = auto()
```
then use it like all enums:
```
if self._check_dimensions(row, col):
return ResultCode.ROW_OR_COLUMN_OUTSIDE_WORKSHEET_BOUNDS
```
This way the values will always be unique.
Alternatively (but this is a much less readable solution) the values can be kept, but at least the same value shall not be reused for different meanings | closed | 2022-06-30T12:59:38Z | 2022-07-13T10:38:01Z | https://github.com/jmcnamara/XlsxWriter/issues/884 | [
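A quick self-contained check that the `Enum`/`auto()` approach really does yield unique, comparable codes. The member names and the `write_formula_stub` helper below are illustrative, not XlsxWriter's actual error set:

```python
from enum import Enum, auto, unique

@unique                      # raises at class creation if two members share a value
class ResultCode(Enum):
    SUCCESS = 0
    NOT_SUPPORTED_IN_CONSTANT_MEMORY_MODE = auto()
    ROW_OR_COLUMN_OUTSIDE_WORKSHEET_BOUNDS = auto()
    STRING_LONGER_THAN_32767_CHARACTERS = auto()
    FORMULA_EMPTY_OR_NONE = auto()
    UNKNOWN_ERROR = auto()

def write_formula_stub(formula):
    # illustrative: a write_* method returning an enum instead of a bare int
    if not formula:
        return ResultCode.FORMULA_EMPTY_OR_NONE
    return ResultCode.SUCCESS

codes = [m.value for m in ResultCode]
assert len(codes) == len(set(codes))   # every member has a distinct value
print(write_formula_stub(""))          # ResultCode.FORMULA_EMPTY_OR_NONE
```

Callers can then compare by identity (`is ResultCode.SUCCESS`) instead of remembering per-method integer meanings.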
"feature request",
"medium term"
] | fra87 | 5 |
mljar/mljar-supervised | scikit-learn | 149 | Add Time Controller | I don't like the fact that the `AutoML` class is huge right now. It would be nice to move the time-controlling code into a separate file: `mixins/TimeController.py` | closed | 2020-08-26T15:10:30Z | 2020-08-28T14:32:51Z | https://github.com/mljar/mljar-supervised/issues/149 | [
"refactor"
] | pplonski | 0 |
deepinsight/insightface | pytorch | 1978 | problem with face parsing code!!! | Can you explain how to run test.py?
The pretrained weights do not load into the model. | open | 2022-04-18T11:49:38Z | 2022-04-18T12:03:08Z | https://github.com/deepinsight/insightface/issues/1978 | [] | Erfun76 | 1
pallets-eco/flask-sqlalchemy | sqlalchemy | 845 | inserting into postgres array string array but gettin char array? | ### Expected Behavior
I'm trying to insert an array of phones into my User table.
```python
usuario = Usuario(pk_email = form.email.data, nome = form.nome.data, senha = form.senha.data, telefone=form.telefone.data)
try:
db.session.add(usuario)
db.session.commit()
```
And inside `form.telefone.data` I have something like `'{(21)11111-1111, (21)22222-2222, (21)33333-3333}'`
### Actual Behavior
2020-06-17 22:48:39,667 INFO sqlalchemy.engine.base.Engine SELECT usuario.pk_email AS usuario_pk_email, usuario.senha AS usuario_senha, usuario.nome AS usuario_nome, usuario.telefone AS usuario_telefone, usuario.papel AS usuario_papel
FROM usuario
WHERE usuario.pk_email = %(pk_email_1)s
LIMIT %(param_1)s
2020-06-17 22:48:39,668 INFO sqlalchemy.engine.base.Engine {'pk_email_1': 'teste20@teste.com', 'param_1': 1}
2020-06-17 22:48:39,702 INFO sqlalchemy.engine.base.Engine INSERT INTO usuario (pk_email, senha, nome, telefone, papel) VALUES (%(pk_email)s, %(senha)s, %(nome)s, %(telefone)s, %(papel)s)
2020-06-17 22:48:39,702 INFO sqlalchemy.engine.base.Engine {'pk_email': 'teste20@teste.com', 'senha': 'Corote10', 'nome': 'Teste20', 'telefone': ['{', '"', '(', '2', '1', ')', '1', '1', '1', '1', '1', '-', '1', '1', '1', '1', '"', ',', ' ', '"', '(', '2', '1', ')', '2', '2', '2', '2', '2', '-', '2', '2', '2', '2', '"', ',', ' ', '"', '(', '2', '1', ')', '3', '3', '3', '3', '3', '-', '3', '3', '3', '3', '}', '"', '}'], 'papel': None}
2020-06-17 22:48:39,720 INFO sqlalchemy.engine.base.Engine COMMIT
a
127.0.0.1 - - [17/Jun/2020 22:48:39] "POST /register HTTP/1.1" 302 -
127.0.0.1 - - [17/Jun/2020 22:48:39] "GET / HTTP/1.1" 200 -
127.0.0.1 - - [17/Jun/2020 22:48:39] "GET /static/css/style.css HTTP/1.1" 200 -
127.0.0.1 - - [17/Jun/2020 22:48:40] "GET /static/img/logo-rodape.png HTTP/1.1" 200 -
127.0.0.1 - - [17/Jun/2020 23:14:49] "GET / HTTP/1.1" 200 -
I don't understand why it is turning a string array into a char array????
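One plausible cause, sketched in plain Python (assuming `form.telefone.data` arrives as a single `str`): an `ARRAY` column expects a Python list, and a string is itself an iterable of characters, so each character becomes one array element. Splitting the string into a list first would avoid that:

```python
telefone = "{(21)11111-1111, (21)22222-2222, (21)33333-3333}"

# what the INSERT saw when handed the raw string:
as_chars = list(telefone)  # ['{', '(', '2', '1', ')', '1', ...]

# what an ARRAY column actually wants: one list element per phone number
phones = [p.strip() for p in telefone.strip("{}").split(",")]
print(phones)  # ['(21)11111-1111', '(21)22222-2222', '(21)33333-3333']
```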
### Environment
* Python version: Python 3.8.3
* Flask-SQLAlchemy version: Flask-SQLAlchemy==2.4.3
* SQLAlchemy version: SQLAlchemy==1.3.17
Thank you in advance!
| closed | 2020-06-18T02:22:45Z | 2020-12-05T19:58:23Z | https://github.com/pallets-eco/flask-sqlalchemy/issues/845 | [] | marianadsalgueiro | 1 |
apachecn/ailearning | scikit-learn | 494 | Use print() function in both Python 2 and Python 3 | Legacy __print__ statements are syntax errors in Python 3 but __print()__ function works as expected in both Python 2 and Python 3. Flake8 finds 52 instances of __>> is invalid with print function__.
[flake8](http://flake8.pycqa.org) testing of https://github.com/apachecn/AiLearning on Python 3.7.1
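The mechanical fix flake8 is asking for replaces `print >> fh, ...` with `print(..., file=fh)`. For example, one of the flagged lines becomes:

```python
from __future__ import print_function  # makes print() a function on Python 2 too

import sys

item_count = 42

# Python 2 only (a syntax error on Python 3):
#     print >> sys.stderr, '总共流行item数量 = %d' % item_count
# Works on both Python 2 and Python 3:
print('总共流行item数量 = %d' % item_count, file=sys.stderr)
```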
$ __flake8 . --count --select=E9,F63,F72,F82 --show-source --statistics__
```
./src/py3.x/ml/7.AdaBoost/roc_test.py:9:30: F821 undefined name 'load_data_set'
data_mat, class_labels = load_data_set('../../../input/7.AdaBoost/horseColicTraining2.txt')
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:11:37: F821 undefined name 'ada_boost_train_ds'
weak_class_arr, agg_class_est = ada_boost_train_ds(data_mat, class_labels, 40)
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:16:5: F821 undefined name 'plot_roc'
plot_roc(agg_class_est.T, class_labels)
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:17:37: F821 undefined name 'load_data_set'
data_arr_test, label_arr_test = load_data_set("../../../input/7.AdaBoost/horseColicTest2.txt")
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:18:9: F821 undefined name 'np'
m = np.shape(data_arr_test)[0]
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:19:20: F821 undefined name 'ada_classify'
predicting10 = ada_classify(data_arr_test, weak_class_arr)
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:20:15: F821 undefined name 'np'
err_arr = np.mat(np.ones((m, 1)))
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:20:22: F821 undefined name 'np'
err_arr = np.mat(np.ones((m, 1)))
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:23:35: F821 undefined name 'np'
err_arr[predicting10 != np.mat(label_arr_test).T].sum(),
^
./src/py3.x/ml/7.AdaBoost/roc_test.py:24:35: F821 undefined name 'np'
err_arr[predicting10 != np.mat(label_arr_test).T].sum() / m
^
./src/py3.x/16.RecommenderSystems/test_基于用户.py:74:28: F821 undefined name 'u'
for v, wuv in sorted(W[u].items, key=itemgetter(1), reverse=True)[0:K]:
^
./src/py3.x/16.RecommenderSystems/test_基于用户.py:74:73: F821 undefined name 'K'
for v, wuv in sorted(W[u].items, key=itemgetter(1), reverse=True)[0:K]:
^
./src/py3.x/16.RecommenderSystems/RS-sklearn-rating.py:47:5: F633 use of >> is invalid with print function
print >> sys.stderr, '开始统计流行item的数量...'
^
./src/py3.x/16.RecommenderSystems/RS-sklearn-rating.py:57:5: F633 use of >> is invalid with print function
print >> sys.stderr, '总共流行item数量 = %d' % item_count
^
./src/py3.x/16.RecommenderSystems/RS-sklearn-rating.py:125:5: F633 use of >> is invalid with print function
print >> sys.stderr, '%s: precision=%.4f \t recall=%.4f \t coverage=%.4f \t popularity=%.4f' % (
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:36:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Similar movie number = %d' % self.n_sim_movie
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:37:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Recommended movie number = %d' % self.n_rec_movie
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:52:17: F633 use of >> is invalid with print function
print >> sys.stderr, 'loading %s(%s)' % (filename, i)
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:54:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'load %s success' % filename
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:84:9: F633 use of >> is invalid with print function
print >> sys.stderr, '分离训练集和测试集成功'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:85:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'train set = %s' % trainset_len
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:86:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'test set = %s' % testset_len
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:91:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'counting movies number and popularity...'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:101:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'count movies number and popularity success'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:105:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'total movie number = %d' % self.movie_count
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:109:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'building co-rated users matrix...'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:119:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'build co-rated users matrix success'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:122:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'calculating movie similarity matrix...'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:133:21: F633 use of >> is invalid with print function
print >> sys.stderr, 'calculating movie similarity factor(%d)' % simfactor_count
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:135:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'calculate movie similarity matrix(similarity factor) success'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:136:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Total similarity factor number = %d' % simfactor_count
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:170:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Evaluation start...'
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:188:17: F633 use of >> is invalid with print function
print >> sys.stderr, 'recommended for %d users' % i
^
./src/py3.x/16.RecommenderSystems/RS-itemcf.py:207:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'precision=%.4f \t recall=%.4f \t coverage=%.4f \t popularity=%.4f' % (
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:36:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'similar user number = %d' % self.n_sim_user
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:37:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'recommended movie number = %d' % self.n_rec_movie
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:52:17: F633 use of >> is invalid with print function
print >> sys.stderr, 'loading %s(%s)' % (filename, i)
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:54:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'load %s success' % filename
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:84:9: F633 use of >> is invalid with print function
print >> sys.stderr, '分离训练集和测试集成功'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:85:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'train set = %s' % trainset_len
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:86:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'test set = %s' % testset_len
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:93:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'building movie-users inverse table...'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:109:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'build movie-users inverse table success'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:113:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'total movie number = %d' % self.movie_count
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:117:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'building user co-rated movies matrix...'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:127:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'build user co-rated movies matrix success'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:130:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'calculating user similarity matrix...'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:141:21: F633 use of >> is invalid with print function
print >> sys.stderr, 'calculating user similarity factor(%d)' % simfactor_count
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:143:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'calculate user similarity matrix(similarity factor) success'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:144:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Total similarity factor number = %d' % simfactor_count
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:185:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Evaluation start...'
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:203:17: F633 use of >> is invalid with print function
print >> sys.stderr, 'recommended for %d users' % i
^
./src/py3.x/16.RecommenderSystems/RS-usercf.py:222:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'precision=%.4f \t recall=%.4f \t coverage=%.4f \t popularity=%.4f' % (
^
./src/py3.x/16.RecommenderSystems/test_evaluation_model.py:22:16: F821 undefined name 'GetRecommendation'
rank = GetRecommendation(user, N)
^
./src/py3.x/16.RecommenderSystems/test_evaluation_model.py:36:16: F821 undefined name 'GetRecommendation'
rank = GetRecommendation(user, N)
^
./src/py3.x/16.RecommenderSystems/test_evaluation_model.py:51:16: F821 undefined name 'GetRecommendation'
rank = GetRecommendation(user, N)
^
./src/py3.x/16.RecommenderSystems/test_evaluation_model.py:68:16: F821 undefined name 'GetRecommendation'
rank = GetRecommendation(user, N)
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:46:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Similar item number = %d' % self.n_sim_item
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:47:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Recommended item number = %d' % self.n_rec_item
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:63:9: F633 use of >> is invalid with print function
print >> sys.stderr, '分离训练集和测试集成功'
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:64:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'len(train) = %s' % np.shape(self.train_data)[0]
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:65:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'len(test) = %s' % np.shape(self.test_data)[0]
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:84:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'item_mat_similarity=', np.shape(
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:87:9: F633 use of >> is invalid with print function
print >> sys.stderr, '开始统计流行item的数量...'
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:98:9: F633 use of >> is invalid with print function
print >> sys.stderr, '总共流行item数量 = %d' % self.item_count
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:139:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'Evaluation start...'
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:155:17: F633 use of >> is invalid with print function
print >> sys.stderr, 'recommended for %d users' % u_index
^
./src/py3.x/16.RecommenderSystems/sklearn-RS-demo-cf-item-test.py:183:9: F633 use of >> is invalid with print function
print >> sys.stderr, 'precision=%.4f \t recall=%.4f \t coverage=%.4f \t popularity=%.4f' % (
^
./src/py3.x/16.RecommenderSystems/test_lfm.py:12:16: F821 undefined name 'items_pool'
item = items_pool[random.randint(0, len(items_pool) - 1)]
^
./src/py3.x/16.RecommenderSystems/test_lfm.py:12:49: F821 undefined name 'items_pool'
item = items_pool[random.randint(0, len(items_pool) - 1)]
^
./src/py3.x/16.RecommenderSystems/test_lfm.py:23:14: F821 undefined name 'InitModel'
[P, Q] = InitModel(user_items, F)
^
./src/py3.x/16.RecommenderSystems/test_lfm.py:26:23: F821 undefined name 'RandSelectNegativeSamples'
samples = RandSelectNegativeSamples(items)
^
./src/py3.x/16.RecommenderSystems/test_lfm.py:28:29: F821 undefined name 'Predict'
eui = rui - Predict(user, item)
^
./src/py3.x/16.RecommenderSystems/test_基于物品.py:10:18: F821 undefined name 'users'
for i in users:
^
./src/py3.x/16.RecommenderSystems/test_基于物品.py:12:22: F821 undefined name 'users'
for j in users:
^
./src/py3.x/16.RecommenderSystems/test_基于物品.py:21:18: F821 undefined name 'v'
W[u][v] = cij / math.sqrt(N[i] * N[j])
^
./src/py3.x/16.RecommenderSystems/test_基于物品.py:30:18: F821 undefined name 'users'
for i in users:
^
./src/py3.x/16.RecommenderSystems/test_基于物品.py:32:22: F821 undefined name 'users'
for j in users:
^
./src/py3.x/16.RecommenderSystems/test_基于物品.py:41:18: F821 undefined name 'v'
W[u][v] = cij / math.sqrt(N[i] * N[j])
^
./src/py3.x/dl/perceptron.py:192:22: F821 undefined name 'train_and_perceptron'
and_perceptron = train_and_perceptron()
^
52 F633 use of >> is invalid with print function
28 F821 undefined name 'GetRecommendation'
80
``` | closed | 2019-04-18T08:38:45Z | 2019-04-28T02:50:01Z | https://github.com/apachecn/ailearning/issues/494 | [] | cclauss | 0 |
lexiforest/curl_cffi | web-scraping | 262 | libcurl-impersonate download should check for libcurl release version | Currently, libcurl-impersonate is downloaded into ~/.local on Linux, and the download is skipped if libcurl-impersonate.so already exists. This may be an issue if libcurl-impersonate is updated but the build runs on a system that already has an old library. | closed | 2024-03-03T12:05:15Z | 2024-03-06T13:45:22Z | https://github.com/lexiforest/curl_cffi/issues/262 | [
"bug"
] | bjia56 | 2 |
microsoft/nni | deep-learning | 5,558 | Need shape format support for predefined one shot search space | Describe the issue:
When I add the profiler as the tutorial instructs:
```python
dummy_input = torch.randn(1, 3, 32, 32)
profiler = NumParamsProfiler(model_space, dummy_input)
penalty = ExpectationProfilerPenalty(profiler, 500e3)
strategy = DartsStrategy(gradient_clip_val=5.0, penalty=penalty)
```
This error appeared:
```bash
[2023-05-12 22:02:18] WARNING: Shape information is not explicitly propagated when executing aten.avg_pool2d.default, and and a recent module that needs shape information has no shape inference formula. Module calling stack:
- '' (type: nni.nas.hub.pytorch.nasnet.DARTS, NO shape formula)
- 'stages.0' (type: nni.nas.hub.pytorch.nasnet.NDSStage, NO shape formula)
- 'stages.0.blocks.0' (type: nni.nas.nn.pytorch.cell.Cell, NO shape formula)
- 'stages.0.blocks.0.ops.0.0' (type: nni.nas.nn.pytorch.choice.LayerChoice, HAS shape formula)
- 'stages.0.blocks.0.ops.0.0.avg_pool_3x3' (type: torch.nn.modules.pooling.AvgPool2d, NO shape formula)
Traceback (most recent call last):
File "/home/dzhang/Documents/ml-experimental/TinyNAS/nas_tutorial/darts.py", line 235, in <module>
profiler = NumParamsProfiler(model_space, dummy_input)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/flops.py", line 197, in __init__
self.profiler = FlopsParamsProfiler(model_space, args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/flops.py", line 171, in __init__
shapes = submodule_input_output_shapes(model_space, *args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 466, in submodule_input_output_shapes
model(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1212, in _call_impl
result = forward_call(*input, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/hub/pytorch/nasnet.py", line 613, in forward
s0, s1 = stage([s0, s1])
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1212, in _call_impl
result = forward_call(*input, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/nn/pytorch/repeat.py", line 158, in forward
x = block(x)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1212, in _call_impl
result = forward_call(*input, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/nn/pytorch/cell.py", line 385, in forward
current_state.append(op(inp(states)))
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1215, in _call_impl
hook_result = hook(self, input, result)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 514, in module_shape_inference_hook
result = _module_shape_inference_impl(module, output, *input, is_leaf=is_leaf)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 628, in _module_shape_inference_impl
output_shape = formula(module, *input_args, **formula_kwargs, **input_kwargs)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape_formula.py", line 230, in layer_choice_formula
expressions[val] = extract_shape_info(shape_inference(module[val], *args, is_leaf=is_leaf, **kwargs))
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 498, in shape_inference
outputs = module(*args, **kwargs)
File "/usr/local/lib/python3.10/dist-packages/torch/nn/modules/module.py", line 1215, in _call_impl
hook_result = hook(self, input, result)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 514, in module_shape_inference_hook
result = _module_shape_inference_impl(module, output, *input, is_leaf=is_leaf)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 617, in _module_shape_inference_impl
tree_map(_ensure_shape, outputs)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py", line 192, in tree_map
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/usr/local/lib/python3.10/dist-packages/torch/utils/_pytree.py", line 192, in <listcomp>
return tree_unflatten([fn(i) for i in flat_args], spec)
File "/usr/local/lib/python3.10/dist-packages/nni/nas/profiler/pytorch/utils/shape.py", line 609, in _ensure_shape
raise RuntimeError(
RuntimeError: Shape inference failed because no shape inference formula is found for AvgPool2d(kernel_size=3, stride=1, padding=1) of type AvgPool2d. Meanwhile the nested modules and functions inside failed to propagate the shape information. Please provide a `_shape_forward` member
function or register a formula using `register_shape_inference_formula`.
```
According to the previous issue in https://github.com/microsoft/nni/issues/5538, I think AvgPool2d does not have a shape inference formula. Since this is not a customized search space, can you add the support on your side?
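For reference, AvgPool2d's spatial output shape follows the standard pooling arithmetic, so the formula itself is easy to state; a standalone sketch of just the math (this is not NNI's actual `register_shape_inference_formula` API, whose signature I have not checked):

```python
def avg_pool2d_out_shape(h, w, kernel_size=3, stride=1, padding=1):
    """Per spatial dim: floor((dim + 2*padding - kernel_size) / stride) + 1."""
    def out(dim):
        return (dim + 2 * padding - kernel_size) // stride + 1
    return out(h), out(w)

# kernel_size=3, stride=1, padding=1 (the module in the traceback above)
# preserves the spatial size:
assert avg_pool2d_out_shape(32, 32) == (32, 32)
```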
Environment:
NNI version: latest(Build from source and use Dockerfile in master branch)
Training service (local|remote|pai|aml|etc): local
Client OS: Ubuntu
Python version: 3.8
PyTorch/TensorFlow version: 1.10.2
Is conda/virtualenv/venv used?: No
Is running in Docker?: Yes
Configuration:
Experiment config (remember to remove secrets!): Same as Latest Version in Darts example
Search space: Darts | open | 2023-05-12T22:16:56Z | 2023-05-25T11:28:20Z | https://github.com/microsoft/nni/issues/5558 | [] | dzk9528 | 9 |
cvat-ai/cvat | computer-vision | 8,797 | Started on Invalid date loop | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
In Export Dataset, YOLO 1.1 format.
While in progress, it displays "Started on Invalid date", then "Started on Dec 9th 24, 15:54", then goes back to "Started on Invalid date". It has been doing this for maybe 2 hours or more. Does anyone know what is happening?
Only that dataset has this problem. The others are fine.
### Expected Behavior

Other datasets work fine.
### Possible Solution
_No response_
### Context
It cannot export that dataset.
### Environment
```Markdown
Server version: 2.22.1
Core version: 15.2.1
Canvas version: 2.20.10
UI version: 1.66.4
docker ps:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
ee4234b10f9d gcr.io/iguazio/alpine:3.17 "/bin/sh -c '/bin/sl…" 2 hours ago Up 2 hours nuclio-local-storage-reader
0cb9f7213c2f cvat/ui:dev "/docker-entrypoint.…" 2 hours ago Up 2 hours 80/tcp cvat_ui
9580ff90fc5d cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_worker_annotation
f66c5a33f125 cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_utils
dbe675ed52e7 cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_server
f7de6024eb18 cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_worker_analytics_reports
e3432b22c89d cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_worker_import
1d016f1903d5 cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_worker_quality_reports
b16576db65fb cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_worker_webhooks
ec858c3707b8 cvat/server:dev "./backend_entrypoin…" 2 hours ago Up 2 hours 8080/tcp cvat_worker_export
a558f460f714 timberio/vector:0.26.0-alpine "/usr/local/bin/vect…" 2 hours ago Up 2 hours cvat_vector
85d594b0958e grafana/grafana-oss:10.1.2 "sh -euc 'mkdir -p /…" 2 hours ago Up 2 hours 3000/tcp cvat_grafana
f07060725981 apache/kvrocks:2.7.0 "kvrocks -c /var/lib…" 2 hours ago Up 2 hours (healthy) 6666/tcp cvat_redis_ondisk
9f1ba647f270 quay.io/nuclio/dashboard:1.13.0-amd64 "/docker-entrypoint.…" 2 hours ago Up 2 hours (healthy) 80/tcp, 0.0.0.0:8070->8070/tcp, :::8070->8070/tcp nuclio
c8ab1a9e21bd postgres:15-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours 5432/tcp cvat_db
654280ee7bc7 redis:7.2.3-alpine "docker-entrypoint.s…" 2 hours ago Up 2 hours 6379/tcp cvat_redis_inmem
60b1635fdaff traefik:v2.10 "/entrypoint.sh trae…" 2 hours ago Up 2 hours 0.0.0.0:8080->8080/tcp, :::8080->8080/tcp, 80/tcp, 0.0.0.0:8090->8090/tcp, :::8090->8090/tcp traefik
3dd50899eb26 openpolicyagent/opa:0.63.0 "/opa run --server -…" 2 hours ago Up 2 hours cvat_opa
feab7f11251d clickhouse/clickhouse-server:23.11-alpine "/entrypoint.sh" 2 hours ago Up 2 hours 8123/tcp, 9000/tcp, 9009/tcp cvat_clickhouse
f84257b00ef2 custom-model-yolov8:latest "processor" 12 days ago Up 2 hours (healthy) 0.0.0.0:49154->8080/tcp, :::49154->8080/tcp nuclio-nuclio-Ultralytics-motorcycle_exhaust_pipe_yolov8_Ver1.3.0_V8n
cce3e57192c3 e74167113c4c "processor" 12 days ago Up 2 hours (healthy) 0.0.0.0:49153->8080/tcp, :::49153->8080/tcp nuclio-nuclio-Ultralytics-YOLOV8m
89cb34936742 cvat.onnx.wongkinyiu.yolov7:latest "processor" 12 days ago Up 2 hours (healthy) 0.0.0.0:49155->8080/tcp, :::49155->8080/tcp nuclio-nuclio-onnx-wongkinyiu-yolov7
```
| open | 2024-12-09T08:27:10Z | 2024-12-11T01:22:22Z | https://github.com/cvat-ai/cvat/issues/8797 | [
"bug"
] | Opeple0627 | 2 |
docarray/docarray | pydantic | 1,452 | add len on DocIndex | # Context
len(db) should return db.num_docs when db is a DocIndex | closed | 2023-04-25T07:44:09Z | 2023-04-25T18:17:51Z | https://github.com/docarray/docarray/issues/1452 | [
"good-first-issue"
] | samsja | 0 |
vastsa/FileCodeBox | fastapi | 13 | The docker path seems to be wrong | Image upload failed on 151 and 152; 153 is fine now. | closed | 2022-12-13T15:23:24Z | 2022-12-14T00:15:20Z | https://github.com/vastsa/FileCodeBox/issues/13 | [] | MerudaArashi | 2 |
yt-dlp/yt-dlp | python | 12,492 | Bilibili: Formats missing after passing cookies | ### Checklist
- [x] I'm reporting that yt-dlp is broken on a **supported** site
- [x] I've verified that I have **updated yt-dlp to nightly or master** ([update instructions](https://github.com/yt-dlp/yt-dlp#update-channels))
- [x] I've checked that all provided URLs are playable in a browser with the same IP and same login details
- [x] I've checked that all URLs and arguments with special characters are [properly quoted or escaped](https://github.com/yt-dlp/yt-dlp/wiki/FAQ#video-url-contains-an-ampersand--and-im-getting-some-strange-output-1-2839-or-v-is-not-recognized-as-an-internal-or-external-command)
- [x] I've searched [known issues](https://github.com/yt-dlp/yt-dlp/issues/3766), [the FAQ](https://github.com/yt-dlp/yt-dlp/wiki/FAQ), and the [bugtracker](https://github.com/yt-dlp/yt-dlp/issues?q=is%3Aissue%20-label%3Aspam%20%20) for similar issues **including closed ones**. DO NOT post duplicates
- [ ] I've read about [sharing account credentials](https://github.com/yt-dlp/yt-dlp/blob/master/CONTRIBUTING.md#are-you-willing-to-share-account-details-if-needed) and I'm willing to share it if required
### Region
China
### Provide a description that is worded well enough to be understood
1. In Firefox 135.0.1, I logged in with a “大会员”(VIP) account.
2. I used the cookies.txt extension to extract cookies of bilibili.com.
3. I tried downloading a video with the cookies passed to yt-dlp, but was told:
`[BiliBili] Format(s) 4K 超清, 1080P 高码率 are missing; you have to become a premium member to download them. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies`
Well, these formats are indeed for the VIPs only, but I think I am one of them?
I used the same method to download videos before and it never failed. Something must have changed on bilibili's side, but I have no idea what.
I used `--write-pages` to extract some more info as attached.
- [`1YK4y1C7CU_https_-_www.bilibili.com_video_BV1YK4y1C7CU_.dump.txt`](https://github.com/user-attachments/files/19012518/1YK4y1C7CU_https_-_www.bilibili.com_video_BV1YK4y1C7CU_.dump.txt)
- [`BV1YK4y1C7CU_https_-_api.bilibili.com_x_player_pagelistbvid=BV1YK4y1C7CU_jsonp=jsonp.dump.txt`](https://github.com/user-attachments/files/19012516/BV1YK4y1C7CU_https_-_api.bilibili.com_x_player_pagelistbvid.BV1YK4y1C7CU_jsonp.jsonp.dump.txt)
- [`882566744_https_-_api.bilibili.com_x_player_wbi_v2aid=882566744_cid=172274931.dump.txt`](https://github.com/user-attachments/files/19012517/882566744_https_-_api.bilibili.com_x_player_wbi_v2aid.882566744_cid.172274931.dump.txt)
### Provide verbose output that clearly demonstrates the problem
- [x] Run **your** yt-dlp command with **-vU** flag added (`yt-dlp -vU <your command line>`)
- [ ] If using API, add `'verbose': True` to `YoutubeDL` params instead
- [x] Copy the WHOLE output (starting with `[debug] Command-line config`) and insert it below
### Complete Verbose Output
```shell
[debug] Command-line config: ['--verbose', '--write-pages', '--cookies', 'cookies.txt', 'https://www.bilibili.com/video/BV1YK4y1C7CU/']
[debug] Encodings: locale cp936, fs utf-8, pref cp936, out utf-8, error utf-8, screen utf-8
[debug] yt-dlp version nightly@2025.02.26.232946 from yt-dlp/yt-dlp-nightly-builds [3042afb5f] (win_exe)
[debug] Python 3.10.11 (CPython AMD64 64bit) - Windows-10-10.0.19041-SP0 (OpenSSL 1.1.1t 7 Feb 2023)
[debug] exe versions: ffmpeg 7.1-full_build-www.gyan.dev (setts), ffprobe 7.1-full_build-www.gyan.dev
[debug] Optional libraries: Cryptodome-3.21.0, brotli-1.1.0, certifi-2025.01.31, curl_cffi-0.5.10, mutagen-1.47.0, requests-2.32.3, sqlite3-3.40.1, urllib3-2.3.0, websockets-15.0
[debug] Proxy map: {}
[debug] Request Handlers: urllib, requests, websockets, curl_cffi
[debug] Plugin directories: none
[debug] Loaded 1843 extractors
[BiliBili] Extracting URL: https://www.bilibili.com/video/BV1YK4y1C7CU/
[BiliBili] 1YK4y1C7CU: Downloading webpage
[BiliBili] Saving request to 1YK4y1C7CU_https_-_www.bilibili.com_video_BV1YK4y1C7CU_.dump
[BiliBili] BV1YK4y1C7CU: Extracting videos in anthology
[BiliBili] Saving request to BV1YK4y1C7CU_https_-_api.bilibili.com_x_player_pagelistbvid=BV1YK4y1C7CU_jsonp=jsonp.dump
[BiliBili] Format(s) 4K 超清, 1080P 高码率 are missing; you have to become a premium member to download them. Use --cookies-from-browser or --cookies for the authentication. See https://github.com/yt-dlp/yt-dlp/wiki/FAQ#how-do-i-pass-cookies-to-yt-dlp for how to manually pass cookies
[BiliBili] 882566744: Extracting chapters
[BiliBili] Saving request to 882566744_https_-_api.bilibili.com_x_player_wbi_v2aid=882566744_cid=172274931.dump
[debug] Formats sorted by: hasvid, ie_pref, lang, quality, res, fps, hdr:12(7), vcodec, channels, acodec, size, br, asr, proto, vext, aext, hasaud, source, id
[debug] Default format spec: bestvideo*+bestaudio/best
[info] BV1YK4y1C7CU: Downloading 1 format(s): 100026+30280
[debug] Invoking http downloader on "https://cn-gddg-cm-01-02.bilivideo.com/upgcxcode/31/49/172274931/172274931_da2-1-100026.m4s?e=ig8euxZM2rNcNbdlhoNvNC8BqJIzNbfqXBvEqxTEto8BTrNvN0GvT90W5JZMkX_YN0MvXg8gNEV4NC8xNEV4N03eN0B5tZlqNxTEto8BTrNvNeZVuJ10Kj_g2UB02J0mN0B5tZlqNCNEto8BTrNvNC7MTX502C8f2jmMQJ6mqF2fka1mqx6gqj0eN0B599M=&uipk=5&nbs=1&deadline=1740669734&gen=playurlv2&os=bcache&oi=1865949154&trid=0000bc16f3c2e9414adc9d3d0f614a5ec472u&mid=701918224&platform=pc&og=hw&upsig=bf7ec2343e159c8dc31e139eb01062dc&uparams=e,uipk,nbs,deadline,gen,os,oi,trid,mid,platform,og&cdnid=2122&bvc=vod&nettype=0&orderid=0,3&buvid=BDABE54B-CDFF-9B1C-B13D-A0BF1F348EEA34522infoc&build=0&f=u_0_0&agrr=0&bw=120904&logo=80000000"
[debug] File locking is not supported. Proceeding without locking
[download] Destination: 【何同学】一只鸽子的自白 [BV1YK4y1C7CU].f100026.mp4
[download] 100% of 54.19MiB in 00:00:05 at 10.69MiB/s
[debug] Invoking http downloader on "https://cn-gddg-cm-01-02.bilivideo.com/upgcxcode/31/49/172274931/172274931_nb2-1-30280.m4s?e=ig8euxZM2rNcNbdlhoNvNC8BqJIzNbfqXBvEqxTEto8BTrNvN0GvT90W5JZMkX_YN0MvXg8gNEV4NC8xNEV4N03eN0B5tZlqNxTEto8BTrNvNeZVuJ10Kj_g2UB02J0mN0B5tZlqNCNEto8BTrNvNC7MTX502C8f2jmMQJ6mqF2fka1mqx6gqj0eN0B599M=&uipk=5&nbs=1&deadline=1740669734&gen=playurlv2&os=bcache&oi=1865949154&trid=0000bc16f3c2e9414adc9d3d0f614a5ec472u&mid=701918224&platform=pc&og=hw&upsig=dd696443647ba26bd11c553296d1484a&uparams=e,uipk,nbs,deadline,gen,os,oi,trid,mid,platform,og&cdnid=2122&bvc=vod&nettype=0&orderid=0,3&buvid=BDABE54B-CDFF-9B1C-B13D-A0BF1F348EEA34522infoc&build=0&f=u_0_0&agrr=0&bw=15620&logo=80000000"
[download] Destination: 【何同学】一只鸽子的自白 [BV1YK4y1C7CU].f30280.m4a
[download] 100% of 7.00MiB in 00:00:00 at 13.00MiB/s
[Merger] Merging formats into "【何同学】一只鸽子的自白 [BV1YK4y1C7CU].mp4"
[debug] ffmpeg command line: ffmpeg -y -loglevel repeat+info -i "file:【何同学】一只鸽子的自白 [BV1YK4y1C7CU].f100026.mp4" -i "file:【何同学】一只鸽子的自白 [BV1YK4y1C7CU].f30280.m4a" -c copy -map 0:v:0 -map 1:a:0 -movflags +faststart "file:【何同学】一只鸽子的自白 [BV1YK4y1C7CU].temp.mp4"
Deleting original file 【何同学】一只鸽子的自白 [BV1YK4y1C7CU].f30280.m4a (pass -k to keep)
Deleting original file 【何同学】一只鸽子的自白 [BV1YK4y1C7CU].f100026.mp4 (pass -k to keep)
``` | open | 2025-02-27T16:09:38Z | 2025-03-03T20:09:52Z | https://github.com/yt-dlp/yt-dlp/issues/12492 | [
"account-needed",
"site-bug",
"triage"
] | PumpkinJui | 11 |
Evil0ctal/Douyin_TikTok_Download_API | web-scraping | 383 | /api/douyin/web/fetch_user_post_videos 中参数max_cursor 无效 | ***发生错误的平台?***
抖音
***发生错误的端点?***
docker api, latest image
***Submitted input values?***
params = {
"sec_user_id": user_id,
"max_cursor": tried values across 0 ~ 1724029371000,
"count": tried values across 0 ~ 36,
}
***Did you try again?***
Yes; the error still persisted within 24 hours after it occurred.
***Have you read this project's README or API documentation?***
Yes, and I am quite sure the problem is caused by the program. Docs URL: https://api.douyin.wtf/docs#/Douyin-Web-API/fetch_user_post_videos_api_douyin_web_fetch_user_post_videos_get
***Problem description***
No matter how I change max_cursor, the service only pulls the video info from the first page (the videos before any scrolling); it cannot fetch any more video info.
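For context, the usual cursor-pagination pattern feeds each response's max_cursor back into the next request instead of choosing values by hand; a sketch with a stub standing in for the real endpoint (the field names aweme_list, has_more and max_cursor are assumptions based on typical Douyin payloads):

```python
def fetch_all_videos(fetch_page):
    """Paginate by passing each response's max_cursor to the next call."""
    cursor, videos = 0, []
    while True:
        page = fetch_page(cursor)
        videos.extend(page["aweme_list"])
        if not page["has_more"]:
            return videos
        cursor = page["max_cursor"]

# Stub standing in for GET /api/douyin/web/fetch_user_post_videos
pages = {
    0: {"aweme_list": ["v1", "v2"], "has_more": 1, "max_cursor": 1111},
    1111: {"aweme_list": ["v3"], "has_more": 0, "max_cursor": 2222},
}
assert fetch_all_videos(pages.get) == ["v1", "v2", "v3"]
```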
| closed | 2024-05-09T02:16:59Z | 2024-05-09T09:53:27Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/383 | [
"BUG"
] | Yuze-Wang | 0 |
CorentinJ/Real-Time-Voice-Cloning | pytorch | 907 | Performance improvement via architectural change | What architectural change can be done to improve performance either on training side or on output? | open | 2021-11-25T16:23:42Z | 2021-11-25T16:23:42Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/907 | [] | Fzr2k | 0 |
joouha/euporie | jupyter | 43 | Cannot go left! | If I shift the cells to the right, then I cannot shift back to the left :(
See the video: https://youtu.be/VG_nabBuiew
| closed | 2022-11-30T12:51:27Z | 2022-12-05T18:31:48Z | https://github.com/joouha/euporie/issues/43 | [] | ghost | 4 |
mwaskom/seaborn | data-science | 3,756 | Use Arrow PyCapsule Interface instead of Dataframe Interchange Protocol | This is something I've chatted with @MarcoGorelli offline about. At the time it was implemented in seaborn, the Dataframe Interchange Protocol was the best option for exchanging dataframe-like data. However, since that was implemented in seaborn, the [PyArrow Capsule Interface](https://arrow.apache.org/docs/format/CDataInterface/PyCapsuleInterface.html) has come along and solved many of the issues that the DataFrame Interchange Protocol left open.
Without knowing the current state of the interchange implementation of seaborn, switching to the PyArrow Capsule Interface should solve at least the following issues:
- It will add support for polars and other dataframe libraries (https://github.com/mwaskom/seaborn/issues/3277 and https://github.com/mwaskom/seaborn/issues/3188)
- It will use the Arrow type system, which supports aggregate types (https://github.com/mwaskom/seaborn/issues/3533)
- The wonkiness of pandas' type system won't be inherited by seaborn (potentially solving https://github.com/mwaskom/seaborn/issues/3519)
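As a quick illustration of why adoption is cheap on the consumer side, detection is a single dunder check; a minimal sketch (the `DummyFrame` class is invented here purely to show the protocol hook, and a real exporter would return an ArrowArrayStream PyCapsule):

```python
class DummyFrame:
    """Invented stand-in for a dataframe that advertises Arrow export."""

    def __arrow_c_stream__(self, requested_schema=None):
        # A real implementation returns a PyCapsule wrapping an
        # ArrowArrayStream; raising keeps this sketch self-contained.
        raise NotImplementedError("illustration only")

def supports_arrow_capsule(obj) -> bool:
    # Objects exposing the PyCapsule Interface carry this dunder.
    return hasattr(obj, "__arrow_c_stream__")

assert supports_arrow_capsule(DummyFrame())
assert not supports_arrow_capsule(object())
```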
The interface has been adopted by a good deal of projects already, some of which are being tracked in https://github.com/apache/arrow/issues/39195 | closed | 2024-08-29T12:25:17Z | 2025-01-28T01:14:13Z | https://github.com/mwaskom/seaborn/issues/3756 | [] | WillAyd | 13 |
matplotlib/matplotlib | matplotlib | 29,699 | [ENH]: Add `zorder` (grid, scatter, line, etc) in `rcParams` | ### Problem
Hi!
I'm working on rcParams and I'd really like it to be possible to control the default zorder of certain elements. The more the better, but being able to control the zorder specifically for grid, lines, scatter and bars would be great!
Is that possible?
Also, I'm willing to open a PR for this as long as it doesn't require a deep understanding of the inner workings of matplotlib.
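To make the request concrete, here is a sketch of what the new keys could look like in a matplotlibrc file (every key name below is hypothetical; none of them exist in matplotlib today):

```
# hypothetical matplotlibrc entries
grid.zorder    : 0
lines.zorder   : 2
scatter.zorder : 3
bars.zorder    : 2
```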
### Proposed solution
/ | open | 2025-03-03T14:40:52Z | 2025-03-03T14:42:51Z | https://github.com/matplotlib/matplotlib/issues/29699 | [
"New feature"
] | JosephBARBIERDARNAL | 1 |
widgetti/solara | fastapi | 222 | API docs missing for Details | see https://github.com/widgetti/solara/issues/221
https://github.com/widgetti/solara/pull/185 is a good template of what it should look like | open | 2023-07-28T17:36:36Z | 2024-04-01T15:31:45Z | https://github.com/widgetti/solara/issues/222 | [
"documentation",
"good first issue",
"help wanted"
] | maartenbreddels | 1 |
graphql-python/graphene | graphql | 923 | How can I test output of graphene when error is thrown? | Resolver:
```
def resolve_entity(self, info, **kwargs):
id = kwargs.get('id')
if id is None:
raise GraphQLError('ID not provided')
try:
return Entity.objects.get(pk=id)
except Entity.DoesNotExist:
raise GraphQLError('Entity not found')
```
Test:
```
def test_single_athlete_no_id(self):
athlete = Entity.objects.create(name='John Doe', weight=23.45,
height=180)
try:
self.client.execute(
'{ entity { id, name, weight, height } }')
except:
print('test')
```
I have been trying different ways (assertRaises, using try/except) but nothing is raised when there is definitely a stack trace like the following:
> ..An error occurred while resolving field Query.entity
> Traceback:
> ...
> raise GraphQLError('ID not provided')
> graphql.error.base.GraphQLError: ID not provided
> Traceback:
> ...
> graphql.error.located_error.GraphQLLocatedError: ID not provided
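For reference, a sketch of asserting on the error payload of the returned result instead of using try/except (the dict below is hand-written to stand in for the real `self.client.execute(...)` return value; its shape is assumed from the standard GraphQL response format, not taken from graphene's docs):

```python
# Hand-written stand-in for what client.execute(...) returns when a
# resolver raises GraphQLError: errors are collected into the payload
# rather than propagating as Python exceptions.
result = {
    "data": {"entity": None},
    "errors": [{"message": "ID not provided"}],
}

def assert_graphql_error(result, expected_message):
    # Collect the messages of all reported errors and check membership.
    messages = [e["message"] for e in result.get("errors") or []]
    assert expected_message in messages, messages

assert_graphql_error(result, "ID not provided")
```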
Why am I not able to catch these exceptions? Am I not supposed to use exceptions for error handling? | closed | 2019-03-16T17:43:18Z | 2022-05-20T13:00:14Z | https://github.com/graphql-python/graphene/issues/923 | [] | GasimGasimzada | 10 |
elliotgao2/toapi | flask | 24 | Encrypt URL. | We don't want the API users know any information about the source site.
Example:
```
http://127.0.0.1/adauoai1duaou/
The `/adauoai1duaou/` is an encrypted URL which means `/users/`
```
We should encrypt all the relative URLs in all items. Such as
```
{
'title':'My Life',
'url':'/movie/2017/'
}
```
Convert it to
```
{
'title':'My Life',
'url':'/uo23uodaoi123udo/'
}
```
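A sketch of what the token mapping inside `encrypt.py` could look like (illustrative only: the function names are assumptions, and url-safe base64 is reversible obfuscation rather than real encryption, so a keyed scheme would be needed to actually hide the source URLs):

```python
import base64

def encrypt_url(path: str) -> str:
    """Map a relative URL like '/users/' to an opaque path segment."""
    token = base64.urlsafe_b64encode(path.encode()).decode().rstrip("=")
    return "/%s/" % token

def decrypt_url(token_path: str) -> str:
    """Recover the original relative URL from the opaque segment."""
    token = token_path.strip("/")
    return base64.urlsafe_b64decode(token + "=" * (-len(token) % 4)).decode()

assert decrypt_url(encrypt_url("/movie/2017/")) == "/movie/2017/"
```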
I need an `encrypt.py` file. | closed | 2017-12-06T12:20:57Z | 2017-12-08T09:31:27Z | https://github.com/elliotgao2/toapi/issues/24 | [
"enhancement",
"help wanted"
] | elliotgao2 | 1 |
modoboa/modoboa | django | 2,991 | Unable to edit identity | # Impacted versions
* OS Type: Debian
* OS Version: 11
* Database Type: As per install script
* Database version: As per install script
* Modoboa: As per install script
* installer used: Yes
* Webserver: As per install script
# Steps to reproduce
Log in as admin. Click on 'Identities', top bar. Click on edit icon under 'Actions', right hand side next to an identity.
# Current behavior
Red "Internal error" dialogue box opens.
# Expected behavior
Identity should be opened for editing.
# Video/Screenshot link (optional)

| closed | 2023-05-02T14:59:55Z | 2023-05-02T15:02:51Z | https://github.com/modoboa/modoboa/issues/2991 | [] | NiccyB | 1 |
google-research/bert | tensorflow | 1,048 | For token embeddings, how do you get the shape 768 (Input format) from wordpiece integer ids? | open | 2020-04-04T04:14:23Z | 2020-04-04T04:14:23Z | https://github.com/google-research/bert/issues/1048 | [] | KartavyaKothari | 0 | |
ludwig-ai/ludwig | computer-vision | 3,427 | Bug: RuntimeError: Caught exception during model preprocessing: 'NoneType' object has no attribute 'IndexFlatL2' | **Describe the bug**
I am trying to run the LLM_few-shot example (https://github.com/ludwig-ai/ludwig/blob/master/examples/llm_few_shot_learning/simple_model_training.py) on google colab and getting the following error in the training stage.
`---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[/usr/local/lib/python3.10/dist-packages/ludwig/api.py](https://localhost:8080/#) in preprocess(self, dataset, training_set, validation_set, test_set, training_set_metadata, data_format, skip_save_processed_input, random_seed, **kwargs)
1488 # TODO (Connor): Refactor to use self.config_obj
-> 1489 preprocessed_data = preprocess_for_training(
1490 self.config_obj.to_dict(),
9 frames
[/usr/local/lib/python3.10/dist-packages/ludwig/data/preprocessing.py](https://localhost:8080/#) in preprocess_for_training(config, dataset, training_set, validation_set, test_set, training_set_metadata, data_format, skip_save_processed_input, preprocessing_params, backend, random_seed, callbacks)
1971 else:
-> 1972 processed = data_format_processor.preprocess_for_training(
1973 config,
[/usr/local/lib/python3.10/dist-packages/ludwig/data/preprocessing.py](https://localhost:8080/#) in preprocess_for_training(config, features, dataset, training_set, validation_set, test_set, training_set_metadata, skip_save_processed_input, preprocessing_params, backend, random_seed, callbacks)
252
--> 253 return _preprocess_df_for_training(
254 config,
[/usr/local/lib/python3.10/dist-packages/ludwig/data/preprocessing.py](https://localhost:8080/#) in _preprocess_df_for_training(config, features, dataset, training_set, validation_set, test_set, training_set_metadata, preprocessing_params, backend, random_seed, callbacks)
2180
-> 2181 data, training_set_metadata = build_dataset(
2182 config,
[/usr/local/lib/python3.10/dist-packages/ludwig/data/preprocessing.py](https://localhost:8080/#) in build_dataset(config, dataset_df, features, global_preprocessing_parameters, mode, metadata, backend, random_seed, skip_save_processed_input, callbacks)
1233 logger.debug("handle text features with prompt parameters")
-> 1234 synthesized_dataset_cols = handle_features_with_prompt_config(
1235 config, dataset_df, features, split_col=split_col, backend=backend
[/usr/local/lib/python3.10/dist-packages/ludwig/data/preprocessing.py](https://localhost:8080/#) in handle_features_with_prompt_config(config, dataset_df, features, backend, split_col)
1778 }
-> 1779 retrieval_model, index_name = index_column(
1780 prompt_config["retrieval"],
[/usr/local/lib/python3.10/dist-packages/ludwig/data/prompt.py](https://localhost:8080/#) in index_column(retrieval_config, col_name, dataset_cols, backend, split_col)
111 # Build index if index name is not provided and index for this df does not already exist in cache
--> 112 retrieval_model.create_dataset_index(df, backend, columns_to_index=[col_name])
113 logger.info(f"Saving index to cache directory '{index_cache_directory}' with name '{index_name}'")
[/usr/local/lib/python3.10/dist-packages/ludwig/models/retrieval.py](https://localhost:8080/#) in create_dataset_index(self, df, backend, columns_to_index)
133 embeddings = self._encode(row_strs, backend)
--> 134 self.index = faiss.IndexFlatL2(embeddings.shape[1])
135 self.index.add(embeddings)
AttributeError: 'NoneType' object has no attribute 'IndexFlatL2'
The above exception was the direct cause of the following exception:
RuntimeError Traceback (most recent call last)
[<ipython-input-135-f12a42dc45d2>](https://localhost:8080/#) in <cell line: 6>()
4 preprocessed_data, # tuple Ludwig Dataset objects of pre-processed training data
5 output_directory, # location of training results stored on disk
----> 6 ) = model.train(
7 dataset=df,experiment_name="simple_experiment", model_name="simple_model", skip_save_processed_input=True)
8
[/usr/local/lib/python3.10/dist-packages/ludwig/api.py](https://localhost:8080/#) in train(self, dataset, training_set, validation_set, test_set, training_set_metadata, data_format, experiment_name, model_name, model_resume_path, skip_save_training_description, skip_save_training_statistics, skip_save_model, skip_save_progress, skip_save_log, skip_save_processed_input, output_directory, random_seed, **kwargs)
539 )
540
--> 541 preprocessed_data = self.preprocess(
542 dataset=dataset,
543 training_set=training_set,
[/usr/local/lib/python3.10/dist-packages/ludwig/api.py](https://localhost:8080/#) in preprocess(self, dataset, training_set, validation_set, test_set, training_set_metadata, data_format, skip_save_processed_input, random_seed, **kwargs)
1506 return PreprocessedDataset(proc_training_set, proc_validation_set, proc_test_set, training_set_metadata)
1507 except Exception as e:
-> 1508 raise RuntimeError(f"Caught exception during model preprocessing: {str(e)}") from e
1509 finally:
1510 for callback in self.callbacks:
RuntimeError: Caught exception during model preprocessing: 'NoneType' object has no attribute 'IndexFlatL2'`
**Expected behavior**
Should allow training with the dataset provided.
**Environment (please complete the following information):**
- OS: Colab
- Python version 3.10
- Ludwig version 0.8
**Additional context**
<img width="958" alt="image" src="https://github.com/ludwig-ai/ludwig/assets/29117565/9faafe75-7120-49a5-b815-6049b02034f0">
| closed | 2023-06-01T11:57:21Z | 2024-10-13T22:32:13Z | https://github.com/ludwig-ai/ludwig/issues/3427 | [
"waiting for answer"
] | chayanray | 5 |
dropbox/sqlalchemy-stubs | sqlalchemy | 168 | `backref` relationships raise `has no attribute` | For example, the following code fails to validate:
```python
from sqlalchemy.orm import backref, relationship
from sqlalchemy.sql.schema import Column, ForeignKey
from sqlalchemy.sql.sqltypes import Integer
from sqlalchemy.ext.declarative import declarative_base
Base = declarative_base()
class A(Base):
id = Column(Integer, primary_key=True)
class B(Base):
id = Column(Integer, primary_key=True)
a_id = Column(
Integer, ForeignKey('a.id', ondelete='CASCADE'), index=True, nullable=False
)
a = relationship(A, backref=backref('bs'))
print(B().a)
print(A().bs)
```
```
sql.py:24: error: "A" has no attribute "bs"
Found 1 error in 1 file (checked 1 source file)
```
| open | 2020-07-29T11:06:37Z | 2021-01-25T20:23:52Z | https://github.com/dropbox/sqlalchemy-stubs/issues/168 | [] | ojomio | 1 |
supabase/supabase-py | fastapi | 63 | Tutorial Request: FastAPI | There have been multiple requests for a FastAPI tutorial. For this task, we're looking for a tutorial or blogpost which showcases the use of `supabase` with FastAPI.
There are no restrictions on the type of tutorial and any number of people can take this issue up. This issue is open for creative exploration but feel free to tag/ping me if ideas are needed. | closed | 2021-10-14T05:29:19Z | 2023-02-28T17:28:22Z | https://github.com/supabase/supabase-py/issues/63 | [
"documentation",
"good first issue"
] | J0 | 12 |
zappa/Zappa | django | 370 | [Migrated] zappa certify doesn't map domain name | Originally from: https://github.com/Miserlou/Zappa/issues/934 by [johey](https://github.com/johey)
Trying to follow the [Using Let's Encrypt with Zappa](https://github.com/Miserlou/Zappa/blob/master/docs/domain_with_free_ssl_dns.md) guide. All steps work fine without any error message. However, accessing the custom domain name does not work. Looking into the AWS console I can see no changes to my Hosted Zone in Route 53 and nothing is added to my CloudFront. However, trying to run zappa certify once again results in an error:
```
Calling certify for stage dev..
Are you sure you want to certify? [y/n] y
Certifying domain test.smolinscy.se..
Setting DNS challenge..
Waiting for DNS to propagate..
Deleting DNS challenge..
An error occurred (BadRequestException) when calling the CreateDomainName operation: The domain name you provided is already associated with an existing CloudFront distribution.
Remove the domain name from the existing CloudFront distribution or use a different domain name.
If you own this domain name and are not using it on an existing CloudFront distribution, please contact support.
Failed to generate or install certificate! :(
```
That makes me curious. Where is that domain defined? I cannot find it anywhere in my AWS console. Which existing CloudFront distribution does it refer to?
## Steps to Reproduce
<!--- Provide a link to a live example, or an unambiguous set of steps to -->
<!--- reproduce this bug include code to reproduce, if relevant -->
1. Follow [Using Let's Encrypt with Zappa](https://github.com/Miserlou/Zappa/blob/master/docs/domain_with_free_ssl_dns.md)
2. Access the given custom domain url
## Your Environment
<!--- Include as many relevant details about the environment you experienced the bug in -->
* Zappa version used: 0.42.1
* Operating System and Python version: Ubuntu 17.04.
* The output of `pip freeze`:
```
argcomplete==1.8.2
base58==0.2.4
boto3==1.4.4
botocore==1.5.40
certifi==2017.4.17
chardet==3.0.4
click==6.7
docutils==0.13.1
durationpy==0.4
Flask==0.12.2
future==0.16.0
futures==3.1.1
hjson==2.0.7
idna==2.5
itsdangerous==0.24
Jinja2==2.9.6
jmespath==0.9.3
kappa==0.6.0
lambda-packages==0.15.1
MarkupSafe==1.0
olefile==0.44
Pillow==4.1.1
pkg-resources==0.0.0
placebo==0.8.1
python-dateutil==2.6.0
python-slugify==1.2.4
PyYAML==3.12
requests==2.18.1
s3transfer==0.1.10
six==1.10.0
toml==0.9.2
tqdm==4.14.0
troposphere==1.9.4
Unidecode==0.4.20
urllib3==1.21.1
Werkzeug==0.12
wsgi-request-logger==0.4.6
zappa==0.42.1
```
* Your `zappa_settings.py`:
```
{
"dev": {
"s3_bucket": "<SECRET>",
"app_function": "my_app.app",
"domain": "test.smolinscy.se",
"route53_enabled": true,
"lets_encrypt_key": "account.key"
}
}
``` | closed | 2021-02-20T08:27:33Z | 2024-04-13T15:36:50Z | https://github.com/zappa/Zappa/issues/370 | [
"no-activity",
"auto-closed"
] | jneves | 2 |
PokemonGoF/PokemonGo-Bot | automation | 5,603 | Bot crashes with new account after setting Starter Pokemon as Buddy | ### Expected Behavior
Complete Tutorial, set newly caught starter as buddy and continue with rest of bot
### Actual Behavior
Bot crashes with new account. Other accounts have no problem.
### Your FULL config.json (remove your username, password, gmapkey and any other private info)
```
{
"websocket_server": false,
"heartbeat_threshold": 10,
"enable_social": false,
"live_config_update": {
"enabled": false,
"tasks_only": false
},
"tasks": [
{
"type": "TelegramTask",
"config": {
"enabled": true,
"master": REMOVED,
"password": null,
"// old syntax, still supported: alert_catch": ["all"],
"// new syntax:": {},
"alert_catch": {
"all": {"operator": "or", "cp": 1000, "iv": 0.90},
"Diglett": {"operator": "or", "cp": 10, "iv": 0.10},
"Horsea": {"operator": "or", "cp": 10, "iv": 0.10},
"Lapras": {"operator": "or", "cp": 900, "iv": 0.9},
"Dragonite": {"operator": "or", "cp": 900, "iv": 0.9},
"Arcanine": {"operator": "or", "cp": 900, "iv": 0.9},
"Exeggutor": {"operator": "or", "cp": 900, "iv": 0.9},
"Snorlax": {"operator": "or", "cp": 900, "iv": 0.9}
}
}
},
{
"type": "DiscordTask",
"config": {
"enabled": false,
"master": null,
"// old syntax, still supported: alert_catch": ["all"],
"// new syntax:": {},
"alert_catch": {
"all": {"operator": "and", "cp": 1000, "iv": 0},
"Snorlax": {"operator": "or", "cp": 900, "iv": 0.9}
}
}
},
{
"//NOTE: This task MUST be placed on the top of task list": {},
"type": "RandomAlivePause",
"config": {
"enabled": true,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:05:00",
"max_interval": "01:30:00"
}
},
{
"type": "HandleSoftBan"
},
{
"type": "RandomPause",
"config": {
"enabled": true,
"min_duration": "00:00:10",
"max_duration": "00:10:00",
"min_interval": "00:10:00",
"max_interval": "02:00:00"
}
},
{
"type": "CompleteTutorial",
"config": {
"enabled": false,
"// set a name": "",
"nickname": "MerlionR0ck06",
"// 0 = No Team, 1 = Blue, 2 = Red, 3 = Yellow": "",
"team": 2
}
},
{
"type": "CollectLevelUpReward",
"config": {
"collect_reward": true,
"level_limit": -1
}
},
{
"type": "IncubateEggs",
"config": {
"enabled": false,
"infinite_longer_eggs_first": false,
"breakable_longer_eggs_first": true,
"min_interval": 120,
"infinite": [2,5,10],
"breakable": [2,5,10]
}
},
{
"type": "UpdateLiveStats",
"config": {
"enabled": false,
"min_interval": 10,
"stats": ["username", "uptime", "stardust_earned", "xp_earned", "xp_per_hour", "stops_visited"],
"terminal_log": true,
"terminal_title": true
}
},
{
"type": "UpdateLiveInventory",
"config": {
"enabled": false,
"min_interval": 120,
"show_all_multiple_lines": false,
"items": ["pokemon_bag", "space_info", "pokeballs", "greatballs", "ultraballs", "razzberries", "luckyegg"]
}
},
{
"type": "PokemonOptimizer",
"config": {
"enabled": true,
"min_slots_left": 5,
"action_wait_min": 3,
"action_wait_max": 5,
"transfer": true,
"evolve": true,
"evolve_to_final": true,
"evolve_time": 25,
"evolve_for_xp": true,
"evolve_only_with_lucky_egg": false,
"evolve_count_for_lucky_egg": 80,
"may_use_lucky_egg": true,
"upgrade": true,
"upgrade_level": 30,
"groups": {
"gym": ["Dragonite", "Snorlax", "Lapras", "Arcanine"],
"HorseaDiglett": ["Diglett", "Horsea"]
},
"rules": [
{
"mode": "overall",
"top": 1,
"sort": ["max_cp", "cp"],
"evolve": false,
"buddy": {"candy": -124}
},
{
"mode": "by_pokemon",
"names": ["gym"],
"top": 3,
"sort": ["iv", "ncp"],
"evolve": {"iv": 0.9, "ncp": 0.9},
"upgrade": {"iv": 0.9, "ncp": 0.9}
},
{
"mode": "by_pokemon",
"names": ["HorseaDiglett"],
"top": 10,
"sort": ["cp"],
"keep": {"cp": -21},
"evolve": false,
"upgrade": false
},
{
"mode": "by_family",
"top": 1,
"sort": ["iv"],
"evolve": {"iv": 0.9}
},
{
"mode": "by_family",
"top": 1,
"sort": ["ncp"],
"evolve": {"ncp": 0.9}
},
{
"mode": "by_family",
"top": 1,
"sort": ["cp"]
}
]
}
},
{
"type": "ShowBestPokemon",
"config": {
"enabled": true,
"min_interval": 60,
"amount": 5,
"order_by": "cp",
"info_to_show": ["cp", "ivcp", "dps", "hp"]
}
},
{
"type": "TransferPokemon",
"config": {
"enabled": true,
"min_free_slot": 5,
"transfer_wait_min": 3,
"transfer_wait_max": 5
}
},
{
"type": "NicknamePokemon",
"config": {
"enabled": true,
"nickname_above_iv": 0.9,
"nickname_template": "{name:.8s}_{ivcp_pct}",
"nickname_wait_min": 3,
"nickname_wait_max": 5
}
},
{
"type": "EvolvePokemon",
"config": {
"enabled": false,
"evolve_list": "all",
"donot_evolve_list": "none",
"first_evolve_by": "cp",
"evolve_above_cp": 500,
"evolve_above_iv": 0.8,
"logic": "or",
"min_evolve_speed": 25,
"max_evolve_speed": 30,
"min_pokemon_to_be_evolved": 1,
"use_lucky_egg": false
}
},
{
"type": "UseIncense",
"config": {
"use_incense": false,
"use_order": [
"ordinary",
"spicy",
"cool",
"floral"
]
}
},
{
"type": "RecycleItems",
"config": {
"enabled": true,
"min_empty_space": 15,
"max_balls_keep": 150,
"max_potions_keep": 50,
"max_berries_keep": 70,
"max_revives_keep": 70,
"item_filter": {
"Pokeball": { "keep" : 100 },
"Potion": { "keep" : 10 },
"Super Potion": { "keep" : 20 },
"Hyper Potion": { "keep" : 30 },
"Revive": { "keep" : 30 },
"Razz Berry": { "keep" : 100 }
},
"recycle_wait_min": 3,
"recycle_wait_max": 5,
"recycle_force": true,
"recycle_force_min": "00:01:00",
"recycle_force_max": "00:05:00"
}
},
{
"type": "CatchPokemon",
"config": {
"enabled": true,
"catch_visible_pokemon": true,
"catch_lured_pokemon": true,
"catch_incensed_pokemon": true,
"min_ultraball_to_keep": 5,
"berry_threshold": 0.35,
"vip_berry_threshold": 0.9,
"treat_unseen_as_vip": false,
"daily_catch_limit": 800,
"vanish_settings": {
"consecutive_vanish_limit": 10,
"rest_duration_min": "02:00:00",
"rest_duration_max": "04:00:00"
},
"catch_throw_parameters": {
"excellent_rate": 0.1,
"great_rate": 0.5,
"nice_rate": 0.3,
"normal_rate": 0.1,
"spin_success_rate" : 0.6,
"hit_rate": 0.75
},
"catch_simulation": {
"flee_count": 3,
"flee_duration": 2,
"catch_wait_min": 3,
"catch_wait_max": 6,
"berry_wait_min": 3,
"berry_wait_max": 5,
"changeball_wait_min": 3,
"changeball_wait_max": 5,
"newtodex_wait_min": 20,
"newtodex_wait_max": 30
}
}
},
{
"type": "SpinFort",
"config": {
"enabled": true,
"spin_wait_min": 3,
"spin_wait_max": 5,
"daily_spin_limit": 1900
}
},
{ "type": "UpdateWebInventory",
"config": {
"enabled": true
}
},
{
"type": "PokemonHunter",
"config": {
"enabled": true,
"max_distance": 1500,
"hunt_all": false,
"hunt_vip": true,
"hunt_pokedex": false
}
},
{
"type": "MoveToFort",
"config": {
"enabled": true,
"lure_attraction": true,
"lure_max_distance": 300,
"ignore_item_count": true,
"walker": "PolylineWalker",
"log_interval": 5
}
}
],
"map_object_cache_time": 5,
"forts": {
"avoid_circles": true,
"max_circle_size": 50,
"cache_recent_forts": true
},
"pokemon_bag": {
"// if 'show_at_start' is true, it will log all the pokemons in the bag (not eggs) at bot start": {},
"show_at_start": true,
"// if 'show_count' is true, it will show the amount of each pokemon (minimum 1)": {},
"show_count": false,
"// if 'show_candies' is true, it will show the amount of candies for each pokemon": {},
"show_candies": false,
"// 'pokemon_info' parameter define which info to show for each pokemon": {},
"// the available options are": {},
"// ['cp', 'iv_ads', 'iv_pct', 'ivcp', 'ncp', 'level', 'hp', 'moveset', 'dps']": {},
"pokemon_info": ["cp", "iv_pct"]
},
"walk_max": 4.16,
"walk_min": 2.16,
"alt_min": 500,
"alt_max": 1000,
"sleep_schedule": {
"enabled": true,
"enable_reminder": false,
"reminder_interval": 600,
"entries": [
{
"enabled": true,
"time": "17:30",
"duration": "5:00",
"time_random_offset": "00:20",
"duration_random_offset": "00:20",
"wake_up_at_location": ""
},
{
"enabled": true,
"time": "05:30",
"duration": "6:00",
"time_random_offset": "00:30",
"duration_random_offset": "00:30",
"wake_up_at_location": "1.283247,103.860243"
}
]
},
"gps_default_altitude": 8.0,
"replicate_gps_xy_noise": true,
"replicate_gps_z_noise": true,
"gps_xy_noise_range": 0.000125,
"gps_z_noise_range": 12.5,
"debug": false,
"test": false,
"walker_limit_output": false,
"health_record": false,
"location_cache": true,
"distance_unit": "km",
"reconnecting_timeout": 15,
"logging": {
"color": true,
"show_datetime": true,
"show_process_name": true,
"show_log_level": true,
"show_thread_name": false
},
"catch": {
"any": {"candy_threshold" : 9999 ,"catch_above_cp": 1200, "catch_above_iv": 0.8, "logic": "and"},
"// Example of always catching Rattata:": {},
"Bulbasaur": {},
"Charmander": {},
"Squirtle": {},
"Pikachu": {},
"Vulpix": {},
"Growlithe": {},
"Mankey": {},
"Abra": {},
"Machop": {},
"Geodude": {},
"Ponyta": {},
"Drowzee": {},
"Rhyhorn": {},
"Magikarp": {},
"Dratini": {},
"Onix": {"candy_threshold" : 9999 ,"catch_above_cp": 1000, "catch_above_iv": 0.8, "logic": "and"},
"Hitmonlee": {"candy_threshold" : 9999 ,"catch_above_cp": 1000, "catch_above_iv": 0.8, "logic": "and"},
"Hitmonchan": {"candy_threshold" : 9999 ,"catch_above_cp": 1000, "catch_above_iv": 0.8, "logic": "and"},
"Lickitung": {"candy_threshold" : 9999 ,"catch_above_cp": 1200, "catch_above_iv": 0.8, "logic": "and"},
"Scyther": {"candy_threshold" : 9999 ,"catch_above_cp": 1200, "catch_above_iv": 0.8, "logic": "and"},
"Jynx": {"candy_threshold" : 9999 ,"catch_above_cp": 1200, "catch_above_iv": 0.8, "logic": "and"},
"Electabuzz": {"candy_threshold" : 9999 ,"catch_above_cp": 1200, "catch_above_iv": 0.8, "logic": "and"},
"Diglett": { "candy_threshold": 9999, "catch_below_cp": 11, "catch_above_iv": 0, "logic": "and", "fast_attack": ["Scratch", "Mud Slap"] },
"Horsea": { "candy_threshold": 9999, "catch_below_cp": 11, "catch_above_iv": 0, "logic": "and", "fast_attack": ["Bubble"] }
},
"release": {
"any": {"release_below_cp": 0, "release_below_iv": 0, "logic": "or"},
"// Legendary pokemons (Goes under S-Tier)": {},
"Lapras": { "release_below_cp": 1041, "release_below_iv": 0.8, "logic": "and" },
"Moltres": { "release_below_cp": 1132, "release_below_iv": 0.8, "logic": "and" },
"Zapdos": { "release_below_cp": 1087, "release_below_iv": 0.8, "logic": "and" },
"Articuno": { "release_below_cp": 1039, "release_below_iv": 0.8, "logic": "and" },
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": { "release_below_cp": 1447, "release_below_iv": 0.8, "logic": "and"},
"Dragonite": { "release_below_cp": 1221, "release_below_iv": 0.8, "logic": "and" },
"Snorlax": { "release_below_cp": 1087, "release_below_iv": 0.8, "logic": "and" },
"// Mew evolves to Mewtwo": {},
"Mew": { "release_below_cp": 1152, "release_below_iv": 0.8, "logic": "and" },
"Arcanine": { "release_below_cp": 1041, "release_below_iv": 0.8, "logic": "and" },
"Vaporeon": { "release_below_cp": 984, "release_below_iv": 0.8, "logic": "and" },
"Gyarados": { "release_below_cp": 938, "release_below_iv": 0.8, "logic": "and" },
"Exeggutor": { "release_below_cp": 1032, "release_below_iv": 0.8, "logic": "and" },
"Muk": { "release_below_cp": 909, "release_below_iv": 0.8, "logic": "and" },
"Weezing": { "release_below_cp": 784, "release_below_iv": 0.8, "logic": "and" },
"Flareon": { "release_below_cp": 924, "release_below_iv": 0.8, "logic": "and" },
"// Growlithe evolves to Arcanine": {},
"Growlithe": { "release_below_cp": 465, "release_below_iv": 0.8, "logic": "and" },
"// Dragonair evolves to Dragonite": {},
"Dragonair": { "release_below_cp": 609, "release_below_iv": 0.8, "logic": "and" },
"// Grimer evolves to Muk": {},
"Grimer": { "release_below_cp": 448, "release_below_iv": 0.8, "logic": "and" },
"// Magikarp evolves to Gyarados": {},
"Magikarp": { "release_below_cp": 91, "release_below_iv": 0.8, "logic": "and" },
"// Exeggcute evolves to Exeggutor": {},
"Exeggcute": { "release_below_cp": 384, "release_below_iv": 0.8, "logic": "and" },
"// Eevee evolves to many versions, like Vaporeon, Flareon": {},
"Eevee": { "release_below_cp": 376, "release_below_iv": 0.8, "logic": "and" },
"// A-Tier pokemons": {},
"Slowbro": { "release_below_cp": 907, "release_below_iv": 0.8, "logic": "and" },
"Victreebel": { "release_below_cp": 883, "release_below_iv": 0.8, "logic": "and" },
"Machamp": { "release_below_cp": 907, "release_below_iv": 0.8, "logic": "and" },
"Poliwrath": { "release_below_cp": 876, "release_below_iv": 0.8, "logic": "and" },
"Clefable": { "release_below_cp": 837, "release_below_iv": 0.8, "logic": "and" },
"Nidoking": { "release_below_cp": 864, "release_below_iv": 0.8, "logic": "and" },
"Venusaur": { "release_below_cp": 902, "release_below_iv": 0.8, "logic": "and" },
"Charizard": { "release_below_cp": 909, "release_below_iv": 0.8, "logic": "and" },
"Golduck": { "release_below_cp": 832, "release_below_iv": 0.8, "logic": "and" },
"Nidoqueen": { "release_below_cp": 868, "release_below_iv": 0.8, "logic": "and" },
"Vileplume": { "release_below_cp": 871, "release_below_iv": 0.8, "logic": "and" },
"Blastoise": { "release_below_cp": 888, "release_below_iv": 0.8, "logic": "and" },
"Omastar": { "release_below_cp": 780, "release_below_iv": 0.8, "logic": "and" },
"Aerodactyl": { "release_below_cp": 756, "release_below_iv": 0.8, "logic": "and" },
"Golem": { "release_below_cp": 804, "release_below_iv": 0.8, "logic": "and" },
"Wigglytuff": { "release_below_cp": 760, "release_below_iv": 0.8, "logic": "and" },
"Dewgong": { "release_below_cp": 748, "release_below_iv": 0.8, "logic": "and" },
"Ninetales": { "release_below_cp": 763, "release_below_iv": 0.8, "logic": "and" },
"Magmar": { "release_below_cp": 792, "release_below_iv": 0.8, "logic": "and" },
"Kabutops": { "release_below_cp": 744, "release_below_iv": 0.8, "logic": "and" },
"Electabuzz": { "release_below_cp": 739, "release_below_iv": 0.8, "logic": "and" },
"Starmie": { "release_below_cp": 763, "release_below_iv": 0.8, "logic": "and" },
"Jolteon": { "release_below_cp": 746, "release_below_iv": 0.8, "logic": "and" },
"Rapidash": { "release_below_cp": 768, "release_below_iv": 0.8, "logic": "and" },
"Pinsir": { "release_below_cp": 741, "release_below_iv": 0.8, "logic": "and" },
"Scyther": { "release_below_cp": 724, "release_below_iv": 0.8, "logic": "and" },
"Tentacruel": { "release_below_cp": 775, "release_below_iv": 0.8, "logic": "and" },
"Gengar": { "release_below_cp": 724, "release_below_iv": 0.8, "logic": "and" },
"Hypno": { "release_below_cp": 763, "release_below_iv": 0.8, "logic": "and" },
"Pidgeot": { "release_below_cp": 729, "release_below_iv": 0.8, "logic": "and" },
"Rhydon": { "release_below_cp": 782, "release_below_iv": 0.8, "logic": "and" },
"Seaking": { "release_below_cp": 712, "release_below_iv": 0.8, "logic": "and" },
"Kangaskhan": { "release_below_cp": 712, "release_below_iv": 0.8, "logic": "and" },
"// Koffing evolves to Weezing (A-Tier)": {},
"Koffing": { "release_below_cp": 403, "release_below_iv": 0.8, "logic": "and" },
"// Below is B-tier and lower pokemons": {},
"Caterpie": { "release_below_cp": 156, "release_below_iv": 0.8, "logic": "and" },
"Weedle": { "release_below_cp": 156, "release_below_iv": 0.8, "logic": "and" },
"Metapod": { "release_below_cp": 168, "release_below_iv": 0.8, "logic": "and" },
"Kakuna": { "release_below_cp": 170, "release_below_iv": 0.8, "logic": "and" },
"Rattata": { "release_below_cp": 204, "release_below_iv": 0.8, "logic": "and" },
"Abra": { "release_below_cp": 208, "release_below_iv": 0.8, "logic": "and" },
"Zubat": { "release_below_cp": 225, "release_below_iv": 0.8, "logic": "and" },
"Chansey": { "release_below_cp": 235, "release_below_iv": 0.8, "logic": "and" },
"Spearow": { "release_below_cp": 240, "release_below_iv": 0.8, "logic": "and" },
"Meowth": { "release_below_cp": 264, "release_below_iv": 0.8, "logic": "and" },
"Krabby": { "release_below_cp": 276, "release_below_iv": 0.8, "logic": "and" },
"Sandshrew": { "release_below_cp": 278, "release_below_iv": 0.8, "logic": "and" },
"Poliwag": { "release_below_cp": 278, "release_below_iv": 0.8, "logic": "and" },
"Gastly": { "release_below_cp": 280, "release_below_iv": 0.8, "logic": "and" },
"Ekans": { "release_below_cp": 288, "release_below_iv": 0.8, "logic": "and" },
"Shellder": { "release_below_cp": 288, "release_below_iv": 0.8, "logic": "and" },
"Vulpix": { "release_below_cp": 290, "release_below_iv": 0.8, "logic": "and" },
"Voltorb": { "release_below_cp": 292, "release_below_iv": 0.8, "logic": "and" },
"Geodude": { "release_below_cp": 297, "release_below_iv": 0.8, "logic": "and" },
"Doduo": { "release_below_cp": 297, "release_below_iv": 0.8, "logic": "and" },
"Onix": { "release_below_cp": 300, "release_below_iv": 0.8, "logic": "and" },
"Mankey": { "release_below_cp": 307, "release_below_iv": 0.8, "logic": "and" },
"Pikachu": { "release_below_cp": 309, "release_below_iv": 0.8, "logic": "and" },
"Magnemite": { "release_below_cp": 312, "release_below_iv": 0.8, "logic": "and" },
"Tentacool": { "release_below_cp": 316, "release_below_iv": 0.8, "logic": "and" },
"Paras": { "release_below_cp": 319, "release_below_iv": 0.8, "logic": "and" },
"Jigglypuff": { "release_below_cp": 321, "release_below_iv": 0.8, "logic": "and" },
"Ditto": { "release_below_cp": 321, "release_below_iv": 0.8, "logic": "and" },
"Staryu": { "release_below_cp": 326, "release_below_iv": 0.8, "logic": "and" },
"Charmander": { "release_below_cp": 333, "release_below_iv": 0.8, "logic": "and" },
"Goldeen": { "release_below_cp": 336, "release_below_iv": 0.8, "logic": "and" },
"Squirtle": { "release_below_cp": 352, "release_below_iv": 0.8, "logic": "and" },
"Cubone": { "release_below_cp": 352, "release_below_iv": 0.8, "logic": "and" },
"Venonat": { "release_below_cp": 360, "release_below_iv": 0.8, "logic": "and" },
"Bulbasaur": { "release_below_cp": 374, "release_below_iv": 0.8, "logic": "and" },
"Drowzee": { "release_below_cp": 374, "release_below_iv": 0.8, "logic": "and" },
"Machop": { "release_below_cp": 381, "release_below_iv": 0.8, "logic": "and" },
"Psyduck": { "release_below_cp": 386, "release_below_iv": 0.8, "logic": "and" },
"Seel": { "release_below_cp": 386, "release_below_iv": 0.8, "logic": "and" },
"Kabuto": { "release_below_cp": 386, "release_below_iv": 0.8, "logic": "and" },
"Bellsprout": { "release_below_cp": 391, "release_below_iv": 0.8, "logic": "and" },
"Omanyte": { "release_below_cp": 391, "release_below_iv": 0.8, "logic": "and" },
"Kadabra": { "release_below_cp": 396, "release_below_iv": 0.8, "logic": "and" },
"Oddish": { "release_below_cp": 400, "release_below_iv": 0.8, "logic": "and" },
"Dugtrio": { "release_below_cp": 408, "release_below_iv": 0.8, "logic": "and" },
"Rhyhorn": { "release_below_cp": 412, "release_below_iv": 0.8, "logic": "and" },
"Clefairy": { "release_below_cp": 420, "release_below_iv": 0.8, "logic": "and" },
"Slowpoke": { "release_below_cp": 424, "release_below_iv": 0.8, "logic": "and" },
"Pidgeotto": { "release_below_cp": 427, "release_below_iv": 0.8, "logic": "and" },
"Farfetch'd": { "release_below_cp": 441, "release_below_iv": 0.8, "logic": "and" },
"Poliwhirl": { "release_below_cp": 468, "release_below_iv": 0.8, "logic": "and" },
"Nidorino": { "release_below_cp": 480, "release_below_iv": 0.8, "logic": "and" },
"Haunter": { "release_below_cp": 482, "release_below_iv": 0.8, "logic": "and" },
"Nidorina": { "release_below_cp": 489, "release_below_iv": 0.8, "logic": "and" },
"Graveler": { "release_below_cp": 501, "release_below_iv": 0.8, "logic": "and" },
"Beedrill": { "release_below_cp": 504, "release_below_iv": 0.8, "logic": "and" },
"Raticate": { "release_below_cp": 504, "release_below_iv": 0.8, "logic": "and" },
"Butterfree": { "release_below_cp": 508, "release_below_iv": 0.8, "logic": "and" },
"Hitmonlee": { "release_below_cp": 520, "release_below_iv": 0.8, "logic": "and" },
"Ponyta": { "release_below_cp": 530, "release_below_iv": 0.8, "logic": "and" },
"Hitmonchan": { "release_below_cp": 530, "release_below_iv": 0.8, "logic": "and" },
"Charmeleon": { "release_below_cp": 544, "release_below_iv": 0.8, "logic": "and" },
"Wartortle": { "release_below_cp": 552, "release_below_iv": 0.8, "logic": "and" },
"Persian": { "release_below_cp": 568, "release_below_iv": 0.8, "logic": "and" },
"Lickitung": { "release_below_cp": 568, "release_below_iv": 0.8, "logic": "and" },
"Ivysaur": { "release_below_cp": 571, "release_below_iv": 0.8, "logic": "and" },
"Electrode": { "release_below_cp": 576, "release_below_iv": 0.8, "logic": "and" },
"Marowak": { "release_below_cp": 578, "release_below_iv": 0.8, "logic": "and" },
"Gloom": { "release_below_cp": 590, "release_below_iv": 0.8, "logic": "and" },
"Porygon": { "release_below_cp": 590, "release_below_iv": 0.8, "logic": "and" },
"Seadra": { "release_below_cp": 597, "release_below_iv": 0.8, "logic": "and" },
"Jynx": { "release_below_cp": 600, "release_below_iv": 0.8, "logic": "and" },
"Weepinbell": { "release_below_cp": 602, "release_below_iv": 0.8, "logic": "and" },
"Tangela": { "release_below_cp": 607, "release_below_iv": 0.8, "logic": "and" },
"Fearow": { "release_below_cp": 609, "release_below_iv": 0.8, "logic": "and" },
"Parasect": { "release_below_cp": 609, "release_below_iv": 0.8, "logic": "and" },
"Machoke": { "release_below_cp": 614, "release_below_iv": 0.8, "logic": "and" },
"Arbok": { "release_below_cp": 616, "release_below_iv": 0.8, "logic": "and" },
"Sandslash": { "release_below_cp": 631, "release_below_iv": 0.8, "logic": "and" },
"Alakazam": { "release_below_cp": 633, "release_below_iv": 0.8, "logic": "and" },
"Kingler": { "release_below_cp": 636, "release_below_iv": 0.8, "logic": "and" },
"Dodrio": { "release_below_cp": 640, "release_below_iv": 0.8, "logic": "and" },
"Tauros": { "release_below_cp": 643, "release_below_iv": 0.8, "logic": "and" },
"Primeape": { "release_below_cp": 650, "release_below_iv": 0.8, "logic": "and" },
"Magneton": { "release_below_cp": 657, "release_below_iv": 0.8, "logic": "and" },
"Venomoth": { "release_below_cp": 660, "release_below_iv": 0.8, "logic": "and" },
"Golbat": { "release_below_cp": 672, "release_below_iv": 0.8, "logic": "and" },
"Raichu": { "release_below_cp": 708, "release_below_iv": 0.8, "logic": "and" },
"Cloyster": { "release_below_cp": 717, "release_below_iv": 0.8, "logic": "and"},
"Diglett": { "release_above_cp": 30 },
"Horsea": { "release_above_cp": 30 },
"Mr. Mime": { "release_below_cp": 650, "release_below_iv": 0.8, "logic": "and" }
},
"vips" : {
"// Any pokemon put here directly force to use Berry & Best Ball to capture, to secure the capture rate": {},
"//any": {"catch_above_cp": 1500, "catch_above_iv": 0.9, "logic": "and" },
"Lapras": {},
"Moltres": {},
"Zapdos": {},
"Articuno": {},
"// S-Tier pokemons (if pokemon can be evolved into tier, list the representative)": {},
"Mewtwo": {},
"Dragonite": {},
"Snorlax": {},
"// Mew evolves to Mewtwo": {},
"Mew": {},
"Arcanine": {},
"Vaporeon": {},
"Gyarados": {},
"Exeggutor": {},
"Muk": {},
"Weezing": {},
"Flareon": {},
"// Others": {},
"Bulbasaur": {},
"Charmander": {},
"Squirtle": {},
"Pikachu": {},
"Vulpix": {},
"Growlithe": {},
"Mankey": {},
"Abra": {},
"Machop": {},
"Geodude": {},
"Ponyta": {},
"Drowzee": {},
"Rhyhorn": {},
"Onix": {},
"Hitmonlee": {},
"Hitmonchan": {},
"Lickitung": {},
"Scyther": {},
"Jynx": {},
"Electabuzz": {},
"Diglett": {"catch_below_cp": 11, "catch_above_iv": 0, "logic": "and", "fast_attack": ["Scratch", "Mud Slap"] },
"Horsea": {"catch_below_cp": 11, "catch_above_iv": 0, "logic": "and", "fast_attack": ["Bubble"] }
},
"websocket": {
"start_embedded_server": true,
"server_url": "127.0.0.1:4000"
}
}
```
### Output when issue occurred
[2016-09-22 13:08:00] [PokemonGoBot] [INFO] Login procedure started.
[2016-09-22 13:08:34] [PokemonGoBot] [INFO] Login successful.
[2016-09-22 13:08:35] [PokemonGoBot] [INFO] Found encrypt.so! Platform: darwin encrypt.so directory: /Users/Thomas/PokemonGo-Bot
[2016-09-22 13:08:35] [PokemonGoBot] [INFO]
[2016-09-22 13:08:37] [PokemonGoBot] [INFO] Level: 1 (Next Level: 350 XP) (Total: 650 XP)
[2016-09-22 13:08:37] [PokemonGoBot] [INFO] Pokemon Captured: 1 | Pokestops Visited: 1
[2016-09-22 13:08:38] [PokemonGoBot] [INFO]
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] --- USERNAMEREMOVED ---
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Pokemon Bag: 1/250
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Items: 57/350
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Stardust: 100 | Pokecoins: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] PokeBalls: 53 | GreatBalls: 0 | UltraBalls: 0 | MasterBalls: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] RazzBerries: 0 | BlukBerries: 0 | NanabBerries: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] LuckyEgg: 0 | Incubator: 0 | TroyDisk: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Potion: 0 | SuperPotion: 0 | HyperPotion: 0 | MaxPotion: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Incense: 2 | IncenseSpicy: 0 | IncenseCool: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Revive: 0 | MaxRevive: 0
[2016-09-22 13:08:38] [PokemonGoBot] [INFO]
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Pokemon:
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] #4 Charmander: (CP 12, IV 0.67)
[2016-09-22 13:08:38] [PokemonGoBot] [INFO]
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Telegram bot not running. Starting
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Telegram master is valid (numeric): 18511956
[2016-09-22 13:08:38] [RandomAlivePause] [INFO] Next random alive pause at 14:02:13, for a duration of 0:02:51
[2016-09-22 13:08:38] [RandomPause] [INFO] Next random pause at 14:29:55, for a duration of 0:00:14
[2016-09-22 13:08:38] [RecycleItems] [INFO] Next forced item recycle at 13:11:13
[2016-09-22 13:08:38] [PokemonGoBot] [INFO] Starting bot...
[2016-09-22 13:08:40] [CollectLevelUpReward] [INFO] Received level up reward:
[2016-09-22 13:08:40] [ cli] [INFO]
[2016-09-22 13:08:40] [ cli] [INFO] Ran for 0:00:40
[2016-09-22 13:08:40] [ cli] [INFO] Total XP Earned: 0 Average: 0.00/h
[2016-09-22 13:08:40] [ cli] [INFO] Travelled 0.00km
[2016-09-22 13:08:40] [ cli] [INFO] Visited 0 stops
[2016-09-22 13:08:40] [ cli] [INFO] Encountered 0 pokemon, 0 caught, 0 released, 0 evolved, 0 never seen before ()
[2016-09-22 13:08:40] [ cli] [INFO] Threw 0 pokeballs
[2016-09-22 13:08:40] [ cli] [INFO] Earned 0 Stardust
[2016-09-22 13:08:40] [ cli] [INFO] Hatched eggs 0
[2016-09-22 13:08:40] [ cli] [INFO]
[2016-09-22 13:08:40] [ cli] [INFO] Highest CP Pokemon:
[2016-09-22 13:08:40] [ cli] [INFO] Most Perfect Pokemon:
```
Traceback (most recent call last):
  File "pokecli.py", line 843, in <module>
    main()
  File "pokecli.py", line 202, in main
    bot.tick()
  File "/Users/Thomas/PokemonGo-Bot/pokemongo_bot/__init__.py", line 744, in tick
    if worker.work() == WorkerResult.RUNNING:
  File "/Users/Thomas/PokemonGo-Bot/pokemongo_bot/cell_workers/pokemon_optimizer.py", line 91, in work
    self.check_buddy()
  File "/Users/Thomas/PokemonGo-Bot/pokemongo_bot/cell_workers/pokemon_optimizer.py", line 209, in check_buddy
    distance_walked = inventory.player().player_stats.get("km_walked", 0) - self.buddy["last_km_awarded"]
KeyError: 'last_km_awarded'
Thu Sep 22 13:08:40 SGT 2016 Pokebot Stopped.
Press any button or wait 20 seconds to continue.
```
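Looking at the failing line, a buddy assigned moments earlier (fresh from the tutorial) apparently has no `last_km_awarded` key yet. A defensive version of that expression, pulled out into a runnable sketch (my illustration, not the project's actual code), would be:

```python
def km_since_last_award(player_stats, buddy):
    # mirrors the failing line, but tolerates a buddy record that
    # does not have 'last_km_awarded' yet (a freshly set buddy)
    return player_stats.get("km_walked", 0) - buddy.get("last_km_awarded", 0)

# fresh buddy straight from the tutorial: no 'last_km_awarded' key
print(km_since_last_award({"km_walked": 1.5}, {}))  # → 1.5
```

Inside `check_buddy` the equivalent one-line change would be reading `self.buddy.get("last_km_awarded", 0)` instead of indexing the key directly.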
### Steps to Reproduce
New account, with CompleteTutorial enabled.
### Other Information
OS:
MacOS
Branch:
Dev
Git Commit:
7ecaa17c2cfeeb067bdfc63297ceb21d3876b46d
Python Version:
Python 2.7.10
| closed | 2016-09-22T05:17:12Z | 2016-09-24T05:29:26Z | https://github.com/PokemonGoF/PokemonGo-Bot/issues/5603 | [
"Bug"
] | MerlionRock | 1 |
plotly/dash | flask | 2,541 | [BUG] Inconsistent/buggy partial plot updates using Patch | **Example scenario / steps to reproduce:**
- App has several plots, all of which have the same x-axis coordinate units
- When one plot is rescaled (x-axis range modified, e.g. zoom in), all plots should rescale the same way
- Rather than rebuild each plot, we can now implement a callback using `Patch`. This way, the heavy lifting of initially rendering the plots happens only once, and when a plot is rescaled, only the layout of each plot is updated.
```python
@callback(
    output=[Output(dict(type='plot', id=plot_id), 'figure', allow_duplicate=True) for plot_id in plot_ids],
    inputs=[Input(dict(type='plot', id=ALL), 'relayoutData')],
    prevent_initial_call=True
)
def update(inputs):
    plot_updates = [Patch() for _ in plot_ids]
    new_range = get_range(dash.callback_context.triggered)  # implemented elsewhere
    for p in plot_updates:
        p.layout.xaxis.range = new_range
    return plot_updates
```
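For reference, `get_range` is a small helper of mine; a hypothetical sketch of it (my real one differs only in details) pulls the new x-range out of the triggering `relayoutData`, using the `xaxis.range[0]`/`xaxis.range[1]` keys Plotly emits on zoom:

```python
def get_range(triggered):
    # `triggered` is dash.callback_context.triggered: a list of
    # {'prop_id': ..., 'value': <relayoutData>} dicts
    relayout = (triggered[0].get("value") or {}) if triggered else {}
    lo = relayout.get("xaxis.range[0]")
    hi = relayout.get("xaxis.range[1]")
    return [lo, hi] if lo is not None and hi is not None else None

# zooming in emits both range endpoints in relayoutData
print(get_range([{"prop_id": "x.relayoutData",
                  "value": {"xaxis.range[0]": 1.0, "xaxis.range[1]": 5.0}}]))
# → [1.0, 5.0]
```

On a double-click reset (`xaxis.autorange`) it returns `None`, in which case the callback could return `dash.no_update` values instead of patching the range.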
**Unexpected behavior:**
Upon rescale of a plot, only _some_ of the other plots actually update in the UI. Which plots update in response to a rescale is inconsistent and seemingly random (sometimes they all update and the app appears to work as I expect, sometimes only a few of them update, etc). I have verified that the callback/Patch _does_ update the range property in every figure data structure, so it seems like there's just some inconsistency with listening for the update (or maybe I made an error with how I've implemented this).
Happy to provide a screen recording if it would help.
```
dash 2.9.3
dash-bootstrap-components 1.4.1
dash-core-components 2.0.0
dash-html-components 2.0.0
dash-table 5.0.0
```
| closed | 2023-05-25T01:16:45Z | 2024-07-24T17:08:29Z | https://github.com/plotly/dash/issues/2541 | [] | abehr | 2 |
TencentARC/GFPGAN | pytorch | 436 | about v1.4.(Add [V1.4 model], which produces slightly more details and better identity than V1.3.) | Can you disclose the training strategy for v1.4? I would like to know how it differs from v1.3.
Many thanks! | open | 2023-08-30T07:02:17Z | 2023-08-30T07:02:17Z | https://github.com/TencentARC/GFPGAN/issues/436 | [] | ke0224 | 0 |
amdegroot/ssd.pytorch | computer-vision | 128 | Where did you get the original Caffe weights file -- ssd_300_VOC0712.pth? | @amdegroot What I got from the original Caffe repo is the VGG_VOC0712_SSD_300x300_iter_120000.caffemodel file.
So where did you get the original Caffe weights file -- ssd_300_VOC0712.pth? | closed | 2018-03-20T06:26:09Z | 2019-06-12T04:03:07Z | https://github.com/amdegroot/ssd.pytorch/issues/128 | [] | xiaozhi2015 | 3 |
MaartenGr/BERTopic | nlp | 1,138 | Using PaddleNLP for tokenization will stuck at clustering | [PaddleNLP](https://github.com/PaddlePaddle/PaddleNLP) is an NLP framework from Baidu, part of their PaddlePaddle ML family.
I tried using its word segmentation task for tokenization, and things always seem to get stuck at the clustering phase (default HDBSCAN) with no error reported.
```
pip install paddlenlp
```
```
from paddlenlp import Taskflow
tokenize_zh = Taskflow("word_segmentation")
tokenize_zh("Some text.")  # a bit weird, but this is their API and it does work
```
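As far as I can tell, `Taskflow("word_segmentation")` returns a flat token list for a single string and a list of lists for a batch, so I wrap it before plugging it into anything that expects a plain string-to-tokens callable; a sketch with a stand-in segmenter (paddlenlp itself isn't imported here, and `make_tokenizer` is just my illustration):

```python
def make_tokenizer(segment):
    # `segment` is any Taskflow-style callable; for a single string it may
    # return either a flat token list or a one-element batch (list of lists)
    def tokenize(text):
        out = segment(text)
        if out and isinstance(out[0], list):
            return out[0]  # unwrap single-item batch output
        return out
    return tokenize

# stand-in for Taskflow("word_segmentation")
fake_segment = lambda s: s.split()
tok = make_tokenizer(fake_segment)
print(tok("hello world"))  # → ['hello', 'world']
```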
Paddle is generally thought to be a better/more advanced word tokenizer than Jieba. In fact, jieba has been inactive for a while, and one of the last big features it added is Paddle Mode (which works with BERTopic).
It would surely be nice if paddlenlp just worked, rather than going the indirect jieba Paddle-mode route (with additional deps and some version complications).
Haven't had more time to investigate; will update this issue with more details if I find anything new.
Thanks for the nice library and cheers,
Alex | closed | 2023-03-30T15:52:21Z | 2023-05-05T02:46:23Z | https://github.com/MaartenGr/BERTopic/issues/1138 | [] | zxygentoo | 3 |
tortoise/tortoise-orm | asyncio | 958 | Is there a way to pass extra parameters to a signal? | When using signals on model update/create/delete, is there a way to pass extra parameters to the signals?
Let's say I have app state of loaded `dataframes` that I would like to be visible to the signal handlers; how can I achieve this?
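One workaround I can think of is binding the extra state with a closure or a `contextvars` variable instead of passing it through the signal; a sketch (the handler signature matches `post_save` per the Tortoise docs, everything else is my illustration):

```python
import asyncio
import contextvars

# app-level state the handler should "see" (e.g. the loaded dataframes)
loaded_dataframes = contextvars.ContextVar("loaded_dataframes", default={})

async def on_post_save(sender, instance, created, using_db, update_fields):
    # signal handlers get a fixed argument list, so extra state is read
    # from the context var (a closure or module global works the same way)
    dfs = loaded_dataframes.get()
    return f"{instance!r} saved; {len(dfs)} dataframes visible"

# normally registered with: post_save(MyModel)(on_post_save)
loaded_dataframes.set({"users": "df_users", "orders": "df_orders"})
print(asyncio.run(on_post_save(None, "row", True, None, None)))
# → 'row' saved; 2 dataframes visible
```

Is something like this the intended approach, or is there a built-in way?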
| closed | 2021-10-20T07:56:09Z | 2024-08-14T15:22:50Z | https://github.com/tortoise/tortoise-orm/issues/958 | [] | angel-langdon | 0 |
keras-team/keras | python | 20,243 | support for dictionary-type loss functions without explicitly declaring `None` | In previous versions of Keras, when providing a loss function as a dictionary for models with multiple outputs, the `CompileLoss` logic would automatically assign a default value for outputs without a defined loss. The relevant code was:
```python
for name, yt, yp in zip(output_names, y_true, y_pred):
    if name in loss:
        if loss[name]:
            flat_losses.append(get_loss(loss[name], yt, yp))
        else:
            flat_losses.append(None)
    else:
        flat_losses.append(None)
```
However, in the latest version, flat_losses is now simply:
```python
flat_losses = tree.flatten(loss)
```
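For now I pad the dict myself before passing it to `compile`; a workaround sketch (the helper name is mine, not a Keras API):

```python
def pad_loss_dict(loss, output_names):
    # outputs missing from `loss` get an explicit None, mirroring what
    # the old compile-loss loop did implicitly
    return {name: loss.get(name) for name in output_names}

print(pad_loss_dict({"main": "mse"}, ["main", "aux"]))  # → {'main': 'mse', 'aux': None}
```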
Is there a way to reintroduce support for dictionary-type loss functions without explicitly declaring None for undefined losses? This change reduces flexibility in using dictionary-based loss definitions. | closed | 2024-09-09T20:33:19Z | 2024-09-27T16:55:47Z | https://github.com/keras-team/keras/issues/20243 | [
"stat:contributions welcome",
"type:feature"
] | WJiang0123 | 1 |
OpenInterpreter/open-interpreter | python | 1,206 | IPKernelApp Warning & Pydantic issue | ### Describe the bug
When attempting to run `interpreter`, I get these errors and then it exits:
PS C:\Windows\System32> interpreter --verbose
C:\Program Files\Python311\Lib\site-packages\pydantic\_internal\_fields.py:151: UserWarning: Field "model_list" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
C:\Program Files\Python311\Lib\site-packages\pydantic\_internal\_fields.py:151: UserWarning: Field "model_name" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
C:\Program Files\Python311\Lib\site-packages\pydantic\_internal\_fields.py:151: UserWarning: Field "model_group_alias" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
C:\Program Files\Python311\Lib\site-packages\pydantic\_internal\_fields.py:151: UserWarning: Field "model_info" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
C:\Program Files\Python311\Lib\site-packages\pydantic\_internal\_fields.py:151: UserWarning: Field "model_id" has conflict with protected namespace "model_".
You may be able to resolve this warning by setting `model_config['protected_namespaces'] = ()`.
warnings.warn(
Setting attribute verbose on openinterpreter to 'True'...
Setting attribute anonymous_telemetry on openinterpreter to 'True'...
Setting attribute safe_mode on openinterpreter to 'off'...
Traceback (most recent call last):
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 56, in profile
profile = get_profile(filename_or_url, profile_path)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 109, in get_profile
return yaml.safe_load(file)
^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\__init__.py", line 125, in safe_load
return load(stream, SafeLoader)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\__init__.py", line 81, in load
return loader.get_single_data()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\constructor.py", line 49, in get_single_data
node = self.get_single_node()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 36, in get_single_node
document = self.compose_document()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 55, in compose_document
node = self.compose_node(None, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 127, in compose_mapping_node
while not self.check_event(MappingEndEvent):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\parser.py", line 98, in check_event
self.current_event = self.state()
^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\parser.py", line 438, in parse_block_mapping_key
raise ParserError("while parsing a block mapping", self.marks[-1],
yaml.parser.ParserError: while parsing a block mapping
in "C:\Users\HyperIon\AppData\Local\open-interpreter\open-interpreter\profiles\default.yaml", line 3, column 1
expected <block end>, but found '<block mapping start>'
in "C:\Users\HyperIon\AppData\Local\open-interpreter\open-interpreter\profiles\default.yaml", line 10, column 2
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "<frozen runpy>", line 198, in _run_module_as_main
File "<frozen runpy>", line 88, in _run_code
File "C:\Program Files\Python311\Scripts\interpreter.exe\__main__.py", line 7, in <module>
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 437, in main
start_terminal_interface(interpreter)
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\start_terminal_interface.py", line 352, in start_terminal_interface
interpreter = profile(interpreter, args.profile)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 60, in profile
reset_profile(filename_or_url)
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 576, in reset_profile
current_version = determine_user_version()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\interpreter\terminal_interface\profiles\profiles.py", line 670, in determine_user_version
default_profile = yaml.safe_load(file)
^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\__init__.py", line 125, in safe_load
return load(stream, SafeLoader)
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\__init__.py", line 81, in load
return loader.get_single_data()
^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\constructor.py", line 49, in get_single_data
node = self.get_single_node()
^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 36, in get_single_node
document = self.compose_document()
^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 55, in compose_document
node = self.compose_node(None, None)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 84, in compose_node
node = self.compose_mapping_node(anchor)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\composer.py", line 127, in compose_mapping_node
while not self.check_event(MappingEndEvent):
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\parser.py", line 98, in check_event
self.current_event = self.state()
^^^^^^^^^^^^
File "C:\Program Files\Python311\Lib\site-packages\yaml\parser.py", line 438, in parse_block_mapping_key
raise ParserError("while parsing a block mapping", self.marks[-1],
yaml.parser.ParserError: while parsing a block mapping
in "C:\Users\HyperIon\AppData\Local\open-interpreter\open-interpreter\profiles\default.yaml", line 3, column 1
expected <block end>, but found '<block mapping start>'
in "C:\Users\HyperIon\AppData\Local\open-interpreter\open-interpreter\profiles\default.yaml", line 10, column 2
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
[IPKernelApp] WARNING | Parent appears to have exited, shutting down.
PS C:\Windows\System32>
### Reproduce
1
### Expected behavior
.
### Screenshots
_No response_
### Open Interpreter version
Name: open-interpreter Version: 0.2.4
### Python version
3.11.4
### Operating System name and version
windows 11
### Additional context
_No response_ | closed | 2024-04-14T11:43:28Z | 2024-04-16T06:49:16Z | https://github.com/OpenInterpreter/open-interpreter/issues/1206 | [] | SynTia-OI | 0 |
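The root cause in both tracebacks is a malformed `default.yaml` profile, not Open Interpreter itself. That specific ParserError means a key is indented so that it opens a new block mapping where the current one should end — a generic illustration (the key names below are made up, not the real profile contents):

```yaml
# Broken: the second key is indented one extra space, which starts a new
# block mapping where the parser expects the current block to end.
llm: gpt-4
 api_base: http://localhost
# -> yaml.parser.ParserError: expected <block end>, but found '<block mapping start>'

# Fixed: sibling keys share one indentation level.
llm: gpt-4
api_base: http://localhost
```

Fixing the indentation at line 10 of the reported profile — or deleting it so the default profile is regenerated (the traceback shows a `reset_profile` path, so regeneration appears to exist) — should clear the error.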
ckan/ckan | api | 8,380 | CKANEXT-APPROVALWORKFLOW INSTALLATION FAILURE | ## CKAN version
2.10.3
## Describe the bug
The application crashes when the ckanext-approvalworkflow plugin is installed.
```
Traceback (most recent call last):
File "/usr/lib/ckan/default/bin/ckan", line 8, in <module>
sys.exit(ckan())
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 1130, in __call__
return self.main(*args, **kwargs)
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 1054, in main
with self.make_context(prog_name, args, **extra) as ctx:
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 920, in make_context
self.parse_args(ctx, args)
File "/usr/lib/ckan/default/src/ckan/ckan/cli/cli.py", line 121, in parse_args
result = super().parse_args(ctx, args)
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 1613, in parse_args
rest = super().parse_args(ctx, args)
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 1378, in parse_args
value, args = param.handle_parse_result(ctx, opts, args)
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 2360, in handle_parse_result
value = self.process_value(ctx, value)
File "/usr/lib/ckan/default/lib/python3.8/site-packages/click/core.py", line 2322, in process_value
value = self.callback(ctx, self, value)
File "/usr/lib/ckan/default/src/ckan/ckan/cli/cli.py", line 131, in _init_ckan_config
_add_ctx_object(ctx, value)
File "/usr/lib/ckan/default/src/ckan/ckan/cli/cli.py", line 140, in _add_ctx_object
ctx.obj = CtxObject(path)
File "/usr/lib/ckan/default/src/ckan/ckan/cli/cli.py", line 57, in __init__
self.app = make_app(raw_config)
File "/usr/lib/ckan/default/src/ckan/ckan/config/middleware/__init__.py", line 27, in make_app
load_environment(conf)
File "/usr/lib/ckan/default/src/ckan/ckan/config/environment.py", line 69, in load_environment
p.load_all()
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 224, in load_all
load(*plugins)
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 240, in load
service = _get_service(plugin)
File "/usr/lib/ckan/default/src/ckan/ckan/plugins/core.py", line 347, in _get_service
return plugin.load()(name=plugin_name)
File "/usr/lib/ckan/default/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2516, in load
return self.resolve()
File "/usr/lib/ckan/default/lib/python3.8/site-packages/pkg_resources/__init__.py", line 2522, in resolve
module = __import__(self.module_name, fromlist=['__name__'], level=0)
File "/usr/lib/ckan/default/src/ckanext-approvalworkflow/ckanext/approvalworkflow/plugin.py", line 4, in <module>
from ckan.lib.base import c, model
ImportError: cannot import name 'c' from 'ckan.lib.base' (/usr/lib/ckan/default/src/ckan/ckan/lib/base.py)
```
### Steps to reproduce
Install this plugin https://github.com/keitaroinc/ckanext-approvalworkflow
### Expected behavior
CKAN Starts
| closed | 2024-07-31T08:33:37Z | 2024-08-01T11:35:38Z | https://github.com/ckan/ckan/issues/8380 | [] | GokulVijayakumarRam | 1 |
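The failing line is `from ckan.lib.base import c`, a re-export that no longer exists in CKAN 2.10, as the traceback confirms. A hedged sketch of a version-tolerant lookup — the `resolve` helper is ours, and the exact CKAN 2.10 location of `c` should be verified before patching the plugin:

```python
from importlib import import_module

def resolve(candidates):
    """Return the first attribute importable from the given (module, name) pairs."""
    for module_name, attr in candidates:
        try:
            return getattr(import_module(module_name), attr)
        except (ImportError, AttributeError):
            continue
    raise ImportError(f"none of {candidates} could be resolved")

# In the plugin one would try the legacy location first, e.g. (hypothetical paths):
#     c = resolve([("ckan.lib.base", "c"), ("ckan.plugins.toolkit", "c")])
# Demonstrated with stdlib names so the sketch runs anywhere:
join = resolve([("os.path_legacy", "join"), ("os.path", "join")])
```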
hbldh/bleak | asyncio | 931 | Exception in read/write for pairable ble peripheral. | * bleak version:0.13.0
* Python version:3.10.5
* Operating System: windows10
* BlueZ version (`bluetoothctl -v`) in case of Linux:
### What I Did
I am trying to read and write characteristics using the bleak library but am not able to.
The BLE peripheral requires pairing with a passkey.
The code I use for Bluetooth communication is attached.
[EXAMPLE.zip](https://github.com/hbldh/bleak/files/9299013/EXAMPLE.zip)
output of attached code:
```
DeprecationWarning: There is no current event loop
loop = asyncio.get_event_loop()
ABCD 1001
D8:A9:8B:A9:86:00
RSSI: -68 AdvertisementData(local_name='ABCD 1001', manufacturer_data={1827: b'0\x01\x01'})
found the ABCD 1001 device
---------------------------------------------------
Ble address: D8:A9:8B:A9:86:00
client object:BleakClientWinRT (D8:A9:8B:A9:86:00)
client disconnect:True
trying to connect...1 time
Excp:
trying to connect...2 time
Excp:
trying to connect...3 time
Excp:
trying to connect...4 time
Excp:
trying to connect...5 time
Excp:
trying to connect...6 time
Excp:
trying to connect...7 time
Excp:
trying to connect...8 time
True
client object in pair:BleakClientWinRT (D8:A9:8B:A9:86:00)
trying to pair with device 1 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 2 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 3 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 4 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 5 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 6 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 7 time : Pair status:False
Exception in pair: Could not pair with device: 19: FAILED
trying to pair with device 8 time : Pair status:False
pair_status: True
client object in services:BleakClientWinRT (D8:A9:8B:A9:86:00)
print client services...
Services:
00001800-0000-1000-8000-00805f9b34fb (Handle: 1): Generic Access Profile
00001801-0000-1000-8000-00805f9b34fb (Handle: 6): Generic Attribute Profile
8b777660-9d6c-4945-b17f-b2fd22c27b8b (Handle: 10): Unknown
client object in char:BleakClientWinRT (D8:A9:8B:A9:86:00)
Characteristic wirth Read properties:
Exception: [Characteristic] 8b77766a-9d6c-4945-b17f-b2fd22c27b8b (Handle: 11): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Read properties:
Exception: [Characteristic] 8b777669-9d6c-4945-b17f-b2fd22c27b8b (Handle: 15): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Read properties:
Exception: [Characteristic] 8b77766d-9d6c-4945-b17f-b2fd22c27b8b (Handle: 19): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Write properties:
[Characteristic]: 8b777662-9d6c-4945-b17f-b2fd22c27b8b (Handle: 23):
Characteristic wirth Write properties:
[Characteristic]: 8b777661-9d6c-4945-b17f-b1fd22c27b8b (Handle: 26):
Characteristic wirth Write properties:
[Characteristic]: 8b777664-9d6c-4945-b17f-b2fd22c27b8b (Handle: 29):
Characteristic wirth Write properties:
[Characteristic]: 8b777663-9d6c-4945-b17f-b2fd22c27b8b (Handle: 32):
Characteristic wirth Write properties:
[Characteristic]: 8b777666-9d6c-4945-b17f-b2fd22c27b8b (Handle: 35):
Characteristic wirth Write properties:
[Characteristic]: 8b777665-9d6c-4945-b17f-b2fd22c27b8b (Handle: 38):
Characteristic wirth Read properties:
Exception: [Characteristic] 8b777668-9d6c-4945-b17f-b2fd22c27b8b (Handle: 41): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Read properties:
Exception: [Characteristic] 8b777667-9d6c-4945-b17f-b2fd22c27b8b (Handle: 45): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Read properties:
Exception: [Characteristic] 8b77766c-9d6c-4945-b17f-b2fd22c27b8b (Handle: 49): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Read properties:
Exception: [Characteristic] 8b77766b-9d6c-4945-b17f-b2fd22c27b8b (Handle: 53): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
Characteristic wirth Write properties:
[Characteristic]: 8b77766f-9d6c-4945-b17f-b2fd22c27b8b (Handle: 57):
Characteristic wirth Read properties:
Exception: [Characteristic] 8b77766e-9d6c-4945-b17f-b2fd22c27b8b (Handle: 61): (read,notify,extended-properties), Value: [WinError -2147483629] The object has been closed
BleakClientWinRT (D8:A9:8B:A9:86:00)
is connected: False
Error in connection while writing:
is connected: False
Error in connection while writing:
is connected: False
Error in connection while writing:
is connected: False
Error in connection while writing:
is connected: False
```
We get a "[WinError -2147483629] The object has been closed" exception every time.
Please let us know if any further details are required.
"Backend: WinRT",
"more info required"
] | kaushik8785 | 5 |
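The attached example retries `connect` and `pair` with hand-rolled loops that print bare `Excp:` lines and drop the exception detail. A small helper (a sketch; the attempt count and delay are assumptions, and with bleak the caught errors would typically be `BleakError`/`OSError`) keeps the retries uniform and preserves the last error for diagnosis:

```python
import asyncio

async def retry(op, attempts: int = 8, delay: float = 1.0):
    """Await op() up to `attempts` times, sleeping `delay` seconds between tries;
    re-raise the last exception if every attempt fails."""
    last_exc = None
    for attempt in range(1, attempts + 1):
        try:
            return await op()
        except Exception as exc:
            last_exc = exc
            print(f"attempt {attempt} failed: {exc!r}")
            await asyncio.sleep(delay)
    raise last_exc

# Usage sketch inside the attached script:
#     await retry(client.connect)
#     await retry(client.pair)
```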
huggingface/datasets | tensorflow | 7,372 | Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets | ### Description
I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue:
#### Code 1: Using `load_dataset`
```python
from datasets import Dataset, load_dataset
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_dataset("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 1350 samples.
- `test` has 150 samples.
#### Code 2: Using `load_from_disk`
```python
from datasets import Dataset, load_from_disk
# First save with max_shard_size=10
Dataset.from_dict({"id": range(1000)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Second save with max_shard_size=10
Dataset.from_dict({"id": range(500)}).train_test_split(test_size=0.1).save_to_disk("my_sharded_datasetdict", max_shard_size=10)
# Load the DatasetDict
loaded_datasetdict = load_from_disk("my_sharded_datasetdict")
print(loaded_datasetdict)
```
**Output**:
- `train` has 450 samples.
- `test` has 50 samples.
### Expected Behavior
I expected both `load_dataset` and `load_from_disk` to load the same dataset, as they are pointing to the same directory. However, the results differ significantly:
- `load_dataset` seems to merge all shards, resulting in a combined dataset.
- `load_from_disk` only loads the last saved dataset, ignoring previous shards.
### Questions
1. Is this behavior intentional? If so, could you clarify the difference between `load_dataset` and `load_from_disk` in the documentation?
2. If this is not intentional, could this be considered a bug?
3. What is the recommended way to handle cases where multiple datasets are saved to the same directory?
Thank you for your time and effort in maintaining this great library! I look forward to your feedback. | open | 2025-01-16T05:47:20Z | 2025-01-16T05:47:20Z | https://github.com/huggingface/datasets/issues/7372 | [] | gaohongkui | 0 |
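Until the semantics are documented, one way to sidestep the mixed-shard surprise (a sketch; `fresh_save_dir` is our helper, and `save_to_disk` itself is untouched) is to never reuse a save directory — each call gets a fresh versioned path:

```python
from pathlib import Path

def fresh_save_dir(base: str, name: str) -> Path:
    """Return a new, previously unused directory such as base/name-v0, name-v1, ..."""
    root = Path(base)
    version = 0
    while (root / f"{name}-v{version}").exists():
        version += 1
    target = root / f"{name}-v{version}"
    target.mkdir(parents=True)
    return target

# e.g. ds.save_to_disk(str(fresh_save_dir("data", "my_sharded_datasetdict")),
#                      max_shard_size=10)
```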
jupyter-book/jupyter-book | jupyter | 2,105 | Unable to change pygments_style in _config.yml | Hello all
### Describe the bug
When I add to the `_config.yml` file
```
sphinx:
recursive_update: true
config:
pygments_style: pygments.styles.xcode.XcodeStyle
```
I expect the style "xcode" to be used to highlight syntax in code blocks
Instead, code blocks are always colored the same way, whatever `pygments_style` is provided.
I tried to assign a class that does not exist (pygments.styles.xcode.XcodeStyle2) and this results in an error during book building
```{console}
Exception occurred:
File "C:\Users\Martin\AppData\Roaming\Python\Python39\site-packages\sphinx\highlighting.py", line 89, in get_style
return getattr(import_module(module), stylename)
AttributeError: module 'pygments.styles.xcode' has no attribute 'XcodeStyle2'
```
=> This suggests that the pygments style is somehow "read" (when an existing class is provided), but not used as I would expect.
I would like to customize the syntax highlighting, and thus first need to confirm that `pygments_style` can be changed before diving into a custom lexer, custom style, etc.
### Reproduce the bug
1. Unzip the sample source code [mynewbook.zip](https://github.com/executablebooks/jupyter-book/files/14028054/mynewbook.zip)
It contains a `MATLAB` code block in the file mynewbook/markdown.md
And the option below in `_config.yml`
```
sphinx:
recursive_update: true
config:
pygments_style: pygments.styles.xcode.XcodeStyle
```
2. Build the book with
```
jupyter-book build mynewbook/
```
3. Observe that the code highlighting is the same with and without the `pygments_style` option.
4. To see the expected output, you can go to this page https://pygments.org/demo/
Select
Language = Matlab
Style = xcode
And paste the following code under `Enter some code :`
```
rxnList = {'PCHOLP_hs_f', 'PLA2_2_f', 'SMS_f','PCHOLP_hs_b', 'PLA2_2_b', 'SMS_b'};
c = [1, 1, 1, 1, 1, 1]; % This is a comment
d = 10;
ineqSense = 'G';
modelConstrained = constrainRxnListAboveBound(modelIrrev, rxnList, C, d, ineqSense);
```

Thanks in advance for your help and time ! :)
### List your environment
Jupyter Book : 0.15.1
External ToC : 0.3.1
MyST-Parser : 0.18.1
MyST-NB : 0.17.2
Sphinx Book Theme : 1.0.1
Jupyter-Cache : 0.6.1
NbClient : 0.7.4
Python 3.9
OS : Windows 11 | open | 2024-01-23T17:57:57Z | 2024-06-06T12:37:43Z | https://github.com/jupyter-book/jupyter-book/issues/2105 | [
"bug"
] | martinSDT | 3 |
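For reference, the Sphinx line in the traceback resolves the dotted name with `getattr(import_module(module), stylename)`, so a valid class is found before the build continues. A stdlib sketch of that resolution step (mirroring `sphinx.highlighting.get_style`) can confirm a style path is importable; since `pygments.styles.xcode.XcodeStyle` resolves in the reporter's environment, the colors are most likely being overridden later in the pipeline — plausibly by theme CSS, though that is an assumption:

```python
from importlib import import_module

def get_style(dotted: str):
    """Resolve 'package.module.ClassName' the way sphinx.highlighting.get_style does."""
    module, _, name = dotted.rpartition(".")
    return getattr(import_module(module), name)

# get_style("pygments.styles.xcode.XcodeStyle") succeeds in the reporter's env,
# while a wrong class name raises AttributeError, exactly as in the build error.
```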
apache/airflow | automation | 47,497 | AIP-38 | Connections Edit | ### Body
Alongside with #43703 a form needs to be available allowing to edit an existing connection.
This UI should build on top of #47496
### Committer
- [x] I acknowledge that I am a maintainer/committer of the Apache Airflow project. | closed | 2025-03-07T14:34:14Z | 2025-03-23T20:03:34Z | https://github.com/apache/airflow/issues/47497 | [
"kind:meta",
"area:UI"
] | jscheffl | 0 |
gradio-app/gradio | data-visualization | 10,746 | Cannot use drag and drop file upload if file suffix is written in CAPS | ### Describe the bug
We implemented a chat application with file upload. We added the file upload via:
```
SUPPORTED_FILES = [".txt"]
file_input = gr.File(
label="Upload files",
file_types=SUPPORTED_FILES,
file_count="multiple",
container=True,
height="20vh"
)
```
If we use the file upload dialog, we can upload `.txt` files as well as `.TXT` files. But if we use the drag and drop feature, we can only upload `.txt` files. If we try to drag and drop `.TXT` files we get the error message: `Invalid file type only .txt allowed`.
It is very weird that the upload dialog and the drag and drop feature behave differently.
### Have you searched existing issues? 🔎
- [x] I have searched and found no existing issues
### Reproduction
```python
import gradio as gr
with gr.Blocks() as demo:
SUPPORTED_FILES = [".txt"]
file_input = gr.File(
label="Upload files", file_types=SUPPORTED_FILES, file_count="multiple", container=True, height="20vh"
)
demo.launch()
```
Start the app and try uploading a file with `.TXT` suffix and the error will occur.
If you change `SUPPORTED_FILES = [".txt", ".TXT"]` drag and drop will work for `.TXT`
Nevertheless, IMO the different behavior between file upload dialog and drag and drop is very strange.
### Screenshot

### Logs
```shell
```
### System Info
```shell
Gradio Environment Information:
------------------------------
Operating System: Windows
gradio version: 5.20.0
gradio_client version: 1.7.2
------------------------------------------------
gradio dependencies in your environment:
aiofiles: 23.2.1
anyio: 4.8.0
audioop-lts is not installed.
fastapi: 0.115.11
ffmpy: 0.5.0
gradio-client==1.7.2 is not installed.
groovy: 0.1.2
httpx: 0.28.1
huggingface-hub: 0.29.2
jinja2: 3.1.5
markupsafe: 2.1.5
numpy: 2.2.3
orjson: 3.10.15
packaging: 24.2
pandas: 2.2.3
pillow: 11.1.0
pydantic: 2.10.6
pydub: 0.25.1
python-multipart: 0.0.20
pyyaml: 6.0.2
ruff: 0.9.9
safehttpx: 0.1.6
semantic-version: 2.10.0
starlette: 0.46.0
tomlkit: 0.13.2
typer: 0.15.2
typing-extensions: 4.12.2
urllib3: 2.3.0
uvicorn: 0.34.0
authlib; extra == 'oauth' is not installed.
itsdangerous; extra == 'oauth' is not installed.
gradio_client dependencies in your environment:
fsspec: 2025.2.0
httpx: 0.28.1
huggingface-hub: 0.29.2
packaging: 24.2
typing-extensions: 4.12.2
websockets: 15.0.1
```
### Severity
I can work around it | open | 2025-03-06T13:32:43Z | 2025-03-06T16:39:03Z | https://github.com/gradio-app/gradio/issues/10746 | [
"bug"
] | tosterloh | 1 |
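Until the two upload paths agree, an app-side check can be made case-insensitive by normalizing suffixes (a sketch of the comparison the upload dialog appears to perform; the helper name is ours):

```python
from pathlib import Path

def is_supported(filename: str, supported=(".txt",)) -> bool:
    """Case-insensitive file-suffix check."""
    return Path(filename).suffix.lower() in {s.lower() for s in supported}
```

As the reproduction notes, passing `file_types=[".txt", ".TXT"]` also works around the drag-and-drop path.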
tensorflow/tensor2tensor | machine-learning | 1,863 | MultiStep Optimizer | ### Description
Hi there,
I've been working on a project still in TF 1.4 (oh dear, I know! An attempt to turn [this](https://github.com/jchwenger/gpt-2/tree/titan) into working multi-gpu code, perhaps the last thing I'll do with this before moving to another framework --_--", (yes, Trax for instance :D)), and was glad to be able to import the AdaFactor optimizer from T2T. I'm now considering experimenting with the multistep Adam, and I noticed the two versions, `multistep_with_adamoptimizer.py` and `multistep_optimizer.py`, the one multistep-Adamizing the superclass `tf.train.Optimizer`, and the other 'only' multistepping the `tf.train.AdamOptimizer`. Is there any difference between these two, and which one would you recommend I use?
Thanks tons in advance!
Jeremie
...
### Environment information
TF 1.4, to work with V100s.
Ubuntu 18.04.
| closed | 2020-10-28T11:21:16Z | 2020-10-29T23:32:53Z | https://github.com/tensorflow/tensor2tensor/issues/1863 | [] | jchwenger | 1 |
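For context on what either file implements: a multistep optimizer accumulates gradients over n micro-batches and applies one averaged update, simulating an n-times-larger batch. A framework-free sketch of that accumulation (illustrative only — it reproduces neither T2T file):

```python
def multistep_updates(grads, n):
    """Average every n consecutive micro-batch gradients into one applied update."""
    updates, buffer = [], []
    for g in grads:
        buffer.append(g)
        if len(buffer) == n:
            updates.append(sum(buffer) / n)
            buffer = []
    return updates

# Four micro-batch gradients with n=2 become two applied updates.
```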
newpanjing/simpleui | django | 388 | Layer dialog button: with all rows selected, the action's queryset only contains the first page of data | **Layer dialog button: with all rows selected, the action's queryset only contains the first page of data**
When using a layer dialog button and selecting all rows, clicking the button yields a queryset that only contains the first page of data.
**Steps to reproduce**
1. Subclass `AjaxAdmin` (`from simpleui.admin import AjaxAdmin`).
2. In the table's admin, define a custom action `batch_update_factor`: `def batch_update_factor(self, request, queryset): pass`.
3. On the admin changelist, select all 229 rows in the table with the default pagination (100 rows per page). Inside `batch_update_factor`, the queryset only contains the 100 rows of the first page; the later pages are lost. With the page size changed to 10 rows, the queryset only contains the first page's 10 rows.
**Environment**
1.Operating System:
MacOS
2.Python Version:
3.9.0
3.Django Version:
3.1.12
4.SimpleUI Version:
2021.6.2
**Description**
| closed | 2021-06-22T07:36:01Z | 2021-07-21T03:26:51Z | https://github.com/newpanjing/simpleui/issues/388 | [
"bug"
] | Ronghefeng | 0 |
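For comparison, in stock Django admin the "Select all N results" link posts `select_across=1` and the action then receives the full queryset; if simpleui's layer button only posts the checkboxes visible on the current page, the action can rebuild the selection itself. A sketch of that decision (whether the layer form actually posts `select_across` is an assumption to verify):

```python
def effective_ids(posted_ids, select_across, all_ids):
    """Ids an admin action should act on: everything when select-across is set,
    otherwise only the rows posted from the current page."""
    return list(all_ids) if select_across else list(posted_ids)

# Inside the action one might guard (hypothetical):
#     if request.POST.get("select_across") == "1":
#         queryset = self.model._default_manager.all()
```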
vitalik/django-ninja | django | 560 | OpenAPI Docs does not display example values of item when using pagination | When using pagination in view function, generated OpenAPI docs do not show example values of the `items` Schemas.
### Example API codes
```python
from ninja.pagination import paginate
@router.get("/test/list/", response=List[AssetHistorySchema])
def retrieve_list(request):
...
return schema_list
@router.get("/test/paged_list/", response=List[AssetHistorySchema])
@paginate()
def retrieve_paged_list(request):
...
return schema_list
```
### Comparing OpenAPI example values (w/, w/o pagination)
<table border="0"> <tr> <td><b style="font-size:30px">Without pagination</b></td> <td><b style="font-size:30px">With pagination</b></td> </tr> <tr> <td><img width="602" alt="image" src="https://user-images.githubusercontent.com/37990664/189020685-52e08547-e86b-4286-92b1-8df864702fce.png"></td> <td><img width="597" alt="image" src="https://user-images.githubusercontent.com/37990664/189021468-c61b34d4-0278-43c3-9d8b-16948a9605cd.png"></td> </tr> </table>
* Compared to when pagination is not used, adding pagination to the API makes the example values invisible.
###
https://github.com/vitalik/django-ninja/blob/9b27910f689417602e9061eb85aa62fee9df0c09/ninja/pagination.py#L212
It looks like the default value (`[]`) of the output schema hides the item schemas' example values.
I would prefer to see the inner example values (as when pagination isn't used). Is there any reason for this behavior?
I've made a PR resolving this issue: https://github.com/vitalik/django-ninja/pull/559. Thanks. | open | 2022-09-08T03:19:11Z | 2022-09-08T03:21:31Z | https://github.com/vitalik/django-ninja/issues/560 | [] | esc5221 | 0 |
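The reported effect can be illustrated outside django-ninja: when the paginated output field carries a default (`[]`), a documentation renderer that prefers the field's own default never descends into the item schema to build an example. A toy sketch (hand-built schema dicts, not actual django-ninja/OpenAPI output):

```python
def example_for(field_schema, item_example):
    """Mimic a doc renderer: prefer the field's own default as the example,
    otherwise build one from the item schema's example."""
    if "default" in field_schema:
        return field_schema["default"]
    return [item_example]

with_default = {"type": "array", "default": []}    # paginated output today
without_default = {"type": "array"}                # plain List[Schema] response
```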
s3rius/FastAPI-template | asyncio | 73 | Failed tests on initialization and unreachable with Docker-Compose | ```
python3 -m fastapi_template ✔ ccdemo 3.9.10
Project name: fatemplate
Project description: fatemplate
Removing resources for disabled feature Gitlab CI...
Removing resources for disabled feature Tortoise ORM...
Removing resources for disabled feature Ormar ORM...
Removing resources for disabled feature PsycoPG...
Removing resources for disabled feature MySQL DB...
Removing resources for disabled feature SQLite DB...
cleanup complete!
⭐ Placing resources nicely in your new project ⭐
Resources are happy to be where they are needed the most.
hint: Using 'master' as the name for the initial branch. This default branch name
hint: is subject to change. To configure the initial branch name to use in all
hint: of your new repositories, which will suppress this warning, call:
hint:
hint: git config --global init.defaultBranch <name>
hint:
hint: Names commonly chosen instead of 'master' are 'main', 'trunk' and
hint: 'development'. The just-created branch can be renamed via this command:
hint:
hint: git branch -m <name>
Git repository initialized.
Added files to index.
Updating dependencies
Resolving dependencies... (14.9s)
Writing lock file
No dependencies to install or update
Installing the current project: fatemplate (0.1.0)
pre-commit installed at .git/hooks/pre-commit
pre-commit installed.
Check python ast.........................................................Passed
Trim Trailing Whitespace.................................................Failed
- hook id: trailing-whitespace
- exit code: 1
- files were modified by this hook
Fixing fatemplate/web/application.py
Fixing deploy/docker-compose.yml
Fixing fatemplate/settings.py
Fixing fatemplate/web/lifetime.py
Fixing deploy/kube/db.yml
Check Toml...............................................................Passed
Fix End of Files.........................................................Failed
- hook id: end-of-file-fixer
- exit code: 1
- files were modified by this hook
Fixing deploy/docker-compose.yml
Fixing fatemplate/tests/test_dummy.py
Fixing fatemplate/static/docs/swagger-ui-bundle.js
Fixing fatemplate/static/docs/redoc.standalone.js
Fixing fatemplate/tests/test_echo.py
Fixing fatemplate/static/docs/swagger-ui.css
Add trailing commas......................................................Failed
- hook id: add-trailing-comma
- exit code: 1
- files were modified by this hook
Rewriting fatemplate/tests/test_redis.py
Rewriting fatemplate/web/application.py
Rewriting fatemplate/conftest.py
Rewriting fatemplate/db/dao/dummy_dao.py
Rewriting fatemplate/db/utils.py
Rewriting fatemplate/tests/test_echo.py
Rewriting fatemplate/tests/test_dummy.py
Pretty format YAML.......................................................Failed
- hook id: pretty-format-yaml
- exit code: 1
- files were modified by this hook
File deploy/docker-compose.yml is not pretty-formatted
Fixing file deploy/docker-compose.yml
File .github/workflows/tests.yml is not pretty-formatted
Fixing file .github/workflows/tests.yml
File .pre-commit-config.yaml is not pretty-formatted
Fixing file .pre-commit-config.yaml
File deploy/kube/app.yml is not pretty-formatted
Fixing file deploy/kube/app.yml
File deploy/kube/db.yml is not pretty-formatted
Fixing file deploy/kube/db.yml
File deploy/kube/redis.yml is not pretty-formatted
Fixing file deploy/kube/redis.yml
Format with Black........................................................Failed
- hook id: black
- files were modified by this hook
reformatted fatemplate/web/api/dummy/__init__.py
reformatted fatemplate/web/lifetime.py
All done! ✨ 🍰 ✨
2 files reformatted, 2 files left unchanged.
reformatted fatemplate/settings.py
All done! ✨ 🍰 ✨
1 file reformatted, 3 files left unchanged.
reformatted fatemplate/db/migrations/env.py
All done! ✨ 🍰 ✨
1 file reformatted, 3 files left unchanged.
reformatted fatemplate/web/api/monitoring/views.py
All done! ✨ 🍰 ✨
1 file reformatted, 3 files left unchanged.
reformatted fatemplate/web/api/monitoring/__init__.py
reformatted fatemplate/tests/test_fatemplate.py
All done! ✨ 🍰 ✨
2 files reformatted, 2 files left unchanged.
reformatted fatemplate/web/api/docs/views.py
reformatted fatemplate/tests/test_redis.py
All done! ✨ 🍰 ✨
2 files reformatted, 2 files left unchanged.
reformatted fatemplate/web/application.py
All done! ✨ 🍰 ✨
1 file reformatted, 3 files left unchanged.
reformatted fatemplate/conftest.py
All done! ✨ 🍰 ✨
1 file reformatted, 3 files left unchanged.
reformatted fatemplate/web/api/docs/__init__.py
reformatted fatemplate/web/api/echo/__init__.py
reformatted fatemplate/db/utils.py
All done! ✨ 🍰 ✨
3 files reformatted, 1 file left unchanged.
reformatted fatemplate/tests/test_echo.py
reformatted fatemplate/tests/test_dummy.py
All done! ✨ 🍰 ✨
2 files reformatted, 2 files left unchanged.
reformatted fatemplate/web/api/redis/__init__.py
All done! ✨ 🍰 ✨
1 file reformatted, 1 file left unchanged.
autoflake................................................................Failed
- hook id: autoflake
- files were modified by this hook
isort....................................................................Failed
- hook id: isort
- files were modified by this hook
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/web/lifetime.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/tests/test_fatemplate.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/tests/test_redis.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/web/api/docs/views.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/web/application.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/conftest.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/db/dao/dummy_dao.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/web/api/docs/__init__.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/db/utils.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/tests/test_echo.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/tests/test_dummy.py
Fixing /home/mano/Desktop/test/fatemplate/fatemplate/web/api/router.py
Check with Flake8........................................................Passed
Validate types with MyPy.................................................Passed
Remove usless noqa.......................................................Failed
- hook id: yesqa
- exit code: 1
- files were modified by this hook
Rewriting fatemplate/conftest.py
Check python ast.........................................................Passed
Trim Trailing Whitespace.................................................Passed
Check Toml...............................................................Passed
Fix End of Files.........................................................Passed
Add trailing commas......................................................Passed
Pretty format YAML.......................................................Passed
Format with Black........................................................Passed
autoflake................................................................Passed
isort....................................................................Passed
Check with Flake8........................................................Passed
Validate types with MyPy.................................................Passed
Remove usless noqa.......................................................Passed
hint: The '.git/hooks/commit-msg' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
Project successfully generated. You can read information about usage in README.md
```
Building and then running with docker-compose, the server is unreachable on localhost:8000 or 0.0.0.0:8000:
```
sudo docker-compose -f deploy/docker-compose.yml --project-directory . up --build
Sending build context to Docker daemon 777.3kB
Step 1/9 : FROM python:3.9.6-slim-buster
---> e18d3088c48c
Step 2/9 : RUN pip install poetry==1.1.8
---> Using cache
---> 1c68c5835316
Step 3/9 : RUN poetry config virtualenvs.create false
---> Using cache
---> 738969c2c64f
Step 4/9 : COPY pyproject.toml poetry.lock /app/src/
---> Using cache
---> b6024b866e35
Step 5/9 : WORKDIR /app/src
---> Using cache
---> fb117ba5c837
Step 6/9 : RUN poetry install
---> Using cache
---> efe0dcf90249
Step 7/9 : COPY . /app/src/
---> Using cache
---> 298c2bf3d8d7
Step 8/9 : RUN poetry install
---> Using cache
---> 39f4a9b1f1ad
Step 9/9 : CMD ["/usr/local/bin/python", "-m", "fatemplate"]
---> Using cache
---> e0af20beef2a
Successfully built e0af20beef2a
Successfully tagged fatemplate:latest
[+] Running 6/6
⠿ Network fatemplate_default Created 0.0s
⠿ Volume "fatemplate-db-data" Created 0.0s
⠿ Container fatemplate-redis-1 Created 0.1s
⠿ Container fatemplate-db-1 Created 0.1s
⠿ Container fatemplate-migrator-1 Created 0.2s
⠿ Container fatemplate-api-1 Created 0.1s
Attaching to fatemplate-api-1, fatemplate-db-1, fatemplate-migrator-1, fatemplate-redis-1
fatemplate-db-1 | The files belonging to this database system will be owned by user "postgres".
fatemplate-db-1 | This user must also own the server process.
fatemplate-db-1 |
fatemplate-db-1 | The database cluster will be initialized with locale "en_US.utf8".
fatemplate-db-1 | The default database encoding has accordingly been set to "UTF8".
fatemplate-db-1 | The default text search configuration will be set to "english".
fatemplate-db-1 |
fatemplate-db-1 | Data page checksums are disabled.
fatemplate-db-1 |
fatemplate-db-1 | fixing permissions on existing directory /var/lib/postgresql/data ... ok
fatemplate-db-1 | creating subdirectories ... ok
fatemplate-db-1 | selecting dynamic shared memory implementation ... posix
fatemplate-db-1 | selecting default max_connections ... 100
fatemplate-db-1 | selecting default shared_buffers ... 128MB
fatemplate-db-1 | selecting default time zone ... Etc/UTC
fatemplate-db-1 | creating configuration files ... ok
fatemplate-redis-1 | redis 16:46:16.54
fatemplate-redis-1 | redis 16:46:16.54 Welcome to the Bitnami redis container
fatemplate-redis-1 | redis 16:46:16.54 Subscribe to project updates by watching https://github.com/bitnami/bitnami-docker-redis
fatemplate-redis-1 | redis 16:46:16.54 Submit issues and feature requests at https://github.com/bitnami/bitnami-docker-redis/issues
fatemplate-redis-1 | redis 16:46:16.54
fatemplate-redis-1 | redis 16:46:16.54 INFO ==> ** Starting Redis setup **
fatemplate-redis-1 | redis 16:46:16.55 WARN ==> You set the environment variable ALLOW_EMPTY_PASSWORD=yes. For safety reasons, do not use this flag in a production environment.
fatemplate-redis-1 | redis 16:46:16.55 INFO ==> Initializing Redis
fatemplate-redis-1 | redis 16:46:16.55 INFO ==> Setting Redis config file
fatemplate-redis-1 | redis 16:46:16.56 INFO ==> ** Redis setup finished! **
fatemplate-redis-1 |
fatemplate-redis-1 | redis 16:46:16.57 INFO ==> ** Starting Redis **
fatemplate-redis-1 | 1:C 16 Apr 2022 16:46:16.579 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
fatemplate-redis-1 | 1:C 16 Apr 2022 16:46:16.579 # Redis version=6.2.5, bits=64, commit=00000000, modified=0, pid=1, just started
fatemplate-redis-1 | 1:C 16 Apr 2022 16:46:16.579 # Configuration loaded
fatemplate-redis-1 | 1:M 16 Apr 2022 16:46:16.579 * monotonic clock: POSIX clock_gettime
fatemplate-redis-1 | 1:M 16 Apr 2022 16:46:16.580 * Running mode=standalone, port=6379.
fatemplate-redis-1 | 1:M 16 Apr 2022 16:46:16.580 # Server initialized
fatemplate-redis-1 | 1:M 16 Apr 2022 16:46:16.580 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
fatemplate-redis-1 | 1:M 16 Apr 2022 16:46:16.580 * Ready to accept connections
fatemplate-db-1 | running bootstrap script ... ok
fatemplate-db-1 | performing post-bootstrap initialization ... ok
fatemplate-db-1 | syncing data to disk ... ok
fatemplate-db-1 |
fatemplate-db-1 |
fatemplate-db-1 | Success. You can now start the database server using:
fatemplate-db-1 |
fatemplate-db-1 | pg_ctl -D /var/lib/postgresql/data -l logfile start
fatemplate-db-1 |
fatemplate-db-1 | initdb: warning: enabling "trust" authentication for local connections
fatemplate-db-1 | You can change this by editing pg_hba.conf or using the option -A, or
fatemplate-db-1 | --auth-local and --auth-host, the next time you run initdb.
fatemplate-db-1 | waiting for server to start....2022-04-16 16:46:17.100 UTC [47] LOG: starting PostgreSQL 13.4 (Debian 13.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
fatemplate-db-1 | 2022-04-16 16:46:17.102 UTC [47] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
fatemplate-db-1 | 2022-04-16 16:46:17.112 UTC [48] LOG: database system was shut down at 2022-04-16 16:46:16 UTC
fatemplate-db-1 | 2022-04-16 16:46:17.118 UTC [47] LOG: database system is ready to accept connections
fatemplate-db-1 | done
fatemplate-db-1 | server started
fatemplate-db-1 | CREATE DATABASE
fatemplate-db-1 |
fatemplate-db-1 |
fatemplate-db-1 | /usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
fatemplate-db-1 |
fatemplate-db-1 | 2022-04-16 16:46:17.322 UTC [47] LOG: received fast shutdown request
fatemplate-db-1 | waiting for server to shut down....2022-04-16 16:46:17.324 UTC [47] LOG: aborting any active transactions
fatemplate-db-1 | 2022-04-16 16:46:17.325 UTC [47] LOG: background worker "logical replication launcher" (PID 54) exited with exit code 1
fatemplate-db-1 | 2022-04-16 16:46:17.325 UTC [49] LOG: shutting down
fatemplate-db-1 | 2022-04-16 16:46:17.339 UTC [47] LOG: database system is shut down
fatemplate-db-1 | done
fatemplate-db-1 | server stopped
fatemplate-db-1 |
fatemplate-db-1 | PostgreSQL init process complete; ready for start up.
fatemplate-db-1 |
fatemplate-db-1 | 2022-04-16 16:46:17.439 UTC [1] LOG: starting PostgreSQL 13.4 (Debian 13.4-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
fatemplate-db-1 | 2022-04-16 16:46:17.439 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
fatemplate-db-1 | 2022-04-16 16:46:17.439 UTC [1] LOG: listening on IPv6 address "::", port 5432
fatemplate-db-1 | 2022-04-16 16:46:17.442 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
fatemplate-db-1 | 2022-04-16 16:46:17.449 UTC [75] LOG: database system was shut down at 2022-04-16 16:46:17 UTC
fatemplate-db-1 | 2022-04-16 16:46:17.454 UTC [1] LOG: database system is ready to accept connections
fatemplate-api-1 | INFO: Will watch for changes in these directories: ['/app/src']
fatemplate-api-1 | INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
fatemplate-api-1 | INFO: Started reloader process [1] using statreload
fatemplate-migrator-1 | INFO [alembic.runtime.migration] Context impl PostgresqlImpl.
fatemplate-migrator-1 | INFO [alembic.runtime.migration] Will assume transactional DDL.
fatemplate-migrator-1 | INFO [alembic.runtime.migration] Running upgrade -> 819cbf6e030b, Initial migration.
fatemplate-migrator-1 | INFO [alembic.runtime.migration] Running upgrade 819cbf6e030b -> 2b7380507a71, Created Dummy Model.
fatemplate-api-1 | INFO: Started server process [8]
fatemplate-api-1 | INFO: Waiting for application startup.
fatemplate-migrator-1 exited with code 0
fatemplate-api-1 | INFO: Application startup complete.
```
What default configuration are you testing with, @s3rius, that I could try? Or, more likely, am I making an error at some point? | closed | 2022-04-16T16:51:03Z | 2022-04-17T10:58:18Z | https://github.com/s3rius/FastAPI-template/issues/73 | [] | WP-LKL | 5 |
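When `docker-compose up` shows uvicorn listening but localhost:8000 is unreachable, the first thing worth checking is whether the port is published to the host at all. The probe below is a generic diagnostic, not part of the template; the host and port are just the values from the report above:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to (host, port) succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # connection refused, timed out, host unreachable, ...
        return False

# can_connect("localhost", 8000) returning False while the container logs
# "Uvicorn running on http://0.0.0.0:8000" usually means the port is only
# open inside the container network, not published to the host.
```

If the probe fails with an immediate refusal, the likely cause is that the `api` service in the compose configuration being used has no `ports:` mapping (e.g. `8000:8000`); the uvicorn log line only proves the server is listening inside the container.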
fbdesignpro/sweetviz | pandas | 167 | The data cannot be output in the original order | The data cannot be output in the original order, and it is forced to be sorted according to the amount of data from large to small
for example:

I want the data to be sorted in label order (00 01 02 03). | open | 2024-01-18T04:51:45Z | 2024-02-17T03:10:56Z | https://github.com/fbdesignpro/sweetviz/issues/167 | [
"feature request"
] | Tangdanxu | 1 |
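sweetviz does not currently expose this ordering (that is the request above); as a sketch of the desired presentation only, reordering a frequency mapping by label instead of by count is just a key sort:

```python
# Counts as displayed today: largest first.
counts = {"01": 9, "03": 7, "00": 5, "02": 2}

# Desired: rows ordered by the label itself (00, 01, 02, 03).
in_label_order = dict(sorted(counts.items()))
# -> {'00': 5, '01': 9, '02': 2, '03': 7}
```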
keras-team/keras | data-science | 20,109 | Value Error | ```python
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
Cell In[244], line 1
----> 1 training_history = Plant_Detector.fit(x= training_set, validation_data = validation_set, epochs = 10)
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:1193, in Model.fit(self, x, y, batch_size, epochs, verbose, callbacks, validation_split, validation_data, shuffle, class_weight, sample_weight, initial_epoch, steps_per_epoch, validation_steps, validation_batch_size, validation_freq, max_queue_size, workers, use_multiprocessing)
1186 with trace.Trace(
1187 'train',
1188 epoch_num=epoch,
1189 step_num=step,
1190 batch_size=batch_size,
1191 _r=1):
1192 callbacks.on_train_batch_begin(step)
-> 1193 tmp_logs = self.train_function(iterator)
1194 if data_handler.should_sync:
1195 context.async_wait()
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:885, in Function.__call__(self, *args, **kwds)
882 compiler = "xla" if self._jit_compile else "nonXla"
884 with OptionalXlaContext(self._jit_compile):
--> 885 result = self._call(*args, **kwds)
887 new_tracing_count = self.experimental_get_tracing_count()
888 without_tracing = (tracing_count == new_tracing_count)
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:933, in Function._call(self, *args, **kwds)
930 try:
931 # This is the first call of __call__, so we have to initialize.
932 initializers = []
--> 933 self._initialize(args, kwds, add_initializers_to=initializers)
934 finally:
935 # At this point we know that the initialization is complete (or less
936 # interestingly an exception was raised) so we no longer need a lock.
937 self._lock.release()
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:759, in Function._initialize(self, args, kwds, add_initializers_to)
756 self._lifted_initializer_graph = lifted_initializer_graph
757 self._graph_deleter = FunctionDeleter(self._lifted_initializer_graph)
758 self._concrete_stateful_fn = (
--> 759 self._stateful_fn._get_concrete_function_internal_garbage_collected( # pylint: disable=protected-access
760 *args, **kwds))
762 def invalid_creator_scope(*unused_args, **unused_kwds):
763 """Disables variable creation."""
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\function.py:3066, in Function._get_concrete_function_internal_garbage_collected(self, *args, **kwargs)
3064 args, kwargs = None, None
3065 with self._lock:
-> 3066 graph_function, _ = self._maybe_define_function(args, kwargs)
3067 return graph_function
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\function.py:3463, in Function._maybe_define_function(self, args, kwargs)
3459 return self._define_function_with_shape_relaxation(
3460 args, kwargs, flat_args, filtered_flat_args, cache_key_context)
3462 self._function_cache.missed.add(call_context_key)
-> 3463 graph_function = self._create_graph_function(args, kwargs)
3464 self._function_cache.primary[cache_key] = graph_function
3466 return graph_function, filtered_flat_args
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\function.py:3298, in Function._create_graph_function(self, args, kwargs, override_flat_arg_shapes)
3293 missing_arg_names = [
3294 "%s_%d" % (arg, i) for i, arg in enumerate(missing_arg_names)
3295 ]
3296 arg_names = base_arg_names + missing_arg_names
3297 graph_function = ConcreteFunction(
-> 3298 func_graph_module.func_graph_from_py_func(
3299 self._name,
3300 self._python_function,
3301 args,
3302 kwargs,
3303 self.input_signature,
3304 autograph=self._autograph,
3305 autograph_options=self._autograph_options,
3306 arg_names=arg_names,
3307 override_flat_arg_shapes=override_flat_arg_shapes,
3308 capture_by_value=self._capture_by_value),
3309 self._function_attributes,
3310 function_spec=self.function_spec,
3311 # Tell the ConcreteFunction to clean up its graph once it goes out of
3312 # scope. This is not the default behavior since it gets used in some
3313 # places (like Keras) where the FuncGraph lives longer than the
3314 # ConcreteFunction.
3315 shared_func_graph=False)
3316 return graph_function
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\framework\func_graph.py:1007, in func_graph_from_py_func(name, python_func, args, kwargs, signature, func_graph, autograph, autograph_options, add_control_dependencies, arg_names, op_return_value, collections, capture_by_value, override_flat_arg_shapes, acd_record_initial_resource_uses)
1004 else:
1005 _, original_func = tf_decorator.unwrap(python_func)
-> 1007 func_outputs = python_func(*func_args, **func_kwargs)
1009 # invariant: `func_outputs` contains only Tensors, CompositeTensors,
1010 # TensorArrays and `None`s.
1011 func_outputs = nest.map_structure(convert, func_outputs,
1012 expand_composites=True)
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\eager\def_function.py:668, in Function._defun_with_scope.<locals>.wrapped_fn(*args, **kwds)
664 with default_graph._variable_creator_scope(scope, priority=50): # pylint: disable=protected-access
665 # __wrapped__ allows AutoGraph to swap in a converted function. We give
666 # the function a weak reference to itself to avoid a reference cycle.
667 with OptionalXlaContext(compile_with_xla):
--> 668 out = weak_wrapped_fn().__wrapped__(*args, **kwds)
669 return out
File ~\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\framework\func_graph.py:994, in func_graph_from_py_func.<locals>.wrapper(*args, **kwargs)
992 except Exception as e: # pylint:disable=broad-except
993 if hasattr(e, "ag_error_metadata"):
--> 994 raise e.ag_error_metadata.to_exception(e)
995 else:
996 raise
ValueError: in user code:
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:862 train_function *
return step_function(self, iterator)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:852 step_function **
outputs = model.distribute_strategy.run(run_step, args=(data,))
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:1286 run
return self._extended.call_for_each_replica(fn, args=args, kwargs=kwargs)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:2849 call_for_each_replica
return self._call_for_each_replica(fn, args, kwargs)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\distribute\distribute_lib.py:3632 _call_for_each_replica
return fn(*args, **kwargs)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:845 run_step **
outputs = model.train_step(data)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\training.py:803 train_step
loss = self.compiled_loss(
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\engine\compile_utils.py:204 __call__
loss_value = loss_obj(y_t, y_p, sample_weight=sw)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\losses.py:155 __call__
losses = call_fn(y_true, y_pred)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\losses.py:259 call **
return ag_fn(y_true, y_pred, **self._fn_kwargs)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\util\dispatch.py:206 wrapper
return target(*args, **kwargs)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\losses.py:1679 categorical_crossentropy
return backend.categorical_crossentropy(
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\util\dispatch.py:206 wrapper
return target(*args, **kwargs)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\keras\backend.py:4875 categorical_crossentropy
target.shape.assert_is_compatible_with(output.shape)
C:\Users\Luvolwethu Tokwe\anaconda3\envs\TFNew\lib\site-packages\tensorflow\python\framework\tensor_shape.py:1161 assert_is_compatible_with
raise ValueError("Shapes %s and %s are incompatible" % (self, other))
ValueError: Shapes (None, 9) and (None, 1024) are incompatible
``` | closed | 2024-08-10T16:10:25Z | 2024-09-12T01:58:57Z | https://github.com/keras-team/keras/issues/20109 | [
"type:support",
"stat:awaiting response from contributor",
"stale"
] | LuvolwethuTokwe | 4 |
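The last line of the traceback is the whole story: the labels arriving at `categorical_crossentropy` have 9 classes per sample while the model's final layer emits 1024 values. TensorFlow's shape check treats `None` as a wildcard and otherwise requires dimensions to match exactly; a minimal stand-in for that rule (not TensorFlow's actual code) is:

```python
def shapes_compatible(a, b):
    """Mimic assert_is_compatible_with: same rank, and each dimension pair
    equal unless one side is None (unknown)."""
    if len(a) != len(b):
        return False
    return all(x is None or y is None or x == y for x, y in zip(a, b))

# The failing pair from the traceback:
# shapes_compatible((None, 9), (None, 1024)) -> False
```

The usual fix is to make the classifier head match the label encoding, e.g. ending the model with a `Dense(9, activation="softmax")` layer when the targets are one-hot vectors over 9 classes (the 9 here comes from the traceback; the model definition is not shown).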
wkentaro/labelme | computer-vision | 918 | [QUESTION] Is there any way to customize imagePath? | Hi, I'm trying to use labelme for image segmentation.
In the original code, 'imagePath' in the JSON file is just the name of the image file,
but I want it to be the relative path of the image file, like 'home/blahblah/blah/blah/image.png'.
I tried to change imagePath in label_file.py, but it didn't work.
Is there any way to change it? | closed | 2021-09-23T07:58:40Z | 2021-10-20T09:30:51Z | https://github.com/wkentaro/labelme/issues/918 | [] | naeunhub | 3 |
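The issue above describes wanting a relative path in the saved annotation; one workaround is to post-process the saved file. The helper below is a sketch using only the standard library, not a labelme option; the file names in the comment are hypothetical:

```python
import os

def make_image_path_relative(annotation, image_abspath, root):
    """Return a copy of a labelme annotation dict whose 'imagePath' is
    image_abspath rewritten relative to root."""
    fixed = dict(annotation)            # leave the caller's dict untouched
    fixed["imagePath"] = os.path.relpath(image_abspath, start=root)
    return fixed

# Hypothetical usage: load ann.json with json.load, call this with the
# image's absolute path and the directory paths should be relative to,
# then write the result back out with json.dump.
```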
kennethreitz/records | sqlalchemy | 80 | Not clear how to install drivers | This is probably Python/SQL 101 for many people, but when trying Records out I received a message saying it didn't have a protocol handler for mysqldb. It's been a real struggle to understand that I need to install a driver, to find a suitable version and install it on a MacBook (I gave up after several hours), and eventually to install the driver from the MySQL site. I still don't know whether this will work with Records or whether I'll have to use the driver in its raw form.
| open | 2016-09-09T08:42:39Z | 2018-04-28T22:10:47Z | https://github.com/kennethreitz/records/issues/80 | [
"enhancement",
"help wanted"
] | rogthedodge | 3 |
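Records hands the actual connection off to SQLAlchemy, so "installing a driver" comes down to pip-installing the driver package and naming it in the database URL; `pymysql` is a pure-Python MySQL driver that avoids the compile step that tends to fail on a MacBook. A sketch (the credentials are placeholders, and this reflects standard SQLAlchemy behavior rather than anything specific to this issue's resolution):

```python
# pip install pymysql
# import records
# db = records.Database("mysql+pymysql://user:password@localhost/mydb")

def split_url_scheme(url):
    """Return (dialect, driver) from a SQLAlchemy-style URL; driver is ''
    when the URL relies on the dialect's default driver."""
    scheme = url.split("://", 1)[0]
    dialect, _, driver = scheme.partition("+")
    return dialect, driver
```

So the "no protocol handler for mysqldb" message suggests the default MySQL driver package wasn't importable; any driver named after the `+` works once its package is installed.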
kizniche/Mycodo | automation | 515 | Dashbord does not show | ## Mycodo Issue Report:
- Specific Mycodo Version: 6.2.0
#### Problem Description
Please list: I was selecting the Dashboard menu item; the Live item works correctly. This was all working under 6.1.4.
### Errors
Something bad happened but it's probably not your fault. Letting the developers know about these issues is crucial to supporting Mycodo. Please submit a new issue on GitHub with the following error traceback (copy the entire traceback):
Error (Full Traceback):
Traceback (most recent call last):
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 2292, in wsgi_app
response = self.full_dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1815, in full_dispatch_request
rv = self.handle_user_exception(e)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1718, in handle_user_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/_compat.py", line 35, in reraise
raise value
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1813, in full_dispatch_request
rv = self.dispatch_request()
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/app.py", line 1799, in dispatch_request
return self.view_functions[rule.endpoint](**req.view_args)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask_login/utils.py", line 261, in decorated_view
return func(*args, **kwargs)
File "/home/pi/Mycodo/mycodo/mycodo_flask/routes_page.py", line 489, in page_dashboard
y_axes=y_axes)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/templating.py", line 135, in render_template
context, ctx.app)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/flask/templating.py", line 117, in _render
rv = template.render(context)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 1008, in render
return self.environment.handle_exception(exc_info, True)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 780, in handle_exception
reraise(exc_type, exc_value, tb)
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/_compat.py", line 37, in reraise
raise value.with_traceback(tb)
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/dashboard.html", line 3, in top-level template code
{% set help_page = ["dashboard", _('Dashboard')] %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/layout.html", line 255, in top-level template code
{%- block body %}{% endblock -%}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/dashboard.html", line 788, in block "body"
{% include 'pages/dashboard_options/display_graph.html' %}
File "/home/pi/Mycodo/mycodo/mycodo_flask/templates/pages/dashboard_options/display_graph.html", line 246, in top-level template code
name: '{{each_math.name}} ({{measurement_units[each_measurement]['name']}}
File "/home/pi/Mycodo/env/lib/python3.5/site-packages/jinja2/environment.py", line 411, in getitem
return obj[argument]
jinja2.exceptions.UndefinedError: 'dict object' has no attribute 'CO2'
- List any errors you encountered.
- Copy and pasting crash logs, or link to any specific
code lines you've isolated (please use GitHub permalinks for this)
### Steps to Reproduce the issue:
How can this issue be reproduced?
1. Select Dashboard; the web page reports a timeout error
2. step 2...
3. etc
### Additional Notes
Is there anything that should be added to make it easier
to address this issue?
This has only happened since the upgrade; I have restarted the system | closed | 2018-08-20T02:53:47Z | 2018-08-21T00:31:20Z | https://github.com/kizniche/Mycodo/issues/515 | [] | wflost | 3 |
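The final traceback line is Jinja2 reporting a missing dictionary key: after the upgrade, `measurement_units` no longer has a `'CO2'` entry, so `measurement_units[each_measurement]['name']` fails while rendering the graph template. As an illustration of the failure mode only (not Mycodo's actual fix), a lookup that tolerates removed keys looks like:

```python
measurement_units = {"temperature": {"name": "Temperature"}}  # 'CO2' missing

def unit_name(units, key, default="?"):
    """Look up units[key]['name'] without raising when an upgrade has
    removed (or renamed) the key."""
    return units.get(key, {}).get("name", default)

# unit_name(measurement_units, "CO2") -> "?"   (instead of the template error)
# unit_name(measurement_units, "temperature") -> "Temperature"
```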
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,649 | [Bug]: SDXL inpainting results in 'NansException' occurred with 1st settings. Error when VAE is present on MacOS | ### Checklist
- [X] The issue exists after disabling all extensions
- [X] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [X] The issue exists in the current version of the webui
- [X] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
On my MacBook Pro, img2img with SDXL checkpoints does not work when a VAE is baked into the checkpoint or a VAE is selected in the webui. If one is selected and I try to generate an image, a
`NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.` error occurs.
Using `--no-half` does indeed fix the problem, but then generation takes ages, and I would not call it a fix: img2img works with SD1.5 checkpoints with a VAE active without a problem, and even SDXL img2img works as long as the VAE is disabled (which would be fine for me if checkpoint creators would stop baking in the VAE all the time ._.). Plain txt2img with SDXL checkpoints and a selected VAE also works without a problem.
### Steps to reproduce the problem
1. Select an SDXL model
2. Set the VAE to a usable SDXL VAE (or choose a model with a baked-in VAE)
3. Go to img2img
4. Write a prompt
5. Upload an image
6. Generate
### What should have happened?
A new image should have been generated instead of a NansException being raised.
### What browsers do you use to access the UI ?
_No response_
### Sysinfo
[sysinfo-2024-04-28-08-27.json](https://github.com/AUTOMATIC1111/stable-diffusion-webui/files/15141550/sysinfo-2024-04-28-08-27.json)
### Console logs
```Shell
################################################################
Launching launch.py...
################################################################
Python 3.10.13 (main, Mar 17 2024, 20:31:43) [Clang 15.0.0 (clang-1500.3.9.4)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --skip-torch-cuda-test --opt-sub-quad-attention --upcast-sampling --no-half-vae --use-cpu interrogate
no module 'xformers'. Processing without...
no module 'xformers'. Processing without...
No module 'xformers'. Proceeding without it.
Warning: caught exception 'Torch not compiled with CUDA enabled', memory monitor disabled
Loading weights [15dc93da84] from /Users/c/stable-diffusion-webui/models/Stable-diffusion/pony/matrixPony_v4.safetensors
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 4.3s (import torch: 2.2s, import gradio: 0.5s, setup paths: 0.6s, initialize shared: 0.1s, other imports: 0.4s, create ui: 0.2s, gradio launch: 0.1s).
Creating model from config: /Users/c/stable-diffusion-webui/repositories/generative-models/configs/inference/sd_xl_base.yaml
Applying attention optimization: sdp... done.
Model loaded in 54.4s (load weights from disk: 0.5s, create model: 0.6s, apply weights to model: 52.3s, apply dtype to VAE: 0.1s, move model to device: 0.2s, calculate empty prompt: 0.5s).
0%| | 0/16 [00:01<?, ?it/s]
*** Error completing request
*** Arguments: ('task(npe6u0644pbgbjj)', <gradio.routes.Request object at 0x2dd4975e0>, 0, 'eating cookies', '', [], <PIL.Image.Image image mode=RGBA size=512x768 at 0x2DD4673D0>, None, None, None, None, None, None, 4, 0, 1, 1, 1, 7, 1.5, 0.75, 0.0, 512, 512, 1, 0, 0, 32, 0, '', '', '', [], False, [], '', 0, 20, 'DPM++ 2M', 'Automatic', False, '', 0.8, -1, False, -1, 0, 0, 0, '* `CFG Scale` should be 2 or lower.', True, True, '', '', True, 50, True, 1, 0, False, 4, 0.5, 'Linear', 'None', '<p style="margin-bottom:0.75em">Recommended settings: Sampling Steps: 80-100, Sampler: Euler a, Denoising strength: 0.8</p>', 128, 8, ['left', 'right', 'up', 'down'], 1, 0.05, 128, 4, 0, ['left', 'right', 'up', 'down'], False, False, 'positive', 'comma', 0, False, False, 'start', '', '<p style="margin-bottom:0.75em">Will upscale the image by the selected scale factor; use width and height sliders to set tile size</p>', 64, 0, 2, 1, '', [], 0, '', [], 0, '', [], True, False, False, False, False, False, False, 0, False) {}
Traceback (most recent call last):
File "/Users/c/stable-diffusion-webui/modules/call_queue.py", line 57, in f
res = list(func(*args, **kwargs))
File "/Users/c/stable-diffusion-webui/modules/call_queue.py", line 36, in f
res = func(*args, **kwargs)
File "/Users/c/stable-diffusion-webui/modules/img2img.py", line 232, in img2img
processed = process_images(p)
File "/Users/c/stable-diffusion-webui/modules/processing.py", line 845, in process_images
res = process_images_inner(p)
File "/Users/c/stable-diffusion-webui/modules/processing.py", line 981, in process_images_inner
samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
File "/Users/c/stable-diffusion-webui/modules/processing.py", line 1741, in sample
samples = self.sampler.sample_img2img(self, self.init_latent, x, conditioning, unconditional_conditioning, image_conditioning=self.image_conditioning)
File "/Users/c/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 172, in sample_img2img
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/Users/c/stable-diffusion-webui/modules/sd_samplers_common.py", line 272, in launch_sampling
return func()
File "/Users/c/stable-diffusion-webui/modules/sd_samplers_kdiffusion.py", line 172, in <lambda>
samples = self.launch_sampling(t_enc + 1, lambda: self.func(self.model_wrap_cfg, xi, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
File "/Users/c/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
File "/Users/c/stable-diffusion-webui/repositories/k-diffusion/k_diffusion/sampling.py", line 594, in sample_dpmpp_2m
denoised = model(x, sigmas[i] * s_in, **extra_args)
File "/Users/c/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1511, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
File "/Users/c/stable-diffusion-webui/venv/lib/python3.10/site-packages/torch/nn/modules/module.py", line 1520, in _call_impl
return forward_call(*args, **kwargs)
File "/Users/c/stable-diffusion-webui/modules/sd_samplers_cfg_denoiser.py", line 269, in forward
devices.test_for_nans(x_out, "unet")
File "/Users/c/stable-diffusion-webui/modules/devices.py", line 255, in test_for_nans
raise NansException(message)
modules.devices.NansException: A tensor with all NaNs was produced in Unet. This could be either because there's not enough precision to represent the picture, or because your video card does not support half type. Try setting the "Upcast cross attention layer to float32" option in Settings > Stable Diffusion or using the --no-half commandline argument to fix this. Use --disable-nan-check commandline argument to disable this check.
---
```
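The workarounds named in the error message can be applied as launch flags. A sketch of what that might look like in `webui-user.sh` (flag names are taken from the error text itself; whether they actually resolve the NaNs depends on the GPU):

```bash
# webui-user.sh -- run in full precision so the Unet doesn't produce NaNs
# on GPUs without reliable half-precision support (uses more VRAM).
export COMMANDLINE_ARGS="--no-half"
# As a last resort the check itself can be disabled:
# export COMMANDLINE_ARGS="--no-half --disable-nan-check"
```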
### Additional information
I tried updating/downgrading torch but that did not make any difference... | closed | 2024-04-28T09:27:16Z | 2024-07-16T14:29:11Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15649 | [
"bug-report"
] | chr1st0ph3rGG | 2 |
zappa/Zappa | flask | 613 | [Migrated] Bad link in README in Basic Usage > Initial Deployments | Originally from: https://github.com/Miserlou/Zappa/issues/1572 by [kjschiroo](https://github.com/kjschiroo)
<!--- Provide a general summary of the issue in the Title above -->
## Context
In [this section of the README](https://github.com/Miserlou/Zappa#initial-deployments) the link for "Using Custom AWS IAM Roles and Policies for Execution" doesn't go anywhere when clicked. This is occuring on
## Expected Behavior
Clicking the link should take the user to [Using Custom AWS IAM Roles and Policies for Execution](https://github.com/Miserlou/Zappa#using-custom-aws-iam-roles-and-policies-for-execution)
## Actual Behavior
Clicking the link takes the user nowhere.
## Possible Fix
Change `#using-custom-aws-iam-roles-and-policies-for-Execution` to `#using-custom-aws-iam-roles-and-policies-for-execution`
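The fix works because GitHub builds heading anchors by lowercasing the heading text, so URL fragments are effectively case-sensitive. A rough sketch of the slug rule (an approximation, not GitHub's exact implementation):

```python
import re

def github_slug(heading: str) -> str:
    """Rough approximation of GitHub's heading-anchor rules:
    lowercase, drop punctuation, turn spaces into hyphens."""
    slug = re.sub(r"[^\w\- ]", "", heading.strip().lower())
    return slug.replace(" ", "-")

heading = "Using Custom AWS IAM Roles and Policies for Execution"
print(github_slug(heading))
# -> using-custom-aws-iam-roles-and-policies-for-execution
```

Since the capital "E" survives in the README's `#...-for-Execution` fragment but never appears in the generated anchor, the link can't match.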
## Steps to Reproduce
1. Go to [README > Basic Usage > Initial Deployments](https://github.com/Miserlou/Zappa#initial-deployments)
2. Click on "Using Custom AWS IAM Roles and Policies for Execution"
## Your Environment
* Browser: Chrome (67.0.3396.62). | closed | 2021-02-20T12:26:39Z | 2022-07-16T06:53:16Z | https://github.com/zappa/Zappa/issues/613 | [] | jneves | 1 |
ymcui/Chinese-LLaMA-Alpaca | nlp | 283 | Ignoring llama-cpp-python: markers 'platform_system == "Windows"' don't match your environment | 感谢您使用Issue提问模板,请按照以下步骤提供相关信息。我们将优先处理信息相对完整的Issue,感谢您的配合。
*提示:将[ ]中填入x,表示打对钩。提问时删除上面这两行。请只保留符合的选项,删掉其他。*
### 详细描述问题
*请尽量具体地描述您遇到的问题。这将有助于我们更快速地定位问题所在。*
### 运行截图或log
*(如有必要)请提供文本log或者运行截图,以便我们更好地了解问题详情。*
### 必查项目
- [x] 问题类型:**(只保留你要问的)**
- 模型量化和部署问题(llama.cpp、text-generation-webui、LlamaChat)
使用 https://www.autodl.com/ 的容器 会报这个错误
Ignoring llama-cpp-python: markers 'platform_system == "Windows"' don't match your environment
无论是clone代码 安装运行
还是下载他们的 工具包 都会 | closed | 2023-05-09T16:40:43Z | 2023-05-10T07:48:27Z | https://github.com/ymcui/Chinese-LLaMA-Alpaca/issues/283 | [] | koaqiu | 2 |
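For context on that pip message in the issue above: it comes from an environment marker (PEP 508) in the project's requirements, along the lines of the hypothetical excerpt below. On a Linux container such as AutoDL's, the marker evaluates to false, so pip skips the package and prints the "Ignoring" notice; that line by itself is informational, not the installation failure.

```text
# requirements.txt (illustrative excerpt, not the project's actual file)
# The marker restricts this dependency to Windows; on Linux pip prints
#   Ignoring llama-cpp-python: markers 'platform_system == "Windows"'
#   don't match your environment
# and moves on to the next requirement.
llama-cpp-python; platform_system == "Windows"
```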
ultralytics/yolov5 | machine-learning | 12,769 | Merge Yolov5 with LSTM for Human Activity Recognition task | ### Search before asking
- [X] I have searched the YOLOv5 [issues](https://github.com/ultralytics/yolov5/issues) and [discussions](https://github.com/ultralytics/yolov5/discussions) and found no similar questions.
### Question
Hello! I want to know whether it is possible to merge YOLOv5 with an LSTM for a Human Activity Recognition task. YOLOv5 would be trained to detect certain objects in the video, and the LSTM would recognize the action being performed. I already have a trained LSTM model, but I wish to increase its accuracy by introducing the presence of certain objects that are typical of certain kinds of actions.
Can anyone help me with that? I am new to this and not sure how, or whether, this can be implemented at all.
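One common way to combine the two, sketched below in plain Python for the shape bookkeeping only (the class list and feature size are made up, and a real pipeline would use PyTorch tensors): run YOLOv5 per frame, turn its detections into a multi-hot object-presence vector, concatenate that onto the per-frame feature vector, and feed the fused sequence to the LSTM.

```python
# Illustrative fusion of YOLOv5 detections with per-frame features ahead
# of an LSTM. Pure-Python shape bookkeeping only; OBJECT_CLASSES and
# POSE_FEATURES are hypothetical values, not from any real model.

OBJECT_CLASSES = ["ball", "racket", "phone"]  # hypothetical detector classes
POSE_FEATURES = 34                            # e.g. 17 keypoints x (x, y)

def fuse_frame(pose_vec, detected):
    """Append a multi-hot object-presence vector to the frame's features."""
    presence = [1.0 if c in detected else 0.0 for c in OBJECT_CLASSES]
    return pose_vec + presence

frame = fuse_frame([0.0] * POSE_FEATURES, {"phone"})
print(len(frame), frame[-3:])  # -> 37 [0.0, 0.0, 1.0]
```

The LSTM's input size then grows by the number of object classes, so an already-trained LSTM would need at least its input layer retrained on the fused vectors.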
### Additional
_No response_ | closed | 2024-02-27T13:14:54Z | 2024-10-20T19:40:28Z | https://github.com/ultralytics/yolov5/issues/12769 | [
"question",
"Stale"
] | alina15andreeva | 3 |
huggingface/datasets | tensorflow | 7,223 | Fallback to arrow defaults when loading dataset with custom features that aren't registered locally | ### Describe the bug
Datasets allows users to create and register custom features.
However, if such a dataset is then pushed to the Hub, anyone calling `load_dataset` without registering the custom features in the same way as the dataset creator will get an error message.
It would be nice to offer a fallback in this case.
### Steps to reproduce the bug
```python
load_dataset("alex-hh/custom-features-example")
```
(Dataset creation process; this must be run in a separate session so that NewFeature isn't registered in the session in which the download is attempted:)
```python
from dataclasses import dataclass, field

import pyarrow as pa

from datasets import Dataset, Feature, Features
from datasets.features.features import register_feature


@dataclass
class NewFeature(Feature):
    _type: str = field(default="NewFeature", init=False, repr=False)

    def __call__(self):
        return pa.int32()  # Arrow storage type for this feature


def examples_generator():
    for i in range(5):
        yield {"feature": i}


ds = Dataset.from_generator(examples_generator, features=Features(feature=NewFeature()))
ds.push_to_hub("alex-hh/custom-features-example")

# Registration happens after the push and only in this session, so a
# fresh session (or anyone else) loads the dataset without NewFeature.
register_feature(NewFeature, "NewFeature")
```
### Expected behavior
It would be nice, and would offer greater extensibility, if there were some kind of graceful fallback mechanism in place for cases where user-defined features are stored in a dataset but not available locally.
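One shape such a fallback could take, sketched here in plain Python (the registry and class names are illustrative and do not reflect `datasets` internals): unknown `_type` names resolve to a stand-in feature that passes the raw Arrow-level values through instead of raising.

```python
# Hypothetical sketch of a feature registry with a graceful fallback;
# FEATURE_REGISTRY, FallbackFeature and resolve_feature are illustrative
# names, not part of the real `datasets` API.

class Int32Feature:
    """A feature type that is registered locally."""
    def decode(self, raw):
        return int(raw)

FEATURE_REGISTRY = {"Value": Int32Feature}

class FallbackFeature:
    """Stands in for a feature whose type isn't registered locally,
    leaving the underlying values untouched."""
    def __init__(self, type_name):
        self.type_name = type_name
    def decode(self, raw):
        return raw

def resolve_feature(type_name):
    cls = FEATURE_REGISTRY.get(type_name)
    return cls() if cls is not None else FallbackFeature(type_name)

feat = resolve_feature("NewFeature")  # defined by the dataset creator only
print(type(feat).__name__, feat.decode(5))  # -> FallbackFeature 5
```

A warning on first use of the fallback would let users know decoding is degraded rather than silently different.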
### Environment info
3.0.2 | open | 2024-10-12T16:08:20Z | 2024-10-12T16:08:20Z | https://github.com/huggingface/datasets/issues/7223 | [] | alex-hh | 0 |
deepset-ai/haystack | pytorch | 8,067 | docs: clean up docstrings of InMemoryEmbeddingRetriever | closed | 2024-07-24T11:01:57Z | 2024-07-25T11:24:01Z | https://github.com/deepset-ai/haystack/issues/8067 | [] | agnieszka-m | 0 |