| repo_name (string) | topic (string) | issue_number (int64) | title (string) | body (string) | state (string) | created_at (string) | updated_at (string) | url (string) | labels (list) | user_login (string) | comments_count (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|
huggingface/datasets | deep-learning | 6,489 | load_dataset imagefolder for AWS S3 path | ### Feature request
I would like to load a dataset from S3 using the imagefolder option, something like:
`dataset = datasets.load_dataset('imagefolder', data_dir='s3://.../lsun/train/bedroom', fs=S3FileSystem(), streaming=True)`
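Until an `fs=`-style argument exists, a possible workaround (an assumption, not a confirmed recipe: it presumes `s3fs` is installed, the bucket is readable, and that enumerating keys up front is acceptable) is to glob the files yourself and pass them as `data_files`. The bucket and key names below are hypothetical:

```python
# Hypothetical keys, as returned by S3FileSystem().glob(...) -- s3fs returns
# keys without the "s3://" scheme, so it has to be added back.
keys = [
    "my-bucket/lsun/train/bedroom/0001.jpg",
    "my-bucket/lsun/train/bedroom/0002.jpg",
]  # in practice: keys = S3FileSystem().glob("my-bucket/lsun/train/bedroom/**/*.jpg")

data_files = [f"s3://{k}" for k in keys]
print(data_files[0])  # s3://my-bucket/lsun/train/bedroom/0001.jpg

# dataset = datasets.load_dataset("imagefolder", data_files=data_files, streaming=True)
```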
### Motivation
No need for data_files.
### Your contribution
no experience with this | open | 2023-12-12T00:08:43Z | 2023-12-12T00:09:27Z | https://github.com/huggingface/datasets/issues/6489 | [
"enhancement"
] | segalinc | 0 |
serengil/deepface | machine-learning | 820 | found a bug | https://github.com/serengil/deepface/blob/master/deepface/DeepFace.py#L683
```
if len(img.shape) == 3:
    img = cv2.resize(img, target_size)
    img = np.expand_dims(img, axis=0)
    # --------------------------------
    img_region = [0, 0, img.shape[1], img.shape[0]]
    img_objs = [(img, img_region, 0)]
```
Should the `np.expand_dims` line be removed (or moved below)? Otherwise `img_region` is computed from the batched array and will be wrong!
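A minimal NumPy sketch (the 100×200 crop size is hypothetical) of why the region goes wrong once the batch axis is added:

```python
import numpy as np

img = np.zeros((100, 200, 3), dtype="float32")  # hypothetical face crop: H=100, W=200

# Region computed BEFORE expand_dims -- [x, y, w, h] is correct:
region_before = [0, 0, img.shape[1], img.shape[0]]   # [0, 0, 200, 100]

img = np.expand_dims(img, axis=0)                    # shape is now (1, 100, 200, 3)

# Region computed AFTER expand_dims, as in DeepFace.py -- wrong:
region_after = [0, 0, img.shape[1], img.shape[0]]    # [0, 0, 100, 1]

print(region_before, region_after)
```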
| closed | 2023-08-09T13:19:39Z | 2023-08-09T13:32:04Z | https://github.com/serengil/deepface/issues/820 | [] | ghost | 2 |
vitalik/django-ninja | pydantic | 1,292 | [BUG] computed_field is not present in openapi spec | **Describe the bug**
`computed_field` is missing in openapi spec.
I have defined a schema to be used for responses like this:
```
class MySchema(Schema):
id: int
slug: str
@computed_field
def name(self) -> str:
return f"{self.id}-{self.slug}"
def model_json_schema(self, *args, **kwargs):
# This is a workaround for the issue with the generated schema to include computed_field
return super().model_json_schema(*args, mode="serialization", **kwargs)
```
The overridden `model_json_schema` is an attempt to fix the issue, following the Pydantic discussion [here](https://github.com/pydantic/pydantic/discussions/6298).
Anyway, the fix doesn't work; it looks like `NinjaGenerateJsonSchema` doesn't handle `mode="serialization"`.
I decided not to investigate it further and opened this issue.
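For reference, plain Pydantic v2 (outside django-ninja, using `BaseModel` rather than ninja's `Schema`) shows the same split: computed fields only appear in the serialization-mode schema. A minimal sketch:

```python
from pydantic import BaseModel, computed_field

class MySchema(BaseModel):
    id: int
    slug: str

    @computed_field
    @property
    def name(self) -> str:
        return f"{self.id}-{self.slug}"

validation = MySchema.model_json_schema(mode="validation")
serialization = MySchema.model_json_schema(mode="serialization")
print(sorted(validation["properties"]))      # ['id', 'slug']
print(sorted(serialization["properties"]))   # ['id', 'name', 'slug']
```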
**Versions (please complete the following information):**
- Python version: Python 3.10.3
- Django version: 3.2.13
- Django-Ninja version: 1.1.0
- Pydantic version: 2.7.4
| open | 2024-09-05T21:16:26Z | 2024-10-02T10:00:13Z | https://github.com/vitalik/django-ninja/issues/1292 | [] | POD666 | 1 |
tflearn/tflearn | tensorflow | 884 | How to use an image as label? | I built an end-to-end network based on FCN.
The network output is `[?, 224, 224, 3]`, and I want to use a series of images as labels.
I got an "index out of range" error when calling `fit(X, Y)`.
In tflearn's `utils.py`, this loop raised the error:
```
for i, y in enumerate(Y):
    feed_dict[net_targets[i]] = y
```
Here is my code to generate labels from images:
```
import numpy as np
from PIL import Image

def read_images(srcList):
    numberOfLines = len(srcList)
    dataMat = np.zeros((numberOfLines, 224, 224, 3), dtype='float32')
    index = 0
    for v in srcList:
        img = Image.open(v)
        img = img.resize((224, 224))
        dataMat[index, :] = img
        index += 1
    return dataMat

label = read_images(label_list)
```
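As a sanity check, the label array produced this way must match the network output shape `[?, 224, 224, 3]`. A small NumPy sketch with dummy stand-ins for the resized label images:

```python
import numpy as np

# Hypothetical stand-ins for four resized label images
images = [np.zeros((224, 224, 3), dtype="float32") for _ in range(4)]

labels = np.stack(images)   # equivalent to the preallocate-and-fill loop above
print(labels.shape)         # (4, 224, 224, 3) -- matches the [?, 224, 224, 3] output
```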
Does anyone know how to solve this problem?
Thanks! | open | 2017-08-23T07:58:39Z | 2017-08-30T02:02:06Z | https://github.com/tflearn/tflearn/issues/884 | [] | polar99 | 6 |
plotly/dash-table | dash | 207 | Breaking Changes Log | 👋 Dash Community 👋
We'll keep this issue updated when we make breaking changes to the `DataTable`. Subscribe to this issue to stay aware of these changes.
Note that we'll indicate breaking changes through our versioning system ([semver](https://semver.org/)). If we bump the major version number (`major.minor.micro`), then it means that we've made a breaking change to the API.
Not all breaking changes will be large, so don't worry. We'll always include steps for upgrading between major versions in our CHANGELOG.md.
Thank you for your diligence and support ❤️ | open | 2018-11-02T02:57:15Z | 2019-06-27T00:26:21Z | https://github.com/plotly/dash-table/issues/207 | [
"dash-type-epic"
] | chriddyp | 3 |
twopirllc/pandas-ta | pandas | 140 | example.ipynb does not work | ```
Chart(df,
# style: which mplfinance chart style to use. Added "random" as an option.
# rpad: how many bars to leave empty on the right of the chart
style="yahoo", title=ticker, last=recent_bars(df), rpad=10,
# Overlap Indicators
linreg=True, midpoint=False, ohlc4=False, archermas=True,
# Example Indicators with default parameters
volume=True, rsi=True, clr=True, macd=True, zscore=False, squeeze=False, lazybear=False,
# Archer OBV and OBV MAs (https://www.tradingview.com/script/Co1ksara-Trade-Archer-On-balance-Volume-Moving-Averages-v1/)
archerobv=False,
# Create trends and see their returns
trendreturn=False,
# Example Trends or create your own. Trend must yield Booleans
long_trend=ta.sma(closedf,10) > ta.sma(closedf,20), # trend: sma(close,10) > sma(close,20) [Default Example]
# long_trend=closedf > ta.ema(closedf,5), # trend: close > ema(close,5)
# long_trend=ta.sma(closedf,10) > ta.ema(closedf,50), # trend: sma(close,10) > ema(close,50)
# long_trend=macdh > 0, # trend: macd hist > 0
# long_trend=ta.increasing(ta.sma(ta.rsi(closedf), 10), 5, asint=False), # trend: rising sma(rsi, 10) for the previous 5 periods
show_nontrading=False, # Intraday use if needed
verbose=True, # More detail
)
```
output:
```
[i] Loaded SPY(5262, 62)
[+] Strategy: Common Price and Volume SMAs
[i] Indicator arguments: {'append': True}
[i] Multiprocessing: 16 of 16 cores.
[i] Total indicators: 5
[i] Columns added: 0
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\cbook\__init__.py:1377: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
x[:, None]
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axes\_base.py:239: FutureWarning: Support for multi-dimensional indexing (e.g. `obj[:, None]`) is deprecated and will be removed in a future version. Convert to a numpy array before indexing instead.
y = y[:, np.newaxis]
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-44-f7c14830084c> in <module>
26 # long_trend=ta.increasing(ta.sma(ta.rsi(closedf), 10), 5, asint=False), # trend: rising sma(rsi, 10) for the previous 5 periods
27 show_nontrading=False, # Intraday use if needed
---> 28 verbose=True, # More detail
29 )
<ipython-input-35-df099406cc6a> in __init__(self, df, strategy, *args, **kwargs)
20 # Build TA and Plot
21 self.df.ta.strategy(self.strategy, verbose=self.verbose)
---> 22 self._plot(**kwargs)
23
24 def _validate_ta_strategy(self, strategy):
<ipython-input-35-df099406cc6a> in _plot(self, **kwargs)
340 show_nontrading=self.mpfchart["non_trading"],
341 vlines=vlines_,
--> 342 addplot=taplots
343 )
344
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\mplfinance\plotting.py in plot(data, **kwargs)
610
611 if config['volume']:
--> 612 volumeAxes.figure.canvas.draw() # This is needed to calculate offset
613 offset = volumeAxes.yaxis.get_major_formatter().get_offset()
614 volumeAxes.yaxis.offsetText.set_visible(False)
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
400 toolbar = self.toolbar
401 try:
--> 402 self.figure.draw(self.renderer)
403 # A GUI class may be need to update a window using this draw, so
404 # don't forget to call the superclass.
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer, *args, **kwargs)
51 finally:
52 if artist.get_agg_filter() is not None:
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
1647
1648 mimage._draw_list_compositing_images(
-> 1649 renderer, self, artists, self.suppressComposite)
1650
1651 renderer.close_group('figure')
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
136 if not_composite or not has_images:
137 for a in artists:
--> 138 a.draw(renderer)
139 else:
140 # Composite any adjacent images together
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer, *args, **kwargs)
51 finally:
52 if artist.get_agg_filter() is not None:
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axes\_base.py in draw(self, renderer, inframe)
2626 renderer.stop_rasterizing()
2627
-> 2628 mimage._draw_list_compositing_images(renderer, self, artists)
2629
2630 renderer.close_group('axes')
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
136 if not_composite or not has_images:
137 for a in artists:
--> 138 a.draw(renderer)
139 else:
140 # Composite any adjacent images together
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer, *args, **kwargs)
51 finally:
52 if artist.get_agg_filter() is not None:
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in draw(self, renderer, *args, **kwargs)
1183 renderer.open_group(__name__)
1184
-> 1185 ticks_to_draw = self._update_ticks(renderer)
1186 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
1187 renderer)
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in _update_ticks(self, renderer)
1021
1022 interval = self.get_view_interval()
-> 1023 tick_tups = list(self.iter_ticks()) # iter_ticks calls the locator
1024 if self._smart_bounds and tick_tups:
1025 # handle inverted limits
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in iter_ticks(self)
969 self.major.formatter.set_locs(majorLocs)
970 majorLabels = [self.major.formatter(val, i)
--> 971 for i, val in enumerate(majorLocs)]
972
973 minorLocs = self.minor.locator()
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in <listcomp>(.0)
969 self.major.formatter.set_locs(majorLocs)
970 majorLabels = [self.major.formatter(val, i)
--> 971 for i, val in enumerate(majorLocs)]
972
973 minorLocs = self.minor.locator()
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\mplfinance\_utils.py in __call__(self, x, pos)
1135 else:
1136 date = self.dates[ix]
-> 1137 dateformat = mdates.num2date(date).strftime(self.fmt)
1138 #print('x=',x,'pos=',pos,'dates[',ix,']=',date,'dateformat=',dateformat)
1139 return dateformat
ValueError: Invalid format string
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\IPython\core\formatters.py in __call__(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\IPython\core\pylabtools.py in <lambda>(fig)
246
247 if 'png' in formats:
--> 248 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
249 if 'retina' in formats or 'png2x' in formats:
250 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\IPython\core\pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
130 FigureCanvasBase(fig)
131
--> 132 fig.canvas.print_figure(bytes_io, **kw)
133 data = bytes_io.getvalue()
134 if fmt == 'svg':
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, **kwargs)
2047 orientation=orientation,
2048 dryrun=True,
-> 2049 **kwargs)
2050 renderer = self.figure._cachedRenderer
2051 bbox_artists = kwargs.pop("bbox_extra_artists", None)
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\backends\backend_agg.py in print_png(self, filename_or_obj, *args, **kwargs)
508
509 """
--> 510 FigureCanvasAgg.draw(self)
511 renderer = self.get_renderer()
512
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\backends\backend_agg.py in draw(self)
400 toolbar = self.toolbar
401 try:
--> 402 self.figure.draw(self.renderer)
403 # A GUI class may be need to update a window using this draw, so
404 # don't forget to call the superclass.
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer, *args, **kwargs)
51 finally:
52 if artist.get_agg_filter() is not None:
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\figure.py in draw(self, renderer)
1647
1648 mimage._draw_list_compositing_images(
-> 1649 renderer, self, artists, self.suppressComposite)
1650
1651 renderer.close_group('figure')
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
136 if not_composite or not has_images:
137 for a in artists:
--> 138 a.draw(renderer)
139 else:
140 # Composite any adjacent images together
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer, *args, **kwargs)
51 finally:
52 if artist.get_agg_filter() is not None:
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axes\_base.py in draw(self, renderer, inframe)
2626 renderer.stop_rasterizing()
2627
-> 2628 mimage._draw_list_compositing_images(renderer, self, artists)
2629
2630 renderer.close_group('axes')
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
136 if not_composite or not has_images:
137 for a in artists:
--> 138 a.draw(renderer)
139 else:
140 # Composite any adjacent images together
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
48 renderer.start_filter()
49
---> 50 return draw(artist, renderer, *args, **kwargs)
51 finally:
52 if artist.get_agg_filter() is not None:
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in draw(self, renderer, *args, **kwargs)
1183 renderer.open_group(__name__)
1184
-> 1185 ticks_to_draw = self._update_ticks(renderer)
1186 ticklabelBoxes, ticklabelBoxes2 = self._get_tick_bboxes(ticks_to_draw,
1187 renderer)
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in _update_ticks(self, renderer)
1021
1022 interval = self.get_view_interval()
-> 1023 tick_tups = list(self.iter_ticks()) # iter_ticks calls the locator
1024 if self._smart_bounds and tick_tups:
1025 # handle inverted limits
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in iter_ticks(self)
969 self.major.formatter.set_locs(majorLocs)
970 majorLabels = [self.major.formatter(val, i)
--> 971 for i, val in enumerate(majorLocs)]
972
973 minorLocs = self.minor.locator()
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\matplotlib\axis.py in <listcomp>(.0)
969 self.major.formatter.set_locs(majorLocs)
970 majorLabels = [self.major.formatter(val, i)
--> 971 for i, val in enumerate(majorLocs)]
972
973 minorLocs = self.minor.locator()
D:\ProgramData\Anaconda3\envs\vnpy\lib\site-packages\mplfinance\_utils.py in __call__(self, x, pos)
1135 else:
1136 date = self.dates[ix]
-> 1137 dateformat = mdates.num2date(date).strftime(self.fmt)
1138 #print('x=',x,'pos=',pos,'dates[',ix,']=',date,'dateformat=',dateformat)
1139 return dateformat
ValueError: Invalid format string
<Figure size 1200x1000 with 10 Axes>
``` | closed | 2020-09-30T08:58:35Z | 2020-10-05T18:42:39Z | https://github.com/twopirllc/pandas-ta/issues/140 | [
"question",
"info"
] | yhmickey | 13 |
python-visualization/folium | data-visualization | 1,484 | Is there a feature to make multiple layer controls in folium python with different data? | I am trying to plot some data using folium maps.
Below is my Python code and a screenshot.

```
india_coord = [lat,long]
my_map = folium.Map(location = india_coord,zoom_start=5,tiles=None,overlay=False)
feature_group = folium.FeatureGroup(name='Central Region',overlay=True)
feature_group2 = folium.FeatureGroup(name='South Region',overlay=True)
feature_group3 = folium.FeatureGroup(name='East Region',overlay=True)
feature_group4 = folium.FeatureGroup(name='West Region',overlay=True)
feature_group5 = folium.FeatureGroup(name='North Region',overlay=True)
feature_group.add_to(my_map)
feature_group2.add_to(my_map)
feature_group3.add_to(my_map)
feature_group4.add_to(my_map)
feature_group5.add_to(my_map)
c1 = folium.Choropleth(
geo_data = india,
name = 'India',
legend_name = 'India',
fill_color = 'orange',
fill_opacity = 0.3,
highlight = True).add_to(my_map)
folium.TileLayer('openstreetmap',overlay=False,name = f'SUMMARY {summ_date}').add_to(my_map)
folium.LayerControl().add_to(my_map)
folium.Marker(location,tooltip = data1,popup = popup, icon=folium.Icon(color='cadetblue',icon = 'fa-industry', prefix='fa')).add_to(feature_group)
...
...
...
folium.Marker(location,tooltip = data1,popup = popup, icon=folium.Icon(color='cadetblue',icon = 'fa-industry', prefix='fa')).add_to(feature_group5)
my_map.add_child(feature_group)
my_map.add_child(feature_group2)
my_map.add_child(feature_group3)
my_map.add_child(feature_group4)
my_map.add_child(feature_group5)
```
With this code I get the map above, but I need to make another layer control for a second summary.
For example, if I add another
```
folium.LayerControl().add_to(my_map)
```
to my code I get the following:

But this layer control has the same options as the one above.
Is there a way in folium to create different layer controls with different data on the same map, so that if I select the first summary in the first layer control I get the data as in image 1, but if I choose the lower layer control the data is displayed according to that particular control?
How do I link different data to different layer controls and view them on the same map? Is there a feature in folium like this?
Thank you | closed | 2021-07-06T06:09:07Z | 2022-11-18T11:51:19Z | https://github.com/python-visualization/folium/issues/1484 | [] | gmehta1996 | 2 |
gee-community/geemap | jupyter | 613 | Authentication Error | ### Environment Information
- geemap version: 0.8.18
- Python version: 3.9.0
- Operating System: MacOS Big Sur 11.5
### Description
Attempting to install/start geemap; authentication fails in both the notebook and the CLI.
### What I Did
In Notebook -
```
Enter verification code: ENTERED VERIFICATION CODE
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/data.py in get_persistent_credentials()
218 try:
--> 219 tokens = json.load(open(oauth.get_credentials_path()))
220 refresh_token = tokens['refresh_token']
FileNotFoundError: [Errno 2] No such file or directory: '/Users/gavinpirrie/.config/earthengine/credentials'
During handling of the above exception, another exception occurred:
EEException Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/geemap/common.py in ee_initialize(token_name)
81
---> 82 ee.Initialize()
83 except Exception:
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/__init__.py in Initialize(credentials, opt_url, cloud_api_key, http_transport, project)
114 if credentials == 'persistent':
--> 115 credentials = data.get_persistent_credentials()
116 data.initialize(
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/data.py in get_persistent_credentials()
228 except IOError:
--> 229 raise ee_exception.EEException(
230 'Please authorize access to your Earth Engine account by '
EEException: Please authorize access to your Earth Engine account by running
earthengine authenticate
in your command line, and then retry.
During handling of the above exception, another exception occurred:
SSLCertVerificationError Traceback (most recent call last)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1341 try:
-> 1342 h.request(req.get_method(), req.selector, req.data, headers,
1343 encode_chunked=req.has_header('Transfer-encoding'))
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py in request(self, method, url, body, headers, encode_chunked)
1254 """Send a complete request to the server."""
-> 1255 self._send_request(method, url, body, headers, encode_chunked)
1256
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py in _send_request(self, method, url, body, headers, encode_chunked)
1300 body = _encode(body, 'body')
-> 1301 self.endheaders(body, encode_chunked=encode_chunked)
1302
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py in endheaders(self, message_body, encode_chunked)
1249 raise CannotSendHeader()
-> 1250 self._send_output(message_body, encode_chunked=encode_chunked)
1251
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py in _send_output(self, message_body, encode_chunked)
1009 del self._buffer[:]
-> 1010 self.send(msg)
1011
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py in send(self, data)
949 if self.auto_open:
--> 950 self.connect()
951 else:
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py in connect(self)
1423
-> 1424 self.sock = self._context.wrap_socket(self.sock,
1425 server_hostname=server_hostname)
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py in wrap_socket(self, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, session)
499 # ctx._wrap_socket()
--> 500 return self.sslsocket_class._create(
501 sock=sock,
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py in _create(cls, sock, server_side, do_handshake_on_connect, suppress_ragged_eofs, server_hostname, context, session)
1039 raise ValueError("do_handshake_on_connect should not be specified for non-blocking sockets")
-> 1040 self.do_handshake()
1041 except (OSError, ValueError):
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py in do_handshake(self, block)
1308 self.settimeout(None)
-> 1309 self._sslobj.do_handshake()
1310 finally:
SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)
During handling of the above exception, another exception occurred:
URLError Traceback (most recent call last)
<ipython-input-6-1d393d38f16f> in <module>
2 import geemap
3
----> 4 Map = geemap.Map()
5
6 # To view exactly where
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/geemap/geemap.py in __init__(self, **kwargs)
35
36 if kwargs["ee_initialize"]:
---> 37 ee_initialize()
38
39 # Default map center location (lat, lon) and zoom level
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/geemap/common.py in ee_initialize(token_name)
82 ee.Initialize()
83 except Exception:
---> 84 ee.Authenticate()
85 ee.Initialize()
86
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/__init__.py in Authenticate(authorization_code, quiet, code_verifier)
87 code_verifier: PKCE verifier to prevent auth code stealing.
88 """
---> 89 oauth.authenticate(authorization_code, quiet, code_verifier)
90
91
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/oauth.py in authenticate(cli_authorization_code, quiet, cli_code_verifier)
233 webbrowser.open_new(auth_url)
234
--> 235 _obtain_and_write_token(None, code_verifier) # Will prompt for auth_code.
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/oauth.py in _obtain_and_write_token(auth_code, code_verifier)
139 auth_code = input('Enter verification code: ')
140 assert isinstance(auth_code, six.string_types)
--> 141 token = request_token(auth_code.strip(), code_verifier)
142 write_token(token)
143 print('\nSuccessfully saved authorization token.')
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/oauth.py in request_token(auth_code, code_verifier)
82
83 try:
---> 84 response = request.urlopen(
85 TOKEN_URI,
86 parse.urlencode(request_args).encode()).read().decode()
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in urlopen(url, data, timeout, cafile, capath, cadefault, context)
212 else:
213 opener = _opener
--> 214 return opener.open(url, data, timeout)
215
216 def install_opener(opener):
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in open(self, fullurl, data, timeout)
515
516 sys.audit('urllib.Request', req.full_url, req.data, req.headers, req.get_method())
--> 517 response = self._open(req, data)
518
519 # post-process response
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in _open(self, req, data)
532
533 protocol = req.type
--> 534 result = self._call_chain(self.handle_open, protocol, protocol +
535 '_open', req)
536 if result:
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in _call_chain(self, chain, kind, meth_name, *args)
492 for handler in handlers:
493 func = getattr(handler, meth_name)
--> 494 result = func(*args)
495 if result is not None:
496 return result
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in https_open(self, req)
1383
1384 def https_open(self, req):
-> 1385 return self.do_open(http.client.HTTPSConnection, req,
1386 context=self._context, check_hostname=self._check_hostname)
1387
/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py in do_open(self, http_class, req, **http_conn_args)
1343 encode_chunked=req.has_header('Transfer-encoding'))
1344 except OSError as err: # timeout error
-> 1345 raise URLError(err)
1346 r = h.getresponse()
1347 except:
URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)>
```
Tried to authenticate in CLI -
```
gavinpirrie in ~
$ earthengine authenticate
To authorize access needed by Earth Engine, open the following URL in a web browser and follow the instructions. If the web browser does not start automatically, please manually browse the URL below.
https://accounts.google.com/o/oauth2/auth?client_id=51.....
The authorization workflow will generate a code, which you should paste in the box below.
Enter verification code: VERIFICATION CODE
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1342, in do_open
h.request(req.get_method(), req.selector, req.data, headers,
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1255, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1301, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1250, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1010, in _send_output
self.send(msg)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 950, in send
self.connect()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/http/client.py", line 1424, in connect
self.sock = self._context.wrap_socket(self.sock,
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 500, in wrap_socket
return self.sslsocket_class._create(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1040, in _create
self.do_handshake()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/ssl.py", line 1309, in do_handshake
self._sslobj.do_handshake()
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Library/Frameworks/Python.framework/Versions/3.9/bin/earthengine", line 33, in <module>
sys.exit(load_entry_point('earthengine-api==0.1.277', 'console_scripts', 'earthengine')())
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/cli/eecli.py", line 84, in main
_run_command()
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/cli/eecli.py", line 63, in _run_command
dispatcher.run(args, config)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/cli/commands.py", line 352, in run
self.command_dict[vars(args)[self.dest]].run(args, config)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/cli/commands.py", line 385, in run
ee.Authenticate(**args_auth)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/__init__.py", line 89, in Authenticate
oauth.authenticate(authorization_code, quiet, code_verifier)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/oauth.py", line 235, in authenticate
_obtain_and_write_token(None, code_verifier) # Will prompt for auth_code.
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/oauth.py", line 141, in _obtain_and_write_token
token = request_token(auth_code.strip(), code_verifier)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/ee/oauth.py", line 84, in request_token
response = request.urlopen(
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 214, in urlopen
return opener.open(url, data, timeout)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 517, in open
response = self._open(req, data)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 534, in _open
result = self._call_chain(self.handle_open, protocol, protocol +
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 494, in _call_chain
result = func(*args)
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1385, in https_open
return self.do_open(http.client.HTTPSConnection, req,
File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/urllib/request.py", line 1345, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1122)>
```
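The underlying failure in both tracebacks is the SSL handshake, not Earth Engine itself. An assumption based on the `CERTIFICATE_VERIFY_FAILED` error: python.org builds of CPython on macOS ship their own OpenSSL and do not use the system keychain, so the usual fix is running the bundled `Install Certificates.command` for your Python version. A quick diagnostic sketch:

```python
import ssl

paths = ssl.get_default_verify_paths()
print(paths.openssl_cafile)   # where this interpreter looks for a CA bundle
print(paths.cafile)           # None if no usable bundle was found there

# On macOS with a python.org build, running
#   /Applications/Python 3.9/Install Certificates.command
# installs certifi's CA bundle so that certificate verification works.
```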
| closed | 2021-08-09T18:33:18Z | 2021-08-09T20:24:45Z | https://github.com/gee-community/geemap/issues/613 | [
"bug"
] | gavinpirrie | 1 |
unionai-oss/pandera | pandas | 1,886 | Add `pd.json_normalize` to input types | **Is your feature request related to a problem? Please describe.**
Add `pd.json_normalize` as an input type. Currently `from_dict` is supported, but `from_dict` does not support the `records` format for some reason (even though it is accepted by `to_dict`), and `json_normalize` would allow more complex `record_path`s.
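For context, a minimal sketch of what `pd.json_normalize` handles that `from_dict` does not: nested records flattened via `record_path`, with parent fields carried along via `meta`. The resulting frame could then be validated like any other DataFrame:

```python
import pandas as pd

data = [
    {"id": 1, "name": "a", "tags": [{"t": "x"}, {"t": "y"}]},
    {"id": 2, "name": "b", "tags": [{"t": "z"}]},
]

# One row per nested tag record, with the parent id/name repeated.
flat = pd.json_normalize(data, record_path="tags", meta=["id", "name"])
print(flat)
```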
| open | 2024-12-26T11:18:57Z | 2024-12-26T11:20:53Z | https://github.com/unionai-oss/pandera/issues/1886 | [
"enhancement"
] | lucasjamar | 1 |
stanfordnlp/stanza | nlp | 946 | Index issue when parsing dependencies, alternative words in calculations | I am not sure this is a bug, but I am trying to build dependencies for some German text. What I noticed is that in some instances, instead of a single ID, there is a tuple entry `{'id': (5, 6), 'text': 'zur', 'start_char': 20, 'end_char': 23}` containing the surface word from the sentence, followed by the expanded words; however, these expanded words do not appear verbatim in the original sentence.
```
{'id': 5,
'text': 'zu',
'lemma': 'zu',
'upos': 'ADP',
'xpos': 'APPR',
'head': 7,
'deprel': 'case'},
{'id': 6,
'text': 'der',
'lemma': 'der',
'upos': 'DET',
'xpos': 'ART',
'feats': 'Case=Dat|Definite=Def|Gender=Fem|Number=Sing|PronType=Art',
'head': 7,
'deprel': 'det'},
```
Steps to reproduce the behavior:
```
stanza.download('de')
sentence = 'heute habe Ida erst zur sport'
nlp = stanza.Pipeline('de', processors = "tokenize,mwt,pos,lemma,depparse")
doc = nlp(sentence)
doc.sentences[0].print_dependencies()
sent_dict = doc.sentences[0].to_dict()
```
the full output
```
[{'id': 1,
'text': 'heute',
'lemma': 'heute',
'upos': 'ADV',
'xpos': 'ADV',
'head': 2,
'deprel': 'advmod',
'start_char': 0,
'end_char': 5},
{'id': 2,
'text': 'habe',
'lemma': 'haben',
'upos': 'AUX',
'xpos': 'VAFIN',
'feats': 'Mood=Sub|Number=Sing|Person=3|Tense=Pres|VerbForm=Fin',
'head': 0,
'deprel': 'root',
'start_char': 6,
'end_char': 10},
{'id': 3,
'text': 'ida',
'lemma': 'ida',
'upos': 'PROPN',
'xpos': 'NE',
'feats': 'Case=Nom|Gender=Masc|Number=Sing',
'head': 2,
'deprel': 'nsubj',
'start_char': 11,
'end_char': 14},
{'id': 4,
'text': 'erst',
'lemma': 'erst',
'upos': 'ADV',
'xpos': 'ADV',
'head': 2,
'deprel': 'advmod',
'start_char': 15,
'end_char': 19},
{'id': (5, 6), 'text': 'zur', 'start_char': 20, 'end_char': 23},
{'id': 5,
'text': 'zu',
'lemma': 'zu',
'upos': 'ADP',
'xpos': 'APPR',
'head': 7,
'deprel': 'case'},
{'id': 6,
'text': 'der',
'lemma': 'der',
'upos': 'DET',
'xpos': 'ART',
'feats': 'Case=Dat|Definite=Def|Gender=Fem|Number=Sing|PronType=Art',
'head': 7,
'deprel': 'det'},
{'id': 7,
'text': 'sport',
'lemma': 'sport',
'upos': 'NOUN',
'xpos': 'NN',
'feats': 'Case=Dat|Gender=Fem|Number=Sing',
'head': 2,
'deprel': 'obl',
'start_char': 24,
'end_char': 29}]
```
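The tuple-id entries are multi-word tokens produced by the `mwt` processor (e.g. `zur` expands to `zu` + `der`). If only the syntactic words are needed, a minimal filter over the `to_dict()` output above is:

```python
def syntactic_words(sent_dict):
    """Drop multi-word-token entries, whose 'id' is a tuple like (5, 6),
    and keep the single-word entries with integer ids."""
    return [w for w in sent_dict if isinstance(w["id"], int)]
```

Applied to the output above, this keeps `zu` and `der` but drops the surface token `zur`.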
is there any way of fixing this? | closed | 2022-02-07T16:47:46Z | 2022-02-07T18:11:08Z | https://github.com/stanfordnlp/stanza/issues/946 | [
"bug"
] | vitotitto | 2 |
HIT-SCIR/ltp | nlp | 11 | In the README, for versions after 2.2, step 2 needs to run cmake, not ./configure | The corresponding Travis CI also has this problem.
| closed | 2013-07-21T13:00:57Z | 2013-08-13T14:21:49Z | https://github.com/HIT-SCIR/ltp/issues/11 | [] | carfly | 0 |
praw-dev/praw | api | 1,731 | AssertionError: PRAW Error occurred | **Describe the bug**
While using `replace_more(limit=None)`, I'm receiving an AssertionError. This appears to be a duplicate of #1581. I don't know what the underlying issue is, and there was mention of CDN problems in the other ticket. But it seems that the error should be handled in PRAW and retried or a more appropriate exception raised.
**To Reproduce**
````Python
while True:
try:
reddit_post.comments.replace_more(limit=None)
break
except praw.exceptions.DuplicateReplaceException as ex:
logging.error(f'Comments were refreshed while replacing MoreComments. {ex}')
return
except praw.exceptions.RedditAPIException as ex:
exception_limit += 1
if exception_limit > 10:
logging.error(
f'Error while expanding MoreComments. Cancelling and continuing with processing. {ex}'
)
break
logging.error(f'Error while expanding MoreComments. Waiting. {ex}')
time.sleep(2.0)
config.check_rate_limit()
````
produces
```
File "/Users/tom/Development/script/quickie.py", line 331, in process_subreddit_posts
reddit_post.comments.replace_more(limit=None)
File "/Users/tom/Development/script/venv/lib/python3.9/site-packages/praw/models/comment_forest.py", line 188, in replace_more
self._insert_comment(comment)
File "/Users/tom/Development/script/venv/lib/python3.9/site-packages/praw/models/comment_forest.py", line 83, in _insert_comment
assert comment.parent_id in self._submission._comments_by_id, (
AssertionError: PRAW Error occurred. Please file a bug report and include the code that caused the error.
```
Note that I do not receive any of the error messages produced by the exception handling. Looking at the code that raises this assertion:
```Python
def _insert_comment(self, comment):
if comment.name in self._submission._comments_by_id:
raise DuplicateReplaceException
comment.submission = self._submission
if isinstance(comment, MoreComments) or comment.is_root:
self._comments.append(comment)
else:
assert comment.parent_id in self._submission._comments_by_id, (
"PRAW Error occurred. Please file a bug report and include the code"
" that caused the error."
)
parent = self._submission._comments_by_id[comment.parent_id]
parent.replies._comments.append(comment)
```
Based on an extremely brief look, it would appear that a duplicate ID is being inserted into the comment collection. Is it possible that Reddit is returning a duplicate key for more than one comment? If so, is raising `DuplicateReplaceException` more appropriate here?
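Until the root cause is known, one defensive pattern (a sketch, not an endorsed fix; `reddit` and `submission_id` are assumed to be caller-provided) is to re-fetch the submission and retry when the assertion trips, since a fresh fetch rebuilds the comment forest:

```python
def expand_all_comments(reddit, submission_id, max_retries=3):
    """Hypothetical workaround: re-fetch the submission and retry
    replace_more() when the comment-forest assertion trips."""
    last_err = None
    for attempt in range(max_retries):
        submission = reddit.submission(id=submission_id)  # fresh comment forest
        try:
            submission.comments.replace_more(limit=None)
            return submission.comments.list()
        except AssertionError as err:  # the "PRAW Error occurred" assertion
            last_err = err
    raise last_err
```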
**Expected behavior**
A more appropriate exception should be raised to indicate the root cause of the error, whether it's an HTTP error, an API error, etc.
**System Info**
- OS: MacOS 11.3 on M1 Processor
- Python: 3.9.5
- PRAW Version: 7.2.0
| closed | 2021-06-01T02:43:58Z | 2021-09-09T06:06:25Z | https://github.com/praw-dev/praw/issues/1731 | [
"Stale",
"Auto-closed - Stale"
] | tomc603 | 10 |
slackapi/python-slack-sdk | asyncio | 859 | Reading body of an interaction event on AWS Lambda | Hey all, I'm trying to create a serverless Slack bot in AWS Lambda. I managed to get the app to send a message to a user in Slack with an interactive button, which sends a POST back to the app when clicked. The problem is that in the event from the button click, the `body` field of the JSON is garbled. Here is a reduced version of the garble:
'body': 'cGF5bG9hZD0lN0IlMjJ0eXBlJTIyJTNBJTIyYmxvY2tfYWN0aW9...
The rest of the JSON object looks normal.
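For reference, a `cGF5bG9hZD0...` prefix is base64 for `payload=%7B%22type%22%3A%22block_...`, i.e. API Gateway has base64-encoded the URL-form-encoded interaction payload (and sets `isBase64Encoded` on the event). A minimal decoding sketch for the Lambda handler, under that assumption:

```python
import base64
import json
from urllib.parse import parse_qs

def parse_interaction(event):
    """Decode a Slack interaction payload from an API Gateway Lambda event."""
    body = event["body"]
    if event.get("isBase64Encoded"):
        body = base64.b64decode(body).decode("utf-8")
    # Slack sends application/x-www-form-urlencoded with a single 'payload' field.
    return json.loads(parse_qs(body)["payload"][0])
```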
On the other hand, the button object I'm sending to Slack is this one:
```python
{
"type": "button",
"text": {
"type": "plain_text",
"emoji": True,
"text": "Yes"
},
"style": "primary",
"value": "click_me_123"
},
```
What am I missing? Appreciate all the help I can get! | closed | 2020-10-24T01:25:10Z | 2020-10-24T02:54:16Z | https://github.com/slackapi/python-slack-sdk/issues/859 | [
"question"
] | tinoargentino | 3 |
KevinMusgrave/pytorch-metric-learning | computer-vision | 741 | k-NN clusters as labels or pairs? | Hello Kevin,
Thank you for your work on pytorch_metric_learning. I'm looking to reimplement some experiments of my own under your package. I'm not sure what the best way to go about implementing this might be, so wondering if you could give some pointers.
We are training a contrastive model that is similar to SpectralNet https://github.com/shaham-lab/SpectralNet/tree/main. We're doing domain adaptation in which there are pairs of points from each domain, and, critically, we have the ground-truth knns for one domain. That is, for a point sampled in two domains (xi,yi), we know xi's k nearest neighbors kNN(xi). We would like to learn two embedding functions (g_X and g_Y) such that kNN(g_X(xi)) and kNN(g_Y(yi)) approximate kNN(xi).
The way I do this right now is to take a batch of paired xs and ys, and for each point i (which is now an anchor) I get its pre-computed kNN(xis). These are positive examples. The negative examples are simply the remainder of the batch (for each xi). This works but is slow and perhaps suboptimal.
I think the current strategy I just described can be done just using the `indices_tuple` argument to many of the losses in this library.
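For reference, a minimal sketch of building that `indices_tuple` by hand, in the `(anchors, positives, anchors, negatives)` form the losses accept. Plain lists are used here, to be wrapped in `torch.tensor` before passing to a loss; the `knn` mapping from batch index to in-batch neighbor indices is assumed precomputed:

```python
def knn_indices_tuple(knn, batch_size):
    """Anchor/positive pairs from precomputed kNN; everything else in the
    batch is negative (the strategy described above)."""
    a1, p, a2, n = [], [], [], []
    for i in range(batch_size):
        for j in knn[i]:                # positives: i's known neighbors
            a1.append(i)
            p.append(j)
        for j in range(batch_size):     # negatives: the rest of the batch
            if j != i and j not in knn[i]:
                a2.append(i)
                n.append(j)
    return a1, p, a2, n
```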
However, I would like to implement this using a miner. What I am unclear about is how to define labels in this context. Essentially, every point is its own class centroid (as defined by kNN), but it also belongs to up to batch_size - 1 other classes. Of course, k is taken to be small, so this isn't literally true, but the point is that this is a multi-label situation.
Do you have any recommendations for how to approach this?
koxudaxi/datamodel-code-generator | fastapi | 1,444 | collapse-root-models missing openapi string format type imports | **Describe the bug**
When models are generated using an input directory of OpenAPI schemas instead of a single openapi.yaml, and the schemas within the directory have references to each other using a file **$ref**, along with the `--collapse-root-models` flag, the generated models file containing the merged collapsed model is **missing imports for its string format type**. This issue is only happening when using an input directory of schemas.
Based on testing, the bug occurs only when a schema references another schema in a different file, and the referenced schema contains a [string format](https://swagger.io/docs/specification/data-models/data-types/) . Below are screenshots containing the openapi schemas and generated models with missing imports.
**Input schemas directory (schemas/user.yaml, schemas/common.yaml)**

**Generated models with missing imports**

The collapsed root models' attributes, user `id` and `email`, will be of type `UUID` and `EmailStr` in user.py. However, user.py will have missing imports for both UUID and EmailStr. It's using these types but not importing them.
Also, user.py is importing common by doing `from . import common`, but `common.py` does not contain any defined models as they have been collapsed.
**To Reproduce**
1. Create input directory called **schemas**, containing two files, **user.yaml** and **common.yaml** using the schemas below
2. Create output directory called **models**
3. Generate models using the datamodel-codegen command below
**schemas/user.yaml**
```yaml
openapi: 3.0.0
components:
schemas:
User:
type: object
properties:
id:
$ref: "common.yaml#/components/schemas/UserId"
name:
type: string
email:
$ref: "common.yaml#/components/schemas/email"
```
**schemas/common.yaml**
```yaml
openapi: 3.0.0
components:
schemas:
UserId:
type: string
format: uuid
email:
type: string
format: email
```
Used commandline:
```
$ datamodel-codegen --input schemas --output models --collapse-root-models
```
**Expected behavior**
In `user.py` the string format types `UUID` and `EmailStr` that are being used should be imported.
Also, in user.py, the import of `common` is unnecessary since it's not used anywhere in the file. Additionally, there is no need to generate common.py in this scenario since the models from that file were already merged into the User model in user.py.
**expected user.py with string format imports (UUID, EmailStr)**

**Version:**
- OS: Ubuntu 22.04.1 LTS
- Python version: Python 3.10.6
- datamodel-code-generator version: 0.21.2
| closed | 2023-07-23T16:38:21Z | 2023-11-19T17:51:24Z | https://github.com/koxudaxi/datamodel-code-generator/issues/1444 | [
"bug"
] | myke2424 | 1 |
mars-project/mars | numpy | 3,082 | Web page is stuck when task is running |
The web UI sometimes hangs when a task is running; the stack is below:
```
Current thread 0x00007fce86d99740 (most recent call first):
File "/opt/conda/lib/python3.8/logging/__init__.py", line 1069 in flush
File "/opt/conda/lib/python3.8/logging/__init__.py", line 1089 in emit
File "/opt/conda/lib/python3.8/logging/__init__.py", line 954 in handle
File "/opt/conda/lib/python3.8/logging/__init__.py", line 1661 in callHandlers
File "/opt/conda/lib/python3.8/logging/__init__.py", line 1599 in handle
File "/opt/conda/lib/python3.8/logging/__init__.py", line 1589 in _log
File "/opt/conda/lib/python3.8/logging/__init__.py", line 1434 in debug
File "/home/admin/work/_public-mars-hks.zip/mars/services/scheduling/supervisor/manager.py", line 271 in submit_subtask_to_band
File "/opt/conda/lib/python3.8/asyncio/events.py", line 81 in _run
File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 1859 in _run_once
File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 570 in run_forever
File "/opt/conda/lib/python3.8/asyncio/base_events.py", line 603 in run_until_complete
File "/opt/conda/lib/python3.8/asyncio/runners.py", line 44 in run
File "/home/admin/work/_public-mars-hks.zip/mars/oscar/backends/mars/pool.py", line 196 in _start_sub_pool
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108 in run
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315 in _bootstrap
File "/opt/conda/lib/python3.8/multiprocessing/spawn.py", line 129 in _main
File "/opt/conda/lib/python3.8/multiprocessing/spawn.py", line 116 in spawn_main
File "<string>", line 1 in <module>
``` | open | 2022-05-24T10:16:41Z | 2022-06-12T00:11:59Z | https://github.com/mars-project/mars/issues/3082 | [
"type: bug",
"mod: web"
] | hekaisheng | 0 |
dask/dask | scikit-learn | 11,815 | Index Query Hangs | **Describe the issue**:
After setting index to timestamp, some loc based query works but string based querying causes the operation to hang unless we call optimize first
**Minimal Complete Verifiable Example**:
```python
import dask.dataframe as dd
import pandas as pd
import random
def test_df() -> dd.DataFrame:
dfs = []
start_date = '2024-01-01'
end_date = '2024-01-31'
for num_rows in [2, 5, 10]:
df = pd.DataFrame(
{
'timestamp': pd.to_datetime(
pd.date_range(start_date, end_date, periods=num_rows),
),
'value1': random.choices(range(-20, 20), k=num_rows),
'value2': random.choices(range(-1000, 1000), k=num_rows),
},
)
dfs.append(
dd.from_pandas(df, npartitions=1),
)
return dd.concat(dfs)
df = test_df()
df = df.set_index('timestamp', npartitions=df.npartitions)
# df = df.optimize()
df.loc[df.index > '2024-01-15'].compute()
```
**Anything else we need to know?**:
**Environment**:
- Dask version: 2025.2.0
- Python version: 3.10
- Operating System: Mac Os and Linux Tested
- Install method (conda, pip, source): Pip
| open | 2025-03-05T14:31:58Z | 2025-03-19T19:05:01Z | https://github.com/dask/dask/issues/11815 | [
"dataframe",
"bug"
] | mscanlon-exos | 3 |
Avaiga/taipy | automation | 2,511 | PyGWalker integration | ### Description
Including dynamically content generated by pygwalker into a taipy page should be great.
For example, would it be possible to include html generated by PyGWalker in a dialog component (assuming my datas are in "df", "cit_pgw_partial" properly defined in main in the example below)
With current taipy version, html from PyGWalker is just displayed in a new browser's tab. More over, when I close this pyg page, the following exception is raised :
```
on_action(): Exception raised in 'gen_pgw_html()':
'content' argument is missing for class '_Renderer'
```
Page example :
```
from taipy.gui import Html
import taipy.gui.builder as tgb
import pandas as pd
import pygwalker as pyg
sel_expanded = False
show_dialog = False
df = load_my_datas() # just loading some datas for tests
def gen_pgw_html(state):
state.cit_pgw_partial.update_content(state, Html(pyg.walk(state.df,return_html=True,)))
state.show_dialog = True
def dialog_action(state, _, payload):
if payload["args"][0] == -1:
state.show_dialog = False
with tgb.Page() as pgw_tgb:
with tgb.part(class_name='container align_columns_center'):
tgb.menu(lov="{lov_menu}",on_action="menu_option_selected")
with tgb.expandable(title="Requete Oracle",expanded="{sel_expanded}",on_change="expand"):
with tgb.part():
tgb.button(label="{icon_apply}", on_action="gen_pgw_html")
tgb.dialog("{show_dialog}",partial="{cit_pgw_partial}",on_action="dialog_action")
```
### Acceptance Criteria
- [ ] If applicable, a new demo code is provided to show the new feature in action.
- [ ] Integration tests exhibiting how the functionality works are added.
- [ ] Any new code is covered by a unit tested.
- [ ] Check code coverage is at least 90%.
- [ ] Related issue(s) in taipy-doc are created for documentation and Release Notes are updated.
### Code of Conduct
- [x] I have checked the [existing issues](https://github.com/Avaiga/taipy/issues?q=is%3Aissue+).
- [ ] I am willing to work on this issue (optional) | open | 2025-03-24T15:43:44Z | 2025-03-24T15:43:54Z | https://github.com/Avaiga/taipy/issues/2511 | [
"✨New feature"
] | dataxcount | 0 |
sinaptik-ai/pandas-ai | data-visualization | 644 | Unsupported Format | ### 🐛 Describe the bug
It doesn't matter what I do with the data, I always get "Unsupported format".

```python
import pandas as pd
from pandasai import SmartDataframe
from pandasai.llm import Starcoder  # import paths assumed for pandasai ~1.x

llm = Starcoder(api_token="token")
df = pd.DataFrame({
    "Store": ["FERRETERIA EPA SA","DELIMART PIRRO","FRESH MARKET SAN PABLO","PAYPAL *UBERBV EATS","SUPER LAS CRUCITAS","PAYPAL *UBERBV EATS","CASONA DE MI TIERRA","MCDONALDS HEREDIA","KFC HEREDIA","SOS COM"],
    "Spend": [19995,2550,1600,7542,2400,6845,4350,3990,4890,3695]
})
df = SmartDataframe(df, config={"llm": llm})
df.chat('Calculate the sum of the Spend')
```
| closed | 2023-10-14T07:12:09Z | 2024-06-01T00:20:36Z | https://github.com/sinaptik-ai/pandas-ai/issues/644 | [] | masterchop | 0 |
junyanz/pytorch-CycleGAN-and-pix2pix | deep-learning | 1,412 | UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. | Could anyone tell me how to solve this problem? The description of the problem follows.
```
E:\Anaconda\envs\pytorch-CycleGAN-and-pix2pix\lib\site-packages\torch\optim\lr_scheduler.py:134: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the
opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
```
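Note the warning is generally benign: it means the first scheduler step ran before any optimizer step, so the first value of the learning-rate schedule is skipped. The ordering PyTorch expects, as a sketch with illustrative names (not the repo's actual training loop):

```python
import torch

model = torch.nn.Linear(2, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.1)

for epoch in range(3):
    for _ in range(4):        # inner training iterations
        optimizer.zero_grad()
        loss = model(torch.randn(8, 2)).sum()
        loss.backward()
        optimizer.step()      # step the optimizer first
    scheduler.step()          # then the scheduler, once per epoch
```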
| open | 2022-04-20T15:20:37Z | 2024-01-10T03:07:56Z | https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix/issues/1412 | [] | DJstepbystep | 4 |
Evil0ctal/Douyin_TikTok_Download_API | fastapi | 496 | 请问项目只能爬取单个抖音账号的内容吗 有没有依据关键词检索爬去全部视频的功能 | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternative solutions or features you've considered.
**Additional context**
Add any other context or screenshots about the feature request here.
| open | 2024-11-02T06:39:54Z | 2024-11-02T06:53:43Z | https://github.com/Evil0ctal/Douyin_TikTok_Download_API/issues/496 | [
"enhancement"
] | song9910moon | 1 |
sanic-org/sanic | asyncio | 2,535 | Wondering how the changelog is maintained. | Hi,
I am just a learner, and I'm wondering how the changelog is written here: what specific GitHub Actions are used, or is it done manually, and how?
Thank you and sorry!
This is brilliant stuff | open | 2022-08-22T15:12:24Z | 2022-08-22T15:20:04Z | https://github.com/sanic-org/sanic/issues/2535 | [] | corientdev | 1 |
AUTOMATIC1111/stable-diffusion-webui | deep-learning | 15,616 | [Bug]: sd-webui-prompt-all-in-one extension is glitching the prompt bar | ### Checklist
- [ ] The issue exists after disabling all extensions
- [ ] The issue exists on a clean installation of webui
- [ ] The issue is caused by an extension, but I believe it is caused by a bug in the webui
- [ ] The issue exists in the current version of the webui
- [ ] The issue has not been reported before recently
- [ ] The issue has been reported before but has not been fixed yet
### What happened?
I can't use the prompt bar in txt2img; it's like the bar is glitching upward and I can't scroll it down.

### Steps to reproduce the problem
All I did was import data from PNG Info into txt2img, and suddenly it happened. I tried to solve it by reloading the UI, but that didn't work; only uninstalling sd-webui-prompt-all-in-one fixed it.

It's back to normal when I uninstall the extension.
### What should have happened?
It should just show the prompt bar and the negative prompt bar as normal, but for some reason the prompt bar is stuck and won't scroll down.
### What browsers do you use to access the UI ?
Google Chrome
### Sysinfo
...
### Console logs
```Shell
venv "D:\SD11\stable-diffusion-webui\venv\Scripts\Python.exe"
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:06:05.754340: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:06:07.290966: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
CivitAI Browser+: Aria2 RPC started
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [821aa5537f] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\autismmixSDXL_autismmixPony.safetensors
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
IIB Database file has been successfully backed up to the backup folder.
Startup time: 32.9s (prepare environment: 5.0s, import torch: 6.0s, import gradio: 2.3s, setup paths: 8.5s, initialize shared: 0.2s, other imports: 1.5s, list SD models: 0.3s, load scripts: 1.9s, initialize extra networks: 0.3s, create ui: 1.8s, gradio launch: 2.7s, app_started_callback: 2.2s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 23.2s (load weights from disk: 1.1s, create model: 0.4s, apply weights to model: 11.1s, load VAE: 1.6s, move model to device: 0.3s, load textual inversion embeddings: 6.5s, calculate empty prompt: 1.9s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:07:49.584363: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:07:51.841580: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
CivitAI Browser+: Aria2 RPC started
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [821aa5537f] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\autismmixSDXL_autismmixPony.safetensors
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 28.6s (prepare environment: 3.9s, import torch: 5.7s, import gradio: 2.6s, setup paths: 7.6s, initialize shared: 0.4s, other imports: 1.4s, load scripts: 1.7s, create ui: 0.7s, gradio launch: 2.5s, app_started_callback: 2.0s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 22.4s (load weights from disk: 0.7s, create model: 0.6s, apply weights to model: 10.7s, load VAE: 1.1s, move model to device: 0.2s, load textual inversion embeddings: 6.9s, calculate empty prompt: 2.1s).
Reusing loaded model autismmixSDXL_autismmixPony.safetensors [821aa5537f] to load ponyDiffusionV6XL_v6StartWithThisOne.safetensors [67ab2fd8ec]
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
Weights loaded in 153.5s (send model to cpu: 2.6s, load weights from disk: 0.6s, apply weights to model: 100.3s, load VAE: 4.0s, hijack: 0.1s, move model to device: 44.8s, script callbacks: 1.0s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:15:00.215326: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:15:04.341658: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
To create a public link, set `share=True` in `launch()`.
Startup time: 49.6s (prepare environment: 13.1s, import torch: 7.3s, import gradio: 3.1s, setup paths: 17.6s, initialize shared: 0.3s, other imports: 1.4s, list SD models: 0.3s, load scripts: 2.6s, create ui: 0.8s, gradio launch: 3.1s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:16:12.567363: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:16:14.026851: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 26.7s (prepare environment: 4.1s, import torch: 6.2s, import gradio: 2.7s, setup paths: 7.7s, initialize shared: 0.3s, other imports: 0.7s, list SD models: 0.2s, load scripts: 1.9s, create ui: 0.5s, gradio launch: 2.4s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 13.5s (load weights from disk: 0.7s, create model: 0.4s, apply weights to model: 4.9s, load VAE: 0.2s, move model to device: 0.1s, load textual inversion embeddings: 4.3s, calculate empty prompt: 2.8s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:17:16.981487: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:17:18.771450: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Running on local URL: http://127.0.0.1:7860
Thanks for being a Gradio user! If you have questions or feedback, please join our Discord server and chat with us: https://discord.gg/feTf9x3ZSB
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
To create a public link, set `share=True` in `launch()`.
Startup time: 27.9s (prepare environment: 3.8s, import torch: 5.8s, import gradio: 2.0s, setup paths: 7.8s, initialize shared: 0.4s, other imports: 1.7s, load scripts: 1.7s, create ui: 0.5s, gradio launch: 2.3s, app_started_callback: 1.8s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 20.1s (load weights from disk: 0.7s, create model: 0.4s, apply weights to model: 10.1s, load VAE: 0.3s, move model to device: 0.2s, load textual inversion embeddings: 5.3s, calculate empty prompt: 2.9s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:18:06.511161: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:18:08.521674: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 25.7s (prepare environment: 3.9s, import torch: 5.7s, import gradio: 2.2s, setup paths: 7.5s, initialize shared: 0.4s, other imports: 1.2s, load scripts: 1.6s, create ui: 0.7s, gradio launch: 2.4s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 13.4s (load weights from disk: 0.8s, create model: 0.4s, apply weights to model: 4.1s, load VAE: 0.2s, move model to device: 0.1s, load textual inversion embeddings: 6.3s, calculate empty prompt: 1.3s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:19:08.951259: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:19:09.984317: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
sd-webui-prompt-all-in-one background API service started successfully.
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Running on local URL: http://127.0.0.1:7860
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
To create a public link, set `share=True` in `launch()`.
Startup time: 22.2s (prepare environment: 3.5s, import torch: 5.0s, import gradio: 1.4s, setup paths: 4.8s, initialize shared: 0.2s, other imports: 0.9s, load scripts: 1.5s, create ui: 0.5s, gradio launch: 2.4s, app_started_callback: 1.7s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 15.7s (load weights from disk: 0.7s, create model: 0.4s, apply weights to model: 7.3s, load VAE: 0.8s, move model to device: 0.2s, load textual inversion embeddings: 4.8s, calculate empty prompt: 1.4s).
Python 3.10.11 (tags/v3.10.11:7d4cc5a, Apr 5 2023, 00:38:17) [MSC v.1929 64 bit (AMD64)]
Version: v1.9.3
Commit hash: 1c0a0c4c26f78c32095ebc7f8af82f5c04fca8c0
Launching Web UI with arguments: --xformers
2024-04-24 12:29:35.906396: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
2024-04-24 12:29:36.909431: I tensorflow/core/util/port.cc:113] oneDNN custom operations are on. You may see slightly different numerical results due to floating-point round-off errors from different computation orders. To turn them off, set the environment variable `TF_ENABLE_ONEDNN_OPTS=0`.
Tag Autocomplete: Could not locate model-keyword extension, Lora trigger word completion will be limited to those added through the extra networks menu.
Loading weights [67ab2fd8ec] from D:\SD11\stable-diffusion-webui\models\Stable-diffusion\ponyDiffusionV6XL_v6StartWithThisOne.safetensors
Creating model from config: D:\SD11\stable-diffusion-webui\repositories\generative-models\configs\inference\sd_xl_base.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Startup time: 19.0s (prepare environment: 2.6s, import torch: 4.9s, import gradio: 1.3s, setup paths: 4.5s, initialize shared: 0.2s, other imports: 0.7s, load scripts: 1.5s, create ui: 0.7s, gradio launch: 2.5s).
Loading VAE weights specified in settings: D:\SD11\stable-diffusion-webui\models\VAE\sdxl_vae.safetensors
Applying attention optimization: xformers... done.
*** Error loading embedding DetailedEyes_V3.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
*** Error loading embedding more_details.safetensors
Traceback (most recent call last):
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 203, in load_from_dir
self.load_from_file(fullfn, fn)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 184, in load_from_file
embedding = create_embedding_from_data(data, name, filename=filename, filepath=path)
File "D:\SD11\stable-diffusion-webui\modules\textual_inversion\textual_inversion.py", line 297, in create_embedding_from_data
assert len(data.keys()) == 1, 'embedding file has multiple terms in it'
AssertionError: embedding file has multiple terms in it
---
Model loaded in 17.0s (load weights from disk: 0.6s, create model: 0.7s, apply weights to model: 6.8s, load VAE: 2.1s, move model to device: 0.3s, load textual inversion embeddings: 5.1s, calculate empty prompt: 1.4s).
```
### Additional information
_No response_ | closed | 2024-04-24T03:34:28Z | 2024-04-24T11:00:33Z | https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/15616 | [
"bug-report"
] | hungrydog666 | 2 |
keras-team/keras | machine-learning | 20,529 | Cannot access accuracy in results with keras 3 | Hi, I am new to Keras and TensorFlow. I am using `keras==3.6.0` and `tensorflow==2.18.0`.
I created a sequential model and added layers to it. Here is pseudocode of what I did:
```
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import Dense
from keras.layers import Flatten
from keras import metrics
def build_model(self,
                input_shape,
                num_classes,
                conv_kernel_size=(4, 4),
                conv_strides=(2, 2),
                conv1_channels_out=16,
                conv2_channels_out=32,
                final_dense_inputsize=100):
    model = Sequential()
    model.add(Conv2D(conv1_channels_out,
                     kernel_size=conv_kernel_size,
                     strides=conv_strides,
                     activation='relu',
                     input_shape=input_shape))
    model.add(Conv2D(conv2_channels_out,
                     kernel_size=conv_kernel_size,
                     strides=conv_strides,
                     activation='relu'))
    model.add(Flatten())
    model.add(Dense(final_dense_inputsize, activation='relu'))
    model.add(Dense(num_classes, activation='softmax'))
    model.compile(loss="categorical_crossentropy",
                  optimizer="adam",
                  metrics=["accuracy"])
    return model
```
While evaluating the model, I get the results shown below.
```
print(model.evaluate(self.data_loader.get_valid_loader(batch_size), verbose=1))
print(new_model.metrics_names)
```
Result:
```
157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0788 - loss: 2.3034
[2.3036301136016846, 0.07720000296831131]
['loss', 'compile_metrics']
```
Expected Result:
```
157/157 ━━━━━━━━━━━━━━━━━━━━ 0s 1ms/step - accuracy: 0.0788 - loss: 2.3034
[2.3036301136016846, 0.07720000296831131]
['loss', 'accuracy']
```
Is my expectation correct, or do I need to access accuracy differently? Also, the results do not change even if I change the metrics while compiling the model. Any guidance would be appreciated.
eg.
```
model.compile(loss="categorical_crossentropy",
              optimizer="adam",
              metrics=[metrics.MeanSquaredError(name='my_mse'),
                       metrics.AUC(name='my_auc'),
                       metrics.BinaryAccuracy(),
                       metrics.Accuracy()])
```
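One workaround sketch (an assumption on my part, not an official Keras 3 API guarantee): the list returned by `evaluate()` keeps the positional order of `loss` first, then the metrics passed to `compile(metrics=[...])`, so the values can be paired with names manually; recent Keras versions also accept `model.evaluate(..., return_dict=True)` to get a name-to-value dict directly.

```python
# Pairing positional evaluate() results with metric names by hand.
# The values below are the numbers from the Result block above.
eval_results = [2.3036301136016846, 0.07720000296831131]  # loss first, then compile() metrics
metric_names = ["loss", "accuracy"]                       # order used in compile(metrics=["accuracy"])
named_results = dict(zip(metric_names, eval_results))
print(named_results["accuracy"])  # 0.07720000296831131
```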
| closed | 2024-11-21T09:08:21Z | 2024-11-26T16:43:45Z | https://github.com/keras-team/keras/issues/20529 | [
"type:support",
"stat:awaiting response from contributor"
] | tanwarsh | 5 |
autokey/autokey | automation | 323 | Autokey broken with update to python 3.8? | ## Classification:
Bug??
## Reproducibility:
Always
## Version
AutoKey version: 0.95.8-1
Used GUI (Gtk, Qt, or both): gtk
Installed via: from the AUR
Linux Distribution: Arch Linux
## Summary
I just updated to Python 3.8 and can't launch AutoKey anymore. This is the message I get:
```
Traceback (most recent call last):
File "/usr/bin/autokey-gtk", line 6, in <module>
from pkg_resources import load_entry_point
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3251, in <module>
def _initialize_master_working_set():
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3234, in _call_aside
f(*args, **kwargs)
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 3263, in _initialize_master_working_set
working_set = WorkingSet._build_master()
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 583, in _build_master
ws.require(__requires__)
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 900, in require
needed = self.resolve(parse_requirements(requirements))
File "/usr/lib/python3.8/site-packages/pkg_resources/__init__.py", line 786, in resolve
raise DistributionNotFound(req, requirers)
pkg_resources.DistributionNotFound: The 'autokey==0.95.7' distribution was not found and is required by the application
```
Note: as I updated my system, I did get a warning about a Python dependency cycle being detected. So it might be that there is nothing to do on your end, that some of the Python 3.8 dependencies will be updated on Arch Linux in the coming few days, and that things will then go back to normal. Nothing else seems to be broken on my system, though.
Another application I use has to be uninstalled and re-installed after each python major update (and I get a very similar error message). So I tried to uninstall and re-install autokey, but that didn't do the trick. | closed | 2019-11-15T05:12:00Z | 2020-02-26T18:42:14Z | https://github.com/autokey/autokey/issues/323 | [] | prosoitos | 7 |
tqdm/tqdm | jupyter | 728 | Status printer doesn't always clear enough characters | The following script:
```
from tqdm import tqdm, trange
from time import sleep
for i in trange(10, desc='outer'):
    for j in trange(10, desc='inner'):
        # Print using tqdm class method .write()
        sleep(0.1)
        if not (j % 3):
            tqdm.write("Done task %i.%i" % (i,j))
```
produces output that looks like:
```
Done task 0.0
Done task 0.3
Done task 0.6
Done task 0.9
Done task 1.0
Done task 1.3 ]
Done task 1.6
Done task 1.9
Done task 2.0
Done task 2.3 ]
```
(note the hanging ']'s)
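A self-contained sketch of the fix direction (an illustration, not tqdm's actual `status_printer` code): ending each redrawn line with the ANSI erase-to-end-of-line sequence `\x1b[K` wipes any leftover characters from a longer previous line, so no space-padding arithmetic is needed.

```python
CLEAR_TO_EOL = "\x1b[K"  # ANSI CSI "erase from cursor to end of line"

def redraw(line: str) -> str:
    # '\r' returns the cursor to column 0; CLEAR_TO_EOL then removes whatever
    # tail of the previous, longer status line would otherwise be left behind
    # (the hanging ']' shown above).
    return "\r" + line + CLEAR_TO_EOL

print(repr(redraw("inner:  30%")))  # '\rinner:  30%\x1b[K'
```

(This assumes the terminal honors ANSI escape sequences, which is not true of every Windows console.)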
It seems that something about the logic for outputting the correct number of spaces is off. Instead of outputting that number of spaces, we could use the ANSI `\x1b[K` escape sequence to erase to the end of the line in `status_printer`. | open | 2019-05-07T05:25:48Z | 2020-07-19T22:23:45Z | https://github.com/tqdm/tqdm/issues/728 | [
"help wanted 🙏",
"p2-bug-warning ⚠"
] | toddlipcon | 1 |
Miserlou/Zappa | flask | 2,084 | 502 error while deploying | I am getting a 502 error while trying to deploy the app.
## Context
I have an app written with Django (3.0.4) that works great locally. For dev purposes, I use SQLite only. I wanted to deploy it to AWS today, but I am getting 502 errors.
Sorry, I cannot post an app here, since it's private.
## Expected Behavior
It should be deployed without problems.
## Actual Behavior
Results in a 502 error.
## Your Environment
Zappa version: 0.51.0
Django version: 3.0.4
Python: 3.7
Zappa config:
```
{
    "dev": {
        "aws_region": "eu-central-1",
        "django_settings": "hay.hay.settings",
        "profile_name": "default",
        "project_name": "hundred-a-year",
        "runtime": "python3.7",
        "s3_bucket": "hay-dev",
        "slim_handler": false
    }
}
```
Error on AWS:
```
"{'message': 'An uncaught exception happened while servicing this request. You can investigate this with the `zappa tail` command.', 'traceback': ['Traceback (most recent call last):\\n', ' File \"/var/task/handler.py\", line 540, in handler\\n with Response.from_app(self.wsgi_app, environ) as response:\\n', ' File \"/var/task/werkzeug/wrappers/base_response.py\", line 287, in from_app\\n return cls(*_run_wsgi_app(app, environ, buffered))\\n', ' File \"/var/task/werkzeug/wrappers/base_response.py\", line 26, in _run_wsgi_app\\n return _run_wsgi_app(*args)\\n', ' File \"/var/task/werkzeug/test.py\", line 1119, in run_wsgi_app\\n app_rv = app(environ, start_response)\\n', \"TypeError: 'NoneType' object is not callable\\n\"]}"
```
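The final `TypeError: 'NoneType' object is not callable` means `self.wsgi_app` stayed `None`, i.e. the Django WSGI application was never built inside Lambda — usually because the settings module can't be imported from the package. A local sanity-check sketch (my own helper, not part of Zappa; `hay.hay.settings` is the value from the config above):

```python
import importlib

def settings_importable(dotted_path: str) -> bool:
    # If this fails inside the packaged virtualenv, Zappa's handler will also
    # fail to build the WSGI app, leaving wsgi_app=None and producing the
    # "'NoneType' object is not callable" error above.
    try:
        importlib.import_module(dotted_path)
        return True
    except ImportError:
        return False
```

Running `settings_importable("hay.hay.settings")` from the project root (and from inside the zipped package contents) should return `True`; a `False` points at a packaging or import-path problem rather than the Zappa config itself.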
Results of `zappa tail`:
```
[1587228323192] [DEBUG] 2020-04-18T16:45:23.192Z f41f3ef4-dfa2-46dc-95c2-99d6fd9f0852 Zappa Event: {'resource': '/', 'path': '/', 'httpMethod': 'GET', 'headers': {'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9', 'Accept-Encoding': 'gzip, deflate, br', 'Accept-Language': 'pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6,so;q=0.5', 'cache-control': 'max-age=0', 'CloudFront-Forwarded-Proto': 'https', 'CloudFront-Is-Desktop-Viewer': 'true', 'CloudFront-Is-Mobile-Viewer': 'false', 'CloudFront-Is-SmartTV-Viewer': 'false', 'CloudFront-Is-Tablet-Viewer': 'false', 'CloudFront-Viewer-Country': 'PL', 'Host': 'yiudwi21ux.execute-api.eu-central-1.amazonaws.com', 'Referer': 'https://eu-central-1.console.aws.amazon.com/apigateway/home?region=eu-central-1', 'sec-fetch-dest': 'document', 'sec-fetch-mode': 'navigate', 'sec-fetch-site': 'cross-site', 'sec-fetch-user': '?1', 'upgrade-insecure-requests': '1', 'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36', 'Via': '2.0 70d111e01220d4724cfea727fa9dfb91.cloudfront.net (CloudFront)', 'X-Amz-Cf-Id': 'VMUK9rPoyIcqWqOjowgpHrWsI-qW0OTnW1YAUIJ0BZHH-Vr7Z3rWCw==', 'X-Amzn-Trace-Id': 'Root=1-5e9b2ea3-5181626ee6404a71bf9a1db9', 'X-Forwarded-For': '89.64.74.188, 54.239.171.153', 'X-Forwarded-Port': '443', 'X-Forwarded-Proto': 'https'}, 'multiValueHeaders': {'Accept': ['text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9'], 'Accept-Encoding': ['gzip, deflate, br'], 'Accept-Language': ['pl-PL,pl;q=0.9,en-US;q=0.8,en;q=0.7,ru;q=0.6,so;q=0.5'], 'cache-control': ['max-age=0'], 'CloudFront-Forwarded-Proto': ['https'], 'CloudFront-Is-Desktop-Viewer': ['true'], 'CloudFront-Is-Mobile-Viewer': ['false'], 'CloudFront-Is-SmartTV-Viewer': ['false'], 'CloudFront-Is-Tablet-Viewer': ['false'], 
'CloudFront-Viewer-Country': ['PL'], 'Host': ['yiudwi21ux.execute-api.eu-central-1.amazonaws.com'], 'Referer': ['https://eu-central-1.console.aws.amazon.com/apigateway/home?region=eu-central-1'], 'sec-fetch-dest': ['document'], 'sec-fetch-mode': ['navigate'], 'sec-fetch-site': ['cross-site'], 'sec-fetch-user': ['?1'], 'upgrade-insecure-requests': ['1'], 'User-Agent': ['Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36'], 'Via': ['2.0 70d111e01220d4724cfea727fa9dfb91.cloudfront.net (CloudFront)'], 'X-Amz-Cf-Id': ['VMUK9rPoyIcqWqOjowgpHrWsI-qW0OTnW1YAUIJ0BZHH-Vr7Z3rWCw=='], 'X-Amzn-Trace-Id': ['Root=1-5e9b2ea3-5181626ee6404a71bf9a1db9'], 'X-Forwarded-For': ['89.64.74.188, 54.239.171.153'], 'X-Forwarded-Port': ['443'], 'X-Forwarded-Proto': ['https']}, 'queryStringParameters': None, 'multiValueQueryStringParameters': None, 'pathParameters': None, 'stageVariables': None, 'requestContext': {'resourceId': '3achqb2vbg', 'resourcePath': '/', 'httpMethod': 'GET', 'extendedRequestId': 'LMQ5gG_WliAFbBw=', 'requestTime': '18/Apr/2020:16:45:23 +0000', 'path': '/dev', 'accountId': '536687225340', 'protocol': 'HTTP/1.1', 'stage': 'dev', 'domainPrefix': 'yiudwi21ux', 'requestTimeEpoch': 1587228323137, 'requestId': 'd3fd55d0-bec3-4424-877b-f6cf00c7affb', 'identity': {'cognitoIdentityPoolId': None, 'accountId': None, 'cognitoIdentityId': None, 'caller': None, 'sourceIp': '89.64.74.188', 'principalOrgId': None, 'accessKey': None, 'cognitoAuthenticationType': None, 'cognitoAuthenticationProvider': None, 'userArn': None, 'userAgent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_4) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.163 Safari/537.36', 'user': None}, 'domainName': 'yiudwi21ux.execute-api.eu-central-1.amazonaws.com', 'apiId': 'yiudwi21ux'}, 'body': None, 'isBase64Encoded': False}
[1587228323192] [DEBUG] 2020-04-18T16:45:23.192Z f41f3ef4-dfa2-46dc-95c2-99d6fd9f0852 host found: [yiudwi21ux.execute-api.eu-central-1.amazonaws.com]
[1587228323192] [DEBUG] 2020-04-18T16:45:23.192Z f41f3ef4-dfa2-46dc-95c2-99d6fd9f0852 amazonaws found in host
[1587228323192] 'NoneType' object is not callable
```
I've tried various solutions found on the internet, disabling `slim_handler`, removing `pyc` files, reinstalling venv. Nothing helped. | open | 2020-04-18T16:41:14Z | 2020-05-01T16:14:50Z | https://github.com/Miserlou/Zappa/issues/2084 | [] | tomekbuszewski | 3 |
donnemartin/system-design-primer | python | 88 | Traditional Chinese Translation | Maintainer(s): @kevingo
Please check out the [Translations Contributing Guidelines.](https://github.com/donnemartin/system-design-primer/blob/master/CONTRIBUTING.md#translations)
Original translations thread: #28
Interested in helping? Let us know! | closed | 2017-07-01T15:19:23Z | 2018-02-28T02:42:04Z | https://github.com/donnemartin/system-design-primer/issues/88 | [
"help wanted",
"translation"
] | kevingo | 1 |
pandas-dev/pandas | pandas | 60,309 | BUG: assignment fails with copy_on_write = True | ### Pandas version checks
- [X] I have checked that this issue has not already been reported.
- [X] I have confirmed this bug exists on the [latest version](https://pandas.pydata.org/docs/whatsnew/index.html) of pandas.
- [ ] I have confirmed this bug exists on the [main branch](https://pandas.pydata.org/docs/dev/getting_started/install.html#installing-the-development-version-of-pandas) of pandas.
### Reproducible Example
```python
import pandas as pd
pd.options.mode.copy_on_write = True
# pd.options.mode.copy_on_write = "warn"
dftest = pd.DataFrame({"A":[1,4,1,5], "B":[2,5,2,6], "C":[3,6,1,7]})
df=dftest[["B","C"]]
df.iloc[[1,3],:] = [[2, 2],[2 ,2]]
```
### Issue Description
The result is the following error output:
Traceback (most recent call last):
File "/home/rolf/jupyter/venv/lib/python3.12/site-packages/pandas/core/internals/blocks.py", line 1429, in setitem
values[indexer] = casted
~~~~~~^^^^^^^^^
ValueError: shape mismatch: value array of shape (2,2) could not be broadcast to indexing result of shape (2,)
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/rolf/code/python/bug.py", line 7, in <module>
df.iloc[[1,3],:] = [[2, 2],[2 ,2]]
~~~~~~~^^^^^^^^^
File "/home/rolf/jupyter/venv/lib/python3.12/site-packages/pandas/core/indexing.py", line 911, in __setitem__
iloc._setitem_with_indexer(indexer, value, self.name)
File "/home/rolf/jupyter/venv/lib/python3.12/site-packages/pandas/core/indexing.py", line 1944, in _setitem_with_indexer
self._setitem_single_block(indexer, value, name)
File "/home/rolf/jupyter/venv/lib/python3.12/site-packages/pandas/core/indexing.py", line 2218, in _setitem_single_block
self.obj._mgr = self.obj._mgr.setitem(indexer=indexer, value=value)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rolf/jupyter/venv/lib/python3.12/site-packages/pandas/core/internals/managers.py", line 409, in setitem
self.blocks[0].setitem((indexer[0], np.arange(len(blk_loc))), value)
File "/home/rolf/jupyter/venv/lib/python3.12/site-packages/pandas/core/internals/blocks.py", line 1432, in setitem
raise ValueError(
ValueError: setting an array element with a sequence.
### Expected Behavior
Should not give an error. When uncommenting the `copy_on_write = "warn"` line, the code runs with no error message, as expected (I'd expect a warning in this case, but that's a separate issue).
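For reference, a workaround sketch (my assumption, not an official fix): taking an explicit `.copy()` of the column subset detaches it from `dftest`, which avoids the copy-on-write setitem path that raises above.

```python
import pandas as pd

dftest = pd.DataFrame({"A": [1, 4, 1, 5], "B": [2, 5, 2, 6], "C": [3, 6, 1, 7]})
df = dftest[["B", "C"]].copy()         # explicit copy: df no longer shares blocks with dftest
df.iloc[[1, 3], :] = [[2, 2], [2, 2]]  # same assignment as the repro, now succeeds
print(df["C"].tolist())                # [3, 2, 1, 2]
```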
### Installed Versions
<details>
INSTALLED VERSIONS
------------------
commit : 0691c5cf90477d3503834d983f69350f250a6ff7
python : 3.12.3
python-bits : 64
OS : Linux
OS-release : 6.8.0-47-generic
Version : #47-Ubuntu SMP PREEMPT_DYNAMIC Fri Sep 27 21:40:26 UTC 2024
machine : x86_64
processor : x86_64
byteorder : little
LC_ALL : None
LANG : de_DE.UTF-8
LOCALE : de_DE.UTF-8
pandas : 2.2.3
numpy : 2.0.2
pytz : 2024.2
dateutil : 2.9.0.post0
pip : 24.0
Cython : None
sphinx : None
IPython : 8.27.0
adbc-driver-postgresql: None
adbc-driver-sqlite : None
bs4 : 4.12.3
blosc : None
bottleneck : None
dataframe-api-compat : None
fastparquet : None
fsspec : None
html5lib : None
hypothesis : None
gcsfs : None
jinja2 : 3.1.4
lxml.etree : None
matplotlib : 3.9.2
numba : None
numexpr : None
odfpy : None
openpyxl : None
pandas_gbq : None
psycopg2 : None
pymysql : None
pyarrow : None
pyreadstat : None
pytest : None
python-calamine : None
pyxlsb : None
s3fs : None
scipy : 1.14.1
sqlalchemy : 2.0.36
tables : None
tabulate : None
xarray : None
xlrd : None
xlsxwriter : None
zstandard : None
tzdata : 2024.2
qtpy : None
pyqt5 : None
</details>
| open | 2024-11-14T09:54:06Z | 2025-02-15T14:49:51Z | https://github.com/pandas-dev/pandas/issues/60309 | [
"Indexing",
"Regression"
] | kameamea | 7 |
nerfstudio-project/nerfstudio | computer-vision | 3,057 | transform.json with several cameras??? | I have been preprocessing datasets from distinct captures (different moments in time and different camera sizes). I could only achieve this by using `hloc`, obtaining the `images.bin` and `cameras.bin`. However, there is a problem: I cannot export a `transforms.json` in which each frame comes from a distinct camera (e.g. 18 frames for 18 cameras).
This error arises
https://github.com/nerfstudio-project/nerfstudio/blob/main/nerfstudio/process_data/colmap_utils.py#L461
I commented out this assertion and the code runs; however, it considers only the first camera, so the output `transforms.json` is incorrect.
Here is the `cam_id_to_camera`:
```
{1: Camera(id=1, model='SIMPLE_RADIAL', width=1521, height=1073, params=array([ 3.1692782e+03,  7.6050000e+02,  5.3650000e+02, -3.2129741e-01])),
 2: Camera(id=2, model='SIMPLE_RADIAL', width=1062, height=1523, params=array([2.01058080e+03, 5.31000000e+02, 7.61500000e+02, 7.41873012e-02])),
 3: Camera(id=3, model='SIMPLE_RADIAL', width=1629, height=1157, params=array([1.62281426e+03, 8.14500000e+02, 5.78500000e+02, 2.30765979e-02])),
 4: Camera(id=4, model='SIMPLE_RADIAL', width=1601, height=1119, params=array([2.09333469e+03, 8.00500000e+02, 5.59500000e+02, 1.16253708e-01])),
 5: Camera(id=5, model='SIMPLE_RADIAL', width=1615, height=1156, params=array([2.48802583e+03, 8.07500000e+02, 5.78000000e+02, 6.09272343e-02])),
 6: Camera(id=6, model='SIMPLE_RADIAL', width=1624, height=1162, params=array([1.96078379e+03, 8.12000000e+02, 5.81000000e+02, 7.16309553e-02])),
 7: Camera(id=7, model='SIMPLE_RADIAL', width=1155, height=1642, params=array([1.60673918e+03, 5.77500000e+02, 8.21000000e+02, 8.55971204e-03])),
 8: Camera(id=8, model='SIMPLE_RADIAL', width=1613, height=1114, params=array([1.32070856e+03, 8.06500000e+02, 5.57000000e+02, 3.21522170e-02])),
 9: Camera(id=9, model='SIMPLE_RADIAL', width=1578, height=1092, params=array([2.60890528e+03, 7.89000000e+02, 5.46000000e+02, 8.12536186e-02])),
 10: Camera(id=10, model='SIMPLE_RADIAL', width=1580, height=1003, params=array([2.56096207e+03, 7.90000000e+02, 5.01500000e+02, 1.39310730e-01])),
 11: Camera(id=11, model='SIMPLE_RADIAL', width=1592, height=1116, params=array([3.03610690e+03, 7.96000000e+02, 5.58000000e+02, 5.80613135e-02])),
 12: Camera(id=12, model='SIMPLE_RADIAL', width=1593, height=1107, params=array([2.52260764e+03, 7.96500000e+02, 5.53500000e+02, 7.04535419e-02])),
 13: Camera(id=13, model='SIMPLE_RADIAL', width=1611, height=1103, params=array([2.88055835e+03, 8.05500000e+02, 5.51500000e+02, 2.10120324e-01])),
 14: Camera(id=14, model='SIMPLE_RADIAL', width=1646, height=995, params=array([ 2.4982031e+03,  8.2300000e+02,  4.9750000e+02, -1.3854031e-01])),
 15: Camera(id=15, model='SIMPLE_RADIAL', width=1597, height=1135, params=array([2.73717713e+03, 7.98500000e+02, 5.67500000e+02, 1.97624105e-01])),
 16: Camera(id=16, model='SIMPLE_RADIAL', width=1611, height=1106, params=array([4.84974167e+03, 8.05500000e+02, 5.53000000e+02, 1.22514371e+00])),
 17: Camera(id=17, model='SIMPLE_RADIAL', width=1610, height=1111, params=array([2.59722838e+03, 8.05000000e+02, 5.55500000e+02, 6.36294581e-02])),
 18: Camera(id=18, model='SIMPLE_RADIAL', width=1609, height=1063, params=array([1.50583974e+03, 8.04500000e+02, 5.31500000e+02, 1.06968565e-02]))}
```
and here the `frames`:
```
[
{
"file_path":"images/1126.jpg",
"transform_matrix":[
[
0.921260037730486,
0.06288893809857878,
0.3838292906301658,
2.105211533209113
],
[
0.37958703050345155,
0.06982038797403659,
-0.9225176419433542,
-0.49627269588663403
],
[
-0.08481526486661992,
0.9955752582743246,
0.0404509079949632,
-0.20863332076687988
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":1
},
{
"file_path":"images/1134.jpg",
"transform_matrix":[
[
0.8583242977307534,
0.11998327997713212,
0.49888216289113607,
1.770437228111344
],
[
0.4977941717173231,
0.04104329885648988,
-0.8663235020610032,
3.0586422844543204
],
[
-0.12442010500426791,
0.9919271445750971,
-0.024498516808594096,
-0.7686162062318393
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":2
},
{
"file_path":"images/2.jpg",
"transform_matrix":[
[
0.9987673374281286,
0.023247725771476515,
-0.043856002248050946,
-0.3619371437571082
],
[
-0.04332284445077449,
-0.022961861788500397,
-0.9987972186845036,
4.184698731103925
],
[
-0.024226779303506703,
0.9994660054997667,
-0.021926399953461295,
-0.8347879189623232
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":3
},
{
"file_path":"images/2326.jpg",
"transform_matrix":[
[
0.9370367611769083,
0.06775503444844672,
0.3425950430318276,
1.728675369817075
],
[
0.33861503269095566,
0.06376540098335143,
-0.9387618618548238,
1.2761133084455978
],
[
-0.08545155258269459,
0.9956622062448298,
0.03680765160784886,
-0.45186142155216497
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":4
},
{
"file_path":"images/2338.jpg",
"transform_matrix":[
[
0.8813299488127686,
0.10930712419638257,
0.4596841023198423,
1.841601872364886
],
[
0.4602579735036135,
0.021432087078036188,
-0.8875264860666562,
2.203777714907431
],
[
-0.10686495754937475,
0.9937769459206609,
-0.031420735264698414,
-0.6513425540316853
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":5
},
{
"file_path":"images/2340.jpg",
"transform_matrix":[
[
0.9206981141371043,
0.10766983005107991,
0.37512956471218234,
1.7572824124729307
],
[
0.3758289148940189,
0.014497557771394089,
-0.9265756566779905,
2.1902888554735798
],
[
-0.1052027060201598,
0.9940809969592199,
-0.027117561295204536,
-0.6448570997422394
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":6
},
{
"file_path":"images/2456.jpg",
"transform_matrix":[
[
0.8573272033900071,
0.05573010625622554,
0.5117462472594589,
1.2127377194929818
],
[
0.504346668458593,
0.10812185718237283,
-0.8567053764358857,
3.589515987762743
],
[
-0.10307523631885408,
0.9925743394104568,
0.0645885160203267,
-0.7812858964244921
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":7
},
{
"file_path":"images/2463.jpg",
"transform_matrix":[
[
0.9954484676003469,
0.03313166916561513,
-0.08935681759340801,
-1.6523912197371058
],
[
-0.09407310453685454,
0.19155985422161045,
-0.9769621657226002,
-0.1398620968511394
],
[
-0.015251208310143776,
0.9809215640146237,
0.1938047623155453,
1.0450828310962592
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":8
},
{
"file_path":"images/2485.jpg",
"transform_matrix":[
[
0.9891916330813568,
0.04931210562895927,
-0.13808775934265996,
-1.5941377427275276
],
[
-0.14527684385887527,
0.20202848644940175,
-0.9685448514660371,
-0.5301334614190419
],
[
-0.01986332500469107,
0.9781374171870753,
0.2070087955096492,
0.7207519715670379
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":9
},
{
"file_path":"images/2567.jpg",
"transform_matrix":[
[
0.8925616769222634,
0.06041009953577875,
0.44686046229644716,
2.161356105579826
],
[
0.43273160719096765,
0.16390283363283475,
-0.8864982894879434,
-1.3635388897736749
],
[
-0.12679514591513044,
0.9846250458937172,
0.12015202857694987,
0.5136719645620728
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":10
},
{
"file_path":"images/2573.jpg",
"transform_matrix":[
[
0.9991551080868791,
0.036873966540006284,
0.018148845018437103,
-0.6931337791903103
],
[
0.016013139447350416,
0.05741096296918211,
-0.9982222000616853,
-0.9374166026593576
],
[
-0.037850354673852955,
0.9976694301834439,
0.056771988942662915,
-0.09372331565158667
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":11
},
{
"file_path":"images/2601.jpg",
"transform_matrix":[
[
0.9557191580717495,
0.08063304345979128,
0.2830180262757794,
1.307514461284852
],
[
0.27937732346070393,
0.053565911278007576,
-0.9586860822421956,
1.0340873188077349
],
[
-0.09246189501929722,
0.9953034740476736,
0.028666923763676317,
-0.4118952305167651
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":12
},
{
"file_path":"images/2610.jpg",
"transform_matrix":[
[
0.998453318691547,
0.0305494432024383,
0.0464510701045464,
-0.2646977676315645
],
[
0.04352520702189186,
0.09031714805785866,
-0.9949614912751128,
-1.3977587790913204
],
[
-0.03459084774240067,
0.9954443953766156,
0.08884778537333513,
0.05542609888591876
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":13
},
{
"file_path":"images/2615.jpg",
"transform_matrix":[
[
0.9714382605592887,
0.04191056517656841,
-0.23356200557480283,
-1.7504395113195594
],
[
-0.23666696635240758,
0.24254180979536413,
-0.9408305998099434,
-15.926176434914343
],
[
0.01721780935810302,
0.9692352526748963,
0.24553324013896363,
7.8069388140706595
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":14
},
{
"file_path":"images/2616.jpg",
"transform_matrix":[
[
0.9367652645482465,
0.06555439110847552,
0.34376367018963866,
1.752089086403888
],
[
0.33802595877024866,
0.0848274703909408,
-0.9373061140654786,
-0.18522699767214082
],
[
-0.09060513414430646,
0.9942368541114692,
0.05730434183568592,
-0.2446859384790211
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":15
},
{
"file_path":"images/2917.jpg",
"transform_matrix":[
[
0.5109149170601622,
0.008185497969738711,
-0.8595923133371974,
-19.047031369140104
],
[
-0.8480028237921291,
0.1687171748883521,
-0.5024198699676082,
-4.9440612706440765
],
[
0.14091542983641886,
0.9856305152138304,
0.09314144680719047,
1.4995949329568972
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":16
},
{
"file_path":"images/3064.jpg",
"transform_matrix":[
[
0.9273008922603315,
0.013600882385027945,
-0.37406960744163836,
-3.7092495637182714
],
[
-0.3735407178236307,
0.09794186215691611,
-0.922428709313106,
-0.9985153665915756
],
[
0.024091229545199995,
0.9950991948723424,
0.09590205953688483,
-0.009991324859579633
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":17
},
{
"file_path":"images/5.jpg",
"transform_matrix":[
[
0.98894126695783,
0.0010595069134651477,
-0.14830390403810095,
-1.3443514171466973
],
[
-0.14827754096055015,
0.027225227695202324,
-0.9885709675201052,
1.2298767028500044
],
[
0.0029902097809862524,
0.9996287633026797,
0.0270812522498681,
-0.34892139763097163
],
[
0.0,
0.0,
0.0,
1.0
]
],
"colmap_im_id":18
}
]
```
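For what it's worth, here is a hypothetical sketch (not nerfstudio's exporter, and the values are illustrative rather than taken from the COLMAP output above) of writing per-frame intrinsics in the multi-focal layout quoted further below, so each frame keeps its own camera:

```python
import json

# Hypothetical sketch: emit one transforms.json entry per frame, carrying
# that frame's own camera intrinsics (multi-focal layout). Values are
# illustrative only.
cameras = {
    1: {"fl": 3169.28, "w": 1521, "h": 1073},
    2: {"fl": 2010.58, "w": 1062, "h": 1523},
}
frames = [
    {"file_path": "images/1126.jpg", "camera_id": 1},
    {"file_path": "images/1134.jpg", "camera_id": 2},
]

out = {"camera_model": "SIMPLE_PINHOLE", "frames": []}
for fr in frames:
    cam = cameras[fr["camera_id"]]
    out["frames"].append({
        "fl_x": cam["fl"], "fl_y": cam["fl"],
        "w": cam["w"], "h": cam["h"],
        "file_path": fr["file_path"],
        "transform_matrix": [],  # camera-to-world pose would go here
    })
print(json.dumps(out["frames"][0], indent=2))
```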
In https://github.com/autonomousvision/sdfstudio/blob/master/scripts/heritage_to_nerfstudio.py#L87 there is a workaround for this; however, it requires a config file with these parameters: radius, min_track_length, voxel_size, origin.
https://github.com/InternLandMark/LandMark?tab=readme-ov-file#prepare-dataset defines this format of `transforms.json` as well, including a multi-focal SfM variant; see:
```
### single focal example ###
{
"camera_model": "SIMPLE_PINHOLE",
"fl_x": 427,
"fl_y": 427,
"w": 547,
"h": 365,
"frames": [
{
"file_path": "./images/image_0.png",
"transform_matrix": []
}
]
}
### multi focal example ###
{
"camera_model": "SIMPLE_PINHOLE",
"frames": [
{
"fl_x": 1116,
"fl_y": 1116,
"w": 1420,
"h": 1065,
"file_path": "./images/image_0.png",
"transform_matrix": []
}
]
}
``` | closed | 2024-04-09T11:01:45Z | 2024-04-26T19:31:56Z | https://github.com/nerfstudio-project/nerfstudio/issues/3057 | [] | dberga | 5 |
ansible/ansible | python | 83,955 | `user` module use of chmod/chown can change files outside of home dir | ### Summary
The `user` module uses `os.chown` and `os.chmod` to apply ownership and permissions to the user's home dir and its contents when creating the dir. If the skel directory contains symlinks that lead out of the home dir, this can inadvertently cause files outside of the user's home dir to be changed.
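A minimal sketch (not the module's actual code) showing how a recursive permission pass that follows symlinks reaches outside the home directory:

```python
import os
import pathlib
import tempfile

# Sketch (not Ansible's actual code): a symlink copied in from skel points
# outside the home dir; a naive recursive chmod follows it and changes the
# target's permissions.
base = pathlib.Path(tempfile.mkdtemp())
outside = base / "outside.txt"      # file outside the "home" dir
outside.write_text("data")
os.chmod(outside, 0o644)

home = base / "home"
home.mkdir()
(home / "link").symlink_to(outside)  # symlink brought in from skel

for root, dirs, files in os.walk(home):
    for name in files:
        os.chmod(os.path.join(root, name), 0o600)  # follows the symlink

print(oct(outside.stat().st_mode & 0o777))  # 0o600: file outside home changed
```

Using `os.lchown`, or `follow_symlinks=False` where the platform supports it, avoids touching the link target.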
### Issue Type
Bug Report
### Component Name
user
### Ansible Version
```console
$ ansible --version
ansible [core 2.18.0.dev0] (devel f00e3d7762) last updated 2024/09/17 13:36:47 (GMT -500)
config file = None
configured module search path = ['/Users/sivel/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
ansible python module location = /Users/sivel/projects/ansibledev/ansible/lib/ansible
ansible collection location = /Users/sivel/.ansible/collections:/usr/share/ansible/collections
executable location = /Users/sivel/venvs/ansibledev/bin/ansible
python version = 3.12.4 (main, Sep 9 2024, 13:16:08) [Clang 15.0.0 (clang-1500.3.9.4)] (/Users/sivel/venvs/ansibledev/bin/python)
jinja version = 3.1.4
libyaml = True
```
### Configuration
```console
# if using a version older than ansible-core 2.12 you should omit the '-t all'
$ ansible-config dump --only-changed -t all
```
### OS / Environment
N/A
### Steps to Reproduce
### Expected Results
user module should not follow symlinks
### Actual Results
```console
user module follows symlinks
```
### Code of Conduct
- [X] I agree to follow the Ansible Code of Conduct | closed | 2024-09-17T18:38:09Z | 2024-10-22T13:00:03Z | https://github.com/ansible/ansible/issues/83955 | [
"module",
"bug",
"has_pr",
"affects_2.18"
] | sivel | 1 |
e2b-dev/code-interpreter | jupyter | 44 | Top level async/await doesn't work in JavaScript runtime | When running JS code in the Sandbox, the top level promise is never awaited:
```js
import { Sandbox } from '@e2b/code-interpreter'
const sandbox = await Sandbox.create({apiKey: ''})
const code = `
const printResult = async () => {
return 'foo';
};
printResult()
.then((result) => {
console.log("glama.result");
})
.catch((error) => {
console.log("glama.error");
});
`
console.log('running code')
const ex = await sandbox.runCode(code, {
language: 'js',
onError: (error) => {
console.log('error', error)
},
onStdout: (stdout) => {
console.log('stdout', stdout)
},
onStderr: (stderr) => {
console.log('stderr', stderr)
},
})
console.log('code executed', ex)
```
The code above will produce a following output
```
running code
code executed Execution {
results: [
Result {
isMainResult: true,
text: 'Promise { <pending> }',
html: undefined,
markdown: undefined,
svg: undefined,
png: undefined,
jpeg: undefined,
pdf: undefined,
latex: undefined,
json: undefined,
javascript: undefined,
raw: [Object],
data: undefined,
chart: undefined,
extra: {}
}
],
logs: { stdout: [], stderr: [] },
error: undefined,
executionCount: 2
}
```
The `text` field says `'Promise { <pending> }'` which means the promise was never awaited. | closed | 2024-10-25T05:56:00Z | 2025-03-17T23:58:39Z | https://github.com/e2b-dev/code-interpreter/issues/44 | [
"bug",
"improvement"
] | mlejva | 8 |
FlareSolverr/FlareSolverr | api | 924 | Logical action for when the partition is full | ### Have you checked our README?
- [X] I have checked the README
### Have you followed our Troubleshooting?
- [X] I have followed your Troubleshooting
### Is there already an issue for your problem?
- [X] I have checked older issues, open and closed
### Have you checked the discussions?
- [X] I have read the Discussions
### Environment
```markdown
CentOS Linux 7 (Core)
Kernel 4.11.8-1.el7.elrepo.x86_64
Python 3.11.6
FlareSolverr 3.3.6
Google Chrome 118.0.5993.70
Headless true
```
### Description
Everything was working fine on the last version; all of a sudden I got the following error:
```
(env311) root@server2 [/nabi/FlareSolverr]# python src/flaresolverr.py
2023-10-17 04:50:47 INFO FlareSolverr 3.3.6
2023-10-17 04:50:47 INFO Testing web browser installation...
2023-10-17 04:50:47 INFO Platform: Linux-4.11.8-1.el7.elrepo.x86_64-x86_64-with-glibc2.17
2023-10-17 04:50:47 INFO Chrome / Chromium path: /usr/bin/chromium
2023-10-17 04:50:47 INFO Chrome / Chromium major version: 118
2023-10-17 04:50:47 INFO Launching web browser...
Traceback (most recent call last):
File "/nabi/FlareSolverr/src/utils.py", line 296, in get_user_agent
driver = get_webdriver()
^^^^^^^^^^^^^^^
File "/nabi/FlareSolverr/src/utils.py", line 177, in get_webdriver
driver = uc.Chrome(options=options, browser_executable_path=browser_executable_path,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nabi/FlareSolverr/src/undetected_chromedriver/__init__.py", line 474, in __init__
super(Chrome, self).__init__(
File "/nabi/FlareSolverr/env311/lib/python3.11/site-packages/selenium/webdriver/chrome/webdriver.py", line 45, in __init__
super().__init__(
File "/nabi/FlareSolverr/env311/lib/python3.11/site-packages/selenium/webdriver/chromium/webdriver.py", line 56, in __init__
super().__init__(
File "/nabi/FlareSolverr/env311/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 206, in __init__
self.start_session(capabilities)
File "/nabi/FlareSolverr/src/undetected_chromedriver/__init__.py", line 732, in start_session
super(selenium.webdriver.chrome.webdriver.WebDriver, self).start_session(
File "/nabi/FlareSolverr/env311/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 290, in start_session
response = self.execute(Command.NEW_SESSION, caps)["value"]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/nabi/FlareSolverr/env311/lib/python3.11/site-packages/selenium/webdriver/remote/webdriver.py", line 345, in execute
self.error_handler.check_response(response)
File "/nabi/FlareSolverr/env311/lib/python3.11/site-packages/selenium/webdriver/remote/errorhandler.py", line 229, in check_response
raise exception_class(message, screen, stacktrace)
selenium.common.exceptions.WebDriverException: Message: disconnected: Unable to receive message from renderer
(failed to check if window was closed: disconnected: not connected to DevTools)
(Session info: chrome=118.0.5993.70)
Stacktrace:
#0 0x55d62a04d4e3 <unknown>
#1 0x55d629d7cc76 <unknown>
#2 0x55d629d66284 <unknown>
#3 0x55d629d65fa0 <unknown>
#4 0x55d629d649bf <unknown>
#5 0x55d629d64fed <unknown>
#6 0x55d629d639e1 <unknown>
#7 0x55d629d6c416 <unknown>
#8 0x55d629d63794 <unknown>
#9 0x55d629d65c29 <unknown>
#10 0x55d629d649bf <unknown>
#11 0x55d629d64fed <unknown>
#12 0x55d629d639e1 <unknown>
#13 0x55d629d5ba72 <unknown>
#14 0x55d629d63794 <unknown>
#15 0x55d629d62db4 <unknown>
#16 0x55d629d62a96 <unknown>
#17 0x55d629d7e402 <unknown>
#18 0x55d629d5614c <unknown>
#19 0x55d629d552b5 <unknown>
#20 0x55d629de0e2c <unknown>
#21 0x55d629de047f <unknown>
#22 0x55d629dd7de3 <unknown>
#23 0x55d629dad2dd <unknown>
#24 0x55d629dae34e <unknown>
#25 0x55d62a00d3e4 <unknown>
#26 0x55d62a0113d7 <unknown>
#27 0x55d62a01bb20 <unknown>
#28 0x55d62a012023 <unknown>
#29 0x55d629fe01aa <unknown>
#30 0x55d62a0366b8 <unknown>
#31 0x55d62a036847 <unknown>
#32 0x55d62a046243 <unknown>
#33 0x7f159c290ea5 start_thread
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/nabi/FlareSolverr/src/flaresolverr.py", line 105, in <module>
flaresolverr_service.test_browser_installation()
File "/nabi/FlareSolverr/src/flaresolverr_service.py", line 72, in test_browser_installation
user_agent = utils.get_user_agent()
^^^^^^^^^^^^^^^^^^^^^^
File "/nabi/FlareSolverr/src/utils.py", line 300, in get_user_agent
raise Exception("Error getting browser User-Agent. " + str(e))
Exception: Error getting browser User-Agent. Message: disconnected: Unable to receive message from renderer
(failed to check if window was closed: disconnected: not connected to DevTools)
(Session info: chrome=118.0.5993.70)
Stacktrace:
#0 0x55d62a04d4e3 <unknown>
#1 0x55d629d7cc76 <unknown>
#2 0x55d629d66284 <unknown>
#3 0x55d629d65fa0 <unknown>
#4 0x55d629d649bf <unknown>
#5 0x55d629d64fed <unknown>
#6 0x55d629d639e1 <unknown>
#7 0x55d629d6c416 <unknown>
#8 0x55d629d63794 <unknown>
#9 0x55d629d65c29 <unknown>
#10 0x55d629d649bf <unknown>
#11 0x55d629d64fed <unknown>
#12 0x55d629d639e1 <unknown>
#13 0x55d629d5ba72 <unknown>
#14 0x55d629d63794 <unknown>
#15 0x55d629d62db4 <unknown>
#16 0x55d629d62a96 <unknown>
#17 0x55d629d7e402 <unknown>
#18 0x55d629d5614c <unknown>
#19 0x55d629d552b5 <unknown>
#20 0x55d629de0e2c <unknown>
#21 0x55d629de047f <unknown>
#22 0x55d629dd7de3 <unknown>
#23 0x55d629dad2dd <unknown>
#24 0x55d629dae34e <unknown>
#25 0x55d62a00d3e4 <unknown>
#26 0x55d62a0113d7 <unknown>
#27 0x55d62a01bb20 <unknown>
#28 0x55d62a012023 <unknown>
#29 0x55d629fe01aa <unknown>
#30 0x55d62a0366b8 <unknown>
#31 0x55d62a036847 <unknown>
#32 0x55d62a046243 <unknown>
#33 0x7f159c290ea5 start_thread
```
After 2 days of confusion:
- Checking the source code for bugs
- Testing on multiple servers
- Testing various Chromium versions
- Testing and checking inside a Docker server
- Testing my own proxies
- Remote access tests
- Many searches on the Internet

After 2 whole days my time was wasted and I was completely disappointed and confused. I happened to see the problem in a place I never thought to look:
```
(env311) root@server2 [/nabi/FlareSolverr]# df -h
Filesystem Size Used Avail Use% Mounted on
devtmpfs 3.9G 0 3.9G 0% /dev
tmpfs 3.9G 12K 3.9G 1% /dev/shm
tmpfs 3.9G 345M 3.6G 9% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda2 191G 147G 35G 81% /
**/dev/sda5 2.0G 2.0G 0 100% /tmp**
/dev/sda1 480M 94M 357M 21% /boot
tmpfs 799M 4.0K 799M 1% /run/user/0
(env311) root@server2 [/nabi/FlareSolverr]# ls /tmp
lost+found tmp2yt92bdq tmpb11gr_37 tmpjn_tvvyt tmpr257tvnv
mc-root tmp2yz04mu1 tmpb531_a5n tmpjp8jfxex tmpr3jjpmer
puppeteer_dev_chrome_profile-0SzjUq tmp30d7g5xu tmpb60ohdlf tmpjto_kr02 tmprg8ukogj
puppeteer_dev_chrome_profile-3Apt8K tmp33z4igqs tmpb62tlgwj tmpjuvlf_0q tmprpr2p8pr
puppeteer_dev_chrome_profile-4QeEZP tmp362ptpfv tmpb7azyw1z tmpjyso8ty_ tmprt3b8gp2
puppeteer_dev_chrome_profile-4ud6nn tmp3jjo5ffz tmpb8veevlt tmpk8znbczw tmpryt2zect
puppeteer_dev_chrome_profile-66je1M tmp3ptcr7_f tmpbih28a47 tmpkapqeqrh tmps0z2rgj4
puppeteer_dev_chrome_profile-9bvJRL tmp3q0p5tvn tmpbk17g2as tmpkcq1v0il tmps9ck1hof
puppeteer_dev_chrome_profile-aiRADV tmp41hv_r18 tmpbodfwaje tmp_k_eso0x tmps9s35ylr
puppeteer_dev_chrome_profile-cD4Gtj tmp41lpsmkn tmpbplqcw92 tmpknf0819n tmpsanjn928
puppeteer_dev_chrome_profile-CHgzlj tmp43cndxd_ tmpbtmrawlp tmpknwfsjc2 tmp.sDyesBBrOt
puppeteer_dev_chrome_profile-ctHcE3 tmp45994cbn tmpbuc_2bpe tmp_ku7pms4 tmpsj_qgfyh
puppeteer_dev_chrome_profile-CxmwBZ tmp4_63ws9s tmpbvohofjo tmpkvv7gr1_ tmpspqq1iyj
puppeteer_dev_chrome_profile-DIMYkG tmp46rmqhyb tmpc0agx5bf tmpkxwovalk tmpsqkmji27
puppeteer_dev_chrome_profile-dJxScN tmp4a1v81wd tmpc1gnqdyt tmpl01qvfmt tmp.SRfhaJMKRj
puppeteer_dev_chrome_profile-DMFbWl tmp4bak079j tmpc_3w32u0 tmpl6bltdow tmpsyfwgoa7
puppeteer_dev_chrome_profile-fULBhM tmp4bru59fc tmpc71myr9r tmpl_6ip6ej tmpt2x8559k
puppeteer_dev_chrome_profile-fwoKYb tmp4gw58y3t tmpc7hhxny4 tmpl8sqrlcd tmpt3liygls
puppeteer_dev_chrome_profile-hwqVVk tmp4i2nf1g4 tmpca9pbvny tmplatw4c4l tmptavxqwki
puppeteer_dev_chrome_profile-IkFddX tmp4lcs0zap tmpcch0rzu4 tmplf4v1kn6 tmptq8jz26z
puppeteer_dev_chrome_profile-j6xqD0 tmp4ncqhcp9 tmpce6pk_3i tmplivsv7z0 tmptqiucvjb
puppeteer_dev_chrome_profile-j6zk7P tmp4o33r53x tmpcfsfpodr tmpln7lbt88 tmp_tt3bl_z
puppeteer_dev_chrome_profile-Jtxq1r tmp4ssdoik4 tmpcht3mrne tmplqkrthz0 tmpu2_nkwbf
puppeteer_dev_chrome_profile-K6arvJ tmp5026jy27 tmpcoiz36wg tmplry2m7d_ tmpu35m8hje
puppeteer_dev_chrome_profile-kT2lhW tmp54kv54od tmpcpilci1p tmplzhhf11q tmpu4pofipi
puppeteer_dev_chrome_profile-lShjEk tmp_55_s2zj tmpctgiyoh_ tmpm2hut_ra tmpufxyl_5r
puppeteer_dev_chrome_profile-MW6yqL tmp57h54539 tmpd1ixnu2h tmpm35rhru9 tmpuiqliq0m
puppeteer_dev_chrome_profile-NDvMfA tmp5a_0qvzz tmpd519b1sw tmpm4ltrc3o tmpuj4hiitq
puppeteer_dev_chrome_profile-OUxdQ0 tmp5hii5v22 tmpd61e4dwj tmpm5bbpfm_cacert.pem tmpukc24tnm
puppeteer_dev_chrome_profile-PElFIB tmp5pcmhg7c tmpd86sfbkm tmpma7j2p4q tmpuq1nek01
puppeteer_dev_chrome_profile-PGLt1z tmp5u_vc86s tmpd8u0tyhs tmpmavi51qb tmpusc5k6hx
puppeteer_dev_chrome_profile-Qw1jmP tmp5xmkehwb tmpdovfppnc tmpmjnuo4rt tmpv4rp45lz
puppeteer_dev_chrome_profile-R7ty3e tmp5zbury0u tmpdwgalisu tmpmvqwz4in tmpv7p8d756
puppeteer_dev_chrome_profile-rB2VzQ tmp643mysye tmpe3vpxb4v tmpn0jn8h09 tmpva_u10wd
puppeteer_dev_chrome_profile-RJg2aa tmp64n8v4ha tmpe8y6c6nf tmpn_8bb2jw tmpvcfbocbi
puppeteer_dev_chrome_profile-Ryg7af tmp66foayuj tmpeew1mzvf tmpncelm3x5 tmpvcmx2gob
puppeteer_dev_chrome_profile-SkLwfx tmp67pkyu2c tmpehewmk4m tmpnd3_xutf tmpve7rhdfo
puppeteer_dev_chrome_profile-sq1Tzc tmp_68ydxfq tmpeint0p2e tmpnh_i2dbz tmpviaqh_xd
puppeteer_dev_chrome_profile-ssK2K8 tmp6drheatn tmpekvv0rb7 tmpnms5jb59 tmpvsab08su
puppeteer_dev_chrome_profile-stvhQC tmp6dzs2fym tmpem1h7qt1 tmpnnxl3f_f tmpw8mw1se5
puppeteer_dev_chrome_profile-ttN9oN tmp6fympdd5 tmpeypgqhhf tmpno1yzevc tmpwc67d_sz
puppeteer_dev_chrome_profile-WGovaR tmp6fynwgiv tmpf1vj4xi3 tmpnr_twrxe tmpwdyfxjco
puppeteer_dev_profile-0ydPaB tmp6hda4572 tmpf5a94w1j tmpnsf12e1d tmpwe6mhwzk
puppeteer_dev_profile-BpCg0h tmp6qizukeu tmpf_8z_txv tmpnzscyyb_ tmpwhevqmd7
puppeteer_dev_profile-Dr1efw tmp6tcg19v2 tmpf_aguhqb tmpo3c0prdn tmpwrcfnac8
puppeteer_dev_profile-e07GgP tmp6ub8fyw0 tmpfhrn9dyg tmpo58yp6qw tmpwsr6vw5r
puppeteer_dev_profile-PDl4d3 tmp6xgws51a tmpfwfeqboa tmpo64pavxc tmpwtn3obd9
puppeteer_dev_profile-rORDLI tmp6xx3s8e_ tmpg7h7n9vo tmpo7nsy4f6 tmpwwn8e4vh
puppeteer_dev_profile-VMFiOq tmp7917f30_ tmpg828le66 tmpocdxhc7k tmpwwx9j94z
puppeteer_dev_profile-XcDoYh tmp79il713d tmpgfg3urgw tmpoj3toohz tmpwy9dr97l
puppeteer_dev_profile-xviTMQ tmp7is5k054 tmpgfydb54e tmpon8ujak3 tmpx14vdegv
puppeteer_dev_profile-FviTMD tmp7k08uh2l tmpgm3qaeaq tmpoqg5_94s tmpx1jhg2ni
tmp06nqkm8t tmp7u65245r tmpgmljq48b tmpotd4d_vn tmpx43zimjq
tmp0gnnm8br tmp7wlv0a8u tmpgmsorqyk tmpotz3xvb2 tmpx45uw_8g
tmp0je9pzxp tmp802et1vy tmpgpecku2f tmpox1ylym2 tmpx5fxhusf
tmp_0o9updu tmp80z8zedd tmpgqay12uh tmpp2ak4x_p tmpxciftnqc
tmp0rufnanu tmp8f3ms4x0 tmpg_snxj4k tmpp47456to tmpxd5l6pwi
tmp0udaycff tmp8gakl9sc tmph15l8u5y tmpp7vtp2tp tmpxe3b119d
tmp0vtu65sj tmp8ltbafv_ tmph7a0nfa2 tmpp8ngbuf6 tmpxnz34q4m
tmp0wvtbzvh tmp8qmehqxu tmph7p_zmc4 tmpp8wyznd1 tmpxotelrjr
tmp0yh1a7cd tmp8vc49gq6 tmphadwh90l tmppbp7rsik tmpxp2lntx1
tmp11oifb1g tmp8x_p_b61 tmphb8m0b4d tmpps1yhb3q tmpy2ijw0dz
tmp1g7l7_a9 tmp8yfyr_gd tmphkbmk7yi tmppvruabbs tmpy5x6fz9e
tmp1h56by0l tmp91uz4f25 tmphmvs9y2w tmppx2wn817 tmpy5yq2mj4
tmp1i_9n_lb tmp9a94dtyz tmphnhkqizk tmppzn04xan tmpyilfafnh
tmp1vdek48h tmp9b6byfjz tmphywe4yc_ tmpq1ciswa_ tmp.yN1qSgp9SE
tmp1xa3flh9 tmp9eks9ihg tmpi5pc0x4e tmpq2wx9ty4 tmpy_nwe9fv
tmp1ypdcsi6 tmp9f_5u_sx tmpi9tmn3my tmpq33_tvdm tmpyx_skuvt
tmp1ytcbv3b tmp9fyok4z1 tmpidqeorbq tmpq_3g6ne_ tmpz0tya1h5
tmp21tqb0bj tmp9hcmsggc tmpihx86lra tmpq62006dz tmpz2arsh59
tmp25nw7g5h tmp9xipkv0n tmpikjg_q50 tmpqde8thax tmpz4ftl4vb
tmp26lrd1c8 tmp9zq5tz22 tmpilogapo9 tmpqiir8cxt tmpzbuvm_ac
tmp29vfkego tmpa31yefgk tmpiwv9b5mf tmpqklzgzik tmpzi1txfxh
tmp2kw9fg3q tmpa9iwqb4m tmpj5n8_gly tmpqkwrjopu tmpzjig9q_q
tmp2q2tikrd tmpady3pte5 tmpj8pmiadr tmpqpbtytz1 tmpzjltxmzs
tmp2ulfbx_o tmpaem007ub tmpjbqahzqd tmpqpj_3l2u tmpzn6rdyug
tmp2veu1dyb tmpajray7il tmpjfousy0k tmpqtpaw3v_ tmpzsasixwe
tmp2xxt4oum tmpausb12dg tmpjkhslcfz tmpqz358yzi tmpzwbhyvsz
tmp2y89ail6 tmpaxrwzkxt tmpjmqim467 tmpr_1dhnmp
```
Yes! My `/tmp` partition was full!
I just wanted to report this so that if you encounter such errors, you can solve the problem quickly.
Meanwhile, I cleared `/tmp`, but if it fills up again quickly, I can't easily increase the size of this partition. Maybe a solution for me would be to set the tmp directory path myself.
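A minimal sketch of that idea (the path below is an example, not a FlareSolverr setting): Python's `tempfile` module, which appears to be creating the `tmp…` entries listed above, honors the `TMPDIR` environment variable:

```python
import os
import pathlib
import tempfile

# Illustrative workaround sketch: redirect Python's tempfile module to a
# directory on a partition with free space. The chosen path is an example.
alt_tmp = pathlib.Path.home() / "flaresolverr-tmp"
alt_tmp.mkdir(parents=True, exist_ok=True)

os.environ["TMPDIR"] = str(alt_tmp)
tempfile.tempdir = None          # drop the cached default so TMPDIR is re-read
print(tempfile.gettempdir())     # now resolves to the new directory
```

For a service, exporting `TMPDIR` in the unit or shell environment before launch would have the same effect.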
And of course, I'm still not sure whether the filling of `/tmp` is caused by FlareSolverr.
But in any case, a sensible behavior would be for the program to raise a clear error when the partition has no space.
Anyway, my main goal was just to inform.
### Logged Error Messages
```text
Above.
```
### Screenshots
_No response_ | open | 2023-10-17T03:02:42Z | 2023-10-17T19:19:14Z | https://github.com/FlareSolverr/FlareSolverr/issues/924 | [
"help wanted"
] | NabiKAZ | 2 |
WZMIAOMIAO/deep-learning-for-image-processing | deep-learning | 794 | Generating Grad-CAM for your own model | Hello, thanks a lot for making this video. I understand how to generate Grad-CAM with my own model as explained in the video, but I have a question: my modified model is a ResNet101 followed by a Transformer for multi-label image classification, so my `fc.weight` layer comes after the Transformer, while my target layer is the last convolutional layer of the ResNet101. Is this reasonable and feasible? I hope you can reply when you have time. Thank you very much!
| open | 2024-03-08T05:25:41Z | 2024-03-08T05:25:41Z | https://github.com/WZMIAOMIAO/deep-learning-for-image-processing/issues/794 | [] | Zhong1015 | 0 |
python-gino/gino | asyncio | 202 | How Can I Establish An SSL Connection With A Database Using Gino | * GINO version: 0.7.0
* Python version: 3.6
* Operating System: MacOSX
### Description
How can I make an SSL Connection with Gino? Is this possible?
### What I Did
Couldn't find an example of this in documentation or in issues.
| closed | 2018-04-19T21:00:33Z | 2019-02-28T20:35:44Z | https://github.com/python-gino/gino/issues/202 | [
"enhancement",
"help wanted"
] | mashaalmemon | 7 |
ydataai/ydata-profiling | data-science | 1,426 | to_html ignores sensitive parameter and exposes data | ### Current Behaviour
In ydata-profiling v4.5.0, `ProfileReport.to_html()` ignores the `sensitive` parameter and exposes data, similar to the bug reported in #1300.
### Expected Behaviour
No sensitive data shown.
### Data Description
A list of integers from 0 - 9, inclusive.
### Code that reproduces the bug
```Python
import pandas as pd
from ydata_profiling import ProfileReport
data = [[i] for i in range(10)]
df = pd.DataFrame(data, columns=['sensitive_column'])
displayHTML(ProfileReport(df, sensitive=True).to_html())
```
### pandas-profiling version
v4.5.0
### Dependencies
```Text
pandas==1.1.5
```
### OS
_No response_
### Checklist
- [X] There is not yet another bug report for this issue in the [issue tracker](https://github.com/ydataai/pandas-profiling/issues)
- [X] The problem is reproducible from this bug report. [This guide](http://matthewrocklin.com/blog/work/2018/02/28/minimal-bug-reports) can help to craft a minimal bug report.
- [X] The issue has not been resolved by the entries listed under [Common Issues](https://pandas-profiling.ydata.ai/docs/master/pages/support_contrib/common_issues.html). | open | 2023-08-11T18:25:03Z | 2023-08-24T15:38:41Z | https://github.com/ydataai/ydata-profiling/issues/1426 | [
"information requested ❔"
] | ch-nickgustafson | 1 |
gee-community/geemap | streamlit | 842 | Add support for plotly | Reference: https://plotly.com/python/maps | closed | 2021-12-27T18:14:46Z | 2022-01-08T01:43:44Z | https://github.com/gee-community/geemap/issues/842 | [
"Feature Request"
] | giswqs | 1 |
holoviz/panel | jupyter | 6,996 | ValueError when using inclusive_bounds | I was working on adding support for Pydantic dataclasses to Panel when I stumbled upon this bug:
```python
import param
import panel as pn
pn.extension()
class SomeModel(param.Parameterized):
int_field = param.Integer(default=1, bounds=(0,10), inclusive_bounds=(False, False))
float_field = param.Integer(default=1, bounds=(0, 10), inclusive_bounds=(False, False))
model = SomeModel()
pn.Param(model).servable()
```
If you drag either of the sliders to the end you will get one of:
```bash
ValueError: Integer parameter 'SomeModel.float_field' must be less than 10, not 10.
ValueError: Integer parameter 'SomeModel.float_field' must be less than 10, not 10.
``` | open | 2024-07-17T05:18:23Z | 2025-01-20T19:18:52Z | https://github.com/holoviz/panel/issues/6996 | [] | MarcSkovMadsen | 1 |
Esri/arcgis-python-api | jupyter | 2,214 | GeoAccessor.compare - Incorrect match_field column renaming | **Describe the bug**
Whatever column you use as the match_field gets truncated in the output when using GeoAccessor.compare
'global_join' --> becomes --> 'global_'
'test' --> becomes --> ' ' (it returns empty)
**To Reproduce**
```python
differences = archive_sdf.spatial.compare(current_sdf, match_field='global_join')
change_log = {
"modified": len(differences['modified_rows']['global_join']),
"inserted": len(differences['added_rows']['global_join']),
"deleted": len(differences['deleted_rows']['global_join'])
}
```
error:
```python
"deleted": len(differences['deleted_rows']['global_join'])
~~~~~~~~~~~~~~~~~~~~~~~~~~~^^^^^^^^^^^^^^^
File "...\Python\envs\arcgispro-py3\Lib\site-packages\pandas\core\frame.py", line 3761, in __getitem__
indexer = self.columns.get_loc(key)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "...\Python\envs\arcgispro-py3\Lib\site-packages\pandas\core\indexes\base.py", line 3654, in get_loc
raise KeyError(key) from err
KeyError: 'global_join'
```
**Expected behavior**
The input match_field name should not change: if you input `"global_join"`, the resulting dictionary should contain `"global_join"`.
**Platform (please complete the following information):**
- OS: Windows
- Python API Version: `2.4.0`
**Additional context**
This may be the cause:
field names are renamed in the 2.4.0 version of the _assessor.py file using
```python
new_column_name = column[: -len("_new")]
new_column_name = column[: -len("_old")]
```
it removes the last 4 characters from every column, including the match_field
in an older version of the API this was used instead
```python
column.rstrip("_new")
column.rstrip("_old")
```
The logic needs to be adjusted so that only the "_new" or "_old" suffix is removed from the appropriate columns and not the match_field.
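A minimal sketch of a suffix-aware rename (the helper name is illustrative, not the library's). Note that neither approach above is quite right: `str.rstrip("_new")` strips a character *set*, not a suffix, so `'global_join_new'.rstrip('_new')` yields `'global_joi'`. An explicit `endswith` check avoids both problems:

```python
def strip_compare_suffix(column: str) -> str:
    """Remove a trailing "_new"/"_old" only when it is actually present,
    so a match_field like "global_join" keeps its full name."""
    for suffix in ("_new", "_old"):
        if column.endswith(suffix):
            return column[: -len(suffix)]
    return column

print(strip_compare_suffix("global_join_new"))  # global_join
print(strip_compare_suffix("global_join"))      # global_join  (unchanged)
print(strip_compare_suffix("test"))             # test         (unchanged)
```

The same check could wrap the existing slicing in `_assessor.py` so that only the renamed `_new`/`_old` columns are touched.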
| closed | 2025-01-31T00:28:04Z | 2025-02-03T19:11:51Z | https://github.com/Esri/arcgis-python-api/issues/2214 | [
"bug"
] | BrackstonLand | 2 |
Esri/arcgis-python-api | jupyter | 1,814 | FeatureLayerCollectionManager insert_layer() fails with unknown error | I have successfully created an empty feature service with create_empty_service(). When attempting to add a layer to it with insert_layer, using a file geodatabase containing a single point feature class, it returns "Exception: Unknown Error (Error Code: 500)".
Inspecting the uploaded File Geodatabase and its resulting hosted feature layer, they both look fine. And when I examine the empty feature service, it does now have the expected layer in it. If I add it to a map and add some points, it appears to behave correctly.
So I'm wondering what the error is about, and whether I should trust that the feature service is fully functional.
I've attached a zipped Notebook and File Geodatabase which can be used to reproduce the issue:
- [insert_layer fails with unknown error.ipynb.zip](https://github.com/Esri/arcgis-python-api/files/15120091/insert_layer.fails.with.unknown.error.ipynb.zip)
- [my_fgdb.gdb.zip](https://github.com/Esri/arcgis-python-api/files/15120090/my_fgdb.gdb.zip)
This is with ArcGIS API for Python version: 2.2.0.1
| open | 2024-04-25T20:10:10Z | 2024-05-06T12:24:11Z | https://github.com/Esri/arcgis-python-api/issues/1814 | [
"bug"
] | knoopum | 9 |
koxudaxi/datamodel-code-generator | pydantic | 2,207 | Can't extract models from openapi via paths and references | **Describe the bug**
When trying to extract models from openapi, where the main openapi schema itself does not contain traditional `components/schemas` it does not seem to behave correctly.
No models are found, or otherwise references don't get resolved properly.
**To Reproduce**
Example schema:
Please see the data in this commit https://github.com/Wim-De-Clercq/testingrepo/commit/1c0c8e49f976b577f951350aca6484199c2fd979
It has a little test as well, although nothing is truly asserted.
**Expected behavior**
I expect the schemas within `./cat/schemas.yaml` to be turned into models.
**Version:**
- OS: Kubuntu 24.04
- Python version: 3.11
- datamodel-code-generator version: `main` branch.
**Additional context**
The test in my commit keeps complaining about `Models not found in the input data`.
I actually come from a larger project and tried to reproduce it in a small test.
In my large project (with, in my eyes, a similar setup) I instead end up with file-not-found errors. So I'm pretty sure the ref resolving also has problems with this kind of setup, somewhere beyond this test. But at the moment I keep getting stuck on this error when trying to reproduce it further.
| open | 2024-12-05T12:57:03Z | 2024-12-05T14:10:05Z | https://github.com/koxudaxi/datamodel-code-generator/issues/2207 | [] | Wim-De-Clercq | 0 |
seleniumbase/SeleniumBase | pytest | 3,297 | `sb.cdp.get_all_cookies()` is hitting errors | `sb.cdp.get_all_cookies()` is hitting errors.
```
*** AttributeError: "Tab" has no attribute "closed".
Traceback (most recent call last):
File ".../seleniumbase/undetected/cdp_driver/tab.py", line 1308, in __getattr__
return getattr(self._target, item)
AttributeError: 'TargetInfo' object has no attribute 'closed'
```
There is a workaround for now:
`sb.cdp.get_cookie_string()`, which gets the string result from `page.evaluate("document.cookie")`. | closed | 2024-11-27T17:53:48Z | 2024-11-28T00:46:56Z | https://github.com/seleniumbase/SeleniumBase/issues/3297 | [
"bug",
"workaround exists",
"UC Mode / CDP Mode"
] | mdmintz | 1 |
dfki-ric/pytransform3d | matplotlib | 145 | Release 1.9 does not contain some of the sub packages | This file from pypi.org, <https://files.pythonhosted.org/packages/2a/9d/1625a751df6a135bf09f033f75288db27d58bf84accd0eb36437dda09510/pytransform3d-1.9.tar.gz>, is missing a lot of the code, which breaks functionality.
The reason seems to be that setup.py contains the line `packages=['pytransform3d']`, which is no longer sufficient now that some of the packages are folders rather than files directly under pytransform3d. It should probably be changed to `packages=find_packages()`, with `from setuptools import setup, find_packages` at line 2 of the setup.py file.
To see the difference, compare the result of `python setup.py bdist_wheel` on the current code with the same command after the proposed change.
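For a quicker check without building wheels, one can mock the new folder layout and compare what `find_packages()` discovers against the hard-coded list (the `rotations` subpackage below is only illustrative):

```python
import os
import tempfile

from setuptools import find_packages

# Mock a source tree where a subpackage lives in a folder,
# as in the 1.9 layout described above.
with tempfile.TemporaryDirectory() as root:
    for pkg in ("pytransform3d", os.path.join("pytransform3d", "rotations")):
        os.makedirs(os.path.join(root, pkg))
        open(os.path.join(root, pkg, "__init__.py"), "w").close()

    found = sorted(find_packages(where=root))

print(found)              # ['pytransform3d', 'pytransform3d.rotations']
print(["pytransform3d"])  # what the hard-coded setup.py ships instead
```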
I suggest removing the current release from pypi.org
Anyway, thank you very much for this great package.
Unfortunately due to legalese I will not be able to open a PR with that change.
| closed | 2021-06-20T12:25:44Z | 2021-06-25T14:34:24Z | https://github.com/dfki-ric/pytransform3d/issues/145 | [] | brainoom | 3 |
matplotlib/matplotlib | matplotlib | 28,969 | Unexpected behavior of `add_patch` with points around zero and large `linewidth` | ### Bug summary
`add_patch` method renders incorrectly, if the linewidth is set to a large number and the patch is generated from a random path using `PathPatch`.
This bug was noticed while using `shapely`, but isolated to this minimal example
### Code for reproduction
```Python
import matplotlib.pyplot as plt
import numpy as np
from matplotlib.patches import PathPatch
from matplotlib.path import Path
if __name__ == "__main__":
num_points = 50
path = np.random.randn(num_points, 2)
path = Path(np.asarray(path))
path_patch = PathPatch(path, facecolor="none", edgecolor="b", linewidth=7)
fig, ax = plt.subplots(figsize=(5, 5))
ax.set_xlim([-150, 150])
ax.set_ylim([-150, 150])
ax.add_patch(path_patch)
plt.show()
```
### Actual outcome

### Expected outcome
If the linewidth is set to 1, then the behaviour is more reasonable

### Additional information
_No response_
### Operating system
OS/X Sonoma
### Matplotlib Version
3.9.2
### Matplotlib Backend
macosx
### Python version
3.9.6
### Jupyter version
_No response_
### Installation
pip | closed | 2024-10-11T13:05:21Z | 2024-10-11T16:39:05Z | https://github.com/matplotlib/matplotlib/issues/28969 | [] | aivarsoo | 2 |
sammchardy/python-binance | api | 1,552 | []()[]()@Tanimola50 ![[DO_NOT_DELETE_V2]_OKTO_KEY_BACKUP_8905.txt](https://github.com/user-attachments/files/17816801/DO_NOT_DELETE_V2._OKTO_KEY_BACKUP_8905.txt) | []()[]()@Tanimola50 ![[DO_NOT_DELETE_V2]_OKTO_KEY_BACKUP_8905.txt](https://github.com/user-attachments/files/17816801/DO_NOT_DELETE_V2._OKTO_KEY_BACKUP_8905.txt)
### @ binance.com/
# login Tanimola50>
_Originally posted by @Tanimola50 in https://github.com/bc-game-project/bcgame-crash/issues/68#issuecomment-2485916500_ | closed | 2025-02-15T17:25:26Z | 2025-02-15T17:48:08Z | https://github.com/sammchardy/python-binance/issues/1552 | [] | Tanimola50 | 1 |
microsoft/nni | data-science | 5,176 | aws S3 as shared storage | I would like to be sure about the development roadmap for NNI.
Currently Azure Blob and NFS are supported as shared storage; are there any plans to support AWS S3 in the future? | open | 2022-10-23T13:47:07Z | 2022-11-24T01:23:31Z | https://github.com/microsoft/nni/issues/5176 | [
"feature request"
] | makonaga | 0 |
tartiflette/tartiflette | graphql | 122 | Improve type resolving | Since `tartiflette` only allows us to deal with `dict`s, resolving the `__typename` field for a `UnionType` is difficult and is delegated to the `Resolver` implementation, which needs to return the `__typename` in the resolved value.
Maybe we could add an optional `resolve_type` argument to the `Resolver` decorator, taking a callable in charge of determining the `__typename` to return for the resolved value? This `resolve_type` parameter could be required for a `UnionType` resolver which returns a value whose `__typename` couldn't be resolved by the default `resolve_type` function (a `dict` without a `__typename` key, for instance).
```python
@Resolver(
"Query.catOrDog",
resolve_type=lambda result: "Dog" if is_dog(result) else "Cat",
)
def query_cat_or_dog_resolver(*_args, **_kwargs):
return {
"name": "Dogo",
}
``` | closed | 2019-02-12T13:54:43Z | 2019-09-11T14:51:25Z | https://github.com/tartiflette/tartiflette/issues/122 | [
"enhancement",
"question"
] | Maximilien-R | 1 |
sktime/pytorch-forecasting | pandas | 1,206 | Update scikit-learn version requirement to allow v1.2 | Greetings,
Scikit-learn 1.2.0 was released recently https://github.com/scikit-learn/scikit-learn/releases/tag/1.2.0. Pytorch-forecasting currently requires `scikit-learn = ">=0.24,<1.2"`. It would be nice to let version 1.2 install.
Thank you. | closed | 2022-12-20T05:34:34Z | 2023-04-11T19:39:33Z | https://github.com/sktime/pytorch-forecasting/issues/1206 | [] | melanopsis | 2 |
vipstone/faceai | tensorflow | 35 | I tried it: a photo of 蔡徐kun (Cai Xukun) gets recognized as female. How do you explain that? | A photo of 蔡徐kun (Cai Xukun) gets recognized as female. How do you explain that? | open | 2019-07-22T02:32:09Z | 2025-03-20T05:52:37Z | https://github.com/vipstone/faceai/issues/35 | [] | AnswerNo2 | 17 |
ymcui/Chinese-BERT-wwm | tensorflow | 145 | Chinese Wikipedia dataset | Hello Professor Cui,
After downloading the Wikipedia dump and processing it with wikiextractor, I get the following statistics:

I also counted how many paragraphs there are in total: roughly 5,975,674 (about 6M). That seems quite different from the 13.6M lines of input text in your paper; even adding the traditional-Chinese version only brings the total to about 12M, so there is still a gap.
So I would like to ask whether there is a problem with my processing approach. I am using the latest Wikipedia dump.
| closed | 2020-09-18T07:41:37Z | 2020-09-21T07:03:29Z | https://github.com/ymcui/Chinese-BERT-wwm/issues/145 | [] | liuwei1206 | 2 |
CorentinJ/Real-Time-Voice-Cloning | python | 480 | Saving hparams in model files | Having spent several hours to get the Swedish model (#257) to work, I think it is a good idea to save the hparams along with the models. Maybe even load them at run time from the model files.
Then we can mix and match in the toolbox, and it can check for compatibility, e.g. `speaker_embedding_size` in encoder and synth, `sample_rate` between the synth and vocoder. Then we can make helpful error messages to replace the python exceptions that occur when models are incompatible.
Since we have to re-release models when #472 gets merged to master, it is an opportunity to implement this new checkpoint format. In addition to "model_state" and "optimizer_state", we can also save a dictionary of "model_parameters" which would contain something like:
#### Encoder
* `sample_rate`
* `speaker_embedding_size`
#### Synthesizer
* symbols
* language?
* `speaker_embedding_size`
* `sample_rate`, `n_mels`, `n_fft`, etc.
#### Vocoder
* `sample_rate`, `n_mels`, `n_fft`, etc.
I'll put this in a new issue and see if anyone wants this feature.
_Originally posted by @blue-fish in https://github.com/CorentinJ/Real-Time-Voice-Cloning/pull/472#issuecomment-671403282_ | closed | 2020-08-10T14:54:59Z | 2020-08-11T13:08:41Z | https://github.com/CorentinJ/Real-Time-Voice-Cloning/issues/480 | [
"enhancement"
] | ghost | 2 |
faif/python-patterns | python | 300 | can you show your pycallgraph code? thank you. | closed | 2019-07-15T07:38:36Z | 2019-07-31T18:43:17Z | https://github.com/faif/python-patterns/issues/300 | [
"question"
] | daidai21 | 2 | |
2noise/ChatTTS | python | 547 | gpt | use default LlamaModel for importing TELlamaModel error: No module named 'transformer_engine' | The above error is raised when starting the API. | closed | 2024-07-08T09:22:16Z | 2024-11-21T04:02:18Z | https://github.com/2noise/ChatTTS/issues/547 | [
"documentation",
"stale"
] | panpan123456 | 2 |
Gozargah/Marzban | api | 1108 | Add node error monitoring | Whenever a node gets an error and exits the connected state, a notification should be sent. | closed | 2024-07-15T16:57:58Z | 2024-09-02T20:20:20Z | https://github.com/Gozargah/Marzban/issues/1108 | [
"Feature",
"OnHold"
] | erfjab | 0 |
hbldh/bleak | asyncio | 992 | Device may already be connected when calling connect with BlueZ | If the device is connected and the application/Docker container is then restarted without disconnecting it, the next connection attempt will fail: the device is still connected on the host, but we think it is not, so the connect call fails.
```
"/org/bluez/hci1/dev_BD_24_6F_85_AA_61": {
"org.freedesktop.DBus.Introspectable": {},
"org.bluez.Device1": {
"Address": "BD:24:6F:85:AA:61",
"AddressType": "public",
"Name": "Dream~BD246F85AA61",
"Alias": "Dream~BD246F85AA61",
"Appearance": 962,
"Icon": "input-mouse",
"Paired": false,
"Trusted": false,
"Blocked": false,
"LegacyPairing": false,
"RSSI": -36,
"Connected": true,
"UUIDs": [
"00001800-0000-1000-8000-00805f9b34fb",
"00001801-0000-1000-8000-00805f9b34fb",
"0000180a-0000-1000-8000-00805f9b34fb",
"0000ffd0-0000-1000-8000-00805f9b34fb",
"0000ffd5-0000-1000-8000-00805f9b34fb"
],
"Modalias": "usb:v045Ep0040d0300",
"Adapter": "/org/bluez/hci1",
"ManufacturerData": {
"20808": {
"__type": "<class 'bytearray'>",
"repr": "bytearray(b'364656')"
}
},
"ServicesResolved": true
},
"org.freedesktop.DBus.Properties": {}
},
``` | closed | 2022-09-10T02:42:08Z | 2022-09-10T20:54:20Z | https://github.com/hbldh/bleak/issues/992 | [] | bdraco | 5 |
ExpDev07/coronavirus-tracker-api | fastapi | 240 | Heroku application down? | At least during the last couple of hours, a timeout is triggered when trying to access the Heroku instance. See for example https://coronavirus-tracker-api.herokuapp.com/v2/sources
Could it have to do with #174 ? | closed | 2020-03-31T00:15:12Z | 2020-03-31T07:29:06Z | https://github.com/ExpDev07/coronavirus-tracker-api/issues/240 | [
"question"
] | cyenyxe | 2 |
vitalik/django-ninja | pydantic | 418 | [BUG] Pydantic schema is using a higher version json schema than open api | **Describe the bug**
It seems pydantic is generating schemas in line with the latest JSON Schema draft, but OpenAPI expects an older version.
This issue has some info, and FastAPI appears to have solved it, so it might be worth solving this in ninja too.
https://github.com/samuelcolvin/pydantic/issues/1164
The outcome of this at the moment is codegen tools for openapi do not work correctly with ninja produced openapi docs.
swagger codegen simply silently fails, https://github.com/openapi-generators/openapi-python-client explodes
The type causing the issue is used as part of the default pagination decorator so I imagine this applies to most ninja users. | closed | 2022-04-08T23:09:32Z | 2022-07-01T13:34:40Z | https://github.com/vitalik/django-ninja/issues/418 | [] | shughes-uk | 3 |
ckan/ckan | api | 8,194 | Supervisor config for worker should set environment variable PYTHONUNBUFFERED | ## CKAN version
2.10.4
## Describe the bug
While testing the background worker, I was not able to get any output of the worker processes to be written to the log files.
### Steps to reproduce
```
# clear the logs
echo "" > /var/log/ckan/ckan-worker.stderr.log
echo "" > /var/log/ckan/ckan-worker.stdout.log
# restart the ckan worker via
supervisorctl stop ckan-worker:*
supervisorctl start ckan-worker:*
# create a test job
ckan jobs test default
# check whether anything is written to the logs
du /var/log/ckan/ckan-worker.stdout.log
# 4 /var/log/ckan/ckan-worker.stdout.log
```
### Expected behavior
When I submit a test job, I would like to immediately see in the logs whether things work.
### Problem and proposed solution
The problem is that Python buffers its output by default. While `ckan jobs worker default` writes its output to the terminal without any delay, running it with pipes attached buffers the output (this can also be reproduced with `ckan jobs worker default > test.log`).
A very simple solution would be to set the PYTHONUNBUFFERED environment variable in the supervisor configuration file.
```
environment=PYTHONUNBUFFERED=1
```
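The effect of the variable can be checked without supervisor at all. The probe below (illustrative, not CKAN code) runs a child Python process with stdout attached to a pipe, as supervisor does, and reports whether the child's text layer writes through immediately:

```python
import os
import subprocess
import sys

PROBE = "import sys; print(sys.stdout.write_through)"

def stdout_writes_through(unbuffered: bool) -> str:
    """Run a child with stdout on a pipe and report whether its text
    layer flushes every write (True) or block-buffers (False)."""
    env = dict(os.environ)
    if unbuffered:
        env["PYTHONUNBUFFERED"] = "1"
    else:
        env.pop("PYTHONUNBUFFERED", None)
    result = subprocess.run(
        [sys.executable, "-c", PROBE],
        env=env, capture_output=True, text=True,
    )
    return result.stdout.strip()

print(stdout_writes_through(False))  # False -> log lines sit in a block buffer
print(stdout_writes_through(True))   # True  -> log lines reach the file immediately
```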
I could create a PR for this if this is an acceptable solution. :pray: | open | 2024-04-20T08:17:18Z | 2024-08-06T10:21:30Z | https://github.com/ckan/ckan/issues/8194 | [
"Good for Contribution"
] | paulmueller | 4 |
litestar-org/polyfactory | pydantic | 494 | Bug: ModelFactory can not generate list of classes with a Pydantic field alias | ### Description
I am trying to generate a list of classes but an error is thrown:
```
pydantic_core._pydantic_core.ValidationError: 1 validation error for Status
cardstatus.0
Input should be a valid dictionary or instance of CardStatus [type=model_type, input_value=[CardStatus(link_id=5265,...ture=-12651701372362.7)], input_type=list]
For further information visit https://errors.pydantic.dev/2.5/v/model_type
```
This error only occurs when I define a field alias. Without a field alias it works.
### URL to code causing the issue
_No response_
### MCVE
```python
from typing import Optional, Union
from polyfactory import Use
from polyfactory.factories.pydantic_factory import ModelFactory
from pydantic import BaseModel, Field, conlist, StrictInt, StrictFloat
class CardStatus(BaseModel):
link_id: Optional[StrictInt] = Field(None, alias="linkId")
temperature: Optional[Union[StrictFloat, StrictInt]] = None
__properties = ["linkId", "temperature"]
class CardsFactory(ModelFactory):
__model__ = CardStatus
__allow_none_optionals__ = False
class Status(BaseModel):
# cellular_cards: Optional[conlist(CardStatus)] = Field(None) # working
card_status: Optional[conlist(CardStatus)] = Field(None, alias="cardstatus") # not working
class StatusFactory(ModelFactory):
__model__ = Status
__randomize_collection_length__ = True
__min_collection_length__ = 2
__max_collection_length__ = 5
__allow_none_optionals__ = False
cellular_cards = Use(CardsFactory.batch, size=3)
if __name__ == '__main__':
gen = StatusFactory()
e: Status = gen.build()
print(e.model_dump())
```
### Steps to reproduce
```bash
1. Run MCVE
```
### Screenshots
_No response_
### Logs
_No response_
### Release Version
Python 3.11
pydantic==2.5.3
pydantic-settings==2.1.0
polyfactory==2.14.1
### Platform
- [ ] Linux
- [X] Mac
- [ ] Windows
- [ ] Other (Please specify in the description above) | closed | 2024-01-24T11:05:52Z | 2025-03-20T15:53:13Z | https://github.com/litestar-org/polyfactory/issues/494 | [
"bug"
] | keviloper | 1 |
microsoft/JARVIS | pytorch | 132 | requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0) | Hello,
When running in inference mode, I got this error message. Here is the log of the interaction. Any suggestions appreciated.
```
2023-04-11 10:47:21,867 - awesome_chat - INFO - input: For the image at location /images/example_page.jpg please draw a bounding box around each block of text in the image.
2023-04-11 10:47:21,871 - awesome_chat - DEBUG - [{'role': 'system', 'content': '#1 Task Planning Stage: The AI assistant can parse user input to several tasks: [{"task": task, "id": task_id, "dep": dependency_task_id, "args": {"text": text or <GENERATED>-dep_id, "image": image_url or <GENERATED>-dep_id, "audio": audio_url or <GENERATED>-dep_id}}]. The special tag "<GENERATED>-dep_id" refer to the one generated text/image/audio in the dependency task (Please consider whether the dependency task generates resources of this type.) and "dep_id" must be in "dep" list. The "dep" field denotes the ids of the previous prerequisite tasks which generate a new resource that the current task relies on. The "args" field must in ["text", "image", "audio"], nothing else. The task MUST be selected from the following options: "token-classification", "text2text-generation", "summarization", "translation", "question-answering", "conversational", "text-generation", "sentence-similarity", "tabular-classification", "object-detection", "image-classification", "image-to-image", "image-to-text", "text-to-image", "text-to-video", "visual-question-answering", "document-question-answering", "image-segmentation", "depth-estimation", "text-to-speech", "automatic-speech-recognition", "audio-to-audio", "audio-classification", "canny-control", "hed-control", "mlsd-control", "normal-control", "openpose-control", "canny-text-to-image", "depth-text-to-image", "hed-text-to-image", "mlsd-text-to-image", "normal-text-to-image", "openpose-text-to-image", "seg-text-to-image". There may be multiple tasks of the same type. Think step by step about all the tasks needed to resolve the user\'s request. Parse out as few tasks as possible while ensuring that the user request can be resolved. Pay attention to the dependencies and order among tasks. 
If the user input can\'t be parsed, you need to reply empty JSON [].'}, {'role': 'user', 'content': 'Give you some pictures e1.jpg, e2.png, e3.jpg, help me count the number of sheep?'}, {'role': 'assistant', 'content': '[{"task": "image-to-text", "id": 0, "dep": [-1], "args": {"image": "e1.jpg" }}, {"task": "object-detection", "id": 1, "dep": [-1], "args": {"image": "e1.jpg" }}, {"task": "visual-question-answering", "id": 2, "dep": [1], "args": {"image": "<GENERATED>-1", "text": "How many sheep in the picture"}} }}, {"task": "image-to-text", "id": 3, "dep": [-1], "args": {"image": "e2.png" }}, {"task": "object-detection", "id": 4, "dep": [-1], "args": {"image": "e2.png" }}, {"task": "visual-question-answering", "id": 5, "dep": [4], "args": {"image": "<GENERATED>-4", "text": "How many sheep in the picture"}} }}, {"task": "image-to-text", "id": 6, "dep": [-1], "args": {"image": "e3.jpg" }}, {"task": "object-detection", "id": 7, "dep": [-1], "args": {"image": "e3.jpg" }}, {"task": "visual-question-answering", "id": 8, "dep": [7], "args": {"image": "<GENERATED>-7", "text": "How many sheep in the picture"}}]'}, {'role': 'user', 'content': 'Look at /e.jpg, can you tell me how many objects in the picture? Give me a picture and video similar to this one.'}, {'role': 'assistant', 'content': '[{"task": "image-to-text", "id": 0, "dep": [-1], "args": {"image": "/e.jpg" }}, {"task": "object-detection", "id": 1, "dep": [-1], "args": {"image": "/e.jpg" }}, {"task": "visual-question-answering", "id": 2, "dep": [1], "args": {"image": "<GENERATED>-1", "text": "how many objects in the picture?" }}, {"task": "text-to-image", "id": 3, "dep": [0], "args": {"text": "<GENERATED-0>" }}, {"task": "image-to-image", "id": 4, "dep": [-1], "args": {"image": "/e.jpg" }}, {"task": "text-to-video", "id": 5, "dep": [0], "args": {"text": "<GENERATED-0>" }}]'}, {'role': 'user', 'content': 'given a document /images/e.jpeg, answer me what is the student amount? 
And describe the image with your voice'}, {'role': 'assistant', 'content': '{"task": "document-question-answering", "id": 0, "dep": [-1], "args": {"image": "/images/e.jpeg", "text": "what is the student amount?" }}, {"task": "visual-question-answering", "id": 1, "dep": [-1], "args": {"image": "/images/e.jpeg", "text": "what is the student amount?" }}, {"task": "image-to-text", "id": 2, "dep": [-1], "args": {"image": "/images/e.jpg" }}, {"task": "text-to-speech", "id": 3, "dep": [2], "args": {"text": "<GENERATED>-2" }}]'}, {'role': 'user', 'content': 'Given an image /example.jpg, first generate a hed image, then based on the hed image generate a new image where a girl is reading a book'}, {'role': 'assistant', 'content': '[{"task": "openpose-control", "id": 0, "dep": [-1], "args": {"image": "/example.jpg" }}, {"task": "openpose-text-to-image", "id": 1, "dep": [0], "args": {"text": "a girl is reading a book", "image": "<GENERATED>-0" }}]'}, {'role': 'user', 'content': "please show me a video and an image of (based on the text) 'a boy is running' and dub it"}, {'role': 'assistant', 'content': '[{"task": "text-to-video", "id": 0, "dep": [-1], "args": {"text": "a boy is running" }}, {"task": "text-to-speech", "id": 1, "dep": [-1], "args": {"text": "a boy is running" }}, {"task": "text-to-image", "id": 2, "dep": [-1], "args": {"text": "a boy is running" }}]'}, {'role': 'user', 'content': 'please show me a joke and an image of cat'}, {'role': 'assistant', 'content': '[{"task": "conversational", "id": 0, "dep": [-1], "args": {"text": "please show me a joke of cat" }}, {"task": "text-to-image", "id": 1, "dep": [-1], "args": {"text": "a photo of cat" }}]'}, {'role': 'user', 'content': "The chat log [ [{'role': 'user', 'content': 'Please set your OpenAI API key first.'}, {'role': 'assistant', 'content': 'To answer your request, you need to set your OpenAI API key first. 
To do this, you must create an OpenAI account and then find your API key on the 'settings' page of your account and insert it into the required fields. The model used for this task is ChatGPT. There are no generated files of images, audios or videos in the inference results. Is there anything else I can help you with?'}] ] may contain the resources I mentioned. Now I input { For the image at location /images/example_page.jpg please draw a bounding box around each block of text in the image. }. Pay attention to the input and output types of tasks and the dependencies between tasks."}]
2023-04-11 10:47:26,537 - awesome_chat - DEBUG - {"id":"cmpl-74AVFSQM0q1urf67qfyueY86qflFp","object":"text_completion","created":1681228041,"model":"text-davinci-003","choices":[{"text":"\n[{\"task\": \"image-to-text\", \"id\": 0, \"dep\": [-1], \"args\": {\"image\": \"/images/example_page.jpg\" }}, {\"task\": \"text-to-image\", \"id\": 1, \"dep\": [0], \"args\": {\"text\": \"<GENERATED>-0\" }}, {\"task\": \"object-detection\", \"id\": 2, \"dep\": [-1], \"args\": {\"image\": \"/images/example_page.jpg\" }}]","index":0,"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":2052,"completion_tokens":114,"total_tokens":2166}}
2023-04-11 10:47:26,537 - awesome_chat - INFO - [{"task": "image-to-text", "id": 0, "dep": [-1], "args": {"image": "/images/example_page.jpg" }}, {"task": "text-to-image", "id": 1, "dep": [0], "args": {"text": "<GENERATED>-0" }}, {"task": "object-detection", "id": 2, "dep": [-1], "args": {"image": "/images/example_page.jpg" }}]
2023-04-11 10:47:26,537 - awesome_chat - DEBUG - [{'task': 'image-to-text', 'id': 0, 'dep': [-1], 'args': {'image': '/images/example_page.jpg'}}, {'task': 'text-to-image', 'id': 1, 'dep': [0], 'args': {'text': '<GENERATED>-0'}}, {'task': 'object-detection', 'id': 2, 'dep': [-1], 'args': {'image': '/images/example_page.jpg'}}]
2023-04-11 10:47:26,537 - awesome_chat - DEBUG - Run task: 0 - image-to-text
2023-04-11 10:47:26,538 - awesome_chat - DEBUG - Deps: []
2023-04-11 10:47:26,538 - awesome_chat - DEBUG - Run task: 2 - object-detection
2023-04-11 10:47:26,538 - awesome_chat - DEBUG - parsed task: {'task': 'image-to-text', 'id': 0, 'dep': [-1], 'args': {'image': 'public//images/example_page.jpg'}}
2023-04-11 10:47:26,539 - awesome_chat - DEBUG - Deps: []
2023-04-11 10:47:26,539 - awesome_chat - DEBUG - parsed task: {'task': 'object-detection', 'id': 2, 'dep': [-1], 'args': {'image': 'public//images/example_page.jpg'}}
2023-04-11 10:47:26,771 - awesome_chat - DEBUG - avaliable models on image-to-text: {'local': ['nlpconnect/vit-gpt2-image-captioning'], 'huggingface': ['nlpconnect/vit-gpt2-image-captioning', 'Salesforce/blip2-opt-2.7b']}
2023-04-11 10:47:26,773 - awesome_chat - DEBUG - [{'role': 'system', 'content': '#2 Model Selection Stage: Given the user request and the parsed tasks, the AI assistant helps the user to select a suitable model from a list of models to process the user request. The assistant should focus more on the description of the model and find the model that has the most potential to solve requests and tasks. Also, prefer models with local inference endpoints for speed and stability.'}, {'role': 'user', 'content': 'For the image at location /images/example_page.jpg please draw a bounding box around each block of text in the image. '}, {'role': 'assistant', 'content': "{'task': 'image-to-text', 'id': 0, 'dep': [-1], 'args': {'image': 'public//images/example_page.jpg'}}"}, {'role': 'user', 'content': 'Please choose the most suitable model from [{\'id\': \'nlpconnect/vit-gpt2-image-captioning\', \'inference endpoint\': [\'nlpconnect/vit-gpt2-image-captioning\'], \'likes\': 219, \'description\': \'\\n\\n# nlpconnect/vit-gpt2-image-captioning\\n\\nThis is an image captioning model trained by @ydshieh in [\', \'tags\': [\'image-to-text\', \'image-captioning\']}, {\'id\': \'Salesforce/blip2-opt-2.7b\', \'inference endpoint\': [\'nlpconnect/vit-gpt2-image-captioning\', \'Salesforce/blip2-opt-2.7b\'], \'likes\': 25, \'description\': \'\\n\\n# BLIP-2, OPT-2.7b, pre-trained only\\n\\nBLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/f\', \'tags\': [\'vision\', \'image-to-text\', \'image-captioning\', \'visual-question-answering\']}] for the task {\'task\': \'image-to-text\', \'id\': 0, \'dep\': [-1], \'args\': {\'image\': \'public//images/example_page.jpg\'}}. The output must be in a strict JSON format: {"id": "id", "reason": "your detail reasons for the choice"}.'}]
2023-04-11 10:47:26,818 - awesome_chat - DEBUG - avaliable models on object-detection: {'local': ['facebook/detr-resnet-101', 'google/owlvit-base-patch32'], 'huggingface': ['facebook/detr-resnet-50', 'facebook/detr-resnet-101', 'hustvl/yolos-small']}
2023-04-11 10:47:26,819 - awesome_chat - DEBUG - [{'role': 'system', 'content': '#2 Model Selection Stage: Given the user request and the parsed tasks, the AI assistant helps the user to select a suitable model from a list of models to process the user request. The assistant should focus more on the description of the model and find the model that has the most potential to solve requests and tasks. Also, prefer models with local inference endpoints for speed and stability.'}, {'role': 'user', 'content': 'For the image at location /images/example_page.jpg please draw a bounding box around each block of text in the image. '}, {'role': 'assistant', 'content': "{'task': 'object-detection', 'id': 2, 'dep': [-1], 'args': {'image': 'public//images/example_page.jpg'}}"}, {'role': 'user', 'content': 'Please choose the most suitable model from [{\'id\': \'facebook/detr-resnet-50\', \'inference endpoint\': [\'facebook/detr-resnet-50\', \'facebook/detr-resnet-101\', \'hustvl/yolos-small\'], \'likes\': 129, \'description\': \'\\n\\n# DETR (End-to-End Object Detection) model with ResNet-50 backbone\\n\\nDEtection TRansformer (DETR) m\', \'tags\': [\'object-detection\', \'vision\']}, {\'id\': \'facebook/detr-resnet-101\', \'inference endpoint\': [\'facebook/detr-resnet-101\', \'google/owlvit-base-patch32\'], \'likes\': 30, \'description\': \'\\n\\n# DETR (End-to-End Object Detection) model with ResNet-101 backbone\\n\\nDEtection TRansformer (DETR) \', \'tags\': [\'object-detection\', \'vision\']}, {\'id\': \'google/owlvit-base-patch32\', \'inference endpoint\': [\'facebook/detr-resnet-101\', \'google/owlvit-base-patch32\'], \'likes\': 30, \'description\': \'\\n\\n# Model Card: OWL-ViT\\n\\n## Model Details\\n\\nThe OWL-ViT (short for Vision Transformer for Open-World \', \'tags\': [\'vision\', \'object-detection\']}, {\'id\': \'hustvl/yolos-small\', \'inference endpoint\': [\'facebook/detr-resnet-50\', \'facebook/detr-resnet-101\', \'hustvl/yolos-small\'], \'likes\': 14, 
\'description\': \'\\n\\n# YOLOS (small-sized) model\\n\\nYOLOS model fine-tuned on COCO 2017 object detection (118k annotated \', \'tags\': [\'object-detection\', \'vision\']}] for the task {\'task\': \'object-detection\', \'id\': 2, \'dep\': [-1], \'args\': {\'image\': \'public//images/example_page.jpg\'}}. The output must be in a strict JSON format: {"id": "id", "reason": "your detail reasons for the choice"}.'}]
2023-04-11 10:47:28,742 - awesome_chat - DEBUG - {"id":"cmpl-74AVKd7YA7NJXTKJeM5XYrnJxWPjG","object":"text_completion","created":1681228046,"model":"text-davinci-003","choices":[{"text":"\n{\"id\": \"facebook/detr-resnet-50\", \"reason\": \"This model is best suited for the task of object detection as it has a ResNet-50 backbone and is specifically designed for this task. It also has the highest number of likes and is the most popular model for this task\"}","index":0,"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":719,"completion_tokens":65,"total_tokens":784}}
2023-04-11 10:47:28,742 - awesome_chat - DEBUG - chosen model: {"id": "facebook/detr-resnet-50", "reason": "This model is best suited for the task of object detection as it has a ResNet-50 backbone and is specifically designed for this task. It also has the highest number of likes and is the most popular model for this task"}
2023-04-11 10:47:29,122 - awesome_chat - DEBUG - {"id":"cmpl-74AVKphM9UzND6d6pxb7KeKAGOsVv","object":"text_completion","created":1681228046,"model":"text-davinci-003","choices":[{"text":"\n{\"id\": \"nlpconnect/vit-gpt2-image-captioning\", \"reason\": \"This model is specifically designed for image-to-text tasks and has a local inference endpoint for speed and stability\"}","index":0,"logprobs":null,"finish_reason":null}],"usage":{"prompt_tokens":539,"completion_tokens":49,"total_tokens":588}}
2023-04-11 10:47:29,123 - awesome_chat - DEBUG - chosen model: {"id": "nlpconnect/vit-gpt2-image-captioning", "reason": "This model is specifically designed for image-to-text tasks and has a local inference endpoint for speed and stability"}
2023-04-11 10:47:29,137 - awesome_chat - WARNING - Inference error: {'message': 'Expecting value: line 1 column 1 (char 0)'}
2023-04-11 10:47:29,137 - awesome_chat - DEBUG - inference result: {'error': {'message': 'Expecting value: line 1 column 1 (char 0)'}}
2023-04-11 10:47:29,542 - awesome_chat - DEBUG - Run task: 1 - text-to-image
2023-04-11 10:47:29,543 - awesome_chat - DEBUG - Deps: [{"task": {"task": "image-to-text", "id": 0, "dep": [-1], "args": {"image": "public//images/example_page.jpg"}}, "inference result": {"error": {"message": "Expecting value: line 1 column 1 (char 0)"}}, "choose model result": {"id": "nlpconnect/vit-gpt2-image-captioning", "reason": "This model is specifically designed for image-to-text tasks and has a local inference endpoint for speed and stability"}}]
2023-04-11 10:47:29,543 - awesome_chat - DEBUG - Detect the image of dependency task (from args): public//images/example_page.jpg
2023-04-11 10:47:29,543 - awesome_chat - DEBUG - parsed task: {'task': 'text-to-image', 'id': 1, 'dep': [0], 'args': {'text': '<GENERATED>-0'}}
2023-04-11 10:47:29,809 - awesome_chat - DEBUG - avaliable models on text-to-image: {'local': ['runwayml/stable-diffusion-v1-5'], 'huggingface': ['runwayml/stable-diffusion-v1-5', 'hakurei/waifu-diffusion', 'prompthero/openjourney', 'stabilityai/stable-diffusion-2-1']}
2023-04-11 10:47:29,809 - awesome_chat - DEBUG - [{'role': 'system', 'content': '#2 Model Selection Stage: Given the user request and the parsed tasks, the AI assistant helps the user to select a suitable model from a list of models to process the user request. The assistant should focus more on the description of the model and find the model that has the most potential to solve requests and tasks. Also, prefer models with local inference endpoints for speed and stability.'}, {'role': 'user', 'content': 'For the image at location /images/example_page.jpg please draw a bounding box around each block of text in the image. '}, {'role': 'assistant', 'content': "{'task': 'text-to-image', 'id': 1, 'dep': [0], 'args': {'text': '<GENERATED>-0'}}"}, {'role': 'user', 'content': 'Please choose the most suitable model from [{\'id\': \'runwayml/stable-diffusion-v1-5\', \'inference endpoint\': [\'runwayml/stable-diffusion-v1-5\'], \'likes\': 6367, \'description\': \'\\n\\n# Stable Diffusion v1-5 Model Card\\n\\nStable Diffusion is a latent text-to-image diffusion model cap\', \'tags\': [\'stable-diffusion\', \'stable-diffusion-diffusers\', \'text-to-image\']}, {\'id\': \'prompthero/openjourney\', \'inference endpoint\': [\'runwayml/stable-diffusion-v1-5\', \'hakurei/waifu-diffusion\', \'prompthero/openjourney\', \'stabilityai/stable-diffusion-2-1\'], \'likes\': 2060, \'description\': \'\\n# Openjourney is an open source Stable Diffusion fine tuned model on Midjourney images, by [PromptH\', \'tags\': [\'stable-diffusion\', \'text-to-image\']}, {\'id\': \'hakurei/waifu-diffusion\', \'inference endpoint\': [\'runwayml/stable-diffusion-v1-5\', \'hakurei/waifu-diffusion\', \'prompthero/openjourney\', \'stabilityai/stable-diffusion-2-1\'], \'likes\': 1900, \'description\': \'\\n\\n# waifu-diffusion v1.4 - Diffusion for Weebs\\n\\nwaifu-diffusion is a latent text-to-image diffusion \', \'tags\': [\'stable-diffusion\', \'text-to-image\']}, {\'id\': \'stabilityai/stable-diffusion-2-1\', 
\'inference endpoint\': [\'runwayml/stable-diffusion-v1-5\', \'hakurei/waifu-diffusion\', \'prompthero/openjourney\', \'stabilityai/stable-diffusion-2-1\'], \'likes\': 1829, \'description\': \'\\n\\n# Stable Diffusion v2-1 Model Card\\nThis model card focuses on the model associated with the Stable\', \'tags\': [\'stable-diffusion\', \'text-to-image\']}] for the task {\'task\': \'text-to-image\', \'id\': 1, \'dep\': [0], \'args\': {\'text\': \'<GENERATED>-0\'}}. The output must be in a strict JSON format: {"id": "id", "reason": "your detail reasons for the choice"}.'}]
2023-04-11 10:47:30,493 - awesome_chat - DEBUG - inference result: {'generated image with predicted box': '/images/25de.jpg', 'predicted': []}
2023-04-11 10:47:32,081 - awesome_chat - DEBUG - {"id":"cmpl-74AVN0TqSS4BOvD1A5WzqnZ0LIVgN","object":"text_completion","created":1681228049,"model":"text-davinci-003","choices":[{"text":"\n{\"id\": \"runwayml/stable-diffusion-v1-5\", \"reason\": \"This model has the most potential to solve the text-to-image task as it has the highest number of likes and is the most popular model for this task. It also has a local inference endpoint which will provide speed and stability\"}","index":0,"logprobs":null,"finish_reason":"stop"}],"usage":{"prompt_tokens":779,"completion_tokens":70,"total_tokens":849}}
2023-04-11 10:47:32,081 - awesome_chat - DEBUG - chosen model: {"id": "runwayml/stable-diffusion-v1-5", "reason": "This model has the most potential to solve the text-to-image task as it has the highest number of likes and is the most popular model for this task. It also has a local inference endpoint which will provide speed and stability"}
```
Here is the CLI output:
```bash
python run_gradio_demo.py --config config.gradio.yaml
Running on local URL: http://127.0.0.1:7860
To create a public link, set `share=True` in `launch()`.
Expecting value: line 1 column 1 (char 0)
Traceback (most recent call last):
File "/home/matt/anaconda2/envs/jarvis/lib/python3.8/site-packages/requests/models.py", line 971, in json
return complexjson.loads(self.text, **kwargs)
File "/home/matt/anaconda2/envs/jarvis/lib/python3.8/json/__init__.py", line 357, in loads
return _default_decoder.decode(s)
File "/home/matt/anaconda2/envs/jarvis/lib/python3.8/json/decoder.py", line 337, in decode
obj, end = self.raw_decode(s, idx=_w(s, 0).end())
File "/home/matt/anaconda2/envs/jarvis/lib/python3.8/json/decoder.py", line 355, in raw_decode
raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/home/matt/programming/JARVIS/server/awesome_chat.py", line 605, in model_inference
inference_result = local_model_inference(model_id, data, task)
File "/home/matt/programming/JARVIS/server/awesome_chat.py", line 575, in local_model_inference
results = response.json()
File "/home/matt/anaconda2/envs/jarvis/lib/python3.8/site-packages/requests/models.py", line 975, in json
raise RequestsJSONDecodeError(e.msg, e.doc, e.pos)
requests.exceptions.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
``` | open | 2023-04-11T15:51:08Z | 2025-01-23T10:10:46Z | https://github.com/microsoft/JARVIS/issues/132 | [] | themantalope | 1 |
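The traceback ends in `response.json()` raising `JSONDecodeError` because the local model server returned a non-JSON body (likely an empty response or an error page). A hedged sketch of defensive parsing (a hypothetical helper, not JARVIS's actual code) that surfaces the raw body instead of crashing:

```python
# Hypothetical helper: return a structured error (with a preview of the raw
# body) when a response is not valid JSON, instead of letting
# json.decoder.JSONDecodeError propagate.
import json

def parse_json_or_report(text):
    try:
        return json.loads(text)
    except json.JSONDecodeError as err:
        return {"error": {"message": str(err), "raw_body": text[:200]}}
```

Logging the `raw_body` preview would also make it clear whether the model endpoint returned an HTML error page or nothing at all.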
axnsan12/drf-yasg | rest-api | 158 | Make excluded "produces" media types customizable | Hello!
In `utils.py` at https://github.com/axnsan12/drf-yasg/blob/master/src/drf_yasg/utils.py#L278
Why does it exclude HTML media types? Is this an OpenAPI restriction?
```python
def get_produces(renderer_classes):
    """Extract ``produces`` MIME types from a list of renderer classes.

    :param list renderer_classes: renderer classes
    :return: MIME types for ``produces``
    :rtype: list[str]
    """
    media_types = [renderer.media_type for renderer in renderer_classes or []]
    media_types = [encoding for encoding in media_types if 'html' not in encoding]
    return media_types
```
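One possible shape for the requested customization (a sketch only; the parameter name and default below are my assumption, not drf-yasg's API):

```python
# Hypothetical sketch: expose the excluded substrings instead of
# hard-coding 'html' inside the function.
DEFAULT_EXCLUDED_MEDIA_TYPES = ('html',)

def get_produces(renderer_classes, excluded=DEFAULT_EXCLUDED_MEDIA_TYPES):
    media_types = [renderer.media_type for renderer in renderer_classes or []]
    # keep only media types that match none of the excluded substrings
    return [mt for mt in media_types
            if not any(pattern in mt for pattern in excluded)]
```

Passing `excluded=()` would then opt back in to HTML renderers without patching the library.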
Sorry about my poor English. | closed | 2018-07-04T09:29:55Z | 2018-08-08T06:03:15Z | https://github.com/axnsan12/drf-yasg/issues/158 | [] | estin | 4 |
deepfakes/faceswap | deep-learning | 626 | crashed at the beginning of train | ```
File "/opt/lib/python3.6/site-packages/tensorflow/python/framework/errors_impl.py", line 519, in __exit__
c_api.TF_GetCode(self.status.status))
tensorflow.python.framework.errors_impl.FailedPreconditionError: Attempting to use uninitialized value Adam/l
```
It crashed at:
```
File "/faceswap/plugins/train/trainer/_base.py", line 213, in train_one_batch
loss = self.model.predictors[self.side].train_on_batch(*batch)
```
My environment is:
tensorflow-gpu==1.9.0, cuda==9.0, cudnn==7.0.5, Ubuntu 16.04
Does anyone have any idea? Thanks
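A hedged note on the usual cause of this TF 1.x error (an assumption, not verified against faceswap's code): Adam creates extra "slot" variables when the optimizer is built, and if the session's variable initializer ran before the optimizer existed, those slots stay uninitialized and `train_on_batch` raises `FailedPreconditionError`. A sketch of the usual fix:

```python
# Hypothetical sketch (TensorFlow 1.x era): re-run the global initializer
# once the optimizer (and its Adam slot variables) has been created.
def initialize_all_variables(session):
    """Run the global variables initializer in the given session."""
    import tensorflow as tf  # lazy import; assumes TensorFlow 1.x is installed
    session.run(tf.global_variables_initializer())
```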
| closed | 2019-02-26T14:22:40Z | 2019-02-28T10:46:06Z | https://github.com/deepfakes/faceswap/issues/626 | [] | cp0000 | 6 |
csurfer/pyheat | matplotlib | 14 | matplotlib warning given when running against `test_program.py`. | Steps to reproduce:
```bash
git clone https://github.com/csurfer/pyheat.git
cd pyheat
python3 setup.py install --user
pyheat tests/test_program.py
```
Results in the following warning on the line which calls `set_yticklabels`:
```
/home/aaron/.local/lib/python3.6/site-packages/py_heat-0.0.6-py3.6.egg/pyheat/pyheat.py:158: UserWarning: FixedFormatter should only be used together with FixedLocator
```
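(For reference, a minimal sketch of the tick-label pattern matplotlib expects here; this is a hypothetical example, not pyheat's code. Calling `set_yticks` first pins a `FixedLocator`, so the `FixedFormatter` warning goes away:)

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, just for this example
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
labels = ["line 1", "line 2", "line 3"]
# ax.set_yticklabels(labels)        # would warn: FixedFormatter without FixedLocator
ax.set_yticks(range(len(labels)))   # pin the tick positions (a FixedLocator) first
ax.set_yticklabels(labels)          # then attach the labels without the warning
```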
Strangely enough the warning doesn't appear when running the unit tests with:
```bash
python3 setup.py test
``` | closed | 2021-01-08T21:21:32Z | 2021-09-18T20:16:25Z | https://github.com/csurfer/pyheat/issues/14 | [] | AaronRobson | 1 |
Miserlou/Zappa | flask | 1,643 | Temporary S3 Bucket is not deleted |
## Context
In my `zappa_settings.yml`, I set `slim_handler=false` and `bucket_name=None`. After several months, my S3 account hit its bucket limit (200), and it turns out that most of those buckets were created by Zappa and are empty.
## Expected Behavior
With `slim_handler=false` and `bucket_name=None`, no S3 bucket should be created or left behind after a successful deployment.
## Actual Behavior
Zappa creates a temporary S3 bucket and leaves it there, empty. One workaround is to specify the `bucket_name` in `zappa_settings.yml`, but it would be better to automatically delete the temporary bucket from S3.
## Possible Fix
Change this code to delete the bucket with boto3: https://github.com/Miserlou/Zappa/blob/361578322f082f927ae50782bfdf1e9717376515/zappa/core.py#L1011
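A sketch of what that fix could look like (the function name is hypothetical, and `boto3` is imported lazily to keep the sketch self-contained):

```python
def delete_temp_bucket(bucket_name, region=None):
    """Empty and delete Zappa's temporary deployment bucket (sketch only)."""
    import boto3  # lazy import; assumes boto3 is installed, as Zappa requires
    bucket = boto3.resource("s3", region_name=region).Bucket(bucket_name)
    bucket.objects.all().delete()  # S3 requires the bucket to be empty first
    bucket.delete()
```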
## Steps to Reproduce
1. Set `slim_handler=false` and `bucket_name=None` in `zappa_settings.yml`
2. Deploy the Lambda function
3. Check S3 for an empty Zappa bucket
## Your Environment
* Zappa version used: 0.46.1
* Operating System and Python version: linux, python 3.6
* The output of `pip freeze`:
* Link to your project (optional):
* Your `zappa_settings.py`:
| open | 2018-10-11T00:57:29Z | 2019-03-11T14:24:10Z | https://github.com/Miserlou/Zappa/issues/1643 | [] | chiqunz | 1 |
wandb/wandb | tensorflow | 9,601 | [Feature]: And option to syn in Python API | ### Description
Add an option to sync offline runs directly from Python.
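A hedged sketch of what such a Python-level sync could look like (a hypothetical wrapper around the existing CLI; `wandb.sync()` is not a real wandb API today):

```python
# Hypothetical wrapper: reuse the existing `wandb sync` CLI from Python
# by shelling out to it.
import subprocess

def sync_offline_run(run_dir):
    """Sync one offline run directory, mirroring `wandb sync <run_dir>`."""
    subprocess.run(["wandb", "sync", run_dir], check=True)
```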
### Suggested Solution
Just as there is an option to sync via the command line using `wandb sync`, create a Python wrapper such as `wandb.sync()`. | open | 2025-03-19T18:24:50Z | 2025-03-20T09:36:05Z | https://github.com/wandb/wandb/issues/9601 | [
"ty:feature"
] | pfcouto | 1 |
microsoft/nni | tensorflow | 5,133 | mobolienet V2 QAT quantization accuracy is low | I use a QAT quantizer to quantize my model, which is based on MobileNetV2. The result was always NaN at first, so I reduced the learning rate and the NaNs went away. But the model's KPIs (accuracy, recall, precision) are low (a 30% reduction compared to the original model). How can I improve the KPIs when using the QAT quantizer? Here is my code:
```python
import torch  # (import was missing from the original snippet)

from learner_curve import Simple_MLSD_Curve_Learner
import json
from nni.algorithms.compression.pytorch.quantization import QAT_Quantizer
from nni.compression.pytorch.quantization.settings import set_quant_scheme_dtype
from nni.compression.pytorch.quantization_speedup import ModelSpeedupTensorRT
from nni.compression.pytorch.utils import count_flops_params
from nni.algorithms.compression.pytorch.quantization import LsqQuantizer
from models.mbv2_mlsd_large_curve import MobileV2_MLSD_Large_Curve
from kpi import calculate_kpi

configure_list = [{
    'quant_types': ['weight'],
    'quant_bits': {
        'weight': 8,
    },
    'op_types': ['Conv2d']
}]

set_quant_scheme_dtype('weight', 'per_channel_symmetric', 'int')
set_quant_scheme_dtype('output', 'per_tensor_symmetric', 'int')
set_quant_scheme_dtype('input', 'per_tensor_symmetric', 'int')

model = get_origin_model(cfg)
dummy_input = torch.randn(1, 12, 272, 480).to(device)
flops, params, results = count_flops_params(model, dummy_input.to(device))
optimizer = torch.optim.Adam(params=model.parameters(), lr=cfg.train.learning_rate, weight_decay=cfg.train.weight_decay)
print("model1:", model)
quantizer = QAT_Quantizer(model, configure_list, optimizer, dummy_input=dummy_input)
quantizer.compress()
train(cfg, model, device, optimizer)

input_shape = (1, 12, 272, 480)
device = torch.device("cuda")
calibration_config = quantizer.export_model(model_path, calibration_path, onnx_path, input_shape, device)
calculate_kpi(cfg, model)
engine = ModelSpeedupTensorRT(model, input_shape, config=calibration_config, batchsize=1, strict_datatype=True, onnx_path="/home/tzc/2021/code/LLD/mlsd_pytorch/qat_model.onnx")
print('Done')
``` | open | 2022-09-21T02:07:33Z | 2022-10-12T01:50:42Z | https://github.com/microsoft/nni/issues/5133 | [
"model compression",
"support"
] | mairkiss | 1 |
skypilot-org/skypilot | data-science | 4,103 | [Provisioner] Backward compatibility for status refreshing on Lambda New Provisioner | <!-- Describe the bug report / feature request here -->
In #3865, for lambda cloud cluster that launched before this PR, running `sky status --refresh` puts it in the `INIT` state. We should investigate what is happening here.
https://github.com/skypilot-org/skypilot/pull/3865#pullrequestreview-2373491746
```bash
(sky) ➜ skypilot git:(master) sky launch --cloud lambda -c lmd-old --gpus A100 nvidia-smi
Task from command: nvidia-smi
Considered resources (1 node):
------------------------------------------------------------------------------------------------
CLOUD INSTANCE vCPUs Mem(GB) ACCELERATORS REGION/ZONE COST ($) CHOSEN
------------------------------------------------------------------------------------------------
Lambda gpu_1x_a100_sxm4 30 200 A100:1 us-east-1 1.29 ✔
------------------------------------------------------------------------------------------------
Multiple Lambda instances satisfy A100:1. The cheapest Lambda(gpu_1x_a100_sxm4, {'A100': 1}) is considered among:
['gpu_1x_a100_sxm4', 'gpu_1x_a100'].
To list more details, run: sky show-gpus A100
Launching a new cluster 'lmd-old'. Proceed? [Y/n]:
⚙︎ Launching on Lambda us-east-1.
Head VM is up.
✓ Cluster launched: 'lmd-old'. View logs at: ~/sky_logs/sky-2024-10-16-11-59-04-228003/provision.log
⚙︎ Job submitted, ID: 1
├── Waiting for task resources on 1 node.
└── Job started. Streaming logs... (Ctrl-C to exit log streaming; job will not be killed)
(sky-cmd, pid=4130) Wed Oct 16 19:04:24 2024
(sky-cmd, pid=4130) +---------------------------------------------------------------------------------------+
(sky-cmd, pid=4130) | NVIDIA-SMI 535.129.03 Driver Version: 535.129.03 CUDA Version: 12.2 |
(sky-cmd, pid=4130) |-----------------------------------------+----------------------+----------------------+
(sky-cmd, pid=4130) | GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
(sky-cmd, pid=4130) | Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
(sky-cmd, pid=4130) | | | MIG M. |
(sky-cmd, pid=4130) |=========================================+======================+======================|
(sky-cmd, pid=4130) | 0 NVIDIA A100-SXM4-40GB On | 00000000:07:00.0 Off | 0 |
(sky-cmd, pid=4130) | N/A 32C P0 44W / 400W | 4MiB / 40960MiB | 0% Default |
(sky-cmd, pid=4130) | | | Disabled |
(sky-cmd, pid=4130) +-----------------------------------------+----------------------+----------------------+
(sky-cmd, pid=4130)
(sky-cmd, pid=4130) +---------------------------------------------------------------------------------------+
(sky-cmd, pid=4130) | Processes: |
(sky-cmd, pid=4130) | GPU GI CI PID Type Process name GPU Memory |
(sky-cmd, pid=4130) | ID ID Usage |
(sky-cmd, pid=4130) |=======================================================================================|
(sky-cmd, pid=4130) | No running processes found |
(sky-cmd, pid=4130) +---------------------------------------------------------------------------------------+
✓ Job finished (status: SUCCEEDED).
Shared connection to 150.136.118.84 closed.
📋 Useful Commands
Job ID: 1
├── To cancel the job: sky cancel lmd-old 1
├── To stream job logs: sky logs lmd-old 1
└── To view job queue: sky queue lmd-old
Cluster name: lmd-old
├── To log into the head VM: ssh lmd-old
├── To submit a job: sky exec lmd-old yaml_file
├── To stop the cluster: sky stop lmd-old
└── To teardown the cluster: sky down lmd-old
(sky) ➜ skypilot git:(master) gsw feat/oss-lambda-cloud-new-provisioner
Switched to branch 'feat/oss-lambda-cloud-new-provisioner'
(sky) ➜ skypilot git:(feat/oss-lambda-cloud-new-provisioner) sst -r
Clusters
Refreshing status for 2 clusters ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 0% -:--:--instance_id: 25d1811e8e004be7975979a6b3dd4923, status: ClusterStatus.UP
NAME LAUNCHED RESOURCES STATUS AUTOSTOP COMMAND
lmd-old 36 secs ago 1x Lambda(gpu_1x_a100_sxm4, {'A100': 1}) INIT - sky launch --cloud lambda -c...
sky-serve-controller-402b1bba 1 hr ago 1x AWS(m6i.xlarge, disk_size=200, ports=['30001-30020']) STOPPED 10m sky serve up examples/ser...
Managed jobs
No in-progress managed jobs. (See: sky jobs -h)
Services
No live services.
```
<!-- If relevant, fill in versioning info to help us troubleshoot -->
_Version & Commit info:_
* `sky -v`: PLEASE_FILL_IN
* `sky -c`: PLEASE_FILL_IN
| open | 2024-10-17T16:29:44Z | 2024-12-19T23:08:44Z | https://github.com/skypilot-org/skypilot/issues/4103 | [] | cblmemo | 0 |
LAION-AI/Open-Assistant | machine-learning | 3,281 | Are the weights of the RM available? | As the title suggests. Would like to use an off-the-shelf reward model for RLHF training. | closed | 2023-06-02T10:17:01Z | 2023-06-12T07:57:15Z | https://github.com/LAION-AI/Open-Assistant/issues/3281 | [
"ml",
"question"
] | zyzhang1130 | 2 |
JaidedAI/EasyOCR | deep-learning | 778 | Question about the pre-processing and post-processing methods used in EasyOCR | Hi,
Many thanks for releasing this amazing OCR platform! From the framework diagram, there are pre-processing and post-processing operations in this platform.
Could you please let us know which pre-processing and post-processing methods are used in EasyOCR?
Thank you!
| open | 2022-07-08T10:24:05Z | 2022-07-08T10:24:05Z | https://github.com/JaidedAI/EasyOCR/issues/778 | [] | Di-Ma-S21 | 0 |
brightmart/text_classification | tensorflow | 11 | could you give a brief intro about the ensemble | Hi bright,
Nice job on these baselines. I saw you said you did some ensemble work, so could you say a few words about it?
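(For context, a minimal sketch of the hard-voting baseline being discussed, using only the standard library:)

```python
# Majority ("hard") voting: for each sample, take the most common label
# among the per-model predictions.
from collections import Counter

def hard_vote(predictions_per_model):
    """predictions_per_model: one list of predicted labels per model."""
    return [Counter(sample_preds).most_common(1)[0][0]
            for sample_preds in zip(*predictions_per_model)]
```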
I didn't find a good way to do traditional bagging or boosting, and I tried [hard voting](http://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html), but the results were bad. :-( | closed | 2017-08-05T08:41:30Z | 2017-08-05T11:24:35Z | https://github.com/brightmart/text_classification/issues/11 | [] | ghost | 1 |
MagicStack/asyncpg | asyncio | 392 | Possible to INSERT with prepared statements | It seems like INSERTs with prepared statements have been overlooked in asyncpg. The only attributes of a PreparedStatement object listed in the asyncpg docs relate to SELECT queries: `cursor`, `fetch`, `fetchrow`, `fetchval` etc.
Have I missed it somewhere or is this likely to be implemented soon?
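For what it's worth, a prepared statement can already execute an INSERT through `fetch`/`fetchrow`/`fetchval` when the query uses `RETURNING`. A hedged sketch (the DSN and the `users` table are assumptions; this needs a running PostgreSQL):

```python
import asyncio

async def insert_user(name):
    import asyncpg  # lazy import; assumes asyncpg is installed
    conn = await asyncpg.connect(dsn="postgresql://localhost/test")  # hypothetical DSN
    stmt = await conn.prepare("INSERT INTO users(name) VALUES ($1) RETURNING id")
    try:
        return await stmt.fetchval(name)  # executes the INSERT, returns the new id
    finally:
        await conn.close()
```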
| closed | 2018-12-16T06:30:32Z | 2019-12-28T17:52:58Z | https://github.com/MagicStack/asyncpg/issues/392 | [] | Gitborg | 3 |
deezer/spleeter | deep-learning | 343 | [Bug] name your bug |
## Description
Used in both cases: SpleetGUI.
Result:
Spleeter works fine on Windows 7, but produces the following on Windows 10:
## Steps to reproduce
Installed:
python-3.8.2.exe
Miniconda3-latest-Windows-x86_64.exe
then (without errors):
pip install spleeter
conda install numba
## Output
Information on invoking JIT debugging instead of this dialog box
can be found at the end of this message.
************** Exception Text **************
System.Security.SecurityException: Requested registry access is not allowed.
at System.ThrowHelper.ThrowSecurityException(ExceptionResource resource)
at Microsoft.Win32.RegistryKey.OpenSubKey(String name, Boolean writable)
at System.Environment.SetEnvironmentVariable(String variable, String value, EnvironmentVariableTarget target)
at spleetGUI.Form1.addtopath()
at spleetGUI.Form1.InstallFFMPEG()
at spleetGUI.Form1.button1_Click(Object sender, EventArgs e)
at System.Windows.Forms.Control.OnClick(EventArgs e)
at System.Windows.Forms.Button.OnClick(EventArgs e)
at System.Windows.Forms.Button.OnMouseUp(MouseEventArgs mevent)
at System.Windows.Forms.Control.WmMouseUp(Message& m, MouseButtons button, Int32 clicks)
at System.Windows.Forms.Control.WndProc(Message& m)
at System.Windows.Forms.ButtonBase.WndProc(Message& m)
at System.Windows.Forms.Button.WndProc(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.OnMessage(Message& m)
at System.Windows.Forms.Control.ControlNativeWindow.WndProc(Message& m)
at System.Windows.Forms.NativeWindow.Callback(IntPtr hWnd, Int32 msg, IntPtr wparam, IntPtr lparam)
The Zone of the assembly that failed was:
MyComputer
************** Loaded Assemblies **************
mscorlib
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1063.1 built by: NETFXREL3STAGE.
CodeBase: file:///C:/Windows/Microsoft.NET/Framework/v4.0.30319/mscorlib.dll.
spleetGUI
Assembly-Version: 1.0.0.0.
Win32-Version: 1.0.0.0.
CodeBase: file:///C:/OSTRIP/%5BTOOLS%5D/SpleetGUI.v2/SpleetGUI.exe.
System.Windows.Forms
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms/v4.0_4.0.0.0__b77a5c561934e089/System.Windows.Forms.dll.
System
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System/v4.0_4.0.0.0__b77a5c561934e089/System.dll.
System.Drawing
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1068.2 built by: NETFXREL3STAGE.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Drawing/v4.0_4.0.0.0__b03f5f7f11d50a3a/System.Drawing.dll.
Accessibility
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/Accessibility/v4.0_4.0.0.0__b03f5f7f11d50a3a/Accessibility.dll.
mscorlib.resources
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/mscorlib.resources/v4.0_4.0.0.0_de_b77a5c561934e089/mscorlib.resources.dll.
System.Windows.Forms.resources
Assembly-Version: 4.0.0.0.
Win32-Version: 4.6.1038.0 built by: NETFXREL2.
CodeBase: file:///C:/Windows/Microsoft.Net/assembly/GAC_MSIL/System.Windows.Forms.resources/v4.0_4.0.0.0_de_b77a5c561934e089/System.Windows.Forms.resources.dll.
************** JIT Debugging **************
To enable just-in-time (JIT) debugging, the .config file for this
application or computer (machine.config) must have the
jitDebugging value set in the system.windows.forms section.
The application must also be compiled with debugging enabled.
For example:
<configuration>
    <system.windows.forms jitDebugging="true" />
</configuration>
When JIT debugging is enabled, any unhandled exception
will be sent to the JIT debugger registered on the computer
rather than be handled by this dialog box.
## Environment
Firewall: disabled.
Host file: untouched from stock windows 10
| | |
| ----------------- | ------------------------------- |
| OS | Windows 10 |
| Installation type | Conda / pip |
| RAM available | 4 GB |
| Hardware spec | Fujitsu Q702, GPU: Intel HD Graphics 4000, Intel(R) i3-3217U @ 1.80 GHz |
## Additional context
 | closed | 2020-04-25T18:34:31Z | 2020-04-27T07:49:40Z | https://github.com/deezer/spleeter/issues/343 | [
"bug",
"invalid"
] | Ry3yr | 1 |
pydata/xarray | pandas | 9,824 | Some operations broken when using the SciPy engine | ### What happened?
A simple `open->resample.mean` fails if the scipy engine is used and flox is used. See error below.
### What did you expect to happen?
I expected it to work.
### Minimal Complete Verifiable Example
```Python
import xarray as xr
# xr.set_options(use_flox=True) # default so not really needed
ds = xr.tutorial.open_dataset("air_temperature", engine='scipy')
out = ds.air.resample(time="D").mean(keep_attrs=True)
```
### MVCE confirmation
- [X] Minimal example — the example is as focused as reasonably possible to demonstrate the underlying issue in xarray.
- [X] Complete example — the example is self-contained, including all data and the text of any traceback.
- [X] Verifiable example — the example copy & pastes into an IPython prompt or [Binder notebook](https://mybinder.org/v2/gh/pydata/xarray/main?urlpath=lab/tree/doc/examples/blank_template.ipynb), returning the result.
- [X] New issue — a search of GitHub Issues suggests this is not a duplicate.
- [X] Recent environment — the issue occurs with the latest version of xarray and its dependencies.
### Relevant log output
```Python
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
Cell In[11], line 3
1 with xr.set_options(use_flox=True):
2 ds = xr.tutorial.open_dataset("air_temperature", engine='scipy')
----> 3 out = ds.air.resample(time="D").mean(keep_attrs=True)
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/_aggregations.py:8619, in DataArrayResampleAggregations.mean(self, dim, skipna, keep_attrs, **kwargs)
8537 """
8538 Reduce this DataArray's data by applying ``mean`` along some dimension(s).
8539
(...)
8612 * time (time) datetime64[ns] 24B 2001-01-31 2001-04-30 2001-07-31
8613 """
8614 if (
8615 flox_available
8616 and OPTIONS["use_flox"]
8617 and contains_only_chunked_or_numpy(self._obj)
8618 ):
-> 8619 return self._flox_reduce(
8620 func="mean",
8621 dim=dim,
8622 skipna=skipna,
8623 # fill_value=fill_value,
8624 keep_attrs=keep_attrs,
8625 **kwargs,
8626 )
8627 else:
8628 return self.reduce(
8629 duck_array_ops.mean,
8630 dim=dim,
(...)
8633 **kwargs,
8634 )
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/resample.py:58, in Resample._flox_reduce(self, dim, keep_attrs, **kwargs)
52 def _flox_reduce(
53 self,
54 dim: Dims,
55 keep_attrs: bool | None = None,
56 **kwargs,
57 ) -> T_Xarray:
---> 58 result = super()._flox_reduce(dim=dim, keep_attrs=keep_attrs, **kwargs)
59 result = result.rename({RESAMPLE_DIM: self._group_dim})
60 return result
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/groupby.py:1094, in GroupBy._flox_reduce(self, dim, keep_attrs, **kwargs)
1089 expected_groups = tuple(
1090 pd.RangeIndex(len(grouper)) for grouper in self.groupers
1091 )
1093 codes = tuple(g.codes for g in self.groupers)
-> 1094 result = xarray_reduce(
1095 obj.drop_vars(non_numeric.keys()),
1096 *codes,
1097 dim=parsed_dim,
1098 expected_groups=expected_groups,
1099 isbin=False,
1100 keep_attrs=keep_attrs,
1101 **kwargs,
1102 )
1104 # we did end up reducing over dimension(s) that are
1105 # in the grouped variable
1106 group_dims = set(grouper.group.dims)
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/flox/xarray.py:410, in xarray_reduce(obj, func, expected_groups, isbin, sort, dim, fill_value, dtype, method, engine, keep_attrs, skipna, min_count, reindex, *by, **finalize_kwargs)
407 output_sizes = group_sizes
408 output_sizes.update({dim.name: dim.size for dim in newdims if dim.size != 0})
--> 410 actual = xr.apply_ufunc(
411 wrapper,
412 ds_broad.drop_vars(tuple(missing_dim)).transpose(..., *grouper_dims),
413 *by_da,
414 input_core_dims=input_core_dims,
415 # for xarray's test_groupby_duplicate_coordinate_labels
416 exclude_dims=set(dim_tuple),
417 output_core_dims=[output_core_dims],
418 dask="allowed",
419 dask_gufunc_kwargs=dict(
420 output_sizes=output_sizes,
421 output_dtypes=[dtype] if dtype is not None else None,
422 ),
423 keep_attrs=keep_attrs,
424 kwargs={
425 "func": func,
426 "axis": axis,
427 "sort": sort,
428 "fill_value": fill_value,
429 "method": method,
430 "min_count": min_count,
431 "skipna": skipna,
432 "engine": engine,
433 "reindex": reindex,
434 "expected_groups": tuple(expected_groups_valid_list),
435 "isbin": isbins,
436 "finalize_kwargs": finalize_kwargs,
437 "dtype": dtype,
438 "core_dims": input_core_dims,
439 },
440 )
442 # restore non-dim coord variables without the core dimension
443 # TODO: shouldn't apply_ufunc handle this?
444 for var in set(ds_broad._coord_names) - set(ds_broad._indexes) - set(ds_broad.dims):
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/computation.py:1252, in apply_ufunc(func, input_core_dims, output_core_dims, exclude_dims, vectorize, join, dataset_join, dataset_fill_value, keep_attrs, kwargs, dask, output_dtypes, output_sizes, meta, dask_gufunc_kwargs, on_missing_core_dim, *args)
1250 # feed datasets apply_variable_ufunc through apply_dataset_vfunc
1251 elif any(is_dict_like(a) for a in args):
-> 1252 return apply_dataset_vfunc(
1253 variables_vfunc,
1254 *args,
1255 signature=signature,
1256 join=join,
1257 exclude_dims=exclude_dims,
1258 dataset_join=dataset_join,
1259 fill_value=dataset_fill_value,
1260 keep_attrs=keep_attrs,
1261 on_missing_core_dim=on_missing_core_dim,
1262 )
1263 # feed DataArray apply_variable_ufunc through apply_dataarray_vfunc
1264 elif any(isinstance(a, DataArray) for a in args):
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/computation.py:523, in apply_dataset_vfunc(func, signature, join, dataset_join, fill_value, exclude_dims, keep_attrs, on_missing_core_dim, *args)
518 list_of_coords, list_of_indexes = build_output_coords_and_indexes(
519 args, signature, exclude_dims, combine_attrs=keep_attrs
520 )
521 args = tuple(getattr(arg, "data_vars", arg) for arg in args)
--> 523 result_vars = apply_dict_of_variables_vfunc(
524 func,
525 *args,
526 signature=signature,
527 join=dataset_join,
528 fill_value=fill_value,
529 on_missing_core_dim=on_missing_core_dim,
530 )
532 out: Dataset | tuple[Dataset, ...]
533 if signature.num_outputs > 1:
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/computation.py:447, in apply_dict_of_variables_vfunc(func, signature, join, fill_value, on_missing_core_dim, *args)
445 core_dim_present = _check_core_dims(signature, variable_args, name)
446 if core_dim_present is True:
--> 447 result_vars[name] = func(*variable_args)
448 else:
449 if on_missing_core_dim == "raise":
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/computation.py:729, in apply_variable_ufunc(func, signature, exclude_dims, dask, output_dtypes, vectorize, keep_attrs, dask_gufunc_kwargs, *args)
722 broadcast_dims = tuple(
723 dim for dim in dim_sizes if dim not in signature.all_core_dims
724 )
725 output_dims = [broadcast_dims + out for out in signature.output_core_dims]
727 input_data = [
728 (
--> 729 broadcast_compat_data(arg, broadcast_dims, core_dims)
730 if isinstance(arg, Variable)
731 else arg
732 )
733 for arg, core_dims in zip(args, signature.input_core_dims, strict=True)
734 ]
736 if any(is_chunked_array(array) for array in input_data):
737 if dask == "forbidden":
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/computation.py:650, in broadcast_compat_data(variable, broadcast_dims, core_dims)
645 def broadcast_compat_data(
646 variable: Variable,
647 broadcast_dims: tuple[Hashable, ...],
648 core_dims: tuple[Hashable, ...],
649 ) -> Any:
--> 650 data = variable.data
652 old_dims = variable.dims
653 new_dims = broadcast_dims + core_dims
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/variable.py:474, in Variable.data(self)
472 return self._data
473 elif isinstance(self._data, indexing.ExplicitlyIndexed):
--> 474 return self._data.get_duck_array()
475 else:
476 return self.values
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/indexing.py:729, in LazilyVectorizedIndexedArray.get_duck_array(self)
727 def get_duck_array(self):
728 if isinstance(self.array, ExplicitlyIndexedNDArrayMixin):
--> 729 array = apply_indexer(self.array, self.key)
730 else:
731 # If the array is not an ExplicitlyIndexedNDArrayMixin,
732 # it may wrap a BackendArray so use its __getitem__
733 array = self.array[self.key]
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/indexing.py:1029, in apply_indexer(indexable, indexer)
1027 """Apply an indexer to an indexable object."""
1028 if isinstance(indexer, VectorizedIndexer):
-> 1029 return indexable.vindex[indexer]
1030 elif isinstance(indexer, OuterIndexer):
1031 return indexable.oindex[indexer]
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/core/indexing.py:369, in IndexCallable.__getitem__(self, key)
368 def __getitem__(self, key: Any) -> Any:
--> 369 return self.getter(key)
File ~/miniforge3/envs/xclim-dev/lib/python3.12/site-packages/xarray/coding/variables.py:75, in _ElementwiseFunctionArray._vindex_get(self, key)
74 def _vindex_get(self, key):
---> 75 return type(self)(self.array.vindex[key], self.func, self.dtype)
AttributeError: 'ScipyArrayWrapper' object has no attribute 'vindex'
```
### Anything else we need to know?
Opening with `netcdf4` works. Disabling `flox` works. Downgrading xarray to 2024.10 works.
And of course, loading the dataset before resampling works too.
I have tested with flox 0.9.12 and 0.9.14, and only xarray's version seems to have an effect, so I thought here was the best place to open the issue. But I can reopen it on flox if we judge that to be correct!
### Environment
<details>
INSTALLED VERSIONS
------------------
commit: None
python: 3.12.7 | packaged by conda-forge | (main, Oct 4 2024, 16:05:46) [GCC 13.3.0]
python-bits: 64
OS: Linux
OS-release: 6.11.9-300.fc41.x86_64
machine: x86_64
processor:
byteorder: little
LC_ALL: None
LANG: fr_CA.UTF-8
LOCALE: ('fr_CA', 'UTF-8')
libhdf5: 1.14.3
libnetcdf: 4.9.2
xarray: 2024.10.1.dev3+g0c41bb4c
pandas: 2.2.3
numpy: 2.0.2
scipy: 1.14.1
netCDF4: 1.7.1
pydap: None
h5netcdf: 1.4.0
h5py: 3.12.1
zarr: None
cftime: 1.6.4
nc_time_axis: 1.4.1
iris: None
bottleneck: 1.4.2
dask: 2024.10.0
distributed: 2024.10.0
matplotlib: 3.9.2
cartopy: None
seaborn: None
numbagg: None
fsspec: 2024.10.0
cupy: None
pint: 0.24.3
sparse: None
flox: 0.9.12
numpy_groupies: 0.11.2
setuptools: 75.1.0
pip: 24.2
conda: None
pytest: 8.3.3
mypy: 1.13.0
IPython: 8.29.0
sphinx: 8.1.3
</details>
| closed | 2024-11-25T17:20:44Z | 2025-01-31T08:41:38Z | https://github.com/pydata/xarray/issues/9824 | [
"bug"
] | aulemahal | 1 |
MilesCranmer/PySR | scikit-learn | 38 | Predefined function form | Hi Miles,
In the regression process, can we pre-define a function form first, and let the regression start from this function?
For example, if our objective function is x0**2 + 2.0*cos(x3) - 2.0, as in example.py, that is simple and pysr can get the result quickly.
However, in some research settings the objective function may be more complicated, such as x0**2+(x1*x0)*exp(sin(x2)*x1). In that case pysr may take longer to optimize and may fall into a locally optimal solution.
So I wonder if it is possible to predefine a function form in pysr, for example x0*exp(sin(x2)), and let the regression start from this function to speed up optimization. It would be like applying a boundary condition to the solution equation.
thanks.
| open | 2021-03-08T14:08:02Z | 2021-07-20T20:50:57Z | https://github.com/MilesCranmer/PySR/issues/38 | [
"question"
] | nice-mon | 2 |
graphql-python/gql | graphql | 475 | Error when using gqc client under windows | When running this program on Windows:
https://github.com/ms140569/omnivore-backup
I'm getting this error:
```
python backup.py >outpout.csv

Traceback (most recent call last):
  File "<USER>\prj\omnivore-backup\backup.py", line 186, in <module>
    sys.exit(main())
    ^^^^^^
  File "<USER>\prj\omnivore-backup\backup.py", line 67, in main
    backup.run()
  File "<USER>\prj\omnivore-backup\backup.py", line 121, in run
    self._fetch()
  File "<USER>\prj\omnivore-backup\backup.py", line 138, in _fetch
    result = self.client.execute(self.query,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "<USER>\AppData\Roaming\Python\Python312\site-packages\gql\client.py", line 469, in execute
    data = loop.run_until_complete(
           ^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Program Files\Python312\Lib\asyncio\base_events.py", line 685, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "<USER>\AppData\Roaming\Python\Python312\site-packages\gql\client.py", line 367, in execute_async
    return await session.execute(
           ^^^^^^^^^^^^^^^^^^^^^^
  File "<USER>\AppData\Roaming\Python\Python312\site-packages\gql\client.py", line 1639, in execute
    raise TransportQueryError(
gql.transport.exceptions.TransportQueryError: {'message': 'Unexpected server error'}
```
Yes, the error message clearly points to a server error, but what made me create this issue is the fact that it only happens on Windows.
There are **no problems at all** on Linux and Mac. Because of this, I only noticed this behavior quite late ...
What could be the reason for this asymmetry?
OS: Win11, Version 10.0.22621 Build 22621
Python: 3.12.2
Gql: 3.5.0
Graphql-core: 3.2.3
| closed | 2024-03-23T15:13:49Z | 2024-03-23T22:49:21Z | https://github.com/graphql-python/gql/issues/475 | [
"type: question or discussion"
] | ms140569 | 6 |
jupyter/nbviewer | jupyter | 279 | Convert ipython notebook to pdf and/or print notebook | Hi, can someone please instruct me on how to convert my notebook.ipynb to notebook.pdf and/or print my notebook? Thank you.
| open | 2014-05-08T00:15:27Z | 2020-07-06T11:41:17Z | https://github.com/jupyter/nbviewer/issues/279 | [
"type:Enhancement",
"tag:Format"
] | tmstout | 6 |
iperov/DeepFaceLab | machine-learning | 482 | Error during extraction, occurs repeatedly with certain data_dst.mp4 extracted frames. | Extraction finishes 1st pass, during 2nd pass an error occurs:
```
F:\DF\DFL\_internal\DeepFaceLab\mathlib\__init__.py:25: RuntimeWarning: overflow encountered in int_scalars
  return 0.5*np.abs(np.dot(x,np.roll(y,1))-np.dot(y,np.roll(x,1)))
```
The error occurs with some datasets, and when it happens with a given one it is reproducible (with this one I tried 4 times and got the same error all 4 times, around the 3000th frame out of 7000, always on the 2nd pass).
After that, the 3rd pass runs, but it only extracts faces from the frames that were processed.
Frames extracted as jpg.
Other relevant information
Windows 10 64bit, newest DFL version along with updates from github, GTX 1060, 4790k 16Gb of RAM. | closed | 2019-11-07T18:33:05Z | 2020-02-27T02:41:50Z | https://github.com/iperov/DeepFaceLab/issues/482 | [] | ThomasBardem | 2 |
Yorko/mlcourse.ai | matplotlib | 382 | Topic 3. Decision tree regressor, MSE | In the DecisionTreeRegressor example, the MSE calculation in the plot title is wrong:
`plt.title("Decision tree regressor, MSE = %.2f" % np.sum((y_test - reg_tree_pred) ** 2))`
It still needs to be divided by the number of observations; I suggest fixing it like this:
`plt.title("Decision tree regressor, MSE = %.4f" % (np.sum((y_test - reg_tree_pred) ** 2) / n_test))`
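To see the difference on hypothetical numbers (plain Python for brevity; the values below are made up purely for illustration):

```python
y_test = [3.0, -0.5, 2.0, 7.0]  # hypothetical true values
pred = [2.5, 0.0, 2.0, 8.0]     # hypothetical predictions
n_test = len(y_test)

sse = sum((t - p) ** 2 for t, p in zip(y_test, pred))  # what the current title prints
mse = sse / n_test                                     # the actual MSE
print(sse, mse)  # 1.5 0.375
```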
File:
https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_english/topic03_decision_trees_kNN/topic3_decision_trees_kNN.ipynb
And similarly in the Russian version:
https://github.com/Yorko/mlcourse.ai/blob/master/jupyter_russian/topic03_decision_trees_knn/topic3_trees_knn.ipynb | closed | 2018-10-17T08:08:32Z | 2018-10-17T21:34:04Z | https://github.com/Yorko/mlcourse.ai/issues/382 | [
"minor_fix"
] | lalimpiev | 1 |
recommenders-team/recommenders | data-science | 1,918 | [FEATURE] Find a way to run the tests for external contributors after approval from core devs | ### Description
Currently, external contributors can open PRs, but after a core dev approves and triggers the test run, there is an error:
```
Run azure/login@v1
Error: Az CLI Login failed. Please check the credentials and make sure az is installed on the runner. For more information refer https://aka.ms/create-secrets-for-GitHub-workflows
```
See example #1916
### Expected behavior with the suggested feature
### Other Comments
Related to https://github.com/microsoft/recommenders/pull/1840 | closed | 2023-04-11T10:47:26Z | 2023-04-11T10:49:12Z | https://github.com/recommenders-team/recommenders/issues/1918 | [
"enhancement"
] | miguelgfierro | 1 |
HumanSignal/labelImg | deep-learning | 160 | size is 0 in xml annotations. |
- **OS:** Windows 7
- **PyQt version:** 5
For some images, the XML annotation file created with labelImg has a problem: the width and height in the size node are 0.
My guess is that the image may not actually be a JPG file.
| closed | 2017-09-11T09:20:15Z | 2021-06-06T14:50:43Z | https://github.com/HumanSignal/labelImg/issues/160 | [
"bug"
] | makefile | 1 |
davidsandberg/facenet | tensorflow | 750 | evaluate function in training step will raise error: "float division by zero" | Using new pairs of pictures as the evaluation dataset raises the error "float division by zero" in the calculate_val_far function (facenet.py).
The code of the function `calculate_val_far` (line 508 in facenet.py) is:
```python
def calculate_val_far(threshold, dist, actual_issame):
    predict_issame = np.less(dist, threshold)
    true_accept = np.sum(np.logical_and(predict_issame, actual_issame))
    false_accept = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame)))
    n_same = np.sum(actual_issame)
    n_diff = np.sum(np.logical_not(actual_issame))
    val = float(true_accept) / float(n_same)
    far = float(false_accept) / float(n_diff)
    return val, far
```
BUT!
When all samples in one folder are positive, `actual_issame` will be all `True`, which makes `n_diff` zero! The same happens to `n_same` when all samples are negative.
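A minimal demonstration of this failure mode (illustrative values; assumes numpy is available):

```python
import numpy as np

# a folder where every pair is a positive pair
actual_issame = np.array([True, True, True])
n_diff = np.sum(np.logical_not(actual_issame))  # 0, so the denominator vanishes

try:
    far = float(1) / float(n_diff)  # mirrors float(false_accept) / float(n_diff)
except ZeroDivisionError:
    far = None  # this is the "float division by zero" from the title
```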
So, it could be changed to:
```python
def calculate_val_far(threshold, dist, actual_issame):
    predict_issame = np.less(dist, threshold)
    true_accept = np.sum(np.logical_and(predict_issame, actual_issame))
    false_accept = np.sum(np.logical_and(predict_issame, np.logical_not(actual_issame)))
    n_same = np.sum(actual_issame)
    n_diff = np.sum(np.logical_not(actual_issame))
    val = float(true_accept) / float(n_same) if float(n_same) > 0 else 1.0
    far = float(false_accept) / float(n_diff) if float(n_diff) > 0 else 0.0
    return val, far
```
| open | 2018-05-17T07:18:03Z | 2021-06-12T09:39:19Z | https://github.com/davidsandberg/facenet/issues/750 | [] | PartYoga | 3 |
holoviz/panel | matplotlib | 7,281 | Tabulator: Separate aggregators for different columns blocked by type hint | <details>
<summary>Software Version Info</summary>
```plaintext
panel 1.4.5
```
</details>
#### Description of expected behavior and the observed behavior
- 'If separate aggregators for different columns are required the dictionary may be nested as {index_name: {column_name: aggregator}}'
- Aggregators parameter in class DataTabulator accepts only Dict(String, String)
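To spell out the two dictionary shapes involved (plain Python; the names and aggregator values below are placeholders):

```python
# flat shape, which the Dict(String, String) type hint accepts
flat = {"index_name": "mean"}

# nested shape described in the docs for per-column aggregators
nested = {"index_name": {"column_a": "mean", "column_b": "sum"}}

# the nested values are dicts rather than strings, which is exactly
# what the Dict(String, String) validation rejects
flat_ok = all(isinstance(v, str) for v in flat.values())      # True
nested_ok = all(isinstance(v, str) for v in nested.values())  # False
```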
#### Complete, minimal, self-contained example code that reproduces the issue
```python
pn.widgets.Tabulator(
    value = value,
    hierarchical = True,
    aggregators = {index_name: {column_name: aggregator}},
)
```
#### Stack traceback and/or browser JavaScript console output
```python
ValueError: failed to validate DataTabulator(id=id, ...).aggregators: expected a dict of type Dict(String, String), got a dict with invalid values for keys: index_name
``` | closed | 2024-09-15T09:12:49Z | 2024-12-02T13:17:57Z | https://github.com/holoviz/panel/issues/7281 | [
"component: tabulator"
] | AxZolotl | 2 |
PaddlePaddle/ERNIE | nlp | 882 | Running evaluate directly fails with a "file path not found" error |
python 3.6.5
paddlepaddle 2.3.2
paddlepaddle-gpu 2.1.2.post101
I ran prediction following the instructions at:
https://github.com/PaddlePaddle/ERNIE/tree/ernie-kit-open-v1.0/applications/tasks/text_generation
python run_infer.py --param_path ./examples/cls_ernie_gen_infilling_ch_infer.json
It reported this error:
```
  File "../../../erniekit/controller/inference.py", line 38, in __init__
    self.parser_input_keys()
  File "../../../erniekit/controller/inference.py", line 106, in parser_input_keys
    param_dict = params.from_file(data_params_path)
  File "../../../erniekit/utils/params.py", line 50, in from_file
    json_file = json.loads(evaluate_file(filename), strict=False)
  File "../../../erniekit/utils/params.py", line 44, in evaluate_file
    with open(filename, "r") as evaluation_file:
FileNotFoundError: [Errno 2] No such file or directory: 'output/ernie_gen_ch/save_inference_model/inference_step_39421/infer_data_params.json'
```
| closed | 2023-01-12T11:08:37Z | 2023-04-02T05:23:14Z | https://github.com/PaddlePaddle/ERNIE/issues/882 | [
"wontfix"
] | dataCoderX10 | 1 |
dot-agent/nextpy | streamlit | 77 | Migrate Nextpy to Pydantic v2 for Enhanced Performance and Compatibility | It's time to upgrade Nextpy to Pydantic v2. This migration is crucial to leverage the latest performance improvements and ensure compatibility with other libraries that are also moving to Pydantic v2.
### Expected Benefits
- **Performance Improvements**: Pydantic v2 comes with significant enhancements in performance, which can positively impact the overall efficiency of Nextpy.
- **Better Compatibility**: Keeping up with the latest version ensures that Nextpy remains compatible with other tools and libraries in the ecosystem that rely on Pydantic.
- **Access to New Features**: Pydantic v2 introduces new features and improvements, which can be beneficial for future development and feature enhancements in Nextpy.
### Potential Challenges & Blockers
- **Dependencies on Other Libraries**: Some dependencies like `sqlmodel` might have compatibility issues that need to be addressed.
- **Internal API Changes**: Pydantic v2 has made changes to some of its internal APIs (e.g., `ModelField` no longer exists). We need to find suitable alternatives or workarounds for these changes.
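As one concrete illustration of that change (a sketch assuming pydantic v2 is installed; `Point` is a made-up example model): field metadata that v1 exposed through `ModelField` objects in `__fields__` now lives in `FieldInfo` objects in `model_fields`:

```python
from pydantic import BaseModel

class Point(BaseModel):
    x: int
    y: int = 0

# v1-era code introspected ModelField objects via Point.__fields__;
# in v2 the equivalent metadata is exposed as FieldInfo via model_fields
fields = Point.model_fields
required = sorted(name for name, info in fields.items() if info.is_required())
```

Code in Nextpy that touched `ModelField` attributes directly would need a similar mapping onto the `FieldInfo` API.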
### Call for Contributions
We invite contributors to join in on this upgrade process. Whether you have experience with Pydantic internals or are new to it, your input and help would be valuable.
- If you have experience with Pydantic v2 or its internals, your guidance can help overcome specific challenges.
- For those who are new, this could be a great learning opportunity and a way to contribute significantly to the Nextpy project.
### Progress Tracking
- [ ] Assess the impact of migration on existing codebase
- [ ] Identify and resolve dependencies issues with `sqlmodel`
- [ ] Update the Nextpy codebase to adapt to Pydantic v2 API changes
- [ ] Thorough testing to ensure stability post-migration
- [ ] Update documentation to reflect changes
### Collaboration and Updates
- For ongoing discussions, please refer to this thread.
- Contributors working on related tasks are encouraged to share updates and findings here.
- Any significant breakthroughs or challenges can be discussed in follow-up comments.
### Conclusion
Migrating to Pydantic v2 is an important step for the future of Nextpy. It ensures that our framework stays up-to-date with the latest advancements and continues to integrate smoothly within the broader Python ecosystem.
| open | 2023-12-13T14:35:20Z | 2023-12-13T14:36:01Z | https://github.com/dot-agent/nextpy/issues/77 | [
"enhancement",
"help wanted"
] | anubrag | 0 |
aimhubio/aim | tensorflow | 2,795 | Enable real-time data observability through live updates 🔥 | ## 🚀 Feature
Real-time data observability through live updates.
### Motivation
Query data caching in the UI makes it impossible to monitor real-time data changes without a page reload.
So enabling live update (auto-refresh) will help resolve this issue.
### Pitch
Add auto-refresh logic to Board page, which will use all registered queries within the Board to re-run them with some time interval.
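In pseudocode-ish Python, the pitch amounts to something like the sketch below (purely illustrative; `Board`, `refresh_once`, and `live_update` are made-up names, not Aim's actual API):

```python
import time

class Board:
    def __init__(self, queries, interval=5.0):
        self.queries = queries    # name -> registered query callable
        self.interval = interval  # refresh period in seconds
        self.results = {}

    def refresh_once(self):
        # re-run every registered query and replace the cached result
        for name, query in self.queries.items():
            self.results[name] = query()

    def live_update(self, ticks):
        # a real implementation would loop until the board is closed
        for _ in range(ticks):
            self.refresh_once()
            time.sleep(self.interval)
```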
| open | 2023-05-30T13:42:06Z | 2023-05-31T19:04:17Z | https://github.com/aimhubio/aim/issues/2795 | [
"type / enhancement",
"phase / ready-to-go",
"area / Web-UI"
] | roubkar | 0 |
allenai/allennlp | nlp | 5,067 | Training state of last epoch not saved due to early stopping | `trainer.py` line 1006 will break the loop, and `dump_metrics` on line 1030 & `save_checkpoint` on line 1043 are skipped.
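In miniature, the control flow looks like this (an illustrative sketch, not the actual trainer code); the real snippet from `trainer.py` is quoted below:

```python
saved_epochs = []
for epoch in range(5):
    ran_out_of_patience = (epoch == 2)  # patience runs out during epoch 2
    if ran_out_of_patience:
        break  # exits before the "save" step below, so epoch 2 is never recorded
    saved_epochs.append(epoch)  # stands in for dump_metrics / save_checkpoint
print(saved_epochs)  # [0, 1]
```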
```python
this_epoch_val_metric = self._metric_tracker.combined_score(val_metrics)
self._metric_tracker.add_metrics(val_metrics)
if self._metric_tracker.should_stop_early():
    logger.info("Ran out of patience. Stopping training.")
    break
``` | closed | 2021-03-25T11:50:53Z | 2021-04-23T00:09:32Z | https://github.com/allenai/allennlp/issues/5067 | [
"bug"
] | alanwang93 | 2 |
xorbitsai/xorbits | numpy | 536 | ENH: DataFrameNunique has performance issue |
I am testing on a dataframe with 3 columns and approximately 400 million rows. The first column of the data contains 85,642,283 distinct values. The performance of xorbits is significantly slower than pandas.
On a 256 GB AWS EC2 instance, pandas took over 8 minutes to complete the calculation (including reading the CSV data), while xorbits took over 10 minutes.
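For context, a shuffle-based distinct count hash-partitions the values so that every duplicate of a value lands in the same partition; the per-partition distinct sets are then disjoint and their sizes simply add up. A toy illustration (not xorbits internals):

```python
def shuffled_nunique(values, n_partitions=4):
    # "map" side: route each value to a partition by hash, so duplicates
    # of the same value always meet in the same partition
    partitions = [set() for _ in range(n_partitions)]
    for v in values:
        partitions[hash(v) % n_partitions].add(v)
    # "reduce" side: the partitions hold disjoint value sets
    return sum(len(p) for p in partitions)

print(shuffled_nunique([1, 2, 2, 3, 1, 4]))  # 4
```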
We should introduce shuffle in nunique op for this case. | closed | 2023-06-19T09:58:39Z | 2023-07-07T13:07:12Z | https://github.com/xorbitsai/xorbits/issues/536 | [
"enhancement"
] | ChengjieLi28 | 0 |
microsoft/nni | data-science | 4,981 | Automatic Operator Conversion Enhancement | **What would you like to be added**:
automatic operator conversion in compression.pytorch.speedup
**Why is this needed**:
nni needs to call these functions to understand the model.
problems when doing it manually:
1. The arguments can only be fetched as an argument list.
2. The functions make heavy use of the star (*) syntax (keyword-only arguments, PEP 3102), taking both positional and keyword-only arguments, but the argument list alone cannot distinguish positional arguments from keyword-only arguments.
3. The functions are overloaded, and several overloads of the same function may take the same number of parameters, so the overloads are difficult to tell apart by count alone.
4. Because they are built-ins, inspect.getfullargspec and the other methods in the inspect module cannot be used to get reflection information.
5. There are more than 2000 functions, including the overloads, which is too many to handle by manual adaptation.
**Without this feature, how does current nni work**:
manual adaptation and conversion
**Components that may involve changes**:
only jit_translate.py in common/compression/pytorch/speedup/
**Brief description of your proposal if any**:
1. Automatic conversion
   + The JIT node carries schema information from which positional and keyword-only arguments can be parsed.
   + We can then automatically wrap the arguments, keyword arguments, and the function into an adapted function.
   + Tested the automatic conversion of torch.sum, torch.unsqueeze, and torch.flatten successfully.
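To make the schema idea concrete, here is a rough sketch of splitting a TorchScript-style argument list at the bare `*` marker (illustration only, not the actual jit_translate.py code; the example schema string is made up in the documented style, and a real parser needs more care):

```python
def split_schema_args(schema):
    # take the text between the opening "(" and the ") ->" of the return type
    args = schema[schema.index("(") + 1 : schema.rindex(") ->")]
    parts = [p.strip() for p in args.split(",")]
    # everything after a bare "*" is keyword-only (PEP 3102 style)
    if "*" in parts:
        star = parts.index("*")
        return parts[:star], parts[star + 1 :]
    return parts, []

schema = "aten::sum(Tensor self, int[1] dim, bool keepdim=False, *, ScalarType? dtype=None) -> Tensor"
positional, keyword_only = split_schema_args(schema)
print(positional)    # ['Tensor self', 'int[1] dim', 'bool keepdim=False']
print(keyword_only)  # ['ScalarType? dtype=None']
```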
2. Unresolved issues
+ Check schema syntax in multiple versions of pytorch and whether the syntax is stable.
+ The schema syntax is different from python's or c++'s.
     + I didn't find the syntax documented in the pytorch documentation.
     + When pytorch compiles, it dynamically generates schema information from C++ functions.
+ For all the given schemas, see if they can correspond to the compiled pytorch functions.
+ For all the given schemas, try to parse one by one, and count the number that cannot be parsed.
| open | 2022-07-04T05:59:32Z | 2022-10-08T08:24:33Z | https://github.com/microsoft/nni/issues/4981 | [
"nnidev"
] | Louis-J | 1 |
milesmcc/shynet | django | 77 | Monthly Reports | Hello community 😄
First of all, @milesmcc and all the contributors, really thank you for this amazing work! After months searching for the perfect solution, I think I hopefully found it.
To be 100% perfect, I'm just missing a monthly report. Basically, I would love to send, every month, a report of visitors (and other similar metrics) of a certain website to certain email addresses. This could be done with each one's own SMTP, of course.
For example: [https://count.ly/plugins/email-reports](https://count.ly/plugins/email-reports). If you scroll down you can see the picture of the admin page and the email.
_"This plugin lets you setup daily or weekly email reports to be delivered to configured email addresses. You can setup your reports to have data from multiple applications and you can choose to receive analytics, revenue, push notifications and crash analytics related metrics."_
This would be awesome for agencies, freelancers, and others that want to send a weekly/monthly report of a client's website.
Once again, really thank you for your amazing work! | open | 2020-08-25T23:05:36Z | 2020-08-26T14:19:11Z | https://github.com/milesmcc/shynet/issues/77 | [
"enhancement"
] | Tragio | 3 |
marcomusy/vedo | numpy | 449 | vedo marching cubes algorithm | Hey thanks for the wonderful library. I am also looking into vtk but the documentation and the examples seem really opaque.
I was wondering if the marching cubes algorithm in vedo is the same as the one in vtk. Is it the discreteMarchingCubes algorithm, or does volume.isosurface follow some other algorithm?
Also, in terms of functionality, can I do almost everything that vtk offers in vedo as well, or does vtk have better implementations? I ask this because the documentation says vedo is based entirely on vtk and numpy.
| closed | 2021-09-03T13:49:28Z | 2023-03-09T08:48:12Z | https://github.com/marcomusy/vedo/issues/449 | [] | cakeinspace | 3 |
vitalik/django-ninja | pydantic | 1,069 | After upgrading to 1.1.0 from 0.22.2, /api/docs page shows `No API definition provided.` | **Describe the bug**
After upgrading from 0.22.2 to 1.1.0, the /api/docs page shows `No API definition provided.`
It works well locally (docker-compose); I don't know why it suddenly stopped working.
I don't know whether it is exactly related to the django-ninja version, but it suddenly happened after upgrading.
**Versions (please complete the following information):**
- Python version: 3.11
- Django version: 4.2.4
- Django-Ninja version: 1.1.
- Pydantic version: 2.5.3
Note you can quickly get these by running this line in `./manage.py shell`:
```
import django; import pydantic; import ninja; django.__version__; ninja.__version__; pydantic.__version__
```
| closed | 2024-01-31T05:07:28Z | 2024-02-18T05:51:27Z | https://github.com/vitalik/django-ninja/issues/1069 | [] | baidoosik | 1 |
horovod/horovod | deep-learning | 3,306 | Error when trying to install horovod | Hello,
I've been working on this for a while, but couldn't figure it out yet. I would appreciate your help.
I am still not able to install horovod, although I did install all the requirements specified in the documentation.
I am getting this long error:
```
Installing collected packages: psutil, cloudpickle, cffi, horovod
Running setup.py install for psutil ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\Munira\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Munira\\AppData\\Local\\Temp\\pip-install-_cqd15u3\\psutil_f9f6130583a240c5833aaefb1d9da5e6\\setup.py'"'"'; __file__='"'"'C:\\Users\\Munira\\AppData\\Local\\Temp\\pip-install-_cqd15u3\\psutil_f9f6130583a240c5833aaefb1d9da5e6\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Munira\AppData\Local\Temp\pip-record-o4cdua9y\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Munira\AppData\Local\Programs\Python\Python310\Include\psutil'
cwd: C:\Users\Munira\AppData\Local\Temp\pip-install-_cqd15u3\psutil_f9f6130583a240c5833aaefb1d9da5e6\
Complete output (38 lines):
running install
running build
running build_py
creating build
creating build\lib.win32-3.10
creating build\lib.win32-3.10\psutil
copying psutil\_common.py -> build\lib.win32-3.10\psutil
copying psutil\_compat.py -> build\lib.win32-3.10\psutil
copying psutil\_psaix.py -> build\lib.win32-3.10\psutil
copying psutil\_psbsd.py -> build\lib.win32-3.10\psutil
copying psutil\_pslinux.py -> build\lib.win32-3.10\psutil
copying psutil\_psosx.py -> build\lib.win32-3.10\psutil
copying psutil\_psposix.py -> build\lib.win32-3.10\psutil
copying psutil\_pssunos.py -> build\lib.win32-3.10\psutil
copying psutil\_pswindows.py -> build\lib.win32-3.10\psutil
copying psutil\__init__.py -> build\lib.win32-3.10\psutil
creating build\lib.win32-3.10\psutil\tests
copying psutil\tests\runner.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_aix.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_bsd.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_connections.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_contracts.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_linux.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_memleaks.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_misc.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_osx.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_posix.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_process.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_sunos.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_system.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_testutils.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_unicode.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\test_windows.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\__init__.py -> build\lib.win32-3.10\psutil\tests
copying psutil\tests\__main__.py -> build\lib.win32-3.10\psutil\tests
running build_ext
building 'psutil._psutil_windows' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\Munira\AppData\Local\Programs\Python\Python310\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Munira\\AppData\\Local\\Temp\\pip-install-_cqd15u3\\psutil_f9f6130583a240c5833aaefb1d9da5e6\\setup.py'"'"'; __file__='"'"'C:\\Users\\Munira\\AppData\\Local\\Temp\\pip-install-_cqd15u3\\psutil_f9f6130583a240c5833aaefb1d9da5e6\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Munira\AppData\Local\Temp\pip-record-o4cdua9y\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Munira\AppData\Local\Programs\Python\Python310\Include\psutil' Check the logs for full command output.
WARNING: You are using pip version 21.2.4; however, version 21.3.1 is available.
You should consider upgrading via the 'C:\Users\Munira\AppData\Local\Programs\Python\Python310\python.exe -m pip install --upgrade pip' command.
```
I tried to reinstall the most recent g++ but still getting the same error.
Can anyone help please?
Thank you, | closed | 2021-12-08T13:11:05Z | 2022-03-13T18:56:25Z | https://github.com/horovod/horovod/issues/3306 | [
"wontfix"
] | n-balla | 8 |
pytorch/pytorch | machine-learning | 149,425 | python custom ops tutorial stopped working in PyTorch 2.7 RC1 | Get PyTorch 2.7 RC1. Repro in next comment.
Error looks like:
```py
Traceback (most recent call last):
File "/home/rzou/dev/2.7/pco.py", line 124, in <module>
cropped_img = f(img)
^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 655, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/pco.py", line 120, in f
@torch.compile(fullgraph=True)
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_dynamo/eval_frame.py", line 838, in _fn
return fn(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/aot_autograd.py", line 1201, in forward
return compiled_fn(full_args)
^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
328, in runtime_wrapper
all_outs = call_func_at_runtime_with_args(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/utils.py", line 126, in cal
l_func_at_runtime_with_args
out = normalize_as_list(f(args))
^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
689, in inner_fn
outs = compiled_fn(args)
^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_functorch/_aot_autograd/runtime_wrappers.py", line
495, in wrapper
return compiled_fn(runtime_args)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/rzou/dev/2.7/env/lib/python3.11/site-packages/torch/_inductor/output_code.py", line 460, in __call__
return self.current_callable(inputs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/tmp/torchinductor_rzou/oy/coy5shd4xlyzvhkrwtaiad5zxz7jhd654636vqhwxsyeux5q27d7.py", line 42, in call
assert_size_stride(buf1, (3, 40, 40), (1600, 40, 1))
AssertionError: expected size 3==3, stride 1==1600 at dim=0; expected size 40==40, stride 120==40 at dim=1; expected s
ize 40==40, stride 3==1 at dim=2
This error most often comes from a incorrect fake (aka meta) kernel for a custom op.
Use torch.library.opcheck to test your custom op.
See https://pytorch.org/docs/stable/library.html#torch.library.opcheck
```
cc @ezyang @gchanan @kadeng @msaroufim @voznesenskym @penguinwu @EikanWang @jgong5 @Guobing-Chen @XiaobingSuper @zhuhaozhe @blzheng @wenzhe-nrv @jiayisunx @ipiszy @chenyang78 @muchulee8 @amjames @chauhang @aakhundov | closed | 2025-03-18T19:57:03Z | 2025-03-19T15:08:34Z | https://github.com/pytorch/pytorch/issues/149425 | [
"high priority",
"triage review",
"oncall: pt2",
"module: inductor"
] | zou3519 | 4 |
matplotlib/matplotlib | matplotlib | 29,160 | [Bug]: ConciseDateFormatter doesn't handle DST changes correctly | ### Bug summary
When the time series being plotted spans a daylight-saving change (so the UTC offset of the data's time zone changes), the tick labels are formatted incorrectly. The formatter appears to react to the time-zone offset change instead of the correct base unit, in this example the day.
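To make the trigger concrete: across the first plotted range the zone's UTC offset changes, which is presumably what throws off the formatter's level detection (the mechanism is an assumption, not verified against the matplotlib source). A minimal check of the offset change with stdlib `zoneinfo`:

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

tz = ZoneInfo("America/Los_Angeles")
# US DST ended on 2024-11-03, inside the first plotted x-limit range.
before = datetime(2024, 11, 2, 12, tzinfo=tz)  # PDT, UTC-7
after = datetime(2024, 11, 4, 12, tzinfo=tz)   # PST, UTC-8
print(before.utcoffset() - after.utcoffset())  # the one-hour shift
```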
### Code for reproduction
```Python
import datetime
import matplotlib.pyplot as plt
import numpy as np
import matplotlib.dates as mdates
from zoneinfo import ZoneInfo
base = datetime.datetime(2005, 2, 1)
dates = [base + datetime.timedelta(hours=(2 * i)) for i in range(732)]
N = len(dates)
np.random.seed(19680801)
y = np.cumsum(np.random.randn(N))
lims = [(np.datetime64('2024-10-27'), np.datetime64('2024-11-10')),
(np.datetime64('2024-11-11'), np.datetime64('2024-11-22')),
(np.datetime64('2005-02-03 11:00'), np.datetime64('2005-02-04 13:20'))
]
fig, axs = plt.subplots(3, 1, layout='constrained', figsize=(6, 6))
for nn, ax in enumerate(axs):
# locator = mdates.AutoDateLocator()
locator = mdates.AutoDateLocator(tz=ZoneInfo("America/Los_Angeles"))
formatter = mdates.ConciseDateFormatter(locator)
formatter.formats = ['%y', # ticks are mostly years
'%b', # ticks are mostly months
'%d', # ticks are mostly days
'%H:%M', # hrs
'%H:%M', # min
'%S.%f', ] # secs
# these are mostly just the level above...
formatter.zero_formats = [''] + formatter.formats[:-1]
# ...except for ticks that are mostly hours, then it is nice to have
# month-day:
formatter.zero_formats[3] = '%d-%b'
formatter.offset_formats = ['',
'%Y',
'%b %Y',
'%d %b %Y',
'%d %b %Y',
'%d %b %Y %H:%M', ]
ax.xaxis.set_major_locator(locator)
ax.xaxis.set_major_formatter(formatter)
ax.plot(dates, y)
ax.set_xlim(lims[nn])
axs[0].set_title('Concise Date Formatter')
plt.show()
```
### Actual outcome
<img width="597" alt="Screenshot 2024-11-20 at 4 19 38 PM" src="https://github.com/user-attachments/assets/c94b0e61-44b1-4272-8c39-4653a32add4e">
### Expected outcome
The 2nd graph of the same image shows the expected tick labels.
### Additional information
_No response_
### Operating system
_No response_
### Matplotlib Version
3.9.2
### Matplotlib Backend
macosx
### Python version
Python 3.11.6
### Jupyter version
_No response_
### Installation
pip | open | 2024-11-20T05:27:42Z | 2024-11-21T23:26:58Z | https://github.com/matplotlib/matplotlib/issues/29160 | [
"topic: date handling"
] | otourzan | 2 |
cvat-ai/cvat | pytorch | 8,687 | Assignee set to None in API response | ### Actions before raising this issue
- [X] I searched the existing issues and did not find anything similar.
- [X] I read/searched [the docs](https://docs.cvat.ai/docs/)
### Steps to Reproduce
1. Create project with a task
2. Open task and assign job to a user
3. Use CVAT API to get project id by name
```
response = client.api_client.projects_api.list(name="test")
project_id = response[0]["results"][0]["id"]
```
4. List tasks
```
tasks = client.api_client.tasks_api.list(project_id=project_id)
```
5. Check `assignee` field value
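One possible explanation (an assumption, not confirmed): assigning a *job* in the UI does not set the *task*-level `assignee` field, so the task listing legitimately returns `None`. A sketch with simplified stand-in payloads showing where to look in each response:

```python
def collect_assignees(task_payload, job_payloads):
    """Gather assignee info from a task dict and its jobs' dicts."""
    task_assignee = task_payload.get("assignee")
    job_assignees = [j.get("assignee") for j in job_payloads]
    return task_assignee, job_assignees

# Simplified stand-ins for the real API responses.
task = {"id": 42, "assignee": None}
jobs = [{"id": 7, "assignee": {"username": "annotator1"}}]

print(collect_assignees(task, jobs))
# (None, [{'username': 'annotator1'}]) -- the task-level field can be None
# even though the job itself is assigned
```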
### Expected Behavior
I expected the value to equal the assignee's ID or username
### Possible Solution
_No response_
### Context
_No response_
### Environment
```Markdown
I'm using `app.cvat.ai`
```
| open | 2024-11-12T14:04:39Z | 2024-12-05T10:01:31Z | https://github.com/cvat-ai/cvat/issues/8687 | [
"bug",
"need info"
] | cile98 | 7 |
ultralytics/ultralytics | pytorch | 19,790 | In the validation images, there are many objects that are not detected | ### Search before asking
- [x] I have searched the Ultralytics YOLO [issues](https://github.com/ultralytics/ultralytics/issues) and found no similar bug report.
### Ultralytics YOLO Component
Val
### Bug
I trained a model to detect berries. I consider the dataset very representative: around 4k high-resolution images with a lot of variability. In general, the object to be detected appears multiple times in each image, averaging around 30-50 instances per image, that is, many instances of different sizes and locations. After training this model, the metrics indicate that it performs very well. However, when reviewing the test images, I notice that several objects are not detected (it detects about 80% of the objects).
My question is: how can I solve this? Is the model appropriate (YOLOv11)? Should I try another model or adjust some parameters?
btw, I have trained this model (`yolo11s` and `yolo11n`) several times but I'm still getting the same results
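One thing worth ruling out before changing models: instances missing from the plotted validation images are often present but below the confidence threshold used for plotting, so trying prediction with a lower threshold (e.g. `model.predict(source, conf=0.1)`, a documented `predict` argument) may recover many of them. The snippet below only illustrates the thresholding effect with hypothetical per-berry confidences:

```python
def filter_detections(scores, conf_thres):
    """Keep detections whose confidence clears the threshold."""
    return [s for s in scores if s >= conf_thres]

# Hypothetical confidences for one crowded image.
scores = [0.92, 0.88, 0.71, 0.43, 0.31, 0.22, 0.18, 0.12]

kept_default = filter_detections(scores, 0.25)  # stricter threshold
kept_low = filter_detections(scores, 0.10)      # more permissive
print(len(kept_default), len(kept_low))  # 5 8
```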
### Environment
```
Ultralytics 8.3.90 🚀 Python-3.12.3 torch-2.5.1 CUDA:0 (NVIDIA GeForce RTX 4070 Ti SUPER, 16376MiB)
Setup complete ✅ (24 CPUs, 95.2 GB RAM, 1804.4/2725.3 GB disk)
OS Windows-11-10.0.26100-SP0
Environment Windows
Python 3.12.3
Install pip
Path C:\Users\Josue\miniconda3\envs\gpu\lib\site-packages\ultralytics
RAM 95.19 GB
Disk 1804.4/2725.3 GB
CPU AMD Ryzen 9 7900X3D 12-Core Processor
CPU count 24
GPU count 1
CUDA 12.4
numpy ✅ 1.26.4<=2.1.1,>=1.23.0
matplotlib ✅ 3.9.0>=3.3.0
opencv-python ✅ 4.10.0.84>=4.6.0
pillow ✅ 10.4.0>=7.1.2
pyyaml ✅ 6.0.1>=5.3.1
requests ✅ 2.32.3>=2.23.0
scipy ✅ 1.14.0>=1.4.1
torch ✅ 2.5.1>=1.8.0
torch ✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"
torchvision ✅ 0.20.1>=0.9.0
tqdm ✅ 4.66.5>=4.64.0
psutil ✅ 5.9.0
py-cpuinfo ✅ 9.0.0
pandas ✅ 2.2.2>=1.1.4
seaborn ✅ 0.13.2>=0.11.0
ultralytics-thop ✅ 2.0.0>=2.0.0
{'OS': 'Windows-11-10.0.26100-SP0', 'Environment': 'Windows', 'Python': '3.12.3', 'Install': 'pip', 'Path': 'C:\\Users\\Josue\\miniconda3\\envs\\gpu\\lib\\site-packages\\ultralytics', 'RAM': '95.19 GB', 'Disk': '1804.4/2725.3 GB', 'CPU': 'AMD Ryzen 9 7900X3D 12-Core Processor', 'CPU count': 24, 'GPU': 'NVIDIA GeForce RTX 4070 Ti SUPER, 16376MiB', 'GPU count': 1, 'CUDA': '12.4', 'Package Info': {'numpy': '✅ 1.26.4<=2.1.1,>=1.23.0', 'matplotlib': '✅ 3.9.0>=3.3.0', 'opencv-python': '✅ 4.10.0.84>=4.6.0', 'pillow': '✅ 10.4.0>=7.1.2', 'pyyaml': '✅ 6.0.1>=5.3.1', 'requests': '✅ 2.32.3>=2.23.0', 'scipy': '✅ 1.14.0>=1.4.1', 'torch': '✅ 2.5.1!=2.4.0,>=1.8.0; sys_platform == "win32"', 'torchvision': '✅ 0.20.1>=0.9.0', 'tqdm': '✅ 4.66.5>=4.64.0', 'psutil': '✅ 5.9.0', 'py-cpuinfo': '✅ 9.0.0', 'pandas': '✅ 2.2.2>=1.1.4', 'seaborn': '✅ 0.13.2>=0.11.0', 'ultralytics-thop': '✅ 2.0.0>=2.0.0'}}
```
### Minimal Reproducible Example
Dataset yaml:
```yaml
path: C:/Users/Josue/strawberry/data/datasets/berry_polygon
train: train
val: test
nc: 1
names:
0: berry
```
Training code:
```python
from ultralytics import YOLO
from ultralytics.data.augment import Albumentations
from ultralytics.engine.trainer import BaseTrainer
import albumentations as A
import wandb
def callback_custom_albumentations(trainer: BaseTrainer):
T = [
A.OneOf([A.ToGray(), A.RandomGamma(), A.CLAHE()]),
A.OneOf([A.Blur(blur_limit=(3, 9)), A.MedianBlur(blur_limit=(3, 7))]),
A.RandomBrightnessContrast(brightness_limit=0.2, contrast_limit=0.2, p=0.25),
A.HueSaturationValue(
hue_shift_limit=10, sat_shift_limit=10, val_shift_limit=10, p=0.25
),
]
transform = A.Compose(T)
for t in trainer.train_loader.dataset.transforms.transforms:
if isinstance(t, Albumentations):
t.transform = transform
break
trainer.train_loader.reset()
dataset_path = "../../data/datasets/berry_dataset.yaml"
wandb.init(project=PROJECT, job_type=RUN_NAME, name=RUN_NAME, notes=DESCRIPTION)
model = YOLO("yolo11s-seg.pt")
model.add_callback("on_pretrain_routine_end", callback_custom_albumentations)
results = model.train(
data=dataset_path,
project=PROJECT,
name=RUN_NAME,
single_cls=True,
plots=True,
batch=32,
epochs=256,
imgsz=imgsz,
seed=7,
degrees=2,
dropout=0.2,
mosaic=0.5,
)
wandb.finish()
```
### Additional




### Are you willing to submit a PR?
- [ ] Yes I'd like to help by submitting a PR! | closed | 2025-03-20T03:24:12Z | 2025-03-20T13:37:24Z | https://github.com/ultralytics/ultralytics/issues/19790 | [
"fixed",
"detect"
] | JosueDavalos | 3 |
autokey/autokey | automation | 172 | Pass user input to script | ## Classification:
Enhancement
## Summary
It would be nice to get access to what triggered the script from within it. This would enable us to create much more advanced scripts that produce different output depending on the trigger.
Is this doable and something you might consider? (might give it a go myself)
## Steps to Reproduce (if applicable)
- Setup multiple abbreviations to trigger the same script: `foo_bar, bar_foo, foo_foo`
- Type abbreviation `foo_bar`
- use the script to access the user input, e.g. `keyboard.send_keys(user_input.replace('_', '-'))`
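A sketch of what the requested feature could look like from a script's point of view, with `user_input` as a hypothetical variable injected by AutoKey (not an existing API):

```python
def expand(user_input):
    # Derive the output from whichever abbreviation fired.
    return user_input.replace('_', '-')

print(expand("foo_bar"))  # foo-bar
print(expand("bar_foo"))  # bar-foo

# Inside a real AutoKey script this would become something like:
# keyboard.send_keys(expand(user_input))
```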
## Expected Results
- `foo-bar` appears
## Version
AutoKey version: 0.95.1
Used GUI (Gtk, Qt, or both): Gtk
Installed via: PPA.
Distro: Ubuntu 18.04
EDIT: changed from 'matched abbreviation' to 'user input' as that would be more useful and enable scripts to leverage other AutoKey functionality like ignoring case. | closed | 2018-08-06T15:37:46Z | 2018-09-20T22:08:28Z | https://github.com/autokey/autokey/issues/172 | [
"enhancement"
] | syko | 5 |
Lightning-AI/LitServe | api | 263 | Setup step is not awaited | ## 🐛 Bug
The server does not wait for the `setup` step of all worker processes to finish before declaring startup complete. This means the application is reported as working even though the model is still not loaded.
### To Reproduce
Please take a look at the code sample below.
#### Code sample
```
import litserve as ls
from gliner import GLiNER
# (STEP 1) - DEFINE THE API
class SimpleLitAPI(ls.LitAPI):
def setup(self, device):
# setup is called once at startup. Build a compound AI system (1+ models), connect DBs, load data, etc...
self.model = GLiNER.from_pretrained("urchade/gliner_mediumv2.1")
```
Logs:
```
sp-gliner-api-v1 | INFO: Started server process [31]
sp-gliner-api-v1 | INFO: Waiting for application startup.
sp-gliner-api-v1 | INFO: Application startup complete.
sp-gliner-api-v1 | uvloop is not installed. Falling back to the default asyncio event loop.
Fetching 5 files: 100%|██████████| 5/5 [00:00<00:00, 17832.93it/s]
sp-gliner-api-v1 | /usr/local/lib/python3.9/site-packages/transformers/tokenization_utils_base.py:1601: FutureWarning: `clean_up_tokenization_spaces` was not set. It will be set to `True` by default. This behavior will be depracted in transformers v4.45, and will be then set to `False` by default. For more details check this issue: https://github.com/huggingface/transformers/issues/31884
....
```
### Expected behavior
`Application startup complete` should be triggered after the model is loaded.
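The expected sequencing can be sketched with a readiness flag: the startup announcement should be gated on every worker's setup having completed. This is a plain-Python illustration of the desired behavior, not of LitServe internals:

```python
import threading
import time

ready = threading.Event()

def worker_setup():
    time.sleep(0.2)  # stands in for GLiNER.from_pretrained(...)
    ready.set()      # only now is the worker actually usable

t = threading.Thread(target=worker_setup)
t.start()

# Desired behavior: block the "startup complete" log on setup finishing.
ready.wait()
t.join()
print("Application startup complete")  # printed only after the model loads
```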
### Environment
Local / Docker
### Additional context
| closed | 2024-09-02T09:03:50Z | 2024-09-08T11:42:32Z | https://github.com/Lightning-AI/LitServe/issues/263 | [
"bug",
"help wanted"
] | andreieuganox | 2 |
encode/httpx | asyncio | 2,537 | Multipart Form Data Headers | After testing #2382 it seems that the request headers are not being updated correctly.
```
filename = 'test.tar'
data = {"file": (Path(filename).name, open(filename, "rb"), "application/x-tar")}
resp = httpx.request("POST", 'URL', files=data)
```
The request has the following header ` {'Content-Type': 'multipart/form-data; boundary=b194f2c9bc744ebf40c3fa03d6d53987'}`
However if using an initialized client:
```
client = httpx.Client(headers={"Content-Type": "application/json"})
filename = 'test.tar'
data = {"file": (Path(filename).name, open(filename, "rb"), "application/x-tar")}
resp = client.post('URL', files=data)
```
The request is still using ` {'Content-Type': 'application/json'}`
Error message from our API: `'{"code":"API_MALFORMED_BODY","message":"Malformed JSON"}'`
Shouldn't the POST change the headers if using multipart forms?
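One plausible model of what is happening (an assumption, not a reading of the httpx source): the multipart `Content-Type` with its generated boundary is derived from the request body, and an explicitly configured client-level default masks it. A toy sketch of that merge logic, which reproduces the reported symptom; if this model is right, a workaround is to delete `Content-Type` from `client.headers` before posting files:

```python
def merge_headers(client_headers, request_headers, body_headers):
    """Toy model: explicit headers win over headers derived from the body."""
    merged = dict(body_headers)     # e.g. multipart boundary
    merged.update(client_headers)   # client defaults override body-derived
    merged.update(request_headers)  # per-request values win overall
    return merged

body_headers = {"Content-Type": "multipart/form-data; boundary=b194f2c9"}
client_headers = {"Content-Type": "application/json"}

print(merge_headers(client_headers, {}, body_headers))
# {'Content-Type': 'application/json'} -- the boundary is lost,
# matching the reported behavior
```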
Specifying the following also does not work because the boundaries are not included in the headers:
`resp = client.post('URL', files=data, headers={'Content-Type': 'multipart/form-data'})` | closed | 2023-01-03T14:48:28Z | 2023-01-05T10:46:06Z | https://github.com/encode/httpx/issues/2537 | [
"discussion"
] | ghost | 1 |
wkentaro/labelme | deep-learning | 835 | Rotated rectangle implementation [Question] | closed | 2021-02-03T23:03:59Z | 2023-03-01T05:53:06Z | https://github.com/wkentaro/labelme/issues/835 | [] | edu638s | 1 |