| instance_id (string, 17-36 chars) | text (string, 14k-547k chars) | repo (4 distinct values) | base_commit (string, 40 chars) | problem_statement (10 distinct values) | hints_text (8 distinct values) | created_at (string, 20 chars) | patch (10 distinct values) | test_patch (10 distinct values) | version (2 distinct values) | FAIL_TO_PASS (1 distinct value) | PASS_TO_PASS (1 distinct value) | environment_setup_commit (1 distinct value) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
mixpanel__mixpanel-python-64
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
flush function for Buffered Consumer not working
Hi,
in class BufferedConsumer the flush function in line 338 should change to
def flush (self,api_key=None)
and then in line 444-445 should change to:
for endpoint in self._buffers.keys():
self._flush_endpoint(endpoint,api_key=api_key)
</issue>
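The failure mode the issue describes can be sketched with a toy stand-in (a hypothetical `ToyBuffered` class, not the library's real code): `flush()` iterates the buffers but has no way to forward an `api_key`, so batches bound for the imports endpoint go out without authentication.

```python
# Toy model of the reported bug (hypothetical class, for illustration only).
class ToyBuffered:
    def __init__(self, sender):
        self._buffers = {"events": [], "imports": []}
        self._sender = sender  # called as sender(endpoint, batch, api_key)

    def send(self, endpoint, message, api_key=None):
        # The api_key argument is accepted here but not stored anywhere.
        self._buffers[endpoint].append(message)

    def flush(self):
        # Bug pattern: flush() has no api_key to pass along, so every
        # batch, including imports, is sent with api_key=None.
        for endpoint, buf in self._buffers.items():
            if buf:
                self._sender(endpoint, list(buf), None)
                del buf[:]

sent = []
tb = ToyBuffered(lambda ep, batch, key: sent.append((ep, batch, key)))
tb.send("imports", '"Event"', api_key="MY_API_KEY")
tb.flush()
# The api_key given to send() never reaches the sender:
# sent == [("imports", ['"Event"'], None)]
```

This mirrors the "token, missing or empty" style of failure: the server receives the import batch but no credential.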
<code>
[start of README.rst]
1 mixpanel-python |travis-badge|
2 ==============================
3
4 This is the official Mixpanel Python library. This library allows for
5 server-side integration of Mixpanel.
6
7
8 Installation
9 ------------
10
11 The library can be installed using pip::
12
13 pip install mixpanel
14
15
16 Getting Started
17 ---------------
18
19 Typical usage looks like this::
20
21 from mixpanel import Mixpanel
22
23 mp = Mixpanel(YOUR_TOKEN)
24
25 # tracks an event with certain properties
26 mp.track(DISTINCT_ID, 'button clicked', {'color' : 'blue', 'size': 'large'})
27
28 # sends an update to a user profile
29 mp.people_set(DISTINCT_ID, {'$first_name' : 'Ilya', 'favorite pizza': 'margherita'})
30
31 You can use an instance of the Mixpanel class for sending all of your events
32 and people updates.
33
34
35 Additional Information
36 ----------------------
37
38 * `Help Docs`_
39 * `Full Documentation`_
40 * mixpanel-python-async_; a third party tool for sending data asynchronously
41 from the tracking python process.
42
43
44 .. |travis-badge| image:: https://travis-ci.org/mixpanel/mixpanel-python.svg?branch=master
45 :target: https://travis-ci.org/mixpanel/mixpanel-python
46 .. _Help Docs: https://www.mixpanel.com/help/reference/python
47 .. _Full Documentation: http://mixpanel.github.io/mixpanel-python/
48 .. _mixpanel-python-async: https://github.com/jessepollak/mixpanel-python-async
49
[end of README.rst]
[start of mixpanel/__init__.py]
1 # -*- coding: utf-8 -*-
2 """This is the official Mixpanel client library for Python.
3
4 Mixpanel client libraries allow for tracking events and setting properties on
5 People Analytics profiles from your server-side projects. This is the API
6 documentation; you may also be interested in the higher-level `usage
7 documentation`_. If your users are interacting with your application via the
8 web, you may also be interested in our `JavaScript library`_.
9
10 .. _`Javascript library`: https://mixpanel.com/help/reference/javascript
11 .. _`usage documentation`: https://mixpanel.com/help/reference/python
12
13 :class:`~.Mixpanel` is the primary class for tracking events and sending People
14 Analytics updates. :class:`~.Consumer` and :class:`~.BufferedConsumer` allow
15 callers to customize the IO characteristics of their tracking.
16 """
17 from __future__ import absolute_import, unicode_literals
18 import base64
19 import datetime
20 import json
21 import time
22
23 import six
24 from six.moves import urllib
25
26 __version__ = '4.3.1'
27 VERSION = __version__ # TODO: remove when bumping major version.
28
29
30 class DatetimeSerializer(json.JSONEncoder):
31 def default(self, obj):
32 if isinstance(obj, datetime.datetime):
33 fmt = '%Y-%m-%dT%H:%M:%S'
34 return obj.strftime(fmt)
35
36 return json.JSONEncoder.default(self, obj)
37
38
39 def json_dumps(data, cls=None):
40 # Separators are specified to eliminate whitespace.
41 return json.dumps(data, separators=(',', ':'), cls=cls)
42
43
44 class Mixpanel(object):
45 """Instances of Mixpanel are used for all events and profile updates.
46
47 :param str token: your project's Mixpanel token
48 :param consumer: can be used to alter the behavior of tracking (default
49 :class:`~.Consumer`)
50 :param json.JSONEncoder serializer: a JSONEncoder subclass used to handle
51 JSON serialization (default :class:`~.DatetimeSerializer`)
52
53 See `Built-in consumers`_ for details about the consumer interface.
54
55 .. versionadded:: 4.2.0
56 The *serializer* parameter.
57 """
58
59 def __init__(self, token, consumer=None, serializer=DatetimeSerializer):
60 self._token = token
61 self._consumer = consumer or Consumer()
62 self._serializer = serializer
63
64 def _now(self):
65 return time.time()
66
67 def track(self, distinct_id, event_name, properties=None, meta=None):
68 """Record an event.
69
70 :param str distinct_id: identifies the user triggering the event
71 :param str event_name: a name describing the event
72 :param dict properties: additional data to record; keys should be
73 strings, and values should be strings, numbers, or booleans
74 :param dict meta: overrides Mixpanel special properties
75
76 ``properties`` should describe the circumstances of the event, or
77 aspects of the source or user associated with it. ``meta`` is used
78 (rarely) to override special values sent in the event object.
79 """
80 all_properties = {
81 'token': self._token,
82 'distinct_id': distinct_id,
83 'time': int(self._now()),
84 'mp_lib': 'python',
85 '$lib_version': __version__,
86 }
87 if properties:
88 all_properties.update(properties)
89 event = {
90 'event': event_name,
91 'properties': all_properties,
92 }
93 if meta:
94 event.update(meta)
95 self._consumer.send('events', json_dumps(event, cls=self._serializer))
96
97 def import_data(self, api_key, distinct_id, event_name, timestamp,
98 properties=None, meta=None):
99 """Record an event that occured more than 5 days in the past.
100
101 :param str api_key: your Mixpanel project's API key
102 :param str distinct_id: identifies the user triggering the event
103 :param str event_name: a name describing the event
104 :param int timestamp: UTC seconds since epoch
105 :param dict properties: additional data to record; keys should be
106 strings, and values should be strings, numbers, or booleans
107 :param dict meta: overrides Mixpanel special properties
108
109 To avoid accidentally recording invalid events, the Mixpanel API's
110 ``track`` endpoint disallows events that occurred too long ago. This
111 method can be used to import such events. See our online documentation
112 for `more details
113 <https://mixpanel.com/docs/api-documentation/importing-events-older-than-31-days>`__.
114 """
115 all_properties = {
116 'token': self._token,
117 'distinct_id': distinct_id,
118 'time': int(timestamp),
119 'mp_lib': 'python',
120 '$lib_version': __version__,
121 }
122 if properties:
123 all_properties.update(properties)
124 event = {
125 'event': event_name,
126 'properties': all_properties,
127 }
128 if meta:
129 event.update(meta)
130 self._consumer.send('imports', json_dumps(event, cls=self._serializer), api_key)
131
132 def alias(self, alias_id, original, meta=None):
133 """Apply a custom alias to a people record.
134
135 :param str alias_id: the new distinct_id
136 :param str original: the previous distinct_id
137 :param dict meta: overrides Mixpanel special properties
138
139 Immediately creates a one-way mapping between two ``distinct_ids``.
140 Events triggered by the new id will be associated with the existing
141 user's profile and behavior. See our online documentation for `more
142 details
143 <https://mixpanel.com/docs/integration-libraries/using-mixpanel-alias>`__.
144
145 .. note::
146 Calling this method *always* results in a synchronous HTTP request
147 to Mixpanel servers, regardless of any custom consumer.
148 """
149 sync_consumer = Consumer()
150 event = {
151 'event': '$create_alias',
152 'properties': {
153 'distinct_id': original,
154 'alias': alias_id,
155 'token': self._token,
156 },
157 }
158 if meta:
159 event.update(meta)
160 sync_consumer.send('events', json_dumps(event, cls=self._serializer))
161
162 def people_set(self, distinct_id, properties, meta=None):
163 """Set properties of a people record.
164
165 :param str distinct_id: the profile to update
166 :param dict properties: properties to set
167 :param dict meta: overrides Mixpanel `special properties`_
168
169 .. _`special properties`: https://mixpanel.com/help/reference/http#people-analytics-updates
170
171 If the profile does not exist, creates a new profile with these properties.
172 """
173 return self.people_update({
174 '$distinct_id': distinct_id,
175 '$set': properties,
176 }, meta=meta or {})
177
178 def people_set_once(self, distinct_id, properties, meta=None):
179 """Set properties of a people record if they are not already set.
180
181 :param str distinct_id: the profile to update
182 :param dict properties: properties to set
183
184 Any properties that already exist on the profile will not be
185 overwritten. If the profile does not exist, creates a new profile with
186 these properties.
187 """
188 return self.people_update({
189 '$distinct_id': distinct_id,
190 '$set_once': properties,
191 }, meta=meta or {})
192
193 def people_increment(self, distinct_id, properties, meta=None):
194 """Increment/decrement numerical properties of a people record.
195
196 :param str distinct_id: the profile to update
197 :param dict properties: properties to increment/decrement; values
198 should be numeric
199
200 Adds numerical values to properties of a people record. Nonexistent
201 properties on the record default to zero. Negative values in
202 ``properties`` will decrement the given property.
203 """
204 return self.people_update({
205 '$distinct_id': distinct_id,
206 '$add': properties,
207 }, meta=meta or {})
208
209 def people_append(self, distinct_id, properties, meta=None):
210 """Append to the list associated with a property.
211
212 :param str distinct_id: the profile to update
213 :param dict properties: properties to append
214
215 Adds items to list-style properties of a people record. Appending to
216 nonexistent properties results in a list with a single element. For
217 example::
218
219 mp.people_append('123', {'Items': 'Super Arm'})
220 """
221 return self.people_update({
222 '$distinct_id': distinct_id,
223 '$append': properties,
224 }, meta=meta or {})
225
226 def people_union(self, distinct_id, properties, meta=None):
227 """Merge the values of a list associated with a property.
228
229 :param str distinct_id: the profile to update
230 :param dict properties: properties to merge
231
232 Merges list values in ``properties`` with existing list-style
233 properties of a people record. Duplicate values are ignored. For
234 example::
235
236 mp.people_union('123', {'Items': ['Super Arm', 'Fire Storm']})
237 """
238 return self.people_update({
239 '$distinct_id': distinct_id,
240 '$union': properties,
241 }, meta=meta or {})
242
243 def people_unset(self, distinct_id, properties, meta=None):
244 """Permanently remove properties from a people record.
245
246 :param str distinct_id: the profile to update
247 :param list properties: property names to remove
248 """
249 return self.people_update({
250 '$distinct_id': distinct_id,
251 '$unset': properties,
252 }, meta=meta)
253
254 def people_delete(self, distinct_id, meta=None):
255 """Permanently delete a people record.
256
257 :param str distinct_id: the profile to delete
258 """
259 return self.people_update({
260 '$distinct_id': distinct_id,
261 '$delete': "",
262 }, meta=meta or None)
263
264 def people_track_charge(self, distinct_id, amount,
265 properties=None, meta=None):
266 """Track a charge on a people record.
267
268 :param str distinct_id: the profile with which to associate the charge
269 :param numeric amount: number of dollars charged
270 :param dict properties: extra properties related to the transaction
271
272 Record that you have charged the current user a certain amount of
273         money. Charges recorded this way will appear in the Mixpanel
274 revenue report.
275 """
276 if properties is None:
277 properties = {}
278 properties.update({'$amount': amount})
279 return self.people_append(
280 distinct_id, {'$transactions': properties or {}}, meta=meta or {}
281 )
282
283 def people_clear_charges(self, distinct_id, meta=None):
284 """Permanently clear all charges on a people record.
285
286 :param str distinct_id: the profile whose charges will be cleared
287 """
288 return self.people_unset(
289 distinct_id, ["$transactions"], meta=meta or {},
290 )
291
292 def people_update(self, message, meta=None):
293 """Send a generic update to Mixpanel people analytics.
294
295 :param dict message: the message to send
296
297 Callers are responsible for formatting the update message as documented
298 in the `Mixpanel HTTP specification`_. This method may be useful if you
299 want to use very new or experimental features of people analytics, but
300 please use the other ``people_*`` methods where possible.
301
302 .. _`Mixpanel HTTP specification`: https://mixpanel.com/help/reference/http
303 """
304 record = {
305 '$token': self._token,
306 '$time': int(self._now() * 1000),
307 }
308 record.update(message)
309 if meta:
310 record.update(meta)
311 self._consumer.send('people', json_dumps(record, cls=self._serializer))
312
313
314 class MixpanelException(Exception):
315 """Raised by consumers when unable to send messages.
316
317 This could be caused by a network outage or interruption, or by an invalid
318 endpoint passed to :meth:`.Consumer.send`.
319 """
320 pass
321
322
323 class Consumer(object):
324 """
325 A consumer that sends an HTTP request directly to the Mixpanel service, one
326 per call to :meth:`~.send`.
327
328 :param str events_url: override the default events API endpoint
329 :param str people_url: override the default people API endpoint
330 :param str import_url: override the default import API endpoint
331 :param int request_timeout: connection timeout in seconds
332 """
333
334 def __init__(self, events_url=None, people_url=None, import_url=None, request_timeout=None):
335 self._endpoints = {
336 'events': events_url or 'https://api.mixpanel.com/track',
337 'people': people_url or 'https://api.mixpanel.com/engage',
338 'imports': import_url or 'https://api.mixpanel.com/import',
339 }
340 self._request_timeout = request_timeout
341
342 def send(self, endpoint, json_message, api_key=None):
343 """Immediately record an event or a profile update.
344
345 :param endpoint: the Mixpanel API endpoint appropriate for the message
346 :type endpoint: "events" | "people" | "imports"
347 :param str json_message: a JSON message formatted for the endpoint
348 :raises MixpanelException: if the endpoint doesn't exist, the server is
349 unreachable, or the message cannot be processed
350 """
351 if endpoint in self._endpoints:
352 self._write_request(self._endpoints[endpoint], json_message, api_key)
353 else:
354 raise MixpanelException('No such endpoint "{0}". Valid endpoints are one of {1}'.format(endpoint, self._endpoints.keys()))
355
356 def _write_request(self, request_url, json_message, api_key=None):
357 data = {
358 'data': base64.b64encode(json_message.encode('utf8')),
359 'verbose': 1,
360 'ip': 0,
361 }
362 if api_key:
363 data.update({'api_key': api_key})
364 encoded_data = urllib.parse.urlencode(data).encode('utf8')
365 try:
366 request = urllib.request.Request(request_url, encoded_data)
367
368 # Note: We don't send timeout=None here, because the timeout in urllib2 defaults to
369 # an internal socket timeout, not None.
370 if self._request_timeout is not None:
371 response = urllib.request.urlopen(request, timeout=self._request_timeout).read()
372 else:
373 response = urllib.request.urlopen(request).read()
374 except urllib.error.URLError as e:
375 six.raise_from(MixpanelException(e), e)
376
377 try:
378 response = json.loads(response.decode('utf8'))
379 except ValueError:
380 raise MixpanelException('Cannot interpret Mixpanel server response: {0}'.format(response))
381
382 if response['status'] != 1:
383 raise MixpanelException('Mixpanel error: {0}'.format(response['error']))
384
385 return True
386
387
388 class BufferedConsumer(object):
389 """
390 A consumer that maintains per-endpoint buffers of messages and then sends
391 them in batches. This can save bandwidth and reduce the total amount of
392 time required to post your events to Mixpanel.
393
394 .. note::
395 Because :class:`~.BufferedConsumer` holds events, you need to call
396 :meth:`~.flush` when you're sure you're done sending them—for example,
397 just before your program exits. Calls to :meth:`~.flush` will send all
398 remaining unsent events being held by the instance.
399
400 :param int max_size: number of :meth:`~.send` calls for a given endpoint to
401 buffer before flushing automatically
402 :param str events_url: override the default events API endpoint
403 :param str people_url: override the default people API endpoint
404 :param str import_url: override the default import API endpoint
405 :param int request_timeout: connection timeout in seconds
406 """
407 def __init__(self, max_size=50, events_url=None, people_url=None, import_url=None, request_timeout=None):
408 self._consumer = Consumer(events_url, people_url, import_url, request_timeout)
409 self._buffers = {
410 'events': [],
411 'people': [],
412 'imports': [],
413 }
414 self._max_size = min(50, max_size)
415
416 def send(self, endpoint, json_message, api_key=None):
417 """Record an event or profile update.
418
419 Internally, adds the message to a buffer, and then flushes the buffer
420 if it has reached the configured maximum size. Note that exceptions
421 raised may have been caused by a message buffered by an earlier call to
422 :meth:`~.send`.
423
424 :param endpoint: the Mixpanel API endpoint appropriate for the message
425 :type endpoint: "events" | "people" | "imports"
426 :param str json_message: a JSON message formatted for the endpoint
427 :raises MixpanelException: if the endpoint doesn't exist, the server is
428 unreachable, or any buffered message cannot be processed
429 """
430 if endpoint not in self._buffers:
431 raise MixpanelException('No such endpoint "{0}". Valid endpoints are one of {1}'.format(endpoint, self._buffers.keys()))
432
433 buf = self._buffers[endpoint]
434 buf.append(json_message)
435 if len(buf) >= self._max_size:
436 self._flush_endpoint(endpoint, api_key)
437
438 def flush(self):
439 """Immediately send all buffered messages to Mixpanel.
440
441 :raises MixpanelException: if the server is unreachable or any buffered
442 message cannot be processed
443 """
444 for endpoint in self._buffers.keys():
445 self._flush_endpoint(endpoint)
446
447 def _flush_endpoint(self, endpoint, api_key=None):
448 buf = self._buffers[endpoint]
449 while buf:
450 batch = buf[:self._max_size]
451 batch_json = '[{0}]'.format(','.join(batch))
452 try:
453 self._consumer.send(endpoint, batch_json, api_key)
454 except MixpanelException as orig_e:
455 mp_e = MixpanelException(orig_e)
456 mp_e.message = batch_json
457 mp_e.endpoint = endpoint
458 six.raise_from(mp_e, orig_e)
459 buf = buf[self._max_size:]
460 self._buffers[endpoint] = buf
461
[end of mixpanel/__init__.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
mixpanel/mixpanel-python
|
40c98e0b285898384cc4aa6cc803d8d0f46f6218
|
flush function for Buffered Consumer not working
Hi,
in class BufferedConsumer the flush function in line 338 should change to
def flush (self,api_key=None)
and then in line 444-445 should change to:
for endpoint in self._buffers.keys():
self._flush_endpoint(endpoint,api_key=api_key)
|
+1
I have the same issue. The exception is: "Mixpanel error: token, missing or empty" because of this bug.
+1 I also just ran into this. Is it worth submitting a PR for this? I see 3 unmerged PRs that are a few years old.
|
2016-12-22T00:07:05Z
|
<patch>
diff --git a/mixpanel/__init__.py b/mixpanel/__init__.py
--- a/mixpanel/__init__.py
+++ b/mixpanel/__init__.py
@@ -345,6 +345,7 @@ def send(self, endpoint, json_message, api_key=None):
:param endpoint: the Mixpanel API endpoint appropriate for the message
:type endpoint: "events" | "people" | "imports"
:param str json_message: a JSON message formatted for the endpoint
+ :param str api_key: your Mixpanel project's API key
:raises MixpanelException: if the endpoint doesn't exist, the server is
unreachable, or the message cannot be processed
"""
@@ -412,6 +413,7 @@ def __init__(self, max_size=50, events_url=None, people_url=None, import_url=Non
'imports': [],
}
self._max_size = min(50, max_size)
+ self._api_key = None
def send(self, endpoint, json_message, api_key=None):
"""Record an event or profile update.
@@ -424,16 +426,22 @@ def send(self, endpoint, json_message, api_key=None):
:param endpoint: the Mixpanel API endpoint appropriate for the message
:type endpoint: "events" | "people" | "imports"
:param str json_message: a JSON message formatted for the endpoint
+ :param str api_key: your Mixpanel project's API key
:raises MixpanelException: if the endpoint doesn't exist, the server is
unreachable, or any buffered message cannot be processed
+
+ .. versionadded:: 4.3.2
+ The *api_key* parameter.
"""
if endpoint not in self._buffers:
raise MixpanelException('No such endpoint "{0}". Valid endpoints are one of {1}'.format(endpoint, self._buffers.keys()))
buf = self._buffers[endpoint]
buf.append(json_message)
+ if api_key is not None:
+ self._api_key = api_key
if len(buf) >= self._max_size:
- self._flush_endpoint(endpoint, api_key)
+ self._flush_endpoint(endpoint)
def flush(self):
"""Immediately send all buffered messages to Mixpanel.
@@ -444,13 +452,13 @@ def flush(self):
for endpoint in self._buffers.keys():
self._flush_endpoint(endpoint)
- def _flush_endpoint(self, endpoint, api_key=None):
+ def _flush_endpoint(self, endpoint):
buf = self._buffers[endpoint]
while buf:
batch = buf[:self._max_size]
batch_json = '[{0}]'.format(','.join(batch))
try:
- self._consumer.send(endpoint, batch_json, api_key)
+ self._consumer.send(endpoint, batch_json, self._api_key)
except MixpanelException as orig_e:
mp_e = MixpanelException(orig_e)
mp_e.message = batch_json
</patch>
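The merged patch above takes a "remember the key" approach: `send()` stashes any `api_key` on the instance, and `_flush_endpoint` reads it back at flush time. A minimal sketch of that design (toy names, not the real class):

```python
# Sketch of the fix's design: remember api_key at send() time, reuse at flush.
class ToyBufferedFixed:
    def __init__(self, sender):
        self._buffers = {"events": [], "imports": []}
        self._sender = sender  # called as sender(endpoint, batch, api_key)
        self._api_key = None

    def send(self, endpoint, message, api_key=None):
        if api_key is not None:
            self._api_key = api_key  # remembered for later flushes
        self._buffers[endpoint].append(message)

    def flush(self):
        # The stored key now reaches every batch, automatic or explicit.
        for endpoint, buf in self._buffers.items():
            if buf:
                self._sender(endpoint, list(buf), self._api_key)
                del buf[:]
```

Note the trade-off this design accepts: a single `_api_key` per consumer instance, last writer wins, which matches the patch's single `self._api_key` attribute rather than tracking a key per buffered message.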
|
diff --git a/test_mixpanel.py b/test_mixpanel.py
--- a/test_mixpanel.py
+++ b/test_mixpanel.py
@@ -353,40 +353,32 @@ class TestBufferedConsumer:
def setup_class(cls):
cls.MAX_LENGTH = 10
cls.consumer = mixpanel.BufferedConsumer(cls.MAX_LENGTH)
- cls.mock = Mock()
- cls.mock.read.return_value = six.b('{"status":1, "error": null}')
+ cls.consumer._consumer = LogConsumer()
+ cls.log = cls.consumer._consumer.log
- def test_buffer_hold_and_flush(self):
- with patch('six.moves.urllib.request.urlopen', return_value=self.mock) as urlopen:
- self.consumer.send('events', '"Event"')
- assert not self.mock.called
- self.consumer.flush()
+ def setup_method(self):
+ del self.log[:]
- assert urlopen.call_count == 1
-
- (call_args, kwargs) = urlopen.call_args
- (request,) = call_args
- timeout = kwargs.get('timeout', None)
-
- assert request.get_full_url() == 'https://api.mixpanel.com/track'
- assert qs(request.data) == qs('ip=0&data=WyJFdmVudCJd&verbose=1')
- assert timeout is None
+ def test_buffer_hold_and_flush(self):
+ self.consumer.send('events', '"Event"')
+ assert len(self.log) == 0
+ self.consumer.flush()
+ assert self.log == [('events', ['Event'])]
def test_buffer_fills_up(self):
- with patch('six.moves.urllib.request.urlopen', return_value=self.mock) as urlopen:
- for i in range(self.MAX_LENGTH - 1):
- self.consumer.send('events', '"Event"')
- assert not self.mock.called
-
- self.consumer.send('events', '"Last Event"')
+ for i in range(self.MAX_LENGTH - 1):
+ self.consumer.send('events', '"Event"')
+ assert len(self.log) == 0
- assert urlopen.call_count == 1
- ((request,), _) = urlopen.call_args
- assert request.get_full_url() == 'https://api.mixpanel.com/track'
- assert qs(request.data) == \
- qs('ip=0&data=WyJFdmVudCIsIkV2ZW50IiwiRXZlbnQiLCJFdmVudCIsIkV2ZW50IiwiRXZlbnQiLCJFdmVudCIsIkV2ZW50IiwiRXZlbnQiLCJMYXN0IEV2ZW50Il0%3D&verbose=1')
+ self.consumer.send('events', '"Last Event"')
+ assert len(self.log) == 1
+ assert self.log == [('events', [
+ 'Event', 'Event', 'Event', 'Event', 'Event',
+ 'Event', 'Event', 'Event', 'Event', 'Last Event',
+ ])]
- def test_unknown_endpoint(self):
+ def test_unknown_endpoint_raises_on_send(self):
+ # Ensure the exception isn't hidden until a flush.
with pytest.raises(mixpanel.MixpanelException):
self.consumer.send('unknown', '1')
@@ -394,17 +386,19 @@ def test_useful_reraise_in_flush_endpoint(self):
error_mock = Mock()
error_mock.read.return_value = six.b('{"status": 0, "error": "arbitrary error"}')
broken_json = '{broken JSON'
+ consumer = mixpanel.BufferedConsumer(2)
with patch('six.moves.urllib.request.urlopen', return_value=error_mock):
- self.consumer.send('events', broken_json)
+ consumer.send('events', broken_json)
with pytest.raises(mixpanel.MixpanelException) as excinfo:
- self.consumer.flush()
+ consumer.flush()
assert excinfo.value.message == '[%s]' % broken_json
assert excinfo.value.endpoint == 'events'
- def test_import_data_receives_api_key(self):
- # Ensure BufferedConsumer.send accepts the API_KEY parameter needed for
- # import_data; see #62.
+ def test_send_remembers_api_key(self):
self.consumer.send('imports', '"Event"', api_key='MY_API_KEY')
+ assert len(self.log) == 0
+ self.consumer.flush()
+ assert self.log == [('imports', ['Event'], 'MY_API_KEY')]
class TestFunctional:
|
4.3
| |||
NVIDIA__NeMo-7124
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Installation instructions should better indicate mandatory steps to make tests pass (or reinstall.sh needs an update)
**Is your feature request related to a problem? Please describe.**
I wanted to setup a dev conda environment for NeMo, so I followed steps at https://github.com/NVIDIA/NeMo/tree/main#from-source
Afterwards `pytest --cpu` was failing (before it could even run any test) with two errors:
* `module 'nvidia' has no attribute 'dali'`
* `No module named 'pynvml'`
**Describe the solution you'd like**
After manually installing both libraries with
* pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda120
* pip install pynvml
the tests were able to pass (`1420 passed, 304 skipped, 261 warnings`).
Ideally these libraries would be installed automatically by the `reinstall.sh` script.
</issue>
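The missing modules the reporter hit can be detected up front; a minimal sketch (a hypothetical helper, not part of NeMo or its test suite) that checks whether the optional test dependencies are importable before running pytest:

```python
import importlib.util

def missing_test_deps(candidates=("pynvml", "nvidia.dali")):
    """Return the names from `candidates` that cannot be imported."""
    missing = []
    for name in candidates:
        try:
            spec = importlib.util.find_spec(name)
        except ModuleNotFoundError:
            # find_spec raises when a parent package (e.g. `nvidia`) is absent.
            spec = None
        if spec is None:
            missing.append(name)
    return missing

# Example: report what still needs `pip install` before `pytest --cpu`.
for name in missing_test_deps():
    print(f"missing optional test dependency: {name}")
```

A check like this could guard the test collection step, pointing users at the two `pip install` commands from the issue instead of failing with an opaque attribute error.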
<code>
[start of README.rst]
1
2 |status| |documentation| |codeql| |license| |pypi| |pyversion| |downloads| |black|
3
4 .. |status| image:: http://www.repostatus.org/badges/latest/active.svg
5 :target: http://www.repostatus.org/#active
6 :alt: Project Status: Active – The project has reached a stable, usable state and is being actively developed.
7
8 .. |documentation| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
9 :alt: Documentation
10 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
11
12 .. |license| image:: https://img.shields.io/badge/License-Apache%202.0-brightgreen.svg
13 :target: https://github.com/NVIDIA/NeMo/blob/master/LICENSE
14 :alt: NeMo core license and license for collections in this repo
15
16 .. |pypi| image:: https://badge.fury.io/py/nemo-toolkit.svg
17 :target: https://badge.fury.io/py/nemo-toolkit
18 :alt: Release version
19
20 .. |pyversion| image:: https://img.shields.io/pypi/pyversions/nemo-toolkit.svg
21 :target: https://badge.fury.io/py/nemo-toolkit
22 :alt: Python version
23
24 .. |downloads| image:: https://static.pepy.tech/personalized-badge/nemo-toolkit?period=total&units=international_system&left_color=grey&right_color=brightgreen&left_text=downloads
25 :target: https://pepy.tech/project/nemo-toolkit
26 :alt: PyPi total downloads
27
28 .. |codeql| image:: https://github.com/nvidia/nemo/actions/workflows/codeql.yml/badge.svg?branch=main&event=push
29 :target: https://github.com/nvidia/nemo/actions/workflows/codeql.yml
30 :alt: CodeQL
31
32 .. |black| image:: https://img.shields.io/badge/code%20style-black-000000.svg
33 :target: https://github.com/psf/black
34 :alt: Code style: black
35
36 .. _main-readme:
37
38 **NVIDIA NeMo**
39 ===============
40
41 Introduction
42 ------------
43
44 NVIDIA NeMo is a conversational AI toolkit built for researchers working on automatic speech recognition (ASR),
45 text-to-speech synthesis (TTS), large language models (LLMs), and
46 natural language processing (NLP).
47 The primary objective of NeMo is to help researchers from industry and academia to reuse prior work (code and pretrained models)
48 and make it easier to create new `conversational AI models <https://developer.nvidia.com/conversational-ai#started>`_.
49
50 All NeMo models are trained with `Lightning <https://github.com/Lightning-AI/lightning>`_ and
51 training is automatically scalable to 1000s of GPUs.
52 Additionally, NeMo Megatron LLM models can be trained up to 1 trillion parameters using tensor and pipeline model parallelism.
53 NeMo models can be optimized for inference and deployed for production use-cases with `NVIDIA Riva <https://developer.nvidia.com/riva>`_.
54
55 Getting started with NeMo is simple.
56 State of the Art pretrained NeMo models are freely available on `HuggingFace Hub <https://huggingface.co/models?library=nemo&sort=downloads&search=nvidia>`_ and
57 `NVIDIA NGC <https://catalog.ngc.nvidia.com/models?query=nemo&orderBy=weightPopularDESC>`_.
58 These models can be used to transcribe audio, synthesize speech, or translate text in just a few lines of code.
59
60 We have extensive `tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_ that
61 can all be run on `Google Colab <https://colab.research.google.com>`_.
62
63 For advanced users that want to train NeMo models from scratch or finetune existing NeMo models
64 we have a full suite of `example scripts <https://github.com/NVIDIA/NeMo/tree/main/examples>`_ that support multi-GPU/multi-node training.
65
66 For scaling NeMo LLM training on Slurm clusters or public clouds, please see the `NVIDIA NeMo Megatron Launcher <https://github.com/NVIDIA/NeMo-Megatron-Launcher>`_.
67 The NM launcher has extensive recipes, scripts, utilities, and documentation for training NeMo LLMs and also has an `Autoconfigurator <https://github.com/NVIDIA/NeMo-Megatron-Launcher#53-using-autoconfigurator-to-find-the-optimal-configuration>`_
68 which can be used to find the optimal model parallel configuration for training on a specific cluster.
69
70 Also see our `introductory video <https://www.youtube.com/embed/wBgpMf_KQVw>`_ for a high level overview of NeMo.
71
72 Key Features
73 ------------
74
75 * Speech processing
76 * `HuggingFace Space for Audio Transcription (File, Microphone and YouTube) <https://huggingface.co/spaces/smajumdar/nemo_multilingual_language_id>`_
77 * `Automatic Speech Recognition (ASR) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/intro.html>`_
78 * Supported ASR models: `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html>`_
79 * Jasper, QuartzNet, CitriNet, ContextNet
80 * Conformer-CTC, Conformer-Transducer, FastConformer-CTC, FastConformer-Transducer
81 * Squeezeformer-CTC and Squeezeformer-Transducer
82 * LSTM-Transducer (RNNT) and LSTM-CTC
83 * Supports the following decoders/losses:
84 * CTC
85 * Transducer/RNNT
86 * Hybrid Transducer/CTC
87 * NeMo Original `Multi-blank Transducers <https://arxiv.org/abs/2211.03541>`_ and `Token-and-Duration Transducers (TDT) <https://arxiv.org/abs/2304.06795>`_
88 * Streaming/Buffered ASR (CTC/Transducer) - `Chunked Inference Examples <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_chunked_inference>`_
89 * Cache-aware Streaming Conformer with multiple lookaheads - `<https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/models.html#cache-aware-streaming-conformer>`_
90 * Beam Search decoding
91 * `Language Modelling for ASR (CTC and RNNT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/asr_language_modeling.html>`_: N-gram LM in fusion with Beam Search decoding, Neural Rescoring with Transformer
92 * `Support of long audios for Conformer with memory efficient local attention <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/results.html#inference-on-long-audio>`_
93 * `Speech Classification, Speech Command Recognition and Language Identification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_classification/intro.html>`_: MatchboxNet (Command Recognition), AmberNet (LangID)
94 * `Voice activity Detection (VAD) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/asr/speech_classification/models.html#marblenet-vad>`_: MarbleNet
95 * ASR with VAD Inference - `Example <https://github.com/NVIDIA/NeMo/tree/stable/examples/asr/asr_vad>`_
96 * `Speaker Recognition <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_recognition/intro.html>`_: TitaNet, ECAPA_TDNN, SpeakerNet
97 * `Speaker Diarization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speaker_diarization/intro.html>`_
98 * Clustering Diarizer: TitaNet, ECAPA_TDNN, SpeakerNet
99 * Neural Diarizer: MSDD (Multi-scale Diarization Decoder)
100 * `Speech Intent Detection and Slot Filling <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/speech_intent_slot/intro.html>`_: Conformer-Transformer
101 * `Pretrained models in different languages <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_: English, Spanish, German, Russian, Chinese, French, Italian, Polish, ...
102 * `NGC collection of pre-trained speech processing models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_asr>`_
103 * Natural Language Processing
104 * `NeMo Megatron pre-training of Large Language Models <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/nlp/nemo_megatron/intro.html>`_
105 * `Neural Machine Translation (NMT) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/machine_translation/machine_translation.html>`_
106 * `Punctuation and Capitalization <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/punctuation_and_capitalization.html>`_
107 * `Token classification (named entity recognition) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/token_classification.html>`_
108 * `Text classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_classification.html>`_
109 * `Joint Intent and Slot Classification <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/joint_intent_slot.html>`_
110 * `Question answering <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/question_answering.html>`_
111 * `GLUE benchmark <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/glue_benchmark.html>`_
112 * `Information retrieval <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/information_retrieval.html>`_
113 * `Entity Linking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/entity_linking.html>`_
114 * `Dialogue State Tracking <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/sgd_qa.html>`_
115 * `Prompt Learning <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/nemo_megatron/prompt_learning.html>`_
116 * `NGC collection of pre-trained NLP models. <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_nlp>`_
117 * `Synthetic Tabular Data Generation <https://developer.nvidia.com/blog/generating-synthetic-data-with-transformers-a-solution-for-enterprise-data-challenges/>`_
118 * Text-to-Speech Synthesis (TTS):
119 * `Documentation <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tts/intro.html#>`_
120 * Mel-Spectrogram generators: FastPitch, SSL FastPitch, Mixer-TTS/Mixer-TTS-X, RAD-TTS, Tacotron2
121 * Vocoders: HiFiGAN, UnivNet, WaveGlow
122 * End-to-End Models: VITS
123 * `Pre-trained Model Checkpoints in NVIDIA GPU Cloud (NGC) <https://ngc.nvidia.com/catalog/collections/nvidia:nemo_tts>`_
124 * `Tools <https://github.com/NVIDIA/NeMo/tree/stable/tools>`_
125 * `Text Processing (text normalization and inverse text normalization) <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/nlp/text_normalization/intro.html>`_
126 * `CTC-Segmentation tool <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/ctc_segmentation.html>`_
127 * `Speech Data Explorer <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/tools/speech_data_explorer.html>`_: a dash-based tool for interactive exploration of ASR/TTS datasets
128 * `Speech Data Processor <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/tools/speech_data_processor.html>`_
129
130
131 Built for speed, NeMo can utilize NVIDIA's Tensor Cores and scale out training to multiple GPUs and multiple nodes.
132
133 Requirements
134 ------------
135
136 1) Python 3.9 or above
137 2) Pytorch 1.13.1 or above
138 3) NVIDIA GPU for training
139
140 Documentation
141 -------------
142
143 .. |main| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=main
144 :alt: Documentation Status
145 :scale: 100%
146 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/
147
148 .. |stable| image:: https://readthedocs.com/projects/nvidia-nemo/badge/?version=stable
149 :alt: Documentation Status
150 :scale: 100%
151 :target: https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/
152
153 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
154 | Version | Status | Description |
155 +=========+=============+==========================================================================================================================================+
156 | Latest | |main| | `Documentation of the latest (i.e. main) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/>`_ |
157 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
158 | Stable | |stable| | `Documentation of the stable (i.e. most recent release) branch. <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/>`_ |
159 +---------+-------------+------------------------------------------------------------------------------------------------------------------------------------------+
160
161 Tutorials
162 ---------
163 A great way to start with NeMo is by checking `one of our tutorials <https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/stable/starthere/tutorials.html>`_.
164
165 Getting help with NeMo
166 ----------------------
167 FAQ can be found on NeMo's `Discussions board <https://github.com/NVIDIA/NeMo/discussions>`_. You are welcome to ask questions or start discussions there.
168
169
170 Installation
171 ------------
172
173 Conda
174 ~~~~~
175
176 We recommend installing NeMo in a fresh Conda environment.
177
178 .. code-block:: bash
179
180 conda create --name nemo python==3.10.12
181 conda activate nemo
182
183 Install PyTorch using their `configurator <https://pytorch.org/get-started/locally/>`_.
184
185 .. code-block:: bash
186
187 conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
188
189 The command used to install PyTorch may depend on your system. Please use the configurator linked above to find the right command for your system.
190
191 Pip
192 ~~~
193 Use this installation mode if you want the latest released version.
194
195 .. code-block:: bash
196
197 apt-get update && apt-get install -y libsndfile1 ffmpeg
198 pip install Cython
199 pip install nemo_toolkit['all']
200
201 Depending on the shell used, you may need to use ``"nemo_toolkit[all]"`` instead in the above command.
202
203 Pip from source
204 ~~~~~~~~~~~~~~~
205 Use this installation mode if you want the version from a particular GitHub branch (e.g. main).
206
207 .. code-block:: bash
208
209 apt-get update && apt-get install -y libsndfile1 ffmpeg
210 pip install Cython
211 python -m pip install git+https://github.com/NVIDIA/NeMo.git@{BRANCH}#egg=nemo_toolkit[all]
212
213
214 From source
215 ~~~~~~~~~~~
216 Use this installation mode if you are contributing to NeMo.
217
218 .. code-block:: bash
219
220 apt-get update && apt-get install -y libsndfile1 ffmpeg
221 git clone https://github.com/NVIDIA/NeMo
222 cd NeMo
223 ./reinstall.sh
224
225 If you only want the toolkit without additional conda-based dependencies, you may replace ``reinstall.sh``
226 with ``pip install -e .`` when your PWD is the root of the NeMo repository.
227
228 RNNT
229 ~~~~
230 Note that RNNT requires numba to be installed from conda.
231
232 .. code-block:: bash
233
234 conda remove numba
235 pip uninstall numba
236 conda install -c conda-forge numba
237
238 NeMo Megatron
239 ~~~~~~~~~~~~~
240 NeMo Megatron training requires NVIDIA Apex to be installed.
241 Install it manually if not using the NVIDIA PyTorch container.
242
243 To install Apex, run
244
245 .. code-block:: bash
246
247 git clone https://github.com/NVIDIA/apex.git
248 cd apex
249 git checkout 57057e2fcf1c084c0fcc818f55c0ff6ea1b24ae2
250 pip install -v --disable-pip-version-check --no-cache-dir --global-option="--cpp_ext" --global-option="--cuda_ext" --global-option="--fast_layer_norm" --global-option="--distributed_adam" --global-option="--deprecated_fused_adam" ./
251
252 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you run into issues installing Apex or any other dependencies.
253
254 While installing Apex, an error may be raised if the CUDA version on your system does not match the CUDA version PyTorch was compiled with.
255 This check can be bypassed by commenting out the corresponding line here: https://github.com/NVIDIA/apex/blob/master/setup.py#L32
256
257 cuda-nvprof is needed to install Apex. The version should match the CUDA version that you are using:
258
259 .. code-block:: bash
260
261 conda install -c nvidia cuda-nvprof=11.8
262
263 The ``packaging`` package is also required:
264
265 .. code-block:: bash
266
267 pip install packaging
268
269
270 Transformer Engine
271 ~~~~~~~~~~~~~~~~~~
272 NeMo Megatron GPT has been integrated with `NVIDIA Transformer Engine <https://github.com/NVIDIA/TransformerEngine>`_.
273 Transformer Engine enables FP8 training on NVIDIA Hopper GPUs.
274 `Install <https://docs.nvidia.com/deeplearning/transformer-engine/user-guide/installation.html>`_ it manually if not using the NVIDIA PyTorch container.
275
276 .. code-block:: bash
277
278 pip install --upgrade git+https://github.com/NVIDIA/TransformerEngine.git@stable
279
280 It is highly recommended to use the NVIDIA PyTorch or NeMo container if you run into issues installing Transformer Engine or any other dependencies.
281
282 Transformer Engine requires PyTorch to be built with CUDA 11.8.
283
284
285 Flash Attention
286 ~~~~~~~~~~~~~~~~~~~~
287 Transformer Engine already supports Flash Attention for GPT models. If you want to use Flash Attention for non-causal models or use with attention bias (introduced from position encoding, e.g. Alibi), please install `flash-attn <https://github.com/HazyResearch/flash-attention>`_.
288
289 .. code-block:: bash
290
291 pip install flash-attn
292 pip install triton==2.0.0.dev20221202
293
294 NLP inference UI
295 ~~~~~~~~~~~~~~~~~~~~
296 To launch the inference web UI server, please install `gradio <https://gradio.app/>`_.
297
298 .. code-block:: bash
299
300 pip install gradio==3.34.0
301
302 NeMo Text Processing
303 ~~~~~~~~~~~~~~~~~~~~
304 NeMo Text Processing, specifically (Inverse) Text Normalization, is now a separate repository: `NVIDIA/NeMo-text-processing <https://github.com/NVIDIA/NeMo-text-processing>`_.
305
306 Docker containers
307 ~~~~~~~~~~~~~~~~~~
308 We release NeMo containers alongside NeMo releases. For example, NeMo ``r1.19.0`` comes with container ``nemo:23.04``; you can find more details about released containers on the `releases page <https://github.com/NVIDIA/NeMo/releases>`_.
309
310 To use a pre-built container, run
311
312 .. code-block:: bash
313
314 docker pull nvcr.io/nvidia/nemo:23.04
315
316 To build a NeMo container from the Dockerfile on a branch, run
317
318 .. code-block:: bash
319
320 DOCKER_BUILDKIT=1 docker build -f Dockerfile -t nemo:latest .
321
322
323 If you choose to work with the main branch, we recommend using NVIDIA's PyTorch container version 23.06-py3 and then installing from GitHub.
324
325 .. code-block:: bash
326
327 docker run --gpus all -it --rm -v <nemo_github_folder>:/NeMo --shm-size=8g \
328 -p 8888:8888 -p 6006:6006 --ulimit memlock=-1 --ulimit \
329 stack=67108864 --device=/dev/snd nvcr.io/nvidia/pytorch:23.06-py3
330
331 Examples
332 --------
333
334 Many examples can be found under the `"Examples" <https://github.com/NVIDIA/NeMo/tree/stable/examples>`_ folder.
335
336
337 Contributing
338 ------------
339
340 We welcome community contributions! Please refer to `CONTRIBUTING.md <https://github.com/NVIDIA/NeMo/blob/stable/CONTRIBUTING.md>`_ for the process.
341
342 Publications
343 ------------
344
345 We provide an ever-growing list of publications that utilize the NeMo framework. Please refer to `PUBLICATIONS.md <https://github.com/NVIDIA/NeMo/tree/stable/PUBLICATIONS.md>`_. We welcome the addition of your own articles to this list!
346
347 License
348 -------
349 NeMo is under `Apache 2.0 license <https://github.com/NVIDIA/NeMo/blob/stable/LICENSE>`_.
350
[end of README.rst]
[start of nemo/utils/model_utils.py]
1 # Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
2 #
3 # Licensed under the Apache License, Version 2.0 (the "License");
4 # you may not use this file except in compliance with the License.
5 # You may obtain a copy of the License at
6 #
7 # http://www.apache.org/licenses/LICENSE-2.0
8 #
9 # Unless required by applicable law or agreed to in writing, software
10 # distributed under the License is distributed on an "AS IS" BASIS,
11 # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 # See the License for the specific language governing permissions and
13 # limitations under the License.
14
15 import copy
16 import importlib
17 import os
18 from dataclasses import dataclass, is_dataclass
19 from enum import Enum
20 from functools import lru_cache
21 from pathlib import Path
22 from typing import List, Optional, Tuple, Union
23
24 import wrapt
25
26 from nemo.utils import AppState, logging
27 from nemo.utils.data_utils import resolve_cache_dir # imported for compatibility: model_utils.resolve_cache_dir()
28 from nemo.utils.data_utils import is_datastore_path
29
30 # TODO @blisc: Perhaps refactor instead of import guarding
31
32 _HAS_HYDRA = True
33
34 try:
35 from omegaconf import DictConfig, ListConfig, OmegaConf
36 from omegaconf import errors as omegaconf_errors
37 from packaging import version
38 except ModuleNotFoundError:
39 _HAS_HYDRA = False
40
41
42 _VAL_TEST_FASTPATH_KEY = 'ds_item'
43
44
45 class ArtifactPathType(Enum):
46 """
47 ArtifactPathType refers to the type of the path that the artifact is located at.
48
49 LOCAL_PATH: A user local filepath that exists on the file system.
50 TAR_PATH: A (generally flattened) filepath that exists inside of an archive (that may have its own full path).
51 """
52
53 LOCAL_PATH = 0
54 TAR_PATH = 1
55
56
57 @dataclass(init=False)
58 class ArtifactItem:
59 path: str
60 path_type: ArtifactPathType
61 hashed_path: Optional[str] = None
62
63
64 def resolve_dataset_name_from_cfg(cfg: 'DictConfig') -> Optional[str]:
65 """
66 Parses items of the provided sub-config to find the first potential key that
67 resolves to an existing file or directory.
68
69 # Fast-path Resolution
70 In order to handle cases where we need to resolve items that are not paths, a fastpath
71 key can be provided as defined in the global `_VAL_TEST_FASTPATH_KEY`.
72
73 This key can be used in two ways :
74
75 ## _VAL_TEST_FASTPATH_KEY points to another key in the config
76
77 If this _VAL_TEST_FASTPATH_KEY points to another key in this config itself,
78 then we assume we want to loop through the values of that key.
79
80 This allows for any key in the config to become a fastpath key.
81
82 Example:
83 validation_ds:
84 splits: "val"
85 ...
86 <_VAL_TEST_FASTPATH_KEY>: "splits" <-- this points to the key name "splits"
87
88 Then we can write the following when overriding in hydra:
89 ```python
90 python train_file.py ... \
91 model.validation_ds.splits=[val1, val2, dev1, dev2] ...
92 ```
93
94 ## _VAL_TEST_FASTPATH_KEY itself acts as the resolved key
95
96 If this _VAL_TEST_FASTPATH_KEY does not point to another key in the config, then
97 it is assumed that the items of this key itself are used for resolution.
98
99 Example:
100 validation_ds:
101 ...
102 <_VAL_TEST_FASTPATH_KEY>: "val" <-- the items of this key are used directly for resolution
103
104 Then we can write the following when overriding in hydra:
105 ```python
106 python train_file.py ... \
107 model.validation_ds.<_VAL_TEST_FASTPATH_KEY>=[val1, val2, dev1, dev2] ...
108 ```
109
110 # IMPORTANT NOTE:
111 It <can> potentially mismatch if there exist more than 2 valid paths, and the
112 first path does *not* resolve to the path of the data file (but does resolve to
113 some other valid path).
114
115 To avoid this side-effect, place the data path as the first item on the config file.
116
117 Args:
118 cfg: DictConfig (Sub-config) that should be parsed.
119
120 Returns:
121 A str representing the `key` of the config which hosts the filepath(s),
122 or None in case path could not be resolved.
123 """
124 if _VAL_TEST_FASTPATH_KEY in cfg and cfg[_VAL_TEST_FASTPATH_KEY] is not None:
125 fastpath_key = cfg[_VAL_TEST_FASTPATH_KEY]
126
127 if isinstance(fastpath_key, str) and fastpath_key in cfg:
128 return cfg[fastpath_key]
129 else:
130 return _VAL_TEST_FASTPATH_KEY
131
132 for key, value in cfg.items():
133 if type(value) in [list, tuple, ListConfig]:
134 # Count the number of valid paths in the list
135 values_are_paths = 0
136 for val_i in value:
137 val_i = str(val_i)
138 if os.path.exists(val_i) or os.path.isdir(val_i) or is_datastore_path(val_i):
139 values_are_paths += 1
140 else:
141 # reset counter and break inner loop
142 break
143
144 if values_are_paths == len(value):
145 return key
146
147 else:
148 if os.path.exists(str(value)) or os.path.isdir(str(value)) or is_datastore_path(str(value)):
149 return key
150
151 return None
152
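The resolution order above (fast-path key first, then the first key whose values all point at existing paths) can be sketched with plain dictionaries. `resolve_dataset_key_sketch` below is a hypothetical, simplified stand-in that ignores `ListConfig` values and datastore paths:

```python
import os

_VAL_TEST_FASTPATH_KEY = 'ds_item'

def resolve_dataset_key_sketch(cfg: dict):
    # Fast-path: the special key either names another key (indirection)
    # or its own items are used directly for resolution.
    if cfg.get(_VAL_TEST_FASTPATH_KEY) is not None:
        fastpath_key = cfg[_VAL_TEST_FASTPATH_KEY]
        if isinstance(fastpath_key, str) and fastpath_key in cfg:
            return cfg[fastpath_key]
        return _VAL_TEST_FASTPATH_KEY
    # Fallback: first key whose value(s) all resolve to existing paths.
    for key, value in cfg.items():
        values = value if isinstance(value, (list, tuple)) else [value]
        if values and all(os.path.exists(str(v)) for v in values):
            return key
    return None
```

With `{'ds_item': 'splits', 'splits': 'val'}` the fast path follows the indirection; with no fast-path key, the first existing path wins, placing the data path first in the config avoids the mismatch noted above.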
153
154 def parse_dataset_as_name(name: str) -> str:
155 """
156 Constructs a valid prefix-name from a provided file path.
157
158 Args:
159 name: str path to some valid data/manifest file or a python object that
160 will be used as a name for the data loader (via str() cast).
161
162 Returns:
163 str prefix used to identify uniquely this data/manifest file.
164 """
165 if os.path.exists(str(name)) or os.path.isdir(str(name)) or is_datastore_path(str(name)):
166 name = Path(name).stem
167 else:
168 name = str(name)
169
170 # cleanup name
171 name = name.replace('-', '_')
172
173 if 'manifest' in name:
174 name = name.replace('manifest', '')
175
176 if 'dataset' in name:
177 name = name.replace('dataset', '')
178
179 # Test if the manifest/dataset name was simply `manifest.json` or `dataset.json`: Invalid names.
180 if name == '':
181 raise ValueError(
182 "Provided dataset / manifest filename was `manifest.json` or `dataset.json`.\n"
183 "Such a name is invalid, since multiple datasets/manifests can share the same name,\n"
184 "thereby overriding their results during logging. Please pick a more descriptive filename \n"
185 "for the provided dataset / manifest file."
186 )
187
188 if '_' != name[-1]:
189 name = name + '_'
190
191 return name
192
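The cleanup steps above amount to: take the file stem, normalize dashes, drop the generic words "manifest"/"dataset", and guarantee a trailing underscore. A hypothetical simplified sketch (skipping the filesystem existence check, so every input is treated as a path):

```python
from pathlib import Path

def parse_dataset_as_name_sketch(name) -> str:
    # Treat every input as a path-like string and keep only the file stem.
    name = Path(str(name)).stem
    # Normalize separators and drop the generic words.
    name = name.replace('-', '_')
    name = name.replace('manifest', '').replace('dataset', '')
    if name == '':
        raise ValueError("Filename reduces to an empty prefix; pick a more descriptive one.")
    # Guarantee a trailing underscore so metric names concatenate cleanly.
    if not name.endswith('_'):
        name += '_'
    return name
```

For instance, `data/dev-other-manifest.json` reduces to the prefix `dev_other_`.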
193
194 def unique_names_check(name_list: Optional[List[str]]):
195 """
196 Performs a uniqueness check on the name list resolved, so that it can warn users
197 about non-unique keys.
198
199 Args:
200 name_list: List of strings resolved for data loaders.
201 """
202 if name_list is None:
203 return
204
205 # Name uniqueness checks
206 names = set()
207 for name in name_list:
208 if name in names:
209 logging.warning(
210 "Name resolution has found more than one data loader having the same name!\n"
211 "In such cases, logs will not be properly generated. "
212 "Please rename the item to have unique names.\n"
213 f"Resolved name : {name}"
214 )
215 else:
216 names.add(name) # we need just hash key check, value is just a placeholder
217
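The same duplicate detection can be expressed with `collections.Counter`; `find_duplicate_names` below is a hypothetical helper that returns the offending names instead of logging a warning:

```python
from collections import Counter

def find_duplicate_names(name_list):
    # Count every resolved name and keep those that appear more than once.
    counts = Counter(name_list or [])
    return sorted(name for name, count in counts.items() if count > 1)
```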
218
219 def resolve_validation_dataloaders(model: 'ModelPT'):
220 """
221 Helper method that operates on the ModelPT class to automatically support
222 multiple dataloaders for the validation set.
223
224 It does so by first resolving the path to one/more data files via `resolve_dataset_name_from_cfg()`.
225 If this resolution fails, it assumes the data loader is prepared to manually support / not support
226 multiple data loaders and simply calls the appropriate setup method.
227
228 If resolution succeeds:
229 Checks if provided path is to a single file or a list of files.
230 If a single file is provided, simply tags that file as such and loads it via the setup method.
231 If multiple files are provided:
232 Inject a new manifest path at index "i" into the resolved key.
233 Calls the appropriate setup method to set the data loader.
234 Collects the initialized data loader in a list and preserves it.
235 Once all data loaders are processed, assigns the list of loaded loaders to the ModelPT.
236 Finally assigns a list of unique names resolved from the file paths to the ModelPT.
237
238 Args:
239 model: ModelPT subclass, which requires >=1 Validation Dataloaders to be setup.
240 """
241 if not _HAS_HYDRA:
242 logging.error("This function requires Hydra/Omegaconf and it was not installed.")
243 exit(1)
244 cfg = copy.deepcopy(model._cfg)
245 dataloaders = []
246
247 # process val_dl_idx
248 if 'val_dl_idx' in cfg.validation_ds:
249 cfg = OmegaConf.to_container(cfg)
250 val_dl_idx = cfg['validation_ds'].pop('val_dl_idx')
251 cfg = OmegaConf.create(cfg)
252 else:
253 val_dl_idx = 0
254
255 # Set val_dl_idx
256 model._val_dl_idx = val_dl_idx
257
258 ds_key = resolve_dataset_name_from_cfg(cfg.validation_ds)
259
260 if ds_key is None or val_dl_idx < 0:
261 logging.debug(
262 "Could not resolve file path from provided config - {}. "
263 "Disabling support for multi-dataloaders.".format(cfg.validation_ds)
264 )
265
266 model.setup_validation_data(cfg.validation_ds)
267 return
268
269 ds_values = cfg.validation_ds[ds_key]
270
271 if isinstance(ds_values, (list, tuple, ListConfig)):
272
273 for ds_value in ds_values:
274 if isinstance(ds_value, (dict, DictConfig)):
275 # this is a nested dataset
276 cfg.validation_ds = ds_value
277 else:
278 cfg.validation_ds[ds_key] = ds_value
279
280 model.setup_validation_data(cfg.validation_ds)
281 dataloaders.append(model._validation_dl)
282
283 model._validation_dl = dataloaders
284 if len(ds_values) > 0 and isinstance(ds_values[0], (dict, DictConfig)):
285 # using the name of each of the nested dataset
286 model._validation_names = [ds.name for ds in ds_values]
287 else:
288 model._validation_names = [parse_dataset_as_name(ds) for ds in ds_values]
289 unique_names_check(name_list=model._validation_names)
290 return
291
292 else:
293 model.setup_validation_data(cfg.validation_ds)
294 model._validation_names = [parse_dataset_as_name(ds_values)]
295 unique_names_check(name_list=model._validation_names)
296
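The per-value loop above (rewrite the config, call the setup method, collect the loader) can be sketched independently of `ModelPT`. `expand_dataloaders` is a hypothetical, dict-based stand-in where `setup_fn` plays the role of `setup_validation_data`:

```python
def expand_dataloaders(cfg: dict, ds_key: str, setup_fn):
    # Normalize a scalar value into a one-element list so both cases share a loop.
    values = cfg[ds_key]
    values = list(values) if isinstance(values, (list, tuple)) else [values]
    loaders, names = [], []
    for value in values:
        # Inject one value at a time into a copy of the config.
        sub_cfg = {**cfg, ds_key: value}
        loaders.append(setup_fn(sub_cfg))
        names.append(str(value))
    return loaders, names
```

A scalar value yields one loader; a list yields one loader per item, matching the multi-dataloader behavior described above.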
297
298 def resolve_test_dataloaders(model: 'ModelPT'):
299 """
300 Helper method that operates on the ModelPT class to automatically support
301 multiple dataloaders for the test set.
302
303 It does so by first resolving the path to one/more data files via `resolve_dataset_name_from_cfg()`.
304 If this resolution fails, it assumes the data loader is prepared to manually support / not support
305 multiple data loaders and simply calls the appropriate setup method.
306
307 If resolution succeeds:
308 Checks if provided path is to a single file or a list of files.
309 If a single file is provided, simply tags that file as such and loads it via the setup method.
310 If multiple files are provided:
311 Inject a new manifest path at index "i" into the resolved key.
312 Calls the appropriate setup method to set the data loader.
313 Collects the initialized data loader in a list and preserves it.
314 Once all data loaders are processed, assigns the list of loaded loaders to the ModelPT.
315 Finally assigns a list of unique names resolved from the file paths to the ModelPT.
316
317 Args:
318 model: ModelPT subclass, which requires >=1 Test Dataloaders to be setup.
319 """
320 if not _HAS_HYDRA:
321 logging.error("This function requires Hydra/Omegaconf and it was not installed.")
322 exit(1)
323 cfg = copy.deepcopy(model._cfg)
324 dataloaders = []
325
326 # process test_dl_idx
327 if 'test_dl_idx' in cfg.test_ds:
328 cfg = OmegaConf.to_container(cfg)
329 test_dl_idx = cfg['test_ds'].pop('test_dl_idx')
330 cfg = OmegaConf.create(cfg)
331 else:
332 test_dl_idx = 0
333
334 # Set test_dl_idx
335 model._test_dl_idx = test_dl_idx
336
337 ds_key = resolve_dataset_name_from_cfg(cfg.test_ds)
338
339 if ds_key is None:
340 logging.debug(
341 "Could not resolve file path from provided config - {}. "
342 "Disabling support for multi-dataloaders.".format(cfg.test_ds)
343 )
344
345 model.setup_test_data(cfg.test_ds)
346 return
347
348 ds_values = cfg.test_ds[ds_key]
349
350 if isinstance(ds_values, (list, tuple, ListConfig)):
351
352 for ds_value in ds_values:
353 if isinstance(ds_value, (dict, DictConfig)):
354 # this is a nested dataset
355 cfg.test_ds = ds_value
356 else:
357 cfg.test_ds[ds_key] = ds_value
358
359 model.setup_test_data(cfg.test_ds)
360 dataloaders.append(model._test_dl)
361
362 model._test_dl = dataloaders
363 if len(ds_values) > 0 and isinstance(ds_values[0], (dict, DictConfig)):
364 # using the name of each of the nested dataset
365 model._test_names = [ds.name for ds in ds_values]
366 else:
367 model._test_names = [parse_dataset_as_name(ds) for ds in ds_values]
368
369 unique_names_check(name_list=model._test_names)
370 return
371
372 else:
373 model.setup_test_data(cfg.test_ds)
374 model._test_names = [parse_dataset_as_name(ds_values)]
375
376 unique_names_check(name_list=model._test_names)
377
378
379 @wrapt.decorator
380 def wrap_training_step(wrapped, instance: 'pl.LightningModule', args, kwargs):
381 output_dict = wrapped(*args, **kwargs)
382
383 if isinstance(output_dict, dict) and 'log' in output_dict:
384 log_dict = output_dict.pop('log')
385 instance.log_dict(log_dict, on_step=True)
386
387 return output_dict
388
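The same behavior can be sketched without `wrapt`, using only `functools`. In this hypothetical reduction the popped `log` dict is appended to a caller-supplied list, standing in for `instance.log_dict` since no LightningModule is available here:

```python
import functools

def wrap_training_step_sketch(wrapped, logged):
    # Pop the legacy 'log' sub-dict from the step output and hand it to
    # `logged` (standing in for instance.log_dict in this sketch).
    @functools.wraps(wrapped)
    def wrapper(*args, **kwargs):
        output_dict = wrapped(*args, **kwargs)
        if isinstance(output_dict, dict) and 'log' in output_dict:
            logged.append(output_dict.pop('log'))
        return output_dict
    return wrapper
```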
389
390 def convert_model_config_to_dict_config(cfg: Union['DictConfig', 'NemoConfig']) -> 'DictConfig':
391 """
392 Converts its input into a standard DictConfig.
393 Possible input values are:
394 - DictConfig
395 - A dataclass which is a subclass of NemoConfig
396
397 Args:
398 cfg: A dict-like object.
399
400 Returns:
401 The equivalent DictConfig
402 """
403 if not _HAS_HYDRA:
404 logging.error("This function requires Hydra/Omegaconf and it was not installed.")
405 exit(1)
406 if not isinstance(cfg, (OmegaConf, DictConfig)) and is_dataclass(cfg):
407 cfg = OmegaConf.structured(cfg)
408
409 if not isinstance(cfg, DictConfig):
410 raise ValueError(f"cfg constructor argument must be of type DictConfig/dict but got {type(cfg)} instead.")
411
412 config = OmegaConf.to_container(cfg, resolve=True)
413 config = OmegaConf.create(config)
414 return config
415
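A rough stdlib analogue of the dataclass branch uses `dataclasses.asdict`; note that OmegaConf's `structured`/`to_container` round-trip additionally resolves interpolations, which this hypothetical sketch does not:

```python
from dataclasses import asdict, dataclass, is_dataclass

def to_plain_dict_sketch(cfg):
    # Accept either a dataclass instance or a dict, mirroring the accepted inputs.
    if is_dataclass(cfg):
        return asdict(cfg)
    if isinstance(cfg, dict):
        return dict(cfg)
    raise ValueError(f"cfg must be a dataclass or dict, got {type(cfg)}")

# Hypothetical config dataclass for illustration only.
@dataclass
class OptimConfig:
    lr: float = 1e-3
    name: str = 'adamw'
```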
416
417 def _convert_config(cfg: 'OmegaConf'):
418 """ Recursive function converting the configuration from the old hydra format to the new one. """
419 if not _HAS_HYDRA:
420 logging.error("This function requires Hydra/Omegaconf and it was not installed.")
421 exit(1)
422
423 # Get rid of cls -> _target_.
424 if 'cls' in cfg and '_target_' not in cfg:
425 cfg._target_ = cfg.pop('cls')
426
427 # Get rid of params.
428 if 'params' in cfg:
429 params = cfg.pop('params')
430 for param_key, param_val in params.items():
431 cfg[param_key] = param_val
432
433 # Recursion.
434 try:
435 for _, sub_cfg in cfg.items():
436 if isinstance(sub_cfg, DictConfig):
437 _convert_config(sub_cfg)
438 except omegaconf_errors.OmegaConfBaseException as e:
439 logging.warning(f"Skipped conversion for config/subconfig:\n{cfg}\n Reason: {e}.")
440
441
442 def maybe_update_config_version(cfg: 'DictConfig'):
443 """
444 Recursively convert Hydra 0.x configs to Hydra 1.x configs.
445
446 Changes include:
447 - `cls` -> `_target_`.
448 - `params` -> drop params and shift all arguments to parent.
449 - `target` -> `_target_` cannot be performed due to ModelPT injecting `target` inside class.
450
451 Args:
452 cfg: Any Hydra compatible DictConfig
453
454 Returns:
455 An updated DictConfig that conforms to Hydra 1.x format.
456 """
457 if not _HAS_HYDRA:
458 logging.error("This function requires Hydra/Omegaconf and it was not installed.")
459 exit(1)
460 if cfg is not None and not isinstance(cfg, DictConfig):
461 try:
462 temp_cfg = OmegaConf.create(cfg)
463 cfg = temp_cfg
464 except omegaconf_errors.OmegaConfBaseException:
465 # Cannot be cast to DictConfig, skip updating.
466 return cfg
467
468 # Make a copy of model config.
469 cfg = copy.deepcopy(cfg)
470 OmegaConf.set_struct(cfg, False)
471
472 # Convert config.
473 _convert_config(cfg)
474
475 # Update model config.
476 OmegaConf.set_struct(cfg, True)
477
478 return cfg
479
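On plain dictionaries, the two rewrites (`cls` -> `_target_`, flattening `params` into the parent) look like this; `convert_config_sketch` is a hypothetical recursive stand-in for `_convert_config` that skips the OmegaConf error handling:

```python
def convert_config_sketch(cfg: dict) -> dict:
    cfg = dict(cfg)  # work on a copy, like the deepcopy in the real helper
    # Hydra 0.x used 'cls'; Hydra 1.x expects '_target_'.
    if 'cls' in cfg and '_target_' not in cfg:
        cfg['_target_'] = cfg.pop('cls')
    # Hydra 0.x nested arguments under 'params'; shift them to the parent.
    if 'params' in cfg:
        cfg.update(cfg.pop('params'))
    # Recurse into nested sub-configs.
    for key, value in list(cfg.items()):
        if isinstance(value, dict):
            cfg[key] = convert_config_sketch(value)
    return cfg
```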
480
481 @lru_cache(maxsize=1024)
482 def import_class_by_path(path: str):
483 """
484 Recursive import of class by path string.
485 """
486 paths = path.split('.')
487 path = ".".join(paths[:-1])
488 class_name = paths[-1]
489 mod = __import__(path, fromlist=[class_name])
490 mod = getattr(mod, class_name)
491 return mod
492
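A self-contained equivalent of the helper above, using `rpartition` instead of split/join, behaves the same way against the standard library:

```python
def import_class_by_path_sketch(path: str):
    # Split "pkg.module.ClassName" into the module path and the class name.
    module_path, _, class_name = path.rpartition('.')
    # fromlist makes __import__ return the leaf module rather than the root package.
    mod = __import__(module_path, fromlist=[class_name])
    return getattr(mod, class_name)
```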
493
494 def resolve_subclass_pretrained_model_info(base_class) -> List['PretrainedModelInfo']:
495 """
496 Recursively traverses the inheritance graph of subclasses to extract all pretrained model info.
497 First constructs a set of unique pretrained model info by performing DFS over the inheritance graph.
498 All model info belonging to the same class is added together.
499
500 Args:
501 base_class: The root class, whose subclass graph will be traversed.
502
503 Returns:
504 A list of unique pretrained model infos belonging to all of the inherited subclasses of
505 this baseclass.
506 """
507 list_of_models = set()
508
509 def recursive_subclass_walk(cls):
510 for subclass in cls.__subclasses__():
511 # step into its immediate subclass
512 recursive_subclass_walk(subclass)
513
514 subclass_models = subclass.list_available_models()
515
516 if subclass_models is not None and len(subclass_models) > 0:
517 # Inject subclass info into pretrained model info
518 # if not already overridden by subclass
519 for model_info in subclass_models:
520 # If subclass manually injects class_, don't override.
521 if model_info.class_ is None:
522 model_info.class_ = subclass
523
524 for model_info in subclass_models:
525 list_of_models.add(model_info)
526
527 recursive_subclass_walk(base_class)
528
529 list_of_models = list(sorted(list_of_models))
530 return list_of_models
531
532
533 def check_lib_version(lib_name: str, checked_version: str, operator) -> Tuple[Optional[bool], str]:
534 """
535 Checks if a library is installed, and if it is, checks the operator(lib.__version__, checked_version) as a result.
536 This bool result along with a string analysis of result is returned.
537
538 If the library is not installed at all, then returns None instead, along with a string explaining
539 that the library is not installed.
540
541 Args:
542 lib_name: lower case str name of the library that must be imported.
543 checked_version: semver string that is compared against lib.__version__.
544 operator: binary callable function func(a, b) -> bool; that compares lib.__version__ against version in
545 some manner. Must return a boolean.
546
547 Returns:
548 A tuple of results:
549 - Bool or None. Bool if the library could be imported, and the result of
550 operator(lib.__version__, checked_version) or False if __version__ is not implemented in lib.
551 None is passed if the library is not installed at all.
552 - A string analysis of the check.
553 """
554 try:
555 if '.' in lib_name:
556 mod = import_class_by_path(lib_name)
557 else:
558 mod = importlib.import_module(lib_name)
559
560 if hasattr(mod, '__version__'):
561 lib_ver = version.Version(mod.__version__)
562 match_ver = version.Version(checked_version)
563
564 if operator(lib_ver, match_ver):
565 msg = f"Lib {lib_name} version is satisfied !"
566 return True, msg
567 else:
568 msg = (
569 f"Lib {lib_name} version ({lib_ver}) is not {operator.__name__} than required version {checked_version}.\n"
570 f"Please upgrade the lib using either pip or conda to the latest version."
571 )
572 return False, msg
573 else:
574 msg = (
575 f"Lib {lib_name} does not implement __version__ in its init file. "
576 f"Could not check version compatibility."
577 )
578 return False, msg
579 except (ImportError, ModuleNotFoundError):
580 pass
581
582 msg = f"Lib {lib_name} has not been installed. Please use pip or conda to install this package."
583 return None, msg
584
585
586 def uninject_model_parallel_rank(filepath):
587 filepath = str(filepath)
588 if 'mp_rank' in filepath or 'tp_rank' in filepath:
589 dirname = os.path.dirname(os.path.dirname(filepath))
590 basename = os.path.basename(filepath)
591 filepath = os.path.join(dirname, basename)
592 return filepath
593 else:
594 return filepath
595
596
597 def inject_model_parallel_rank(filepath):
598 """
599 Injects tensor/pipeline model parallel ranks into the filepath.
600 Does nothing if not using model parallelism.
601 """
602 # first make sure filepath does not have rank
603 filepath = uninject_model_parallel_rank(filepath)
604
605 app_state = AppState()
606 if app_state.model_parallel_size is not None and app_state.model_parallel_size > 1:
607 # filepath needs to be updated to include mp_rank
608 dirname = os.path.dirname(filepath)
609 basename = os.path.basename(filepath)
610 if app_state.pipeline_model_parallel_size is None or app_state.pipeline_model_parallel_size == 1:
611 filepath = f'{dirname}/mp_rank_{app_state.tensor_model_parallel_rank:02d}/{basename}'
612 else:
613 filepath = f'{dirname}/tp_rank_{app_state.tensor_model_parallel_rank:02d}_pp_rank_{app_state.pipeline_model_parallel_rank:03d}/{basename}'
614 return filepath
615 else:
616 return filepath
617
[end of nemo/utils/model_utils.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
NVIDIA/NeMo
|
fcfc0ebb23b428a9bee6d847d1e0b37ca0784ba5
|
Installation instructions should better indicate mandatory steps to make tests pass (or reinstall.sh needs an update)
**Is your feature request related to a problem? Please describe.**
I wanted to setup a dev conda environment for NeMo, so I followed steps at https://github.com/NVIDIA/NeMo/tree/main#from-source
Afterwards `pytest --cpu` was failing (before it could even run any test) with two errors:
* `module 'nvidia' has no attribute 'dali'`
* `No module named 'pynvml'`
**Describe the solution you'd like**
After manually installing both libraries with
* pip install --extra-index-url https://developer.download.nvidia.com/compute/redist --upgrade nvidia-dali-cuda120
* pip install pynvml
the tests were able to pass (`1420 passed, 304 skipped, 261 warnings`).
Ideally these libraries would be installed automatically by the `reinstall.sh` script.
|
These libraries cannot be installed automatically due to this dependence on on extra index for distribution.
But these tests and the dali support itself should be import guarded. Do you have a stack trace of which tests failed ?
Sure, here's the stack trace:
```shell
> pytest --cpu
A valid `test_data.tar.gz` test archive (10445891B) found in the `/home/odelalleau/src/NeMo/tests/.data` folder.
Setting numba compat : True
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.8.17, pytest-7.4.0, pluggy-1.2.0 -- /home/odelalleau/miniconda3/envs/py38-tmp/bin/python
cachedir: .pytest_cache
rootdir: /home/odelalleau/src/NeMo
configfile: pyproject.toml
testpaths: tests
plugins: hydra-core-1.2.0
collected 1694 items / 2 errors / 2 skipped
===================================================================================================== ERRORS ======================================================================================================
___________________________________________________________________________ ERROR collecting tests/collections/asr/test_asr_datasets.py ___________________________________________________________________________
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/runner.py:341: in from_call
result: Optional[TResult] = func()
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/runner.py:372: in <lambda>
call = CallInfo.from_call(lambda: list(collector.collect()), "collect")
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/python.py:531: in collect
self._inject_setup_module_fixture()
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/python.py:545: in _inject_setup_module_fixture
self.obj, ("setUpModule", "setup_module")
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/python.py:310: in obj
self._obj = obj = self._getobj()
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/python.py:528: in _getobj
return self._importtestmodule()
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/python.py:617: in _importtestmodule
mod = import_path(self.path, mode=importmode, root=self.config.rootpath)
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/pathlib.py:565: in import_path
importlib.import_module(module_name)
../../miniconda3/envs/py38-tmp/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
<frozen importlib._bootstrap>:1014: in _gcd_import
???
<frozen importlib._bootstrap>:991: in _find_and_load
???
<frozen importlib._bootstrap>:975: in _find_and_load_unlocked
???
<frozen importlib._bootstrap>:671: in _load_unlocked
???
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:178: in exec_module
exec(co, module.__dict__)
tests/collections/asr/test_asr_datasets.py:58: in <module>
HAVE_DALI = is_dali_supported(__DALI_MINIMUM_VERSION__)
nemo/collections/asr/data/audio_to_text_dali.py:68: in is_dali_supported
module_available, _ = model_utils.check_lib_version(
nemo/utils/model_utils.py:556: in check_lib_version
mod = import_class_by_path(lib_name)
nemo/utils/model_utils.py:490: in import_class_by_path
mod = getattr(mod, class_name)
E AttributeError: module 'nvidia' has no attribute 'dali'
_________________________________________________________________________ ERROR collecting tests/collections/nlp/test_flash_attention.py __________________________________________________________________________
ImportError while importing test module '/home/odelalleau/src/NeMo/tests/collections/nlp/test_flash_attention.py'.
Hint: make sure your test modules/packages have valid Python names.
Traceback:
../../miniconda3/envs/py38-tmp/lib/python3.8/importlib/__init__.py:127: in import_module
return _bootstrap._gcd_import(name[level:], package, level)
tests/collections/nlp/test_flash_attention.py:47: in <module>
import pynvml
E ModuleNotFoundError: No module named 'pynvml'
================================================================================================ warnings summary =================================================================================================
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torchmetrics/utilities/imports.py:24
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torchmetrics/utilities/imports.py:24
/home/odelalleau/miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torchmetrics/utilities/imports.py:24: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
_PYTHON_LOWER_3_8 = LooseVersion(_PYTHON_VERSION) < LooseVersion("3.8")
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4
/home/odelalleau/miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:4: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if not hasattr(tensorboard, "__version__") or LooseVersion(
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6
/home/odelalleau/miniconda3/envs/py38-tmp/lib/python3.8/site-packages/torch/utils/tensorboard/__init__.py:6: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
) < LooseVersion("1.15"):
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/jupyter_client/connect.py:20
/home/odelalleau/miniconda3/envs/py38-tmp/lib/python3.8/site-packages/jupyter_client/connect.py:20: DeprecationWarning: Jupyter is migrating its paths to use standard platformdirs
given by the platformdirs library. To remove this warning and
see the appropriate new directories, set the environment variable
`JUPYTER_PLATFORM_DIRS=1` and then run `jupyter --paths`.
The use of platformdirs will be the default in `jupyter_core` v6
from jupyter_core.paths import jupyter_data_dir, jupyter_runtime_dir, secure_write
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/faiss/loader.py:28
/home/odelalleau/miniconda3/envs/py38-tmp/lib/python3.8/site-packages/faiss/loader.py:28: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
if LooseVersion(numpy.__version__) >= "1.19":
../../miniconda3/envs/py38-tmp/lib/python3.8/site-packages/setuptools/_distutils/version.py:346
/home/odelalleau/miniconda3/envs/py38-tmp/lib/python3.8/site-packages/setuptools/_distutils/version.py:346: DeprecationWarning: distutils Version classes are deprecated. Use packaging.version instead.
other = LooseVersion(other)
-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
============================================================================================= short test summary info =============================================================================================
ERROR tests/collections/asr/test_asr_datasets.py - AttributeError: module 'nvidia' has no attribute 'dali'
ERROR tests/collections/nlp/test_flash_attention.py
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!! Interrupted: 2 errors during collection !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
==================================================================================== 2 skipped, 7 warnings, 2 errors in 1.88s =====================================================================================
```
Odd - its imported guarded right here - https://github.com/NVIDIA/NeMo/blob/main/tests/collections/asr/test_asr_datasets.py#L57-L60
Ah ok its Attribute error for some reason. Interesting. Let me add a patch
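The guard pattern being discussed here — treating `AttributeError` as equivalent to a failed import, because a namespace package like `nvidia` can be importable while the `dali` submodule is absent — can be sketched as follows. This is an illustrative helper, not the actual NeMo code; the function name and the example module paths are made up:

```python
import importlib


def optional_import(path):
    """Try to resolve a dotted path; return (obj, True) on success,
    (None, False) on any failure that means "not installed".

    Namespace packages (e.g. `nvidia` present without `nvidia.dali`)
    surface as AttributeError from getattr rather than ImportError,
    so that exception is guarded as well.
    """
    try:
        parts = path.split(".")
        mod = importlib.import_module(parts[0])
        for name in parts[1:]:
            mod = getattr(mod, name)  # may raise AttributeError
        return mod, True
    except (ImportError, ModuleNotFoundError, AttributeError):
        return None, False


json_mod, have_json = optional_import("json")               # stdlib: available
missing, have_missing = optional_import("no_such_pkg.sub")  # not installed
```

Tests that rely on an optional dependency can then check the boolean flag and skip instead of failing at collection time.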
|
2023-07-28T19:34:30Z
|
<patch>
diff --git a/nemo/utils/model_utils.py b/nemo/utils/model_utils.py
--- a/nemo/utils/model_utils.py
+++ b/nemo/utils/model_utils.py
@@ -576,7 +576,7 @@ def check_lib_version(lib_name: str, checked_version: str, operator) -> Tuple[Op
f"Could not check version compatibility."
)
return False, msg
- except (ImportError, ModuleNotFoundError):
+ except (ImportError, ModuleNotFoundError, AttributeError):
pass
msg = f"Lib {lib_name} has not been installed. Please use pip or conda to install this package."
</patch>
|
diff --git a/tests/collections/nlp/test_flash_attention.py b/tests/collections/nlp/test_flash_attention.py
--- a/tests/collections/nlp/test_flash_attention.py
+++ b/tests/collections/nlp/test_flash_attention.py
@@ -44,16 +44,23 @@
except (ImportError, ModuleNotFoundError):
HAVE_TRITON = False
-import pynvml
+try:
+ import pynvml
+
+ HAVE_PYNVML = True
+except (ImportError, ModuleNotFoundError):
+ HAVE_PYNVML = False
def HAVE_AMPERE_GPU():
- pynvml.nvmlInit()
- handle = pynvml.nvmlDeviceGetHandleByIndex(0)
- device_arch = pynvml.nvmlDeviceGetArchitecture(handle)
- pynvml.nvmlShutdown()
- return device_arch == pynvml.NVML_DEVICE_ARCH_AMPERE
-
+ if HAVE_PYNVML:
+ pynvml.nvmlInit()
+ handle = pynvml.nvmlDeviceGetHandleByIndex(0)
+ device_arch = pynvml.nvmlDeviceGetArchitecture(handle)
+ pynvml.nvmlShutdown()
+ return device_arch == pynvml.NVML_DEVICE_ARCH_AMPERE
+ else:
+ return False
@pytest.mark.run_only_on('GPU')
@pytest.mark.skipif(not HAVE_APEX, reason="apex is not installed")
|
1.0
| |||
slackapi__python-slack-events-api-34
|
You will be provided with a partial code base and an issue statement explaining a problem to resolve.
<issue>
Add support for request signing
### Description
Request signing went live and we should add support into our SDKs. https://api.slack.com/docs/verifying-requests-from-slack
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [ ] bug
- [x] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [x] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [x] I've searched for any related issues and avoided creating a duplicate issue.
</issue>
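For reference, the verification scheme this issue asks for (per the linked Slack docs) concatenates a version prefix, the request timestamp, and the raw request body, HMAC-SHA256s that basestring with the app's signing secret, and compares the result against the `X-Slack-Signature` header. A minimal sketch — the secret and body below are throwaway test values, not real credentials:

```python
import hashlib
import hmac


def compute_slack_signature(signing_secret: str, timestamp: str, body: bytes) -> str:
    """Build the 'v0=<hex digest>' signature Slack sends in X-Slack-Signature."""
    basestring = b"v0:" + timestamp.encode() + b":" + body
    digest = hmac.new(signing_secret.encode(), basestring, hashlib.sha256).hexdigest()
    return "v0=" + digest


def verify_slack_signature(signing_secret: str, timestamp: str,
                           body: bytes, signature: str) -> bool:
    """Constant-time comparison of the expected and received signatures."""
    expected = compute_slack_signature(signing_secret, timestamp, body)
    return hmac.compare_digest(expected, signature)


sig = compute_slack_signature(
    "8f742231b10e8888abcd99yyyzzz85a5", "1531420618", b"token=xyz"
)
```

A production verifier should also reject requests whose timestamp is too old (e.g. more than five minutes), to blunt replay attacks.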
<code>
[start of README.rst]
1 Slack Events API adapter for Python
2 ===================================
3
4 .. image:: https://travis-ci.org/slackapi/python-slack-events-api.svg?branch=master
5 :target: https://travis-ci.org/slackapi/python-slack-events-api
6 .. image:: https://codecov.io/gh/slackapi/python-slack-events-api/branch/master/graph/badge.svg
7 :target: https://codecov.io/gh/slackapi/python-slack-events-api
8
9
10 The Slack Events Adapter is a Python-based solution to receive and parse events
11 from Slack’s Events API. This library uses an event emitter framework to allow
12 you to easily process Slack events by simply attaching functions
13 to event listeners.
14
15 This adapter enhances and simplifies Slack's Events API by incorporating useful best practices, patterns, and opportunities to abstract out common tasks.
16
17 💡 We wrote a `blog post which explains how`_ the Events API can help you, why we built these tools, and how you can use them to build production-ready Slack apps.
18
19 .. _blog post which explains how: https://medium.com/@SlackAPI/enhancing-slacks-events-api-7535827829ab
20
21
22 🤖 Installation
23 ----------------
24
25 .. code:: shell
26
27 pip install slackeventsapi
28
29 🤖 App Setup
30 --------------------
31
32 Before you can use the `Events API`_ you must
33 `create a Slack App`_, and turn on
34 `Event Subscriptions`_.
35
36 💡 When you add the Request URL to your app's Event Subscription settings,
37 Slack will send a request containing a `challenge` code to verify that your
38 server is alive. This package handles that URL Verification event for you, so
39 all you need to do is start the example app, start ngrok and configure your
40 URL accordingly.
41
42 ✅ Once you have your `Request URL` verified, your app is ready to start
43 receiving Team Events.
44
45 🔑 Your server will begin receiving Events from Slack's Events API as soon as a
46 user has authorized your app.
47
48 🤖 Development workflow:
49 ===========================
50
51 (1) Create a Slack app on https://api.slack.com/apps/
52 (2) Add a `bot user` for your app
53 (3) Start the example app on your **Request URL** endpoint
54 (4) Start ngrok and copy the **HTTPS** URL
55 (5) Add your **Request URL** and subscribe your app to events
56 (6) Go to your ngrok URL (e.g. https://myapp12.ngrok.com/) and auth your app
57
58 **🎉 Once your app has been authorized, you will begin receiving Slack Events**
59
60 ⚠️ Ngrok is a great tool for developing Slack apps, but we don't recommend using ngrok
61 for production apps.
62
63 🤖 Usage
64 ----------
65 **⚠️ Keep your app's credentials safe!**
66
67 - For development, keep them in virtualenv variables.
68
69 - For production, use a secure data store.
70
71 - Never post your app's credentials to github.
72
73 .. code:: python
74
75 SLACK_VERIFICATION_TOKEN = os.environ["SLACK_VERIFICATION_TOKEN"]
76
77 Create a Slack Event Adapter for receiving actions via the Events API
78 -----------------------------------------------------------------------
79 **Using the built-in Flask server:**
80
81 .. code:: python
82
83 from slackeventsapi import SlackEventAdapter
84
85
86 slack_events_adapter = SlackEventAdapter(SLACK_VERIFICATION_TOKEN, endpoint="/slack/events")
87
88
89 # Create an event listener for "reaction_added" events and print the emoji name
90 @slack_events_adapter.on("reaction_added")
91 def reaction_added(event):
92 emoji = event.get("reaction")
93 print(emoji)
94
95
96 # Start the server on port 3000
97 slack_events_adapter.start(port=3000)
98
99
100 **Using your existing Flask instance:**
101
102
103 .. code:: python
104
105 from flask import Flask
106 from slackeventsapi import SlackEventAdapter
107
108
109 # This `app` represents your existing Flask app
110 app = Flask(__name__)
111
112
113 # An example of one of your Flask app's routes
114 @app.route("/")
115 def hello():
116 return "Hello there!"
117
118
119 # Bind the Events API route to your existing Flask app by passing the server
120 # instance as the last param, or with `server=app`.
121 slack_events_adapter = SlackEventAdapter(SLACK_VERIFICATION_TOKEN, "/slack/events", app)
122
123
124 # Create an event listener for "reaction_added" events and print the emoji name
125 @slack_events_adapter.on("reaction_added")
126 def reaction_added(event):
127 emoji = event.get("reaction")
128 print(emoji)
129
130
131 # Start the server on port 3000
132 if __name__ == "__main__":
133 app.run(port=3000)
134
135 For a comprehensive list of available Slack `Events` and more information on
136 `Scopes`, see https://api.slack.com/events-api
137
138 🤖 Example event listeners
139 -----------------------------
140
141 See `example.py`_ for usage examples. This example also utilizes the
142 SlackClient Web API client.
143
144 .. _example.py: /example/
145
146 🤔 Support
147 -----------
148
149 Need help? Join `Bot Developer Hangout`_ and talk to us in `#slack-api`_.
150
151 You can also `create an Issue`_ right here on GitHub.
152
153 .. _Events API: https://api.slack.com/events-api
154 .. _create a Slack App: https://api.slack.com/apps/new
155 .. _Event Subscriptions: https://api.slack.com/events-api#subscriptions
156 .. _Bot Developer Hangout: http://dev4slack.xoxco.com/
157 .. _#slack-api: https://dev4slack.slack.com/messages/slack-api/
158 .. _create an Issue: https://github.com/slackapi/python-slack-events-api/issues/new
159
[end of README.rst]
[start of example/example.py]
1 from slackeventsapi import SlackEventAdapter
2 from slackclient import SlackClient
3 import os
4
5 # Our app's Slack Event Adapter for receiving actions via the Events API
6 SLACK_VERIFICATION_TOKEN = os.environ["SLACK_VERIFICATION_TOKEN"]
7 slack_events_adapter = SlackEventAdapter(SLACK_VERIFICATION_TOKEN, "/slack/events")
8
9 # Create a SlackClient for your bot to use for Web API requests
10 SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
11 CLIENT = SlackClient(SLACK_BOT_TOKEN)
12
13 # Example responder to greetings
14 @slack_events_adapter.on("message")
15 def handle_message(event_data):
16 message = event_data["event"]
17 # If the incoming message contains "hi", then respond with a "Hello" message
18 if message.get("subtype") is None and "hi" in message.get('text'):
19 channel = message["channel"]
20 message = "Hello <@%s>! :tada:" % message["user"]
21 CLIENT.api_call("chat.postMessage", channel=channel, text=message)
22
23
24 # Example reaction emoji echo
25 @slack_events_adapter.on("reaction_added")
26 def reaction_added(event_data):
27 event = event_data["event"]
28 emoji = event["reaction"]
29 channel = event["item"]["channel"]
30 text = ":%s:" % emoji
31 CLIENT.api_call("chat.postMessage", channel=channel, text=text)
32
33 # Once we have our event listeners configured, we can start the
34 # Flask server with the default `/events` endpoint on port 3000
35 slack_events_adapter.start(port=3000)
36
[end of example/example.py]
[start of slackeventsapi/__init__.py]
1 from pyee import EventEmitter
2 from .server import SlackServer
3
4
5 class SlackEventAdapter(EventEmitter):
6 # Initialize the Slack event server
7 # If no endpoint is provided, default to listening on '/slack/events'
8 def __init__(self, verification_token, endpoint="/slack/events", server=None):
9 EventEmitter.__init__(self)
10 self.verification_token = verification_token
11 self.server = SlackServer(verification_token, endpoint, self, server)
12
13 def start(self, host='127.0.0.1', port=None, debug=False, **kwargs):
14 """
15 Start the built in webserver, bound to the host and port you'd like.
16 Default host is `127.0.0.1` and port 8080.
17
18 :param host: The host you want to bind the build in webserver to
19 :param port: The port number you want the webserver to run on
20 :param debug: Set to `True` to enable debug level logging
21 :param kwargs: Additional arguments you'd like to pass to Flask
22 """
23 self.server.run(host=host, port=port, debug=debug, **kwargs)
24
[end of slackeventsapi/__init__.py]
[start of slackeventsapi/server.py]
1 from flask import Flask, request, make_response
2 import json
3 import platform
4 import sys
5 from .version import __version__
6
7
8 class SlackServer(Flask):
9 def __init__(self, verification_token, endpoint, emitter, server):
10 self.verification_token = verification_token
11 self.emitter = emitter
12 self.endpoint = endpoint
13 self.package_info = self.get_package_info()
14
15 # If a server is passed in, bind the event handler routes to it,
16 # otherwise create a new Flask instance.
17 if server:
18 if isinstance(server, Flask):
19 self.bind_route(server)
20 else:
21 raise TypeError("Server must be an instance of Flask")
22 else:
23 Flask.__init__(self, __name__)
24 self.bind_route(self)
25
26 def get_package_info(self):
27 client_name = __name__.split('.')[0]
28 client_version = __version__ # Version is returned from version.py
29
30 # Collect the package info, Python version and OS version.
31 package_info = {
32 "client": "{0}/{1}".format(client_name, client_version),
33 "python": "Python/{v.major}.{v.minor}.{v.micro}".format(v=sys.version_info),
34 "system": "{0}/{1}".format(platform.system(), platform.release())
35 }
36
37 # Concatenate and format the user-agent string to be passed into request headers
38 ua_string = []
39 for key, val in package_info.items():
40 ua_string.append(val)
41
42 return " ".join(ua_string)
43
44 def bind_route(self, server):
45 @server.route(self.endpoint, methods=['GET', 'POST'])
46 def event():
47 # If a GET request is made, return 404.
48 if request.method == 'GET':
49 return make_response("These are not the slackbots you're looking for.", 404)
50
51 # Parse the request payload into JSON
52 event_data = json.loads(request.data.decode('utf-8'))
53
54 # Echo the URL verification challenge code
55 if "challenge" in event_data:
56 return make_response(
57 event_data.get("challenge"), 200, {"content_type": "application/json"}
58 )
59
60 # Verify the request token
61 request_token = event_data.get("token")
62 if self.verification_token != request_token:
63 self.emitter.emit('error', Exception('invalid verification token'))
64 return make_response("Request contains invalid Slack verification token", 403)
65
66 # Parse the Event payload and emit the event to the event listener
67 if "event" in event_data:
68 event_type = event_data["event"]["type"]
69 self.emitter.emit(event_type, event_data)
70 response = make_response("", 200)
71 response.headers['X-Slack-Powered-By'] = self.package_info
72 return response
73
[end of slackeventsapi/server.py]
</code>
Here is an example of a patch file. It consists of changes to the code base. It specifies the file names, the line numbers of each change, and the removed and added lines. A single patch file can contain changes to multiple files.
<patch>
--- a/file.py
+++ b/file.py
@@ -1,27 +1,35 @@
def euclidean(a, b):
- while b:
- a, b = b, a % b
- return a
+ if b == 0:
+ return a
+ return euclidean(b, a % b)
def bresenham(x0, y0, x1, y1):
points = []
dx = abs(x1 - x0)
dy = abs(y1 - y0)
- sx = 1 if x0 < x1 else -1
- sy = 1 if y0 < y1 else -1
- err = dx - dy
+ x, y = x0, y0
+ sx = -1 if x0 > x1 else 1
+ sy = -1 if y0 > y1 else 1
- while True:
- points.append((x0, y0))
- if x0 == x1 and y0 == y1:
- break
- e2 = 2 * err
- if e2 > -dy:
+ if dx > dy:
+ err = dx / 2.0
+ while x != x1:
+ points.append((x, y))
err -= dy
- x0 += sx
- if e2 < dx:
- err += dx
- y0 += sy
+ if err < 0:
+ y += sy
+ err += dx
+ x += sx
+ else:
+ err = dy / 2.0
+ while y != y1:
+ points.append((x, y))
+ err -= dx
+ if err < 0:
+ x += sx
+ err += dy
+ y += sy
+ points.append((x, y))
return points
</patch>
I need you to solve the provided issue by generating a single patch file that I can apply directly to this repository using git apply. Please respond with a single patch file in the format shown above.
Respond below:
|
slackapi/python-slack-events-api
|
79949e66cc442b241eaed08ffe79d8dc7a166638
|
Add support for request signing
### Description
Request signing went live and we should add support into our SDKs. https://api.slack.com/docs/verifying-requests-from-slack
### What type of issue is this? (place an `x` in one of the `[ ]`)
- [ ] bug
- [x] enhancement (feature request)
- [ ] question
- [ ] documentation related
- [ ] testing related
- [ ] discussion
### Requirements
* [x] I've read and understood the [Contributing guidelines](https://github.com/slackapi/python-slack-events-api/blob/master/.github/contributing.md) and have done my best effort to follow them.
* [x] I've read and agree to the [Code of Conduct](https://slackhq.github.io/code-of-conduct).
* [x] I've searched for any related issues and avoided creating a duplicate issue.
|
someone has a branch somewhere with this implemented, i hear 👂
we'll get some more details into here asap.
|
2018-08-08T18:22:04Z
|
<patch>
diff --git a/example/example.py b/example/example.py
--- a/example/example.py
+++ b/example/example.py
@@ -3,12 +3,12 @@
import os
# Our app's Slack Event Adapter for receiving actions via the Events API
-SLACK_VERIFICATION_TOKEN = os.environ["SLACK_VERIFICATION_TOKEN"]
-slack_events_adapter = SlackEventAdapter(SLACK_VERIFICATION_TOKEN, "/slack/events")
+slack_signing_secret = os.environ["SLACK_SIGNING_SECRET"]
+slack_events_adapter = SlackEventAdapter(slack_signing_secret, "/slack/events")
# Create a SlackClient for your bot to use for Web API requests
-SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
-CLIENT = SlackClient(SLACK_BOT_TOKEN)
+slack_bot_token = os.environ["SLACK_BOT_TOKEN"]
+slack_client = SlackClient(slack_bot_token)
# Example responder to greetings
@slack_events_adapter.on("message")
@@ -18,7 +18,7 @@ def handle_message(event_data):
if message.get("subtype") is None and "hi" in message.get('text'):
channel = message["channel"]
message = "Hello <@%s>! :tada:" % message["user"]
- CLIENT.api_call("chat.postMessage", channel=channel, text=message)
+ slack_client.api_call("chat.postMessage", channel=channel, text=message)
# Example reaction emoji echo
@@ -28,7 +28,12 @@ def reaction_added(event_data):
emoji = event["reaction"]
channel = event["item"]["channel"]
text = ":%s:" % emoji
- CLIENT.api_call("chat.postMessage", channel=channel, text=text)
+ slack_client.api_call("chat.postMessage", channel=channel, text=text)
+
+# Error events
+@slack_events_adapter.on("error")
+def error_handler(err):
+ print("ERROR: " + str(err))
# Once we have our event listeners configured, we can start the
# Flask server with the default `/events` endpoint on port 3000
diff --git a/slackeventsapi/__init__.py b/slackeventsapi/__init__.py
--- a/slackeventsapi/__init__.py
+++ b/slackeventsapi/__init__.py
@@ -5,10 +5,10 @@
class SlackEventAdapter(EventEmitter):
# Initialize the Slack event server
# If no endpoint is provided, default to listening on '/slack/events'
- def __init__(self, verification_token, endpoint="/slack/events", server=None):
+ def __init__(self, signing_secret, endpoint="/slack/events", server=None, **kwargs):
EventEmitter.__init__(self)
- self.verification_token = verification_token
- self.server = SlackServer(verification_token, endpoint, self, server)
+ self.signing_secret = signing_secret
+ self.server = SlackServer(signing_secret, endpoint, self, server, **kwargs)
def start(self, host='127.0.0.1', port=None, debug=False, **kwargs):
"""
diff --git a/slackeventsapi/server.py b/slackeventsapi/server.py
--- a/slackeventsapi/server.py
+++ b/slackeventsapi/server.py
@@ -2,12 +2,15 @@
import json
import platform
import sys
+import hmac
+import hashlib
+from time import time
from .version import __version__
class SlackServer(Flask):
- def __init__(self, verification_token, endpoint, emitter, server):
- self.verification_token = verification_token
+ def __init__(self, signing_secret, endpoint, emitter, server):
+ self.signing_secret = signing_secret
self.emitter = emitter
self.endpoint = endpoint
self.package_info = self.get_package_info()
@@ -41,6 +44,44 @@ def get_package_info(self):
return " ".join(ua_string)
+ def verify_signature(self, timestamp, signature):
+ # Verify the request signature of the request sent from Slack
+ # Generate a new hash using the app's signing secret and request data
+
+ # Compare the generated hash and incoming request signature
+ # Python 2.7.6 doesn't support compare_digest
+ # It's recommended to use Python 2.7.7+
+ # noqa See https://docs.python.org/2/whatsnew/2.7.html#pep-466-network-security-enhancements-for-python-2-7
+ if hasattr(hmac, "compare_digest"):
+ req = str.encode('v0:' + str(timestamp) + ':') + request.data
+ request_hash = 'v0=' + hmac.new(
+ str.encode(self.signing_secret),
+ req, hashlib.sha256
+ ).hexdigest()
+ # Compare byte strings for Python 2
+ if (sys.version_info[0] == 2):
+ return hmac.compare_digest(bytes(request_hash), bytes(signature))
+ else:
+ return hmac.compare_digest(request_hash, signature)
+ else:
+ # So, we'll compare the signatures explicitly
+ req = str.encode('v0:' + str(timestamp) + ':') + request.data
+ request_hash = 'v0=' + hmac.new(
+ str.encode(self.signing_secret),
+ req, hashlib.sha256
+ ).hexdigest()
+
+ if len(request_hash) != len(signature):
+ return False
+ result = 0
+ if isinstance(request_hash, bytes) and isinstance(signature, bytes):
+ for x, y in zip(request_hash, signature):
+ result |= x ^ y
+ else:
+ for x, y in zip(request_hash, signature):
+ result |= ord(x) ^ ord(y)
+ return result == 0
+
def bind_route(self, server):
@server.route(self.endpoint, methods=['GET', 'POST'])
def event():
@@ -48,21 +89,31 @@ def event():
if request.method == 'GET':
return make_response("These are not the slackbots you're looking for.", 404)
+ # Each request comes with request timestamp and request signature
+ # emit an error if the timestamp is out of range
+ req_timestamp = request.headers.get('X-Slack-Request-Timestamp')
+ if abs(time() - int(req_timestamp)) > 60 * 5:
+ slack_exception = SlackEventAdapterException('Invalid request timestamp')
+ self.emitter.emit('error', slack_exception)
+ return make_response("", 403)
+
+ # Verify the request signature using the app's signing secret
+ # emit an error if the signature can't be verified
+ req_signature = request.headers.get('X-Slack-Signature')
+ if not self.verify_signature(req_timestamp, req_signature):
+ slack_exception = SlackEventAdapterException('Invalid request signature')
+ self.emitter.emit('error', slack_exception)
+ return make_response("", 403)
+
# Parse the request payload into JSON
event_data = json.loads(request.data.decode('utf-8'))
- # Echo the URL verification challenge code
+ # Echo the URL verification challenge code back to Slack
if "challenge" in event_data:
return make_response(
event_data.get("challenge"), 200, {"content_type": "application/json"}
)
- # Verify the request token
- request_token = event_data.get("token")
- if self.verification_token != request_token:
- self.emitter.emit('error', Exception('invalid verification token'))
- return make_response("Request contains invalid Slack verification token", 403)
-
# Parse the Event payload and emit the event to the event listener
if "event" in event_data:
event_type = event_data["event"]["type"]
@@ -70,3 +121,14 @@ def event():
response = make_response("", 200)
response.headers['X-Slack-Powered-By'] = self.package_info
return response
+
+
+class SlackEventAdapterException(Exception):
+ """
+ Base exception for all errors raised by the SlackClient library
+ """
+ def __init__(self, msg=None):
+ if msg is None:
+ # default error message
+ msg = "An error occurred in the SlackEventsApiAdapter library"
+ super(SlackEventAdapterException, self).__init__(msg)
</patch>
|
diff --git a/tests/conftest.py b/tests/conftest.py
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -1,8 +1,19 @@
-import pytest
import json
+import hashlib
+import hmac
+import pytest
from slackeventsapi import SlackEventAdapter
+def create_signature(secret, timestamp, data):
+ req = str.encode('v0:' + str(timestamp) + ':') + str.encode(data)
+ request_signature= 'v0='+hmac.new(
+ str.encode(secret),
+ req, hashlib.sha256
+ ).hexdigest()
+ return request_signature
+
+
def load_event_fixture(event, as_string=True):
filename = "tests/data/{}.json".format(event)
with open(filename) as json_data:
@@ -23,12 +34,14 @@ def pytest_namespace():
return {
'reaction_event_fixture': load_event_fixture('reaction_added'),
'url_challenge_fixture': load_event_fixture('url_challenge'),
- 'bad_token_fixture': event_with_bad_token()
+ 'bad_token_fixture': event_with_bad_token(),
+ 'create_signature': create_signature
}
@pytest.fixture
def app():
- adapter = SlackEventAdapter("vFO9LARnLI7GflLR8tGqHgdy")
+ adapter = SlackEventAdapter("SIGNING_SECRET")
app = adapter.server
+ app.testing = True
return app
diff --git a/tests/test_events.py b/tests/test_events.py
--- a/tests/test_events.py
+++ b/tests/test_events.py
@@ -1,21 +1,27 @@
+import time
import pytest
from slackeventsapi import SlackEventAdapter
-ADAPTER = SlackEventAdapter('vFO9LARnLI7GflLR8tGqHgdy')
-
+ADAPTER = SlackEventAdapter('SIGNING_SECRET')
def test_event_emission(client):
# Events should trigger an event
- data = pytest.reaction_event_fixture
-
@ADAPTER.on('reaction_added')
def event_handler(event):
assert event["reaction"] == 'grinning'
+ data = pytest.reaction_event_fixture
+ timestamp = int(time.time())
+ signature = pytest.create_signature(ADAPTER.signing_secret, timestamp, data)
+
res = client.post(
'/slack/events',
data=data,
- content_type='application/json'
+ content_type='application/json',
+ headers={
+ 'X-Slack-Request-Timestamp': timestamp,
+ 'X-Slack-Signature': signature
+ }
)
assert res.status_code == 200
diff --git a/tests/test_server.py b/tests/test_server.py
--- a/tests/test_server.py
+++ b/tests/test_server.py
@@ -2,20 +2,23 @@
from flask import Flask
import pytest
import sys
+import hmac
+import time
from slackeventsapi import SlackEventAdapter
+from slackeventsapi.server import SlackEventAdapterException
from slackeventsapi.version import __version__
def test_existing_flask():
valid_flask = Flask(__name__)
- valid_adapter = SlackEventAdapter("vFO9LARnLI7GflLR8tGqHgdy", "/slack/events", valid_flask)
+ valid_adapter = SlackEventAdapter("SIGNING_SECRET", "/slack/events", valid_flask)
assert isinstance(valid_adapter, SlackEventAdapter)
def test_server_not_flask():
with pytest.raises(TypeError) as e:
invalid_flask = "I am not a Flask"
- SlackEventAdapter("vFO9LARnLI7GflLR8tGqHgdy", "/slack/events", invalid_flask)
+ SlackEventAdapter("SIGNING_SECRET", "/slack/events", invalid_flask)
assert e.value.args[0] == 'Server must be an instance of Flask'
@@ -26,33 +29,110 @@ def test_event_endpoint_get(client):
def test_url_challenge(client):
+ slack_adapter = SlackEventAdapter("SIGNING_SECRET")
data = pytest.url_challenge_fixture
+ timestamp = int(time.time())
+ signature = pytest.create_signature(slack_adapter.signing_secret, timestamp, data)
+
res = client.post(
'/slack/events',
data=data,
- content_type='application/json')
+ content_type='application/json',
+ headers={
+ 'X-Slack-Request-Timestamp': timestamp,
+ 'X-Slack-Signature': signature
+ }
+ )
assert res.status_code == 200
assert bytes.decode(res.data) == "valid_challenge_token"
-def test_valid_event_request(client):
+def test_invalid_request_signature(client):
+ # Verify [package metadata header is set
+ slack_adapter = SlackEventAdapter("SIGNING_SECRET")
+
+ data = pytest.reaction_event_fixture
+ timestamp = int(time.time())
+ signature = "bad signature"
+
+ with pytest.raises(SlackEventAdapterException) as excinfo:
+ res = client.post(
+ '/slack/events',
+ data=data,
+ content_type='application/json',
+ headers={
+ 'X-Slack-Request-Timestamp': timestamp,
+ 'X-Slack-Signature': signature
+ }
+ )
+
+ assert str(excinfo.value) == 'Invalid request signature'
+
+
+def test_invalid_request_timestamp(client):
+ # Verify [package metadata header is set
+ slack_adapter = SlackEventAdapter("SIGNING_SECRET")
+
+ data = pytest.reaction_event_fixture
+ timestamp = int(time.time()+1000)
+ signature = "bad timestamp"
+
+ with pytest.raises(SlackEventAdapterException) as excinfo:
+ res = client.post(
+ '/slack/events',
+ data=data,
+ content_type='application/json',
+ headers={
+ 'X-Slack-Request-Timestamp': timestamp,
+ 'X-Slack-Signature': signature
+ }
+ )
+
+ assert str(excinfo.value) == 'Invalid request timestamp'
+
+
+def test_compare_digest_fallback(client, monkeypatch):
+ # Verify [package metadata header is set
+ slack_adapter = SlackEventAdapter("SIGNING_SECRET")
+
+ if hasattr(hmac, "compare_digest"):
+ monkeypatch.delattr(hmac, 'compare_digest')
+
data = pytest.reaction_event_fixture
+ timestamp = int(time.time())
+ signature =pytest.create_signature(slack_adapter.signing_secret, timestamp, data)
+
res = client.post(
'/slack/events',
data=data,
- content_type='application/json')
+ content_type='application/json',
+ headers={
+ 'X-Slack-Request-Timestamp': timestamp,
+ 'X-Slack-Signature': signature
+ }
+ )
+
assert res.status_code == 200
def test_version_header(client):
# Verify [package metadata header is set
- package_info = SlackEventAdapter("token").server.package_info
+ slack_adapter = SlackEventAdapter("SIGNING_SECRET")
+ package_info = slack_adapter.server.package_info
data = pytest.reaction_event_fixture
+ timestamp = int(time.time())
+ signature = pytest.create_signature(slack_adapter.signing_secret, timestamp, data)
+
res = client.post(
'/slack/events',
data=data,
- content_type='application/json')
+ content_type='application/json',
+ headers={
+ 'X-Slack-Request-Timestamp': timestamp,
+ 'X-Slack-Signature': signature
+ }
+ )
assert res.status_code == 200
assert res.headers["X-Slack-Powered-By"] == package_info
@@ -60,7 +140,14 @@ def test_version_header(client):
def test_server_start(mocker):
# Verify server started with correct params
- slack_events_adapter = SlackEventAdapter("token")
+ slack_events_adapter = SlackEventAdapter("SIGNING_SECRET")
mocker.spy(slack_events_adapter, 'server')
slack_events_adapter.start(port=3000)
slack_events_adapter.server.run.assert_called_once_with(debug=False, host='127.0.0.1', port=3000)
+
+
+def test_default_exception_msg(mocker):
+ with pytest.raises(SlackEventAdapterException) as excinfo:
+ raise SlackEventAdapterException
+
+ assert str(excinfo.value) == 'An error occurred in the SlackEventsApiAdapter library'
|
1.0
| |||
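The gold patch in the row above replaces Slack's deprecated verification-token check with request signing: the server recomputes an HMAC-SHA256 over `v0:<timestamp>:<raw body>` using the app's signing secret and compares it against the `X-Slack-Signature` header in constant time. The following is a minimal standalone sketch of that scheme — the function names are my own, not part of the patched library:

```python
import hashlib
import hmac


def make_slack_signature(signing_secret, timestamp, body):
    # Slack signs the string "v0:<timestamp>:<raw request body>" with the
    # app's signing secret using HMAC-SHA256, then hex-encodes the digest
    # and prefixes it with "v0=".
    base = "v0:{}:{}".format(timestamp, body).encode("utf-8")
    digest = hmac.new(
        signing_secret.encode("utf-8"), base, hashlib.sha256
    ).hexdigest()
    return "v0=" + digest


def verify_slack_signature(signing_secret, timestamp, body, signature):
    # Recompute the expected signature and compare it to the received one
    # in constant time, mirroring the hmac.compare_digest branch of the
    # patch above (the patch also carries a manual XOR fallback for old
    # Pythons without compare_digest).
    expected = make_slack_signature(signing_secret, timestamp, body)
    return hmac.compare_digest(expected, signature)


good = make_slack_signature("SIGNING_SECRET", 1234567890, '{"type":"event_callback"}')
print(verify_slack_signature("SIGNING_SECRET", 1234567890, '{"type":"event_callback"}', good))      # True
print(verify_slack_signature("SIGNING_SECRET", 1234567890, '{"type":"event_callback"}', "v0=bad"))  # False
```

The same recipe is what the test fixtures above use to forge valid headers: compute the signature over the fixture body with a fresh timestamp and send both as `X-Slack-Request-Timestamp` and `X-Slack-Signature`.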
celery__celery-2666
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
celery/celery
|
6bf4664e076c4d8b6d728190802124aa5c112c5d
| "Celerybeat runs periodic tasks every 5 seconds regardless of interval\nI recently upgraded to celer(...TRUNCATED)
| "Could you try upgrading to celery 3.0.7? Also please delete an existing `celerybeat-schedule` file(...TRUNCATED)
|
2015-06-19T00:01:16Z
| "<patch>\ndiff --git a/celery/schedules.py b/celery/schedules.py\n--- a/celery/schedules.py\n+++ b/c(...TRUNCATED)
| "diff --git a/celery/tests/app/test_beat.py b/celery/tests/app/test_beat.py\n--- a/celery/tests/app/(...TRUNCATED)
|
1.0
| |||
NVIDIA__NeMo-5724
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
NVIDIA/NeMo
|
eee715f831f2b088075f75cc7c95de60f4ef1d38
| "EMA Doesn't delete previous EMA ckpts when k > 0 for checkpointing\n**Describe the bug**\r\n\r\nEMA(...TRUNCATED)
| "cc @carmocca\nIdeally, we would find a better solution. However, since that would require a larger (...TRUNCATED)
|
2023-01-03T11:05:25Z
| "<patch>\ndiff --git a/nemo/collections/common/callbacks/ema.py b/nemo/collections/common/callbacks/(...TRUNCATED)
| "diff --git a/tests/collections/common/test_ema.py b/tests/collections/common/test_ema.py\n--- a/tes(...TRUNCATED)
|
1.0
| |||
NVIDIA__NeMo-6097
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
NVIDIA/NeMo
|
66aeb4c36dd86a777cc47e9878e701bd8029b654
| "Spectrogram Enhancer doesn't generalize to spectrogram lengths unseen during training\n**Describe t(...TRUNCATED)
| "A temporary fix: given a trained model, clone first patch of the initial tensor length-wise:\r\n```(...TRUNCATED)
|
2023-02-23T22:43:15Z
| "<patch>\ndiff --git a/nemo/collections/tts/modules/spectrogram_enhancer.py b/nemo/collections/tts/m(...TRUNCATED)
| "diff --git a/tests/collections/tts/test_spectrogram_enhancer.py b/tests/collections/tts/test_spectr(...TRUNCATED)
|
1.0
| |||
NVIDIA__NeMo-3159
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
NVIDIA/NeMo
|
c607061264713c9f4c35d1fbc5afaaf41471317e
| "Punctuation data set uses too much memory\n**Describe the bug**\r\n\r\nPunctuation datasets cannot (...TRUNCATED)
|
2021-11-10T13:43:43Z
| "<patch>\ndiff --git a/examples/nlp/token_classification/data/create_punctuation_capitalization_tarr(...TRUNCATED)
| "diff --git a/tests/collections/nlp/test_pretrained_models_performance.py b/tests/collections/nlp/te(...TRUNCATED)
|
1.0
| ||||
NVIDIA__NeMo-6060
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
NVIDIA/NeMo
|
64b74dc9eaa6a23e52b697c9f9b7ad87528a2373
| "Spectrogram Enhancer doesn't generalize to spectrogram lengths unseen during training\n**Describe t(...TRUNCATED)
| "A temporary fix: given a trained model, clone first patch of the initial tensor length-wise:\r\n```(...TRUNCATED)
|
2023-02-20T16:02:45Z
| "<patch>\ndiff --git a/nemo/collections/tts/modules/spectrogram_enhancer.py b/nemo/collections/tts/m(...TRUNCATED)
| "diff --git a/tests/collections/tts/test_spectrogram_enhancer.py b/tests/collections/tts/test_spectr(...TRUNCATED)
|
1.0
| |||
celery__celery-567
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
celery/celery
|
9998b55af267446a077b31fdf35806c59b943b2d
| "Introduce CELERYCTL variable in /etc/init.d/celeryd and /etc/default/celeryd\nI ran into a problem (...TRUNCATED)
|
2011-12-12T12:49:09Z
| "<patch>\ndiff --git a/celery/__init__.py b/celery/__init__.py\n--- a/celery/__init__.py\n+++ b/cele(...TRUNCATED)
| "diff --git a/celery/tests/config.py b/celery/tests/config.py\n--- a/celery/tests/config.py\n+++ b/c(...TRUNCATED)
|
1.0
| ||||
NVIDIA__NeMo-1323
| "You will be provided with a partial code base and an issue statement explaining a problem to resolv(...TRUNCATED)
|
NVIDIA/NeMo
|
5cf042856bf718d27233a7538e0b094ce576d5c4
| "Loading NLP and ASR models might result in `Missing key(s) in state_dict` error\n**Describe the bug(...TRUNCATED)
|
2020-10-21T20:01:26Z
| "<patch>\ndiff --git a/examples/asr/speech_to_text_infer.py b/examples/asr/speech_to_text_infer.py\n(...TRUNCATED)
| "diff --git a/examples/tts/test_tts_infer.py b/examples/tts/test_tts_infer.py\n--- a/examples/tts/te(...TRUNCATED)
|
1.0
|