Each entry below is one row of the dataset, with the fields: `code`, `docstring`, `func_name`, `language` (all rows are `python`), `repo`, `path`, `url`, and `license` (all rows are `MIT`). Truncated cells are shown with `...` exactly as they appear in the source.
**`compiled_revision`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def compiled_revision(self):
    """
    Reads the compiled revision from the revision file.

    Returns:
        the revision of this vocabulary (i.e. the string
        inside the revision file), or None if is_compiled
        is False
    """
    if not self.is_compiled:
        ...
```
**`compile`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def compile(self, phrases, force=False):
    """
    Compiles this vocabulary. If the force argument is True, compilation
    will be forced regardless of necessity (which means that the
    preliminary check if the current revision already equals the
    revision after compilation will be skipped).

    This method is not meant to be overridden b...
```
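The force/revision short-circuit the docstring above describes can be sketched as a standalone predicate (the helper name `needs_compilation` is mine, not the repo's):

```python
def needs_compilation(current_revision, target_revision, force=False):
    # Recompile when forced, or when the stored revision differs from the
    # revision the phrases would compile to (a sketch of the preliminary
    # check the docstring describes, not the repo's actual code).
    return force or current_revision != target_revision

print(needs_compilation("abc", "abc"))              # False: revisions match
print(needs_compilation("abc", "abc", force=True))  # True: forced
```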
**`_compile_vocabulary`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def _compile_vocabulary(self, phrases):
    """
    Abstract method that should be overridden in subclasses with custom
    compilation code.

    Arguments:
        phrases -- a list of phrases that this vocabulary will contain
    """
```
**`is_compiled`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def is_compiled(self):
    """
    Checks if the vocabulary is compiled by checking if the revision,
    languagemodel and dictionary files are readable.

    Returns:
        True if this vocabulary has been compiled, else False
    """
    return (super(self.__class__, self).is_compiled an...
```
**`_compile_vocabulary`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def _compile_vocabulary(self, phrases):
    """
    Compiles the vocabulary to the Pocketsphinx format by creating a
    languagemodel and a dictionary.

    Arguments:
        phrases -- a list of phrases that this vocabulary will contain
    """
    text = " ".join([("<s> %s </s>" % phrase...
```
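The truncated line above wraps each phrase in CMU-style sentence-boundary markers before language-model training; in isolation, that join looks like this:

```python
phrases = ["hello world", "goodbye"]
# Wrap each phrase in <s>...</s> sentence markers, as in the truncated
# snippet above, producing the text the languagemodel is built from.
text = " ".join("<s> %s </s>" % phrase for phrase in phrases)
print(text)  # <s> hello world </s> <s> goodbye </s>
```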
**`_compile_languagemodel`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def _compile_languagemodel(self, text, output_file):
    """
    Compiles the languagemodel from a text.

    Arguments:
        text -- the text the languagemodel will be generated from
        output_file -- the path of the file this languagemodel will
                       be written to

    Returns:
        A list of all unique words this vocabu...
```
**`_compile_dictionary`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def _compile_dictionary(self, words, output_file):
    """
    Compiles the dictionary from a list of words.

    Arguments:
        words -- a list of all unique words this vocabulary contains
        output_file -- the path of the file this dictionary will
                       be written to
    ...
```
**`get_keyword_phrases`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def get_keyword_phrases():
    """
    Gets the keyword phrases from the keywords file in the jasper data dir.

    Returns:
        A list of keyword phrases.
    """
    phrases = []
    with open(jasperpath.data('keyword_phrases'), mode="r") as f:
        for line in f:
            phrase = line.strip()
            ...
```
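The truncated loop above reads one phrase per line and strips it; a minimal stand-alone sketch (the skipping of blank lines is my assumption, since that part of the original is cut off):

```python
import io

def read_phrases(f):
    # One phrase per file line; strip whitespace and skip empty lines
    # (the skipping is an assumption -- the original loop is truncated).
    return [line.strip() for line in f if line.strip()]

print(read_phrases(io.StringIO("JASPER\n\nTIME\n")))  # ['JASPER', 'TIME']
```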
**`get_all_phrases`** — [jasperproject/jasper-client · client/vocabcompiler.py](https://github.com/jasperproject/jasper-client/blob/master/client/vocabcompiler.py) · MIT

```python
def get_all_phrases():
    """
    Gets phrases from all modules.

    Returns:
        A list of phrases in all modules plus additional phrases passed to this
        function.
    """
    phrases = []
    modules = brain.Brain.get_modules()
    for module in modules:
        phrases.extend(get_phrases_from_module(mo...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Birthday.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Birthday.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, by listing the user's
    Facebook friends with birthdays today.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        profile -- contains information rela...
```
**`getSender`** — [jasperproject/jasper-client · client/modules/Gmail.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Gmail.py) · MIT

```python
def getSender(email):
    """
    Returns the best-guess sender of an email.

    Arguments:
        email -- the email whose sender is desired

    Returns:
        Sender of the email.
    """
    sender = email['From']
    m = re.match(r'(.*)\s<.*>', sender)
    if m:
        return m.group(1)
    return...
```
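The regex in `getSender` peels the display name off a `Name <address>` header; the same technique, made self-contained (the fallback return of the raw header is my assumption, since the last line is truncated):

```python
import re

def get_sender(from_header):
    # Strip the angle-bracket address from a "Name <addr>" header,
    # falling back to the raw value (the fallback is assumed -- the
    # original's final return is truncated).
    m = re.match(r'(.*)\s<.*>', from_header)
    if m:
        return m.group(1)
    return from_header

print(get_sender("Jane Doe <jane@example.com>"))  # Jane Doe
print(get_sender("jane@example.com"))             # jane@example.com
```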
**`getMostRecentDate`** — [jasperproject/jasper-client · client/modules/Gmail.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Gmail.py) · MIT

```python
def getMostRecentDate(emails):
    """
    Returns the most recent date of any email in the list provided.

    Arguments:
        emails -- a list of emails to check

    Returns:
        Date of the most recent email.
    """
    dates = [getDate(e) for e in emails]
    dates.sort(reverse=True)
    if dat...
```
**`fetchUnreadEmails`** — [jasperproject/jasper-client · client/modules/Gmail.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Gmail.py) · MIT

```python
def fetchUnreadEmails(profile, since=None, markRead=False, limit=None):
    """
    Fetches a list of unread email objects from a user's Gmail inbox.

    Arguments:
        profile -- contains information related to the user (e.g., Gmail
                   address)
        since -- if provided, no emails before this date will be returned
        markRead -- if True, marks all returned em...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Gmail.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Gmail.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, with a summary of
    the user's Gmail inbox, reporting on the number of unread emails
    in the inbox, as well as their senders.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (f...
```
**`getTopStories`** — [jasperproject/jasper-client · client/modules/HN.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/HN.py) · MIT

```python
def getTopStories(maxResults=None):
    """
    Returns the top headlines from Hacker News.

    Arguments:
        maxResults -- if provided, returns a random sample of size maxResults
    """
    hdr = {'User-Agent': 'Mozilla/5.0'}
    req = urllib2.Request(URL, headers=hdr)
    page = urllib2.urlopen(req).re...
```
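The `maxResults` behavior described above (a random sample of capped size) can be sketched without the network fetch; the helper name and the pass-through when no cap is given are my assumptions:

```python
import random

def sample_headlines(headlines, max_results=None):
    # Return a random sample of at most max_results headlines, or all of
    # them when no cap applies (a sketch of the docstring's contract,
    # not the repo's actual code).
    if max_results is not None and max_results < len(headlines):
        return random.sample(headlines, max_results)
    return headlines

picks = sample_headlines(["a", "b", "c", "d"], max_results=2)
print(len(picks))  # 2
```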
**`handle`** — [jasperproject/jasper-client · client/modules/HN.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/HN.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, with a sample of
    Hacker News's top headlines, sending them to the user over email
    if desired.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        ...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Joke.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Joke.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, by telling a joke.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        profile -- contains information related to the user (e.g., phone
                   nu...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Life.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Life.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, by relaying the
    meaning of life.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        profile -- contains information related to the user (e.g., phone...
```
**`handle`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, by telling a joke.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        profile -- contains information related to the user (e.g., phone
                   number)
    ...
```
**`__init__`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def __init__(self, server="localhost", port=6600):
    """
    Prepare the client and music variables
    """
    self.server = server
    self.port = port
    # prepare client
    self.client = mpd.MPDClient()
    self.client.timeout = None
    self.client.idletimeout = None
    ...
```
**`play`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def play(self, songs=False, playlist_name=False):
    """
    Plays the current song or accepts a song to play.

    Arguments:
        songs -- a list of song objects
        playlist_name -- user-defined, something like "Love Song Playlist"
    """
    if songs:
        self.cl...
```
**`get_soup`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def get_soup(self):
    """
    Returns the list of unique words that comprise song and artist titles
    """
    soup = []
    for song in self.songs:
        song_words = song.title.split(" ")
        artist_words = song.artist.split(" ")
        soup.extend(song_words)
        s...
```
**`get_soup_playlist`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def get_soup_playlist(self):
    """
    Returns the list of unique words that comprise playlist names
    """
    soup = []
    for name in self.playlists:
        soup.extend(name.split(" "))
    title_trans = ''.join(chr(c) if chr(c).isupper() or chr(c).islower()
                          ...
```
**`get_soup_separated`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def get_soup_separated(self):
    """
    Returns the list of PHRASES that comprise song and artist titles
    """
    title_soup = [song.title for song in self.songs]
    artist_soup = [song.artist for song in self.songs]
    soup = list(set(title_soup + artist_soup))
    title_trans = '...
```
**`fuzzy_songs`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def fuzzy_songs(self, query):
    """
    Returns songs matching the query as closely as possible on the
    artist field, title field, etc.
    """
    query = query.upper()
    matched_song_titles = difflib.get_close_matches(query,
                                                    self.song_titles)
    ...
```
**`fuzzy_playlists`** — [jasperproject/jasper-client · client/modules/MPDControl.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/MPDControl.py) · MIT

```python
def fuzzy_playlists(self, query):
    """
    Returns playlist names that match the query as closely as possible.
    """
    query = query.upper()
    lookup = {n.upper(): n for n in self.playlists}
    results = [lookup[r] for r in difflib.get_close_matches(query, lookup)]
    return results
```
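The method above is one of the few rows whose code survives in full; as a stand-alone function (taking the playlist list as a parameter instead of `self`), the difflib-based fuzzy match works like this:

```python
import difflib

def fuzzy_playlists(query, playlists):
    # Case-insensitive fuzzy matching: upper-case the query, build an
    # UPPER -> original lookup, and map difflib's close matches back to
    # the original playlist names.
    query = query.upper()
    lookup = {n.upper(): n for n in playlists}
    return [lookup[r] for r in difflib.get_close_matches(query, lookup)]

print(fuzzy_playlists("love song playlist",
                      ["Love Song Playlist", "Workout Mix"]))
# ['Love Song Playlist']
```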
**`handle`** — [jasperproject/jasper-client · client/modules/News.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/News.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, with a summary of
    the day's top news headlines, sending them to the user over email
    if desired.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Notifications.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Notifications.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, with a summary of
    the user's Facebook notifications, including a count and details
    related to each individual notification.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Time.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Time.py) · MIT

```python
def handle(text, mic, profile):
    """
    Reports the current time based on the user's timezone.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        profile -- contains information related to the user (e.g., phone
                   number)
    """
```
**`handle`** — [jasperproject/jasper-client · client/modules/Unclear.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Unclear.py) · MIT

```python
def handle(text, mic, profile):
    """
    Reports that the user has unclear or unusable input.

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the user (for both input and output)
        profile -- contains information related to the user (e.g., phone
                   number)
    """
```
**`replaceAcronyms`** — [jasperproject/jasper-client · client/modules/Weather.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Weather.py) · MIT

```python
def replaceAcronyms(text):
    """
    Replaces some commonly-used acronyms for an improved verbal weather report.
    """
    def parseDirections(text):
        words = {
            'N': 'north',
            'S': 'south',
            'E': 'east',
            'W': 'west',
        }
        output = [words[w] for w in...
```
**`handle`** — [jasperproject/jasper-client · client/modules/Weather.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Weather.py) · MIT

```python
def handle(text, mic, profile):
    """
    Responds to user-input, typically speech text, with a summary of
    the relevant weather for the requested date (typically, weather
    information will not be available for days beyond tomorrow).

    Arguments:
        text -- user-input, typically transcribed speech
        mic -- used to interact with the use...
```
**`isValid`** — [jasperproject/jasper-client · client/modules/Weather.py](https://github.com/jasperproject/jasper-client/blob/master/client/modules/Weather.py) · MIT

```python
def isValid(text):
    """
    Returns True if the text is related to the weather.

    Arguments:
        text -- user-input, typically transcribed speech
    """
    return bool(re.search(r'\b(weathers?|temperature|forecast|outside|hot|' +
                         r'cold|jacket|coat|rain)\b', text, re.IGNORE...
```
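The keyword-gate pattern `isValid` uses is fully visible above (the only truncation is the `re.IGNORECASE` flag name); precompiled and self-contained, it behaves like this:

```python
import re

# Same keyword alternation as isValid above, precompiled once.
WEATHER_RE = re.compile(
    r'\b(weathers?|temperature|forecast|outside|hot|cold|jacket|coat|rain)\b',
    re.IGNORECASE)

def is_valid(text):
    # True iff the utterance contains any weather-related keyword.
    return bool(WEATHER_RE.search(text))

print(is_valid("Do I need a jacket today?"))  # True
print(is_valid("Tell me a joke"))             # False
```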
**`testLog`** — [jasperproject/jasper-client · tests/test_brain.py](https://github.com/jasperproject/jasper-client/blob/master/tests/test_brain.py) · MIT

```python
def testLog(self):
    """Does Brain correctly log errors when raised by modules?"""
    my_brain = TestBrain._emptyBrain()
    unclear = my_brain.modules[-1]
    with mock.patch.object(unclear, 'handle') as mocked_handle:
        with mock.patch.object(my_brain._logger, 'error') as mocked_log:
            ...
```
**`testSortByPriority`** — [jasperproject/jasper-client · tests/test_brain.py](https://github.com/jasperproject/jasper-client/blob/master/tests/test_brain.py) · MIT

```python
def testSortByPriority(self):
    """Does Brain sort modules by priority?"""
    my_brain = TestBrain._emptyBrain()
    priorities = filter(lambda m: hasattr(m, 'PRIORITY'), my_brain.modules)
    target = sorted(priorities, key=lambda m: m.PRIORITY, reverse=True)
    self.assertEqual(target, priorit...
```
**`testPriority`** — [jasperproject/jasper-client · tests/test_brain.py](https://github.com/jasperproject/jasper-client/blob/master/tests/test_brain.py) · MIT

```python
def testPriority(self):
    """Does Brain correctly send query to higher-priority module?"""
    my_brain = TestBrain._emptyBrain()
    hn_module = 'HN'
    hn = filter(lambda m: m.__name__ == hn_module, my_brain.modules)[0]
    with mock.patch.object(hn, 'handle') as mocked_handle:
        my_...
```
**`runConversation`** — [jasperproject/jasper-client · tests/test_modules.py](https://github.com/jasperproject/jasper-client/blob/master/tests/test_modules.py) · MIT

```python
def runConversation(self, query, inputs, module):
    """Generic method for spoofing conversation.

    Arguments:
        query -- The initial input to the server.
        inputs -- Additional input, if conversation is extended.

    Returns:
        The server's responses, in a list.
    """
    s...
```
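The spoofed-conversation idea above (scripted inputs in, spoken responses captured) can be sketched with a tiny fake mic; the class and its method names (`activeListen`, `say`, mirroring Jasper's mic interface) are my assumption, since the test body is truncated:

```python
class FakeMic:
    # Hypothetical stand-in for the spoofed conversation harness:
    # scripted inputs go in, responses are captured in a list.
    def __init__(self, inputs):
        self.inputs = list(inputs)
        self.outputs = []

    def activeListen(self):
        # Pop the next scripted user utterance, or silence when exhausted.
        return self.inputs.pop(0) if self.inputs else ""

    def say(self, phrase):
        self.outputs.append(phrase)

mic = FakeMic(["YES"])
mic.say("Would you like to hear the headlines?")
print(mic.activeListen())  # YES
```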
**`testTranscribeJasper`** — [jasperproject/jasper-client · tests/test_stt.py](https://github.com/jasperproject/jasper-client/blob/master/tests/test_stt.py) · MIT

```python
def testTranscribeJasper(self):
    """
    Does Jasper recognize his name (i.e., passive listen)?
    """
    with open(self.jasper_clip, mode="rb") as f:
        transcription = self.passive_stt_engine.transcribe(f)
    self.assertIn("JASPER", transcription)
```
**`testTranscribe`** — [jasperproject/jasper-client · tests/test_stt.py](https://github.com/jasperproject/jasper-client/blob/master/tests/test_stt.py) · MIT

```python
def testTranscribe(self):
    """
    Does Jasper recognize 'time' (i.e., active listen)?
    """
    with open(self.time_clip, mode="rb") as f:
        transcription = self.active_stt_engine.transcribe(f)
    self.assertIn("TIME", transcription)
```
**`prepare_latents`** — [jdh-algo/JoyHallo · joyhallo/animate/face_animate.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/animate/face_animate.py) · MIT

```python
def prepare_latents(
    self,
    batch_size: int,            # Number of videos to generate in parallel
    num_channels_latents: int,  # Number of channels in the latents
    width: int,                 # Width of the video frame
    height: int,
    ...
```

Docstring (truncated): "Prepares the initial latents for video generation. Args: batch_size (int): Number of videos to generate in parallel. num_channels_latents (int): Number of channels in the latents. width (int): Width of the video frame. height (int): Height of the video f..."
**`decode_latents`** — [jdh-algo/JoyHallo · joyhallo/animate/face_animate.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/animate/face_animate.py) · MIT

```python
def decode_latents(self, latents):
    """
    Decode the latents to produce a video.

    Parameters:
        latents (torch.Tensor): The latents to be decoded.

    Returns:
        video (torch.Tensor): The decoded video.
        video_length (int): The length of the video in frames.
    """
    ...
```
**`enable_sequential_cpu_offload`** — [jdh-algo/JoyHallo · joyhallo/animate/face_animate_static.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/animate/face_animate_static.py) · MIT

```python
def enable_sequential_cpu_offload(self, gpu_id=0):
    """
    Offloads selected models to the GPU for increased performance.

    Args:
        gpu_id (int, optional): The ID of the GPU to offload models to. Defaults to 0.
    """
    device = torch.device(f"cuda:{gpu_id}")
    for cpu_off...
```
**`decode_latents`** — [jdh-algo/JoyHallo · joyhallo/animate/face_animate_static.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/animate/face_animate_static.py) · MIT

```python
def decode_latents(self, latents):
    """
    Decode the given latents to video frames.

    Parameters:
        latents (torch.Tensor): The latents to be decoded. Shape: (batch_size, num_channels_latents, video_length, height, width).

    Returns:
        video (torch.Tensor): The decoded video frames. Shape: (batch_size, num_channels_latents, v...
```
**`prepare_latents`** — [jdh-algo/JoyHallo · joyhallo/animate/face_animate_static.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/animate/face_animate_static.py) · MIT

```python
def prepare_latents(
    self,
    batch_size,
    num_channels_latents,
    width,
    height,
    dtype,
    device,
    generator,
    latents=None,
):
    """
    Prepares the initial latents for the diffusion pipeline.

    Args:
        batch_size (int): The number of images to generate in one forward pass.
        num_channels_latents (int): The number of channels in the latents tensor.
        width (int): The width of the latents tensor.
        ...
```
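`prepare_latents` implementations in diffusion pipelines typically allocate latents at a fraction of the pixel resolution; the shape math can be sketched without torch (the `vae_scale_factor=8` default is the usual Stable Diffusion value, assumed here rather than taken from JoyHallo's code):

```python
def latent_shape(batch_size, num_channels_latents, height, width,
                 vae_scale_factor=8):
    # Latents live at 1/vae_scale_factor of the pixel resolution; this
    # is a hedged sketch of the shape prepare_latents would allocate,
    # not the repo's actual implementation.
    return (batch_size, num_channels_latents,
            height // vae_scale_factor, width // vae_scale_factor)

print(latent_shape(2, 4, 512, 512))  # (2, 4, 64, 64)
```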
**`prepare_condition`** — [jdh-algo/JoyHallo · joyhallo/animate/face_animate_static.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/animate/face_animate_static.py) · MIT

```python
def prepare_condition(
    self,
    cond_image,
    width,
    height,
    device,
    dtype,
    do_classififer_free_guidance=False,
):
    """
    Prepares the condition for the face animation pipeline.

    Args:
        cond_image (torch.Tensor): The conditional image tensor.
        width (int): The width of the output image.
        height (int): The height of the output image.
        device (torch.device): The device to run the ...
```
**`preprocess`** — [jdh-algo/JoyHallo · joyhallo/datasets/audio_processor.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/datasets/audio_processor.py) · MIT

```python
def preprocess(self, wav_file: str, clip_length: int = -1):
    """
    Preprocess a WAV audio file by separating the vocals from the background
    and resampling it to a 16 kHz sample rate. The separated vocal track is
    then converted into wav2vec2 for further processing or analysis.

    Args:
        wav_file (str): The path to the WAV file to be processed. This fil...
```
**`get_embedding`** — [jdh-algo/JoyHallo · joyhallo/datasets/audio_processor.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/datasets/audio_processor.py) · MIT

```python
def get_embedding(self, wav_file: str):
    """Preprocess a WAV audio file and convert it to embeddings.

    Args:
        wav_file (str): The path to the WAV file to be processed. This file should be accessible and in WAV format.

    Returns:
        torch.tensor: Returns an audio embedding as a torch.tensor
    """
```
**`preprocess`** — [jdh-algo/JoyHallo · joyhallo/datasets/image_processor.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/datasets/image_processor.py) · MIT

```python
def preprocess(self, source_image_path: str, cache_dir: str, face_region_ratio: float):
    """
    Apply preprocessing to the source image to prepare for face analysis.

    Parameters:
        source_image_path (str): The path to the source image.
        cache_dir (str): The directory to cache intermediate results.

    Returns:
        None
    """
```
**`close`** — [jdh-algo/JoyHallo · joyhallo/datasets/image_processor.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/datasets/image_processor.py) · MIT

```python
def close(self):
    """
    Closes the ImageProcessor and releases any resources held by the FaceAnalysis instance.

    Args:
        self: The ImageProcessor instance.

    Returns:
        None.
    """
    for _, model in self.face_analysis.models.items():
        if hasattr(mod...
```
**`augmentation`** — [jdh-algo/JoyHallo · joyhallo/datasets/mask_image.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/datasets/mask_image.py) · MIT

```python
def augmentation(self, image, transform, state=None):
    """
    Apply data augmentation to the input image.

    Args:
        image (PIL.Image): The input image.
        transform (torchvision.transforms.Compose): The data augmentation transforms.
        state (dict, optional): The random state for reproducibility. Defaults to None.

    Returns:
        ...
```
**`augmentation`** — [jdh-algo/JoyHallo · joyhallo/datasets/talk_video.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/datasets/talk_video.py) · MIT

```python
def augmentation(self, images, transform, state=None):
    """
    Apply the given transformation to the input images.

    Args:
        images (List[PIL.Image] or PIL.Image): The input images to be transformed.
        transform (torchvision.transforms.Compose): The transformation to be applied to the images.
        state (torch.ByteTensor, optional...
```
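Both `augmentation` rows above take an optional random `state` so that paired inputs (e.g. an image and its mask) receive identical randomness. The same trick, sketched with the stdlib RNG instead of torch's:

```python
import random

def augment_pair(a, b, jitter):
    # Capture the RNG state before transforming the first input, then
    # restore it so the second input sees identical randomness -- a
    # sketch of the shared-state idea, using random instead of torch.
    state = random.getstate()
    out_a = jitter(a)
    random.setstate(state)
    out_b = jitter(b)
    return out_a, out_b

x, y = augment_pair(10, 10, lambda v: v + random.random())
print(x == y)  # True: both inputs got the same random offset
```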
**`forward`** — [jdh-algo/JoyHallo · joyhallo/models/attention.py](https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py) · MIT

```python
def forward(self, x: torch.Tensor, objs: torch.Tensor) -> torch.Tensor:
    """
    Apply the Gated Self-Attention mechanism to the input tensor `x` and object tensor `objs`.

    Args:
        x (torch.Tensor): The input tensor.
        objs (torch.Tensor): The object tensor.

    Returns:
        torch.Tensor: The output tensor after applying Gated Self-Attention.
    ...
```
def set_chunk_feed_forward(self, chunk_size: Optional[int], dim: int = 0):
"""
Sets the chunk size for feed-forward processing in the transformer block.
Args:
chunk_size (Optional[int]): The size of the chunks to process in feed-forward layers.
If None, the chunk size i... |
Sets the chunk size for feed-forward processing in the transformer block.
Args:
chunk_size (Optional[int]): The size of the chunks to process in feed-forward layers.
If None, the chunk size is set to the maximum possible value.
dim (int, optional): The dimension al... | set_chunk_feed_forward | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def forward(
self,
hidden_states: torch.FloatTensor,
attention_mask: Optional[torch.FloatTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
encoder_attention_mask: Optional[torch.FloatTensor] = None,
timestep: Optional[torch.LongTensor] = None,
... |
This function defines the forward pass of the BasicTransformerBlock.
Args:
self (BasicTransformerBlock):
An instance of the BasicTransformerBlock class.
hidden_states (torch.FloatTensor):
A tensor containing the hidden states.
attenti... | forward | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def __init__(
self,
dim: int,
num_attention_heads: int,
attention_head_dim: int,
dropout=0.0,
cross_attention_dim: Optional[int] = None,
activation_fn: str = "geglu",
num_embeds_ada_norm: Optional[int] = None,
attention_bias: bool = False,
... |
The TemporalBasicTransformerBlock class is a PyTorch module that extends the BasicTransformerBlock to include temporal attention mechanisms.
This is particularly useful for video-related tasks, where the model needs to capture the temporal information within the sequence of frames.
The block ... | __init__ | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def forward(
self,
hidden_states,
encoder_hidden_states=None,
timestep=None,
attention_mask=None,
video_length=None,
):
"""
Forward pass for the TemporalBasicTransformerBlock.
Args:
hidden_states (torch.FloatTensor): The input hidd... |
Forward pass for the TemporalBasicTransformerBlock.
Args:
hidden_states (torch.FloatTensor): The input hidden states with shape (batch_size, seq_len, dim).
encoder_hidden_states (torch.FloatTensor, optional): The encoder hidden states with shape (batch_size, src_seq_len, dim).
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def __init__(
self,
dim: int,
num_attention_heads: int,
attention_head_dim: int,
dropout=0.0,
cross_attention_dim: Optional[int] = None,
activation_fn: str = "geglu",
num_embeds_ada_norm: Optional[int] = None,
attention_bias: bool = False,
... |
Initializes the AudioTemporalBasicTransformerBlock module.
Args:
dim (int): The dimension of the input and output embeddings.
num_attention_heads (int): The number of attention heads in the multi-head self-attention mechanism.
attention_head_dim (int): The dimension of... | __init__ | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def forward(
self,
hidden_states,
encoder_hidden_states=None,
timestep=None,
attention_mask=None,
full_mask=None,
face_mask=None,
lip_mask=None,
motion_scale=None,
video_length=None,
):
"""
Forward pass for the AudioTemp... |
Forward pass for the AudioTemporalBasicTransformerBlock.
Args:
hidden_states (torch.FloatTensor): The input hidden states.
encoder_hidden_states (torch.FloatTensor, optional): The encoder hidden states. Defaults to None.
timestep (torch.LongTensor, optional): The ti... | forward | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def zero_module(module):
"""
Zeroes out the parameters of a given module.
Args:
module (nn.Module): The module whose parameters need to be zeroed out.
Returns:
None.
"""
for p in module.parameters():
nn.init.zeros_(p)
return module |
Zeroes out the parameters of a given module.
Args:
module (nn.Module): The module whose parameters need to be zeroed out.
Returns:
None.
| zero_module | python | jdh-algo/JoyHallo | joyhallo/models/attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/attention.py | MIT |
def forward(self, audio_embeds):
"""
Defines the forward pass for the AudioProjModel.
Parameters:
audio_embeds (torch.Tensor): The input audio embeddings with shape (batch_size, video_length, blocks, channels).
Returns:
context_tokens (torch.Tensor): The output ... |
Defines the forward pass for the AudioProjModel.
Parameters:
audio_embeds (torch.Tensor): The input audio embeddings with shape (batch_size, video_length, blocks, channels).
Returns:
context_tokens (torch.Tensor): The output context tokens with shape (batch_size, video... | forward | python | jdh-algo/JoyHallo | joyhallo/models/audio_proj.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/audio_proj.py | MIT |
def forward(self, conditioning):
"""
Forward pass of the FaceLocator model.
Args:
conditioning (Tensor): The input conditioning tensor.
Returns:
Tensor: The output embedding tensor.
"""
embedding = self.conv_in(conditioning)
embedding = F... |
Forward pass of the FaceLocator model.
Args:
conditioning (Tensor): The input conditioning tensor.
Returns:
Tensor: The output embedding tensor.
| forward | python | jdh-algo/JoyHallo | joyhallo/models/face_locator.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/face_locator.py | MIT |
def forward(self, image_embeds):
"""
Forward pass of the ImageProjModel, which takes in image embeddings and returns the
projected tokens after reshaping and normalization.
Args:
image_embeds (torch.Tensor): The input image embeddings, with shape
batch_size x num... |
Forward pass of the ImageProjModel, which takes in image embeddings and returns the
projected tokens after reshaping and normalization.
Args:
image_embeds (torch.Tensor): The input image embeddings, with shape
batch_size x num_image_tokens x clip_embeddings_dim.
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/image_proj.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/image_proj.py | MIT |
def zero_module(module):
"""
Zero out the parameters of a module and return it.
Args:
- module: A PyTorch module to zero out its parameters.
Returns:
A zeroed out PyTorch module.
"""
for p in module.parameters():
p.detach().zero_()
return module |
Zero out the parameters of a module and return it.
Args:
- module: A PyTorch module to zero out its parameters.
Returns:
A zeroed out PyTorch module.
| zero_module | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
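The two `zero_module` rows above record equivalent implementations: one uses `nn.init.zeros_`, the other zeroes the detached parameter tensor in place. A minimal sketch showing that both variants leave every parameter, and hence every output, at zero:

```python
import torch
import torch.nn as nn

def zero_module_init(module):
    # Variant recorded for joyhallo/models/attention.py: nn.init.zeros_
    for p in module.parameters():
        nn.init.zeros_(p)
    return module

def zero_module_detach(module):
    # Variant recorded for joyhallo/models/motion_module.py: in-place zero
    # on the detached tensor, which likewise avoids autograd tracking.
    for p in module.parameters():
        p.detach().zero_()
    return module

# A zeroed linear layer maps every input to the zero vector; this is the
# usual trick for initializing residual branches so a block starts out
# as an identity mapping.
a = zero_module_init(nn.Linear(8, 4))
b = zero_module_detach(nn.Linear(8, 4))
x = torch.randn(2, 8)
assert torch.count_nonzero(a(x)).item() == 0
assert torch.count_nonzero(b(x)).item() == 0
```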
def get_motion_module(in_channels, motion_module_type: str, motion_module_kwargs: dict):
"""
This function returns a motion module based on the given type and parameters.
Args:
- in_channels (int): The number of input channels for the motion module.
- motion_module_type (str): The type of motio... |
This function returns a motion module based on the given type and parameters.
Args:
- in_channels (int): The number of input channels for the motion module.
- motion_module_type (str): The type of motion module to create. Currently, only "Vanilla" is supported.
- motion_module_kwargs (dict): A... | get_motion_module | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
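The factory above dispatches on `motion_module_type`, with only "Vanilla" supported. A minimal sketch of that dispatch pattern, using a hypothetical `VanillaTemporalModule` stand-in (the repo's actual class is not shown in the row):

```python
class VanillaTemporalModule:
    # Hypothetical stand-in for the repo's motion module class.
    def __init__(self, in_channels, **kwargs):
        self.in_channels = in_channels
        self.kwargs = kwargs

def get_motion_module(in_channels, motion_module_type: str, motion_module_kwargs: dict):
    # Per the docstring above, only the "Vanilla" type is supported.
    if motion_module_type == "Vanilla":
        return VanillaTemporalModule(in_channels=in_channels, **motion_module_kwargs)
    raise ValueError(f"Unsupported motion_module_type: {motion_module_type}")

mm = get_motion_module(320, "Vanilla", {"num_attention_heads": 8})
assert mm.in_channels == 320
```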
def forward(
self,
input_tensor,
encoder_hidden_states,
attention_mask=None,
):
"""
Forward pass of the TemporalTransformer3DModel.
Args:
hidden_states (torch.Tensor): The hidden states of the model.
encoder_hidden_states (torch.Tensor... |
Forward pass of the TemporalTransformer3DModel.
Args:
hidden_states (torch.Tensor): The hidden states of the model.
encoder_hidden_states (torch.Tensor, optional): The hidden states of the encoder.
attention_mask (torch.Tensor, optional): The attention mask.
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
def forward(self, hidden_states, encoder_hidden_states=None):
"""
Forward pass for the TemporalTransformer3DModel.
Args:
hidden_states (torch.Tensor): The input hidden states with shape (batch_size, sequence_length, in_channels).
encoder_hidden_states (torch.Tensor, opti... |
Forward pass for the TemporalTransformer3DModel.
Args:
hidden_states (torch.Tensor): The input hidden states with shape (batch_size, sequence_length, in_channels).
encoder_hidden_states (torch.Tensor, optional): The encoder hidden states with shape (batch_size, encoder_sequence... | forward | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
def forward(
self,
hidden_states,
encoder_hidden_states=None,
video_length=None,
):
"""
Forward pass for the TemporalTransformerBlock.
Args:
hidden_states (torch.Tensor): The input hidden states with shape
(batch_size, video_length... |
Forward pass for the TemporalTransformerBlock.
Args:
hidden_states (torch.Tensor): The input hidden states with shape
(batch_size, video_length, in_channels).
encoder_hidden_states (torch.Tensor, optional): The encoder hidden states
with shape (b... | forward | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
def set_use_memory_efficient_attention_xformers(
self,
use_memory_efficient_attention_xformers: bool,
attention_op = None,
):
"""
Sets the use of memory-efficient attention xformers for the VersatileAttention class.
Args:
use_memory_efficient_attention_xf... |
Sets the use of memory-efficient attention xformers for the VersatileAttention class.
Args:
use_memory_efficient_attention_xformers (bool): A boolean flag indicating whether to use memory-efficient attention xformers or not.
Returns:
None
| set_use_memory_efficient_attention_xformers | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
def forward(
self,
hidden_states,
encoder_hidden_states=None,
attention_mask=None,
video_length=None,
**cross_attention_kwargs,
):
"""
Args:
hidden_states (`torch.Tensor`):
The hidden states to be passed through the model.
... |
Args:
hidden_states (`torch.Tensor`):
The hidden states to be passed through the model.
encoder_hidden_states (`torch.Tensor`, optional):
The encoder hidden states to be passed through the model.
attention_mask (`torch.Tensor`, optional):
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/motion_module.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/motion_module.py | MIT |
def torch_dfs(model: torch.nn.Module):
"""
Perform a depth-first search (DFS) traversal on a PyTorch model's neural network architecture.
This function recursively traverses all the children modules of a given PyTorch model and returns a list
containing all the modules in the model's architecture. The ... |
Perform a depth-first search (DFS) traversal on a PyTorch model's neural network architecture.
This function recursively traverses all the children modules of a given PyTorch model and returns a list
containing all the modules in the model's architecture. The DFS approach starts with the input model and
... | torch_dfs | python | jdh-algo/JoyHallo | joyhallo/models/mutual_self_attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/mutual_self_attention.py | MIT |
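The body of `torch_dfs` is truncated in the row above; a common implementation matching the description (the model itself first, then each child subtree) is:

```python
import torch

def torch_dfs(model: torch.nn.Module):
    # Depth-first: record the module itself, then recurse into its children.
    result = [model]
    for child in model.children():
        result += torch_dfs(child)
    return result

net = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU())
modules = torch_dfs(net)
# The Sequential container itself plus its two children.
assert len(modules) == 3 and modules[0] is net
```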
def __init__(
self,
unet,
mode="write",
do_classifier_free_guidance=False,
attention_auto_machine_weight=float("inf"),
gn_auto_machine_weight=1.0,
style_fidelity=1.0,
reference_attn=True,
reference_adain=False,
fusion_blocks="midup",
... |
Initializes the ReferenceAttentionControl class.
Args:
unet (torch.nn.Module): The UNet model.
mode (str, optional): The mode of operation. Defaults to "write".
do_classifier_free_guidance (bool, optional): Whether to do classifier-free guidance. Defaults to False.
... | __init__ | python | jdh-algo/JoyHallo | joyhallo/models/mutual_self_attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/mutual_self_attention.py | MIT |
def update(self, writer, dtype=torch.float16):
"""
Update the model's parameters.
Args:
writer (torch.nn.Module): The model's writer object.
dtype (torch.dtype, optional): The data type to be used for the update. Defaults to torch.float16.
Returns:
N... |
Update the model's parameters.
Args:
writer (torch.nn.Module): The model's writer object.
dtype (torch.dtype, optional): The data type to be used for the update. Defaults to torch.float16.
Returns:
None.
| update | python | jdh-algo/JoyHallo | joyhallo/models/mutual_self_attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/mutual_self_attention.py | MIT |
def clear(self):
"""
Clears the attention bank of all reader attention modules.
This method is used when the `reference_attn` attribute is set to `True`.
It clears the attention bank of all reader attention modules inside the UNet
model based on the selected `fusion_blocks` mode... |
Clears the attention bank of all reader attention modules.
This method is used when the `reference_attn` attribute is set to `True`.
It clears the attention bank of all reader attention modules inside the UNet
model based on the selected `fusion_blocks` mode.
If `fusion_blocks... | clear | python | jdh-algo/JoyHallo | joyhallo/models/mutual_self_attention.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/mutual_self_attention.py | MIT |
def forward(self, x):
"""
Forward pass of the InflatedConv3d layer.
Args:
x (torch.Tensor): Input tensor to the layer.
Returns:
torch.Tensor: Output tensor after applying the InflatedConv3d layer.
"""
video_length = x.shape[2]
x = rearra... |
Forward pass of the InflatedConv3d layer.
Args:
x (torch.Tensor): Input tensor to the layer.
Returns:
torch.Tensor: Output tensor after applying the InflatedConv3d layer.
| forward | python | jdh-algo/JoyHallo | joyhallo/models/resnet.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/resnet.py | MIT |
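The row above truncates the `rearrange` calls in `InflatedConv3d.forward`. A hedged reconstruction of the pattern it suggests (fold the frame axis into the batch axis, run the 2D convolution, then unfold), written with plain `permute`/`reshape` instead of `einops` so the sketch is self-contained:

```python
import torch
import torch.nn as nn

class InflatedConv3d(nn.Conv2d):
    # Apply a 2D convolution frame-by-frame to a 5D video tensor of shape
    # (batch, channels, frames, height, width).
    def forward(self, x):
        b, c, f, h, w = x.shape
        x = x.permute(0, 2, 1, 3, 4).reshape(b * f, c, h, w)   # (b*f, c, h, w)
        x = super().forward(x)                                  # plain Conv2d
        _, c2, h2, w2 = x.shape
        return x.reshape(b, f, c2, h2, w2).permute(0, 2, 1, 3, 4)

conv = InflatedConv3d(3, 8, kernel_size=3, padding=1)
y = conv(torch.randn(2, 3, 5, 16, 16))
assert tuple(y.shape) == (2, 8, 5, 16, 16)
```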
def forward(self, x):
"""
Performs a forward pass through the CustomClassName.
:param x: Input tensor of shape (batch_size, channels, video_length, height, width).
:return: Output tensor of shape (batch_size, channels, video_length, height, width).
"""
video_leng... |
Performs a forward pass through the CustomClassName.
:param x: Input tensor of shape (batch_size, channels, video_length, height, width).
:return: Output tensor of shape (batch_size, channels, video_length, height, width).
| forward | python | jdh-algo/JoyHallo | joyhallo/models/resnet.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/resnet.py | MIT |
def forward(self, hidden_states, output_size=None):
"""
Forward pass of the Upsample3D class.
Args:
hidden_states (torch.Tensor): Input tensor to be upsampled.
output_size (tuple, optional): Desired output size of the upsampled tensor.
Returns:
torch... |
Forward pass of the Upsample3D class.
Args:
hidden_states (torch.Tensor): Input tensor to be upsampled.
output_size (tuple, optional): Desired output size of the upsampled tensor.
Returns:
torch.Tensor: Upsampled tensor.
Raises:
Asserti... | forward | python | jdh-algo/JoyHallo | joyhallo/models/resnet.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/resnet.py | MIT |
def __init__(
self, channels, use_conv=False, out_channels=None, padding=1, name="conv"
):
"""
Downsamples the given input in the 3D space.
Args:
channels: The number of input channels.
use_conv: Whether to use a convolutional layer for downsampling.
... |
Downsamples the given input in the 3D space.
Args:
channels: The number of input channels.
use_conv: Whether to use a convolutional layer for downsampling.
out_channels: The number of output channels. If None, the input channels are used.
padding: The am... | __init__ | python | jdh-algo/JoyHallo | joyhallo/models/resnet.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/resnet.py | MIT |
def forward(self, hidden_states):
"""
Forward pass for the Downsample3D class.
Args:
hidden_states (torch.Tensor): Input tensor to be downsampled.
Returns:
torch.Tensor: Downsampled tensor.
Raises:
AssertionError: If the number of channels i... |
Forward pass for the Downsample3D class.
Args:
hidden_states (torch.Tensor): Input tensor to be downsampled.
Returns:
torch.Tensor: Downsampled tensor.
Raises:
AssertionError: If the number of channels in the input tensor does not match the expecte... | forward | python | jdh-algo/JoyHallo | joyhallo/models/resnet.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/resnet.py | MIT |
def forward(self, input_tensor, temb):
"""
Forward pass for the ResnetBlock3D class.
Args:
input_tensor (torch.Tensor): Input tensor to the ResnetBlock3D layer.
temb (torch.Tensor): Token embedding tensor.
Returns:
torch.Tensor: Output tensor after p... |
Forward pass for the ResnetBlock3D class.
Args:
input_tensor (torch.Tensor): Input tensor to the ResnetBlock3D layer.
temb (torch.Tensor): Token embedding tensor.
Returns:
torch.Tensor: Output tensor after passing through the ResnetBlock3D layer.
| forward | python | jdh-algo/JoyHallo | joyhallo/models/resnet.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/resnet.py | MIT |
def forward(
self,
hidden_states: torch.Tensor,
encoder_hidden_states: Optional[torch.Tensor] = None,
timestep: Optional[torch.LongTensor] = None,
_added_cond_kwargs: Dict[str, torch.Tensor] = None,
class_labels: Optional[torch.LongTensor] = None,
cross_attention_... |
The [`Transformer2DModel`] forward method.
Args:
hidden_states (`torch.LongTensor` of shape `(batch size, num latent pixels)` if discrete,
`torch.FloatTensor` of shape `(batch size, channel, height, width)` if continuous):
Input `hidden_states`.
enc... | forward | python | jdh-algo/JoyHallo | joyhallo/models/transformer_2d.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/transformer_2d.py | MIT |
def forward(
self,
hidden_states,
encoder_hidden_states=None,
attention_mask=None,
full_mask=None,
face_mask=None,
lip_mask=None,
motion_scale=None,
timestep=None,
return_dict: bool = True,
):
"""
Forward pass for the Tr... |
Forward pass for the Transformer3DModel.
Args:
hidden_states (torch.Tensor): The input hidden states.
encoder_hidden_states (torch.Tensor, optional): The input encoder hidden states.
attention_mask (torch.Tensor, optional): The attention mask.
full_mask ... | forward | python | jdh-algo/JoyHallo | joyhallo/models/transformer_3d.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/transformer_3d.py | MIT |
def get_down_block(
down_block_type: str,
num_layers: int,
in_channels: int,
out_channels: int,
temb_channels: int,
add_downsample: bool,
resnet_eps: float,
resnet_act_fn: str,
transformer_layers_per_block: int = 1,
num_attention_heads: Optional[int] = None,
    resnet_groups: Op... | This function creates and returns a DownBlock2D or CrossAttnDownBlock2D object based on the given down_block_type.
    Args:
    down_block_type (str): The type of down block to create. Must be either "DownBlock2D" or "CrossAttnDownBlock2D".
    num_layers (int): The number of layers in the ResNet block.
in_channel... | get_down_block | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def forward(
self, hidden_states: torch.FloatTensor, temb: Optional[torch.FloatTensor] = None
) -> torch.FloatTensor:
"""
Forward pass of the UNetMidBlock2D class.
Args:
hidden_states (torch.FloatTensor): The input tensor to the UNetMidBlock2D.
temb (Optional... |
Forward pass of the UNetMidBlock2D class.
Args:
hidden_states (torch.FloatTensor): The input tensor to the UNetMidBlock2D.
temb (Optional[torch.FloatTensor], optional): The token embedding tensor. Defaults to None.
Returns:
torch.FloatTensor: The output ten... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def forward(
self,
hidden_states: torch.FloatTensor,
temb: Optional[torch.FloatTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
e... |
Forward pass for the UNetMidBlock2DCrossAttn class.
Args:
hidden_states (torch.FloatTensor): The input hidden states tensor.
temb (Optional[torch.FloatTensor], optional): The optional tensor for time embeddings.
encoder_hidden_states (Optional[torch.FloatTensor], op... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def forward(
self,
hidden_states: torch.FloatTensor,
temb: Optional[torch.FloatTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
attention_mask: Optional[torch.FloatTensor] = None,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
e... |
Forward pass for the CrossAttnDownBlock2D class.
Args:
hidden_states (torch.FloatTensor): The input hidden states.
temb (Optional[torch.FloatTensor], optional): The token embeddings. Defaults to None.
encoder_hidden_states (Optional[torch.FloatTensor], optional): Th... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def forward(
self,
hidden_states: torch.FloatTensor,
temb: Optional[torch.FloatTensor] = None,
scale: float = 1.0,
) -> Tuple[torch.FloatTensor, Tuple[torch.FloatTensor, ...]]:
"""
Forward pass of the DownBlock2D class.
Args:
hidden_states (torch.... |
Forward pass of the DownBlock2D class.
Args:
hidden_states (torch.FloatTensor): The input tensor to the DownBlock2D layer.
temb (Optional[torch.FloatTensor], optional): The token embedding tensor. Defaults to None.
scale (float, optional): The scale factor for the i... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def forward(
self,
hidden_states: torch.FloatTensor,
res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
temb: Optional[torch.FloatTensor] = None,
encoder_hidden_states: Optional[torch.FloatTensor] = None,
cross_attention_kwargs: Optional[Dict[str, Any]] = None,
... |
Forward pass for the CrossAttnUpBlock2D class.
Args:
self (CrossAttnUpBlock2D): An instance of the CrossAttnUpBlock2D class.
hidden_states (torch.FloatTensor): The input hidden states tensor.
res_hidden_states_tuple (Tuple[torch.FloatTensor, ...]): A tuple of residu... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def forward(
self,
hidden_states: torch.FloatTensor,
res_hidden_states_tuple: Tuple[torch.FloatTensor, ...],
temb: Optional[torch.FloatTensor] = None,
upsample_size: Optional[int] = None,
scale: float = 1.0,
) -> torch.FloatTensor:
"""
Forward pass fo... |
Forward pass for the UpBlock2D class.
Args:
self (UpBlock2D): An instance of the UpBlock2D class.
hidden_states (torch.FloatTensor): The input tensor to the block.
res_hidden_states_tuple (Tuple[torch.FloatTensor, ...]): A tuple of residual hidden states.
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_blocks.py | MIT |
def attn_processors(self) -> Dict[str, AttentionProcessor]:
r"""
Returns:
            `dict` of attention processors: A dictionary containing all attention processors used in the model,
            indexed by their weight names.
"""
# set recursively
processors = {}
def... |
Returns:
            `dict` of attention processors: A dictionary containing all attention processors used in the model,
            indexed by their weight names.
| attn_processors | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_condition.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_condition.py | MIT |
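The property above collects processors recursively (the row shows `processors = {}` and a `# set recursively` comment before truncation). A sketch of the usual diffusers-style recursion, using a hypothetical module with a `processor` attribute in place of a real attention layer:

```python
import torch.nn as nn

class FakeAttention(nn.Module):
    # Hypothetical stand-in: real diffusers Attention modules expose a
    # `processor` attribute the same way.
    def __init__(self):
        super().__init__()
        self.processor = object()

def collect_attn_processors(root: nn.Module) -> dict:
    processors = {}

    def fn_recursive(name, module):
        # Record this module's processor (if any), then descend.
        if hasattr(module, "processor"):
            processors[f"{name}.processor"] = module.processor
        for sub_name, child in module.named_children():
            fn_recursive(f"{name}.{sub_name}", child)

    for name, module in root.named_children():
        fn_recursive(name, module)
    return processors

net = nn.ModuleDict({"block": nn.Sequential(FakeAttention(), nn.ReLU())})
procs = collect_attn_processors(net)
assert list(procs) == ["block.0.processor"]
```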
def set_attn_processor(
self,
processor: Union[AttentionProcessor, Dict[str, AttentionProcessor]],
_remove_lora=False,
):
r"""
Sets the attention processor to use to compute attention.
Parameters:
processor (`dict` of `AttentionProcessor` or only `Attenti... |
Sets the attention processor to use to compute attention.
Parameters:
processor (`dict` of `AttentionProcessor` or only `AttentionProcessor`):
The instantiated processor class or a dictionary of processor classes that will be set as the processor
for **all**... | set_attn_processor | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_condition.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_condition.py | MIT |
def set_default_attn_processor(self):
"""
Disables custom attention processors and sets the default attention implementation.
"""
if all(
proc.__class__ in ADDED_KV_ATTENTION_PROCESSORS
for proc in self.attn_processors.values()
):
processor = A... |
Disables custom attention processors and sets the default attention implementation.
| set_default_attn_processor | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_condition.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_condition.py | MIT |
def set_attention_slice(self, slice_size):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed... |
Enable sliced attention computation.
When this option is enabled, the attention module splits the input tensor in slices to compute attention in
several steps. This is useful for saving some memory in exchange for a small decrease in speed.
Args:
slice_size (`str` or `int`... | set_attention_slice | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_condition.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_condition.py | MIT |
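Sliced attention, as the docstring above describes, trades a little speed for memory by processing the leading `batch*heads` axis in chunks. A minimal sketch of the idea (not the diffusers implementation itself):

```python
import torch

def sliced_attention(q, k, v, slice_size):
    # q, k, v: (batch_heads, seq_len, head_dim). Process `slice_size` rows
    # of the leading axis at a time so only a small attention matrix is
    # materialized per step.
    out = torch.empty_like(q)
    scale = q.shape[-1] ** -0.5
    for i in range(0, q.shape[0], slice_size):
        s = slice(i, i + slice_size)
        attn = (q[s] @ k[s].transpose(-1, -2) * scale).softmax(dim=-1)
        out[s] = attn @ v[s]
    return out

q, k, v = (torch.randn(6, 10, 16) for _ in range(3))
full = (q @ k.transpose(-1, -2) * 16 ** -0.5).softmax(dim=-1) @ v
# Slicing changes peak memory, not the result.
assert torch.allclose(sliced_attention(q, k, v, slice_size=2), full, atol=1e-6)
```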
def forward(
self,
sample: torch.FloatTensor,
timestep: Union[torch.Tensor, float, int],
encoder_hidden_states: torch.Tensor,
cond_tensor: torch.FloatTensor=None,
class_labels: Optional[torch.Tensor] = None,
timestep_cond: Optional[torch.Tensor] = None,
at... |
The [`UNet2DConditionModel`] forward method.
Args:
sample (`torch.FloatTensor`):
The noisy input tensor with the following shape `(batch, channel, height, width)`.
timestep (`torch.FloatTensor` or `float` or `int`): The number of timesteps to denoise an input.
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_condition.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_condition.py | MIT |
def load_change_cross_attention_dim(
cls,
pretrained_model_path: PathLike,
subfolder=None,
# unet_additional_kwargs=None,
):
"""
Load or change the cross-attention dimension of a pre-trained model.
Parameters:
pretrained_model_name_or_path (:class... |
Load or change the cross-attention dimension of a pre-trained model.
Parameters:
pretrained_model_name_or_path (:class:`~typing.Union[str, :class:`~pathlib.Path`]`):
The identifier of the pre-trained model or the path to the local folder containing the model.
fo... | load_change_cross_attention_dim | python | jdh-algo/JoyHallo | joyhallo/models/unet_2d_condition.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_2d_condition.py | MIT |
def set_attention_slice(self, slice_size):
r"""
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.... |
Enable sliced attention computation.
When this option is enabled, the attention module will split the input tensor in slices, to compute attention
in several steps. This is useful to save some memory in exchange for a small speed decrease.
Args:
slice_size (`str` or `int` ... | set_attention_slice | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d.py | MIT |
def forward(
self,
sample: torch.FloatTensor,
timestep: Union[torch.Tensor, float, int],
encoder_hidden_states: torch.Tensor,
audio_embedding: Optional[torch.Tensor] = None,
class_labels: Optional[torch.Tensor] = None,
mask_cond_fea: Optional[torch.Tensor] = None,... |
Args:
sample (`torch.FloatTensor`): (batch, channel, height, width) noisy inputs tensor
timestep (`torch.FloatTensor` or `float` or `int`): (batch) timesteps
encoder_hidden_states (`torch.FloatTensor`): (batch, sequence_length, feature_dim) encoder hidden states, face_emb
... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d.py | MIT |
def from_pretrained_2d(
cls,
pretrained_model_path: PathLike,
motion_module_path: PathLike,
subfolder=None,
unet_additional_kwargs=None,
mm_zero_proj_out=False,
use_landmark=True,
):
"""
Load a pre-trained 2D UNet model from a given directory.
... |
Load a pre-trained 2D UNet model from a given directory.
Parameters:
pretrained_model_path (`str` or `PathLike`):
Path to the directory containing a pre-trained 2D UNet model.
dtype (`torch.dtype`, *optional*):
The data type of the loaded model. ... | from_pretrained_2d | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d.py | MIT |
def get_down_block(
down_block_type,
num_layers,
in_channels,
out_channels,
temb_channels,
add_downsample,
resnet_eps,
resnet_act_fn,
attn_num_head_channels,
resnet_groups=None,
cross_attention_dim=None,
audio_attention_dim=None,
downsample_padding=None,
dual_cros... |
Factory function to instantiate a down-block module for the 3D UNet architecture.
Down blocks are used in the downsampling part of the U-Net to reduce the spatial dimensions
of the feature maps while increasing the depth. This function can create blocks with or without
cross attention based on the... | get_down_block | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |
def get_up_block(
up_block_type,
num_layers,
in_channels,
out_channels,
prev_output_channel,
temb_channels,
add_upsample,
resnet_eps,
resnet_act_fn,
attn_num_head_channels,
resnet_groups=None,
cross_attention_dim=None,
audio_attention_dim=None,
dual_cross_attentio... |
Factory function to instantiate an up-block module for the 3D UNet architecture.
Up blocks are used in the upsampling part of the U-Net to increase the spatial dimensions
of the feature maps while decreasing the depth. This function can create blocks with or without
cross attention based on the specif... | get_up_block | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |
def forward(
self,
hidden_states,
temb=None,
encoder_hidden_states=None,
attention_mask=None,
full_mask=None,
face_mask=None,
lip_mask=None,
audio_embedding=None,
motion_scale=None,
):
"""
Forward pass for the UNetMidBlo... |
Forward pass for the UNetMidBlock3DCrossAttn class.
Args:
self (UNetMidBlock3DCrossAttn): An instance of the UNetMidBlock3DCrossAttn class.
hidden_states (Tensor): The input hidden states tensor.
temb (Tensor, optional): The input temporal embedding tensor. Defaults... | forward | python | jdh-algo/JoyHallo | joyhallo/models/unet_3d_blocks.py | https://github.com/jdh-algo/JoyHallo/blob/master/joyhallo/models/unet_3d_blocks.py | MIT |